If you have $n$ equilateral triangles, and you want to connect them all to each other at the edges, how many different shapes can you make? The triangles are identical in size, and shapes that are rotationally congruent should not be counted multiple times. Mirror images should be counted as different.
The comment by Blue already gave some useful links: these shapes made out of triangles are called polyiamonds, a generalization from the diamond made of two triangles. The version using squares instead of triangles would be called polyominos.
Most likely you can find the counts you're after on OEIS. But where the comment suggests A000577, which considers reflected polyiamonds the same, the edited version of your post asks to count them as different (so you're dealing with "one-sided" polyiamonds). That would be A006534 instead, which again allows holes. Your post doesn't exclude holes, and OEIS currently doesn't have a sequence for one-sided hole-less polyiamonds. Since holes can only occur for sizes of at least nine triangles (noniamonds and above), the distinction is irrelevant for smaller polyiamonds.
Finding a simple formula for these beasts would be quite tricky. My best bet at the moment would be pretty brute-force enumeration. From all the references mentioned on the A006534 page, Counting hexagonal and triangular polyominoes by Lunnon sounds the most promising, if you can get access to that. So far I've only read the abstract.
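Short of a formula, brute-force enumeration works well for small $n$. The Python sketch below is an illustration of that approach, not an optimized implementation: each unit triangle is a triple $(a, b, c)$ with $a + b + c \in \{1, 2\}$ (upward/downward pointing), and shapes are canonicalized under translations and the six rotations only; reflections are deliberately not quotiented out, so mirror images count as different, matching the one-sided sequence A006534.

```python
def neighbors(t):
    # Edge-adjacent triangles: change one coordinate toward the other orientation.
    a, b, c = t
    d = -1 if a + b + c == 2 else 1
    return [(a + d, b, c), (a, b + d, c), (a, b, c + d)]

def rotate60(t):
    # 60-degree rotation of the triangular lattice about a lattice vertex.
    a, b, c = t
    return (1 - b, 1 - c, 1 - a)

def normalize(cells):
    # Canonical translation: lattice translations are shifts summing to zero.
    u = -min(a for a, _, _ in cells)
    v = -min(b for _, b, _ in cells)
    w = -(u + v)
    return tuple(sorted((a + u, b + v, c + w) for a, b, c in cells))

def canonical(cells):
    # Minimum over the six rotations; reflections are NOT identified (one-sided).
    best, cur = None, list(cells)
    for _ in range(6):
        cur = [rotate60(t) for t in cur]
        form = normalize(cur)
        if best is None or form < best:
            best = form
    return best

def count_one_sided_polyiamonds(nmax):
    shapes = {canonical([(0, 1, 1)])}  # a single upward triangle
    counts = [1]
    for _ in range(nmax - 1):
        grown = set()
        for shape in shapes:
            cells = set(shape)
            for t in shape:
                for nb in neighbors(t):
                    if nb not in cells:
                        grown.add(canonical(cells | {nb}))
        shapes = grown
        counts.append(len(shapes))
    return counts

print(count_one_sided_polyiamonds(6))  # [1, 1, 1, 4, 6, 19], matching A006534
```

Growing every shape from every smaller shape and deduplicating by canonical form is exponential, but it is fast through the hexiamonds, and holes cannot occur at these sizes anyway, so the hole-less caveat doesn't bite yet.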
How many different shapes can I make with this toy?
How many different box combinations can you get?
If you have a triangle with its mirror reflection, are they congruent?
How many different arrangements of triangles that are either red or blue around a regular heptagon are possible?
In how many different ways can we place 8 identical rooks on a chess board?
How many different shapes that consist of five bordering squares can there be in a $3 \times 3$ grid?
What planar concave polyforms bound 3D space?
We show that the low ratios of $\alpha$ elements (Mg, Si, and Ca) to Fe recently found for a small fraction of extremely metal-poor stars can be naturally explained with the nucleosynthesis yields of core-collapse supernovae, i.e., $13-25M_\odot$ supernovae, or hypernovae. For the case without carbon enhancement, the ejected iron mass is normal, consistent with observed light curves and spectra of nearby supernovae. On the other hand, the carbon enhancement requires much smaller iron production, and the low [$\alpha$/Fe] of carbon enhanced metal-poor stars can also be reproduced with $13-25M_\odot$ faint supernovae or faint hypernovae. Iron-peak element abundances, in particular Zn abundances, are important to put further constraints on the enrichment sources from galactic archaeology surveys.
(Primitive root) Let $p$ be a prime and $n > 1$ a natural number coprime to $p$ (so that $x^n - 1$ is separable). The set of all roots $\alpha$ of the polynomial $x^n - 1 \in Z_p[x]$ forms a cyclic group of order $n$, where each $\alpha$ lies in the decomposition (splitting) field of $x^n - 1$ over $Z_p$. If $a$ generates the aforementioned group, then we call $a$ a primitive root of order $n$ over $Z_p$.
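To make the definition concrete, here is a small Python sketch (an illustration, with parameters chosen for the example): when $n$ divides $p - 1$, the roots of $x^n - 1$ already lie in $Z_p$ itself, and a primitive root of order $n$ is obtained by raising a generator of the multiplicative group $Z_p^*$ to the power $(p-1)/n$. For general $n$ coprime to $p$ the roots live in an extension field, which plain integer arithmetic does not reach.

```python
def primitive_root_mod_p(p):
    # Smallest generator of the multiplicative group Z_p^* (p prime).
    def order(a):
        k, x = 1, a
        while x != 1:
            x = x * a % p
            k += 1
        return k
    return next(g for g in range(2, p) if order(g) == p - 1)

def primitive_nth_root(n, p):
    # A generator of the cyclic group of n-th roots of unity inside Z_p.
    assert (p - 1) % n == 0, "need n | p - 1 for the roots to lie in Z_p"
    g = primitive_root_mod_p(p)
    return pow(g, (p - 1) // n, p)

a = primitive_nth_root(4, 13)
print(a, sorted(pow(a, k, 13) for k in range(4)))  # 8 [1, 5, 8, 12]
```

For $p = 13$, $n = 4$: the smallest generator of $Z_{13}^*$ is $2$, so $a = 2^3 = 8$, and the powers of $8$ give the four roots of $x^4 - 1$ over $Z_{13}$.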
Concept of field of roots vs decomposition field (splitting field): what is the difference?
Do the $n$-th roots of unity of an *arbitrary* field form a cyclic group?
Is $\Bbb Z_p^2$ a Galois group over $\Bbb Q$?
In this new article series QuantStart returns to the discussion of pricing derivative securities, a topic which was covered a few years ago on the site through an introduction to stochastic calculus.
Imanol Pérez, a PhD researcher in Mathematics at Oxford University and an expert guest contributor to QuantStart, will mathematically describe the Black-Scholes model for options pricing in this article and then subsequently outline its limitations in future posts.
Derivatives are financial instruments whose price depends on the performance of some underlying asset or assets. The world of financial derivatives is very complex, and derivatives can be very different from each other. In this article we will focus on a particular type of derivatives, options, and we will show how stochastic analysis can be used to find the fair price, a notion that will be made clear later, of a class of options.
European call options with underlying asset $S$, maturity time $T$ and exercise price $K$ give the contract owner the right (but not the obligation) to buy one share of $S$ at the price $K$ at time $T$.
Similarly, European put options give the contract owner the right (but not the obligation) to sell one share of $S$ at the price $K$ at time $T$.
If the price $S_T$ of the asset at maturity exceeds the exercise price $K$, then it is in the interest of the owner of a European call option to buy the asset at the price $K$. If the price at time $T$ is lower than the exercise price, buying the asset at the price $K$ is against the interest of the option owner, since the asset can be bought at a cheaper price directly in the market. Therefore, the option owner will earn $\Phi_c(S_T)$, with $\Phi_c(x):=\max(x-K, 0)$. Similarly, one can check that the owner of a European put option will receive $\Phi_p(S_T)$, with $\Phi_p(x):=\max(K-x, 0)$, from the option contract.
After defining these contracts, one would like to answer the following question: If $\Pi_t$ represents the price of a European option (either a put or call option), how can one find a fair value for $\Pi_t$?
The dynamics of the risk-free asset $B$ and the stock price $S$ are assumed to be $$dB_t = r B_t\,dt, \qquad dS_t = \alpha(t, S_t) S_t\,dt + \sigma(t, S_t) S_t\,dW_t,$$ where $r$ is the short rate of interest, $\alpha$ is the local mean rate of return of the stock, $\sigma$ is its volatility and $W$ is a Brownian motion. We will assume that $r$ is constant and that $\alpha$ and $\sigma$ are functions of $t$ and $S_t$. As we see, we will assume that $S$ follows a geometric Brownian motion, which was discussed in a previous article.
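As an aside, geometric Brownian motion with constant coefficients is easy to simulate exactly, and simulation already anticipates the pricing result: under the risk-neutral measure (drift $r$ in place of $\alpha$), the discounted expected payoff is the option price. The snippet below is a sketch in plain Python with made-up parameter values, not part of the article's derivation:

```python
import math
import random

def terminal_gbm(s0, mu, sigma, T, rng):
    # Exact sample of S_T for constant mu, sigma:
    # S_T = S_0 exp((mu - sigma^2/2) T + sigma W_T), with W_T ~ N(0, T).
    w = rng.gauss(0.0, math.sqrt(T))
    return s0 * math.exp((mu - 0.5 * sigma * sigma) * T + sigma * w)

def mc_call_price(s0, K, r, sigma, T, n_paths=50_000, seed=7):
    # Risk-neutral Monte Carlo: discount the average call payoff max(S_T - K, 0).
    rng = random.Random(seed)
    total = sum(max(terminal_gbm(s0, r, sigma, T, rng) - K, 0.0)
                for _ in range(n_paths))
    return math.exp(-r * T) * total / n_paths

price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(price, 2))  # close to the Black-Scholes value of about 10.45
```

With $50{,}000$ paths the Monte Carlo standard error here is below a tenth of a unit, so the estimate lands reliably near the closed-form price.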
An investor can hold a portfolio $h_t=(h_t^1, h_t^2, h_t^3)$. The value $h_t^1$ will correspond to the money invested in the risk free asset $B$, $h_t^2$ the money invested in the stock $S$ and $h_t^3$ the money invested in the option contract. A negative value indicates short selling. We will also denote by $V^h_t$ the wealth of the portfolio $h$ at time $t$.
The conditions above rule out the possibility of making a positive amount of money without investing any money and without taking any risk.
where $\Phi$ can be either $\Phi_c$ or $\Phi_p$, although more general contingent claims $\Phi$ can be used as well.
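For the record (stated here without derivation, since deriving it is the point of the series): when $\sigma$ and $r$ are constant, the fair prices of the European call and put are given by the classical Black-Scholes formula. A small self-contained sketch, using the error function for the normal CDF:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (no scipy needed).
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(s, K, r, sigma, T):
    # Classical Black-Scholes prices of European call and put options.
    d1 = (math.log(s / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = s * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - s * norm_cdf(-d1)
    return call, put

c, p = black_scholes(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(c, 2), round(p, 2))  # 10.45 5.57
```

A quick sanity check on any implementation is put-call parity: $C - P = S - K e^{-rT}$, which holds here to machine precision.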
The Black–Scholes model is a simple model that can be very useful, but it has many limitations and it should therefore be treated with care. In subsequent articles we shall see some of the limitations of the model, and how one could solve them in order to obtain a model that adjusts better to reality.
School of Electronic Engineering and Computer Science, Queen Mary University of London, London, U.K.
A novel ultrahigh frequency radio frequency identification reader antenna based on electromagnetic coupling between two open-ended microstrip (MS) meander lines for near-field applications is investigated in this paper. The corresponding currents flowing along the two MS meander lines are reversed in phase with approximately identical amplitudes. Meander-line units are introduced to achieve a uniform distribution of strong magnetic and electric fields. The performance of an antenna prototype comprised of six pairs of meander lines is analyzed. The proposed antenna simultaneously exhibits a uniform magnetic field distribution with a reading region of 480 mm $\times$ 200 mm $\times$ 20 mm and a uniform linear electric field distribution with a reading region of 480 mm $\times$ 420 mm $\times$ 300 mm. The proposed antenna exhibits a low far-field gain, and has a bandwidth from 914 to 929 MHz. Both simulated and measured results have shown a good performance of the antenna.
Model categories are categories with three distinguished classes of morphisms: the weak equivalences, the fibrations and the cofibrations. They provide a natural setting for homotopy theory in an arbitrary category, by mimicking the usual properties of (co)fibrations and weak equivalences in topological spaces.
When does a fibration $f:X\rightarrow Y$ in a model category admit a section?
CW complexes are the cofibrant objects in the Quillen model structure on Top?
Why are some maps monomorphisms?
Why is Quillen equivalence used to define "equivalent" model categories?
Perfect class of morphisms closed under retracts?
Homotopy category of chain complexes: isomorphism = quasi-isomorphism?
What is the $\infty$-category associated to a model category?
Underlying quasicategory of a model category through framings?
Why is the flat-cotorsion pair actually a cotorsion pair?
Trivial cofibration is a deformation retract?
How can I understand maps out of a limit or into a colimit?
Are all isomorphisms contained in the class of weak equivalences?
How far are functors valued in Ho(Cat) from pseudofunctors?
Recall from the Riemann-Stieltjes Integrability of Functions on Subintervals with Integrators of Bounded Variation that if $f$ is a function defined on $[a, b]$, $\alpha$ is of bounded variation on $[a, b]$, and $f$ is Riemann-Stieltjes integrable with respect to $\alpha$ on $[a, b]$ then $f$ is also Riemann-Stieltjes integrable with respect to $\alpha$ on any subinterval $[c, d] \subseteq [a, b]$.
Now consider the function $F$ defined for each $x \in [a, b]$ by $F(x) = \int_a^x f(t) \, d\alpha(t)$. This function has some nice properties which we prove in the following theorem.
a) $F$ is of bounded variation on $[a, b]$.
b) If $\alpha$ is continuous at $x_0 \in [a, b]$ then $F$ is continuous at $x_0$.
c) If $f$ is continuous at $x_0 \in [a, b]$, $\alpha$ is increasing on $[a,b]$, and $\alpha'(x_0)$ exists, then $F'(x_0)$ exists and $F'(x_0) = f(x_0) \alpha'(x_0)$.
Proof of a): Since $\alpha$ is of bounded variation on $[a, b]$, it can be written as the difference of two increasing functions, so it is sufficient to show that (a) holds for an increasing integrator. Assume then that $\alpha$ is increasing on $[a, b]$.
where $s$ is the time iteration and $r$ is the location iteration, $x$ is the true location ($x=r\,\Delta x$), and $\alpha$, $\Delta x$, and $\Delta t$ are constants. The $\eta$ values are already solved, so I need to solve for $u(r,s+1)$. However, the solution includes values for $u(r-1,s+1)$ and $u(r+1,s+1)$, which requires this to be solved as a system of equations over $r$ for each $s$. I have boundary conditions of $u$ for all $r$ when $s=1$, as well as u at the first and last $r$ values for all $s$ (i.e. $u(1,s)$ and $u(r_\max,s)$). I'm not sure what to do from here though to solve for the "interior" $u$ values at $s+1$.
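Since the scheme itself isn't reproduced here, a general remark: an implicit update coupling $u(r-1,s+1)$, $u(r,s+1)$ and $u(r+1,s+1)$ gives one tridiagonal linear system per time step, which can be solved in $O(r_\max)$ operations with the Thomas algorithm. A sketch (in Python rather than Matlab; in Matlab, the backslash operator on a sparse tridiagonal matrix does the same job):

```python
def thomas(a, b, c, d):
    # Solve a tridiagonal system: a = sub-diagonal (length n-1),
    # b = main diagonal (length n), c = super-diagonal (length n-1),
    # d = right-hand side (length n). Forward sweep + back substitution.
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy system: [[2,1,0],[1,3,1],[0,1,2]] x = [3,5,3] has solution x = [1,1,1].
print(thomas([1.0, 1.0], [2.0, 3.0, 2.0], [1.0, 1.0], [3.0, 5.0, 3.0]))
# -> approximately [1.0, 1.0, 1.0]
```

At each time step $s$, the known boundary values $u(1,s+1)$ and $u(r_\max,s+1)$ move into the right-hand side $d$, and the interior values come out of one such solve.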
Thanks to TroyHaskin, I've realized that this is in reality a linear problem, since the $u(s)$ values are known from the initial conditions, and it is therefore easy to solve in Matlab. Thanks to everyone else for the help as well.
What's a good numerical/optimization software package for solving the 2-D optimal stopping problem?
Transformation between a square and a polygon?
I have a square and a polygon. I want to transform all the points inside the square so that they are mapped inside the polygon. I was trying scale and rotate matrices, but I was not able to come up with anything that made sense to me. I also googled a lot for an algorithm that could be used to implement this, but didn't find anything. Can anybody please help me with an algorithm to map the points in the square to a polygon?
I want to map the points inside the square to points inside the polygon on the right.
You can check that this maps $[0,1]\times[0,1]$ onto the left half of your polygon (the half with vertices $A$, $F$, $D$, $E$). By suitable scaling, you can get a mapping from the left half of your rectangle to the left half of the polygon.
Do the same thing with the right halves.
Put the two mapping together to get a mapping from the whole rectangle to the whole polygon.
to construct a "ruled" surface between $P$ and $Q$. This maps from the unit square to the polygon. Adjust accordingly to get a mapping from your rectangle to the polygon.
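Here is a small Python sketch of that ruled-surface construction (the vertex coordinates are made up for illustration, since the original figure isn't reproduced here):

```python
def ruled_map(P, Q):
    # P and Q: functions [0,1] -> (x, y) tracing the bottom and top boundary
    # curves; the ruled surface joins P(u) to Q(u) by a straight segment.
    def f(u, v):
        px, py = P(u)
        qx, qy = Q(u)
        return ((1 - v) * px + v * qx, (1 - v) * py + v * qy)
    return f

# Example: map the unit square onto the quadrilateral A, B, C, D,
# ruling between the bottom edge A-B and the top edge D-C.
A, B, C, D = (0.0, 0.0), (2.0, 0.0), (3.0, 2.0), (-1.0, 2.0)
bottom = lambda u: ((1 - u) * A[0] + u * B[0], (1 - u) * A[1] + u * B[1])
top    = lambda u: ((1 - u) * D[0] + u * C[0], (1 - u) * D[1] + u * C[1])
f = ruled_map(bottom, top)
print(f(0, 0), f(1, 1))  # (0.0, 0.0) (3.0, 2.0) -- corners map to A and C
```

For a polygon rather than a quadrilateral, replace `bottom` and `top` by piecewise-linear parameterizations of the two boundary polylines; the same ruling between them then carries the square onto the polygon.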
Can we map a few random points on a plane to that of a regular polygon using some relation or transformation?
I have no idea what this question even means. Are you asking whether there is a mystical process whereby you can evaluate 23! without computation?
But to determine what number is equal to 23!, what kind of process are you looking for that does not involve any arithmetic?
but I am not sure how that would be easier, nor whether it would avoid inspection, whatever that means. I can't imagine an easier computation than multiplying $n$ numbers together.
I think the OP wants the reverse... given $x$, solve $n! = x$ for $n$.
For small $n$, brute force is simplest. For large $n$, use Stirling's formula and invert (may not be easy).
Well if that is what is being asked, it sure was asked obscurely.
If we are discussing positive integers, a fairly efficient algorithm is available. To determine whether any positive integer less than 200 trillion is a factorial of an integer, a binary search against a table of the factorials of 1 through 16 will require at most 4 comparisons. The brute force involved is that of a mouse.
Now, given an integer $m$, solve $(x-1) \log (x-1) = \log m - 1$ for $x_L$, and solve $x \log x = \log m$ for $x_R$, then if $m = n!$ for some $n$, it must be that $n \in [\exp(x_L), \exp(x_R)]$ which is easy to check. In fact, one can get an even faster computation by checking that $m$ is divisible by $\lceil x_L \rceil$ and if so, dividing it out and iterating.
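For exact integer inputs, though, even simpler brute force is hard to beat: $n!$ grows so fast that multiplying up until you reach or pass $m$ takes only a handful of iterations. A Python sketch:

```python
def inverse_factorial(m):
    # Return n if m == n! for some positive integer n, else None.
    if m <= 0:
        return None
    n, prod = 1, 1
    while prod < m:
        n += 1
        prod *= n
    return n if prod == m else None

print(inverse_factorial(720), inverse_factorial(25852016738884976640000))  # 6 23
print(inverse_factorial(100))  # None
```

Even for a number with hundreds of digits the loop runs only a few hundred times, so the Stirling-based bracketing above is needed only for truly enormous inputs.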
9 October. Leonid Petrov (Virginia). Cauchy identities, Yang-Baxter equation, and their randomization.
Abstract. Cauchy type summation identities for various families of symmetric polynomials (with Schur polynomials as the first example) are crucial in bringing exact solvability to various stochastic particle systems in the Kardar-Parisi-Zhang universality class. First breakthroughs in this direction about 20 years ago employed Robinson-Schensted-Knuth correspondences to study asymptotic fluctuations of longest increasing subsequences and TASEP (totally asymmetric simple exclusion process). Deforming the Schur structure, one can connect Cauchy identities to the Yang-Baxter equation for the six vertex model, and use this to exactly solve more general models such as ASEP. I will discuss how the structure of RSK correspondences should be adapted in connection with these deformations, providing a "bijective" point of view on the Yang-Baxter equation.
16 October. Jon Warren (Warwick). A first look at the Gaussian Free Field.
Abstract. I will try to give some intuition for why this is a fundamental process, starting with a discrete version, emphasizing the Markov property, and concluding with a quick look at an example of an interesting model in which the GFF arises.
23 October. Jon Warren and Sigurd Assing (Warwick). A first look at the Gaussian Free Field-II.
Abstract. Construction of a Gaussian measure whose covariance kernel is given by the inverse of the Laplacian on a bounded domain with Dirichlet boundary condition a la L. Gross's theory of Abstract Wiener Spaces.
30 October. Sigurd Assing (Warwick). The Gaussian Free Field-III.
6 November. Oleg Zaboronski (Warwick). Two-dimensional Green's functions.
Abstract. I will introduce two-dimensional Green's functions and discuss their asymptotic behaviour near the singularity.
13 November. Sigurd Assing (Warwick). Circle Averages and Markov Property of GFF.
Abstract. I will discuss circle averages but also revisit the Markov property introduced by Jon and try to connect it to a sharp Markov property. The main target is to prepare for the construction of the Liouville measure.
27 November. Oleg Zaboronski (Warwick). GFF: conformal invariance and the set of thick points.
Abstract. I will discuss the conformal invariance of the GFF and study (heuristically to start with) the set of $\alpha$-thick points of the GFF.
4 December. Kurt Johansson (KTH). Understanding the two-time distribution in local random growth.
Abstract. I will explain how to get some interesting information out of the formula for the two-time distribution.
15 January. Roger Tribe (Warwick). Liouville measure.
Abstract: Following Chapter 2 of Berestycki's notes, I will assign a rigorous meaning to the exponential of the Gaussian free field, an object of fundamental importance for the theory of random surfaces.
22 January. Roger Tribe (Warwick). Liouville measures-II.
29 January. Jacek Kiedrowski (Warwick). Knizhnik-Polyakov-Zamolodchikov formula.
Abstract: Following Chapter 3 of Berestycki's notes, I will introduce the KPZ formula, which is closely linked with computing critical exponents of models of statistical physics.
05 February. Jacek Kiedrowski (Warwick). Knizhnik-Polyakov-Zamolodchikov formula-II.
Abstract: Following Chapter 3 of Berestycki's notes, I will recap the KPZ formula and prove the case of expected Minkowski dimension using the multifractal spectrum of the Liouville measure.
12 February. Will FitzGerald (Warwick). Random planar maps and Liouville quantum gravity.
Abstract: I will try to give the intuition behind the connections (many of which are conjectural) between random planar maps and Liouville quantum gravity. This is following Chapter 4 of Berestycki's notes.
19 February. Will FitzGerald (Warwick). Scale-Invariant Random Surfaces.
26 February. Will FitzGerald (Warwick). Scale-Invariant Random Surfaces-II.
12 March. Jon Warren (Warwick). The Gaussian free field and SLE.
Abstract. I will discuss chapter 6 of Berestycki's notes and the paper by Sheffield, Conformal weldings of random surfaces: SLE and the quantum gravity zipper.
When two (appropriately chosen) quantum surfaces are (appropriately) joined together, they result in a new quantum surface containing the random line along which the original surfaces are joined. That random line, amazingly, turns out to be described by an SLE curve.
Abstract: Adjoint functor theorems give necessary and sufficient conditions for a functor to admit an adjoint. In this paper we prove general adjoint functor theorems for functors between $\infty$-categories. One of our main results is an $\infty$-categorical generalization of Freyd's classical General Adjoint Functor Theorem. As an application of this result, we recover Lurie's adjoint functor theorems for presentable $\infty$-categories. We also discuss the comparison between adjunctions of $\infty$-categories and homotopy adjunctions, and give a treatment of Brown representability for $\infty$-categories based on Heller's purely categorical formulation of the classical Brown representability theorem.
Universality of Wigner Random Matrices - Mathematical Physics
Abstract: We consider $N\times N$ symmetric or hermitian random matrices with independent, identically distributed entries where the probability distribution for each matrix element is given by a measure $\nu$ with a subexponential decay. We prove that the local eigenvalue statistics in the bulk of the spectrum for these matrices coincide with those of the Gaussian Orthogonal Ensemble (GOE) and the Gaussian Unitary Ensemble (GUE), respectively, in the limit $N\to \infty$. Our approach is based on the study of the Dyson Brownian motion via a related new dynamics, the local relaxation flow. We also show that the Wigner semicircle law holds locally on the smallest possible scales and we prove that eigenvectors are fully delocalized and eigenvalues repel each other on arbitrarily small scales.
Let $K$ be a non-empty, closed and convex subset of a real Hilbert space $H$. Let $T_i : K \to CB(K), i = 1,2, \ldots, N$, be a finite family of Lipschitz hemicontractive-type mappings with Lipschitz constants $L_i, i=1,2,\ldots, N$, respectively. It is our purpose, in this paper, to introduce a Halpern type algorithm which converges strongly to a common fixed point of a finite family of Lipschitz hemicontractive-type multivalued mappings under certain mild conditions. There is no compactness assumption on either the domain set or on the mappings $T_i$ considered.
Copyright © 2015 Sebsibe Teferi Woldeamanuel, Mengistu Goa Sangago, Habtu Zegeye. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
I know kind of a very elementary method to factor this number. Consider the following: $$10^6-1 = (10^3-1)(10^3+1)=9 \times 11 \times (10^2+10+1)(10^2-10+1) = 9 \times 11 \times 111 \times 91$$ I would then factor each number individually.
Is there a faster method? The great hint is that this number is $9$ times a repunit: $999999 = 9 \times 111111$.
You have gotten as far as the difference of sixth powers will take you. Now $111$ is divisible by $3$ by the sum of digits test, you know $9=3^2,$ and you are down to $3^3\cdot 11 \cdot 37 \cdot 91$. I don't see anything better than trial division at this point. For $37$ you only need to go up to $5$ to see it is prime. For $91$ you need to go to $7,$ you find $91=7\cdot 13$ and you are done. Maybe you know the last two off the top of your head.
If you have a computer, you could use this idea for an algorithm to factor: $$10^n -1 \equiv 0 \mod p \iff 10^n \equiv 1 \mod p$$ This will be more feasible for large values of $n$ because most programming languages already provide an efficient modular exponentiation function. If a prime $p$ satisfies this equivalence, it's a prime factor, though you will have to run additional trials to find the multiplicity of $p$.
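A sketch of that algorithm in Python (trial-division primality is plenty at this scale; the prime limit is an arbitrary cutoff for the example):

```python
def factor_10n_minus_1(n, prime_limit=1000):
    # Factor 10^n - 1 by testing, for each prime p below prime_limit,
    # whether 10^n = 1 (mod p) via fast modular exponentiation.
    m = 10**n - 1
    factors = {}
    for p in range(2, prime_limit):
        if any(p % q == 0 for q in range(2, int(p**0.5) + 1)):
            continue  # p is composite
        if pow(10, n, p) == 1:  # p divides 10^n - 1
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            factors[p] = e
    return factors, m  # m == 1 once every prime factor is below the limit

print(factor_10n_minus_1(6))  # ({3: 3, 7: 1, 11: 1, 13: 1, 37: 1}, 1)
```

For $n = 6$ this recovers $999999 = 3^3 \cdot 7 \cdot 11 \cdot 13 \cdot 37$, agreeing with the hand factorization above.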
Two numbers sharing an unknown prime factor. Find a method to quickly factor both numbers.
About the fact that every natural number which is coprime to $10$ has a multiple whose digits are all $1$.
Is this line of reasoning correct?
Prove that a number composed of only $2n$ "1"s, minus another number composed of only $n$ "2"s, is a perfect square.
Can factoring with the sum of 4 squares be made more efficient?
Transition-region explosive events (EEs) are characterized by non-Gaussian line profiles with enhanced wings at Doppler velocities of 50-150 km/s. They are believed to be the signature of solar phenomena that are one of the main contributors to coronal heating. The aim of this study is to investigate the link of EEs to dynamic phenomena in the transition region and chromosphere in an active region. We analyze observations simultaneously taken by the Interface Region Imaging Spectrograph (IRIS) in the Si IV 1394\AA\ line and the slit-jaw (SJ) 1400\AA\ images, and the Swedish 1-m Solar Telescope (SST) in the H$\alpha$ line. In total 24 events were found. They are associated with small-scale loop brightenings in SJ 1400\AA\ images. Only four events show a counterpart in the H$\alpha$-35 km/s and H$\alpha$+35 km/s images. Two of them represent brightenings in the conjunction region of several loops that are also related to a bright region (granular lane) in the H$\alpha$-35 km/s and H$\alpha$+35 km/s images. Sixteen are general loop brightenings that do not show any discernible response in the H$\alpha$ images. Six EEs appear as propagating loop brightenings, from which two are associated with dark jet-like features clearly seen in the H$\alpha$-35 km/s images. We found that chromospheric events with jet-like appearance seen in the wings of the H$\alpha$ line can trigger EEs in the transition region and in this case the IRIS Si IV 1394\AA\ line profiles are seeded with absorption components resulting from Fe II and Ni II. Our study indicates that EEs occurring in active regions have mostly upper-chromosphere/transition-region origin. We suggest that magnetic reconnection resulting from the braidings of small-scale transition region loops is one of the possible mechanisms of energy release that are responsible for the EEs reported in this paper.
Abstract: Based on a criterion for weak compactness in the $\ell^p$ product of the sequence of Banach spaces $E_i$, $i = 1, 2, \ldots$, we construct a measure of weak noncompactness in this space. It is shown that this measure is regular but not equivalent to the De Blasi measure of weak noncompactness provided the spaces $E_i$ have the Schur property. Apart from this, a formula for the De Blasi measure in the sequence space $c_0(E_i)$ is also derived.
Gross-Pandharipande-Siebert have shown that 2-dimensional scattering diagrams compute some genus zero Gromov-Witten invariants of log Calabi-Yau surfaces. I will explain that the q-refined 2-dimensional scattering diagrams compute some higher genus Gromov-Witten invariants of log Calabi-Yau surfaces. This result can be motivated by the expected relation, going back to Witten, between open A-model and Chern-Simons theory.
I will talk about joint work with Abramovich, Chen, and Siebert aiming at generalising the Li-Ruan and Jun Li degeneration formulas for Gromov-Witten invariants. The goal is to understand how to use degenerations of smooth varieties into, say, arbitrary normal crossings varieties in order to calculate Gromov-Witten invariants of the smooth variety. Li-Ruan solved this problem in the case that a smooth variety degenerates into a union of two smooth varieties, but log Gromov-Witten theory aims to deal with much more general degenerations. I will attempt to give this talk without needing a log geometric background.
We propose a paradigm to machine-learn the ever-expanding databases which have emerged in mathematical physics, algebraic geometry and particle phenomenology, as diverse as the statistics of string vacua and the classification of varieties.
As concrete examples, we establish multi-layer neural networks as both classifiers and predictors and train them with a host of available data ranging from Calabi-Yau manifolds to quiver representations for gauge theories, achieving impressive precision in a matter of minutes. This paradigm should prove useful in various investigations in landscapes in physics as well as pure mathematics.
Intuitively, the Hilbert scheme of $n$ points on a smooth surface $S$ parameterises sets of $n$ unordered points on $S$, "remembering" extra information if the points collide. For a fixed $S$ and varying $n$, it turns out there is a lot of structure hidden in the topology of Hilbert schemes -- Goettsche gave a product formula for them, which was later shown to be a shadow of a Heisenberg algebra action. It is natural to ask what happens for orbifold surfaces. When $G$ is a finite subgroup of $SL_2$, Hilbert schemes of points on $\mathbb C^2/G$ are important players in Nakajima's construction of representations of quantum groups, and on the other hand are connected to the classical combinatorial notion of cores and quotients of partitions.
However, when $G$ is not a subgroup of $SL_2$, much less is known, and we start to remedy this. For $G$ a finite cyclic group *not* in $SL_2$, we state conjectural analogs of Goettsche's product formula and of the Heisenberg action, and prove a few results using a variation of cores and quotients of partitions, showing connections to the minimal and maximal resolutions of $\mathbb C^2/G$.
I will describe open B-model invariants in the context of Saito-Givental theory that mirror open r-spin invariants constructed by Buryak, Clader, and Tessler. This is joint work with Mark Gross and Ran Tessler.
Non-homogeneous horospherical varieties have been classified by Pasquier and include the well known odd symplectic Grassmannians. In this talk I will explain how to study their quantum cohomology, with a view towards Dubrovin's conjecture. In particular, I will describe the cohomology groups of these varieties as well as a Chevalley formula, and prove that many Gromov-Witten invariants are enumerative. The consequence is that we can prove in many cases that the quantum cohomology is semisimple. I will also give a presentation of the quantum cohomology ring for odd symplectic Grassmannians. Finally, I will explain mirror constructions in two cases. This is joint work with R. Gonzales, N. Perrin, and A. Samokhin.
The computation of Gromov-Witten invariants of compact Calabi-Yau 3-folds such as quintic 3-fold has been a central problems in geometry and physics. Unfortunately, the higher genus computation turns out to be a very difficult problem. Via B-model and mirror symmetry, physicists have proposed a zoo of conjectures regarding the structure of theory as well as explicit computation. In this talk, we will review these conjectures and some of early attempts centering around the analytic continuation of Gromov-Witten theory.
The effort to study the analytic continuation of Gromov-Witten theory leads to the invention of FJRW-theory and more recent mathematical theory of gauged linear sigma model (GLSM). One consequence of these new theories is the appearance of an alternative definition of Gromov-Witten theory. The recent breakthrough starts from a logarithmic compactification of relevant GLSM. The localization formula of log GLSM moduli space immediately reduces Gromov-Witten invariants of quintic 3-fold to finitely many unknown "effective invariants" and a twisted theory. An "advanced Givental theory" leads to the computation of generating function in a closed form and the solutions of ALL the B-model conjectures. This is a joint work with Qile Chen, Felix Janda and Shuai Guo.
Calabi-Yau manifolds always occupy an important place in the classification of algebraic varieties. In dimensions one and two, there is only one family of Calabi-Yau manifolds. In the early '90s, physicists surprised mathematicians by discovering millions of different families of Calabi-Yau 3-folds, with a "mirror symmetry" among them. To put order to such a chaotic situation, Miles Reid proposed to connect ALL the Calabi-Yau 3-folds by algebraic surgeries such as flops and extremal transitions. The hope is that one can prove mirror symmetry of Calabi-Yau 3-folds by proving mirror symmetry among surgeries. His proposal was known as Reid's fantasy. Flops are well understood. Mark Gross soon classified so-called primitive extremal transitions. In 1996, Anmin Li and I started a program to calculate the change of Gromov-Witten invariants under these surgeries. We did the case of the flop and the conifold transition and failed for other types of transitions. After twenty years, Rongxiao Mi has recently made some breakthroughs using the language of quantum D-modules (motivated by early works of Lee-Lin-Wang).
A sparse matrix is just a matrix that is mostly zero. Typically, when people talk about sparse matrices in numerical computations, they mean matrices that are mostly zero and are represented in a way that takes advantage of that sparsity to reduce required storage or optimize operations.
This is similar to the Dictionary of Keys format and the COOrdinate format.
Of course, taken to the other extreme, this is quite inefficient. If this array were fully dense, with all nonzero values, we would have to store roughly three times as many numbers as if we had just stored the values consecutively in an array.
To understand how these different representations work, let's use some toy examples constructed from small matrices. In practice, there isn't much benefit to storing anything so small or so dense as a sparse matrix, but they're useful for illustrative purposes. Below we have a $(5, 5)$ matrix in which every value is either $0$ or $1$ with most values being $0$.
For the remainder of this post, we'll take advantage of HTML display in notebooks and the sympy pretty printer to display matrices using a little utility function.
Our first matrix here is sparse in the strict mathematical sense — it's mostly zero — but we're using np.matrix, a dense matrix object. To make sparse matrices, we'll make use of the objects provided by scipy.sparse.
The scipy sparse matrix constructors all accept dense matrices as inputs, which will allow us to create sparse matrices from our contrived examples and take them apart and see how they work.
First on our list is COO representation. The capitalization of the name might make it seem like an acronym, but it's just an abbreviation of coordinate, and the format itself is quite comprehensible.
The repr of a sparse matrix doesn't show any of the data like a standard matrix does. And sympy doesn't understand sparse matrices of this type. To see the data, we'll have to coerce the representation back to dense.
If you're like me, you might be tempted to dig into the scipy source to see how todense() is implemented on the various matrix representations. Unfortunately for us, the scipy source does not give itself over to inspection so easily. If you're comfortable with Fortran, LAPACK, BLAS, ATLAS, etc., the source might make more sense, but in that case, you likely have no need for this post. Instead, let's take a look at the attributes on the COO matrix instance to see how the data is stored.
COO matrices store the value, row and column for each nonzero item in the matrix. While wikipedia describes the COO format as consisting of 3-value tuples with $(row, column, value)$ for each nonzero item, the scipy implementation stores the data, the row indices and the column indices each as their own array with a length equal to the number of nonzero items ($NNZ$).
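To make that description concrete, here's a small pure-Python helper (a hypothetical one, not scipy's internal code) that collects the three parallel arrays from a dense nested list:

```python
def to_coo(dense):
    """Collect parallel (data, row, col) arrays from a dense nested list."""
    data, row, col = [], [], []
    for i, r in enumerate(dense):
        for j, v in enumerate(r):
            if v != 0:
                data.append(v)
                row.append(i)
                col.append(j)
    return data, row, col

dense = [
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
data, row, col = to_coo(dense)
print(data)                  # [1, 1, 1, 1, 1]
print(list(zip(row, col)))   # [(0, 3), (1, 0), (3, 1), (3, 4), (4, 2)]
```

Each of the three arrays has length $NNZ$, exactly as described above.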
This is easy to read and understand; at row $0$, column $3$, the value is $1$. In fact, we can easily see that all nonzero values are $1$.
Let's construct a slightly-less-trivial example where the values are the integers from $1$ to $10$.
scipy.sparse.coo_matrix accepts data in the canonical representation as a two-tuple, in which the first item is the array of nonzero values, and the second item is itself a two-value tuple with the rows and columns respectively. A second argument shape is required, or else it would be unclear whether empty rows and columns existed beyond the bounds of the explicitly provided data.
As I mentioned before, it's not easy to find and read the points in the scipy source where the various sparse representations are constructed and made dense. To illustrate how these and other operations work, let's make our own. I'm going to prefix all these simplistic sparse matrix classes with Naive because they're only for illustrative purposes. Real-world sparse matrix libraries handle lots of corner cases, take advantage of sorting to optimize certain operations and call out to lower-level code to optimize other operations. Ours will do none of these things and instead focus on iteration, setting and getting values in order to make the details of these formats more intuitive.
Below is an abstract base class describing everything we want our sparse matrix classes to handle. We'll only handle a couple bits of common functionality in our base class. It accepts and validates a keyword argument for shape and saves that as an instance property; likewise for a dtype argument which sets the type of the data (e.g. float, int). It also assumes that we'll define iteration, and uses that to implement a to_dense method and a utility method for pretty printing it using sympy.
The COO format is simple and our NaiveCOOMatrix class reflects that simplicity.
The advantages of the format are easy to see, too: the canonical representation makes it trivial to construct, easy to iterate over the nonzero values, and easy to set and get items by their indices.
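A minimal Python sketch of such a NaiveCOOMatrix might look like this (the names and details are my own, purely for illustration):

```python
class NaiveCOOMatrix:
    """Toy COO matrix: a list of (row, column, value) tuples, as on wikipedia.

    Purely illustrative -- unlike scipy.sparse.coo_matrix, it supports
    indexing and assignment but no arithmetic.
    """

    def __init__(self, items, shape):
        self.items = list(items)   # (row, column, value) triples
        self.shape = shape

    def __len__(self):
        # Number of explicitly stored (nonzero) values.
        return len(self.items)

    def __getitem__(self, key):
        i, j = key
        # Scan the triples and look for matching coordinates.
        for r, c, v in self.items:
            if (r, c) == (i, j):
                return v
        # If we don't find it in the explicitly defined items, it's 0.
        return 0

    def __setitem__(self, key, value):
        i, j = key
        for idx, (r, c, _) in enumerate(self.items):
            if (r, c) == (i, j):
                self.items[idx] = (i, j, value)  # overwrite in place
                return
        self.items.append((i, j, value))         # otherwise append a new entry

    def to_dense(self):
        rows, cols = self.shape
        out = [[0] * cols for _ in range(rows)]
        for r, c, v in self.items:
            out[r][c] = v
        return out

m = NaiveCOOMatrix([(0, 3, 1), (2, 3, 5)], shape=(5, 5))
print(m[2, 3], m[0, 0], len(m))  # 5 0 2
m[0, 0] = 11
print(len(m))                    # 3
```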
OK! We've made a class for representing a sparse matrix. Besides the obvious optimizations, ours differs in some really important ways from scipy.sparse.coo_matrix.
It accepts a list of (row, column, value) tuples rather than 3 arrays, one of each kind.
We spell it to_dense() rather than todense() because we're good people who like nice APIs.
scipy.sparse.coo_matrix doesn't support indexing or assignment, but does support a whole range of mathematical operations.
Ours supports iterating over nonzero values along with their indices, but doesn't guarantee an order. It's not clear how useful this is, but all the previously-stated caveats about this being for illustrative purposes apply here.
Since ours accepts the data in a different format, let's put our data into that format and construct it.
We did it! Let's sanity check our implementation by accessing a defined value, $5$ at position $(2, 3)$.
Accessing $(0, 0)$, for which we didn't supply a value, should return $0$.
We define its __len__ as the number of its nonzero values.
If we assign a nonzero value to $(0, 0)$, we should be able to access it subsequently.
And now the len should be a bit bigger.
Assigning a new value to a coordinate with a nonzero value should overwrite the existing value and not increase the length.
So that's a COO matrix. One major downside of this representation is the one mentioned in our giant example in the opening. Depending on how sparse a matrix is, and ours is not very sparse, the COO representation might actually increase the required storage. Let's look at how many values it takes to represent our matrix.
The COO format requires storing 33 numbers to represent 11 nonzero numbers. Storing every value consecutively would only require storing 25 numbers. Different representations take advantage of the structure of the sparsity to minimize storage and optimize operations.
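The arithmetic behind that claim is easy to verify:

```python
rows, cols, nnz = 5, 5, 11
coo_storage = 3 * nnz        # one row index, one column index, one value each
dense_storage = rows * cols  # every value stored consecutively
print(coo_storage, dense_storage)  # 33 25
```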
DOK stands for dictionary of keys and it's exactly what it sounds like. Of all the formats discussed in this post, it's by far the simplest to implement in vanilla Python. Like COO, it stores 3 numbers for each nonzero value, but it uses a dictionary where the key is the (row, column) pair and the value is the number.
All scipy.sparse matrix constructors support being supplied a single argument with a dense matrix, so we'll create the same example as the previous using that call signature, and then let's take it apart and see what it's made of.
sparse_dok implements keys(), values() and items() just like a vanilla python dict.
Implementation-wise, it doesn't get simpler. We store the dict, and we use it for iteration, lookup and assignment.
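A hypothetical sketch of that implementation, with my own naming:

```python
class NaiveDOKMatrix:
    """Toy dictionary-of-keys matrix: {(row, column): value}."""

    def __init__(self, entries, shape):
        self.d = dict(entries)
        self.shape = shape

    def __getitem__(self, key):
        return self.d.get(key, 0)   # a missing key means zero

    def __setitem__(self, key, value):
        self.d[key] = value

    def __len__(self):
        return len(self.d)

    def __iter__(self):
        # Yield (row, column, value) for each nonzero entry.
        for (r, c), v in self.d.items():
            yield r, c, v

m = NaiveDOKMatrix({(0, 3): 1, (2, 3): 5}, shape=(5, 5))
print(m[2, 3], m[1, 1], len(m))  # 5 0 2
```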
The LIL or list of lists representation is also straightforward to understand and implement. LIL is a row-oriented representation, in which row-based operations are easier to implement and may be less complex to compute.
A LIL matrix is constructed from a single array of length $M$ (the number of rows) in which each item is a list of (column_index, value) pairs.
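A minimal illustrative sketch of such a class (the details are my own):

```python
class NaiveLILMatrix:
    """Toy list-of-lists matrix: one list of (column, value) pairs per row."""

    def __init__(self, rows, shape):
        # rows is a list of length M; each item holds (column, value) pairs.
        self.rows = [list(r) for r in rows]
        self.shape = shape

    def __getitem__(self, key):
        i, j = key
        # Jump straight to the row, then scan for a matching column index.
        for c, v in self.rows[i]:
            if c == j:
                return v
        return 0

    def __setitem__(self, key, value):
        i, j = key
        for idx, (c, _) in enumerate(self.rows[i]):
            if c == j:
                self.rows[i][idx] = (j, value)
                return
        self.rows[i].append((j, value))

    def row(self, i):
        """Row slicing is cheap: the pairs for row i are already together."""
        return list(self.rows[i])

m = NaiveLILMatrix([[(3, 1)], [], [(0, 2), (4, 5)], [], []], shape=(5, 5))
print(m[2, 4], m.row(2))  # 5 [(0, 2), (4, 5)]
```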
CSR stands for compressed sparse row and is good for implementing fast arithmetic operations as well as slicing by row. It's more complicated than the previous examples and it can be used to take better advantage of the sparse structure.
Now let's take it apart to see what's inside.
It's clear enough that data is the nonzero values in "row-major" order, which is to say, left to right then top to bottom, much like how English text is read. It's less clear, though, what the other arrays are.
The second array is nondecreasing — each value is equal to or greater than the previous. Its first value is $0$ and its last is $10$. It's hard to say what that last $10$ might be: our input data itself contains the value $10$, and there are also $10$ nonzero values.
Let's double our nonzero values to make it more clear what's going on here.
The data in the first array doubles and everything else stays the same. We can reasonably conclude that the second array doesn't hold the nonzero data but describes its position in the matrix.
Things become much more obvious when we look at indptr in pairs.
When we subtract the former value from the latter value, we get the number of nonzero values in each row. And this is the key to how the CSR representation works. indptr has pointers into the two other arrays, describing successively where the data for each row starts and ends. The numbers in data are, as already figured out, the nonzero numbers in the matrix. The numbers in indices are the column indexes at which those corresponding nonzero numbers belong.
Equipped with this knowledge, we can access the values and the column indices by row.
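For example, with hypothetical arrays shaped like scipy's (the values here are invented for the demo):

```python
# Hypothetical CSR arrays for a matrix with 4 rows and 10 nonzero values.
data    = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
indices = [0, 3, 1, 2, 4, 0, 1, 2, 3, 4]   # column index of each value
indptr  = [0, 2, 5, 5, 10]                 # row i spans data[indptr[i]:indptr[i+1]]

for i, (start, end) in enumerate(zip(indptr, indptr[1:])):
    print(f"row {i}: {end - start} nonzero values,",
          list(zip(indices[start:end], data[start:end])))
```

Note that row 2 is empty: its extent is `indptr[2]:indptr[3]`, i.e. `5:5`.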
Now we know enough to write our own naive implementation. Like last time, we're going to change things from the way scipy works for sake of clarity.
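Here's one possible sketch of a lookup-only version (a simplification of the full implementation; names are mine):

```python
class NaiveCSRMatrix:
    """Toy CSR matrix with lookup only (no assignment, no arithmetic)."""

    def __init__(self, data, indices, indptr, shape):
        self.data, self.indices, self.indptr = data, indices, indptr
        self.shape = shape

    def row_extent(self, i):
        # Where row i's values live inside the data/indices arrays.
        return self.indptr[i], self.indptr[i + 1]

    def __getitem__(self, key):
        i, j = key
        start, end = self.row_extent(i)
        # Scan only this row's slice of the column-index array.
        for k in range(start, end):
            if self.indices[k] == j:
                return self.data[k]
        return 0

    def __len__(self):
        # Counting nonzero values remains simple.
        return self.indptr[-1]

m = NaiveCSRMatrix(
    data=[1, 2, 3], indices=[3, 0, 4], indptr=[0, 1, 1, 3], shape=(3, 5))
print(m[0, 3], m[2, 4], m[1, 0], len(m))  # 1 3 0 3
```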
CSR is a row-oriented format which makes certain row-wise operations simpler to implement and less computationally complex to execute. If we wanted to get the nonzero values and their respective column indices for each row, we could do so easily.
Getting the nonzero values for each column from a CSR-represented matrix is significantly more difficult. In the implementation below, the row_extent pairs are used to create pairs of column indices and values for each row. In order to ensure that missing columns show up in the result as empty arrays, we flat-map those pairs into a single list, group by the column index, create a dictionary from those groups, and use that to look up the values per column.
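One possible sketch of that column-extraction strategy (a simplified stand-in, not scipy's implementation):

```python
from itertools import groupby

def columns(data, indices, indptr, ncols):
    """Group a CSR matrix's values by column index."""
    # Flat-map every row's (column, value) pairs into one list...
    pairs = []
    for start, end in zip(indptr, indptr[1:]):
        pairs.extend(zip(indices[start:end], data[start:end]))
    # ...then group by column, so missing columns come out as empty lists.
    pairs.sort(key=lambda cv: cv[0])   # groupby needs sorted input
    by_col = {c: [v for _, v in grp]
              for c, grp in groupby(pairs, key=lambda cv: cv[0])}
    return [by_col.get(c, []) for c in range(ncols)]

cols = columns(data=[1, 2, 3], indices=[3, 0, 4], indptr=[0, 1, 1, 3], ncols=5)
print(cols)  # [[2], [], [], [1], [3]]
```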
CSC stands for 'compressed sparse column', and as you might expect, it's the sister format to CSR, except the pointer array holds the extents of the columns.
We'll make this using the signature that allows us to supply a dense matrix, which we'll make by calling the to_dense() method we defined on all these naive matrix objects.
scipy.sparse.csc_matrix uses the same naming convention for its canonical representation as does csr_matrix. We can see that our data, which was mostly in row-major order (save for that stray 11 we assigned post-construction), is no longer mostly-ordered in its representation because it's column-major ordered now.
Creating a NaiveSparseCSC class is largely a matter of swapping column and row in various places in our NaiveSparseCSR class. The same goes for writing column and row slicing functionality. The implementation is left as an exercise to the reader, or at least to the reader's imagination.
BSR stands for 'block sparse row' and it is also related to CSR.
We're running into the limitations of our small contrived example, so let's borrow this example from the scipy docs.
Like the name implies, BSR format represents a sparse matrix as a dense array of dense blocks.
Here, scipy infers the blocksize from the data we provide. Let's look at that data again.
data is an array in which every value is a $2 \times 2$ array—effectively, a little matrix. BSR requires that this block size divides the matrix's dimensions evenly, which allows the other indices to be relative to that block size. So our $6 \times 6$ matrix is indexable as a $3 \times 3$ matrix in which the items are not individual values but block matrices themselves. Once we make that jump in indexability, most of the rest is CSR-like.
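The core of that indexability jump is the coordinate transformation between absolute coordinates and block-relative ones. A hypothetical sketch:

```python
def unscale(coord, blocksize):
    """Split an absolute (row, col) into the index of the containing block
    and the relative offset within that block. Assumes the block size
    divides the matrix's dimensions evenly."""
    (i, j), (bi, bj) = coord, blocksize
    return (i // bi, j // bj), (i % bi, j % bj)

def scale(block, offset, blocksize):
    """Inverse transform: block index + in-block offset -> absolute coord."""
    (bi, bj), (oi, oj), (ri, rj) = block, offset, blocksize
    return (bi * ri + oi, bj * rj + oj)

block, offset = unscale((4, 5), (2, 2))
print(block, offset)                 # (2, 2) (0, 1)
print(scale(block, offset, (2, 2)))  # (4, 5)
```

Once coordinates are expressed as (block, offset) pairs, the rest of the bookkeeping follows the CSR logic, just over blocks instead of individual values.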
DIA format, short for diagonal, represents the data as a series of vectors along different diagonals, the diagonals themselves being indicated by relative offsets from the main diagonal.
The identity matrix provides a simple example.
We can construct the scipy sparse version by passing this dense matrix to the dia_matrix constructor.
Looking at its canonical representation, we find that we have an array of our main diagonal and a single offset value of zero.
To get a better handle on how this works, let's make a $5 \times 5$ matrix which has $1$ on its main diagonal, even numbers above it, odd numbers below it, up to $5$.
Looking inside this one we can see that our diagonals are each in their own array, with an array indicating their offsets from the main diagonal—negative being below, positive being above, and 0 being the main diagonal.
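A minimal lookup-only sketch of the format (column-aligned like scipy's layout; the details are my own):

```python
class NaiveDIAMatrix:
    """Toy diagonal-format matrix: one array per stored diagonal."""

    def __init__(self, data, offsets, shape):
        # data[k][j] is the value in column j of the diagonal at offsets[k],
        # mirroring scipy's column-aligned layout.
        self.data, self.offsets, self.shape = data, offsets, shape

    def __getitem__(self, key):
        i, j = key
        off = j - i            # which diagonal (i, j) sits on
        if off in self.offsets:
            return self.data[self.offsets.index(off)][j]
        return 0               # no stored diagonal covers this cell

eye = NaiveDIAMatrix(data=[[1, 1, 1]], offsets=[0], shape=(3, 3))
print(eye[0, 0], eye[1, 1], eye[0, 1])  # 1 1 0
```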
Sparse matrices might seem mystifying at first glance, but they're straightforward enough that every flavor of representation available in scipy.sparse is easy enough to read and write using (mostly) pure Python.
That being said, there's a lot more to learn about sparse matrix representations — particularly, how the various representations can be leveraged to optimize mathematical operations and the complexity of constructing and converting between these representations. I wanted to provide examples in this post of how sparse matrices naturally arise from certain applications and I still hope to do so in a future post.
Thanks for reading this far! You can ask questions, file corrections or otherwise make noise towards me on Twitter and I'll edit this post to link to the HN thread when and if it comes into existence.
Celery — the legendary Pokenom has been spotted in Alexa Forest.
To become the best Pokenom trainer, Bash has arrived at Alexa Forest to capture Celery. After lots of information gathering, Bash was able to draw a map of Alexa Forest, and noted down $K$ sightings of Celery.
Alexa Forest's map is a convex polygon $A$ with $N$ vertices on the Cartesian plane. $K$ sightings of Celery can be considered as $K$ points — all are strictly inside Alexa Forest.
Bash is ready to search Alexa Forest to find Celery. However, Bash realized that Alexa Forest is simply too big. It would take decades to search the entire forest. But Bash is smart. Based on his research, Bash knows that Celery can only be found inside a polygon $Z$, where the vertices of $Z$ are a subset of the vertices of $A$, and all $K$ sightings of Celery must be strictly inside polygon $Z$.
Of course, there can be multiple polygons $Z$ satisfying the above conditions. Your task is to help Bash find the polygon $Z$ with smallest number of vertices.
A point $P$ is strictly inside Polygon $A$, iff $P$ is inside $A$ and $P$ does not lie on the border of $A$.
The first line of input contains a single positive integer $N$ $(3 \le N \le 2 \cdot 10^5)$.
The next $N$ lines each contain $2$ integers $x_ i$, $y_ i$ — the coordinates of the $i$-th vertex of Alexa Forest $(-10^9 \le x_ i, y_ i \le 10^9)$. The vertices are listed in either clockwise or counterclockwise order. It is guaranteed that Alexa Forest is convex.
The next line contains a single positive integer $K$ $(1 \le K \le 10^5)$.
The next $K$ lines, each line contains $2$ integers $x_ i$, $y_ i$ — the coordinates of a sighting of Celery $(-10^9 \le x_ i, y_ i \le 10^9)$. All points are guaranteed to be inside Alexa Forest and no points are on the border of Alexa Forest.
Output a single integer — the smallest number of vertices of polygon $Z$.
In the first example, the only valid polygon satisfying the conditions is the whole of Alexa Forest.
Abstract: Coulomb branch chiral rings of $\mathcal N=2$ SCFTs are conjectured to be freely generated. While no counter-example is known, no direct evidence for the conjecture is known either. We initiate a systematic study of SCFTs with Coulomb branch chiral rings satisfying non-trivial relations, restricting our analysis to rank 1. The main result of our study is that (rank-1) SCFTs with non-freely generated CB chiral rings, when deformed by relevant deformations, always flow to theories with non-freely generated CB rings. This implies that such theories, if they exist, must form a distinct subset under RG flows. We also find many interesting characteristic properties that these putative theories satisfy which may be helpful in proving or disproving their existence using other methods.
We consider the problem of obtaining the approximate maximum a posteriori estimate of a discrete random field characterized by pairwise potentials that form a truncated convex model. For this problem, we propose an improved st-MINCUT based move making algorithm. Unlike previous move making approaches, which either provide a loose bound or no bound on the quality of the solution (in terms of the corresponding Gibbs energy), our algorithm achieves the same guarantees as the standard linear programming (LP) relaxation. Compared to previous approaches based on the LP relaxation, e.g. interior-point algorithms or tree-reweighted message passing (TRW), our method is faster as it uses only the efficient st-MINCUT algorithm in its design. Furthermore, it directly provides us with a primal solution (unlike TRW and other related methods which solve the dual of the LP). We demonstrate the effectiveness of the proposed approach on both synthetic and standard real data problems. Our analysis also opens up an interesting question regarding the relationship between move making algorithms (such as $\alpha$-expansion and the algorithms presented in this paper) and the randomized rounding schemes used with convex relaxations. We believe that further explorations in this direction would help design efficient algorithms for more complex relaxations. | CommonCrawl |
We introduce and analyze symmetric infinite-body optimal transport (OT) problems with cost function of pair potential form. We show that for a natural class of such costs, the optimizer is given by the independent product measure all of whose factors are given by the one-body marginal. This is in striking contrast to standard finite-body OT problems, in which the optimizers are typically highly correlated, as well as to infinite-body OT problems with Gangbo-Swiech cost. Moreover, by adapting a construction from the study of exchangeable processes in probability theory, we prove that the corresponding $N$-body OT problem is well approximated by the infinite-body problem. To our class belongs the Coulomb cost which arises in many-electron quantum mechanics. The optimal cost of the Coulombic N-body OT problem as a function of the one-body marginal density is known in the physics and quantum chemistry literature under the name SCE functional, and arises naturally as the semiclassical limit of the celebrated Hohenberg-Kohn functional. Our results imply that in the inhomogeneous high-density limit (i.e. $N\to\infty$ with arbitrary fixed inhomogeneity profile $\rho/N$), the SCE functional converges to the mean field functional. We also present reformulations of the infinite-body and N-body OT problems as two-body OT problems with representability constraints and give a dual characterization of representable two-body measures which parallels an analogous result by Kummer on quantum representability of two-body density matrices.
Local normal form of a (several complex variable) holomorphic map at a point?
When $m=n=1$, the result is well-known: $F$ is either a constant function, or locally $z\mapsto z^n$. On the other hand, when $F'(0)$ is injective or surjective, then implicit function theorem tells us that $F$ is locally equivalent to $F'(0)$, which is classified up to congruence. When $m,n>1$, maybe there's no complete classification, but I want some partial results.
Thoughts: Maybe we can view the problem formally. Instead of holomorphic maps, we can consider formal power series, especially when $n=1$. Weierstrass preparation theorem might work.
Background: I learnt that injective holomorphic maps $\mathbb C^n\to\mathbb C^n$ are biholomorphic. The proof in the course is approximately this. However, such a proof doesn't give any information on the classification of maps. I'm just looking for a result which is strong enough to prove that proposition.
Browse other questions tagged complex-analysis several-complex-variables or ask your own question.
Why holomorphic injection on $C^n$must be biholomorphic?
Deep reason why infinite sheet means logarithm while finite sheet means polynomial?
Determining the biholomorphic self maps of a vertical strip.
Relation between value of derivative and 'size' of Codomain?
Brzozowski derivatives are one of the shibboleths of functional programming: if you ask someone about implementing regular expressions, and you get back an answer involving derivatives of regular expressions, then you have almost surely identified a functional programmer.
Again, this is a simple inductive definition. So to match a string we just call the 1-character derivative for each character, and then check to see if the final regular expression is nullable. However, there's a catch! As you can see from the definition of $\delta_c$, there is no guarantee that the size of the derivative is bounded. So matching a long string can potentially lead to the construction of very large derivatives. This is especially unfortunate, since it is possible (using DFAs) to match strings using regular expressions in constant space.
Then his theorem guarantees that the set of derivatives is finite. However, computing with derivatives up to equivalence is rather painful. Even computing equality and ordering is tricky, since union can occur anywhere in a regular expression, and without that it's difficult to implement higher-level data structures such as sets of regular expressions.
So for the most part, derivatives have remained a minor piece of functional programming folklore: they are cute, but a good implementation of derivatives is not really simple enough to displace the usual Dragon Book automata constructions.
However, in 1995 Valentin Antimirov introduced the notion of a partial derivative of a regular expression. The idea is that if you have a regular expression $r$ and a character $c$, then a partial derivative is a regular expression $r'$ such that if $r'$ accepts a word $s$, then $r$ accepts $c \cdot s$. Unlike a derivative, this is only a if-then relationship, and not an if-and-only-if relationship.
Partial derivatives can be lifted to words just as ordinary derivatives can be, and it is relatively easy to prove that the set of partial word derivatives of a regular expression is finite. We can show this even without taking a quotient, and so partial derivatives lend themselves even more neatly to an efficient implementation than Brzozowski derivatives do.
I'll illustrate this point by using Antimirov derivatives to construct a DFA-based regular expression matcher in ~50 lines of code. You can find the Ocaml source code here.
First, let's define a datatype for regular expressions.
So C is the constructor for single-character strings, Nil and Seq correspond to $\epsilon$ and $r\cdot r'$, and Bot and Alt correspond to $\bot$ and $r \vee r'$.
Next, we'll define the nullability function, which returns true if the regular expression accepts the empty string, and false if it doesn't. It's the obvious recursive definition.
The aderiv function implements the Antimirov derivative function $\alpha_c$ described above. It's basically just a direct transcription of the mathematical definition into Ocaml. The deriv function applies the derivative to a whole set of regular expressions and takes the union.
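The post's actual code is OCaml; here's a hypothetical Python transcription of the same two functions, plus a direct matcher built on them (the encoding and names are mine):

```python
# Regular expressions as nested tuples: 'Nil', 'Bot', ('C', ch),
# ('Seq', r1, r2), ('Alt', r1, r2) -- mirroring the OCaml datatype.

def nullable(r):
    if r == 'Nil':
        return True
    if r == 'Bot' or r[0] == 'C':
        return False
    if r[0] == 'Seq':
        return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])   # Alt

def aderiv(c, r):
    """Antimirov partial derivative of a single regex: a *set* of regexes."""
    if r in ('Nil', 'Bot'):
        return set()
    if r[0] == 'C':
        return {'Nil'} if r[1] == c else set()
    if r[0] == 'Seq':
        ds = {('Seq', r1, r[2]) for r1 in aderiv(c, r[1])}
        return ds | aderiv(c, r[2]) if nullable(r[1]) else ds
    return aderiv(c, r[1]) | aderiv(c, r[2])  # Alt

def deriv(c, rs):
    """Lift aderiv to a whole set of regexes, taking the union."""
    return set().union(*[aderiv(c, r) for r in rs])

def matches(r, s):
    rs = {r}
    for ch in s:
        rs = deriv(ch, rs)
    return any(nullable(q) for q in rs)

print(matches(('Seq', ('C', 'a'), ('C', 'b')), 'ab'))  # True
```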
Here, size is the number of states, and we'll use integers in the range [0,size) to label the states. We'll use fail to label the sink state for non-matching strings, and take trans to be the list of transitions. The final field is a list of accepting states for the DFA.
Now, we'll need a little more scaffolding. The enum function is a "functional for-loop" looping from i to max. We use this to write charfold, which lets us fold over all of the ASCII characters.
The find function takes a set of regular expression and returns a numeric index for it. To do this, it uses a state (n, m), where n is a counter for a gensym, and m is the map we use to map sets of regular expressions to their indices.
The main work happens in the loop function. The s parameter is the state parameter for find, and v is the visited set storing the set of states we have previously visited. The t parameter is the list of transitions built to date, and f are the final states generated so far.
The rs parameter is the current set of regular expressions. We first look up its index x, and if we have visited it, we return. Otherwise, we add the current state to the visited set (and to the final set if any of its elements are nullable), and iterate over the ASCII characters. For each character c, we can take the derivative of rs, and find its index y. Then, we can add the transition (x, c, y) to t, and loop on the derivative. Essentially, this does a depth-first search of the spanning tree.
We then kick things off with empty states and the singleton set of the argument regexp r, and build a DFA from the return value. Note that the failure state is simply the index corresponding to the empty set of partial derivatives. The initial state will always be 0, since the first state we find will be the singleton set r.
Since we have labelled states by integers from 0 to size, we can easily build a table-based matcher from our dfa type.
Here, the table type has a field m for the transition table, an array of booleans indicating whether the state accepts or not, and the error state. We build the table in the obvious way in the table function, by building an array of array of integers and initializing it using the list of transitions.
Then, we can give the matching function. The matches' function takes a table t, a string s, an index i, and a current state x. It will just trundle along updating its state, as long as we have not yet reached the end of the string (or hit the error state), and then it will report whether it ends in an accepting state or not. The re_match function just calls matches' at index 0, in state 0.
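A self-contained, hypothetical Python sketch of the whole construction, numbering derivative sets depth-first and then running the resulting table (it only mirrors the shape of the OCaml described above, restricted to an explicit alphabet):

```python
def nullable(r):
    if r == 'Nil':
        return True
    if r == 'Bot' or r[0] == 'C':
        return False
    if r[0] == 'Seq':
        return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])   # Alt

def aderiv(c, r):
    # Antimirov partial derivative of a single regex: a set of regexes.
    if r in ('Nil', 'Bot'):
        return set()
    if r[0] == 'C':
        return {'Nil'} if r[1] == c else set()
    if r[0] == 'Seq':
        ds = {('Seq', r1, r[2]) for r1 in aderiv(c, r[1])}
        return ds | aderiv(c, r[2]) if nullable(r[1]) else ds
    return aderiv(c, r[1]) | aderiv(c, r[2])  # Alt

def deriv(c, rs):
    return set().union(*[aderiv(c, r) for r in rs])

def dfa_of_re(r, alphabet):
    """Depth-first exploration of reachable sets of partial derivatives,
    numbering each set as a DFA state. State 0 is the start state; the
    empty set of derivatives acts as the sink/fail state."""
    index, trans, final = {}, [], set()

    def find(rs):
        if rs not in index:
            index[rs] = len(index)
        return index[rs]

    def explore(rs, visited):
        x = find(rs)
        if x in visited:
            return
        visited.add(x)
        if any(nullable(q) for q in rs):
            final.add(x)
        for c in alphabet:
            ds = frozenset(deriv(c, rs))
            trans.append((x, c, find(ds)))
            explore(ds, visited)

    explore(frozenset([r]), set())
    return trans, final

def run(dfa, s):
    trans, final = dfa
    table = {(x, c): y for x, c, y in trans}
    x = 0
    for ch in s:
        x = table[(x, ch)]   # assumes every character of s is in the alphabet
    return x in final

dfa = dfa_of_re(('Seq', ('C', 'a'), ('C', 'b')), 'ab')
print(run(dfa, 'ab'), run(dfa, 'aa'))  # True False
```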
Why do you post answers on math.stackexchange?
As a math student, this is a very helpful resource.
Also, can people do something useful with their "points" earned on the site?
I am needy and insecure and derive validation from the gratitude of strangers.
Because solving mathematical problems is interesting, fun and it helps to keep mental acumen.
I also find my teaching abilities, and perhaps writing abilities, to improve much faster because of this site. Also my grasp of advanced set theoretical concepts.
I read a story about some guy who was deathly sick and to pass the time before he expired he began to read Euclid's Elements. Needless to say he made a miraculous recovery. I was also close to buying the farm so I joined a bunch of math forums and became a little active on a few here and guess what? I feel like an 85-year-old again. Sharing what you have no matter how small, giving back for everything that was given to me, that is what life is about. Sure trying to find people who know less than I do to help is hard but even a blind squirrel can find an acorn now and then.
Whenever I use a new program, get an error message or install a new hardware, I use google to find my problem on some forum where usually someone already answered it free of charge. This person will never even know that I was helped by the answer. Whenever I have a question about language that is not well answered by my textbook or a question about a fact, I use wikipedia and I find that someone compiled answers for free that are way better than my books. No feedback on the usefulness for me reaches the many editors of the respective articles.
One should contribute in some capacity to this collection of knowledge, and at some times, this is a place where it is more agreeable for me to do so than somewhere else.
The gamification might play some role there, but for me this is counteracted by the restriction on human contact. It is much more important to me that the interface just works reasonably well and allows others to find my answers reasonably easily.
There's something entertaining about answering questions that are hard enough to hold my interest but not so hard as to be a grueling experience. Also it's good to be helping students with math. I rarely use mathoverflow because the problems there are much harder and would take a while to solve if I even could. The problems in my field there often involve technical issues in other people's research.
To be honest, this site is also a way to pass the time when I have nothing else to do :) It's more intellectually stimulating than watching TV or whatnot.
Reputation doesn't play much of a role for me, possibly because I don't use my actual name here.
In 2010 a psychology graduate student did one or two surveys on motivation for people using, and especially answering, on MO, see http://tea.mathoverflow.net/discussion/890/mathoverflow-survey-update/ Tausczik eventually published her results in psychology journals. She had been an undergraduate math major at Berkeley when MO was still in its formative stages, so she had an early interest.
I was one of the ones telephoned. At the time, I gave a lot of high-flown reasons, helping people, what have you. I think that is how many people feel for the first year or two of contributing on a site of this type. For me, there was a middle era where I mostly felt I was just showing off what I knew (this is more MSE, really), finally a relaxed time where I just answer when I feel like it. Similar with MSE, just a delay of a year. Things being what they are, it appears that those who go full blast answering questions for years at a furious pace are fairly likely to quit completely when that becomes tiresome.
Anyway, the sites are not parallel in one way that is significant for the question. On MO it is peers helping peers. On MSE it is more one-sided, more strictly teaching, and motivation does differ. I suppose I have mixed together descriptions of my behavior on the two sites. Oh, well.
It occurs to me that I can be quite specific about reputation. On MO, before it joined stackexchange proper, it was possible for a 10K user to search deleted posts, including questions deleted by the person asking, all the way back to the beginning of the site. I did this in connection with this unpleasant episode: http://tea.mathoverflow.net/discussion/1187/extending-from-a-plane-in-r3-again-and-again-and-again/ Under the circumstances, I felt it was desirable to quickly get up to 10K here on MSE, but was disappointed to find that I had severely limited ability to search deleted posts, and no way at all to see self-deleted questions. Also there was something about not being 100% trusted until 20K. So I did that, and found no improvement in searching. After that, I felt that I had no specific benefit in answering tons of questions that I did not necessarily enjoy, or tinkering with adequate answers so as to get more points, so I just slowed down.
I'm just trying to become better at math. Learning from a book and by homework exercises is not that interesting to me, but I really do enjoy teaching others. In addition, I find that I tend to give the clearest and most rigorous answers when I'm trying to explain something to someone who doesn't understand, rather than to a grader or to myself.
Moreover, gamification, i.e. the reputation system, really works for me. I like collecting internet points, and they give me an incentive to do as much as I can (when I have time).
My answers on Math.StackExchange are usually to questions that are not too high-level, simply because I'm not an expert in any math field.
Now I notice that I mostly write an answer when I see a problem and think I can write a concise "cool" answer, or apply a tool that surprised me when I acquired it. If you have an aha-moment, you feel like now you're the one who gets it and want to induce that point of view in others.
I also don't find answering particularly rewarding, because when a $\gg 10\,\mathrm k$-user answers in the same thread, he will automatically get more votes than me and I don't think it's always because of the answer, but because of established trust.
Well I hope this doesn't sound naive, but I just love teaching. At work I get paid for it, which of course is brilliant, getting paid for doing something you enjoy. But I'm still happy to do more, so there's MSE. And I guess reputation is payment of a kind. So in a way I suppose you could say it's all about me, though I do also think that helping (in however small a way) to raise the world's competence in and appreciation of mathematics is a worthwhile aim.
To be a bit more specific about what I get out of teaching: a significant number of students who ask questions (whether face-to-face or online) do not really understand what their difficulty is. It may not be the question they are actually asking, but rather some very deep-rooted misunderstanding. Identifying what their real problem is and trying to answer in such a way as to address both the question asked and the deeper issues is one of the most intellectually satisfying aspects of teaching.
My education does not feel like it was for naught when I can answer a rare, specific question. This increases my satisfaction in learning.
Every question is a challenge. Can I come up with the answer? If already answered, can I come up with a different answer?
Climbing the reputation ladder is fun. Given my current reputation, I feel like I am a valuable contributor to the community.
The problem in question is a very satisfying question to have answered for myself, personally. I'd rather figure it out myself than have someone else answer it for me.
Occasionally I do enjoy helping people, but I tend to stay away from the typical calc stuff.
I think for three reasons.
I am not now a mathematician or in a mathematical job, but I have enjoyed maths all my life (and done pretty well too), and I want to keep myself sharp and learn what's going on. I have discovered that I get more out of participating than reading - how much I contribute here varies with the season and how busy I am. But I enjoy it, and I learn from the engagement.
Second, I want to share my enjoyment of maths with others, and the tricks and insights and ideas I've learned along the way.
Third, which I've discovered by participating, I relish the challenge of helping someone to understand something when they seem to be stuck, not just giving the answer to the problem, but also some understanding which will develop mathematical skill and knowledge, and open up useful new ways of thinking about what to do when you are stuck.
The best way to learn is to teach, and answering questions helps me learn. Sometimes I recover lost knowledge, sometimes in attempting to answer a question in a different way I learn something new. I enjoy teaching, but I work in industry, so my opportunities to share insight are otherwise limited.
Because of my limited time, I try to limit myself to questions that I can answer quickly, and where I believe I can transmit a genuine insight to the OP. I enjoy lots of homework and algebra-to-calculus level questions, because I feel that math education is sorely deficient in these areas, and that if I can provide insight instead of an algorithm, then I am doing that person a favor.
Also, answering questions helps keep my LaTeX and communications skills sharp.
There is a constant stream of questions to keep me occupied when I get bored.
The site is well-organized and well-run, so the experience is very smooth.
This is the big one: The site is run as a meritocracy.
You are promoted based on your ability and the quality of your contributions, and I think that is precisely how a site like this should run.
I have not been posting answers to SE for a very long time, I have been asking questions for a while though, and generally using the site(s) to find answers that elude me.
The reasons I have for contributing range from noble to... not so much.
First would be the idea of paying it forward. Or maybe pay it back since I have improved my career and education by asking questions on various sites.
I am also making an effort to learn to be a more effective communicator, making posts is good practice for learning to write better. And the positive and negative feedback provides a good indicator of how well I am doing and if I need to edit my response.
Honestly I am also hoping that gaining reputation and more importantly engaging with people who run a business (more so on StackOverflow) will lead to an increase in job potential. Although I am not sure how well this mechanism will work.
And helping people really boosts self-esteem.
As for why most of my time is spent on the Mathematics site? I can answer more questions consistently as a B.S. of Mathematics than as a self-taught programmer of three years; also, math is more fun, whereas programming is my job.
Just my experience, but I thought I would share it.
Self-esteem. Sometimes, when I read or hear things like Youtube comments, poorly spelled Facebook posts, oversimplified political rants on blogs, TV, bars, et cetera, I start feeling incredibly smart. I know I'm not, I just feel that way.
Figuring out what sort of explanations people are looking for can be an interesting puzzle, and articulating them can be good practice.
Mathematics is beautiful, and it feels good to have more people appreciate it more, even a little bit, even if that appreciation comes in the form of "this particular problem I'm forced to do is no longer as big of a headache".
Then the question becomes, why Math Stackexchange as opposed to some other forums (which can have their benefits over MSE)? The answer is probably that MSE is a Skinner Box, albeit one I have other good reasons to stay in.
A few reasons. First, I like learning about math. There are interesting questions and well written answers on here. Second, I like teaching math. There have been a few pedagogically interesting questions on here. One I saw recently (can't remember the link) was a student being confused about a "ball" in the $xy$ plane. They said no matter how small the radius was, a "ball" would come out of the plane to get some $z$ points. This was an interesting misconception and I'm glad I saw it. Third, all the bells and whistles and reputation and badges. I love positive feedback. Reputation makes me feel so accomplished.
Long term, I feel like the site has matured me. Some of my earlier answers (not THAT long ago) I wouldn't give now. Not that they're wrong, but I just would present things better. This maturity has helped in math outside of SE.
I'm curious whether I still recall how it works and sometimes it's simply fun to try out these muscles again (which I usually cannot do in my job). It kind of helps me to not forget all the stuff I was glad to know and able to apply.
I do see people come from university and sometimes would really love it if they knew better. Not only math or their area of technical expertise, but also cooperating, asking questions which are worth looking at, formulating ideas. Maybe I do hope that answering questions here (and commenting on questions...) might be a small chance to improve on this in general. This is a rather vague idea, though.
Finally, I do think that in today's complicated world it is important that people know about basic math, and that many have a hard time learning it. Answering questions on this site might (hopefully) reduce the general aversion to math.
Learning from others. This site is a great resource for me, not just for book advice but also for playing around with concepts and looking at them in a new way.
Giving back. Others have helped me understand things, so I think it's only fair that I pay it forward and help other people when I can.
Because I hate my job and need the daily distraction.
But seriously, the short answer is to teach and to learn.
I left academia several years ago and have recently begun to miss (parts of) it. This site is a great way to stay involved at what feels like just the right level. I can share knowledge, refresh my own, keep my $\TeX$ skills sharp, and learn cool new things.
First reason is, I am an academician. My field is Computer Science. I love math, and I love to teach. The purpose of my life is to share information. Share knowledge.
Second, I believe there is not a single thing on earth that cannot be explained using mathematics. History, linguistics, everything can be modelled or formulated.
The answer to second question would be: When I apply for a job, I'll refer to this site and say "Look at my math.stackexchange reputation. It is so high." :) Just kidding.
It gives me a feeling of satisfaction.
It sharpens my math/proof-writing skills.
I feel obligated to give back something... So that the rest of the community has spare time to solve my problems.
To have this fantastic resource and not to use it to share knowledge is almost a crime.
For me, answering here is not only to help others, but also to learn new techniques as well.
Often, a mathematics question can be solved using different approaches. And when I post an answer here, I can compare my answer to the others and find out the witty ones. It is also worth it since people may leave comments under my answer suggesting improvements to my approach. It helps me to solve math problems in a more concrete and careful manner.
I mostly ask questions obtained from developing my BigZ, but when I rarely answer questions it's often to contribute my personal (weird) perspective.
What is the motivation for people to answer questions in Mathematics StackExchange?
What do you really like about working with/contributing to math-SE?
What is the Real Use of Reputation?
How democratic and fair is the reward/reputation system actually?
Why this type of questions are voted to be closed?
How should we deal with "sob stories"/gaming the system?
Abstract: The distribution of finite time observable averages and transport in low dimensional Hamiltonian systems is studied. Finite time observable average distributions are computed, from which an exponent $\alpha$ characteristic of how the maximum of the distributions scales with time is extracted. To link this exponent to transport properties, the characteristic exponent $\mu(q)$ of the time evolution of the different moments of order $q$ related to transport is computed. As a testbed for our study the standard map is used. The stochasticity parameter $K$ is chosen so that phase space is either mixed, with a chaotic sea and islands of stability, or consists only of a chaotic sea. Our observations lead us to propose a law relating the slope of the function $\mu(q)$ at $q=0$ to the exponent $\alpha$.
And what are some of the main algorithmic approaches to look at?
Generally, the problems of machine learning may be considered variations on function estimation for classification, prediction or modeling.
In supervised learning one is furnished with inputs ($x_1$, $x_2$, ...) and outputs ($y_1$, $y_2$, ...) and is challenged with finding a function that approximates this behavior in a generalizable fashion. The output could be a class label (in classification) or a real number (in regression): these are the "supervision" in supervised learning.
In the case of unsupervised learning, in the base case, one receives inputs $x_1$, $x_2$, ..., but neither target outputs nor rewards from the environment are provided. Based on the problem (classification or prediction) and your background knowledge of the space sampled, you may use various methods: density estimation (estimating some underlying PDF for prediction), k-means clustering (classifying unlabeled real-valued data), k-modes clustering (classifying unlabeled categorical data), etc.
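As a concrete illustration of the unsupervised setting, here is a minimal pure-Python sketch of Lloyd's algorithm for k-means; the data points, the cluster count, and the seed are all invented for the example:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's algorithm for k-means on 2-D points: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)      # initialize with k distinct data points

    def nearest(p):
        # Index of the centroid closest to p (squared Euclidean distance).
        return min(range(k),
                   key=lambda j: (p[0] - centroids[j][0]) ** 2
                               + (p[1] - centroids[j][1]) ** 2)

    labels = [0] * len(points)
    for _ in range(iters):
        labels = [nearest(p) for p in points]              # assignment step
        for j in range(k):                                 # update step
            cluster = [p for p, l in zip(points, labels) if l == j]
            if cluster:
                centroids[j] = (sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster))
    return centroids, labels

data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),    # one tight group near the origin
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]    # another near (5, 5)
cents, labs = kmeans(data, 2)
```

No labels are ever supplied: the algorithm discovers the two groups purely from the geometry of the inputs.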
Semi-supervised learning involves function estimation on labeled and unlabeled data. This approach is motivated by the fact that labeled data is often costly to generate, whereas unlabeled data is generally not. The challenge here mostly involves the technical question of how to treat data mixed in this fashion. See this Semi-Supervised Learning Literature Survey for more details on semi-supervised learning methods.
Unsupervised learning is when you have no labeled data available for training. Examples of this are often clustering methods.
In this case your training data exists out of labeled data. The problem you solve here is often predicting the labels for data points without label.
Prediction. If you are predicting a real number, it is called regression. If you are predicting a whole number or class, it is called classification.
Modeling. Modeling is the same as prediction, but the model is comprehensible by humans. Neural networks and support vector machines work great, but do not produce comprehensible models. Decision trees and classic linear regression are examples of easy-to-understand models.
Similarity. If you are trying to find natural groups of attributes, it is called factor analysis. If you are trying to find natural groups of observations, it is called clustering.
Association. It's much like correlation, but for enormous binary datasets.
Apparently Goldman Sachs created tons of great neural networks for prediction, but then no one understood them, so they had to write other programs to try to explain the neural networks.
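By contrast, one of the "easy-to-understand models" mentioned above can be written and read in a few lines. Here is a minimal pure-Python ordinary-least-squares fit of a line; the labeled toy data is invented for the example:

```python
def fit_line(xs, ys):
    """Ordinary least squares for the model y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx                      # intercept so the line passes through the means
    return a, b

# Noise-free labeled data generated from y = 2x + 1, so the fit recovers it exactly.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)   # a = 2.0, b = 1.0
```

The fitted coefficients themselves are the "model", and a human can inspect them directly — exactly what a trained neural network does not offer.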
Understanding the difference between Supervised and unsupervised learning?
Why is (deep) unsupervised and semi-supervised learning so hard?
Difference between semi-supervised learning and prediction?
Are hot machine learning solutions -like the Show, Attend and Tell paper- instances of unsupervised learning?
What does the sign that looks like $\geq$ except with the bottom line being sloped mean?
I'm referring to this symbol: $\geqslant$.
No symbol list I've come across mentions it. I saw it in the book Pattern Recognition and Machine Learning by Christopher M. Bishop.
The symbols $\geq$, $\geqslant$, and >= all mean the same thing: Greater than or equal to.
The same applies to their converses: $\leq$, $\leqslant$, and <= all mean: Less than or equal to.
The differences can be attributed to the character set available, and/or the particular font that is used. In this particular book, the author specifies, on page viii, that they have used $\LaTeX$.
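For reference, a minimal LaTeX snippet producing both variants; the slanted forms come from the amssymb package, while the straight forms are available in base LaTeX:

```latex
\documentclass{article}
\usepackage{amssymb}  % provides \geqslant and \leqslant
\begin{document}
$x \geq y$ and $x \geqslant y$    % both mean: greater than or equal to
$x \leq y$ and $x \leqslant y$    % both mean: less than or equal to
\end{document}
```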
Is there a difference between these integral notations?
What does the $\prod$ symbol mean?
What does an $\oplus$-sign in the superscript mean?
What does an arrow under a sigma mean?
What is the name of this "ퟙ" notation and what does it mean?
What does a function inside brackets and a minus sign mean?
What does the notation $\mathbb Z_9^n$ mean?
Definition 15.31.1. Let $R$ be a ring and let $I \subset R$ be an ideal.
We say $I$ is a regular ideal if for every $\mathfrak p \in V(I)$ there exists a $g \in R$, $g \not\in \mathfrak p$ and a regular sequence $f_1, \ldots , f_ r \in R_ g$ such that $I_ g$ is generated by $f_1, \ldots , f_ r$.
We say $I$ is a Koszul-regular ideal if for every $\mathfrak p \in V(I)$ there exists a $g \in R$, $g \not\in \mathfrak p$ and a Koszul-regular sequence $f_1, \ldots , f_ r \in R_ g$ such that $I_ g$ is generated by $f_1, \ldots , f_ r$.
We say $I$ is a $H_1$-regular ideal if for every $\mathfrak p \in V(I)$ there exists a $g \in R$, $g \not\in \mathfrak p$ and an $H_1$-regular sequence $f_1, \ldots , f_ r \in R_ g$ such that $I_ g$ is generated by $f_1, \ldots , f_ r$.
We say $I$ is a quasi-regular ideal if for every $\mathfrak p \in V(I)$ there exists a $g \in R$, $g \not\in \mathfrak p$ and a quasi-regular sequence $f_1, \ldots , f_ r \in R_ g$ such that $I_ g$ is generated by $f_1, \ldots , f_ r$.
The subgroup $A_n$ of the symmetric group $S_n$ consisting of all even permutations. $A_n$ is a normal subgroup in $S_n$ of index 2 and order $n!/2$. The permutations of $A_n$, considered as permutations of the indices of variables $x_1,\ldots,x_n$, leave the alternating polynomial $\prod(x_i-x_j)$ invariant, hence the term "alternating group". The group $A_m$ may also be defined for infinite cardinal numbers $m$, as the subgroup of $S_m$ consisting of all even permutations. If $n>3$, the group $A_n$ is $(n-2)$-fold transitive. For any $n$, finite or infinite, except $n=4$, this group is simple; this fact plays an important role in the theory of solvability of algebraic equations by radicals.
Note that $A_5$ is the non-Abelian simple group of smallest possible order.
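The two facts above — that $|A_n| = n!/2$ and that even permutations leave $\prod_{i<j}(x_i-x_j)$ invariant — can be checked computationally in a few lines of Python; the sample values chosen for the variables are arbitrary:

```python
from itertools import permutations
from math import factorial, prod

def sign(p):
    """Sign of a permutation of 0..n-1, via its inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def alternating_group(n):
    """All even permutations of 0..n-1, i.e. the elements of A_n."""
    return [p for p in permutations(range(n)) if sign(p) == 1]

def alt_poly(x):
    """The alternating polynomial prod_{i<j} (x_i - x_j), evaluated at x."""
    return prod(x[i] - x[j] for i in range(len(x)) for j in range(i + 1, len(x)))

A4 = alternating_group(4)
x = (1, 2, 4, 8)   # arbitrary distinct values for x_1, ..., x_4
```

Each even permutation of the indices permutes the factors of the product up to an even number of sign flips, so the evaluated polynomial is unchanged; an odd permutation would negate it.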
This page was last modified on 17 April 2014, at 21:25.
Phil Dunphy, a real estate agent, is considering whether he should list an unusual $551,061 house for sale. If he lists it, he will need to spend $5,233 in advertising, staging, and fresh cookies. The current owner has given Phil 6 months to sell the house. If he sells it, he will receive a commission of $21,449. If he is unable to sell the house, he will lose the listing and his expenses. Phil estimates the probability of selling this house in 6 months to be 42%. What is the expected profit on this listing?
Net $= 0.42 \times 21449 - \text{expenses}$.
Phil is considering whether he should list a house for sale.
If he lists it, he will need to spend 5,233.
The current owner has given Phil 6 months to sell the house.
If he sells it, he will receive a commission of 21,449.
If he is unable to sell the house, he will lose the listing and his expenses.
Phil estimates the probability of selling this house in 6 months to be 42%.
What is the expected profit on this listing?
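Putting the numbers together: the expenses are incurred whether or not the house sells, so only the commission is weighted by the 42% probability (the $551,061 listing price never enters the calculation):

```python
p_sell = 0.42
commission = 21_449
expenses = 5_233

# Expenses are paid in both outcomes; the commission arrives only with probability p_sell.
expected_profit = p_sell * commission - expenses

# Equivalently, weighting the two outcomes explicitly:
expected_profit_alt = p_sell * (commission - expenses) + (1 - p_sell) * (-expenses)

print(round(expected_profit, 2))   # 3775.58
```

Both formulations agree because the expense term appears in every outcome.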
No polynomial algorithms are known for finding the coefficients of the characteristic polynomial and characteristic equation of a matrix in max-algebra. The following are proved: (1) The task of finding the max-algebraic characteristic polynomial for permutation matrices encoded using the lengths of their constituent cycles is NP-complete. (2) The task of finding the lowest order finite term of the max-algebraic characteristic polynomial for a $\lbrace 0,-\infty \rbrace $ matrix can be converted to the assignment problem. (3) The task of finding the max-algebraic characteristic equation of a $\lbrace 0,-\infty \rbrace $ matrix can be converted to that of finding the conventional characteristic equation for a $\lbrace 0,1\rbrace $ matrix and thus it is solvable in polynomial time.
The goal of this challenge is to produce a function of n which computes the number of ways to partition the n x 1 grid into triangles where all of the vertices of the triangles are on grid points.
For example, there are 14 ways to partition the 2 x 1 grid, so f(2) = 14 via the following partitions where the partitions have 2, 2, 2, 2, 4, and 2 distinct orientations respectively.
Port of @Bubbler's Jelly answer.
Very slow due to the permutations builtin.
Try it online or verify the first four inputs.
On a \$(n+1) \times (n+1)\$ chessboard, how many ways are there for a rook to go from \$(0,0)\$ to \$(n,n)\$ by just moving right \$+(1,0)\$ or up \$+(0,1)\$?
Basically you have the top and the bottom line of the \$1 \times n\$ grid. Now you have to fill in the non-horizontal lines. Each triangle must have two non-horizontal lines. Whether one of its sides is part of the top or the bottom line corresponds to the direction and length you'd go in the rooks problem. This is OEIS A051708. As an illustration of this correspondence consider the following examples. Here the top line corresponds to up-moves, while the bottom line corresponds to right-moves.
Thanks @PeterTaylor for -6 bytes and @PostLeftGarfHunter for -2 bytes!
A fairly direct implementation that recurses over 2 variables.
Using flawr's rook move interpretation, a%b is the number of paths that get the rook from (a,b) to (0,0), using only moves that decrease a coordinate. The first move either decreases a or decreases b, keeping the other the same, hence the recursive formula.
We can avoid the repetition in map(a%)[0..b-1]++map(b%)[0..a-1] by noting that the two halves are the same with a and b swapped. The auxiliary call a?b counts the paths where the first move decreases a, and so b?a counts those where the first move decreases b. These are in general different, and they add to a%b.
The summation in a?b can also be written as a list comprehension a?b=sum[a%i|i<-[0..b-1]].
Finally, we get rid of % and just write the recursion in terms of ? by replacing a%i with a?i+i?a in the recursive call.
The new base case causes this ? to give outputs double that of the ? in the 49-byte version, since with 0?0=1, we would have 0%0=0?0+0?0=2. This lets us define f n=n?n without the halving that we'd otherwise need to do.
This uses Bubbler's approach of summing over permutations of n 0s and n 1s.
-1 byte based on Peter Taylor's comment.
Uses flawr's illustration directly, instead of the resulting formula.
Ø.xŒ!QŒɠ€'§2*S Main link (monad). Input: positive integer N.
Take every possible route on a square grid. The number of ways to move L units in one direction as a rook is 2**(L-1). Apply this to every route and sum the number of ways to traverse each route.
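A transparent (if slow) pure-Python rendering of this counting scheme: enumerate every distinct route of n rights and n ups, and weight each maximal run of length L by 2**(L-1):

```python
from itertools import permutations

def f(n):
    """Number of triangle partitions of the n x 1 grid, via weighted unit-step routes."""
    total = 0
    for route in set(permutations([0] * n + [1] * n)):  # 0 = right, 1 = up
        ways, run = 1, 1
        for prev, cur in zip(route, route[1:]):
            if cur == prev:
                run += 1                 # the current straight run continues
            else:
                ways *= 2 ** (run - 1)   # close a maximal run of length `run`
                run = 1
        ways *= 2 ** (run - 1)           # close the final run
        total += ways
    return total
```

For n = 2 this reproduces the 14 partitions from the question (runs RR/UU each contribute a factor of 2, giving 4 + 1 + 2 + 2 + 1 + 4 = 14).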
Try it online! Explanation: Works by calculating the number of ways to partition a trapezium of opposite side lengths m,n into triangles which all lie on integer offsets. This is simply a general case of the rectangle of size n in the question. The number of partitions is given recursively as the sums of the numbers of partitions for all sides m,0..n-1 and n,0..m-1. This is equivalent to generalised problem of the rooks, OEIS A035002. The code simply calculates the number of partitions working from 0,0 up to n,n using the previously calculated values as it goes.
Loop over the rows 0..n.
Start with an empty row.
Loop over the columns in the row 0..n.
Take the row so far and the values in the previous rows at the current column, and add the sum total to the current row. However, if there are no values at all, then substitute 1 in place of the sum.
Add the finished row to the list of rows so far.
Output the final value calculated.
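The same recurrence in a few lines of Python — P(m, k) is the partition count for the trapezium with opposite sides m and k, built bottom-up exactly as described above:

```python
def partitions(n):
    """Triangle partitions of the n x 1 grid via the trapezium recurrence."""
    P = [[0] * (n + 1) for _ in range(n + 1)]
    for m in range(n + 1):
        for k in range(n + 1):
            if m == 0 and k == 0:
                P[m][k] = 1    # empty trapezium: a single (empty) partition
            else:
                # Sum over all smaller sides (m, 0..k-1) and (0..m-1, k).
                P[m][k] = (sum(P[m][i] for i in range(k))
                           + sum(P[i][k] for i in range(m)))
    return P[n][n]
```

Since each entry only looks at earlier rows and columns, the table can be filled in a single pass, matching the loop structure of the golfed answer.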
Uses the recursive formula found by Peter Taylor and flawr.
Let $\mu \times \nu$ be the product measure of $\mu$ and $\nu$ on $M \times N$ (this is unique since $\mu$ and $\nu$ are $\sigma$-finite) and consider the space $L^1(M \times N, \mu \times \nu)$ as well.
Let $T : L^1(M, \mu) \times L^1(N, \nu) \to L^1(M \times N, \mu \times \nu)$ be defined for all $f \in L^1(M, \mu)$ and all $g \in L^1(N, \nu)$ by $T(f, g) = fg$, where $fg \in L^1(M \times N, \mu \times \nu)$ is defined for all $(m, n) \in M \times N$ by $(fg)(m, n) = f(m)g(n)$.
Since the above equality holds true for all representations of $u$, we have that $\| \sigma (u) \|_1 \leq p(u)$ for all $u \in L^1(M, \mu) \otimes L^1(N, \nu)$. So the linear map $\sigma : L^1(M, \mu) \otimes L^1(N, \nu) \to L^1(M \times N, \mu \times \nu)$ can be extended to a linear map $\sigma : L^1(M, \mu) \otimes_p L^1(N, \nu) \to L^1(M \times N, \mu \times \nu)$.
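One ingredient used implicitly here is that $T(f, g)$ indeed lies in $L^1(M \times N, \mu \times \nu)$ with controlled norm; a sketch of that norm computation, assuming Tonelli's theorem for the $\sigma$-finite product measure:

```latex
\begin{align*}
\| T(f, g) \|_{L^1(\mu \times \nu)}
  &= \int_{M \times N} |f(m)\,g(n)| \, d(\mu \times \nu)(m, n) \\
  &= \left( \int_M |f(m)| \, d\mu(m) \right)
     \left( \int_N |g(n)| \, d\nu(n) \right)
   = \| f \|_{L^1(\mu)} \, \| g \|_{L^1(\nu)}.
\end{align*}
```

In particular $T$ is a bounded bilinear map, which is what makes the induced $\sigma$ contractive for the projective norm $p$.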
An approach via holonomy groups is often a fruitful way to understand geometric structures, and there has thus been a long established theoretical pursuit to explore the geometric implications of reduced holonomy and to understand the possible holonomy groups for a given geometric structure. While in particular the holonomy groups of affine connections and (pseudo-)Riemannian metrics have been intensively studied, the appropriate notion of an holonomy reduction for general Cartan geometries has long remained elusive. For in this case, which includes the class of parabolic geometries and in particular projective and conformal structures, it is no longer geometrically evident how to interpret an holonomy group in terms of underlying geometric data. In this talk I am going to discuss a general holonomy reduction method for Cartan geometries developed in joint work with A. Cap (Univ. of Vienna) and A. R. Gover (Univ. of Auckland). The main result is the curved orbit decomposition theorem: It is shown that an holonomy reduction of a Cartan geometry gives rise to a natural decomposition of the underlying manifold into initial submanifolds, each of which carries an induced geometric structure and corresponds to a group-orbit on an homogeneous model. In particular, this provides an algebraic/geometric explanation of the singularity sets that are typically observed for parabolic holonomy reductions. The results are applied to study solutions of geometric overdetermined PDEs on parabolic geometries.
Wed Mar. 4 10:45-11:45 Ilya Kossovskiy (University of Vienna) On Poincare's "Probleme local"
In 1907 Poincare formulated his "Probleme local": for given germs of real-analytic hypersurfaces in complex two-space, find all local biholomorphic maps between them. This problem can be interpreted in the framework of Cartan (equivalence of $G$-structures). In addition, it has important applications to Several Complex Variables, since the study of mappings between domains in complex space can be reduced to that of local maps between their boundaries. Poincare's question naturally splits into the equivalence problem for two given germs, and the problem of describing local automorphisms of real-analytic hypersurfaces. Poincare made substantial progress in solving both problems, by showing first that two germs in general position are inequivalent, and second that the dimension of the symmetry group of a germ in the Levi-nondegenerate case does not exceed 8. More detailed results in the Levi-nondegenerate case were obtained in further work of Cartan, Tanaka, Chern and Moser, and Beloshapka. However, for hypersurfaces with Levi degeneracies the question of possible automorphism groups remained unsolved. In the finite type case (i.e., when a real hypersurface does not contain germs of complex hypersurfaces) the problem was solved independently by Beloshapka, Ejov and Kolar. They showed that the dimension of the group does not exceed 4. However, their method (the method of "polynomial models") is not applicable to the infinite type case, i.e., when a real hypersurface contains a complex germ. It was a long-standing problem to obtain the description in the infinite type case. In our work with Shafikov, we developed a method of solving this problem by using connections between real hypersurfaces and second order complex differential equations. The infinite type case corresponds in this way to ODEs with an isolated singularity.
By studying symmetries of an appropriate class of second order singular ODEs, we were able to classify all hypersurfaces with groups of dimension 4 and higher. It turns out that there is a gap in the possible dimensions, which looks like $\dim=\infty,8,5,4,3,2,1,0$ (this gap was conjectured by Beloshapka and is known as the Dimension Conjecture).
Recent software advances in computer algebra systems and in laptop and desktop computing power have now given mathematicians effective tools for research in computationally intensive fields such as differential geometry and its applications. In this talk, I will discuss new ways in which we can use computer algebra systems, ways which go well beyond the use of such systems for long, complex computation. First, I want to show how Maple can be used to create dynamic, interactive data-bases of mathematical or mathematical physics knowledge and how this knowledge can be made accessible to a broader audience. As a case study, we will look at the subject of exact solutions to the Einstein equations of general relativity. I will describe our efforts: to create a comprehensive data-base of known solutions; to verify the correctness of these solutions; to calculate an extensive set of properties of these solutions; and to develop an easy-to-use search engine to access this data-base. Second, I want to give a brief demonstration of how Maple can be used to create rich interactive documents for teaching advanced mathematics. Here we will look at the structure theory of simple Lie algebras. This classification was begun by W. Killing, completed by E. Cartan in his PhD thesis, and cast into its current form by E. Dynkin. This material is covered in many text books and is readily available on the web. I'll show how one can use Maple to present this same material in a dynamical new way which makes the material much easier (and more fun!) to learn.
In this workshop we will: [i] review a few basics of Maple; [ii] learn to create vector fields, differential forms and tensors; [iii] learn about the basic differential operators of Lie bracket, exterior derivative, Lie derivative and covariant derivatives; [iv] calculate the curvature tensor; [v] solve the Einstein equations for some simple metrics. If time permits, we will learn about Killing vectors and discuss ways to analyze the structure of Lie algebras. A second workshop will be given if there is sufficient interest.
It is a remarkable fact that Cartan's 1910 paper on the geometry of rank 2 distributions in 5 dimensions is still the basis for much current research in the field of geometric methods for differential equations. This paper, often referred to as the "five variables paper", is widely cited for its amazing solution to the equivalence problem for rank 2 distributions in 5 dimensions. Unfortunately, Cartan's original goal of integrating 2nd order partial differential equations in 1 dependent and 2 independent variables has been largely forgotten. I'll begin this talk with a review of the geometric theory of these PDE. Cartan's 1911 paper deals with systems of 2nd order partial differential equations in 1 dependent and 3 independent variables. In many ways, this paper is even more astonishing than the 1910 paper and surely contains a wealth of interesting and largely untouched research topics. I'll give a brief synopsis of this paper and describe some of the recent related work of K. Yamaguchi and N. Sitton. The seminar will also contain some demonstrations of the DifferentialGeometry software package.
I will report on recent work with I. Anderson and P. Nurowski in which we present three classes of conformal structures for which the equations for the Fefferman-Graham ambient metric to be Ricci-flat are linear PDEs, which we solve explicitly. These explicit solutions enable us to discuss the holonomy of the corresponding ambient metrics. Our examples include conformal pp-waves and, more importantly, conformal structures that are defined by generic rank 2 and 3 distributions in respective dimensions 5 and 6. The corresponding explicit Fefferman-Graham ambient metrics provide a class of metrics with holonomy equal to the exceptional non-compact Lie group G_2 as well as ambient metrics with holonomy contained in Spin(4,3).
I will discuss some aspects of conformal structures that are determined by (2,3,5) distributions and admit almost Einstein scales. This is joint work in progress with Travis Willse.
(Joint work with Gui-Qiang Chen, Marshall Slemrod, Dehua Wang, and Deane Yang) In this talk, I will give an outline of our new proof for the local existence of a smooth isometric embedding of a smooth 3-dimensional Riemannian manifold with nonzero Riemannian curvature tensor into 6-dimensional Euclidean space. Our proof avoids the sophisticated microlocal analysis used in earlier proofs by Bryant-Griffiths-Yang and Nakamura-Maeda; instead, it is based on a new local existence theorem for a class of nonlinear, first-order PDE systems that we call "strongly symmetric positive." These are a subclass of the symmetric positive systems, which were introduced by Friedrichs in order to study certain PDE systems that do not fall under one of the standard types (elliptic, hyperbolic, and parabolic). As in earlier proofs, we construct solutions via the Nash-Moser implicit function theorem, which requires showing that the linearization of the isometric embedding PDE system near an approximate embedding has a smooth solution that satisfies "smooth tame estimates." We accomplish this in two steps: (1) Show that the approximate embedding can be chosen so that the reduced linearized system becomes strongly symmetric positive after a carefully chosen change of variables. (2) Show that any such system has local solutions that satisfy smooth tame estimates. The main advantage of our approach is that step (2) is much more straightforward than similar results for other classes of PDE systems used in prior proofs, while step (1) requires only linear algebra. The talk will focus on the main ideas of the proof; technical details will be kept to a minimum.
In 1907 Poincare showed that two real hypersurfaces in C^2 are generically not locally biholomorphically equivalent and hence inequivalent as Cauchy-Riemann (or CR) geometries. Applying this to the boundaries of domains in C^2 showed that the Riemann Open Mapping Theorem breaks down in higher dimensions. In the early 1930s Elie Cartan resolved the equivalence problem for CR hypersurfaces by constructing their basic geometric invariants. Cartan's methods have been extended in the modern CR tractor calculus, in an effort to better understand CR invariants (and to find them all). When considering holomorphic mappings between domains in complex spaces of different dimension the notions of CR mappings and CR embeddings arise. In this talk we show that the tractor calculus also provides us with a rich invariant calculus in the setting of CR embeddings.
Abstract: We study the problem of realizing a given graph as an $\alpha$-complex of a set of points in the plane. We study the realizability problem for trees and $2$-trees. In the case of $2$-trees, we confine our attention to the realizability of graphs as the $\alpha$-complex minus faces of dimension two; in other words, realizability of the graph in terms of the $1$-skeleton of the $\alpha$-complex of the point set. We obtain both positive (realizability) and negative (non-realizability) results.
Let $f=(f_1,\dots,f_n)$ be a system of $n$ complex homogeneous polynomials in $n$ variables of degree $d$. We call $\lambda \in \mathbb{C}$ an eigenvalue of $f$ if there exists a nonzero $v \in \mathbb{C}^n$ with $f(v)=\lambda v$, generalizing the case of eigenvalues of matrices ($d=1$). We derive the distribution of $\lambda$ when the $f_i$ are independently chosen at random according to the unitarily invariant Weyl distribution, and determine the limit distribution for $n\to\infty$.
I just want somebody to help me verify whether I'm using the Newton-Raphson method correctly by checking the result for an equation.
I'm trying to find the zero of f(x) with initial guess $x_0 = 1.5$, and by applying the method TWICE, I got the result 2.0466.
Could somebody please help me verify my answer is correct?
Your answer is almost correct; you have a rounding error. Try not to round the previous approximations: use the "Ans" key on your calculator to store the exact answer, and only round at the very end. The correct answer is $2.0467$ to four decimal places.
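The f(x) from the question isn't shown, so here is a minimal sketch of the iteration using a stand-in function, f(x) = x² − 2 (positive zero √2), just to illustrate the method and why full precision should be carried between steps:

```python
def newton(f, df, x0, steps):
    """Apply the Newton-Raphson update x <- x - f(x)/f'(x) a fixed number of times."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

# Stand-in example (the f(x) from the question is not given): f(x) = x^2 - 2,
# whose positive zero is sqrt(2) ~ 1.41421356.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

x1 = newton(f, df, 1.5, 1)  # 17/12 = 1.41666..., after one step
x2 = newton(f, df, 1.5, 2)  # 1.41421568..., full precision carried between steps

print(x1, x2)
```

Rounding x1 to a few decimals before the second iteration is exactly the kind of error the answer warns about; keeping the full value between iterations is what the "Ans" key achieves.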
How to correctly apply Newton-Raphson method to Backward Euler method?
Lemma 5.3.4. Let $X \to Z$ and $Y \to Z$ be continuous maps of topological spaces. If $Z$ is Hausdorff, then $X \times _ Z Y$ is closed in $X \times Y$.
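For context, the usual one-line argument behind this lemma (my sketch, not part of the Stacks Project text): the fibre product is the preimage of the diagonal, and the diagonal is closed precisely because $Z$ is Hausdorff.

```latex
X \times_Z Y = (f \times g)^{-1}(\Delta_Z),
\qquad \Delta_Z = \{(z,z) : z \in Z\} \subseteq Z \times Z.
% Z Hausdorff  \implies  \Delta_Z is closed in Z \times Z,
% and f \times g : X \times Y \to Z \times Z is continuous,
% so the preimage X \times_Z Y is closed in X \times Y.
```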
Abstract: The main goal of this paper is to present a methodology for designing interval observers for discrete-time linear switched systems affected by bounded but unknown disturbances. Two design techniques are presented. The first requires that the observation error dynamics be nonnegative, while the second relaxes this restrictive requirement by a change of coordinates. Furthermore, ideas for using the $H_\infty$ formalism to compute optimal gains are proposed. Finally, illustrative examples highlight the performance of our methodology.
Is randomness lost? That is, is the computational cost to guess the ciphertext from secretKey ⊕ (secretKey << 1) lower than the cost to guess secretKey?
While bmm6o's answer is correct, I want to give another angle onto things.
If $\ll$ denotes logical shift (i.e. fill up on the right with 0 instead of what was pushed out), then simply replace the top-right $1$ with $0$.
As it turns out, if you have the top-right $1$, the determinant of the matrix is $0$ which also means, the transformation is not a permutation, as there's no unique inverse function!
However, if we use the version with a logical shift instead, we always get a determinant of $1$ and thus the confirmation that this indeed describes a permutation (and if we want to, we can also invert it).
To see the above assertions, let's call the matrix $A$ and the value of the top-right bit $b$. Note $A-I$ is the companion matrix of the polynomial $x^n-b$, which means the characteristic polynomial of $A-I$ is $x^n-b$. Now the characteristic polynomial of a matrix $B$ is equal to $\det(B-xI)$, which in our case is $\det((A-I)-xI)=\det(A-(x+1)I)$, which for $x=-1$ yields $\det(A)=(-1)^n-b$. However, in $\mathbb F_2$, $-1$ is equal to $1$, meaning $\det(A)=1-b$, which implies that the key transformation is invertible (and thus entropy-preserving) iff a logical instead of a cyclic shift is used, for all sizes of the input and output! Credit goes to Will Jagy for the inspiration (and for a shorter, but more mathy explanation).
So there is no loss of entropy.
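A quick brute-force check of this conclusion for a toy 8-bit key (the width is an arbitrary choice for illustration): the logical-shift map x ↦ x ⊕ (x << 1) is a permutation, while the cyclic-rotation variant collapses x and its complement onto the same output.

```python
N = 8                      # toy key width; any width shows the same pattern
MASK = (1 << N) - 1

def logical_shift_xor(x):
    # x XOR (x << 1): the shifted-out top bit is simply discarded
    return x ^ ((x << 1) & MASK)

def cyclic_shift_xor(x):
    # x XOR rotl(x, 1): the top bit wraps around into position 0
    return x ^ (((x << 1) | (x >> (N - 1))) & MASK)

images_logical = {logical_shift_xor(x) for x in range(1 << N)}
images_cyclic = {cyclic_shift_xor(x) for x in range(1 << N)}

print(len(images_logical))  # 256: a bijection, so no entropy is lost
print(len(images_cyclic))   # 128: a 2-to-1 map, so exactly one bit is lost
```

The cyclic version is 2-to-1 because x and its bitwise complement always map to the same value, matching the determinant argument above.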
If your secret key is cryptographically secure, you don't really gain anything by applying such a shift-XOR — but indeed, there is no entropy loss.
What's a bit unclear to me is why you would do that, or which (cryptographic) problem are you trying to solve by doing so.
In the end, you're merely applying a cryptographically insecure permutation on a secret… with no entropy loss but no real cryptographic gain either.
Depending on the specific scenario you might have in mind, let me drop a heads-up that using several outputs/derivations of a secret permuted this way can and will introduce attack vectors.
If by 'truly random secret key' you mean a sequence of random bits; and if you then XOR (exclusive or) those random bits with an equal amount of cipertext that was derived somehow from that secret key, I would say the result is also random.
This appears to be an example of a One Time Pad, where the ciphertext is the message and the 'truly random secret key' is the key. Given a random key the same length as the message, the output of a OTP is indistinguishable from a random value.
Question: Are there examples of interesting classes of graphs whose treewidth is not bounded by a constant, but by a low-growing function?
Are there well known graph classes with treewidth $O(\log\log n)$?
Are there well known graph classes with treewidth $O(\log n)$?
I would also be interested in classes of graphs with treewidth $O(\log^k n)$ or $O(\log\log\cdots\log n)$, where the logarithm is iterated a constant number of times.
Note: Of course it is easy to cook up artificial families of graphs with a given treewidth, like the family of $\;O(\log n)\times n\;$ grids. So I'm primarily looking for families of graphs which have been studied in other branches of graph theory and which happen to have treewidth $O(\log n)$ or $O(\log\log n)$, but non-constant treewidth.
I believe that the universal graphs for trees constructed by Chung and Graham 1983 have treewidth $\Theta(\log n)$. Or for a slightly simpler but similar example consider the transitive closures of balanced binary trees.
However, there's a negative result here, too. All the examples you give of interesting graph families are minor-closed, or very closely related to minor-closed families. But a minor-closed graph family either contains all planar graphs (and hence has maximum treewidth $\Theta(\sqrt n)$) or has a forbidden planar minor (and hence has bounded treewidth).
However, despite not having a horizontal asymptote, real logarithmic functions are slowly varying at infinity. A real-valued function $f$ is slowly varying at infinity if for all $a>0$, $f(ax)/f(x) \to 1$ as $x \to \infty$. A 'horizontal asymptote' is a horizontal line that another curve gets arbitrarily close to as $x$ approaches $+\infty$ or $-\infty$.
The Graph of a Logarithmic Function. Before we look at the graph of a logarithmic function, let's discuss the relationship between the log function and the exponential function: the output of a logarithmic function is the input of an exponential function, and the input of a logarithmic function is the output of an exponential function. Separately, a function can have at most two oblique asymptotes, and only certain kinds of functions are expected to have an oblique asymptote at all. For instance, polynomials of degree 2 or higher do not have asymptotes of any kind.
5 Logarithmic Functions. The equations $y = \log_a x$ and $x = a^y$ are equivalent: the first equation is in logarithmic form and the second is in exponential form. Another way of finding a horizontal asymptote of a rational function: divide $N(x)$ by $D(x)$. If the quotient is constant, then $y =$ this constant is the equation of a horizontal asymptote.
I have found a way to graph asymptotes of a rational function, $f(x)/g(x)$, automatically. It can also detect the holes of a function, which are where any vertical lines "intersect" the function. Asymptotes are often found in rational functions, exponential functions and logarithmic functions. An asymptote parallel to the x-axis is known as a horizontal asymptote.
Do logarithmic functions have vertical asymptotes?
In analytic geometry, an asymptote (/ ˈ æ s ɪ m p t oʊ t /) of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity.
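Both claims about the logarithm (unbounded growth, hence no horizontal asymptote, yet slow variation in the sense that f(ax)/f(x) → 1 as x → ∞) are easy to check numerically:

```python
import math

# log grows without bound: no horizontal line stays arbitrarily close to it
heights = [math.log(10.0 ** k) for k in (2, 4, 8, 16)]
assert heights == sorted(heights)  # strictly increasing, without levelling off

# ...but it is slowly varying: scaling the input barely changes the output
x = 1e300
for a in (0.5, 2.0, 100.0):
    print(a, math.log(a * x) / math.log(x))  # each ratio is within 1% of 1
```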
What should be the title for this problem?
The equation $\sin\theta = k$ is satisfied by two values of $\theta$ of the form $\alpha$ & $\pi - \alpha$ in the interval $0 \leq \theta \leq 2\pi$ .
Ok. But when I read the result for $\cos\theta = k$, the interval was given as $-\pi < \theta \leq \pi$.
Now, I want to know the cause of the different intervals. Can't they be the same? I want to post this question but can't come up with a proper title. Can anyone please suggest a good title that portrays my problem?
So far, I have figured out that the lower limit on $z$ is $0$, and the lower limit on $y$ is also $0$. How would one go about calculating the upper limits on $z$ and $y$, as well as the limits for $x$?
I believe there is another question similar to this, but it does not explain how to calculate the plane boundaries, which is what I'm having trouble with.
Sorry for the crappy graph.
The body is located between $0$ and $1$ in the $x$-direction, hence $$ \int_0^1\ldots\ dx. $$ Now for each $x\in[0,1]$ you get the section of the body with the plane at $x$ that is parallel to the $yz$-plane. For $x=1$ you get the largest triangle, for $x=0$ you get one point (the origin), and for $x$ somewhere in between you get a smaller proportional (similar) triangle (red in the picture below). Now the red triangle is located in the $y$-direction between $0$ and some largest possible $y(x)$ - these are the integration limits for $dy$. Finally, for each $y$ in the red triangle you can choose the integration limits for $z$ - from $0$ to the largest possible $z$ in the green segment. They may depend on both $x$ and $y$.
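As a numerical cross-check of this slicing scheme for one concrete body consistent with the description (an assumed example, since the original solid isn't fully specified here): take the tetrahedron whose slice at $x$ is the triangle $0 \le y \le x$, $0 \le z \le x - y$, shrinking to a point at $x=0$ and largest at $x=1$. Its volume is $\int_0^1 \int_0^x \int_0^{x-y} dz\, dy\, dx = 1/6$.

```python
# Midpoint Riemann sum for V = integral_0^1 integral_0^x integral_0^{x-y} dz dy dx.
# The solid is an assumed concrete example: slices at fixed x are triangles
# 0 <= y <= x, 0 <= z <= x - y, matching "a point at x=0, largest at x=1".
def volume(n=200):
    hx = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * hx       # midpoint of the i-th x-slab
        hy = x / n               # at this slice, y runs from 0 to x
        inner = 0.0
        for j in range(n):
            y = (j + 0.5) * hy   # midpoint in y
            inner += (x - y) * hy    # innermost integral over z gives x - y
        total += inner * hx
    return total

print(volume())  # ~0.166666..., i.e. 1/6
```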
There are two families of non-BPS bi-spinors in the perturbative spectrum of the nine dimensional heterotic string charged under the gauge group $SO(16)\times SO(16)$. The relation between these perturbative non-BPS states and certain non-perturbative non-BPS D-brane states of the dual type I$^\prime$ theory is exhibited. The relevant branes include a $\mathbb{Z}_2$ charged non-BPS D-string, and a bound state of such a D-string with a fundamental string. The domains of stability of these states as well as their decay products in both theories are determined and shown to agree with the duality map.
In the forward kinematics problem, the transformation describing the position and orientation of a tool, or end effector, is determined by known joint variables. Joint variables are associated with a particular axis, or joint, and are denoted by $q_i$. The two common types of joints used in robot manipulators are revolute and prismatic joints. For a revolute joint, $q_i$ is the angle of rotation ($\theta_i$), while for a prismatic joint, $q_i$ is the joint displacement ($d_i$).
DH1: The z-axis will lie along the joint. According to Spong, axis $z_{i-1}$ lies along joint $i$. For example, the axis $x_1$ is perpendicular to the axis $z_0$.
DH2: The x-axes lie along the common normals between the joint axes. For example, the axis $x_1$ intersects the axis $z_0$.
where $R^0_1$ is the orientation of frame 1 with respect to frame 0 (the base frame), and $o^0_1$ is the position of frame 1 with respect to the base frame. The matrix $A$ can be rewritten using the corresponding transformations for the rotations and translations of the frame.
with D&H parameters assigned according to the Spong et al. convention (frame $i$ on joint $i-1$). The transformation $T^{i-1}_i$ is the product of the rotation and translation matrices of matrix $A_i$. The upper-left 3x3 block of the transformation corresponds to the orientation of the end effector with respect to the base frame; its three columns give the directions of the x-, y-, and z-axes, from left to right. The 3x1 vector in the upper right corner gives the location of the end effector with respect to the base frame.
Label all joints $i = 1$ to $n$.
Assign z-axes for joints $0$ to $n-1$ ($z_0$ along joint 1, etc.).
Assign $x_0$ normal to $z_0$.
Assign $x_1$ through $x_{n-1}$, which lie along the common normals between $z_0$ and $z_{n-1}$.
Establish $y_1$ to $y_{n-1}$ to complete each frame.
Assign $z_n$ freely (but carefully) and define $x_n$.
Create the table of DH link parameters, defining $\alpha_i$, $a_i$, $d_i$, and $\theta_i$ for each joint.
Create $T^{i-1}_i$ for $i = 1$ to $n$.
Solve $T^0_n = T^0_1 \, T^1_2 \cdots T^{n-1}_n$.
Show the position as the last column of $T^0_n$ and the orientation as the first three columns of $T^0_n$.
For a revolute joint, the joint is the axis of rotation. For a prismatic joint, the joint is on the motion path of the joint.
Note: * denotes variable joint parameters. For each link, only one unknown joint variable can exist for each joint axis.
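As a concrete sketch of the last steps above, here is a minimal pure-Python implementation of the standard DH transform $A_i = \mathrm{Rot}_{z,\theta}\,\mathrm{Trans}_{z,d}\,\mathrm{Trans}_{x,a}\,\mathrm{Rot}_{x,\alpha}$, applied to a hypothetical planar two-link arm (unit link lengths and all other parameters zero are assumptions for the example):

```python
from math import cos, sin, pi

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform A_i built from the four DH parameters."""
    ct, st = cos(theta), sin(theta)
    ca, sa = cos(alpha), sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Hypothetical 2-link planar arm: theta1 = 90 deg, theta2 = 0, a1 = a2 = 1.
T = matmul4(dh_transform(pi / 2, 0.0, 1.0, 0.0),
            dh_transform(0.0, 0.0, 1.0, 0.0))

position = [row[3] for row in T[:3]]   # last column: end-effector position
print(position)  # ~[0, 2, 0]: both links rotated to point along the world y-axis
```

The upper-left 3x3 block of T gives the end-effector orientation, exactly as described in the text above.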
1. Spong, et al. Robot Modeling and Control.
Abstract: Using a single functional form which is able to represent several different classes of statistical distributions, we introduce a preliminary study of the ferromagnetic Ising model on the cubic lattices under the influence of a non-Gaussian local external magnetic field. Specifically, depending on the value of the tail parameter, $\tau$ ($\tau < 3$), we assign a quenched random field that can have a platykurtic (sub-Gaussian) or leptokurtic (fat-tailed) form. For $\tau< 5/3$, such distributions have finite standard deviation and they are either the Student-$t$ ($1< \tau< 5/3$) or the $r$-distribution ($\tau< 1$) extended to all plausible real degrees of freedom, with the Gaussian being retrieved in the limit $\tau \rightarrow 1$. Otherwise, the distribution has the same asymptotic power-law behaviour as the $\alpha$-stable Lévy distribution with $\alpha = (3 - \tau)/(\tau - 1)$. The uniform distribution is achieved in the limit $\tau \rightarrow \infty$. Our results support the existence of ferromagnetic order at finite temperatures for all the studied values of $\tau$, with some mean-field predictions surviving in the three-dimensional case.
Note that for every value of $x$, the equation $y=x^3$ gives only one value of $y$. This means that every $x$ is paired with only one value of $y$. Thus, the given relation defines $y$ as a function of $x$. The value of $x$ can be any real number, so the domain is $(-\infty, +\infty)$. The value of $x^3$ can be any real number. Thus, the range is $(-\infty, +\infty)$.
We present results from an analysis of stellar population parameters for 7132 galaxies in the 6dFGS Fundamental Plane (FP) sample. We bin the galaxies along the axes, $v_1$, $v_2$, and $v_3$, of the tri-variate Gaussian to which we have fit the galaxy distribution in effective radius, surface brightness, and central velocity dispersion (FP space), and compute median values of stellar age, [Fe/H], [Z/H], and [$\alpha$/Fe]. We determine the directions of the vectors in FP space along which each of the binned stellar population parameters vary most strongly. In contrast to previous work, we find stellar population trends not just with velocity dispersion and FP residual, but with radius and surface brightness as well. The most remarkable finding is that the stellar population parameters vary through the plane ($v_1$ direction) and across the plane ($v_3$ direction), but show no variation at all along the plane ($v_2$ direction). The $v_2$ direction in FP space roughly corresponds to `luminosity density'. We interpret a galaxy's position along this vector as being closely tied to its merger history, such that early-type galaxies with lower luminosity density are more likely to have undergone major mergers. This conclusion is reinforced by an examination of the simulations of Kobayashi (2005), which show clear trends of merger history with $v_2$.
Hi! Is it true that all irreducible unitary representations of a residually finite group are finite dimensional?
Actually I suspect that it is not, but cannot find any example.
As Mark Sapir pointed out, this is of course false. There are tons of infinite dimensional irreducible representations of residually finite groups. In fact, I think that any finitely generated group for which the answer is yes, is virtually abelian.
However, you could also ask whether all irreducible representations of a given residually finite group are weakly contained in finite dimensional representations. This is true for $\mathbb F_2$ but, for example, not for $SL(3,\mathbb Z)$. For the group $\mathbb F_2 \times \mathbb F_2$, this problem is open and equivalent to Connes' embedding problem.
We've shown that deep linear networks — as implemented using floating-point arithmetic — are not actually linear and can perform nonlinear computation. We used evolution strategies to find parameters in linear networks that exploit this trait, letting us solve non-trivial problems.
Neural networks consist of stacks of a linear layer followed by a nonlinearity like tanh or rectified linear unit. Without the nonlinearity, consecutive linear layers would be in theory mathematically equivalent to a single linear layer. So it's a surprise that floating point arithmetic is nonlinear enough to yield trainable deep networks.
Numbers used by computers aren't perfect mathematical objects, but approximate representations using finite numbers of bits. Floating point numbers are commonly used by computers to represent mathematical objects. Each floating point number is represented by a combination of a fraction and an exponent. In the IEEE's float32 standard, 23 bits are used for the fraction and 8 for the exponent, and one for the sign.
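The layout is easy to inspect from Python with the struct module (a float64 value is rounded to its 4-byte float32 encoding here); the helper below prints the sign, exponent, and fraction fields separated by '|':

```python
import struct

def float32_fields(x):
    """Return the float32 bit fields of x as 'sign|exponent|fraction'."""
    (u,) = struct.unpack(">I", struct.pack(">f", x))
    return f"{u >> 31}|{(u >> 23) & 0xFF:08b}|{u & 0x7FFFFF:023b}"

print(float32_fields(1.0))          # 0|01111111|00000000000000000000000
print(float32_fields(2.0 ** -126))  # 0|00000001|00000000000000000000000  (smallest normal)
```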
As a consequence of these conventions and the binary format used, the smallest normal non-zero number (in binary) is 1.0..0 x 2^-126, which we refer to as min going forward. However, the next representable number is 1.0..01 x 2^-126, which we can write as min + 0.0..01 x 2^-126. The gap between these two adjacent numbers is smaller by a factor of 2^23 than the gap between 0 and min. In float32, when numbers are smaller than the smallest representable number they get mapped to zero. Due to this 'underflow', all computation involving floating point numbers becomes nonlinear around zero.
An exception to these restrictions is denormal numbers, which can be disabled on some computing hardware. While the GPU and cuBLAS have denormals enabled by default, TensorFlow builds all its primitives with denormals off (with the ftz=true flag set). This means that any non-matrix multiply operation written in TensorFlow has an implicit non-linearity following it (provided the scale of computation is near 1e-38).
So, while in general the difference between any "mathematical" number and their normal float representation is small, around zero there is a large gap and the approximation error can be very significant.
This can lead to some odd effects where the familiar rules of mathematics stop applying. For instance, $(a + b) \times c$ becomes unequal to $a \times c + b \times c$.
For example if you set $a = 0.4 \times min$, $b = 0.5 \times min$, and $c = 1 / min$.
Then: $(a+b) \times c = (0.4 \times min + 0.5 \times min) \times 1 / min = (0 + 0) \times 1 / min = 0$.
However: $(a \times c) + (b \times c) = 0.4 \times min \times 1/min + 0.5 \times min \times 1/min = 0.9$.
In another example, we can set $a = 2.5 \times min$, $b = -1.6 \times min$, and $c = 1 \times min$.
Then: $(a+b) + c = (0) + 1 \times min = min$.
However: $(b+c) + a = 0 + 2.5 \times min = 2.5 \times min$.
At this smallest scale the fundamental addition operation has become nonlinear!
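Python's own float64 arithmetic keeps denormals enabled, so to reproduce these examples we can emulate flush-to-zero by hand, zeroing any subnormal result after each operation (mirroring the ftz=true behaviour discussed below for TensorFlow):

```python
MIN = 2.0 ** -126   # smallest normal float32 magnitude ("min" in the text)

def ftz(x):
    """Flush-to-zero: any nonzero result with magnitude below MIN becomes 0."""
    return 0.0 if 0.0 < abs(x) < MIN else x

a, b, c = 0.4 * MIN, 0.5 * MIN, 1.0 / MIN
lhs = ftz(ftz(a + b) * c)           # (a + b) flushes to 0, so the product is 0
rhs = ftz(ftz(a * c) + ftz(b * c))  # 0.4 + 0.5 = 0.9: distributivity fails

x, y, z = 2.5 * MIN, -1.6 * MIN, MIN
assoc1 = ftz(ftz(x + y) + z)        # 0 + min        -> min
assoc2 = ftz(ftz(y + z) + x)        # 0 + 2.5 * min  -> 2.5 * min: associativity fails

print(lhs, rhs)                     # 0.0 0.9
print(assoc1 == z, assoc2 == x)     # True True
```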
We wanted to know if this inherent nonlinearity could be exploited as a computational nonlinearity, as this would let deep linear networks perform nonlinear computations. The challenge is that modern differentiation libraries are blind to these nonlinearities at the smallest scale. As such, it would be difficult or impossible to train a neural network to exploit them via backpropagation.
Beyond MNIST, we think other interesting experiments could be extending this work to recurrent neural networks, or to exploit nonlinear computation to improve complex machine learning tasks like language modeling and translation. We're excited to explore this capability with our fellow researchers. | CommonCrawl |
However, I cannot seem to get a grasp of how these functions work and what logic I am supposed to place in each function. For example: where do I place the logic for my activation function (sigmoid; originally I thought this was the rates function)? How do I use the step_math function to feed back the post-spike information? What does max_rates_intercepts do? How does the LIF model described in the code work? It seems I lack some foundation, as I am new to Nengo and have only studied the examples in the user guide and this code before attempting to program this neural model. Is there more material I can study to get a better understanding, preferably video tutorials or a step-by-step explanation (more in-depth than just the code) of how to implement a neural model?
I have posted other topics here before for a better understanding of this and other material and have understood it but I think I need a further understanding.
Could you give some more details on the model? For instance, how is the filter described (e.g., as a transfer function, a state-space model, or some convolutional kernel)? What is the form of the static input-output response curve (e.g., a sigmoid)? And does the dice-roll represent a Poisson generator? Some of these details might matter in providing guidance as to the easiest route.
Also, what do you plan on doing with this model? Do you intend to create populations of them, with each neuron having a different encoding filter? And do you need to learn the optimal weights to have the population represent some particular vector over time?
The g(t) representing the membrane potential of the postsynaptic neuron will be determined by the past spiking activity of the pre-synaptic neuron (X) and of the post-synaptic neuron (Y; this post-synaptic spiking history is shown as the post-spike in the diagram which feeds back), as well as the bias of Y (B), where the activities X and Y are multiplied by the identity matrix (I) and the respective learnable weights (W). In other words, g(t) = (X * I * W) + (Y * I * W) + B.
In order to determine u(t), g(t) will be mapped to the sigmoid activation function.
u(t) will then be weighted by a probability to generate a spike via a Bernoulli random variable (which I was going to implement as a random number generator). The output of this processing will be output of this model.
I am having a hard time translating this into something that can be represented in Nengo. If you need any clarifications, please let me know. I am still in the process of figuring this out and any suggestions or hints would be greatly appreciated.
Bullet points 2 and 3 should be achievable by a fairly simple custom neuron model that defines the step_math function.
It looks like a filter (i.e., convolution over time) is being applied to the feedback post-spike. What is this filter? A lowpass (i.e., exponential decay)?
What is the filter driving the postsynaptic neuron (i.e., the squiggly line in the left-most box)? Is this filter the same for each postsynaptic neuron?
Bullet point 1 might be a little trickier as you may need to implement a custom unsupervised learning rule on full-weight matrices? An example of how to do this can be found below, although there is a bunch of boilerplate in build_bcm and BCM that is also needed. Might make sense to get the GLM working first and then add learning next. We can help you with some of these details.
The filter that is applied in order to produce g(t), the filter seen in the far-left box, is the synaptic kernel filter, alpha. This filter is applied to ALL the pre-synaptic neurons that are connected to the post-synaptic neuron which produces our spike randomly (Bernoulli random variable). I generalized alpha before by saying g(t) = (X * I * W) + (Y * I * W) + B when it is really g(t) = (alpha * X) + (beta * Y) + B (beta is the other filter you asked me to define, which I will discuss in the next bullet point). Alpha, or the synaptic kernel filter, is defined as the matrix multiplication between matrix A and vector W. Matrix A is the identity matrix, while vector W holds the learning weights of each respective pre-synaptic neuron, with length equal to the number of basis functions (in this case the basis function being a raised cosine).
Similarly, beta, or the feedback kernel filter, is the filter that lies between the post-synaptic neuron's output and the input to the post-synaptic neuron's activation function. This filter corresponds to the box labeled post-spike at the bottom of the diagram. It is defined as the matrix multiplication between matrix B and the vector v, where B is the identity matrix while vector v holds the learning weight of the respective post-synaptic neuron, with length equal to the number of basis functions (in this case the basis function being a raised cosine). This filter is applied to each post-synaptic neuron individually in the neural network, but is applied to all of them.
If you need any clarifications, please let me know. I am still in the process of figuring this out and any suggestions or hints would be greatly appreciated. I also appreciate the assistance you have given me thus far.
The filter that is applied in order to produce g(t), the filter that is seen on the far left box, is synaptic kernel filter, alpha.
I'm confused as to the form of the filter again here. Keep in mind I'm referring to the temporal filtering of the spikes. Is this what you mean by raised-cosine: https://en.wikipedia.org/wiki/Raised-cosine_filter? If so how would you make this filter causal? In its basic form, this is a non-causal filter that requires knowledge of the future.
Section 2, labeled Spiking Neural Networks with GLM Neurons, describes the design I am referring to; you can also see a similar figure for the model there. Towards the end of the section there is a description of the SK and FK filters, which I am not sure I described clearly enough for your understanding. However, the paper's description is exactly what I am trying to achieve.
Okay I think I'm getting this now. So both the input and recurrent filters are a linear combination of the basis functions from https://www.nature.com/articles/nature07140 where the coefficients of the linear combinations are specific to each neuron and learned by some learning rule.
Do you know how far back in time these filters need to go? Figure 3 shows just 8 time-points – is this how many are needed? Maintaining 8 steps in a rolling window (queue) wouldn't be so difficult. This could be done by adding an (n, 8)-dimensional state matrix to the neuron model, that rolls the second axis each time-step.
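A rolling window like that takes only a couple of lines of numpy; `push` below is a hypothetical helper, not a Nengo API:

```python
import numpy as np

n, T = 4, 8                      # neurons, history length
window = np.zeros((n, T))        # spike history, most recent column last

def push(window, spikes):
    """Shift the history one step and append the newest spike vector, in place."""
    window[:, :-1] = window[:, 1:]
    window[:, -1] = spikes
    return window

window = push(window, np.ones(n))   # after one step, only the last column is set
```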
The part that sounds difficult to me is that the parameters of your filters ($\alpha$ and $\beta$) need to change on-the-fly, and, in order to apply the learning update, the model must have access to the entire presynaptic activity vector?
Do you have any reference code in any language for this model? I think the most straightforward thing to do at this stage is to just write this as a plain Python loop using numpy vectors and matrices. This would help ensure you know exactly what you want it to do, and I don't expect it would be too many lines of code. Or is there some aspect of Nengo that you feel would be useful for next step(s)? In the mean time I'll see if I can get anyone else on this to help.
I would like to leave the number of time points used by the filters as a parameter of the learning model; however, if this becomes too complex, I think leaving it at 8 as the example shows will be fine. I believe that is correct: the model must have access to the entire presynaptic activity vector. Additionally, alpha and beta were the main conceptual issues I had when trying to implement this in Nengo's learning model — I couldn't derive a flow of logic, especially since I couldn't find a way to access the entire presynaptic activity vector and pass the weights as parameters to the step_math function. In particular, I didn't know how often the function's parameters get updated; I assumed every simulation timestep.
I have not tried developing any reference code in any language for this model; I was told that Nengo was the best place to start, which seemed logical at the time. I will try to develop the reference code as soon as possible. The aspect of the code that will be introduced later is the learning & decoding discussed later in the research paper. The backend for the simulation will be Nengo Loihi. I am not sure whether this changes anything; however, from reading the documentation I understood that using the Loihi backend would not change my implementation if I developed it with the normal Nengo backend first.
Once again, I appreciate all your assistance on my current issue with developing this learning model.
Additionally, alpha and beta were the main conceptual issues I had when trying to implement this in Nengo's learning model — I couldn't derive a flow of logic, especially since I couldn't find a way to access the entire presynaptic activity vector and pass the weights as parameters to the step_math function. In particular, I didn't know how often the function's parameters get updated; I assumed every simulation timestep.
Yes this sounds like the main challenge to me.
The backend for the simulation will be Nengo Loihi. I am not sure whether this changes anything; however, from reading the documentation I understood that using the Loihi backend would not change my implementation if I developed it with the normal Nengo backend first.
With the reference Nengo backend, the builder rules specify how to compile each object onto a CPU. These rules must be redefined for each backend. We are working hard to relax some of these constraints and automate this the best we can. However, some of the limitations are due to the architecture of Loihi itself, and thus cannot be relaxed on the current hardware. So for now, support is primarily focused on building Nengo models that use the simple spiking models that don't define custom builder logic. | CommonCrawl |
First Construction of Brownian Motion, convergence in $C[0,\infty)$, $D[0,\infty)$, Donsker's invariance principle, properties of Brownian motion, continuous-time martingales, optional sampling theorem, Doob-Meyer decomposition, stochastic integration, Itô's formula, martingale representation theorem, Girsanov's theorem, Brownian motion and the heat equation, Feynman-Kac formula, diffusion processes and stochastic differential equations, strong and weak solutions, martingale problem.
P. Billingsley, Convergence of Probability Measures.
I. Karatzas and S. Shreve, Brownian Motion and Stochastic Calculus.
D. Revuz and M. Yor, Continuous Martingales and Brownian Motion.
B. Øksendal, Stochastic Differential Equations: An Introduction with Applications.
The aim of this work is to generalize lacunary statistical convergence to weak lacunary statistical convergence and $\mathcal I$-convergence to weak $\mathcal I$-convergence. We start by defining weak lacunary statistically convergent sequences and weak lacunary Cauchy sequences. We then find a connection between weak lacunary statistical convergence and weak statistical convergence.
Abstract: Deep convolutional neural networks (DCNN) have enjoyed great successes in many signal processing applications because they can learn complex, non-linear causal relationships from input to output. In this light, DCNNs are well suited for the task of sequential prediction of multidimensional signals, such as images, and have the potential of improving the performance of traditional linear predictors. In this research we investigate how far DCNNs can push the envelope in terms of prediction precision. We propose, in a case study, a two-stage deep regression DCNN framework for nonlinear prediction of two-dimensional image signals. In the first-stage regression, the proposed deep prediction network (PredNet) takes the causal context as input and emits a prediction of the present pixel. Three PredNets are trained with the regression objectives of minimizing $\ell_1$, $\ell_2$ and $\ell_\infty$ norms of prediction residuals, respectively. The second-stage regression combines the outputs of the three PredNets to generate an even more precise and robust prediction. The proposed deep regression model is applied to lossless predictive image coding, and it outperforms the state-of-the-art linear predictors by an appreciable margin.
A generalization of the original Diffie-Hellman key exchange in $(\mathbb{Z}/p\mathbb{Z})^*$ found a new depth when Miller and Koblitz suggested that such a protocol could be used with the group of points of an elliptic curve. In this paper, we propose a further vast generalization where abelian semigroups act on finite sets. We define a Diffie-Hellman key exchange in this setting and we illustrate how to build interesting semigroup actions using finite (simple) semirings. The practicality of the proposed extensions relies on the orbit sizes of the semigroup actions; at this point it is an open question how to compute the sizes of these orbits in general, and also whether a square-root attack exists in general.
In Section 5 a concrete, practical semigroup action built from simple semirings is presented. Further research will be required to analyse this system.
Keywords: one-way trapdoor functions, public key cryptography, semigroup actions, Diffie-Hellman protocol, simple semirings.
Mathematics Subject Classification: Primary: 94A60, 11T71; Secondary: 16Y60.
Roland Martin. On simple Igusa local zeta functions. Electronic Research Announcements, 1995, 1: 108-111. | CommonCrawl |
The distance between neighboring sticks is unknown; it is certainly smaller than half the length of the sticks themselves, as seen in the diagram, though it could be any arbitrary distance smaller than half the length of a stick.
By just moving $\mathbf 4$ sticks or fewer among them, can you form $\mathbf 8$ equilateral triangles?
Note: There are at least two answers that I know.
Make one of these, maybe?
This shape isn't as nice as the first one though: there's going to be a lot of useless stick ends jutting out, just waiting to poke someone's eye out. The first shape can always be constructed so that every part of every stick is needed to form the triangles.
A third answer can be found between these two: move the top horizontal stick in the second shape down, so that it separates the top corner of the coloured area from the others. This flips one of the triangles, making the top part look like that in the first shape, but otherwise, everything stays the same. | CommonCrawl |
View from Cerro Aconcagua (highest point in South America, 22,841 feet, 8 February 1996).
Hagedorn: Classification and Normal Forms for Avoided Crossings of Quantum Mechanical Energy Levels.
Figures required for the above paper.
Hagedorn and Joye: Landau-Zener Transitions through Small Electronic Eigenvalue Gaps in the Born-Oppenheimer Approximation.
Hagedorn and Joye: Molecular Propagation through Small Avoided Crossings of Electron Energy Levels.
Hagedorn: Raising and Lowering Operators for Semiclassical Wave Packets.
Hagedorn and Robinson: Bohr-Sommerfeld Quantization Rules in the Semiclassical Limit.
Hagedorn and Meller: Resonances in a Box.
Hagedorn and Joye: Semiclassical Dynamics with Exponentially Small Error Estimates.
Hagedorn and Robinson: Approximate Rydberg States of the Hydrogen Atom that are Concentrated near Kepler Orbits.
Hagedorn and Joye: Semiclassical Dynamics and Exponential Asymptotics.
Hagedorn: Molecular Propagation through Crossings and Avoided Crossings of Electron Energy Levels.
Hagedorn and Joye: Exponentially Accurate Semiclassical Dynamics: Propagation, Localization, Ehrenfest Times, Scattering, and More General States.
Hagedorn and Joye: A Time-Dependent Born-Oppenheimer Approximation with Exponentially Small Error Estimates.
Hagedorn and Joye: Elementary Exponential Error Estimates for the Adiabatic Approximation.
Hagedorn: Simplified Semiclassical Propagation Estimates.
Hagedorn and Joye: Time Development of Exponentially Small Non-Adiabatic Transitions.
Hagedorn and Toloza: Exponentially Accurate Semiclassical Asymptotics of Low-Lying Eigenvalues for $2 \times 2$ Matrix Schrödinger Operators.
Hagedorn and Joye: Determination of Non-adiabatic Scattering Wave Functions in a Born-Oppenheimer Model.
Hagedorn and Toloza: Exponentially Accurate Quasimodes for the Time-Independent Born-Oppenheimer Approximation on a One-Dimensional Molecular System.
Hagedorn, Rousse, and Jilcott: The AC Stark Effect, Time-Dependent Born-Oppenheimer Approximation, and Franck-Condon Factors.
Hagedorn and Joye: Mathematical Analysis of Born-Oppenheimer Approximations.
Hagedorn and Joye: Recent Results on Non-Adiabatic Transitions in Quantum Mechanics.
Hagedorn and Joye: A Mathematical Theory for Vibrational Levels Associated with Hydrogen Bonds I: The Symmetric Case.
Hagedorn and Joye: Vibrational Levels Associated with Hydrogen Bonds.
Herman and Hagedorn: Does Moller-Plesset Perturbation Theory Converge? A Look at Two-Electron Systems.
Hagedorn and Joye: A Mathematical Theory for Vibrational Levels Associated with Hydrogen Bonds II: The Non-Symmetric Case.
Authors of the above paper on a hike in the Alps, 25 June 2011.
Hagedorn and Lasser: Molecular Quantum Dynamics (MFO Snapshot).
The Broken Wagon Ranch with Pikes Peak in the background.
Descending Pyramid Peak. (This was one of the few places where the rock was solid.) 13 August 2010. | CommonCrawl |
Now we don't need to consider $1\times 1$ or $1\times 2$ any longer as we have found the smallest rectangle tilable with copies of T plus copies of $1\times 1$ and $1\times 2$.
There are at least 10 more solutions. I tagged it 'computer-puzzle' but you can certainly work some of these out by hand. The larger ones might be a bit challenging.
which uses the same central figure formed by the Ts as the $1\times 5$.
$8\times 8$ tiled with $1\times 6$. Not sure if optimal. | CommonCrawl |
This page is intended to be a part of the Numerical Analysis section of Math Online. Similar topics can also be found in the Linear Algebra section of the site.
As we have already seen, if we can write a square $n \times n$ matrix $A$ as a product of two matrices, $L$ and $U$ where $L$ is a lower triangular matrix with ones on the main diagonal and $U$ is an upper triangular matrix, then solving the system $Ax = b$ becomes a matter of simply applying forward substitution and backwards substitution.
Thus far though, we have found an $LU$ factorization of a matrix by first applying Gaussian Elimination to $A$ to get $U$, and then examining the multipliers in the Gaussian Elimination process to determine the entries below the main diagonal of $L$. We will now look at another method for finding an $LU$ decomposition of matrix without going through the process of Gaussian Elimination.
Doolittle's Method takes an $n \times n$ matrix $A$ and assumes that an $LU$ decomposition exists. We then match the entries of $A$ with the products of the necessary entries from $L$ and $U$. Doolittle's Method is best explained with an example. Suppose that $A$ is a $3 \times 3$ matrix and that an $LU$ decomposition exists.
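As a sketch of the bookkeeping (assuming the decomposition exists and no pivoting is needed — the method divides by the diagonal entries of $U$ as it goes), Doolittle's Method can be written in a few lines:

```python
import numpy as np

def doolittle(A):
    """LU decomposition with ones on L's diagonal, matching entries of A directly."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):             # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):         # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
L, U = doolittle(A)   # L @ U reproduces A
```

Each entry of $A$ is used exactly once, in the order that makes every needed entry of $L$ and $U$ already available.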
We consider the scattering of a beam particle with energy $E$, momentum $p$ and charge $ze$ off a charge distribution $\rho (x)$ of total charge $Ze$. We will consider the target to be much heavier than the probe, so that we can neglect the recoil and the outgoing energy of the scattered particle is the same as its incoming energy. We want to calculate the cross section in perturbation theory, so we need the interaction to be small which is the case if $$zZ \alpha \lt 1$$ where $\alpha$ is the fine structure constant.
Now, where does that follow from? Why does the inequality above imply that the interaction Hamiltonian will be "small"?
Apophenia version 1.0 is out!
What I deem to be the official package announcement is this 3,000 word post at medium.com, which focuses much more on the why than the what or how. If you follow me on twitter then you've already seen it; otherwise, I encourage you to click through.
This post is a little more on the what. You're reading this because you're involved in statistical or scientific computing, and want to know if Apophenia is worth working with. This post is primarily a series of bullet points that basically cover background, present, and future.
Apophenia is a library of functions for data processing and modeling.
The PDF manual is over 230 pages, featuring dozens of base models and about 250 data and model manipulation functions. So if you're thinking of doing any sort of data analysis in C, there is probably already something there for you to not reinvent. You can start at the manual's Gentle Introduction page and see if anything seems useful to you.
For data processing, it is based on an apop_data structure, which is a lot like an R data frame or a Python Pandas data frame, except it brings the operations you expect to be able to do with a data set to plain C, so you have predictable syntax and minimal overhead.
For modeling, it is based on an apop_model structure, which is different from anything I've seen in any other stats package. In Stats 101, the term statistical model is synonymous with Ordinary Least Squares and its variants, but the statistical world is much wider than that, and is getting wider every year. Apophenia starts with a broad model object, of which ordinary/weighted least squares is a single instance (apop_ols).
We can give identical treatment to models across paradigms, like microsimulations, or probabilistic decision-tree models, or regressions.
We can have uniform functions like apop_estimate and apop_model_entropy that accommodate known models using known techniques and models not from the textbooks using computationally-intensive generic routines. Then you don't have to rewrite your code when you want to generalize from the Normal distributions you started with for convenience to something more nuanced.
We can write down transformations of the form f:(model, model) $\to$ model.
Want a mixture of an empirical distribution built from observed data (a probability mass function, PMF) and a Normal distribution estimated using that data?
You want to modify your agent-based model via a Jacobian [apop_coordinate_transform], then truncate it to data above zero [apop_model_truncate]? Why not—once your model is in the right form, those transformations know what to do.
In short, we can treat models and their transformations as an algebraic system; see a paper I once wrote for details.
It means that this is reasonably reliable.
Can the United States Census Bureau rely on it for certain aspects of production on its largest survey (the ACS)? Yes, it can (and does).
Does it have a test bed that checks for correct data-shunting and good numeric results in all sorts of situations? Yes: I could talk all day about how much the 186 scripts in the test base do.
Is it documented? Yes: the narrative online documentation is novella length, plus documentation for every function and model, plus the book from Princeton University Press described on the other tabs on this web site, plus the above-linked Census working paper. There's a lot to cover, but an effort has been made to cover it.
Are there still bugs? Absolutely, but by calling this v1, I contend that they're relatively isolated.
Is it idiot-proof? Nope. For example, finding the optimum in a 20-dimensional space is still a fundamentally hard problem, and the software won't stop you from doing one optimization run with default parameters and reporting the output as gospel truth. I know somebody somewhere will write me an angry letter about how software that does not produce 100% verifiably correct results is garbage; I will invite that future correspondent to stick with the apop_normal and apop_ols models, which work just fine (and the OLS estimator even checks for singular matrices). Meanwhile, it is easy to write models that don't even have proven properties such as consistency (can we prove that as draw count $\to\infty$, parameter estimate variance $\to 0$?). I am hoping that Apophenia will help a smart model author determine whether the model is or is not consistent, rather than just printing error: problem too hard and exiting.
It means that it does enough to be useful. A stats library will never be feature-complete, but as per the series of blog posts starting in June 2013 and, well, the majority of what I've done for the last decade, it provides real avenues for exploration and an efficient path for many of the things a professional economist/statistician faces.
It means I'm no longer making compatibility-breaking changes. A lot of new facilities, including the named/optional arguments setup, vtables for special handling of certain models, a decent error-handling macro, better subsetting macros, and the apop_map facilities (see previously) meant that features implemented earlier merited reconsideration, but we're through all that now.
It's a part of Debian! See the setup page for instructions on how to get it from the Debian Stretch repository. It got there via a ton of testing (and a lot of help from Jerome Benoit on the Debian Scientific team), so we know it runs on a lot more than just my own personal box.
The core is designed to facilitate incremental improvements: we can add a new model, or a new optimization method, or another means of estimating the variance of an existing model, or make the K-L divergence function smarter, or add a new option to an existing function, and we've made that one corner of the system better without requiring other changes or work by the user. The intent is that from here on out, every time the user downloads a new version of Apophenia, the interface stays the same but that the results get better and are delivered faster, and new models and options appear.
That means there are a lot of avenues for you and/or your students to contribute.
Did I mention that you'll find bugs? Report them and we'll still fix them.
It's safe to write wrappers around the core. I wrote an entire textbook to combat the perception that C is a scary monster, but if the user doesn't come to the 19,000-line mountain of code that is Apophenia, we've got to bring the mountain to the user.
For Julia, I presented version 0.01 of a Julia wrapper.
The esteemed Josh Tauberer threw together a zeroth draft of a Python wrapper.
By the way, Apophenia is free, both as in beer and as in speech. I forget to mention this because it is so obvious to me that software—especially in a research context—should be free, but there are people for whom this isn't so obvious, so there you have it.
I haven't done much to promote Apophenia. A friend who got an MFA from an art school says that she had a teacher who pushed that you should spend 50% of your time producing art, and 50% of your time selling your art.
I know I'm behind on the promotion, so, please: blog it, tweet it, post it on Instagram, make a wikipage for it, invite me to give talks at your department. People will always reinvent already-extant code, but they should at least know that they're doing so.
And my final request: try it! Apophenia doesn't look like every other stats package, and may require seeing modeling from a different perspective, but that just may prove to be a good thing. | CommonCrawl |
Thesis title: Estimation of a multivariate density under the assumption of an independence structure.
In this paper, we address the problem of estimating a multidimensional density $f$ by using indirect observations from the statistical model $Y=X+\varepsilon$. Here, $\varepsilon$ is a measurement error independent of the random vector $X$ of interest, and having a known density with respect to the Lebesgue measure. Our aim is to obtain optimal accuracy of estimation under $L_p$-losses when the error $\varepsilon$ has a characteristic function with a polynomial decay. To achieve this goal, we first construct a kernel estimator of $f$ which is fully data driven. Then, we derive for it an oracle inequality under very mild assumptions on the characteristic function of the error $\varepsilon$. As a consequence, we get minimax adaptive upper bounds over a large scale of anisotropic Nikolskii classes and we prove that our estimator is asymptotically rate optimal when $p\in[2,+\infty]$. Furthermore, our estimation procedure adapts automatically to the possible independence structure of $f$ and this allows us to improve significantly the accuracy of estimation.
In this paper, we study the problem of pointwise estimation of a multivariate density. We provide a data-driven selection rule from the family of kernel estimators and derive for it a pointwise oracle inequality. Using the latter bound, we show that the proposed estimator is minimax and minimax adaptive over the scale of anisotropic Nikolskii classes. It is important to emphasize that our estimation method adjusts automatically to an eventual independence structure of the underlying density. This, in turn, allows us to reduce significantly the influence of the dimension on the accuracy of estimation (curse of dimensionality). The main technical tools used in our considerations are pointwise uniform bounds on empirical processes developed recently in Lepski [Math. Methods Statist. 22 (2013) 83-99].
In this paper, we focus on the problem of multivariate density estimation under an $L_p$-loss. We provide a data-driven selection rule from a family of kernel estimators and derive for it $L_p$-risk oracle inequalities depending on the value of $p \ge 1$. The proposed estimator permits us to take into account approximation properties of the underlying density and its independence structure simultaneously. Specifically, we obtain adaptive upper bounds over a scale of anisotropic Nikolskii classes when the smoothness is also measured with the $L_p$-norm. It is important to emphasize that the adaptation to the unknown independence structure of the estimated density allows us to improve significantly the accuracy of estimation (curse of dimensionality). The main technical tools used in our derivation are uniform bounds on the $L_p$-norms of empirical processes developed in Goldenshluger and Lepski.
9 Correct notation for "for all positive real $c$"
7 Why is every $p$-norm convex?
7 What is wrong with this argument that closed interval [0, 1] is not compact?
5 How to prove that if $f$ is continuous a.e., then it is measurable.
5 Is there any proof that there doesn't exist a circulant Hadamard matrix of size $8 \times 8$? | CommonCrawl |
If you have ever worked with financial data you have probably seen an unbalanced dataset. When you deal with loan defaults or fraud data, the examples of non-defaults and non-fraud far outweigh the defaults and fraud cases.
In this scenario, you want to build a classifier that can identify fraudulent transactions in credit card histories. Fortunately, most transactions are legitimate, so perhaps only 0.1% of the data is a positive instance. The problem refers to the fact that for a large number of real world problems, the number of positive examples is dwarfed by the number of negative examples (or vice versa).
Imbalanced data is a problem because machine learning algorithms are too smart for your own good. For most learning algorithms, if you give them data that is 99.9% negative and 0.1% positive, they will simply learn to always predict negative. Why? Because they are trying to minimize error, and they can achieve 0.1% error by doing nothing! If a teacher told you to study for an exam with 1000 true/false questions and only one of them is true, it is unlikely you will study very long.
Really, the problem is not with the data, but rather with the way that you have defined the learning problem. That is to say, what you care about is not accuracy: you care about something else. If you want a learning algorithm to do a reasonable job, you have to tell it what you want!
The figure above is a clear example of why a typical accuracy score is a poor way to evaluate our classification algorithm. For example, if we just used the majority class to assign values to all records, we would still have a high accuracy, but we would be classifying all 1s (fraud) incorrectly!
Most likely, what you want is not to optimize accuracy, but rather to optimize some other measure, like f-score or AUC. You want your algorithm to make some positive predictions, and simply prefer those to be "good." We will shortly discuss two heuristics for dealing with this problem: subsampling and weighting. In subsampling, you throw out some of your negative examples so that you are left with a balanced data set (50% positive, 50% negative). This might scare you a bit since throwing out data seems like a bad idea, but at least it makes learning much more efficient. In weighting, instead of throwing out negative examples, we just give them lower weight. If you assign an importance weight of 0.00101 to each of the negative examples, then there will be as much weight associated with positive examples as with negative examples.
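As a small numpy illustration of the weighting idea (toy labels; the extreme imbalance is constructed deterministically here):

```python
import numpy as np

# Toy labels: 5 positives among 1000 examples
y = np.zeros(1000, dtype=int)
y[:5] = 1

n_pos = int((y == 1).sum())
n_neg = int((y == 0).sum())

# Importance weights: 1 per positive, n_pos/n_neg per negative
w = np.where(y == 1, 1.0, n_pos / n_neg)
# Now each class carries the same total weight, without discarding any data
```

Most learning libraries accept such per-example weights directly in their fit routines.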
Collect more data, however, this is not always possible.
ROC curves - calculates sensitivity/specificity ratio.
Essentially this is a method that will process the data to achieve an approximate 50-50 ratio.
Adding copies of the under-represented class (better when you have little data). The main advantage of the over-sampling approach is that it does not throw out any data.
Deletes instances from the over-represented class (better when we have lots of data). The main advantage of subsampling is that it is more computationally efficient.
Apart from under and over sampling, there is a very popular approach called SMOTE (Synthetic Minority Over-Sampling Technique), which is a combination of oversampling and under-sampling, but the oversampling approach is not by replicating minority class but constructing new minority class data instance via an algorithm.
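The core interpolation step of SMOTE is simple to sketch in numpy. This is a bare-bones illustration, not the full algorithm — a real implementation, such as imbalanced-learn's `SMOTE`, handles the k-NN search and sampling strategy properly:

```python
import numpy as np

def smote_point(X_min, k=3, rng=None):
    """Synthesize one minority point between a random sample and one of its k nearest neighbors."""
    rng = np.random.default_rng() if rng is None else rng
    i = rng.integers(len(X_min))
    x = X_min[i]
    dists = np.linalg.norm(X_min - x, axis=1)
    neighbors = np.argsort(dists)[1:k + 1]        # skip x itself
    nb = X_min[rng.choice(neighbors)]
    return x + rng.random() * (nb - x)            # random point on the segment x -> nb

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_point = smote_point(X_min, k=2, rng=np.random.default_rng(0))
```

Because the new point lies on a segment between two minority examples, it is plausible new minority data rather than a plain duplicate.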
Add a term $\alpha$ to the cost function to more heavily penalize misclassifications of the minority class.
This example in Python will under-sample the dataset to create a balanced 50/50 ratio. This will be done by randomly selecting $x$ samples from the majority class (not fraud), where $x$ is the total number of records in the minority class (fraud).
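The snippet might look like the following — toy labels stand in for the actual fraud column, so all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the class column: 1 = fraud, 0 = normal
y = np.zeros(1000, dtype=int)
y[:20] = 1

fraud_idx = np.flatnonzero(y == 1)
x = fraud_idx.size                           # number of minority-class records

# Randomly pick x majority-class records and combine with all minority records
normal_idx = rng.choice(np.flatnonzero(y == 0), size=x, replace=False)
under_idx = rng.permutation(np.concatenate([fraud_idx, normal_idx]))

y_under = y[under_idx]                        # balanced 50/50 by construction
```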
As we know, due to the imbalance of the data, many observations could be predicted as False Negatives, meaning that we predict a normal transaction when it is in fact a fraudulent one. Recall captures this.
If you train your model on the sampled dataset, you can then test it on the original unbalanced dataset and achieve a higher Recall rate.
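Recall is $TP/(TP+FN)$ — the fraction of actual fraud cases that we catch — and is easy to compute directly:

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])

tp = int(np.sum((y_pred == 1) & (y_true == 1)))   # frauds correctly flagged
fn = int(np.sum((y_pred == 0) & (y_true == 1)))   # frauds we missed
recall = tp / (tp + fn)                           # here 2 / 3
```

Note that recall ignores false positives entirely, which is exactly why it complements accuracy on unbalanced data.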
This page is a summary of the work carried out on the Career Integration Grant project KineticCF which run from 2012 to 2015.
The work of this project focuses on the rigorous mathematical development of kinetic theory and the applications of its techniques to the study of models of population dynamics and cell fragmentation and growth in biology, to the dynamics of coagulation and fragmentation processes in physics, and to more recently developed models in the field of collective behavior. The aim is then twofold: to advance the understanding of basic equations in kinetic theory, such as the Boltzmann equation, and to employ known or newly developed techniques in this field to the rigorous treatment of models in the above mentioned areas, such as the Becker-Döring equation for nucleation, the growth-fragmentation model for cell populations, or individual-based models for collective behaviour.
Work has been carried out on the main lines detailed in the project. Particularly, advances have been obtained regarding entropy methods for the Boltzmann equation and their application to the behaviour of models with a background interaction. We also considered coagulation and fragmentation models, with an interesting breakthrough on entropy inequalities for the Becker-Döring equation (see results below). Work on collective behaviour models has also yielded some fundamental results in the theory of the aggregation equation. Intense collaborations have been carried out with researchers in the UK and international groups, especially in Imperial College London regarding collective behaviour; with University of Cambridge researchers regarding entropy inequalities for coagulation-fragmentation problems; and following ongoing research projects with the Universities of Parma and Torino.
A Spring School coorganised with M. Di Francesco was held at the University of Bath in 2014 on the topic of the relationship between microscopic and macroscopic behaviour of systems. You can find all information on the school, including the full program and some of the presented posters, at its website.
Regarding the Boltzmann equation and related models, we have studied an entropy-entropy production inequality for the logarithmic entropy in the linear Boltzmann equation. This was one of the objectives of the project and has been carried out successfully, with results gathered in a 2015 publication in Journal of Functional Analysis. Several applications of this inequality have been given in a preprint by Cañizo and Lods, where it is used as a means to studying trend to equilibrium of a nonlinear model including an interaction with a background at a fixed temperature.
Part of the proposed work on coagulation-fragmentation models concerns the Becker-Döring equation, a model for nucleation and growth which is relevant in processes of crystallization, aggregation of lipids, and phase change phenomena. Quite complete results have now been obtained regarding the asymptotic behaviour of its subcritical solutions. A linearised study was carried out in Cañizo and Lods (2013), and optimal results on entropy-entropy production inequalities have been obtained in Cañizo, Einav, and Lods (2015). Using inequalities in the theory of Markov processes, we show that there cannot be full entropy production inequalities in general, clarifying previous results on the matter. These results have a strong analogy with results for the Boltzmann equation, showing that the proposed links between coagulation and kinetic models have been fruitful.
Finally, a fundamental result on the existence of minimisers for attractive-repulsive interaction potentials has been published in Archive for Rational Mechanics and Analysis. This is a step on the way of understanding the dynamics of several collective behaviour models, including the aggregation equation which has been an important focus of recent research in the field. We point out that the journals where the main results have appeared are of an excellent quality, ranking among the top journals in all common metrics. Published papers have already been used in several other models, with a remarkable impact for such recent works.
Below is a list of published results, with links to papers in the public preprint server arXiv.
José A. Cañizo, Amit Einav and Bertrand Lods. Trend to Equilibrium for the Becker-Döring Equations: An Analogue of Cercignani's Conjecture. Analysis & PDE 10(7):1663–1708, 2017.
In this work we investigate the rate of convergence to equilibrium for subcritical solutions to the Becker-Döring equations with physically relevant coagulation and fragmentation coefficients and mild assumptions on the given initial data. Using a discrete version of the log-Sobolev inequality with weights we show that in the case where the coagulation coefficient grows linearly and the detailed balance coefficients are of typical form, one can obtain a linear functional inequality for the dissipation of the relative free energy. This results in showing Cercignani's conjecture for the Becker-Döring equations and consequently in an exponential rate of convergence to equilibrium. We also show that for all other typical cases one can obtain an 'almost' Cercignani's conjecture that results in an algebraic rate of convergence to equilibrium. Additionally, we show that if one assumes an exponential moment condition one can recover Jabin and Niethammer's rate of decay to equilibrium, i.e. an exponential to some fractional power of $t$.
José A. Cañizo and Bertrand Lods. Exponential trend to equilibrium for the inelastic Boltzmann equation driven by a particle bath. Nonlinearity 29(5):1687–1715, 2016.
We consider the spatially homogeneous Boltzmann equation for inelastic hard spheres (with constant restitution coefficient $\alpha \in (0,1)$) under the thermalization induced by a host medium with a fixed Maxwellian distribution. We prove that the solution to the associated initial-value problem converges exponentially fast towards the unique equilibrium solution. The proof combines a careful spectral analysis of the linearised semigroup as well as entropy estimates. The trend towards equilibrium holds in the weakly inelastic regime in which $\alpha$ is close to $1$, and the rate of convergence is explicit and depends solely on the spectral gap of the elastic linearised collision operator.
Alethea B. T. Barbaro, José A. Cañizo, José A. Carrillo and Pierre Degond. Phase transitions in a kinetic flocking model of Cucker-Smale type. Multiscale Modelling and Simulation 14(3):1063–1088, 2016.
We consider a collective behavior model in which individuals try to imitate each others' velocity and have a preferred speed. We show that a phase change phenomenon takes place as diffusion decreases, bringing the system from a "disordered" to an "ordered" state. This effect is related to recently noticed phenomena for the diffusive Vicsek model. We also carry out numerical simulations of the system and give further details on the phase transition.
Marzia Bisi, José A. Cañizo and Bertrand Lods. Entropy dissipation estimates for the linear Boltzmann operator. Journal of Functional Analysis 269(4):1028–1069, 2015.
We prove a linear inequality between the entropy and entropy dissipation functionals for the linear Boltzmann operator (with a Maxwellian equilibrium background). This provides a positive answer to the analogue of Cercignani's conjecture for this linear collision operator. Our result covers the physically relevant case of hard-spheres interactions as well as Maxwellian kernels, and we always work with a cut-off assumption. For Maxwellian kernels, the proof of the inequality is surprisingly simple and relies on a general estimate of the entropy of the gain part operator due to Villani (1998) and Matthes and Toscani (2012). For more general kernels, the proof relies on a comparison principle. Finally, we also show that in the grazing collision limit our results allow to recover known logarithmic Sobolev inequalities.
J. A. Cañizo, J. A. Carrillo and F. S. Patacchini. Existence of Compactly Supported Global Minimisers for the Interaction Energy. Archive for Rational Mechanics and Analysis 217(3):1197–1217, 2015.
The existence of compactly supported global minimisers for continuum models of particles interacting through a potential is shown under almost optimal hypotheses. The main assumption on the potential is that it is catastrophic or not H-stable, which is the complementary assumption to that in classical results on thermodynamic limits in statistical mechanics. The proof is based on a uniform control on the local mass around each point of the support of a global minimiser, together with an estimate on the size of the 'holes' that a minimiser may have. The class of potentials for which we prove existence of minimisers includes power-law potentials and, for some range of parameters, Morse potentials, widely used in applications. Finally, using Euler-Lagrange conditions on local minimisers we give a link to classical obstacle problems in the calculus of variations.
José A. Cañizo and Bertrand Lods. Exponential convergence to equilibrium for subcritical solutions of the Becker–Döring equations. Journal of Differential Equations 255(5):905–950, 2013.
We prove that any subcritical solution to the Becker-Döring equations converges exponentially fast to the unique steady state with same mass. Our convergence result is quantitative and we show that the rate of exponential decay is governed by the spectral gap for the linearized equation, for which several bounds are provided. This improves the known convergence result by Jabin & Niethammer (see ref. ). Our approach is based on a careful spectral analysis of the linearized Becker-Döring equation (which is new to our knowledge) in both a Hilbert setting and in certain weighted $\ell^1$ spaces. This spectral analysis is then combined with uniform exponential moment bounds of solutions in order to obtain a convergence result for the nonlinear equation.
Expected final results and potential impact.
Collaborations are under way to extend the results mentioned above. We expect that further applications can be found for the inequalities involving the Boltzmann operator, possibly allowing for the study of models with inelastic collisions. For the Becker-Döring equation, behaviour of subcritical solutions is quite well understood now, and further work will probably involve a better understanding of the supercritical behaviour, which is still a challenging problem. Regarding collective behaviour models, a study of the dynamics of the aggregation equation is one of the problems being attacked now. For this, further properties of stationary solutions are probably needed, such as uniqueness and better information on the regularity. We intend to continue work on these problems in the next years. | CommonCrawl |
How to annoy Your Physics Professor!
I got the idea for this post from the article "Things to Do to Annoy Your Physics Professor", written by someone at the University of Maryland. If you want to try them out, you do so at your own risk.
1. At some time during every lecture, slowly lift yourself up out of your chair and cry out, "Look! Anti-gravity!" As soon as the professor turns to look at you, let yourself fall back into your chair, shrug your shoulders and say, "Guess not."
2. Try to confuse him/her with sentences or questions containing a bunch of unrelated things that nonetheless sound like they could actually mean something. For example, "Why not just write the answer as a contour integral in the complex plane of a fourth order tensor in Minkowski space-time?" It helps to sound like you know what you're talking about, too.
3. At the beginning of every class, tell him/her that you've just broken a fundamental law of physics or solved an unsolved problem, but that you don't have any proof. For example, "Last night I discovered the Grand Unified Theory, but I lost the piece of paper I wrote it on."
4. Whenever the teacher starts talking about vectors, raise your hand and ask him/her if this is where you get to use the right hand rule. After deriving an equation, say, "That's nice, but how does it relate to Newton's Laws?"
5. During labs, continually complain about simplifying the experiment by neglecting things like friction, air resistance, relativistic effects, etc. Remember, your main source of error in labs is God's will -- if He wanted the experiment to work, it would have, but since the experiment failed, He obviously didn't want it to work.
6. Tell your teacher that you want a real proof of Schrodinger's wave equation -- none of this demonstration about how it was arrived at.
8. Always plot your data with the x and y axes reversed. Use semi-log paper for graphs when no logarithms need to be taken, and use normal paper when logarithms need to be taken.
9. Solve all mechanics problems using *only* Newton's Laws and solve all E&M problems using *only* Maxwell's equations. And use the Schrodinger equation to solve classical mechanics problems and Newton's laws to solve quantum mechanics problems.
10. Threaten to renormalize his/her wave function to zero over all space and time. If you could do that, then not only would he/she not exist now, he/she would never have existed at all!
11. Restate the laws of thermodynamics in terms of income:
Income can neither be created nor destroyed.
In order to use income, you must pay your taxes.
As income approaches absolutely nothing, there are no taxes to pay.
12. Always use the full values of known constants and carry out as many decimal places as possible (e.g. $ c = 2.99792458 \times 10^8\ m/s$). This is the most fun for $\pi$ ($ \pi = 3.141592653589793238462643383279...$ you get the idea) and other numbers like that. Never round your answers. "Significant digits? Who needs them?"
13. In the class, start running into the wall. When the professor asks why, say that you are attempting to tunnel through the wall and prove that macroscopic tunnelling is possible. "There is a non-zero probability that I can tunnel through the wall, so if I run into the wall often enough, eventually I should be able to go through it!"
14. When the professor asks you to explain the concept of the theory of relativity, use a diagram. Put your name at the top and draw two branches from your name. Then put your parents' names. Everybody on the diagram should be relative!
15. When the professor requests that your solutions be given in MKS units, use CGS units instead. Likewise, when the professor asks that your solutions be given in CGS units, give them in MKS instead. When it doesn't matter, use the tie-breaker: the English system.
16. In the lab, charge a capacitor to full capacity and then short it out to make that cool spark and pop sound. Likewise, charge the capacitor until the thing just explodes. Always blame the manufacturer of the lab equipment that you are using for experimental error in any lab experiments.
18. Start creating your own conservation laws and give proof that the quantities you claim are conserved actually are. When the totalitarian principle is brought up, "Every process that is not forbidden must occur," start arguing with your professor that the reverse must also be true: "Every process that does not occur must be forbidden."
20. Refuse to study any particle physics unless you can use a particle accelerator or cyclotron to actually see some reactions.
21. When describing Newton's laws of conservation, give answers from the GOP's Contract with America.
22. Start hanging a magnetic tape up in the middle of class, stringing it as far across the room as you can possibly get, and explain that you're setting up a catenary.
23. When in the Physics lab, and asked to demonstrate projectile motion, get out the dartboard and play a game of darts.
24. When asked to show the symbols used for "Bra-Ket" notation, draw a bra for the bra part and a kite for the ket part. Call it "Bracket" notation in class, too. Not "Bra-Ket" notation.
25. Build a bridge out of toothpicks in E&M lab. Put a sign on it: "Wheatstone." Don't forget to put it across a small puddle of water on your lab bench.
*1) Mathematics professor Alexander Abian of Iowa State University has a theory that mass and time are equivalent and that in order for time to advance, mass is necessarily lost.
*2) Apparently this method was actually used at one time, and it has shown up in at least one lab manual as an optional method for solving the problem.
I posted this article about 4 years ago on one of my old blogs, called 'Science Catchup'.
A cube has 24 orientations. By rolling the cube on its edges around the perimeter of a $2\times4$ rectangle 3 times, all 24 orientations are reached and the next roll returns the cube to both the starting position and starting orientation.
I've called the 24-node graph the "rolling cube graph". It's the bipartite double graph of the cuboctahedral graph.
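The 24-orientation count, and the fact that each orientation has four distinct rolling moves, is easy to verify by brute force. Here is a sketch using an ad-hoc encoding of an orientation as its (top, north, east) face labels, with opposite faces summing to 7 — the encoding is my own illustration, not notation from the original discussion:

```python
from collections import deque

def roll(state, direction):
    # state = (top, north, east); opposite faces sum to 7
    t, n, e = state
    if direction == "N":   # tip over the north edge: top -> north, bottom -> south
        return (7 - n, t, e)
    if direction == "S":
        return (n, 7 - t, e)
    if direction == "E":   # tip over the east edge: top -> east, bottom -> west
        return (7 - e, n, t)
    return (e, n, 7 - t)   # "W"

# Breadth-first search over orientations, ignoring the cube's grid position.
start = (1, 2, 3)
seen = {start}
queue = deque([start])
while queue:
    s = queue.popleft()
    for d in "NSEW":
        s2 = roll(s, d)
        if s2 not in seen:
            seen.add(s2)
            queue.append(s2)

print(len(seen))  # 24 orientations, matching the rotation group of the cube
```

Each orientation here has four distinct neighbours, so the rolling cube graph is 4-regular on 24 vertices.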
A rolling icosahedron has 120 orientations. The top face can point up or down in each of 60 orientations. What is the smallest triangular grid for which a rolling icosahedron can roll through a complete Hamiltonian cycle of all 120 orientations? What are the properties of the 120-vertex cubic graph?
Similar question for the other 7 deltahedra. What is the smallest triangular grid allowing a complete cycle of all orientations?
For other polyhedra that can be rolled through all possible orientations on a simple 2D grid of polygons, what is the smallest grid that supports a Hamiltonian cycle?
For the 1x1x2 cuboid, here's a grid that allows a Hamiltonian path through all 24 orientations. Is there a grid with fewer cells?
I solved the icosahedron. There are two nice closed curves that will put a rolling icosahedron through all 120 orientations if the curves are repeated 5 times. The outside edges of the graph are the Hamiltonian cycle. Orientations are connected by rolling on one of the three bottom edges of the icosahedron.
The rolling icosahedron graph is equivalent to the Foster120B cubic symmetric graph.
The octahedron gives the Nauru graph.
The tetrahedron gives the cubical graph.
Palladium 103 is a newer sealed source that is used in place of 125I, particularly for permanent interstitial prostate implants. It is coated onto graphite pellets and encapsulated within a titanium shell as seeds.
103Pd is a pure gamma emitter, decaying through electron capture to 103Rh. Gamma energies range from 20 - 23 keV, with a mean energy of 21 keV. This is slightly lower than 125I, and 103Pd has a smaller HVL in lead and water (lead HVL 0.013 mm). The specific activity of 103Pd is $2.8 \times 10^3$ TBq/g.
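As a rough cross-check of that specific-activity figure, it can be recomputed from the half-life via $A = \lambda N_A / M = \ln 2 \cdot N_A / (T_{1/2} M)$; this sketch assumes $T_{1/2} \approx 17$ days for 103Pd:

```python
import math

N_A = 6.022e23            # Avogadro's number, 1/mol
half_life_s = 17 * 86400  # ~17 days for Pd-103, in seconds (assumed)
molar_mass = 103          # g/mol

# Specific activity A = lambda * N / m = (ln 2 / T_half) * N_A / M
decay_const = math.log(2) / half_life_s
activity_bq_per_g = decay_const * N_A / molar_mass
print(activity_bq_per_g / 1e12)  # ~2.8e3 TBq/g, consistent with the quoted value
```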
Like 125I, 103Pd has a relatively fragile capsule which may be damaged. As a permanent implant, it does not require disposal, and its shorter half life means that precautions after death and cremation are less of an issue. It should be stored within a lead safe at least 3 mm thick. The low dose rate and low photon energies mean that handling is simpler than the older 226Ra and 198Au implants.
103Pd is supplied by the manufacturer and is sterilised prior to insertion. As a permanent implant, disposal issues are less of a concern, but precautions following the death of the patient should be observed for several months following insertion.
Abstract: Legacy codes in computational science and engineering have been very successful in providing essential functionality to researchers. However, they are not capable of exploiting the massive parallelism provided by emerging heterogeneous architectures. The lack of portable performance and scalability puts them at high risk: either they evolve or they are doomed to disappear. One example of legacy code which would heavily benefit from a modern design is FLEUR, a software for electronic structure calculations. In previous work, the computational bottleneck of FLEUR was partially re-engineered to have a modular design that relies on standard building blocks, namely BLAS and LAPACK. In this paper, we demonstrate how the initial redesign enables the portability to heterogeneous architectures. More specifically, we study different approaches to port the code to architectures consisting of multi-core CPUs equipped with one or more coprocessors such as Nvidia GPUs and Intel Xeon Phis. Our final code attains over 70\% of the architectures' peak performance, and outperforms Nvidia's and Intel's libraries. Finally, on JURECA, the supercomputer where FLEUR is often executed, the code takes advantage of the full power of the computing nodes, attaining $5\times$ speedup over the sole use of the CPUs. | CommonCrawl |
The consecutive odds ratios of the binomial $(n, p)$ distribution help us derive an approximation for the distribution when $n$ is large and $p$ is small. The approximation is sometimes called "the law of small numbers" because it approximates the distribution of the number of successes when the chance of success is small: you only expect a small number of successes.
As an example, here is the binomial $(1000, 2/1000)$ distribution. Note that $1000$ is large, $2/1000$ is pretty small, and $1000 \times (2/1000) = 2$ is the natural number of successes to be thinking about.
Though the possible values of the number of successes in 1000 trials can be anywhere between 0 and 1000, the probable values are all rather small because $p$ is small. That is why we didn't even bother computing the probabilities beyond $k = 15$.
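A quick way to confirm that the mass beyond $k = 15$ is negligible — a standard-library sketch (using `math.comb` rather than the stats module the text itself uses):

```python
from math import comb

n, p = 1000, 2/1000

def binom_pmf(k, n, p):
    # Binomial probability of exactly k successes in n trials
    return comb(n, k) * p**k * (1 - p)**(n - k)

tail_mass = sum(binom_pmf(k, n, p) for k in range(16))
print(tail_mass)             # ~1.0: essentially all the mass is at k <= 15
print(binom_pmf(16, n, p))   # already negligibly small
```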
Since the histogram is all scrunched up near 0, only very few bars have noticeable probability. It really should be possible to find or approximate the chances of the corresponding values by a simpler calculation than the binomial formula.
To see how to do this, we will start with $P(0)$.
Let $n \to \infty$ and $p_n \to 0$ in such a way that $np_n \to \mu > 0$. It's important to ensure that $p_n$ doesn't go to 0 so fast that $np_n \to 0$ as well, because in that case all the probability just gets concentrated at the value 0 when $n$ is large.
Let $P_n(k)$ be the binomial $(n, p_n)$ probability of $k$ successes.
$$
P_n(0) ~=~ (1-p_n)^n ~=~ \left(1 - \frac{np_n}{n}\right)^n ~\approx~ e^{-\mu}
$$
when $n$ is large, because $p_n \sim 0$ and $np_n \sim \mu$.
For $k \ge 1$, the consecutive odds ratio is
$$
\frac{P_n(k)}{P_n(k-1)} ~=~ \frac{n-k+1}{k} \cdot \frac{p_n}{1-p_n} ~\approx~ \frac{\mu}{k}
$$
when $n$ is large, because $k$ is constant, $np_n \to \mu$, $p_n \to 0$, and $1-p_n \to 1$. By induction, this implies the following approximation for each fixed $k$.
$$
P_n(k) ~\approx~ e^{-\mu} \frac{\mu^k}{k!}
$$
if $n$ is large, under all the additional conditions we have assumed. Here is a formal statement.
This is called the Poisson approximation to the binomial. The parameter of the Poisson distribution is $\mu \sim np_n$ for large $n$.
The distribution is named after its originator, the French mathematician Siméon Denis Poisson (1781-1840).
The expansion is infinite, but we are only going up to a finite (though large) number of terms $n$. You now start to see the value of being able to work with probability spaces that have an infinite number of possible outcomes.
We'll get to that in a later section. For now, let's see if the approximation we derived is any good.
Use stats.poisson.pmf just as you would use stats.binom.pmf, but keep in mind that the Poisson has only one parameter.
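A concrete check of the approximation — a sketch using `scipy.stats`, which mirrors the stats module the text uses (exact names in the course's own toolkit may differ):

```python
import numpy as np
from scipy import stats

mu = 2
k = np.arange(16)

# Poisson takes a single parameter mu; the binomial takes n and p.
poisson_probs = stats.poisson.pmf(k, mu)
binom_probs = stats.binom.pmf(k, 1000, 2/1000)

# The approximation is excellent: the largest pointwise difference is tiny.
max_diff = np.max(np.abs(poisson_probs - binom_probs))
print(max_diff)  # on the order of 1e-4
```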
The prob140 function that draws overlaid histograms is called Plots (note the plural). The syntax has alternating arguments: a string label you provide for a distribution, followed by that distribution, then a string label for the second distribution, then that distribution.
Does it look as though there is only one histogram? That's because the approximation is great! Here are the two histograms individually.
In lab, you will use total variation distance to get a bound on the error in the approximation.
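The total variation distance itself is easy to compute directly; a standard-library sketch, truncating the sum at $k = 50$, beyond which both pmfs are negligible:

```python
from math import comb, exp, factorial

n, p, mu = 1000, 2/1000, 2

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k):
    return exp(-mu) * mu**k / factorial(k)

# TVD = (1/2) * sum over k of |P(k) - Q(k)|
tvd = 0.5 * sum(abs(binom_pmf(k) - poisson_pmf(k)) for k in range(51))
print(tvd)  # small: Le Cam's inequality bounds it above by n * p**2 = 0.004
```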
Part of the answer is that if a function involves parameters, you can't understand how it behaves by just computing its values for some particular choices of the parameters. In the case of Poisson probabilities, we will also see shortly that they form a powerful distribution in their own right, on an infinite set of values. | CommonCrawl |
You want to buy $n$ items from a shop. You can visit the shop several times and you don't have to buy all the items together.
The prices for the items are given in cents. When you are at the cashier, the total price for the items will be rounded to the nearest multiple of 5. This may allow you to save money by visiting the shop several times and grouping the items cleverly.
For example, if you want to buy three items with prices 11, 32 and 56 cents, you can visit the shop twice: first buy the 32 cent item (for 30 cents) and then buy the 11 and 56 cent items (for 65 cents). Now the price for all the items is only 95 cents.
You are given the prices for the items and your task is to find out the lowest possible total price for the items.
The first input line contains an integer $t$: the number of test cases. After this, the test cases are described as follows.
The first line contains an integer $n$: the number of items. The second line contains $n$ integers $p_1,p_2,\ldots,p_n$: the price for each item.
For each test case, output the lowest total price for the items. | CommonCrawl |
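As a sanity check on the worked example above, here is a sketch of the rounding rule (assuming standard rounding to the nearest multiple of 5: remainders 1–2 go down, 3–4 go up):

```python
def round_to_5(total):
    # Nearest multiple of 5: remainders 0-2 round down, 3-4 round up.
    return (total + 2) // 5 * 5

# The example: buy the 32-cent item alone, then the 11- and 56-cent items.
first_visit = round_to_5(32)        # 30
second_visit = round_to_5(11 + 56)  # 65
print(first_visit + second_visit)   # 95

# Buying all three items in one visit would cost more:
print(round_to_5(11 + 32 + 56))     # 100
```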
What is the Weightage of Inductance in GATE Exam?
In total, 1 question has been asked from the Inductance topic of the Electric Circuits subject in previous GATE papers. Average marks: 2.00.
Two identical coils each having inductance L are placed together on the same core. If an overall inductance of $\alpha$L is obtained by interconnecting these two coils, the minimum value of $\alpha$ is ________. | CommonCrawl |
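A hedged sketch of the usual reasoning, assuming the coils can be perfectly coupled ($k = 1$) since they share the same core — this is an illustration, not an official answer key. For two identical coils in series,

$$
M = k\sqrt{L \cdot L} = kL, \qquad L_{eq} = L + L \pm 2M = 2L \pm 2kL.
$$

The series-opposing connection gives $L_{eq} = 2L - 2kL$, which tends to $0$ as $k \to 1$; so the minimum attainable value of $\alpha$ is $0$.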
Does anyone know of any connections that have been made between the notions of cybernetics, autopoietic systems, or even George Spencer Brown's Laws of Form, and homotopy theory or category theory?
I suppose right now such connections might be somewhat tenuous, as the former is pretty philosophical and the latter is pretty formal, but it seems that there are perhaps connections to be made. Especially, with the notion of the distinctions in space that Spencer-Brown talks about, ideas of homology and homotopy occur to me. Also, possible connections with David Spivak's ontology logs and various ideas about categorical database type things. I'm also wondering if there are any connections between the aforementioned more philosophical topics and the internal logic of topoi, or even topological semantics.
Any references or ideas on this would be really appreciated, it's just kind of a shot in the dark. Perhaps the question is not well formed.
I suppose you've already seen some of Louis Kaufman's ruminations on Laws of Form in his On Knots?
Why "or even"? LoF is just Boolean algebra in new notation, no? That is easily related to category theory, see for instance at internal logic. On the other hand, "cybernetics" and "autopoeitic systems" is about complex systems akin to biological systems. This is way, way beyond what category theory/mathematics usually describes in any substantial detail.
Well, by "or even" I meant that I guess it's more likely that there is a connection, but it wouldn't be as interesting. However, I'm specifically thinking of the LoF ideas of differentiation in terms of homology, though again that might be sort of a trivial connection. He also introduces this notion of oscillation (p. 59) that causes me to ruminate on homology. Wondering if his "marks" on forms can be thought of as homology classes, or homotopy classes of maps, somehow embedding LoF inside of homotopy theory, though that might be sort of a silly idea. Additionally, I think Spencer-Brown would not appreciate your statement about LoF being just Boolean algebra in new notation, specifically with respect to the idea of "imaginary" truth values (i.e. the value of the statement "This statement is false" can be consistently labeled as "imaginary" to some effect, which I have not completely understood yet).
And regarding your statement about cybernetics, Niklas Luhmann specifically uses the ideas of LoF to describe at least one autopoietic system. I'm still in the process of studying this stuff, but I'm wondering then if these notions could be, at the very least, updated to correspond to the more fashionable language of categories. That's just a basic consideration though. There is the idea of a self-creating system, and the notion of a localization in a category to a category which is in fact equivalent to its overcategory occurs to me, making the localization an equivalence. Could this have anything to do with some kind of internal groupoid or something? At the very least we could certainly attempt to think of "operations" as arrows in a category, though this is a rough idea, and perhaps even associativity would fail here.
But that's easily checked. The Wikipedia page spells it out. LoF is not interesting for its mathematics, which is trivial, but because of it's style of speaking about mathematics.
Okay fair enough, except you still ignore the notion of second-degree equations, which is kind of what I'm finding interesting. But I guess you would argue that it has all been taken care of with Church's work on so-called restricted recursive arithmetic (as mentioned in the Wiki), which I had not heard of.
Let me turn this around to a positive statement: LoF is cool. But type theory contains it and is way cooler. And way more interesting ;-)

Let's just state some basics on type theory in the cool fashion of LoF:

* let's make the cool move of denoting the empty type, often denoted $\emptyset$, by no symbol at all.

* then negation of $a$ is denoted simply
  $$ a \to $$
  That's at least as cool as the
  $$ a \neg $$
  from LoF, isn't it :-)

* in this notation (assuming Boolean logic), the unit type is simply
  $$ \to $$
  (namely $\emptyset \to \emptyset$ if we did write out the empty type) which competes for coolness with LoF's
  $$ \neg $$

* and we have
  $$ \to \to \; = $$
  (namely $(\emptyset \to \emptyset) \to \emptyset = \emptyset$).

And there we go, the **Laws of Type**™

But the real fun is that, cool as this already is, this is but a tiny-tiny fragment of what mathematics is founded on. And the rest is even cooler, still.

Namely, this is just (Boolean) (-1)-type theory. The **Laws of (-1)Type**™. Next comes the **Laws of 0Type**™ which is where traditionally most of mathematics happens. Around here, we are fond of the full story: the **Laws of ∞Type**™.
Haha, indeed. I'm a little bit familiar with type theory. And I agree, almost all of LoF is not particularly mathematically interesting, though I still think it might be interesting to think about autopoietic systems from the point of view of category theory. But maybe this is nothing more interesting than what essentially boils down to set theory and logic, although the study of homotopy-type theory occurs to me, specifically the work of Steve Awodey, though I know next to nothing about that stuff, so I won't try to talk about it.
I will also check out the Laws of ∞Type, because I had not heard of that.
Let me expand, in a more serious tone.
Here is what I think what Spencer Brown was really after. (Judging just from my personal gut impression from reading the book. I may be entirely wrong.) It's something that I sympathize with. When I was young, I was spending hours and days (too many) thinking about something similar. At some point I gave it up – luckily – and started learning and then doing real science. By some cosmic chance, these days, decades after, it seems as if I am coming full circle back to those old days… up to some non-trivial holonomy.
What I mean is this: at times one may feel a deep miracle in the fact that we write symbols to paper and then these symbols carry meaning and information about reality.
Write your equations on the tiles of the floor. When you're done, wave a wand over the equations tell them to fly.
That feeling of magic ("wave a wand") when speaking about how formulas relate to reality is, I think, what drove Spencer Brown. It connects to what is probably a deep root in the human psyche: the notion of casting a spell, of saying words and creating reality.
In an age of science, we are used to disregard spells as a figment of our ancestor's imagination. But at the same time, we have realized much of what they were, roughly, dreaming of: we can write symbols on paper and predict the future from them – to some extent, say when we are computing the arrival of an asteroid or the time when the sun will burn out, or the time when our galaxy will collide with its neighbour. That's already amazing.
But that's just the physics aspect of it. From there it gets even more miraculous: by the unreasonable effectiveness of mathematics we find that many of these spells all follow from a handful of master spells. Einstein's equations and those of Yang-Mills theory. Apart from all the constants of nature in it, the standard model of particle physics is a simple mathematical formula that fits on two lines. Given all the things that follow from it, this is a rather mighty spell.
And then it gets even more miraculous still: as I have just written about with Mike in Quantum gauge field theory in Cohesive homotopy type theory (schreiber), at least some central general structure of that mighty spell is rooted in just the bare Laws of ∞Type.
The theme of this book is that a universe comes into being when a space is severed or taken apart.
I imagine that this was what Spencer Brown was after. A kind of creation myth of the world from algebra of symbols. I am not sure if this is a reasonable thing to do. But I think I do understand the inner conditions that could make one try to do this.
That Wheeler quote continues apparently as:

> Write your equations on the tiles of the floor. When you're done, wave a wand over the equations tell them to fly. Not one will sprout wings and fly. The universe flies, it has a life to it that no equation has.
Not that it's particularly deep, but since we are talking about it at all: I see that one Chris Holt observes the natural interpretation of Spencer Brown's "empty symbol" as the [[empty type]] in type theory, as in comment #7 above, _[here](https://groups.google.com/forum/?fromgroups=#!msg/sci.logic/KoHTFMM1c1k/WcSNn8ttWxcJ)_ in a comment on sci.logic.
Oh, sure. Thanks for catching that. All the better.
I am again in procrastination mood, so allow me to come back to the above game. But also, I need to finally learn to speak [[Coq]], not just read it. You can regard the following as a basic question about Coq. Or as a comment on _Laws of Form_ in type theory :-).

So I start with the latter: observe that it's kind of cute that the formal definition of the [[empty type]] as an [[inductive type]] is pretty much verbatim what Spencer Brown is suggesting: the absence of a symbol

    Inductive empty : Type := .

:-)

Just for distraction purposes I opened my Coq editor and tried to see if I can prove that $\emptyset \to \emptyset$ is equivalent to the unit type. But I get stuck already with such a kindergarten Coq-problem. Here is the code that I type:

    Require Import Homotopy.

    Inductive empty : Type := .

    Definition tzimtzum : (empty -> empty).
    intro H. exact H.
    Defined.

    Lemma contractible : is_contr (empty -> empty).
    Proof.
    exists tzimtzum.
    intro y.
    induction y.

After this I am puzzled: I expected the last line to finish off the proof (though that's using intuition more than actual reasoning about what the type theory engine does in the background). But instead after that last line Coq claims that the remaining subgoal is to prove

    empty

which of course I cannot. What's wrong with my proof?
That's slightly weird; I would expect the tactic "induction" to give you an error message, since $y$ does not belong to an inductive type so you can't do induction over it. Evidently Coq is guessing that you're trying to do something fancy and outsmarting itself.

Since your remaining goal is an equality of two functions, it's almost certain that what you need to use is function extensionality — there's almost no other way to prove in Coq that two functions are equal. Try "apply funext." and go from there.
If you replace $empty$ with some arbitrary other type $A$, then you get the error I expect: "Not an inductive product". But if you replace it with a type that is inductive, like $unit$ or $nat$, you get the error "$y$ is used in conclusion". I have no idea what Coq is trying to do. The manual is not helpful; in the description of the tactic "induction" it says "The type of the argument term must be an inductive constant", which is manifestly not true here.
Wow, thanks Mike, for the detailed reply. (I am only looking at this reply now, since I must force myself to take care of some other tasks. :-)

I'll try again a little later. Thanks again.
Coming back to this over a year later... haha. For what it's worth, I'm still not satisfied with all of this. I keep thinking about paradoxes these days, and the fact that paradoxes are not just dead ends or something, but rather, alternating systems. Paradoxes seem to encode the notion of movement or alternation in a way that no other static symbol or formulation can. That is, a paradox is slippery in the sense that it changes meaning upon being perceived. This is somehow almost its definition. We never experience a paradox as being both false and true simultaneously. We experience it as true, then false, then true again, then false again (in the sense that implication is a path parameterized by time as we experience it). As such, this feels to me something like an element in the fundamental group of... reality... haha! I mean, it's a loop that can't be closed, it doesn't resolve. It's like one of those little alternating guys in John Conway's Game of Life (or that weird little thing in lambda calculus that just gets longer every time you $\beta$-reduce it, or whatever it's called).

Okay, after rereading what I've written, I've come to the conclusion that I need to go to bed.
Complex structure on the six dimensional sphere from a spontaneous symmetry breaking, Journ. Math. Phys. 56, 043508-1-043508-21 (2015): journal version, current arXiv version.
Since this is obviously an important and groundbreaking result (if true), published in a physics journal, I am interested in whether it is accepted by mathematicians.
I am Gabor Etesi, the author of the current paper "Complex structure on the six dimensional sphere from a spontaneous symmetry breaking", Journ. Math. Phys. 56, 043508-1-043508-21 (2015). First of all I would like to thank you for the interest in my work on this classical problem. Because I have been asked by Andre Henriques, hereby I confirm that this published 2015 version is indeed completely independent of the wrong and withdrawn 2005 version available on arXiv. Therefore it is absolutely unnecessary to spend time with that version.
(ii) A complex structure on $S^6$ is then constructed as the Fourier expansion of the usual Samelson complex structure (regarded as spontaneously broken vacuum solution in the sense of item (i) above) on the exceptional Lie group $G_2$. The mathematical theory of this Fourier expansion is itself very useful and is contained in the text.
Please note that this is a completely new approach to this old problem, apparently without a predecessor.
Nevertheless, after this observation the proof proceeds as follows: $J_H$ is Fourier expanded and the corresponding ground mode, denoted by $J$ in the text after eq. (21), descends to $S^6$. The very important subtlety however is this (explained carefully in Section III): in our situation (i.e., Fourier expanding general sections of general vector bundles), there is NO canonical way to perform a Fourier expansion. Instead there is a "moduli space" of possible Fourier expansions resulting in inequivalent ground modes. This is because doing fiberwise integration does NOT commute with gauge transformations, hence Fourier expanding a gauge transformed (on $H$) section is NOT the same as gauge transforming (on $TS^6$) the ground mode of a Fourier expanded section. I construct in Lemma 5.1 a distinguished "$\alpha$-twisted" Fourier expansion whose ground mode $J$ coincides with $J_H$ itself.
I think that most of the concerns and uncertainty about the published version are related to the historical fact that the "relationship" between Yang--Mills theory (mathematically invented in the 1980's) and classical complex manifold theory, more precisely the Kodaira--Spencer deformation theory (invented in the 1950-60's), is not fully clarified. By this I mean that apparently the action of the gauge group on an (almost) complex manifold $(M,J)$ is ambiguous: it can describe either just a symmetry transformation of $(M,J)$ or an effective deformation of $(M,J)$. But these certainly should be carefully distinguished. My suggestion is formulated in the "Principle" of Section II (but this point might require more conceptual and less ad hoc work, I agree).
Not the answer you're looking for? Browse other questions tagged dg.differential-geometry complex-geometry or ask your own question.
Is there a complex structure on the 6-sphere?
Which almost complex manifolds admit a complex structure?
Do transverse foliations induce complex structure?
Does integrating a finite number over infinite time equal infinity?
Hi, I was wondering, if I integrate a finite number such as 3 over an infinite amount of time, would the result be infinity? Or does it simply approach infinity but never reach infinity? Thanks.
Or does it simply approach infinity but never reach infinity? Thanks.
This sentence doesn't make sense. Infinity isn't a number, it's not something you can reach.
If this limit is not unique and finite, we say that the integral diverges. We don't say that it approaches infinity or is infinity.
One reason that an integral might diverge is that its value grows without bound as $x$ grows without bound.
Finally, we do occasionally mention infinity, but the following is no more than a synonym for the above statements. We can say that the integral tends to infinity as $x$ tends to infinity ($x \to \infty$).
And this limit does not exist: the expression $c(x-a)$ diverges. $c(x - a)$ grows without bound as $x \to \infty$. $c(x - a) \to \infty$ as $x \to \infty$.
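Applied to the original question's constant (taking $c = 3$ and lower limit $a$), the statements above read:

```latex
\int_{a}^{x} 3 \, dt = 3(x - a),
\qquad
\lim_{x \to \infty} 3(x-a) \ \text{does not exist},
\qquad
3(x - a) \to \infty \ \text{as}\ x \to \infty .
```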
Wonderful, thank you, that makes sense, it's been 20 years since I studied that. The limit as x approaches infinity is the wording I was thinking of but how to apply it I couldn't remember. What you presented answered my question, thank you!
... This paper presents the development of a physics-based multiple-input-multiple-output algorithm for real-time feedback control of snowflake divertor (SFD) configurations on the National Spherical Torus eXperiment Upgrade (NSTX-U). A model of the SFD configuration response to applied voltages on the divertor control coils is first derived and then used, in conjunction with multivariable control synthesis techniques, to design an optimal state feedback controller for the configuration. To demonstrate the capabilities of the controller, a nonlinear simulator for axisymmetric shape control was developed for NSTX-U which simultaneously evolves the currents in poloidal field coils based upon a set of feedback-computed voltage commands, calculates the induced currents in passive conducting structures, and updates the plasma equilibrium by solving the free-boundary Grad-Shafranov problem. Closed-loop simulations demonstrate that the algorithm enables controlled operations in a variety of SFD configurations and provides capabilities for accurate tracking of time-dependent target trajectories for the divertor geometry. In particular, simulation results suggest that a time-varying controller which can properly account for the evolving SFD dynamical response is not only desirable but necessary for achieving acceptable control performance. The algorithm presented in this paper has been implemented in the NSTX-U Plasma Control System in preparation for future control and divertor physics experiments.
... I estimate the impact of public pre-kindergarten for 4-year-olds on the provision of private child care for younger children by considering New York City's 2014 Universal Pre-K expansion. Private child care facilities often care for children from infancy or toddlerhood through pre-K. A public option for older children could therefore affect availability, prices, or quality of care for younger children. This effect could be positive or negative depending on the structure of the child care market, the design of the public pre-K program, and parent preferences. I use a panel dataset covering all licensed child care facilities in New York City and a difference-in-differences strategy that compares changes over time for neighborhoods with more versus fewer new public pre-K sites. I estimate that the public pre-K program reduced the capacity for children younger than 2 years old at private child care centers by 2,700 seats. The entire decrease in capacity occurs in areas with high poverty, and this decline was not offset by an increase in provision in the home day care market. In complementary analysis, I find a within-center increase in public complaints and inspection violations for day care centers that are closer to new public pre-K sites, suggesting a decrease in quality due to the increased competition from public pre-K. A back-of-the-envelope calculation indicates that for every seven 4-year-olds who shifted from day care centers to public pre-K, there was a reduction of one day care center seat for children under the age of 2.
October, November and December : Brexit centre stage at last.
... This report reviews the history of juvenile justice in New Jersey, and details racial disparities in police contact and sentencing. It includes six policy proposals to "fundamentally transform" the juvenile justice system in New Jersey.
Contributors: Guttenfelder, W., Kaye, S.M., Kreite, D.M., Bell, R.E., Diallo, A., LeBlanc, B.P., McKee, G.R., Podesta, M., Sabbagh, S.A., Smith, D.R.
... Transport analysis, ion-scale turbulence measurements, and initial linear and nonlinear gyrokinetic simulations are reported for a transport validation study based on low aspect ratio NSTX-U L-mode discharges. The relatively long, stationary L-modes enabled by the upgraded centerstack provide a more ideal target for transport validation studies that were not available during NSTX operation. Transport analysis shows that anomalous electron transport dominates energy loss while ion thermal transport is well described by neoclassical theory. Linear gyrokinetic GYRO analysis predicts that ion temperature gradient (ITG) modes are unstable around normalized radii $\rho$=0.6-0.8, although $E \times B$ shearing rates are larger than the linear growth rates over much of that region. Deeper in the core ($\rho$=0.4-0.6), electromagnetic microtearing modes (MTM) are unstable as a consequence of the relatively high beta and collisionality in these particular discharges. Consistent with the linear analysis, local, nonlinear ion-scale GYRO simulations predict strong ITG transport at $\rho$=0.76, whereas electromagnetic MTM transport is important at $\rho$=0.47. The prediction of ion-scale turbulence is consistent with 2D beam emission spectroscopy (BES) that measures the presence of broadband ion-scale fluctuations. Interestingly, the BES measurements also indicate the presence of bi-modal poloidal phase velocity propagation that could be indicative of two different turbulence types. However, in the region between ($\rho$=0.56, 0.66), ion-scale simulations are strongly suppressed by the locally large $E \times B$ shear. Instead, electron temperature gradient (ETG) turbulence simulations predict substantial transport, illustrating electron-scale contributions can be important in low aspect ratio L-modes, similar to recent analysis at conventional aspect ratio.
However, agreement within experimental uncertainties has not been demonstrated, which requires additional simulations to test parametric sensitivities. The potential need to include profile-variation effects (due to the relatively large value of $\rho_*$=$\rho_i$/a at low aspect ratio), including electromagnetic and possibly multi-scale effects, is also discussed.
Using the "enthalpy-based thermal evolution of loops" (EBTEL) model, we investigate the hydrodynamics of the plasma in a flaring coronal loop in which heat conduction is limited by turbulent scattering of the electrons that transport the thermal heat flux. The EBTEL equations are solved analytically in each of the two (conduction-dominated and radiation-dominated) cooling phases. Comparison of the results with typical observed cooling times in solar flares shows that the turbulent mean free-path $\lambda_T$ lies in a range corresponding to a regime in which classical (collision-dominated) conduction plays at most a limited role. We also consider the magnitude and duration of the heat input that is necessary to account for the enhanced values of temperature and density at the beginning of the cooling phase and for the observed cooling times. We find through numerical modeling that in order to produce a peak temperature $\simeq 1.5 \times 10^7$~K and a 200~s cooling time consistent with observations, the flare heating profile must extend over a significant period of time; in particular, its lingering role must be taken into consideration in any description of the cooling phase. Comparison with observationally-inferred values of post-flare loop temperatures, densities, and cooling times thus leads to useful constraints on both the magnitude and duration of the magnetic energy release in the loop, as well as on the value of the turbulent mean free-path $\lambda_T$. | CommonCrawl |
I was told shrouded propellers are more efficient because tip vortices are eliminated by the wall, which would imply no induced drag, but apparently that is wrong (Do ducted fans eliminate induced drag?). Therefore I've been trying to figure out why they are still more efficient than an open propeller despite still having induced drag. It must have less induced drag.
However unlike winglets the walls don't have a pressure difference on either side (it's not an airfoil) therefore the vortex can't be there.
So the vortex must be around the whole wall because above the wall the pressure is low and below it the pressure is high.
But this doesn't change the effective wingspan so why does it have less induced drag?
Therefore induced drag is minimized. Is this correct?
Basically, yes. The difference between shrouded and unshrouded propeller is that the shrouded one can produce uniform thrust across the diameter, while for the unshrouded one the thrust decreases near the tips.
That way a shrouded propeller accelerates more air than an unshrouded one of the same diameter. This air therefore needs to be accelerated to a lower speed, and so carries away less kinetic energy, requiring less induced power¹.
However, diameter can be varied, so the efficiency comparison is not that straightforward. When the propeller spins relatively slowly, making it larger is better, similarly to how increasing wing span is better, aerodynamically, than adding winglets.
However increasing the speed of the tips increases parasite drag, especially if it becomes supersonic. And since increasing size while maintaining angular speed increases the orbital speed of the tips, increasing size only helps to a certain point. That's when shrouds become useful.
¹ In propellers and rotors it is called induced power rather than induced drag, because it counts directly against the engine power. It also describes the physics better, since in both cases it is the work that is done on the air by the reaction to the generated lift/thrust.
A close shroud, possibly even attached to the propeller tips, of sufficient width will stop these vortices. However it may not stop all lateral flow because of the helical flow of the propwash. The helical flow can be reduced with static blades much like those used in axial-flow compressors on gas-turbine engines. The shroud does add parasitic and form drag but it most definitely does reduce induced drag.
No, the vortices are trapped in the tip clearance.
What if there's no "outside" at all to your setup? Imagine a theoretical scenario where the entire space outside the duct is solid to $\infty$. Where are the vortices now? They could only be within the tip clearance.
I just realized this paper from another question is the perfect answer to this question.
And your assumption that the shroud somehow makes the downwash uniform is also wrong. Note that although the drawing in the statement of the problem or this paper is for a two-blade shrouded fan, even a fan with a very high solidity factor, e.g. 0.8~0.9, as used in high bypass-ratio turbofans, does not equalize the fan wake, and that equalization happens only due to shear friction between the infinitesimal pockets of air themselves.
Found this CFD of a turbofan's fan.
Not the answer you're looking for? Browse other questions tagged aerodynamics propeller efficiency fluid-mechanics wing-tip-vortex or ask your own question.
Why do propeller blades not have winglets?
Why don't more airplanes incorporate spiroid winglets?
What crossed over at the electroweak crossover?
It's been known for quite some time now that the electroweak "transition" in the early universe is first-order for a Higgs mass of less than about 75 GeV, but for a larger Higgs mass (including the 125 GeV mass that appears to describe our universe), the transition goes away entirely and becomes a crossover at which no physical quantities change non-analytically. (E.g. see here and here.) If I understand the first reference correctly, in the $m_H < 75$ GeV regime, the simplest quantity that jumps discontinuously across the transition is the magnetic screening length, or equivalently its inverse, the "magnetic screening mass" (although the mass remains strictly positive both above and below the transition).
But I've never heard anyone actually identify which two physical quantities cross over at the electroweak "crossover" for $m_H > 75$ GeV. What are they? Put another way, if everything changes analytically as a function of temperature in the heavy-Higgs regime, then on what physical basis can we identify one particular temperature as the "crossover temperature"?
The most natural order parameter for the EW phase transition is the square of the Higgs field, $\langle H^aH^a\rangle$, because in the mean field (weak coupling) limit this is just the VEV squared. To distinguish the order of the phase transition we can study fluctuations (order parameter susceptibilities), for example $$ \langle [H^aH^a-\langle H^aH^a\rangle]^2 \rangle $$ In a cross over transitions fluctuations peak at some pseudo-critical temperature, but they do not diverge as the volume $V\to\infty$.
Note that for a sharp phase transition all possible order parameters are non-analytic at the same $T_c$, but in a crossover transition different order parameters may give different pseudo-critical temperatures. This will obviously get worse if the crossover is broad, which is the case as one goes away from the critical endpoint.
Also note that the electroweak transition does not involve a change of symmetry, and $\langle H^aH^a\rangle$ is not a sharp order parameter in the sense that it is not zero in both phases. This is, of course, the reason why the transition can have an endpoint in the first place.
Not the answer you're looking for? Browse other questions tagged quantum-field-theory particle-physics higgs symmetry-breaking electroweak or ask your own question.
Could sphaleron-induced proton decay also cause vacuum decay?
Are there massless bosons at scales above electroweak scale?
Possible intermediate scales of baryogenesis?
How are new particles theorized and proven to exist?
What did electroweak symmetry breaking actually look like?
Could the electroweak spontaneous symmetry breaking explain dark matter and dark energy?
I'm confused about the notation and don't understand what the zeroes inside the bracket mean. This is mentioned in the context of the initial conditions for a time-dependent RG analysis, so does the $0$ represent $t = 0$? If so, why does the statement follow from $\alpha_k$ being real?
$|\alpha_k\rangle$ is the usual coherent state.
Browse other questions tagged quantum-mechanics quantum-field-theory notation coherent-states or ask your own question.
How are matrices used to represent quantities, and what is the meaning of a matrix?
What is the symmetric multiplication of operators?
mxnet.gluon.contrib: Contrib neural network module.
The Gluon Contrib API, defined in the gluon.contrib package, provides many useful experimental APIs for new features. This is a place for the community to try out the new features, so that feature contributors can receive feedback.
In the rest of this document, we list routines provided by the gluon.contrib package.
Concurrent: Lays Blocks concurrently.
HybridConcurrent: Lays HybridBlocks concurrently.
Identity: Block that passes through the input directly.
SparseEmbedding: Turns non-negative integers (indexes/tokens) into dense vectors of fixed size.
PixelShuffle1D: Pixel-shuffle layer for upsampling in 1 dimension.
PixelShuffle2D: Pixel-shuffle layer for upsampling in 2 dimensions.
PixelShuffle3D: Pixel-shuffle layer for upsampling in 3 dimensions.
VariationalDropoutCell: Applies Variational Dropout on base cell.
Conv1DRNNCell: 1D Convolutional RNN cell.
Conv2DRNNCell: 2D Convolutional RNN cell.
Conv1DLSTMCell: 1D Convolutional LSTM network cell.
Conv2DLSTMCell: 2D Convolutional LSTM network cell.
Conv3DLSTMCell: 3D Convolutional LSTM network cell.
Conv1DGRUCell: 1D Convolutional Gated Rectified Unit (GRU) network cell.
Conv2DGRUCell: 2D Convolutional Gated Rectified Unit (GRU) network cell.
Conv3DGRUCell: 3D Convolutional Gated Rectified Unit (GRU) network cell.
LSTMPCell: Long-Short Term Memory Projected (LSTMP) network cell.
IntervalSampler: Samples elements from [0, length) at fixed intervals.
WikiText2: WikiText-2 word-level dataset for language modeling, from Salesforce research.
WikiText103: WikiText-103 word-level dataset for language modeling, from Salesforce research.
Block that passes through the input directly.
This block can be used in conjunction with HybridConcurrent block for residual connection.
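As a rough illustration of the residual-connection idea (a plain-NumPy sketch, not the actual Gluon API): a HybridConcurrent-style combination runs each child on the same input and concatenates the results, with Identity passing the input through unchanged:

```python
import numpy as np

# Stand-ins: "branch" plays the role of a learned child Block,
# "identity" passes the input through unchanged, like Identity.
def branch(x):
    return 2.0 * x

def identity(x):
    return x

# Run both children on the same input and concatenate the
# results along the channel axis, as HybridConcurrent would.
x = np.ones((2, 3))
out = np.concatenate([branch(x), identity(x)], axis=1)
print(out.shape)  # (2, 6)
```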
This SparseBlock is designed for distributed training with extremely large input dimension. Both weight and gradient w.r.t. weight are RowSparseNDArray .
input_dim (int) – Size of the vocabulary, i.e. maximum integer index + 1.
output_dim (int) – Dimension of the dense embedding.
dtype (str or np.dtype, default 'float32') – Data type of output embeddings.
weight_initializer (Initializer) – Initializer for the embeddings matrix.
data: (N-1)-D tensor with shape: (x1, x2, ..., xN-1) .
out: N-D tensor with shape: (x1, x2, ..., xN-1, output_dim) .
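The documented shape behavior can be sketched with a plain-NumPy lookup (illustrative only; the real layer stores the weight and its gradient as RowSparseNDArrays):

```python
import numpy as np

input_dim, output_dim = 10, 4          # vocabulary size, embedding size
weight = np.random.rand(input_dim, output_dim)

# data: (N-1)-D tensor of integer indices, here shape (2, 3)
data = np.array([[1, 3, 5],
                 [7, 0, 2]])

# out: indexing appends the embedding axis -> shape (2, 3, output_dim)
out = weight[data]
print(out.shape)  # (2, 3, 4)
```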
Standard BN implementations only normalize the data within each device. SyncBN normalizes the input within the whole mini-batch. We follow the implementation described in the paper.
Note: Current implementation of SyncBN does not support FP16 training. For FP16 inference, use standard nn.BatchNorm instead of SyncBN.
in_channels (int, default 0) – Number of channels (feature maps) in input data. If not specified, initialization will be deferred to the first time forward is called and in_channels will be inferred from the shape of input data.
momentum (float, default 0.9) – Momentum for the moving average.
epsilon (float, default 1e-5) – Small float added to variance to avoid dividing by zero.
center (bool, default True) – If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale (bool, default True) – If True, multiply by gamma . If False, gamma is not used. When the next layer is linear (also e.g. nn.relu ), this can be disabled since the scaling will be done by the next layer.
use_global_stats (bool, default False) – If True, use global moving statistics instead of local batch-norm. This will force change batch-norm into a scale shift operator. If False, use local batch-norm.
beta_initializer (str or Initializer , default 'zeros') – Initializer for the beta weight.
gamma_initializer (str or Initializer , default 'ones') – Initializer for the gamma weight.
moving_mean_initializer (str or Initializer , default 'zeros') – Initializer for the moving mean.
moving_variance_initializer (str or Initializer , default 'ones') – Initializer for the moving variance.
data: input tensor with arbitrary shape.
out: output tensor with the same shape as data .
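The difference between per-device and synchronized statistics can be sketched in plain NumPy (an illustration of the idea, not the actual implementation):

```python
import numpy as np

x = np.random.RandomState(0).randn(8, 3)  # mini-batch of 8 samples, 3 channels
dev0, dev1 = x[:4], x[4:]                 # batch split across two "devices"

# Standard BN: each device normalizes with its own batch statistics.
bn0 = (dev0 - dev0.mean(axis=0)) / np.sqrt(dev0.var(axis=0) + 1e-5)

# SyncBN: mean and variance are computed over the whole mini-batch.
sync = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)

# The per-device result differs from the synchronized one.
print(np.allclose(bn0, sync[:4]))  # False
```

The gap between the two grows as the per-device batch shrinks, which is why SyncBN helps for tasks trained with small per-device batches.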
Pixel-shuffle layer for upsampling in 1 dimension.
Pixel-shuffling is the operation of taking groups of values along the channel dimension and regrouping them into blocks of pixels along the W dimension, thereby effectively multiplying that dimension by a constant factor in size.
For example, a feature map of shape \((fC, W)\) is reshaped into \((C, fW)\) by forming little value groups of size \(f\) and arranging them in a grid of size \(W\).
factor (int or 1-tuple of int) – Upsampling factor, applied to the W dimension.
data: Tensor of shape (N, f*C, W).
out: Tensor of shape (N, C, W*f).
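In plain NumPy, one common convention for this reshape is the following (frameworks differ in how they group the channel dimension, so treat this as a sketch of the operation rather than the exact gluon layout):

```python
import numpy as np

N, C, f, W = 1, 2, 2, 3
x = np.arange(N * f * C * W).reshape(N, f * C, W)

# Group channels as (C, f), then interleave each group of f values along W.
out = x.reshape(N, C, f, W).transpose(0, 1, 3, 2).reshape(N, C, W * f)
print(out[0, 0])  # [0 3 1 4 2 5]
```

Each output channel is built from `f` input channels whose values are interleaved along W, multiplying that dimension by `f`.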
Perform pixel-shuffling on the input.
Pixel-shuffle layer for upsampling in 2 dimensions.
Pixel-shuffling is the operation of taking groups of values along the channel dimension and regrouping them into blocks of pixels along the H and W dimensions, thereby effectively multiplying those dimensions by a constant factor in size.
For example, a feature map of shape \((f^2 C, H, W)\) is reshaped into \((C, fH, fW)\) by forming little \(f \times f\) blocks of pixels and arranging them in an \(H \times W\) grid.
Pixel-shuffling together with regular convolution is an alternative, learnable way of upsampling an image by arbitrary factors. It is reported to help overcome checkerboard artifacts that are common in upsampling with transposed convolutions (also called deconvolutions). See the paper Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network for further details.
factor (int or 2-tuple of int) – Upsampling factors, applied to the H and W dimensions, in that order.
data: Tensor of shape (N, f1*f2*C, H, W).
out: Tensor of shape (N, C, H*f1, W*f2).
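A NumPy sketch of the 2D case, under one common channel-grouping convention (again, not necessarily the exact gluon layout):

```python
import numpy as np

N, C, f1, f2, H, W = 1, 1, 2, 2, 2, 2
x = np.arange(N * f1 * f2 * C * H * W).reshape(N, f1 * f2 * C, H, W)

# Split channels into (C, f1, f2) and distribute the two factors over H and W,
# so each spatial position becomes an f1 x f2 block of pixels.
out = (x.reshape(N, C, f1, f2, H, W)
        .transpose(0, 1, 4, 2, 5, 3)
        .reshape(N, C, H * f1, W * f2))
print(out[0, 0])
```

Here the four input channels of a single spatial position are laid out as a 2×2 block in the upsampled output.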
Pixel-shuffle layer for upsampling in 3 dimensions.
Pixel-shuffling (or voxel-shuffling in 3D) is the operation of taking groups of values along the channel dimension and regrouping them into blocks of voxels along the D, H and W dimensions, thereby effectively multiplying those dimensions by a constant factor in size.
For example, a feature map of shape \((f^3 C, D, H, W)\) is reshaped into \((C, fD, fH, fW)\) by forming little \(f \times f \times f\) blocks of voxels and arranging them in a \(D \times H \times W\) grid.
factor (int or 3-tuple of int) – Upsampling factors, applied to the D, H and W dimensions, in that order.
data: Tensor of shape (N, f1*f2*f3*C, D, H, W).
out: Tensor of shape (N, C, D*f1, H*f2, W*f3).
Contrib recurrent neural network module.
prefix (str, default 'conv_rnn_') – Prefix for name of layers (and name of weight if params is None).
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout . For example, for layout 'NCHW' the shape should be (C, H, W).
i2h_pad (int or tuple of int, default (0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1)) – Recurrent convolution dilate.
conv_layout (str, default 'NCHW') – Layout for all convolution inputs, outputs and weights. Options are 'NCHW' and 'NHWC'.
input_shape (tuple of int) – Input tensor shape at each time step for each sample, excluding dimension of the batch size and sequence length. Must be consistent with conv_layout . For example, for layout 'NCDHW' the shape should be (C, D, H, W).
i2h_pad (int or tuple of int, default (0, 0, 0)) – Pad for input convolution.
i2h_dilate (int or tuple of int, default (1, 1, 1)) – Input convolution dilate.
h2h_dilate (int or tuple of int, default (1, 1, 1)) – Recurrent convolution dilate.
conv_layout (str, default 'NCDHW') – Layout for all convolution inputs, outputs and weights. Options are 'NCDHW' and 'NDHWC'.
1D Convolutional LSTM network cell.
activation (str or Block, default 'tanh') – Type of activation function used in \(c^\prime_t\). If argument type is string, it's equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_lstm_') – Prefix for name of layers (and name of weight if params is None).
2D Convolutional LSTM network cell.
3D Convolutional LSTM network cell.
1D Convolutional Gated Recurrent Unit (GRU) network cell.
activation (str or Block, default 'tanh') – Type of activation function used in \(n_t\). If argument type is string, it's equivalent to nn.Activation(act_type=str). See Activation() for available choices. Alternatively, other activation blocks such as nn.LeakyReLU can be used.
prefix (str, default 'conv_gru_') – Prefix for name of layers (and name of weight if params is None).
2D Convolutional Gated Recurrent Unit (GRU) network cell.
3D Convolutional Gated Recurrent Unit (GRU) network cell.
Applies Variational Dropout on base cell. (https://arxiv.org/pdf/1512.05287.pdf, https://www.stat.berkeley.edu/~tsmoon/files/Conference/asru2015.pdf).
Variational dropout uses the same dropout mask across time-steps. It can be applied to RNN inputs, outputs, and states. The masks for them are not shared.
The dropout mask is initialized when stepping forward for the first time and will remain the same until reset() is called. Thus, if using the cell and stepping manually without calling unroll(), reset() should be called after each sequence.
base_cell (RecurrentCell) – The cell on which to perform variational dropout.
drop_inputs (float, default 0.) – The dropout rate for inputs. Won't apply dropout if it equals 0.
drop_states (float, default 0.) – The dropout rate for state inputs on the first state channel. Won't apply dropout if it equals 0.
drop_outputs (float, default 0.) – The dropout rate for outputs. Won't apply dropout if it equals 0.
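The key idea — draw the dropout mask once and reuse it at every time step — can be sketched in a few lines of NumPy (illustration only; all names and shapes here are made up):

```python
import numpy as np

rng = np.random.RandomState(1)
rate, T, batch, dim = 0.5, 4, 2, 3

# One inverted-dropout mask, drawn once per sequence.
mask = (rng.rand(batch, dim) >= rate) / (1.0 - rate)

xs = [np.ones((batch, dim)) for _ in range(T)]
dropped = [x * mask for x in xs]  # the same units are dropped at every step

# A fresh mask would only be drawn for the next sequence (cf. reset() above).
```

Ordinary dropout would instead draw a fresh mask inside the loop, dropping different units at each time step.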
length (int) – Number of steps to unroll.
If inputs is a single Symbol (usually the output of Embedding symbol), it should have shape (batch_size, length, ...) if layout is 'NTC', or (length, batch_size, ...) if layout is 'TNC'.
If inputs is a list of symbols (usually output of previous unroll), they should all have shape (batch_size, ...).
begin_state (nested list of Symbol, optional) – Input states created by begin_state() or output state of another cell. Created from begin_state() if None .
layout (str, optional) – layout of input symbol. Only used if inputs is a single Symbol.
merge_outputs (bool, optional) – If False , returns outputs as a list of Symbols. If True , concatenates output across time steps and returns a single symbol with shape (batch_size, length, ...) if layout is 'NTC', or (length, batch_size, ...) if layout is 'TNC'. If None , output whatever is faster.
valid_length (Symbol, NDArray or None) – valid_length specifies the length of the sequences in the batch without padding. This option is especially useful for building sequence-to-sequence models where the input and output sequences would potentially be padded. If valid_length is None, all sequences are assumed to have the same length. If valid_length is a Symbol or NDArray, it should have shape (batch_size,). The ith element will be the length of the ith sequence in the batch. The last valid state will be returned and the padded outputs will be masked with 0. Note that valid_length must be smaller than or equal to length.
outputs (list of Symbol or Symbol) – Symbol (if merge_outputs is True) or list of Symbols (if merge_outputs is False) corresponding to the output from the RNN from this unrolling.
states (list of Symbol) – The new state of this RNN after this unrolling. The type of this symbol is the same as the output of begin_state().
where \(r_t\) is the projected recurrent activation at time \(t\), \(h_t\) is the hidden state at time \(t\), \(c_t\) is the cell state at time \(t\), \(x_t\) is the input at time \(t\), and \(i_t\), \(f_t\), \(g_t\), \(o_t\) are the input, forget, cell, and out gates, respectively.
hidden_size (int) – Number of units in cell state symbol.
projection_size (int) – Number of units in output symbol.
i2h_weight_initializer (str or Initializer) – Initializer for the input weights matrix, used for the linear transformation of the inputs.
h2h_weight_initializer (str or Initializer) – Initializer for the recurrent weights matrix, used for the linear transformation of the hidden state.
h2r_weight_initializer (str or Initializer) – Initializer for the projection weights matrix, used for the linear transformation of the recurrent state.
i2h_bias_initializer (str or Initializer, default 'lstmbias') – Initializer for the bias vector. By default, bias for the forget gate is initialized to 1 while all other biases are initialized to zero.
h2h_bias_initializer (str or Initializer) – Initializer for the bias vector.
prefix (str, default 'lstmp_') – Prefix for name of Blocks (and name of weight if params is None).
params (Parameter or None) – Container for weight sharing between cells. Created if None .
data: input tensor with shape (batch_size, input_size) .
states: a list of two initial recurrent state tensors, with shape (batch_size, projection_size) and (batch_size, hidden_size) respectively.
out: output tensor with shape (batch_size, num_hidden) .
next_states: a list of two output recurrent state tensors. Each has the same shape as states .
Samples elements from [0, length) at fixed intervals.
length (int) – Length of the sequence.
interval (int) – The number of items to skip between two samples.
rollover (bool, default True) – Whether to start again from the first skipped item after reaching the end. If true, this sampler would start again from the first skipped item until all items are visited. Otherwise, iteration stops when end is reached and skipped items are ignored.
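Reading interval as the stride between consecutive samples, a minimal pure-Python sketch of this sampler (an illustration, not the gluon implementation itself) is:

```python
def interval_sampler(length, interval, rollover=True):
    """Yield 0, interval, 2*interval, ...; with rollover, restart at 1, 2, ...
    until every index in [0, length) has been visited."""
    starts = range(interval if rollover else 1)
    for start in starts:
        for idx in range(start, length, interval):
            yield idx

print(list(interval_sampler(10, 3)))                  # [0, 3, 6, 9, 1, 4, 7, 2, 5, 8]
print(list(interval_sampler(10, 3, rollover=False)))  # [0, 3, 6, 9]
```

With rollover enabled every index is visited exactly once; without it, the skipped indices are simply dropped.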
WikiText-2 word-level dataset for language modeling, from Salesforce research.
Each sample is a vector of length equal to the specified sequence length. At the end of each sentence, an end-of-sentence token is added.
root (str, default $MXNET_HOME/datasets/wikitext-2) – Path to temp folder for storing data.
segment (str, default 'train') – Dataset segment. Options are 'train', 'validation', 'test'.
vocab (Vocabulary, default None) – The vocabulary to use for indexing the text dataset. If None, a default vocabulary is created.
seq_len (int, default 35) – The sequence length of each sample, regardless of the sentence boundary.
WikiText-103 word-level dataset for language modeling, from Salesforce research.
root (str, default $MXNET_HOME/datasets/wikitext-103) – Path to temp folder for storing data.
Green's conjecture says that the vanishing of syzygies of a canonical curve is equivalent to the non-existence of certain linear series on the curve. Turning things around, we might hope that many syzygies imply the existence of many linear systems. In this talk I will survey our knowledge of syzygies of canonical curves and then report on work of Hanieh Keneshlou, who used this approach to study the scheme of curves of genus 11 with several maps of degree 6 to $\mathbb P^1$.
I believe I understand primary structure, but I am not sure what the difference between secondary and tertiary structure is. Tertiary structure seems to be the 3D structure of a polypeptide, but I am not at all clear about secondary structure.
Proteins are made up of long chains of amino acids covalently joined together. The order of amino acids is the primary structure of a protein. In physiologic conditions, these long chains of amino acids fold into three dimensional shapes. The terms secondary and tertiary structure are used to refer to specific aspects of the three dimensional shape.
Both secondary and tertiary structure refer to the three-dimensional shape of a protein. Secondary structure is the regular repeating patterns, generally stabilized by hydrogen bonds between the NH and CO groups of the peptide backbone. Tertiary structure can be thought of as the way secondary structural elements fold together to create the overall shape of the protein.
Secondary structure refers to regular repeating patterns stabilized by hydrogen bonding between the NH and CO groups of the peptide bond. Typically textbooks will focus on the $\alpha$-helix and the $\beta$-sheet.
The $\beta$-sheet forms a very different secondary structure, but it's also stabilized by hydrogen bonding between NH and CO groups of the peptide bond.
Tertiary structure is the larger three-dimensional structure of a protein in its environment. It can be useful to think of tertiary structure as the way different secondary structural elements fold together to form the full, active protein. Notice in this figure, also from Berg, the way the elements of secondary structure (the $\alpha$-helices) fold into an overall three dimensional shape that can nicely cradle the heme group.
Secondary structure proteins are repeating chains of alpha helices and beta sheets, like spider silk. Tertiary structure proteins are globular proteins, like enzymes.
Abstract: Recently a broad class of superconformal inflationary models was found leading to a universal observational prediction $n_s=1-2/N$ and $r=12/N^2$. Here we generalize this class of models by introducing a parameter $\alpha$ inversely proportional to the curvature of the inflaton Kahler manifold. In the small curvature (large $\alpha$) limit, the observational predictions of this class of models coincide with the predictions of generic chaotic inflation models. However, for sufficiently large curvature (small $\alpha$), the predictions converge to the universal attractor regime with $n_s=1-2/N$ and $r=12\alpha/N^2$, which corresponds to the part of the $n_s-r$ plane favored by the Planck data.
where $x = (x^1,\ldots,x^n)$, $y = (y^1,\ldots,y^n)$ and each $x^i$ and $y^i$ are $O(\log n)$ bit strings. In [Raz, McKenzie FOCS 1997] and [Göös, et al. FOCS 2015], the simulation algorithm is implemented for functions composed with the Indexing gadget, where the size of the gadget is polynomial in the input length of the outer function $f$.
Added a citation to an independent work of Chattopadhyay, et al. who also prove a simulation theorem using the Inner Product gadget.
Next, $y=4x+6$ has gradient $4$ and $y$-intercept $6$ - so it slopes up steeply and crosses the $y$ axis at $(0,6)$.
You can already see that the height of the triangle is $8$ units, from $-2$ to $6$ on the $y$ axis.
Suppose $y=-2$ and $y=4x+6$ intersect at some point $(a,b)$.
$(a,b)$ is on the line $y=-2$, so $b=-2$.
$(a,b)$ is on the line $y=4x+6$, so $b=4a+6$. But $b=-2$, so $-2=4a+6\Rightarrow -8=4a\Rightarrow -2=a$.
So $y=-2$ and $y=4x+6$ intersect at $(-2,-2)$.
Suppose $y=-2$ and $x+y=6$ intersect at some point $(c,d)$.
$(c,d)$ is on the line $y=-2$, so $d=-2$.
$(c,d)$ is on the line $x+y=6$, so $c+d=6$. But $d=-2$, so $c-2=6\Rightarrow c=8$.
So $y=-2$ and $x+y=6$ intersect at $(8,-2)$. This is shown below.
So the base of the triangle is from $x=-2$ to $x=8$, which is $10$ units.
So the area of the triangle is $\frac12\times8\times10=40$ square units.
The gradient of the line $x+y=6$ is $-1$, so when going down $8$ units from the top to the bottom of the triangle, it must also go along $8$ units, as shown on the right.
The line $y=4x+6$ has gradient $4$, so it is $4$ times steeper. This means that going down by the same amount corresponds to going $4$ times less far along.
So the base of the blue triangle shown below is only $8\div4=2$ units.
We could find the areas of the green and blue triangles separately, or of the whole triangle, whose base is $10$ units in total.
Whole triangle: $32+8=40$ square units, or $\frac12\times8\times10=40$ square units.
Plotting the lines carefully, you can find the dimensions of the triangle.
Now, you can count that the base is $10$ units and the height is $8$ units, so the area is $\frac12\times8\times10=40$ square units.
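The three intersection points and the final area are easy to check programmatically; here is a short Python verification using Cramer's rule and the cross-product area formula:

```python
def intersect(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

A = intersect(0, 1, -2, -4, 1, 6)   # y = -2      with  y = 4x + 6  -> (-2, -2)
B = intersect(0, 1, -2, 1, 1, 6)    # y = -2      with  x + y = 6   -> (8, -2)
C = intersect(-4, 1, 6, 1, 1, 6)    # y = 4x + 6  with  x + y = 6   -> (0, 6)

def area(p, q, r):
    """Half the magnitude of the cross product of two edge vectors."""
    return abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1])) / 2

print(A, B, C, area(A, B, C))  # (-2.0, -2.0) (8.0, -2.0) (0.0, 6.0) 40.0
```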
Klimina L. A., Lokshin B. Y.
An autonomous dynamical system with one degree of freedom with a cylindrical phase space is studied. The mathematical model of the system is given by a second-order differential equation that contains terms responsible for nonconservative forces. A coefficient $\alpha$ at these terms is supposed to be a small parameter of the model. So the system is close to a Hamiltonian one.
In the first part of the paper, it is additionally supposed that one of the nonconservative terms corresponds to dissipative or to antidissipative forces, and the coefficient $b$ at this term is a varied parameter. The Poincaré – Pontryagin approach is used to construct a bifurcation diagram of periodic trajectories with respect to the parameter $b$ for sufficiently small values of $\alpha$.
In the second part of the paper, a system with nonconservative terms of general form is studied. Two supplementary systems of special form are constructed. Results of the first part of the paper are applied to these systems. Comparison of bifurcation diagrams for these supplementary systems has allowed deriving necessary conditions for the existence of periodic trajectories in the initial system for sufficiently small $\alpha$.
The third part of the paper contains an example of the study of periodic trajectories of one system, which, for zero value of the small parameter, coincides with a Hamiltonian system $H_0$. It is proved that there exist periodic trajectories which do not satisfy the Poincaré – Pontryagin sufficient conditions for emergence of periodic trajectories from trajectories of the system $H_0$.
When the axiom of choice is not available, the concept of cardinality is somewhat more subtle, and there is in general no fully satisfactory solution of the cardinal assignment problem. Rather, in ZF one works directly with the equinumerosity relation.
In ZF, the axiom of choice is equivalent to the assertion that the cardinals are linearly ordered. This is because for every set $X$, there is a smallest ordinal $\alpha$ that does not inject into $X$, the Hartogs number of $X$, and conversely, if $X$ injects into $\alpha$, then $X$ would be well-orderable.
The Dedekind finite sets are those not equinumerous with any proper subset. Although in ZFC this is an equivalent characterization of the finite sets, in ZF the two concepts of finite differ: every finite set is Dedekind finite, but it is consistent with ZF that there are infinite Dedekind finite sets. An amorphous set is an infinite set, all of whose subsets are either finite or co-finite.
This page was last modified on 4 January 2012, at 09:22.
subject to the requirements that $0 \le i \le j \le n$ and $j \le i+w$. Or, in other words, I want to find the sequential window of width at most $w$ such that the sum of the numbers in the window is maximized.
Is there an $O(n)$ time algorithm for this task? Or, what is the most efficient algorithm for this task?
There is a trivial $O(nw)$ time algorithm, but I can't see how to do better than that. If $w=\infty$, Kadane's algorithm solves this in $O(n)$ time, but I can't see how to generalize it to my problem. So, can we do better than $O(nw)$ time?
I encountered this problem in the context of an image processing task I'm facing.
The problem you are describing can be solved in $O(n)$.
For the actual implementation we use a data structure called a minimum queue. It is a queue (FIFO) that supports the normal push/pop operations in amortized constant time and additionally supports extracting the minimum of its currently stored elements in constant time.
One way of implementing a minimum queue is to store the elements in a deque as value-index pairs, kept in non-decreasing order: you only store the subset of pairs that are, or can later become, the minimum, and discard the rest. Whenever you push a new element, you remove all larger elements from the back of the deque, since the newly added element will dominate them at all times. Therefore the minimum is always the first element.
This approach, and also two (similar) alternatives are described on cp-algorithms.com.
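A sketch of the full $O(n)$ algorithm: with prefix sums $P$, the best nonempty window ending at position $j$ has sum $P[j] - \min_{j-w \le i < j} P[i]$, and that minimum is maintained with the monotonic deque described above.

```python
from collections import deque

def max_window_sum(a, w):
    """Maximum sum of a nonempty window of width at most w, in O(n)."""
    prefix = [0]
    for v in a:
        prefix.append(prefix[-1] + v)
    dq = deque()            # indices i, with prefix[i] strictly increasing
    best = float("-inf")
    for j in range(1, len(prefix)):
        i = j - 1
        while dq and prefix[dq[-1]] >= prefix[i]:
            dq.pop()        # i dominates every index with a larger prefix sum
        dq.append(i)
        if dq[0] < j - w:
            dq.popleft()    # window start fell out of range
        best = max(best, prefix[j] - prefix[dq[0]])
    return best

print(max_window_sum([2, -1, 3, -4, 5], 2))  # 5
```

Each index is pushed and popped at most once, so the total work is linear regardless of w.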
[1507.00013] Symmetry Restored in Dibosons at the LHC?
Abstract: A number of LHC resonance search channels display an excess in the invariant mass region of 1.8 - 2.0 TeV. Among them is a $3.4\,\sigma$ excess in the fully hadronic decay of a pair of Standard Model electroweak gauge bosons, in addition to potential signals in the $HW$ and dijet final states. We perform a model-independent cross-section fit to the results of all ATLAS and CMS searches sensitive to these final states. We then interpret these results in the context of the Left-Right Symmetric Model, based on the extended gauge group $SU(2)_L\times SU(2)_R\times U(1)'$, and show that a heavy right-handed gauge boson $W_R$ can naturally explain the current measurements with just a single coupling $g_R \sim 0.4$. In addition, we discuss a possible connection to dark matter.
Perhaps it will be impossible to come to a consensus about this, but I'd like to know what people's preferences are as to using infinity vs. $\infty$ vs. oo vs. ∞ when talking about (infinity,n)-categories and the like. It's relevant because I'd ideally like to be able to find questions/answers that mention (infinity,n)-categories without having to do a couple separate searches. People might also have other considerations regarding this choice that I haven't thought of.
I also wonder if there is a consensus about this on the nLab? I am not an active nLab member, but maybe someone else here is and knows?
Sorry if I'm being pedantic.
With the disclaimer that this is just my opinion, I think the title should be in plain (spoken) English and the text should say $(\infty, n)$ or (∞, n) as it's a standard notation. oo should be discouraged.
Note that ∞-categories seems to be a standard notation for what others call (∞, 1)-categories, so the search by notation won't be trivial. In my opinion, you're better off aggregating several tags, especially [ct.category-theory] and finding out the questions you need manually from that list.
I agree with Ilya that it should be $(\infty, n)$, (infinity, n), or (∞, n), but not (oo,n).
As the offending party, my reasoning was that it looks nicer in the topic title to put (oo,n), but in the question text, I always use "$(\infty ,n)$-category". I didn't know what the unicode for it was, but it would be nice if on the sidebar there was a reference for the unicode for commonly used mathematical symbols like infinity, if only for topic titles.
You can just use the HTML entity ∞ rather than the numeric Unicode value. However my recollection is that neither of these work in titles and you have to type symbols directly somehow (I usually copy and paste from Wikipedia).
Is this a bug, btw, the titles don't allow many Unicode characters (or rather render them badly)? I notced Greg's first question was messed up by this.
Titles escape the ampersand — which isn't the best idea, imho.
@Harry, I would do ∞-categories in the title in this particular case.
Harry: Naw, you're not the "offending party"; I mean I tried out "oo" in a recent post too. And I've noticed that Urs Schreiber and various other n-category-cafe people seem to like to use "oo" as well.
What's the advantage of unicode and HTML entities over LaTeX? Is it that the former is somehow a more standardized, universal, more-likely-to-be-around-in-50-years kind of thing?
@Kevin: They don't require jsmath rendering and also work in the titles.
@Ilya: How the heck did you just do that if not by html, unicode, or copy/paste?
Can we put that character on the sidebar for those of us who are too lazy to map the character to our keyboards?
On the nLab we almost always use TeX $\infty$. That doesn't work in page titles or hyperlinks, though, so page titles are written in English, but usually with redirects that use Unicode, so that we can type [[∞-category]] to make a nice-looking link. There's been some discussion of whether this should be the other way 'round, but that's the way it is at the moment.
Regarding typing Unicode, there is something called SCIM, but I haven't managed to get it to work myself. What I use is that in Emacs, you can hit Ctrl-\ and type "tex" when prompted for an input method, after which you can simply type "\infty" and the Unicode character ∞ will come out. And the Firefox plugin "itsalltext" is convenient for editing textareas in an external editor (like Emacs).
Christiane Neuber, June Uebeler, Thomas Schulze, Hannieh Sotoud, Ali El-Armouche and Thomas Eschenhagen.
Guanabenz Interferes with ER Stress and Exerts Protective Effects in Cardiac Myocytes. PloS one 9(6):e98893, January 2014.
Abstract Endoplasmic reticulum (ER) stress has been implicated in a variety of cardiovascular diseases. During ER stress, disruption of the complex of protein phosphatase 1 regulatory subunit 15A and catalytic subunit of protein phosphatase 1 by the small molecule guanabenz (antihypertensive, $\alpha$2-adrenoceptor agonist) and subsequent inhibition of stress-induced dephosphorylation of eukaryotic translation initiation factor 2$\alpha$ (eIF2$\alpha$) results in prolonged eIF2$\alpha$ phosphorylation, inhibition of protein synthesis and protection from ER stress. In this study we assessed whether guanabenz protects against ER stress in cardiac myocytes and affects the function of 3 dimensional engineered heart tissue (EHT). We utilized neonatal rat cardiac myocytes for the assessment of cell viability and activation of ER stress-signalling pathways and EHT for functional analysis. (i) Tunicamycin induced ER stress as measured by increased mRNA and protein levels of glucose-regulated protein 78 kDa, P-eIF2$\alpha$, activating transcription factor 4, C/EBP homologous protein, and cell death. (ii) Guanabenz had no measurable effect alone, but antagonized the effects of tunicamycin on ER stress markers. (iii) Tunicamycin and other known inducers of ER stress (hydrogen peroxide, doxorubicin, thapsigargin) induced cardiac myocyte death, and this was antagonized by guanabenz in a concentration- and time-dependent manner. (iv) ER stressors also induced acute or delayed contractile dysfunction in spontaneously beating EHTs and this was, with the notable exception of relaxation deficits under thapsigargin, not significantly affected by guanabenz. The data confirm that guanabenz interferes with ER stress-signalling and has protective effects on cell survival. Data show for the first time that this concept extends to cardiac myocytes. The modest protection in EHTs points to more complex mechanisms of force regulation in intact functional heart muscle.
H-Q Jiang, M Ren, H-Z Jiang, J Wang, J Zhang, X Yin, S-Y Wang, Y Qi, X-D Wang and H-L Feng.
Guanabenz delays the onset of disease symptoms, extends lifespan, improves motor performance and attenuates motor neuron loss in the SOD1 G93A mouse model of amyotrophic lateral sclerosis. Neuroscience, 2014.
Abstract Amyotrophic lateral sclerosis (ALS) is a relentlessly progressive neurodegenerative disease characterized by the loss of motor neurons in the motor cortex, brain stem and spinal cord. Currently, there is no cure for this lethal disease. Although the mechanism underlying neuronal cell death in ALS remains elusive, growing evidence supports a crucial role of endoplasmic reticulum (ER) stress in the pathogenesis of ALS. Recent reports show that guanabenz, a novel inhibitor of eukaryotic initiation factor 2$\alpha$ (eIF2$\alpha$) dephosphorylation, possesses anti-prion properties, attenuates ER stress and reduces paralysis and neurodegeneration in mTDP-43 Caenorhabditis elegans and Danio rerio models of ALS. However, the therapeutic potential of guanabenz for the treatment of ALS has not yet been assessed in a mouse model of ALS. In the present study, guanabenz was administered to a widely used mouse model of ALS expressing copper zinc superoxide dismutase-1 (SOD1) with a glycine to alanine mutation at position 93 (G93A). The results showed that the administration of guanabenz significantly extended the lifespan, delayed the onset of disease symptoms, improved motor performance and attenuated motor neuron loss in female SOD1 G93A mice. Moreover, western blotting results revealed that guanabenz dramatically increased the levels of phosphorylated-eIF2$\alpha$ (P-eIF2$\alpha$) protein, without affecting total eIF2$\alpha$ protein levels. The results also revealed a significant decrease in the levels of the ER chaperone glucose-regulated protein 78 (BiP/Grp78) and markers of another two ER stress pathways, activating transcription factor 6$\alpha$ (ATF6$\alpha$) and inositol-requiring enzyme 1 (IRE1). In addition, guanabenz increased the protein levels of anti-apoptotic B cell lymphoma/lewkmia-2 (Bcl-2), and down-regulated the pro-apoptotic protein levels of C/EBP homologous protein (CHOP), Bcl-2-associated X protein (BAX) and cytochrome C in SOD1 G93A mice. 
Our findings indicate that guanabenz may represent a novel therapeutic candidate for the treatment of ALS, a lethal human disease with an underlying mechanism involving the attenuation of ER stress and mitochondrial stress via prolonging eIF2$\alpha$ phosphorylation.
Melissa J Fullwood, Wei Zhou and Shirish Shenolikar.
Targeting phosphorylation of eukaryotic initiation factor-2$\alpha$ to treat human disease. Progress in molecular biology and translational science 106:75–106, 2012.
Abstract The unfolded protein response, also known as endoplasmic reticulum (ER) stress, has been implicated in numerous human diseases, including atherosclerosis, cancer, diabetes, and neurodegenerative disorders. Protein misfolding activates one or more of the three ER transmembrane sensors to initiate a complex network of signaling that transiently suppresses protein translation while also enhancing protein folding and proteasomal degradation of misfolded proteins to ensure full recovery from ER stress. Gene disruption studies in mice have provided critical insights into the role of specific signaling components and pathways in the differing responses of animal tissues to ER stress. These studies have emphasized an important contribution of translational repression to sustained insulin synthesis and $\beta$-cell viability in experimental models of type-2 diabetes. This has focused attention on the recently discovered small-molecule inhibitors of eIF2$\alpha$ phosphatases that prolong eIF2$\alpha$ phosphorylation to reduce cell death in several animal models of human disease. These compounds show significant cytoprotection in cellular and animal models of neurodegenerative disorders, highlighting a potential strategy for future development of drugs to treat human protein misfolding disorders.
"Skew-symmetric differentiation matrices and spectral methods on the real line"
A most welcome feature of orthogonal bases employed in spectral methods is that their differentiation matrix is skew-symmetric, since this makes energy conservation automatic in conservative time-evolving problems. A familiar example is given by Hermite functions, which are dense in $L^2(-\infty,\infty)$ and give rise to a skew-symmetric, tridiagonal differentiation matrix. In this talk, describing joint work with Marcus Webb (KU Leuven), we present a full characterisation of all orthogonal systems acting on $L^2(-\infty,\infty)$, dense either there or in a Paley–Wiener space, whose differentiation matrix is skew-symmetric, tridiagonal and irreducible. We also present a constructive algorithm for their generation — essentially, given any symmetric Borel measure on $(-\infty,\infty)$ or on $(-a,a)$ for some $a>0$, there exists a unique (up to rescaling) basis of this kind and it can be generated constructively. We conclude with a number of examples, related to Konoplev, Carlitz and Freud measures.
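The Hermite claim is easy to check numerically. The standard derivative recurrence for Hermite functions, $\psi_n' = \sqrt{n/2}\,\psi_{n-1} - \sqrt{(n+1)/2}\,\psi_{n+1}$, yields a truncated differentiation matrix that is tridiagonal and exactly skew-symmetric. A minimal Python sketch (the truncation size `N` is an arbitrary choice, not from the talk):

```python
import numpy as np

N = 8  # truncation size (arbitrary choice)
D = np.zeros((N, N))
for n in range(1, N):
    c = np.sqrt(n / 2.0)
    D[n - 1, n] = c    # coefficient of psi_{n-1} in psi_n'
    D[n, n - 1] = -c   # coefficient of psi_n in psi_{n-1}'

assert np.allclose(D.T, -D), "skew-symmetric"
# only the first off-diagonals are nonzero, i.e. the matrix is tridiagonal
off = D - np.diag(np.diag(D, 1), 1) - np.diag(np.diag(D, -1), -1)
assert np.count_nonzero(off) == 0, "tridiagonal"
print("differentiation matrix is skew-symmetric and tridiagonal")
```

Skew-symmetry is exact at every truncation size, which is what makes energy conservation automatic under any norm-preserving time discretisation.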
This seminar is intended as a contemporary introduction to modern type theory suitable for mathematicians' needs, in the sense that this theory forms a foundation of mathematics and a language for formulating theorems and their proofs.
The seminar will focus on 1) direct proof-term construction in cubical type theory with a computer-based type checker; 2) solving home assignments in the cubical type checker; 3) helping people write cubical code; 4) writing lecture notes by students in LaTeX; 5) making a small course project as the final task (this could be porting some library or theory from Agda, or even formalizing a new part of mathematics); 6) gathering smart people together; 7) having fun.
While this seminar will try to follow the HoTT book as its main source, I will also give some theory not present in HoTT, such as algebraic topology or higher category theory. On the other hand, I will skip some parts of HoTT. Please treat this as an author's course, heavily inspired by the HoTT book and cubical type checkers.
What is cubical type theory and why is it important? Cubical type theory is a theory in which the univalence axiom by Voevodsky has a computational interpretation. There are several type checkers which use different flavors of cubical type theory while sharing the same concepts. The most notable stable implementations are: 1) agda --cubical; 2) cubicaltt; 3) cubicaltt/hcomptrans; 4) yacctt; 5) RedPRL. We will show the differences between them, and students are free to choose the type checker of their preference (for home assignments), but at learning classes (joint hackathons) I will use the cubicaltt flavor I am most experienced with.
The first section is an introduction to type theory, its subject matter, and its method in computer science. We will show the basic principles of lambda calculus, its most powerful version — System $F_\omega$, the Calculus of Constructions, and Pure Type Systems. Then we will encode the theorems of the MLTT type system in cubical type checkers. This section will cover the Pi, Sigma, Equ and W types.
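As a rough illustration of these type formers (here in Lean 4 notation rather than cubicaltt, with names of our own choosing): a Pi type is a dependent function type, a Sigma type is a dependent pair, and the identity type is inhabited by reflexivity.

```lean
-- Π-type: a (dependent) function type
def swap {A B : Type} : A × B → B × A :=
  fun p => (p.2, p.1)

-- Σ-type: a dependent pair, here a number packaged with a proof
def positive : Σ' n : Nat, n > 0 :=
  ⟨1, Nat.succ_pos 0⟩

-- identity type: inhabited by reflexivity
example : ∀ n : Nat, n + 0 = n :=
  fun _ => rfl
```

In cubical type checkers the identity type is replaced by the Path type, whose inhabitants are functions out of an abstract interval, but the proofs above transfer directly.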
The second section is an introduction to basic inductive types and the induction principle. Inductive types form the basis of the so-called Calculus of Inductive Constructions, the theory behind Coq. We will show how proving theorems with the induction principle differs from cubical constructions. We also introduce a general Impredicative Encoding of Inductive Types (Awodey). This section will cover the Empty, Unit, Either, Prod, Bool, Maybe, Nat, List, Fix, Mu, Nu, Free and Cofree types.
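The flavor of such proofs can be sketched in Lean 4 (used here only as an analogue of the cubical encodings; `MyNat` and `add` are our own illustrative names): an inductive type comes with an induction principle, and theorems are proved by structural induction on its constructors.

```lean
inductive MyNat where
  | zero
  | succ (n : MyNat)

def add : MyNat → MyNat → MyNat
  | .zero,   m => m
  | .succ n, m => .succ (add n m)

-- proof by the induction principle of MyNat
theorem add_zero_right : ∀ n, add n MyNat.zero = n := by
  intro n
  induction n with
  | zero => rfl
  | succ n ih => exact congrArg MyNat.succ ih
```

In a cubical setting the same statement would be proved by constructing a Path term; the induction step is the same, only the notion of equality differs.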
This topic gives a fundamental view of Pi types as fibrations, used in the constructive notion of equivalence. We also give an infinite hierarchy of types containing higher globular equalities, used as a base for IPL, Set Theory, and Simplicial Geometry. This section will cover the Fiber, n-Groupoid and $\infty$-Groupoid types.
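The Fiber type mentioned here can be written down directly (a Lean 4 sketch with illustrative names; the course itself works in cubicaltt): the fiber of f over b is the type of points of the domain that map to b.

```lean
-- the (homotopy) fiber of f over b
def fiber {A B : Type} (f : A → B) (b : B) : Type :=
  Σ' a : A, f a = b

def double (n : Nat) : Nat := 2 * n

-- 2 lies in the fiber of `double` over 4
example : fiber double 4 := ⟨2, rfl⟩
```

Equivalences are then defined as maps all of whose fibers are contractible, which is the fibration-based notion of equality used in the next section.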
There exist three notions of equality in Type Theory: 1) the one built into the type checker, represented by the Id, Equ or Path type with the J eliminator; 2) fibration-based Equivalence; 3) Isomorphism, containing a retract and a section. This leads to the Univalence Axiom, which states that all these equalities are equivalent, and this can be proved in cubical type checkers. This section will cover the Equiv, Iso, Homotopy, Univalence, Injective and Surjective types.
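The third notion, an isomorphism with a section and a retraction, can be packaged as a record (a Lean 4 sketch with our own field names); Boolean negation is a simple self-isomorphism:

```lean
structure Iso (A B : Type) where
  to   : A → B
  inv  : B → A
  sect : ∀ b, to (inv b) = b  -- section
  retr : ∀ a, inv (to a) = a  -- retraction

-- Bool.not is its own inverse
def notIso : Iso Bool Bool where
  to   := Bool.not
  inv  := Bool.not
  sect := fun b => by cases b <;> rfl
  retr := fun a => by cases a <;> rfl
```

Univalence upgrades such an isomorphism to an actual path `Bool = Bool`, which is the step that only has computational content in cubical type checkers.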
One of the primary motivations of HoTT is constructive geometry and cell complexes, the spaces constructed by gluing n-discs along their boundaries. In this section, we show how to build topological primitives and how to reason about them. This section will cover the Line, Suspension, n-Sphere, Pushout, Pullback, Truncation and Quotient types.
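Lean has no higher inductive types, but its built-in quotients give a flavor of the Quotient type listed above (a sketch with illustrative names): `Quot.mk` plays the role of the point constructor and `Quot.sound` the role of the path constructor.

```lean
-- Nat quotiented by "same parity"
def parityRel (m n : Nat) : Prop := m % 2 = n % 2

def Parity : Type := Quot parityRel

def toParity (n : Nat) : Parity := Quot.mk parityRel n

-- the "path constructor": 0 and 2 become equal in the quotient
example : toParity 0 = toParity 2 := Quot.sound rfl
```

In a full HIT framework the same mechanism also yields spheres and suspensions, where the path constructors glue boundaries of cells rather than identify points related by a relation.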
This section will cover: Infinitesimal type.
This section is an introduction to Intuitionistic Propositional Logic (IPL), its difference from Classical Logic, the Axiom of Choice, the Law of Excluded Middle, Propositional Truncation, the Hedberg Theorem, Decidable Types, and Constructive Set Theory. This section will cover the Prop, Set, Decidable, Stable and Discrete types.
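Decidable types can be illustrated in Lean 4 (an analogue, not course material): a decidable proposition can be case-split on and computed with, without invoking the Law of Excluded Middle.

```lean
-- deciding a proposition computationally, without LEM
def isEven (n : Nat) : Bool :=
  decide (n % 2 = 0)

-- Nat is a discrete type: its equality is decidable
example (m n : Nat) : Decidable (m = n) :=
  Nat.decEq m n

#eval isEven 10  -- true
```

Hedberg's theorem then says that any such discrete type (one with decidable equality) is a set, i.e. all its equality proofs are equal.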
More than ten Fields Medals have been won with Category Theory (CT) as an instrument. CT is crucial for doing mathematics. As a companion to the HoTT chapter on CT, we suggest the course of Steve Awodey (a disciple of Saunders Mac Lane), who also made HoTT possible.
This section will give the fundamental notions of the following types: Precategory, Rezk Completion, Functor, Natural Transformation, Adjoint, Cone, Structure Identity Principle, Limit, Pullback, Kan Extension, Monomorphism, Epimorphism, Universal Mapping Property.
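A precategory, the first item on this list, can be sketched as a record of objects, morphisms and laws (Lean 4 notation, our own field names), with types and functions as the canonical instance:

```lean
structure Precategory where
  Ob      : Type 1
  Hom     : Ob → Ob → Type
  id      : (a : Ob) → Hom a a
  comp    : {a b c : Ob} → Hom a b → Hom b c → Hom a c
  idLeft  : ∀ {a b} (f : Hom a b), comp (id a) f = f
  idRight : ∀ {a b} (f : Hom a b), comp f (id b) = f
  assoc   : ∀ {a b c d} (f : Hom a b) (g : Hom b c) (h : Hom c d),
              comp (comp f g) h = comp f (comp g h)

-- the category of types and functions
def TypeCat : Precategory where
  Ob      := Type
  Hom     := fun A B => A → B
  id      := fun _ a => a
  comp    := fun f g a => g (f a)
  idLeft  := fun _ => rfl
  idRight := fun _ => rfl
  assoc   := fun _ _ _ => rfl
```

In HoTT a category proper additionally asks that isomorphic objects be equal (the Rezk completion condition), which distinguishes categories from precategories.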
The primary application of CT in Computer Science is building the categorical model of Dependently Typed Lambda Calculus as an internal language of Locally Cartesian Closed Category. The two models mentioned in this course are Categories with Families (Dybjer) and C-Systems (Voevodsky).
Also, as examples of Category instances, the following types are used: Category of Sets, Category of Functions, Category of Categories, Category of Functors, Slice Category, Product Category. Using those we can construct basic examples of 2-categories and give a very basic intro to $\infty$-Categories. Types in this section: Category of Commutative Monoids, Category of Abelian Groups, Grothendieck Group.
Topos Theory will give a more in-depth look into categorical models and provide a stronger foundation for building a bridge between CT and the Logic of Space.
In this section, I will mention Subobject Classifier and Topos types.
Types in this section: Monoid, Commutative Monoid, Group, Ring, Abelian Group, Abelian Ring, Hopf, Loop Space, Homotopy Group.
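As a small sketch of the first items on this list (Lean 4, with our own names; the course formalizes these in cubicaltt), a monoid is a carrier with an associative operation and a unit, and (Nat, +, 0) is an instance:

```lean
structure MonoidStr (M : Type) where
  unit      : M
  op        : M → M → M
  unitLeft  : ∀ a, op unit a = a
  unitRight : ∀ a, op a unit = a
  opAssoc   : ∀ a b c, op (op a b) c = op a (op b c)

-- (Nat, +, 0) as a monoid
def natAddMonoid : MonoidStr Nat where
  unit      := 0
  op        := Nat.add
  unitLeft  := Nat.zero_add
  unitRight := Nat.add_zero
  opAssoc   := Nat.add_assoc
```

Groups add inverses, and commutativity singles out the abelian cases; the Loop Space of a pointed type is the motivating example of a group in HoTT.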
Types in this section: Abstract Homogeneous Structure, Shape, Étale Map, Automorphism, Fiber Bundle, Manifold, Covering Space (G-Set).
As for the tower of type theories necessary for the introduction of CCHM (Cohen–Coquand–Huber–Mortberg), we will use the following setting.
These historical prerequisites (PTS, MLTT, CiC) will be given along the way during the course.
Cohesive Type Theory satisfies the need for modalities, such as connectedness, compactness, infinitesimal shapes, etc. Differential Cohesive Type Theory is the internal language of differential cohesive $(\infty,1)$-topoi with additional structure, just as $(\infty,1)$-categories add a globular path-equality structure to locally cartesian closed categories and their internal language — MLTT. Please refer to cohesivett for more information.
Cohesive types are modeled in the base library as undefined axioms. Process Calculus is another example of modality types, with special spawn arrows hiding the underlying run-time implementation.