But of course, this can't be done with an even number $w$.
Note: I'm using Flatten to do a jagged transpose.
Is there a way to improve my generalization to reach @rasher's original efficiency? Or, is there a clever way to get MaxFilter to max over even-numbered windows?
I'd also very much appreciate if someone could explain what about my code could have introduced such inefficiency.
I found a clever way to make MaxFilter work with even windows, and used conditionals to combine the odd- and even- cases. This solution is, surprisingly, more efficient than even @MrWizard's compiled function, and increasingly so as the window size increases.
To create MovingMin, it will work to just swap all references to Max and Min with their opposites.
If the window size $w$ is odd, then we can just use MaxFilter with a radius of $(w-1)/2$ and drop the same amount of elements (i.e. as the radius) from each end of the resulting list.
...so after dropping the first and last two elements (same as we do for the odd case), we just have to additionally drop every $w$th element, and we're left with the maximums we're looking for.
Later, I replaced $-\infty$ with Min@list as the former method was adding an inefficiency (see @MrWizard's comment).
@MrWizard's cf performs well, but increases in time linearly with the window size.
The MaxFilter functions (split between odd and even since they're essentially two unrelated functions), in contrast, decrease in time consumption as window size increases! I imagine this is because MaxFilter probably optimizes by caching the index of its latest maximum as it runs, e.g.
where the first maximum is 14 and the cached index is 4. This way, as it moves forward, it only needs to compare one number, e.g.
where since 6 is not greater than the current maximum of 14, there's no need to compare the new set of 4 elements. This can continue until the cached index "expires," e.g.
With such an algorithm, greater window sizes would mean greater savings.
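MaxFilter's internals aren't documented, but the caching idea described above is essentially the classic monotonic-deque algorithm for sliding-window maxima; here is a minimal Python sketch of it:

```python
from collections import deque

def moving_max(xs, w):
    """Sliding-window maximum in O(n) with a monotonic deque.

    The deque holds indices whose values are decreasing, so the current
    window maximum is always at the front -- the "cached index" only
    expires when it slides out of the window.
    """
    dq, out = deque(), []
    for i, x in enumerate(xs):
        while dq and xs[dq[-1]] <= x:  # drop elements dominated by x
            dq.pop()
        dq.append(i)
        if dq[0] <= i - w:             # front index has left the window
            dq.popleft()
        if i >= w - 1:
            out.append(xs[dq[0]])
    return out

print(moving_max([9, 14, 3, 6, 2, 8], 4))  # [14, 14, 8]
```

Each element is pushed and popped at most once, so the cost is O(n) regardless of window size (even or odd), which is consistent with the timing observations above.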
This code is much more efficient if given an actual lower bound, e.g. -10^-6, rather than using -∞. @MrWizard explains the reason in the comments.
Here's the (unedited, messy) test code I used.
The first improvement's impact was clear. The second improvement would of course slow it down (since computing a Min can't be faster than a pre-supplied lower bound), but it was worth eliminating a parameter. Therefore both improvements have been incorporated into the new "tl;dr."
CompilationTarget -> "C", RuntimeOptions -> "Speed"
However I do not have a C compiler installed to test this.
Here's a version allowing MaxFilter to work with windows of even length k. It runs MaxFilter with window radius k/2-1, then corrects the output. MovingMaxEven is slower than Andrew's MovingMax, for example, 1.01 s versus his 0.78 s on 10 million points.
The compiled version runs in 0.90 s.
For ease of explanation, suppose that you began with a two-dimensional surface in (x1, x2, x3, x4)-space, and the surface begins as a flat planar region in the (x1, x2)-plane. The boundaries of this flat region are not that important to resolve, but it would have a closed boundary. Pretend it is a square. So this surface might be specified by a triangular mesh, i.e. a listing of points and their neighbours.
Through some operation, the plane now gets deformed into 4D. So for instance, I might assign +(0, 0, 1, 0) for some points and -(0, 0, 0, 1) for other points. At this point, you have a 2-manifold in 4D.
The problem is that I need to remesh this surface. I'd like to capture things like folds in the surface, and I'd like to remesh so that triangles that get stretched out are subdivided, and to avoid things like skinny triangles.
I understand that there is a lot of info on how this is done for two-dimensional planar surfaces in 2D, like Shewchuk's Triangle program. I've read Persson and Strang's article on simple meshing in 2D, but I don't think this is generalizable to 2-manifolds in 4D.
Ultimately, I'm looking for a more-or-less simple algorithm (or program). The need for accuracy is not terribly high, and the boundary regions of my surface are not important to accurately resolve.
Can anybody suggest literature I can refer to? There is a startlingly large amount of work done on meshing and remeshing, so it's difficult to see what is applicable for my problem.
You can use a 2D anisotropic mesh generator (see e.g. H. Borouchaki, P. L. George, F. Hecht, P. Laug and E. Saltel, Delaunay mesh generation governed by metric specifications. Part 1: Algorithms, Finite Elements in Analysis and Design, Vol. 25, pp. 61-83, 1997).
The metric is a function that associates a $2\times 2$ symmetric positive definite matrix with each point of the domain. This metric specifies the desired edge length as follows: suppose you have an edge $(x_1, y_1) - (x_2, y_2)$; then what you want is $e^t G(x,y)\, e = 1$, where $e = (x_2, y_2) - (x_1, y_1)$ and where $(x, y) = ((x_1 + x_2)/2, (y_1 + y_2)/2)$. Intuitively, the squared norm $\| e \|^2 = e^t e$ is replaced with $e^t G e$. This provides a means of "shearing" and "scaling" the definition of distances.
In practice, the metric can be stored at the vertices of an input background mesh and linearly interpolated over the triangles of the background mesh. Then the anisotropic mesh generator will create a new triangulation of the input geometry where all the edges are (mostly) of length 1 with respect to the metric.
In your case, you can see your 4D input domain as a 2D domain transformed into 4D with a (vector-valued) function $\Phi(x,y)$. Now if you generate a 2D mesh deformed in such a way that it will exactly counterbalance the deformation created by the mapping $\Phi$, then you will have mostly equilateral triangles in 4D. This can be achieved by using for $G(x,y)$ the pullback metric of $\Phi$, given by $G(x,y) = J^t J$, where $J$ denotes the Jacobian matrix of $\Phi$. This technique is used for tessellating parametric surfaces (Splines, Nurbs ...) using a 2D anisotropic mesh generator.
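As a small numerical illustration of the pullback metric (in Python, with a made-up map $\Phi:\mathbb{R}^2\to\mathbb{R}^4$ used purely for demonstration), an edge measured with $G = J^t J$ in the parameter domain has approximately the same length as its image measured in 4D:

```python
import numpy as np

def phi(p):                      # hypothetical deformation R^2 -> R^4
    x, y = p
    return np.array([x, y, np.sin(x), x * y])

def jacobian(p, h=1e-6):         # forward-difference 4x2 Jacobian of phi
    J = np.empty((4, 2))
    for k in range(2):
        dp = np.zeros(2); dp[k] = h
        J[:, k] = (phi(p + dp) - phi(p)) / h
    return J

def metric_length(p1, p2):
    e = p2 - p1
    J = jacobian((p1 + p2) / 2)           # evaluate at the edge midpoint
    return np.sqrt(e @ (J.T @ J) @ e)     # sqrt(e^t G e), with G = J^t J

p1, p2 = np.array([0.0, 0.0]), np.array([0.3, 0.1])
print(metric_length(p1, p2))              # length under the pullback metric
print(np.linalg.norm(phi(p2) - phi(p1)))  # ~ the same, measured in 4D
```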
If a mapping $\Phi$ and its Jacobian cannot be computed, there is an alternative method that we used (in 6D in our case), that directly optimizes and computes the mesh in high dimensional space: http://www.imr.sandia.gov/papers/abstracts/Le621.html (but it is much more costly, the anisotropic meshing steered by $J^tJ$ is probably much better in most cases).
Question. How to adjust axes and grid in the MATLAB GUI? How do I get more precision, for example [-1 -0.5 0] rather than [-1 0]? I would like to make the axes and the grid more precise as shown in the image below, but I don't know how. I would like that when...... Getting Started With MATLAB. Prepared by the staff of the Indiana University Stat/Math Center. What is Matlab? MATLAB is a computer program for people doing numerical computation, especially linear algebra (matrices).
(5 points in total) For this question, we need more precision (more digits after the decimal point) than the MATLAB default precision. Use 'format long' to get more digits. For the function f(x) = 1/(1+x^3): Find the power series expansion of f(x) (Hint: the command 'taylor' may be useful) up to x^20. Use your answer in part (a) to find the first three non-zero terms of a numerical... If you are in fact running out of memory, then the most obvious solution is to get more memory (if you can). Beyond that, you will have to re-write your code to work under your memory constraints.
Possible duplicates: how to display data in a matrix with more than 4 decimals; Is it possible to show numbers in non-engineering format in Matlab? Short answer: look into the FORMAT command. - gnovice Mar 1 '11 at 4:21... Then, to get more precision I add two cubes in each direction (so in the second iteration I have a block of size 3x3x4). The problem is that this method converges to $\alpha\approx 5.35$, when it should be $\alpha\approx 1.64$.
Matlab uses the complete number for maths. How it is formatted for visualisation is irrelevant. If you need the difference to a high number of decimal places then that is different, but up to double precision the maths is precise for comparing equality or greater than operations.
Zhigang Wang, Lei Wang, Yachun Li. Renormalized entropy solutions for degenerate parabolic-hyperbolic equations with time-space dependent coefficients. Communications on Pure & Applied Analysis, 2013, 12(3): 1163-1182. doi: 10.3934/cpaa.2013.12.1163.
L. Bakker. The Katok-Spatzier conjecture, generalized symmetries, and equilibrium-free flows. Communications on Pure & Applied Analysis, 2013, 12(3): 1183-1200. doi: 10.3934/cpaa.2013.12.1183.
Mostafa Bendahmane, Kenneth Hvistendahl Karlsen, Mazen Saad. Nonlinear anisotropic elliptic and parabolic equations with variable exponents and $L^1$ data. Communications on Pure & Applied Analysis, 2013, 12(3): 1201-1220. doi: 10.3934/cpaa.2013.12.1201.
M. Carme Leseduarte, Ramon Quintanilla. Phragmén-Lindelöf alternative for an exact heat conduction equation with delay. Communications on Pure & Applied Analysis, 2013, 12(3): 1221-1235. doi: 10.3934/cpaa.2013.12.1221.
Futoshi Takahashi. On the number of maximum points of least energy solution to a two-dimensional Hénon equation with large exponent. Communications on Pure & Applied Analysis, 2013, 12(3): 1237-1241. doi: 10.3934/cpaa.2013.12.1237.
Liping Wang, Juncheng Wei. Infinite multiplicity for an inhomogeneous supercritical problem in entire space. Communications on Pure & Applied Analysis, 2013, 12(3): 1243-1257. doi: 10.3934/cpaa.2013.12.1243.
Seunghyeok Kim. On vector solutions for coupled nonlinear Schrödinger equations with critical exponents. Communications on Pure & Applied Analysis, 2013, 12(3): 1259-1277. doi: 10.3934/cpaa.2013.12.1259.
Hua Nie, Wenhao Xie, Jianhua Wu. Uniqueness of positive steady state solutions to the unstirred chemostat model with external inhibitor. Communications on Pure & Applied Analysis, 2013, 12(3): 1279-1297. doi: 10.3934/cpaa.2013.12.1279.
Xuanji Jia, Zaihong Jiang. An anisotropic regularity criterion for the 3D Navier-Stokes equations. Communications on Pure & Applied Analysis, 2013, 12(3): 1299-1306. doi: 10.3934/cpaa.2013.12.1299.
Huaiyu Jian, Xiaolin Liu, Hongjie Ju. The regularity for a class of singular differential equations. Communications on Pure & Applied Analysis, 2013, 12(3): 1307-1319. doi: 10.3934/cpaa.2013.12.1307.
Takamori Kato. Global well-posedness for the Kawahara equation with low regularity. Communications on Pure & Applied Analysis, 2013, 12(3): 1321-1339. doi: 10.3934/cpaa.2013.12.1321.
Juan Calvo. On the hyperbolicity and causality of the relativistic Euler system under the kinetic equation of state. Communications on Pure & Applied Analysis, 2013, 12(3): 1341-1347. doi: 10.3934/cpaa.2013.12.1341.
Julie Lee, J. C. Song. Spatial decay bounds in a linearized magnetohydrodynamic channel flow. Communications on Pure & Applied Analysis, 2013, 12(3): 1349-1361. doi: 10.3934/cpaa.2013.12.1349.
Boumediene Abdellaoui, Ahmed Attar. Quasilinear elliptic problem with Hardy potential and singular term. Communications on Pure & Applied Analysis, 2013, 12(3): 1363-1380. doi: 10.3934/cpaa.2013.12.1363.
Zhijun Zhang. Large solutions of semilinear elliptic equations with a gradient term: existence and boundary behavior. Communications on Pure & Applied Analysis, 2013, 12(3): 1381-1392. doi: 10.3934/cpaa.2013.12.1381.
John R. Graef, Shapour Heidarkhani, Lingju Kong. Multiple solutions for a class of $(p_1, \ldots, p_n)$-biharmonic systems. Communications on Pure & Applied Analysis, 2013, 12(3): 1393-1406. doi: 10.3934/cpaa.2013.12.1393.
Fengping Yao. Optimal regularity for parabolic Schrödinger operators. Communications on Pure & Applied Analysis, 2013, 12(3): 1407-1414. doi: 10.3934/cpaa.2013.12.1407.
Morteza Fotouhi, Leila Salimi. Controllability results for a class of one dimensional degenerate/singular parabolic equations. Communications on Pure & Applied Analysis, 2013, 12(3): 1415-1430. doi: 10.3934/cpaa.2013.12.1415.
Hayk Mikayelyan, Henrik Shahgholian. Convexity of the free boundary for an exterior free boundary problem involving the perimeter. Communications on Pure & Applied Analysis, 2013, 12(3): 1431-1443. doi: 10.3934/cpaa.2013.12.1431.
Geng Chen, Ping Zhang, Yuxi Zheng. Energy conservative solutions to a nonlinear wave system of nematic liquid crystals. Communications on Pure & Applied Analysis, 2013, 12(3): 1445-1468. doi: 10.3934/cpaa.2013.12.1445.
Jiří Benedikt. Continuous dependence of eigenvalues of $p$-biharmonic problems on $p$. Communications on Pure & Applied Analysis, 2013, 12(3): 1469-1486. doi: 10.3934/cpaa.2013.12.1469.
Domenica Borra, Tommaso Lorenzi. Asymptotic analysis of continuous opinion dynamics models under bounded confidence. Communications on Pure & Applied Analysis, 2013, 12(3): 1487-1499. doi: 10.3934/cpaa.2013.12.1487.
John M. Hong, Cheng-Hsiung Hsu, Bo-Chih Huang, Tzi-Sheng Yang. Geometric singular perturbation approach to the existence and instability of stationary waves for viscous traffic flow models. Communications on Pure & Applied Analysis, 2013, 12(3): 1501-1526. doi: 10.3934/cpaa.2013.12.1501.
Let $R$ be a commutative ring. Then $I\trianglelefteq R\implies R[x]\otimes_R(R/I)\cong R[x]/I[x]$ as rings.
Let $\alpha$ be defined s.t. $\alpha(f(x)\otimes (r+I))=(r+f(x))+I[x]$, where $f\in R[x]$, $r\in R$. Then $\alpha:R[x]\otimes_R(R/I)\to R[x]/I[x]$.
$g(x)+I[x]\in R[x]/I[x] \implies g(x)=h(x)\cdot x+c$ for some $c\in R$, $h(x)\in R[x]$ $\implies \alpha((h(x)\cdot x)\otimes (c+I))=g(x)$.
Homomorphism: $\alpha([f_1(x)\otimes (r_1+I)]\cdot [f_2(x)\otimes (r_2+I)])=\alpha(f_1f_2\otimes(r_1r_2+I))=f_1f_2+r_1r_2+I$?
General logical metatheorems for functional analysis.
New effective uniformity results in fixed point theory.
Proof mining in $CAT(0)$-spaces and $\mathbb R$-trees.
Model elimination and cut elimination.
Phase transitions in logic and combinatorics.
Primitive Recursive Selection Functions for Provable Existential Assertions over Abstract Algebras.
Friday January 5, 2007, 2:15 p.m.-4:40 p.m.
Computational power of bounded arithmetic from the predicative viewpoint.
Constructing expansions of the real field by restricted transcendental analytic functions with decidable theories.
Quantitative results in o-minimal topology.
When a function calls itself, it's called recursion. It will be easier for those who have seen the movie Inception. Leonardo had a dream; in that dream he had another dream; in that dream he had yet another dream, and that goes on. So it's like there is a function called $$dream()$$, and we are just calling it in itself.
Recursion is useful in solving problems which can be broken down into smaller problems of the same kind. But when it comes to solving problems using recursion, there are several things to be taken care of. Let's take a simple example and try to understand those. Following is the pseudocode for finding the factorial of a given number $$X$$ using recursion.
The following image shows how it works for $$factorial(5)$$.
Base Case: Any recursive method must have a terminating condition. A terminating condition is one for which the answer is already known and we just need to return it. For example, for the factorial problem we know that $$factorial(0) = 1$$, so when $$x$$ is 0 we simply return 1; otherwise we break the problem into the smaller problem of finding the factorial of $$x-1$$. If we don't include a base case, the function will keep calling itself, and ultimately will result in a stack overflow. For example, the $$dream()$$ function given above has no base case. If you write code for it in any language, it will give a runtime error.
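A minimal Python rendering of the factorial recursion described above (illustrative; the tutorial's own pseudocode may differ in detail):

```python
def factorial(x):
    if x == 0:                      # base case: the answer is already known
        return 1
    return x * factorial(x - 1)     # smaller problem of the same kind

print(factorial(5))  # 120
```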
Number of Recursive calls: There is an upper limit to the number of recursive calls that can be made. To prevent this make sure that your base case is reached before stack size limit exceeds.
The problem can be broken down into smaller problems of the same type.
Problem has some base case(s).
Base case is reached before the stack size limit exceeds.
So, while solving a problem using recursion, we break the given problem into smaller ones. Let's say we have a problem $$A$$ and we divided it into three smaller problems $$B$$, $$C$$ and $$D$$. Now it may be the case that the solution to $$A$$ does not depend on all the three subproblems, in fact we don't even know on which one it depends.
Let's take a situation. Suppose you are standing in front of three tunnels, one of which is having a bag of gold at its end, but you don't know which one. So you'll try all three. First go in tunnel $$1$$, if that is not the one, then come out of it, and go into tunnel $$2$$, and again if that is not the one, come out of it and go into tunnel $$3$$. So basically in backtracking we attempt solving a subproblem, and if we don't reach the desired solution, then undo whatever we did for solving that subproblem, and try solving another subproblem.
Let's take a standard problem.
N-Queens Problem: Given a chess board having $$N \times N$$ cells, we need to place $$N$$ queens in such a way that no queen is attacked by any other queen. A queen can attack horizontally, vertically and diagonally.
So initially we are having $$N \times N$$ unattacked cells where we need to place $$N$$ queens. Let's place the first queen at a cell $$(i,j)$$, so now the number of unattacked cells is reduced, and number of queens to be placed is $$N-1$$. Place the next queen at some unattacked cell. This again reduces the number of unattacked cells and number of queens to be placed becomes $$N-2$$. Continue doing this, as long as following conditions hold.
The number of unattacked cells is not $$0$$.
The number of queens to be placed is not $$0$$.
If the number of queens to be placed becomes $$0$$, then it's over, we found a solution. But if the number of unattacked cells become $$0$$, then we need to backtrack, i.e. remove the last placed queen from its current cell, and place it at some other cell. We do this recursively.
Here's how it works for $$N=4$$.
So, clearly, the above algorithm tries solving a subproblem; if that does not result in the solution, it undoes whatever changes were made and solves the next subproblem. If the solution does not exist $$(N = 2)$$, then it returns $$false$$.
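A compact Python sketch of this backtracking search, assuming (as is standard) that queens are placed one per row so only the column choices need exploring:

```python
def solve_n_queens(n, placed=()):
    """placed[r] is the column of the queen in row r."""
    row = len(placed)
    if row == n:                 # no queens left to place: solution found
        return placed
    for col in range(n):
        # the cell is unattacked if no earlier queen shares column or diagonal
        if all(c != col and abs(c - col) != row - r
               for r, c in enumerate(placed)):
            result = solve_n_queens(n, placed + (col,))
            if result is not None:
                return result
    return None                  # dead end: the caller backtracks

print(solve_n_queens(4))  # e.g. (1, 3, 0, 2); prints None for n = 2
```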
When I invert $\Omega$ it gives me an error that the matrix is singular.
In the above document there are some hints on how to avoid this error, but I failed to understand them. On the eighth page of the above document - on page 410 - they discuss it in Section 4, The GraphSLAM Algorithm portion. If anybody can understand it, please help me to understand.
In particular, line 2 in GraphSLAM_linearize initializes the information elements. The "infinite" information entry in line 3 fixes the initial pose $x_0$ to (0 0 0)$^T$ . It is necessary, since otherwise the resulting matrix becomes singular, reflecting the fact that from relative information alone we cannot recover absolute estimates.
Due to the singularity of the matrix (i.e. the condition in which the determinant of the matrix becomes zero), the matrix cannot be inverted.
Just as manna kalsariya suggested, you can try to use the pseudo inverse. The README of the Universal Java Matrix Package (see "Quick Start") shows that the pseudo inverse is implemented and available for you to use. Instead of Matrix mu=omega.inv().mtimes(Xi);, try Matrix mu=omega.pinv().mtimes(xi);.
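A minimal NumPy sketch of both fixes on a toy two-pose example (the numbers are invented for illustration): a single relative measurement $x_1 - x_0 = 2$ gives a singular $\Omega$, and either anchoring $x_0$ or using the pseudo-inverse recovers an estimate:

```python
import numpy as np

omega = np.array([[ 1.0, -1.0],
                  [-1.0,  1.0]])   # only relative information -> singular
xi = np.array([-2.0, 2.0])         # encodes the measurement x1 - x0 = 2

# Fix 1: "infinite" information on the first pose (the book's line 3),
# which pins x0 near 0 and makes omega invertible.
anchored = omega.copy()
anchored[0, 0] += 1e6
print(np.linalg.solve(anchored, xi))   # ~ [0, 2]

# Fix 2: minimum-norm solution via the Moore-Penrose pseudo-inverse.
print(np.linalg.pinv(omega) @ xi)      # [-1, 1]: same relative estimate
```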
How exactly to show that S-matrix elements diverge because time-ordering is not well determined?
I was reading about the "in-in" formalism (or "closed time path" formalism used in condensed matter physics) in cosmology, created by Schwinger in 1961, and there is a statement: "they care about correlation functions instead of S-matrix scattering amplitudes". When I learned QFT, these two things were almost the same thing and were related by the LSZ formula. Why do they use in-in instead of in-out? What's the difference between correlation functions and the S-matrix?
Correlation functions (or Wightman N-point functions) are expectation values of renormalized products of field operators at finite times. The ordering of the operators matters since fields at general arguments do not commute. The correlation functions need, for their nonperturbative definition via a path integral, the in-in formalism (= closed time path, CTP, Schwinger-Keldysh formalism), where one integrates over a doubled time contour.
The S-matrix elements are computed from the expectations of time-ordered products of field operators (hence independent of the ordering of the operators), which occur in the LSZ formula and in functional derivatives of the standard path integral. They express in-out properties of asymptotic states of scattering experiments. They are obtained in a path integral formulation by integration along a single time path from $t=-\infty$ to $t=+\infty$. As such they also appear inside the CTP formalism.
The information in time-ordered products is less than in the ordinary products, as one can calculate $T(\phi(x)\phi(y))$ from $\phi(x)\phi(y)$ and $\phi(y)\phi(x)$ (away from its singularity at $(x-y)^2=0$), while the converse is not possible.
Correlation functions are important if you want to see the Hilbert space. Therefore the CTP path integral takes a doubled time path, so that it returns to the initial state, which computes expectation values in the initial state. The images of the initial state under products of field operators span a dense set of vectors in the Hilbert space. Therefore, at least in in principle, one can compute inner products of arbitrary state vectors using the CTP formalism. The S-matrix doesn't contain this information.
As a consequence, the in-out description of quantum field theory - though simpler and covered by every textbook on QFT - is incomplete as it only gives the asymptotic properties of a quantum field, while the in-in description - though more involved and only in textbooks treating nonequilibrium statistical mechanics - gives everything - the asymptotics and the finite time behavior. | CommonCrawl |
Abstract: In this paper, we consider discrete-time dynamic games of the mean-field type with a finite number $N$ of agents subject to an infinite-horizon discounted-cost optimality criterion. The state space of each agent is a locally compact Polish space. At each time, the agents are coupled through the empirical distribution of their states, which affects both the agents' individual costs and their state transition probabilities. We introduce a new solution concept of the Markov-Nash equilibrium, under which a policy is player-by-player optimal in the class of all Markov policies. Under mild assumptions, we demonstrate the existence of a mean-field equilibrium in the infinite-population limit $N \to \infty$, and then show that the policy obtained from the mean-field equilibrium is approximately Markov-Nash when the number of agents $N$ is sufficiently large.
The goal is to tile rectangles as small as possible with the given hexomino, in this case number 2 of the 25 hexominoes which cannot tile a rectangle alone. We allow the addition of copies of a rectangle. For each rectangle $a\times b$, find the smallest area larger rectangle that copies of $a\times b$ plus at least one of the given hexomino will tile.
Now we don't need to consider $1\times 1$ (or $1\times 2$) further as we have found the smallest rectangle tilable with copies of the hexomino plus copies of $1\times 1$ (or $1\times 2$).
No Computer. All of them could be tiled by hand (with significant effort in some cases), so I'm making this a no-computer puzzle. This also means please don't look up answers on the web... if you post an answer it should be because you found it 'by hand'. This does not preclude you from, for example, using an image program to manipulate shapes on the screen, just from using a computer to search for or automate the arrangement.
Here are the first few in the order I found them.
Here's a (computer-assisted, I admit) generalizable solution for $1 \times n$ with $n = 4k$. By subdividing the rectangles, you can find solutions for $n$ not divisible by $4$. In those cases, the 'padding' (i.e. areas with just rectangles) doesn't need to be so large.
A "1-expression" is a formula in which you add ($+$) or multiply ($\times$) the number 1 any number of times to create a natural number. Parentheses are allowed.
$1 + 1 + ((1 + 1 + 1 + 1) \times (1 + 1 + 1 + 1 + 1)) = 22$.
This is a 1-expression in which the digit 1 appears 11 times.
$1 + ((1 + 1 + 1) \times (1 + ((1 + 1) \times (1 + 1 + 1)))) = 22$.
This is a 1-expression with "1" only used ten times. Therefore, 10 is the minimum "1-value" of 22—that is, there is no 1-expression with which you can make 22 where you use a 1 less than 10 times.
Your task is to determine the minimum 1-value of 73.
$((1 + 1)*(1+1)*(1+1)*(1+1+1)*(1+1+1))+1$, for a total of 13 ones.
This was accomplished by multiplying together the prime factors of 72, and adding one.
Hugh and Keelhaul have already given correct solutions to the specific problem posed. If anyone wants to experiment with this, here's some fairly dumb Python code to find optimal solutions by brute force.
A few empirical observations: when n is composite the best solution is usually the product of solutions for some factors of n. For n=10, n=22, n=25, n=28, n=33 there are equally good solutions using factorizations of n-1 instead. I think n=46 is the first time it's strictly better not to factorize n: 2*(2+3*(1+2*3)) costs 13, while 1+3*3*5 costs only 12. I bet there are n for which the best solution is of form a*b+c*d, but after a small amount of experimenting I haven't found one yet.
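Here is a minimal dynamic-programming version of such a brute-force search (a sketch, not necessarily the code referred to above). It relies on the fact that every 1-expression is either 1 itself, a sum of two subexpressions, or a product of two subexpressions:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_ones(n):
    """Fewest 1s needed to write n using only 1, + and *."""
    if n == 1:
        return 1
    # best split as a sum a + (n - a)
    best = min(min_ones(a) + min_ones(n - a) for a in range(1, n // 2 + 1))
    # best split as a product a * (n // a)
    for a in range(2, int(n ** 0.5) + 1):
        if n % a == 0:
            best = min(best, min_ones(a) + min_ones(n // a))
    return best

print(min_ones(22), min_ones(73))  # 10 13
```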
Here we have collected our favourite sets, from the Mandelbrot set to the Mahut-Isner set!
by Chalkdust. Published on 17 November 2016.
A set is a collection of items. Sprinkled throughout Issue 04 of Chalkdust were some of our favourite sets. Here we have collected them together and we'd really love to hear yours. You can write about them at the bottom of this post!
The figure above is the Mandelbrot set computed on a 100 $\times$ 100 grid and the figure refines the grid all the way up to 1000 $\times$ 1000 pixels.
If you want to construct my favourite set, start with the interval $[0, 1]$. Next, remove the open middle third interval. This gives you two line segments: $[0, 1/3]$ and $[2/3, 1]$. Again, delete the middle third for each remaining interval (which leaves you with four new intervals). Now repeat the final step ad infinitum.
Once you're done, you're left with the Cantor set (also called the Cantor comb). But what does the Cantor set look like? Infinitely many discrete points? An infinite collection of line segments? The answer is a bit of both, and that's because the dimension of my favourite set is neither zero nor one: it's $\ln2/\ln3\approx 0.63093$. The other wonderful feature of this set is that it's a fractal, so if you zoom in on a tiny portion, you get the Cantor set again!
My favourite set is the empty set ($\varnothing$). The empty set is the only set of its kind and contains no elements. The empty set is a subset of any set, but the only subset of the empty set is itself.
The empty set is not nothing, but is a set that contains nothing. If I had a bag with multicoloured counters in, these are the elements of a non-empty set. On the other hand, if I had a bag with no counters in, there are no elements in the set and this is an example of the empty set.
As you can see, this definition causes things to get complicated quite quickly!
If you could count forever, you would reach infinity. You might be surprised to learn that there are bigger things than this infinity. My favourite set, $\aleph_1$ (aleph one), is the smallest set that is bigger than the counting infinity.
Cantor's diagonal argument (if you've not heard of it, Google it!) can be used to show that there are more real numbers than natural numbers, and therefore that one infinite thing is bigger than another.
But $\aleph_1$ is even weirder than this. It is not known whether or not $\aleph_1$ and the real numbers are the same size. In fact, this is not just unknown, but it cannot be proven either way using the standard axioms of set theory! (The suggestion that they are the same size is called the continuum hypothesis.) So my favourite set is bigger than the smallest infinity—but we can't work out by how much.
My favourite set is the final set in the match between Nicolas Mahut and John Isner at Wimbledon in 2010. The set lasted over 11 hours with a final score of 70–68 to Isner.
Parameter estimation is one of the keystones of adaptive control; the main idea of parameter estimation is to construct a parametric model and then use optimization methods to minimize the error between the true parameter and the estimate. The least-squares algorithm is one of the common optimization methods.
I am taking an adaptive control course this semester, and in the homework after the second lecture we are going to derive the formula for the estimated parameter vector $\theta(t)$ at time $t$.
$\theta^*$ is the truth value of the parameters in the system, which is what we want to reach or approach. However, we only have knowledge of $y(t)$ and $\theta(t)$. So we are going to estimate $\theta^*$ based on what is known ($y$, $\phi$ and so on).
$y(t)$ is the output of the system, a $1\times1$ scalar, which can be measured and is known.
$\phi(t)$ is the input or reference of the system, which is also known.
$\theta(t)$ is the estimated system parameter at time $t$, which is what we want.
The cost function $J(\theta(t))$ is what we would like to minimize, so that the error between $\theta\phi$ and $y$ is minimized.
There are two algorithms that can solve this problem. The first one is the Gradient (Descent) Algorithm; the other one is what we are going to demonstrate here, the least-squares algorithm. Basically, the differences between these two algorithms are the cost functions they employ and the methods used to minimize them.
where $\theta_0=\theta(0)$ is the initial estimation of parameters, and $Q_0=Q_0^T$ is a symmetric and positive definite matrix.
A positive definite matrix is always invertible, so this formula applies in more situations.
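As an illustration, here is a minimal discrete-time recursive least-squares loop in Python; $\theta_0$ and $Q_0$ play the roles described above (the continuous-time formulas from the lecture differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_star = np.array([2.0, -1.0])   # unknown true parameters
theta = np.zeros(2)                  # theta(0) = theta_0, initial estimate
P = np.eye(2)                        # P(0) = Q_0^{-1}, here with Q_0 = I

for t in range(200):
    phi = rng.normal(size=2)                     # known regressor
    y = theta_star @ phi + 0.01 * rng.normal()   # measured output
    K = P @ phi / (1.0 + phi @ P @ phi)          # gain vector
    theta = theta + K * (y - theta @ phi)        # prediction-error update
    P = P - np.outer(K, phi @ P)                 # covariance update

print(theta)   # ~ [2, -1]
```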
Abstract: The theory of vectors and spinors in 9+1 dimensional spacetime is introduced in a completely octonionic formalism based on an octonionic representation of the Clifford algebra $\mathrm{Cl}(9,1)$. The general solution of the classical equations of motion of the CBS superparticle is given to all orders of the Grassmann hierarchy. A spinor and a vector are combined into a $3 \times 3$ Grassmann, octonionic, Jordan matrix in order to construct a superspace variable to describe the superparticle. The combined Lorentz and supersymmetry transformations of the fermionic and bosonic variables are expressed in terms of Jordan products.
Are there optimizers where it is possible to specify ordinal ranking of parameters?
Assume that $f$ is smooth ($n$-th order differentiable in each of the parameters).
An approach I often use when applying unconstrained optimisation algorithms to constrained problems is to transform the parameter space such that the constraints cannot be violated.
Of course this results in $\theta^*_1\geq\theta^*_2\geq\theta^*_3$ which isn't quite what you asked for. To get a strict ranking you'll need to bump $x_1-x^2_2$ and $x_1-x^2_2-x^2_3$ down at the last digit of precision.
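A minimal Python sketch of this reparameterization, handing the transformed problem to a general-purpose solver with no constraints; the quadratic objective is a stand-in:

```python
import numpy as np
from scipy.optimize import minimize

def theta_of_x(x):
    # theta_1 >= theta_2 >= theta_3 holds by construction
    return np.array([x[0], x[0] - x[1]**2, x[0] - x[1]**2 - x[2]**2])

def f(theta):                        # stand-in objective; use your own
    return np.sum((theta - np.array([3.0, 2.0, 1.0]))**2)

res = minimize(lambda x: f(theta_of_x(x)), x0=np.ones(3))  # unconstrained
print(theta_of_x(res.x))             # ~ [3, 2, 1], ordering guaranteed
```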
These variants of your constraints are linear, so provided that your function $f$ is well-behaved (smooth, easy to calculate, easy to compute derivatives, derivatives are well-conditioned, etc.), any constrained optimization solver should be able to solve your problem without issue.
All the $n$ and $i$ are the smallest ones. I'm just curious about how fast $n$ increases when $i$ increases. (Under $2\times 10^9$, there's no $n$ for which $i$ is larger than $2000$.) Any comments will be appreciated.
I think your problem is very difficult, more than the usual (difficult) problems about prime numbers. What you have on hand is only "brute force" with the limits of computation that this implies. Am I wrong?
If m is not a multiple of three, take its remainder on division by 3; i will have that same remainder, or be 3.
You could repeat this with modular arithmetic mod any prime.
We will explain the proof of a Carleson measure estimate on solutions of parabolic equations with real measurable time-dependent coefficients that implies that the parabolic measure is an $A_\infty$ weight.
This corresponds to the parabolic analog of a recent result by Hofmann, Kenig, Mayboroda and Pipher for elliptic equations. Our proof even simplifies theirs. As is well known, the $A_\infty$ property implies that the $L^p$ Dirichlet problem is well-posed. An important ingredient of the proof is a Kato square root property for parabolic operators on the boundary, which can be seen as a consequence of certain square function estimates applicable to Neumann and regularity problems. All this is joint work with Moritz Egert and Kaj Nyström.
Abstract: Let $f_1,\ldots, f_s$ be formal power series (respectively polynomials) in the variable $x$. We study the semigroup of orders of the formal series in the algebra $K[[f_1,\ldots, f_s]] \subseteq K[[x]]$ (respectively the semigroup of degrees of polynomials in $K[f_1,\ldots,f_s]\subseteq K[x]$). We give procedures to compute these semigroups and several applications.
The milk business is booming! Farmer John's milk processing factory consists of $N$ processing stations, conveniently numbered $1 \ldots N$ ($1 \leq N \leq 100$), and $N-1$ walkways, each connecting some pair of stations. (Walkways are expensive, so Farmer John has elected to use the minimum number of walkways so that one can eventually reach any station starting from any other station).
To try and improve efficiency, Farmer John installs a conveyor belt in each of its walkways. Unfortunately, he realizes too late that each conveyor belt only moves one way, so now travel along each walkway is only possible in a single direction! Now, it is no longer the case that one can travel from any station to any other station.
However, Farmer John thinks that all may not be lost, so long as there is at least one station $i$ such that one can eventually travel to station $i$ from every other station. Note that traveling to station $i$ from another arbitrary station $j$ may involve traveling through intermediate stations between $i$ and $j$. Please help Farmer John figure out if such a station $i$ exists.
The first line contains an integer $N$, the number of processing stations. Each of the next $N-1$ lines contains two space-separated integers $a_i$ and $b_i$ with $1 \leq a_i, b_i \leq N$ and $a_i \neq b_i$. This indicates that there is a conveyor belt that moves from station $a_i$ to station $b_i$, allowing travel only in the direction from $a_i$ to $b_i$.
If there exists a station $i$ such that one can walk to station $i$ from any other station, then output the minimal such $i$. Otherwise, output $-1$.
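A brute-force Python sketch (ample for $N \leq 100$): walk along belt directions from every station and check whether some station is reached from all of them. Input parsing is omitted.

```python
from collections import defaultdict

def first_common_sink(n, edges):
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)                # belt moves only from a to b

    def reachable(src):                 # all stations reachable from src
        seen, stack = {src}, [src]
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    reach = {j: reachable(j) for j in range(1, n + 1)}
    for i in range(1, n + 1):           # minimal such i, or -1
        if all(i in reach[j] for j in range(1, n + 1)):
            return i
    return -1

print(first_common_sink(3, [(1, 2), (3, 2)]))  # 2
print(first_common_sink(3, [(2, 1), (2, 3)]))  # -1
```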
Abstract: In this paper we propose and investigate a novel nonlinear unit, called $L_p$ unit, for deep neural networks. The proposed $L_p$ unit receives signals from several projections of a subset of units in the layer below and computes a normalized $L_p$ norm. We notice two interesting interpretations of the $L_p$ unit. First, the proposed unit can be understood as a generalization of a number of conventional pooling operators such as average, root-mean-square and max pooling widely used in, for instance, convolutional neural networks (CNN), HMAX models and neocognitrons. Furthermore, the $L_p$ unit is, to a certain degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013) which achieved the state-of-the-art object recognition results on a number of benchmark datasets. Secondly, we provide a geometrical interpretation of the activation function based on which we argue that the $L_p$ unit is more efficient at representing complex, nonlinear separating boundaries. Each $L_p$ unit defines a superelliptic boundary, with its exact shape defined by the order $p$. We claim that this makes it possible to model arbitrarily shaped, curved boundaries more efficiently by combining a few $L_p$ units of different orders. This insight justifies the need for learning different orders for each unit in the model. We empirically evaluate the proposed $L_p$ units on a number of datasets and show that multilayer perceptrons (MLP) consisting of the $L_p$ units achieve the state-of-the-art results on a number of benchmark datasets. Furthermore, we evaluate the proposed $L_p$ unit on the recently proposed deep recurrent neural networks (RNN).
This paper proposes a new activation function that computes an $L_p$ norm from multiple projections of an input vector. The p value can be learned from training examples, and can also be different for each hidden unit. The intuition is that 1) for different datasets there may exist different optimal p-values, so it makes more sense to make p tunable; 2) allowing different units to take different p-values can potentially make the approximation of decision boundaries more efficient and more flexible. The empirical results support these two intuitions, achieving comparable results on three datasets.
A generalization of pooling, but applied across channels; when the dot product of the data and weight vectors plus bias is constrained to the non-negative case, the $L_\infty$ unit is equivalent to the maxout unit.
Empirical performance is not very impressive, although there is evidence supporting the intuition.
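One plausible NumPy reading of the unit (the paper's exact normalization and learnable parameters may differ): a normalized $L_p$ norm over $k$ projections of the input, which interpolates between average-, RMS- and max-like pooling as $p$ grows.

```python
import numpy as np

def lp_unit(x, W, p):
    """((1/k) * sum_i |w_i^T x|^p)^(1/p) over the k rows of W."""
    z = W @ x                         # k projections of the input
    return np.mean(np.abs(z) ** p) ** (1.0 / p)

x = np.array([1.0, -2.0, 0.5])
W = np.eye(3)                         # identity projections, for illustration
print(lp_unit(x, W, 1))               # mean(|x|): average pooling
print(lp_unit(x, W, 2))               # root-mean-square pooling
print(lp_unit(x, W, 100))             # ~ max(|x|) = 2: max pooling as p grows
```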
Ardentov A. A., Le Donne E., Sachkov Y. L.
This paper is a continuation of the work by the same authors on the Cartan group equipped with the sub-Finsler $\ell_\infty$ norm. We start by giving a detailed presentation of the structure of bang-bang extremal trajectories. Then we prove upper bounds on the number of switchings on bang-bang minimizers. We prove that any normal extremal is either bang-bang, or singular, or mixed. Consequently, we study mixed extremals. In particular, we prove that every two points can be connected by a piecewise smooth minimizer, and we give a uniform bound on the number of such pieces.
We study correlations in fermionic systems with long-range interactions in thermal equilibrium. We prove an upper bound on the correlation decay between anti-commuting operators based on long-range Lieb-Robinson type bounds. Our result shows that correlations between such operators in fermionic long-range systems of spatial dimension $D$ with at most two-site interactions decaying algebraically with the distance with an exponent $\alpha \geq 2\,D$, decay at least algebraically with an exponent arbitrarily close to $\alpha$. Our bound is asymptotically tight, which we demonstrate by numerically analysing density-density correlations in a 1D quadratic (free, exactly solvable) model, the Kitaev chain with long-range interactions. Away from the quantum critical point correlations in this model are found to decay asymptotically as slowly as our bound permits.
I like short descriptions and I think you do too.
Let's say you have a number, N. Now try to find the last four digits of N!.
I think you know that $ N! = (1 \times 2 \times 3 \times ... \times N) $.
The input begins with a single integer indicating the number of test cases T (1 ≤ T ≤ 100). Each of the following test cases consists of a number N (0 ≤ N ≤ $10^{18}$).
For each test case output the last four digits of N factorial (N!).
If N! has fewer than 4 digits, don't forget to add 0s to the left.
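A minimal Python solution sketch. The key observation: for $N \geq 20$ the product already contains at least four factors of both 2 and 5, so $N! \equiv 0 \pmod{10^4}$, which also keeps the loop cheap for N as large as $10^{18}$.

```python
def last_four_of_factorial(n):
    if n >= 20:               # at least four trailing zeros from here on
        return "0000"
    result = 1
    for k in range(2, n + 1):
        result = (result * k) % 10000
    return "%04d" % result    # left-pad with zeros as required

for n in (0, 5, 15, 10**18):
    print(last_four_of_factorial(n))   # 0001, 0120, 8000, 0000
```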
and let $C_0\subseteq C(X)$ be a subring containing all constants and separating the points of $X$, i.e. for any two different points $x_1, x_2\in X$ there exists a function $f\in C_0$ for which $f(x_1)\neq f(x_2)$. Then $[C_0]=C(X)$, i.e. every continuous function on $X$ is the limit of a uniformly converging sequence of functions in $C_0$.
The expository article [a4] is recommended in particular.
We analyse an absorption event within the H$\alpha$ line wings, identified as a surge, and the co-spatial evolution of an EUV brightening, with spatial and temporal scales analogous to a... 27/04/2011 - The purpose of the present study was to evaluate effects of regular practice of sun salutation on muscle strength, general body endurance and body composition. Methods: Subjects (49 male and 30 female) performed 24 cycles of sun salutation, 6 days a week for 24 weeks.
and slow gas exchange with the atmosphere. Gas exchange is principally by diffusion, whereby gases move down a partial pressure gradient from higher to lower partial pressure. In the case of O$_2$, movement is from the higher atmospheric partial pressure of O$_2$ to the lower soil air partial pressure of O$_2$. The opposite is true for CO$_2$ and H$_2$O. Following a heavy rain the O$_2$ partial pressure in soil...
Astronomy 291 54: The Sun's Angular Momentum - Whereas the Sun contains 99.9% of the mass of the Solar System, Jupiter has most of the angular momentum.
Atmospheric entry is the movement of an object from outer space into and through the gases of an atmosphere of a planet, dwarf planet, or natural satellite.
Characterization and Modeling of Advanced Modified Surfaces.
A promising approach to surface modification involves diamondlike carbon (DLC) coatings with "functionally-graded surfaces" (FGS) as substrates. Titanium nitride substrates have a great potential as a FGS for the DLC coating, since titanium and titanium alloy surfaces processed by enhanced glow discharge nitriding develop a nitrogen concentration profile that results in gradual increase in the material hardness in the surface region. An investigation of the atomic structure of DLC films and the surface layer structure produced in the enhanced glow discharge nitriding was conducted in the course of this work. DLC and Si-DLC films were found to be mainly amorphous, dense and of high hardness, with featureless and very smooth surfaces. For the DLC films, the sp$^3$/sp$^2$ ratio varied between 3.2 and 4.1. A microstructure that can be described as small graphitelike clusters interconnected by a network of sp$^3$-bonded carbon was suggested for these films. Characterization of the Si-DLC films revealed a wide variation in the sp$^3$/sp$^2$ ratio, between 1.5 and 5.4. The effect of Si atoms incorporated in the DLC structure seems to be the prevention of aromatic clustering and promotion of the formation of sp$^3$ bonds. A structural model consisting of a mixed sp$^2$-sp$^3$ carbon network with the C(sp$^2$) atoms present in olefinic rather than aromatic form was suggested. X-ray absorption examination of the nitrided surfaces demonstrated an increase of the nearest-neighbor N coordination numbers and higher phase fractions of $\delta$-TiN as the particle energy and current density were increased. Processing conditions corresponding to the energies of the bombarding particles around 1 keV resulted in relatively thick and continuous TiN layers with a structure virtually identical to that of the TiN standard. A two-phase model of the outer layer, describing the structure as a mixture of $\delta$-TiN and $\alpha$-Ti, was proposed. This model was found to be in excellent agreement with experimental data for samples processed at the particle energies of approximately 1 keV and above. The present results clearly show that the bombarding flux energy plays the key role in the formation of the outer layer structure and thus, the migration of nitrogen into the substrate.
Palshin, Vadim Gennadievich, "Characterization and Modeling of Advanced Modified Surfaces." (1998). LSU Historical Dissertations and Theses. 6754.
Abstract: Many practical machine learning tasks employ very deep convolutional neural networks. Such large depths pose formidable computational challenges in training and operating the network. It is therefore important to understand how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers. In addition, it is desirable that the feature extractor generated by the network be informative in the sense of the only signal mapping to the all-zeros feature vector being the zero input signal. This "trivial null-set" property can be accomplished by asking for "energy conservation" in the sense of the energy in the feature vector being proportional to that of the corresponding input signal. This paper establishes conditions for energy conservation (and thus for a trivial null-set) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates. Specifically, we consider general scattering networks employing the modulus non-linearity and we find that under mild analyticity and high-pass conditions on the filters (which encompass, inter alia, various constructions of Weyl-Heisenberg filters, wavelets, ridgelets, ($\alpha$)-curvelets, and shearlets) the feature map energy decays at least polynomially fast. For broad families of wavelets and Weyl-Heisenberg filters, the guaranteed decay rate is shown to be exponential. Moreover, we provide handy estimates of the number of layers needed to have at least $((1-\varepsilon)\cdot 100)\%$ of the input signal energy be contained in the feature vector.
Is this the best hyperbolic fit to this data?
This was a bit less than I was anticipating, so I would like to know if there is a better way to fit a hyperbolic curve to my data. I think my $R^2$ calculation is OK since it works well with real Excel trendlines.
That is not how I would do it but it gives a pretty good result for a fit of the type you are attempting.
Do you have a theoretical reason for seeking a hyperbolic fit? There are other models that will give better fits, but they are of no use if you are forced to a hyperbolic model.
The attachment shows plots of your data, the hyperbolic fit and the least squares fit of the above model ($r^2=0.966$).
The fit is obtained using the non-linear solver in Gnumeric, there is one for Excel also but it is not always installed.
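For reference, a similar non-linear least-squares fit can be run in Python. The three-parameter model below is an assumption consistent with the thread (parameters m, a, b, with limit m as x grows), and the data points are placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(x, m, a, b):
    return m + a / (x + b)            # assumed hyperbolic model

xdata = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # placeholder data
ydata = np.array([5.1, 3.0, 2.1, 1.6, 1.3])

(m, a, b), _ = curve_fit(hyperbola, xdata, ydata, p0=(1.0, 1.0, 1.0))
resid = ydata - hyperbola(xdata, m, a, b)
r2 = 1 - resid @ resid / np.sum((ydata - ydata.mean()) ** 2)
print(m, a, b, r2)
```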
Wow that's awesome. I do have a reason to look for the best possible hyperbolic fit, but I am also interested in the best possible fit of any kind of simple function.
Is it reasonable to believe that a better hyperbolic curve could have a significantly better $r^2$? Like 0.6 or 0.8?
I tried a power law function and that had an $r^2 = 0.95$, so yours is indeed better.
I will try out the non-linear solver, although I'm not really sure how I can get m, a and b using it.
I wonder, does this function have a limit as $x \to \infty$?
No the improvement in $r^2$ for a non-linear fit hyperbolic model is negligible.
Thanks, then that's that, but I still would like to know (see my second edit) the limit of the function you have found.
The limit as $x \to \infty$ is $m$. | CommonCrawl |
arxiv.org. math. Cornell University, 2013. No. 1301.0342.
Bufetov A. I., Mkrtchyan S., Scherbina M., Soshnikov A.
Panov V. math. arXiv. Cornell University, 2017. No. 1703.10463.
Kazakov M., Kalyagin V. A. In bk.: Models, Algorithms and Technologies for Network Analysis, Springer Proceedings in Mathematics & Statistics. Vol. 156. Switzerland: Springer, 2016. P. 135-156.
Bufetov A. Journal of Mathematical Physics. 2013. Vol. 54. No. 113302. P. 1-10.
To an $N \times N$ real symmetric matrix Kerov assigns a piecewise linear function whose local minima are the eigenvalues of this matrix and whose local maxima are the eigenvalues of its $(N-1) \times (N-1)$ submatrix. We study the scaling limit of Kerov's piecewise linear functions for Wigner and Wishart matrices. For Wigner matrices the scaling limit is given by the Vershik-Kerov-Logan-Shepp curve which is known from asymptotic representation theory. For Wishart matrices the scaling limit is also explicitly found, and we explain its relation to the Marchenko-Pastur limit spectral law.
Marshakov A., Mironov A. D., Morozov A. Yu. Journal of Geometry and Physics. 2011. Vol. 61. P. 1203-1222.
Bufetov A. I., Mkrtchyan S., Scherbina M. et al. Journal of Statistical Physics. 2013. Vol. 152. No. 1. P. 1-14.
QRHFIRR The symmetries of the orbitals which are involved in the QRHF orbital occupation alteration. A minus sign indicates that a $\beta$ orbital of the associated symmetry will be depopulated, while a positive sign indicates that an $\alpha$ orbital will be populated.
Data type: floating point. Dimension: QRHFTOT. Written by: xjoda.
How does fmincon in MATLAB calculate gradients?
I am trying to solve numerically a constrained optimisation problem in MATLAB, and I am wondering how the fmincon function calculates gradients when one isn't provided. Does anyone here know, or know how I might be able to find out?
Running the optimisation problem takes more time than I'd like it to, so I was hoping to speed it up by providing the gradient analytically. However, when I do this, I end up with wildly different solutions that seem less plausible than the solution that MATLAB generates when I do not provide the gradient.
As a check, I used the CheckGradients option in fmincon. Predictably, the gradient I provided did not pass this test. The same happens even when I set FiniteDifferenceType to 'central'. One obvious explanation is that the derivatives I provided truly are incorrect. However, I've gone over them several times and I'm fairly certain they are not.
As a sanity check, I tried to calculate the gradient of my objective numerically, using gradient, which the documentation suggests is calculated using finite differences. Unfortunately, the output of gradient is nowhere near the gradient calculated by fmincon.
I'm really not sure what's going on, and I'd appreciate it if anyone can help shed light on this situation.
Edit: I'm more interested in why fmincon and gradient produce different numerical derivatives, despite ostensibly both being calculated using finite differences. Unless I've misunderstood the options, the difference persists even when I give them the same finite difference step size.
$V(\mathbf p, \mathbf q)$ is actually the value function of some linear programming problem, and I've written a script that invokes linprog to calculate the value of my objective. $(\mathbf p, \mathbf q)$ also enters linearly into the objective and constraints in that problem. $C$, however, is non-linear.
The fmincon documentation is fairly clear on HOW it calculates gradients. Specifically, the documentation for the FiniteDifferenceType and FiniteDifferenceStepSize options explain this in some detail. fmincon is using either forward (default) or central difference formulas with the step size selected according to the documentation for FiniteDifferenceStepSize.
So the relevant question is not HOW are they calculated but why do the gradients calculated by finite difference differ so significantly from those calculated from an analytical expression? Usually this is caused by the finite difference step size being either too large or too small for the function being numerically "differentiated." The problem with a too-large step size is obvious. The problem with a too-small step size is that roundoff error makes the calculation unreliable. Some experimentation with different step sizes is often needed to find a value that is appropriate for a particular function.
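For a sanity check outside MATLAB, here is a small Python sketch of forward and central differencing with step sizes scaled roughly the way the fmincon documentation describes (sqrt(eps) for forward, eps^(1/3) for central, scaled by the magnitude of x); fmincon's exact scaling may differ:

```python
import numpy as np

def fd_gradient(f, x, central=True):
    eps = np.finfo(float).eps
    step = eps ** (1 / 3) if central else np.sqrt(eps)
    h = step * np.maximum(1.0, np.abs(x))      # per-coordinate step size
    g = np.empty_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = h[k]
        if central:
            g[k] = (f(x + e) - f(x - e)) / (2 * h[k])
        else:
            g[k] = (f(x + e) - f(x)) / h[k]
    return g

f = lambda x: np.sum(x ** 2) + np.sin(x[0])
x0 = np.array([0.3, -1.2])
print(fd_gradient(f, x0))   # ~ [2*0.3 + cos(0.3), -2.4]
```

Comparing the output of such a routine at several step sizes against your analytical expression is a quick way to see whether the mismatch comes from the step size or from the derivation itself.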
This is explained in more detail in this paper by Iott and Haftka where they discuss an approach for step size selection.
A formula system can transform a formula natural language representation ("NLR") into a representation which shows the formula in traditional mathematical notation. This transformation can include creating a state machine with transition mappings between states that match to initial parts of the NLR. These transition mappings can include global transition mappings that are first attempted to be matched to the beginning of the NLR and then state specific transition mappings can be matched to the NLR. The formula system can consume the NLR, transitioning from state to state as indicated by the transition mappings and removing the matched initial part from the NLR, until the NLR has been fully consumed. In some cases, the formula system can recursively or iteratively create additional state machines to consume portions of the NLR. Some states provide a result (e.g. portion of a formula representation) which are combined to create the final formula representation.
1. A method for transforming a natural language representation of a formula into a formula representation, the method comprising: receiving the natural language representation of the formula; initializing a state machine; consuming multiple parts of the natural language representation of the formula, until the natural language representation of the formula has been fully consumed, wherein consuming each particular part of the multiple parts is performed by: matching at least some of the particular part of the natural language representation of the formula to a transition mapping, wherein the transition mapping includes a mapping destination that specifies a next state to which the state machine will transition; in response to the matching, instantiating the next state indicated by the mapping destination of the matched transition mapping; and storing any state results of the next state, wherein at least two states are reached, during the consuming of the multiple parts of the natural language representation of the formula, that produce state results; and combining the stored state results of the at least two states into the formula representation which, when rendered, displays the formula in mathematical notation.
2. The method of claim 1 further comprising, during the consuming of each identified part of at least one of the multiple parts, and in response to instantiating the next state indicated by the mapping destination for the identified part: recursively transforming a portion of the natural language representation of the formula into a sub-formula representation that, when rendered, displays part of the formula in mathematical notation, wherein the recursion consumes the portion of the natural language representation; and wherein the sub-formula representation is at least part of the state results for the identified part.
3. The method of claim 1, wherein the instantiating of each particular state of the at least two states includes generating a part of the state results of that particular state; and wherein another part of the state results of that particular state are generated by performing state end actions taken in response to an indication that the particular state is about to end.
4. The method of claim 3, wherein the formula representation is a block of HTML code or XML code; and wherein the formula representation is associated with CSS specifying how the rendering of the HTML or XML block is to occur.
5. The method of claim 1 wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by: comparing A) an initial section of the particular part to B) patterns of multiple transition mappings, until the pattern of one of the multiple transition mappings matches the initial section of the particular part, wherein at least some of the patterns in the multiple transition mappings are regular expressions.
6. The method of claim 1 wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by comparing A) the at least some of the particular part to B) patterns in a set of multiple transition mappings that are global to multiple of the instantiated states.
7. The method of claim 1 wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by comparing A) the at least some of the particular part to B) patterns in a set of multiple transition mappings that are specific to a type of a current state of the state machine.
8. The method of claim 1 wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by: first comparing A) the at least some of the particular part to B) patterns in a state specific set of multiple transition mappings that are specific to a type of a current state of the state machine, in a first order defined among the state specific set of multiple transition mappings; determining that none of the state specific set of multiple transition mappings match the at least some of the particular part; and in response to the determining, comparing C) the at least some of the particular part to D) patterns in a global set of multiple transition mappings that are global to multiple of the instantiated next states, in a second order among the global set of multiple transition mappings.
9. The method of claim 8 further comprising: identifying that none of the global set of multiple transition mappings match the at least some of the particular part; and in response to the identifying, modifying the particular part of the natural language representation of the formula by removing one or more characters from the beginning of the particular part of the natural language representation of the formula; and comparing E) the at least some of the modified particular part to either or both of F) the patterns in the state specific set of multiple transition mappings, in the first order; or G) the patterns in the global set of multiple transition mappings, in the second order.
10. The method of claim 1, where producing the state results for at least one particular state of the at least two states includes filling in a template, that is associated with the particular state, with content extracted from the particular part of the natural language representation of the formula that was matched to a transition mapping that caused the particular state to be instantiated.
11. A computer-readable storage medium storing instructions that, when executed by a computing system, cause the computing system to perform operations for transforming a natural language representation of a formula into a formula representation, the operations comprising: receiving the natural language representation of the formula; consuming multiple parts of the natural language representation of the formula, wherein consuming each particular part of the multiple parts is performed by: matching at least some of the particular part of the natural language representation of the formula to a transition mapping, wherein the transition mapping includes a mapping destination that specifies a next state; in response to the matching, instantiating the next state indicated by the mapping destination of the matched transition mapping; and storing any state results of the next state, wherein at least two states are reached, during the consuming of the multiple parts of the natural language representation of the formula, that produce state results; and combining the stored state results of the at least two states into the formula representation which, when rendered, displays the formula in mathematical notation.
12. The computer-readable storage medium of claim 11, wherein the operations further comprise, during the consuming of each identified part of at least one of the multiple parts, and in response to instantiating the next state indicated by the mapping destination for the identified part: recursively transforming a portion of the natural language representation of the formula into a sub-formula representation that, when rendered, displays part of the formula in mathematical notation, wherein the recursion consumes the portion of the natural language representation; and wherein the sub-formula representation is at least part of the state results for the identified part.
13. The computer-readable storage medium of claim 11, wherein the instantiating of each particular state of the at least two states includes generating a part of the state results of that particular state; and wherein another part of the state results of that particular state are generated by performing state end actions taken in response to an indication that the particular state is about to end.
14. The computer-readable storage medium of claim 11, wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by comparing A) the at least some of the particular part to B) patterns in a set of multiple transition mappings that are global to multiple of the instantiated states.
15. The computer-readable storage medium of claim 11, wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by comparing A) the at least some of the particular part to B) patterns in a set of multiple transition mappings that are specific to a type of a current state.
16. The computer-readable storage medium of claim 11, wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by: first comparing A) the at least some of the particular part to B) patterns in a state specific set of multiple transition mappings that are specific to a type of a current state of the state machine, in a first order defined among the state specific set of multiple transition mappings; determining that none of the state specific set of multiple transition mappings match the at least some of the particular part; and in response to the determining, comparing C) the at least some of the particular part to D) patterns in a global set of multiple transition mappings that are global to multiple of the instantiated next states, in a second order among the global set of multiple transition mappings.
17. The computer-readable storage medium of claim 16, wherein the operations further comprise: identifying that none of the global set of multiple transition mappings match the at least some of the particular part; and in response to the identifying, modifying the particular part of the natural language representation of the formula by removing one or more characters from the beginning of the particular part of the natural language representation of the formula; and comparing E) the at least some of the modified particular part to either or both of F) the patterns in the state specific set of multiple transition mappings, in the first order; or G) the patterns in the global set of multiple transition mappings, in the second order.
18. A computing system for transforming a natural language representation of a formula into a formula representation, the system comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the computing system to perform operations comprising: consuming multiple parts of the natural language representation of the formula, wherein consuming each particular part of the multiple parts is performed by: matching at least some of the particular part of the natural language representation of the formula to a transition mapping, wherein the transition mapping includes a mapping destination that specifies a next state; in response to the matching, instantiating the next state indicated by the mapping destination of the matched transition mapping; and storing any state results of the next state, wherein at least two states are reached, during the consuming of the multiple parts of the natural language representation of the formula, that produce state results; and combining the stored state results of the at least two states into the formula representation which, when rendered, displays the formula in mathematical notation.
19. The system of claim 18, wherein the instantiating of each particular state of the at least two states includes generating a part of the state results of that particular state; and wherein another part of the state results of that particular state are generated by performing state end actions taken in response to an indication that the particular state is being exited.
20. The system of claim 18, wherein the matching of the at least some of the particular part of the natural language representation of the formula to the transition mapping is performed by: first comparing A) the at least some of the particular part to B) patterns in a state specific set of multiple transition mappings that are specific to a type of a current state of the state machine, in a first order defined among the state specific set of multiple transition mappings; determining that none of the state specific set of multiple transition mappings match the at least some of the particular part; and in response to the determining, comparing C) the at least some of the particular part to D) patterns in a global set of multiple transition mappings that are global to multiple of the instantiated next states, in a second order among the global set of multiple transition mappings.
This application claims priority to U.S. Provisional Patent Application No. 62/481,877, titled "Method of Producing Mathematical Text," which is herein incorporated by reference in its entirety.
The present disclosure is directed to using a digital state machine to transform a natural language representation of a formula into a representation that uses mathematical notation when rendered.
Digital mathematical notation is a field that was developed through the 1980s to facilitate the creation and exchange of mathematical models on a computer through a computer-friendly format. This field is integral to professions in the academic, scientific, and financial sectors. All competitive models, however, rely on creating a user interface that has a pseudo keyboard layout to prompt the user to build the equation either by "dragging and dropping" content or by clicking on mathematical formulae digital buttons and replacing any derivative fields (i.e. a formula creation wizard). A well-known example of the formula creation wizard method is Microsoft Word's Equation Macro.
There are multiple shortcomings to the formula creation wizard approach. Users must context switch from keyboard input to utilizing a mouse or touchpad to move the digital cursor to create the mathematical formula, a slow and inefficient process. This slows the ability of individuals in mathematics-relevant sectors to enter their intended equations on any integrated interface. This has particular impact in the education sector, where math is ever-present and effective note-taking must be performed quickly. The relative slowness of contemporary formula creation wizard models for inputting math onto the computer makes the manipulation of formulas both difficult and unnatural for end users.
There is one method, however, that allows sole use of the keyboard for writing mathematical structures. This model is known as LaTeX, which is derived from the TeX language developed by Donald Knuth and is maintained by The TeX Users Group (TUG). However, there are multiple shortcomings to this approach that rival those of drag and drop or click and build equation interfaces. For example, users must first learn the LaTeX language, which can involve reading a handbook and being aware of sub-practices such as compiling, mark-up syntax, and package inclusion. LaTeX is solely a programming language and therefore is not admissible as a spoken or easily teachable natural language. Further, the user must download or utilize a LaTeX converter, and setting up the digital "architecture" of the LaTeX document requires time and technical expertise. In addition, a LaTeX document has the shortcoming of being unreadable by the casual user, deterring users that are not devoted to learning the language or downloading a compiler. These shortcomings of LaTeX are especially pertinent in the educational sector, where students are not yet at a technical level of mathematics that would be equivalent to learning a full markup language such as LaTeX.
FIG. 1 is a block diagram illustrating an overview of devices on which some implementations can operate.
FIG. 2 is a block diagram illustrating an overview of an environment in which some implementations can operate.
FIG. 3 is a block diagram illustrating components which, in some implementations, can be used in a system employing the disclosed technology.
FIG. 4 is a flow diagram illustrating a process used in some implementations for transforming a natural language representation of a formula into a formula representation.
FIG. 5 is a flow diagram illustrating a process used in some implementations for matching a part of a natural language representation of a formula to a state transition mapping.
FIG. 6 is a flow diagram illustrating a process used in some implementations for instantiating a new state indicated by a transition mapping destination.
FIG. 7 is a conceptual diagram illustrating an example system that converts a natural language representation of a formula into a formula representation.
FIG. 8 is a conceptual diagram illustrating an example of state results during a transformation of a natural language representation of a formula into a formula representation.
FIG. 9 is a conceptual diagram illustrating an example of a portion of a state machine showing transitions between states in relation to text describing an integral formula.
FIG. 10 shows several conceptual diagrams illustrating examples of textual NLR input and resulting formula representation outputs.
Embodiments of a formula system are described herein that can transform a natural language representation of a formula (referred to herein as a "NLR") into a formula representation. A "natural language" as used herein is a language spoken or written by humans, as opposed to a programming language or a machine language. (See the Microsoft Computer Dictionary, 5th Edition.) A formula representation is a representation of a formula that, when rendered, is provided in traditional mathematical notation. The formula system allows a user to easily obtain a formula representation by entering a NLR version of the formula, e.g. through a keyboard (e.g. a physical external device or a digital on-screen keyboard), through spoken words (e.g. entered through a microphone), through text recognition of handwriting or a document, etc. For example, a user can enter "an integral from 0 to 100 of x squared" and the formula system can automatically produce a formula representation of: ∫₀¹⁰⁰ x² dx.
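As a toy illustration of this input/output relationship only (not the state-machine procedure described below, and using hypothetical Python code that is not part of this disclosure), a single rewrite rule can map one such phrase to renderable HTML:

```python
import re

# Toy example: one rewrite rule mapping an NLR phrase to HTML.
# The full system described below uses a state machine instead.
def toy_transform(nlr: str) -> str:
    return re.sub(r"(\w+) squared", r"\1<sup>2</sup>", nlr)

print(toy_transform("x squared"))  # -> x<sup>2</sup>
```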
The formula system can perform this transformation by creating a state machine that has transition mappings between states that can match to parts of a NLR. In some implementations, the formula system first attempts to match an initial part of the NLR to a set of transition mappings local to a current state of the state machine, and if no matches are found, to global transition mappings that apply to multiple of the states of the state machine. The formula system can consume the NLR, transitioning from state to state as indicated by the mappings and removing the matched initial part from the NLR, until the NLR has been fully processed. In some cases, as part of operating a current state, the formula system can recursively or iteratively create additional state machines to consume portions of the NLR associated with that state. Each state can provide a result (e.g. a portion of a formula representation) or context for other states. Results from the various states and state machines can be combined to create the final formula representation. In various implementations, the formula representation can be an image, a markup-language version of the formula (e.g. in HTML or XML) with a set of instructions (e.g. CSS) for displaying the markup-language version as a formula (e.g. in a browser), a block of LaTeX markup (which can later be rendered either using LaTeX or through a further conversion, e.g. to HTML/CSS, for rendering), or another data object that is configured to output a formula representation (e.g. input for a Microsoft Word macro that will create a formula object in the Word interface).
In some versions of the prior art, generating formula representations that use traditional mathematical notation is slow due to the need to use multiple input devices and difficult context switches to a formula creation wizard. In other versions of the prior art, entering a formula can be a cryptic process of entering a programming language representation of the formula. Such prior art systems for pure keyboard entry of formulas have a high barrier to entry as they require special training and programming architecture while also being error prone, as users have to convert what they want to show to the unnatural programming language format, increasing the cognitive burden. The formula system disclosed herein provides a technical improvement over these systems to make formula entry fast through a single input device while increasing accuracy by using natural language input. These improvements are realized by implementing the computing procedures described below that transform natural language representations of a formula into a formula representation that shows traditional mathematical notation. This formula system is the first capable of utilizing natural language input as the sole form of entry to form mathematical equations, eliminating the delay of using a formula creation wizard while also eliminating the need for user entry of complicated and error-prone programming or mark-up language.
Several implementations are discussed below in more detail in reference to the figures. FIG. 1 is a block diagram illustrating an overview of devices on which some implementations of the disclosed technology can operate. The devices can comprise hardware components of a device 100 that can convert a natural language representation of a formula into a formula representation. Device 100 can include one or more input devices 120 that provide input to the CPU(s) (processor) 110, notifying it of actions. The actions can be mediated by a hardware controller that interprets the signals received from the input device and communicates the information to the CPU 110 using a communication protocol. Input devices 120 include, for example, a mouse, a keyboard, a touchscreen, an infrared sensor, a touchpad, a wearable input device, a camera- or image-based input device, a microphone, or other user input devices.
CPU 110 can be a single processing unit or multiple processing units in a device or distributed across multiple devices. CPU 110 can be coupled to other hardware devices, for example, with the use of a bus, such as a PCI bus or SCSI bus. The CPU 110 can communicate with a hardware controller for devices, such as for a display 130. Display 130 can be used to display text and graphics. In some implementations, display 130 provides graphical and textual visual feedback to a user. In some implementations, display 130 includes the input device as part of the display, such as when the input device is a touchscreen or is equipped with an eye direction monitoring system. In some implementations, the display is separate from the input device. Examples of display devices are: an LCD display screen, an LED display screen, a projected, holographic, or augmented reality display (such as a heads-up display device or a head-mounted device), and so on. Other I/O devices 140 can also be coupled to the processor, such as a network card, video card, audio card, USB, firewire or other external device, camera, printer, speakers, CD-ROM drive, DVD drive, disk drive, or Blu-Ray device.
The CPU 110 can have access to a memory 150 in a device or distributed across multiple devices. A memory includes one or more of various hardware devices for volatile and non-volatile storage, and can include both read-only and writable memory. For example, a memory can comprise random access memory (RAM), various caches, CPU registers, read-only memory (ROM), and writable non-volatile memory, such as flash memory, hard drives, floppy disks, CDs, DVDs, magnetic storage devices, tape drives, device buffers, and so forth. A memory is not a propagating signal divorced from underlying hardware; a memory is thus non-transitory. Memory 150 can include program memory 160 that stores programs and software, such as an operating system 162, formula transformation system 164, and other application programs 166. Memory 150 can also include data memory 170 that can include transition mappings, stored results from visited states of a state machine, configuration data, settings, user options or preferences, etc., which can be provided to the program memory 160 or any element of the device 100.
Some implementations can be operational with numerous other computing system environments or configurations. Examples of computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, handheld or laptop devices, cellular telephones, wearable electronics, gaming consoles, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, or the like.
FIG. 2 is a block diagram illustrating an overview of an environment 200 in which some implementations of the disclosed technology can operate. Environment 200 can include one or more client computing devices 205A-D, examples of which can include device 100. Client computing devices 205 can operate in a networked environment using logical connections through network 230 to one or more remote computers, such as a server computing device.
Client computing devices 205 and server computing devices 210 and 220 can each act as a server or client to other server/client devices. Server 210 can connect to a database 215. Servers 220A-C can each connect to a corresponding database 225A-C. As discussed above, each server 220 can correspond to a group of servers, and each of these servers can share a database or can have their own database. Databases 215 and 225 can warehouse (e.g. store) information. Though databases 215 and 225 are displayed logically as single units, databases 215 and 225 can each be a distributed computing environment encompassing multiple computing devices, can be located within their corresponding server, or can be located at the same or at geographically disparate physical locations.
FIG. 3 is a block diagram illustrating components 300 which, in some implementations, can be used in a system employing the disclosed technology. The components 300 include hardware 302, general software 320, and specialized components 340. As discussed above, a system implementing the disclosed technology can use various hardware including processing units 304 (e.g. CPUs, GPUs, APUs, etc.), working memory 306, storage memory 308 (local storage or as an interface to remote storage, such as storage 215 or 225), and input and output devices 310. In various implementations, storage memory 308 can be one or more of: local devices, interfaces to remote storage devices, or combinations thereof. For example, storage memory 308 can be a set of one or more hard drives (e.g. a redundant array of independent disks (RAID)) accessible through a system bus or can be a cloud storage provider or other network storage accessible via one or more communications networks (e.g. a network accessible storage (NAS) device, such as storage 215 or storage provided through another server 220). Components 300 can be implemented in a client computing device such as client computing devices 205 or on a server computing device, such as server computing device 210 or 220.
General software 320 can include various applications including an operating system 322, local programs 324, and a basic input output system (BIOS) 326. Specialized components 340 can be subcomponents of a general software application 320, such as local programs 324. Specialized components 340 can include pre-transformer and post-transformer 344, formula transition state machine 346, transition mapper 348, state result combination engine 350, and components which can be used for providing user interfaces, transferring data, and controlling the specialized components, such as interface 342. In some implementations, components 300 can be in a computing system that is distributed across multiple computing devices or can be an interface to a server-based application executing one or more of specialized components 340.
Pre-transformer and post-transformer 344 can adjust an input NLR, e.g. by removing unnecessary words such as "the" or converting certain phrases to phrases used in transitions between states of state machine 346. Pre-transformer and post-transformer 344 can also adjust formula results, such as by removing unnecessary parentheses, associating a CSS script with an HTML block or adding CSS inline to such an HTML block, or converting a formula representation into an image.
Formula transition state machine 346 can include a state machine (e.g. implemented as a set of extensions to a state class) that tracks which state is the current state, where entering and exiting each state can produce state results. State results, for example, can include filling in a template for the current state with a portion of a natural language representation of a formula that was passed to the state. In some implementations, evaluating a state can include recursively implementing a new version of transition state machine 346 to process sub-strings of the natural language representation.
Transition mapper 348 can control which state is transitioned to next, from the current state. Transition mapper 348 can select a next state by matching an initial portion of the natural language representation of a formula to a transition mapping. In various cases, the match can be made to one of a set of global transition mappings or to one of a set of transition mappings specific to the current state. Each transition mapping can have a pattern portion specifying a pattern to match to the beginning of the natural language representation and can have a mapping destination, specifying a state for formula transition state machine 346 to transition to next.
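For illustration, a transition mapping of this kind could be sketched as follows (Python, with illustrative names and patterns that are not taken from the disclosure):

```python
import re
from dataclasses import dataclass

# A transition mapping: a pattern matched against the beginning of the
# NLR, plus a mapping destination naming the next state.
@dataclass
class TransitionMapping:
    pattern: str      # regular expression, anchored to the start of the NLR
    destination: str  # name of the state to transition to next

    def match(self, nlr: str):
        return re.match(self.pattern, nlr)

integral_mapping = TransitionMapping(r"integral\b", "IntegralState")
m = integral_mapping.match("integral from 0 to 10 of 2x")
print(m.group(0) if m else "no match")  # -> integral
```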
State result combination engine 350 can take stored results, generated from various states, and combine them into an overall state result for a particular state or the state machine. This combining can include, for example, concatenation of state results.
Those skilled in the art will appreciate that the components illustrated in FIGS. 1-3 described above, and in each of the flow diagrams discussed below, may be altered in a variety of ways. For example, the order of the logic may be rearranged, substeps may be performed in parallel, illustrated logic may be omitted, other logic may be included, etc. In some implementations, one or more of the components described above can execute one or more of the processes described below.
FIG. 4 is a flow diagram illustrating a process 400 used in some implementations for transforming a natural language representation of a formula into a formula representation. Process 400 begins at block 402 and continues to block 404. At block 404, process 400 can receive a natural language representation of a formula (NLR). In various implementations, this NLR can be input through a keyboard, microphone, handwriting, extraction from a document, etc. In some implementations, as a user enters natural language into an interface (e.g. a word processing system, browser-based or other online application, mobile app, or other system that receives natural language) process 400 can be automatically re-executed continuously or at intervals, analyzing and converting sections of user input. In some implementations, process 400 can recognize keywords or phrases to select a portion of the input as a NLR on which to initiate the remainder of process 400. In some implementations, the user can indicate that process 400 should be performed on a recent or a selected portion of input, e.g. by actuating a digital "convert" button that instructs the application to convert the NLR to a formula representation. In some implementations, a combination of these procedures is used. For example, the system can monitor for key phrases in entered text, and upon recognizing one, can show a context tool near the phrase, the actuation of which initiates process 400 with a NLR related to the key phrase. Key phrases, for example, can include words (e.g. integral, exponent, matrix, formula, equation, etc.) or characters (e.g. +, −, ^, *, etc.).
At block 406, process 400 can perform pre-transformation procedures on the NLR. Pre-transforming can take various forms such as removing unnecessary words, converting equivalent words to a common format, or adding or removing likely spacing or structural marks. For example, pre-transforming can convert each of "to the," "take the exponent," "take a power of" and other ways of indicating an exponent to the common symbol "^". As another example, pre-transforming can remove all instances of "the" except where it is part of the phrase "to the," indicating exponentiation. As a further example, spacing in the NLR can be normalized (e.g. "1 +2" can be converted to "1 + 2"), so consistent spacing is used. In some implementations, pre-transforming can include adding a particular "start of string" indicator to the beginning of the NLR or adding a particular "end of string" indicator to the end of the NLR.
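A minimal sketch of such a pre-transformation step, assuming a small illustrative replacement dictionary and using control characters as stand-ins for the start-of-string and end-of-string indicators, could look like this:

```python
import re

# Ordered replacement rules: exponent phrases are rewritten before
# "the" is dropped, so the phrase "to the" is preserved as "^".
REPLACEMENTS = [
    (r"\b(to the|take the exponent|take a power of)\b", "^"),
    (r"\bthe\b\s*", ""),   # remove remaining instances of "the"
    (r"\s+", " "),         # normalize spacing
]

def pre_transform(nlr: str) -> str:
    for pattern, repl in REPLACEMENTS:
        nlr = re.sub(pattern, repl, nlr)
    return "\x02" + nlr.strip() + "\x03"  # start/end-of-string markers

print(repr(pre_transform("the integral from zero to  infinity")))
# -> '\x02integral from zero to infinity\x03'
```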
At block 408, process 400 can begin a loop between blocks 408-414, which will process the NLR until it has been fully consumed (i.e. all parts of the NLR have been processed). When the NLR has been fully consumed, process 400 continues from block 408 to block 416. When at least a part of the NLR has not been consumed, process 400 continues from block 408 to block 410. If this is the first time process 400 has arrived at block 408, process 400 can also initialize a state machine that will control processing in the loop between blocks 408-414. In various implementations, the state machine can be a traditional state machine (e.g. a data object with a preconfigured set of state variables) or can be another digital object that stores mappings between contexts and produces a result corresponding to a portion of the NLR that corresponds to the digital object (e.g. a state class with a hierarchy of class extensions defining characteristics of the various states, which can be constructed as a state is entered).
At block 410, process 400 can match part of the NLR to a transition mapping. Transition mappings can have a pattern portion that can be matched to part of a NLR and a mapping destination portion indicating a next state that process 400 should go to. In some implementations, the mapping destination can be variable depending on a context produced by the state or by other states, as discussed below in relation to block 412. The state machine can include a set of global transition mappings and each state can have zero or more state-specific transition mappings. In some implementations, the set of global transition mappings and each set of state-specific transition mappings can have an order, either within that set or across both sets. Process 400 can match an initial portion of the NLR (from the start of the NLR to any remaining amount of the NLR up to the entire NLR) to one of the patterns in a transition mapping. In some implementations, one of the sets of mappings can include a default mapping indicating a mapping destination if no other match can be found for the initial portion of the NLR. In various implementations, the pattern can be a string for comparison (with or without wildcards) or can be a more complicated object, such as a regular expression. In some implementations, a global transition mapping can be for a start of string indicator, with a mapping destination pointing to a first state of the state machine. In some cases, when a portion of the NLR is matched, it is removed from the NLR for further processing of the loop between blocks 408-414.
Mappings can be for any group of numbers, characters, or other symbols, allowing the NLR to be provided by the user and matched using a combination of how a formula would be spoken and symbolic representations. For example, "x squared over y plus a/b−(2^3)/6" can be successfully matched to state transitions, even though the second half is not written in a way that a person is likely to speak (i.e. they are more likely to say "a over b" instead of "a slash b"). In some implementations, the mappings can include LaTeX commands as the keywords/phrases pattern portion. For example, "\infty" (the LaTeX command for the infinity symbol) can be included as a mapping to the "infinity" state. This would allow users to write an NLR, for example, "limit as x approaches \infty of (4x−2)/(2x+1) equals 2," which can be successfully converted to a formula representation with an ∞ symbol. In addition, in some implementations the system can successfully convert arbitrary LaTeX math statements in the NLR to the formula representation. This can be accomplished by setting a global mapping that recognizes a starting delimiter (e.g. "\(" or "$$") and uses a separate LaTeX interpreter to parse the text up until the ending delimiter (e.g. "\)" or another "$$"). Additional details regarding matching a part of the NLR to a transition mapping are provided in relation to FIG. 5.
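A sketch of such a LaTeX pass-through mapping, assuming the "\( ... \)" delimiters mentioned above and deferring the captured body to a separate LaTeX interpreter, might be:

```python
import re

# Recognize a raw LaTeX span at the start of the NLR and split it off.
LATEX_SPAN = re.compile(r"\\\((.*?)\\\)", re.DOTALL)

def extract_latex(nlr: str):
    m = LATEX_SPAN.match(nlr)
    return (m.group(1), nlr[m.end():]) if m else None

body, rest = extract_latex(r"\(\frac{4x-2}{2x+1}\) equals 2")
print(body)  # -> \frac{4x-2}{2x+1}   (would be handed to a LaTeX interpreter)
print(rest)  # -> " equals 2"
```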
At block 412, process 400 can initiate a new state indicated by the mapping destination of the transition mapping matched at block 410. Initiating a new state can include transitioning to a state in the state machine. In some implementations, initiating a new state can include a series of state start actions that can produce context for performing other actions of the state or of other states and/or can produce state results, such as a portion of output representing the formula (e.g. a block of HTML). For example, initiating a new state can be done by creating a new object that is an extension of a state class with a constructor function that receives the matched part of the NLR and/or some remaining portion of the NLR as a parameter. Which state extension is used is controlled by the mapping destination. The constructor can call a function for generating HTML from the NLR that will transform the received text into corresponding HTML, e.g. by extracting portions from the NLR and inserting them into one or more HTML templates defined for the state extension. Entering a state can also produce context for other states, e.g. by pushing data onto a stack (or other data object) that is available to other states. This data object can control the operation of, or transition between, the various states, such as by selecting a particular state transition when a transition mapping has variable possible destinations. For example, with the NLR "integral of cosine of x plus sine of x", when each of the "integrand", "cosine", and "sine" states are entered, an indicator of this state can be pushed onto a stack. When the state completes, it looks at the stack to determine which state to return to. Thus, when the "cosine" state returns, the state machine can transition to the "integrand" state, causing the results of the "cosine" state to be in the integrand, even though there is no word in the NLR input prompting the system to make this return transition.
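The state-class pattern described above could be sketched as follows (illustrative Python; the class names, the shared stack, and the close() method standing in for a destructor are assumptions, not details from the disclosure):

```python
# A state base class with a shared context stack: entering a state
# pushes context and can emit a start template; closing it pops the
# context and can emit an end template.
class State:
    stack = []  # context shared across states

    def __init__(self, matched_text: str):
        self.results = []
        State.stack.append(type(self).__name__)  # push context on entry

    def close(self) -> str:
        State.stack.pop()                        # remove context on exit
        return "".join(self.results)

class IntegralState(State):
    def __init__(self, matched_text: str):
        super().__init__(matched_text)
        self.results.append("<integral>")        # state start template

    def close(self) -> str:
        self.results.append("</integral>")       # state end template
        return super().close()

s = IntegralState("integral")
print(State.stack)  # -> ['IntegralState']
print(s.close())    # -> <integral></integral>
```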
In some implementations, initiating a new state can include a recursive call to process 400 to generate output corresponding to sub-parts of the received NLR portion. In some implementations, when the received portion of the NLR has been processed, the state extension can include a destructor that can provide additional context, remove context added by the constructor, or provide additional output (e.g. additional HTML blocks). In some implementations, constructing or destructing a state can cause symbols that were not typed by the user to be included in the state results. For example, "wrt x" can become "dx" when rendered even though a 'd' was never typed. In some implementations, this is accomplished by adding the additional symbols to results of sub-states, e.g. the character 'd' can be added to the value (e.g. "x") returned by instantiating a 'wrt' state to end up with <wrt>dx</wrt>. In some implementations, this can be accomplished in a rendering step where the wrt tag adds a 'd' to its body.
In some cases, a state can apply one of multiple templates associated with the state. For example "integral from 0 to 1 of x wrt x" and "integral with bounds 0 and 1 of x wrt x" produce the same output, but the former uses the template "integral symbol, lower bound, upper bound, integrand, wrt" and the latter the template "integral symbol, bounds, integrand, wrt" where the "bounds" state itself has a template of "lower bound, upper bound". The integral state can detect which template to use given the input. This allows the algorithm to support a wider range of NLRs for the same math expression. Further details regarding initiating a new state based on a mapping destination are provided in relation to FIG. 6. At block 414, the state results from block 412 can be stored for later combination into an overall representation of the formula.
Once the loop between blocks 408-414 has consumed the entire NLR, processing continues to block 416. At block 416, process 400 can combine the results stored at block 414. In some implementations, some or all of the results can be typed to indicate how the combination will occur, with each type specifying a procedure for making the combination. For example, a result can specify a division type which can indicate a procedure for selecting particular stored results to include in a numerator portion and other particular stored results to include in a denominator portion. In some implementations, untyped results can be combined using a default procedure, such as concatenation.
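A sketch of this combination step, assuming a hypothetical "division" result type and an invented rendering for it (the real procedures and markup are implementation specific), could be:

```python
# Combine stored state results: typed results use their own procedure,
# untyped results fall back to plain concatenation.
def combine(results):
    out = []
    for r in results:
        if isinstance(r, dict) and r.get("type") == "division":
            out.append(f"<sup>{r['numerator']}</sup>/<sub>{r['denominator']}</sub>")
        else:
            out.append(str(r))  # default: concatenation
    return "".join(out)

stored = [
    "<integral>",
    {"type": "division", "numerator": "x", "denominator": "y"},
    "</integral>",
]
print(combine(stored))
# -> <integral><sup>x</sup>/<sub>y</sub></integral>
```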
At block 418, process 400 can perform post-transformation procedures on the combined results, such as removing unnecessary parentheses or associating CSS with an HTML block. As another example, process 400 can take a block of HTML from block 416, apply CSS to render a result, and take a snapshot of the result as an image. At block 420, process 400 can return a result of the transforming. For example, the result can be a block of XML, HTML, an image, etc. Process 400 can then continue to block 422, where it ends.
FIG. 5 is a flow diagram illustrating a process 500 used in some implementations for matching a part of a natural language representation of a formula to a state transition mapping. In some implementations, process 500 can be initiated by process 400 at block 410. Process 500 begins at block 502 and continues to block 504. At block 504, process 500 can receive a NLR.
At block 506, process 500 can traverse any state specific mappings that are assigned to the current state initiated at block 412 (or the first state of the state machine). In some implementations, the state specific mappings can be ordered, in which case the traversal of the state specific transition mappings can be performed in the order. This traversal can include determining if a beginning portion of the NLR matches a pattern portion of any of the state specific transition mappings. The pattern portion of a transition mapping can be anything that can be compared to a string, such as an exact string, a string with wildcards, a regular expression, etc. The beginning portion of the NLR can be any amount of the NLR that starts from a first character of the NLR.
At block 508, if a match to a state specific transition mapping was found at block 506, process 500 continues to block 516. Otherwise, process 500 continues to block 510. At block 510, process 500 can traverse a set of global transition mappings to determine if any global transition mappings match the initial portion of the NLR. This traversal can be accomplished in a manner similar to that described for block 506. The global transition mappings can be for portions of mathematical notations that can occur independently of previous portions of a formula, and thus do not require context or processing from a previously entered state to correctly produce a formula representation. For example, if a user entered "x^3/y" and the "x" portion had already been processed and removed, so the remaining part of the NLR is "^3/y", one of the global transition mappings can include a pattern that matches the initial NLR portion "^", causing this transition mapping to be selected in the traversal. Examples of global transition mappings include global mappings in three categories: (1) special functions, (2) symbols, and (3) modifiers. Examples of special functions can include: "integral"; "log"; "^"; "limit"; "square root"; "absolute value"; "floor"; "boldface"; and "blackboard", where boldface and blackboard are special commands to change the font of the next word or letter typed. For example, "blackboard Z" can specify the symbol for a set of integers. Examples of symbols can include: "alpha", "dot", or "parens". Examples of modifiers can include: "bar" (e.g. "x bar" is an x with a bar over it), "tilde", "hat", and "-dot" (the hyphen can be used here to avoid ambiguity with a dot symbol next to a character usually denoting multiplication). In some implementations, the global mappings can be ordered, in which case the traversal of the global transition mappings can be performed in the order.
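An ordered global mapping table in these three categories might be sketched like this (patterns and state names are illustrative only):

```python
import re

# Ordered global transition mappings: special functions, symbols, modifiers.
GLOBAL_MAPPINGS = [
    (r"integral\b", "IntegralState"),      # special functions
    (r"square root\b", "SqrtState"),
    (r"\^", "ExponentState"),
    (r"alpha\b", "AlphaState"),            # symbols
    (r"parens\b", "ParensState"),
    (r"(\w+) bar\b", "BarState"),          # modifiers
    (r"(\w+) hat\b", "HatState"),
]

nlr = "^3/y"  # remainder after "x" has been consumed
for pattern, state in GLOBAL_MAPPINGS:
    if re.match(pattern, nlr):
        print(state)  # -> ExponentState
        break
```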
In some implementations, a default state or default processes can be selected if the beginning of the NLR does not match any of the global or state specific transition mappings. For example, an empty NLR can cause process 500 to return with an indication that the current state should return its stored results. Alternatively, if no match to a global transition mapping is found, at block 512, process 500 can go to block 514 to take a default action. If a match is found, process 500 can continue to block 516.
At block 514, no global or state specific transition mapping has been found. In response, process 500 can modify the NLR by removing a beginning portion from the NLR. In various implementations, the removed portion can be the first character or first word of the NLR. Process 500 can then return to block 506 to again attempt to match the now modified NLR to the state specific transition mappings, and if no state specific match is found, to the global transition mappings at block 510.
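Putting blocks 506-514 together, the matching order of process 500 could be sketched as follows (state specific mappings first, then global mappings, then the default action of stripping a leading character; names are illustrative):

```python
import re

def match_transition(nlr, state_mappings, global_mappings):
    # Traverse state specific mappings, then global mappings, in order;
    # if nothing matches, drop the first character and retry.
    while nlr:
        for pattern, dest in state_mappings + global_mappings:
            m = re.match(pattern, nlr)
            if m:
                return dest, nlr[m.end():]  # destination + remaining NLR
        nlr = nlr[1:]                       # default action (block 514)
    return None, ""                         # empty NLR: return stored results

dest, rest = match_transition(
    "?integral of x",
    [(r"from\b", "FromState")],             # state specific
    [(r"integral\b", "IntegralState")],     # global
)
print(dest, "|", rest)  # -> IntegralState |  of x
```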
Once a match has been found, at block 516, process 500 can return a mapping destination from the matched transition mapping. Each transition mapping can refer to a mapping destination, which identifies a next state to transition to from the current state. For example, if the current state is a state for an integral, and the matched transition mapping maps to a close_integral state and has been matched to the NLR "with respect to x", process 500 can return an indication of the close_integral state, which can cause process 400 to generate and transition to the close_integral state at block 412. Process 500 can then proceed to block 518, where it ends.
FIG. 6 is a flow diagram illustrating a process 600 used in some implementations for instantiating a new state indicated by a transition mapping destination. In some implementations, process 600 can be initiated by process 400 at block 412. Process 600 begins at block 602 and continues to block 604. At block 604, process 600 can receive part of a NLR and a mapping destination. In some implementations, the NLR part can be a part of a NLR that was matched to a pattern of a transition mapping at block 506 or 510 and the mapping destination can be the mapping destination returned based on that match at block 516. In some implementations, the NLR part can also include a portion of the NLR after the matched portion that is also before a next match in the NLR. For example, when the current state is an integral state, and the NLR is "from −100 to 100 of x^3 dx", a match for the word "from" can indicate that the next state is a lower bound state, and a next match can be for the word "to"; between these is the substring "−100", which can be the NLR part that is passed to process 600. In some implementations, the NLR part can be the remainder of the string, e.g. "−100 to 100 of x^3 dx", the initial part including "−100" can be matched at block 410 to a new expression, the result of which can be determined at block 610 and included in output defining the integral lower bound, as discussed below.
At block 606, process 600 can create the state indicated by the mapping destination. This can be accomplished by calling a constructor function corresponding to the mapping destination of an object that extends a state class. In some implementations, instead of creating a new state, the state machine can be fully formed, and process 600 transitions to the indicated mapping destination state, e.g. by updating a pointer to the corresponding state. At block 608, process 600 can perform actions that the new state indicates should occur when the state is first entered. For example, state start actions can include pushing items onto a context stack (or augmenting another context data structure) or filling in a template corresponding to the state. In some implementations, a template corresponding to a state can be a snippet of structured data that will form part of the formula representation output from process 400. As a more specific example, if the new state is an integral state, the template can be the XML snippet, e.g. "<integral>". In some implementations, performing state start actions can modify the NLR part to remove portions corresponding to formula sections added through the template. For example, if the NLR part was "integral from 0 to 10 of 2x" performing the state start actions can include causing "integral" (i.e. the portion of the NLR matched at block 510) to be removed from the NLR part.
At block 610, process 600 can recurse on any remaining portion of the NLR part received at block 604. In some implementations, recursion on a NLR portion can include calling a new instance of process 400 on the remaining portion. Continuing the previous example, the remaining portion of the NLR could be "from 0 to 10 of 2x". This recursion can first determine a state specific transition matching (at block 506) from the integral state to a "from" state when performing the matching of "from 0" that adds (at blocks 608 and 612) a template "<from>0</from>". The "from" state would then return to the parent integral state. This recursion can next determine a state specific transition matching (at block 506) from the integral state to a "to" state when performing the matching of "to 10". The "to" state can add (at blocks 608 and 612) a template "<to>10</to>" and then can return to the parent integral state. This recursion can next determine a global transition matching (at block 510) from the integral state to a multiplication state when performing the matching of "2x". The multiplication state can add (at block 608) a template "<expression>2x</expression>" and then return to the parent integral state. This is a simplified example, as in some implementations, further recursion would have been performed in each phase to create each expression. The state results from the recursion can be added to the template state results generated at block 608. In the previous example, upon exiting block 610, the state results are "<integral><from>0</from><to>10</to><expression>2x</expression>".
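The nested result built up in this example can be reproduced with a compressed sketch that hard-codes the three sub-states (a real implementation would drive this recursively from the transition mappings; the closing template is folded in here for completeness, while the WRT default added by the state end actions is described below):

```python
import re

# Consume "from 0 to 10 of 2x" inside an integral state, emitting the
# corresponding templates in order.
def consume_integral(nlr: str) -> str:
    out = ["<integral>"]
    for pattern, tag in [(r"from (\S+) ?", "from"),
                         (r"to (\S+) ?", "to"),
                         (r"of (.+)", "expression")]:
        m = re.match(pattern, nlr)
        out.append(f"<{tag}>{m.group(1)}</{tag}>")
        nlr = nlr[m.end():]
    out.append("</integral>")
    return "".join(out)

print(consume_integral("from 0 to 10 of 2x"))
# -> <integral><from>0</from><to>10</to><expression>2x</expression></integral>
```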
When the recursion of the remaining portion of the NLR is complete, process 600 continues to block 612 where any state end actions can be performed. Performing state end actions can include removing context created at block 608 (e.g. popping variables off a context stack). In some implementations, process 600 can identify that the recursion is complete by encountering a keyword or symbol closing a sub-portion of the NLR. For example, a common key symbol that indicates the end of a sub-portion is close parentheses. As a more specific example, the NLR "sin(" causes the state machine to enter a "sin-parens" state which is ended when ")" is read. In this case, the ")" is removed from the remaining NLR to be parsed. In other cases, that removal may not occur. For example, in the NLR "integral of x from 0 to 1", the word "from" triggers the end of the "integrand" state (and thus the return to the integral state), but the word "from" is not removed as it is needed so that the integral state knows to transition to a lower bound state.
Performing state end actions can also include filling in a template corresponding to exiting the state. In some implementations, an exit template corresponding to a state can be a snippet of structured data that will form part of the formula representation output of process 400. Continuing the previous example, the recursion on the NLR part has consumed the NLR part such that it is now empty. The state end actions can determine that an "integral end" state had not been entered during the processing of the integral state, and thus it can add a default "<WRT>x</WRT>" to the state results, where the "x" is identified as the primary variable in the expression portion of the integral result. Further state actions can add a closing template to the state results. Continuing the above example, "</integral>" can be the closing template for the integral state. Thus, upon leaving block 612 in this example, the state results are "<integral><from>0</from><to>10</to><expression>2x</expression><WRT>x</WRT></integral>". These state results are only an example, and other start and end state actions with corresponding templates could produce other state results, such as: "<integral from='0' to='10' WRT='x'><multiply><integer>2</integer><variable>x</variable></multiply></integral>". In this case, the recursive process of block 610 would have returned results which the state end actions would incorporate into the results of the current state, e.g. by filling in the "from='0'" parameter to the <integral> template.
In some implementations, performing state end actions can modify the NLR part to remove portions corresponding to formula sections added through the template. For example, if the NLR part had included the phrase "with respect to x" this portion could have been removed from the NLR part when it added "<WRT>x</WRT>" to the state results.
Once the state end results have been computed they can be returned or otherwise stored at block 614. Process 600 can then continue to block 616, where it ends.
FIG. 7 is a conceptual diagram illustrating an example 700 system that converts a natural language representation of a formula into a formula representation. In example 700, a user is typing into a user interface and has entered the input 702 "If you use the formula taking the integral from zero to infinity of log(x)/y with respect to x". In example 700, input is continuously analyzed for a formula and thus processing has, in previous iterations, disregarded "If you use the formula taking" as not being part of a mathematical notation. Also in previous processing, as the user has entered "integral from zero to infinity of log(x)/y with respect to x," the system can have created corresponding portions of a formula representation, e.g. when the user entered "integral" the system may have replaced it with ∫ and when the user entered "from zero to infinity" the system may have updated the representation to replace it with ∫₀^∞, automatically as the user continued to type. Example 700 picks up at 750 where the user has completed entering the formula, and the NLR 772 including "the integral from zero to infinity of log(x)/y with respect to x" is passed to the pre-processor 706.
Pre-processor 706 can transform the NLR to remove unnecessary words or characters or perform replacements specified by a replacement dictionary. In example 700, NLR 772 is converted to remove "the" unless it is part of the phrase "to the" so that NLR 774 is "integral from zero to infinity of log(x)/y with respect to x". NLR 774, at step 752, is provided to transition mappings 708 and to state instantiator 710.
At step 754, a current state, with a set of state specific transition mappings, is provided to transition mappings 708. At this point, the current state is a default first state, instantiated in response to receiving a NLR to process.
Next, transition mappings 708 attempts to match an ordered set of transition mappings specific to the current state to an initial part of NLR 774 and, if no match is found, continues to match an ordered set of global transition mappings to the initial part of the NLR 774. If still no match is found, transition mappings can take a default action, such as removing the first character from the NLR and trying again. When a match is found, a mapping destination 778 from the matched transition mapping is provided from transition mappings 708 to state instantiator 710. In this case, "integral" is matched to a state specific mapping with an integral state mapping destination 778.
Each of these blocks can be provided at 770 to post-processor 718 for final analysis, such as removing unnecessary parentheses, applying CSS, etc. When the initial NLR 772 has been fully consumed and the combined state results have been passed through post-processor 718, these results 792 can be provided at 796 back to replace the NLR in the input. The results can be incorporated at 798 into the rendered modified input 799.
FIG. 8 is a conceptual diagram illustrating an example 800 of state results during a transformation of a NLR into a formula representation. Example 800 begins when a received NLR 802 is "integral from 0 to ∞ of log(x)/y wrt x". Though this example uses the ∞ symbol for conciseness, in some cases other infinity indicators, such as the word "infinity" could be entered. In example 800, the NLR has been shown with a circled s symbol at the beginning, indicating a start of string character.
Processing of NLR 802 begins with no current state, but a global transition mapping of a start of string character to an expression state indicates that the first state will be an expression state. The system, upon making this match, removes the start of string character from the NLR, resulting in NLR 804. Because there is no current state, there are no state results.
Processing of NLR 804 begins in current state: expression. The beginning of NLR portion 804 is matched with an expression state specific mapping that maps the word "integral" to an integral state. The system, upon making this match, removes "integral" from the NLR, resulting in NLR 806. In this case, moving from the start of string character to the expression state produced no state results. In some implementations, the expression state can produce results which may not produce renderable output but establish context for other features, such as an HTML block <math class="expression">, which can have associated CSS elements or can establish a hierarchy that controls how other sub-blocks are rendered.
Processing of NLR 806 begins in current state: integral. The beginning of NLR portion 806 is matched with a "from" state specific mapping that maps the word "from" to a from state. The system, upon making this match, removes "from" from the NLR and recurses on a substring that is between the matched "from" string and a next match of either a new expression start or, as in example 800, a "to" match, resulting in NLR 808.
Processing of NLR 808 begins in current state: from. Upon initializing the from state, state results for updating the ∫ symbol to include a 0 lower bound are produced. For example, the state results could be a flag to update the parent "<span class='integral'>" to be "<span class='integral' lowerBound='0'>". Another example is an XML block that could be included inside the XML block generated in the integral parent state with the content "<intLowerBound>0</intLowerBound>", where the 0 is taken from the NLR 808. In some implementations, evaluating NLR 808 would match the 0 to another expression state where the 0 would be the state result. After producing this state result, 0 would be removed from the NLR and it would be empty. This could cause the state transition mapping to be a return to the parent state, as indicated by the double lines under NLR portion 810.
Upon returning to the parent integral state, the remaining NLR portion 811 is processed, which excludes the "from 0" removed in reaching the "from" state. Processing of NLR 811 begins in current state: integral. Returning to the integral state does not produce any new state results. The beginning of NLR portion 811 is matched with a "to" state specific mapping that maps the word "to" to a to state. The system, upon making this match, removes "to" from the NLR and recurses on a substring that is between the matched "to" string and a next match of a new expression start, resulting in NLR 812.
Processing of NLR 812 begins in current state: to. Upon initializing the "to" state, state results for updating the ∫ symbol to include an ∞ upper bound are produced. For example, the state results could be a flag to update the parent "<span class='integral' lowerBound='0' upperBound='∞'>". Another example is an HTML block that could be included inside the HTML block generated in the integral parent state with the content "<span class='integralUpperBound'>∞", where the ∞ is taken from the NLR 812. After producing this state result, ∞ would be removed from the NLR and it would be empty. This could cause the state transition mapping to be a return to the parent integral state, as indicated by the double lines under NLR portion 814. In each case, when a state exits, further state results can be created. For example, where the HTML block "<span class='integralUpperBound'>∞" is the state result created from entering the state, the closing tag "</span>" can be the additional state result created when exiting the state.
Upon returning to the parent integral state, the remaining NLR portion 815 is processed, which excludes the "to ∞" removed in reaching the "to" state. Processing of NLR 815 begins in current state: integral. Returning to the integral state does not produce any new state results. The beginning of NLR portion 815 is matched with an "expression" state specific mapping that maps the word "of", when in the integral state, to an expression state. The system, upon making this match, removes "of" from the NLR and recurses on a substring that is between the matched "of" string and a next match of an end of the integral, "wrt" (with respect to), resulting in NLR 816.
Processing of NLR 816 begins in current state: expression. The beginning of NLR portion 816 is matched with an expression state specific mapping that maps the word "log(" to a log state. The system, upon making this match, removes "log(" from the NLR and selects the substring between "log(" and the closing parenthesis ")", resulting in NLR 818. In this case, moving to the expression state produced no state results.
Processing of NLR 818 begins in current state: log. Upon initializing the log state, state results for "log" are produced. For example, the state results could be "<span class='logarithm'>". Another example is an XML block "<expression type='logarithm'>log(". The beginning of NLR 818, i.e. "(", is matched to a transition mapping of expression, with NLR portion 820 "x". An illustration of data resulting from processing of NLR portion 820 is excluded for conciseness, except for the eventual result of "x" being produced, returning to the parent expression state, then to the parent log state. Upon exiting the log state, the results of the log state include the portion created upon entering the log state combined, at 832 and 834, with the results of the expression state: "<span class='logarithm'>(x)". These results can then be updated, upon exiting the log state, to include a closing tag "</span>", such that the final result from the log state is, in one example, "<span class='logarithm'>(x)</span>". At this point, NLR 816 that existed upon entering the parent expression state has been consumed to the point of being NLR 821 "/y".
Upon returning to the parent expression state, the remaining NLR portion 821 is processed. Returning to the expression state does not produce any new state results. The beginning of NLR portion 821 is matched with a "divide" state specific mapping that maps the character "/" to a divide state. The system, upon making this match, removes "/" from the NLR and recurses on a substring that is between the matched "/" string and the end of the expression, which is the end of NLR 821. An illustration of data resulting from processing of NLR portion 822 is excluded for conciseness, except for the eventual result of y (which could be represented as "<span class='denominator'>y</span>") being produced. At 836 and 838, the results of the parent expression state can be determined, e.g. by combining "<span class='logarithm'>(x)</span>" with "<span class='denominator'>y</span>". In some implementations, divide can be a special case where combining includes more than concatenation and includes modification of the parent expression results, such as to wrap the sibling results in a "numerator" tag and wrapping both in a division tag. For example, the results of the expression that includes division can be: "<span class='division'><span class='numerator'><span class='logarithm'>(x)</span></span><span class='denominator'>y</span></span>". Upon making this combination, the expression state for the expression inside the integral exits, returning to the parent integral state. At this point, the remaining NLR portion 823 is "wrt x".
In some implementations, the actions performed when performing state start actions (e.g. at block 608) or the actions performed in the state end actions (e.g. at block 612) can include modifications to state results for parent and/or sibling state results, as in the "divide" example above. As another example, state results from states arrived at through the "modifiers" global transition mappings can result in such parent or sibling result modifications. As a more specific example, when a "bar" state is encountered in the NLR (e.g. with the phrase "x bar") the "bar" state will modify the previous result "x" so that it has a bar over it.
Upon returning to the parent integral state, the remaining NLR portion 823 is processed. Returning to the integral state does not produce any new state results. The beginning of NLR portion 823 is matched with an "integral end" state specific mapping that maps the characters "wrt", when in the integral state, to an integral ending state. The system, upon making this match, removes "wrt" from the NLR and recurses on a substring that is between the matched "wrt" string and the end of the integral notation, resulting in NLR 824.
Processing of NLR 824 begins in current state: integral end. Moving from the integral state to the integral end state produces a result of "d" concatenated with either a variable specified in the received NLR portion, or with a variable determined to be primary (e.g. the only variable used, the variable used most often, or the variable that is alphabetically first) in an expression body of the integral. In this case, the variable "x" is specified in the NLR portion, thus the result of the integral end state is "dx". The integral end state is configured to only have a mapping that returns to the parent integral state.
Upon returning to the parent integral state, the NLR 806 has been fully consumed and each of the states initiated and matched to substrings of NLR 806 has returned results. In response, at steps 827, 828, 830, 840, and 842, these results are combined to create content that is rendered as formula 846. For example, this output can be the following block of HTML: "<span class='integral'><span class='integralLowerBound'>0</span><span class='integralUpperBound'>∞</span><span class='division'><span class='numerator'><span class='logarithm'>(x)</span></span><span class='denominator'>y</span></span></span>". Remaining actions that may be performed, e.g. upon exiting the expression state corresponding to NLR portion 804 or upon reaching the end of the original input NLR 802, are excluded from example 800.
FIG. 9 is a conceptual diagram illustrating an example 900 of a portion of a state machine showing transitions between states in relation to text describing an integral formula.
FIG. 10 shows several conceptual diagrams illustrating examples 1010, 1020, and 1030, showing textual NLR inputs and resulting formula representation outputs.
Reference in this specification to "implementations" (e.g. "some implementations," "various implementations," "one implementation," "an implementation," etc.) means that a particular feature, structure, or characteristic described in connection with the implementation is included in at least one implementation of the disclosure. The appearances of these phrases in various places in the specification are not necessarily all referring to the same implementation, nor are separate or alternative implementations mutually exclusive of other implementations. Moreover, various features are described which may be exhibited by some implementations and not by others. Similarly, various requirements are described which may be requirements for some implementations but not for other implementations.
As used herein, being above a threshold means that a value for an item under comparison is above a specified other value, that an item under comparison is among a certain specified number of items with the largest value, or that an item under comparison has a value within a specified top percentage value. As used herein, being below a threshold means that a value for an item under comparison is below a specified other value, that an item under comparison is among a certain specified number of items with the smallest value, or that an item under comparison has a value within a specified bottom percentage value. As used herein, being within a threshold means that a value for an item under comparison is between two specified other values, that an item under comparison is among a middle specified number of items, or that an item under comparison has a value within a middle specified percentage range. Relative terms, such as high or unimportant, when not otherwise defined, can be understood as assigning a value and determining how that value compares to an established threshold. For example, the phrase "selecting a fast connection" can be understood to mean selecting a connection that has a value assigned corresponding to its connection speed that is above a threshold. | CommonCrawl |
Petya has a polygon consisting of $$$n$$$ vertices. All sides of the Petya's polygon are parallel to the coordinate axes, and each two adjacent sides of the Petya's polygon are perpendicular. It is guaranteed that the polygon is simple, that is, it doesn't have self-intersections and self-touches. All internal area of the polygon (borders are not included) was painted in black color by Petya.
Also, Petya has a rectangular window, defined by its coordinates, through which he looks at the polygon. A rectangular window can not be moved. The sides of the rectangular window are parallel to the coordinate axes.
Blue color represents the border of a polygon, red color is the Petya's window. The answer in this case is 2.
Determine the number of black connected areas of Petya's polygon, which can be seen through the rectangular window.
The first line contains four integers $$$x_1, y_1, x_2, y_2$$$ ($$$x_1 < x_2$$$, $$$y_2 < y_1$$$) — the coordinates of top-left and bottom-right corners of the rectangular window.
The second line contains a single integer $$$n$$$ ($$$4 \le n \le 15\,000$$$) — the number of vertices in Petya's polygon.
Each of the following $$$n$$$ lines contains two integers — the coordinates of vertices of the Petya's polygon in counterclockwise order. It is guaranteed that the given polygon satisfies the conditions described in the statement.
All coordinates of the rectangular window and all coordinates of the vertices of the polygon are non-negative and do not exceed $$$15\,000$$$.
Print the number of black connected areas of Petya's polygon, which can be seen through the rectangular window.
The example corresponds to the picture above.
Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time.
Despite the size of the dataset, some of our models train to convergence in less than a day on a single machine using TensorFlow.
Furthermore, based on the temporal segment networks, we won the video classification track at the ActivityNet challenge 2016 among 24 teams, which demonstrates the effectiveness of TSN and the proposed good practices.
In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating $3\times3\times3$ convolutions with $1\times3\times3$ convolutional filters on spatial domain (equivalent to 2D CNN) plus $3\times1\times1$ convolutions to construct temporal connections on adjacent feature maps in time.
Advantages of TLEs are: (a) they encode the entire video into a compact feature representation, learning the semantics and a discriminative feature space; (b) they are applicable to all kinds of networks like 2D and 3D CNNs for video classification; and (c) they model feature interactions in a more expressive way and without loss of information.
One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to inter- and intra-subject differences, as well as to inherent noise associated with such data.
The non-local module is designed for capturing long-range spatio-temporal dependencies in images and videos. | CommonCrawl |
1 . There exist finitely many conjugacy classes of maximal compact subgroups of $G$. If $k = \mathbf R$, then all maximal compact subgroups are conjugate.
2 . If $k$ is nonarchimedean, then maximal compact subgroups are always open.
Let $K$ be a compact subgroup of $G$, and $P$ a parabolic subgroup of $G$. Suppose that the product mapping $P \times K \rightarrow G$ is a homeomorphism.
3 . If $P, K$ are as above, $G = PK$ is called an Iwasawa decomposition.
If my definition of Iwasawa decomposition is incorrect, please keep in mind my meaning of the term in the following questions.
4 . If $P$ is a parabolic subgroup of $G$, then there exists a compact subgroup $K$ such that $PK$ is an Iwasawa decomposition.
5 . If $P_0$ is a minimal parabolic, there exists a maximal compact subgroup $K$ such that $P_0K$ is an Iwasawa decomposition.
Let $P_0$ be a minimal parabolic of $G$, let $\mathbf A_0$ be a maximal $k$-split torus of $\mathbf G$ with $\mathbf A_0(k) \subseteq P_0$, and let $\Phi = \Phi(\mathbf A_0,\mathbf G)$ be the roots of $\mathbf A_0$ in $G$. Let $\Delta$ be the base of $\Phi$ corresponding to $P_0$, so that the subsets of $\Delta$ correspond to $P_0$-standard parabolics of $G$. Let $\theta \subseteq \Delta$, and let $w \in G(k)$ be an element in the Weyl group $N_G(A_0)/Z_G(A_0)$ such that $w(\theta) \subseteq \Delta$. Let $P, P'$ be the standard parabolics of $G$ corresponding to $\theta, w(\theta)$.
6 . If $K$ is a compact subgroup such that $PK$ is an Iwasawa decomposition, then $P'K$ is also one.
Abstract: We study the behavior of zeros and mass of holomorphic Hecke cusp forms on $SL_2(\mathbb Z) \backslash \mathbb H$ at small scales. In particular, we examine the distribution of the zeros within hyperbolic balls whose radii shrink sufficiently slowly as $k \rightarrow \infty$. We show that the zeros equidistribute within such balls as $k \rightarrow \infty$ as long as the radii shrink at a rate at most a small power of $1/\log k$. This relies on a new, effective, proof of Rudnick's theorem on equidistribution of the zeros and on an effective version of Quantum Unique Ergodicity for holomorphic forms, which we obtain in this paper.
We also examine the distribution of the zeros near the cusp of $SL_2(\mathbb Z) \backslash \mathbb H$. Ghosh and Sarnak conjectured that almost all the zeros here lie on two vertical geodesics. We show that for almost all forms a positive proportion of zeros high in the cusp do lie on these geodesics. For all forms, we assume the Generalized Lindelöf Hypothesis and establish a lower bound on the number of zeros that lie on these geodesics, which is significantly stronger than the previous unconditional results. | CommonCrawl |
Vector spaces of all homogeneous continuous polynomials on infinite dimensional Banach spaces are infinite dimensional. But spaces of homogeneous continuous polynomials with some additional natural properties can be finite dimensional. The so-called symmetry of polynomials on some classes of Banach spaces is one of such properties. In this paper we consider continuous symmetric $3$-homogeneous polynomials on the complex Banach space $L_\infty$ of all Lebesgue measurable essentially bounded complex-valued functions on $[0,1]$ and on the Cartesian square of this space. We construct Hamel bases of spaces of such polynomials and prove formulas for representing of polynomials as linear combinations of base polynomials. Results of the paper can be used for investigations of algebras of symmetric continuous polynomials and of symmetric analytic functions on $L_\infty$ and on its Cartesian square. In particular, in order to describe appropriate topologies on the spectrum (the set of complex valued homomorphisms) of a given algebra of analytic functions, it is useful to have representations for polynomials, obtained in this paper. | CommonCrawl |
In developing a model of bacteria growth, we detailed every step of building the model from the data. Since we assumed the change in the population size in one time step was a linear function of the population size, the model was so simple that we could even solve it. We ended up with a fairly simple expression showing the exponential growth of the population size.
Here we'll examine a situation that seems completely different. We'll look at what happens when we give a patient a bolus injection (a one-time injection) of penicillin. In this case, of course, the penicillin won't start multiplying like bacteria in the patient. Instead, the body (i.e., the kidneys) will start removing the penicillin from the body. However, if we make a model where the amount of penicillin removed is a linear function of the amount of penicillin in the blood, the model starts to look a lot like the bacteria growth model. In fact, we'll get exponential decay of the amount of penicillin in the blood.
There's one more big difference between the bacteria growth example and this page. Here, we'll just get you started on the process, giving you background information about the drug clearance. We then let you go make the model on your own.
When penicillin was first discovered, its usefulness was limited by the efficiency with which the kidney eliminates penicillin from the blood plasma (blood minus blood cells) passing through it. The modifications that have been made to penicillin (leading to ampicillin, amoxicillin, mezlocillin, etc.) have enhanced its ability to cross membranes and reach targeted infections and reduced the rate at which the kidney clears the plasma of penicillin.
Even with these improvements in penicillin, the kidneys can still remove penicillin fairly rapidly. In this project, you will build a mathematical model of penicillin clearance based on an assumption of how the kidneys operate. The secret to your success will be to build into the model a key parameter that captures the speed of the penicillin clearance. Then, you can estimate the model parameter by fitting predictions of the model to data. Lastly, you will compare your model predictions to the data to see how well the model matches the data.
The assumption behind the model is that the amount of penicillin removed by the kidneys in a five minute interval is proportional to the total amount of penicillin. We can formulate this assumption as a word model for the renal (i.e., kidney) clearance of penicillin.
In each five minute interval following penicillin injection, the kidneys remove a fixed fraction of the penicillin that was in the plasma at the beginning of the five minute interval.
Your goal is to translate this word model into a mathematical model that has a parameter that determines how much penicillin the kidneys remove in each interval. You can then use the below data to determine this parameter.
The following table and graph contain data for serum penicillin concentrations at 5 minute intervals for the 20 minutes following a bolus injection (a one-time injection) of 2 g into the serum of "five healthy young volunteers" (read "medical students") taken from T. Bergans, Penicillins, in Antibiotics and Chemotherapy, Vol 25, H. Schönfeld, Ed., S. Karger, Basel, New York, 1978. We are interpreting serum in this case to be plasma.
To build the model, use the assumption that the drop in penicillin concentration in each 5-minute interval depends linearly on the concentration.
If all goes well, you should be able to create a model that has an unknown parameter, fit the model to determine that parameter from the data, and then compare your model prediction to the data to see how well you did.
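As a sketch of what the fitting step might look like in Python: the discrete model is $P_{t+1} = P_t - r P_t$, and $r$ can be estimated as the average fraction of penicillin removed per interval. The concentration values below are placeholders, not the Bergan data — substitute the numbers from the table above.

```python
# Minimal sketch of fitting the clearance model P[t+1] = (1 - r) * P[t].
# The concentrations below are illustrative placeholders -- use the table's values.
concentrations = [200.0, 152.0, 118.0, 93.0, 74.0]  # at t = 0, 5, 10, 15, 20 min

# Estimate r as the average fraction removed per 5-minute interval.
fractions = [1 - b / a for a, b in zip(concentrations, concentrations[1:])]
r = sum(fractions) / len(fractions)

# Model prediction: exponential decay from the initial concentration.
predicted = [concentrations[0] * (1 - r) ** k for k in range(len(concentrations))]
print(f"estimated fraction removed per interval: r = {r:.3f}")
for obs, pred in zip(concentrations, predicted):
    print(f"observed {obs:6.1f}   predicted {pred:6.1f}")
```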
When you are all finished, you can compare your results to some findings from the research literature. Analysis of some numbers from the research literature seems to indicate that all of the blood plasma of a human passes through the kidneys every 5 minutes and that the kidneys remove about 20% of the penicillin in the blood that passes through them.1 You can determine if your analysis of the above data yields a result close to that rate of clearance.
Instructions from writing up the penicillin project are here.
For more practice on building dynamical system models, try out the exercises.
That all of the plasma passes through the kidney in 5 minutes is taken from Rodney A. Rhoades and George A. Tanner, Medical Physiology, Little, Brown and Company, Boston, 1995. "In resting, healthy, young adult men, renal blood flow averages about 1.2 L/min", page 426, and "The blood volume is normally 5-6 L in men and 4.5-5.5 L in women.", page 210. "Hematocrit values of the blood of healthy adults are $47 \pm 5\%$ for men and $42 \pm 5\%$ for women", page 210, suggests that the amount of plasma in a male is about 6 L $\times$ 0.53 = 3.18 L. Moreover, J. A. Webber and W. J. Wheeler, Antimicrobial and pharmacokinetic properties, in Chemistry and Biology of β-Lactam Antibiotics, Vol. 1, Robert B. Morin and Marvin Gorman, Eds. Academic Press, New York, 1982, page 408, report plasma renal clearances of penicillin ranging from 79 to 273 ml/min. Plasma (blood minus blood cells) is approximately 53% of the blood, so plasma flow through the kidney is about 6 liters $\times$ 0.53 / 5 min = 0.636 l/min. Clearance of 20% of the plasma yields a plasma penicillin clearance of 0.636 $\times$ 0.2 = 0.127 l/min = 127 ml/min, which is between 79 and 273 ml/min.
Constructing a mathematical model for penicillin clearance by James L. Cornette, Ralph A. Ackerman, and Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us. | CommonCrawl |
Collaborate on your data. Generate more results.
Rinocloud makes it easy to organise and discuss all your research data.
For scientists that work with data.
Adding extra context to your data, like parameters, metadata, discussion, means that you can deduce more insights and generate more results.
We integrate with various systems to record parameters with datasets, so everything that happened in an experiment or simulation is saved.
Integrated notebooks and team discussion means that all the insights to your results are contained in one, searchable, place.
A familiar file system, with massively powerful search.
By querying all the parameters saved during an experiment or simulation, you can find exactly the data or notes you want, instantly.
Your files are also saved in a conventional file layout, so you can save information in folders just like normal.
You can also make loose queries or use wildcards where you don't know exactly what you're looking for before hand.
Collaborate with your team easily.
See your entire teams data and notes: No more emailing files or MS word documents. No more data loss if a team member gets a new job.
Share portions of your project with collaborators, or invite others into your team.
If you have a sensitive project, you can control how, and who, accesses the data.
Fitted gaussian with $\sigma = \lambda \alpha$, where $\alpha$ is the new value of 120 ps.
@michele Are you sure about the value of $\alpha$ - I thought it was around 400 ps.
Hmmm, perhaps I need to revise some of my models.
All your notes, next to your data.
With Rinocloud you can reference files, folders, collections and search results all from the notebook. With our markdown, or rich text-editor, you can embed search results, tables, graphs and equations.
You can export your notebooks to word, markdown and more.
If you choose to, your team can comment on your notebooks, allowing you to gather feedback in one place.
Linear optical quantum computing has been proven to be computationally efficient with single photon sources and a series of beamsplitters and phase shifters. Although few photon gates have been demonstrated using bulk optics, scaling to more complex circuits requires integrated photonic technology.
rinocloud.api_key = "your api key"
Our integrations make it easy to organise your data automatically.
Plug into your experiments and simulations using our bindings for Python, MATLAB, LabView: Making it easy to save all data and parameters to a secure central location.
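Only the `rinocloud.api_key` assignment above appears on this page; the rest of this sketch — the `rinocloud.upload` call, its signature, and the metadata field names — is hypothetical, meant only to illustrate the idea of saving a file together with the parameters that produced it:

```python
import rinocloud

rinocloud.api_key = "your api key"

# Hypothetical call (not a documented signature): upload a data file together
# with the parameters that produced it, so the run is searchable later.
rinocloud.upload(
    "scan_042.csv",
    metadata={"laser_power_mW": 12.5, "temperature_K": 4.2, "run": 42},
)
```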
We have many more integrations coming soon... request one!
Here's what some of our users are saying.
Dr. Gediminas Juska. Photonics researcher, Tyndall Institute.
Bianka Seres. Biology PhD student, University of Cambridge.
Rob Shalloo. Physics PhD student, University of Oxford.
If you have any questions, have a look at our FAQ and Privacy pages. | CommonCrawl |
Abstract: Mayall II = G1 is one of the most luminous globular clusters (GCs) in M31. Here, we determine its age and mass by comparing multicolor photometry with theoretical stellar population synthesis models. Based on far- and near-ultraviolet GALEX photometry, broad-band UBVRI, and infrared JHK_s 2MASS data, we construct the most extensive spectral energy distribution of G1 to date, spanning the wavelength range from 1538 to 20,000 A. A quantitative comparison with a variety of simple stellar population (SSP) models yields a mean age that is consistent with G1 being among the oldest building blocks of M31 and having formed within ~1.7 Gyr after the Big Bang. Irrespective of the SSP model or stellar initial mass function adopted, the resulting mass estimates (of order $10^7 M_\odot$) indicate that G1 is one of the most massive GCs in the Local Group. However, we speculate that the cluster's exceptionally high mass suggests that it may not be a genuine GC. We also derive that G1 may contain, on average, $(1.65\pm0.63)\times10^2 L_\odot$ far-ultraviolet-bright, hot, extreme horizontal-branch stars, depending on the SSP model adopted. On a generic level, we demonstrate that extensive multi-passband photometry coupled with SSP analysis enables one to obtain age estimates for old SSPs to a similar accuracy as from integrated spectroscopy or resolved stellar photometry, provided that some of the free parameters can be constrained independently. | CommonCrawl |
Hi, I am in a linear algebra class, and I don't understand how to work through this problem.
"Consider the linear system below, where a and b represent numbers"
(a) Find the particular values for a and b which make the system consistent.
(b) Find the particular values for a and b which make the system inconsistent.
(c) What relation between the values of a and b implies the system is consistent.
When I try to solve the system with elimination, both of the x and y variables disappear on the first step. Not sure what to do.
Now multiply the first by $2$ and add to second. Is that a consistent system? WHY?
Now multiply the first by $2$ and add to second. Is that an inconsistent system? WHY?
Ok, I tried other values, and it seems that as long as b = -2a the system is consistent. Am I on the right track, and is there a better way to word that?
If $(a,b) = (a, -2a)$ and $a=0$ they do not have opposite signs, but the system is consistent otherwise.
How did you come up with (5,-10)?
It looks like you added the coefficients together..
Well, if you try to solve a $2\times 2$ system of linear equations, you multiply one by a constant and add to the other to eliminate a variable. In this problem we must get $0=0$, so I just chose numbers that work.
@MrJank: You've got some help. See what you can do from here. If you still can't get it then just say so.
Multiplying the first equation by -2 gives -4x + 6y = -2a. So we have -4x + 6y = -2a and -4x + 6y = b. In order that there be a solution (that the equations be "consistent") we must have b = -2a. Any other values will make the system inconsistent.
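The original system of equations isn't reproduced above, but HallsofIvy's reply implies it is $2x - 3y = a$ and $-4x + 6y = b$. Under that assumption, a quick sympy check confirms the condition:

```python
from sympy import symbols, Eq, linsolve, solve

x, y, a, b = symbols("x y a b")

# Assumed system, reconstructed from the reply above.
eq1 = Eq(2*x - 3*y, a)
eq2 = Eq(-4*x + 6*y, b)

# Adding 2*(first equation) to the second eliminates x and y, leaving
# 0 = 2a + b, so the system is consistent exactly when b = -2a.
print(solve(Eq(2*a + b, 0), b))                     # [-2*a]

# With b = -2a the two equations coincide: infinitely many solutions,
# parametrized by the free variable y.
print(linsolve([eq1, eq2.subs(b, -2*a)], (x, y)))   # {(a/2 + 3*y/2, y)}
```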
b = -2a ... let's call this a "condition".
Numbers a and b that do not meet this condition make the system inconsistent.
Numbers a and b that meet this condition make the system consistent. Which means that there is at least one solution for x and y.
So you end up with 2 identical equations. All they tell you is that: y = (2/3)x - (1/3)a. As you know, you cannot solve 2 variables from 1 equation. This one equation only tells you what is the relationship between the 2 variables (but not their concrete values).
Later you will learn about dependence/independence of the system of linear equations.
Then you will be able to run a simple test on the matrix that describes your system of equations and quickly tell that the system is dependent.
How to make users follow a better etiquette?
We've all seen it. A question of a new user, copied from a problem book, 5 people answering it with one-liners within 10 minutes, the user saying maybe a "thankx" and either picking a best answer completely at random or never being seen again.
I'm surprised to see even users of 2000+ reputation (not going to blame anyone in particular, sorry) behaving like that. Why not enforce the standard from other communities a little better? That is, have posters wait 12 hours or so, giving everyone a fair chance to write a quality answer that might help someone other than just the asker, letting the community vote, and choosing by quality rather than just picking the first answer or none at all?
This pace actually reinforces people in writing short incomplete answers informative only to the asker, as spending more on a single question would have them see 5 other answers and an accept before they are even done writing theirs.
Update: a possible feature-request in the comments below (the first two written by me).
You solve this problem by having no one answer the question. Have the community vote to close it or downvote it to deletion without giving any answer.
Unfortunately there are lots of users itching for that sweet sweet 15 points of reputation that comes from posting the best 30-second answer to a dumb question that doesn't deserve a response in the first place.
I see some of these kinds of question, and I admit that I answer them from time to time.
I think, though, that I'm pretty good at recognizing when people are looking for an easy way out, and I'll usually answer in a way that points them in the right direction, but stops short of answering it. This isn't technically an answer to their question, but time and time again I've been rewarded by the community for such answers, so I keep giving them.
Then again, sometimes I'll post "What have you tried? Where are you stuck?" in the comments of the question. It all depends on my level of energy, I guess.
If I sense entitlement from the asker, I might be a little more direct: "This isn't a homework-answering engine."
Other times still I'll answer the question. ("It's your lucky day!") If someone beats me to it, oh well, it happens. If the question gets closed while I'm writing my response, that happens, too.
But over time, the good questions will be upvoted more than the "thin" ones, and exposure to those questions will take care of themselves.
Basically, we have enough good teachers here that we can all approach these kinds of questions in different, but appropriate, ways, and the site gets better as a result.
Questions on other sites ask for information.
Look that over very carefully.
The predominant item is the second one I've listed.
Mathematics is something that you understand, not data or information which fundamentally must come from an external source.
You can derive the entirety of mathematics yourself. Or, you could learn it on an alien planet—who knows? The ideas of mathematics are fundamental. They have to do with basic postulates (axioms) and conclusions derived from those axioms. These ideas and understandings are independent of language, race, Earth, or even physics.
These OSes were created a certain way (by their creators) and thus the information about how they work is not something you can make up yourself. You can make up your own OS, of course, but unless you make it "in agreement" with certain common principles of "*nix" systems, you won't have made a Unix or a Linux-based operating system at all. Thus, answers there must provide information.
Math has no such restrictions.
You can create any mathematics you like and use it in any fashion you want, to accomplish whatever you want to accomplish with it. It's an adjunct or servomechanism to your own mind.
There is a fundamental difference between information and understanding.
Given that, what is there to ask about Maths?
By observation, most of the questions on Math SE have to do with solving specific math problems.
The better ones have to do with understanding specific mathematical notations (usually in the context of solving specific math problems), and the best ones have to do purely with understanding mathematical concepts for their own sake.
So perhaps I should really say, the best questions have to do with proposing new ideas (independently originated) and asking what the consequences (ramifications, implications) would be of those ideas. But those are hardly questions with a "single, definable answer," are they?
How is this relevant to the question of user etiquette on Math SE?
The only possible purpose to writing an answer here is to improve the understanding of those who read the answer.
When an answer is written so that only those who already have a vast amount of requisite knowledge can understand the answer, it is of more limited value.
An answer written to convey the necessary understandings, without assumptions of reader expertise (particularly without assuming notational expertise), is thus highly desirable.
Now, having written this far, I'll confess: I don't have an explicit answer to the title question, "How to make users follow a better etiquette?" Instead, I believe that what I've written above constitutes an entirely different framework from which to address the problem.
Perhaps canonical answers could be written—excellent exposés on individual mathematical concepts—which could be used as "dupe close" targets. Other sites have many of these; this site, so far as I know, has none.
Perhaps apparent "etiquette problems" are merely due to this failure on our part to write canonical answers.
I think I see a possible solution to this, however imperfect it might be (as mentioned very well in comments, once a game system in put in place, bad behaviour occurs).
The platform could decide to award fewer points for answers posted very early, once such answers are validated (for instance, the number of points awarded can grow linearly with time and then be fixed).
If we look at this new situation from a game-theory point of view, then the Nash Equilibrium for the people answering very fast is to answer right away, thus earning less points. I think we are safe from any cooperation on their part (all of them waiting just after some time limit to post a one-line answer).
It works under the assumption that any answer given in less than a certain time (for instance 30s) has less value than one that takes more time to arrive. It might be true that most of them are from the users described by Morgan Rodgers.
Answers to more advanced questions usually take time to be given.
The associated question can also be accorded less value, but it may be problematic since it gives leverage to fast-answerers to take down the value of a question which may have been interesting. I wouldn't be in favor of it.
Please post any disadvantage you would see for this new point system in comment.
Of course, plenty of versions can be made (negative points for early answers then compensated if it is a good answer, $\text{points} \mathrel{\times}= \ln(\text{time}+1)$, $\text{points} \mathrel{\times}= 1-\exp(-\text{time})$). Personally, I like the $\ln$ version better; it also covers the problem of long-time ignored questions.
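For concreteness, a quick sketch of how the two multiplier curves behave (the 600-second time constant in the exponential version is an arbitrary assumption; only the shapes matter):

```python
import math

for t in [10, 30, 60, 300, 900, 3600]:      # seconds since the question was posted
    log_mult = math.log(t + 1)              # grows without bound, slowly
    exp_mult = 1 - math.exp(-t / 600)       # saturates at 1 (600 s assumed)
    print(f"t={t:5d}s  ln(t+1)={log_mult:5.2f}  1-exp(-t/600)={exp_mult:4.2f}")
```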
How do you know that the community will upvote the best answer?
There's plenty of times where the correct answer is downvoted because either the question was unclear in the first place or because the author preferred a certain method. In these cases, why is it fair or reasonable to force a user to accept the upvoted answer. I understand that you are saying that people should merely wait for votes, but if an answer truly answers a question accurately and correctly in a clear manner, then why should the user be forced to wait? Some questions are asked out of sincerity and are incredibly simple just as some other questions are incredibly complex and may never be answered.
"rather than just picking the first or none at all?"
Once again, an answer might not truly answer everything the op wanted to know. This sometimes causes friction between users as expected. Sometimes questions are asking things that people immediately reject as false. Someone makes a false premise (to try and analyze how the falsehood of a true statement changes things) but wants it to be followed through with regardless and people refuse to accept and reject the premise in their answers. In hindsight, this is an obviously wrong answer but maybe it gets upvoted for being intelligently well-written. Clearly the person shouldn't be punished for someone giving a bad answer by forcing it to be accepted over the answer they wanted.
Written by Colin+ in basic maths skills, pirate maths.
"Arr!" said the Mathematical Pirate.
"Pieces of eight!" said the Mathematical Pirate's parrot.
"How many pieces of eight?"
"That'll be... seventy and ten minus twenty and four, making fifty and six!"
"Who's a clever boy?" asked the parrot. "Awk!"
The Mathematical Pirate is, indeed, a clever boy. He's using a combination of number bonds and the small times tables he knows by h-arrrr-t.
He knows that $3 + 7 = 10$, so $7\times$ something is the same as $10\times$ the thing minus $3 \times$ the thing.
He knows that $8 \times 10$ is 80 and $8 \times 3$ is 24.
Alternatively, he could have worked the same thing the other way: since $2 + 8 = 10$, $8\times$ something is $10\times$ the thing minus $2\times$ the thing. Seven tens are 70, which he thinks of as sixty-ten; seven twos are 14; taking the ten from the 60 leaves 50, and taking the 4 from the 10 leaves 6. 56 again!
This is a really powerful trick -- the Mathematical Pirate claims he learnt it from a Ninja, but nobody saw it.
To work out $7 \times 4$, the steps are:

$10\times$ (the something else) -- which would be 40, which you think of as "next ten down and ten" -- here, that's 30 and 10.

$3\times$ (the something else) -- which is 12, thought of as ten and two; taking the ten from the 30 leaves 20, and taking the two from the 10 leaves 8.

Add these up to get the answer, 28.
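Here's a little Python sketch of the trick — rewrite $n \times m$ as $10 \times m$ minus (the complement of $n$) $\times m$, just as the Pirate does:

```python
def pirate_times(n, m):
    """Multiply n * m (handy for n from 6 to 9) the Mathematical Pirate's way:
    n * m = 10 * m - (10 - n) * m, using only tens and small times tables."""
    complement = 10 - n          # e.g. 3 for n = 7, 2 for n = 8
    return 10 * m - complement * m

print(pirate_times(8, 7))  # 70 - 14 = 56
print(pirate_times(7, 4))  # 40 - 12 = 28
```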
Abstract: We develop a systematic and efficient method of counting single-trace and multi-trace BPS operators with two supercharges, for world-volume gauge theories of $N$ D-brane probes for both $N \to \infty$ and finite $N$. The techniques are applicable to generic singularities, orbifold, toric, non-toric, complete intersections, et cetera, even to geometries whose precise field theory duals are not yet known. The so-called ``Plethystic Exponential'' provides a simple bridge between (1) the defining equation of the Calabi-Yau, (2) the generating function of single-trace BPS operators and (3) the generating function of multi-trace operators. Mathematically, fascinating and intricate inter-relations between gauge theory, algebraic geometry, combinatorics and number theory exhibit themselves in the form of plethystics and syzygies. | CommonCrawl |
In a wave, energy is proportional to amplitude squared.
This is something I would like to understand better in the case of mechanical (linear) waves.
where $z=z(x, y, t)$ is the vertical displacement. Am I right?
The total energy is then just the sum of the kinetic and potential energies, with appropriate modification using the mass density times an infinitesimal length as the mass.
When you move to higher dimensions, you have to account for that in the kinetic and potential energies. The kinetic energy of a portion of the surface is the square of the momentum of that portion ($mv$) divided by twice the mass of that portion. The potential energy is the spring constant of the membrane divided by 2, multiplied by the square of the displacement from zero.
It's not true in general that the energy of a wave is always proportional to the square of its amplitude, but there are good reasons to expect this to be true in most cases, in the limit of small amplitudes. This follows simply from expanding the energy in a Taylor series, $E=a_0+a_1 A+a_2 A^2+\ldots$ We can take the $a_0$ term to be zero, since it would just represent some potential energy already present in the medium when there was no wave excitation. The $a_1$ term has to vanish, because otherwise it would dominate the sum for sufficiently small values of $A$, and you could then have waves with negative energy for an appropriately chosen sign of $A$. That means that the first nonvanishing term should be $A^2$. Since we don't expect the energy of the wave to depend on phase, we expect that only the even terms should occur, $E=a_2A^2+a_4A^4+\ldots$ So it's only in the limit of small amplitudes that we expect $E\propto A^2$.
The other issue to consider is that we had to assume that $E$ was a sufficiently smooth function of $A$ to allow it to be calculated using a Taylor series. This doesn't have to be true in general. As an easy example involving an oscillating particle, rather than a wave, consider a pointlike particle in a gravitational field, bouncing up and down elastically on an inflexible floor. If we define the amplitude as the height of the bounce, then we have $E \propto |A|$. But a realistic ball deforms, so the small-amplitude limit consists of the ball vibrating while remaining in contact with the floor, and we regain $E\propto A^2$.
You could also make up examples where $a_2$ vanishes and the first nonvanishing coefficient is $a_4$.
It's just a sine wave. If frequency is constant, then velocity at the zero crossing is proportional to amplitude, and energy is proportional to velocity squared.
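To make that concrete, here's a standard small-amplitude sketch for a sinusoidal wave on a string with linear mass density $\mu$: for $y(x,t) = A\sin(kx - \omega t)$, the transverse velocity is $\partial y/\partial t = -A\omega\cos(kx - \omega t)$, so the kinetic energy of an element $dx$, averaged over a period, is $$\langle dK \rangle = \tfrac{1}{2}\mu \left\langle \left(\tfrac{\partial y}{\partial t}\right)^2 \right\rangle dx = \tfrac{1}{4}\mu\,\omega^2 A^2\, dx.$$ The average potential (stretching) energy equals this, so the total energy per unit length is $\tfrac{1}{2}\mu\,\omega^2 A^2 \propto A^2$ — the amplitude enters only as $A^2$.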
You need a simpler model, like a 1-dimension mass on a spring (or a small-angle pendulum). Its position x is a sine wave of a certain frequency (you can do the math to get the frequency). Its maximum x in one direction is its amplitude a. Its velocity v is dx/dt, which is 90 degrees out of phase with x. At the center of its swing, x = 0, and v = max. Then clearly if you double a, it will have twice as far to swing in the same time, so v will be doubled. I'm sure you got that.
Now your question is, why is energy $E$ equal to $mv^2/2$, i.e. proportional to $v^2$? Well, that's a basic equation, but let me see if I can answer it anyway.
If you drop a weight w from a height h, it has initial potential energy wh, which is transformed into kinetic energy as it reaches the floor at velocity v. If it falls under the constant force of gravity, the distance it falls in a given time t is $gt^2/2$ (time-integral of velocity), and the velocity after that time is $v = gt$. So, if you want to double the velocity it has at the floor, you have to double $t$, right? And if you double $t$, you're going to quadruple the height. That quadruples the energy. I hope that answers the question.
Just thought of another explanation. If you have a spring whose force $f$ is $kx$ where $x$ is the displacement of the end of the spring, and $k$ is its stiffness. Since energy (work) is the integral of $fdx$, the energy $E$ stored in the spring, as a function of $x$ is $kx^2/2$. So there's your energy-amplitude relation.
Quantum simulations offer the possibility of answering quantum spin system dynamics questions which may otherwise require unrealistic classical resources to solve. Such simulations may be implemented using well-controlled systems of effective spins, including, as we demonstrate, two-dimensional lattices of locally interacting ions. We present experimental results from a model ion lattice system, realized as a surface electrode rf trap with a square lattice geometry. Using 440 nm diameter charged microspheres, we loaded a 30 $\times$ 36 lattice with a spacing of 1.67 mm. When the trap is driven at 2 kHz and 375 volts, we observe isolated ion secular frequencies of 170 Hz perpendicular to the trap, and Coulomb repulsion between ions at different lattice sites consistent with numerical modeling. These results, when scaled to single-atom ion charge-to-mass ratios, and linewidths achievable using standard microlithography, are promising for quantum simulations with planar ion trap lattices. | CommonCrawl |
I would like to know how I can calculate the yield of a bond futures contract(say the 5 yr treasury "FVM05" is trading at 108.2)? I am not sure how to go about calculating the yield of the futures contract?
Need some guidance in doing so.
There are a lot of intricacies involved, and you've got several options. Let's go through an example, using the current front-month 5-year contract FVU6 (FV expiring in September 2016).
CTD Yield: The cheapest-to-deliver ("CTD") into FVU6 is the 1.625s of 11/30/2020 and its yield to maturity as of last close is 1.075%. You can simply use this as a proxy for the futures yield. This may seem dumb, but it's actually one of the most prevalent choices in time series analyses. It works particularly well when the futures contract is fairly priced relative to cash bonds and the CTD is highly likely to be delivered into the contract (as it stands, the 1.625s of 11/30/2020 has 100% delivery probability).
CTD forward yield: Given that a futures contract more closely resemble a forward, it is natural to calculate the forward yield of the CTD. You can calculate the forward price for the CTD using the cash-carry formula, assuming that the forward date = delivery date (10/5/2016 in this case). The forward price can then be converted back into a forward yield. For FVU6, we'd have 1.105%.
Futures implied yield: You can also calculate the so called futures implied yield. This is computed by assuming that the forward price of the CTD is the futures price multiplied by the conversion factor. In this case, the futures price is 121.46875, while the conversion factor for the 1.625s of 11/30/2020 is 0.8408, so you would assume that the CTD's forward price is $121.46875 \times 0.8408 = 102.130925$. Then you simply calculate the yield to maturity, assuming that the settlement date is the delivery date (10/5/2016), which nets you a yield of 1.099%.
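As a rough sketch of that computation in Python (simplified: semiannual coupons, a whole number of periods, and no accrued interest or day-count conventions, so it only approximates the 1.099% above):

```python
def bond_price(ytm, coupon_rate, periods, face=100.0):
    """Price per 100 face, semiannual coupons, ytm quoted bond-equivalent."""
    c = coupon_rate / 2 * face
    d = 1 + ytm / 2
    return sum(c / d**i for i in range(1, periods + 1)) + face / d**periods

def implied_yield(target_price, coupon_rate, periods):
    lo, hi = -0.05, 0.20                 # bisection bracket for the yield
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(mid, coupon_rate, periods) > target_price:
            lo = mid                     # price too high -> yield must rise
        else:
            hi = mid
    return (lo + hi) / 2

forward_price = 121.46875 * 0.8408       # futures price x conversion factor
# ~8 semiannual periods from the 10/5/2016 delivery date to 11/30/2020 maturity
print(round(implied_yield(forward_price, 0.01625, 8) * 100, 3))
```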
These methods above assume that the CTD will not change between now and the delivery date. If that's not the case, you may want to calculate an average yield, weighted by CTD delivery probabilities.
The 4, 5 and 6 will be involved in the largest possible products, so we can begin by seeing whether they touch.
The 4 and the 5 do not touch, as shown in the diagram below, which shows part of the folded net.
The 6 touches the 4 and the 5, as shown.
This means that the greatest product must include the 6 and the 5 or the 6 and the 4.
Clearly it will include the 3, not the 1, and so it will be 6$\times$5$\times$3 = 90. | CommonCrawl |
Sreenivasulu, M and Rao, Krishna GS (1989) Vilsmeier reaction studies on some $\alpha,\beta$-unsaturated alkenones. In: Indian Journal of Chemistry, Section B: Organic Chemistry Including Medicinal Chemistry, 28B (6). pp. 494-495.
$\alpha,\beta$-Unsatd. alkenones were converted into chlorobenzene mono-, di- and tricarboxaldehydes under Vilsmeier reaction conditions. Besides six known chlorobenzaldehydes, the route gave, in a one-pot reaction, two new members of this class of compds., 3-methyl- and 3,5-dimethyl-2-chlorobenzaldehydes. E.g., 2-hexen-4-one gave 20% 4-chloro-5-methylisophthalaldehyde (I).
In this note, we prove that all $2 \times 2$ monotone grid classes are finitely based, i.e., defined by a finite collection of minimal forbidden permutations. This follows from a slightly more general result about certain $2 \times 2$ (generalized) grid classes having two monotone cells in the same row.
Can anyone explain how the process of emission differs in these two cases?
To my understanding, it is simply a single atom versus a large number of atoms. For example, suppose one atom with an electron at energy level 7 ($n_2=7$). That electron can "de-excite" from $n_2=7$ to $n_1=6, 5, 4, 3, 2,$ or $1$. Each of those transitions gives one spectral line. Thus, a total of $1 \times 6 = n_1(n_2-n_1)$ (foot note 1) spectral lines would be present in the spectrum.
Foot note 1: Total number of spectral lines for single atom where $n_2=7$ should be: $1 \times 6 = (n_2-n_1)$ in the spectrum, not $n_1(n_2-n_1)$ as I originally suggested (Thanks @porphyrin for careful reading).
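A quick sketch of the counting, assuming a sample large enough that every possible downward transition between levels up to $n_2$ actually occurs somewhere in it:

```python
def lines_single_start(n2):
    """Lines from atoms that all start at n2 and make one jump down."""
    return n2 - 1                 # 7 -> 6, 5, ..., 1 gives 6 lines for n2 = 7

def lines_full_sample(n2):
    """Distinct lines if every transition i -> j (n2 >= i > j >= 1) occurs."""
    return n2 * (n2 - 1) // 2

print(lines_single_start(7))   # 6
print(lines_full_sample(7))    # 21
```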
The study aimed to evaluate the best level of licury oil in the diet of 3/4 Boer goats, as determined by profile analysis of commercial cuts on aspects of chemical composition, sensorial quality and fatty acid content. Nineteen male goats were used, with an initial weight of 10.8 kg live weight. The animals were fed with hay and a concentrated mix containing different levels of licury oil, which constituted the treatments. The experiment lasted for 60 days, at which point the animals were submitted to feed fasting and slaughtered. The carcass weight, commercial yield and cuts were measured. The ham was collected for sensorial and chemical evaluation and the longissimus dorsi was collected for fatty acid profile analysis. The addition of licury oil to the diet did not promote changes in the proportions and weights of the commercial cuts, nor in the meat's sensorial attributes. The sum of medium-chain fatty acids and the atherogenicity index increased with the addition of oil. Licury oil can be added to the diet of goats (up to 4.5%) without resulting in changes to the proportions of the commercial cuts, or to the chemical composition or sensorial characteristics of the meat. Based on the chain length of fatty acids, the addition of 4.5% licury oil can improve the quality of meat, but no effect was noted in relation to the atherogenicity index.
AOAC (Association of Analytical Chemists). 1990. Official methods of analysis. 12th ed, 1094.
Carvalho Junior, A. M., J. M. Pereira Filho, R. M. Silva, M. F. Cezar, A. M. A. Silva and A. L. N. Silva. 2009. Effect of supplemental feeding on carcass and non-carcass characteristics of F1 (Boer$\times$SRD) goats finished on native pasture. Rev. Bras. Zootecn. 38:1301-1308.
Chardigny, J. M., F. Destaillats, C. Malpuech-Brugère, J. Moulin, D. E. Bauman, A. L. Lock, D. M. Barbano, R. P. Mensink, J. B. Bezelgues, P. Chaumont, N. Combe, I. Cristiani, F. Joffre, J. B. German, F. Dionisi, Y. Boirie and J. L. Sébédio. 2008. Do trans fatty acids from industrially produced sources and from natural sources have the same effect on cardiovascular disease risk factors in healthy subjects? Results of the trans Fatty Acids Collaboration (TRANSFACT) study. Am. J. Clin. Nutr. 87:558-566.
Costa, R. G., F. Q. Cartaxo, N. M. Santos, R. C. R. E. Queiroga. 2008. Goat and sheep meat: fatty acids composition and sensorial characteristics. Rev. Bras. Saude Prod. An. 9:497-506.
Hedrick, H. B., E. D. Aberle, J. C. Forrest, M. D. Judge and R. A. Merkel. 1994. Principles of meat science 1994, 3 ed. San Francisco: Kendall/Hunt Publishing Company, 123-132.
Intarapichet, K., W. Pralomkarn and C. Chinajariyawong. 1994. Influence of genotypes and feeding of growth and sensory characteristics of goat meat. Asean Food Journal 9:151-155.
Jansen, C., N. R. M. Buist and T. Wilson. 1986. Absorption of individual fatty acids from long chain or medium chain triglycerides in very small infants. Am. J. Clin. Nutr. 43:745- 751.
Kadim, I. T., O. Mahgoub, D. S. Al-Ajmi, R. S. Al-Maqbaly, N. M. Al-Saqri and A. Ritchie. 2003. An evaluation of the growth, carcass and meat quality characteristics of Omani goat breeds. Meat Sci. 66:203-210.
Kaneda, T. 1991. Iso- and anteiso-fatty acids in bacteria: Biosynthesis, function, and taxonomic significance. Microbiol. Rev. 55:288-302.
Santos, C. L., J. R. O. Perez, C. A. C. da Cruz, J. A. Muniz, Í. P. A. Santos and T. R. V. Almeida. 2008. Chemical composition of carcass cuts of Santa Ines and Bergamacia lambs. Cienc. Tecnol. Aliment. 28:51-59.
Sheridan, R., L. C. Hoffman and A. V. Ferreira. 2003. Meat quality of Boer goat kids and Mutton Merino lambs 1. Commercial yields and chemical composition. Anim. Sci. 76:63-71.
Silva, M. M. C. 2005. Suplementação de lipídios em dietas para cabras leiteiras. Thesis, Universidade Federal de Viçosa, Viçosa. 129p.
The third axiom is about events that are mutually exclusive. Two events $A$ and $B$ are mutually exclusive if at most one of them can happen; in other words, they can't both happen.
For example, suppose you are selecting one student at random from a class in which 40% of the students are freshmen and 20% are sophomores. Each student is either a freshman or a sophomore or neither; but no student is both a freshman and a sophomore. So if $A$ is the event "the student selected is a freshman" and $B$ is the event "the student selected is a sophomore", then $A$ and $B$ are mutually exclusive.
What's the big deal about mutually exclusive events? To understand this, start by thinking about the event that the selected student is a freshman or a sophomore. In the language of set theory, that's the union of the two events "freshman" and "sophomore". It is a great idea to use Venn diagrams to visualize events. In the diagram below, imagine $A$ and $B$ to be two mutually exclusive events shown as blue and gold circles. Because the events are mutually exclusive, the corresponding circles don't overlap. The union is the set of all the points in the two circles.
What's the chance that the student is a freshman or a sophomore? In the population, 40% are freshmen and 20% are sophomores, so a natural answer is 60%. That's the percent of students who satisfy our criterion of "freshman or sophomore". The simple addition works because the two groups are disjoint.
If $A$ and $B$ are mutually exclusive events, then $P(A \cup B) = P(A) + P(B)$.
For any fixed $n$, if $A_1, A_2, \ldots, A_n$ are mutually exclusive (that is, if $A_i \cap A_j = \phi$ for all $i \ne j$), then $$P\Big(\bigcup_{i=1}^n A_i\Big) = \sum_{i=1}^n P(A_i).$$ This is sometimes called the axiom of finite additivity.
This deceptively simple axiom has tremendous power, especially when it is extended to account for infinitely many mutually exclusive events. For a start, it can be used to create some handy computational tools.
Suppose that 50% of the students in a class have Data Science as one of their majors, and 40% are majoring in Data Science as well as Computer Science (CS). If you pick a student at random, what is the chance that the student is majoring in Data Science but not in CS?
The Venn diagram below shows a dark blue circle corresponding to the event $A =$ "Data Science as one of the majors", and a gold circle (not drawn to scale) corresponding $B =$ "majoring in both Data Science and CS". The two events are nested because $B$ is a subset of $A$: everyone in $B$ has Data Science as one of their majors.
So $B \subseteq A$, and those who are majoring in Data Science but not CS form the difference "$A$ and not $B$": $$A \backslash B = A \cap B^c,$$ where $B^c$ is the complement of $B$. The difference is the bright blue ring on the right.
What's the chance that the student is in the bright blue difference? If you answered, "50% - 40% = 10%", you are right, and it's great that your intuition is saying that probabilities behave just like areas. They do. In fact the calculation follows from the axiom of additivity, which we also motivated by looking at areas.
Suppose $A$ and $B$ are events such that $B \subseteq A$. Then $P(A \backslash B) = P(A) - P(B)$.
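A quick simulation check of the difference rule, using the 50%/40% figures from the majors example above (the population below is synthetic and purely illustrative):

```python
import random

random.seed(0)
# synthetic class: 40% in both majors (B), 10% Data Science only (A \ B), 50% neither
kinds = random.choices(["both", "ds_only", "neither"], weights=[40, 10, 50], k=100_000)

p_A = sum(k in ("both", "ds_only") for k in kinds) / len(kinds)   # P(A): any DS major
p_B = sum(k == "both" for k in kinds) / len(kinds)                # P(B): DS and CS
p_diff = sum(k == "ds_only" for k in kinds) / len(kinds)          # P(A \ B)

print(round(p_A - p_B, 3), round(p_diff, 3))  # both close to 0.10
```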
If an event has chance 40%, what's the chance that it doesn't happen? The "obvious" answer of 60% is a special case of the difference rule.
For any event $B$, $P(B^c) = 1 - P(B)$.
Proof. The Venn diagram below shows what to do. Take $A = \Omega$ in the formula for the difference, and remember the second axiom $P(\Omega) = 1$. Alternatively, redo the argument for the difference rule in this special case.
When you see a minus sign in a calculation of probabilities, as in the Complement Rule above, you will often find that the minus sign is due to a rearrangement of terms in an application of the addition rule.
When you add or subtract probabilities, you are implicitly splitting an event into disjoint pieces. This is called partitioning the event, a fundamentally important technique to master. In the subsequent sections you will see numerous uses of partitioning. | CommonCrawl |
In this section we present an algorithm implemented using LazySets that computes the reach sets of a hybrid system of linear ordinary differential equations (ODE). This algorithm is an extension of the one presented in A Reachability Algorithm Using Zonotopes.
We consider a simple case here where modes do not have invariants and transitions do not have updates. In set-based analysis like ours, it may make sense to take a transition as soon as one state in the current set of states can take it. Note that this is not equivalent to must semantics of hybrid automata (also called urgent transitions), which is defined on single trajectories. We also offer the usual may transitions interpretation.
The hybrid algorithm maintains a queue of triples $(m, X, t)$ where $m$ is a mode, $X$ is a set of states, and $t$ is a time point. For each element in the queue the algorithm calls the Continuous algorithm to compute the reachable states in the current mode $m$, starting in the current states $X$ at time $t$. The result is a flowpipe, i.e., a sequence of sets of states. For each of those sets we check intersection with the guards of $m$'s outgoing transitions. Depending on the transition semantics, we add the discrete successors to the queue and continue with the next iteration until the queue is empty.
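A minimal Python sketch of that queue-driven loop, using plain Python sets of discrete states as a stand-in for LazySets' continuous set operations (the helper structures here are illustrative assumptions, not LazySets API):

```python
from collections import deque

def hybrid_reach(init_mode, init_states, post, guards, max_steps=50):
    """Queue of (mode, states, step) triples, as in the algorithm described above."""
    queue = deque([(init_mode, frozenset(init_states), 0)])
    seen, flowpipes = set(), []
    while queue:
        mode, states, t = queue.popleft()
        if (mode, states) in seen or t >= max_steps:
            continue
        seen.add((mode, states))
        succ = frozenset(post[mode](s) for s in states)   # one "continuous" step
        flowpipes.append((mode, t, succ))
        for guard, target in guards.get(mode, []):
            hit = succ & guard                            # intersect with the guard
            if hit:                                       # transition enabled
                queue.append((target, hit, t + 1))        # discrete successor
        queue.append((mode, succ, t + 1))                 # keep flowing in this mode
    return flowpipes

# toy 2-mode instance: mode 1 counts up, mode 2 counts down, guards at thresholds
post = {1: lambda s: s + 1, 2: lambda s: s - 1}
guards = {1: [(frozenset(range(5, 100)), 2)], 2: [(frozenset(range(-100, 1)), 1)]}
print(hybrid_reach(1, {0}, post, guards, max_steps=8)[:4])
```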
This is basically the same implementation as outlined in the section A Reachability Algorithm Using Zonotopes, only that this time we use concrete operations on zonotopes.
For illustration purposes it is helpful to plot the flowpipes in different colors, depending on the current mode. The following function does that for 2-mode models.
We consider an extension of the example presented in Reachability of uncertain linear systems using zonotopes, A. Girard, HSCC. Vol. 5. 2005 to a hybrid system with two modes $\ell_i$, $i = 1, 2$, with initial states $[0.9, 1.1] \times [-0.1, 0.1]$ and uncertain inputs from a set $u$ with $\mu = \Vert u \Vert_\infty = 0.001$.
LazySets offers an order reduction function for zonotopes, which we used here with an upper bound of 10 generators. We plot the reachable states for the time interval $[0, 4]$ and time step $\delta = 0.001$.
# take transitions only the first time they are enabled? | CommonCrawl |
Normality of orbit closures in the enhanced nilpotent cone - Mathematics > Representation Theory
Abstract: We continue the study of the closures of $GL(V)$-orbits in the enhanced nilpotent cone $V \times \mathcal{N}$ begun by the first two authors. We prove that each closure is an invariant-theoretic quotient of a suitably-defined enhanced quiver variety. We conjecture, and prove in special cases, that these enhanced quiver varieties are normal complete intersections, implying that the enhanced nilpotent orbit closures are also normal.
A plane electromagnetic wave of frequency $25\ \mathrm{MHz}$ travels in free space along the $x$-direction. At a particular point in space and time, $\vec{E} = 6.3\,\hat{j}\ \mathrm{V/m}$. What is $\vec{B}$ at this point?
To find the direction, we note that $E$ is along the $y$-direction and the wave propagates along the $x$-axis.
Therefore, $B$ should be in a direction perpendicular to both the $x$- and $y$-axes.
Using vector algebra, $E \times B$ should be along the $x$-direction.
Since $(+\hat{j}) \times (+\hat{k}) = \hat{i}$, $B$ is along $+\hat{k}$, i.e. the $z$-direction.
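For completeness, the magnitude comes from the free-space relation $B = E/c$ (a standard step, filled in here):

$$B = \frac{E}{c} = \frac{6.3\ \mathrm{V/m}}{3 \times 10^{8}\ \mathrm{m/s}} = 2.1 \times 10^{-8}\ \mathrm{T}, \qquad \vec{B} = 2.1 \times 10^{-8}\,\hat{k}\ \mathrm{T}.$$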
Amouch, M. ; Zguitti, H.
Let $X$ be a Banach space and $T$ be a bounded linear operator on $X$. We denote by $S(T)$ the set of all complex $\lambda \in \mathbb C$ such that $T$ does not have the single-valued extension property at $\lambda $. In this note we prove equality up to $S(T)$ between the left Drazin spectrum, the upper semi-B-Fredholm spectrum and the semi-essential approximate point spectrum. As applications, we investigate generalized Weyl's theorem for operator matrices and multiplier operators. | CommonCrawl |
The state of our knowledge about general arithmetic circuits seems to be similar to the state of our knowledge about Boolean circuits, i.e. we don't have good lower-bounds. On the other hand we have exponential size lower-bounds for monotone Boolean circuits.
What do we know about monotone arithmetic circuits? Do we have similar good lower-bounds for them? If not, what is the essential difference that doesn't allow us to get similar lower-bounds for monotone arithmetic circuits?
The question is inspired by comments on this question.
Lower bounds for monotone arithmetic circuits come easier because they forbid cancellations. On the other hand, we can prove exponential lower bounds for circuits computing boolean functions even if arbitrary monotone real-valued functions $g:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ are allowed as gates (see e.g. Sect. 9.6 in the book).
Even though monotone arithmetic circuits are weaker than monotone boolean circuits (in the latter we have cancellations $a\land a=a$ and $a\lor (a\land b)=a$), these circuits are interesting because of their relation to dynamic programming (DP) algorithms. Most such algorithms can be simulated by circuits over the semirings $(+,\min)$ or $(+,\max)$. Gates then correspond to subproblems used by the algorithm. What Jerrum and Snir (in the paper by V Vinay) actually prove is that any DP algorithm for the Min Weight Perfect Matching (as well as for the TSP) problem must produce exponentially many subproblems. But the Perfect Matching problem is not of "DP flavor" (it does not satisfy Bellman's Principle of Optimality). Linear programming (not DP) is much better suited for this problem.
So what about optimization problems that can be solved by reasonably small DP algorithms - can we prove lower bounds also for them? Very interesting in this respect is an old result of Kerr (Theorem 6.1 in his PhD thesis). It implies that the classical Floyd-Warshall DP algorithm for the All-Pairs Shortest Paths problem (APSP) is optimal: $\Omega(n^3)$ subproblems are necessary. Even more interesting is that Kerr's argument is very simple (much simpler than the one Jerrum and Snir used): it just uses the distributivity axiom $a+\min(b,c)=\min(a+b,\,a+c)$, and the possibility to "kill" min-gates by setting one of their arguments to $0$. This way he proves that $n^3$ plus-gates are necessary to multiply two $n\times n$ matrices over the semiring $(+,\min)$. In Sect. 5.9 of the book by Aho, Hopcroft and Ullman it is shown that this problem is equivalent to the APSP problem.
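To make the object of Kerr's bound concrete, here is the $(\min,+)$ matrix product written out in plain Python (an illustrative sketch; the three nested loops spell out the $n^3$ additions):

```python
INF = float("inf")

def min_plus_product(A, B):
    """Distance product over (min, +): C[i][j] = min over k of A[i][k] + B[k][j]."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):             # one +-gate per (i, j, k): n^3 in total
                C[i][j] = min(C[i][j], A[i][k] + B[k][j])
    return C

# iterating this product on a weighted adjacency matrix solves APSP
```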
A next question could be: what about the Single-Source Shortest Paths (SSSP) problem? The Bellman-Ford DP algorithm for this (seemingly "simpler") problem also uses $O(n^3)$ gates. Is this optimal? So far, no separation between these two versions of the shortest path problem is known; see an interesting paper of Virginia and Ryan Williams along these lines. So, an $\Omega(n^3)$ lower bound in $(+,\min)$-circuits for SSSP would be a great result. A next question could be: what about lower bounds for Knapsack? In this draft, lower bounds for Knapsack are proved in a weaker model of $(+,\max)$ circuits where the usage of $+$-gates is restricted; in the Appendix, Kerr's proof is reproduced.
Yes. We do know good lower bounds and we have known them for quite some time now.
Jerrum and Snir proved an exponential lower bound over monotone arithmetic circuits for the permanent by 1980. Valiant showed even a single minus gate is exponentially more powerful.
For more on (monotone) arithmetic circuits, check out Shpilka's survey on arithmetic circuits.
Another result that I'm aware of is by Arvind, Joglekar and Srinivasan -- they present explicit polynomials computable by linear sized width-$2k$ monotone arithmetic circuits but any width-$k$ monotone arithmetic circuit would take exponential size.
Does this count: Chazelle's semi-group lower bounds for fundamental range-searching problems (in the offline setting). All lower bounds are almost optimal (up to log terms when the lower bounds is polynomial and log log terms when the lower bound is polylogarithmic).
Lower bounds for noncommutative arithmetic circuits with exact division?
Are arithmetic circuits weaker than boolean? | CommonCrawl |
Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures have either focused on solely mimicking the neuron, or the synapse functionality. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, was used to drive the circuit and system level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings by $\sim\!\! 100\times$ in comparison to a corresponding digital/analog CMOS neuron implementation. | CommonCrawl |
How do you show that $l_p \subset l_q$ for $p \leq q$?
Can you provide me historical examples of pure mathematics becoming "useful"?
What makes elementary functions elementary?
Is there possibly a largest prime number?
What is the role of conjectures in modern mathematics?
Comparing Hilbert spaces and Banach spaces.
What do Greek Mathematicians use when they use our equivalent Greek letters in formulas and equations?
Is there a "continuous product"?
Why do we require a topological space to be closed under finite intersection?
Find a $4\times 4$ matrix $A$ where $A\neq I$ and $A^2 \neq I$, but $A^3 = I$.
Is $p$-norm decreasing in $p$?
Why is $(0, 0)$ not a minimum of $f(x, y) = (y-3x^2)(y-x^2)$?
Elevator pitch for a (sub)field of maths? | CommonCrawl |
In this paper we study a class of nonlinear integro-differential equations which correspond to a fractional-order time derivative and interpolate nonlinear heat and wave equations. For this purpose we first establish some space-time estimates of the linear flow, which is produced by Mittag-Leffler functions, based on Mikhlin-Hörmander multiplier estimates and other harmonic analysis tools. Using these space-time estimates we prove the well-posedness of a local mild solution of the Cauchy problem for the nonlinear integro-differential equation in $C([0,T); L^p(\mathbf R^n))$ or $L^q(0, T; L^p(\mathbf R^n))$.
Adv. Differential Equations, Volume 7, Number 2 (2002), 217-236. | CommonCrawl |
Events for Monday, September 11, 2017.
Abstract: Given a monotone Lagrangian L in a monotone symplectic manifold, there is a function known as the disc potential that encodes counts of Maslov index 2 discs with boundary on L. The behavior of this potential under certain geometric transformations of the Lagrangian (mutations) is governed by what is known as a "wall-crossing formula." In this talk I will present a new, simple argument that allows one to prove such formulas in a general setting. The main new ingredient is a reformulation of the problem in terms of relative Floer theory. This is joint work with Dmitry Tonkonog.
Abstract: In short, Topological Modular Forms (tmf) is a "universal elliptic cohomology theory". More precisely, it is the global section of a sheaf of $E_\infty$-ring spectra over the moduli stack of (generalized) elliptic curves. In this talk, I'll introduce tmf and sketch the construction of it (or really this sheaf of $E_\infty$ ring spectra). If time allows, I'll also explain its relationship to classical modular forms. | CommonCrawl |
Would this method approximate a uniformly distributed random points on sphere?
I know there are several ways to generate uniformly distributed random points on the 2-sphere $S^2$. But I would like to know if my method does the same job, although it is very inefficient.
Take a random point $(x,y,z) \in [-3,3] \times [-3,3] \times [-3,3]$.
Would this algorithm approximately generate a uniform distribution on the sphere, or would the points be clustered around, say, the equator?
Your method will work (although presumably you mean intervals of $[-3,3]$, not $[0,3]$).
You can get a uniform distribution of points within the sphere using rejection sampling (i.e., rejecting everything with a distance greater than $r$). What you're doing is effectively a double rejection, removing a sphere of radius $2.999$ from a sphere of radius $3$.
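A Python sketch of both samplers for comparison (illustrative code, not from the original question): the thin-shell variant accepts well under 0.1% of draws, which matches the questioner's remark that the method is very inefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

def shell_rejection(n):
    """The question's method: keep cube points with norm in (2.999, 3], then project."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-3, 3, size=3)
        r = np.linalg.norm(p)
        if 2.999 < r <= 3:
            pts.append(p / r)              # project onto the unit sphere
    return np.array(pts)

def ball_rejection(n):
    """Single rejection: any nonzero point of the ball, normalized, is uniform."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-3, 3, size=3)
        r = np.linalg.norm(p)
        if 0 < r <= 3:
            pts.append(p / r)
    return np.array(pts)
```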
Is it possible to generate a uniformly distributed random 128-bit number from multiple uniformly distributed random numbers of size <= 32 bits?
Is the arrival time of a point in a Poisson process distributed uniformly at random? | CommonCrawl |
This is all really neat!
Now this – this is really cool! Our client is free, and our server is type-checked.
The repetition is gone! Very cool.
The two API types we provide above end up describing the exact same API structure. However, the structure of the type is different, and the operation does not in fact distribute. What's it mean for something to distribute?
Let's look at addition and multiplication, a very simple form of distribution. If we have $(x \times y) + (x \times z)$, we can factor out the multiplication of $x$. That gives us $x \times (y + z)$, an expression that is exactly equal.
Ideally, we could factor out parameters in our servant API types, and they would "distribute" those parameters to all subroutes.
:kind! takes a type and tries to normalize it fully – this means applying all available type synonyms and computing all type families.
apiServer' accepts the playerId parameter from the capture. serveX and serveY are mere Handler Ints, now. They have access to playerId because it's in scope, but if you factor those functions out into top-level definitions, you'd need to pass it explicitly.
Huh, that's – weird. Client Api' has the kind * – it's an ordinary value.
Ah! So client with the Api' type does not give us a pair of client functions, but rather, a function that returns a pair of clients. This makes derivation much less pleasant.
This gets dramatically worse as the number of parameters goes up, and as the level of nesting increases.
At this point, I strongly recommend keeping your API types as flat and repetitive as possible. Doing otherwise takes you off Servant's happy path.
Servant uses a technique that I refer to as "inductive type class programming." It provides a lot of extensibility, and is super cool.
This is all we need to define our own type-level APIs! We have a type operators :> and a data type :<|>, and a Capture type that describes a named parameter.
Now the fun begins. We are going to write a lot of overlapping and orphan instances. That's just part of the deal with this style of programming. We're going to start with our base case: the Get handler.
The handler for a simple Get a is an IO a.
What about alternation? If we have left :<|> right, then it makes sense that we'd need for left to be serve-able and right to be serve-able. We express this by requiring HasServer instances in the instance context.
So the server for an alternation of APIs is the alternation of the servers of those APIs.
Let's do Capture now – that one is a bit interesting! In order to handle the Capture, we need to take the capture-d thing as a parameter.
Well, that doesn't quite work out. We don't have the rest of the server to delegate to. That's because we use :> for chaining combinators. We'll need to write the instance for Capture using the :> combinator to make it flow.
forall k1 k2 (skipMe :: k2) (rest :: k1).
forall k (paramName :: Symbol) paramType (rest :: k).
Frankly, this one mystified me. Google didn't help me find it. This kind of programming puts you into a fairly hostile territory, and it becomes difficult to figure out how to solve problems. It requires a lot of experimentation, guesswork, and luck.
Writing out the client is actually very similar to the server. We create a type class HasClient, and write instances for all the various parts of the chain. Since we're not actually serving or requesting anything here, I'll omit that part. The small server pretend implementation is sufficient for us to continue.
But… we want to have a nice DRY API type with nesting! That eliminates a lot of boilerplate and makes writing handlers and clients quite nice. Therefore, we need some way of distributing the parameters.
Type level programming in Haskell is quite hairy. It's generally easier to sketch out a value-level program, desugar it, and then port it to the type level than it is to implement it directly.
We'll walk up the route tree, collecting the Captures into a list, and when we hit a :<|> branch, we distribute the captures to both sides of the alternation.
You might note that applyCaptures could be rewritten using foldl'. We won't do that, because you don't have foldl' at the type level.
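For concreteness, here is that value-level pass written in Python instead of Haskell (an illustrative model only; the post's real implementation is a Haskell type family): routes are tuples, and captures collected on the way down are re-attached at each leaf, distributing over alternations.

```python
def flatten(api, captures=()):
    """Distribute the Captures collected so far over every branch of the route tree."""
    tag = api[0]
    if tag == "GET":                       # leaf: re-attach the captures in order
        node = api
        for name in reversed(captures):
            node = ("CAPTURE", name, node)
        return node
    if tag == "CAPTURE":                   # collect the name and recurse into the rest
        _, name, rest = api
        return flatten(rest, captures + (name,))
    if tag == "ALT":                       # distribute to both alternatives
        _, left, right = api
        return ("ALT", flatten(left, captures), flatten(right, captures))
    raise ValueError(f"unknown combinator {tag!r}")

nested = ("CAPTURE", "playerId", ("ALT", ("GET",), ("GET",)))
assert flatten(nested) == ("ALT",
                           ("CAPTURE", "playerId", ("GET",)),
                           ("CAPTURE", "playerId", ("GET",)))
```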
Now that we've implemented this at the value level, it's relatively straight forward to desugar it and bring it to the type level, once you know the desugaring rules.
There are no case expressions, so all pattern matching must be done at the top level.
There are no where blocks, so all expressions must be at the top level.
Now, we've got to make a choice: how do we do this at the type level? Type classes, or type families? Let's try type families first.
GHC doesn't complain! We did it! Awesome!
Alright, it's time to level this thing up. Let's port to servant and see if it works.
OK, this is the port. I changed Get to Verb and added all the type parameters. Everything else gets collected and distributed out to all the API leaves.
It's got a type error. If we look at that type error, we see that the Flatten type family is still there.
GHC does a thing where it gets "stuck" if a type family doesn't match anything. Rather than saying "I can't figure this type family out, you must have made a mistake," it just carries the type on in the non-reduced state. This is totally bizarre if you don't know what you're looking for. So if you're type-level-hacking and you see a type family application, that means that it failed to match a case.
Hmmm.. What extensions are enabled? The behavior of type level programming in Haskell is dependent on the extensions we provide.
TypeFamilies enables, well, type families. UndecidableInstances is needed for the recursion in ApplyCaptures. KindSignatures allows us to write data SWrap (s :: Symbol). DataKinds lets us promote values-to-types and types-to-kinds. And TypeOperators lets us use operators in types.
And now our implementation works!
Type level programming is hard and full of minefields.
Servant is fantastic, but nontrivial modifications and extensions require intense knowledge of GHC's weird flavor of type level programming. | CommonCrawl |
I'm confused about how the wedge product of two vectors acts as an operator on two other vectors. Please help.
Let $V=\mathbb{R}^3$, $e_1=(1,0,0)$, $e_2=(0,1,0)$, and $e_3=(0,0,1)$. Find $3e_1 \wedge 4e_3\big((1,\alpha,0),(0,\beta,1)\big)$, where $\alpha,\beta$ are irrational numbers.
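Under the common determinant convention $(e_i \wedge e_j)(u,v) = u_i v_j - u_j v_i$ (conventions with an extra $\tfrac{1}{2!}$ factor would rescale the answer), the evaluation is:

$$\big(3e_1 \wedge 4e_3\big)\big((1,\alpha,0),(0,\beta,1)\big) = 12\,\big(u_1 v_3 - u_3 v_1\big) = 12\,(1\cdot 1 - 0\cdot 0) = 12,$$

independent of $\alpha$ and $\beta$.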
How do I evaluate the Clifford product in dimensions greater than 3? | CommonCrawl |
The composition (or superposition) of two functions $f:Y \rightarrow X$ and $g:Z \rightarrow Y$ is the function $h=f\circ g : Z \rightarrow X$, $h(z)=f(g(z))$.
The composition of two binary relations $R$, $S$ on set $A \times B$ and $B \times C$ is the relation $T = R \circ S$ on $A \times C$ defined by $a T c \Leftrightarrow \exists b \in B \,:\, a R b, b S c$.
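A small executable version of the relation case (illustrative Python, not part of the original entry):

```python
def compose(R, S):
    """T = R composed with S: a T c iff there exists b with a R b and b S c."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

R = {(1, "x"), (2, "y")}            # R on A x B
S = {("x", True), ("y", False)}     # S on B x C
assert compose(R, S) == {(1, True), (2, False)}
```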
See Convolution of functions concerning composition in probability theory.
See Automata, composition of concerning composition of automata.
See also: Composition (combinatorics), an expression of a natural numbers as an ordered sum of positive integers; Composition series, a maximal linearly ordered subset of a partially ordered set.
[SOLVED] generalisations of the Seifert-van Kampen Theorem?
[SOLVED] If I want to study Jacob Lurie's books "Higher Topoi Theory", "Derived AG", what prerequisites should I have?
[SOLVED] DG categories in algebraic geometry - guide to the literature?
[SOLVED] What is a symmetric monoidal $(\infty,n)$-category?
[SOLVED] How should one approach reading Higher Algebra by Lurie?
Categorical formalism for higher non-abelian group cohomology / obstruction theory for gerbes?
[SOLVED] Is the $\infty$-category of spectra "convenient"?
Does simplicial localization with a 3-arrow calculus commute with functor categories?
[SOLVED] classifying $\infty$-toposes for topological/localic groups?
[SOLVED] When does simplicial localization commute with functor categories?
How should one approach reading Spectral Algebraic Geometry by Lurie? | CommonCrawl |
Abstract: Professors from SOCS will give a 5-minute summary of the research going on in their lab. This is a great opportunity for new students to learn about the department and gather information necessary to eventually decide which research area and supervisor to choose.
Abstract: We will describe some recent work on two key problems in medical imagery: segmentation and registration. We will also describe controlled active vision techniques for image guided surgery and therapy.
The underlying method is based on certain flows which give rise to (nonlinear) geometric equations which are invariant with respect to a given transformation group action. We will provide some relevant results from the theory of curve and surface evolution, and show how these may be used for a number of areas in computer vision and image processing, such as image enhancement, optical flow, registration, image segmentation, shape theory, and invariant scale-spaces. We will demonstrate these techniques with a wide variety of medical images including MR, CT, and ultrasound.
The talk is designed to be accessible to a general audience with an interest in medical imaging.
Abstract: Despite their sensing limitations, blind people are able to explore and navigate in the world. We have been developing algorithms for robots with sensing limitations --- sparse and limited-range sensing --- to make maps and subsequently navigate using those maps. Our motivation is twofold: to improve the capabilities of inexpensive robots with a limited array of sensors and to investigate the relationship between sensing limitations and a robot's mapping and navigation abilities. As an example of the sensing limitations we have in mind, our robots are equipped with only five IR rangefinders that have a maximum range of 80 cm.
These maps reflect the "topological" connectivity between places. In particular, we have devised robot behaviors that trace the generalized Voronoi diagram under the $L_\infty$ distance metric, resulting in a complete algorithm for mapping rectilinear worlds with sparse short-range sensing.
Next, I will discuss how topological maps from multiple robots (without a common reference frame) can be merged using a combination of techniques from image registration and graph matching. Finally, I will show some preliminary results in applying a particle filter SLAM (simultaneous localization and mapping) technique to robots with sparse sensing to create maps of large-scale environments.
Abstract: A new type of computing is emerging from the convergence of several trends: shrinking computational costs, smaller and more powerful mobile devices, and the proliferation of communication service providers as new broadband devices--such as 802.11 access points--become not only cheap but also allow wireless transmission of data at 500 times the speed of even 3G cellular connections. Interestingly, a seemingly contrary trend of concentrating computational power in centralized, well-connected data centers has also emerged. In the new paradigm of "tetherless computing," client applications running on small, inexpensive, mobile devices, such as Personal Digital Assistants (PDAs), Radio Frequency ID (RFID) tag readers, and mobile telephones, maintain intermittent broadband wireless connectivity with back-end services running on powerful computers, enabling novel classes of applications.
In this talk, I will outline the nature, scope and social context of tetherless computing. I will then present an architecture for tetherless computing that, unlike existing solutions, provides *both* disconnection and mobility transparency. Our solution leverages the strengths of Distributed Hash Tables and Delay Tolerant Networking (DTN). Early results confirm that our architecture is robust, efficient, and highly scaleable.
This is joint work with Aaditeshwar Seth and Patrick Darragh at the University of Waterloo.
From this matrix it is possible to recover phase information that is normally lost in computing autocorrelation. I will discuss why this information is important and will talk about how the Autocorrelation Phase Matrix can be useful for problems like tempo prediction and beat tracking. I will conclude with speculations on the utility of this structure as a general feature for measuring music similarity.
Abstract: There has been a broad assumption that code clones are inherently bad and that eliminating clones by refactoring would solve the problems of code clones. To investigate whether this assumption is valid, we developed a formal definition of clone evolution and built a clone genealogy tool that automatically extracts the history of code clones from a source code repository. Using our clone genealogy extractor, we studied the evolution of code clones in two Java open source projects.
Our study of clone evolution contradicts some conventional wisdom about clones; refactoring may not benefit many clones for two reasons. First, many code clones exist in the system for only a short time, disappearing soon after; extensive refactoring of such short-lived clones may not be worthwhile if they are to diverge from one another very soon. Second, many clones, especially long-lived clones that have changed consistently with other elements in the same group, are not locally refactorable due to the programming language limitations. Our study discovers that there are types of clones that refactoring would not help, and it opens up opportunities for clone maintenance tools that target unaddressed classes of clones using clone genealogy information.
This is joint work with Miryung Kim, Vibha Sazawal, and Gail Murphy.
Abstract: In an on-going project at Carnegie Mellon University we are implementing a scalable distributed security architecture using smartphones and embedded devices in order to control access to a variety of resources such as office doors and computer accounts. At the heart of the effort is a so-called authorization logic which plays a dual role for policy specification and policy enforcement. A policy is specified as a collection of logical axioms that permit us to reason about who has access to which resources. A policy is enforced by requiring a formal proof of the right to access a resource before that access is granted.
We sketch the overall architecture and then focus on the authorization logic and how it is implemented in a logical framework. We also discuss some recent results that exploit the representation of authorization logic in a logical framework in order to prove important properties of policies, such as non-interference between principals.
Knowledge representation should be in terms of experience. Recent work has shown that a surprisingly wide range of world knowledge can be expressed as predictions of experience, enabling it to be automatically verified and tuned, and grounding its meaning in data rather than in human understanding.
experiences. General methods, such as dynamic programming, can be used to plan using knowledge expressed in predictive form.
State representation should be in terms of experience. Rather than talk about objects and their metric or even topological relationships, we represent states by the predictions that can be made from them. For example, the state "John is in the coffee room" corresponds to the prediction that going to the coffee room will produce the sight of John.
Much here has yet to be worked out. Each of the "should"s above can also be read as a "could", or even a "perhaps could". I am optimistic and enthusiastic because of the potential for developing a compact and powerful theory of AI in the long run, and for many easy experimental tests in the short run.
Abstract: We discuss a variety of discrete motion planning questions that can be regarded as far-reaching generalizations of the fifteen puzzle. For instance, given an initial position of a system of n congruent circular or square modules in the plane, is it possible to transform it into a prescribed target position, obeying certain natural motion rules? If the answer is yes, what is the smallest number of steps sufficient for completing this task? We survey some recent developments in this field, including the discovery of unexpected connections between questions of this type and old density problems for disk packings, due to Laszlo Fejes Toth (1915-2005). | CommonCrawl |
Locally decodable codes (LDCs) are error correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. Despite decades of study, the optimal trade-off between query complexity and codeword length is far from understood. In this work, we give a new characterization of LDCs using distributions over Boolean functions whose expectation is hard to approximate (in $L_\infty$ norm) with a small number of samples. We coin the term 'outlaw distributions' for such distributions since they 'defy' the Law of Large Numbers. We show that the existence of outlaw distributions over sufficiently 'smooth' functions implies the existence of constant query LDCs and vice versa. We also give several candidates for outlaw distributions over smooth functions coming from finite field incidence geometry and from hypergraph (non)expanders. We also prove a useful lemma showing that (smooth) LDCs which are only required to work on average over a random message and a random message index can be turned into true LDCs at the cost of only constant factors in the parameters.
Briët, J., Dvir, Z., & Gopi, S. (2017). Outlaw distributions and locally decodable codes. In Proceedings of ITCS.
Say we have a stock price time series $S_k$. We can do monte carlo simulations on the stock price to make predictions about future prices (e.g. through Geometric Brownian Motion SDE's).
Consider the transformation $$p = \frac{S_j - S_i}{S_i}$$ for some time indices $i<j$.
The transformed quantity $p$ is the percentage change of the stock price for some time period $i < t < j$. The only difference is that the percentage changes can be negative, whereas stock prices are always positive. Furthermore, in some cases, the quantity $p$ will behave like white noise.
Is it valid to do monte carlo simulations on stock price percentage change? If so, what conditions do we have to impose and what changes need to be made to the analysis? If not, why not?
Agree with will that this approach will complicate things, mostly for the fact that GBM SDEs rely on log returns, and not discrete returns. To go from some finite underlying price level $S$ to $0$ means a log return of $-\infty$, whereas the equivalent discrete return is $-1$. To ensure a discrete return - based stochastic process, where $S$ can never take a negative value would likely mean cumbersome tinkering with the formulae.
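A small Python illustration of the contrast (parameter values assumed for illustration only): GBM accumulates normally distributed log returns, so the simulated price stays positive, while naively compounding the same shocks as discrete percentage changes is a different process in which a sufficiently negative draw would push the price through zero.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, dt, n = 0.05, 0.2, 1 / 252, 252
z = rng.standard_normal(n)

# GBM accumulates *log* returns, so the simulated price can never go negative
log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S_gbm = 100 * np.exp(np.cumsum(log_ret))

# compounding the same shocks as *discrete* percentage changes is a different
# process: a single draw below -1 (possible for fat-tailed or scaled shocks)
# would push the price through zero, which GBM rules out by construction
pct = mu * dt + sigma * np.sqrt(dt) * z
S_naive = 100 * np.cumprod(1 + pct)
```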
Abstract: [en] Several variants of high-level replacement (HLR) and adhesive categories have been introduced in the literature as categorical frameworks for graph transformation and HLR systems based on the double pushout (DPO) approach. In addition to HLR, adhesive, and adhesive HLR categories, several weak variants, especially weak adhesive HLR with horizontal and vertical variants, as well as partial variants, including partial map adhesive and partial VK square adhesive categories, are reviewed and related to each other. We propose as weakest version the class of vertical weak adhesive HLR categories, short $\mathcal{M}$-adhesive categories, which are still sufficient to obtain most of the main results for graph transformation and HLR systems. The results in this paper are summarized in a figure showing the hierarchy of all these variants of adhesive, adhesive HLR, and $\mathcal{M}$-adhesive categories, which can be considered as different categorical frameworks for graph transformation and HLR systems.
goes to infinity as .
if is sufficiently slowly growing. Inserting these bounds into (8), the claim follows.
uniformly for , where are the primes between and .
The claim follows (noting from the prime number theorem that ).
This sum evaluates to , and the claim follows since goes to infinity.
Note that the trivial bound on (10) is , so one needs to gain about two logarithmic factors over the trivial bound in order to use the above proposition. The presence of the supremum is annoying, but it can be removed by a modification of the argument if one improves the bound by an additional logarithm by a variety of methods (e.g. completion of sums), or by smoothing out the constraint . However, I do not know of a way to remove the need to improve the trivial bound by two logarithmic factors.
Interesting post. Proposition 3 interests me, in particular. There would seem to be some hope of using it to understand horocycle flows along the primes, because (9) can probably be obtained from Ratner's theorem as B-S-Z do, whilst the stronger quantitative estimate (10) requires only an understanding of flows on , and not on the product . As I understand it, quite a lot is known in that setting.
Yes indeed. My feeling is that one is in the game so long as it is not necessary to understand quantitative distribution results on $G/\Gamma \times G/\Gamma$, which is beyond current techniques.
but just for ! I think we could improve it a little, but the problem is that you need (10), as far as I can see, for any ; in particular, for larger than this seems to me out of reach of any method using just harmonic analysis on the modular surface.
In other words, you need essentially any level for type I sums, which is typically quite hard to get.
Exactly, that is the trouble. Anyway, as you both point, it is really nice that now one "only" needs to control the horocycle flow in the modular surface, instead of having to know either about products of the modular surface or about the primes themselves.
A remark on the derivation of proposition 1 : applying (a corollary of) Selberg's version of the Bessel inequality (lemma 1.7 in Montgomery's "Topics in multiplicative number theory") to vectors $\phi_p : n \mapsto f(pn)$ and $\xi = \mu$ (with notations from the reference above), one immediately gets the result (once Turan-Kubilius is applied).
where the bound depends on p and q.
If H grows sufficiently slowly then the remaining sum is O(x) and this gives the estimate as required.
A viewpoint on Katai's orthogonality criterion | I Can't Believe It's Not Random! | CommonCrawl |
Classical linear regression estimates the mean response of the dependent variable dependent on the independent variables. There are many cases, such as skewed data, multimodal data, or data with outliers, when the behavior at the conditional mean fails to fully capture the patterns in the data.
Can be used to study the distributional relationships of variables.
Is useful for dealing with censored variables.
Is more robust to outliers.
Quantile regression estimates the conditional quantile of the response by minimizing an asymmetrically weighted sum of absolute residuals, $$\hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i} \rho_\tau\big(y_i - x_i'\beta\big), \qquad \rho_\tau(u) = u\big(\tau - \mathbf{1}\{u < 0\}\big),$$ where $\tau$ is the quantile level.
Each orange circle represents an observation while the blue line represents the quantile regression line. The black lines illustrate the distance between the regression line and each observation, which are labelled d1, d2 and d3.
Optimizing this loss function results in an estimated linear relationship between $y_i$ and $x_i$ where a portion of the data, $\tau$, lies below the line and the remaining portion of the data, $1-\tau$, lies above the line as shown in the graph below (Leeds, 2014).
In the graph above, 90.11% of the observations are below the quantile regression line, which was estimated with $\tau$ set to 0.9.
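As a quick sanity check of that property, here is an illustrative Python fit (not GAUSS code) that minimizes the check loss directly on a synthetic data set:

```python
import numpy as np
from scipy.optimize import minimize

def pinball(beta, X, y, tau):
    """Check (pinball) loss: residuals above the line weigh tau, below weigh 1 - tau."""
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.uniform(0, 10, 500)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(500) * (1 + 0.3 * X[:, 1])

beta90 = minimize(pinball, x0=np.zeros(2), args=(X, y, 0.9), method="Nelder-Mead").x
print((y < X @ beta90).mean())  # roughly 0.9: about 90% of points lie below the line
```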
Today we will use the GAUSS function quantileFit to estimate our salary model at the 10%, 25%, 50%, 75%, and 90% quantiles. This allows us insight into what factors impact salaries at the extremes of the salary distribution, in addition to those at quantiles in between those extremes.
String, name of data set.
String, formula of the model, e.g. "y ~ X1 + X2".
Optional argument, Nx1 vector, containing observation weights. Default = uniform weights.
Optional argument, an instance of the qfitControl structure containing members for controlling parameters of the quantile regression.
We can see in the table of our results that both the magnitude and intensity of the coefficients on our predictors change across the quantiles.
The magnitude of impact that Hits has on salary decreases as players' salaries move from the 10% quantile to those in the 90% quantile.
Hits is less statistically significant for the 90% quantile than lower quantiles.
HmRun is only statistically significant for the 75% and 90% quantiles.
This suggests that players with the highest salaries aren't necessarily paid to just hit balls but rather to hit home runs.
This paints a nice picture. However, it is inappropriate to make any conclusions without first considering how statistically significant these differences are (Leeds, 2014).
The quantile regression parameters and confidence intervals are in orange. The blue lines represent the OLS coefficient estimates and 95% confidence interval.
The graph above provides a visualization of the difference in coefficients across the quantiles with the bootstrapped confidence intervals. It also includes the OLS estimates, which are constant across all quantiles, and their confidence intervals.
From this graph, we can see that OLS coefficients fall within the confidence intervals of the quantile regression coefficients. This implies that our quantile regression results are not statistically different from the OLS results.
The intuition of quantile regression.
How to estimate a quantile regression model in GAUSS.
How to interpret the results from quantile regression estimates.
Leeds, M. 2014, "Quantile Regression for Sports Economics," International journal of sport finance, 9, 346-359. | CommonCrawl |
Hi, so I'm really struggling right now with the lambda calculus. It doesn't make sense to me.
Does this mean I have a function which takes an $x$ and outputs $g$, and I should apply it to $x$?
No, $(\lambda x\to x)$ is a function that takes a number (or, indeed, anything whether or not it is a number) to itself. When you say $(\lambda x\to x)y$ you're applying that function to $y$ (whatever $y$ is), and the result of that is just $y$.
In $(\lambda x\to gx)$ you have a function that takes something and returns the value of $g$ applied to that something -- as you describe it, a function that takes $x$ and produces $gx$.
However, again, when you write $(\lambda x\to gx)x$ you have taken the function just described and applied it to $x$, which would produce $gx$.
The statement that $g$ and $x$ are free variables means that the expression "$(\lambda x\to gx)x$" doesn't really have a value before you decide what the value of the variables $g$ and $x$ are going to be. It's a recipe for doing something with a thing you call $g$ and a thing you call $x$, but you can't really do it before you have decided what those things are going to be.
Note, by the way, that in $(\lambda x\to gx)x$ you have two different $x$ around. If we write it $(\lambda x_1\to gx_1)x_2$, the one shown as $x_1$ is just a dummy variable that tells you what the function does with its argument, whereas the $x_2$ is the free variable whose value you're supposed to decide.
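If a concrete analogy helps, Python's lambda mirrors the notation closely (an informal illustration, not a formal model of the calculus):

```python
g = lambda n: n + 1          # some fixed function to stand in for the free variable g

identity = lambda x: x       # the term (λx → x)
apply_g = lambda x: g(x)     # the term (λx → g x)

y = 41
print(identity(y))           # 41: (λx → x) y beta-reduces to y
print(apply_g(y))            # 42: (λx → g x) y beta-reduces to g y
```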
How do lambda calculus most basic definitions work?
What strongly normalizing lambda calculi exist that can be integrated with/as logic?
Confused by the explanation of beta reduction of lambda calculus on wikipedia. | CommonCrawl |
On a 15x15 chessboard there are 15 rooks that do not attack each other (via ordinary rook moves). Then each of the rooks makes one move like that of a knight.
Is it possible that after all this is done, the 15 rooks still do not attack each other (via ordinary rook moves)?
Since there is (initially and finally) a rook in each row and column, the sum of all rooks' X and Y positions must equal $2 \times (15 + 14 + \cdots + 1) = 240$.
A knight's move will increment the rook's (X + Y) by +3, +1, -1, or -3, all of which are odd numbers.
The sum of 15 odd numbers (15 knight moves) is an odd number.
Since we are performing 15 moves, we are adding an odd number to the sum of the rooks' X and Y positions.
The final sum of the rooks' X and Y positions cannot be equal to the initial sum since performing 15 knight moves adds a non-zero number to that sum.
Since the initial sum is 240, the final sum cannot be 240 and therefore the final arrangement can't be a legal (non-attacking) position.
This proof can be extended to any $N\times N$ board where $N$ is odd.
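A two-line Python check of the parity step (every knight move changes $X+Y$ by an odd amount):

```python
from itertools import product

# the 8 knight moves are the (dx, dy) with entries in {-2, -1, 1, 2} and |dx| != |dy|
deltas = [(dx, dy) for dx, dy in product([-2, -1, 1, 2], repeat=2) if abs(dx) != abs(dy)]
assert all((dx + dy) % 2 == 1 for dx, dy in deltas)  # every move flips the parity of X + Y
```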
The board wraps itself - i.e. from the top row, move 1 up, and you are now on the bottom row. In that case, all of the rooks make an identical move, and they are still safe.
You said all the rooks make one move. You didn't say that they make ONLY one move. So, the rooks simply move out, then back to their original positions.
The rooks move like knights. Presumably they also take like knights. In that case, I think the solution is obvious ;). | CommonCrawl |
Abstract: We compute the expected value of powers of the geometric condition number of random tensor rank decompositions. It is shown in particular that the expected value of the condition number of $n_1\times n_2 \times 2$ tensors with a random rank-$r$ decomposition, given by factor matrices with independent and identically distributed standard normal entries, is infinite. This entails that it is expected and probable that such a rank-$r$ decomposition is sensitive to perturbations of the tensor. Moreover, it provides concrete further evidence that tensor decomposition can be a challenging problem, also from the numerical point of view. On the other hand, we provide strong theoretical and empirical evidence that tensors of size $n_1~\times~n_2~\times~n_3$ with all $n_1,n_2,n_3 \ge 3$ have a finite average condition number. This suggests there exists a gap in the expected sensitivity of tensors between those of format $n_1\times n_2 \times 2$ and other order-3 tensors. For establishing these results, we show that a natural weighted distance from a tensor rank decomposition to the locus of ill-posed decompositions with an infinite geometric condition number is bounded from below by the inverse of this condition number. That is, we prove one inequality towards a so-called condition number theorem for the tensor rank decomposition. | CommonCrawl |
Abstract: We consider the problem of selecting non-zero entries of a matrix $A$ in order to produce a sparse sketch of it, $B$, that minimizes $\|A-B\|_2$. For large $m \times n$ matrices, such that $n \gg m$ (for example, representing $n$ observations over $m$ attributes) we give sampling distributions that exhibit four important properties. First, they have closed forms computable from minimal information regarding $A$. Second, they allow sketching of matrices whose non-zeros are presented to the algorithm in arbitrary order as a stream, with $O(1)$ computation per non-zero. Third, the resulting sketch matrices are not only sparse, but their non-zero entries are highly compressible. Lastly, and most importantly, under mild assumptions, our distributions are provably competitive with the optimal offline distribution. Note that the probabilities in the optimal offline distribution may be complex functions of all the entries in the matrix. Therefore, regardless of computational complexity, the optimal distribution might be impossible to compute in the streaming model. | CommonCrawl |
Mole fraction is the ratio of the number of moles of a particular component in a mixture to the total number of moles of all the components in the mixture. The symbol for mole fraction is $X$, with the component being studied written as a subscript.
The sum of the mole fractions is always unity.
Mole fraction has no units.
Example problems are given below based on mole fraction concept.
Question 1: A gaseous solution contains $3.5 \times 10^{-3}$ moles of N$_2$ and $1.5 \times 10^{-3}$ moles of CO$_2$. What is the mole fraction of each component present?
Question 2: What is the mole fraction of CH$_3$OH in an aqueous solution which is simultaneously 3.5 m C$_2$H$_5$OH and 2 m CH$_3$OH?
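Worked answers, filled in for completeness (the second assumes molality means moles of solute per kilogram of water, i.e. about $55.5$ mol of water per kg):

$$\text{Q1: } n_{\text{total}} = (3.5 + 1.5)\times 10^{-3} = 5.0\times 10^{-3}\ \text{mol}, \quad X_{\mathrm{N_2}} = \frac{3.5}{5.0} = 0.70, \quad X_{\mathrm{CO_2}} = 0.30.$$

$$\text{Q2: } n_{\mathrm{H_2O}} \approx \frac{1000}{18.02} \approx 55.5, \quad X_{\mathrm{CH_3OH}} = \frac{2}{55.5 + 3.5 + 2} = \frac{2}{61.0} \approx 0.033.$$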
Is there an algorithm which can construct a $9 \times 9$ Sudoku with a non-trivial number $n$ of possible solutions?
So if you play it, you could play it, for example, 4 times? That is, each time you play it there's a choice in which way you solve the Sudoku.
So, for example, we have a row in which it's possible to place two numbers in 2 ways, each giving a valid result for the whole Sudoku; that would mean you can play it 2 times.
My question is if there's a suitable method for "constructing" this kind of Sudokus?
In how many ways can we to place an $X$ in four cells, such that there is exactly one $X$ in each row, column, and $2\times2$ outlined box? | CommonCrawl |
An autopilot is the software that provides assistance while controlling a drone. While many autopilots allow the drone to fly/move autonomously following some pre-specified geographical points (e.g. using GPS), the autonomous movement is a characteristic of the autopilot but not the autopilot itself.
An autopilot is a system used to control the trajectory of a vehicle without constant 'hands-on' control by a human operator being required. Autopilots do not replace a human operator, but assist them in controlling the vehicle, allowing them to focus on broader aspects of operation, such as monitoring the trajectory, weather and systems. Autopilots are used in aircraft, boats (known as self-steering gear), spacecraft, missiles, and others. Autopilots have evolved significantly over time, from early autopilots that merely held an attitude to modern autopilots capable of performing automated landings under the supervision of a pilot.
Depending on the controlled variable, the control of a quadrotor is perceived differently by the pilot. According to the image, the easiest case for the pilot is controlling the desired (subscript $d$) positions through $x_d$, $y_d$ and $z_d$ (there's still one more level, corresponding to fully autonomous flight, where the pilot only sets the desired start and end points).
The task of an autopilot is to abstract the user from the different physical parameters (such as velocity, angular rates or moments) and offer a simple interface so that the piloting is as easy as possible.
We have been working hard on BeaglePilot, a complete Linux-based autopilot based on ardupilot that provides all the necessary tools and has been built by a collaboration between different entities and contributors.
Although it shouldn't be used in real drones, we also provide a simplified autopilot implemented in Python that should be used for pedagogical purposes.
A sudoku puzzle is a partially filled $9\times 9$ grid with numbers $1,\ldots,9$ such that each column, each row, and each of the nine 3×3 sub-grids that compose the grid does not contain two of the same number.
What is the minimal number of entries needed to produce an inconsistent puzzle, i.e., a puzzle that cannot be completed to a full solution?
EDIT: Now that there is an example where 5 is attained, can one show that this is the least possible, i.e., that any non-trivial puzzle with 4 entries can be completed to a solution?
I doubt there will be an easy-to-read explanation of the non-incompletability of partial sudoku with at most 4 non-empty cells. I attempted to form a proof of the 3 non-empty cell case (which is much easier), and it split into a large number of cases already. So, I instead opted for a computational solution. My C++ code is below (it's a simple backtracking algorithm).
The top left cell contains 1.
If these don't occur in a particular example, you can permute the rows/columns/symbols so that they do, while preserving the sudoku properties. Then we complete this case, then reverse the permutation on the completed sudoku to obtain a completion of the particular example in question. There are stricter conditions we could assume that would reduce the search space (but they weren't required).
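The author's original C++ listing did not survive extraction; as a stand-in, here is a minimal Python sketch of the same kind of simple backtracking completability check (illustrative only, not the author's code):

```python
def complete(grid):
    """Return True iff the partial 9x9 sudoku (0 = empty cell) has a valid completion."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    ok = (all(grid[r][k] != v for k in range(9)) and
                          all(grid[k][c] != v for k in range(9)) and
                          all(grid[3 * (r // 3) + i][3 * (c // 3) + j] != v
                              for i in range(3) for j in range(3)))
                    if ok:
                        grid[r][c] = v
                        if complete(grid):
                            return True
                        grid[r][c] = 0
                return False                  # dead end: backtrack
    return True                               # no empty cells left

grid = [[0] * 9 for _ in range(9)]
grid[0][0] = 1                                # place the assumed top-left clue
print(complete(grid))                         # True: this partial puzzle is completable
```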
The code found no examples in which a partial sudoku with 4 non-empty cells was uncompletable: in every case, a completion was constructed.
B. Smetaniuk, A new Construction on Latin Squares. I. A Proof of the Evans Conjecture. Ars Combinatoria (11) 1981, pp. 155-172.
Sudoku grid guaranteed to be solvable?
Is this generalized Sudoku solvable? | CommonCrawl |
The sampling rate of a real signal needs to be greater than twice the signal bandwidth. Audio practically starts at 0 Hz, so the highest frequency present in audio recorded at 44.1 kHz is 22.05 kHz (22.05 kHz bandwidth).
Perfect brickwall filters are mathematically impossible, so we can't just perfectly cut off frequencies above 20 kHz. The extra 2 kHz is for the roll-off of the filters; it's "wiggle room" in which the audio can alias due to imperfect filters, but we can't hear it.
The specific value of 44.1 kHz was compatible with both PAL and NTSC video frame rates used at the time.
Note that the rationale is published in many places: Wikipedia: Why 44.1 kHz?
44,100 was chosen by Sony because it is the product of the squares of the first four prime numbers. This makes it divisible by many other whole numbers, which is a useful property in digital sampling.
As you've noticed, 44100 is also just above the limit of human hearing doubled. The just above part gives the filters some leeway, therefore making them less expensive (fewer chips rejected).
As Russell points out in the comments, the divisible by many other whole numbers aspect had an immediate benefit at the time the sample rate was chosen. Early digital audio was recorded on existing analog video recording media which supported, depending on region, either the NTSC or PAL video spec. NTSC and PAL had different Lines per Field and Fields per Second rates, the LCM of which (together with the Samples per Line) is 44100.
The Nyquist rate, twice the bandlimit of a baseband signal, is the rate you must exceed to capture that signal without ambiguity (e.g. aliasing).
Sample at a lower rate than twice 20kHz, and you won't be able to tell the difference between very high and very low frequencies just from looking at the samples, due to aliasing.
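A small numerical illustration of that ambiguity (illustrative Python; the specific frequencies are arbitrary): sampled at exactly 40 kHz, a 21 kHz tone produces the same samples as a 19 kHz tone of opposite sign, so the two sampled sequences cancel.

```python
import numpy as np

fs = 40_000                               # sample at exactly twice 20 kHz
t = np.arange(0, 0.001, 1 / fs)
low = np.sin(2 * np.pi * 19_000 * t)      # 19 kHz tone, inside the band
high = np.sin(2 * np.pi * 21_000 * t)     # 21 kHz tone, above fs/2

# 21 kHz folds down to fs - 21 kHz = 19 kHz with a sign flip, so the two
# sampled sequences are (numerically) negatives of each other
print(np.max(np.abs(low + high)))         # ~1e-10: indistinguishable after sampling
```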
Added: Note that any finite length signal has infinite support in the frequency domain, thus is not strictly bandlimited. This is yet another reason why sampling any non-infinite audio source a bit above twice the highest frequency spectra (in a baseband signal) is required to avoid significant aliasing (beyond just reasons of finite filter transition roll-off).
Basically, twice the bandwidth is a common requirement for signal sampling, thus $2\times 20 = 40$ kHz is a minimum. Then, a little more is useful to cope with imperfect filtering and quantization. Details follow.
I am not an expert in audio, but I have been trained by high-quality audio sampling/compression people. My knowledge might be rusty, take it with caution.
This is the analysis part of the "sampling theorem". The "can be" is important. There is a synthesis part: the continuous signal "can be reconstructed" analogously using cardinal sines. This is not the only technique, and it does not take into account low-pass prefiltering, non-linear (such as quantization, saturation) and other time-variant factors.
Hearing is not linear: there are audition and suffering thresholds. It is not time-invariant. There are masking effects in both time and frequency.
If 20 Hz up to 20,000 Hz is the common range, then 40,000 Hz should theoretically suffice, but a little extra is needed to cope with extra distortion. A rule of thumb says that 10% more is ok ($2.2\times$ signal bandwidth) and 44,100 Hz just does it. It goes back to the late 1970s. Why is 44,000 Hz not used? Mainly because of standards, set by the popularity of CDs, whose technology is as always based on a trade-off. In addition, 44,100 is the product of squares of the first four prime numbers ($2^2 \times 3^2 \times 5^2 \times 7^2$), hence has small factors, beneficial for computations (like FFT).
So from $2\times 20 $ to $44.1$ (and multiples), we have a balance in safety, quantization, usability, computations and standards.
Other options exist: the DAT format for instance was released with 48 kHz sampling, with initially difficult conversion. 96 kHz is discussed with respect to quantization (or bit depth) in What sample rate and bit depth should I use? This is a controversial subject, see 24 bit 48kHz verses 24 bit 96kHz. You can check Audacity sample rates for instance.
Why it is exactly 44.1 kHz has been already answered - but to focus on the aspect of your question related to the limit of human perception, the reason is quite simple.
In order to faithfully reproduce a signal, the faster the sample rate the better. ~40 kHz was chosen, because it was a low sample rate that most people can't tell the difference for (when reconstructed). When audio sampling was introduced, memory and storage was expensive and higher sample rates were not cheaply possible.
At double the upper limit of human hearing, two samples per cycle gives very poor reconstruction; even if it meets the Nyquist criterion for sampling signals, a simple chart depicting a sine wave with two samples per cycle will show you how poor two samples per cycle is at reproducing a waveform. You can literally turn a sine wave into a square wave; it is a good thing that at 20 kHz nobody can tell. I bet a dog could though.
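A one-line computation (mine, not the answerer's) shows just how degenerate exactly two samples per cycle is: sampling $\sin(2\pi f t+\phi)$ at $f_s = 2f$ gives

$$\sin(\pi n+\phi)=(-1)^n\sin\phi,$$

a square-wave-like alternation whose amplitude depends entirely on the phase $\phi$; for $\phi=0$ every sample is zero. This is why the Nyquist condition must be strict.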
What is the fundamental reason for the fermion doubling?
Recall that fermion doubling is the problem in taking the $a \to 0$ limit of a naively discretized fermionic theory (defined on a lattice with lattice spacing $a$). After such a limit one finds oneself with additional fermionic fields (precisely $2^d$ of them). One can fix this by considering different discretizations of the action that make the unwanted fields decouple in the continuum limit. The downside is that the additional terms have to spoil some nice features of the theory (chiral symmetry, locality, lattice symmetry, ...).
Now, I wonder what is the true reason for the appearance of new fields. Is it the fermionic nature of the theory? (In other words, is a similar problem ruled out for bosonic fields?) And do all (naive?) fermionic theories (that is, independent of the continuum form of the action) suffer from this problem?
More generally, how can one tell a priori what will the field content of a lattice theory in the continuum limit be? Or is the field content fundamentally a continuum limit property that has to be calculated?
The fermion doubling is manifested through the existence of extra poles in the Dirac propagator on the lattice. These poles cannot be made to disappear in the continuum limit. (The number of doublers can be reduced by different discretizations, but they cannot be eliminated altogether; this is essentially the Nielsen–Ninomiya theorem.)
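For reference (standard lattice-field-theory material, not spelled out in the answer): the naive discretization replaces each momentum component in the free Dirac operator by $\sin(p_\mu a)/a$,

$$D(p)=\frac{i}{a}\sum_{\mu}\gamma_\mu\sin(p_\mu a)+m,$$

and since $\sin(p_\mu a)$ vanishes both at $p_\mu=0$ and at $p_\mu=\pi/a$ in each of the $d$ directions, the massless propagator has $2^d$ poles in the Brillouin zone: one physical fermion and $2^d-1$ doublers.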
The reason for the fermion doubling lies in the existence of the chiral anomaly. This anomaly exists in the continuum limit due to the chiral non-invariance of the path integral measure and not because of the non-invariance of the Lagrangian. In a lattice formulation of a chiral theory based on a discretization of the Lagrangian, the anomaly is absent, and the lattice formulation generates the extra species just to cancel this anomaly in the continuum limit. Since the axial anomaly exists in nature (as seen in $\pi^0 \rightarrow \gamma \gamma$), this situation is unacceptable.
The fermion doubling problem is an artifact of the realization of the theory by means of quarks where the axial anomaly is not present in the lagrangian but in the path integral measure. There are approaches of other types of discretizations such as by means of fuzzy spaces in noncommutative geometry, where the quarks are not the basic fields of the theory. In these approaches, the fermion doubling problems do not exist.
This is an update referring to Marek's first comment.
For fermions, the axial anomaly can be recovered only by a non-trivial regularization of the path integral measure of the fermions. Any finite-dimensional approximation of this measure as a product of Berezin–Grassmann integrals does not produce the axial anomaly. This is the reason why the lattice regularization does not produce the axial anomaly, and as a consequence the phenomenon of doubling occurs, where the different species of doublers are of opposite chirality so as to cancel the anomaly. This is a property of fermion fields represented by Grassmann variables. In effective field theories (where the basic fields are pions), such as sigma models, the axial anomaly is manifested through a Wess–Zumino–Witten term in the Lagrangian, but these theories are not even perturbatively renormalizable, and I think this is the reason why they were not put on a lattice.
One approach that I know of which solves the fermion doubling problem is regularization by means of a fuzzy space approximation of the space-time manifold. The philosophy of this approach is explained in the introduction of Marc Rieffel's article. The resolution of the fermion doubling using this approach is given nicely in Badis Ydri's thesis. (There is also more recent work on the subject by A. Balachandran and B. Ydri in the arXiv.)
The main idea is that the Poisson algebra of functions over certain spaces (such as the two-sphere, and more generally coadjoint orbits of compact Lie groups) can be approximated by finite-dimensional matrices. These approximations are called fuzzy spaces. Gauge fields and fermions can be constructed on these fuzzy spaces, which have the correct continuum limit when the matrix dimensions become infinite. This formulation contains the axial anomaly inherently, thus it is free of the doubling problem. The only drawback that I can see of this approach is that it is applicable only to some special 4-dimensional manifolds such as $CP^2$ or $S^2 \times S^2$, because the fuzzy manifold is required to be Berezin-quantizable.
Thanks. So, am I correct in concluding that doubling problem is not really a fermionic effect but can appear also in bosonic theories as a consequence of chiral non-invariance of path-integral? Also, do you have any reference for this stuff (either book or paper would be fine)?
@Marek: How can you get a chiral anomaly without fermions?
@Bebop: I don't know much about anomalies (much less the chiral ones), so no idea. But chirality isn't unique to fermions, so unless you are referring to some technical result where half-integer spin is required in some calculation where the anomaly appears, I don't follow. Could you elaborate?
@BebopButUnsteady: The chiral anomaly appears explicitely in the lagrangian of effective theories in the form of a Wess-Zumino-Witten term. In his seminal paper "Global aspects of current algebra" Nuclear Physics B Volume 223, Issue 2, 22 August 1983, Pages 422-432, Witten computed the amplitude of $\pi^0 \rightarrow \gamma \gamma$ by gauging the Wess-Zumino term of the effective action of the $SU(3)$ sigma model + a Wess-Zumino term.
@David Bar Moshe: I had somehow forgotten about that... Thanks for the reminder.
@David: thanks for the update!
When $p_\mu$ is near zero, one is considering a momentum near zero and the discrete lattice works fine. It's when $p_\mu$ is near $\pm \pi/a$, or more generally, near $n\pi/a$ for $n\neq 0$ an integer, that one finds problems. Instead of having very large momenta, these values of $p_\mu$ essentially give momenta as small as those near $p_\mu=0$, but with signs in the $\gamma$ matrices negated.
This is a type of aliasing problem. As with the usual aliasing, the problem at $\pm \pi/a$ goes away when one makes the sample rate faster, that is, replaces $a$ with a smaller value. And just as with aliasing, making $a$ smaller does not eliminate aliasing entirely, but instead pushes the problem to a higher frequency.
The difference with the usual aliasing is what happens when $p_\mu = (2n+1)\pi/a$. These are the values that give a continuous Dirac equation with negated gamma matrices. To understand better what is going on here, let's consider the density matrix form. Density matrices avoid unphysical complex phases. I'll work in 3+1 dimensions.
One has latitude in how one chooses the four states. In general, one chooses two elements of the Dirac algebra that (a) square to unity, (b) commute, and (c) are independent. These are called a "complete set of commuting roots of unity."
$$\rho = (1\pm \sigma_z)(1\pm Q)/4.$$ To get the spinors from a density matrix, one chooses a nonzero column and normalizes. Thus spinors and density matrices are alternative mathematical representations of wave functions; neither is more fundamental.
If we discretize the density matrix, we will end up with the usual aliasing problem. From the point of view of lattice type calculations this is acceptable; there will be no duplicated particles. But spinors carry an extra degree of freedom; the arbitrary complex phase. This makes their aliasing behavior more complicated.
In the above, the density matrix will see this frequency appropriately as a high frequency. But with a spinor, we have arbitrary complex phase freedom. So we can negate the blue dots; the result is a low frequency. Thus the arbitrary complex phases of spinors naturally give aliasing problems at half frequencies.
I'm unsatisfied with the above and would delete it except I think it has a germ of truth in it. Maybe the argument should instead be that the (naive) conversion from wave function to density matrix eliminates the frequency aliasing. Or that the problem of making a unitary derivative operator is easier in the density matrix form than in the state vector form (which requires looking at the Dirac equation for density matrices). See: arxiv.org/abs/hep-lat/0207008 for another explanation of the relationship between aliasing and fermion doubling.
I have to say I don't understand this answer at all (even after reading it twice), sorry.
If you modeled wave functions in position instead of momentum, you would end up aliasing for all the usual reasons. And the aliasing would show up doubled in state vectors (compared to density matrix) because of the arbitrary complex phase. Yet density matrices and state vectors are equally fundamental. Translating the calculation to momentum space (where it's actually done), you'll end up with something similar. | CommonCrawl |
Abstract: Many Ramond–Ramond backgrounds that arise in the AdS/CFT correspondence are described by integrable sigma-models. The equations of motion for classical spinning strings in these backgrounds can be solved quite generally by the finite-gap integration method via classical Bethe equations and algebraic curves. This construction is reviewed for general $\mathbb Z_4$ cosets and then exemplified by the backgrounds that arise in the AdS$_5$/CFT$_4$, AdS$_4$/CFT$_3$ and AdS$_3$/CFT$_2$ dualities.
There is an insidious virus (less contagious than C. Coli, but still dangerous) which plagues the squares of checkerboards. This virus spreads through social interaction: whenever the majority of a friend group becomes sick, the entire group becomes sick.
Specifically, any four squares which meet at a common vertex are mutual friends. Whenever three out of four squares in a friend group are sick, the remaining friend becomes sick the next day. This is the only way the infection can spread.
Initially, 14 squares of a checkerboard are infected. Is it possible for the infection to spread to the entire board?
The answer would be yes if 14 was replaced with 15, as shown below.
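If you want to experiment with seed configurations yourself, here is a small simulation of the rule (my sketch, not part of the puzzle): it repeatedly applies the 2×2 rule until nothing changes and reports whether the whole board ends up infected. One 15-cell seed that works is the first row plus the first column.

```cpp
#include <cstdio>
#include <vector>

// Run the "three sick squares in a 2x2 block infect the fourth" rule
// to a fixed point; return true if the whole board ends up infected.
bool spreads(std::vector<std::vector<int>> sick) {
    int n = sick.size();
    bool changed = true;
    while (changed) {
        changed = false;
        for (int r = 0; r + 1 < n; ++r)
            for (int c = 0; c + 1 < n; ++c) {
                int cnt = sick[r][c] + sick[r][c+1] + sick[r+1][c] + sick[r+1][c+1];
                if (cnt == 3) {  // infect the lone healthy square in this group
                    sick[r][c] = sick[r][c+1] = sick[r+1][c] = sick[r+1][c+1] = 1;
                    changed = true;
                }
            }
    }
    for (auto& row : sick)
        for (int v : row) if (!v) return false;
    return true;
}

int main() {
    std::vector<std::vector<int>> b(8, std::vector<int>(8, 0));
    for (int i = 0; i < 8; ++i) b[0][i] = b[i][0] = 1;  // 15 cells total
    std::printf("%s\n", spreads(b) ? "whole board infected" : "infection stalls");
}
```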
See the solution to Checkerboard Infection. This puzzle also has a simple answer.
Say that a set of infected cells $C$ is rookable if a chess rook could travel from any square in $C$ to any other by a series of rook moves that all end on a cell in $C$. For example, two opposite corners of the board would not be rookable, but adding a third corner would be rookable. Clearly, if the whole board is infected, that forms a rookable set. Notice that if you run the infection backwards in time it will always still be a rookable set. Why? Well, removing one corner of a $2 \times 2$ still allows the rook to move around the other side of the $2 \times 2$ and access any row or column that it could before. Therefore, the initial infected cells formed a rookable set.
Notice also that the initial cells must occupy every row and column. What is the smallest rookable set that also hits all the rows and columns? Well, imagine the rook starting somewhere and then having to hit the seven other columns and seven other rows. Any given move can enter a new row, or a new column, but not both. Therefore, there must be at least $7+7$ other initial cells besides the one the rook starts in, for a total of at least $15$.
No, it is impossible to make it with fewer than 15.
you need to put another two squares, which can be anywhere except the middle. Otherwise there will be no solution.
to spread it out everywhere, you just need to put two infected squares, not in the same places you had before, every time the dimension increases.
impossible with fewer than $n_8$, but by induction, I found this result.
The key observation: as the infection spreads, the perimeter of the infected cells does not increase.
Let $P$ be the perimeter of the infected cells, and let $N$ be the number of contiguous groups of infected cells. As the infection spreads, the quantity $Q = P - 2N$ does not increase. This is because any merging of two groups must be accompanied by a decrease in perimeter by $2$ (and when $3$ groups merge, the perimeter decreases by $4$).
Furthermore, when $k$ cells are initially infected, the initial value of Q is at most $2k$. Indeed, suppose these initial infections form N contiguous groups with sizes $k_1,k_2,\dots,k_N$. Each group has a perimeter of at most $4k_i - 2(k_i-1) = 2k_i + 2$; each cell contributes 4 edges, but we must subtract out the $k_i-1$ overlaps. Adding these all up, the total perimeter P of all these groups satisfies $P\le 2k+2N$, implying $Q\le 2k$.
Therefore, when you initially infect $14$ cells, $Q$ is at most $28$; but to infect the whole board, $Q$ needs to end at $32 - 2\cdot 1 = 30$.
Since $28 < 30$, fourteen cells cannot suffice; you need at least $15$ infected cells to start with.
Note: This proof is incorrect, and I don't think it can be easily fixed. Nevertheless I'm leaving this answer up because it shows the difficulty of this straightforward approach.
Base case: It is obviously true that you need $1$ infected square to infect a $1 \times 1$ board. That may seem too trivial to work as a base case (though it is perfectly valid), so if you prefer you can use the fact that a $2\times2$ board needs at least $3$ infected squares to also infect the fourth.
Induction step: Suppose for the sake of induction that the hypothesis holds for a particular board size $m \times n$. Add a new row or column to the side of that board. Every newly added square has only a single neighbour that is part of the old board, so if none of the added squares are infected, they can never have two infected neighbours and so will remain healthy. Therefore, to infect the newly added row/column, at least one of the new squares must start off sick. Furthermore, you cannot infect any of the new squares before the adjacent cell on the original board is infected. This means that the original board must be infected without any help from the squares in the new row/column (*). The original board needed at least $m+n-1$ infected squares, so the expanded board must therefore need at least one more, i.e. $(m+n-1)+1$ infected squares.
You can rewrite this as $(m+1)+n-1$ or $m+(n+1)-1$ depending on whether you added a row or a column to the board.
By induction the result follows: $m+n-1$ infected squares are needed on an $m \times n$ board.
If you place two or more adjacent infected squares in the extra row/column, then the extra row/column helps infect the rest of the board, and maybe that allows the rest of the board to have fewer initially infected squares to compensate. The argument I used does not disprove the possibility of having an extra infected square in the extra column which then saves 2 or more initial infected squares from the rest of the board.
You are given a rooted tree with $n$ nodes. The nodes are numbered $1..n$. The root is node $1$, and $m$ of the nodes are colored red, the rest are black.
You would like to choose a subset of nodes such that there is no node in your subset which is an ancestor of any other node in your subset. For example, if A is the parent of B and B is the parent of C, then you could have at most one of A, B or C in your subset. In addition, you would like exactly $k$ of your chosen nodes to be red.
If exactly $m$ of the nodes are red, then for each $k=0..m$, figure out how many ways you can choose a subset with $k$ red nodes such that no chosen node is an ancestor of any other chosen node.
Each input will consist of a single test case. Note that your program may be run multiple times on different inputs. Each test case will begin with a line with two integers $n$ ($1 \le n \le 2 \times 10^5$) and $m$ ($0 \le m \le min(10^3,\ n)$), where $n$ is the number of nodes in the tree, and $m$ is the number of nodes which are red. The nodes are numbered $1..n$.
Each of the next $n-1$ lines will contain a single integer $p$ ($1 \le p \le n$), which is the number of the parent of this node. The nodes are listed in order, starting with node $2$, then node $3$, and so on. Node $1$ is skipped, since it is the root. It is guaranteed that the nodes form a single tree, with a single root at node $1$ and no cycles.
Each of the next $m$ lines will contain single integer $r$ ($1 \le r \le n$). These are the numbers of the red nodes. No value of $r$ will be repeated.
Output $m+1$ lines, corresponding to the number of subsets satisfying the given criteria with a number of red nodes equal to $k=0..m$, in that order. Output this number modulo $10^9+7$. | CommonCrawl |
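The statement does not prescribe an algorithm, but a standard approach (a sketch under my own assumptions, in particular that the empty subset counts once for $k=0$) is a tree knapsack: for each node $v$ keep a vector $f_v$, where $f_v[k]$ counts the antichains in $v$'s subtree using exactly $k$ red nodes; children are merged by truncated convolution, and the singleton antichain $\{v\}$ is added afterwards. Since vector lengths are bounded by the number of red nodes in the subtree (capped at $m+1$), the total work is roughly $O(nm)$.

```cpp
#include <cstdio>
#include <vector>
#include <algorithm>
using namespace std;

const long long MOD = 1000000007LL;

int main() {
    int n, m;
    scanf("%d %d", &n, &m);
    vector<vector<int>> ch(n + 1);
    for (int v = 2; v <= n; ++v) {
        int p; scanf("%d", &p);
        ch[p].push_back(v);
    }
    vector<int> red(n + 1, 0);
    for (int i = 0; i < m; ++i) {
        int r; scanf("%d", &r);
        red[r] = 1;
    }

    // BFS order guarantees parents precede children; process in reverse
    // (avoids deep recursion on a path-shaped tree with n up to 2e5).
    vector<int> order = {1};
    for (size_t i = 0; i < order.size(); ++i)
        for (int c : ch[order[i]]) order.push_back(c);

    // f[v][k] = antichains in v's subtree with exactly k red nodes
    // (the empty antichain contributes 1 at k = 0).
    vector<vector<long long>> f(n + 1);
    for (int i = n - 1; i >= 0; --i) {
        int v = order[i];
        vector<long long> cur = {1};                  // just the empty antichain
        for (int c : ch[v]) {                         // knapsack-merge children
            size_t sz = min((size_t)m + 1, cur.size() + f[c].size() - 1);
            vector<long long> nxt(sz, 0);
            for (size_t a = 0; a < cur.size(); ++a)
                for (size_t b = 0; b < f[c].size() && a + b < sz; ++b)
                    nxt[a + b] = (nxt[a + b] + cur[a] * f[c][b]) % MOD;
            cur = move(nxt);
            f[c].clear();                             // free child memory
        }
        int k = red[v];                               // add the antichain {v}
        if (k <= m) {
            if ((int)cur.size() <= k) cur.resize(k + 1, 0);
            cur[k] = (cur[k] + 1) % MOD;
        }
        f[v] = move(cur);
    }
    for (int k = 0; k <= m; ++k)
        printf("%lld\n", k < (int)f[1].size() ? f[1][k] : 0LL);
    return 0;
}
```

If the judge does not count the empty subset as a valid choice for $k=0$, subtract one from the first output line.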
I am trying to solve the following problem: $|4-x| \leq |x|-2$. I am trying to do it algebraically, but I'm getting a solution to the problem that makes no sense. I fail to see the error in my reasoning though. I hope to get an explanation where I went wrong.
If $4-x$ and $x$ are both negative, then for them to be equal to $2$, we need to multiply both expressions by $-1$.
But if you sub in any $x$ less than or equal to $1$, the inequality doesn't work! Can you please explain where in my logic, where in the steps, have I gone wrong? Thank you!
For both $x$ and $4-x$ to be negative, we would need $x<0$ and $x>4$ simultaneously, which is impossible.
As for the solution, the right side has to be non-negative (the left side is an absolute value), hence $| x | \geq 2$. Now, we solve it in $3$ parts.
In $(-\infty,-2]$, we have $4+|x| \leq |x|-2$, which is never true.
In $(4,\infty)$, we have $x-4 \leq x-2$, which is always true.
In $[2,4]$, we have $4-x \leq x-2 \implies x \geq 3$.
Hence the solution is $x \in [3, \infty)$.
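A quick boundary check (my addition, not part of the answer):
$$x=3:\ |4-3|=1\le|3|-2=1;\qquad x=2.5:\ |4-2.5|=1.5\not\le|2.5|-2=0.5.$$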
Define $f(x)=|4-x|-|x|+2$. We want to find all values of $x$ for which $f(x)\le0$.
In the middle interval we see that $f(x)$ is decreasing and reaches a value of $0$ at $x=3$ and we get $f(x)\le0$ for $x\ge3$. So the solution set is $[3,\infty)$.
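Writing out the three pieces explicitly (my addition) makes this immediate:
$$f(x)=|4-x|-|x|+2=\begin{cases}6, & x<0,\\ 6-2x, & 0\le x\le 4,\\ -2, & x>4,\end{cases}$$
so $f(x)\le 0$ exactly on $[3,\infty)$.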
So, solve in $[2,4]$, $[4,+\infty)$ and $(-\infty,-2]$.
If $X\times Y$ is homotopy equivalent to a finite-dimensional CW Complex, are $X$ and $Y$ as well?
Is there a space $X$ that is not homotopy equivalent to a finite-dimensional CW complex for which there exists a space $Y$ such that the product space $X\times Y$ is homotopy equivalent to a finite-dimensional CW complex? If so, how might we construct an example?
A first consideration could be where $X$ has infinitely many nontrivial homology groups, in which case $X$ is not homotopy equivalent to a finite-dimensional CW complex. Yet, in this case it follows from the Künneth Formula that no suitable $Y$ exists. Of course, this only rules out spaces with infinitely many nontrivial homology groups.
Another consideration could be where $X$ is an Eilenberg–MacLane space for which $\pi_1(X)$ has non-trivial torsion, in which case $X$ is, again, not homotopy equivalent to a finite-dimensional CW complex. In this case, if $\pi_1(X)$ is abelian then $X$ is homotopy equivalent to $L\times Z$ for some other space $Z$ and an infinite-dimensional lens space $L$, and hence $X$ has infinitely many nontrivial homology groups, which implies, again, that no suitable $Y$ exists. This has led me to wonder if there exists a space $Y$ and a noncommutative group $G$ with non-trivial torsion such that $Y\times K(G,1)$ is homotopy equivalent to a finite-dimensional CW complex, where $K(G,1)$ denotes an Eilenberg–MacLane space.
Any insight into approaching this problem is greatly appreciated.
I was spending the weekend at Woodful Towers when the wealthy old Sir Joshua Woodful was horribly murdered in the library.
Transformations is a collective name for several different methods in Geometry.
Suppose we enter a number, $abc$, say, in a calculator and then repeat it to get $abcabc$. Now divide by $13$, $11$ and $7$. The calculator will always display the original number $abc$; why?
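The reason, written out (the classic factorization):
$$\overline{abcabc}=1000\cdot\overline{abc}+\overline{abc}=1001\cdot\overline{abc}=7\times11\times13\cdot\overline{abc},$$
so dividing by $13$, $11$ and $7$ in turn strips off the factor $1001$ and leaves $\overline{abc}$.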
Q.503 A rectangle 11 cm $\times$ 7 cm is divided by ruled lines into 1 cm $\times$ 1 cm squares, each containing a button.
Dimitrova, I., Vítor H. Fernandes, and J. Koppitz. "A note on generators of the endomorphism semigroup of an infinite countable chain." Journal of Algebra and its Applications (DOI: 10.1142/S0219498817500311). 16 (2017): 1750031 (9 pages).
In this note, we consider the semigroup $O(X)$ of all order endomorphisms of an infinite chain $X$ and the subset $J$ of $O(X)$ of all transformations $\alpha$ such that $|Im(\alpha)|=|X|$. For an infinite countable chain $X$, we give a necessary and sufficient condition on $X$ for $O(X) = \langle J \rangle$ to hold. We also present a sufficient condition on $X$ for $O(X) = \langle J \rangle$ to hold, for an arbitrary infinite chain $X$.
Lurie, Jacob. On the classification of topological field theories. Current developments in mathematics, 2008, 129--280, Int. Press, Somerville, MA, 2009. 58Jxx (57Rxx) MR2555928. arXiv:0905.0465.
Or, rather, Lurie first provides reasonable definitions for a number of things, and then proves that there is an equivalence of $n$-categories between the $(0,\dots,n)$-TQFTs with target $\mathcal V$ and the $n$-groupoid of ("fully") dualizable objects in $\mathcal V$. (The classification is not particularly effective in two ways: given a dualizable object, which is the value the TQFT assigns to a point, it can still be very hard to understand the functor on complicated manifolds; and given a category, it can still be very hard to classify its dualizable objects.) For a review, see nLab: cobordism hypothesis.
Question: Is there a classification, similar to Lurie's, for $(k,\dots,k+n)$-TQFTs with a given target $n$-category?
My guess is that extending this style of classification to any of the adjacent slots (1,2,3,4), (2,3,4) or (2,3) would be very difficult. For (1,2,3,4) one would need to start by describing a categorified action of mapping class groups of surfaces in terms of local data; the uncategorified version is already long and messy (see refs above). For (2,3,4) one would need to characterize mapping class groups of 3-manifolds in terms of local data (Hatcher-Thurston for 3-manifolds).
1) $(k,k+1)$-Bord must be interpreted as an $(\infty,1)$-category (or at least as an $(n,1)$-category), rather than as an ordinary category. Consequently, this is a very complicated object even when $k=1$ (to my knowledge, there is no concrete description of its representations along the lines of "commutative Frobenius algebras"). Fortunately it is quite easy to understand when $k < 0$, which is exploited in the treatment of the case of fully extended field theories.
2) The presentation is more complicated than in the fully extended case. When increasing the dimension, you need to add generators and relations corresponding to handles and handle cancellations for all indices (in the fully extended case, there is a cancellation phenomenon which ends up telling you that the only data you need to supply is for a handle of index 0).
July 2016, 397 pages, hardcover, 17 x 24 cm.
It has been known for some time that geometries over finite fields, their automorphism groups and certain counting formulae involving these geometries have interesting guises when one lets the size of the field go to 1. On the other hand, the nonexistent field with one element, $\mathbb F_1$, presents itself as a ghost candidate for an absolute basis in Algebraic Geometry to perform the Deninger–Manin program, which aims at solving the classical Riemann Hypothesis.
This book, which is the first of its kind in the $\mathbb F_1$-world, covers several areas in $\mathbb F_1$-theory, and is divided into four main parts – Combinatorial Theory, Homological Algebra, Algebraic Geometry and Absolute Arithmetic.
Topics treated include the combinatorial theory and geometry behind $\mathbb F_1$, categorical foundations, the blend of different scheme theories over $\mathbb F_1$ which are presently available, motives and zeta functions, the Habiro topology, Witt vectors and total positivity, moduli operads, and at the end, even some arithmetic.
Each chapter is carefully written by experts, and besides elaborating on known results, brand new results, open problems and conjectures are also met along the way.
The diversity of the contents, together with the mystery surrounding the field with one element, should attract any mathematician, regardless of speciality. | CommonCrawl |
Journal: Proc. Amer. Math. Soc.
In this paper we consider the evolution of sets by a fractional mean curvature flow. Our main result states that for any dimension $n \geq 2$, there exists an embedded surface in $\mathbb{R}^n$ evolving by fractional mean curvature flow which develops a singularity before it can shrink to a point. When $n \geq 3$ this result generalizes the analogous result of Grayson for the classical mean curvature flow. Interestingly, when $n = 2$, our result instead provides a counterexample in the nonlocal framework to the well-known Grayson theorem, which states that any smooth embedded curve in the plane evolving by (classical) MCF shrinks to a point.
plus a long and unwieldy term $S_2$ due to the so-called local kappa symmetry, which has to be preserved. This $S_2$ term is not further explained or derived.
So can somebody at least roughly explain to me what this kappa symmetry is about and what its purpose is from a physics point of view?
On general super-target spaces the $\kappa$-symmetry of the Green-Schwarz action functional is indeed a bit, say, in-elegant. But a miracle happens as soon as the target space has the structure of a super-group (notably if it is just super-Minkowski spacetime with its canonical structure of the super-translation group over itself): in that case the Green-Schwarz action functional is just a supergeometric analog of the Wess-Zumino-Witten functional with a certain exceptional super-Lie algebra cocycle on spacetime playing the role of the B-field in the familiar WZW functional. It turns out that this statement implies and subsumes $\kappa$-symmetry in these cases.
... or almost all of them. It turns out that some are missing in the "old brane scan". For instance the M2-brane is there (is given by a $\kappa$-symmetric Green-Schwarz action functional) but the M5-brane is missing in the "old brane scan". Physically the reason is of course that the M5-brane is not just a $\sigma$-model, but also carries a higher gauge field on its worldvolume: it has a "tensor multiplet" of fields instead of just its embedding fields.
But it turns out that mathematically this also has a neat explanation that corrects the "old brane scan" of $\kappa$-symmetric Green-Schwarz action functionals in its super-Lie-theoretic/WZW interpretation: namely the M5-brane and all the D-branes etc. do appear as generalized WZW models as soon as one passes from just super Lie algebras to super Lie n-algebras. Using this one can build higher-order WZW models from exceptional cocycles on super-$L_\infty$-algebra extensions of super-spacetime. The classification of these is richer than the "old brane scan", and looks like a "bouquet"; it is a "brane bouquet"... and it contains precisely all the super-$p$-branes of string/M-theory.
The brane bouquet diagram itself appears for instance on p. 5 here. Notice that this picture looks pretty much like the standard "star cartoon" that everyone draws of M-theory. But this brane bouquet is a mathematical theorem in super $L_\infty$-algebra extension theory. Each point of it corresponds to precisely one $\kappa$-symmetric Green-Schwarz action functional generalized to tensor multiplet fields.
Be warned, it's a technically very complex thing with limited physical implications; see e.g. this intro for some background. Surprising that David McMahon chose this topic/formalism in a "demystified" book. The kappa-symmetry is a local fermionic symmetry on the world sheet whose task is to remove the excessive number of spinor components of the Green-Schwarz "covariant" string down to 8 physical transverse fermions (8+8 on left/right). It may be done in some backgrounds; in others, the right known constructions don't start with a manifestly covariant start.
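Schematically (my summary of the standard construction, not a quote from either answer), the transformation acts on the worldsheet spinors as
$$\delta_\kappa\theta=(1+\Gamma)\,\kappa(\sigma),\qquad \Gamma^2=1,\quad \operatorname{tr}\Gamma=0,$$
where $\Gamma$ is built from the pullbacks of the gamma matrices to the worldsheet. Since $(1\pm\Gamma)/2$ are complementary projectors of equal rank, half of the components of $\theta$ are pure gauge; the $S_2$ term mentioned in the question is exactly the Wess–Zumino-type piece required to make the full action invariant under this transformation. Gauge-fixing it (together with the equations of motion) cuts the naive 32 spinor components down to the 8 physical transverse fermions.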
This plugin shows an ad to all players inside your server, motivating them to buy TrackMania United. When a player clicks the ad, it will open, in the player's web browser, the official Nadeo page to buy and download TrackMania United.
You can change the position and the size of the ad if you don't like the defaults. To do this, just change the $x_pos, $y_pos, $x_size and $y_size variables to the values you want.
- The ad is shown to non-United players only. | CommonCrawl |
My question is about the description of general defects (specially loop defects) in the Walker-Wang (WW) model.
Elementary excitations in the WW model can be point particles, loop defects and more general defects (see page 12 of arXiv:1104.2632). It is stated that their description is closely related to boundary conditions of 3-manifolds.
From my understanding, a boundary condition is a collection of ribbon ends on the surface boundary $Y$. One can then define a 1-category $A(Y)$ in which objects are the boundary conditions, morphisms are linear combinations of string-nets in $Y \times [0,1]$, between $Y\times\{0\}$ and $Y\times\{1\}$, and composition is given by 'stacking'. If I am not mistaken, excitations are then associated with the hom spaces of this category.
Does this construction encapsulate all types of higher defects including loop defects? If so, then how can I think of loop defects in this setting?
The boundary of a loop excitation is a torus $T$, so the possible elementary loop excitations correspond to irreducible representations of $A(T)$.
Defects are another sort of boundary condition, and correspond to higher categorical representations. For example, a domain wall type defect between a WW model and the vacuum corresponds to a (higher) representation of the 3-category $A(p)$ associated to a point $p$.
Codimension-2 defects (e.g. loop defects) correspond to representations of the 2-category $A(S)$, where $S$ is a circle. (You should think of $S$ as a small circle which links the loop defect, not the large circle (loop) which describes the location of the loop defect.) For a WW model, the 0-morphisms of $A(S)$ are trivial (only one 0-morphism), the 1-morphisms are ribbon end points in $S\times I$, and the 2-morphisms are linear combinations of string nets in $S\times D$, modulo the usual relations. (Here $D$ is a disk.) Since the 0-morphisms are trivial, this 2-category can be thought of as a tensor category.
In this context, a representation of $A(S)$ is just a module category for $A(S)$.
Every loop defect gives rise to a loop excitation, but not vice-versa.
Recent studies have shown promising performance benefits of pipelined stencil applications. An important factor for the computing efficiency of such pipelines is the granularity of a task. We present GOPipe, the first granularity-oblivious programming framework for efficient pipelined stencil executions. With GOPipe, programmers no longer need to specify the appropriate task granularity. GOPipe automatically finds it, and schedules tasks of that granularity while observing all inter-task and inter-stage data dependencies. In our experiments on four real-life applications, GOPipe outperforms the state-of-the-art by up to $4.57\times$ with much better programming productivity.