text | subset |
---|---|
From the image below, A is an open Bézier curve with its points positioned to represent a circle. AP3 and AP1 have the same position. From B, how can I move the points (AP1 and AP3 along the bigger circle of radius r, while the CPs shrink in length as CP1 and CP4 also rotate) until they form a symmetric Bézier curve at the center of the bigger circle, marked as the red curve?
To keep things simple & consistent, let's stick with circular arcs throughout.
We'll have our first circle with radius \$r_1\$, shown in red below.
Then we'll use a second radius \$r_2\$ to define the new positions of our endpoints as the curve swings open by an angle \$\alpha\$, shown on the green circle below.
Then we'll draw a third circle, through our moved anchor points \$AP_1\$, \$AP_3\$ and the one we left stationary \$AP_2\$, shown in blue below. This gives us a unique arc to follow.
Now we have a center (\$\vec C_3\$), a radius (\$r_3\$), a start angle (0), and an end angle (\$\theta\$), so we can draw a cubic Bézier curve along this circular arc using the technique described in my earlier answer.
For the lower half, just flip the y coordinates along the horizontal.
// Buffer for our final spline's anchor & control points.
// How much of the inner circle should we start with?
// Size & center of our inner circle (red).
// Size & center of our second circle (green).
Vector2 _secondCenter; // Computed from radius & coverage.
// How far should we swing the anchor points out around the green circle?
// Size & center of our third circle (blue).
// to our inner anchor points before we've swung them outward.
// Place the anchor points of the original circle.
// Place the second center so the green circle passes through the endpoints.
// Swing out our anchor points along the green circle.
// Find the center of the blue circle joining these anchors.
// If they're in a vertical line, abort and just draw a straight line.
// Phew, we have a non-infinite circle! Place its center & radius.
// Find the angle of the endpoints around this blue circle.
// Handle reversing concavity correctly.
// Populate our Bezier curve buffers using code from previous answer.
// Convenience method for plotting a point in polar coordinates.
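A minimal Python sketch (ours — not the Unity script these comments belong to) of the geometric core described above: find the circle through the three anchor points and read off its center, radius, and endpoint angles, falling back to a straight line when the anchors are collinear. The example anchor positions are placeholders, not values from the original scene.

```python
import math

def polar(center, radius, angle):
    """Convenience helper: the point at a given angle on a circle (cf. the last comment above)."""
    return (center[0] + radius * math.cos(angle),
            center[1] + radius * math.sin(angle))

def circle_through(a, b, c):
    """Center and radius of the circle through three points,
    or None if they are (nearly) collinear -- the 'just draw a straight line' case."""
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if abs(d) < 1e-9:
        return None
    sa, sb, sc = a[0]**2 + a[1]**2, b[0]**2 + b[1]**2, c[0]**2 + c[1]**2
    ux = (sa * (b[1] - c[1]) + sb * (c[1] - a[1]) + sc * (a[1] - b[1])) / d
    uy = (sa * (c[0] - b[0]) + sb * (a[0] - c[0]) + sc * (b[0] - a[0])) / d
    center = (ux, uy)
    return center, math.dist(center, a)

# Placeholder anchor positions after swinging AP1/AP3 outward:
ap1, ap2, ap3 = (0.0, 0.5), (2.0, 0.0), (0.0, -0.5)

found = circle_through(ap1, ap2, ap3)
if found is None:
    print("anchors are collinear: draw a straight line instead")
else:
    c3, r3 = found
    start = math.atan2(ap3[1] - c3[1], ap3[0] - c3[0])  # angles of the endpoints
    end = math.atan2(ap1[1] - c3[1], ap1[0] - c3[0])    # around the blue circle
    print(c3, r3, start, end)
```

From the center, radius and angle range one can then lay down cubic Bézier segments along the arc as in the answer referenced above.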
| CommonCrawl |
Question - A roulette wheel is divided into 36 sectors. The numbers 1 to 36 are assigned to the sectors in a random order, each number used exactly once. Show that there are three consecutive sectors such that the sum of their assigned numbers is at least 56.
Let $a_1, a_2, \ldots, a_{36}$ be the numbers assigned to the 1st, 2nd, ..., 36th sectors respectively. Assume, for contradiction, that the sum of every three consecutive sectors (taken cyclically) is less than 56, i.e. at most 55. Adding up the 36 cyclic triple sums $a_i+a_{i+1}+a_{i+2}$ counts each sector exactly three times, so the total is $3(1+2+\cdots+36)=3\cdot 666=1998$. But if each triple sum were at most 55, the total would be at most $36\cdot 55=1980<1998$, a contradiction.
Hence there are three consecutive sectors such that the sum of their assigned numbers is at least 56.
See A066385, "Smallest maximum of sum of 3 consecutive terms in any arrangement of [1..n] in a circle",
in the On-Line Encyclopedia of Integer Sequences for the general question and more information. In particular it is listed there that $a(36) = 57$.
Since the average value of the $a_i$ is $18.5$, the average value of the cyclic triple sums is $55.5$; thus there exists at least one triple whose sum is at least $55.5$, and since the sums are integers, at least $56$.
| CommonCrawl |
Why is $\vert \phi \vert ^2$ infinite in QFT?
Suppose for instance that $\phi$ is the real Klein-Gordon field. As I understand it, $a^\dagger(k)|0\rangle=|k\rangle$ represents the state of a particle with momentum $k$. I also learned that $\phi^\dagger(x)$ acts on the vacuum, $\phi^\dagger(x)|0\rangle$, creating a particle at $x$. But it seems that $\phi^\dagger(x)|0\rangle$ and $\phi^\dagger(y)|0\rangle$ are not even orthogonal at equal times, so I don't see how this is possible. So what is it exactly? And what about fields that aren't Klein-Gordon, e.g. the electromagnetic potential?
Edit: As I now understand it, $\phi(x)|0\rangle$ doesn't represent a particle at $x$, but can be interpreted as a particle most likely to be found at $x$ upon measurement and which is unlikely to be found outside of a radius of one Compton wavelength (by analyzing $\langle 0|\phi(y)\phi(x)|0\rangle)$. So taking $c\to\infty\,,$ $\phi(x)|0\rangle$ represents a particle located at $x\,,$ and I suppose generally experiments are carried over distances much longer than the Compton wavelength so for experimental purposes we can regard $\phi(x)|0\rangle$ as a particle located at $x\,.$ Is this the case? If so it's interesting that this doesn't seem to be explained in any QFT books I've seen.
@Phoenix87 I know, I only wrote the adjoint because I am interested in other fields too.
@WeatherReport I'm not sure what the commutation relations have to do with this. But according to everything I've seen, the states aren't orthogonal at equal times.
@JLA I messed up badly, now the comment is deleted.
The quantum mechanical interpretation in terms of probabilities of being at a point in space is intrinsically nonrelativistic. To get this interpretation for a relativistic particle, one needs to perform an additional Foldy-Wouthuysen transformation, which transforms the covariant measure in spacetime to the noncovariant Lebesgue measure in space. This is more or less done as in discussions of the Dirac equation. In the resulting Foldy-Wouthuysen coordinates (corresponding to the Newton-Wigner position operator), the probabilistic position interpretation is valid, and only in this representation. See the entry ''Particle positions and the position operator'' in Chapter B1: The Poincare group of my theoretical physics FAQ.
For the electromagnetic field, point localization is impossible; your question regarding it doesn't make sense because of gauge invariance.
Position is never fixed, always uncertain with an uncertainty of the Compton length. This reconciles the different points of view. Note that probabilities are associated with observations, which always happen in the eigenframe of the observer. What you describe is a simplified version of the Unruh effect, which even says that the notion of particle is frame dependent.
I meant to say (but wasn't allowed to edit it): Covariant position is never fixed, always uncertain with an uncertainty of the Compton length, due to Zitterbewegung.
Feynman diagrams and their conventional interpretation as processes are only for building intuition for the perturbative series. They can in no way be identified with actual processes happening in space-time. That's why one talks about ''virtual'' particles and processes. Virtual means nonreal, unphysical, imagined.
The main point to take home from this discussion is that the particle concept is a semiclassical notion, valid only in an approximation that fails at short distances. For example, the photon concept works intuitively correct for the quantized electromagnetic field in exactly those situations where geometric optics is applicable. | CommonCrawl |
We consider the problem of obtaining an approximate maximum a posteriori estimate of a discrete random field characterized by pairwise potentials that form a truncated convex model. For this problem, we propose two st-mincut based move making algorithms which we call Range Swap and Range Expansion. Our algorithms can be thought of as extensions of $\alpha\beta$-Swap and $\alpha$-Expansion respectively which fully exploit the form of the pairwise potentials. Specifically, instead of dealing with one or two labels at each iteration, our methods explore a large search space by considering a range of labels (i.e. an interval of consecutive labels). Furthermore, we show that Range Expansion provides the same multiplicative bounds as the standard linear programming (LP) relaxation. Compared to previous approaches based on the LP relaxation, e.g. interior-point algorithms or tree-reweighted message passing (TRW), our methods are faster as they use only the efficient st-mincut algorithm in their design. We demonstrate the usefulness of the proposed approaches on both synthetic and standard real data problems. | CommonCrawl |
The factorial of $N$, written as $N!$, is defined as the product of all the integers from $1$ to $N$. For example, $3! = 1 \times 2 \times 3 = 6$.
This number can be very large, so instead of computing the entire product, just compute the last digit of $N!$ (when $N!$ is written in base $10$).
The first line of input contains a positive integer $1 \leq T \leq 10$, the number of test cases. Each of the next $T$ lines contains a single positive integer $N$. $N$ is at most $10$.
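A minimal Python solution sketch (ours) for this task, reading the input format just described; the expected output is specified below. Note that for $N \ge 5$ the factorial ends in $0$, so even much larger $N$ would be easy.

```python
import sys

def last_digit_of_factorial(n: int) -> int:
    digit = 1
    for k in range(2, n + 1):
        digit = (digit * k) % 10   # only the last digit ever matters
    return digit

data = sys.stdin.read().split()
t = int(data[0])
for i in range(1, t + 1):
    print(last_digit_of_factorial(int(data[i])))
```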
For each value of $N$, print the last digit of $N!$. | CommonCrawl |
for events the day of Monday, January 29, 2018.
Abstract: The computation of the pointed endomorphisms of $\mathbb P^1$ gives a first approximation to the zeroth stable homotopy group of the motivic sphere. In this introductory talk, following Cazanave, I'll give some examples of a basic construction which associates a bilinear form to a rational function $\mathbb P^1 \rightarrow \mathbb P^1$. This will hint at a reason for the appearance of Grothendieck-Witt groups in stable motivic homotopy theory.
Abstract: We start our investigation of $L^2$ Betti numbers by analyzing Hilbert modules over von Neumann algebras and their dimension function. | CommonCrawl |
Factoring a polynomial means representing it as a product of simpler polynomials that cannot be factored further.
In this lesson we will sometimes use complex numbers, so it would be great if you are familiar with them, but it is not necessary. If you are not familiar with complex numbers you can just skip those parts or learn about them in the lesson Complex numbers.
In other words, maximum factorization using only real numbers can contain quadratic polynomials as factors.
To understand this better, let's look at the next example.
This is the best we can do if we don't want to use complex numbers. To factor it further we find roots of $x^2+1$, $\pm i$, and factor it as $(x+i)(x-i)$ and then factor $(x-2)(x^2+1)$ to $(x-2)(x+i)(x-i)$.
As we saw in the previous example, there are polynomials, such as $x^2+1$, that cannot be factored further using just real numbers, but can be factored further when using complex numbers.
For factoring polynomials you should be familiar with some basic but very useful formulas, which you can find in the lesson Determining polynomials, basic math operations, the most important rules for multiplying, in the section Multiplication.
So how do we factor polynomials? There is no universal rule here, but there are some tricks that can help you.
The first thing you should do is see if there is something common to all terms of your expression and, if there is, extract it. For example, in the polynomial $x^3+3x^2-x$, $x$ is common to all terms so we can extract it: $x^3+3x^2-x=x(x^2+3x-1)$.
Example 2 Factor polynomial $p(x)=x^2 + 4x$.
Here $x$ is common to both terms, so $p(x)=x(x+4)$.
Example 3 Factor polynomial $p(x)=4x^4 + 4x^3 - 24x^2$.
Extracting $4x^2$ gives $4x^2(x^2+x-6)$, and factoring the quadratic gives $4x^4 + 4x^3 - 24x^2=4x^2(x+3)(x-2)$.
A useful thing to know is how to factor a quadratic polynomial if we know its roots.
If $x_1$ and $x_2$ are roots of the quadratic polynomial $p(x)=x^2+bx+c$, then it can be factored as $(x-x_1)(x-x_2)$.
Example 4 Factor polynomial $p(x)=x^2-x-20$.
The roots of $p$ are $x_1=5$ and $x_2=-4$, so we write $p$ as $p(x)=(x-5)(x+4)$. You can easily check that $(x-5)(x+4)=x^2-x-20$.
Example 5 Factor polynomial $p(x)=2x^2-2x-4$.
First we extract $2$: $2x^2-2x-4=2(x^2-x-2)$. Then we find the roots $x_1=-1$ and $x_2=2$ of the polynomial $x^2-x-2$ and write it as $(x-2)(x+1)$. Now we can write $p$ as $p(x)=2(x-2)(x+1)$. Again, you can easily check that $2(x-2)(x+1)=2x^2-2x-4$.
You can learn more about quadratic polynomials in lesson Quadratic equations.
Example 6 Factor polynomial $p(x)=x^4-x^3+2x^2-4x-8$.
Grouping terms gives $$p(x)=x^2(x^2-x-2)+4(x^2-x-2)=(x^2-x-2)(x^2+4)=(x-2)(x+1)(x-2i)(x+2i),$$
where in the last equality we used the fact that $2$ and $-1$ are roots of $x^2-x-2$ and $\pm 2i$ are roots of $x^2+4$.
Sometimes it can be pretty hard to see how to group the terms of a polynomial to allow extraction, but you can help yourself by using Bézout's theorem: you guess some root $x_1$ of the given polynomial and then divide the polynomial by $(x-x_1)$. That way you will have to deal with a polynomial of lesser degree, and maybe it will be simpler. You can learn about division of polynomials in the lesson Determining polynomials, basic math operations, the most important rules for multiplying, in the section Division.
To guess roots of polynomials more easily, you can use the Integer solutions of a polynomial function theorem, the Rational Root Theorem and the Irrational Root Theorem.
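To make this procedure concrete, here is a small Python sketch (ours, with our own function names — not part of the original lesson) that lists the integer-root candidates, tests them, and divides out a root it finds by synthetic division. Example 7 below carries out the same computation by hand.

```python
def integer_root_candidates(coeffs):
    """Possible integer roots of a polynomial with integer coefficients:
    the divisors of the constant term and their negatives."""
    c = abs(coeffs[-1])                 # coefficients listed from highest degree down
    divisors = [d for d in range(1, c + 1) if c % d == 0]
    return sorted(divisors + [-d for d in divisors])

def evaluate(coeffs, x):
    """Evaluate the polynomial at x using Horner's rule."""
    value = 0
    for a in coeffs:
        value = value * x + a
    return value

def divide_by_root(coeffs, r):
    """Synthetic division by (x - r); assumes r really is a root."""
    quotient = [coeffs[0]]
    for a in coeffs[1:-1]:
        quotient.append(a + r * quotient[-1])
    return quotient

p = [2, 1, -4, 1, -6]                   # p(x) = 2x^4 + x^3 - 4x^2 + x - 6 (Example 7)
for r in integer_root_candidates(p):
    if evaluate(p, r) == 0:
        print(r, "is a root; quotient:", divide_by_root(p, r))
        # prints: -2 is a root; quotient: [2, -3, 2, -3], i.e. 2x^3 - 3x^2 + 2x - 3
```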
Example 7 Factor polynomial $p(x)=2x^4+ x^3- 4 x^2+ x-6$.
By the Integer solutions of a polynomial function theorem, if $p$ has integer roots, they are divisors of $6$, so the potential candidates for roots are $-6,-3,-2,-1,1,2,3$ and $6$.
$$2\cdot 1^4+ 1^3- 4\cdot 1^2+ 1-6=-6,$$ so $1$ is not a root.
$$2\cdot (-1)^4+ (-1)^3- 4\cdot (-1)^2+ (-1)-6=-10,$$ so $-1$ is not a root.
$$2\cdot 2^4+ 2^3- 4\cdot 2^2+ 2-6=20,$$ so $2$ is not a root.
$$2\cdot (-2)^4+ (-2)^3- 4\cdot (-2)^2+ (-2)-6=0,$$ so $-2$ is a root, and dividing $p(x)$ by $(x+2)$ gives the quotient $2x^3-3x^2+2x-3$.
We now look for roots of the quotient. Since $$2 i^3- 3 i^2 +2 i-3=2(-i)-3(-1)+2i-3=0,$$ $i$ is a root, and so is its conjugate $-i$; indeed $2x^3-3x^2+2x-3=(x^2+1)(2x-3)$, so $p(x)=(x+2)(x^2+1)(2x-3)=(x+2)(2x-3)(x-i)(x+i)$.
The polynomials we have factored so far were polynomials in one variable. We can factor polynomials in more variables similarly.
Example 8 Factor polynomial $p(x,y)=x^2 y + xy^2 + 4xy$.
Every term contains $xy$, so we can extract it: $p(x,y)=xy(x+y+4)$.
where we used the formula for the difference of squares in the third equality to write $x^2-y^2$ as $(x+y)(x-y)$.
As we mentioned before, there is no general algorithm for factoring polynomials. Every polynomial is a new problem, and often there is more than one way to reach the final factorization. We have shown you some tricks that can help, but the best way to learn how to factor is practice. With practice you will get some intuition about how to group terms and which trick can help with a particular polynomial. | CommonCrawl |
We begin by talking about probability measure preserving actions of discrete groups, and introduce the notion of the Group Measure Space construction, or the cross product von Neumann algebra. We will then discuss about free and ergodic actions and the measurable functions fixed by these. We will conclude by presenting and proving the key theorem of this talk: The free action on an $L^\infty $ space is ergodic if and only if the corresponding cross product von Neumann algebra is a factor. | CommonCrawl |
Common Core: 4.OA.1, 4.OA.3, 4.OA.5, 4.NBT.5, 5.OA.2, 5.OA.3, 6.RP.1, 6.EE.1, MP1, MP2, MP3, MP6, MP7, MP8.
This is a great puzzle for kids who have some comfort with multiplication but still need to deepen their understanding. It gets them looking at the multiplication table with purpose, and they'll find some serious structure there.
What is the sum of all the numbers on a multiplication table?
Simple to pose, this problem leads to numerous discoveries about the structure of multiplication, and finally, a truly surprising and powerful solution that allows us to quickly find a sum of the numbers on a multiplication table of any size.
Students should start small. A four by four multiplication table turns out to be very suggestive, and smaller ones are good too. Fortunately, it's easy to write out a multiplication table that large.
Another good organizational tactic is to try adding up the rows (or columns) first. Teachers can suggest this to kids who get stuck or overwhelmed.
Once students have solved the smaller sizes, expand to larger tables—10 by 10, 12 by 12, or bigger. Students who are ready can try to write an algebraic formula for the sum of all the numbers.
Compare this problem to the extensions below.
Here are some very closely related problems, though the nature of the relationship is not at all obvious at first.
How many rectangles are on a 4 by 4 grid?
What is the sum of the consecutive cube numbers $1^3 + 2^3 + 3^3 + 4^3 + \ldots + n^3$?
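For readers who want to check the connection numerically, here is a small Python sketch (ours, not part of the original post) that sums an n-by-n multiplication table and compares it with the square of the nth triangular number and with the sum of the first n cubes — all three agree.

```python
def table_sum(n):
    """Sum of every entry i*j in an n-by-n multiplication table."""
    return sum(i * j for i in range(1, n + 1) for j in range(1, n + 1))

def triangular(n):
    return n * (n + 1) // 2

for n in (4, 10, 12):
    cubes = sum(k ** 3 for k in range(1, n + 1))
    print(n, table_sum(n), triangular(n) ** 2, cubes)
    # e.g. n = 10 gives 3025 in all three columns: the table sum is (1+2+...+n)^2 = 1^3+...+n^3.
```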
Here's a PDF of the Dot Array Multiplication Table 5 by 5 for easy printing.
This image is beautiful. Is there a way I could print it out? My special education students would love it.
Thank you! I just added the PDF file at the bottom of the blog post for easy downloading. Enjoy!
I love this! Anyone seen a 10X10 of this dot array multiplication table?
I started to make one once, but the dots were too big, and I couldn't fit them all on a page without it feeling scrunched.
Interesting how my strategies were completely different using the two separate tables. With the dot visuals I quickly realized I could just slide them over to create one giant array. | CommonCrawl |
added a new Examples-section Integral versus real cohomology to fiber sequence
(and renamed the original fibration sequence and made it a redirect to that – but the cache bug is in the way, as usual).
added a detailed proof of how homotopy fibers of oo-functor categories are computed in terms of over-categories.
if you think about it, the proof I just mentioned is the full-fledged version of the proof that we sketched previously in the section In terms of sections at twisted cohomology. I have added some more sentences to this effect now.
The last reference in the fiber sequence entry has now been published. I have updated the entry. The previous title at this point was wrong, by the way: it did not correspond to the paper to which the link referred. I hope this is now correct.
An Anonymous has made changes to fiber sequence in the section on Mayer-Vietoris. I do not understand their point. They did change something to $A\times C$, but from the pullback this should be $A\times B$ and … in other words the entry seems to be a mess at this point (glancing at it). The diagram is wrong as well. There are several ways to fix it, but I am not sure which is intended.
Thanks for the alert. I have fixed it now (here), both the original diagram as well as the fiber sequence afterwards, exchanging Bs for Cs.
I suppose somebody saw the typo, kindly went to fix it, but then didn't exactly.
That is what I suspected. | CommonCrawl |
Frequently it is useful to deal with countable transitive models M of ZFC, for example in forcing constructions.
The notion of being an ordinal is absolute for any transitive model, so certainly if ($\alpha$ is an ordinal)$^M$ then also $\alpha$ is an ordinal. For the same reason, M will contain successors of every ordinal in it.
On the other hand, if M is countable then M cannot contain every countable ordinal; there must be a least (countable) ordinal not in M.
Can anything be said about this ordinal? Does it have any special significance?
If M is a transitive model of ZFC, then so is $L^M$, the constructible universe as constructed inside M, and these two models have the same height. Thus, one could equivalently consider only models of ZFC + V = L.
It is relatively consistent with ZFC that S is empty, that is, that there are no transitive models of ZFC. For example, the least element of S is the least $\alpha$ such that $L_\alpha$ is a model of ZFC. This is sometimes called the minimal model of ZFC, though of course it refers to the minimal transitive model. It is contained as a subclass of all other transitive models of ZFC. The minimal model has no transitive models inside it, and so it believes S to be empty.
The least element of S (and many subsequent elements) is $\Delta^1_2$ definable in V. This is because one can say: a real codes that ordinal iff it codes a well-ordered relation and there is a model of that order type satisfying that no smaller ordinal is in S (a $\Sigma^1_2$ property), also iff every well-founded model of ZFC has ordinal height at least the order type of the ordinal coded by the real (a $\Pi^1_2$ property).
If S has any uncountable elements, then it is unbounded in $\omega_1$. The reason is that if $L_\beta$ satisfies ZFC and $\beta$ is uncountable, then we may form increasingly large countable elementary substructures of $L_\beta$, whose Mostowski collapses will give rise to increasingly large countable ordinals in S.
In particular, if there are any large cardinals, such as an inaccessible cardinal, then S will have many countable members.
If $0^\sharp$ exists, then every cardinal is a member of S. This is because when $0^\sharp$ exists, every cardinal $\kappa$ is an L-indiscernible, and so $L_\kappa$ is a model of ZFC. Thus, under $0^\sharp$, the class S contains a proper class club, and contains a club in every cardinal.
S is not closed. For example, the supremum of the first $\omega$ many elements of S cannot be a member of S. The reason is that if $\alpha_n$ is the $n$th element of S, and $\lambda = \sup_n \alpha_n$, then there would be a definable cofinal $\omega$-sequence in $L_\lambda$, contrary to the Replacement axiom.
S contains members of every infinite cardinality less than its supremum. If $\beta$ is in S, then we may form elementary substructures of $L_\beta$ of any smaller cardinality, and the Mostowski collapses of these structures will give rise to smaller ordinals in S.
If $\beta$ is any particular element of S, then we may chop off the universe at $\beta$ and consider the model $L_\beta$. Below $\beta$, the model $L_\beta$ calculates S the same as we do. Thus, if $\beta$ is a limit point of S, then $L_\beta$ will believe that S is a proper class. If $\beta$ is a successor element of S, then $L_\beta$ will believe that S is bounded. Indeed, if $\beta$ is the $\alpha$th element of S, then in any case, $L_\beta$ believes that there are $\alpha$ many elements of S.
If S is bounded, then we may go to a forcing extension $V[G]$ which collapses cardinals, so that the supremum of S is now a countable ordinal. The forcing does not affect whether any $L_\alpha$ satisfies ZFC, and thus does not affect S.
Reading your question again, I see that perhaps you meant to consider a fixed M, rather than letting M vary over all transitive models. In this case, you will want to look at fine-structural properties of this particular ordinal. Of course, it exhibits many closure properties, since any construction from below that can be carried out in ZFC can be carried out inside M, and therefore will not reach up to ht(M).
| CommonCrawl |
Task 1: Find the maximum value of $n$ for which it is possible to create a regular, uniquely-solvable Killer Sudoku (9x9) such that every cage has at least $n$ cells in it.
Each row, column, and nonet contains each number exactly once.
The sum of all numbers in a cage must match the small number printed in its corner.
No number appears more than once in a cage.
Task 2: What if rule number 3 was ignored?
The biggest $n$ we could hope to find is $81$ with a single cage which is obviously not going to work.
The next biggest would be $40$ with two cages of $41$ and $40$ cells.
We need to get as close to this as possible while enforcing uniqueness, which means we need to first force some cell to be some value and then have that force another and so on cascading through the entire sudoku.
Make it so one cage has rows and columns containing $(1),(1,2),\dots,(1,2,3,4,5,6,7,8,9)$ overlapping on the highest numbers and the other has the rest (overlapping on the smallest numbers). The two cages will now be the cages with the smallest and largest possible sums for their sizes too.
Two separate ones and two separate nines will fall out first, then two pairs of "a one and a two" and two pairs of "a nine and an eight", and so on until the whole sudoku is filled in the only way possible.
This same process then cascades through to completion.
To get things going: a simple observation giving us an upper bound $n \leq 8$ for task 1.
Task 1 for $n=9$ (creating a uniquely solvable $9 \times 9$ killer Sudoku with cages of size $n=9$) cannot be accomplished. This follows from the fact that if a solution existed such that each row, each column, each nonet and each 9-cage contains the numbers $1, \ldots, 9$, one could swap any two numbers across the grid (e.g. swap all $1$'s and $2$'s) and thereby obtain another solution.
| CommonCrawl |
We define Landau-Lifshitz sigma models on general coset space $G/H$, with $H$ a maximal stability sub-group of $G$. These are non-relativistic models that have $G$-valued N\"other charges, local $H$ invariance and are classically integrable. Using this definition, we construct the $PSU(2,2|4)/PS(U(2|2)^2)$ Landau-Lifshitz sigma-model. This sigma model describes the thermodynamic limit of the spin-chain Hamiltonian obtained from the complete one-loop dilatation operator of the N=4 super Yang-Mills (SYM) theory. In the second part of the paper, we identify a number of consistent truncations of the Type IIB Green-Schwarz action on $AdS_5\times S^5$ whose field content consists of two real bosons and 4,8 or 16 real fermions. We show that $\kappa$-symmetry acts trivially in these sub-sectors. In the context of the large spin limit of the AdS/CFT correspondence, we map the Lagrangians of these sub-sectors to corresponding truncations of the $PSU(2,2|4)/PS(U(2|2)^2)$ Landau-Lifshitz sigma-model. | CommonCrawl |
Deduplication systems for traditional backups have optimized for large sequential writes and reads. Over time, new applications have resulted in nonsequential accesses, patterns reminiscent of primary storage systems. The Data Domain File System (DDFS) needs to evolve to support these modern workloads by providing high performance for nonsequential accesses without degrading performance for traditional backup workloads.
Based on our experience with thousands of deployed systems, we have updated our storage software to distinguish user workloads and apply optimizations including leveraging solid-state disk (SSD) caches. Since SSDs are still significantly more expensive than magnetic disks, we make our system cost-effective by caching metadata and file data rather than moving everything to SSD. We dynamically detect access patterns to decide when to cache, prefetch, and perform numerous other optimizations. We find that on a workload with nonsequential accesses, with SSDs for caching metadata alone, we measured a 5.7$\times$ improvement on input/output operations per second (IOPS) when compared to a baseline without SSDs. Combining metadata and data caching in SSDs, we measured a further 1.7$\times$ IOPS increase. Adding software optimizations throughout our system added an additional 2.7$\times$ IOPS improvement for nonsequential workloads. Overall, we find that both hardware and software changes are necessary to support the new mix of sequential and nonsequential workloads at acceptable cost. Our updated system is sold to customers worldwide. | CommonCrawl |
I've been reading a lot lately about the differences between Fisher's method of hypothesis testing and the Neyman-Pearson school of thought.
My question is, ignoring philosophical objections for a moment: when should we use Fisher's approach to statistical modelling and when should we use the Neyman-Pearson method of significance levels et cetera? Is there a practical way of deciding which viewpoint to endorse in any given practical problem?
Let me start by defining the terms of the discussion as I see them. A p-value is the probability of getting a sample statistic (say, a sample mean) as far as, or further from some reference value than your sample statistic, if the reference value were the true population parameter. For example, a p-value answers the question: what is the probability of getting a sample mean IQ more than $|\bar x-100|$ points away from 100, if 100 is really the mean of the population from which your sample was drawn. Now the issue is, how should that number be employed in making a statistical inference?
Fisher thought that the p-value could be interpreted as a continuous measure of evidence against the null hypothesis. There is no particular fixed value at which the results become 'significant'. The way I usually try to get this across to people is to point out that, for all intents and purposes, p=.049 and p=.051 constitute an identical amount of evidence against the null hypothesis (cf. @Henrik's answer here).
On the other hand, Neyman & Pearson thought you could use the p-value as part of a formalized decision making process. At the end of your investigation, you have to either reject the null hypothesis, or fail to reject the null hypothesis. In addition, the null hypothesis could be either true or not true. Thus, there are four theoretical possibilities (although in any given situation, there are just two): you could make a correct decision (fail to reject a true--or reject a false--null hypothesis), or you could make a type I or type II error (by rejecting a true null, or failing to reject a false null hypothesis, respectively). (Note that the p-value is not the same thing as the type I error rate, which I discuss here.) The p-value allows the process of deciding whether or not to reject the null hypothesis to be formalized. Within the Neyman-Pearson framework, the process would work like this: there is a null hypothesis that people will believe by default in the absence of sufficient evidence to the contrary, and an alternative hypothesis that you believe may be true instead. There are some long-run error rates that you will be willing to live with (note that there is no reason these have to be 5% and 20%). Given these things, you design your study to differentiate between those two hypotheses while maintaining, at most, those error rates, by conducting a power analysis and conducting your study accordingly. (Typically, this means having sufficient data.) After your study is completed, you compare your p-value to $\alpha$ and reject the null hypothesis if $p<\alpha$; if it's not, you fail to reject the null hypothesis. Either way, your study is complete and you have made your decision.
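As a toy illustration of that recipe (our own sketch with made-up design values — α = 0.05, a one-sample t-test, a half-standard-deviation effect, n = 50), one can simulate the long-run error rates of the "compare p to α and decide" procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, effect, sims = 0.05, 50, 0.5, 10_000   # assumed, for illustration only

def p_value(sample):
    # Two-sided one-sample t-test of the null hypothesis mean = 0.
    return stats.ttest_1samp(sample, 0.0).pvalue

# Long-run Type I error rate: data generated under the null (true mean 0).
type1 = np.mean([p_value(rng.normal(0.0, 1.0, n)) < alpha for _ in range(sims)])

# Long-run Type II error rate: data generated under the alternative (true mean = effect).
type2 = np.mean([p_value(rng.normal(effect, 1.0, n)) >= alpha for _ in range(sims)])

print(f"Type I rate ~ {type1:.3f} (should be close to alpha = {alpha})")
print(f"Type II rate ~ {type2:.3f}, i.e. power ~ {1 - type2:.3f}")
```

Increasing n in the sketch is exactly the "design your study accordingly" step: it drives the Type II rate down while the Type I rate stays pinned near α.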
The Fisherian and Neyman-Pearson approaches are not the same. The central contention of the Neyman-Pearson framework is that at the end of your study, you have to make a decision and walk away. Allegedly, a researcher once approached Fisher with 'non-significant' results, asking him what he should do, and Fisher said, 'go get more data'.
any other tests to check assumptions, etc.
Fisher's significance testing can be interpreted as a way of deciding whether or not the data suggests any interesting `signal'. We either reject the null hypothesis (which may be a Type I error) or don't say anything at all. For example, in lots of modern 'omics' applications, this interpretation fits; we don't want to make too many Type I errors, we do want to pull out the most exciting signals, though we may miss some.
Neyman-Pearson hypothesis testing makes sense when there are two disjoint alternatives (e.g. the Higgs boson does or does not exist) between which we decide. As well as the risk of a Type I error, here we can also make a Type II error - when there's a real signal but we say it's not there, making a 'null' decision. N-P's argument was that, without making too many Type I errors, we want to minimize the risk of Type II errors.
Often, neither system will seem perfect - for example you may just want a point estimate and corresponding measure of uncertainty. Also, it may not matter which version you use, because you report the p-value and leave test interpretation to the reader. But to choose between the approaches above, identify whether (or not) Type II errors are relevant to your application.
The whole point is that you cannot ignore the philosophical differences. A mathematical procedure in statistics doesn't just stand alone as something you apply without some underlying hypotheses, assumptions, theory... philosophy.
That said, if you insist on sticking with frequentist philosophies there might be a few very specific kinds of problems where Neyman-Pearson really needs to be considered. They'd all fall in the class of repeated testing like quality control or fMRI. Setting a specific alpha beforehand and considering the whole Type I, Type II, and power framework becomes more important in that setting.
My understanding is: the p-value is to tell us what to believe (verifying a theory with sufficient data) while the Neyman-Pearson approach is to tell us what to do (making the best possible decisions even with limited data). So it looks to me that a (small) p-value is more stringent while the Neyman-Pearson approach is more pragmatic; that's probably why the p-value is used more in answering scientific questions while the Neyman-Pearson approach is used more in making statistical/practical decisions.
| CommonCrawl |
Why does adding PKCS#1 v1.5 padding make RSA encryption non-deterministic?
I'm quite a beginner to cryptography, but have been implementing some encryption according to a specification over the last few weeks using the PyCrypto library.
I've discovered that when encrypting using RSA public keys alone, encryption appears to be deterministic (meaning encrypting the same message multiple times results in the same cipher text). However, once I add PKCS#1 v1.5 padding, it becomes non-deterministic (the cipher text differs on subsequent encryptions).
My question is: why are there differences in determinism in these two cases? I understand why non-determinism in encryption is useful (to prevent an interceptor noticing two messages are the same), but why does RSA encryption alone not offer this, and how does padding add this? Is this one of the purposes of padding?
When you use textbook RSA, the public key is $(e,N)$ and the ciphertext of a message $m$ is $c = m^e\bmod N$. The encryption process of textbook RSA involves no randomness; this causes the problem.
It is easy to see that when $m_1=m_2$, their ciphertexts coincide: $m_1^e = m_2^e \bmod N$. Hence you can see where the determinism comes from.
When using a padding scheme like PKCS#1 v1.5, one important aspect is that the padding is random, and thus differs each time you encrypt.
In PKCS#1 v1.5 for RSA, $m$ is padded to $x=0x00||0x02||r||0x00||m$ and the ciphertext is $c=x^e\bmod N$ instead of $m^e\bmod N$. Here $r$ is a long enough random string. Therefore, even when $m_1=m_2$, the ciphertexts are produced from $x_1\ne x_2$ and will thus look totally different.
Let $N$ be $k$ bytes long, then $m$ must be $\leq$ $k-11$ bytes long.
The padded string ($x$) must be $k$ bytes long.
Thus $r$ is $k-3-|m|$ bytes long ($|m|$ is how many bytes the plaintext $m$ is), which is at least 8 bytes. However, this may still not be enough: I remember that Katz & Lindell's book mentions that $r$ needs to be roughly half the length of $N$ in order for us to consider RSA CPA-secure.
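To see this concretely, here is a toy Python sketch (ours — a demonstration of the padding format only, not a secure implementation) that pads the same message twice as $0x00||0x02||r||0x00||m$ with a random nonzero $r$. The two padded blocks differ, so the integers fed into $x^e \bmod N$ differ, and hence so do the ciphertexts, even though the message is identical.

```python
import secrets

def pkcs1_v15_pad(message: bytes, k: int) -> bytes:
    """Build 0x00 || 0x02 || r || 0x00 || message, where r has no zero bytes
    and is at least 8 bytes long; k is the modulus length in bytes."""
    r_len = k - 3 - len(message)
    if r_len < 8:
        raise ValueError("message too long for this modulus size")
    r = bytes(secrets.choice(range(1, 256)) for _ in range(r_len))
    return b"\x00\x02" + r + b"\x00" + message

k = 128                       # e.g. a 1024-bit modulus
msg = b"attack at dawn"
x1 = pkcs1_v15_pad(msg, k)
x2 = pkcs1_v15_pad(msg, k)

print(x1 != x2)                           # True (with overwhelming probability)
print(x1[-len(msg):] == x2[-len(msg):])   # True: the embedded message is the same
```

Drop the random $r$ (or encrypt $m^e \bmod N$ directly) and the message-to-ciphertext map becomes a fixed function — exactly the determinism observed in the question.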
It is nice that you have tested it firsthand.
So, an attacker can modify the plaintext and create valid plaintexts.
In order to prevent these, a padding scheme is necessary: insert randomness to prevent the first issue, and format the data so that modification of the ciphertext can be detected.
| CommonCrawl |
1 . In question, select the related word/letters/number from the given alternatives.
AEJ : 61 : : JKL : ?
In an imaginary operation of mathematics, '+' means multiply, '$\times$' means subtract, '$\div$' means add and '-' means divide. In this system of mathematics all other rules are the same as in the present system. Which of these is the answer to the following?
Which day of the week falls on 16 March, 1989?
At what time between 3 and 4 o'clock will the angle between the hands be one third of a right angle?
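A worked solution for the clock question (ours, not part of the original answer key): at $t$ minutes past 3 the minute hand is at $6t$ degrees and the hour hand at $90 + 0.5t$ degrees, so the angle between them is $|90 - 5.5t|$. One third of a right angle is $30^\circ$, so $90 - 5.5t = \pm 30$, giving $t = \frac{60}{5.5} = 10\frac{10}{11}$ or $t = \frac{120}{5.5} = 21\frac{9}{11}$. The hands make that angle at about $3\!:\!10\frac{10}{11}$ and again at $3\!:\!21\frac{9}{11}$.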
5 . In question, select the related word/letters/number from the given alternatives.
Wine : Cellar : : Weapons : ?
Wine is stored in a cellar. Similarly, weapons are stored in an arsenal.
II. No cat is a dog.
If a mirror is placed on the line AB, then which of the answer figures is the correct mirror image of the given figure?
Which of the following answer figures will be the correct water image of the question figure?
In the following question, the given figure is drawn on a transparent sheet and folded along the dotted line. How will it then appear? | CommonCrawl |
The Riemann mapping theorem (cf e.g. http://en.wikipedia.org/wiki/Riemann_mapping_theorem) essentially guarantees the existence of a biholomorphic mapping of a simply connected, open subset of the complex plane onto the unit disk.
Are there any results known about length-preserving mappings from simply connected, open subsets of the complex plane to non-congruent target-subsets of the complex plane?
What are the necessary and sufficient conditions, under which such mappings exist?
Are any analogues of the Schwarz-Christoffel mapping (cf e.g. http://en.wikipedia.org/wiki/Schwarz%E2%80%93Christoffel_mapping) known for such length-preserving mappings?
are there non-isometric mappings between two simply connected regions with equal perimeter of finite length, which preserve the length of a finite collection of coordinate lines?
A different way of stating the relaxed problem would be that the source-region is partitioned into a finite collection of simply connected sub-regions, whose interiors do not intersect and the question is, whether non-isometric mappings exist, for which the length of the boundary of every sub-region's image equals the length of the boundary of the corresponding original region.
Any length preserving map from one plane domain to another must also be conformal, because the angles of a small triangle can be found if you know the sides. This observation solves all your questions, because if a conformal map is also length preserving, then the derivative must have constant absolute value one; thus it is of the form $z\mapsto \lambda z+c$ where $|\lambda|=1$, so it is a "roto-translation".
| CommonCrawl |
The usually quoted value of the fine structure constant is 1/137.0359... This value holds at low energies. As is well known, this value increases somewhat with energy: it is about 1/128 at (M_Z)^2. Now, IF we imagine that there is no GUT, no physics beyond the standard model, no supersymmetry, and that QED is correct all the time (yes, that is a big IF), what would the value be at Planck energy (10^19 GeV)?
Yes, increases is correct - I edited the text!
@RyanThorngren: I am afraid Mayhem means SM rather than QED here.
The fine structure constant at Planck energy is around 1/105.
any detailed calculations to back up this estimate, or is it just another magic number?
1/105 or 1/100 is still a small number, which says nothing. Do you, Mayhem, know that $\alpha$ is always multiplied by some dimensionless function of the physical problem variables in question and only this product serves as a "small parameter"?
There are no predictions for the fine structure constant that I know of. I remember seeing an image for Grand Unification once where there was the SM on one side and the MSSM on the other. I've done some work on this topic myself and the value is around 1/100. There's no way to test if it's true; however, my equation is in full agreement with experimental evidence where available, both for the EM coupling (fine structure constant) and the strong coupling constant. | CommonCrawl |
Abstract: The most general chiral Lagrangian for electroweak interactions with the complete set of $SU(2)_L\times U(1)_Y$ invariant operators up to dimension four is considered. The two-point and three-point functions with external gauge fields are derived from this effective chiral Lagrangian to one-loop order in a generic $R_\xi$-gauge. The same set of Green's functions is studied in parallel in the renormalizable standard model to one-loop order, in a $R_\xi$-gauge and in the large Higgs mass limit. An appropriate set of matching conditions connecting the Green's functions of the two theories allows us to derive, systematically, the values of the chiral Lagrangian coefficients corresponding to the large Higgs mass limit of the standard model. These chiral parameters represent the non-decoupling effects of a heavy Higgs particle and incorporate both the leading logarithmic dependence on $M_H$ and the next-to-leading constant contributions. Some phenomenological implications are also discussed. | CommonCrawl |
Abstract: T-duality is used to extract information on an instanton of zero size in the $E_8\times E_8$ heterotic string. We discuss the possibility of the appearance of a tensionless anti-self-dual non-critical string through an implementation of the mechanism suggested by Strominger of two coincident 5-branes. It is argued that when an instanton shrinks to zero size a tensionless non-critical string appears at the core of the instanton. It is further conjectured that appearance of tensionless strings in the spectrum leads to new phase transitions in six dimensions in much the same way as massless particles do in four dimensions. | CommonCrawl |
| CommonCrawl |
How to draw a lattice for the divisors of big numbers?
An exercise asks to find the atoms and join-irreducible elements of the lattice of divisors of 360. I know how to find them by drawing the lattice, but that seems difficult in this case.
Is there another way to find the atoms? If not, is there an easy way to draw such a lattice?
Let $d=p^k$ be a divisor of $360$. Then $d=a\vee b$ implies $a=p^k$ or $b=p^k$, so $d$ is join-irreducible. Conversely, let $d$ be a join-irreducible divisor of $360$ and write $d=a\times b$ with $\gcd(a,b)=1$. Then $d=a\vee b$ and hence $d=a$ or $d=b$, meaning that $b\mid a$ or $a\mid b$, respectively. This holds for every factorization $d=a\times b$ and hence $d$ is a prime power.
The atoms are the divisors with no nontrivial proper divisors, i.e. the prime divisors of $360$: $2$, $3$ and $5$.
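A small Python sketch (ours) that finds both sets without drawing the Hasse diagram, using lcm as the join of the divisor lattice:

```python
from math import gcd

N = 360
divisors = [d for d in range(1, N + 1) if N % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

# Atoms: divisors d > 1 with no divisor strictly between 1 and d (the primes dividing N).
atoms = [d for d in divisors
         if d > 1 and not any(1 < e < d and d % e == 0 for e in divisors)]

# Join-irreducible: d > 1 that is not the lcm (join) of two strictly smaller divisors.
join_irreducible = [d for d in divisors
                    if d > 1 and not any(lcm(a, b) == d
                                         for a in divisors for b in divisors
                                         if a < d and b < d)]

print("atoms:", atoms)                        # [2, 3, 5]
print("join-irreducible:", join_irreducible)  # [2, 3, 4, 5, 8, 9] -- the prime powers dividing 360
```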
| CommonCrawl |
For questions concerning random matrices.
Let $U$ be a random $n \times n$ unitary matrix (w.r.t. the Haar measure) and let $M$ be a $k \times l$ submatrix. What is the distribution of the singular values of $M$?
What does this physics paper mean by having a matrix in a denominator?
Are these random matrix processes equivalent?
Total Variation Distance Between a Distribution and a perturbed distribution?
How to do change of variables of a j.p.d.f with N pdf(s)?
What is the product of two Haar distributed unitary matrices?
I guess a product of two Haar distributed unitary matrices is also a Haar distributed unitary matrix. Is there a proof?
What is the expectation of the rank of a matrix with a 1 at each column?
Do imaginary inverses of non-invertible matrices exist?
There isn't a real solution to $x^2 = -1$, but a complex solution $x = i$ exists. Similarly, does there exist a complex inverse of non-invertible matrices?
Bounding sub-Gaussian tail events by Gaussian tail events?
I have a naive question because it's mentioned in every random matrix paper and is not explained. What does it mean to say a random matrix has localized eigenvalues? And what are some examples of it?
Is there a distribution for random matrices which are constrained to have "unit vector" columns?
is random gaussian matrix invertible?
Is Gaussian Random Matrix invertible? I mean can we invert a Random Gaussian Square Matrix and also what is nature of its determinant, I mean to say whether determinant is zero or non zero?
Are the eigenvectors of real Wigner matrices made of independent random variables with zero-mean?
Why are independence and mean-zero necessary for the symmetrization lemma to hold?
How can the Wigner semicircle distribution go to zero? | CommonCrawl |
The generation and evolution of microjets are studied both experimentally and numerically. The jets are generated by focusing a laser pulse into a microscopic capillary tube ($\sim$50 $\mu$m) filled with water-based red dye. A vapor bubble is created instantly after shooting the laser ($<$1 $\mu$s), sending out a shockwave towards the curved free surface at which the high-speed microjet forms. The process of jet formation is captured using high-speed recordings at 1.0 $\times$ $10^6$ fps. The velocity of the microjets can reach speeds of $\sim$850 m/s while maintaining a very sharp geometry. The high-speed recordings enable us to study the effect of several parameters on the jet velocity, e.g. the absorbed energy and the distance between the laser spot and the free surface.The results show a clear dependence on these variables, even for supersonic speeds. Comparisons with numerical simulations confirm the nature of these dependencies. | CommonCrawl |
The game of Nimble is played as follows. You have a game board consisting of a line of squares labelled by the nonnegative integers. A finite number of coins are placed on the squares, with possibly more than one coin on a square. A move consists of picking up one of the coins and placing it on a square somewhere to the left of its original position. Players alternate moves. The game ends when no moves are possible, i. e. when all the coins are on the square labelled 0. The last player to move wins.
As described, this is just Nim in disguise, where a coin in position $n$ corresponds to a pile containing $n$ coins. The winning strategy is well-known. Define the nim-sum of some positive integers as the result when they're written in binary and addition is performed without carrying. Then the only winning positions are those where the nim-sum of all the pile sizes (or coin positions) is zero.
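A small Python sketch (ours) of that strategy for the standard game just described (stacking allowed): the nim-sum is the bitwise XOR of the coin positions, and when it is nonzero some coin can be slid left to make it zero. The no-stacking variant asked about below is exactly what this does not handle.

```python
from functools import reduce
from operator import xor

def nim_sum(positions):
    return reduce(xor, positions, 0)

def winning_move(positions):
    """Return (coin index, new square) that makes the nim-sum zero,
    or None if the position already has nim-sum zero."""
    s = nim_sum(positions)
    if s == 0:
        return None
    for i, p in enumerate(positions):
        target = p ^ s
        if target < p:          # the coin must move to the LEFT
            return i, target
    return None                 # not reached when s != 0

print(winning_move([3, 5, 7]))  # (0, 2): move the coin on square 3 to square 2; the nim-sum becomes 0
```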
But what is the winning strategy if we don't allow coins to be stacked? This corresponds to Nim where we don't allow two piles of the same size, but this restriction feels a lot more natural in Nimble than in Nim.
There are two reasonable terminal positions. One is to allow stacking on the square marked zero only, and say the game terminates when all the coins are at zero; the other is to not allow stacking and say the game terminates when coins are on $0, 1, \ldots, k-1$ if there are $k$ coins. Either one would be of interest.
If someone gives me an answer by Friday, I might talk about it in my class.
| CommonCrawl |
We consider a random walk with finite second moment which drifts to $-\infty$ and has a heavy tail. We focus on the events when the minimum and the final value of this walk belong to some compact set. We first specify the associated probability. Then, conditionally on such an event, we finely describe the trajectory of the random walk. It yields a decomposition theorem with respect to a random time giving a big jump whose distribution can be described explicitly. | CommonCrawl |
Recall from the Linear Lagrange Interpolating Polynomials page that given two points, $(x_0, y_0)$ and $(x_1, y_1)$ where $x_0$ and $x_1$ are distinct, we can construct a line $P_1$ that passes through these points. We also saw that we could approximate a function with this line that also passes through these two points.
Note that $P_2$ does in fact pass through all the points specified above since $P_2(x_0) = y_0$, $P_2(x_1) = y_1$, and $P_2(x_2) = y_2$. A formal definition of the polynomial above is given below.
Definition: The Quadratic Lagrange Interpolating Polynomial through the points $(x_0, y_0)$, $(x_1, y_1)$, and $(x_2, y_2)$ where $x_0$, $x_1$, and $x_2$ are distinct is the polynomial $P_2(x) = y_0L_0(x) + y_1L_1(x) + y_2L_2(x)$, where $L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}$, $L_1(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}$, and $L_2(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}$.
It is important to note that while we define $P_2$ to be the "quadratic" Lagrange interpolating polynomial, it is possible that $P_2$ may have degree less than $2$.
Let's now look at some examples of constructing a quadratic Lagrange interpolating polynomials.
Example 1: Construct the quadratic Lagrange interpolating polynomial $P_2(x)$ that interpolates the points $(1, 4)$, $(2, 1)$, and $(5, 6)$.
Example 2: Construct the quadratic Lagrange interpolating polynomial $P_2(x)$ that interpolates the points $(1, 2)$, $(3, 4)$, and $(5, 6)$.
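A short Python sketch (ours) that computes both examples exactly with rational arithmetic by expanding $P_2(x)=y_0L_0(x)+y_1L_1(x)+y_2L_2(x)$; Example 2 collapses to the line $x+1$, which is the point of the note below.

```python
from fractions import Fraction as F

def lagrange_quadratic(points):
    """Return coefficients (a, b, c) with P2(x) = a*x^2 + b*x + c through three points."""
    (x0, y0), (x1, y1), (x2, y2) = [(F(x), F(y)) for x, y in points]
    a = b = c = F(0)
    for (xi, yi), xj, xk in [((x0, y0), x1, x2),
                             ((x1, y1), x0, x2),
                             ((x2, y2), x0, x1)]:
        d = (xi - xj) * (xi - xk)
        # y_i * (x - xj)(x - xk) / d expands to the three terms below:
        a += yi / d
        b += -yi * (xj + xk) / d
        c += yi * xj * xk / d
    return a, b, c

print(lagrange_quadratic([(1, 4), (2, 1), (5, 6)]))  # Example 1: (7/6, -13/2, 28/3)
print(lagrange_quadratic([(1, 2), (3, 4), (5, 6)]))  # Example 2: (0, 1, 1), i.e. P2(x) = x + 1
```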
Note that example 2 shows that $P_2$ need not be quadratic and may be a polynomial of lesser degree. | CommonCrawl |
In a first part, we prove Bernstein-type deviation inequalities for bifurcating Markov chains (BMC) under a geometric ergodicity assumption, completing former results of Guyon and Bitseki Penda, Djellout and Guillin. These preliminary results are the key ingredient to implement nonparametric wavelet thresholding estimation procedures: in a second part, we construct nonparametric estimators of the transition density of a BMC, of its mean transition density and of the corresponding invariant density, and show smoothness adaptation over various multivariate Besov classes under $L^p$ -loss error, for $1\leq p<\infty$. We prove that our estimators are (nearly) optimal in a minimax sense. As an application, we obtain new results for the estimation of the splitting size-dependent rate of growth-fragmentation models and we extend the statistical study of bifurcating autoregressive processes. | CommonCrawl |
I am interested in magic tricks whose explanation requires deep mathematics. The trick should be one that would actually appeal to a layman. An example is the following: the magician asks Alice to choose two integers between 1 and 50 and add them. Then add the largest two of the three integers at hand. Then add the largest two again. Repeat this around ten times. Alice tells the magician her final number $n$. The magician then tells Alice the next number. This is done by computing $(1.61803398\cdots) n$ and rounding to the nearest integer. The explanation is beyond the comprehension of a random mathematical layperson, but for a mathematician it is not very deep. Can anyone do better?
It teaches them about prime decomposition, about the decimal structure, about consecutive division etc.
I learnt it from Avraham Arcavi.
Here is a trick much in the spirit of the original number-adding example; moreover I'm sure Richard will appreciate the type of "deep mathematics" involved.
On a rectangular board of a given size $m\times n$, Alice places (in the absence of the magician) the numbers $1$ to $mn$ (written on cards) in such a way that rows and columns are increasing but otherwise at random (in math terms she chooses a random rectangular standard Young tableau). She also chooses one of the numbers, say $k$, and records its place on the board. Now she removes the number $1$ at the top left and fills the empty square by a "jeu de taquin" sequence of moves (each time the empty square is filled from the right or from below, choosing the smaller candidate to keep rows and columns increasing, and until no candidates are left). This is repeated for the number $2$ (now at the top left) and so forth until $k-1$ is gone and $k$ is at the top left. Now enters the magician, who looks at the board briefly and then points out the original position of $k$ that Alice had recorded. For maximum surprise $k$ should be chosen away from the extremities of the range, and certainly not $1$ or $mn$, whose original positions are obvious.
All the magician needs to do is mentally determine the path the next slide (removing $k$) would take, and apply a central symmetry with respect to the center of the rectangle to the final square of that path.
In fact, the magician could in principle locate the original squares of all remaining numbers (but probably not mentally), simply by continuing to apply jeu de taquin slides. The fact that the tableau shown to the magician determines the original positions of all remaining numbers can be understood from the relatively well known properties of invertibility and confluence of jeu de taquin: one could slide back all remaining numbers to the bottom right corner, choosing the slides in an arbitrary order. However that would be virtually impossible to do mentally. The fact that the described simple method works is based on the less known fact that the Schützenberger dual of any rectangular tableau can be obtained by negating the entries and applying central symmetry (see the final page of my contribution to the Foata Festschrift).
Destination Unknown is a magic trick that makes use of Combinatorics. It really fools people.
Apart from tricks based on numbers, there are topological objects whose properties can seem quite magical, like the Möbius strip or the unknot.
Tie ends of a long enough piece of rope to your wrists, while wearing a loosely fitting jacket or sweatshirt. With your arms tied like that, take the jacket off your back and put it back on inside out. It's easier to figure out how to do it than to explain it in words, so I'll skip the explanation. The more risque version is to tie the ankles and do the trick with pants.
The other one I haven't tried, but maybe it can be done at a party if you have a stick and some plasticine around.
The "casting out nines" sanity check of calculations is dead simple to use (a small child can do it), but the proof requires a deeper knowledge of mathematics (more precisely of arithmetic ; my own students don't have access to it even though they know what series are and can diagonalize matrices!).
Lay out 21 cards face up in three vertical lines. Have a friend pick out any card without telling you which card he/she has chosen. Have your friend tell you which line of cards the selected card is in, and make three stacks of cards, one stack from each line. Stack the three stacks on top of each other, placing the stack with the selected card between the other two stacks (IMPORTANT!). Lay out the cards again in the exact same setup (3 lines of 7, all face up), but here is the trick: when laying out the cards, place one card in each line at a time. In other words, don't make one line at a time, but put a card in every line one at a time. Have your friend again tell you which line has the selected card. Stack the cards again, the exact same way you did the first time. One more time, lay out the cards the exact same way as the last time, one card per line, and again have your friend tell you which line has the selected card. Stack all the cards again one last time, again placing the line with the selected card between the other stacked cards. Now lay out all the cards face down, one at a time. While you're doing this, remember to count, because the 11th card you place down is the selected card. From this point you can do whatever you can think of to make the trick "magical" and shock your friend by suddenly coming up with his/her card.
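If you want to convince yourself (or a sceptical spectator) that the 11th card is forced, here is a quick simulation in Python. It is my own sketch of one way to model the deal-and-stack steps, not part of the trick itself.

```python
# Quick sanity check of the 21-card trick: model the deal as round-robin
# into three piles and always stack the indicated pile in the middle.
# After three rounds the chosen card is always the 11th card dealt.
def one_round(deck, chosen):
    piles = [deck[i::3] for i in range(3)]      # deal one card per pile, repeatedly
    middle = next(p for p in piles if chosen in p)
    others = [p for p in piles if p is not middle]
    return others[0] + middle + others[1]       # chosen pile goes in the middle

def final_position(chosen):
    deck = list(range(21))
    for _ in range(3):
        deck = one_round(deck, chosen)
    return deck.index(chosen) + 1               # 1-based position in the final layout

assert all(final_position(c) == 11 for c in range(21))
print("The chosen card is always the 11th card laid out.")
```

The assertion passes for every possible starting position, which is exactly why the counting step at the end never fails.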
Start with a deck of 32 cards. The player takes a card and tells you a number $n$ between 1 and 32. Then you divide the stack into 2 smaller stacks and the player has to tell you which of the stacks contains his chosen card. According to a rule depending on that number, you put that stack above or below the other stack. After repeating this 5 times, the chosen card should be exactly at position $n$. The rule has to depend on the way you want to deal cards (whether you turn around the deck and start dealing from the bottom, or you deal from the top and turn each single card around, or you deal first and then turn both stacks around). In one of the cases the rule was: take $N-11$, find its representation in the system with base $-2$ and reverse that representation ($0$ tells you to put the stack containing the chosen card on top, etc.). I don't remember this trick properly; it should not be too difficult to express the final position depending on the choices in some formula, but it is the only situation I know in which the $-2$-system is useful.
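For readers curious about the base $-2$ system mentioned here, a small helper for computing negabinary representations is sketched below; the exact half-remembered stacking rule is not reconstructed, only the number system it relies on.

```python
# Convert an integer to its base -2 (negabinary) digit string.
def to_negabinary(n):
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 2                 # remainder 0 or 1
        digits.append(str(r))
        n = (n - r) // -2         # divide by -2 after removing the remainder
    return "".join(reversed(digits))

print(to_negabinary(6))    # '11010' : 16 - 8 - 2 = 6
print(to_negabinary(-11))  # negative numbers work too
```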
You hand a deck of cards to an audience and tell them to choose any five cards they wish. You collect the five cards, look at them quickly, and then ask a volunteer to hide one of the cards after showing it to the audience. You place the remaining four cards face up in a line. You then tell someone in the room to go to the door and fetch your partner, who has been waiting outside. You take a seat in the back of the room somewhere out of sight. Your partner enters, takes a look at the four cards displayed, and correctly calls out the hidden card! Applause follows.
How about the "Flash Mind Reader"
Write down a whole number of any size without showing you.
Just below it, write the same digits in a mixed up order.
Subtract the smaller from the larger, and write down the result.
In the resulting number, circle any digit which is not a 0.
Ask them to tell you the other digits in the number which are not circled.
You tell them the circled digit.
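The secret is the same divisibility-by-9 fact behind casting out nines: the difference between a number and any rearrangement of its digits is a multiple of 9, so the circled (nonzero) digit is forced by the remaining ones. A short Python check of this rule, added here as an illustration:

```python
# A number minus any rearrangement of its digits is divisible by 9, so the
# digit sum of the difference is a multiple of 9. The circled (nonzero) digit
# is whatever brings the sum of the remaining digits up to the next multiple of 9.
import random

def circled_digit(uncircled_digits):
    s = sum(uncircled_digits) % 9
    return 9 if s == 0 else 9 - s      # 9, not 0, since a 0 may never be circled

for _ in range(1000):
    n = random.randint(10, 10**8)
    digits = list(str(n))
    random.shuffle(digits)
    diff = abs(n - int("".join(digits)))
    diff_digits = [int(d) for d in str(diff)]
    nonzero = [i for i, d in enumerate(diff_digits) if d != 0]
    if not nonzero:                    # difference was 0; nothing to circle
        continue
    i = random.choice(nonzero)         # spectator circles some nonzero digit
    others = diff_digits[:i] + diff_digits[i+1:]
    assert circled_digit(others) == diff_digits[i]
print("rule verified")
```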
In WolframAlpha I was playing around with the values of the sequence defined by the integral and noticed that the values seem to get arbitrarily close to 1. I guess the difficulty is finding a closed expression for the value of the definite integral.
Split the integral into two parts, one over $[0,1]$ and the second one over $[1,+\infty)$. Using the dominated convergence theorem you should conclude that the first integral converges to $1$ and the second one to $0$. If you need more help let me know.
where the limit follows immediately.
and the limit for $n \to \infty$ goes to $1$, where you can use l'Hospital's rule.
There is one pole inside the contour.
Is there a proof of the Johnson-Lindenstrauss Lemma that can be explained to an undergraduate ?
Is there a proof of the JL Lemma that isn't "geometric" ?
It's a little odd to even ask the question, considering the intrinsic geometric nature of the lemma. But there's a reasonably straightforward way of seeing how the bound emerges without needing to worry too much about random rotations, matrices of Gaussians or the Brunn-Minkowski theorem.
Warning: what follows is a heuristic argument that helps suggest why the bound is in the form that it is: it should not be confused for an actual proof.
In its original form, the JL Lemma says that any set of $n$ points in $R^d$ can be embedded in $R^k$ with $k = O(\log n/\epsilon^2)$ such that all distances are preserved to within a $1+\epsilon$ factor. But the real result at the core of this is that there is a linear mapping taking a unit vector in $R^d$ to a vector of norm in the range $1\pm \epsilon$ in $R^k$, where $k = 1/\epsilon^2$ (the rest follows by scaling and an application of the union bound).
Trick #1: Take a set of values $a_1, \ldots, a_n$ and set $Y = \sum_i a_i r_i$, where $r_i$ is chosen (iid) to be +1 or -1 with equal probability. Then $E[Y^2] = \sum a_i^2$. This can be verified by an easy calculation.
So now consider the vector $v$. Let's assume that $v$'s "mass" is roughly equally distributed among its coordinates. Take a random sample of $d/k$ of the coordinates of $v$ and apply the above trick to the values. Under the above assumption, the resulting $Y^2$ will have roughly $1/k$ of the total (squared) mass of $v$. Scale up by $k$.
This is one estimator of the norm of $v$. It is unbiased and it has a bounded maximum value because of the assumption. This means that we can apply a Chernoff bound over a set of $k$ such estimators. Roughly speaking, the probability of deviation from the mean is $\exp(-\epsilon^2 k)$, giving the desired value of $k$.
But how do we enforce the assumption? By applying a random Fourier transform (or actually, a random Hadamard transform): this "spreads" the mass of the vector out among the coordinates (technically by ensuring an upper bound on the $\ell_\infty$ norm).
That's basically it. Almost all the papers that follow the Ailon-Chazelle work proceed in this manner, with increasing amounts of cleverness to reuse samples, or only run the Fourier transform locally, or even derandomize the process. What distinguishes this presentation of the result from the earlier approaches (which basically boil down to: populate a matrix with entries drawn from a distribution having subGaussian tails) is that it separates the "spreading" step (called the preconditioner) from the latter, more elementary step (the sampling of coordinates). It turns out that in practice the preconditioner can often be omitted without incurring too much error, yielding an extremely efficient (and sparse) linear transform.
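For readers who want to see the bound in action, here is a minimal numerical sketch of the plain random-sign-matrix variant mentioned above (no preconditioner, no sampling tricks); the constant 8 in the choice of $k$ is a rough conventional choice, not a tight one.

```python
# Project n points from R^d down to k = O(log n / eps^2) dimensions with a
# +/-1 matrix and look at the worst pairwise squared-distance distortion.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 2000, 0.2
k = int(np.ceil(8 * np.log(n) / eps**2))     # rough constant, illustration only

X = rng.normal(size=(n, d))                  # arbitrary point set
S = rng.choice([-1.0, 1.0], size=(k, d))     # entries with subgaussian tails
Y = X @ S.T / np.sqrt(k)                     # the linear map x -> Sx / sqrt(k)

def pairwise_sq_dists(Z):
    G = Z @ Z.T
    sq = np.diag(G)
    return sq[:, None] + sq[None, :] - 2 * G

mask = ~np.eye(n, dtype=bool)
ratios = pairwise_sq_dists(Y)[mask] / pairwise_sq_dists(X)[mask]
print(f"k = {k}, squared-distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")
```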
The Joint Mathematics Meetings is billed as the "largest annual mathematics meeting in the world". In 2012 it's in Boston between Jan 4 and 7, and I'm told that there will be over 2000 presenters.
Happily, some of these will be geometers.
Satyen Devadoss and Joe O'Rourke will be running an MAA short course based on their new book on Jan 2-3 before the main conference starts.
Seth Pettie: An Introduction to Davenport-Schinzel Sequences, Forbidden 0-1 Matrices, and Their Geometric Applications.
Yusu Wang: Toward understanding complex data: graph Laplacians on singular manifolds.
I'll be talking about Horoball Hulls and Extents in Positive Definite Space.
Finally, Erik Demaine will be delivering the AMS-MAA-SIAM Gerald and Judith Porter Public Lecture on Saturday, titled "Geometric puzzles: Algorithms and complexity"
There will also be an all-day AMS special session on Computational and Applied Topology (also on Thursday).
So there's lots to do at the JMM if you're interested in geometry. If you're in the area in January, drop by!
Nature photography may be fun for tourists, but it is one of the most complicated things for photographers. To capture all the facets of a bird, you might need more than one camera. You recently encountered such a situation. There are $n$ photographers, so there are $n$ cameras in a line on the x-axis. All the cameras are at distinct coordinates. You want to pair up these cameras ($n$ is even) in such a way that the sum of angles subtended on the bird by the pairs of cameras is maximized. Formally, let A, B be two cameras, and let P be the bird to be captured by these two cameras. The angle is APB. Note: all angles are in radians.

### Input
- The first line of the input contains an integer $T$ denoting the number of test cases. The description of the test cases follows.
- The first line of each test case contains an integer $n$.
- The second line of each test case contains $n$ space-separated integers denoting the $x_i$ coordinates of the cameras.
- The third line of each test case contains two space-separated integers $P, Q$ denoting the x and y coordinates of the bird respectively.

### Output
For each test case, output your answer in a single line. Your answer would be considered correct if its absolute error is less than or equal to 1e-6 of the actual answer.

### Constraints
- $1 \le T \le 10$
- $2 \le n \leq 100$
- $1 \le x_i \leq 300$
- $0 \le P \leq 300$
- $1 \le Q \leq 300$

### Example Input
```
2
2
0 1
0 1
2
0 1
100 1
```

### Example Output
```
0.785398163397
0.000100999899
```

### Explanation
**Note**: $1 \leq x_i$ is not being satisfied by the sample input, but will be satisfied in the actual test data.

**Testcase 1**: There are only 2 cameras, so they have to be paired up with each other. The angle subtended by them is 45 degrees. Converting this to radians gives the output.
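A possible line of attack (my own sketch, not an official editorial): measure each camera by the signed angle $\arctan((x_i - P)/Q)$ it makes with the vertical through the bird; the angle subtended by a pair is the difference of their two values, and the sum of such differences over a perfect matching appears to be maximized by sorting these values and pairing the lower half against the upper half.

```python
# Sketch of a solution: the answer under this pairing strategy is simply
# sum(upper half of sorted angles) - sum(lower half).
import math, sys

def solve():
    data = sys.stdin.read().split()
    idx = 0
    t = int(data[idx]); idx += 1
    for _ in range(t):
        n = int(data[idx]); idx += 1
        xs = [int(data[idx + i]) for i in range(n)]; idx += n
        p, q = int(data[idx]), int(data[idx + 1]); idx += 2
        angles = sorted(math.atan((x - p) / q) for x in xs)
        best = sum(angles[n // 2:]) - sum(angles[:n // 2])
        print(f"{best:.12f}")

if __name__ == "__main__":
    solve()
```

On the two sample cases this prints 0.785398… and 0.000101…, matching the expected output.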
The linear function is arguably the most important function in mathematics. It's one of the easiest functions to understand, and it often shows up when you least expect it. Because it is so nice, we often simplify more complicated functions into linear functions in order to understand aspects of the complicated functions.
Unfortunately, the term "linear function" means slightly different things to different people. Fortunately, the distinction is pretty simple. We first outline the strict definition of a linear function, which is the favorite version in higher mathematics. Then, we discuss the rebellious definition of a linear function, which is the definition one typically learns in elementary mathematics; it is a rebellious definition since such a function isn't, strictly speaking, linear.
In one variable, the linear function is exceedingly simple. A linear function is one of the form $$f(x)=ax,$$ where the parameter $a$ is any real number. The graph of $f$ is a line through the origin and the parameter $a$ is the slope of this line.
A linear function of one variable. The linear function $f(x)=ax$ is illustrated by its graph, which is the green line. Since $f(0)=a \times 0 =0$, the graph always goes through the origin $(0,0)$. You can change $f$ by typing in a new value for $a$, or by dragging the blue point with your mouse. The parameter $a$ is the slope of the line, as illustrated by the shaded triangle.
One important consequence of this definition of a linear function is that $f(0)=0$, no matter what value you choose for the parameter $a$. This fact is the reason the graph of $f$ always goes through the origin. By this strict definition of a linear function, the function $$g(x) = 3x +2$$ is not a linear function, as $g(0) \ne 0$.
Why this insistence that $f(0)=0$ for any linear function $f$? The reason is that in mathematics (other than in elementary mathematics), we don't define linear by the requirement that the graph is a line. Instead, we require certain properties of the function $f(x)$ for it to be linear.
One important requirement for a linear function is: doubling the input $x$ must double the function output $f(x)$. It's easy to see that the function $g(x)$ fails this test. For example, $g(1)=5$ and $g(2)=8$, which means that $g(2) \ne 2g(1)$. We can write this requirement for a linear function $f$ as $$f(2x)=2f(x)$$ for any input $x$. If $f(x)=ax$, then $f(2x)=2ax$ and $2f(x)=2ax$, so this requirement is satisfied.
To satisfy this doubling requirement, we must have $f(0)=0$. This follows from the fact that doubling the input zero gives zero back, so the doubling requirement means $f(0)=f(2 \cdot 0)=2f(0)$. Thus $f(0)$ is a number that is the same if you double it, and the only such number is zero; i.e., $f(0)=0$.
By the way, for a linear function, this property must be satisfied for any number, not just the number 2. A linear function must satisfy $f(cx)=cf(x)$ for any number $c$. The other requirement for a linear function is that applying $f$ to the sum of two inputs $x$ and $y$ is the same thing as adding the results from being applied to the inputs individually, i.e., $f(x+y)=f(x)+f(y)$.
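These two properties are easy to test numerically. The following sketch contrasts a strictly linear $f(x)=ax$ with an affine $g(x)=ax+b$; the values of $a$ and $b$ are arbitrary choices for illustration.

```python
# Numerically contrast the strictly linear f(x) = a*x with the affine
# g(x) = a*x + b against the two defining properties of linearity.
a, b = 3.0, 2.0
f = lambda x: a * x
g = lambda x: a * x + b

def is_linear(h, samples=((1.0, 2.0), (0.5, -4.0), (7.0, 0.0))):
    homogeneous = all(abs(h(2 * x) - 2 * h(x)) < 1e-12 for x, _ in samples)
    additive = all(abs(h(x + y) - (h(x) + h(y))) < 1e-12 for x, y in samples)
    return homogeneous and additive

print(is_linear(f))   # True:  f(cx) = c f(x) and f(x+y) = f(x) + f(y)
print(is_linear(g))   # False: g(0) = b != 0 already rules it out
```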
The rebellious view of the linear function is to call any function of the form $$f(x)=ax+b$$ a linear function, since its graph is a line.
An affine function of one variable. The affine function $f(x)=ax+b$ is illustrated by its graph, which is the green line. Since $f(0)=a \times 0 +b=b$, the graph always goes through the $y$-axis at the point $(0,b)$, which is illustrated by the gray point. You can change $f$ by typing in a new values for $a$ or $b$, or by dragging the blue points with your mouse. The parameter $a$ is the slope of the line, as illustrated by the shaded triangle.
However, as mentioned above, this type of function with $b \ne 0$ does not satisfy the properties for linearity. So, to call $f$ a linear function, we have to rebelliously ignore such facts to the contrary. Strictly, if $b \ne 0$, then $f$ should be called an affine function rather than a linear function.
Given that this rebellious view is firmly entrenched in elementary mathematics, we might sometimes join in and use this terminology. If it doesn't seem worthwhile to insist on the distinction, we might use the term linear function when we should really use the term affine function.
In other contexts, the properties of linearity are critical for the mathematical analysis. In such cases, we'll be careful to insist that a linear function $f(x)$ does satisfy that $f(0)=0$, and make the distinction between linear and affine functions.
The linear function by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us. | CommonCrawl |
Given a knapsack of some capacity $C$ and $n$ objects with object $i$ having weight $w_i$ and profit $p_i$, the goal is to choose some subset of the objects that can fit in the knapsack (i.e. the sum of their weights is no more than $C$) while maximizing profit.
$x$ is a vector of size $n$ where $x_i$ is 1 if we choose to put object $i$ in the knapsack, and 0 otherwise.
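When the weights and the capacity are integers, the classic dynamic-programming solution runs in $O(nC)$ time. A minimal sketch follows; the sample weights, profits and capacity are made up for illustration.

```python
# Standard 0/1 knapsack dynamic program for the formulation above.
# Returns the best profit and the 0/1 vector x of chosen objects.
def knapsack(weights, profits, capacity):
    n = len(weights)
    # best[i][c] = max profit using the first i objects with capacity c
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, p = weights[i - 1], profits[i - 1]
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if w <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - w] + p)
    # Recover the indicator vector x by walking backwards through the table.
    x, c = [0] * n, capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            x[i - 1] = 1
            c -= weights[i - 1]
    return best[n][capacity], x

profit, x = knapsack([2, 3, 4, 5], [3, 4, 5, 6], capacity=5)
print(profit, x)   # 7, [1, 1, 0, 0]
```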
I recently noticed that Oxford's computer science department has started offering a grad course on Categorical quantum mechanics. Apparently they say that it is relevant for the study of quantum foundations and quantum information, and that it uses paradigms from category theory.
How exactly does it help in the study of quantum information?
Has this formulation actually produced any new results or predictions apart from what our general formulation of quantum mechanics has already done? If so, what are those?
This answer is the opinion of someone who is essentially an outsider to "CQM" (= Categorical Quantum Mechanics), but a broadly sympathetic outsider. It should be interpreted as such.
Its results about "completeness" should be interpreted in the same sense as it would in Gödel's Completeness Theorem [sic]: that a set of axioms can perfectly capture a model, which in this case is the model of transformations on a set of qubits expressed in terms of transformations of degrees of freedom expressed in terms of the Z and X eigenbases.
Occasional comparisons to things like "Rel" (that is: the category of relations, which from a computational point of view is more closely allied to non-deterministic Turing machines than quantum computers) illustrate the fact that they are aware of quantum information theory as being part of a larger landscape of computational theories, where the distinctions between these theories may lead to a robust top-down intuition about what distinguishes quantum theory from other possible dynamical theories of information.
The difference in motivation between CQM and the rest of the field also reveals itself in the approach taken to analysis, in which linear algebra over $\mathbb C$ takes very much a background role.
Linear algebra over $\mathbb C$ is certainly still present in the background, essentially as the target model for CQM. But the usual approach to quantum mechanics in terms of linear algebra over $\mathbb C$ is seen as potentially obscuring "what is actually going on". And to give the proponents of CQM their due, they have a good argument here: the usual presentation of quantum information theory, starting from vectors over $\mathbb C$ and unitary transformations, working through density operators and CPTP maps, requires a non-trivial amount of work to develop an intuition of what it is for and in what ways it differs (and in what ways it does not differ) from probability theory. It is certainly possible to get that intuition by the usual complex-linear-algebraic approach, but the proponents of CQM would claim that the usual approach is not likely to be the most effective approach.
CQM attempts to put the intuitive meaning front-and-centre, in a mathematically rigorous way. This obligates them to talk about such apparently obscure things as "dagger commutative Frobenius algebras". Of course, such terminology means little to nothing to almost anyone else in the field — but then this is not much different from how quantum information theorists come across to other computer scientists.
This is just the starting point of the potential confusion for an outsider — as those pursuing CQM are in effect mathematicians/logicians with top-down motivations, there is not one single thread of research in CQM, and there is not a sharp boundary between work on CQM and work in higher category theory. This is analogous to the lack of sharp boundary between computational complexity expressed in terms of quantum circuits, quantum communication complexity, query complexity, and the classical version of these topics, along with Fourier analysis and other relevant mathematical tools. Without a clear frame of reference, it can sometimes be a bit confusing as to where CQM begins and ends, but it has in principle as well-defined a notion of scope as any other topic in quantum information theory.
If you wonder why people might like to investigate CQM rather than a more mainstream question in quantum information theory, we should first acknowledge that there are other lines of research in quantum information theory which are not exactly directed towards meaningful impact on anyone else. If we are happy for people to do research into such things as approaches to quantum computation involving physical phenomena which no-one has yet exhibited in the lab [arXiv:1701.05052] or approaches to error correction on closed d-dimensional manifolds for d>2 [arXiv:1503.02065], we should be equally happy to admit other lines of investigation which is somewhat divorced from the mainstream. The justification in each case is the same: that while the arc of theory is long, it bends towards application, and things which are investigated for purely theoretical reasons have a way of yielding practical fruits.
On that note: one view of the purpose of paying attention to foundations is to get the sort of insight necessary to solve problems more easily. Does CQM provide that insight?
I think that it is only very recently that the proponents of CQM have seriously considered the question of whether the insights it provides allow one to obtain new results in subjects which are more in the mainstream of quantum information theory. This is again because the main motivation is the foundations, but recent work has started to develop on the theme of payoffs in the wider field.
Novel techniques for constructing unitary error bases and Hadamard matrices (e.g. [arXiv:1504.02715, arXiv:1609.07775]). These appeared to be of enough interest to the quantum information community that these results were presented as talks at QIP 2016 and 2017 respectively.
A well-thought out and clear definition of a quantum graph, which recovers the definition of a noncommutative graph from [arXiv:1002.2514] in such a way that makes the relationship to 'classical' graphs clear, allows them to connect to higher algebra, and obtain (Corollary 5.6) a result on the asymptotic density of pairs of graphs for which there is a quantum advantage in pseudo-telepathy games.
Some recent techniques for solving problems in counting complexity regarding the Holant, which are inspired by quantum computation [arXiv:1702.00767], are more specifically inspired by a particular line of investigation into CQM which involved the distinction between GHZ states and W states.
One of the main products of CQM is the ZX-calculus, which one might describe as a tensor-notation which is similar to circuit notation, but which also comes equipped with a formal system for transforming equivalent diagrams to one another. There is a line of investigation into using this as a practical tool for circuit simplification, and for realising unitary circuits in particular architectures. This is based in part on the fact that ZX diagrams are a notation which allows you to reason about tensors beyond just unitary circuits, and which is therefore more flexible in principle.
Should everyone start using CQM immediately?
As with many things which have been devised for heterodox academic reasons, it is not necessarily the best tool for every question which one might want to ask. If you want to run numerical simulations, chances are you use C or Python as your programming language rather than SML. However, on that same note, just as programming languages developed in earnest by major software firms may in time be informed by ideas which were first developed in such a heterodox academic context, so too might some of the ideas and priorities of CQM eventually filter out to the broader community, making it less an isolated line of investigation than it may seem today.
There are also subjects for which CQM does not (yet) seem to provide a useful way of approaching, such as distance measures between different states or operations. But every mathematical tool has its limits: I expect that I won't be using quantum channel theory any time soon to consider how to simplify unitary circuits.
There will be problems for which CQM sheds some insight, and may provide a convenient means for analysis. A few examples of such topics are provided above, and it is reasonable to suppose that more areas of application will become evident with time. For those topics where CQM is useful, one can choose whether to take the time to learn how to use the useful tool; apart from that, it's up to you whether or not you are curious enough. In this respect, it is like every other potential mathematical technique in quantum information theory.
If there don't seem to be many novel applications of CQM yet, it's because there aren't — because this isn't the main motivation of CQM, nor have many people studied it.
Its main motivations are along the lines of foundations of computer science and of physics.
Applications of the tools of CQM to mainstream quantum information theory do exist, and you can expect to see more as time goes on.
In carrying out continuous spline interpolation of a function, derivatives of the function at some points are always needed. However, in real-world situations, not only may it be difficult to compute the derivatives of a function, the derivatives may not even exist at some points. In such a situation, the usual continuous spline interpolation will not be suitable. We therefore introduce a discrete interpolation scheme that involves only differences. Since no derivatives are involved, the discrete interpolant can be constructed for a more general class of functions and therefore has a wider range of applications. In this thesis, we shall develop two kinds of discrete spline via a constructive approach: the first kind of discrete spline involves forward differences, while the second kind of discrete spline involves central differences. We recall that a quintic polynomial is a polynomial of degree five. In the first case, where $f(t)$ is defined on a discrete interval, we shall develop a class of quintic discrete Hermite interpolants and derive explicit error bounds in the $\ell_\infty$ norm. We also establish, for a two-variable function $f(t,u)$ defined on a discrete rectangle, the biquintic discrete Hermite interpolant and perform the related error analysis. Based on the results of discrete Hermite interpolation, we then define the quintic discrete spline interpolant of the function $f(t)$, formulate its construction, and establish explicit error estimates between $f(t)$ and its spline interpolant. We also tackle the two-variable discrete spline interpolation and the corresponding error analysis for $f(t,u)$. As an application, we solve Fredholm integral equations numerically by using biquintic discrete splines to degenerate the kernels, and furnish the related error analysis. In the second case, where central differences are involved, for a periodic function $f(t)$ defined on a discrete interval, we construct the periodic quintic discrete spline interpolant and obtain explicit error estimates between the function and its spline interpolant. The treatment is then extended to a periodic function $f(t,u)$ defined on a discrete rectangle; here we establish the two-variable periodic discrete spline interpolant and also provide the error analysis. As applications, we solve second-order and fourth-order boundary value problems by discrete splines involving central differences. Not only do we tackle the related convergence and error analysis, comparisons with other known methods in the literature are also illustrated by several examples.
Discuss with your partner how Lin's method is similar to and different from drawing base-ten diagrams or using the partial quotients method.
Lin subtracted $ 3 \boldcdot 2,$ then $3 \boldcdot 1$, and lastly $3 \boldcdot 9$. Earlier, Andre subtracted $3 \boldcdot 200,$ then $3 \boldcdot 10$, and lastly $3 \boldcdot 9$. Why did they have the same quotient?
In the third step, why do you think Lin wrote the 7 next to the remainder of 2 rather than adding 7 and 2 to get 9?
Lin's method is called long division. Use this method to find the following quotients. Check your answer by multiplying it by the divisor.
Here is Priya's calculation of $906 \div 3$.
Priya wrote 320 for the value of $906 \div 3$. Check her answer by multiplying it by 3. What product do you get and what does it tell you about Priya's answer?
Describe Priya's mistake, then show the correct calculation and answer.
Long division is another method for calculating quotients. It relies on place value to perform and record the division.
When we use long division, we work from left to right and with one digit at a time, starting with the leftmost digit of the dividend. We remove the largest group possible each time, using the placement of the digit to indicate the size of each group. Here is an example of how to find $948 \div 3$ using long division.
We start by dividing 9 hundreds into 3 groups, which means 3 hundreds in each group. Instead of writing 300, we simply write 3 in the hundreds place, knowing that it means 3 hundreds.
There are no remaining hundreds, so we work with the tens. We can make 3 groups of 1 ten in 4 tens, so we write 1 in the tens place above the 4 of 948. Subtracting 3 tens from 4 tens, we have a remainder of 1 ten.
We know that 1 ten is 10 ones. Combining these with the 8 ones from 948, we have 18 ones. We can make 3 groups of 6, so we write 6 in the ones place.
In total, there are 3 groups of 3 hundreds, 1 ten, and 6 ones in 948, so $948 \div 3 = 316$.
Long division is an algorithm for finding the quotient of two numbers expressed in decimal form. It works by building up the quotient one digit at a time, from left to right. Each time you get a new digit, you multiply the divisor by the corresponding base ten value and subtract that from the dividend.
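The same digit-by-digit procedure can be written as a short program; this sketch is an illustration added here, not part of the lesson itself.

```python
# Long division, one digit of the dividend at a time, from left to right.
def long_division(dividend, divisor):
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient_digits.append(remainder // divisor)
        remainder = remainder % divisor           # what is left after removing groups
    quotient = int("".join(str(d) for d in quotient_digits))
    return quotient, remainder

print(long_division(948, 3))   # (316, 0)
print(long_division(513, 4))   # (128, 1), i.e. 513 = 128 * 4 + 1
```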
Using long division we see that $513 \div 4 = 128 \frac14$. We can also write this as $513 = 128 \times 4 + 1$. | CommonCrawl |
In conditionally heteroskedastic models, the optimal prediction of powers, or logarithms, of the absolute process has a simple expression in terms of the volatility process and an expectation involving the independent process. A standard procedure for estimating this prediction is to estimate the volatility by gaussian quasi-maximum likelihood (QML) in a first step, and to use empirical means based on rescaled innovations to estimate the expectation in a second step. This paper proposes an alternative one-step procedure, based on an appropriate non-gaussian QML estimation of the model, and establishes the asymptotic properties of the two approaches. Their performances are compared for finite-order GARCH models and for the infinite ARCH. For the standard GARCH(p, q) and the Asymmetric Power GARCH(p,q), it is shown that the ARE of the estimators only depends on the prediction problem and some moments of the independent process. An application to indexes of major stock exchanges is proposed.
As in the example above, the polynomial X^8 + X^6 + X^5 + X + 1 is one of the irreducible polynomials defining the finite field GF(2^8).
How can I get all the irreducible polynomials which can define the finite field GF(2^8)?
What you are looking for is the list of all degree-8 irreducible polynomials over $\mathbb F_2$. There is no built-in function for this, but they can be found very easily if you combine polynomials, which iterates over all polynomials of a given degree, with is_irreducible, which tests irreducibility.
Finally, you can have access to some specific irreducible polynomials using R.irreducible_element(algorithm='...'). Have a look at the documentation of irreducible_element for details here.
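Putting the pieces together, a short Sage session along these lines should produce the full list (there are 30 monic irreducible polynomials of degree 8 over $\mathbb F_2$, and any one of them can be used to define GF(2^8)):

```python
# Run inside Sage: list every irreducible polynomial of degree 8 over GF(2).
R = PolynomialRing(GF(2), 'X')
irreducibles = [p for p in R.polynomials(of_degree=8) if p.is_irreducible()]
print(len(irreducibles))        # 30 of them
print(irreducibles[:3])         # a few examples

# One specific irreducible polynomial of degree 8:
print(R.irreducible_element(8))
```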
This is a good party, so let me join it!
This and the other answers show how versatile is sage, and how it supports thinking and experimenting. | CommonCrawl |
Step-by-step interactive example for calculating standard deviation First, we need a data set to work with. Let's pick something small so we don't get overwhelmed by the number of data points.... For example, the standard deviation of a sample can be used to approximate the standard deviation of a population. Finding a sample size can be one of the most challenging tasks in statistics and depends upon many factors including the size of your original population.
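For concreteness, here is the step-by-step computation on a tiny made-up data set (the numbers below are an invented example, not taken from any particular page):

```python
# Step-by-step sample standard deviation on a small invented data set.
import math

data = [4, 8, 6, 5, 3]                                 # small so nothing is overwhelming
n = len(data)
mean = sum(data) / n                                   # step 1: the mean
squared_deviations = [(x - mean) ** 2 for x in data]   # step 2: squared deviations
sample_variance = sum(squared_deviations) / (n - 1)    # step 3: divide by n - 1
sample_sd = math.sqrt(sample_variance)                 # step 4: square root

print(mean, sample_variance, round(sample_sd, 4))
```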
You need to be creative, because these data are consistent with any mean exceeding $0\times .05 + 1\times .07 + \cdots + 5\times .18$ = $2.89$ and any standard deviation exceeding $1.38$ (which are attained by assuming nobody visited any more than five times per month). | CommonCrawl |
We discuss the problem to count, or, more modestly, to estimate the number f(m,n) of unimodular triangulations of the planar grid of size $m\times n$. Among other tools, we employ recursions that allow one to compute the (huge) number of triangulations for small m and rather large n by dynamic programming; we show that this computation can be done in polynomial time if m is fixed, and present computational results from our implementation of this approach. We also present new upper and lower bounds for large m and n, and we report about results obtained from a computer simulation of the random walk that is generated by flips. | CommonCrawl |
Bores are a well-known phenomenon in fluid mechanics, although their occurrence in nature is relatively rare. They usually occur when a tidal swell causes a difference in surface elevation in the mouth of a river, or narrow bay, causing long waves to propagate upstream. The term 'tidal bore' is also frequently used in this context. Depending on the conditions the bore may take on various forms, ranging from a smooth wavefront followed by a smaller wave train, to one single breaking wavefront. Some noteworthy locations where tidal bores can be found include the River Seine in France, the Petitcodiac River in Canada, and the Qiantang River in China. Common to all these locations is a large tidal range. Bores, when powerful enough, can produce particularly unsafe environments for shipping, but at the same time popular opportunities for river surfing. As found by Favre in 1935 through wave tank experiments, the strength of the bore can be determined by the ratio of the incident water level above the undisturbed water depth to the undisturbed water depth. Denoting this ratio by $\alpha$, bores fall into one of three categories: if $\alpha$ is less than 0.28 the bore is purely undular, and will feature oscillations downstream of the bore front; if $\alpha$ is between 0.28 and 0.75 the bore will continue to feature oscillations, but one or more waves behind the bore front will start to break; if $\alpha$ is greater than 0.75 the bore is completely turbulent, and can no longer be described by the standard potential flow theory. The goal of this report is to simulate the time evolution of an undular bore through numerical experiments, using a dispersive nonlinear shallow water theory, in particular the Korteweg-de Vries (KdV) equation. This is a third-order nonlinear partial differential equation, where the dependent variable describes the displacement of the free surface. When deriving this equation, an expression for the velocity field of the flow is also available. This can be calculated at any point in the fluid, as long as the displacement of the surface is known. Thus, solving the KdV equation also yields the fluid particle velocity field, which can be used to calculate fluid particle trajectories, as done by Bjørkvåg and Kalisch, but also to formulate a breaking criterion. By applying this breaking criterion to the undular bore, the onset of breaking, and thus also a maximum allowable wave height (due to the nonlinearity of the model equation), can be computed numerically. This criterion can also be applied to the exact traveling wave solutions of the KdV equation, namely the 'solitary wave' solution and the 'cnoidal wave' solution. These are waves of constant shape traveling at constant velocity, so applying the breaking criterion yields a maximum height for which they can exist. The theory leading to the formulation of the KdV equation is also included, in addition to the formulation of the linearized and shallow water equations. These, however, serve only as 'stepping stones' towards the higher-order Boussinesq equations, and are not used in any further calculations.
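To give a flavour of the kind of numerical experiment described, here is a bare-bones pseudospectral sketch for the KdV equation $u_t + 6uu_x + u_{xxx} = 0$. It is my own minimal illustration, not the thesis's scheme: Strang splitting with an exact dispersive step in Fourier space and a simple explicit step for the nonlinear term; a serious solver would add dealiasing and a higher-order time integrator.

```python
# Minimal pseudospectral time-stepping for u_t + 6 u u_x + u_xxx = 0,
# tested on the exact solitary-wave solution as a convenient sanity check.
import numpy as np

N, L = 256, 50.0
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

c, x0 = 1.0, 15.0
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2   # exact solitary wave, speed c

dt, steps = 2e-4, 10000                                    # evolve to t = 2
half_linear = np.exp(1j * k**3 * dt / 2)                   # exact step for u_t = -u_xxx

for _ in range(steps):
    u = np.real(np.fft.ifft(np.fft.fft(u) * half_linear))  # half linear step
    u_x = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    u = u - dt * 6.0 * u * u_x                             # explicit nonlinear step
    u = np.real(np.fft.ifft(np.fft.fft(u) * half_linear))  # half linear step

# The solitary wave should have translated by roughly c * t = 2 units.
print("crest now near x =", x[np.argmax(u)])
```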
Abstract: We consider the free 2-step nilpotent Lie algebra and its cohomology ring. The homotopy transfer induces a homotopy commutative algebra on its cohomology ring which we describe. We show that this cohomology is generated in degree 1 as $C_\infty$-algebra only by the induced binary and ternary operations. | CommonCrawl |
We propose and theoretically investigate a very attractive novel magnetic tunnel junction (MTJ), Fe(001)/O/NaCl(001)/O/Fe(001), for spintronics. Due to the presence of the single p(1$\times$1)O layer between the Fe electrode and the NaCl insulator, the interfacial strain can be fully released. Therefore, perfectly ordered NaCl can be grown over a large area on top of the Fe electrode. Since the unit cell of the Fe crystal in the growth direction has two layers, there are two different kinds of contact interface between the Fe electrode and the NaCl insulator, i.e., the translational and mirror symmetry configurations. According to our ab initio total energy calculation, both of them are experimentally accessible. For the translational symmetry configuration, the tunneling magnetoresistance (TMR) ratio at the Fermi energy is of the order of 500\%. More interestingly, for the mirror symmetry configuration, the TMR ratio drastically increases to 5500\%. Different from the role of the MgO barrier in the well-known Fe/MgO/Fe MTJ, the evanescent state with the $\Delta_5$ symmetry dominates the transmission of the majority spin electrons through the NaCl barrier. By studying the scattering states and the complex band structure of the NaCl insulator, we systematically understand the transport properties of the Fe/O/NaCl/O/Fe MTJ.
How many steps does it take the computer to solve a Sudoku puzzle?
We all know what Sudoku is. Given a Sudoku puzzle, one can use a simple recursive procedure to solve it using a computer. Before describing the algorithm, we make some definitions.
A partial solution is a Sudoku puzzle with only some of the numbers entered.
Given an empty square in a partial solution, an assignment of a digit to the square is consistent if it doesn't appear in the same row, column or $3\times 3$ square.
If there is any square for which there is no consistent assignment, give up.
Otherwise, pick an empty square $S$ (*).
Calculate the set of all consistent assignments $A$ to this square.
For each $a$ in $A$, taken in some order (**): put $a$ in $S$, and recurse.
We have two degrees of freedom: choosing an empty square, and choosing an order for the assignments to the square. In practice, it seems that whatever the choice is, the algorithm reaches a solution very fast.
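For concreteness, here is a direct transcription of the procedure with the simplest possible choices: rule (*) picks the first empty square in reading order, and rule (**) tries the digits in increasing order.

```python
# Recursive backtracking solver; the grid is a 9x9 list of lists with 0
# marking an empty square.
def consistent(grid, r, c, d):
    if any(grid[r][j] == d for j in range(9)):            # same row
        return False
    if any(grid[i][c] == d for i in range(9)):            # same column
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)                   # top-left of the 3x3 box
    return all(grid[br + i][bc + j] != d for i in range(3) for j in range(3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:                           # (*) first empty square
                for d in range(1, 10):                    # (**) digits in increasing order
                    if consistent(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0                    # undo and try the next digit
                return False                              # no consistent assignment: give up
    return True                                           # no empty squares left: solved
```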
Suppose we give the algorithm a partial Sudoku with a unique solution. Can we bound the number of steps the algorithm takes to find the solution?
To make life easier, you can choose any rule you wish for ( * ) and (**), even a random rule (in that case, the relevant quantity is probably the expectation); any analyzable choice would be interesting.
Also, if it helps, you can assume something about the input - say at least $X$ squares are filled in. I'm also willing to relax the restriction that there be a unique solution - indeed, even given an empty board, the algorithm above finds a complete Sudoku very fast. Analyses for random inputs (in whatever meaningful sense) are also welcome.
Since Sudoku is known to be NP-complete for arbitrarily large grid sizes, it's highly unlikely that your algorithm has any 'good' bound. As to why it works so well, I suspect the reason is simply that Sudoku puzzles are designed to be cleanly solved by humans; humans in general are really mediocre at backtracking searches (particularly once the depth gets to more than a small handful of steps), so most human Sudoku puzzles in fact have nearly-linear solutions with very little branching needed, just because those make for more interesting puzzles.
Assume a folder named Test on the desktop. In this Test folder, there is another folder named Survey reports. In the Survey reports folder, there are multiple Excel files. All the Excel files have the same structure, i.e., if there is an Operator Name in cell C9 of one Excel file, then the other Excel files will also have the operator name in cell C9.
In another workbook (say Book1.xlsx), one may want to extract data from specific cells of all Excel files lying in the Survey reports folder. Therefore, in sheet1 of Book1.xlsx, one may want to show in A3:C3 data from cells C4, R9 and C16 of the first Excel file from the Survey reports folder. Likewise, in sheet1 of Book1.xlsx, one may want to show in A4:C4 data from cells C4, R9 and C16 of the second Excel file from the Survey reports folder, and so on.
While one way is to open each file manually and then link individual cells, this is a very time-consuming process if there are many files in the Survey reports folder.
You may follow the process mentioned below to extract data from specific cells of multiple closed Excel files.
3. From cell D3 downwards, type names of all Excel files (along with their extension i.e. xls, xlsx) in the Survey report folder. To generate this list of Excel files names automatically, refer to the following post.
You will observe that even with all Excel files in the Survey reports folder closed, the INDIRECT.EXT function will display the data from respective cells of those Excel files.
Since data is being extracted from closed Excel files via an addin, performance of the workbook will take a hit. Please try this on limited Excel files first.
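As an aside, if you are comfortable stepping outside Excel, the same extraction can be scripted. Below is a minimal Python sketch using the openpyxl package (assumed to be installed); the folder path, sheet choice and cell addresses are assumptions matching the example above, so adjust them to your setup.

```python
# Pull cells C4, R9 and C16 from every workbook in the Survey reports folder.
from pathlib import Path
from openpyxl import load_workbook

folder = Path.home() / "Desktop" / "Test" / "Survey reports"
cells = ["C4", "R9", "C16"]

rows = []
for path in sorted(folder.glob("*.xlsx")):   # openpyxl reads .xlsx/.xlsm, not legacy .xls
    ws = load_workbook(path, data_only=True).active
    rows.append([path.name] + [ws[c].value for c in cells])

for row in rows:
    print(row)
```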
Could this formula be adapted to pull data the meets a certain criteria?
How would I rewrite this formula, assuming this is possible?
Please upload your file somewhere and share the link of the uploaded file here. Please explain exactly what you want.
Basically what I want to do is extract the information for "tba's" into a new spreadsheet, including name address, etc.
Getting data from the same tab name of five files into one based on condition that there must be a specific value in column C will make the consolidated file very slow. In this case, it will be better to consolidate data from multiple workbooks into one and then apply a filter on column C.
I am trying to use your method of extraction above and I am getting an error when trying to pull the data(Step 5), the error is #REF! Suggestion on what I could be doing wrong?
If 2 above is not the reason, then please share your files.
What exactly would you need and how would I share it?
Have you checked to see if NBTEXT() function works? Also, I would need the two files - the base_data file and file in which you are using INDIRECT.EXT function. Upload them to SkyDrive and share the link of the uploaded files here.
I am using the method that you provided here to consolidate data from A.xlsx and B.xlsx. I have three colums A, B and C in each of these files and they both have one row of data. File A has Jay, 5, 7 and File B has Smith, 2, 3. I want to consolidate these two and see both the rows in one file book1.xlsx. I have used the following formula and I am getting value '0' in all the columns. WHat am I doing wrong? I tried NBTEXT and it is displaying the text correctly.
What exactly do you have in cell D3 and cell A1. Also instead of putting the last double quotes after the !, put the last double quotes just before the closing bracket i.e. after $1.
Hi, very many thanx for the morefunc set and explanation on how to install it.
Working great on excel win32 systems.
I was using indirect.ext, but I see there are many others included. Can't imagine what I could've done if indirect.ext didn't exist. I wonder why indirect doesn't work like indirect.ext or any other function that retrieves data from closed workbooks.
Anyway i am writing to ask if you could suggest some modification to the code which will make it work in win64bit PCs. morefunc.xll is not being considered valid!
You are welcome. Glad to hear that it is working well. I do not quite know why it is not working for Windows 64 Bit. Have you followed the steps mentioned for MS Excel 2007 and higher versions (Step 4)?
I've got the same issue as "kt_027", working on Win7 64bit and Office2010 32bit. I just get a #REF error although if I'm evaluating the formula it looks ok.
It drives me nuts. Could you test it on a Win7 64bit machine?
Cell A2 should spell the number. Is that happening. If it is, then the add in has been installed successfully and therefore there could be a mistake in your formula.
Truly the great add-in for Excel, but when installed and enabled in Excel 2013, it prevents the newest Quick Analysis tool from working properly - charts part just doesn't show... Maybe you know how to resolve this?..
I do not quite know how to resolve that issue. Charts and Tools of the Quick Analysis get effected.
I worked a lot with Morefunc.
i replaced machine and installed win 7 64 bit and office 2010 64 bit and morefunc didn't work.
I replaced office 2010 from 64 bit to 32 bit (still win 7 64 bit remained) and followed your installation suggestions and now it works !!
You need to give a far better description of your problem.
When I put the indirect.ext function into the vlookup function I getting a #REF! when the workbook I am linking to is closed. When I have it open it works fine.
I cannot identify any mistake in the formula. One thing you can try is to give a smaller range reference i.e. Sales'!$A$1:$BJ$5000 instead of Sales'!$A$1:$BJ$65536 and January_Sales!$A9:$J9 instead of January_Sales!$9:$9.
2. In cell B1, enter =NBTEXT(A1).
Do you see the result as Twelve?
However, when I increase the range to B3:E10000, then I get a #REF error. So there seems to be some internal limit on the range height which this function can handle.
I am having similar problems to other individuals where the INDIRECT.EXT is returning a #REF error when the source file is closed. I have tried the NBTEXT() function, and it returns the proper value.
Essentially, I am making a copy (needs to be separate) of sheet on an existing document, with cells corresponding to their partner cells.
Works fine when other doc is open; will not work when closed.
5. In cell A10, the following formula gave the correct result even with the Base_data file closed.
Even when adjusting the syntax to match yours, I still get a # REF error. When I evaluate the formula and it reaches the reference to A2, the message displays that "the cell currently evaluated contains a constant" and will not progress any further.
Is it possible that this add-in doesn't work with macro-enabled workbooks?
I see that every example you use has a ".xlsx" extension, but mine is a ".xlsm".
That is the only difference I show between your syntax and mine.
It is working fine for me even when I type C:\Users\Ashish\Desktop\[Base_data.xlsm] in cell A2.
After restarting the program, the function is working. Thank you for all your help!
Ideally, I want to have the source document be sourced in an online Sharepoint document library. I have created a test document for this purpose, but the function only works now when both documents are open.
Is there a problem with this function referencing a document housed online?
Glad to hear that it is working fine. The formula should work even when the workbook is closed - that is the very purpose of the INDIRECT.EXT function. I have never tested it on any document online.
Works fine when the files are in one folder, but how would the example formula look if you want to go through all the subfolders aswell?
You will have to give different paths in the INDIRECT.EXT function.
this is regardless what options I enable in the installation before this.
I run Office 2010 32bits on Win7 64bits.
The procedure for installing the Morefunc add-in for Excel 2010 and higher versions is mentioned in step 4 of this Blog article. Please follow those steps.
I've unzipped the file, then double-clicked the setup, and then I get the error.
That is strange. An error on double click means a corrupt setup file. Could you try this on some other machine?
I'm getting the same error as this, the setup file works fine on other machines but not this one, looks like a PC issue but cant get to the bottom of it!!!
Sorry, no idea about this problem.
would you have the equivalent for MS Excel for MAC 2011 release 14.3.4 (version 130416) ?
Just a great big "thank you" for your article, which saved me hours of copying and pasting. Brilliant!
You are welcome. Glad to help.
You mention that 'performance of the workbook will take a hit'. I need to extract data from about 50 workbooks/files with about 100 cell values from each workbook/file, and wondering if this method would be suitable for that number of files without crashing my summary workbook. Do you think that would be possible?
You will have to try it yourself. Do it in small chunks i.e. do it for only 5 workbooks first.
Just want to say this is just awesome. Thanks for publishing Morefunc.
I tried the following formula but turns out #REF error. I've tried NBTEXT() function and it works well. Appreciate your assistance. Thanks in advance!!
I cannot make much sense of what you have done. Upload your two files to SkyDrive and share the link of the uploaded files here.
I waste hours setting up linked spreadsheets and copying lists of filenames. Morefunc and indirect.ext are fantastic.
I am glad you found the Excel add-in useful. All thanks to the creator of this add-in. I use it extensively myself.
Keep getting #REF! on cells with indirect.ext function. NBTEXT works. Running Win8 with Excel 2000. This formula works on Excel 2000 on a Win XP machine.
Excel 2000 will not recognize an MS Excel file with .xlsx extension. Drop the last x in the xlsx. Does it work?
It works with .XLS extension on Win8 and Excel 2000. Do you know of a good batch XLSX to XLS convertor that is not too expensive?
No, I am not aware of any such convertor.
I have had a problem exactly like your example "Extract data from multiple cells of closed Excel files". I ran through your blog first and set up the exact same example and everything worked fine. I then set up my problem in the same fashion and the file names transferred through perfectly but I kept getting a #REF error in cells I was trying to retrieve the data into.
I think I have tracked the problem down. Your example uses Sheet1 as the worksheet you are retrieving the data from within the spreadsheet. My multiple spreadsheets have Summary as the name of the sheet and it is the entry of this in direct replacement of the Sheet1 in the formulae which is causing the error.
I would much appreciate any ideas you have as to why this is occurring.
Many thanks and a great blog.
In your files, if you change the name of the worksheets from Summary to sheet1, then does the solution work fine?
Ashish, interestingly when I change the worksheet from Summary to sheet1 it worked perfectly.
Obviously, I changed the sheet1 in the formula to Summary in the first instance.
Please post the INDIRECT.EXT formula which you have written.
If I use the worksheet name as Summary the formula doesn't work. I then open the spreadsheet, change the worksheet to Sheet1 with a similar change to the formula it does work. Then very confusingly if I swap the Sheet1 workbook reference back to Summary on the worksheet and formula it works!
I have a few hundred spreadsheets so I don't want to open them all up and do this.
It is probably unrelated but there is one spreadsheet in the folder that it refuses to recognise even though I have removed and readded it to the folder.
From your description, I cant even guess why the result is wrong. Sorry, I cannot help.
Well thanks for all of your help Ashish. I am still trying to resolve the obviously small issue and if I find a result I will post it.
I have now brought it down to a simple problem. There is an error shown with the above formula but if I go into the spreadsheet that the data is being extracted from. Change the Worksheet name to anything, then change it back to Summary and save the formula works????????
=INDIRECT.EXT("'c:\[aaa.xls]Sheet1'!C8") a single cell transfer, and I use: =INDIRECT.EXT("'c:\[aaa.xls]Sheet1'!RC",,2) for transferring data when the cell reference matches the source sheet, but if I want to use indirect.ext to copy B36:D40 from the source sheet into a cell block of A32:C36, what would the proper syntax be?
Thank you for your reply. I closed the function with a quotation and attempted your method listed above, along with a few variations, but continue to get an "invalid cell reference" message.
I am at an absolute loss. The formula I shared works for me.
Great to hear that. You are welcome.
To follow up, what changes are necessary to make the first two cell references active?
I forgot to add that I'm puzzled by the $ character in ...&"\[$"&... What's its function? Even when I delete it, the expression bombs out.
Thanks, but the proposed solution does not work--the double-quoted single quote gets highlighted as an error. I have tried so many quoting combinations, and none of them work (Excel 2003).
Could someone try the suggested solution to see if's my setup/configuration, or is this an INDIRECT.EXT bug/feature?
Does morefunc work with Excel 2013, the version that comes with Office 365?
The process described in point 4 will enable you to install Morefunc on Excel 2013. I have Morefunc installed on Office 2013 32 bit.
No it will not work on 64 Bit.
I need to send a lot of Excel files that I have linked using the indirect.ext with to my Accountant. I tried to include the MoreFunc add-on to one of the sheets but get an error. "run time error "53" file not found. how can I send him the file?
What do you mean by "tried to include the MoreFunc add-on "?
I don't know. You will have to try.
How can I modify the formula to capture the URL if the target cell has a hyperlink? I've found the VBA code to do so, but not clear if that needs to be run on the target file or the searching file?
Just noticed the question above mine from Rico. I've used the solution to target server-based files and on MS365 sharepoint using SkyDrivePro. Works great in both cases. For SkyDrivePro, the files are replicated on my hard drive, so not sure how it would work if not.
Great to hear that you could try and resolve the issue by yourself.
You are welcome. Thank you for the feedback.
Thank you for your explanation on how to use Morefunc. Unfortunately I can't seem to get it to work for me.
where D3:D9 are the 7 files I would like to draw data from.
In A1 I have put AM13, as that is the source cell in each file I want the data from.
I tried moving the apostrophes around, restarting Excel, etc. What on earth am I doing wrong?
Is it because the data I am trying to retrieve from the source cells is text? Is it because there are macros running on the source files? (I put in combo box drop-down menus).
I sent a SkyDrive link to you with one of the source files in case you wanted to take a look at it via your 'Contact' page.
There is no apparent mistake in your formula. Has the Morefunc been installed properly? To check, type 12 in cell A1 and in cell A2, type =NBTEXT(A1). In cell A2, do you now see the Number being spelled? If yes, then the installation is OK.
I will need to see both files - the one where you are writing the formula and any one of the 7 files. Upload both of them to SkyDrive and share the link of the uploaded files here.
Hi Ashish, thanks for your prompt response.
Yes I checked using the NBTEXT function and Morefunc has been installed properly.
I sent you the 1st and 2nd file via your inbox.
Share the link of the folder here (not via the Contact Me page). Also, ensure that the files you upload are not heavy in size.
There is no file there. In the files that you share, please also, ensure that you share the INDIRECT.EXT formula as well.
And here is a link of the tool from which I am trying to pull data (I have several of these, different submissions from different people, from which I need to compile information on the previous document).
Saved tons of my time. Had to struggle a bit but was able to get it right with some compromises.. like i had to keep all my other excel spreadsheets open 🙂 Thanks much.
The INDIRECT.EXT function should work even if the other workbooks are closed. If you have to open the other workbooks, then you might as well write the native INDIRECT Excel function itself.
How do I validate data from the same sheet? I tried using the DATA tab and validating, but if I re-run the generated VBA code it shows an error 1004 for the INDIRECT() statement.
Mine is code for Drop down menu wherein the output options for column B are dependent on the Input i choose in the column A("I validate "A" and use =Name).
The main question is why does the INDIRECT statement, which works well during data validation, become an error when the same code is re-run again?
As per your suggestion i used Indirect.Ext coupled with vlookup function.
There are around 20 sheets in the workbook named Sal_f16_2013_14.xls (Excel 2007). Each sheet of this workbook pertains to each of the employee.
For each month there is a separate workbook with names like STAFF_SAL_04_2013.XLS, STAFF_SAL_05_2013.XLS and so on for 12 months. In this sheet the data pertaining to all the employees is available.
A9 value is blank and as soon as i enter the value the related cells are being updated.
It started working fine. but all of a sudden after data for two three sheets are obtained suddenly all the cells start showing #NAME.
Closed the file and reopened and again after entering the values the same is repeating.
and remain constant till next time i press the command button.
I cannot figure out the problem. If it is working fine for some and not for others, it remains just as much as a mystery to me. Just check this - In cell AA6, you mention STAFFSAL_ but in the second paragraph, you mentioned STAFF_SAL. Other than that, I have nothing to suggest.
I know there are lots of people with Excel 2010 having compatibility problems but I found a solution on the MSFT site (believe it or not, first actual solution I found there).
You might need to activate show hidden files and folders under tools in Windows Explorer.
Worked for me and made the indirect.ext function come back to life. Yay.
I am having similar problems to those above and getting #REF and I used the =NBTEXT(A1) and that worked fine.
I am in marketing (so I don't have a strong excel background at all), I am simply trying to pull email addresses from booking forms.
This is where my documents are - it is only a sample document as the full version contains private information.
In the end I will need to pull this one line of information from over 1000 documents (I will break this down into smaller groups).
Unmerge the cells first. Also, please post the exact formula which you have written.
I cannot actually unmerge the cells as that would involve me going into each form one by one (and there are over 1,000) and then I might as well copy over the email addresses one at a time. Does this mean that i will not be able to use your formula.
1. What exactly is there in cell D3 and A1?
2. What type of error are you getting?
3. Ensure that the file name in cell D2 has the file extension as well i.e. .xls, .xlsx etc.
In D3 is the name of the excel sheets I was to get the information from i.e. booking form trial 2.xls.
All the file names have the extensions on them.
I have re-uploaded the documents to the same link.
I cannot point out any obvious mistake in your formula. I cannot test what you have done either because I have Office 2013 64 Bit installed on my system. This add-in is not compatible with 64 Bit.
I cannot download the file now. I get a message saying that that "This item might not exist or is no longer available". Share the file again.
Can you use the function INDIRECT.EXT with vlookup? Is there anyway I can lookup data from another CLOSED workbook using INDIRECT.EXT?
Yes you can. Refer to the solution which I have provided to Netty on June 13. 2013.
I am stuck on step 5 because I am getting a reference error. I also tested NBTEXT and it works fine.
By the way the Chinese characters were not a problem in creating the list of excel files.
The formula looks correct. What do you have in cells D3 and A1?
A1 has A2 which is supposed to represent the cell to extract from Univ1.xls, right?
No it does not work. Is it possible that the quotations are misplaced? If not please explain why you place them where they are so I can get a better understanding and try to figure it out.
The quotations are placed just fine. To get the correct syntax, just link to any cell in another workbook (= and then select any cell of another workbook, say workbook2) and close both workbooks. Now just open workbook1. You will be able to see the placement sof ! and ' in the formula bar.
Is there anything wrong here? I'm still getting a reference error.
I put a ' before the ! mark.
The cell that directly references looks like this and does work.
After adding the indirect.ext function and quotes around the above reference we get what we had before which does not work. Oddly enough, I can use the above reference while Univ1.xls is closed and it is able to reference the cell. Do I need indirect.ext or will I encounter a different problem later on.
If the worksheet name and workbook name remains static, then that formula should suffice.
When I change the Univ1.xls to "&$D3&" in the direct reference without the indirect.ext formula I get the reference error. This is where the problem is.
I am also unsure as to how I am supposed to change $A$2 to reference A1 (which reads: A2).
I don't think I can help you any further on this since I do not use 32 Bit MS Excel. I use 64 Bit and this add-in is not compatible with 64 Bit.
I love the potential of this additional function and I am trying to integrate this into a file for financial analysis of multiple assets. I have a FANTASTIC template for analyzing individual assets, but sometimes circumstances require the analysis of multiple assets as a group. My plan is to create a template where successive rows pull specified information from a number of different, but similarly formatted, asset specific files. All data will then be formatted within a use specific summary sheet using columns of similar data on a total of four sheets (these are segregated by use/purpose).
So I hope to create a formula that pulls the filename of the source file from Column A (Rows 3+...) of the 'Summary' sheet of the summary file, but then pulls the cell reference information from the cell in Row 1 at the top of each column. By combining these two sources, I only have to input the source filename and everything should auto-fill. Using a selective "$" symbol, I believe I should be able to drag and it will fill in appropriately.
Column B is intended to display the "Asset Name", which is always stated in the tab named "Purchase" and in cell $A$1. In cell "B1" of the summary sheet I have included the tab and cell reference, stated as: Purchase'!$A$1.
In combining the two references, I should retrieve the appropriate data for the summary sheet. Of course, the data in each row would have different tab and cell (or names) in successive columns to reference different data sets.
The stumbling block is the precise syntax of the formula to combine the two different text references into one.
I'd appreciate your thoughts on the process/mechanics and a suggested solution.
Is there a way to speed this up by having it only update the derivative cell contents at the time the file is opened? The source cell contents never change, but the derivative workbook represents a complex set of functions that work on the data. I just don't want my calculations constantly re-reading the source data.
1. Is there a way that the indirect.ext function can reference a full folder of files, not just a specific file (in case files change names)?
2. If this is possible, can I have it sum all of the values from the specific cells in each file?
I tried to keep this as simple as possible, please let me know if you need additional information.
You are welcome. That add-in only works with Office 32 bit. I have Office 64 bit installed on my system and there I would not be able to help you with solving your problem.
D1 returns a #REF! value here. even when i change my workbook name to "C:\Users\Cats\Desktop\[age14.xls]" to include the path name.
If I use the concatenate function to modify my formula, I get the value I want, but ONLY if my workbook name does not include the path name "C:\Users\...."
Would you have any idea why this is happening or how to rectify it?
You are welcome. I am glad you liked it. Your first mistake is that your column index is >4. If the lookup_array is from A:D, then column index has to be <=4. The following worked fine for me =VLOOKUP(C1,INDIRECT("'"&A1&B1&"'!A1:D100"),4,0) In cell A1, I tried both and it worked fine: 1. C:\Users\mathu_000\Desktop\[age14.xls]; and 2. [age14.xls] In cell B1, I typed all Hope this helps.
This has been great and very easy to follow, although I am still coming up with a blank cell. I have used INDIRECT.EXT on a cell just to return simple cell data in another worksheet to test if it works, which it does. When I use INDIRECT.EXT in my formula with the other worksheet closed it returns no data. It does return data when the other workbook is open.
This is meant to act like a lookup function but returning multiple rows of data. I will have several different users updating their own worksheet with multiple different pieces of equipment. Each piece of equipment has its own sheet in a different workbook. I use the above formula to search one of these worksheets (which is dependent on which user has that piece of equipment) for the equipment code and return all repairs done on it.
I hope this is enough info for you to help me out.
Thank you so much for your time and effort!
Thank you so much for your response, Unfortunately when copying the formula in, there is a bracket or comma in the wrong spot and I can't figure out where it is. I am unfamiliar with this formula as I found it online. I have confirmed using Ctrl+Shift+Enter.
I tried taking off the end part ,"") and it returns TRUE. It doesn't make any sense to me but at least I am now returning something.
Ok well it seems to work when the other workbook files are open and from what I can tell COUNTIF functions don't work unless the workbook is open. I am using COUNTIF statements within other sections of formulas that would require the same workbooks to be open as above. From the research I have done so far today it seems that it will not work.
Thanks so much for your help. If you have any more ideas on how to make the above formula work I would love to hear it. If you would also prefer we could swap contact info and maybe we could talk about this on the big scale that it is without clouding up all these comments. I would love to talk to you more but I understand if you back out.
The COUNTIF() function does not work with closed workbooks - you are correct. However, the equivalent SUMPRODUCT() function does. Share your COUNTIF() formula here.
For mutual learning and benefit, i'd prefer to communicate via this medium.
Ok so I have put the formula in and it still returns a blank value when the other workbooks are closed. All these formulas work when workbooks are open and I am returning what I need to accomplish which is great other than having to open over 20 files at the same time to view one. I still have a feeling that this is becoming a lost cause but as long as you are still willing to go the distance I will keep trying. I just don't want to waste your time.
The formula above is meant to search for an equipment code that is in cells E4:J17 and then return the value of the file name which is in cells K4:K17. Once the file name is returned to the master sheet of the equipment list, then the IFERROR formula which you have been working on will know which file to search for the repair data.
I hope this makes sense and I know there is a lot going on. If you need more back story I would be happy to give it to you, especially if it helps and maybe help build a new formula that makes more sense. I also am aware that a database program would be way more efficient at what I am trying to accomplish but I do not have time to learn a whole new language. I know excel enough to be able to find most answers online until now.
Again thank you so much for your help. Cheers!
I used the SUMPRODUCT and it is returning #VALUE just like the COUNTIF is. I opened the other Workbooks that they refer to and then they return values that are needed.
The other one with the IFERROR that we were originally working on is still returning a blank value with the Workbooks closed. but once open they return the right values.
Ok so this isn't good really. I found another hole in the entire system. If I have a piece of equipment that changes between different staff members, the repairs change to what the other staff member has written down. Let me paint the big picture: I will have 21 staff members that each have their own equipment. If something goes wrong with that equipment then I would like to be able to see all repairs done on any given piece of equipment. Each item has its own sheet in a workbook that houses only that item, but all of them that we have. For example a small cook stove. If it is with staff member Bob then the cell finds Bob's repair sheet and searches for all repairs done to small cook stove 01. The problem is that if Bob has repairs written down for stove 01 and then it switches and Steve gets stove 01, then the cell only searches for the repairs on Steve's repair sheet, not Bob's. With the below formula can I search all staff members' repair sheets to return all repairs written about stove 01?
So in this it searches a document, being a variable based on which staff member has the equipment. I need it to search all staff instead of just one variable. Is this possible?
Since I do not have the Morefunc add-in (this add-in is not compatible with Office 64 bit), I will not be able to help.
Ok thanks so much for trying. I really appreciate the help as I have found it very difficult to get so much as a response to any of my posts. I truly respect and appreciate the online helping/advice community. Again thank you so much. I guess it's back to the drawing board I go.
I know (thanks to you, some months ago) that Morefunc add-in (and indirect.ext) do not work on office 64 bit.
Do they work on win 7 (64 bit) and office 2013 (32 bit) ?
The Morefunc add-in will work on Office 32 bit.
The #VALUE is at the first ROW which is trying to refer to a range K4:K17. This is where it is.
Any Ideas?? If not sorry I thought I would try. I have changed my line of thinking for the other formula.
Thanks heaps. Saved lot of my time. Wonderful formula.
You are welcome. Thank you for your kind words.
I followed all these steps and it works, but when I have some files which were converted from TDMS extension to xlsx extension, it doesn't work. There is an error.
What is a TDMS file? As far as your file is an Excel file (with .xls or .xlsx extension), the solution should work.
I have sent you one of the excel files that I converted from TDMS extension to xlsx extension. I am trying to extract some data from sheets 4 and 5 in this file, but I found an error that said "Moving or deleting cells caused an invalid cell reference, or function is returning reference error."
I am sorry but I will not be able to help you because I do not use the Morefunc add-in any more.
Does this tutorial work for shared-drive locations at work? I was able to replicate the steps with my test folder on my C drive desktop but when I put in a shared drive folder it just gives me an empty cell, no #REF! or anything.
Sorry but I will not be able to help you with this add-in any more because this add-in is not compatible with my SKU of MS Office (64 bit). This add-in only works with 32 bit.
Figured I'd check since it has been awhile - any luck on this working with 64 bit? Or do you know of another? I'd really like to reference docs without opening them. Thanks!
That add-in will not work with Excel 64 bit.
Your explanation of how to use INDIRECT.EXT seemed the perfect fit for what I need. I have windows 8.1 and Excel 2013. But when I downloaded Morefunc and run the execute file, it errors out. Will Morefunc work with 2013? Is there another download for 2013? Note: I have to pull 3 cells from about 50 workbooks in one folder, all files are identical and the data is on sheet 1, cells J9, D11 and J11.
Yes it will but I will not be able to help you with this because I do not use this add-in any more. I have Office 2013 64 bit installed on my system this add-in does not work with 64 bit Office.
I have been having trouble extracting data from multiple excel files contained in one folder. All of the files have the same template and I'm trying to return several data points from about 815 files. Some of the data I'm trying to return are numerical values and some are text strings. I have tried the following equation for the column D values in the shared Google file, but I keep getting #Ref.
Where column A contains the entire file name and C6 contains the data I'm trying to return from that excel file. For some reason, however, the equation keeps referencing the C6 cell in the workbook containing the equation instead of searching for the source file within the folder.
I cannot share direct information on the project, but this file (shared with you) shows the reference sheet within each file and the cells that contain the data.
Let me know if you have any other questions. Thanks for your help!
I would not be able to help you with this anymore because I do not use this add-in now. I have Excel 64 bit installed and this add-in is not compatible with Excel 64 Bit.
Hi Ashish, Your tool is good. I have a question here. I need to search for a content inside 5- 6 closed excel files, but these files are in different sub folders.
My excel files will be present in either of these folders .
2. I need to search for text e.g. "Brakes" inside any of the cells of these excel files. The output should return me 2 columns, namely "Summary", "Address", from the input excel files. Position of these columns is not fixed in either of the files. E.g.
Can your tool help me achieve the above OR can I tweak the code to get this done?
I will not be able to help you with the add-in referred to in this blog post because I no longer use the Morefunc add-in. This add-in only works with Excel 32 bit whereas I have Excel 64 bit. This can only be achieved by using VBA code.
A1 is the data incell A1 as you know.
This way I don't need to create table of rows & columns A,B1,C1 etc by using "&A1&" etc.
Try this formula in cell B2 and copy down. ROW()-1 will return 1. When this is copied down, then ROW()-2 will return 2 and so on.
Your method is working for me to extract data from excel file, however I need to extract it from .CSV file. When tried, it works when file is open but does not extract data from closed .CSV file. Can it be done, help will be appreciated.
I cannot provide any help on this question because I do not use the Morefunc add-in any longer. This add-in does not work with 64 Bit Excel.
We have recently moved from a server environment to OneDrive for Business and we are attempting to determine if the indirect function will function in this environment.
Can you advise if you are aware if the indirect function will function and if not, are you aware of any methods to replace the indirect functionality?
With OneDrive for business synchronized on your local machine, the INDIRECT.EXT function should work.
Hi I have followed your instructions clearly and have tested that the addin works via the NBTEXT method that you prescribed to the other users.
I tried all your instructions on my own spreadsheets and it would not work. Then I tried it with fresh files, so that the data being collected was as simple as in your explanation, and I still get the #REF problem.
If I send my files to you can you take a look and show me what I did wrong?
If so, how do I send the files?
I would not be able to help you with solving this problem because of my Excel version. That add-in only works with Office 32 bit whereas I have Office 64 bit installed.
Is there any way to get around the problem of file names with an apostrophe (e.g. 'leo's excel file') not working with indirect.ext?
I would not be able to provide you any support on this because the Morefunc add-in does not work with Excel 64 Bit. Since I have Excel 2016 64 Bit installed on my system, I will not be able to help you. Sorry about this.
I'm not sure if this point was raised; nevertheless, if I try to extract data from Excel files which contain the same structure but different sheet names, I receive an error. Is there any solution other than changing the sheet names?
I cannot provide support for this post any longer because the add-in used to solve this problem requires installation of Excel 32 bit. I have Excel 64 bit installed on my system.
What if the cells I need to extract are not on sheet one and/or are on a named sheet? Instead of just writing a cell reference, can I write a sheet and cell reference?
Thank you for writing in. I cannot provide support on this thread anymore because this add-in (Morefunc) requires one to install Excel 32 bit whereas I have Excel 64 bit installed. Sorry.
I need to create a series of formulas or a macro that will find the Efficiency for each line and each group of lines. The data is exported to include each stock keeping unit that has run, changing the number of rows, and the report sometimes has additional information that creates additional columns. So the data needed is always in a different place.
Basically, I want to see if I can create a series of formulas or a macro that can be created to get each month of data without having to manually re-enter the data in excel 2010.
I do not understand your question. Please share some data, explain your requirement and show the expected result.
Thanks for your instruction. Your example works. I will try it in my work.
I test the equipment and collect hundreds of files in the same format. I want to summarize these data and show them in a graphic. I will try it in your way.
I have a folder with 130 excel files . I want to extract data based on their names from all the files into a separate file . i.e i want all sharma in 1 file , pandet in 1 file , etc . Please help me out .
Thank you for writing in. I will not be able to help you with a solution to your query because the Morefunc add-in only works on 32 Bit. I have Excel 64 Bit installed on my system. Sorry.
I have a weird issue with the excel sheet.
I have a streaming excel sheet (let us say A) which downloads data from one of the broker's websites. I have Sheet B, which is offline and a different worksheet. Now I try to extract data from Sheet A (the streaming one) to Sheet B (which is offline) using vlookup, and for some reason this damn thing doesn't work. Yes I did see your solution where you had asked me to open one sheet and then the other one using Control+O. But sadly, I can't save Sheet A as once I save it the streaming (live) stops. I have tried using a separate sheet within Sheet A, but for some reason the vlookup just stops working on the streaming sheets. Please help me here.
I'll need to see your files. Share the link from where i can download them. Ensure that your VLOOUP function is written there.
Thanks much for your immediate response Ashish, much appreciated. Are you available on Team viewer, as sending files wont help as then again it becomes a saved file and vlookup works on these files. Let me know if I can post snapshots, and if that would help.
Here is the shared link, not sure how to give you access, i guess i will wait till you click on "Request Access".
3 images in the pic.
2. Second pic - Table Array - Trying to choose the second sheet (Streaming one which has real time data streamed from website) - Choosing the column E&F where I am trying to get the value of "Col F". As you can see, the column just gets highlighted, while actually we should have moving dashes to confirm the area chosen in vlookup.
3. Pic of the table array after choosing the Col E&F - As you can see the Vlookup Table Array is empty.
(a) Tried put the offline sheet as part of the streaming sheet - which was quite tricky for me to do, but still did it (as the offline contains data for the pivot etc). Still no go.
(b) I created the pivot sheet within the stream workbook (in a separate sheet). Even pivots failed to work in the streaming sheet which I find it quite frustrating and strange. While the first time when I put the data and create the pivot, it worked. Later when I updated the data, and refreshed the pivot, the pivot doesnt update with the new data.
Let me know if you need any further clarification to understand the issue faced by me.
The problem of not being able to select the range on the second worksheet when providing a range within the second argument of the VLOOKUP() function generally arises when two instances of Excel are open in the same session. The only way i know to work around this problem is that i open the workbook (where i want to write the VLOOKUP() function) by pressing Ctrl+O when the streaming workbook is already open. You've mentioned that you have already tried this - so i am out of ideas here.
Have you tried to import data from the website into the Query Editor (available under Data > Get & Transform > New Query). We will then attempt to write the VLOOKUP() in the query editor.
It may be worth a try.
Just to further clarify, the pivot data is different from the streaming sheet.
Thanks so much Ashish. I will give it a shot on Monday, as market is closed and no streaming and things will work as designed now when there is no data-streaming.
Talking about the Control+O option, I had tried to open the streaming sheet through the pivot sheet, which failed because once the streaming sheet is saved, the live data stops streaming so this is of no use. Now that you mentioned, let me try to open the pivot sheet through the streaming sheet... wonder why i didnt think of this before :).
Just checked your second option and dont see Get and Transform. Can you kindly clarify please.
Many thanks for your continued assistance, much appreciated.
Today I opened the Streaming sheet first (live data update), and then Ctrl+O the pivot sheet, and voila.... things are working fine. Much thanks but I wonder how did you find such a trick, this doesnt sound logical at all :). But thank god you did.
So I basically extract live data by right clicking on the site (which requires login) which opens up the excel sheet and the live streaming is on.
Here is the challenge, whenever there is a new update on the site (page), this doesnt update the streaming sheet. While the values (existing scrips) gets updated, any new scrips added to the page (in the site), doesnt get updated in the streaming sheet. Is there anything that you can think of to make this happen ?
You are welcome. The reason my suggested technique works is that the second Excel file is opening up in the same instance of Excel. This allows formulas to work seamlessly. In your solution, another instance was firing up in the memory of the computer causing communication between the files to break down.
I do not understand your second question. Which site are you downloading from? What are you downloading? What current method are you following?
Can I send you an email in private. Please let me know.
For mutual learning, I'll appreciate if we can have all communication here. If there is a file you need to share, then upload it to some service such as Google Drive/OneDrive and share the download link here.
I have two purchase order templates in a folder. They are both read only. One is for items bought by price per quantity. The other is for materials bought by weight. They are 'saved as' the next available number in order, i.e. 25001, 25002, 25003, regardless of which template is used. A separate workbook in another folder is used to document payment approval, date of approval, and invoice #s. Is there a way I can get the monthly workbook to automatically fill in the p.o. number, type, and vendor as they are saved in the folder. An example would be if i opened read only template blank p.o.'s each saved as 25004-S (Drive:\Purchase Orders\25004.xlsx) and it was for items bought from Vendor A then on "current month 2018" (Drive:\Management\Management\Purchase Orders) they would appear automatically as follows.
column C D E have to be filled in manually as does G because invoices come through mail, email, and etc. but simply having these three things automatically go on the monthly would cut out a lot of data entry. Any advice or help would be greatly appreciated. Thanks.
The add-in required (Morefunc) to solve this problem only works with 32 Bit MS Excel. I have 64 bit MS Excel installed on my system so i would not be able to offer any help. Sorry.
Hi there, is there a way to install this add-in without running the setup.exe? I am asking because I want to install it on my work computer which does not allow random installation of .exe files. Could you provide the add-in files to copy to the add-in folder directly?
I do not know. I have been using Excel 64 bit for over 5 years now and this add-in is not compatible with 64 Bit. Sorry cannot help.
Hello. I really like and rely on the morefunc functionality and have been using it for years. I've run into a problem... perhaps it's a system resource issue. I create successive Excel files for a number of months, each seeking information from the previous month's Excel file. A cell in the workbook that has worked for all files from February until July of this year, suddenly returns a "#REF" error. The formula in that cell contains the "INDIRECT.EXT" function. When I search the error drop-down window for that cell it states that a cell that the INDIRECT.EXT function is relying on "contains a constant". The cell being relied upon by the INDIRECT.EXT function is in the same worksheet. I tried opening all the workbooks from February to July as well at the same time, to troubleshoot, then I received a notification stating, "Microsoft Excel is waiting for another application to complete an OLE action." Is there a potential limit to the number of files that rely on each other that may be causing this error? Any input would be appreciated!
I use 64 Bit MS Excel and the Morefunc add-in is not compatible with 64 Bit. Therefore I will not be able to help. Sorry.
I tried the formula below, which is returning a #REF! error.
couldn't understand the problem. Is there any error with the quotation marks?
I do not use the Morefunc add-in anymore. Sorry I cannot help.
I've been part of the training program that Ashish had conducted at our organisation. I found the sessions to be extremely useful for my day-to-day work and Ashish was exemplary in the way he conducted these sessions. Very practical, systematic and enforcing, the examples provided were of utmost relevance. I have no hesitation in recommending Ashish's training programs.
Ashish Mathur is an expert in Excel with amazing knowledge on the subject. His conceptual knowledge and the way he provides the excel training show how in depth he has researched the subject. Always interested in finding the best approach to a problem, he shows a "nothing is impossible" attitude. Way to go and thanks a lot for the useful training.
I knew Excel, but after attending the session from Mr. Ashish I was able to identify new avenues of using the same functions which I was not aware of, and tremendous applications of the same. It is awesome to see how he conducts the session too. God Bless Him!
Recall from The Lebesgue Number Lemma page that if $(X, d)$ is a metric space that is also a BW space then for every open cover $\mathcal F$ of $X$ there exists an $\epsilon > 0$ called a Lebesgue number such that for all $x \in X$ there exists a $U \in \mathcal F$ such that $B(x, \epsilon) \subseteq U$.
We used this very important lemma to prove a very nice result on the Metric Spaces Are Compact Spaces If and Only If They're BW Spaces page. We proved that if $(X, d)$ is a metric space then $X$ is compact if and only if $X$ is a BW space. The first direction of this statement is somewhat trivial as we have already seen that a compact space $X$ is a BW space from the Compact Spaces as BW Spaces page, however, the converse of this result is very useful nevertheless.
We will now look at a nice consequence of these results. We will see that a metric space $X$ is compact if and only if $X$ is countably compact. Therefore, in a metric space $X$, the concepts of compactness and countable compactness are in essence the same.
Theorem 1: Let $X$ be a metric space. Then $X$ is compact if and only if $X$ is countably compact.
Proof: $\Rightarrow$ Let $X$ be a compact metric space. Then trivially, $X$ is also countably compact (since every open cover $\mathcal F$ of $X$ has a finite subcover $\mathcal F^*$ and a finite number is countable).
$\Leftarrow$ Let $X$ be a countably compact metric space. We know that every metric space is Hausdorff, so $X$ is both Hausdorff and countably compact. So by the theorem referenced on the Hausdorff Spaces Are BW Spaces If and Only If They're Countably Compact page, we have that $X$ is a BW space. Since $X$ is then a metric space that is also a BW space, the result cited above shows that $X$ is compact. $\blacksquare$
For a vanilla option, I know that the probability of the option expiring in the money is simply the delta of the option... but how would I calculate the probability, without doing monte carlo, of the underlying touching the strike at some time at or before maturity?
Case 2) The drift is $vt$, where $v$ is a constant.
Allow me to disagree with Jaydles' proposal; his methodology is valid only if the events of touching the barrier on each day were independent.
If you are working within the standard Black-Scholes framework, you're looking for the probability of a drifted Brownian motion hitting a fixed level before a fixed time ; this probability is derived in most stochastic calculus texts, see for example Karatzas-Shreve or Chesney-Jeanblanc-Yor.
Another way of seeing it : you're trying to price a knock-in digital option with 0 interest rate, or knock-in zero bond. You can find formulae for these in Peter Carr's work on barrier options.
You could run a vanilla model that calcs delta for each expiration date between now and expiration, and grab the delta for each. That would give you the likelihood that it's in the money at the close on any day.
From that, you can pretty easily calculate the odds that it's not in the money each day (just subtract the delta from one), multiply them all together, and subtract the product from one to determine the likelihood that it closes above the strike between now and expiration.
This does require running the formula to calc delta many times, and it ignores the risk of an intra-day touch, but it doesn't require writing something to calc the exotic you're describing.
The required probability is equivalent to asking: what is the probability that the geometric brownian motion of the underlying touches the strike for the first time before the given time $T$?
A strategy for solving this related to Brownian motion - first passage time.
After transforming from geometric brownian motion to brownian motion via the log, the full distribution of the first passage time of a random walker (with a given drift and vol) to a boundary (which should be $log(K/S_0)$ where $K$ is the strike and $S_0$ is the initial stock price), is given by the Inverse Gaussian Distribution.
The explicit form for the distribution is given here http://goo.gl/lzGZ9Y. Using the notation in that link, the parameters of the IG distribution would be $\mu=log(K/S_0)/d$ and $\lambda=(log(K/S_0))^2/\sigma^2$ where $d, \sigma$ are the drift and volatility of the geometric brownian motion.
So, you will want to compute the probability that this random variable, the first passage time of ordinary brownian motion with drift $d$ and volatility $\sigma$ to a fixed boundary at $log(K/S_0)$, is $<= T$, by straightforward integration.
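For concreteness, here is a minimal Python sketch of that last step. Rather than integrating the inverse Gaussian density numerically, it uses the equivalent closed-form first-passage expression for a drifted Brownian motion hitting an upper level, so it assumes $K > S_0$; the function name and the numerical inputs are illustrative, not taken from the question.

```python
from math import log, sqrt, exp
from scipy.stats import norm

def prob_touch_before_T(S0, K, T, mu, sigma):
    """Probability that a geometric Brownian motion with drift mu and
    volatility sigma touches the level K at or before time T (K > S0).
    This equals the first-passage probability of the log-price, i.e. the
    integral of the inverse Gaussian density up to T."""
    b = log(K / S0)              # barrier for the log-price
    nu = mu - 0.5 * sigma ** 2   # drift of the log-price
    s = sigma * sqrt(T)
    return (norm.cdf((nu * T - b) / s)
            + exp(2 * nu * b / sigma ** 2) * norm.cdf((-b - nu * T) / s))

# Illustrative numbers only
print(prob_touch_before_T(S0=100, K=110, T=1.0, mu=0.05, sigma=0.2))
```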
I believe you can modify my https://github.com/barrycarter/bcapps/blob/master/box-option-value.m to do this.
You're effectively looking for the distribution of the maximum (or minimum) of the price for a given period of time.
I understand that a useful approximation for prob of touch is 2 * delta, though I don't recall which authority that is from.
However, as in much of statistics, asymptotic answers = "In the long-run" are not usually helpful to traders, or even investors. Example: say initial price $x_0=\$20$, and you wish the probability of reaching say, $x= \$25$ before $x = \$15$ with an annual $\sigma$ of say $0.2$. Convert $X$ to fractions, $X=(\$25-\$15)/\$15$, and calculate $\tau = 1.126$ years. That is, without some drift and for reasonable values of the volatility, you are going to have to wait a long time.
"Exact Asymptotics of the Optimal Lp-error of Asymmetric Linear Spline " by Vladyslav Babenko, Yuliya Babenko et al.
In this paper we study the best asymmetric (sometimes also called penalized or sign-sensitive) approximation in the metrics of the space $L_p$, $1\leqslant p\leqslant\infty$, of functions $f\in C^2\left([0,1]^2\right)$ with nonnegative Hessian by piecewise linear splines $s\in S(\triangle_N)$, generated by given triangulations $\triangle_N$ with $N$ elements. We find the exact asymptotic behavior of the optimal (over triangulations $\triangle_N$ and splines $s\in S(\triangle_N)$) error of such approximation as $N\to \infty$.
Babenko, Vladyslav; Babenko, Yuliya; Parfinovych, Nataliya; and Skorokhodov, Dmytro, "Exact Asymptotics of the Optimal Lp-error of Asymmetric Linear Spline Approximation" (2014). Faculty Publications. 4029. | CommonCrawl |
Over the past six months, I've been working with people at NCSA (National Center for Supercomputing Applications) and Thomas Lucas Productions to produce a 2-minute segment for their upcoming planetarium show, Solar Superstorms. They visualized one of my simulations of the first galaxies and another of a supernova from a first star with yt. You can view the segment (third one in the playlist) on their website. It was narrated by Benedict Cumberbatch! I never would've thought that I'd hear his voice speaking about my work.
Earlier this year, I was privileged enough to be chosen to give a TEDx talk at Douglasville, just to the west of Atlanta. There I spoke about how we and others model galaxies on supercomputers, understanding the formation and evolution of galaxies that we observe with some of the best telescopes on Earth and in space.
(abs, pdf) Pallottini et al., The Brightest Ly$\alpha$ Emitter: Pop III or Black Hole?
(abs, pdf) Clark & Glover, Does the CO-to-H2 conversion factor depend on the star formation rate?
(abs, pdf) Matsumoto et al., Direct Collapse Black Holes Can Launch Gamma-Ray Bursts and Get Fat to Supermassive Black Holes? | CommonCrawl |
Is the graphical method the best way to solve convolution questions, whether they be discrete or continuous?
for all other values of $n$ it is $0$.
Please tell me how I can solve this question.
Graphical evaluation of convolution (flip and drag) is a very useful, helpful and indispensable method which aids in a very quick visual anticipation of the output, in terms of the input sequences. Indeed even if you don't use specifically the graphical method, you would still benefit from drawing a plot of the input sequences and a rough sketch of the expected output in any case. Yet no method is the best for all types of problems.
First, observe (graphically) that the output will extend from $n=-\infty$ to $n=\infty$. This is because a right-sided sequence is convolved with a left-sided sequence, both of semi-infinite length.
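The exact sequences from the question are not reproduced above, so the snippet below is only an illustrative finite example of the same flip-and-drag idea: np.convolve flips one sequence, slides it past the other, and sums the overlapping products at each shift.

```python
import numpy as np

x = np.array([1, 2, 3, 0, 0])   # assumed right-sided sample values
h = np.array([1, 1, 1])         # assumed short second sequence
y = np.convolve(x, h)           # output length is len(x) + len(h) - 1
print(y)                        # [1 3 6 5 3 0 0]
```

For truly semi-infinite sequences the output would, as noted above, extend over all $n$; a finite computation can only show a truncated window of it.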
Abstract: We present a unified perspective on symmetry protected topological (SPT) phases in one dimension and address the open question of what characterizes their phase transitions. In the first part of this work we use symmetry as a guide to map various well-known fermionic and spin SPTs to a Kitaev chain with coupling of range $\alpha \in \mathbb Z$. This unified picture uncovers new properties of old models --such as how the cluster state is the fixed point limit of the Affleck-Kennedy-Lieb-Tasaki state in disguise-- and elucidates the connection between fermionic and bosonic phases --with the Hubbard chain interpolating between four Kitaev chains and a spin chain in the Haldane phase. In the second part, we study the topological phase transitions between these models in the presence of interactions. This leads us to conjecture that the critical point between any SPT with $d$-dimensional edge modes and the trivial phase has a central charge $c \geq \log_2 d$. We analytically verify this for many known transitions. This agrees with the intuitive notion that the phase transition is described by a delocalized edge mode, and that the central charge of a conformal field theory is a measure of the gapless degrees of freedom. | CommonCrawl |
In the extant books of Diophantus, systems of equations are considered. Of interest are the non-linear systems of Diophantine equations. Some simple systems from his book can be solved.
But in the books there is a group of problems of the same type. The conditions are very similar. It is conceivable that they can be solved in one way.
Now I am interested in the question itself.
In the 2nd book there are many systems of the same type, which can be described as follows. I wrote it to look for solutions in integers.
The systems are very similar, and it is right to look for a common and simpler approach to their solution, because later there are systems with a larger number of equations.
The system of book 2, problems 22, 23.
I found this solution, but it gives a very different kind of solution, different from the previous one.
For the system of book 2, problem 30.
The system of book 2, problems 24, 25.
The solutions can be written as follows.
Book 2 of Diophantus, problems $34, 35$. For the system of Diophantine equations.
One can write down such a solution.
$n,l,k,t,p,s$ are any integers.
Gaussian Fluctuations of Eigenvalues in Wigner Random Matrices - Mathematics > Probability
Abstract: We study the fluctuations of eigenvalues from a class of Wigner random matrices that generalize the Gaussian orthogonal ensemble. We begin by considering an $n \times n$ matrix from the Gaussian orthogonal ensemble (GOE) or Gaussian symplectic ensemble (GSE) and let $x_k$ denote eigenvalue number $k$. Under the condition that both $k$ and $n-k$ tend to infinity with $n$, we show that $x_k$ is normally distributed in the limit. We also consider the joint limit distribution of $m$ eigenvalues from the GOE or GSE with similar conditions on the indices. The result is an $m$-dimensional normal distribution. Using a recent universality result by Tao and Vu, we extend our results to a class of Wigner real symmetric matrices with non-Gaussian entries that have an exponentially decaying distribution and whose first four moments match the Gaussian moments.
We show that deformations of a coisotropic submanifold inside a fibrewise entire Poisson manifold are controlled by the $L_\infty$-algebra introduced by Oh-Park (for symplectic manifolds) and Cattaneo-Felder. In the symplectic case, we recover results previously obtained by Oh-Park. Moreover we consider the extended deformation problem and prove its obstructedness. | CommonCrawl |
You are given a cyclic array that consists of $n$ values. Each element has two neighbors; the elements at positions $n$ and $1$ are also neighbors.
Your task is to divide the array into subarrays so that the sum of each subarray is at most $k$. What is the minimum number of subarrays?
The first input line contains integers $n$ and $k$.
The next line has $n$ integers $x_1,x_2,\ldots,x_n$: the contents of the array.
There is always at least one division (i.e., no value in the array is larger than $k$).
Print one integer: the minimum number of subarrays.
Explanation: We can create three subarrays: $[2,2,1]$, $[3,1]$, and $[2,1,2]$ (remember that the array is cyclic). | CommonCrawl |
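A brute-force sketch of one way to solve it: some subarray boundary must exist in any valid division, so trying every rotation as the starting cut and partitioning greedily gives the optimum in $O(n^2)$. This is illustrative only (the intended solution is presumably faster), and the array values and $k$ used in the test below are inferred from the example above, not taken from an actual input file.

```python
def min_subarrays_cyclic(x, k):
    n = len(x)
    best = n                       # one element per subarray always works
    for start in range(n):         # fix a cut just before position `start`
        count, current = 1, 0
        for i in range(n):
            v = x[(start + i) % n]
            if current + v <= k:   # keep extending the current subarray
                current += v
            else:                  # start a new subarray
                count += 1
                current = v
        best = min(best, count)
    return best

print(min_subarrays_cyclic([2, 2, 1, 3, 1, 2, 1, 2], 5))  # 3
```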
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:151-168, 2017.
In the differentially private top-$k$ selection problem, we are given a dataset $X \in \{\pm 1\}^{n \times d}$, in which each row belongs to an individual and each column corresponds to some binary attribute, and our goal is to find a set of $k \ll d$ columns whose means are approximately as large as possible. Differential privacy requires that our choice of these $k$ columns does not depend too much on any one individual's dataset. This problem can be solved using the well known exponential mechanism and composition properties of differential privacy. In the high-accuracy regime, where we require the error of the selection procedure to be smaller than the so-called sampling error $\alpha \approx \sqrt{\ln(d)/n}$, this procedure succeeds given a dataset of size $n \gtrsim k \ln(d)$. We prove a matching lower bound, showing that a dataset of size $n \gtrsim k \ln(d)$ is necessary for private top-$k$ selection in this high-accuracy regime. Our lower bound shows that selecting the $k$ largest columns requires more data than simply estimating the value of those $k$ columns, which can be done using a dataset of size just $n \gtrsim k$.
Reject the null hypothesis if the observed $z$ falls in one of the two most extreme $\alpha$ / 2 areas of the standard normal distribution. In order to find the critical values $z^*$ and $-z^*$ that correspond to these tail areas, look for the $z$ value that has area $\alpha$ / 2 to the right of it (this is $z^*$) and the $z$ value that has area $\alpha$ / 2 to the left of it (this is $-z^*$). Observed $z$ values that are equal to or larger than $z^*$, and observed $z$ values that are equal to or smaller than $-z^*$, lead to rejection of the null hypothesis.
Reject the null hypothesis if the observed $z$ falls in the highest $\alpha$ area of the standard normal distribution. In order to find the critical value $z^*$ that corresponds to this upper tail area, look for the $z$ value that has area $\alpha$ to the right of it. Observed $z$ values that are equal to or larger than $z^*$, lead to rejection of the null hypothesis.
Reject the null hypothesis if the observed $z$ falls in the lowest $\alpha$ area of the standard normal distribution. In order to find the critical value $z^*$ that corresponds to this lower tail area, look for the $z$ value that has area $\alpha$ to the left of it. Observed $z$ values that are equal to or smaller than $z^*$, lead to rejection of the null hypothesis. | CommonCrawl |
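As a small illustration, the critical values described above can be looked up with the inverse CDF (quantile function) of the standard normal distribution; the significance level used here is just an example.

```python
from scipy.stats import norm

alpha = 0.05  # example significance level

# Two-sided test: alpha/2 in each tail
z_star = norm.ppf(1 - alpha / 2)     # about 1.96; reject if z >= z* or z <= -z*
# Right-sided test: alpha in the upper tail
z_star_right = norm.ppf(1 - alpha)   # about 1.645; reject if z >= z*
# Left-sided test: alpha in the lower tail
z_star_left = norm.ppf(alpha)        # about -1.645; reject if z <= z*
print(z_star, z_star_right, z_star_left)
```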
Is there in ZFC a topological space which is normal, ccc, countably compact, first countable and non-compact?
I am looking for a space as in the title and since many very similar spaces do exist in the literature, I wonder whether someone has a reference (different from the ones I cite below) or just some remarks about this question.
A space is ccc iff any family of pairwise disjoint open sets is at most countable, and countably compact iff any countable open cover has a finite subcover. A space is separable iff it has a countable dense subset. It is immediate that a separable space is ccc.
If one looks for a perfectly normal such space, the answer is independent of ZFC: MA + $\neg$CH implies that a perfectly normal countably compact space is compact (Weiss 1978), and this result is also compatible with CH (Eisworth 2002). Moreover, if $\diamondsuit$ holds, there is a perfectly normal, (hereditarily) separable, first countable, countably compact, non-compact space (Ostaszewski 1976).
If one relaxes first countability, such spaces do exist in ZFC, see for instance Henno Brandsma's answer to this related question: https://mathoverflow.net/questions/79021 (the given examples are Frechet-Uryson and collectionwise normal). One can also obtain separable examples (Franklin-Rajadopalan 1970).
If one wants a space which is only Tychonov but not normal, there is an example in ZFC (Bell 1989).
The related question of whether there is a countably compact, first countable, separable, non-compact regular space in ZFC is still open, as far as I know. Nyikos wrote several papers on the subject.
There is a simple approximate rule of thumb used by investors and accountants to estimate the time taken in years, $n$, for an investment to double with an interest rate of $R\%$, or indeed for a debt to double if left unpaid. One simply divides $72$ by $R$ to estimate the time in years.
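A quick sketch comparing the rule against the exact doubling time $n = \ln 2 / \ln(1 + R/100)$; the interest rates chosen below are only examples.

```python
import math

for R in (2, 4, 6, 8, 12):
    exact = math.log(2) / math.log(1 + R / 100)
    approx = 72 / R
    print(f"R = {R:2d}%   exact = {exact:5.2f} years   rule of 72 = {approx:5.2f} years")
```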
Fairly early in your study of algebra, you meet one of the most useful of algebraic techniques, the difference of two squares which enables you to write, for example $x^2-1=(x-1)(x+1)$.
Q1064. The numbers $1, 2,\ldots , 16$ are placed in the cells of a $4\times 4$ table as shown in the left hand diagram below. One may add $1$ to all numbers of any row or subtract $1$ from all numbers of any column.
Q1057 Remember that a regular polygon has all sides equal and all angles equal. | CommonCrawl |
We have introduced the basic ideas about neuronal networks in the previous chapter of our tutorial.
We pointed out the similarity between neurons and neural networks in biology. We also introduced very small artificial neural networks and introduced decision boundaries and the XOR problem.
The focus in our previous chapter had not been on efficiency.
We will introduce a Neural Network class in Python in this chapter, which will use the powerful and efficient data structures of Numpy. This way, we get a more efficient network than in our previous chapter. When we say "more efficient", we do not mean that the artificial neural networks encountered in this chapter of our tutorial are efficient and ready for real life usage. They are still quite slow compared to implementations from sklearn for example. The focus is to implement a very basic neural network and by doing this explain the basic ideas. We want to demonstrate simple and easy to grasp networks.
Ideas like how the signal flow inside of a network works, how to implement weights, how to initialize weight matrices, or what activation functions can be used.
We will start with a simple neural network consisting of three layers, i.e. the input layer, a hidden layer and an output layer.
We have to note that some would call this a two layer network, because they don't count the inputs as a layer.
Each of the arrows in our network diagram has an associated weight value. We will only look at the arrows between the input and the output layer now.
Now that we have defined our weight matrices, we have to take the next step. We have to multiply the matrix wih with the input vector. By the way, this is exactly what we have manually done in our previous example.
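As a minimal illustration (the matrix values and shapes below are made up, not the ones from the tutorial), the weighted sums arriving at the hidden layer are just a single np.dot call:

```python
import numpy as np

# hypothetical weight matrix "wih" (4 hidden nodes x 3 input nodes)
wih = np.array([[ 0.3,  0.2, -0.4],
                [ 0.1, -0.5,  0.2],
                [-0.3,  0.4,  0.1],
                [ 0.2,  0.1,  0.5]])

input_vector = np.array([0.6, 0.1, 0.9], ndmin=2).T   # column vector (3 x 1)

hidden_input = np.dot(wih, input_vector)              # weighted sums for the 4 hidden nodes
print(hidden_input.shape)                             # (4, 1)
```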
You might have noticed that something is missing in our previous calculations. We showed in our introductory chapter Neural Networks from Scratch in Python that we have to apply an activation or step function $\Phi$ on each of these sums.
The following picture depicts the whole flow of calculation, i.e. the matrix multiplication and the succeeding application of the activation function.
The matrix multiplication between the matrix wih and the matrix of the values of the input nodes $x_1, x_2, x_3$ calculates the output which will be passed to the activation function.
One of the important choices which have to be made before training a neural network consists in initializing the weight matrices. We don't know anything about the possible weights when we start. So, could we just start with arbitrary values?
We can easily see that it would not be a good idea to set all the weight values to 0, because in this case the result of this summation will always be zero. This means that our network will be incapable of learning. This is the worst choice, but initializing a weight matrix to ones is also a bad choice.
The values for the weight matrices should be chosen randomly and not arbitrarily. By choosing a random normal distribution we have broken possible symmetric situations, which are bad for the learning process.
There are various ways to initialize the weight matrices randomly. The first one we will introduce is the uniform function from numpy.random. It creates samples which are uniformly distributed over the half-open interval [low, high), which means that low is included and high is excluded. Each value within the given interval is equally likely to be drawn by 'uniform'.
We would like to create random numbers with a normal distribution, but the numbers have to be bounded. This is not the case with np.random.normal(), because it doesn't offer any bound parameter.
We can use truncnorm from scipy.stats for this purpose.
A common choice is to draw the weights from the interval $(-\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}})$, where n denotes the number of input nodes of the layer.
rad = 1 / np.sqrt(no_of_hidden_nodes)  # the hidden nodes are the inputs of this layer
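A possible helper built on scipy.stats.truncnorm looks like this (a sketch along the lines described above; the node counts are just example values):

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_normal(mean=0, sd=1, low=0, upp=10):
    # truncnorm expects the bounds expressed in units of the standard deviation
    return truncnorm((low - mean) / sd, (upp - mean) / sd, loc=mean, scale=sd)

no_of_input_nodes, no_of_hidden_nodes = 3, 4            # example sizes
rad = 1 / np.sqrt(no_of_input_nodes)
X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)
wih = X.rvs((no_of_hidden_nodes, no_of_input_nodes))    # weight matrix input -> hidden

rad = 1 / np.sqrt(no_of_hidden_nodes)                   # the hidden nodes are the inputs of the next layer
X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)
who = X.rvs((2, no_of_hidden_nodes))                    # weight matrix hidden -> output (2 output nodes)
```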
Running our neural network on some input means that we will have a matrix multiplications of the weight vectors and the inputs. We have to apply an activation function on the output values. There are lots of different activation functions used in neural networks. The sigmoid function belongs to the most often used activation functions.
Instead of defining the sigmoid function ourselves, we can use the expit function from scipy.special, which is an implementation of the sigmoid function. It can be applied on various data classes like int, float, list, numpy.ndarray and so on. The result is an ndarray of the same shape as the input data x.
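For example, expit can be applied directly to scalars, lists and arrays:

```python
import numpy as np
from scipy.special import expit

print(expit(0))                               # 0.5
print(expit(np.array([-2.0, 0.0, 2.0])))      # elementwise sigmoid, same shape as the input
print(expit([1, 2, 3]))                       # also works on a plain Python list
```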
running the network with an input vector input_vector.
A bias node is a node that always returns the same output. In other words: it is a node which does not depend on any input and it does not have any input. The value of a bias node is often set to one, but it can be other values as well, except 0, which doesn't make sense. If a neural network does not have a bias node in a given layer, it will not be able to produce output in the next layer that differs from 0 when the feature values are 0. Generally speaking, we can say that bias nodes are used to increase the flexibility of the network to fit the data. Usually, there will be no more than one bias node per layer. The only exception is the output layer, because it makes no sense to add a bias node to this layer.
"""run the network with optional bias nodes"""
x = np.dot(tmp, input_vector.T)[:-1, :]  # cut off the last row, which corresponds to the bias node
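Here is a minimal, self-contained sketch of how such a run method can work when the bias is appended to the input of each layer; the function and variable names are illustrative, and the weight matrices are assumed to contain one extra column that multiplies the bias value:

```python
import numpy as np
from scipy.special import expit as activation_function

def run_with_bias(input_vector, weights_in_hidden, weights_hidden_out, bias=1):
    """Propagate input_vector through a 3-layer network; each weight matrix has
    one extra column for the constant bias node of the preceding layer."""
    # turn the input into a column vector and append the bias value
    vector = np.concatenate((np.array(input_vector, ndmin=2).T, [[bias]]))
    hidden = activation_function(np.dot(weights_in_hidden, vector))
    # append the bias again before feeding the hidden layer into the output layer
    hidden = np.concatenate((hidden, [[bias]]))
    return activation_function(np.dot(weights_hidden_out, hidden))
```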
My question is about finite model theory/descriptive complexity, so $FO(R)$ will mean "first order over finite binary words, using the predicates R and a unary predicate P true on the positions carrying a 1 in the word".
I would like to know: is there any characterisation of $FO(<,R)$ with R any predicate on $\mathbb N^r$ for some r? For example of $FO(<,+)$, or $FO(<,P_2)$ where $P_2$ is the set of powers of 2. Especially, it seems to me that it should be equal to $AC^0$ with some uniformity condition, but I cannot find any result that states this.
Here is what I already know, for some value of $R$.
If we can use arbitrary numerical predicates in our first order logic, then we have $AC^0$ (non-uniform); if $C$ is a class of functions containing the log-time computable functions, then $FO(<,C)$ is equal to $C$-uniform $AC^0$ (for these two results see Barrington, "Extensions of an Idea of McNaughton", 1993).
Finally $FO(<)$ is the class of star-free languages (languages that can be defined by a regular expression using no Kleene star), but this gives no information in terms of circuit complexity.
The idea that restricting numerical predicates in FO-formula corresponds to uniformity conditions is explicitly investigated, for example, in the paper "FO(<)-uniformity" by Behle and Lange.
You are given two of each from the array of 8 vegetables numbered 1 to 8 as shown above. So in total you have 16 veggies (8 pairs). Your task is to make the longest kebab (sequence of vegetables arranged linearly) such that the number of vegetables between any pair should match exactly the number written on the pair itself.
So in the 3-d case, between the two red capsicums (n=1) there is exactly one vegetable (eggplant) as required and similar case holds for n=2 and n=3.
In addition to finding the longest kebab (8-d sequence) you can also try some related questions shown below.
2) How many 8-d kebabs are possible?
3) Are there any dimensions in which it is impossible to make a kebab? If yes, then what's the general formula for figuring out the allowed dimensions?
is probably the lexicographically-least 8D answer.
In a solution, let us write the occurrences of fruit i as i and i'. Fruits i and i' are not distinguishable, so we can arrange that i always appears before i'. Thus we can exhaust all possible orders of 2n items by generating and testing all permutations of n items, accepting those permutations such that an array of 2n cells can be filled in via the following C code, with elements i and i' at distance i+1. For a given n-permutation, there is one way or no way to fill in the array of 2n cells.
Note, the above routine uses some globally-declared variables, as below. The current permutation is held in pi[1]...pi[n], and dat is used in trying to generate an n-kabob from the n-permutation, or when printing a solution. Var exitnow tells the caller whether to exit (eg, if a solution is found).
The routine that calls testPerm is like the following, which implements a permutation algorithm of Dijkstra's.
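The C sources themselves are not reproduced above; for readers who just want to reproduce the search, here is a small backtracking sketch in Python (it places the largest pair first, which prunes faster):

```python
def kebab(n):
    """Return one arrangement of the pairs 1..n with exactly k vegetables
    between the two copies of k, or None if no arrangement exists."""
    seq = [0] * (2 * n)

    def place(k):
        if k == 0:
            return True
        for i in range(2 * n - k - 1):
            j = i + k + 1                      # the two copies of k are k+1 positions apart
            if seq[i] == 0 and seq[j] == 0:
                seq[i] = seq[j] = k
                if place(k - 1):
                    return True
                seq[i] = seq[j] = 0            # undo and try the next position
        return False

    return seq if place(n) else None

print(kebab(8))    # one valid 8-d kebab; kebab(5) and kebab(6) return None
```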
So far the answers have been dealing with brute force. Here I present a mathematical proof that such a kebab exists iff $n\equiv -1,0 \pmod 4$.
Prove that no kebab could exist if $n\equiv 1,2 \pmod 4$.
Let $x_1, x_2, \ldots, x_n$ be the positions of the left-most vegetable, for each vegetable numbered $1, 2,\ldots, n$, respectively. Then the positions for the other vegetables are: $x_1+2, x_2+3, \ldots, x_n+n+1$. Note that these $2n$ positions should all be distinct and in the range of $[1,2n]$.
Note that for all $i$, if $i$ is odd, $x_i$ and $x_i+i+1$ will have the same parity, and if $i$ is even, they will have different parity.
For the case $n = 4k+1 \equiv 1 \pmod 4$: there are $2k+1$ vegetables with odd index, and $2k$ vegetables with even index.
There are in total $8k+2$ positions to be filled, with $4k+1$ even-indexed positions and $4k+1$ odd-indexed positions.
Since the vegetables with even index will use one even-indexed position and one odd-indexed position, there will be $4k+1-2k=2k+1$ even-indexed positions and $2k+1$ odd-indexed positions left for the vegetables with odd index. But each pair of vegetable with odd index will use two position with the same parity. Since $2k+1$ is odd, we won't be able to place all the odd index vegetables.
For the case $n=4k+2\equiv 2 \pmod 4$, it's quite analogous to the previous case, again resulting in $2k+1$ odd-indexed positions for the remaining $2k+1$ odd index vegetables, each requiring two positions with the same parity.
I intended to give a constructive proof here, but it seems difficult.
I wrote some code for this and brute forced it. I would greatly prefer a non-brute force solution or at very least more elegant code.
should work. I started boxing the even numbers and then shuffled in the remaining odd numbers. If what I posted in the comment above is ok, then one could add 5 or 1.
For the first two "dimensions", there's obviously no such kebab. Thinking about $d>8$.
Suppose $x_1,x_2,...,x_6$ are non-negative Independent and identically-distributed random variables, is it true that $P(x_1+x_2+x_3+x_4+x_5+x_6 \lt 3\delta) \lt 2P(x_1 \lt \delta)$ for any $\delta \gt 0$?
$(p^4)(10p^2-24p+15)$. I then plugged $2p-(p^4)(10p^2-24p+15)$ into R and got a function that looks non-negative everywhere between 0 and 1 (this shouldn't be too bad to show by derivatives, but I'm lazy and have computer power).
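This is not a proof, but a quick Monte Carlo check of the conjectured inequality is easy to run for a few candidate distributions and thresholds (the distribution choices and sample size below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def compare(sampler, delta, n_samples=200_000):
    """Estimate P(x1+...+x6 < 3*delta) and 2*P(x1 < delta) for one distribution."""
    x = sampler(size=(n_samples, 6))
    lhs = np.mean(x.sum(axis=1) < 3 * delta)
    rhs = 2 * np.mean(x[:, 0] < delta)
    return lhs, rhs

for delta in (0.1, 0.5, 1.0, 2.0):
    print(delta, compare(rng.exponential, delta), compare(rng.random, delta))
```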
The next generation of cosmic microwave background (CMB) experiments, such as CMB-S4, will require large arrays of multi-chroic, polarisation-sensitive pixels. Arrays of lumped-element kinetic inductance detectors (LEKIDs) optically coupled through an antenna and transmission line structure are a promising candidate for such experiments. Through initial investigations of small prototype arrays, we have shown this compact device architecture can produce intrinsic quality factors < 10^5, allowing for MUX ratios to exceed 10^3. Moreover, we have demonstrated that additional noise from two-level systems can be reduced to an acceptable level by removing the dielectric from over the capacitive region of the KID, while retaining the microstrip coupling into the inductor. To maximise the efficiency of future focal planes, it is desirable to observe multiple frequencies simultaneously within each pixel. Therefore, we utilise the proven transmission line coupling scheme to introduce band-defining structures to our pixel architecture. Initially targeting the peak of the CMB at 150-GHz, we present a preliminary study of these narrow-band filters in terms of their spectral bandwidth and out of band rejection. By incorporating simple in-line filters we consider the overall impact of adding such structures to our pixel by investigating detector performance in terms of noise and quality factor. Based on these initial results, we present preliminary designs of an optimised mm-wave diplexer that is used to split-up the 150 GHz atmospheric window into multiple sub-bands, before reaching the absorbing length of the LEKID. We present measurements from a set of prototype filter-coupled detectors as the first demonstration towards construction of large-format, multi-chroic, antenna-coupled LEKIDs with the sensitivity required for future CMB experiments.
Future ground-based cosmic microwave background (CMB) experiments will require more than $10^5$ polarization sensitive pixels covering multiple atmospheric bands. The scientific potential for such an experiment is impressive; however, the technical challenges are daunting: such an instrument will require square meters of focal plane covered in background limited cryogenic detectors and a dramatic increase in readout capability. We are developing novel kinetic inductance detectors (KIDs) optimized for this purpose. These devices use a twin-slot microwave antenna, superconducting Nb transmission line, and a novel coupling scheme that deposits mm-wavelength power onto a high-resistivity meander deposited as the first layer on a bare Si wafer. This architecture allows us to independently adjust the detector and antenna properties and to pursue multi-band designs. We have fabricated superconducting resonators made from atomic layer deposited (ALD) titanium nitride (TiN), with thicknesses ranging from 3 to 40 nm. We find a strong dependence of transition temperature on thickness, from 0.6 to 4.2 K for our thinnest and thickest films, respectively. In dark measurements, we find internal quality factors that range from $10^4$ to $7\times 10^5$ depending on film thickness, and kinetic inductance as high as 8 nH/square. The very small volumes and high kinetic inductance make it possible to engineer extremely sensitive detectors with inductor volumes approaching a few cubic microns that operate at readout frequencies of tens to hundreds of MHz. By taking advantage of the large fractional bandwidth available at low frequencies, we expect to achieve multiplexing densities that exceed those of state of the art TES arrays even without further improvements in film quality factor. We will present the characterization of film properties and dark devices, as well as initial optical results for antenna coupled single-band and single-pol devices. We will also discuss designs and sensitivity projections for future dual-pol and multi-band arrays ready for deployment in near-future CMB instruments.
Lemma 15.96.5. Let $G$ be a finite group acting on a ring $R$. For any two primes $\mathfrak q, \mathfrak q' \subset R$ lying over the same prime in $R^ G$ there exists a $\sigma \in G$ with $\sigma (\mathfrak q) = \mathfrak q'$.
We are given an array with $n$ numbers: $a[0 \dots n-1]$. The task is to find the longest, strictly increasing, subsequence in $a$.
In this article we discuss multiple algorithms for solving this task. Also we will discuss some other problems, that can be reduced to this problem.
Dynamic programming is a very general technique that allows to solve a huge class of problems. Here we apply the technique for our specific task.
First we will search only for the length of the longest increasing subsequence, and only later learn how to restore the subsequence itself.
To accomplish this task, we define an array $d[0 \dots n-1]$, where $d[i]$ is the length of the longest increasing subsequence that ends in the element at index $i$. We will compute this array gradually: first $d[0]$, then $d[1]$, and so on. After this array is computed, the answer to the problem will be the maximum value in the array $d$.
$d[i] = 1$: the required subsequence consists of only the element $a[i]$.
$d[i] = \max_{j < i,\ a[j] < a[i]} \left(d[j] + 1\right)$: here we append $a[i]$ to the best increasing subsequence that ends in some smaller element $a[j]$.
Here is an implementation of the algorithm described above, which computes the length of the longest increasing subsequence.
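The original code block did not survive the extraction; a minimal Python version of the same $O(n^2)$ computation looks like this:

```python
def lis_length(a):
    n = len(a)
    d = [1] * n                          # d[i]: length of the LIS ending at index i
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                d[i] = max(d[i], d[j] + 1)
    return max(d, default=0)

print(lis_length([8, 3, 4, 6, 5, 2, 0, 7, 9, 1]))   # 5, e.g. 3 4 6 7 9
```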
So far we only learned how to find the length of the subsequence, but not how to find the subsequence itself.
To be able to restore the subsequence we generate an additional auxiliary array $p[0 \dots n-1]$ that we will compute alongside the array $d$. $p[i]$ will be the index $j$ of the second last element in the longest increasing subsequence ending in $i$. In other words the index $p[i]$ is the same index $j$ at which the highest value $d[i]$ was obtained. This auxiliary array $p$ points in some sense to the ancestors.
Then to derive the subsequence, we just start at the index $i$ with the maximal $d[i]$, and follow the ancestors until we deduced the entire subsequence, i.e. until we reach the element with $d[i] = 1$.
We will change the code from the previous sections a little bit. We will compute the array $p$ alongside $d$, and afterwards compute the subsequence.
For convenience we originally assign the ancestors with $p[i] = -1$. For elements with $d[i] = 1$, the ancestors value will remain $-1$, which will be slightly more convenient for restoring the subsequence.
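A corresponding Python sketch that maintains the ancestor array $p$ and restores one optimal subsequence:

```python
def lis_restore(a):
    n = len(a)
    d = [1] * n
    p = [-1] * n                             # p[i]: previous index in the LIS ending at i
    for i in range(n):
        for j in range(i):
            if a[j] < a[i] and d[j] + 1 > d[i]:
                d[i] = d[j] + 1
                p[i] = j
    if n == 0:
        return []
    pos = max(range(n), key=d.__getitem__)   # index where the maximal d[i] is attained
    subseq = []
    while pos != -1:
        subseq.append(a[pos])
        pos = p[pos]
    return subseq[::-1]

print(lis_restore([8, 3, 4, 6, 5, 2, 0, 7, 9, 1]))   # e.g. [3, 4, 6, 7, 9]
```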
It is also possible to restore the subsequence without the auxiliary array $p$. We can simply recalculate the current value of $d[i]$ and also see how the maximum was reached.
This method leads to a slightly longer code, but in return we save some memory.
In order to obtain a faster solution for the problem, we construct a different dynamic programming solution that runs in $O(n^2)$, and then later improve it to $O(n \log n)$.
We will use the dynamic programming array $d[0 \dots n]$. This time $d[i]$ will be the element at which a subsequence of length $i$ terminates. If there are multiple such sequences, then we take the one that ends in the smallest element.
Initially we assume $d[0] = -\infty$ and for all other elements $d[i] = \infty$.
We will again gradually process the numbers, first $a[0]$, then $a[1]$, etc, and in each step maintain the array $d$ so that it is up to date.
After processing all the elements of $a$ the length of the desired subsequence is the largest $l$ with $d[l] < \infty$.
We now make two important observations.
The array $d$ will always be sorted: $d[i-1] \le d[i]$ for all $i = 1 \dots n$. And also the element $a[i]$ will only update at most one value $d[j]$.
Thus we can find this element in the array $d$ using binary search in $O(\log n)$. In fact we are simply looking in the array $d$ for the first number that is strictly greater than $a[i]$, and we try to update this element in the same way as the above implementation.
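In Python the binary search can be done with the bisect module; keeping only the finite part of $d$ (so $d$ holds the smallest possible tail for each achievable length) gives the usual $O(n \log n)$ routine:

```python
import bisect

def lis_length_fast(a):
    d = []                                # d[l]: smallest tail of an increasing subsequence of length l+1
    for x in a:
        j = bisect.bisect_left(d, x)      # first position with d[j] >= x
        if j == len(d):
            d.append(x)                   # x extends the longest subsequence found so far
        else:
            d[j] = x                      # x is a better (smaller) tail for length j+1
    return len(d)

print(lis_length_fast([8, 3, 4, 6, 5, 2, 0, 7, 9, 1]))   # 5
```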
It is also possible to restore the subsequence using this approach. This time we have to maintain two auxiliary arrays. One that tells us the index of the elements in $d$. And again we have to create an array of "ancestors" $p[i]$. $p[i]$ will be the index of the previous element for the optimal subsequence ending in element $i$.
It's easy to maintain these two arrays in the course of iteration over the array $a$ alongside the computations of $d$. And at the end it in not difficult to restore the desired subsequence using these arrays.
Instead of the above method for computing the longest increasing subsequence in $O(n \log n)$ we can also solve the problem in a different way: using some simple data structures.
Let's go back to the first method. Remember that $d[i]$ is the maximal value $d[j] + 1$ over all $j < i$ with $a[j] < a[i]$. If we store the values $d[j]$ in a structure indexed by the value $a[j]$, then computing $d[i]$ becomes a query for the maximum over the prefix of values smaller than $a[i]$.
The problem of finding the maximum of a prefix of an array (which changes) is a standard problem that can be solved by many different data structures. For instance we can use a Segment tree or a Fenwick tree.
This method has obviously some shortcomings: in terms of length and complexity of the implementation this approach will be worse than the method using binary search. In addition if the input numbers $a[i]$ are especially large, then we would have to use some tricks, like compressing the numbers (i.e. renumber them from $0$ to $n-1$), or use an implicit Segment tree (only generate the branches of the tree that are important). Otherwise the memory consumption will be too high.
On the other hand this method has also some advantages: with this method you don't have to think about any tricky properties in the dynamic programming solution. And this approach allows us to generalize the problem very easily (see below).
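As an illustration of the data-structure approach, here is a sketch using a Fenwick tree for prefix maxima over compressed values (only the length is computed; restoring the subsequence needs the extra bookkeeping described earlier):

```python
def lis_length_fenwick(a):
    ranks = {v: i + 1 for i, v in enumerate(sorted(set(a)))}   # compress values to 1..m
    m = len(ranks)
    tree = [0] * (m + 1)                 # Fenwick tree for prefix maxima of d

    def update(i, val):                  # best LIS length ending in value-rank i became val
        while i <= m:
            tree[i] = max(tree[i], val)
            i += i & (-i)

    def query(i):                        # maximum over value-ranks 1..i
        res = 0
        while i > 0:
            res = max(res, tree[i])
            i -= i & (-i)
        return res

    best = 0
    for x in a:
        r = ranks[x]
        cur = query(r - 1) + 1           # best LIS ending in a strictly smaller value, plus x
        update(r, cur)
        best = max(best, cur)
    return best
```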
Here are several problems that are closely related to the problem of finding the longest increasing subsequence.
This is in fact nearly the same problem. Only now it is allowed to use identical numbers in the subsequence.
The solution is essentially also nearly the same. We just have to change the inequality signs, and make a slightly modification to the binary search.
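Concretely, with the binary search version the only change is switching from bisect_left to bisect_right, so that equal elements may be appended:

```python
import bisect

def longest_nondecreasing_length(a):
    d = []
    for x in a:
        j = bisect.bisect_right(d, x)    # first position with d[j] > x (equal tails are allowed)
        if j == len(d):
            d.append(x)
        else:
            d[j] = x
    return len(d)

print(longest_nondecreasing_length([2, 2, 2, 1]))   # 3
```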
We can use the first discussed method, either the $O(n^2)$ version or the version using data structures. We only have to additionally store in how many ways we can obtain longest increasing subsequences ending in the values $d[i]$.
The number of ways to form a longest increasing subsequences ending in $a[i]$ is the sum of all ways for all longest increasing subsequences ending in $j$ where $d[j]$ is maximal. There can be multiple such $j$, so we need to sum all of them.
Using a Segment tree this approach can also be implemented in $O(n \log n)$.
It is not possible to use the binary search approach for this task.
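A straightforward $O(n^2)$ version of the counting idea (without the segment tree speed-up) can look as follows:

```python
def count_lis(a):
    n = len(a)
    d = [1] * n                          # d[i]: LIS length ending at i
    cnt = [1] * n                        # cnt[i]: number of such subsequences ending at i
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                if d[j] + 1 > d[i]:
                    d[i] = d[j] + 1
                    cnt[i] = cnt[j]
                elif d[j] + 1 == d[i]:
                    cnt[i] += cnt[j]
    if n == 0:
        return 0
    best = max(d)
    return sum(c for length, c in zip(d, cnt) if length == best)

print(count_lis([1, 3, 2, 4]))   # 2: the subsequences 1 3 4 and 1 2 4
```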
For a given array with $n$ numbers $a[0 \dots n - 1]$ we have to color the numbers with the smallest number of colors, so that each color forms a non-increasing subsequence.
To solve this, we notice that the minimum number of required colors is equal to the length of the longest increasing subsequence.
Proof: We need to prove the duality of these two problems.
Let's denote by $x$ the length of the longest increasing subsequence and by $y$ the least number of non-increasing subsequences that form a cover. We need to prove that $x = y$.
It is clear that $y < x$ is not possible, because if we have $x$ strictly increasing elements, then no two of them can be part of the same non-increasing subsequence. Therefore we have $y \ge x$.
We now show that $y > x$ is not possible by contradiction. Suppose that $y > x$. Then we consider any optimal set of $y$ non-increasing subsequences. We transform this set in the following way: as long as there are two subsequences such that the first begins before the second subsequence, and the first sequence starts with a number greater than or equal to the second, then we unhook this starting number and attach it to the beginning of the second. After a finite number of steps we have $y$ subsequences, and their starting numbers will form an increasing subsequence of length $y$. Since we assumed that $y > x$ we reach a contradiction.
Thus it follows that $y = x$.
Restoring the sequences: The desired partition of the sequence into subsequences can be done greedily, i.e. go from left to right and assign the current number to the subsequence ending with the minimal number which is greater than or equal to the current one.
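A compact way to implement this greedy step is to keep the current last elements of the subsequences in sorted order and binary-search for the leftmost one that is greater than or equal to the current number (a sketch):

```python
import bisect

def split_into_nonincreasing(a):
    tails = []                            # tails[i]: last element of groups[i], kept in increasing order
    groups = []
    for x in a:
        i = bisect.bisect_left(tails, x)  # leftmost subsequence whose last element is >= x
        if i == len(tails):
            tails.append(x)               # no such subsequence: start a new one
            groups.append([x])
        else:
            tails[i] = x
            groups[i].append(x)
    return groups                         # the number of groups equals the LIS length

print(split_into_nonincreasing([3, 1, 4, 1, 5, 9, 2, 6]))
# [[3, 1, 1], [4, 2], [5], [9, 6]] -> 4 groups, matching the LIS length 4
```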
I had a matrix ((2,1,1),(-11,4,5),(-1,1,0)). I got the eigenvalues to be -1, 1, 2. For the eigenvalue -1 I got an eigenvector (0,1,-1); in the answers it says the answer is (0,-1,1). Is there an actual difference?
If $v$ is an eigenvector of $A$ with eigenvalue $\lambda$, then for any nonzero scalar $\alpha$ we have $A(\alpha v) = \alpha A v = \alpha \lambda v = \lambda (\alpha v)$, and thus $\alpha v$ is also an eigenvector with eigenvalue $\lambda$. Since $\alpha$ is any scalar, if you let $\alpha = -1$ then you see that $v$ being an eigenvector implies $-v$ is an eigenvector. So there is no mathematical difference between which "scaling" of the eigenvector you choose ($\alpha$ just scales the eigenvector and flips it).
Note: Normally one chooses the normalized eigenvector (norm = 1) but even then that doesn't account for the "flipping".
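You can check both candidates numerically; with the matrix as written in the question, both vectors satisfy $Av = -v$:

```python
import numpy as np

A = np.array([[  2, 1, 1],
              [-11, 4, 5],
              [ -1, 1, 0]])

for v in (np.array([0, 1, -1]), np.array([0, -1, 1])):
    print(A @ v, "vs", -1 * v)    # identical, so both are eigenvectors for eigenvalue -1
```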
Suppose the Universe was non-relativistic so time and space would be independent of each other. In other words, both of them separately would be absolute and independent of an observer's motion (unlike the absolute spacetime in relativity).
Above that, let's suppose the speed of light is infinite. Would the kind of cause and effect that we experience in our real Universe be the same in this imaginary Universe?
In our Universe, it takes time for a cause to propagate and have an effect at a distance from the cause. I assume the cause is transferred by e.m. radiation and ignore the other three forces. In case of the weak force the cause will propagate with a finite speed, but because I believe the weak force is not a fundamental force, but a residue force of a more fundamental force, namely the hyper color force transmitted by hyper gluons as is explained by Haim Harari in his Rishon Model (just as the strong force was once thought to be transmitted by massive pions which turned out to be wrong; the force transmitted by the pions was a residue force of the fundamental color force transmitted by massless gluons, but that aside).
So I think all causes in the sketched imaginary Universe are transmitted with infinite speed. Does this mean that everything in the Universe causes (simultaneous) effects on everything in the Universe? Of course, you can wonder how it can be that an effect anywhere in this Universe occurs simultaneously with a cause anywhere else [which seems to imply that they are the same, while in fact, they are inseparable in the absolute time that walks at the same pace everywhere in this Universe (?)] but particles move in continuous trajectories, which according to me means that all processes follow the rule that the cause occurs prior to the effect.
But I think there is a problem here. The question remains what will happen if all particles cause a simultaneous effect on all other particles and this effect simultaneously backfires on all particles. What will happen, if anything happens at all?
A more important issue would be addressing issues like Olber's Paradox. With infinite light speeds, all the light in the universe goes everywhere, instantly. The result would be an instantaneous leveling of all heat levels everywhere due to black-body radiation.
Of course, nobody knew about blackbody radiation back then either, so it's logical to conceive of a universe where light doesn't directly transfer energy, and space dust absorbs most of the light in the universe, fixing Olber's Paradox.
But then you have to head down the rabbit hole to figure out how light allows us to see things in the first place. Then figure out how to get chemistry to work like it does in real life without light transferring energy. And how the Sun keeps the Earth warm. Etc.
At the end of the day, if you ask "is it possible?", it probably is, but the universe in question might ultimately have little in common with ours.
1. I don't think there's any reason to presume this statement is wrong in and of itself, but I'm also not the most expert mathematician in the world. As mentioned in a comment, the only thing necessary here is to show that any arbitrarily high, finite framerate can be achieved, which would be functionally equivalent to "infinite" framerate from a human perspective, provided "arbitrarily high" is high enough. A quadrillion quadrillion quadrillion frames per second puts each frame under Planck time, for example.
2. From the comments, there's some debate about whether this is 100% true. One comment suggests energy would fail to transfer at all, locking everything at its original energy. Entropy wouldn't increase, but you'd still effectively stop thermodynamics from functioning. The main point here is that setting the speed of light to "infinity" requires a lot of modifications to current theory to make things appear to work the same way.
Cause and effect would still exist because not everything propagates at the speed of light. My fist punching you could still be the cause for you feeling pain.
Causality may be non-local. We may be forced to recognize a pair of events separated by a photon's motion as a single event for purposes of defining causality. This comes from the reality that information can indeed propagate sufficiently instantaneously as to be treated as instant.
The exact definition of "speed of light is infinite" would come under intense scrutiny. We can't actually say a speed is infinite because infinity is not a real number. It is, instead, typically used as a shorthand for a limit which says something more along the lines of "the speed of light is boundless." Tiny quivers in your wording can change things dramatically.
Uncaused causes could be more frequent. If two systems exchange photons, they could easily form a chain reaction which starts to look more and more like uncaused causes. Whether they are actually uncaused causes would depend on your precise wording, as mentioned earlier.
"Cause and effect" would still work, except that in some cases, an effect could be coincident with its cause. You would never have an effect before a cause, though - because in Newtonian (Galilean) spacetime, what constitutes "the present" is an absolute.
An example of an effect coincident with cause would be Newtonian gravity - if you had two distant masses, and you grabbed one and shook it, the other would start shaking at the same time thanks to the infinite rate of transmission of force, and information, implied by $c = \infty$.
However, the real problem with this universe is that it would, sadly, be lifeless. The very same instant cause-and-effect above would imply that, without any additional changes to our laws of physics, there would be no force fields - especially electromagnetic - that would propagate disturbances at a finite speed. That means there would be no electromagnetic radiation, and thus no way for objects to lose heat or accumulated energy to the vacuum of space. Objects would continually be heated up through collisions and otherwise until they came apart. Indeed, one could question whether any would even form at all since charged particles like protons and electrons would be unable to shed energy to form stable atoms.
Relativity is, at least with the setup of the other laws, necessary for life. To abrogate this, you'd need to more drastically rewrite the script.
Yes. Everyone observed cause and effect for thousands of years before Einstein came up with Relativity and before anyone knew that the speed of light was finite.
For example, I hit a golf ball and it sails into the air. It sails into the air because I hit it. This has nothing to do with Relativity.
It has been shown previously that immune cells produce cytokines such as interleukin-1$\beta$ (IL-1$\beta$) and tumor necrosis factor $\alpha$ (TNF$\alpha$) which can signal the central nervous system and result in immune suppression. It has been demonstrated in our lab that an immune challenge, bacterial lipopolysaccharide (LPS) or endotoxin, can activate central autonomic regulatory regions and pathways, and increase sympathetic input to the spleen. In addition, splenic macrophage function has been shown to be regulated by the splenic nerve, and that the sympathetic signal is tonically immunosuppressive and further suppression can be centrally induced. Our previous data indicated that the HIV-1 coat protein, gp120, may disrupt the neural-immune regulatory circuit and our present hypothesis is that gp120 produces autonomic dysregulation which may result in a cytokine environment in the spleen that enhances HIV replication in early infection. gp120, infused into brain ventricles, has been shown to enhance splenic macrophage function, as measured by macrophage cytokine production after LPS stimulation in vitro, rather than suppressing function as do intracerebral ventricular cytokines. However, similar to cytokines or LPS, gp120 does elevate plasma corticosterone. In addition, gp120, coinfused with $\alpha$-msh, attenuated the increase in splenic NE turnover produced by $\alpha$-MSH, which is thought to play a role in the sympathetic down regulatory mechanism. Yet, gp120 had no effect on NE turnover when gp120 was infused alone. This data suggested that gp120 may disrupt sympathetic inhibition of splenic macrophage activity. The present study sought to observe in vivo the effect of central gp120 (4$\mu$g icv) on LPS ("low dose" 0.1$\mu$g iv) stimulated splenic cytokine mRNA production via Northern Blot analysis of the splenic cytokine mRNAs in rats. Immunocytochemistry was also used to examine TNF$\alpha$ protein in vivo. It was expected that a small peripheral dose of LPS would result in increased cytokine mRNA expression and that central gp120 would significantly enhance this expression. The hypothesis was confirmed as gp120 did significantly increase the levels of both TNF$\alpha$ (p $<$.05) and IL-1$\beta$ (p $<$.025) mRNAs. This enhancement is consistent with our previous findings that central gp120 significantly increased splenic macrophage production of cytokines after low dose LPS stimulation in vitro. TNF$\alpha$ protein was not significantly elevated but a trend was apparent and the levels were significantly correlated with the mRNA levels (p $<$.0001). As seen previously, plasma corticosterone levels were significantly elevated (p $<$.0001) following central gp120 and low dose LPS relative to vehicle injected controls, but there was no difference in plasma catacholamines, suggesting no change in sympathetic activity. The present study also sought to examine the effect of centrally infused gp120 on the LPS ("high dose" 100$\mu$g iv) induced increase in splenic nerve electrical activity in adult male rats. It was predicted that gp120 would attenuate the LPS induced increase in splenic nerve activity. With regard to this latter hypothesis, no inhibition was observed under the conditions of the present experiment. Although gp120 did not attenuate the increase in sympathetic nerve activity induced by intravenous LPS, this does not directly conflict with the ability of gp 120 to block the increase in NE turnover produced by central $\alpha$-MSH injection. 
Overall, these data provide further evidence that gp120 acts in the brain to alter the splenic cytokine environment by attenuating the suppressive signal provided by the sympathetic nervous system. | CommonCrawl |
There are 9 square feet in a square yard.
We know that $1~yd = 3~ft$ We can find the number of square feet in a square yard. $1~yd^2 = 1~yd \times 1~yd$ $1~yd^2 = 3~ft\times 3~ft$ $1~yd^2 = 9~ft^2$ There are 9 square feet in a square yard. In the diagram, we can see that a square yard contains 9 square feet. | CommonCrawl |
In this paper we analyse the behaviour of a pile-up of vertically periodic walls of edge dislocations at an obstacle, represented by a locked dislocation wall. Starting from a continuum non-local energy $E_\gamma$ modelling the interactions$-$at a typical length-scale of $1/\gamma$$-$of the walls subjected to a constant shear stress, we derive a first-order approximation of the energy $E_\gamma$ in powers of $1/\gamma$ by $\Gamma$-convergence, in the limit $\gamma\to\infty$. While the zero-order term in the expansion, the $\Gamma$-limit of $E_\gamma$, captures the `bulk' profile of the density of dislocation walls in the pile-up domain, the first-order term in the expansion is a `boundary-layer' energy that captures the profile of the density in the proximity of the lock. This study is a first step towards a rigorous understanding of the behaviour of dislocations at obstacles, defects, and grain boundaries. | CommonCrawl |
Fish oil contains omega-3 fatty-acids that help us to maintain proper immune and inflammatory responses to the demands of daily life.
Hold on…what are the 'essential fats'?
The essential fats are two relatively common fats found in the diet. The essential 'omega-6' fat is linoleic acid, found in abundance in the modern diet from vegetable oils (e.g. sunflower and safflower), and the 'omega-3' fat is alpha-linolenic acid, found in certain plant foods, most especially in flaxseeds, pumpkin seeds, hemp seeds, and chia.
These 'base' fats are converted into various 'metabolites' that help the body to modulate inflammation and immunity.
Inflammation is a critical process in the body that helps us to heal and repair, and signal appropriate immune responses (or dampen them down when needed). BUT excessive inflammation is linked to illnesses including autoimmune conditions, diabetes and insulin resistance, heart disease, cancer and neurological (brain) disorders. That's why it has been suggested that the balance of omega-3 to omega-6 fats should be around 1 to less than 7.
Suffice to say, our modern, western-style diet has a disproportionately high amount of omega-6 fats that can encourage inflammation.
Supplementing with fish oil helps to redress the imbalance and provides the directly usable fatty-acids that help to modulate inflammation, even if you are a poor converter.
Summary: How do I increase omega-3 and reduce omega-6 fats?
Most common vegetable oils (like soy, safflower, sunflower etc.) have high levels of omega-6 fats and very low levels of omega-3s. Most of these are also very heat-sensitive (breaking down and oxidising under heat) and so, it's better to use heat-stable oils like coconut oil and ghee for cooking and use omega-3 and 9 containing oils like flax oil, hempseed oil, macadamia oil, and extra virgin olive oil for dressings, or use MCT oil as the base for dressings for an added brain and body boost!
Factory-farmed meats and eggs have higher levels of omega-6 and lower levels of omega-3 than their free-range counterparts because they are fed high omega-6 grain-feed. Not to mention that free-range animals have healthier, happier lives!
As mentioned above, you should cook with heat-stable oils. The added bonus is that these don't further distort your omega-6 to 3 balance.
Fish oil contains the metabolites of omega-3 in a directly usable form. So, by taking fish oil you can experience the health benefits without worrying about whether your body can convert the base omega-3 to its usable compounds.
Fish oil supplementation is likely to be generally cardioprotective9 and has several positive effects on health markers. Fish oil supplementation reduces triglycerides in a dose-dependent manner.10, 11 Triglycerides are one of the best indicators of cardiovascular and metabolic health and you want these to be lower.
Note: These cardiometabolic benefits are not observed to the same degree from increased consumption of alpha-linolenic acid,17 the 'base' omega-3 fat found in plant-derived sources such as flaxseed oil.
So, suffice to say, it's important to source your fish oil for a trusted company, with a trusted supply, and a rigorous testing regimen.
Melrose fish oil is a high-potency fish oil with approx. 64% total EPA and DHA. It is also tested for heavy metals, peroxide, dioxins and other contaminants.
Fish oil can be taken with or without food. Many people prefer to take it with food (or in shakes or smoothies) as it can be easier on the stomach and the additional fat helps the absorption of fat-soluble vitamins and carotenoids. For best utilisation rates though, it's probably best to avoid taking it with substantial amounts of omega-6 fats. It's also OK to take a high-quality fish oil (like Melrose) on an empty stomach in the morning. A fantastic way to take it is to have fish oil both in the morning and night (simply divide your optimal dose below by two).
Note: the American Heart Association recommended dose is equivalent to around 1500 mg of Melrose Fish Oil.
omega-3s in the modern diet and the evidence for health benefits.
1. Mantzioris E, James MJ, Gibson RA, Cleland LG. Dietary substitution with an alpha-linolenic acid-rich vegetable oil increases eicosapentaenoic acid concentrations in tissues. The American Journal of Clinical Nutrition. 1994;59(6):1304-9.
2. Davis BC, Kris-Etherton PM. Achieving optimal essential fatty acid status in vegetarians: current knowledge and practical implications. The American Journal of Clinical Nutrition. 2003;78(3):640S-6S.
3. Burdge GC, Calder PC. Conversion of $\alpha$-linolenic acid to longer-chain polyunsaturated fatty acids in human adults. Reprod Nutr Dev. 2005;45(5):581-97.
4. Brenna JT, Salem Jr N, Sinclair AJ, Cunnane SC. α-Linolenic acid supplementation and conversion to n-3 long-chain polyunsaturated fatty acids in humans. Prostaglandins, Leukotrienes and Essential Fatty Acids. 2009;80(2–3):85-91.
5. Arterburn LM, Hall EB, Oken H. Distribution, interconversion, and dose response of n−3 fatty acids in humans. The American Journal of Clinical Nutrition. 2006;83(6):S1467-76S.
6. Burdge G. α-Linolenic acid metabolism in men and women: nutritional and biological implications. Current Opinion in Clinical Nutrition & Metabolic Care. 2004;7(2):137-44.
7. Harnack K, Andersen G, Somoza V. Quantitation of alpha-linolenic acid elongation to eicosapentaenoic and docosahexaenoic acid as affected by the ratio of n6/n3 fatty acids. Nutrition & Metabolism. 2009;6(1):8.
8. Goyens PL, Spilker ME, Zock PL, Katan MB, Mensink RP. Conversion of α-linolenic acid in humans is influenced by the absolute amounts of α-linolenic acid and linoleic acid in the diet and not by their ratio. The American Journal of Clinical Nutrition. 2006;84(1):44-53.
9. Delgado-Lista J, Perez-Martinez P, Lopez-Miranda J, Perez-Jimenez F. Long chain omega-3 fatty acids and cardiovascular disease: a systematic review. British Journal of Nutrition. 2012;107(SupplementS2):S201-S13.
10. Montori VM, Farmer A, Wollan PC, Dinneen SF. Fish oil supplementation in type 2 diabetes: a quantitative systematic review. Diabetes Care. 2000;23(9):1407-15.
11. Eslick GD, Howe PRC, Smith C, Priest R, Bensoussan A. Benefits of fish oil supplementation in hyperlipidemia: a systematic review and meta-analysis. International Journal of Cardiology. 2009;136(1):4-16.
12. Balk EM, Lichtenstein AH, Chung M, Kupelnick B, Chew P, Lau J. Effects of omega-3 fatty acids on serum markers of cardiovascular disease risk: A systematic review. Atherosclerosis. 2006;189(1):19-30.
13. Campbell F, Dickinson HO, Critchley JA, Ford GA, Bradburn M. A systematic review of fish-oil supplements for the prevention and treatment of hypertension. European Journal of Preventive Cardiology. 2013;20(1):107-20.
14. Wang C, Harris WS, Chung M, Lichtenstein AH, Balk EM, Kupelnick B, et al. n−3 Fatty acids from fish or fish-oil supplements, but not α-linolenic acid, benefit cardiovascular disease outcomes in primary- and secondary-prevention studies: a systematic review. The American Journal of Clinical Nutrition. 2006;84(1):5-17.
15. León H, Shibata MC, Sivakumaran S, Dorgan M, Chatterley T, Tsuyuki RT. Effect of fish oil on arrhythmias and mortality: systematic review. BMJ. 2008;337.
16. Rizos EC, Ntzani EE, Bika E, Kostapanos MS, Elisaf MS. Association between omega-3 fatty acid supplementation and risk of major cardiovascular disease events: A systematic review and meta-analysis. JAMA. 2012;308(10):1024-33.
17. Wendland E, Farmer A, Glasziou P, Neil A. Effect of α linolenic acid on cardiovascular risk markers: a systematic review. Heart. 2006;92(2):166-9.
18. Appleton KM, Rogers PJ, Ness AR. Updated systematic review and meta-analysis of the effects of n−3 long-chain polyunsaturated fatty acids on depressed mood. The American Journal of Clinical Nutrition. 2010.
19. Appleton KM, Hayward RC, Gunnell D, Peters TJ, Rogers PJ, Kessler D, et al. Effects of n–3 long-chain polyunsaturated fatty acids on depressed mood: systematic review of published trials. The American Journal of Clinical Nutrition. 2006;84(6):1308-16.
20. Miles EA, Calder PC. Influence of marine n-3 polyunsaturated fatty acids on immune function and a systematic review of their effects on clinical outcomes in rheumatoid arthritis. British Journal of Nutrition. 2012;107(SupplementS2):S171-S84.
21. Albert BB, Derraik JGB, Cameron-Smith D, Hofman PL, Tumanov S, Villas-Boas SG, et al. Fish oil supplements in New Zealand are highly oxidised and do not meet label content of n-3 PUFA. Scientific Reports. 2015;5:7928. | CommonCrawl |
Over the past years a lot of progress has been made in the understanding of single spin asymmetries in hard scattering processes. We briefly review this subject, covering the non-vanishing of time-reversal odd parton distributions, universality of fragmentation functions, and the discovery of previously unknown time-reversal odd parton distributions at subleading twist. Interestingly enough, all these developments were initiated by simple model calculations.
It is claimed in Phys. Lett. A by T. Nishioka et. al. 327 (2004) 28-32, that the security of Y-00 is equivalent to that of a classical stream cipher. In this paper it is shown that the claim is false in either the use of Y-00 for direct encryption or key generation, in all the parameter ranges it is supposed to operate including those of the experiments reported thus far. The security of Y-00 type protocols is clarified.
Evidence for dynamical tunneling is observed in studies of the transmission, and wave functions, of a soft-walled microwave cavity resonator. In contrast to previous work, we identify the conditions for dynamical tunneling by monitoring the evolution of the wave function phase as a function of energy, which allows us to detect the tunneling process even under conditions where its expected level splitting remains irresolvable.
We study gauge symmetry breaking patterns in supersymmetric gauge models defined on $M^4\times S^1$. Instead of utilizing the Scherk-Schwarz mechanism, supersymmetry is broken by bare mass terms for gaugino and squarks. Though the matter content is the same, depending on the magnitude of the bare mass, the gauge symmetry breaking patterns are different. We present two examples, in one of which the partial gauge symmetry breaking $SU(3)\to SU(2)\times U(1)$ is realized.
Complete issue of l'Unità for the day 2004-11-12, from the l'Unità archive.
Let $M$ be a K\"ahler-Einstein surface with positive scalar curvature. If the initial surface is sufficiently close to a holomorphic curve, we show that the mean curvature flow has a global solution and it converges to a holomorphic curve.
The gravitational anomalies are investigated for generalized Euclidean Taub-NUT metrics which admit hidden symmetries analogous to the Runge-Lenz vector of the Kepler-type problem. In order to evaluate the axial anomalies, the index of the Dirac operator for these metrics with the APS boundary condition is computed. The role of the Killing-Yano tensors is discussed for these two types of quantum anomalies.
The pion re-scattering operator for pion production, derived recently in time-ordered perturbation theory, is compared with the one following from the simple S-matrix construction. We show that this construction is equivalent to the on-shell approximation introduced in previous papers. For a realistic NN interaction, the S-matrix approach, and its simplified fixed threshold-kinematics version, work well near threshold.
The gripping documentary exposé into the life and death of North Koreans as they try to escape their homeland and China.
This CD was distributed along with Wired Magazine's Nov 2004 issue. It is a collection of songs produced under Creative Commons, inspired by Creative Commons' remix website, ccMixter. "Rip, mix, burn. Swap till you drop. The music cops can't do a thing - it's 100 percent legal, licensed by the bands. The Wired CD , copyrighted for the 21st century." check out Wired Magazine's page for this release here , with a great article by Thomas Goetz. Also check it out on ccMixter here . | CommonCrawl |
Abstract : This paper presents a robust observer-based $H_\infty$ controller design method via LMIs for a class of switched discrete-time linear systems with $l_2$-bounded disturbances and parameter uncertainties. The main contribution of this paper consists in a new and judicious use of the slack variables coming from Finsler's lemma. We show analytically how the proposed slack variables allow to eliminate some bilinear matrix coupling. The validity and effectiveness of the proposed design methodology are shown through two numerical examples. | CommonCrawl |
Let $ A $ be a positive definite matrix and let $ D $ be a positive diagonal matrix with entries on the main diagonal: $ d_1,d_2,...,d_n $, both $ A $ and $ D $ have the same dimension $ n \times n $. I was interested in understanding how the eigenvalues of the sum $ A + ADA $ qualitatively behave with respect to the eigenvalues of $ A $ and the entries $ d_1,...,d_n $.
I thought since this sum has a special form, one could hopefully say a bit more analytically than by exclusively using the fact that the two summands are Hermitian. I thought about using some other techniques, perhaps expressing the product of $A$ and $ D $ as a polynomial of $ A $, but unfortunately, I am stuck. I certainly appreciate all help on this.
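Not an answer, but a quick way to build intuition is to experiment numerically with random instances (the construction of $A$ and $D$ below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                      # a positive definite matrix
D = np.diag(rng.uniform(0.1, 2.0, n))            # a positive diagonal matrix

print(np.linalg.eigvalsh(A))
print(np.linalg.eigvalsh(A + A @ D @ A))         # A + A D A is symmetric positive definite,
                                                 # so every eigenvalue of the sum stays positive
```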
Why is an entangled qubit shown at the origin of a Bloch sphere?
I'm unclear why the Bloch sphere representation of a maximally entangled qubit shows the state of the qubit as being at the origin of the sphere.
over time, with $q_0$ on the left and $q_1$ on the right. Both qubits end up at the origin of their respective spheres following application of $CNOT$ ($q_1$ "waits" at its initial value until after $H$ moves $q_0$ to $x$).
Why is a maximally entangled qubit shown at the origin of a Bloch sphere?
An explanation of sorts is provided here, but I'm too much of a beginner to follow it.
Let $(x,y,z)$ be a point in the unit sphere with $x^2+y^2+z^2 \leq 1$, and associate to it the density matrix $\rho = \tfrac{1}{2}\left(I + xX + yY + zZ\right)$, where $X$, $Y$, $Z$ are the Pauli matrices.
This is just a convenient way to parameterize all $2\times 2$ density matrices. This doesn't work as nicely for qudits with $d \neq 2$. But since we are talking $d=2$, we might as well use this nice parameterization.
At the origin $(0,0,0)$ this gives $\rho = \tfrac{1}{2}I$. This is the maximally mixed state.
What is being shown is the state for only 1 qubit. This is the result after taking a partial trace over the other qubit.
Edit: As stated above "This is just a convenient way to parameterize all $2\times 2$ density matrices. This doesn't work as nicely for qudits with $d \neq 2$. But since we are talking $d=2$, we might as well use this nice parameterization." So even if density matrices still make you queasy, don't think of the center of a sphere as being something particularly meaningful. It is just a convenient way to draw all states and in this case the center happens to line up with the maximally mixed state. So no it is not something fundamental. It doesn't generalize to other $d$ or more qubits. Don't take this particular parameterization too seriously, it just allows us to plot the state in a way to quickly convey the information visually.
The Bloch sphere only represents the state of a single qubit. What you're talking about is taking a multi-qubit state, and representing the state of just one of those qubits on the Bloch sphere.
If the multi-qubit state is a product state (pure and separable), then the state of the single qubit is a pure state, and is represented as a point on the surface of the Bloch sphere. If the overall state is entangled, then the individual qubit is not pure, and is represented by a point that is on the interior of the Bloch sphere. The shorter the distance to the centre, the more mixed the individual qubit is, and hence the more entangled the global state is. The maximally entangled state yields the shortest possible distance, i.e. the point right at the centre of the sphere. AHussain's answer gives you the mathematics of how to formally calculate that.
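A short numerical check of this statement (using numpy; the reshape/trace pattern below is one standard way to take a partial trace of a two-qubit density matrix):

```python
import numpy as np

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)        # maximally entangled state (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj())                  # 4x4 density matrix of the pair

# partial trace over the second qubit: keep the axes of qubit 0, trace out qubit 1
rho_q0 = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_q0)    # [[0.5, 0], [0, 0.5]] = I/2, i.e. the centre of the Bloch sphere
```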
The talk is a review of the current status of the field, which is very much in flux. Two central topics will be: (i) applications of AdS/CFT (both static finite T and dynamical) and (ii) electric-magnetic duality and the role of magnetic objects in sQGP.
We provide an overview of our present knowledge of the phase diagram of cold and dense baryonic matter. We consider, in particular, two regimes in which systematic field theory calculations are possible: The regime of very low baryon density which can be studied using effective field theory methods, and the regime of very high baryon density in which weak coupling QCD is applicable.
14:00 to 15:00 String theory and holographic descriptions for large N gauge theories - where do we stand?
I will review what is known about string theoretic and holographic dual descriptions of large N gauge theories, focusing on the progress towards a description of large N QCD.
Yang-Mills theories in 2+1 (or 3) dimensions are interesting as nontrivial gauge theories in their own right and as effective theories of QCd at high temperatures. By a suitable parametrization of fields and techniques from 2-dimensional CFT, a Hamiltonian approach has been developed which has led to interetsing results on the vacuum wave function, string tension, mass gap, etc. I shall review the basics of this approach, emphasizing symmetries and robustness of results, and also provide a short update on its status.
Recent results of lattice thermodynamics will be presented. The nature of the transition, the absolute scale of the transition, the static potential, the equation of state and the phase diagram will be discussed. The analyses used a.) physical quark masses and b.) controlled continuum extrapolations.
We discuss the importance of the inclusion of the Polyakov loop in computation of observables in QCD at finite temperature. In the deconfinement phase it is possible to obtain a fully gauge invariant effective action of QCD, which means invariance under large gauge symmetries, and it is necessary to consider the Polyakov loop. In the confinement phase of QCD, traditional Chiral Quark Models become inconsistent with Chiral Perturbation Theory at finite temperature, they predict deconfinement even at low temperature, etc. These problems can be solved if we consider the coupling of the Polyakov loop in these models. In particular, the Polyakov-NJL model predicts a phase transition which agrees very well with lattice data, if we consider a convenient non-perturbative parametrization for the distribution of the Polyakov loop in pure gluodynamics. The main theoretical justification of Polyakov-Chiral Quark Models is large gauge invariance. Some references are: E. Megias, E. Ruiz Arriola and L.L. Salcedo PLB563:173-178(2003), PRD69:116003(2004), PRD74:065005(2006), PRD74:114014(2006), JHEP0601:073(2006), PRD75:105019(2007).
http://www.ugr.es/~emegias/index_en.html - My Homepage. You can find information about my CV, list of publications, list of talks, etc.
The color-diagonal and color-antisymmetric ghost propagators of zero-temperature and finite-temperature configurations of the MILC collaboration are measured after Landau gauge fixing, and the color confinement mechanism and their temperature dependence are discussed.
In the epsilon-regime of lattice QCD one can get an accurate measurement of the pion decay constant by monitoring how just one single Dirac operator eigenvalue behaves when subjected to different external vector sources. This can be done both for imaginary isospin chemical potential with real eigenvalues, as well as for real chemical potential with complex eigenvalues. We briefly sketch the formalism how to compute individual Dirac eigenvalue distributions from Random Matrix Theory and compare to Lattice data.
In lattice QCD, the Maximum Entropy Method can be used to reconstruct spectral functions from euclidean correlators obtained in numerical simulations. We show that at finite temperature the most commonly used algorithm, employing Bryan's method, is inherently unstable at small energies and give a modification that avoids this. We demonstrate this approach using the vector current-current correlator obtained in quenched QCD at finite temperature. Our first results indicate a small electrical conductivity above the deconfinement transition.
QCD with two colors undergoes a transition to a phase with a diquark condensate when the quark chemical potential equals half the pion mass. We investigate the gluonic properties of this transition by inspecting the behaviour of the glueball correlators. It turns out that the glueball masses change very little at the transition, supporting the view that the superfluid transition is not deconfining, while the amplitude of the propagators drops, suggesting a strong reduction of the gluon condensate.
Quark-gluon matter as produced in the early stages of an ultrarelativistic heavy-ion collision has strong momentum space anisotropies. Even at arbitrarily weak coupling this leads to nonperturbative phenomena in the form of nonabelian plasma instabilities. Their dynamics will be discussed within the framework of the hard-loop effective theory, both for a stationary anisotropic plasma and an anisotropically expanding one.
I am going to discuss how heavy quarks could be used as probes of the deconfined strongly interacting matter. First I will discuss correlation functions of infinitely heavy (static) quarks and color screening calculated on the lattice. Then I show recent results on quarkonia correlators and spectral functions calculated on the lattice and in potential models. The transport contribution to the quarkonium will be also discussed.
We construct a holographic model for baryons in the context of AdS/QCD and study the spin-1/2 and spin-3/2 nucleon spectra and their couplings to mesons, taking full account of the effects from the chiral symmetry breaking. A pair of 5D spinors is introduced to represent both left and right chiralities. Our model contains two adjustable parameters, the infrared cutoff and the Yukawa coupling of bulk spinors to bulk scalars, corresponding to the order parameter of chiral symmetry. Taking the lowest-lying nucleon mass as an input, we calculate the mass spectrum of excited nucleons and the nucleon couplings to pions. The excited nucleons show a parity-doubling pattern with smaller pion-nucleon couplings.
This is a report on recent work on the dynamics of strongly coupled gauge theory with quenched flavours, studied using D7-brane probes of various string theory duals.
We review recent work on adding flavour to strongly coupled finite temperature field theories by embedding D7 brane probes into the AdS-Schwarzschild black hole background. In particular, transport processes in the presence of an isospin chemical potential are examined. We also discuss the behaviour of mesons in this scenario.
Various holographic approaches to QCD in five dimensions are explored using input both from the putative non-critical string theory as well as QCD. It is argued that a gravity theory in five dimensions coupled to a dilaton and an axion may capture the important qualitative features of pure QCD. Part of the higher $\alpha'$-corrections is resummed into a dilaton potential. The potential is shown to be in one-to-one correspondence with the exact $\beta$-function of QCD, and its knowledge determines the full structure of the vacuum solution. The geometry near the UV boundary is that of AdS$_5$ with logarithmic corrections reflecting the asymptotic freedom of QCD. We find that all relevant confining backgrounds have an IR singularity of the "good" kind that allows unambiguous spectrum computations. Near the singularity the 't Hooft coupling is driven to infinity. Asymptotically linear glueball masses can also be achieved. The classification of all confining asymptotics, the associated glueball spectra and meson dynamics are also addressed.
We present an accurate lattice calculation of the Euclidean two-point functions of the energy-momentum tensor in the SU(3) gauge theory at temperatures relevant to present and future heavy ion colliders. This allows us to give a robust upper bound on the shear and bulk viscosities. We compare the correlators to AdS/CFT predictions and quantify the deviations from conformality. Finally, we describe the extraction of the corresponding spectral functions and give our best estimates of the transport coefficients.
We discuss a novel "charge separation" phenomenon which may happen during the QCD phase transition. This effect is due to the anomalous terms in the effective lagrangian in the presence of nonzero chemical potential and the so-called theta term. There is a number of cosmological/astrophysical observations (such as the similarity of dark matter and baryon densities in the universe) supporting this picture. There are also preliminary results from RHIC which apparently demonstrate that the separation of charges takes place if some conditions are met.
A complete thermodynamic analysis of the 2+1D Gross-Neveu model is performed using the optimized perturbation method (or linear delta expansion). Analytical results for all critical quantities, like the critical temperature and chemical potential for the (discrete) phase transition, are obtained. The first order phase transition line and tricritical point, not seen from the available results for the model using, for example, the standard large-N approximations, are obtained. Our results for the phase diagram of the model also indicate the presence of a "liquid-gas" phase, not previously seen from other methods used to analyze the model.
We investigate in what situations the phenomenon of Anderson localization, originally introduced in condensed matter, may be relevant in the context of QCD. At the chiral phase transition we provide compelling evidence from lattice and phenomenological instanton liquid models that the QCD Dirac operator undergoes a metal - insulator transition similar to the one observed in a disordered conductor. This suggests that Anderson localization plays a fundamental role in the chiral phase transition. Based on a recent relation between the Polyakov loop and the spectral properties of the Dirac operator we discuss how the confinement-deconfinement transition may be related to a metal-insulator transition in the bulk of the spectrum of the Dirac operator.
Analyses of the Dyson-Schwinger equation (DSE) have proven to be successful in studying the phase structure of vacuum gauge theories. In the Landau gauge DSE with the ladder kernel for the fermion mass function in the vacuum QED, the fermion wave function renormalization constant is guaranteed to be unity, satisfying the Ward identity. Thus, irrespective of the problem of the ladder approximation, the results obtained would be gauge invariant. Although in the vacuum case, despite the use of the ladder kernel, the Ward identity is guaranteed to be satisfied in the Landau gauge analysis, at finite temperature/density there is no such guarantee. In fact, even in the Landau gauge the fermion wave function renormalization constant largely deviates from unity, being not even real. In finite temperature/density QCD/QED, the results obtained from the ladder Dyson-Schwinger equation explicitly violate the Ward identity, and thus depend on the gauge, their physical meaning being obscure. In this paper, we study, in the analysis of the HTL resummed improved ladder DS equation for the fermion mass function in thermal QCD, the procedure by which we can get the "gauge invariant" solution in the sense that it satisfies the Ward identity. The proposed procedure works excellently to obtain a "gauge invariant" solution, at least in the numerical analysis. To get such a solution it is essential that the gauge parameter depends on the momentum of the gauge boson. Properties of the "gauge-invariant" solutions are discussed. A theoretical investigation is now underway.
Motivated by the gluon condensate in QCD I study Abelian gauge theory in the presence of a constant (p=0) gluon background. The background adds an extra \sim \mu^2\delta^4(p) term to the free (Abelian) gluon propagator. I show how to evaluate dressed Green functions at tree level in the coupling \alpha, but summed to all orders in the parameter \mu characterizing the background field. Instead of a pole at p^2=m^2, the dressed quark propagator has a discontinuity that vanishes exponentially for large p^2. For vanishing quark mass I study a propagator solution which breaks chiral symmetry spontaneously.
The spectral function of heavy quarkonium is an observable possessing both a theoretically transparent connection to heavy ion phenomenology, and a non-trivial qualitative change as a function of the temperature, making it a conceptually clean probe of deconfined QCD matter. I review some basic facts concerning this observable, and discuss then recent progress on its determination, by approaching the physically relevant temperature range from the high-temperature side, where the spectral function can be computed systematically in (resummed) weak-coupling expansion.
We study the two-flavors Nambu--Jona-Lasinio (NJL) model at finite temperature and chemical potential by using the dimensional regularization. Since this model is non-renormalizable in four spacetime dimensions, physical results depend on the regularization method. We also discuss the color superconductivity in the extended NJL model by using the dimensional regularization.
I shall describe a bottom-up approach to constructing a holographic dual to QCD: AdS/QCD. Holographic models simultaneously satisfy chiral symmetry constraints, QCD sum rules and match asymptotic behavior of QCD correlation functions. The simplest model of this type gives a remarkably good fit to low energy hadronic observables.
We derive light-front equations for the bound states of massless quarks from the semiclassical approximation to the equations of motion describing the propagation of bosonic and fermionic modes on AdS space and the mapping of string modes to light-front wavefunctions of hadrons in physical space-time. The effective equations describing light-front eigenmodes in 3+1 space-time depend on the average impact distance between the constituents and possesses remarkable algebraic structures and integrability properties dictated by the conformality of the theory. The holographic model is extended to include a confining potential while preserving its algebraic structure. The gauge/gravity correspondence provides insights into inherently non-perturbative aspects of hadronic physics, such as the orbital and radial spectra and the behavior of form factors in the space-like and the time-like regions.
I will present the phase diagram, as function of number of flavors and colors, for SU(N) gauge theories with matter in "any" irreducible representation of the gauge group.
QCD applications: The introduction of higher dimensional representations will allow me to define new large N limits capturing properties of QCD at leading N which are poorly described in the 't Hooft one.
New Strong Dynamics at the Electroweak Scale: The phase diagram will provide us with the first and most complete catalogue of strongly coupled theories near an infrared fixed point which can be used to break the Electroweak theory dynamically while not being at odds with precision measurements. Theories near an infrared fixed point have a coupling constant with a "walking" rather than a running dependence on the energy scale and hence are termed "walking" theories. Our phase diagram will allow me to define "Minimal Walking Theories" and their phenomenological consequence for the physics at LHC.
We discuss the issue of the chiral symmetry restoration in high excitations. On the phenomenological side we use the linearity and equidistance of the quark meson Regge trajectories to show that the Nambu-Goldstone mode persists at high energies. This conclusion is supported by theoretical consideration within the framework of the Sakai-Sugimoto construction.
Using the Abelian Higgs model, we study the radial excitations of single vortex and their propagation modes along the vortex line. We call such beyond-stringy modes peristaltic modes of single vortex. With the profile of the static vortex, we derive the vortex-induced potential, i.e., single-particle potential for the Higgs and the photon field fluctuations around the static vortex, and investigate the coherently propagating fluctuations which corresponds to the vibration of the vortex. We derive, analyze and numerically solve the field equations of the Higgs and the photon field fluctuations around the static vortex with various Ginzburg-Landau parameter kappa and topological charge n. Around the BPS value or critical coupling kappa^2=1/2, there appears a significant correlation between the Higgs and the photon field fluctuations mediated by the static vortex. As a result, for kappa^2=1/2, we find the characteristic new-type discrete pole of the peristaltic mode corresponding to the quasi-bound-state of coherently fluctuating fields and the static vortex. We investigate its excitation energy, correlation energy of coherent fluctuations, spatial distributions, and the resulting magnetic flux behavior in detail. Our investigation covers not only usual Type-II vortices with n=1 but also Type-I and Type-II vortices with n no less than 1 for the application to various general systems where the vortex-like objects behave as the essential degrees of freedom. We also consider the peristaltic modes of dual string between quarks using the dual Higgs theory as the effective theory of low energy QCD. The implications for hadron physics will be discussed.
At high temperature, finite correlation lengths appear due to the stochastic nature of thermal fluctuations. Then, small time-dependent perturbations localized in the plasma will not affect regions beyond that length. This can be shown as an exponential decay of the two point correlator with the distance. In N=4 we can study the plasma phase using the holographic gravity dual, a black hole in AdS_5. We identify correlation lengths of different operators with the imaginary part of complex momentum modes of the dual fields in the black hole geometry, and explore their dependence with frequency.
It has been believed that the charged black hole is not relevant to the baryon density. Here we suggest that it is relevant in the case where the brane is filling the whole AdS bulk: at high temperature, the background becomes an RN-$AdS$ black hole as a consequence of the gravity back reaction to the presence of charge. We show that the main difference between R-charge and baryon charge lies in the strength of their couplings to gravity: the latter is suppressed by $N_f/N_c$ relative to the former. With such gravity back reaction, the density dependence of physical quantities can be encoded in the quadratic action of AdS/QCD without adding higher order $\alpha'$ corrections. We discuss the phase diagram based on this picture with a flat boundary and show that the phase diagram closes at finite density or chemical potential, unlike the brane embedding approach without gravity back reaction.
I will review a recent AdS/CFT computation of the drag force experienced by a heavy quark moving through strongly coupled N=4 SYM plasma at finite R charge density. I will explain, how a discrepancy between two previous calculations has now been resolved, and will briefly discuss the D7 brane boundary conditions relevant for the different classes of spinning string solutions found.
I will review methods of computing spectral functions and transport coefficients from gauge/gravity duality.
We evaluate the spectra of light scalar and vector glueballs in a holographic model of QCD with a dilaton background field. In particular we study how the masses depend on the background and on the geometry of the bulk.
I analyze the high energy collisions of hadrons in QCD from the point of view of gravity duals of QCD. Dual black hole formation dominates at least above 10 GeV, and the total QCD cross section behaviour with s is obtained. The saturation of the Froissart bound in QCD maps to production of black holes in the IR of the dual. These correspond to the RHIC fireballs and similar objects expected at the LHC. The temperature of the fireball is derived, as well as other properties. A simple field theory toy model shows how it is possible to obtain black hole-like properties in QFT. The model has thermal horizons that trap information, and presents apparent information loss, which should be restored by unitarity.
We study the D4-D8-$\overline{\mathrm{D8}}$ brane configuration, which describes large N QCD in a certain regime of parameters. The high energy meson spectrum is computed analytically. We consider this system at finite chemical potential and temperature and find a line of first order phase transitions.
We study the thermodynamics of the hard wall model, which consists in the introduction of an infrared cut-off in asymptotically AdS spaces. This is a toy model for confining backgrounds in the context of the gauge/gravity correspondence. We use holographic renormalization and reproduce the existence of a Hawking-Page phase transition recently discussed by Herzog. We also show that the entropy jumps from N^0 to N^2, which reinforces the interpretation of this transition as the gravity dual of confinement/deconfinement. We also show that similar results hold for the phenomenologically motivated soft wall model, underlining the potential universality of our analysis.
I will discuss holographic dual models of QCD-like theories at strong coupling. This will include recent developments of gauge theories at finite temperature based on brane thermodynamics.
We explore real time phenomena such as plasma oscillations and Debye screening in an SU(N) gauge theory with a Z(N) interface at high temperature, where the Z(N) symmetry is spontaneously broken. The Z(N) interface is obtained as a solution interpolating between different Z(N) vacua. In this background we show that gluons and quarks have unusual distributions, and the gluon mass and Debye screening mass are changed. We construct a real-time effective theory in the background.
We consider a gravity dual description of time dependent, strongly interacting large-Nc N=4 SYM. We regard the gauge theory system as a fluid with shear viscosity. Our fluid is expanding in one direction following Bjorken's picture, which is relevant to RHIC experiments. We obtain the dual geometry at late times that is consistent with dissipative hydrodynamics. We show that the integration constants that cannot be determined by hydrodynamics are given by looking at the horizon of the dual geometry.
We consider an expanding boost-invariant plasma using AdS/CFT. We determine the time-dependence of the meson spectrum in the late-time regime and recover the N=4 results asymptotically.
We calculate the energy spectrum of strings in (2+1)-dimensional SU(N) gauge theories with N=2,3,4,5,6,8. We first focus on zero temperature and calculate the tensions of strings in different SU(N) representations. We attempt to control various systematic errors, and for the fundamental representation this allows us to compare with the analytic prediction of Karabali, Kim, and Nair. In this case we find that the prediction deviates from our results by at most 3%, which is statistically significant. We proceed to discuss the temperature dependence of the Polyakov loops' spectrum in the confined phase and compare it to the free bosonic string model. For the ground state we find that this model provides a very good approximation for our results, even relatively close to the deconfinement transition. We discuss how our results evolve with the lattice spacing, the number of colors, and make a connection with our previous calculation for the Hagedorn temperature in 3+1 dimensions.
We will give a critical discussion of the phase diagram of QCD in the temperature -- chemical potential plane. We will explain the mechanism of chiral symmetry breaking at nonzero chemical potential and emphasize the importance of the phase of the fermion determinant. Among other things, it is shown that the Banks-Casher formula, which relates the chiral condensate and the Dirac spectrum, is not valid for QCD at nonzero chemical potential. Instead, because of the phase of the fermion determinant, intricate cancellations lead to a nonzero chiral condensate. The distribution of this phase is calculated by means of chiral perturbation theory, and a region of the phase diagram where Monte Carlo simulations can be performed is identified. | CommonCrawl
This thesis aims to extend some of the results of the Graph Minors Project of Robertson and Seymour to "group-labelled graphs". Let $\Gamma$ be a group. A $\Gamma$-labelled graph is an oriented graph with its edges labelled from $\Gamma$, and is thus a generalization of a signed graph. Our primary result is a generalization of the main result from Graph Minors XIII. For any finite abelian group $\Gamma$, and any fixed $\Gamma$-labelled graph $H$, we present a polynomial-time algorithm that determines if an input $\Gamma$-labelled graph $G$ has an $H$-minor. The correctness of our algorithm relies on much of the machinery developed throughout the graph minors papers. We therefore hope it can serve as a reasonable introduction to the subject. Remarkably, Robertson and Seymour also prove that for any sequence $G_1, G_2, \ldots$ of graphs, there exist indices $i < j$ such that $G_i$ is isomorphic to a minor of $G_j$. Geelen, Gerards and Whittle recently announced a proof of the analogous result for $\Gamma$-labelled graphs, for $\Gamma$ finite abelian. Together with the main result of this thesis, this implies that membership in any minor closed class of $\Gamma$-labelled graphs can be decided in polynomial-time. This also has some implications for well-quasi-ordering certain classes of matroids, which we discuss.
Huynh, T.C.T. (2009, September 24). The Linkage Problem for Group-labelled Graphs. UWSpace. | CommonCrawl |
Given the input and the output, how to determine the impulse response?
I know that $y[t]=x[t]*h[t]$, but I am having a hard time figuring out the right way to calculate the impulse response. I know very little about signal processing, so I would appreciate an easy explanation. Or, if it's possible to work through an example, even better.
Computing the inverse $\mathcal Z$-transform of $H (z)$, we finally obtain the impulse response.
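If the system is FIR and the data are noise-free, one concrete way to see this is plain polynomial division (deconvolution). The signals below are made-up illustrations, not taken from the question:

```python
import numpy as np
from scipy.signal import deconvolve

# Made-up example: a short FIR impulse response and an input signal.
h_true = np.array([1.0, 0.5, 0.25])        # unknown in practice
x = np.array([1.0, 2.0, 0.0, -1.0, 3.0])   # known input
y = np.convolve(x, h_true)                 # known output, y = x * h

# Recover h by dividing the polynomial y by the polynomial x.
# This works when x[0] != 0 and the data are exactly y = x * h (no noise).
h_est, remainder = deconvolve(y, x)
print(h_est)   # -> [1.   0.5  0.25]
```

With noisy data one would instead estimate $H(z) = Y(z)/X(z)$ in a least-squares sense, but the idea is the same.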
| CommonCrawl
The Pythagorean theorem has already been proved and it is a basic fact of math. It always works, and there are proofs of it. But I have found a problem.
Say you want to get from point A to point B.
Here is a way to do it, where red is vertical movement and grey is horizontal movement.
And if you continue forever, the path will become diagonal.
I know the Pythagorean theorem is true and proven, so what is wrong with this series of steps that I went through?
By splitting the path you have essentially created lots of little triangles. You still need to apply Pythagoras' theorem to each one. If you do, then you will get the correct answer.
The problem here is that the limit of the lengths is not the length of the limit. In the fake proof, one has assumed that the sequence of lengths $x+y, x+y, x+y, \ldots$ converges to the length of the hypotenuse.
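A quick numerical illustration of that point (legs 3 and 4 chosen arbitrarily): the staircase length never changes, even though every point of the staircase gets arbitrarily close to the hypotenuse.

```python
import numpy as np

x_leg, y_leg = 3.0, 4.0          # the hypotenuse has length 5
for n in [1, 10, 100, 1000]:     # number of stair steps
    path_length = n * (x_leg / n + y_leg / n)       # always x_leg + y_leg = 7
    step_diagonal = np.hypot(x_leg / n, y_leg / n)  # bound on distance to the hypotenuse
    print(n, path_length, round(step_diagonal, 4))
# The path length is 7 for every n, while the paths converge (uniformly) to the
# hypotenuse of length np.hypot(3, 4) = 5: the limit of the lengths (7) is not
# the length of the limit (5).
```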
| CommonCrawl
Use this tag for questions about the board on which the game of chess is played.
Minimum number of dominoes on an $n \times n$ chessboard to prevent placement of another domino.
Method for finding all the solutions of the $n$ queens problem for a given $n$?
Determine numbers written on a chessboard with the fewest number of questions.
How do I solve this tile-covering problem?
Can two "magical" chess knights meet?
Number of knight cycle-free $n$-tours on an $n \times n$ modular chessboard?
Reference on properties of the queen graph $Q_n$?
Chess Problem: What is the minimum number of white pieces required such that the black king is in check, no matter its position?
$16$ rooks on a chess board.
How many ways can I put down two indistinguishable pieces on an ordinary $8 \times 8$ chessboard if they must either be in the same row or column?
12 1x2 dominoes are cut out of a chessboard. Is it true that one will always be able to cut a 1x3 rectangle from the remaining figure?
How many ways are there to get from one point to another on a (simple) chessboard?
Can Zermelo's theorem be extended to a game which always has a winner?
How many unique solutions are there to the 8 rooks problem?
Are there any applications of matrices, or linear algebra to chess? If so, are there good books on it? | CommonCrawl |
While it is interesting that many Markov Chains are reversible, the examples that we have seen so far haven't explained what we get by reversing a chain. After all, if it looks the same running forwards as it does backwards, why not just run it forwards? Why bother with reversibility?
It turns out that reversing Markov Chains can help solve a class of problems that are intractable by other methods. In this section we present an example of how such problems arise. In the next section we discuss a solution.
People have long been fascinated by encryption and decryption, well before cybersecurity became part of our lives. Decoding encrypted information can be complex and computation intensive. Reversed Markov Chains can help us in this task.
To get a sense of one approach to solving such problems, and of the extent of the task, let's try to decode a short piece of text that has been encoded using a simple code called a substitution code. Text is written in an alphabet, which you can think of as a set of letters and punctuation. In a substitution code, each letter of the alphabet is simply replaced by another in such a way that the code is just a permutation of the alphabet.
To decode a message encrypted by a substitution code, you have to invert the permutation that was used. In other words, you have to apply a permutation to the coded message in order to recover the original text. We will call this permutation the decoder.
To decode a textual message, we have to make some assumptions. For example, it helps to know the language in which the message was written, and what combinations of letters are common in that language. For example, suppose we try to decode a message that was written in English and then encrypted. If our decoding process ends up with "words" like zzxtf and tbgdgaa, we might want to try a different way.
So we need data about which sequences of letters are common. Such data are now increasingly easy to gather; see for example this web page by Peter Norvig, a Director of Research at Google.
Let's see how we can use such an approach to decode a message. For simplicity, suppose our alphabet consists of only three letters: a, d, and t. Now suppose we get the coded message atdt. We believe it's an English word. How can we go about decoding it in a manner that can be replicated by a computer for other words too?
As a first step, we will write down all 3! = 6 possible permutations of the letters in the alphabet and use each one to decode the message. The table decoding contains all the results. Each entry in the Decoder column is a permutation that we will apply to our coded text atdt. The permutation determines which letters we will use as substitutes in our decoding process.
To see how to do this, start by keeping the alphabet in "alphabetical" order in your head: 'a', 'd', 't'. Now look at the rows of the table.
The decoder in the first row is ['a', 'd', 't']. This decoder simply leaves the letters unchanged; atdt gets decoded as atdt.
The decoder in the second row is ['a', 't', 'd']. This keeps the first letter of the alphabet 'a' unchanged, but replaces the second letter 'd' by 't' and the third letter 't' by 'd'.
So atdt gets decoded as adtd.
You can read the rest of the table in the same way.
Notice that in each decoded message, a letter appears twice, at indices 1 and 3. That's the letter being used to decode t in atdt. A feature of substitution codes is that each letter original is coded by a letter code, with the same letter code being used every time the letter original appears in the text. So the decoder must have the same feature.
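A plain-Python sketch of how such a table can be generated (this is not the textbook's Table-based code, just an equivalent loop):

```python
from itertools import permutations

alphabet = ['a', 'd', 't']
coded = 'atdt'

# Each decoder assigns a substitute letter to 'a', 'd', 't', in that order.
for decoder in permutations(alphabet):
    mapping = dict(zip(alphabet, decoder))
    decoded = ''.join(mapping[letter] for letter in coded)
    print(list(decoder), decoded)
# ['a', 'd', 't'] atdt
# ['a', 't', 'd'] adtd
# ['d', 'a', 't'] dtat
# ['d', 't', 'a'] data
# ['t', 'a', 'd'] tdad
# ['t', 'd', 'a'] tada
```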
Which one of these decoders should we use? To make this decision, we have to know something about the frequency of letter transitions in English. Our goal will be to pick the decoder according to the frequency of the decoded word.
We have put together some data on the frequency of the different bigrams, or two-letter combinations, in English. Here is a transition matrix called bigrams that is a gross simplification of available information about bigrams in English; we used Peter Norvig's bigrams table and restricted it to our three-letter alphabet. The row corresponding to the letter 'a' assumes that about 2% of the bigrams that start with 'a' are 'aa', about 22% are 'ad', and the remaining 76% are 'at'.
It makes sense that the 'aa' transitions are rare; we don't use words like aardvark very often. Even 2% seems large until you remember that it is the proportion of 'aa' transitions only among transitions 'aa', 'ad', and 'at', because we have restricted the alphabet. If you look at its proportion among all $26\times26$ bigrams, that will be much lower.
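A sketch of such a matrix in code. The 'a' row follows the proportions quoted above; the 'd' and 't' rows are placeholders invented for illustration (chosen only so that later numbers roughly match the ones quoted in the text), not the values of the actual bigrams table:

```python
import numpy as np

letters = ['a', 'd', 't']

bigrams = np.array([
    [0.02, 0.22, 0.76],   # from 'a' to 'a', 'd', 't' (as quoted above)
    [0.62, 0.08, 0.30],   # from 'd' (placeholder values)
    [0.59, 0.08, 0.33],   # from 't' (placeholder values)
])

# Each row is a probability distribution over the next letter.
assert np.allclose(bigrams.sum(axis=1), 1)
```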
Now think of the true text as a path of a Markov Chain that has this transition matrix. An interesting historical note is that this is what Markov did when he first came up with the process that now bears his name – he analyzed the transitions between vowels and consonants in Eugene Onegin, Alexander Pushkin's novel written in verse.
If the true text is tada, then we can think of the sequence tada as the path of a Markov chain. Its probability can be calculated at $P(t)P(t, a)P(a, d)P(d, a)$. We will give each decoder a score based on this probability. Higher scores correspond to better decoders.
To assign the score, we assume that all three letters are equally likely to start the path. For three common letters in the alphabet, this won't be far from the truth. That means the probability of each path will start with a factor of 1/3, which we can ignore because all we are trying to do is rank all the probabilities. We will just calculate $P(t, a)P(a, d)P(d, a)$ which is about 8%.
According to our decoding table above, tada is the result we get by applying the decoder ['t', 'd', 'a'] to our data atdt. For now, we will say that the score of this decoder, given the data, is 8%. Later we will introduce more formal calculations and terminology.
To automate such calculations we can use the prob_of_path method. Remember that its first argument is the initial state, and the second argument is a list or array consisting of the remaining states in sequence.
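A plain-Python stand-in for that method, using the letters and bigrams arrays sketched above (the real prob_of_path is the method described in the text; this is just the same arithmetic spelled out):

```python
def prob_of_path(start, rest):
    """Product of transition probabilities along start -> rest[0] -> rest[1] -> ..."""
    index = {letter: i for i, letter in enumerate(letters)}
    prob = 1.0
    current = start
    for nxt in rest:
        prob *= bigrams[index[current], index[nxt]]
        current = nxt
    return prob

# Probability of the path t -> a -> d -> a, i.e. the decoded word 'tada'.
# With the placeholder matrix above this comes out near 0.08, in line with
# the "about 8%" quoted earlier.
print(prob_of_path('t', ['a', 'd', 'a']))
```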
Should we decide that our message atdt should be decoded as tada? Perhaps, if we think that 8% is a high likelihood. But what if some other possible decoder has a higher likelihood? In that case it would be natural to prefer that one.
So we are going to need the probabilities of each of the six "decoded" paths.
Let's define a function score that will take a list or array of characters and return the probability of the corresponding path using the bigrams transition matrix. In our example, this is the same as returning the score of the corresponding decoder.
Here are the results in decreasing order of score. There is a clear winner: the decoder ['d', 't', 'a'] corresponding to the message 'data' has more than twice the score of any other decoder.
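Continuing the sketch above, a score function and the ranking of all six decoded messages (again with the placeholder transition values, so only the ordering is meaningful, not the exact numbers):

```python
def score(path):
    """Score of a decoded message: the probability of its letter path."""
    return prob_of_path(path[0], path[1:])

decoded_messages = ['atdt', 'adtd', 'dtat', 'data', 'tdad', 'tada']
for message in sorted(decoded_messages, key=score, reverse=True):
    print(message, round(score(message), 4))
# 'data' comes out on top, matching the conclusion above.
```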
What we have been able to do with an alphabet of three characters becomes daunting when the alphabet is larger. The 52 lower case and upper case letters, along with a space character and all the punctuations, form an alphabet of around 70 characters. That gives us 70! different decoders to consider. In theory, we have to find the likelihood of each of these 70! candidates and sort them.
Here is the number 70!. That's a lot of decoders. Our computing system can't handle that many, and other systems will have the same problem.
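The count itself is easy to compute, even though searching a space of that size is not:

```python
import math

n_decoders = math.factorial(70)
print(len(str(n_decoders)))   # 101 digits
print(float(n_decoders))      # roughly 1.198e+100
```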
One potential solution is to sample at random from these 70! possible decoders and just pick from among the sampled permutations. But how should we draw from 70! items? It's not a good idea to choose uniform random permutations of the alphabet, as those are unlikely to get us quickly to the desired solution.
What we would really like our sampling procedure to do is to choose good decoders with high probability. A good decoder is one that generates text that has higher probability than text produced by almost all other decoders. In other words, a good decoder has higher likelihood than other decoders, given the data.
For the given encoded text, the denominator is the normalizing constant that makes all the likelihoods sum to 1. It appears in the likelihood of every decoder. In our example with the three-letter alphabet, we ignored it because we could figure out the numerators for all six decoders and just compare them. The numerator was what we called the score of the decoder.
Even when the alphabet is large, for any particular decoder $j$ we can find the numerator by multiplying transition probabilities sequentially, as we did in our example. But with a large alphabet we can't do this for all possible decoders, so we can't list all possible scores and we can't add them all up. Therefore we don't know the denominator of the likelihoods, not even up to a decent approximation.
What we need now is a method that helps us draw from a probability distribution even when we don't know the normalizing constant. That is what Markov Chain Monte Carlo helps us to do. | CommonCrawl |
$K^\bullet $ is $m$-pseudo-coherent as a complex of $A$-modules.
The same equivalence holds for pseudo-coherence.
Proof. Assume (1). Choose a bounded complex of finite free $B$-modules $E^\bullet $ and a map $\alpha : E^\bullet \to K^\bullet $ which is an isomorphism on cohomology in degrees $> m$ and a surjection in degree $m$. Consider the distinguished triangle $(E^\bullet , K^\bullet , C(\alpha )^\bullet )$. By Lemma 15.62.8 $C(\alpha )^\bullet $ is $m$-pseudo-coherent as a complex of $A$-modules. Hence it suffices to prove that $E^\bullet $ is pseudo-coherent as a complex of $A$-modules, which follows from Lemma 15.62.10. The pseudo-coherent case of (1) $\Rightarrow $ (2) follows from this and Lemma 15.62.5.
| CommonCrawl
Tears dripped from my face as I stood over the bathroom sink. Exposed again! The tears melted into thoughts, and an idea formed in my head. This will surely keep my secrets safe, once and for all. I crept back to my computer and began to type.
Notice that the values iPmQ and iQmP are not removed when constructing the public key. Let us call these values $a$ and $b$ in the following. If $a', b'=\operatorname{egcd}(p, q)$, then $a'p+b'q=\gcd(p,q)$ by Bézout's identity. Hence, \[ (a+iq)p + (b+jp)q = 1 \;\Rightarrow\; ap+bq+(i+j)pq = 1 \;\Rightarrow\; ap+bq=1+zn=:c \] for small values $i,j,z\in \mathbb Z$.
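For reference, a standard extended-Euclidean helper that produces Bézout coefficients like the $a', b'$ above (the challenge's own egcd apparently returns just the coefficient pair; the sketch below also returns the gcd):

```python
def egcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = egcd(240, 46)
assert g == 2 and 240 * x + 46 * y == 2
```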
We can expect $q/a$ to be small. Hence, $k\approx -yc/a$. Then, $q=ka+yc$. | CommonCrawl |
The proper divisors of a positive integer, $n$, are all the divisors excluding $n$ itself. For example, the proper divisors of $6$ are $1, 2,$ and $3$.
A number, $n$, is said to be multiplicatively perfect if the product of its proper divisors equals $n$. The smallest such example is six: $6 = 1 \times 2 \times 3$; the next such example is eight: $8 = 1 \times 2 \times 4$.
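A quick brute-force scan (an illustration to build intuition, not a proof) lists the small examples:

```python
def proper_divisors(n):
    return [d for d in range(1, n) if n % d == 0]

def product(values):
    result = 1
    for v in values:
        result *= v
    return result

print([n for n in range(2, 101) if product(proper_divisors(n)) == n])
# [6, 8, 10, 14, 15, 21, 22, 26, 27, ...]
# i.e. cubes of primes and products of two distinct primes -- exactly the
# numbers with four divisors.
```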
Determine the nature of all multiplicatively perfect numbers. | CommonCrawl |
This workshop, sponsored by AIM and the NSF, will focus on the theory of Toeplitz matrices (TM) and Wiener-Hopf operators (WHO).
In the recent ten or so years, the theory of these operators (TM and WHO) attracted a great deal of renewed attention from mathematical and theoretical physics communities due to the mathematical beauty of problems involved, and due to new applications in physics. For example, novel developments in quantum information theory and statistical physics concern the asymptotics of the (quantum) entanglement entropy of thermal equilibrium states, the overlap of ground states in the Anderson orthogonality catastrophe and the emptiness formation probability in quantum spin models. These quantities can be deduced from suitable Szego limit theorems, that is, from asymptotic results for traces of certain functions of TM and WHO.
The workshop will focus on the following problems.
Double scaling limits for TM and WHO with symbols which, apart from the scaling parameter $L$, depend on another parameter, say on temperature $T$, as $L \rightarrow \infty$ and $T \downarrow 0$ in a coordinated way.
Asymptotics for block TM and WHO with Fisher-Hartwig singularities.
The study of multi-term asymptotics for TM and WHO with the aim to detect topological entropy, that is, a term depending, for instance, on the Euler characteristic of the spatial domain.
Extension to TM and WHO which are translation invariant with respect to a group different from (all) Euclidean translations. Examples include periodic potentials, magnetic fields and random operators.
The purpose of this workshop is to bring together experts from mathematics and theoretical/mathematical physics in order to address the topics outlined above, and to identify new directions in the field of Szego expansions, stimulated by recent developments in both fields. | CommonCrawl
Abstract. In this paper we consider aperiodic ergodic Markov chains with transition probabilities exponentially small in a large parameter $\beta$. We extend to the general not necessarily reversible case the analysis started in [OS] of the first exit problem from a general domain $Q$ containing many stable equilibria (attracting equilibrium points for the $\beta = \infty$ dynamics). In particular we describe the tube of typical trajectories during the first excursion outside $Q$. | CommonCrawl
"Image Zooming Using Corner Matching" by Ronald Marsh, Md Nurul Amin et al.
This work was intended to direct the choice of an image interpolation/zoom algorithm for use in UND's Open Prototype for Educational Nanosats (OPEN) satellite program. Whether intended for a space-borne platform or a balloon-borne platform, we expect to use a low cost camera (Raspberry Pi) and expect to have very limited bandwidth for image transmission. However, the technique developed could be used for any imaging application. The approach developed analyzes overlapping $3\times 3$ blocks of pixels looking for "L" patterns that suggest the center pixel should be changed such that a triangle pattern results. We compare this approach against different types of single-frame image interpolation algorithms, such as zero-order-hold (ZOH), bilinear, bicubic, and the directional cubic convolution interpolation (DCCI) approach. We use the peak signal-to-noise ratio (PSNR) and mean squared error (MSE) as the primary means of comparison. In all but one of the test cases the proposed method resulted in a lower MSE and higher PSNR than the other methods, meaning this method results in a more accurate image after zooming than the other methods. | CommonCrawl
Let $B$ be a quantum algebra possessing a semiclassical limit $A$. We show that under certain hypotheses $B^e$ can be thought of as a deformation of the Poisson enveloping algebra of $A$, and we give a criterion for the Hochschild cohomology of $B$ to be a deformation of the Poisson cohomology of $A$ in the case that $B$ is Koszul. We verify that condition for the algebra of $2\times 2$ quantum matrices and calculate its Hochschild cohomology and the Poisson cohomology of its semiclassical limit. | CommonCrawl |
1 . What should come in place of question mark (?) in the following equations?
$37 \times 37 \times 37 + 33 \times 33 \times 33 \over 37 \times 37 + 33 \times 33 - 1221$ = ?
2 . What should come in place of question mark (?) in the following equations?
$4.5 \times 6 - 3.6 \times 5 \over 8.8 \times 5 - 5.5 \times 6$ = ?
A train covers the distance from Delhi to Patna at the speed of 42 kmph and from Patna to Delhi at 48 kmph. What is the average speed of the train?
The cost of 24 pens and 36 pencils is Rs.720. What is the cost of 6 pens and 9 pencils?
If the sum of five consecutive even numbers A, B, C, D and E is 110, what is the product of B and D?
If 45% of a number is 1291.5, what is 67.5% of that number?
Two years ago the ratio of the ages of Manoj and Shyam was 15 : 11. Two years hence the ratio of their ages will be 17 : 13. What is the present age of Shyam?
Let the present ages of Manoj and Shyam be x years and y years respectively. Then (x − 2)/(y − 2) = 15/11 and (x + 2)/(y + 2) = 17/13. Writing x − 2 = 15k and y − 2 = 11k, the second ratio gives 13(15k + 4) = 17(11k + 4), so 8k = 16 and k = 2. Hence y = 11k + 2 = 24, i.e. the present age of Shyam is 24 years.
The cost of 19 kg Apples is Rs. 1158, that of 17 kg Tomatoes is Rs. 595, and that of 13 kg Oranges is Rs. 949. What is the total cost of 11 kg Apples, 7 kg Tomatoes and 3 kg Oranges?
Pinky invested an amount of Rs. 24500 at the rate of 9% per annum. After how many years will she get a simple interest of Rs. 37485?
50 persons can complete a piece of work in 50 days. How many persons with twice the efficiency will complete the same piece of work in 10 days? | CommonCrawl |
I know that Mg has 2 valence electrons and O has 6, but I don't know what I must do for a complete ionic compound. If I add them up I get 2+6+6+1+1 = 16, but the answer to this question was 24. Is that answer right? If so, how do I get it?
You get 24 if you say Mg2+ has 8 valence electrons (its electron configuration is $1s^2 2s^2 2p^6$) instead of zero.
In that case, there are 8 in Mg2+, and 8 in OH- (6 from O, 1 from H, and an extra 1 because of the -1 charge). Therefore there are $8 + 2 \times 8 = 24$ per formula unit made of one Mg2+ and two OH-.
Some people do argue ionized Group 1A and 2A metals (above Period 2) have 8 valence electrons instead of zero, because technically there are indeed 8 electrons in the outermost electron shell and they are isoelectronic with the noble gas at the end of the previous row (e.g. in your case Mg2+ has the same electron configuration as Ne).
But I think most chemists would prefer to say Group 1A and 2A cations have zero valence electrons, that is, that the valence shell in these ions is the shell corresponding to their Period that used to have the outermost electrons. In that case, the answer to your question is that there are zero valence electrons in Mg2+ and 8 in OH-, for a total of 16 in Mg(OH)2. Note that this is slightly different from your reasoning, however.
An argument for saying Mg2+ has zero valence electrons is that if we draw the Lewis dot structure for the formula unit, I don't know anyone who would put any dots around the Mg. On the other hand, an argument for saying that it has 8 is that we frequently argue that Mg forms the Mg2+ ion because that gives it a complete octet in the outer shell. We rarely present the octet rule as requiring 0, 2 or 8 electrons in the valence shell.
I don't think the correct answer can be found in any mathematically rigorous sense, it depends on your preference in labeling, what you want to call "valence" electrons, and there are nontrivial arguments both ways.
| CommonCrawl
My models: Say I want to construct a portfolio so I maximize my expected return while keeping my risk (measured by Value-at-Risk) lower than my risk target.
My Value-at-Risk is based on 500 scenarios, and my 5% VaR is the absolute value of the 10th lowest observation in my potential PnL vector. Say there are $m$ different assets I can invest in. $x_i$ is the value of asset $i$ in my portfolio and not the number of shares.
My Findings: When I experiment with this model, I get more or less the same results even for low interest rates. That is because my Value-at-Risk is an absolute risk measure: if I buy $k$ times more, then my VaR increases by roughly the same factor, and my risk target doesn't allow that.
My Question: Should I compute VaR differently now that borrowing money have entered the model?
You need to factor borrowing costs into the scenarios (and the currently low interest rates help, so you may want to check with higher rates as well). Since you compute VaR from the scenarios, this will push VaR to the left (in terms of returns, i.e. make it worse).
The key question is if your risk budget can withstand such an increase in VaR, which will go up as well by roughly your leverage factor.
So I would expect substantially different positions when you repeat the optimisation several times with different scenarios from the same distribution. You may want to consider either increasing the numbers of scenarios, or using an alternative (parametric/semiparametric) method to compute VaR. The paper The Hidden Risks of Optimizing Bond Portfolios under VaR explains some of these problems.
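A minimal sketch of that point (all numbers are invented): scenario-based VaR where the borrowing cost is subtracted from each scenario's PnL before picking the tail observation, using the question's own "10th lowest of 500" convention.

```python
import numpy as np

rng = np.random.default_rng(0)

n_scenarios = 500
capital = 1_000_000          # own capital
leverage = 2.0               # total exposure = leverage * capital
borrow_rate = 0.03           # assumed annual financing rate
horizon_years = 10 / 250     # assumed 10-day horizon

# Invented scenario returns for the asset portfolio over the horizon.
scenario_returns = rng.normal(0.0, 0.02, n_scenarios)

exposure = leverage * capital
financing_cost = (leverage - 1.0) * capital * borrow_rate * horizon_years

pnl = exposure * scenario_returns - financing_cost

var = -np.sort(pnl)[9]   # 10th lowest scenario PnL, as in the question's setup
print(round(var, 2))
```

Scaling the leverage up scales both terms, so the VaR constraint tightens roughly proportionally, which is exactly the behaviour described in the question; the financing drag only changes the picture when it is material relative to the risk budget.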
| CommonCrawl
There are infinitely many complex numbers $z$ such that $|z| = 1$. Can anybody explain this to me so I can get the picture?
Here $z = a + ib$, i.e. $z = (a, b)$, which can be represented as a point or vector in the complex plane above. Then $|z|^2 = a^2 + b^2 = 1$, and this itself is the locus of a circle.
which is the equation of a circle. Hence there are infinitely many points on this unit circle which satisfy the given equation.
There are not only infinitely many complex numbers on the unit circle, there are infinitely many Gaussian numbers (both real and imaginary parts being rational) there. For, take a Gaussian integer $z=m+ni$, both $m$ and $n$ being ordinary integers, and calculate $z/\bar z$, something that you should have done many times in high school. You see that the resulting complex number $x+iy$ has both $x$ and $y$ in $\mathbb Q$, and of course its absolute value is $1$.
| CommonCrawl
We use the same notation as LDA. $z$ denotes a labeling of a followed user $g$ with a topic (interest), or simply a topic; $P(z \mid f)$ denotes the multinomial distribution over topics given a follower $f$, and $P(g \mid z)$ denotes the multinomial distribution over followed users given a topic. $\alpha$ and $\beta$ are Dirichlet priors, constraining $P(z \mid f)$ and $P(g \mid z)$, respectively. | CommonCrawl
Abstract: We introduce an affine extension of the Heston model where the instantaneous variance process contains a jump part driven by $\alpha$-stable processes with $\alpha\in(1,2]$. In this framework, we examine the implied volatility and its asymptotic behaviors for both asset and variance options. Furthermore, we examine the jump clustering phenomenon observed on the variance market and provide a jump cluster decomposition which allows us to analyse the cluster processes. | CommonCrawl
Ritchie is a Data Scientist & Big Data Engineer at Xomnia. Besides working at Xomnia, he writes blogs on his website about all kinds of machine- and deep learning topics, such as: "Assessing road safety with computer vision" and "Computer build me a bridge". Expect more interesting blogs from Ritchie in the future.
Last post I've described the Affinity Propagation algorithm. The reason why I wrote about this algorithm was that I was interested in clustering data points without specifying k, i.e. the number of clusters present in the data.
This post continues with the same fascination; however, now we take a generative approach. In other words, we are going to examine which models could have generated the observed data. Through Bayesian inference, we hope to find the hidden (latent) distributions that most likely generated the data points. When there is more than one latent distribution we can speak of a mixture of distributions. If we assume that the latent distributions are Gaussians then we call this model a Gaussian Mixture model.
First, we are going to define a Bayesian model in Edward to determine a multivariate Gaussian mixture model where we predefine k. Just as in k-means clustering, we will have a hyperparameter k that dictates the number of clusters. In the second part of the post, we will reduce the dimensionality of our problem to one dimension and look at a model that is completely nonparametric and will determine k for us.
Before we start with the generative model, we take a look at the Dirichlet distribution. This is a distribution of distributions and can be a little bit hard to get your head around. If we sample from a Dirichlet we'll retrieve a vector of probabilities that sum to 1. These discrete probabilities can be seen as separate events. A Dirichlet distribution can be compared to a bag of badly produced dice, where each die has a totally different probability of throwing 6. Each time you sample a die from the bag you sample another probability of throwing 6. However, you still need to roll the die: actually throwing the die is what samples the event.
This distribution has one parameter $\alpha$ that influences the probability vector that is sampled. Let's take a look at the influence of $\alpha$ on the samples. We can best investigate the Dirichlet distribution in three dimensions; $\theta = [\theta_1, \theta_2, \theta_3]$. We can plot every probability sample $\theta$ as a point in three dimensions. By sampling a lot of distribution points $\theta$, we will get an idea of the Dirichlet distribution $Dir(\alpha)$.
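A sampling-only sketch of that experiment (the post goes on to plot the samples; this just prints summary numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample theta = [theta_1, theta_2, theta_3] from Dir(alpha) for a few alphas.
for alpha in [0.1, 1.0, 10.0]:
    samples = rng.dirichlet([alpha] * 3, size=2000)
    print(alpha,
          samples.sum(axis=1)[:3],          # every sample sums to 1
          samples.std(axis=0).round(3))     # spread shrinks as alpha grows
# Small alpha pushes samples towards the corners of the simplex (one theta_i
# close to 1); large alpha concentrates them near [1/3, 1/3, 1/3].
```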
Now, we have had a nice intermezzo of Dirichlet distributions, we're going to apply this distribution in a Gaussian Mixture model. We will try to cluster the Iris dataset. This is a dataset containing 4 columns of data gathered from 3 different types of flowers.
A Gaussian Mixture model is the Mother Of All Gaussians. For column 0 in our dataframe, it is the combination of the histograms of the data labels.
However, the integral of a Gaussian distribution is equal to 1 (as a probability density should be). If we combine various Gaussians we need to weight them so that the integral still meets the condition of being equal to 1.
We could of course scale the mixture of Gaussians by weights summing to 1. This is where the Dirichlet distribution comes in. Every sample from a Dirichlet sums to one and can be used as the weights of the Gaussian components in the mixture.
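A small illustration of that weighting, with invented component means and standard deviations (this is not the Edward model or the Iris fit from the post):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

k = 3
weights = rng.dirichlet([1.0] * k)       # mixture weights, sum to 1
means = np.array([-2.0, 0.0, 3.0])       # invented component parameters
stds = np.array([0.5, 1.0, 0.8])

x = np.linspace(-8.0, 8.0, 800)
mixture_pdf = sum(w * norm.pdf(x, m, s) for w, m, s in zip(weights, means, stds))

# Because the weights sum to 1, the mixture still integrates to (about) 1.
print(np.sum(mixture_pdf) * (x[1] - x[0]))
```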
Interested in reading the full blog?
Or read the previous blog about the Affinity Propagation algorithm. | CommonCrawl |
MathGraph32 Java is a software package for the creation of geometry or analysis figures.
MathGraph32 was written by a French mathematics teacher.
With a few mouse clicks MathGraph32 can create very sophisticated figures of pure geometry or analysis.
MathGraph32 is first an educational tool.
MathGraph32 is free under GNU GPL licence 3.
You can choose the level of functioning (four levels available). You can even allow the use of your preferred tools, which lets teachers adapt the software for their personal use.
MathGraph32 is a multi-window application.
MathGraph32 Java runs in three languages: French, English and Spanish.
An applet is available to get the MathGraph32 figures displayed on web pages.
You can download MathGraph32 on this page and see examples of dynamic figures on this other page.
Help is available via key F1.
If you notice errors in syntax or spelling, please click on the letter box and send me a message.
Please also report any problems encountered.
Most of the tools of object creation are available through an icon displayed on the left of the window.
The tools available in the top toolbar allow the user to modify the current figure or to read and save files on disk.
Other tools are available via menu items.
The toolbars at the right side of the window allow the user to modify the aspect of new or created objects (color, line style for example).
The menu item Edit - Created objects protocol (keyboard shortcut F5) opens a floating dialog box displaying each created object with full information on it, as shown above. You are also able to modify certain objects by clicking their icon.
MathGraph32 allows the user to create most usual objects such as lines, segments, rays, vectors, circles and arcs of circles, and to create the image of such objects through translations, symmetries, rotations, dilations, similitudes and inversions.
With MathGraph32 you can work in one or several frames.
Barycenters are also available (with any number of points).
All measurements and calculations can be used in the geometrical objects.
For example, a barycenter coefficient can be 2*sqrt(AB) where AB is a length that was measured before.
MathGraph32 is the only dynamic geometry software that can create object locuses.
MathGraph32 allows the user to draw function curves (function curves are in fact point locuses).
You can link a point to a function curve.
MathGraph32 fully implements complex number calculation and representation.
You can define your own functions (real or complex) of one, two or three variables.
MathGraph32 is also able to calculate the derivative function of a real function of one variable.
The syntax used for the calculations is adapted to the user's language.
Macros allow the user to launch an action on the figure.
A macro is represented by a button. A click on the button launches the action.
For example, a click on a macro can start the animation of a point linked to an object, hide or unmask objects, show the trace of objects when a linked point moves, and many other features.
For example, in the following figure (animated by the MathGraph32 applet), click on the five macros.
You can also drag points P, Q, R, $\alpha$, $\beta$ and $\gamma$.
This figure uses object locuses to represent the surface.
With MathGraph32 you can create constructions allowing you to create final objects from a few source objects.
Some constructions are supplied with MathGraph32 Java. To use these constructions, simply use menu item Constructions - Implement predefined construction. | CommonCrawl |