url: string
text: string
date: timestamp[s]
meta: dict
https://en.m.wikibooks.org/wiki/UMD_Probability_Qualifying_Exams/Aug2006Probability
# UMD Probability Qualifying Exams/Aug2006Probability

## Problem 1

Consider a four-state Markov chain with state space $\{1,2,3,4\}$, initial state $X_0 = 1$, and transition probability matrix
$$\begin{pmatrix} 1/4 & 1/4 & 1/4 & 1/4 \\ 1/6 & 1/3 & 1/6 & 1/3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

(a) Compute $\lim_{n\to\infty} P(X_n = 3)$.

(b) Let $\tau = \inf\{n \geq 0 : X_n \in \{3,4\}\}$. Compute $E(\tau)$.

## Problem 2

If $X_1, \ldots, X_n$ are independent uniformly distributed random variables on $[0,1]$, let $X_{(2),n}$ be the second smallest among these numbers. Find a nonrandom sequence $a_n$ such that $T_n = a_n - \log X_{(2),n}$ converges in distribution, and compute the limiting distribution.

### Solution

$$\begin{aligned} P(T_n \leq x) &= P\left(e^{a_n - \log X_{(2),n}} \leq e^x\right) = P\left(\frac{e^{a_n}}{X_{(2),n}} \leq e^x\right) \\ &= P\left(X_{(2),n} \geq e^{a_n - x}\right) = n\left(1 - e^{a_n - x}\right)^{n-1} e^{a_n - x} + \left(1 - e^{a_n - x}\right)^n \end{aligned}$$

(the second smallest of the $X_i$ is at least $t$ exactly when at most one of the $n$ variables falls below $t$). The two terms on the right-hand side look like the limit definition of the exponential function. Can we choose $a_n$ appropriately so that they are? Let $a_n = -\log n$, so that $e^{a_n - x} = \frac{1}{ne^x}$. Then

$$\begin{aligned} P(T_n \leq x) &= n\left(1 - \frac{1}{ne^x}\right)^{n-1} \frac{1}{ne^x} + \left(1 - \frac{1}{ne^x}\right)^n \\ &\to e^{-e^{-x}} e^{-x} + e^{-e^{-x}} \end{aligned}$$

This is the distribution function of $\lim_{n\to\infty} T_n$.

## Problem 3

Suppose that the real-valued random variables $\xi, \eta$ are independent, that $\xi$ has a bounded density $p(x)$ (for $x \in \mathbb{R}$, with respect to Lebesgue measure), and that $\eta$ is integer valued.

(a) Prove that $\zeta = \xi + \eta$ has a density.

(b) Calculate the density of $\zeta$ in the case where $\xi \sim$ Uniform[0,1] and $\eta \sim$ Poisson(1).

### Solution

(a)
$$\begin{aligned} P(\zeta \leq x) &= P(\xi \leq x - \eta) = \sum_{n=-\infty}^{\infty} P(\eta = n) \int_{-\infty}^{x-n} p_{\xi}(t)\,dt \\ &= \lim_{N\to\infty} \sum_{n=-N}^{N} P(\eta = n) \int_{-\infty}^{x-n} p_{\xi}(t)\,dt = \lim_{N\to\infty} \sum_{n=-N}^{N} P(\eta = n) \int_{-\infty}^{x} p_{\xi}(t-n)\,dt \\ &= \lim_{N\to\infty} \int_{-\infty}^{x} \sum_{n=-N}^{N} P(\eta = n) p_{\xi}(t-n)\,dt = \int_{-\infty}^{x} \sum_{n=-\infty}^{\infty} P(\eta = n) p_{\xi}(t-n)\,dt \end{aligned}$$
where the last equality follows from the Monotone Convergence Theorem. Hence we have shown explicitly that $\zeta$ has a density, and it is given by $q_{\zeta}(t) = \sum_{n=-\infty}^{\infty} P(\eta = n)\, p_{\xi}(t-n)$.

(b) When $\xi \sim$ Uniform[0,1] and $\eta \sim$ Poisson(1), we have $p_{\xi}(x) = \chi_{[0,1]}(x)$ and $p_{\eta}(k) = \frac{1}{k!\,e}$ with support on $k = 0, 1, 2, \ldots$. Then from part (a), the density will be $q_{\zeta}(x) = \sum_{n=-\infty}^{\infty} p_{\eta}(n)\, p_{\xi}(x-n) = \sum_{n=0}^{\infty} \frac{1}{n!\,e}\, \chi_{[0,1]}(x-n)$.
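As a quick sanity check of the density derived in Problem 3(b), the short simulation below (a sketch using numpy; the sample size and the range of intervals checked are arbitrary choices, not part of the original problem) draws $\xi + \eta$ and compares the empirical mass on each interval $[n, n+1)$ with $1/(n!\,e)$.

```python
import numpy as np
from math import factorial, e

rng = np.random.default_rng(0)
N = 1_000_000

# zeta = xi + eta with xi ~ Uniform[0,1] and eta ~ Poisson(1), independent
zeta = rng.uniform(0.0, 1.0, N) + rng.poisson(1.0, N)

# On [n, n+1) the derived density is constant, equal to 1/(n! * e),
# so the probability mass of that interval is also 1/(n! * e).
for n in range(5):
    empirical = np.mean((zeta >= n) & (zeta < n + 1))
    predicted = 1.0 / (factorial(n) * e)
    print(f"n={n}: empirical {empirical:.4f}  vs  1/(n! e) = {predicted:.4f}")
```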
## Problem 4

Let $(N(t), t \geq 0)$ be a Poisson process with unit rate, and let $W_{m,n} = \sum_{k=1}^{n} I\{N(\frac{mk}{n}) - N(\frac{m(k-1)}{n}) \geq 2\}$ where $I(A)$ is the indicator of the event $A$.

(a) Find a formula for $E(W_{m,n})$ in terms of $m, n$.

(b) Show that if $m = n^{\alpha}$ with $\alpha > 1/2$ a fixed constant, then $W_{m,n} \to \infty$ in probability.

### Solution

#### (a)

We know that $N(\frac{mk}{n}) - N(\frac{m(k-1)}{n})$ is distributed as Poisson with parameter $m/n$. So
$$P\left(N(\tfrac{mk}{n}) - N(\tfrac{m(k-1)}{n}) \geq 2\right) = 1 - P\left(N(\tfrac{mk}{n}) - N(\tfrac{m(k-1)}{n}) = 0 \text{ or } 1\right) = 1 - e^{-m/n} - \tfrac{m}{n} e^{-m/n}$$
Then
$$\begin{aligned} E(W_{m,n}) &= E\left[\sum_{k=1}^{n} I\left(N(\tfrac{mk}{n}) - N(\tfrac{m(k-1)}{n}) \geq 2\right)\right] \\ &= \sum_{k=1}^{n} E\left[I\left(N(\tfrac{mk}{n}) - N(\tfrac{m(k-1)}{n}) \geq 2\right)\right] \text{ by linearity} \\ &= \sum_{k=1}^{n} P\left(N(\tfrac{mk}{n}) - N(\tfrac{m(k-1)}{n}) \geq 2\right) = n\left(1 - e^{-m/n} - \tfrac{m}{n} e^{-m/n}\right) = n - e^{-m/n}(n+m) \end{aligned}$$

#### (b)

If $\lim_{n\to\infty} W_{n^{\alpha},n} = \infty$ then we must have $I(N(\frac{mk}{n}) - N(\frac{m(k-1)}{n}) \geq 2) = 0$ only finitely often. The probability of this event (from part (a)) is $e^{-n^{\alpha-1}}(1 + n^{\alpha-1})$. This decays to 0 for $\alpha > 1$. Then clearly, we see that the probability that $\lim_{n\to\infty} W_{n^{\alpha},n} = \infty$ is equal to 1. I don't know how to show the result for $1/2 < \alpha < 1$... (One possible route: $W_{m,n}$ is a sum of $n$ independent indicators, so $\operatorname{Var}(W_{m,n}) \leq E(W_{m,n})$, and for $m = n^{\alpha}$ the expansion $1 - e^{-\lambda}(1+\lambda) \sim \lambda^2/2$ with $\lambda = n^{\alpha-1}$ gives $E(W_{m,n}) \sim n^{2\alpha-1}/2 \to \infty$; Chebyshev's inequality then yields $W_{m,n} \to \infty$ in probability.)

## Problem 5

Let $X_0 = 0$ and for $n \geq 1$, $X_n = \sum_{j=1}^{n} \xi_j$ where the r.v.'s $\xi_j$ are i.i.d. with $P(\xi_j = -2) = 1/4$, $P(\xi_j = 1) = 3/4$.

(a) Prove that there exist constants $a, b$ such that $Y_n = X_n - an$ and $Z_n = \exp(bX_n)$ are martingales.

(b) If $\tau = \inf\{n \geq 1 : X_n = 3\}$, then prove that $\tau < \infty$ almost surely and find $E(\tau)$.

(c) Prove that $\exp(bX_n)$ is not a uniformly integrable martingale.

### Solution

#### (a)

We want $Y_n = E[Y_{n+1} \mid \mathcal{F}_n]$. We can compute both sides of this equation explicitly.
$$X_n - an = E[X_{n+1} - a(n+1) \mid X_n] = (X_n - 2)\tfrac{1}{4} + (X_n + 1)\tfrac{3}{4} - a(n+1) = X_n + 1/4 - an - a$$
Thus if we want this equality to hold we must have $a = 1/4$. Similarly, if we want $Z_n = E[Z_{n+1} \mid \mathcal{F}_n]$ then
$$e^{bX_n} = E[e^{bX_{n+1}} \mid X_n] = \tfrac{1}{4} e^{b(X_n - 2)} + \tfrac{3}{4} e^{b(X_n + 1)} = e^{bX_n}\left(\tfrac{1}{4} e^{-2b} + \tfrac{3}{4} e^{b}\right)$$
We can easily check that $b = 0$ gives a trivial solution to the equation. Using the substitution $x = e^b$ we can find another solution for $b$. We should get $b = \log(1 + \sqrt{13}) - \log(6) < 0$.

#### (b)

We've just shown that $Y_n = X_n - \frac{1}{4}n$ is a martingale. Thus $E[Y_n] = E[Y_0] = 0$. Then since the $\xi_j$ are i.i.d., we can apply the Strong Law of Large Numbers to say $\frac{1}{n}(X_n - n/4) \to 0$ almost surely.
In other words, $X_n \to \infty$ almost surely and so certainly $\tau < \infty$ almost surely. Now to calculate $E[\tau]$. We introduce new notation: let $\tau_k(x) = \inf\{n \geq 1 : X_n = k, X_0 = x\}$. Then
$$\begin{aligned} E[\tau_n(0)] &= \tfrac{3}{4}(1 + E[\tau_n(1)]) + \tfrac{1}{4}(1 + E[\tau_n(-2)]) \\ &= \tfrac{3}{4}(1 + E[\tau_{n-1}(0)]) + \tfrac{1}{4}(1 + E[\tau_{n+2}(0)]) = 1 + \tfrac{3}{4} E[\tau_{n-1}(0)] + \tfrac{1}{4} E[\tau_{n+2}(0)] \end{aligned}$$
by a translation-invariance (symmetry) argument. So we can write $E[\tau_1(0)] = 1 + \tfrac{3}{4} \cdot 0 + \tfrac{1}{4} E[\tau_3(0)]$. But I don't know how to calculate $E[\tau_1(0)]$... (One way to finish: apply the optional stopping theorem to the martingale $Y_n = X_n - \frac{1}{4}n$ at $\tau$. Since $X_{n\wedge\tau} \leq 3$, taking expectations of $Y_{n\wedge\tau}$ gives $\frac{1}{4}E[n\wedge\tau] \leq 3$, so $E[\tau] \leq 12 < \infty$; and because the walk moves upward only in unit steps, $X_\tau = 3$ exactly, so $E[X_\tau] = \frac{1}{4}E[\tau]$ yields $E[\tau] = 12$.)

#### (c)

Recall from part (a) that the nontrivial solution for $b$ is a negative number. Then, since $X_n \to \infty$ almost surely by part (b), $Z_n = e^{bX_n} \to 0$ almost surely, so $Z_\infty = 0$. However, $Z_n = e^{bX_n} > 0 = E[Z_\infty \mid \mathcal{F}_n]$, so $Z_n \neq E[Z_\infty \mid \mathcal{F}_n]$. By definition, this means the martingale is not right-closable, and a martingale is right-closable iff it is uniformly integrable. Thus, we're done.
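To corroborate the value $E[\tau] = 12$ suggested above, here is a small simulation sketch (plain Python with numpy; the number of trials is an arbitrary choice of mine, not part of the exam solution).

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 50_000
hitting_times = np.empty(trials)

for t in range(trials):
    x, n = 0, 0
    while x != 3:
        # each step is +1 with probability 3/4 and -2 with probability 1/4
        x += 1 if rng.random() < 0.75 else -2
        n += 1
    hitting_times[t] = n

print("estimated E[tau]:", hitting_times.mean())  # should be close to 12
```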
2020-03-28T21:31:59
{ "domain": "wikibooks.org", "url": "https://en.m.wikibooks.org/wiki/UMD_Probability_Qualifying_Exams/Aug2006Probability", "openwebmath_score": 0.9957934617996216, "openwebmath_perplexity": 3060.299789101024, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357184418847, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.6532573194641024 }
http://mathhelpforum.com/differential-equations/173105-solve-initial-value-problem-given-general-solution.html
# Thread: Solve Initial Value Problem given General Solution.

1. ## Solve Initial Value Problem given General Solution.

   I'm having a little trouble with these problems. I'm sure it's pretty simple. Here's one from my book: Given that the family of functions is the general solution of the differential equation on the indicated interval, find a member of the family that is a solution of the initial-value problem.

   $y = C_1 e^x + C_2 e^{-x}, \quad (-\infty, \infty)$

   $y'' - y = 0, \quad y(0) = 0, \quad y'(0) = 1$

   Can someone show me how to do this problem? I'm just confused on how to work it. I'm sure it's pretty simple but I can't seem to figure it out. Thank you!

2. Your goal is to plug in the initial conditions $y(0)=0, y'(0)=1,$ and solve for $C_{1}, C_{2}.$ So what two equations do you get by plugging in the initial conditions into your general solution?

3. Plugging in $y(0) = 0$: $C_1 + C_2 = 0$. Plugging in $y'(0) = 1$: $C_1 - C_2 = 1$.

4. Good so far! What next?

5. Add the equations together?

6. Sure. What does that give you?

7. $C_1 = 1/2$, $C_2 = -1/2$. Then $y = 0.5e^x - 0.5e^{-x}$. That's it, right?

8. This is the beautiful thing about differential equations: it's quite straightforward to check your own answer (hardly more difficult than differentiating, and quite a bit easier than integrating). So I'm going to ping this question right back at you: 1. Does your solution satisfy the DE? 2. Does your solution satisfy the initial conditions? If so, I'd say you've got it. You should get into the habit of doing two things with every solution to every DE you ever solve: 1. Find the interval of validity. 2. Check your solution by differentiation.

9. Thank you! The reason I wasn't sure quite what to do is that my teacher crammed 3 sections into an hour since we are behind. Thanks for the help though, I appreciate it!

10. You're welcome!
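For readers who want to verify the thread's answer symbolically, here is a short sketch using SymPy (an outside tool, not something used in the original thread).

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Solve y'' - y = 0 with y(0) = 0, y'(0) = 1
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) - y(x), 0),
                ics={y(0): 0, y(x).diff(x).subs(x, 0): 1})

print(sol)  # the solution exp(x)/2 - exp(-x)/2, i.e. sinh(x)
# difference from the thread's answer simplifies to 0
print(sp.simplify(sol.rhs - (sp.exp(x)/2 - sp.exp(-x)/2)))
```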
2016-09-28T00:56:28
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/differential-equations/173105-solve-initial-value-problem-given-general-solution.html", "openwebmath_score": 0.6653549671173096, "openwebmath_perplexity": 604.7267372250319, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357184418847, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.6532573194641023 }
http://xperimex.com/blog/circle-computations/
Last post, we looked at different types of billiard problems, a class of math problems analyzing how light bounces with different setups of mirrors. Notably, we saw how straight lines make for very simple, easy-to-compute mirrors, while others, like circular ones, can be incredibly frustrating. A large portion of last post's content, though, was made up of interactive graphics. While I went over much of the math that goes into solving these types of problems, we skipped over a large part of the math that goes into simulating them. Math is very nice in that many problems can be solved with nothing more than a pen, paper, and your mind, but oftentimes, that's only helpful if you are confident in how to approach the problem. What computers can do is help build our intuition to solve a problem by calculating, drawing, and modelling scenarios with precision and speed we can only wish to achieve. So, today, we'll look at some of the clever math that goes into computer graphics (that we'll later extend), and to introduce such a topic, we'll look at a simple, fundamental problem in graphics: how do you find the intersection between a line and a circle?

## Languishing Lines and Confounding Circles

Before we can even attempt this problem, we're going to have to start from scratch, since we have one slight issue: a computer has no idea what a line or a circle is! So before we can do anything, let's teach our computer how to draw a line.

### Perfect Parameterizations

At its core, computer graphics is displaying a set of pixels with certain colors. If we want to visualize anything on a computer screen, we just need to find all the relevant pixels (coordinates) to light up and color. Because we want to compute these individual coordinates of, say, a line or circle very quickly and easily, almost always we will use vectors. These can be typical column or row vectors you see in linear algebra, or they can even take the form of complex numbers. The reason why these tend to be helpful is that they give very easy ways to compute coordinates for lines, circles, and other shapes.

If we want to draw a line with slope, say 2, we need to ensure that it is constructed by a vector of slope 2. An easy one to find is the vector $v=\small{\begin{bmatrix} 1 \\ 2 \end{bmatrix}}$ since we know that will pass through the point $(1,2)$. So, to get other points beyond this vector, we can scale $v$ by a factor of $t$ to get other vectors (i.e. points) with the same slope. If $t=2$, we get the point $(2,4)$. If $t=1.5$, we get the point $(1.5,3)$. If $t=239470$, we get the point $(239470,478940)$. Whatever you choose $t$ to be, our vector $v$ will give us a point on the line $y=2x$. However, this isn't super helpful, since we are still only restricted to lines that go through the origin at $(0,0)$. So, we can add a starting point $\color{red}{p}$ to our vector equation to offset the line by $\color{red}{p}$, guaranteeing our line goes through the point $\color{red}{p}$ (since that's the coordinate generated by $t=0$). $\large{l = \color{red}{p} + tv}$ Now we just plot every point for $t \in (-\infty, \infty)$, and we get a line with $v$ dictating the slope of our line (negative $t$ values give us coordinates behind $\color{red}{p}$)!

Our parametric line $l$ going through point $\color{red}{p}$. Drag the point to adjust its position.

We can do a similar process for a circle. To parameterize a circle, we'll have to pull from trigonometry. We know that a circle is defined by $x^2 + y^2 = r^2$.
The Pythagorean identity tells us that $\cos^2(\theta) + \sin^2(\theta) = 1$, so we can quickly make the connection that $x=r\cos(\theta)$ and $y=r\sin(\theta)$ (which the geometry justifies). This precisely defines $x$ and $y$ in terms of the parameter $\theta$! Again, though, this is centered at the origin, so we can center the circle around a point $\color{blue}{q}$ by adding it to our parameterization. $\large{c = \color{blue}{q} + r\begin{bmatrix} \cos(\theta) \\ \sin(\theta) \end{bmatrix}}$ where $r$ is some real number for the radius of the circle, and $\theta \in [0, 2\pi)$. We can now easily draw both lines and circles!

Now we also have a circle centered at $\color{blue}{q}$ too. Drag the center point to change its position, and the radial point its radius.

## Collisions and Intersections

Now that we have defined our line and circle for our computer to interpret, we can start thinking about how to detect collisions between a line and a circle.

### Discerning Distances

A good place to start is by looking at how far away the line $l$ is from the center of the circle $\color{blue}{q}$. For reference, the distance from a point to a line is the shortest (i.e. perpendicular) distance from the point to the line. If $l$ is more than a distance of $r$ away from $\color{blue}{q}$, then we know that it's outside the circle and doesn't intersect, and if $l$ is less than a distance $r$ away from $\color{blue}{q}$, then we know it's inside the circle and does intersect.

$l_1$ is a distance less than $r$ away from the center, and clearly intersects the circle. $l_2$ is a distance greater than $r$ away, and clearly does not intersect the circle. $l_3$ is exactly a distance $r$ away, making it tangent to the circle (1 intersection point instead of 2).

Let's look at an individual line and see if we can draw any useful conclusions about this distance. From a given point $\color{red}{p}$ on our line $l$, we can find a new vector between $\color{red}{p}$ and the circle's center $\color{blue}{q}$ as $\overrightarrow{\color{blue}{q} - \color{red}{p}}$. This will form some angle $\theta$ with $l$, more specifically its vector $v$. Recalling that $\color{green}{d}$ is the perpendicular distance between $\color{blue}{q}$ and $l$, we have a right triangle that gives us that $\color{green}{d} = |\overrightarrow{\color{blue}{q} - \color{red}{p}}| \sin \theta$. If you're familiar with your linear algebra, this almost looks like the formula for the magnitude of the cross product: $|v \times u| = |v||u|\sin \theta$. So, writing our two relevant vectors and rearranging we can see that…

$|\overrightarrow{\color{blue}{q} - \color{red}{p}}||v| \sin \theta = |\overrightarrow{\color{blue}{q} - \color{red}{p}} \times v|$

$|\overrightarrow{\color{blue}{q} - \color{red}{p}}| \sin \theta = |\overrightarrow{\color{blue}{q} - \color{red}{p}} \times \frac{v}{|v|}|$

So all we need to do to see if our line intersects our circle is check whether that cross product is less than or equal to the radius of our circle (if you're concerned about the dimensionality of our vectors—cross products only exist in dimensions 3 and 7—we can treat them as 3D vectors with z-component 0, which makes the calculation easier and equivalent to the determinant).
If this isn't totally apparent why this is true, it has to do with the geometrical interpretation for the cross product: we're finding the area of the parallelogram that the two vectors span, and since the area of a parallelogram is $A=\textrm{base}\cdot\textrm{height}$, we're essentially finding the height of that parallelogram by dividing by its base.

Using the closest distance between the circle and line, we can successfully identify when the line intersects our circle.

We have a working condition! Using the cross product, we can identify line-circle intersections with a single line of computation. However, this simple solution does have its limitations. Mainly, this is only a boolean condition; this method only tells us whether or not an intersection occurs, but nothing else. We don't know where on the line it intersects, nor how many times. Sometimes, this doesn't really matter, like when you want to approximate lines intersecting points (since then you can treat points as small circles). But for more complex tasks and graphics like raytracing, this won't cut it.

### Fancy Vector Operations

If we have a point $x$ on our circle, then the distance between $x$ and the center of the circle $\color{blue}{q}$ should be equal to the radius $r$. As an equation, the magnitude of the vector from $x$ to $\color{blue}{q}$ equals $r$.

$|x - \color{blue}{q}| = r$

Moreover, we want this point $x$ on our circle to also be on our line $l$. So, $x = \color{red}{p} + tv$ for some value of $t$. With this in mind, we can substitute $x$ in our previous equation.

$|\color{red}{p} + tv - \color{blue}{q}| = r$

Now, let's square both sides.

$|\color{red}{p} + tv - \color{blue}{q}|^2 = r^2$

This may seem pointless, but it helps us rewrite that left side of the equation. Generally, working with the magnitude of a vector as an operator isn't super helpful, but we can quickly rewrite the square of the magnitude in terms of the dot product, since for any vector $v \cdot v = |v|^2$.

$(\color{red}{p} + tv - \color{blue}{q}) \cdot (\color{red}{p} + tv - \color{blue}{q}) = r^2$

Expanding this out and collecting like terms gives us…

$t^2(v \cdot v) + 2t(v \cdot (\color{red}{p} - \color{blue}{q})) + (\color{red}{p} - \color{blue}{q}) \cdot (\color{red}{p} - \color{blue}{q}) - r^2 = 0$

Which is just a quadratic equation in $t$! With coefficients…

\begin{align} a & = v \cdot v \\ b & = 2(v \cdot (\color{red}{p} - \color{blue}{q})) \\ c & = (\color{red}{p} - \color{blue}{q}) \cdot (\color{red}{p} - \color{blue}{q}) - r^2 \end{align}

…we can solve for $t$ using our trusted quadratic formula (note that $a$, $b$, and $c$ are all outputs of dot products, ensuring they are valid scalars to plug in). $\large{t = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}}$ Remember, $t$ is the scalar that tells us where on our line we are, so if there are real solutions to $t$, then we will have the exact intersection points for our line and the circle!

Our quadratic formula now not only tells us when the line intersects the circle, but also where they intersect.

We can analyze this quadratic like any other to give us insight into our intersection points. Specifically, using the discriminant. When $b^2 - 4ac > 0$, then we get two solutions/intersection points. If $b^2 - 4ac < 0$, then we get no real solutions and therefore no intersection points.
Finally, if $b^2 - 4ac = 0$, then we have exactly one intersection point, and can conclude our line is tangent to the circle. Also, this quadratic can straight up replace our closest-distance method from before, since the point at which our line is closest to the circle corresponds to the vertex of the parabola at $t=\frac{-b}{2a}$. Not to mention, notice how everything we did here was independent of the fact our line and circle exist in two dimensions; we can easily use this for 3D graphics, and even higher dimensions as well to find the intersections between lines and hyperspheres! Below is a raytraced scene I drew of 3 balls using this exact quadratic to compute lighting with shadows and reflections (a.k.a. my formal application to Pixar).

This raytraced scene is just thousands of uses of the quadratic formula.

And to think that we'd never use the quadratic formula in real life.

## I Don't Know Where Else to Put This

Before I end off this post, I want to include some other interesting circle facts since I don't know where else to put them.

### Squaring the Circle (Bounces)

If you have a ray of light start from the circumference of the circle, after a total of $n$ reflections within the circle, the sum of all the angles of reflection will be $n^2$ times the initial angle. Between this and the Basel problem, circles and squares are just weirdly intertwined.

The reason this particular statement is true is because of how much the angle with the horizontal increases after a single bounce. If your light starts at an angle $\alpha$, we can show that every additional bounce will add $2\alpha$ to the angle with respect to the horizontal. With the help of some auxiliary lines, I hope the above picture makes this clear. Then by symmetry of the circle, we can see that each subsequent bounce will also add $2\alpha$ to the angle. Moreover, since our initial angle itself is $\alpha$, every bounce will just be the odd multiples of $\alpha$ (since odd numbers can be thought of as a multiple of 2 plus 1, which is precisely what our angle bounces mimic)! So, for a series of $n$ bounces, the sum of the angles of each reflection is equal to

\begin{align} \alpha + 3\alpha + 5\alpha + 7\alpha + \ldots + (2n-1)\alpha & = \\ \alpha(1+3+5+7+\ldots+(2n-1)) & = \\ \alpha(n\cdot 1 + 2(0+1+2+3+\ldots+(n-1))) & = \\ \alpha(n + 2\cdot\frac{(n-1)(n)}{2}) & = \\ \alpha(n + n^2 - n) & = \alpha n^2 \end{align}

(Yes, I am aware there is a formula for an arithmetic sequence with any initial term, but this is how I remember to solve them, okay.) I didn't know how to fit it in last post with the mention of circular mirrors there, but here seems like a good spot to mention it.

### Perpendicular Parabolas

The intersection points between two orthogonal parabolas lie on a common circle.

To show this is true, we just need to crank out the algebra. To find our intersection points, we need to solve the system of equations $\begin{cases} (x - \color{red}{x_1})^2 = y - \color{red}{y_1} \\ (y - \color{blue}{y_2})^2 = x - \color{blue}{x_2} \end{cases}$ If these individual equations are true for our intersection points, then so is their sum.
$(x - \color{red}{x_1})^2 + (y - \color{blue}{y_2})^2 = y - \color{red}{y_1} + x - \color{blue}{x_2}$ $x^2 - x(2\color{red}{x_1}) + \color{red}{x_1}^2 + y^2 - y(2\color{blue}{y_2}) + \color{blue}{y_2}^2 = y - \color{red}{y_1} + x - \color{blue}{x_2}$ $x^2 - x(2\color{red}{x_1} + 1) + y^2 - y(2\color{blue}{y_2} + 1) = -\color{red}{y_1} - \color{red}{x_1}^2 - \color{blue}{x_2} - \color{blue}{y_2}^2$ $(x - (\color{red}{x_1} + \frac{1}{2}))^2 - (\color{red}{x_1} + \frac{1}{2})^2 + (y - (\color{blue}{y_2} + \frac{1}{2}))^2 - (\color{blue}{y_2} + \frac{1}{2})^2 = -\color{red}{y_1} - \color{red}{x_1}^2 - \color{blue}{x_2} - \color{blue}{y_2}^2$ $(x - (\color{red}{x_1} + \frac{1}{2}))^2 + (y - (\color{blue}{y_2} + \frac{1}{2}))^2 = (\color{red}{x_1} + \frac{1}{2})^2 + (\color{blue}{y_2} + \frac{1}{2})^2 -\color{red}{y_1} - \color{red}{x_1}^2 - \color{blue}{x_2} - \color{blue}{y_2}^2$ While that last line may seem a bit unruly, note that $\color{red}{x_1}$, $\color{red}{y_1}$, $\color{blue}{x_2}$, and $\color{blue}{y_2}$ are all constants, so the right-hand side of that last equation can be summarized as one big constant. $(x - (\color{red}{x_1} + \frac{1}{2}))^2 + (y - (\color{blue}{y_2} + \frac{1}{2}))^2 = C$ That's precisely the equation of a circle with a center at $(\color{red}{x_1} + \frac{1}{2}, \color{blue}{y_2} + \frac{1}{2})$ and radius $\sqrt{C}$, and that's exactly what is plotted above. I have a few more circle tidbits to share, but they have more to expand on in their own posts for another day. Until then, hopefully you found this slight detour into the world of graphics interesting. There are (as you could imagine) a lot more to graphics I want to share. From image homography, to video textures, to even a more in-depth look into raytracing and rasterization, but we'll save those for later.
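To make the post's main computation concrete, here is a short sketch of the line–circle intersection test using exactly the quadratic derived above (Python with numpy; the function name and return convention are my own choices, not from the original post).

```python
import numpy as np

def line_circle_intersections(p, v, q, r):
    """Intersections of the line l(t) = p + t*v with the circle |x - q| = r.
    Returns the list of intersection points (0, 1, or 2 of them)."""
    p, v, q = map(np.asarray, (p, v, q))
    a = np.dot(v, v)
    b = 2 * np.dot(v, p - q)
    c = np.dot(p - q, p - q) - r**2
    disc = b**2 - 4 * a * c
    if disc < 0:                      # line misses the circle
        return []
    ts = sorted({(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)})
    return [p + t * v for t in ts]    # one point when tangent (disc == 0)

# line y = 2x against the circle centered at (1, 1) with radius 1
print(line_circle_intersections(p=[0, 0], v=[1, 2], q=[1, 1], r=1.0))
```

The same function works unchanged for 3D rays and spheres, since only dot products appear, which is the dimension-independence the post points out.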
2022-07-01T20:26:04
{ "domain": "xperimex.com", "url": "http://xperimex.com/blog/circle-computations/", "openwebmath_score": 0.8010462522506714, "openwebmath_perplexity": 242.16657785636568, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357184418847, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.6532573194641023 }
https://bldavies.com/blog/binary-distributions-risky-gambles/
This post shows how binary random variables can be defined by their mean, variance, and skewness. I use this fact to explain why variance does not (always) measure “riskiness.”

Suppose I’m defining a random variable $$X$$. It takes value $$H$$ or $$L<H$$, with $$\Pr(X=H)=p$$. I want $$X$$ to have mean $$\mu$$, variance $$\sigma^2$$, and skewness coefficient $$\DeclareMathOperator{\E}{E} s\equiv\E\left[\left(\frac{X-\mu}{\sigma}\right)^3\right].$$ The target parameters $$(\mu,\sigma,s)$$ uniquely determine $$(H,L,p)$$ via \begin{align} H &= \mu+\frac{s+\sqrt{s^2+4}}{2}\sigma \\ L &= \mu+\frac{s-\sqrt{s^2+4}}{2}\sigma \\ p &= \frac{2}{4+s\left(s+\sqrt{s^2+4}\right)}. \end{align} For example, if I want $$X$$ to be symmetric (i.e., to have $$s=0$$) then I have to choose $$(H,L,p)=(\mu+\sigma,\mu-\sigma,0.5)$$. Increasing the target skewness $$s$$ makes the upside $$(H-\mu)$$ larger but less likely, and the downside $$(\mu-L)$$ smaller but more likely.

This mapping between $$(\mu,\sigma,s)$$ and $$(H,L,p)$$ is useful for generating examples of “risky” gambles. Intuition suggests that a gamble is less risky if its payoffs have lower variance. But Rothschild and Stiglitz (1970) define a gamble $$A$$ to be less risky than gamble $$B$$ if every risk-averse decision-maker (DM) prefers $$A$$ to $$B$$. These two definitions of “risky” agree when

1. payoffs are normally distributed, or
2. DMs have quadratic utility functions.

Under those conditions, DMs’ expected utility depends only on the payoffs’ mean and variance. But if neither condition holds then DMs also care about payoffs’ skewness. We can demonstrate this using binary gambles. Consider these three:

• Gamble $$A$$'s payoffs have mean $$\mu_A=10$$, variance $$\sigma_A^2=36$$, and skewness $$s_A=0$$;
• Gamble $$B$$'s payoffs have mean $$\mu_B=10$$, variance $$\sigma_B^2=144$$, and skewness $$s_B=5$$;
• Gamble $$C$$'s payoffs have mean $$\mu_C=10$$, variance $$\sigma_C^2=9$$, and skewness $$s_C=-3$$.

The means are the same but the distributions are different. Gamble $$i\in\{A,B,C\}$$ gives me a random payoff $$X_i$$, which equals $$H_i$$ with probability $$p_i$$ and $$L_i$$ otherwise. We can compute the $$(H_i,L_i,p_i)$$ using the target parameters $$(\mu_i,\sigma_i,s_i)$$ and the formulas above:

| Gamble $$i$$ | $$H_i$$ | $$L_i$$ | $$p_i$$ |
|---|---|---|---|
| $$A$$ | 16.00 | 4.00 | 0.50 |
| $$B$$ | 72.31 | 7.69 | 0.04 |
| $$C$$ | 10.91 | 0.09 | 0.92 |

Gamble $$A$$ offers a symmetric payoff: its upside $$(H_A-\mu_A)$$ and downside $$(\mu_A-L_A)$$ are equally large and equally likely. Gamble $$B$$ offers a positively skewed payoff: a large but unlikely upside, and a small but likely downside. Gamble $$C$$ offers a negatively skewed payoff: a small but likely upside, and a large but unlikely downside.

These upsides and downsides affect my preferences over gambles. Suppose I get utility $$u(x)\equiv\log(x)$$ from receiving payoff $$x$$. Then gamble $$A$$ gives me expected utility \begin{align} \E[u(X_A)] &\equiv p_Au(H_A)+(1-p_A)u(L_A) \\ &= 0.5\log(16)+(1-0.5)\log(4) \\ &= 2.08, \end{align} while $$B$$ gives me $$\E[u(X_B)]=2.12$$ and $$C$$ gives me $$\E[u(X_C)]=1.99$$. So I prefer gamble $$B$$ to $$A$$, even though $$B$$'s payoffs have four times the variance of $$A$$'s. I also prefer $$B$$ to $$C$$, even though $$B$$'s payoffs have sixteen times the variance of $$C$$'s. How can I be risk averse—that is, have a concave utility function—but prefer gambles with higher variance? The answer is that I also care about skewness: I prefer gambles with large upsides and small downsides.
These “sides” of risk are not captured by variance. So is gamble $$C$$ “riskier” than gambles $$A$$ and $$B$$? Rothschild and Stiglitz wouldn’t say so. To see why, suppose my friend has utility function $$v(x)=\sqrt{x}$$. Then gamble $$A$$ gives him expected utility $$\E[v(X_A)]=3$$, while $$B$$ gives him $$\E[v(X_B)]=2.98$$ and $$C$$ gives him $$\E[v(X_C)]=3.05$$. My friend and I have opposite preferences: he prefers $$C$$ to $$A$$ to $$B$$, whereas I prefer $$B$$ to $$A$$ to $$C$$. But we’re both risk averse: our utility functions are both concave! Thus, it isn’t true that every risk-averse decision-maker prefers $$A$$ or $$B$$ to $$C$$. Different risk-averse DMs have different preference rankings. This makes the three gambles incomparable under Rothschild and Stiglitz’s definition of “risky.”
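A compact numerical check of the mapping $(\mu,\sigma,s)\mapsto(H,L,p)$ and of the expected-utility comparison is sketched below (plain Python; the function and variable names are my own, not from the post).

```python
import math

def binary_gamble(mu, sigma, s):
    """Return (H, L, p) for a binary random variable with the given
    mean, standard deviation, and skewness coefficient."""
    root = math.sqrt(s**2 + 4)
    H = mu + (s + root) / 2 * sigma
    L = mu + (s - root) / 2 * sigma
    p = 2 / (4 + s * (s + root))
    return H, L, p

def expected_utility(gamble, u):
    H, L, p = gamble
    return p * u(H) + (1 - p) * u(L)

gambles = {"A": binary_gamble(10, 6, 0),     # sigma^2 = 36
           "B": binary_gamble(10, 12, 5),    # sigma^2 = 144
           "C": binary_gamble(10, 3, -3)}    # sigma^2 = 9

for name, g in gambles.items():
    print(name, [round(v, 2) for v in g],
          "log utility:", round(expected_utility(g, math.log), 2),
          "sqrt utility:", round(expected_utility(g, math.sqrt), 2))
```

Running it reproduces the table and the two opposite preference rankings discussed above.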
2023-03-24T02:08:47
{ "domain": "bldavies.com", "url": "https://bldavies.com/blog/binary-distributions-risky-gambles/", "openwebmath_score": 0.9989267587661743, "openwebmath_perplexity": 1220.959206355209, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357184418847, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.6532573194641023 }
http://mathoverflow.net/questions/69792/can-a-metric-conformal-to-a-kahler-metric-be-kahler/69798
# Can a metric conformal to a Kahler metric be Kahler? Let $X$ be a non-compact complex manifold of dimension at least 2 equipped with a Kahler metric $\omega$. Take a smooth positive function $f : X \to \mathbb R$, and define a new hermitian metric on $X$ by $\tilde \omega = f \omega$. If $f$ is non-constant, then can this new metric ever be Kahler? If $\dim_{\mathbb C} X = 1$ the new metric is automatically Kahler because of dimension. If $\dim_{\mathbb C} X \geq 2$ and if $X$ is compact the new metric is never Kahler. Indeed, we have that $d \tilde \omega = d f \wedge \omega$ is zero if and only if $df$ is zero by the hard Lefschetz theorem, so $f$ must be constant if $\tilde \omega$ is Kahler. If $X$ is not compact, then to the best of my knowledge we do not have the hard Lefschetz theorem, but does the conclusion on metrics conformal to a Kahler metric still hold? - You don't have to use hard Lefschetz to conclude $df=0$ from $\omega\wedge df=0$. This is a linear algebra fact, valid pointwise : if $\alpha \in T_x^*X$ satisfies $\omega_x \wedge \alpha=0$, then $\alpha=0$ (of course, assuming $\dim_R X \geq 4$. The short argument is that, $\omega_x^{n-1}\wedge : T^*_x X\to \bigwedge^{2n-1} T^*_x X$ is an isomorphism ("pointwise not so hard Lefschetz", so to speak). This said, as in Francesco's answer, you can have non proportional conformal riemannian metrics that are Kähler with respect to different complex structures, so that the corresponding 2-forms are no longer (pointwise) proportional. - Nice. Are we considering $\omega$ as a symplectic form here (instead of a metric)? –  Gunnar Þór Magnússon Jul 8 '11 at 13:57 @Gunnar: definitely yes, $\omega$ is a 2-form, otherwise $d\omega$ wouldn't make much sense. The riemannian metric is $\omega(J.,.)$. –  BS. Jul 8 '11 at 14:04 Looks like we were answering at the same time. At least I gave a slightly different explanation, so I might as well leave my answer there. –  Spiro Karigiannis Jul 8 '11 at 14:10 The paper by Apostolov, Calderbank, and Gauduchon that Francesco mentions find different Kaehler structures whose associated Riemannian metrics are conformal to each other. But they correspond to different complex structures $J_+$ and $J_-$. I believe what Gunnar is asking is whether or not one can have $f \omega$ be closed and thus Kaehler with respect to the same complex structure $J$ associated to $\omega$. The answer is no, and this has nothing at all to do with compactness or the hard Lefschetz theorem. On any almost Hermitian manifold $(M, J, \omega, g)$, it is a fact that the wedge product with the Kaehler form $\omega$ on the space of $1$-forms is injective, regardless of the compactness of $M$, the integrability of $J$, or the closedness of $\omega$. This follows, for example, from the identity $$\ast( \omega \wedge (\ast ( \omega \wedge \alpha) ) = - (m-1) \alpha$$ where $\alpha$ is any $1$-form on $M$, where $\ast$ is the Hodge star operator, and the real dimension of $M$ is $2m$. (One sees that the only requirement is that $m>1$.) - There are examples in real dimension $4$ of manifolds having two conformally equivalent Kahler metrics, inducing the same conformal structure but with opposite orientation. See the paper Ambikahler geometry, ambitoric surfaces and Einstein 4-orbifolds by Apostolov, Calderbank and Gauduchon. - Thanks Francesco, that's an interesting article. However it does seem to answer a slightly different question. 
In the notation of the article, we have a fixed complex structure $J$ on $M$ (such that $X = (M,J)$) and ask for structures $(g_1,J,\omega_1)$ and $(g_2, J, \omega_2)$ such that: $g_2 = f^2 g_1$, and ask if the second structure can be Kahler if the first one is and if $f$ is non-constant. In particular, if this is possible, then both structures would induce the same orientation on $M$. –  Gunnar Þór Magnússon Jul 8 '11 at 13:54 Ah ok, I did not notice that your complex structure $J$ was fixed. At any rate, I hope you can find this paper useful :-) –  Francesco Polizzi Jul 8 '11 at 13:58
2015-03-07T04:30:47
{ "domain": "mathoverflow.net", "url": "http://mathoverflow.net/questions/69792/can-a-metric-conformal-to-a-kahler-metric-be-kahler/69798", "openwebmath_score": 0.9492377042770386, "openwebmath_perplexity": 247.06124554803515, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357179075084, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.6532573191085227 }
http://www.chegg.com/homework-help/questions-and-answers/quantity-represented-function-changes-time-e-constant--part-quantity-represented-function--q1224932
Part A: The quantity represented by is a function that changes over time (i.e., is not constant). true / false

Part B: The quantity represented by is a function of time (i.e., is not constant). true / false

Part C: The quantity represented by is a function of time (i.e., is not constant). true / false

Part D: The quantity represented by is a function of time (i.e., is not constant). true / false

Part E: A particle moves with constant acceleration . The expression represents the particle's velocity at what instant in time?
- at time
- at the "initial" time
- when a time has passed since the beginning of the particle's motion, when its velocity was

More generally, the equations of motion can be written as and . Here is the time that has elapsed since the beginning of the particle's motion, that is, , where is the current time and is the time at which we start measuring the particle's motion. The terms and are, respectively, the position and velocity at . As you can now see, the equations given at the beginning of this problem correspond to the case , which is a convenient choice if there is only one particle of interest.
2014-10-01T16:05:35
{ "domain": "chegg.com", "url": "http://www.chegg.com/homework-help/questions-and-answers/quantity-represented-function-changes-time-e-constant--part-quantity-represented-function--q1224932", "openwebmath_score": 0.8618114590644836, "openwebmath_perplexity": 304.0611723275182, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357173731318, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.6532573187529429 }
https://socratic.org/questions/how-do-you-find-the-domain-and-range-of-y-2x-3-8
# How do you find the domain and range of y = 2x^3 + 8?

Feb 22, 2018

Range: $\left(- \infty , \infty\right)$

Domain: $\left(- \infty , \infty\right)$

#### Explanation:

Range: How BIG can $y$ be? How SMALL can $y$ be? Because the cube of a negative number is negative and the cube of a positive number is positive, $y$ takes arbitrarily large positive and negative values; therefore, the range is $\left(- \infty , \infty\right)$.

Domain: How BIG can $x$ be so that the function is always defined? How SMALL can $x$ be so that the function is always defined? Note that this function is never undefined because there is no variable in the denominator. $y$ is continuous for all values of $x$; therefore, the domain is $\left(- \infty , \infty\right)$.
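As a quick check in code (a sketch using SymPy, not part of the original answer), one can confirm both the domain and the range directly.

```python
import sympy as sp
from sympy.calculus.util import continuous_domain, function_range

x = sp.symbols('x', real=True)
f = 2*x**3 + 8

print(continuous_domain(f, x, sp.S.Reals))  # Reals -> domain is (-oo, oo)
print(function_range(f, x, sp.S.Reals))     # Reals -> range is (-oo, oo)
```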
2019-11-20T06:07:18
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-find-the-domain-and-range-of-y-2x-3-8", "openwebmath_score": 0.5125152468681335, "openwebmath_perplexity": 174.58678597382922, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357173731318, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.6532573187529429 }
http://rosa.unipr.it/fsda/GUImad.html
# Documentation

GUImad shows the necessary calculations to obtain MAD, S_M or S_Me in a GUI.

## Syntax

• out=GUImad(x)
• out=GUImad(x,flag)
• out=GUImad(x,flag,w)

## Description

This routine shows all the intermediate steps necessary to compute the following three variability indexes:

$MAD = Me(|x_{i} - Me|n_i).$

$S_{M} = \frac{\sum_{i=1}^{r}|x_{i}-M| n_i}{n}$

$S_{Me} = \frac{\sum_{i=1}^r |x_i-Me|n_i}{n}.$

out = GUImad(x): example of calculation of MAD.

out = GUImad(x, flag): example of calculation of S_Me.

out = GUImad(x, flag, w): example of calculation of S_M.

## Examples

### Example of calculation of MAD.

MAD = median absolute deviation from the median.

x=[98 105 85 110 102];
y=GUImad(x);

### Example of calculation of SMe.

SMe = mean absolute deviation from the median.

x=[98 105 85 110 102];
y=GUImad(x,2);

### Example of calculation of SM.

SM = mean absolute deviation from the mean.

x=[98 105 85 110 102];
y=GUImad(x,0);

## Related Examples

### MAD in a frequency distribution.

MAD = median absolute deviation from the median.

% Frequency distribution of the number of children in a sample of 400
% families. (See page 29 of [CMR]).
X=[0 112; 1 156; 2 111; 3 16; 4 4; 7 1];
x=X(:,1);
freq=X(:,2);
flag=1;
GUImad(x,flag,freq);

### SM in a frequency distribution.

SM = mean absolute deviation from the mean.

% Frequency distribution of the number of children in a sample of 400
% families. (See page 29 of [CMR]).
X=[0 112; 1 156; 2 111; 3 16; 4 4; 7 1];
x=X(:,1);
freq=X(:,2);
flag=0;
GUImad(x,flag,freq);

### SMe in a frequency distribution.

SMe = mean absolute deviation from the median.

% Frequency distribution of the number of children in a sample of 400 families. (See page 29 of [CMR]).
X=[0 112; 1 156; 2 111; 3 16; 4 4; 7 1];
x=X(:,1);
freq=X(:,2);
flag=2;
GUImad(x,flag,freq);

## Input Arguments

### x — vector of numeric data. Vector.

Vector containing strictly numerical data.

Data Types: double

### flag — median or mean absolute deviation from median or mean absolute deviation from mean. Scalar.

If flag=1 (default), MAD is based on medians, i.e. median(abs(x-median(x))). If flag=0, $S_M$ is computed (mean absolute deviation), i.e. mean(abs(x-mean(x))). If flag=2, $S_{Me}$ is computed (mean absolute deviation from the median), i.e. mean(abs(x-median(x))).

Example: 1

Data Types: double

### w — weights. Vector.

Vector of the same length of x containing the weights assigned to each observation.

Example: 1:10

Data Types: double

## Output Arguments

### out — detailed output to compute the index. Table.

Table with n+1 rows (where n is the length of x) containing what is shown in the GUI. Last row contains the totals.

## References

Cerioli, A., Milioli, M.A., Riani, M. (2016), "Esercizi di statistica (Quinta edizione)". [CMR]
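For readers not using MATLAB/FSDA, here is a rough Python sketch of the same three indexes applied to the frequency-distribution example (my own code, not part of the FSDA toolbox; I assume the third argument is being used as frequencies, as in the examples above).

```python
import numpy as np

def mad_indexes(values, freq):
    x = np.repeat(values, freq)             # expand the frequency distribution
    M, Me = x.mean(), np.median(x)
    return {
        "MAD":  np.median(np.abs(x - Me)),  # median absolute deviation from the median
        "S_M":  np.mean(np.abs(x - M)),     # mean absolute deviation from the mean
        "S_Me": np.mean(np.abs(x - Me)),    # mean absolute deviation from the median
    }

values = np.array([0, 1, 2, 3, 4, 7])
freq   = np.array([112, 156, 111, 16, 4, 1])
print(mad_indexes(values, freq))
```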
2022-01-22T01:43:42
{ "domain": "unipr.it", "url": "http://rosa.unipr.it/fsda/GUImad.html", "openwebmath_score": 0.8884502053260803, "openwebmath_perplexity": 6264.714619171817, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357173731318, "lm_q2_score": 0.6654105653819836, "lm_q1q2_score": 0.6532573187529429 }
https://edu-answer.com/mathematics/question13780709
08.11.2019 01:31, Aldcperformer

# Suppose we are interested in bidding on a piece of land and we know one other bidder is interested. The seller announced that the highest bid in excess of $10,400 will be accepted. Assume that the competitor's bid x is a random variable that is uniformly distributed between $10,400 and $14,600.

a. Suppose you bid $12,000. What is the probability that your bid will be accepted (to 2 decimals)?

b. Suppose you bid $14,000. What is the probability that your bid will be accepted (to 2 decimals)?

c. What amount should you bid to maximize the probability that you get the property (in dollars)?

d. Suppose that you know someone is willing to pay you $16,000 for the property. You are considering bidding the amount shown in part (c), but a friend suggests you bid $13,200. If your objective is to maximize the expected profit, what is your bid? (Options: 1. stay with the bid in part (c), or 2. bid $13,200 to maximize profit.) What is the expected profit for this bid (in dollars)?
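A short sketch of how the uniform-distribution calculations in parts (a), (b), and (d) can be set up numerically (my own illustration in Python; the problem statement itself does not include any code, and the resale value and bounds below simply restate the assumptions given above).

```python
# Competitor's bid X ~ Uniform(10400, 14600); a bid b is accepted when X < b.
LOW, HIGH = 10_400, 14_600
RESALE = 16_000

def win_prob(bid):
    """P(bid accepted) under the uniform assumption, clipped to [0, 1]."""
    return min(max((bid - LOW) / (HIGH - LOW), 0.0), 1.0)

def expected_profit(bid):
    return win_prob(bid) * (RESALE - bid)

print(round(win_prob(12_000), 2))            # part (a)
print(round(win_prob(14_000), 2))            # part (b)
print(round(expected_profit(13_200), 2),
      round(expected_profit(HIGH), 2))       # compares the two options in part (d)
```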
2021-06-20T06:39:58
{ "domain": "edu-answer.com", "url": "https://edu-answer.com/mathematics/question13780709", "openwebmath_score": 0.820716917514801, "openwebmath_perplexity": 1745.865541804866, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357173731318, "lm_q2_score": 0.6654105653819835, "lm_q1q2_score": 0.6532573187529428 }
https://www.fastcalculus.com/calculus-other-calculus-problems-28398/
# Calculus: Other Calculus Problems – #28398

Question: For the function
$$f(x) = \begin{cases} 2 & \text{if } x < 0 \\ -x^2 + x & \text{if } 0 \le x \le 2 \\ 1 & \text{if } x > 2 \end{cases}$$

(a) Graph in detail.

(b) Find all values of x where the function is discontinuous.

(c) Find the limit from the left and right at any value of x found in part (b).
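A minimal sketch (using SymPy; not part of the original problem statement) of how one might check the one-sided limits at the candidate points x = 0 and x = 2 for parts (b) and (c).

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Piecewise((2, x < 0), (-x**2 + x, x <= 2), (1, True))

for a in (0, 2):
    left  = sp.limit(f, x, a, dir='-')
    right = sp.limit(f, x, a, dir='+')
    # mismatched one-sided limits signal a jump discontinuity at x = a
    print(f"x = {a}: left limit {left}, right limit {right}, f({a}) = {f.subs(x, a)}")
```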
2019-07-24T00:03:24
{ "domain": "fastcalculus.com", "url": "https://www.fastcalculus.com/calculus-other-calculus-problems-28398/", "openwebmath_score": 1.0000100135803223, "openwebmath_perplexity": 3175.1661732183616, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357259231533, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573179282345 }
https://socratic.org/questions/how-do-you-find-the-axis-of-symmetry-graph-and-find-the-maximum-or-minimum-value-36
# How do you find the axis of symmetry, graph and find the maximum or minimum value of the function y = 2x^2 + 12x - 7? Jan 10, 2016 Minimum at vertex (-3, -25) #### Explanation: X-coordinate of axis of symmetry: $x = - \frac{b}{2 a} = - \left(\frac{12}{4}\right) = - 3$ Since a = 2 > 0, the parabola graph opens upward, there is minimum at vertex. X-coordinate of vertex: x = -3 (vertex is on axis of symmetry) Y-coordinate of vertex: y = y(-3) = 2(9) - 36 - 7 = -25.
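A two-line numerical check of the vertex (a sketch in plain Python, not part of the original answer).

```python
a, b, c = 2, 12, -7

x_vertex = -b / (2 * a)                          # axis of symmetry: x = -3
y_vertex = a * x_vertex**2 + b * x_vertex + c    # minimum value: -25
print(x_vertex, y_vertex)
```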
2021-09-21T22:34:31
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-find-the-axis-of-symmetry-graph-and-find-the-maximum-or-minimum-value-36", "openwebmath_score": 0.25010302662849426, "openwebmath_perplexity": 1023.3533762663305, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357259231532, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573179282344 }
http://www.theanalogdog.com/Wiki/doku.php?id=parallel_rlc_circuit
# The Analog Dog ### Site Tools parallel_rlc_circuit # Parallel RLC Circuit Time Domain Analysis ## Nodal Equation $i_C+i_L+i_R=0$ $C\frac{dv_{OUT}}{dt} + \frac{v_{OUT}}{R} + \frac{1}{L}\int_{-\infty}^t{v_{OUT}dt}=0$ Taking a derivative and dividing by C gives $\frac{d^2v_{OUT}}{dt^2} + \frac{1}{RC}\frac{dv_{OUT}}{dt} + \frac{1}{LC}v_{OUT}=0$ ## Solution using method of homogeneous and particular solutions First, notice that the nodal equation is a second order homogeneous equation. Therefore, we will not need to find a particular solution. As usual with problems like these, guess that the solution is $e^{st}$. Plugging in $v_{OUT}=e^{st}$ into the nodal equation yields: $e^{st}(s^2+\frac{1}{RC}s+\frac{1}{LC})=0$ To make the variables more recognizable, let's put the equation into the standard form. $e^{st}(s^2+2\alpha s+w_o^2)=0$ where $\alpha=\frac{1}{2RC}$ $w_o=\sqrt{\frac{1}{LC}}$ Using the quadratic formula, one can solve for the two values of s that satisfy the equation. $s_1=-\alpha+\sqrt{\alpha^2-w_o^2}$ $s_2=-\alpha-\sqrt{\alpha^2-w_o^2}$ To further simplify the above equations for s, one can define $w_d$ as: $w_d=\sqrt{w_o^2-\alpha^2}$ Using the new definition yields: $s_1=-\alpha+jw_d$ $s_2=-\alpha-jw_d$ There are two properties of linear homogeneous differential equations that are useful. A constant times the solution is still a solution, and the sum of solutions is still a solution. Using those two properties we can write: $v_{OUT}=A_1e^{s_1t}+A_2e^{s_2t}$ The values of $A_1$ and $A_2$ can be found using the initial conditions. As an example, let's say that the output voltage at $t=0$ is $v_{INIT}$ and the derivative of the output voltage is 0 at $t=0$. So one can write: $v_{INIT}=A_1+A_2$ $0=A_1s_1+A_2s_2$ Solving the above two equations for $A_1$ and $A_2$ yields: $A_1=\frac{v_{INIT}}{1-\frac{s_1}{s_2}}$ $A_2=v_{INIT}\left(1-\frac{1}{1-\frac{s_1}{s_2}}\right)$ As an example, I used the $v_{INIT}=1$ ,$R=2.0$ ,$L=0.1$, and $C=0.5$ and the above analysis to plot the output voltage as a function of time in wxmaxima rlc_p_maxima.pdf. to make sure the analysis was correct, I compared it to a SPICE simulation.
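As a cross-check of the closed-form solution, here is a small Python sketch (my own, using SciPy rather than wxMaxima or SPICE; the component values and the initial conditions v(0)=1, v'(0)=0 match the example in the text) that integrates the second-order ODE numerically and compares it with the underdamped formula.

```python
import numpy as np
from scipy.integrate import solve_ivp

R, L, C = 2.0, 0.1, 0.5
v_init = 1.0
alpha = 1.0 / (2 * R * C)
w0 = 1.0 / np.sqrt(L * C)
wd = np.sqrt(w0**2 - alpha**2)          # underdamped here since alpha < w0

# v'' + (1/RC) v' + (1/LC) v = 0, written as a first-order system
def rhs(t, y):
    v, dv = y
    return [dv, -dv / (R * C) - v / (L * C)]

t = np.linspace(0, 5, 500)
num = solve_ivp(rhs, (0, 5), [v_init, 0.0], t_eval=t).y[0]

# Closed form with the constants fixed by v(0)=v_init, v'(0)=0
# (an equivalent rewriting of A1*exp(s1*t) + A2*exp(s2*t) for complex s1, s2)
closed = v_init * np.exp(-alpha * t) * (np.cos(wd * t) + (alpha / wd) * np.sin(wd * t))

print("max abs difference:", np.max(np.abs(num - closed)))  # should be tiny
```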
2020-10-20T05:44:14
{ "domain": "theanalogdog.com", "url": "http://www.theanalogdog.com/Wiki/doku.php?id=parallel_rlc_circuit", "openwebmath_score": 0.9161646962165833, "openwebmath_perplexity": 269.4936248337712, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357253887771, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573175726549 }
https://stacks.math.columbia.edu/tag/0BEM
Lemma 33.45.1. Let $k$ be a field. Let $X$ be a proper scheme over $k$. Let $\mathcal{F}$ be a coherent $\mathcal{O}_ X$-module. Let $\mathcal{L}_1, \ldots , \mathcal{L}_ r$ be invertible $\mathcal{O}_ X$-modules. The map $(n_1, \ldots , n_ r) \longmapsto \chi (X, \mathcal{F} \otimes \mathcal{L}_1^{\otimes n_1} \otimes \ldots \otimes \mathcal{L}_ r^{\otimes n_ r})$ is a numerical polynomial in $n_1, \ldots , n_ r$ of total degree at most the dimension of the support of $\mathcal{F}$.

Proof. We prove this by induction on $\dim (\text{Supp}(\mathcal{F}))$. If this number is zero, then the function is constant with value $\dim _ k \Gamma (X, \mathcal{F})$ by Lemma 33.33.3. Assume $\dim (\text{Supp}(\mathcal{F})) > 0$. If $\mathcal{F}$ has embedded associated points, then we can consider the short exact sequence $0 \to \mathcal{K} \to \mathcal{F} \to \mathcal{F}' \to 0$ constructed in Divisors, Lemma 31.4.6. Since the dimension of the support of $\mathcal{K}$ is strictly less, the result holds for $\mathcal{K}$ by induction hypothesis and with strictly smaller total degree. By additivity of the Euler characteristic (Lemma 33.33.2) it suffices to prove the result for $\mathcal{F}'$. Thus we may assume $\mathcal{F}$ does not have embedded associated points. If $i : Z \to X$ is a closed immersion and $\mathcal{F} = i_*\mathcal{G}$, then we see that the result for $X$, $\mathcal{F}$, $\mathcal{L}_1, \ldots , \mathcal{L}_ r$ is equivalent to the result for $Z$, $\mathcal{G}$, $i^*\mathcal{L}_1, \ldots , i^*\mathcal{L}_ r$ (since the cohomologies agree, see Cohomology of Schemes, Lemma 30.2.4). Applying Divisors, Lemma 31.4.7 we may assume that $X$ has no embedded components and $X = \text{Supp}(\mathcal{F})$.

Pick a regular meromorphic section $s$ of $\mathcal{L}_1$, see Divisors, Lemma 31.25.4. Let $\mathcal{I} \subset \mathcal{O}_ X$ be the ideal of denominators of $s$ and consider the maps $\mathcal{I}\mathcal{F} \to \mathcal{F},\quad \mathcal{I}\mathcal{F} \to \mathcal{F} \otimes \mathcal{L}_1$ of Divisors, Lemma 31.24.5. These are injective and have cokernels $\mathcal{Q}$, $\mathcal{Q}'$ supported on nowhere dense closed subschemes of $X = \text{Supp}(\mathcal{F})$. Tensoring with the invertible module $\mathcal{L}_1^{\otimes n_1} \otimes \ldots \otimes \mathcal{L}_ r^{\otimes n_ r}$ is exact, hence using additivity again we see that \begin{align*} & \chi (X, \mathcal{F} \otimes \mathcal{L}_1^{\otimes n_1} \otimes \ldots \otimes \mathcal{L}_ r^{\otimes n_ r}) - \chi (X, \mathcal{F} \otimes \mathcal{L}_1^{\otimes n_1 + 1} \otimes \ldots \otimes \mathcal{L}_ r^{\otimes n_ r}) \\ & = \chi (\mathcal{Q} \otimes \mathcal{L}_1^{\otimes n_1} \otimes \ldots \otimes \mathcal{L}_ r^{\otimes n_ r}) - \chi (\mathcal{Q}' \otimes \mathcal{L}_1^{\otimes n_1} \otimes \ldots \otimes \mathcal{L}_ r^{\otimes n_ r}) \end{align*} Thus we see that the function $P(n_1, \ldots , n_ r)$ of the lemma has the property that $P(n_1 + 1, n_2, \ldots , n_ r) - P(n_1, \ldots , n_ r)$ is a numerical polynomial of total degree $<$ the dimension of the support of $\mathcal{F}$. Of course by symmetry the same thing is true for $P(n_1, \ldots , n_{i - 1}, n_ i + 1, n_{i + 1}, \ldots , n_ r) - P(n_1, \ldots , n_ r)$ for any $i \in \{ 1, \ldots , r\}$. A simple arithmetic argument shows that $P$ is a numerical polynomial of total degree at most $\dim (\text{Supp}(\mathcal{F}))$. $\square$
2022-06-26T07:27:31
{ "domain": "columbia.edu", "url": "https://stacks.math.columbia.edu/tag/0BEM", "openwebmath_score": 0.9966767430305481, "openwebmath_perplexity": 298.46690319935027, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357253887771, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573175726549 }
http://math.stackexchange.com/questions/471484/inequality-related-to-the-continued-fraction-expansion-of-sqrt3
# Inequality related to the continued fraction expansion of sqrt(3) I am working on a problem related to the continued fraction expansion of $\sqrt3$. If $p_k$ and $q_k$ denote the numerator and denominator, respectively, of the $k$th convergent, I should show that $$\left|\sqrt{3}-\frac{p_{2n+1}}{q_{2n+1}}\right| < \frac{1}{2\sqrt{3}q_{2n+1}^2}\;.$$ I have determined that the continued fraction expansion of $\sqrt3$ is $[1;\overline{1,2}]$ and I am able to show the equality apart from the factor $2\sqrt3$ in the denominator. Any suggestions? - Note that $p_{2n+1}^2 - 3 q_{2n+1}^2 = 1$. – Daniel Fischer Aug 19 '13 at 19:37 Use the standard fact that the absolute value of the difference is less than $\dfrac{1}{q_{2k+1}q_{2k+2}}$. – André Nicolas Aug 19 '13 at 19:48 And then what? From the recursive relation $q_{2k+2}=2q_{2k+1}+q_{2k}$, it seems like I want to show that $2q_{2k+1}+q_{2k}>2\sqrt3 q_{2k+1}$. This, however, implies $q_{2k}>2(\sqrt{3}-1)q_{2k+1}>q_{2k+1}$, a contradiction. – Alexandre Vandermonde Aug 19 '13 at 19:59 From the period length of $2$, you obtain that for all $n$, you have $$p_{2n+1}^2 - 3q_{2n+1}^2 = 1,$$ and from that you can deduce \begin{align} \frac{p_{2n+1}}{q_{2n+1}} - \sqrt{3} &= \frac{p_{2n+1} - \sqrt{3}q_{2n+1}}{q_{2n+1}}\\ &= \frac{(p_{2n+1} - \sqrt{3}q_{2n+1})(p_{2n+1} + \sqrt{3}q_{2n+1})}{q_{2n+1}(p_{2n+1} + \sqrt{3}q_{2n+1})}\\ &= \frac{1}{\left(\frac{p_{2n+1}}{q_{2n+1}}+\sqrt{3}\right)q_{2n+1}^2}\\ &< \frac{1}{2\sqrt{3}q_{2n+1}^2} \end{align} since $\frac{p_{2n+1}}{q_{2n+1}}>\sqrt{3}$ (which you can either deduce from the general fact that odd convergents are larger than [or equal to] the value of the continued fraction, or from the fact mentioned above). - Thank you your reply! If you have the time, would you consider leaving a hint as to why $p_{2n+1}^2-3q_{2n+1}^2=1$? I can't seem to get it right. – Alexandre Vandermonde Aug 19 '13 at 20:28 Uh, sorry, didn't think of the fact that you're probably not yet at that part of the theory. I guess the simplest way would then be an induction showing that $p_{2n}^2 - 3q_{2n}^2 = -2$, $p_{2n+1}^2-3q_{2n+1}^2 = 1$ and $p_{n+1}p_p - 3q_{n+1}q_n = (-1)^{n+1}$, using $p_{2n+2} = 2p_{2n+1} + p_{2n}$, $p_{2n+3} = p_{2n+2}+p_{2n+1}$ and analogously for $q_k$. – Daniel Fischer Aug 19 '13 at 21:03 Thanks, that solved it! – Alexandre Vandermonde Aug 20 '13 at 13:04
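A small numeric illustration of the inequality discussed above (my own sketch in Python, using the standard recurrences for the convergents of $[1;\overline{1,2}]$; not part of the original exchange).

```python
import math

# Convergents of sqrt(3) = [1; 1, 2, 1, 2, ...] via p_k = a_k p_{k-1} + p_{k-2}
coeffs = [1] + [1, 2] * 10
p_prev, p = 1, coeffs[0]
q_prev, q = 0, 1
convergents = [(p, q)]
for a in coeffs[1:]:
    p, p_prev = a * p + p_prev, p
    q, q_prev = a * q + q_prev, q
    convergents.append((p, q))

for k in (1, 3, 5, 7):            # odd-indexed convergents p_{2n+1}/q_{2n+1}
    p, q = convergents[k]
    lhs = abs(math.sqrt(3) - p / q)
    rhs = 1 / (2 * math.sqrt(3) * q * q)
    # prints p^2 - 3 q^2 (always 1 here) and whether the claimed bound holds
    print(k, p, q, p * p - 3 * q * q, lhs < rhs)
```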
2016-02-11T05:30:52
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/471484/inequality-related-to-the-continued-fraction-expansion-of-sqrt3", "openwebmath_score": 0.9955273270606995, "openwebmath_perplexity": 317.71208993576334, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.981735725388777, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573175726548 }
https://math.stackexchange.com/questions/1133380/using-first-order-sentences-axiomize-the-mathcall-theory-of-an-equivalence
# Using first order sentences, axiomize the $\mathcal{L}$-theory of an equivalence relation with infinitely many infinite classes. Problem Let $\mathcal{L} = \{E\}$ where $E$ is a binary relation symbol. Let $T$ be the $\mathcal{L}$-theory of an equivalence relation with infinitely many infinite classes. Current Solution First, the axioms for equivalence class: \begin{alignat*}{3} &\forall x &&E(x,x),\\ &\forall x \forall y &&(E(x,y) \rightarrow E(y,x)),\\ &\forall x \forall y \forall z &&(E(x,y) \wedge E(y,z) \rightarrow E(x,z))).\\ \end{alignat*} Then to express infinitely many equivalence classes, we add infinitely many senteneces $\phi_n ~(n \ge 2$): $$\phi_n = \exists x_1 \ldots \exists x_n ~ \bigwedge_{i < j \le n} \neg E(x_i,x_j).$$ That is we have sentences $\phi_2,\phi_3,\ldots$. Finally, we add infinitely many sentences $\psi_n ~(n \ge 1)$ $$\psi_n = \forall x \exists x_1 \ldots \exists x_n ~ \bigwedge_{i < j \le n} x_i \neq x_j \wedge \bigwedge_{i=1}^n E(x,x_j)$$ to axiomize each class is infinite. Problem I am not confident that $\{\phi_n\}$ and $\{\psi_n\}$ are not contradicting each other • The best way to see that a set of sentences is not contradictory is to find a structure satisfying all of them. If you can find a $\mathcal L$-structure satisfying all the $\phi_n$ and $\psi_n$, you will be sure they are not contradicting each other. – Pece Feb 4 '15 at 16:08 You have axiomatized the theory asserting that $E$ is an equivalence relation with infinitely many classes, ALL of which are infinite. Your statement $\psi_n$ says that every class has at least $n$ elements. But the theory mentioned in the title problem was just that $E$ should have infinitely many infinite classes (and perhaps also some finite classes). It turns out that this problem is impossible. Proof. Let $E_0$ be an equivalence relation on a set $X$ with infinitely many classes of arbitrarily large finite size, and no infinite classes at all. So $\langle X,E_0\rangle$ is not one of the desired models. Let $T$ be the elementary diagram of $\langle X,E_0\rangle$, plus the assertions with infinitely many new constants $c^n_i$, that $c^n_i\mathrel{E} c^n_j$ and $c^n_i\neq c^n_j$ and $\neg (c^n_i\mathrel{E} c^m_j)$, whenever $n\neq m$ and $i\neq j$. This theory is finitely consistent, and so it is consistent. Any model of $T$ will be an expansion of an elementary extension of the original model $\langle X,E_0\rangle$, but it will now have infinitely many infinite classes.
http://math.stackexchange.com/questions/157706/bijective-holomorphic-map
# Bijective holomorphic map

Will a bijective holomorphic map from the unit disk to itself be a rotation, that is, $f(z)=e^{i\alpha}z$? How do I approach this problem? In addition, I want to know how one can remember the conformal maps which send the unit disk to the upper half plane (or conversely), and other standard maps like that.

For $w\in\mathbb{D}$, consider the following function defined on $\mathbb{D}$: $$B_w(z) = \frac{z-w}{1-\overline{w}z}.$$ This function is called a "Blaschke factor." It defines a bijective holomorphism from $\mathbb{D}$ to $\mathbb{D}$, but it is not a rotation if $w\neq 0$. This demonstrates there are many such mappings that are not rotations.

To see this, first note that by an easy computation $B_w\circ B_{-w}=\mathrm{id}$, so $B_w$ is biholomorphic (with inverse $B_{-w}$). To see it maps $\mathbb{D}\rightarrow \mathbb{D}$, just take the modulus and do the standard manipulations, noting that $|z|<1$ and $|w|<1$.

However, if you make the assumption that $f(0)=0$, your guess is correct. All biholomorphic maps of the unit disk that fix the origin are rotations. We use the Schwarz lemma to show this. If you are not familiar with this, leave a comment below and I will edit this answer to include a proof and discussion. By the lemma, we have $|f'(0)|\le 1$. Because $f^{-1}$ is another origin-preserving biholomorphism, we also have $|(f^{-1})'(0)|\le 1$. Because we know by the differentiation rule for inverse functions that $f'(0)=\frac{1}{(f^{-1})'(0)}$, we must have $|f'(0)|=|(f^{-1})'(0)|=1$, and considering the equality case in the Schwarz lemma, we see $f$ must be a rotation.

We know that $f^{-1}$ is differentiable. Then the chain rule gives $$f(f^{-1}(x))=x \;\Rightarrow\; f'(f^{-1}(x))\,(f^{-1})'(x)=1 \;\Rightarrow\; f'(f^{-1}(x))=\frac{1}{(f^{-1})'(x)},$$ and noting here that $f^{-1}(0)=0$ gives the result.

To answer the second part of your question, I suggest you use Google and the literature available to you. The magic search terms are "automorphism group of the disk" and "automorphism group of the upper half plane." This will tell you about all possible biholomorphisms of each space. Now, because the half-plane is conformally equivalent to the unit disk, knowing the automorphism group of the unit disk is enough to know all of the maps from the unit disk to the upper half-plane.

- I know about the Schwarz lemma, and thank you very much for your answer. Could you just write me one line on why $|f'(0)|=|(f^{-1})'(0)|$? And do you know any background on that Blaschke factor? Any detail? – La Belle Noiseuse Jun 13 '12 at 9:12
- I hope that helps. If you need more detail, just let me know. – Potato Jun 13 '12 at 9:21
- thank you again. – La Belle Noiseuse Jun 13 '12 at 9:31
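A small numerical sketch (Python with NumPy assumed, not part of the original thread) of the two facts used above — the Blaschke factor maps the disk into itself and is inverted by $B_{-w}$:

import numpy as np

def blaschke(w, z):
    # Blaschke factor B_w(z) = (z - w) / (1 - conj(w) z)
    return (z - w) / (1 - np.conj(w) * z)

rng = np.random.default_rng(0)
# random points strictly inside the unit disk
r, theta = rng.uniform(0, 0.99, 1000), rng.uniform(0, 2 * np.pi, 1000)
z = r * np.exp(1j * theta)
w = 0.3 - 0.4j                              # an arbitrary point of the disk

Bz = blaschke(w, z)
print(np.max(np.abs(Bz)) < 1)               # True: the image stays inside the disk
print(np.allclose(blaschke(-w, Bz), z))     # True: B_{-w} inverts B_w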
http://math.stackexchange.com/questions/638874/factor-z7-1-into-linear-and-quadratic-factors-and-prove-that-cos-pi-7-c
# factor $z^7-1$ into linear and quadratic factors and prove that $\cos(\pi/7) \cdot\cos(2\pi/7) \cdot\cos(3\pi/7)=1/8$ Factor $z^7-1$ into linear and quadratic factors and prove that $$\cos(\pi/7) \cdot\cos(2\pi/7) \cdot\cos(3\pi/7)=1/8$$ I have been able to prove it using the value of $\cos(\pi/7)$. Given here http://mathworld.wolfram.com/TrigonometryAnglesPi7.html - Let $z=e^{\frac{i\pi}{7}}$. Then $\cos (\frac{\pi}{7})=(z+z^{-1})/2$, $\cos (\frac{2\pi}{7})=(z^2+z^{-2})/2$, $\cos (\frac{3\pi}{7})=(z^3+z^{-3})/2$. This should get you started. - Factor $x^7-1$ in $\Bbb C$ and obtain its factorization in $\Bbb R$ by pairing off conjugate roots. - $$z^7=1=e^{2n\pi i}$$ where $n$ is any integer $$\implies z=e^{\frac{2n\pi i}7}$$ where $n=0,\pm1,\pm2,\pm3$ So, the roots of $$\frac{z^7-1}{z-1}=0\iff z^6+z^5+\cdots+z+1=0\quad(1)$$ are $e^{\frac{2n\pi i}7}$ where $n=\pm1,\pm2,\pm3$ As $z\ne0,$ divide either sides by $z^3$ to get $$z^3+\frac1{z^3}+z^2+\frac1{z^2}+z+\frac1z+1=0\quad(2)$$ Now using Euler Formula, $\displaystyle z+\frac1z=e^{\frac{2n\pi i}7}+e^{-\frac{2n\pi i}7}=2\cos\frac{2n\pi }7$ where $n=1,2,3$ Again, $$\displaystyle z^2+\frac1{z^2}=\left(z+\frac1z\right)^2-2\text{ and }z^3+\frac1{z^3}=\left(z+\frac1z\right)^3-3\left(z+\frac1z\right)$$ Put the values of $\displaystyle z^2+\frac1{z^2},z^3+\frac1{z^3}$ in $(2)$ to form a Cubic Equation whose roots are $\displaystyle2\cos\frac{2n\pi }7$ where $n=1,2,3$ Now apply Vieta's formula - The Equation $(1)$ is called Reciprocal Equation See math.stackexchange.com/questions/403025/… and math.stackexchange.com/questions/480102/… –  lab bhattacharjee Jan 15 at 5:24 Let $$\alpha_1 = \cos(2\pi/7), \alpha_2 = \cos(4 \pi/7), \alpha_3 = \cos(6 \pi/7)$$ Then $$z^7-1 = (z-1) (z^2- 2 \alpha_1 z + 1)(z^2- 2 \alpha_2 z + 1)(z^2- 2 \alpha_3 z + 1)$$ Differentiate both sides and set $z=1$ to get your answer. -
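A quick numerical check (NumPy assumed, not part of the original thread) of both the cosine identity and the real quadratic factorization used in the last answer:

import numpy as np

# Check cos(pi/7) * cos(2*pi/7) * cos(3*pi/7) == 1/8
prod = np.cos(np.pi/7) * np.cos(2*np.pi/7) * np.cos(3*np.pi/7)
print(prod, abs(prod - 1/8) < 1e-12)          # 0.125 True

# Check z^7 - 1 = (z - 1) * prod_k (z^2 - 2*cos(2*pi*k/7)*z + 1), k = 1, 2, 3
z = 1.7 + 0.4j                                 # arbitrary test point
rhs = (z - 1) * np.prod([z**2 - 2*np.cos(2*np.pi*k/7)*z + 1 for k in range(1, 4)])
print(np.isclose(z**7 - 1, rhs))               # True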
https://www.shaalaa.com/question-bank-solutions/a-ladder-10-m-long-reaches-a-window-8-m-above-the-ground-find-the-distance-of-the-foot-of-the-ladder-from-the-base-of-wall-complete-the-given-activity-similarity-in-right-angled-triangles_206077
# A ladder 10 m long reaches a window 8 m above the ground. Find the distance of the foot of the ladder from the base of the wall. Complete the given activity.

Activity: As shown in the figure, suppose PR is the length of the ladder = 10 m. At P – window, at Q – base of the wall, at R – foot of the ladder.

∴ PQ = 8 m, QR = ?

In ∆PQR, m∠PQR = 90°. By Pythagoras' theorem,

PQ² + □ = PR²    .....(I)

Here, PR = 10, PQ = □

From equation (I): 8² + QR² = 10², so QR² = 10² – 8² = 100 – 64 = □, giving QR = 6.

∴ The distance of the foot of the ladder from the base of the wall is 6 m.

#### Solution

As shown in the figure, suppose PR is the length of the ladder = 10 m. At P – window, at Q – base of the wall, at R – foot of the ladder.

∴ PQ = 8 m, QR = ?

In ∆PQR, m∠PQR = 90°. By Pythagoras' theorem,

PQ² + QR² = PR²    .....(I)

Here, PR = 10, PQ = 8.

From equation (I): 8² + QR² = 10², so QR² = 10² – 8² = 100 – 64 = 36, giving QR = 6.

∴ The distance of the foot of the ladder from the base of the wall is 6 m.

Concept: Similarity in Right Angled Triangles
https://proofwiki.org/wiki/Definition:Closed_Ball/P-adic_Numbers
Definition

Let $p$ be a prime number. Let $\struct {\Q_p, \norm {\,\cdot\,}_p}$ be the $p$-adic numbers. Let $a \in \Q_p$. Let $\epsilon \in \R_{>0}$ be a strictly positive real number.

The closed $\epsilon$-ball of $a$ in $\struct {\Q_p, \norm {\,\cdot\,}_p }$ is defined as:

$\map { {B_\epsilon}^-} a = \set {x \in \Q_p: \norm {x - a}_p \le \epsilon}$

In $\map { {B_\epsilon}^-} a$, the value $\epsilon$ is referred to as the radius of the closed $\epsilon$-ball.

In $\map { {B_\epsilon}^-} a$, the value $a$ is referred to as the center of the closed $\epsilon$-ball.
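As an illustration only (plain Python assumed; it tests rational points of $\Q\subset\Q_p$, not arbitrary $p$-adic numbers), one can compute the $p$-adic norm and check membership in a closed ball:

from fractions import Fraction

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    # |x|_p = p^(-v), where v is the exponent of p in x; |0|_p = 0
    if x == 0:
        return Fraction(0)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v))

def in_closed_ball(x: Fraction, a: Fraction, eps: Fraction, p: int) -> bool:
    # membership test for the closed eps-ball around a
    return p_adic_norm(x - a, p) <= eps

p, a = 5, Fraction(2)
print(in_closed_ball(Fraction(27), a, Fraction(1, 5), p))   # |25|_5 = 1/25 <= 1/5 -> True
print(in_closed_ball(Fraction(3), a, Fraction(1, 5), p))    # |1|_5 = 1 > 1/5   -> False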
https://math.stackexchange.com/questions/1577848/problem-on-normal-and-symmetric-matrices
# Problem on normal and symmetric matrices

A normal matrix over $\mathbb C$ with all eigenvalues real is Hermitian (using diagonalization). But is the statement "a normal matrix with real eigenvalues is symmetric" also true? I also think that not all normal matrices are diagonalizable over $\mathbb R$, e.g. a rotation matrix. Could someone please explain: over $\mathbb R$, are these statements true? Thanks.

A normal matrix over $\mathbb C$ is Hermitian (a.k.a. self-adjoint) iff it has real eigenvalues. This is true. However, it does not necessarily imply that the matrix is symmetric.

All normal matrices are diagonalizable with respect to a unitary matrix over $\mathbb C$. However, in the real case this may not be true, because the unitary matrix may involve complex numbers.
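A concrete sketch (NumPy assumed, not part of the original thread) of the distinction the answer draws — a complex normal matrix with real eigenvalues is Hermitian, yet need not be symmetric:

import numpy as np

A = np.array([[1, 1j],
              [-1j, 1]])

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True: A is normal
print(np.linalg.eigvals(A))                          # eigenvalues 0 and 2 (real)
print(np.allclose(A, A.conj().T))                    # True: A is Hermitian
print(np.allclose(A, A.T))                           # False: A is NOT symmetric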
http://mathhelpforum.com/calculus/41769-integration-help.html
# Integration help

1. ## Integration help

Hi everyone, I have some homework due in a day and I can't understand this question... Consider the differential equation: $2xy\dfrac{dy}{dx}=y^2-9$, $x\neq 0$.

1a) Find the steady state solutions (if any).
b) Find the general solution.
c) Find the particular solution satisfying $y=5$ when $x=4$.

Any help would be appreciated, thanks already!

2. Part a can be solved fairly easily by understanding that a "steady state solution" means that dy/dx = 0. So substitute dy/dx = 0 into the equation and solve it for y. For the rest of the question, first check that you have copied it down correctly. A well placed minus sign would make the problem very much easier!

3. $2xy\dfrac{dy}{dx} = y^2 - 9$

$\Rightarrow \dfrac{2y\,dy}{y^2 - 9} = \dfrac{dx}{x}$

$\Rightarrow \ln(y^2 - 9) = \ln x + C$

$y^2 - 9 = xe^C$

$y^2 = 9 + xe^C$

$y = \pm \sqrt{9 + xe^C}$

Sub in the numbers and you get

$y = \sqrt{9 + 4x}$
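A quick symbolic check (SymPy assumed, not part of the original thread) that the particular solution found in post 3 satisfies both the ODE and the initial condition:

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.sqrt(9 + 4*x)            # the particular solution found above

# Check the ODE 2*x*y*y' = y**2 - 9 and the initial condition y(4) = 5
print(sp.simplify(2*x*y*sp.diff(y, x) - (y**2 - 9)))   # 0
print(y.subs(x, 4))                                     # 5

# The steady states from part (a) are the constant solutions y = 3 and y = -3.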
http://clay6.com/qa/9774/the-volume-generated-when-the-region-bounded-by-y-x-y-1-x-0-is-rotated-abou
# The volume generated when the region bounded by $y=x,y=1,x=0$ is rotated about the $y$-axis is

$\begin{array}{1 1}(1)\frac{\pi}{4}&(2)\frac{\pi}{2}\\(3)\frac{\pi}{3}&(4)\frac{2\pi}{3}\end{array}$

The region bounded by $y=x$, $y=1$ and $x=0$ is the triangle with vertices $(0,0)$, $(0,1)$ and $(1,1)$. Rotating it about the $y$-axis gives a cone with base radius 1 and height 1 (at height $y$ the cross-section is a disc of radius $y$), so

$\qquad V=\int_0^1 \pi y^2\,dy=\large\frac{1}{3}\normalsize\pi\times 1^2\times 1=\large\frac{\pi}{3}$

Hence option (3) is correct.
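A short cross-check of this computation (SymPy assumed), by both the disc and the shell method:

import sympy as sp

x, y = sp.symbols('x y')

# Disc method about the y-axis: at height y the radius is x = y
V_discs = sp.integrate(sp.pi * y**2, (y, 0, 1))

# Shell method about the y-axis: shell radius x, shell height 1 - x
V_shells = sp.integrate(2 * sp.pi * x * (1 - x), (x, 0, 1))

print(V_discs, V_shells)   # pi/3 pi/3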
https://plainmath.net/41874/having-trouble-with-the-laplace-transform-sqrt-frac-cos-the-problem-gives
# I'm having trouble with the Laplace transform $\mathcal L\left\{\sqrt{\frac{t}{\pi}}\cos(2t)\right\}$

I'm having trouble with the Laplace transform
$$\mathcal L\left\{\sqrt{\frac{t}{\pi}}\cos(2t)\right\}.$$
The problem gives me the transform identity
$$\mathcal L\left\{\frac{\cos(2t)}{\sqrt{\pi t}}\right\}=\frac{e^{-2/s}}{\sqrt{s}},$$
but I'm not sure/confused as to why that would help me.

Neil Dismukes

Assuming $t>0$ (which is a usual assumption with Laplace transforms),
$$\sqrt{\frac{t}{\pi}}\cos 2t = t\,\frac{\cos 2t}{\sqrt{\pi t}}.$$

Daniel Cormack

$$\mathcal L\left[\sqrt{\frac{t}{\pi}}\cos(2t)\right]$$
If you define the function
$$f(t)=\sqrt{\frac{t}{\pi}}\cos(2t)$$
and multiply it by $\frac{t}{t}$, you can rewrite
$$f(t)=t\,\frac{1}{\sqrt{t\pi}}\cos(2t)=t\,\frac{\cos(2t)}{\sqrt{t\pi}}.$$
At this point, I have to remind you:
$$\mathcal L[t^{n} f(t)]=(-1)^{n}\frac{d^{n}}{ds^{n}}\big(\mathcal L[f(t)]\big).$$
Finally, since
$$\mathcal L\left[\frac{\cos(2t)}{\sqrt{t\pi}}\right]=\frac{e^{-2/s}}{\sqrt{s}},$$
you have to calculate
$$(-1)^{1}\,\frac{d}{ds}\left(\frac{e^{-2/s}}{\sqrt{s}}\right)=-\frac{d}{ds}\left(\frac{e^{-2/s}}{\sqrt{s}}\right).$$
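Following Daniel Cormack's last step, a small symbolic sketch (SymPy assumed, not part of the original thread) that carries out the differentiation of the given transform:

import sympy as sp

s = sp.symbols('s', positive=True)
F = sp.exp(-2/s) / sp.sqrt(s)          # the given transform of cos(2t)/sqrt(pi*t)

# L[t*f(t)] = -dF/ds, so the requested transform is:
result = sp.simplify(-sp.diff(F, s))
print(result)                           # equivalent to exp(-2/s)*(s - 4)/(2*s**(5/2))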
https://testbook.com/question-answer/find-thevenins-equivalent-resistance-for-the--602f5b7caba186355d5c4de4
# Find Thevenin's equivalent resistance for the following circuit:

1. 5.67 Ω
2. 6.66 Ω
3. 6 Ω
4. 6.25 Ω

Option 2 : 6.66 Ω

## Detailed Solution

Concept:

• Thevenin's equivalent voltage is the open-circuit voltage across the load terminals.
• Thevenin's resistance is the equivalent resistance across the load terminals after removing all the independent sources.
• All independent voltage sources are removed by replacing them with a short circuit, and all independent current sources by replacing them with an open circuit.

Calculation:

The given circuit can be redrawn as follows by short-circuiting the voltage source:

RTH = (7 ∥ 5) + (10 ∥ 6)

$$R_{TH}=\frac{7\times 5}{7+5}+\frac{10\times 6}{10+6}$$

$$R_{TH}=\frac{35}{12}+\frac{60}{16}= 6.66\ \Omega$$

Hence option (2) is the correct answer.
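A tiny numerical check of the series/parallel reduction above (plain Python; the resistor values are those read off the redrawn circuit):

def parallel(r1, r2):
    # equivalent resistance of two resistors in parallel
    return r1 * r2 / (r1 + r2)

R_th = parallel(7, 5) + parallel(10, 6)
print(round(R_th, 2))   # 6.67, i.e. about 6.66 ohms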
https://www.r-bloggers.com/2018/02/deep-learning-from-first-principles-in-python-r-and-octave-part-4/
In this 4th post of my series on Deep Learning from first principles in Python, R and Octave – Part 4, I explore the details of creating a multi-class classifier using the Softmax activation unit in a neural network. The earlier posts in this series were

1. Deep Learning from first principles in Python, R and Octave – Part 1. In this post I implemented logistic regression as a simple neural network in vectorized Python, R and Octave.
2. Deep Learning from first principles in Python, R and Octave – Part 2. This 2nd part implemented the most elementary neural network, with 1 hidden layer and any number of activation units in the hidden layer, with sigmoid activation at the output layer.
3. Deep Learning from first principles in Python, R and Octave – Part 3. The 3rd part implemented a multi-layer Deep Learning network with an arbitrary number of hidden layers and activation units per hidden layer. The output layer was for binary classification, based on the sigmoid unit. This multi-layer deep network was implemented in vectorized Python, R and Octave.

This 4th part takes a swing at multi-class classification and uses the Softmax as the activation unit in the output layer. Inclusion of the Softmax activation unit in the output layer requires us to compute the derivative of the Softmax, or rather the "Jacobian" of the Softmax function, besides also computing the log loss for this Softmax activation during back propagation. Since the derivation of the Jacobian of a Softmax and the computation of the cross-entropy/log loss are quite involved, I have implemented a basic neural network with just 1 hidden layer and the Softmax activation at the output layer. I also perform multi-class classification based on the 'spiral' data set from the CS231n Convolutional Neural Networks Stanford course, to test the performance and correctness of the implementations in Python, R and Octave. You can clone/download the code for the Python, R and Octave implementations from Github at Deep Learning – Part 4.

The Softmax function takes an N-dimensional vector as input and generates an N-dimensional vector as output. The Softmax function is given by

$S_{j}= \frac{e^{z_{j}}}{\sum_{k=1}^{N}e^{z_{k}}}$

There is a probabilistic interpretation of the Softmax: since each Softmax value is divided by the total over all components, the Softmax values of a vector always add up to 1. As mentioned earlier, the Softmax takes a vector input and returns a vector of outputs. For e.g. the Softmax of the vector a=[1, 3, 6] is the vector S=[0.0063, 0.0471, 0.9464]. Notice that the output vector preserves the ordering of the input vector. Also, the derivative of a vector with respect to another vector is known as the Jacobian. By the way, The Matrix Calculus You Need For Deep Learning by Terence Parr and Jeremy Howard is a very good paper that distills all the main mathematical concepts for Deep Learning in one place.

A simple 2-layered neural network with just 2 activation units in the hidden layer is shown below.

$Z_{1}^{1} =W_{11}^{1}x_{1} + W_{21}^{1}x_{2} + b_{1}^{1}$

$Z_{2}^{1} =W_{12}^{1}x_{1} + W_{22}^{1}x_{2} + b_{2}^{1}$

and

$A_{1}^{1} = g'(Z_{1}^{1})$

$A_{2}^{1} = g'(Z_{2}^{1})$

where $g'()$ is the activation unit in the hidden layer, which can be a relu, sigmoid or tanh function.

Note: The superscript denotes the layer. The above are the equations for layer 1 of the neural network.
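Before moving on to layer 2, here is a quick numerical sanity check (NumPy assumed) of the Softmax values quoted above for a=[1, 3, 6], and of the fact that they sum to 1:

import numpy as np

def softmax(z):
    z = z - np.max(z)              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

a = np.array([1.0, 3.0, 6.0])
S = softmax(a)
print(S)                           # approx. [0.0064 0.0471 0.9465], matching the example above
print(S.sum())                     # 1.0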
For layer 2 with the Softmax activation, the equations are

$Z_{1}^{2} =W_{11}^{2}A_{1}^{1} + W_{21}^{2}A_{2}^{1} + b_{1}^{2}$

$Z_{2}^{2} =W_{12}^{2}A_{1}^{1} + W_{22}^{2}A_{2}^{1} + b_{2}^{2}$

and

$A_{1}^{2} = S(Z_{1}^{2})$

$A_{2}^{2} = S(Z_{2}^{2})$

where $S()$ is the Softmax activation function

$S=\begin{pmatrix} S(Z_{1}^{2})\\ S(Z_{2}^{2}) \end{pmatrix} = \begin{pmatrix} \frac{e^{Z_{1}}}{e^{Z_{1}}+e^{Z_{2}}}\\ \frac{e^{Z_{2}}}{e^{Z_{1}}+e^{Z_{2}}} \end{pmatrix}$

The Jacobian of the Softmax $S$ is given by

$\begin{pmatrix} \frac {\partial S_{1}}{\partial Z_{1}} & \frac {\partial S_{1}}{\partial Z_{2}}\\ \frac {\partial S_{2}}{\partial Z_{1}} & \frac {\partial S_{2}}{\partial Z_{2}} \end{pmatrix} = \begin{pmatrix} \frac{\partial}{\partial Z_{1}} \frac {e^{Z_{1}}}{e^{Z_{1}}+ e^{Z_{2}}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z_{1}}}{e^{Z_{1}}+ e^{Z_{2}}}\\ \frac{\partial}{\partial Z_{1}} \frac {e^{Z_{2}}}{e^{Z_{1}}+ e^{Z_{2}}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z_{2}}}{e^{Z_{1}}+ e^{Z_{2}}} \end{pmatrix}$     – (A)

Now the quotient rule of derivatives is as follows: if $u$ and $v$ are functions of $x$, then

$\frac{d}{dx} \frac {u}{v} =\frac {v\,du -u\,dv}{v^{2}}$

Using this to compute each element of the above Jacobian matrix, and writing $\sum = e^{Z_{1}}+e^{Z_{2}}$, we see that when $i=j$ we have

$\frac {\partial}{\partial Z_{1}}\frac{e^{Z_{1}}}{e^{Z_{1}}+e^{Z_{2}}} = \frac {\sum e^{Z_{1}} - (e^{Z_{1}})^{2}}{\sum ^{2}}$

and when $i \neq j$

$\frac {\partial}{\partial Z_{1}}\frac{e^{Z_{2}}}{e^{Z_{1}}+e^{Z_{2}}} = \frac {0 - e^{Z_{1}}e^{Z_{2}}}{\sum ^{2}}$

This is of the general form

$\frac {\partial S_{j}}{\partial z_{i}} = S_{i}( 1-S_{j})$  when $i=j$, and

$\frac {\partial S_{j}}{\partial z_{i}} = -S_{i}S_{j}$  when $i \neq j$.

Note: Since the Softmax essentially gives a probability, the following notation is also used:

$\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j})$ when $i=j$, and $\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j}$ when $i \neq j$.

If you throw the "Kronecker delta" into the equation, then the above equations can be expressed even more concisely as

$\frac {\partial p_{j}}{\partial z_{i}} = p_{i} (\delta_{ij} - p_{j})$

where $\delta_{ij} = 1$ when $i=j$ and $0$ when $i \neq j$.

This reduces the Jacobian of the simple 2-output Softmax in equation (A) to

$\begin{pmatrix} p_{1}(1-p_{1}) & -p_{1}p_{2} \\ -p_{2}p_{1} & p_{2}(1-p_{2}) \end{pmatrix}$

The loss of the Softmax is given by

$L = -\sum_{i} y_{i} \log(p_{i})$

For the 2-valued Softmax output this is

$\frac {dL}{dp_{1}} = -\frac {y_{1}}{p_{1}}, \qquad \frac {dL}{dp_{2}} = -\frac {y_{2}}{p_{2}}$

Using the chain rule we can write

$\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \frac {\partial p_{i}}{\partial w_{pq}}$ (1)

and

$\frac {\partial p_{i}}{\partial w_{pq}} = \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}}$ (2)

In expanded form this is

$\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}}$

Also

$\frac {\partial L}{\partial Z_{i}} =\sum _{j} \frac {\partial L}{\partial p_{j}} \frac {\partial p_{j}}{\partial Z_{i}}$

Therefore

$\frac {\partial L}{\partial Z_{1}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{1}} = -\frac {y_{1}}{p_{1}}\, p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}\,(-p_{2}p_{1})$

using $\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j})$ when $i=j$ and $\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j}$ when $i \neq j$, which simplifies to

$\frac {\partial L}{\partial Z_{1}} = -y_{1} + y_{1}p_{1} + y_{2}p_{1} = p_{1}(y_{1} + y_{2}) - y_{1} = p_{1} - y_{1}$

since $\sum_{i} y_{i} =1$.

Similarly

$\frac {\partial L}{\partial Z_{2}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{2}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{2}} = -\frac {y_{1}}{p_{1}}\,(-p_{1}p_{2}) - \frac {y_{2}}{p_{2}}\,p_{2}(1-p_{2}) = y_{1}p_{2} + y_{2}p_{2} - y_{2} = p_{2}(y_{1} + y_{2}) - y_{2} = p_{2} - y_{2}$

In general this is of the form

$\frac {\partial L}{\partial z_{i}} = p_{i} -y_{i}$

For example, if the probabilities computed were $p=[0.1, 0.7, 0.2]$, then the class with probability 0.7 is the likely class. The 'one hot encoding' for $y_i$ would then be $y_i=[0,1,0]$, and therefore the gradient is $p_i-y_i = [0.1,-0.3,0.2]$.

Note: Further, we could extend this derivation to a Softmax activation that outputs 3 classes

$S=\begin{pmatrix} \frac{e^{z_{1}}}{e^{z_{1}}+e^{z_{2}}+e^{z_{3}}}\\ \frac{e^{z_{2}}}{e^{z_{1}}+e^{z_{2}}+e^{z_{3}}} \\ \frac{e^{z_{3}}}{e^{z_{1}}+e^{z_{2}}+e^{z_{3}}} \end{pmatrix}$

We could derive

$\frac {\partial L}{\partial z_{1}}= \frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial z_{1}} +\frac {\partial L}{\partial p_{3}} \frac {\partial p_{3}}{\partial z_{1}} = -\frac {y_{1}}{p_{1}}\, p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}\,(-p_{2}p_{1}) - \frac {y_{3}}{p_{3}}\,(-p_{3}p_{1}) = -y_{1}+ y_{1}p_{1} + y_{2}p_{1} + y_{3}p_{1} = p_{1}(y_{1} + y_{2} + y_{3}) - y_{1} = p_{1} - y_{1}$

Interestingly, despite the lengthy derivations the final result is simple and intuitive!

As seen in my post 'Deep Learning from first principles with Python, R and Octave – Part 3', the key equations for forward and backward propagation are

Forward propagation equations, layer 1:
$Z_{1} = W_{1}X +b_{1}$ and $A_{1} = g(Z_{1})$

Forward propagation equations, layer 2:
$Z_{2} = W_{2}A_{1} +b_{2}$ and $A_{2} = S(Z_{2})$

Backward propagation equations, layer 2:
$\partial L/\partial W_{2} =\partial L/\partial Z_{2} \cdot A_{1}$
$\partial L/\partial b_{2} =\partial L/\partial Z_{2}$
$\partial L/\partial A_{1} = \partial L/\partial Z_{2} \cdot W_{2}$

Backward propagation equations, layer 1:
$\partial L/\partial W_{1} =\partial L/\partial Z_{1} \cdot A_{0}$
$\partial L/\partial b_{1} =\partial L/\partial Z_{1}$

#### 2.0 Spiral data set

As I mentioned earlier, I will be using the 'spiral' data from CS231n Convolutional Neural Networks to ensure that my vectorized implementations in Python, R and Octave are correct. Here is the 'spiral' data set.
import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
# Create an input data set - taken from CS231n Convolutional Neural Networks
# http://cs231n.github.io/neural-networks-case-study/
N = 100  # number of points per class
D = 2    # dimensionality
K = 3    # number of classes
X = np.zeros((N*K, D))            # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8')  # class labels
for j in range(K):
    ix = range(N*j, N*(j+1))
    r = np.linspace(0.0, 1, N)    # radius (as in the CS231n example)
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2  # theta
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j
# Plot the data
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.savefig("fig1.png", bbox_inches='tight')

The implementations of the vectorized Python, R and Octave code are shown diagrammatically below.

#### 2.1 Multi-class classification with Softmax – Python code

A simple 2-layer neural network with a single hidden layer, with 100 Relu activation units in the hidden layer and the Softmax activation unit in the output layer, is used for multi-class classification. This Deep Learning network plots the non-linear boundary of the 3 classes as shown below.

import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
N = 100  # number of points per class
D = 2    # dimensionality
K = 3    # number of classes
X = np.zeros((N*K, D))            # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8')  # class labels
for j in range(K):
    ix = range(N*j, N*(j+1))
    r = np.linspace(0.0, 1, N)    # radius (as in the CS231n example)
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2  # theta
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j

# Set the number of features, hidden units in the hidden layer and number of classes
numHidden = 100  # No of hidden units in hidden layer
numFeats = 2     # dimensionality
numOutput = 3    # number of classes

# Initialize the model
parameters = initializeModel(numFeats, numHidden, numOutput)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']

# Set the learning rate
learningRate = 0.6

# Initialize losses
losses = []
for i in range(10000):
    # Forward propagation through hidden layer with Relu units
    A1, cache1 = layerActivationForward(X.T, W1, b1, 'relu')
    # Forward propagation through output layer with Softmax
    A2, cache2 = layerActivationForward(A1, W2, b2, 'softmax')
    # No of training examples
    numTraining = X.shape[0]
    # Compute log probs: take the log prob of the correct class based on output y
    correct_logprobs = -np.log(A2[range(numTraining), y])
    # Compute loss
    loss = np.sum(correct_logprobs)/numTraining
    # Print the loss
    if i % 1000 == 0:
        print("iteration %d: loss %f" % (i, loss))
        losses.append(loss)
    dA = 0
    # Backward propagation through output layer with Softmax
    dA1, dW2, db2 = layerActivationBackward(dA, cache2, y, activationFunc='softmax')
    # Backward propagation through hidden layer with Relu unit
    dA0, dW1, db1 = layerActivationBackward(dA1.T, cache1, y, activationFunc='relu')
    # Update parameters with the learning rate
    W1 += -learningRate * dW1
    b1 += -learningRate * db1
    W2 += -learningRate * dW2.T
    b2 += -learningRate * db2.T

# Plot losses vs iterations
i = np.arange(0, 10000, 1000)
plt.plot(i, losses)
plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.title('Losses vs Iterations')
plt.savefig("fig2.png", bbox_inches='tight')

# Compute the multi-class confusion matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# We need to determine the predicted values from the learnt data
# Forward propagation through hidden layer with Relu units
A1, cache1 = layerActivationForward(X.T, W1, b1, 'relu')
# Forward propagation through output layer with Softmax
A2, cache2 = layerActivationForward(A1, W2, b2, 'softmax')
# Compute predicted values from weights and biases
yhat = np.argmax(A2, axis=1)
a = confusion_matrix(y.T, yhat.T)
print("Multi-class Confusion Matrix")
print(a)

## iteration 0: loss 1.098507
## iteration 1000: loss 0.214611
## iteration 2000: loss 0.043622
## iteration 3000: loss 0.032525
## iteration 4000: loss 0.025108
## iteration 5000: loss 0.021365
## iteration 6000: loss 0.019046
## iteration 7000: loss 0.017475
## iteration 8000: loss 0.016359
## iteration 9000: loss 0.015703
## Multi-class Confusion Matrix
## [[ 99   1   0]
##  [  0 100   0]
##  [  0   1  99]]

To know more details about the confusion matrix and other related measures, check out my book 'Practical Machine Learning with R and Python' on Amazon.

#### 2.2 Multi-class classification with Softmax – R code

The spiral data set created with Python was saved, and is used as the input to the R code. The R neural network seems to perform much, much slower than both Python and Octave — not sure why! Incidentally, the computation of the loss and the softmax derivative are identical for both R and Octave, yet R is much slower. To compute the softmax derivative I create matrices for the one-hot encoded yi and then stack them before subtracting pi-yi. I am sure there is a more elegant and more efficient way to do this, much like Python. Any suggestions?
library(ggplot2)
library(dplyr)
library(RColorBrewer)
source("DLfunctions41.R")

# Read the spiral dataset (file name assumed; the data set saved from Python, as read in the Octave version below)
Z <- as.matrix(read.csv("spiral.csv", header = FALSE))
Z1 <- data.frame(Z)
# Plot the dataset
ggplot(Z1, aes(x = V1, y = V2, col = V3)) + geom_point() +
  scale_colour_gradientn(colours = brewer.pal(10, "Spectral"))

# Setup the data
X <- Z[, 1:2]
y <- Z[, 3]
X1 <- t(X)
Y1 <- t(y)

# Initialize number of features, number of hidden units in hidden layer and number of classes
numFeats <- 2     # No of features
numHidden <- 100  # No of hidden units
numOutput <- 3    # No of classes

# Initialize model
parameters <- initializeModel(numFeats, numHidden, numOutput)
W1 <- parameters[['W1']]
b1 <- parameters[['b1']]
W2 <- parameters[['W2']]
b2 <- parameters[['b2']]

# Set the learning rate
learningRate <- 0.5
# Initialize losses
losses <- NULL
for(i in 0:9000){
  # Forward propagation through hidden layer with Relu units
  retvals <- layerActivationForward(X1, W1, b1, 'relu')
  A1 <- retvals[['A']]
  cache1 <- retvals[['cache']]
  forward_cache1 <- cache1[['forward_cache1']]
  activation_cache <- cache1[['activation_cache']]
  # Forward propagation through output layer with Softmax units
  retvals <- layerActivationForward(A1, W2, b2, 'softmax')
  A2 <- retvals[['A']]
  cache2 <- retvals[['cache']]
  forward_cache2 <- cache2[['forward_cache1']]
  activation_cache2 <- cache2[['activation_cache']]
  # No of training examples
  numTraining <- dim(X)[1]
  dA <- 0
  # Select the elements where the y values are 0, 1 or 2 and make a vector
  a <- c(A2[y == 0, 1], A2[y == 1, 2], A2[y == 2, 3])
  # Take log
  correct_probs <- -log(a)
  # Compute loss
  loss <- sum(correct_probs)/numTraining
  if(i %% 1000 == 0){
    sprintf("iteration %d: loss %f", i, loss)
    print(loss)
    losses <- c(losses, loss)  # collect the loss (this line is assumed; it is needed for the plot below and mirrors the Octave code)
  }
  # Backward propagation through output layer with Softmax units
  retvals <- layerActivationBackward(dA, cache2, y, activationFunc = 'softmax')
  dA1 <- retvals[['dA_prev']]
  dW2 <- retvals[['dW']]
  db2 <- retvals[['db']]
  # Backward propagation through hidden layer with Relu units
  retvals <- layerActivationBackward(t(dA1), cache1, y, activationFunc = 'relu')
  dA0 <- retvals[['dA_prev']]
  dW1 <- retvals[['dW']]
  db1 <- retvals[['db']]
  # Update parameters
  W1 <- W1 - learningRate * dW1
  b1 <- b1 - learningRate * db1
  W2 <- W2 - learningRate * t(dW2)
  b2 <- b2 - learningRate * t(db2)
}

## [1] 1.212487
## [1] 0.5740867
## [1] 0.4048824
## [1] 0.3561941
## [1] 0.2509576
## [1] 0.7351063
## [1] 0.2066114
## [1] 0.2065875
## [1] 0.2151943
## [1] 0.1318807

# Create iterations and plot losses vs iterations
iterations <- seq(0, 9000, 1000)
df <- data.frame(iterations, losses)
ggplot(df, aes(x = iterations, y = losses)) + geom_point() + geom_line(color = "blue") +
  ggtitle("Losses vs iterations") + xlab("Iterations") + ylab("Loss")

plotDecisionBoundary(Z, W1, b1, W2, b2)

Multi-class Confusion Matrix

library(caret)
library(e1071)
# Forward propagation through hidden layer with Relu units
retvals <- layerActivationForward(X1, W1, b1, 'relu')
A1 <- retvals[['A']]
# Forward propagation through output layer with Softmax units
retvals <- layerActivationForward(A1, W2, b2, 'softmax')
A2 <- retvals[['A']]
yhat <- apply(A2, 1, which.max) - 1

Confusion Matrix and Statistics

          Reference
Prediction  0  1  2
         0 97  0  1
         1  2 96  4
         2  1  4 95

Overall Statistics

               Accuracy : 0.96
                 95% CI : (0.9312, 0.9792)
    No Information Rate : 0.3333
    P-Value [Acc > NIR] : <2e-16

                  Kappa : 0.94
 Mcnemar's Test P-Value : 0.5724

Statistics by Class:

                     Class: 0 Class: 1 Class: 2
Sensitivity            0.9700   0.9600   0.9500
Specificity            0.9950   0.9700   0.9750
Pos Pred Value         0.9898   0.9412   0.9500
Neg Pred Value         0.9851   0.9798   0.9750
Prevalence             0.3333   0.3333   0.3333
Detection Rate         0.3233   0.3200   0.3167
Detection Prevalence   0.3267   0.3400   0.3333
Balanced Accuracy      0.9825   0.9650   0.9625

My book "Practical Machine Learning with R and Python" includes the implementation for many Machine Learning algorithms and associated metrics. Pick up your copy today!

#### 2.3 Multi-class classification with Softmax – Octave code

A 2-layer neural network with the Softmax activation unit in the output layer is constructed in Octave.
The same spiral data set is used for Octave also.

source("DL41functions.m")
# Read the spiral data
data=csvread("spiral.csv");
# Setup the data
X=data(:,1:2);
Y=data(:,3);
# Set the number of features, number of hidden units in hidden layer and number of classes
numFeats=2;    # No of features
numHidden=100; # No of hidden units
numOutput=3;   # No of classes
# Initialize model
[W1 b1 W2 b2] = initializeModel(numFeats,numHidden,numOutput);
# Initialize losses
losses=[];
# Initialize learningRate
learningRate=0.5;
for k =1:10000
  # Forward propagation through hidden layer with Relu units
  [A1,cache1 activation_cache1]= layerActivationForward(X',W1,b1,activationFunc ='relu');
  # Forward propagation through output layer with Softmax units
  [A2,cache2 activation_cache2] = layerActivationForward(A1,W2,b2,activationFunc='softmax');
  # No of training examples
  numTraining = size(X)(1);
  # Select rows where Y=0,1 and 2 and concatenate to a long vector
  a=[A2(Y==0,1) ;A2(Y==1,2) ;A2(Y==2,3)];
  # Select the correct column for log prob
  correct_probs = -log(a);
  # Compute log loss
  loss= sum(correct_probs)/numTraining;
  if(mod(k,1000) == 0)
    disp(loss);
    losses=[losses loss];
  endif
  dA=0;
  # Backward propagation through output layer with Softmax units
  [dA1 dW2 db2] = layerActivationBackward(dA, cache2, activation_cache2,Y,activationFunc='softmax');
  # Backward propagation through hidden layer with Relu units
  [dA0,dW1,db1] = layerActivationBackward(dA1', cache1, activation_cache1, Y, activationFunc='relu');
  # Update parameters
  W1 += -learningRate * dW1;
  b1 += -learningRate * db1;
  W2 += -learningRate * dW2';
  b2 += -learningRate * db2';
endfor

# Plot losses vs iterations
iterations=0:1000:9000;
plotCostVsIterations(iterations,losses)
# Plot the decision boundary
plotDecisionBoundary( X,Y,W1,b1,W2,b2)

The code for the Python, R and Octave implementations can be downloaded from Github at Deep Learning – Part 4.

#### Conclusion

In this post I have implemented a 2-layer neural network with the Softmax classifier. In Part 3, I implemented a multi-layer Deep Learning network. I intend to include the Softmax activation unit into the generalized multi-layer Deep Network along with the other activation units of sigmoid, tanh and relu. Stick around, I'll be back!! Watch this space!

To see all posts click Index of posts
https://www.projectrhea.org/rhea/index.php/ECE_PhD_QE_CNSIP_Jan_2004_Problem1.3
Communication, Networking, Signal and Image Processing (CS)

Question 1: Probability and Random Processes, January 2004

3. (30 pts.) Let $\mathbf{X}\left(t\right)$ be a real continuous-time Gaussian random process. Show that its probabilistic behavior is completely characterized by its mean $\mu_{\mathbf{X}}\left(t\right)=E\left[\mathbf{X}\left(t\right)\right]$ and its autocorrelation function $R_{\mathbf{XX}}\left(t_{1},t_{2}\right)=E\left[\mathbf{X}\left(t_{1}\right)\mathbf{X}\left(t_{2}\right)\right].$
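The exam asks for a proof, but as a numerical illustration only (not a proof, and not part of the exam), the sketch below (NumPy assumed) shows how an assumed mean $\mu(t)=0$ and autocorrelation $R(t_1,t_2)=\min(t_1,t_2)$ determine every finite-dimensional distribution of the process — each one is just a multivariate normal built from $\mu$ and $R$:

import numpy as np

t = np.linspace(0.01, 1.0, 50)
mu = np.zeros_like(t)                 # assumed mean function mu(t) = 0
R = np.minimum.outer(t, t)            # assumed autocorrelation R(t1, t2) = min(t1, t2)

# Covariance C(t1,t2) = R(t1,t2) - mu(t1)*mu(t2); here it equals R
C = R - np.outer(mu, mu)

# Every finite-dimensional distribution is N(mu, C), so mu and R determine the process;
# draw a few sample paths on this time grid:
paths = np.random.default_rng(1).multivariate_normal(mu, C, size=3)
print(paths.shape)                    # (3, 50)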
https://math.stackexchange.com/questions/3450455/is-this-a-valid-change-of-variables
# Is this a valid change of variables? We are solving for the sound wave caused by the motion of a spherical piston. In a spherically symmetric case, the classical wave equation reads: $$\phi_{tt}=a_0^2\Big(\phi_{rr}+\frac{2\phi_r}{r}\Big),\tag{1}$$ where $$\phi$$ is the velocity potential. The general solution for the forward moving wave is given by: $$\phi=\frac{1}{r}f(r-a_0t),$$ where $$a_0$$ is the constant speed of sound. The radial velocity $$u$$ is therefore: $$u=\frac{\partial\phi}{\partial r}=\frac{1}{r^2}f(r-a_0t) - \frac{1}{r}f'(r-a_0t).$$ If the piston motion is given by $$R=R(t)$$, then using the boundary condition for velocity at the piston $$u(R(t),t)=dR/dt$$, we get from the previous equation: $$\dot{R}=\frac{1}{R^2}f(R-a_0t) - \frac{1}{R}f'(R-a_0t),\tag{2}$$ which is a first-order ordinary differential equation that we can solve for $$f$$. Well so it goes. However, the one thing I find annoying about this is the last equation. Namely, the term $$f'(R-a_0t)$$ is not the derivative of $$f$$ w.r.t $$(R-a_0t)$$, but rather the derivative of $$f$$ w.r.t $$(r-a_0t)$$, evaluated at $$r=R$$. Thus, can we solve this as an ODE even though the differentiation is not made w.r.t the same independent variable of the equation? Edit: Replying to the comment by Mattos, $$\phi$$ represents the general form of a wave transported in the positive $$r$$ direction (i.e radially outwards), and it can be easily shown that it satisfies the wave equation, namely: $$\phi_t=-\frac{a_0}{r}f'(r-a_0t),$$ $$\phi_{tt}=\frac{a_0^2}{r}f''(r-a_0t),$$ $$\phi_r=\frac{1}{r}f'(r-a_0t) - \frac{1}{r^2}f(r-a_0t),$$ $$\phi_{rr}= \frac{1}{r}f''(r-a_0t)-\frac{2}{r^2}f'(r-a_0t)+\frac{2}{r^3}f(r-a_0t),$$ and substituting these values into equation (1) yields $$\frac{a_0^2}{r}f''(r-a_0t)=\frac{a_0^2}{r}f''(r-a_0t),$$ thus satisfying the equation. As for the other question, since $$R(t)$$ is given, equation (2) can be solved for $$f$$ up to a constant. • I'm not sure what you mean by 'the general solution for the forward moving wave is $\dots$', the solution $\phi$ doesn't satisfy your PDE. Also, I don't see how you can solve the equation for $f$. The form of $f$ should have been determined from the boundary and initial conditions. – mattos Nov 25 '19 at 14:53 • @mattos please see the edit – Tofi Nov 25 '19 at 15:26 • My apologies, ignore what I said. I totally missed the fact that the solution was $f/\color{red}r$, not $f$. This is why I don't do maths at 3am. I'll have a look at your question in the morning if no one has answered it already – mattos Nov 25 '19 at 15:41
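As a side note, the claim that $\phi=f(r-a_0t)/r$ satisfies equation (1) is easy to confirm symbolically (SymPy assumed, not part of the original thread):

import sympy as sp

r, t, a0 = sp.symbols('r t a_0', positive=True)
f = sp.Function('f')

phi = f(r - a0*t) / r

# phi_tt - a0**2 * (phi_rr + 2*phi_r/r) should be identically zero
residual = sp.diff(phi, t, 2) - a0**2 * (sp.diff(phi, r, 2) + 2*sp.diff(phi, r)/r)
print(sp.simplify(residual))   # 0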
https://counterexamplesinalgebra.wordpress.com/2017/05/11/morita-equivalence-but-not-equivalence/
# Morita Equivalence but not Equivalence

### Non-Commutative Algebra with an Example

Technically this is a counterexample, though not one in the traditional sense. Normally, a counterexample is an object for which some intuition says it should not exist. Nobody thinks all Morita equivalent algebras are isomorphic. But, just as nobody thinks all exact sequences split, it is helpful to have examples of non-splitting exact sequences at one's elbow.

Morita equivalence is a property of (associative) algebras $A$ and $B.$ It means they have the same representation theory.

Definition 1. Let $A,B$ be associative algebras over a field $k.$ Then, $A$ is said to be Morita equivalent to $B$, written $A \sim B$, if there is an equivalence of categories $A\textbf{-mod} \simeq B\textbf{-mod}.$

To paraphrase: being acted on by $A$ is essentially the same as being acted on by $B.$

Notation: Throughout, let $A,B$ denote associative algebras over a fixed field $\mathbb{k}.$

For commutative algebras, Morita equivalence coincides with isomorphism.

Lemma 2. $Z(A\textbf{-mod}) \simeq Z(A)$ where $Z(A)$ denotes the center of $A$ and $Z(A\textbf{-mod})=\text{End}(\text{Id}_{A\textbf{-mod}}).$

Proof. Let $z \in A$ be central. Then, define a natural transformation whose component $\varphi_M : M \to M$ at a left $A$-module $M$ is the action of $z$ on $M$. By centrality of $z$, $\varphi_M$ is a module homomorphism. Conversely, given a natural transformation $\varphi : \text{Id}_{A\textbf{-mod}} \to \text{Id}_{A\textbf{-mod}}$, define $z:= \varphi_A(1_A) \in A.$ This $z$ is central by naturality of $\varphi.$ Q.E.D.

Proposition 3. Let $A,B$ be commutative. Then, $A \sim B$ if and only if $A$ is isomorphic to $B.$

Proof. Clearly, if $A \cong B$ then $A \sim B.$ Conversely, suppose $A \sim B.$ Then, $Z(A\textbf{-mod}) \simeq Z(B\textbf{-mod}).$ By Lemma 2, we are done. Q.E.D.

To show this result is "sharp" we construct non-isomorphic algebras that are Morita equivalent. By Proposition 3, (at least) one algebra must be non-commutative.

Theorem 4. For $n \geq 1$, $A \sim \text{Mat}_n(A).$

Proof. Let $e \in \text{Mat}_n(A)$ denote the matrix with a $1$ in the $1 \times 1$ spot and zeros everywhere else. Then, $A \simeq e \cdot \text{Mat}_n(A) \cdot e$ and $\text{Mat}_n(A) =\text{Mat}_n(A) \cdot e \cdot\text{Mat}_n(A).$ So by Morita's Theorem [1, Corollary 2.3.2] we're done. Q.E.D.

For $n \geq 2$ (and $A \neq 0$), $\text{Mat}_n(A)$ is non-commutative. So, for a commutative algebra $A$ we cannot have an isomorphism between $A$ and $\text{Mat}_n(A)$ when $n \geq 2$, even though they are Morita equivalent.

References

[1] Ginzburg, V. Lectures on Noncommutative Geometry. https://arxiv.org/pdf/math/0506603.pdf
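A small numeric sketch (NumPy assumed, taking $A=\mathbb R$ purely for illustration) of the corner identification $A\simeq e\cdot\mathrm{Mat}_n(A)\cdot e$ used in the proof of Theorem 4 — conjugating by the idempotent $e$ keeps only the $(1,1)$ entry, i.e. a copy of $A$ sitting inside $\mathrm{Mat}_n(A)$:

import numpy as np

n = 3
e = np.zeros((n, n)); e[0, 0] = 1           # the matrix unit E_11, an idempotent

rng = np.random.default_rng(0)
M = rng.normal(size=(n, n))                  # a "generic" element of Mat_n(R)

print(np.round(e @ M @ e, 3))                # only the (1,1) entry of M survives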
https://oneclass.com/class-notes/ca/utsg/mat/mat-136h1/153676-1110-taylor-maclaurin-series-question-4-medium.en.html
# MAT136H1 Lecture Notes – Antiderivative

11.10 Infinite Sequences & Series: Taylor & Maclaurin Series

Question #4 (Medium): Evaluating an Indefinite Integral as an Infinite Series

Strategy: Any given function inside the indefinite integral can be expressed as an infinite series. Put that sum expression inside the indefinite integral, then focus on the power of $x$, since the integral is with respect to the variable $x$. Everything else is treated as a number, including the series counter $n$; the power of $x$ is raised by one, and that new power must be offset by its reciprocal as a coefficient. The outcome is a similar series expression with the power raised by one and an offsetting coefficient added. Remember to add the arbitrary constant $C$, since it is an indefinite integral.

Sample Question: Evaluate the indefinite integral as an infinite series. [The specific integral and the worked series expressions were given as images in the source and are not recoverable here.]

Solution: First take the expression inside the indefinite integral and express it as an infinite series. Then, to evaluate the indefinite integral, put this summation expression inside the integral and integrate term by term. Remember to add the arbitrary constant factor of $C$.
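The note's original integral is not recoverable, so here is a sketch of the same technique on a hypothetical example, $\int \sin(x^2)\,dx$ (SymPy assumed): expand the integrand as a series and integrate term by term, raising each power of $x$ by one and dividing by the new power.

import sympy as sp

x = sp.symbols('x')
N = 5   # number of series terms to keep

# Hypothetical integrand: sin(x**2) = sum_{n>=0} (-1)**n * x**(4n+2) / (2n+1)!
terms = [(-1)**n * x**(4*n + 2) / sp.factorial(2*n + 1) for n in range(N)]

# Integrate term by term: each power of x goes up by one, offset by its reciprocal
antiderivative = sum(t.integrate(x) for t in terms)   # add + C for the indefinite integral
print(antiderivative)

# Sanity check: differentiating the truncated series recovers the truncated integrand
print(sp.simplify(sp.diff(antiderivative, x) - sum(terms)))   # 0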
https://www.actucation.com/calculus-2/approximate-area-under-the-graph
Informative line ### Approximate Area Under The Graph Find the area under the graph & a curve approximate the area using rectangles. Practice upper and lower estimates, area under a curve calculus by rectangular approximation method. # The Area Problem From the earlier classes we have learned the formulas for finding the areas of some figures, which have only straight lines as sides. Some of these are given below. (1) (2) (3) (4) #### Consider a rectangle whose length and breadth are $$5\,cm$$ and $$7\,cm$$ respectively and a square whose side is $$6\,cm$$. If $$R$$ is the area of rectangle and $$S$$ is the area of square then, A $$R>S$$ B $$R<S$$ C $$R=S$$ D $$R=2S$$ × $$R=\ell×b$$ $$=5cm×7cm$$ $$=35\,\text{cm}^2$$ $$S=a^2$$ $$=(6cm)^2$$ $$=36\,\text{cm}^2$$ $$R<S$$ ### Consider a rectangle whose length and breadth are $$5\,cm$$ and $$7\,cm$$ respectively and a square whose side is $$6\,cm$$. If $$R$$ is the area of rectangle and $$S$$ is the area of square then, A $$R>S$$ . B $$R<S$$ C $$R=S$$ D $$R=2S$$ Option B is Correct # The Area of Regions which have Curved Sides It is not so easy to find the area of regions which have curved sides. We first start by approximating the area bounded by curved sides through rectangles. Consider the curve $$\to\;y=x^3$$ Suppose we want to estimate the area under $$y=x^3$$ from 0 to 1, base being $$x$$ axis (shaded portion). We can easily say that this area is less than that of square of length 1 so, shaded area < 1. Since, area is a positive quantity, we say that $$0<A<1$$. #### Let $$A$$ be the area under the curve $$y=x^4$$ from 0 to 1, base being $$x$$ axis, as shown by shaded region in the given figure, then A $$A>5$$ B $$0<A<1$$ C $$A>7$$ D $$A>2$$ × Area is less than square of side length 1 Area is a physical quantity so, it is always positive. $$\therefore\;0<A<1$$ ### Let $$A$$ be the area under the curve $$y=x^4$$ from 0 to 1, base being $$x$$ axis, as shown by shaded region in the given figure, then A $$A>5$$ . B $$0<A<1$$ C $$A>7$$ D $$A>2$$ Option B is Correct # Estimating the Lower Bound of Area by using  Left End Points If we use the left end point height of the rectangles, we get a value which is a  lower estimate of the area. Consider  $$y=x^3$$  in [0,1]. We will use four  rectangles.The area we calculate is shaded and is obviously less than the actual area. $$L_4 = \dfrac{1}{4} × 0+\dfrac{1}{4}× \left(\dfrac{1}{4}\right)^3+\dfrac{1}{4}× \left(\dfrac{1}{2}\right)^3+\dfrac{1}{4}× \left(\dfrac{3}{4}\right)^3$$ (The left height of first rectangle is 0) $$= \dfrac{1}{4} \left[\dfrac{1}{64}+\dfrac{1}{8}+\dfrac{27}{64}\right]$$ $$= \dfrac{1}{4}\left[\dfrac{1+8+27}{64}\right]$$ $$= \dfrac{9}{64}$$ $$=0 .1406$$ We say the area $$A>0 .1406$$ $$L_n$$ will represent the area obtained by dividing the interval into n equal parts  and using left end points for heights. $$R_n$$ = Shaded area $$R_n>A$$ $$L_n$$= Shaded area $$L_n<A$$ • As n increases, the values of $$R_n$$ and $$L_n$$ will approach the actual area. • If the function is decreasing, we will observe that  $$R_n<A$$  and  $$L_n >A$$ . #### Estimate the lower bound of area bounded by $$y=\sqrt x$$  in [0,1] by calculating $$L_4$$ . 
A $$A>0.518$$ B $$A=7$$ C $$A=-2$$ D $$A=11$$ × Divide the interval $$[0,1]$$ into four equal parts i.e., $$\left[0,\dfrac{1}{4}\right],\left[\dfrac{1}{4},\dfrac{1}{2}\right],\left[\dfrac{1}{2},\dfrac{3}{4}\right],\left[\dfrac{3}{4},1\right]$$ $$L_4 = \dfrac{1}{4} × \sqrt 0 + \dfrac{1}{4}× \sqrt{\dfrac{1}{4}}+ \dfrac{1}{4}× \sqrt{\dfrac{1}{2}} + \dfrac{1}{4}× \sqrt{\dfrac{3}{4}}$$ $$= \dfrac{1}{4}× \left[\dfrac{1}{2}+\dfrac{1}{\sqrt2} +\dfrac{\sqrt3}{2} \right]$$ $$= \dfrac{1}{4}× \left[.5+.707 +.866 \right]$$ $$=0.518$$ $$\Rightarrow A>0.518$$ ### Estimate the lower bound of area bounded by $$y=\sqrt x$$  in [0,1] by calculating $$L_4$$ . A $$A>0.518$$ . B $$A=7$$ C $$A=-2$$ D $$A=11$$ Option A is Correct # Estimating the Areas by using Left End Points and Right End Points For an increasing function, we can get a closer estimate to the areas bounded by curved sides by breaking it into parts and then estimating each part by a rectangle. Consider $$y=x^3$$ in $$[0,\,1]$$ We divide $$[0,\,1]$$ into two parts $$\to\;\left[0,\,\dfrac{1}{2}\right]$$ and $$\left[\dfrac{1}{2},\,1\right]$$. Now, we observe that shaded area can be approximated by sum of two rectangles. $$\therefore$$ Area of rectangles $$=\dfrac{1}{2}×\left(\dfrac{1}{2}\right)^3+\dfrac{1}{2}×1^3$$ $$=\dfrac{1}{2}×\dfrac{1}{8}+\dfrac{1}{2}×1$$ $$=\dfrac{1}{16}+\dfrac{1}{2}$$ $$=\dfrac{9}{16}=0.5625$$ $$\therefore$$ Shaded Area $$=A<0.56$$ We have obtained a closer estimate to the area. We can get even better approximation by increasing the number of rectangles and ultimately an accurate area value by taking the limit. • In this approximation, we have used the right end points as heights. ### Estimating the Lower Bound of Area by Using  Left End Points If we use the left end point height of the  rectangles we get a value which is a  lower  estimate of the area. Consider $$y=x^3$$ in [0,1]. We will use four  rectangles. The area we calculate is shaded and is obviously less than the actual area. $$L_4 = \dfrac{1}{4} × 0+\dfrac{1}{4}× \left(\dfrac{1}{4}\right)^3+\dfrac{1}{4}× \left(\dfrac{1}{2}\right)^3+\dfrac{1}{4}× \left(\dfrac{3}{4}\right)^3$$ (The left height of first rectangle is 0) $$= \dfrac{1}{4} \left[\dfrac{1}{64}+\dfrac{1}{8}+\dfrac{27}{64}\right]$$ $$= \dfrac{1}{4}\left[\dfrac{1+8+27}{64}\right]$$ $$= \dfrac{9}{64}$$ $$= .1406$$ We say the area $$A> .1406$$ $$L_n$$ will represent the area obtained by dividing the interval into n equal parts  and using left end points for heights. $$R_n$$= Shaded Area $$R_n>A$$ $$L_n$$= Shaded Area $$L_n <A$$ • As n increases the values of $$R_n$$ and $$L_n$$ will approach the actual area. • If the function is decreasing,  we will observe that $$R_n<A$$ and $$L_n >A$$ . #### Using four rectangles, find lower and upper estimate  of area under the given  graph of a function $$'f'$$ from $$x = 0$$ to $$x = 8$$ . A $$32<A<46$$ B $$A>49$$ C $$2<A<4$$ D $$-1<A<18$$ × Divide the interval [0,8] into 4 equal parts  i.e.,  [0,2], [2,4], [4,6], [6,8]. $$R_4 = 2×\underline 3 + 2 ×\underline 5 +2× \underline7 +2×\underline8$$ $$= 2[3+5+7+8]$$ $$=2×23$$ $$=46$$ The underlined values are $$f(2), f(4), f(6) \,\,{\text{&}}\,\, f(8)$$. $$R_4 = 46$$ $$L_4 = 2× 1+2×3+2× 5+2×7$$ $$=2[1+3+5+7]$$ $$= 2× 16$$ $$= 32$$ $$L_4 =32$$ $$32<A<46$$ ### Using four rectangles, find lower and upper estimate  of area under the given  graph of a function $$'f'$$ from $$x = 0$$ to $$x = 8$$ . A $$32<A<46$$ . 
B $$A>49$$ C $$2<A<4$$ D $$-1<A<18$$ Option A is Correct # Estimation to the Areas Bounded by Curved Sides by Breaking it into Parts We can get a closer estimate to the areas bounded by curved sides by breaking it into parts and then estimating each part by a rectangle. Consider $$y=x^3$$ in $$[0,\,1]$$ We divide $$[0,\,1]$$ into two parts $$\to\;\left[0,\,\dfrac{1}{2}\right]$$ and $$\left[\dfrac{1}{2},\,1\right]$$ Now, we observe that shaded area can be approximated by sum of two rectangles $$\therefore$$ Area of rectangles $$=\dfrac{1}{2}×\left(\dfrac{1}{2}\right)^3+\dfrac{1}{2}×1^3$$ $$=\dfrac{1}{2}×\dfrac{1}{8}+\dfrac{1}{2}×1$$ $$=\dfrac{1}{16}+\dfrac{1}{2}$$ $$=\dfrac{9}{16}=0.5625$$ $$\therefore$$ Shaded Area $$=A<0.56$$ We have obtained a closer estimate to the area. We can get even better approximation by increasing the number of rectangles and ultimately an accurate area value by taking the limits. • In this approximation, we have used the right end points as heights. #### The estimated value of area bounded by $$y=x^2$$  from 0 to 1 by using two rectangles is A $$A>2$$ B $$A<0.625$$ C $$A>-2$$ D $$A<0.1$$ × Divide the interval $$[0,\,1]$$ into two equal parts $$\to\;\left[0,\,\dfrac{1}{2}\right]$$ and $$\left[\dfrac{1}{2},\,1\right]$$ Now from figure, $$R_2=$$ sum of areas of two rectangles $$=\dfrac{1}{2}×\dfrac{1}{4}+\dfrac{1}{2}×1$$ $$=\dfrac{1}{8}+\dfrac{1}{2}$$ $$=\dfrac{5}{8}=0.625$$ $$\therefore\;A<0.625$$ $$R_n$$ now becomes the symbol for sum of area of $$n$$ rectangles formed by dividing the interval into $$n$$ parts and taking right heights or right end points. ### The estimated value of area bounded by $$y=x^2$$  from 0 to 1 by using two rectangles is A $$A>2$$ . B $$A<0.625$$ C $$A>-2$$ D $$A<0.1$$ Option B is Correct # Approximating the Area using Four Rectangles (R4) We can get a closer estimate to the area bounded by curved sides by breaking it into parts and then estimating end part by a rectangle. Consider  $$y=x^6$$ in $$[0,\,1]$$ The area bounded is as shown. Now this area can be approximated by dividing $$[0,\,1]$$ into four equal intervals $$\left[0,\,\dfrac{1}{4}\right],\;\left[\dfrac{1}{4},\,\dfrac{1}{2}\right],\;\left[\dfrac{1}{2},\,\dfrac{3}{4}\right]$$ and $$\left[\dfrac{3}{4},\,1\right]$$ and calculating the area of rectangle formed. $$(A_1,\,A_2,\,A_3,\;\&\;\,A_4)$$ $$R_4=$$ estimated area using four rectangles $$=\dfrac{1}{4}×\underbrace{\left(\dfrac{1}{4}\right)^6}_{A_1}+\dfrac{1}{4}× \underbrace{\left(\dfrac{1}{2}\right)^6}_{A_2}+\dfrac{1}{4}×\underbrace{\left(\dfrac{3}{4}\right)^6}_{A_3}+\underbrace{\dfrac{1}{4}×1^6}_{A_4}$$ • This value will be larger than shaded area which was to be found. $$\therefore$$ We say that $$A<R_4$$. #### Estimate the area A bounded by the curve $$y=x^3$$ in $$[0,\,1]$$ by taking four rectangles i.e., $$R_4$$ . A $$A<0.3906$$ B $$A>5$$ C $$A=7$$ D $$A<-2$$ × Divide the interval $$[0,\,1]$$ into four equal parts i.e., $$\left[0,\,\dfrac{1}{4}\right],\;\left[\dfrac{1}{4},\,\dfrac{1}{2}\right],\;\left[\dfrac{1}{2},\,\dfrac{3}{4}\right]$$ and $$\left[\dfrac{3}{4},\,1\right]$$ . $$R_4=\dfrac{1}{4}×\left(\dfrac{1}{4}\right)^3+\dfrac{1}{4}×\left(\dfrac{1}{2}\right)^3+\dfrac{1}{4}×\left(\dfrac{3}{4}\right)^3+\dfrac{1}{4}×1^3$$ $$=\dfrac{1}{4}×\dfrac{1}{64}+\dfrac{1}{4}×\dfrac{1}{8}+\dfrac{1}{4}×\dfrac{27}{64}+\dfrac{1}{4}$$ $$=\dfrac{100}{256}$$ $$=0.3906$$ $$A<0.3906$$ ### Estimate the area A bounded by the curve $$y=x^3$$ in $$[0,\,1]$$ by taking four rectangles i.e., $$R_4$$ . A $$A<0.3906$$ . 
B $$A>5$$ C $$A=7$$ D $$A<-2$$ Option A is Correct # Estimating the Area for a Decreasing Function By Finding $$R_n$$  and $$L_n$$ If $$f$$ is  a decreasing function , then $$R_n$$ will give the lower estimate and $$L_n$$ will give the upper estimate of the actual area. Estimation using $$R_n=Shaded\;Area$$ Estimation using $$L_n=Shaded\; Area$$ $$R_n<A<L_n$$ #### Using five rectangles, find the upper and lower estimate of the area under the given graph of $$'f'$$ from $$x =0$$ to $$x = 10$$ . A $$54<A<72$$ B $$A>96$$ C $$A<34$$ D $$2<A<3$$ × Divide the interval [0,10] into five equal parts, i.e., $$[0,2], [2,4],[4,6],[6,8],[8,10]$$. $$R_5 = 2× f(2) + 2× f(4) + 2×f(6) +2× f(8) +2×f(10)$$ $$= 2× 9+2× 8+2× 6+2× 3+2× 1$$ $$=2[9+8+6+3+1]$$ $$\;= 2× 27\\ = 54$$ $$\Rightarrow R_5 =54$$ $$\Rightarrow L_5 = 2× f(0) +2×f(2) + 2× f(4) +2× f(6)+2× f(8)$$ $$= 2× 10+2× 9+2× 8+2× 6+2× 3$$ $$= 2[10+9+8+6+3]$$ $$\;=2× 36\\=72$$ $$\Rightarrow L_5 = 72$$ $$54<A<72$$ ### Using five rectangles, find the upper and lower estimate of the area under the given graph of $$'f'$$ from $$x =0$$ to $$x = 10$$ . A $$54<A<72$$ . B $$A>96$$ C $$A<34$$ D $$2<A<3$$ Option A is Correct # Approximating Area of Regions Bounded by Functions Whose Expressions are Known Suppose instead of graphs we are given an expression of the function. We sketch that (using standard graphs and graphical transformation) and then approximate using $$R_n \,\,{\text{or}}\,\, L_n$$ as the case may be • If a function is not increasing or decreasing in an interval there is no fixed relation between $$R_n,\,L_n \,\,{\text {and}}\,\,A$$. #### The area bounded by $$f(x)= 3+sin\,x$$  and $$x- axis$$ between $$x =0$$ and $$x = \pi$$ can be approximated as (use four rectangles, $$R_4\,\,\text{and} \,\,L_4$$) A $$A\cong11.315$$ B $$A\cong52$$ C $$A\cong-\dfrac{1}{2}$$ D $$A\cong.01$$ × Difficult to judge whether shaded area is more or less than required area. Sketch $$y=sin\,x$$ and apply the transformation $$y= f(x) +C$$ where $$C=3$$   . Divide the interval  $$[0,\pi]$$ into four equal parts i.e., $$\left(0,\dfrac{\pi}{4}\right),\left(\dfrac{\pi}{4},\dfrac{\pi}{2}\right),\left(\dfrac{\pi}{2},\dfrac{3\pi}{4}\right),\left(\dfrac{3\pi}{4},\pi\right)$$ $$R_4 = \dfrac{\pi}{4}× f\left(\dfrac{\pi}{4}\right) +\dfrac{\pi}{4}× f\left(\dfrac{\pi}{2}\right) + \dfrac{\pi}{4}× f\left(\dfrac{3\pi}{4}\right) +\dfrac{\pi}{4}× f\left(\pi\right)$$ $$= \dfrac{\pi}{4}×\Big[\left(3+\dfrac{1}{\sqrt{2}}\right)+4+\left(3+\dfrac{1}{\sqrt{2}}\right)+3\Big]$$ $$=\dfrac{\pi}{4}[14.414]$$ $$=11.315$$ $$L_4 = \dfrac{\pi}{4}×f(0)+\dfrac{\pi}{4}× f\left(\dfrac{\pi}{4}\right) +\dfrac{\pi}{4}× f\left(\dfrac{\pi}{2}\right) + \dfrac{\pi}{4}× f\left(\dfrac{3\pi}{4}\right)$$$$=\dfrac{\pi}{4}× \left[3+\left(3+\dfrac{1}{\sqrt2}\right)+4+\left(3+\dfrac{1}{\sqrt2}\right) \right]$$ $$=\dfrac{\pi}{4} [14.414] =11.315$$ We observe that,  $$R_4 = L_4 = 11.315$$ We say, $$A\cong11.315$$ ### The area bounded by $$f(x)= 3+sin\,x$$  and $$x- axis$$ between $$x =0$$ and $$x = \pi$$ can be approximated as (use four rectangles, $$R_4\,\,\text{and} \,\,L_4$$) A $$A\cong11.315$$ . B $$A\cong52$$ C $$A\cong-\dfrac{1}{2}$$ D $$A\cong.01$$ Option A is Correct
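The left- and right-endpoint estimates worked through above are easy to check numerically. The following is a small illustrative sketch added here (not part of the original page); the helper names left_sum and right_sum are invented for this example.

def left_sum(f, a, b, n):
    # Sum of n rectangles on [a, b] using left endpoints as heights
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

def right_sum(f, a, b, n):
    # Sum of n rectangles on [a, b] using right endpoints as heights
    h = (b - a) / n
    return sum(f(a + (i + 1) * h) * h for i in range(n))

# Reproduce the worked examples: y = sqrt(x) and y = x^3 on [0, 1]
print(left_sum(lambda x: x ** 0.5, 0, 1, 4))   # ~0.518, so A > 0.518
print(left_sum(lambda x: x ** 3, 0, 1, 4))     # 0.140625 = 9/64, so A > 0.1406
print(right_sum(lambda x: x ** 3, 0, 1, 2))    # 0.5625 = 9/16, so A < 0.5625

# As n grows, both estimates close in on the true area (1/4 for x^3)
for n in (4, 40, 400):
    print(n, left_sum(lambda x: x ** 3, 0, 1, n), right_sum(lambda x: x ** 3, 0, 1, n))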
2018-12-10T07:44:46
{ "domain": "actucation.com", "url": "https://www.actucation.com/calculus-2/approximate-area-under-the-graph", "openwebmath_score": 0.9172749519348145, "openwebmath_perplexity": 594.6165866106718, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357237856482, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.653257316505916 }
https://classes.areteem.org/mod/forum/discuss.php?d=1249
## Online Course Discussion Forum ### Combinatorics question Combinatorics question If you have 5 numbered balls and 5 numbered boxes, then there are 5^5=3125 ways of putting them in the boxes. However, if you first do stars and bars and then determine what the balls' numbers are, the answer is (5+5-1)C5*5!  = 15120, which is not 3125. Where did the overcounting happen? Re: Combinatorics question If a few balls are in the same box, their order does not matter.  That's why it is not simply $5!$ in each case, and that is also why stars and bars cannot apply when the balls are distinguishable.
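To make the overcounting concrete, here is a small illustrative Python check added to this thread summary (it is not part of the original discussion). With distinguishable balls each arrangement is a function from balls to boxes; a stars-and-bars pattern followed by a labelling does not pick out one such function, because arrangements with several balls in the same box get generated more than once.

from itertools import product
from math import comb, factorial

n_balls = n_boxes = 5

# Every assignment of numbered balls to numbered boxes, counted once each
assignments = list(product(range(n_boxes), repeat=n_balls))
print(len(assignments))                      # 5**5 = 3125

# Stars-and-bars count of occupancy patterns, then 5! labellings
overcount = comb(n_balls + n_boxes - 1, n_balls) * factorial(n_balls)
print(overcount)                             # 126 * 120 = 15120

# The discrepancy: within an occupancy pattern, permuting balls that share a
# box gives the same assignment, so each pattern is hit multiple times by the 5!.
# Example: "all five balls in box 0" is exactly 1 assignment, but the
# pattern-then-label recipe produces it 5! = 120 times.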
2023-02-08T06:49:01
{ "domain": "areteem.org", "url": "https://classes.areteem.org/mod/forum/discuss.php?d=1249", "openwebmath_score": 0.7586373686790466, "openwebmath_perplexity": 467.45985302133664, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357237856482, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.653257316505916 }
https://math.stackexchange.com/questions/1602960/given-marginal-probability-distribution-functions-does-there-always-exsit-a-joi
# Given marginal probability distribution functions, does there always exsit a joint distribution which can produce these marginals? For N variables $X_1,X_2,\ldots,X_N$, the PDFs of all $X_i$s are given. Can we prove a joint distribution always exists? What if some consistent PDFs of combinations of these variables are given, i.e., one PDF does not contradict with another? By "consistency" and "contradiction", I meant you cannot produce different margins from any two given PDFs. For example, one can calculate the marginal distribution of $X_1$ from the marginal distribution of $X_1,X_2$. If the marginal distribution of $X_1,X_2$ on $X_1$ is different from the given PDF of $X_1$, then there is an inconsistency (contradiction). • Hint: can you answer the question with the added assumption that $X_1,\ldots, X_n$ are independent? – Jonathan Y. Jan 7 '16 at 8:10 • @JonathanY. Thanks for pointing it out. See my update. – xuhdev Jan 7 '16 at 8:20 • I'm sorry, I don't follow the clarification. What do we mean by one PDF doesn't contradict with another? – Jonathan Y. Jan 7 '16 at 8:22 • @JonathanY. Please see my update. – xuhdev Jan 7 '16 at 8:32 • Let me rephrase, and tell me if I've got your question correctly: Suppose that for some $J_1,\ldots,J_k\subseteq \{X_1,\ldots,X_n\}$ we're given joint PDF's $F_{J_i}$, $i=1,\ldots, k$, such that whenever $J\subset J_i\cap J_j$, the marginal PDF $F_J$ derived from $F_{J_i}$ and $F_{J_j}$ is the same. Then can we make a consistent choice for $F_{X_1,\ldots,X_n}$? – Jonathan Y. Jan 7 '16 at 8:36 Given $\mathcal{X}=\{X_1,\ldots,X_n\}$ and some collection $J_1,\ldots,J_k\in \mathcal{P}(\mathcal{X})$, with PDF's $F_{J_i}$ for all $i=1,\ldots,k$, such that whenever $J\subset J_i\cap J_j$ we have that $F_{J_i}$ and $F_{J_j}$ induce the same marginal PDF $F_J$, we don't necessarily have $F_{\mathcal{X}}$ inducing $F_{J_i}$ for all $i=1,\ldots,k$. Consider, e.g., $X,Y,Z$ where $$F_{X,Y}(x,y) = \begin{cases}0 & \min(x,y)<0\\ \frac{1}{2} & 0\leq\min(x,y)<1\\ 1 & \min(x,y)\geq 1\end{cases}$$ (which amounts to having $X\stackrel{a.s.}{=}Y\sim B(0.5)$), $$F_{Y,Z}(y,z) = \begin{cases}0 & \min(y,z)<0\\ \frac{1}{2} & 0\leq\min(y,z)<1\\ 1 & \min(y,z)\geq 1\end{cases}$$ ($Y\stackrel{a.s.}{=}Z\sim B(0.5)$) and $$F_{X,Z}(x,z) = \begin{cases}0 & \min(x,z)<0 \vee \max(x,z)<1\\ \frac{1}{2} & 0\leq\min(x,z)<1\leq\max(x,z)\\ 1 & \min(x,z)\geq 1\end{cases}$$ ($(1-X)\stackrel{a.s.}{=}Z\sim B(0.5)$). Then from any pair of PDF's we consistently get $$F_X(t) = F_Y(t) = F_Z(t) = \begin{cases}0 & t<0\\ \frac{1}{2} & 0\leq t<1\\ 1 & t\geq 1\end{cases}$$ but no joint PDF $F_{X,Y,Z}$ generates all three marginal distribution functions (since that would imply $X\stackrel{a.s.}{=}(1-X)$). However, if the maximal elements of $J_1,\ldots,J_k$ (w.r.t. inclusion) are all pairwise-disjoint, then we can construct $F_{\mathcal{X}}$ by assuming independence.
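The contradiction in the answer can also be checked mechanically. The sketch below is an illustration added here, not from the thread: it writes the eight unknown probabilities $P(X=x,Y=y,Z=z)$ on $\{0,1\}^3$, imposes the three pairwise constraints $X=Y$, $Y=Z$ and $(1-X)=Z$ almost surely, and asks a linear-programming solver whether any non-negative solution exists. It assumes scipy is available.

from itertools import product
import numpy as np
from scipy.optimize import linprog

atoms = list(product((0, 1), repeat=3))        # (x, y, z) outcomes, 8 unknowns

A_eq, b_eq = [], []
# Total probability is 1
A_eq.append([1.0] * 8); b_eq.append(1.0)
# P(X != Y) = 0, P(Y != Z) = 0, P(X == Z) = 0  (the last encodes Z = 1 - X a.s.)
A_eq.append([1.0 if x != y else 0.0 for (x, y, z) in atoms]); b_eq.append(0.0)
A_eq.append([1.0 if y != z else 0.0 for (x, y, z) in atoms]); b_eq.append(0.0)
A_eq.append([1.0 if x == z else 0.0 for (x, y, z) in atoms]); b_eq.append(0.0)

res = linprog(c=np.zeros(8), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * 8, method="highs")
print(res.status)   # 2 means infeasible: no joint law has all three pairwise marginals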
2019-09-21T00:21:20
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1602960/given-marginal-probability-distribution-functions-does-there-always-exsit-a-joi", "openwebmath_score": 0.9391493797302246, "openwebmath_perplexity": 362.9788339436775, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357237856481, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.653257316505916 }
https://math.stackexchange.com/questions/2598158/discontinuity-of-a-series-of-continuous-functions/2598167
# Discontinuity of a series of continuous functions Is the function $$f(x)=x^2+\frac{x^2}{1+x^2}+\frac{x^2}{(1+x^2)^2}+\ldots\infty$$ continuous $\forall x\in\mathbb{R}$? I think no, probably the point of discontinuity is 0. Am I right in that? I think we should use the sum of a geometric series here. Also, is there any relationship to concepts of uniform convergence, like dini theorem etc.? Any hints? Thanks beforehand. Since $f(0)=0$ and since, if $x\neq 0$,$$x^2\left(1+\frac1{1+x^2}+\left(\frac1{1+x^2}\right)^2+\cdots\right)=x^2\frac1{1-\frac1{1+x^2}}=1+x^2,$$your function is discontinuous at $0$, since $\lim_{x\to0}f(x)=1\neq0=f(0)$. • yes, I expected this, but is there any relation to uniform convergence like dini theorem etc.? – vidyarthi Jan 9 '18 at 12:58 • @vidyarthi You can deduce from what I wrote that the convergence is not uniform, since you have a series of continuous functions which converge pointwise to a discontinuous one. – José Carlos Santos Jan 9 '18 at 12:59
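A quick numerical illustration, added here and not part of the original exchange, shows the jump at $0$: the partial sums of the series approach $1+x^2$ for any fixed $x\neq 0$, while every partial sum is $0$ at $x=0$.

def partial_sum(x, n_terms):
    # Finite sum of continuous terms: x^2 * sum_{k=0}^{n_terms-1} (1 + x^2)^(-k)
    return sum(x * x / (1 + x * x) ** k for k in range(n_terms))

# For any fixed x != 0 the partial sums approach 1 + x^2 ...
for x in (1.0, 0.5, 0.1):
    print(x, partial_sum(x, 10_000), 1 + x * x)

# ... but at x = 0 every partial sum is exactly 0, so the limit function
# jumps from values near 1 down to 0 at the origin.
print(partial_sum(0.0, 10_000))

# The smaller |x| is, the more terms are needed to get near 1 + x^2,
# which is the pointwise-but-not-uniform convergence near 0:
for x in (0.1, 0.01, 0.001):
    print(x, partial_sum(x, 1_000))    # about 1.01, then values far below 1 + x^2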
2020-08-08T09:43:51
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2598158/discontinuity-of-a-series-of-continuous-functions/2598167", "openwebmath_score": 0.9297904968261719, "openwebmath_perplexity": 223.99069471751804, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357237856482, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.653257316505916 }
https://math.stackexchange.com/questions/2992458/elementary-number-theory-determining-if-there-exist-roots-for-a-polynomial-con
# Elementary Number Theory - Determining if there exist roots for a polynomial congruence with a prime modulus If we consider something like the polynomial $$f(x) = x^3-1$$, and we want to know if there exist any solutions at all for $$x^3 - 1 \equiv 0 \ (mod \ p)$$, where $$p$$ is prime, is there a way to answer this without just plugging in every possible value of $$x$$? (i.e., $$0,1,...,p-1$$). From my class, we learned that if $$p$$ is a prime, and assuming there exists a coefficient of a term in $$f(x)$$ which $$p$$ does not divide, the congruence has at most $$n$$ solutions, where $$n$$ is the degree of the highest degree term, $$cx^a$$, such that $$p \nmid c$$. I've read about some particular examples: for a quadratic, there are solutions if and only if the discriminant is a square modulo p. For linear congruences, we do know that there is a simple way to check if there are any solutions at all, which is basically like checking whether a linear Diophantine equation has any solutions. I am wondering if there is a general way to extend the question of "do there exist any roots" to any polynomial. A result like this would be delightful to speed up the process of finding roots when solving congruences for polynomials. Thanks! • Do you know about primitive roots $\pmod p$? – lulu Nov 10 '18 at 10:29 • Hmm, I don't believe so – Stawbewwy Nov 10 '18 at 10:44 • Well, that theory makes your specific problem easy...$x^3-1\equiv 0\pmod p$ has a solution iff $3\,|\,p-1$. But, generally speaking, it is not easy to find the roots of a general polynomial, even $\pmod p$. – lulu Nov 10 '18 at 10:46 • Your example $x^3-1\equiv0$ is not a great one, since it has an obvious solution $x\equiv1$. – Lord Shark the Unknown Nov 10 '18 at 10:51 • We could just say $x^3 - c$ in general. Also thank you Lulu, I'll take a look at it :) – Stawbewwy Nov 10 '18 at 18:43 To determine whether $$f(x)\equiv0\pmod p$$ has solutions where $$p$$ is prime, compute $$\gcd(f(x),x^p-x)$$ over the field $$\Bbb Z_p$$ of integers modulo $$p$$. This can be done by the Euclidean algorithm for polynomials. The answer you get will be a polynomial $$h(x)$$ whose roots are the roots of $$f$$ lying in $$\Bbb Z_p$$, since $$x^p-x=0$$ has precisely the elements of $$\Bbb Z_p$$ as roots. As long as the degree $$d$$ of $$f$$ is small, this can be done efficiently. The first step of the Euclidean algorithm might seem to pose problems, as it requires finding the remainder when dividing $$x^p-x$$ (which may have large degree) by $$f(x)$$. But this boils down to computing $$x^p-x$$ in the quotient ring $$\Bbb Z_p/(f(x))$$, and computing $$x^p$$ therein can be done efficiently by the binary squaring method.
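To make the answer concrete, here is a small self-contained sketch added to this summary (one of many possible implementations, not from the thread). It decides whether $f(x)\equiv 0 \pmod p$ has a root by computing $\gcd(f, x^p - x)$ over $\mathbb{Z}_p$, with $x^p$ reduced modulo $f$ by repeated squaring. Polynomials are plain coefficient lists, lowest degree first; pow(..., -1, p) for modular inverses needs Python 3.8 or later.

def trim(a):
    # Drop trailing zero coefficients in place
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_mod(a, m, p):
    # Remainder of a on division by m, coefficients reduced mod p
    a = [c % p for c in a]
    trim(a)
    inv_lead = pow(m[-1] % p, -1, p)      # inverse of m's leading coefficient mod p
    while len(a) >= len(m):
        shift = len(a) - len(m)
        factor = a[-1] * inv_lead % p
        for i, coeff in enumerate(m):
            a[i + shift] = (a[i + shift] - factor * coeff) % p
        trim(a)
    return a

def poly_mul_mod(a, b, m, p):
    # Product of a and b, reduced modulo the polynomial m and the prime p
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return poly_mod(out, m, p)

def x_pow_mod(e, m, p):
    # x**e modulo (m, p) by binary (square-and-multiply) exponentiation
    result, base = [1], poly_mod([0, 1], m, p)
    while e:
        if e & 1:
            result = poly_mul_mod(result, base, m, p)
        base = poly_mul_mod(base, base, m, p)
        e >>= 1
    return result

def poly_gcd(a, b, p):
    # Euclidean algorithm over Z_p (result is not normalised; only its degree matters here)
    a = [c % p for c in a]; trim(a)
    b = [c % p for c in b]; trim(b)
    while b:
        a, b = b, poly_mod(a, b, p)
    return a

def has_root_mod_p(f, p):
    # f: coefficient list, lowest degree first, with leading coefficient not divisible by p.
    # gcd(f, x^p - x) is non-constant exactly when f has a root in Z_p.
    xp = x_pow_mod(p, f, p)                  # x^p reduced modulo f
    xpx = xp + [0] * max(0, 2 - len(xp))     # make sure the x-coefficient slot exists
    xpx[1] = (xpx[1] - 1) % p                # subtract x
    return len(poly_gcd(f, xpx, p)) > 1

# Example: f(x) = x^3 - 2 has a root mod 5 (3^3 = 27 is 2 mod 5) but none mod 7
print(has_root_mod_p([-2, 0, 0, 1], 5))   # True
print(has_root_mod_p([-2, 0, 0, 1], 7))   # False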
2019-04-20T12:13:18
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2992458/elementary-number-theory-determining-if-there-exist-roots-for-a-polynomial-con", "openwebmath_score": 0.829417884349823, "openwebmath_perplexity": 115.11203044896376, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357237856482, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.653257316505916 }
https://stats.stackexchange.com/questions/326090/determining-how-much-data-is-needed-for-statistical-significance-of-a-percentage/326108
# Determining how much data is needed for statistical significance of a percentage difference between two points? Imagining the following situation where A/B are independent results: • A is a 51% result, given N datapoints • B is a 50% result, given M datapoints how many data points (ie N and M) do you need to have confidence that the difference between A/B is statistically significant? There are various sample size calculators you can use online and in various software for a two-sample proportions test. Here's the calculation using Stata that suggests that you would need N=M=39,240 for a two-sided alternative test at conventional levels: . power twoproportions .50 .51 Performing iteration ... Estimated sample sizes for a two-sample proportions test Pearson's chi-squared test Ho: p2 = p1 versus Ha: p2 != p1 Study parameters: alpha = 0.0500 power = 0.8000 delta = 0.0100 (difference) p1 = 0.5000 p2 = 0.5100 Estimated sample sizes: N = 78,480 N per group = 39,240 • @enderland Did this clarify things? – Dimitriy V. Masterov Feb 5 '18 at 22:06
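For readers without Stata, the same order of magnitude falls out of the standard normal-approximation formula for a two-sample proportions test. The snippet below is an illustrative sketch added here, assuming scipy is available; the small difference from Stata's 39,240 comes from the particular test variant Stata uses.

from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Normal-approximation sample size per group for a two-sided test of p1 vs p2
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

print(n_per_group(0.50, 0.51))   # about 39,237 per group, close to Stata's 39,240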
2020-02-18T03:48:31
{ "domain": "stackexchange.com", "url": "https://stats.stackexchange.com/questions/326090/determining-how-much-data-is-needed-for-statistical-significance-of-a-percentage/326108", "openwebmath_score": 0.327338844537735, "openwebmath_perplexity": 4400.938477995521, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357237856482, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.653257316505916 }
http://mathhelpforum.com/algebra/214060-algebra-volume-ratios-print.html
# Algebra (Volume, Ratios) • March 1st 2013, 01:16 PM Fratricide Algebra (Volume, Ratios) The figure (fig) shows a solid consisting of three parts, a cone, a cylinder and a hemisphere, all of the same base radius. The first part of the question asks us to find the volume of each part, in terms of w, s, t and π (pi): Volume of cone = (1/3)πt^2w Volume of cylinder = πt^2s Volume of hemisphere = (1/2)*(4/3)πt^3 The next part of the question reads: i: If the volume of each of the three parts is the same, find the ratio w : s : t. ii: If also w + s + t = 11, find the total volume in terms of π. Aaaaaaaand I have no idea, nor am I expected to have an idea -- I was never actually taught how to do this, it was some kind of maniacal challenge set by my teacher to see how I would go about finding an answer. Any help would be much appreciated. Thanks for your time. • March 1st 2013, 02:51 PM Prove It Re: Algebra (Volume, Ratios) Well start by setting two of the volumes equal to each other. \displaystyle \begin{align*} \frac{1}{3}\,\pi\, t^2 \, w &= \pi \, t^2 \, s \end{align*} and find the relationship between w and s. You should be able to form the ratio between them from there. • March 2nd 2013, 01:35 PM Fratricide Re: Algebra (Volume, Ratios) So I did (1/3)πt^2w = πt^2s and came out with s = (1/3)w and thus w = s/(1/3). I tried solving (1/3)πt^2w = (1/2)*(4/3)πt^3 for t but failed miserably. Have I done this right? Also, what significance does this have in finding the ratio? • March 3rd 2013, 07:16 PM Fratricide Re: Algebra (Volume, Ratios) Bump. Anyone? • March 3rd 2013, 07:46 PM ibdutt Re: Algebra (Volume, Ratios) • March 3rd 2013, 10:01 PM Fratricide Re: Algebra (Volume, Ratios) Quote: Originally Posted by ibdutt So how did you go from having (1/3)w = s = (2/3)t to having the ratio (6:2:3)? And also, where did "k" come from? • March 4th 2013, 12:39 AM ibdutt Re: Algebra (Volume, Ratios) • March 4th 2013, 12:32 PM Fratricide Re: Algebra (Volume, Ratios) Thanks heaps. One final question to clarify my understanding: do we divide by 2 to make each fraction (1/X), (1/Y), etc? (i.e. to turn (2/3) into (1/3) we must divide it by 2 and thus all the others by 2) • March 4th 2013, 08:35 PM ibdutt Re: Algebra (Volume, Ratios) Yes we divide the fractions by a suitable number so that we have 1 in the numerator of each fraction, in this case we divide by 2. • March 4th 2013, 08:36 PM Fratricide Re: Algebra (Volume, Ratios) Thanks heaps.
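For anyone wanting to check the thread's conclusion symbolically, here is a small sketch added to this summary (it uses sympy, which the forum posts do not mention): it equates the three volumes, recovers w : s : t = 6 : 2 : 3, and then uses w + s + t = 11 to get a total volume of 54π.

import sympy as sp

w, s, t, k = sp.symbols('w s t k', positive=True)

cone = sp.Rational(1, 3) * sp.pi * t**2 * w
cylinder = sp.pi * t**2 * s
hemisphere = sp.Rational(2, 3) * sp.pi * t**3

# Equal volumes give s and t in terms of w
sol = sp.solve([sp.Eq(cone, cylinder), sp.Eq(cone, hemisphere)], [s, t], dict=True)[0]
print(sol)                      # {s: w/3, t: w/2}, i.e. w : s : t = 6 : 2 : 3

# With w + s + t = 11, scale the ratio 6 : 2 : 3 by k
k_val = sp.solve(sp.Eq(6*k + 2*k + 3*k, 11), k)[0]   # k = 1, so w = 6, s = 2, t = 3
total = (cone + cylinder + hemisphere).subs({w: 6*k_val, s: 2*k_val, t: 3*k_val})
print(sp.simplify(total))       # 54*pi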
2016-08-28T05:55:59
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/214060-algebra-volume-ratios-print.html", "openwebmath_score": 0.9271334409713745, "openwebmath_perplexity": 2036.8445574850991, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357237856482, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.653257316505916 }
https://susan-stepney.blogspot.com/2017/12/generators-have-state.html
## Saturday, 9 December 2017 ### Generators have state Python’s generator expressions (discussed in two previous posts) are very useful for stream programming. But sometimes, you want some state to be preserved between calls. That is where we need full Python generators. A generator expression is actually a special case of a generator that can be written inline. The generator expression (<expr> for i in <iter>) is shorthand for the generator: def some_gen(): for i in <iter>: yield <expr> For example, (n*n for n in count(1)) can be equivalently written as: from itertools import * def squares(): for n in count(1): yield n*n print_for(squares()) 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, ... The yield statement acts somewhat like a return in providing its value. But the crucial difference is what happens on the next call. An ordinary function starts off again from the top; a generator starts again directly after the previous yield. This becomes important if the generator includes some internal state: this state is maintained between calls. That is, we can have memory of previous state on the next call. This is particularly useful for recurrence relations, where the current value is expressed in terms of previous values (kept as remembered state). ## Running total The accumulate() generator provides a running total of its iterator argument. We can write this as an explicit sum: $$T_N = \sum_{i=1}^N x_i$$ We can instead write this sum as a recurrence relation: $$T_0 = 0 ; T_n = T_{n-1} + x_n$$ Expanding this out explicitly we get $$T_0 = 0 ; T_1 = T_0 + x_1 = 0 + x_1 = x_1 ; T_2 = T_1 + x_2 = x_1 + x_2 ; \ldots$$ $T_0$ is the initial state, which is observed directly. The recurrence term $T_n$ tells us what needs to be remembered of the previous state(s), and how to use that to generate the current value. def running_total(gen): tot = 0 for x in gen: tot += x yield tot print_for(repeat(3)) print_for(running_total(repeat(3))) print_for(running_total(count(1))) 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ... 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, ... 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... ## Factorial Similarly we can write the factorial as an explicit product:$$N! = F_N = \prod_{i=1}^N i$$ And we can write it as a recurrence relation: $$F_1 = 1; F_n = n F_{n-1}$$ $F_1$ is the initial state, which should be the first output of the generator. We can yield this directly on the first call, then (on the next call) go into a loop yielding the following values. def fact(): f = 1 yield f for n in count(2): f *= n yield f print_for(fact()) 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, ... We can remove the special case, and instead calculate the next value after the yield in the loop. The subsequent call then picks up at that calculation. def fact(): f = 1 for n in count(2): yield f f *= n print_for(fact()) 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, ... ## Fibonacci numbers The perennially popular Fibonacci numbers are naturally defined using a recurrence relation, involving two previous states:$$F_1 = F_2 = 1 ; F_n = F_{n-1} + F_{n-2}$$ This demonstrates how we can store more state than just the result of the previous yield. def fib(start=(1,1)): a,b = start while True: yield a a,b = b,a+b print_for(fib(),20) print_for(fib((0,3)),20) 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, ... 0, 3, 3, 6, 9, 15, 24, 39, 63, 102, 165, 267, 432, 699, 1131, 1830, 2961, 4791, 7752, 12543, ... 
## Logistic map Difference equations, discrete-time analogues of differential equations, are a form of recurrence relation: the value of the state at the next time step is defined in terms of its value at previous timesteps. The logistic map is a famous difference equation, exhibiting a range of periodic and chaotic behaviours, depending on the value of its paramenter $r$. $$x_0 \in (0,1) ; x_{n+1} = r x_n(1-x_n)$$ def logistic_map(r=4): x = 0.1 while True: yield x x = r * x * (1-x) print_for(logistic_map()) print_for(logistic_map(2)) 0.1, 0.36000000000000004, 0.9216, 0.28901376000000006, 0.8219392261226498, 0.5854205387341974, 0.970813326249438, 0.11333924730376121, 0.4019738492975123, 0.9615634951138128, ... 0.1, 0.18000000000000002, 0.2952, 0.41611392, 0.4859262511644672, 0.49960385918742867, 0.49999968614491325, 0.49999999999980305, 0.5, 0.5, ... We can plot these values to see the chaotic ($r=4$) and periodic ($r=2$) behaviours over time. (The %matplotlib inline “magic” allows the plot to display in a Jupyter notebook.) %matplotlib inline import matplotlib.pyplot as py py.plot(list(islice(logistic_map(),200))) py.plot(list(islice(logistic_map(3.5),200))) We can also produce the more usual plot, with the parameter $r$ running along the $x$-axis, highlighting the various areas of periodicity and chaos. from numpy import arange start = 2.8 stop = 4 step = (stop-start)*0.002 skip = int(1/step) # no of initial vals to skip (converged to attractor) npts = 150 # no of values to plot per value of lambda for r in arange(start,stop,step): yl = logistic_map(r) list(islice(yl,skip)) # consume first few items py.scatter(list(islice(repeat(r,npts),npts)), list(islice(yl,npts)), marker='.', s=1) py.xlim(start,stop) py.ylim(0,1) (Here I have used s=1 to get a small point, rather than use a comma marker to get a pixel, because there is currently a bug in the use of pixels in scatter plots.) ## Faster $\pi$ Many series for generating $\pi$ converge very slowly.  One that converges extremely quickly is:$$\pi = \sum_{i=0}^\infty \frac{(i!)^2 \, 2^{i+1}}{(2i+1)!}$$We could use generators for each component of the term to code this us as: import operator def pi_term(): fact = accumulate(count(1), operator.mul) factsq = (i*i for i in fact) twoi1 = (2**(i+1) for i in count(1)) def fact2i1(): i,f = 3,6 while True: yield f f = f * (i+1) * (i+2) i += 2 yield 2 # the i=0 term (needed because of 0! = 1 issues) for i in map(lambda x,y,z: x*y/z, factsq, twoi1, fact2i1()): yield i print_for(accumulate(pi_term()),40) 2, 2.6666666666666665, 2.933333333333333, 3.0476190476190474, 3.098412698412698, 3.121500721500721, 3.132156732156732, 3.1371295371295367, 3.1394696806461506, 3.140578169680336, 3.141106021601377, 3.1413584725201353, 3.1414796489611394, 3.1415379931734746, 3.1415661593449467, 3.1415797881375944, 3.1415863960370602, 3.141589605588229, 3.1415911669915006, 3.1415919276751456, 3.1415922987403384, 3.141592479958223, 3.1415925685536337, 3.1415926119088344, 3.1415926331440347, 3.1415926435534467, 3.141592648659951, 3.14159265116678, 3.141592652398205, 3.1415926530034817, 3.1415926533011587, 3.141592653447635, 3.141592653519746, 3.1415926535552634, 3.141592653572765, 3.1415926535813923, 3.141592653585647, 3.141592653587746, 3.141592653588782, 3.141592653589293, ... However, these terms get large, and take a long time to calculate. import timeit %time [ i for i in islice(pi_term(),10000) if i < 0 ]; Wall time: 10.9 s We can instead write the terms as a recurrence relation. 
Let $$F_{k-1} = \frac{((k-1)!)^2 \, 2^{k}}{(2k-1)!}$$ Then $$\begin{eqnarray} F_{k} &=& \frac{((k)!)^2 \, 2^{k+1}}{(2k+1)!} \ &=& \frac{((k-1)! k)^2 \, 2 \times 2^{k}}{(2k-1)!(2k)(2k+1)} \ &=& \frac{((k-1)!)^2 \, 2^{k} 2k^2}{(2k-1)! 2k(2k+1)} \ &=& F_{k-1} \frac{k}{2k+1} \end{eqnarray}$$ So $$F_0 = 2 ; F_{n} = F_{n-1} \frac{n}{2n+1}$$ import math def pi_term_rec(): pt = 2 for n in count(1): yield pt pt = pt * n/(2*n+1) print_for(pi_term_rec(),30) print_for(accumulate(pi_term_rec()),30) print(math.pi) We can see how quickly the terms shrink. 2, 0.6666666666666666, 0.26666666666666666, 0.1142857142857143, 0.0507936507936508, 0.02308802308802309, 0.010656010656010658, 0.004972804972804974, 0.002340143516614105, 0.0011084890341856288, 0.0005278519210407756, 0.0002524509187586318, 0.00012117644100414327, 5.8344212335328244e-05, 2.816617147222743e-05, 1.3628792647851983e-05, 6.607899465625204e-06, 3.209551169017956e-06, 1.561403271414141e-06, 7.606836450479149e-07, 3.710651927063e-07, 1.8121788481005348e-07, 8.85954103515817e-08, 4.335520081034849e-08, 2.123520039690538e-08, 1.0409411959267343e-08, 5.1065039800179424e-09, 2.5068292265542627e-09, 1.2314248832196377e-09, 6.052766375147372e-10, ... 2, 2.6666666666666665, 2.933333333333333, 3.0476190476190474, 3.098412698412698, 3.121500721500721, 3.132156732156732, 3.1371295371295367, 3.1394696806461506, 3.140578169680336, 3.141106021601377, 3.1413584725201353, 3.1414796489611394, 3.1415379931734746, 3.1415661593449467, 3.1415797881375944, 3.1415863960370602, 3.141589605588229, 3.1415911669915006, 3.1415919276751456, 3.1415922987403384, 3.141592479958223, 3.1415925685536337, 3.1415926119088344, 3.1415926331440347, 3.1415926435534467, 3.141592648659951, 3.14159265116678, 3.141592652398205, 3.1415926530034817, ... 3.141592653589793 Not only is this code simpler, it is also much faster: %time [ i for i in islice(pi_term_rec(),10000) if i < 0 ] Wall time: 3.98 ms ### Sieving for primes No discussion of generators would be complete without including a prime number generator. The standard algorithm is quite straightforward: • Generate consecutive whole numbers, and test each for divisibility.  If the current number isn’t divisible by anything, yield a new prime. • Only test for divisibility by primes, and only up to the square root of the number being tested; this requires keeping a record of the primes found so far. • Optimisation: treat 2 as a special case, and generate and test only the odd numbers. def primessqrt(): primessofar = [] yield 2 for n in count(3,2): # check odd numbers only, starting with 3 sqrtn = int(math.sqrt(n)) testprimes = takewhile(lambda i: i<=sqrtn, primessofar) isprime = all(n % p for p in testprimes) # n % p == 0 if n is divisible if isprime: # if new prime, add to list and yield, else continue yield n primessofar.append(n) print_while(primessqrt(), 200) 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, ... This algorithm is often referred to as the Sieve of Eratosthenes, but the true sieve is somewhat different in its operation.  The sieve doesn’t require any square roots or divisions: it uses addition only, moving a marker through the numbers, striking out the multiples.  
Using generators we can do this lazily, using a dictionary to store which marks have struck out a particular value (for example, when we get to 15, the dictionary entry will show that it has been struck out by 3). • Generate consecutive whole numbers, and check whether each one’s dictionary entry has any markers. • If there are no markers, yield a new prime $p$; start a new marker to strike out multiples of $p$ (start it at $p^2$, for the same reason the previous algorithm only needs to test values up to $\sqrt n$). • If there are markers (such as for 15), the value isn’t prime; move each marker on to the next value it strikes out (so for 15, move the 3 marker on 6 places to strike out 21: each marker is moved on by twice its value, since we are optimising by not considering even values) • Delete the current dictionary entry (so that the dictionary grows only as the number of primes found so far, rather than as the number of values checked, speeding up dictionary access). from collections import defaultdict def primesieve(): yield 2 sieve = defaultdict(set) # dict of n:{divisor} elems for n in count(3,2): # check odd numbers only if sieve[n] : # there are divisors, so not prime for d in sieve[n]: # move the sieve markers on else: #set empty, so prime yield n # remove current dict item del sieve[n] print_for(primesieve(), 100) 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521, 523, 541, ... And this “true” sieve is faster than the divisor version. %time [ i for i in islice(primessqrt(),100000) if i < 0 ] %time [ i for i in islice(primesieve(),100000) if i < 0 ] Wall time: 4.69 s Wall time: 735 ms So full Python generators are even more fun than generator expressions! And in fact, there’s yet more fun to be had with generators, because it is possible to send values to them on each call, too; but that’s another story for another time.
2018-08-14T21:31:31
{ "domain": "blogspot.com", "url": "https://susan-stepney.blogspot.com/2017/12/generators-have-state.html", "openwebmath_score": 0.49030253291130066, "openwebmath_perplexity": 1199.3678638761353, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357232512719, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573161503363 }
https://web2.0calc.com/questions/plz-help_36
+0 # PLZ HELP!!! +1 42 1 +25 Triangle ABC has vertices A(0, 0), B(0, 3) and C(5, 0). A point P inside the triangle is √10 units from point A and √13 units from point B. How many units is P from point C? Express your answer in simplest radical form. OOPS!!! CluelesssPersonnn  Jul 12, 2018 edited by CluelesssPersonnn  Jul 12, 2018 #1 +19636 +2 Triangle ABC has vertices A(0, 0), B(0, 3) and C(5, 0). A point P inside the triangle is √10 units from point A and √13 units from point B. How many units is P from point C? $$\text{Circle at A:}$$ $$\begin{array}{|rcll|} \hline (x-x_A)^2 +(y-y_A)^2 &=& r_A^2 \quad & | \quad x_A = 0 \\ && \quad & | \quad y_A = 0 \\ && \quad & | \quad r_A = \sqrt{10} \\ x^2 +y^2 &=& 10 & (1) \\ \hline \end{array}$$ $$\text{Circle at B:}$$ $$\begin{array}{|rcll|} \hline (x-x_B)^2 +(y-y_B)^2 &=& r_B^2 \quad & | \quad x_B = 0 \\ && \quad & | \quad y_B = 3 \\ && \quad & | \quad r_B = \sqrt{13} \\ x^2 +(y-3)^2 &=& 13 & (2) \\ \hline \end{array}$$ $$\text{Point P at (x_P,y_P):}$$ $$\begin{array}{|lrcll|} \hline & x_P^2 +y_P^2 &=& 10 \qquad (3) \quad & | \quad x^2 +y^2 = 10\\ & x_P^2 +(y_P-3)^2 &=& 13 \qquad (4) \quad & | \quad x^2 +(y-3)^2 = 13 \\ \hline (3)-(4): & x_P^2 +y_P^2 - ( x_P^2 +(y_P-3)^2 ) &=& 10-13 \\ & x_P^2 +y_P^2 - x_P^2 - (y_P-3)^2 &=& -3 \\ & y_P^2 - (y_P-3)^2 &=& -3 \\ & y_P^2 - y_P^2 + 6y_P -9 &=& -3 \\ & 6y_P -9 &=& -3 \\ & 6y_P &=& 6 \\ & \mathbf{ y_P } & \mathbf{=}& \mathbf{1} \\\\ \hline & x_P^2 +y_P^2 &=& 10 \quad & | \quad y_P=1 \\ & x_P^2 + 1&=& 10 \\ & x_P^2 &=& 9 \\ & \mathbf{ x_P } & \mathbf{=}& \mathbf{3} \\ \hline \end{array}$$ $$\text{Distance P and C :}$$ $$\begin{array}{|rcll|} \hline P(3,1) \\ C(5, 0) \\ \text{Distance} &=& \sqrt{(3-5)^2+(1-0)^2} \\ &=& \sqrt{(-2)^2+1^2} \\ &=& \sqrt{4+1} \\ & \mathbf{=}& \mathbf{\sqrt{5} } \\ \hline \end{array}$$ heureka  Jul 12, 2018
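The coordinates found above are easy to confirm numerically; the following short check is an addition to this summary, not from the forum. It intersects the two circles and measures the distance to C.

from math import isclose, sqrt

# Solve x^2 + y^2 = 10 and x^2 + (y - 3)^2 = 13 by eliminating x^2:
# subtracting gives 6y - 9 = -3, so y = 1 and then x = 3 (the branch inside the triangle).
y = 1
x = sqrt(10 - y**2)
print((x, y))                                                      # (3.0, 1)

# Check both radius conditions and the distance to C(5, 0)
print(isclose(x**2 + y**2, 10), isclose(x**2 + (y - 3)**2, 13))    # True True
print(sqrt((x - 5)**2 + (y - 0)**2), sqrt(5))                      # both about 2.2360679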
2018-07-19T07:58:45
{ "domain": "0calc.com", "url": "https://web2.0calc.com/questions/plz-help_36", "openwebmath_score": 0.8772412538528442, "openwebmath_perplexity": 1515.1318909563665, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357232512719, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573161503363 }
https://bitdrivencircuits.com/Circuit_Analysis/Phasors_AC/meshHome.html
# Analyzing AC Circuits using Mesh Analysis Mesh analysis is based off of Kirchoff's Voltage Law (KVL). Since KVL is valid for AC circuits, we can therefore use mesh analysis to analyze these same circuits. The simplest way is to begin with an example problem. ### For the following circuit, determine the phasor current (Io) using mesh analysis: Notice that in the above circuit the bottom two meshes have a common current source. In such a case we proceed in a similar manner as with a DC circuit that has two meshes with a common current source and create a "super-mesh" by eliminating the current source but retaining the individual mesh currents. The following circuit shows our super mesh as well as the 4 individual mesh currents: ### Apply KVL inside the super-mesh: $$\mathbb{I}_3 + \mathbb{I}_4 - j4(\mathbb{I}4 - \mathbb{I}_1) + j2(\mathbb{I}_3 - \mathbb{I}_2) = 0$$ $$\mathbb{I}_3 + \mathbb{I}_4 - j4\mathbb{I}_4 + j4\mathbb{I}_1 + j2\mathbb{I}_3 - j2\mathbb{I}_2 = 0$$ $$j4\mathbb{I}_1 - j2\mathbb{I}_2 + \mathbb{I}_3(1+j2) + \mathbb{I}_4(1-j4) = 0 \qquad,(Eqn \; 1)$$ However, we can also create an equation out of meshes 3/4 and their common current source: $$\mathbb{I}_3 - \mathbb{I}_4 = 4$$ $$\mathbb{I}_3 = 4 + \mathbb{I}_4$$ Now go ahead and substitute this expression for phasor current II3 into equation #1: Equation #1 now becomes: $$j4\mathbb{I}_1 - j2\mathbb{I}_2 + (4 + \mathbb{I}_4)(1+j2) + \mathbb{I}_4(1-j4) = 0$$ $$j4\mathbb{I}_1 - j2\mathbb{I}_2 + 4 + j8 + \mathbb{I}_4 + j2\mathbb{I}_4 + \mathbb{I}_4(1-j4) = 0$$ $$j4\mathbb{I}_1 - j2\mathbb{I}_2 + \mathbb{I}_4(1-j4+1+j2) = -4-j8$$ $$j4\mathbb{I}_1 - j2\mathbb{I}_2 + \mathbb{I}_4(2-j2) = -4-j8$$ If we convert the above equation (containing complex numbers) from rectangular to polar form we get: $$4\mathbb{I}_1 \angle 90^{\circ} + 2\mathbb{I}_2 \angle (-90^{\circ}) + 2.828\mathbb{I}_4 \angle (-45^{\circ}) = 8.944 \angle 243.4^{\circ} \;\;\;\;,(Eqn \; 2)$$ However, notice that in mesh #2 we have: $$\mathbb{I}_2 = -2$$ ...and if we substitute this into equation #2 and bring the result to the right side of the equation, we get: $$4\mathbb{I}_1 \angle 90^{\circ} + 2(-2) \angle (-90^{\circ}) + 2.828\mathbb{I}_4 \angle (-45^{\circ})= 8.944 \angle 243.4^{\circ}$$ $$4\mathbb{I}_1 \angle 90^{\circ} + 2.828\mathbb{I}_4 \angle (-45^{\circ}) = 8.944 \angle 243.4^{\circ} + 4 \angle (-90^{\circ})$$ Working only the right side of the above equation (in rectangular form): $$"\qquad \qquad \qquad " = -4-j8-j4$$ $$"\qquad \qquad \qquad " = -4-j12$$ ...gives us the following (after converting back to polar form): $$4\mathbb{I}_1 \angle 90^{\circ} + 2.828\mathbb{I}_4 \angle (-45^{\circ}) = 12.65 \angle 251.6^{\circ} \qquad,(Eqn \; 3)$$ ### Apply KVL inside mesh #1: $$-j4(\mathbb{I}_1 - \mathbb{I}_4) - 10 \angle 90^{\circ} + 2(\mathbb{I}_1 - \mathbb{I}_2) = 0$$ Recall that: $$\mathbb{I}_2 = -2$$ and substitute it into the above equation to get: $$-j4\mathbb{I}_1 + j4\mathbb{I}_4 + 2\mathbb{I}_1 + 4 = 10 \angle 90^{\circ}$$ $$\mathbb{I}_1(2-j4) + j4\mathbb{I}_4 = 10 \angle 90^{\circ} - 4$$ $$\mathbb{I}_1(2-j4) + j4\mathbb{I}_4 = -4 + 10j$$ Converting to polar form gives us: $$4.472\mathbb{I}_1 \angle(-63.43^{\circ}) + 4\mathbb{I}_4 \angle 90^{\circ} = 10.77 \angle 111.8^{\circ} \qquad, (Eqn \; 4)$$ ### Solve system of equations for equations 3/4 While several techniques can be used to solve the equations, we will use Cramer's Rule. 
# Create an augmented matrix of coefficients from our system of equations: $$\begin{pmatrix} 4 \angle 90^{\circ}&2.828 \angle (-45^{\circ})&12.65 \angle 251.6^{\circ}\\ 4.472\angle (-63.43^{\circ})&4 \angle 90^{\circ}&10.77 \angle 111.8^{\circ}\\ \end{pmatrix}$$ # Find the determinant of the non-augmented matrix of coefficients: $$D = \begin{vmatrix} 4 \angle 90^{\circ}&2.828 \angle (-45^{\circ})\\ 4.472\angle (-63.43^{\circ})&4 \angle 90^{\circ}\\ \end{vmatrix}$$ $$= (4 \angle 90^{\circ})(4 \angle 90^{\circ}) - \Big( 4.472\angle (-63.43^{\circ}) \Big) \Big( 2.828 \angle (-45^{\circ}) \Big)$$ $$= 16 \angle 180^{\circ} - 12.65 \angle (-108.4^{\circ})$$ $$-16 + 3.993 + j12$$ $$= -12.01 + j12$$ $$D = 16.98 \angle 135^{\circ}$$ # Create a matrix that consists of the non-augmented matrix of coefficients with its 1st column replaced by the 3rd column of the augmented matrix of coefficients: ...and proceed to find the determinant: $$D_{\mathbb{I}_1} = \begin{vmatrix} 12.65 \angle 251.6^{\circ}&2.828 \angle (-45^{\circ})\\ 10.77 \angle 111.8^{\circ}&4 \angle 90^{\circ}\\ \end{vmatrix}$$ $$D_{\mathbb{I}_1} = (12.65 \angle 251.6^{\circ})(4 \angle 90^{\circ}) - (10.77 \angle 111.8^{\circ})(2.828 \angle (-45^{\circ}))$$ $$= 50.6 \angle 341.6^{\circ} - 30.46 \angle 66.8^{\circ}$$ $$= 48.01 - j15.97 - 12 - j28$$ $$36.01 - j43.97$$ $$D_{\mathbb{I}_1} = 56.83 \angle (-50.68^{\circ})$$ From the original two circuit schematics, note that: $$\mathbb{I}_1 = \mathbb{I}_o$$ ...and recall that Cramer's Rule states the following: $$\mathbb{I}_1 = \frac{D_{\mathbb{I}_1}}{D}$$ Therefore: $$\mathbb{I}_o = \frac{D_{\mathbb{I}_1}}{D}$$ $$= \frac{56.83 \angle (-50.68^{\circ})}{16.98 \angle 135^{\circ}}$$ $$= \frac{56.83}{16.98} \angle (-50.68^{\circ} - 135^{\circ})$$ $$\mathbb{I}_o = 3.347 \angle (-185.7^{\circ}) = 3.347 \angle 174.3^{\circ}$$
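Because phasor mesh equations are just simultaneous linear equations with complex coefficients, the 2×2 system formed by Equations 3 and 4 can also be handed to a numerical solver instead of Cramer's Rule. The snippet below is a cross-check added to this page summary, using numpy; the equations are entered in the rectangular form derived above.

import numpy as np

# Eqn 3 in rectangular form: j4*I1 + (2 - j2)*I4 = -4 - j12
# Eqn 4 in rectangular form: (2 - j4)*I1 + j4*I4 = -4 + j10
A = np.array([[4j, 2 - 2j],
              [2 - 4j, 4j]])
b = np.array([-4 - 12j, -4 + 10j])

I1, I4 = np.linalg.solve(A, b)
Io = I1                                   # Io is the mesh-1 current

print(Io)                                 # about -3.333 + 0.333j
print(abs(Io), np.degrees(np.angle(Io)))  # about 3.35 at 174.3 degrees; the hand result
                                          # 3.347 reflects rounded polar intermediates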
2022-01-23T16:06:09
{ "domain": "bitdrivencircuits.com", "url": "https://bitdrivencircuits.com/Circuit_Analysis/Phasors_AC/meshHome.html", "openwebmath_score": 0.7853605151176453, "openwebmath_perplexity": 473.63756047488755, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357227168957, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573157947569 }
https://derangedphysiology.com/main/cicm-primary-exam/required-reading/research-methods-and-statistics/Chapter%203.0.2/variability-dispersion-and-central-tendency
Quantitative data can be described by measures of central tendency, dispersion, and "shape". Central tendency is described by median, mode, and the means (there are different means- geometric and arithmetic). Dispersion is the degree to which data is distributed around this central tendency, and is represented by range, deviation, variance, standard deviation and standard error. This chapter answers parts from Section A(e) of the Primary Syllabus, "Describe frequency distributions and measures of central tendency and dispersion".  This topic was examined in Question 23 from the first paper of 2015. The pass rate was 8%. The only candidate who passed "gave an example of a simple data set (a set of numbers) and calculated the mean, median & mode and explained the effect of an outlier". Quantitative data can be described by measures of central tendency, dispersion, and "shape". This sort of data is: • Expressed numerically, and ordered on a scale • Interval data: increase at constant intervals, but do not start at zero, eg. temperature on the Celsius scale • Ratio data: interval data which has a true zero, eg. pressure • Binary data: yes or no answers • Discrete data: isolated data points separated by gaps • Continuous data: part of a continuous range of values ### Measure of central tendency • This is the average of a population - allowing the population to be represented by a single value. • Examples: • Median is the middle number in a data set that is ordered from least to greatest • Mode is the number that occurs most often in a data set • Arithmetic mean  is the average of a set of numerical values, • Geometric mean is the nth root of the product of n numbers ### Degree of dispersion • These describe the dispersion of data around some sort of mean. • Range: the highest and the lowest score • Interquartile range: the difference between 75th and 25th percentiles • Percentile: the percentage band into which the score falls (mean = the 50th percentile) • Deviation: distance between an observed score and the mean score. Because the difference can be positive or negative and this is cumbersome, usually the absolute deviation is used (which ignores the plus or minus sign). • Variance: deviation squared • Standard deviation: square root of variance • Measure of the average spread of individual samples from the mean • Reporting the SD along with the mean gives one the impression of how valid that mean value actually is (i.e. if the SD is huge, the mean is totally invalid - it is not an accurate measure of central tendency, because the data is so widely scattered.) • Standard error • This is an estimate of spread of samples around the population mean. • You don't known the population mean- you only know the sample mean and the standard deviation for your sample, but if the standard deviation is large, the sample mean may be rather far from the population mean. How far is it? The SE can estimate this. • Mean absolute deviation is the average of the absolute deviations from a central point for all data. As such, it is a summary of the net statistical variability in the data set. On average, it says, the data is this different from this central point. • Coefficient of variation, also known as "relative standard deviation", is the SD divided by the mean. As a dimensionless number, it allows comparisons between different data sets (i.e. ones using different units). 
Standard Error (SE) = SD / square root of n • The variability among sample means will be increased if there is (a) a wide variability of individual data and (b) small samples • SE is used to calculate the confidence interval. ### Shape of the data • This vaguely refers to the shape of the probability distribution bell curve. • Skewness is a measure of the assymetry of the probability distribution - the tendency of the bell curve to be assymmetrical. • Kurtosis or "peakedness" describes the width and height of the peak of the bell curve, i.e. the tendency for the scores to gather around the middle of the bell curve. • A normal distribution is a perfectly symmetrical bell curve, and is not skewed. ### Point estimate • According to the college, point estimate is "a single value estimate of a population parameter. It represents a descriptive statistic for a summary measure, or a measure of central tendency from a given population sample." For example, in a population there exists a "true" average height; the point estimate of this average height is the average height of a sample group taken from that population. ### Confidence interval •  The range of values within which the "actual" result is found. • A CI of 95% means that if a trial was repeated an infinite number of times, 95% of the results would fall within this range of values. • The CI gives an indication of the precision of the sample mean as an estimate of the "true" population mean • A wide CI can be caused by small samples or by a large variance within a sample. ### References Lecture on types of data; by Keith G. Calkins Richards, Derek. "Types of data." Evidence-based dentistry 8.2 (2007): 57-58.
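Since the cited exam question rewarded candidates who worked through a tiny data set, here is an illustrative sketch added to this summary. It computes the main measures for a small sample and shows how a single outlier drags the mean but barely moves the median; Python's standard statistics module is assumed, and the 1.96 multiplier gives only an approximate 95% CI.

import statistics as st
from math import sqrt

data = [2, 3, 3, 4, 5, 6, 7]
with_outlier = data + [60]

for label, xs in (("original", data), ("with outlier", with_outlier)):
    n = len(xs)
    mean = st.mean(xs)
    sd = st.stdev(xs)                            # sample standard deviation
    se = sd / sqrt(n)                            # standard error of the mean
    ci = (mean - 1.96 * se, mean + 1.96 * se)    # approximate 95% confidence interval
    print(label, "mean", round(mean, 2), "median", st.median(xs),
          "mode", st.mode(xs), "SD", round(sd, 2), "SE", round(se, 2),
          "95% CI", tuple(round(v, 2) for v in ci))

# The outlier shifts the mean from about 4.3 to 11.25 and inflates SD, SE and the CI,
# while the median only moves from 4 to 4.5 and the mode stays 3.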
2020-02-29T07:38:45
{ "domain": "derangedphysiology.com", "url": "https://derangedphysiology.com/main/cicm-primary-exam/required-reading/research-methods-and-statistics/Chapter%203.0.2/variability-dispersion-and-central-tendency", "openwebmath_score": 0.8395689725875854, "openwebmath_perplexity": 851.1183965316269, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357227168956, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573157947568 }
https://math.stackexchange.com/questions/1516223/how-many-ways-can-pq-people-sit-around-2-circular-tables-of-sizes-p-q
# How many ways can $p+q$ people sit around $2$ circular tables - of sizes $p,q$? How many ways can $p+q$ people sit around $2$ circular tables - first table of size $p$ and the second of size $q$? My attempt was: • First choose one guy for the first table - $p+q\choose1$. • Then choose the rest $p-1$ people - $(p+q-1)! \over p!$.
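The question was left unanswered in the thread; a standard way to finish (stated here as an addition, not taken from the post) is to pick the p people for the first table and then seat each table up to rotation, giving $\binom{p+q}{p}(p-1)!(q-1)!$. The brute-force check below confirms that closed form for small p and q by canonicalising rotations.

from itertools import permutations
from math import comb, factorial

def canonical(seats):
    # Represent a circular seating by its lexicographically smallest rotation
    n = len(seats)
    return min(tuple(seats[i:] + seats[:i]) for i in range(n))

def brute_force(p, q):
    people = range(p + q)
    seen = set()
    for perm in permutations(people):
        table1 = canonical(list(perm[:p]))   # first table, size p
        table2 = canonical(list(perm[p:]))   # second table, size q
        seen.add((table1, table2))
    return len(seen)

def closed_form(p, q):
    return comb(p + q, p) * factorial(p - 1) * factorial(q - 1)

for p, q in [(2, 2), (3, 2), (3, 3), (4, 2)]:
    print((p, q), brute_force(p, q), closed_form(p, q))
# Each pair agrees, e.g. (3, 2) gives 20 both ways.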
2019-06-27T12:25:25
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1516223/how-many-ways-can-pq-people-sit-around-2-circular-tables-of-sizes-p-q", "openwebmath_score": 0.8664847612380981, "openwebmath_perplexity": 268.15995434054713, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357227168956, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573157947568 }
https://testbook.com/question-answer/a-slender-uniform-rigid-bar-of-mass-m-is-hinged-at--5de273dff60d5d5a988e7bec
# A slender uniform rigid bar of mass m is hinged at O and supported by two springs, with stiffnesses 3k and k, and a damper with damping coefficient c, as shown in the figure. For the system to be critically damped, the ratio $$\frac{c}{\sqrt {km} }$$ should be This question was previously asked in PY 4: GATE ME 2019 Official Paper: Shift 2 View all GATE ME Papers > 1. 2 2. 4 3. 2√7 4. 4√7 Option 4 : 4√7 Free CT 1: Ratio and Proportion 1963 10 Questions 16 Marks 30 Mins ## Detailed Solution For small angular rotation θ of the rod, compression in the spring (3k) is $${\delta _1} = \left( {\frac{L}{4}} \right)\theta \Rightarrow {F_1} = {F_{3k}} = {k_1}{\delta _1} = 3k\frac{L}{4}\theta = \frac{3}{4}kL\theta$$ Expansion of damper: $${\delta _2} = \left( {\frac{L}{4}} \right)\theta$$ $$\Rightarrow {\dot \delta _2} = \frac{L}{4}\dot \theta$$ $$\Rightarrow {F_2} = {F_c} = C\;{\dot \delta _2} = \frac{{CL}}{4}\dot \theta$$ Expansion of spring with stiffness k is $${\delta _3} = \left( {\frac{L}{2} + \frac{L}{4}} \right)\theta = \frac{{3L}}{4}\theta$$ $$\Rightarrow {F_3} = {F_k} = \frac{{3L}}{4}\theta k$$ Now taking moment of all forces about O and inertia forces to be zero, we get $$I\overset{\ddot{\ }}{\mathop{\theta }}\,+\frac{3}{4}kL\theta \left( \frac{L}{4} \right)+\frac{CL}{4}\dot{\theta }\frac{L}{4}+\frac{3L}{4}\theta k\left( \frac{3L}{4} \right)=0$$ $$I = {I_{about\;centre}} + m{\left( {\frac{L}{4}} \right)^2} = \frac{{m{L^2}}}{{12}} + \frac{{m{L^2}}}{{16}} = \frac{{7m{L^2}}}{{48}}$$ $$\frac{7}{48}m{{L}^{2}}\overset{\ddot{\ }}{\mathop{\theta }}\,+\frac{3}{16}k{{L}^{2}}\theta +\frac{9}{16}k{{L}^{2}}\theta +\frac{C{{L}^{2}}}{16}\dot{\theta }=0$$ $$\Rightarrow \frac{7}{3}m{{L}^{2}}\overset{\ddot{\ }}{\mathop{\theta }}\,+12k{{L}^{2}}\theta +C{{L}^{2}}\dot{\theta }=0$$ Comparing with: meqθ̈ + ceqθ̇ +keqθ = 0 $$\Rightarrow \xi =\frac{{{C}_{eq}}}{2\sqrt{{{k}_{eq}}{{m}_{eq}}}}=\frac{C{{L}^{2}}}{2\sqrt{12k{{L}^{2}}\times \frac{7}{3}m{{L}^{2}}}}=\frac{C{{L}^{2}}}{2\sqrt{12k{{L}^{2}}\times \frac{7}{3}m{{L}^{2}}}}=\frac{1}{2\left( 2\sqrt{7} \right)}\frac{C}{\sqrt{mk}}$$ Also for critical damping ξ = 1 $$\Rightarrow \frac{C}{\sqrt{mk}}\times \frac{1}{4\sqrt{7}}=1$$ $$\Rightarrow \frac{C}{\sqrt{mk}}=4\sqrt{7}$$
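The algebra above can be reproduced symbolically. The following sketch is an addition (assuming sympy is available): it builds the equivalent inertia, damping and stiffness from the moment equation about O and solves the critical-damping condition ξ = 1 for c/√(km).

import sympy as sp

m, k, c, L = sp.symbols('m k c L', positive=True)

# Coefficients of I*theta'' + c_eq*theta' + k_eq*theta = 0 about the hinge O
m_eq = sp.Rational(7, 48) * m * L**2                    # I_O = mL^2/12 + m(L/4)^2
c_eq = c * L**2 / 16                                     # damper acting at L/4 from O
k_eq = sp.Rational(3, 16) * k * L**2 + sp.Rational(9, 16) * k * L**2   # 3k at L/4, k at 3L/4

xi = c_eq / (2 * sp.sqrt(k_eq * m_eq))                   # damping ratio
ratio = sp.solve(sp.Eq(xi, 1), c)[0] / sp.sqrt(k * m)    # c/sqrt(km) at critical damping
print(sp.simplify(ratio), float(ratio))                  # 4*sqrt(7), about 10.583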
2021-09-27T04:45:56
{ "domain": "testbook.com", "url": "https://testbook.com/question-answer/a-slender-uniform-rigid-bar-of-mass-m-is-hinged-at--5de273dff60d5d5a988e7bec", "openwebmath_score": 0.6959406137466431, "openwebmath_perplexity": 4494.465552897949, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357227168956, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573157947568 }
https://planetmath.org/TranslationAutomorphismOfAPolynomialRing
# translation automorphism of a polynomial ring Let $R$ be a commutative ring, let $R[X]$ be the polynomial ring over $R$, and let $a$ be an element of $R$. Then we can define a homomorphism $\tau_{a}$ of $R[X]$ by constructing the evaluation homomorphism from $R[X]$ to $R[X]$ taking $r\in R$ to itself and taking $X$ to $X+a$. To see that $\tau_{a}$ is an automorphism, observe that $\tau_{-a}\circ\tau_{a}$ is the identity on $R\subset R[X]$ and takes $X$ to $X$, so by the uniqueness of the evaluation homomorphism, $\tau_{-a}\circ\tau_{a}$ is the identity. Title translation automorphism of a polynomial ring TranslationAutomorphismOfAPolynomialRing 2013-03-22 14:16:13 2013-03-22 14:16:13 archibal (4430) archibal (4430) 4 archibal (4430) Example msc 12E05 msc 11C08 msc 13P05 IsomorphismSwappingZeroAndUnity
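A concrete illustration, added here since the entry is purely abstract: the substitution X ↦ X + a followed by X ↦ X − a returns every polynomial unchanged, which is exactly the composition τ₋ₐ ∘ τₐ being the identity. The check below uses sympy, treating a as a formal shift.

import sympy as sp

X, a = sp.symbols('X a')

def tau(poly, shift):
    # Evaluation homomorphism fixing the coefficients and sending X to X + shift
    return sp.expand(poly.subs(X, X + shift))

f = 3*X**3 - 2*X + 7               # an arbitrary test polynomial
g = tau(f, a)                      # tau_a(f); its coefficients now involve a
print(g)
print(sp.expand(tau(g, -a) - f))   # 0: tau_{-a} composed with tau_a is the identity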
2019-09-22T20:13:52
{ "domain": "planetmath.org", "url": "https://planetmath.org/TranslationAutomorphismOfAPolynomialRing", "openwebmath_score": 0.9869512915611267, "openwebmath_perplexity": 234.44495603827963, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357227168957, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573157947568 }
https://en.m.wikiversity.org/wiki/Nonlinear_finite_elements/Kinematics_-_spectral_decomposition
# Nonlinear finite elements/Kinematics - spectral decomposition ## Spectral decompositions Many numerical algorithms use spectral decompositions to compute material behavior. ### Spectral decompositions of stretch tensors Infinitesimal line segments in the material and spatial configurations are related by ${\displaystyle d\mathbf {x} ={\boldsymbol {F}}\cdot d{\boldsymbol {X}}={\boldsymbol {R}}\cdot ({\boldsymbol {U}}\cdot d{\boldsymbol {X}})={\boldsymbol {V}}\cdot ({\boldsymbol {R}}\cdot d{\boldsymbol {X}})~.}$ So the sequence of operations may be either considered as a stretch of in the material configuration followed by a rotation or a rotation followed by a stretch. Also note that ${\displaystyle {\boldsymbol {V}}={\boldsymbol {R}}\cdot {\boldsymbol {U}}\cdot {\boldsymbol {R}}^{T}~.}$ Let the spectral decomposition of ${\displaystyle {\boldsymbol {U}}}$  be ${\displaystyle {\boldsymbol {U}}=\sum _{i=1}^{3}\lambda _{i}~{\boldsymbol {N}}_{i}\otimes {\boldsymbol {N}}_{i}}$ and the spectral decomposition of ${\displaystyle {\boldsymbol {V}}}$  be ${\displaystyle {\boldsymbol {V}}=\sum _{i=1}^{3}{\hat {\lambda }}_{i}~\mathbf {n} _{i}\otimes \mathbf {n} _{i}~.}$ Then ${\displaystyle {\boldsymbol {V}}={\boldsymbol {R}}\cdot {\boldsymbol {U}}\cdot {\boldsymbol {R}}^{T}=\sum _{i=1}^{3}\lambda _{i}~{\boldsymbol {R}}\cdot ({\boldsymbol {N}}_{i}\otimes {\boldsymbol {N}}_{i})\cdot {\boldsymbol {R}}^{T}=\sum _{i=1}^{3}\lambda _{i}~({\boldsymbol {R}}\cdot {\boldsymbol {N}}_{i})\otimes ({\boldsymbol {R}}\cdot {\boldsymbol {N}}_{i})}$ Therefore the uniqueness of the spectral decomposition implies that ${\displaystyle \lambda _{i}={\hat {\lambda }}_{i}\quad {\text{and}}\quad \mathbf {n} _{i}={\boldsymbol {R}}\cdot {\boldsymbol {N}}_{i}}$ The left stretch (${\displaystyle {\boldsymbol {V}}}$ ) is also called the spatial stretch tensor while the right stretch (${\displaystyle {\boldsymbol {U}}}$ ) is called the material stretch tensor. ### Spectral decompositions of deformation gradient The deformation gradient is given by ${\displaystyle {\boldsymbol {F}}={\boldsymbol {R}}\cdot {\boldsymbol {U}}}$ In terms of the spectral decomposition of ${\displaystyle {\boldsymbol {U}}}$  we have ${\displaystyle {\boldsymbol {F}}=\sum _{i=1}^{3}\lambda _{i}~{\boldsymbol {R}}\cdot ({\boldsymbol {N}}_{i}\otimes {\boldsymbol {N}}_{i})=\sum _{i=1}^{3}\lambda _{i}~({\boldsymbol {R}}\cdot {\boldsymbol {N}}_{i})\otimes {\boldsymbol {N}}_{i}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {n} _{i}\otimes {\boldsymbol {N}}_{i}}$ Therefore the spectral decomposition of ${\displaystyle {\boldsymbol {F}}}$  can be written as ${\displaystyle {\boldsymbol {F}}=\sum _{i=1}^{3}\lambda _{i}~\mathbf {n} _{i}\otimes {\boldsymbol {N}}_{i}}$ Let us now see what effect the deformation gradient has when it is applied to the eigenvector ${\displaystyle {\boldsymbol {N}}_{i}}$ . 
We have ${\displaystyle {\boldsymbol {F}}\cdot {\boldsymbol {N}}_{i}={\boldsymbol {R}}\cdot {\boldsymbol {U}}\cdot {\boldsymbol {N}}_{i}={\boldsymbol {R}}\cdot \left(\sum _{j=1}^{3}\lambda _{j}~{\boldsymbol {N}}_{j}\otimes {\boldsymbol {N}}_{j}\right)\cdot {\boldsymbol {N}}_{i}}$ From the definition of the dyadic product ${\displaystyle (\mathbf {u} \otimes \mathbf {v} )\cdot \mathbf {w} =(\mathbf {w} \cdot \mathbf {v} )~\mathbf {u} }$ Since the eigenvectors are orthonormal, we have ${\displaystyle ({\boldsymbol {N}}_{j}\otimes {\boldsymbol {N}}_{j})\cdot {\boldsymbol {N}}_{i}={\begin{cases}0&{\mbox{if}}~i\neq j\\{\boldsymbol {N}}_{i}&{\mbox{if}}~i=j\end{cases}}}$ Therefore, ${\displaystyle \left(\sum _{j=1}^{3}\lambda _{j}~{\boldsymbol {N}}_{j}\otimes {\boldsymbol {N}}_{j}\right)\cdot {\boldsymbol {N}}_{i}=\lambda _{i}~{\boldsymbol {N}}_{i}{\text{no sum on}}~i}$ ${\displaystyle {\boldsymbol {F}}\cdot {\boldsymbol {N}}_{i}=\lambda _{i}~({\boldsymbol {R}}\cdot {\boldsymbol {N}}_{i})=\lambda _{i}~\mathbf {n} _{i}}$ So the effect of ${\displaystyle {\boldsymbol {F}}}$  on ${\displaystyle {\boldsymbol {N}}_{i}}$  is to stretch the vector by ${\displaystyle \lambda _{i}}$  and to rotate it to the new orientation ${\displaystyle \mathbf {n} _{i}}$ . We can also show that ${\displaystyle {\boldsymbol {F}}^{-T}\cdot {\boldsymbol {N}}_{i}={\cfrac {1}{\lambda _{i}}}~\mathbf {n} _{i}~;~~{\boldsymbol {F}}^{T}\cdot \mathbf {n} _{i}=\lambda _{i}~{\boldsymbol {N}}_{i}~;~~{\boldsymbol {F}}^{-1}\cdot \mathbf {n} _{i}={\cfrac {1}{\lambda _{i}}}~{\boldsymbol {N}}_{i}}$ ### Spectral decompositions of strains Recall that the Lagrangian Green strain and its Eulerian counterpart are defined as ${\displaystyle {\boldsymbol {E}}={\frac {1}{2}}~({\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}}-{\boldsymbol {\mathit {1}}})~;~~{\boldsymbol {e}}={\frac {1}{2}}~({\boldsymbol {\mathit {1}}}-\left({\boldsymbol {F}}\cdot {\boldsymbol {F}}^{T}\right)^{-1})}$ Now, ${\displaystyle {\boldsymbol {F}}^{T}\cdot {\boldsymbol {F}}={\boldsymbol {U}}\cdot {\boldsymbol {R}}^{T}\cdot {\boldsymbol {R}}\cdot {\boldsymbol {U}}={\boldsymbol {U}}^{2}~;~~{\boldsymbol {F}}\cdot {\boldsymbol {F}}^{T}={\boldsymbol {V}}\cdot {\boldsymbol {R}}\cdot {\boldsymbol {R}}^{T}\cdot {\boldsymbol {V}}={\boldsymbol {V}}^{2}}$ Therefore we can write ${\displaystyle {\boldsymbol {E}}={\frac {1}{2}}~({\boldsymbol {U}}^{2}-{\boldsymbol {\mathit {1}}})~;~~{\boldsymbol {e}}={\frac {1}{2}}~({\boldsymbol {\mathit {1}}}-{\boldsymbol {V}}^{-2})}$ Hence the spectral decompositions of these strain tensors are ${\displaystyle {\boldsymbol {E}}=\sum _{i=1}^{3}{\frac {1}{2}}(\lambda _{i}^{2}-1)~{\boldsymbol {N}}_{i}\otimes {\boldsymbol {N}}_{i}~;~~\mathbf {e} =\sum _{i=1}^{3}{\frac {1}{2}}\left(1-{\cfrac {1}{\lambda _{i}^{2}}}\right)~\mathbf {n} _{i}\otimes \mathbf {n} _{i}}$ #### Generalized strain measures We can generalize these strain measures by defining strains as ${\displaystyle {\boldsymbol {E}}^{(n)}={\cfrac {1}{n}}~({\boldsymbol {U}}^{n}-{\boldsymbol {\mathit {1}}})~;~~{\boldsymbol {e}}^{(n)}={\cfrac {1}{n}}~({\boldsymbol {\mathit {1}}}-{\boldsymbol {V}}^{-n})}$ The spectral decomposition is ${\displaystyle {\boldsymbol {E}}^{(n)}=\sum _{i=1}^{3}{\cfrac {1}{n}}(\lambda _{i}^{n}-1)~{\boldsymbol {N}}_{i}\otimes {\boldsymbol {N}}_{i}~;~~\mathbf {e} ^{(n)}=\sum _{i=1}^{3}{\cfrac {1}{n}}\left(1-{\cfrac {1}{\lambda _{i}^{n}}}\right)~\mathbf {n} _{i}\otimes \mathbf {n} _{i}}$ Clearly, the usual Green strains are obtained when ${\displaystyle n=2}$ . 
#### Logarithmic strain measure A strain measure that is commonly used is the logarithmic strain measure. This strain measure is obtained when we have ${\displaystyle n\rightarrow 0}$ . Thus ${\displaystyle {\boldsymbol {E}}^{(0)}=\ln({\boldsymbol {U}})~;~~{\boldsymbol {e}}^{(0)}=\ln({\boldsymbol {V}})}$ The spectral decomposition is ${\displaystyle {\boldsymbol {E}}^{(0)}=\sum _{i=1}^{3}\ln \lambda _{i}~{\boldsymbol {N}}_{i}\otimes {\boldsymbol {N}}_{i}~;~~\mathbf {e} ^{(0)}=\sum _{i=1}^{3}\ln \lambda _{i}~\mathbf {n} _{i}\otimes \mathbf {n} _{i}}$
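These relations are easy to check numerically. Below is a small NumPy sketch (not part of the original text); the deformation gradient F is an arbitrary matrix with positive determinant, chosen only for illustration.

```python
import numpy as np

# Arbitrary deformation gradient with det(F) > 0 (illustrative values only)
F = np.array([[1.2, 0.1, 0.0],
              [0.0, 0.9, 0.2],
              [0.1, 0.0, 1.1]])

# Polar decomposition F = R U via the SVD  F = W diag(s) Vt:
#   R = W Vt (rotation),  U = Vt.T diag(s) Vt (right stretch),  V = W diag(s) W.T (left stretch)
W, s, Vt = np.linalg.svd(F)
R = W @ Vt
U = Vt.T @ np.diag(s) @ Vt
V = W @ np.diag(s) @ W.T

# Spectral decomposition of U: principal stretches lambda_i and material directions N_i
lam, N = np.linalg.eigh(U)

# The spatial directions are n_i = R N_i, and V = sum_i lambda_i n_i (x) n_i
n = R @ N
assert np.allclose(V, n @ np.diag(lam) @ n.T)

# Green and logarithmic strain tensors built from the same eigenvalues and eigenvectors
E_green = N @ np.diag(0.5 * (lam**2 - 1.0)) @ N.T
E_log = N @ np.diag(np.log(lam)) @ N.T
assert np.allclose(E_green, 0.5 * (U @ U - np.eye(3)))
print(np.round(lam, 4))
```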
2021-07-24T21:17:39
{ "domain": "wikiversity.org", "url": "https://en.m.wikiversity.org/wiki/Nonlinear_finite_elements/Kinematics_-_spectral_decomposition", "openwebmath_score": 0.8231748938560486, "openwebmath_perplexity": 1228.35065321428, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357221825194, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573154391772 }
https://math.stackexchange.com/questions/3109175/prove-that-if-aa-leq-ka-then-2a-2a-is-a-k16-approximate-group
# Prove that if $|A+A| \leq K|A|$ then $2A - 2A$ is a $K^{16}$-approximate group. Let $$A$$ be a finite subset of an abelian group, $$G$$ (call the operation addition). We say $$A$$ is a $$K$$-approximate group if: 1) $$e_G \in A$$ 2) $$A^{-1} = \{ a^{-1} \mid a \in A \} = A$$ 3) $$\exists X \subset G, \; |X| \leq K$$ such that: $$2A \subset X+A$$ Where: $$A+A = \{a+b \mid a,b \in A\}$$ I am asked to show that if $$|A+A| \leq K|A|$$, then $$2A - 2A$$ is a $$K^{16}$$-approximate group. To this end, I am not entirely sure where to start. The first two properties fall out reasonably easily. I believe it is well known that $$|2A - 2A| \leq K^4|A|$$ I am aware of a result that allows me to find an $$X \subset G, \; |X| \leq K^4$$ such that $$nA - A \subset (n-1)X + A - A$$, which I believe implies: $$2A \subset X+A$$ What can I do now though? I don't see how I can get the required subset of $$G$$, and the corresponding bound on the size? • What is $K{}{}{}?$ – Thomas Andrews Feb 11 at 19:39 • @ThomasAndrews $K$ is a constant such that $|A+A| \leq K|A|$ – user366818 Feb 11 at 19:43 In $$nA-A\subset(n-1)X+A-A$$, take $$n=2$$ to get $$2A-A\subset X+A-A.$$ It follows that $$2A-2A \subset X+A-2A = X - (2A-A) \subset X-X+A-A.$$ This yields $$4A-4A \subset 2(X-X) + (2A-2A).$$ In view of $$|2(X-X)|\le|X|^4\le K^{16}$$, this shows that $$2A-2A$$ is a $$K^{16}$$-approximate group
2019-05-22T14:59:06
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3109175/prove-that-if-aa-leq-ka-then-2a-2a-is-a-k16-approximate-group", "openwebmath_score": 0.8814165592193604, "openwebmath_perplexity": 115.66847476652484, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357221825194, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573154391772 }
http://mathhelpforum.com/calculus/166700-please-remind-how-do-simple-differentiation-question-print.html
please remind how to do - simple differentiation question • Dec 21st 2010, 05:23 AM manalive04 please remind how to do - simple differentiation question differentiate a/(1+a) I just can't remember how to do it • Dec 21st 2010, 05:27 AM Prove It $\displaystyle \frac{d}{da}\left(\frac{a}{1+a}\right) = \frac{(1+a)\frac{d}{da}(a) - a\,\frac{d}{da}(1+a)}{(1+a)^2}$ $\displaystyle = \frac{(1+a)1 - a(1)}{(1+a)^2}$ $\displaystyle = \frac{1+ a - a}{(1 + a)^2}$ $\displaystyle = \frac{1}{(1+a)^2}$. • Dec 21st 2010, 06:24 AM $f(a) = 1 - \frac{1}{a+1}$ $\frac{d(f(a))}{da} = 0 + \frac{1}{(a+1)^2}$ • Dec 22nd 2010, 03:21 AM HallsofIvy Yet a third way! $\frac{a}{a+ 1}= a(a+ 1)^{-1}$. Use the "product rule", rather than the "quotient rule", together with the chain rule: $\left(a(a+1)^{-1}\right)'= (a)'(a+ 1)^{-1}+ a((a+1)^{-1})'$ $= (1)(a+ 1)^{-1}+ (a)(-1(a+1)^{-2}(1))$ You could stop there or you could write it as $= \frac{1}{a+ 1}- \frac{a}{(a+1)^2}= \frac{a+ 1}{(a+1)^2}- \frac{a}{(a+1)^2}= \frac{1}{(a+1)^2}$
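For anyone who wants to double-check the algebra, a short SymPy snippet (my addition, not from the thread) confirms that all three approaches give the same derivative.

```python
import sympy as sp

a = sp.symbols('a')
f = a / (1 + a)

derivative = sp.simplify(sp.diff(f, a))
print(derivative)                                  # (a + 1)**(-2)
print(sp.simplify(derivative - 1 / (1 + a)**2))    # 0
```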
2016-10-23T20:54:45
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/calculus/166700-please-remind-how-do-simple-differentiation-question-print.html", "openwebmath_score": 0.8846878409385681, "openwebmath_perplexity": 5271.744404020829, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357221825193, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573154391771 }
https://socratic.org/questions/how-do-you-write-the-standard-from-of-the-equation-of-the-circle-given-center-2-
How do you write the standard form of the equation of the circle given center:(2,-6) radius: 2? Jun 3, 2016 $\textcolor{g r e e n}{{\left(x - 2\right)}^{2} + {\left(y + 6\right)}^{2} = {2}^{2}}$ See explanation as to why it is in this form. Explanation: The standardised formula for a circle with its centre at the origin is: ${x}^{2} + {y}^{2} = {r}^{2}$ This is derived from Pythagoras' rule about a 'right triangle'. You know the one; ${a}^{2} + {b}^{2} = {c}^{2}$ However, in this case the centre is not at the origin. It is at $\left(x , y\right) \to \left(2 , - 6\right)$ Consequently we mathematically move all the points as if the centre of the circle were at the origin, so that $\left(x , y\right) \to \left(0 , 0\right)$ '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $\textcolor{b r o w n}{\text{Moving the } x ' s}$ ${x}_{\text{moved}}={x}_{\text{given}} - 2$ so for each generic point we have $x - 2$ $\textcolor{b r o w n}{\text{Moving the } y ' s}$ ${y}_{\text{moved}}={y}_{\text{given}} + 6$ so for each generic point we have $y + 6$ '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ $\textcolor{g r e e n}{\implies {\left({x}_{\text{generic}}\right)}^{2}+{\left({y}_{\text{generic}}\right)}^{2} = {r}^{2}}$ Becomes: $\textcolor{g r e e n}{{\left(x - 2\right)}^{2} + {\left(y + 6\right)}^{2} = {2}^{2}}$
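As a quick numerical sanity check (not part of the original answer), every point at distance 2 from the centre (2, -6) should satisfy the boxed equation:

```python
import numpy as np

cx, cy, r = 2.0, -6.0, 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
x = cx + r * np.cos(theta)          # points on the circle
y = cy + r * np.sin(theta)
print(np.allclose((x - 2)**2 + (y + 6)**2, r**2))   # True
```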
2019-11-12T21:43:05
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-write-the-standard-from-of-the-equation-of-the-circle-given-center-2-", "openwebmath_score": 0.6794260144233704, "openwebmath_perplexity": 478.39581931299807, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357219153311, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573152613874 }
https://scikit-learn.org/stable/modules/unsupervised_reduction.html
4.5. Unsupervised dimensionality reduction If your number of features is high, it may be useful to reduce it with an unsupervised step prior to supervised steps. Many of the Unsupervised learning methods implement a transform method that can be used to reduce the dimensionality. Below we discuss three specific examples of this pattern that are heavily used. Pipelining The unsupervised data reduction and the supervised estimator can be chained in one step. See Pipeline: chaining estimators. 4.5.1. PCA: principal component analysis decomposition.PCA looks for a combination of features that captures the variance of the original features well. See Decomposing signals in components (matrix factorization problems). 4.5.2. Random projections The random_projection module provides several tools for data reduction by random projections. See the relevant section of the documentation: Random Projection. 4.5.3. Feature agglomeration cluster.FeatureAgglomeration applies Hierarchical clustering to group together features that behave similarly. Feature scaling Note that if features have very different scaling or statistical properties, cluster.FeatureAgglomeration may not be able to capture the links between related features. Using a preprocessing.StandardScaler can be useful in these settings.
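A minimal end-to-end sketch of the pipelining pattern described above; the dataset and hyperparameters are illustrative choices, not recommendations from this documentation.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)              # 64 features per sample

pipe = Pipeline([
    ("reduce_dim", PCA(n_components=16)),        # unsupervised reduction step
    ("clf", LogisticRegression(max_iter=1000)),  # supervised estimator
])

print(cross_val_score(pipe, X, y, cv=5).mean())
```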
2018-12-15T21:38:15
{ "domain": "scikit-learn.org", "url": "https://scikit-learn.org/stable/modules/unsupervised_reduction.html", "openwebmath_score": 0.3867487609386444, "openwebmath_perplexity": 1852.8500861244486, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.981735721648143, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573150835975 }
http://mathhelpforum.com/algebra/30221-can-t-solve-simple-conversion-help.html
# Math Help - Can't solve simple conversion? help! 1. ## Can't solve simple conversion? help! how do you convert 2 centimetres per second into kilometres per day? according to the answer key..the answer is 1.728 2. Originally Posted by wickedcharchar how do you convert 2 centimetres per second into kilometres per day? according to the answer key..the answer is 1.728 know your conversion factors 1 km = 1000 m = 100,000 cm and 1 day = 24 hrs = 1440 mins = 86,400 sec thus, $\frac {2~cm}{1~sec} \times \frac {1~km}{100,000~cm} \times \frac {86,400~sec}{1~day} = \frac {1.728~km}{1~day} = 1.728 \mbox{ km/day}$ do you need help setting up the above expression?
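The same chain of conversion factors can be checked in a couple of lines of Python (my addition):

```python
cm_per_s = 2.0
seconds_per_day = 24 * 60 * 60      # 86,400 s
cm_per_km = 100_000

print(cm_per_s * seconds_per_day / cm_per_km)   # 1.728 km/day
```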
2014-04-19T13:09:35
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/30221-can-t-solve-simple-conversion-help.html", "openwebmath_score": 0.34165099263191223, "openwebmath_perplexity": 5298.851759494811, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.981735721648143, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573150835975 }
https://en.lntwww.de/Aufgaben:Exercise_5.1:_Gaussian_ACF_and_Gaussian_Low-Pass
# Exercise 5.1: Gaussian ACF and Gaussian Low-Pass Gaussian ACF at the filter input and output At the input of a low-pass filter with frequency response  $H(f)$,  there is a Gaussian distributed mean-free noise signal  $x(t)$  with the following auto-correlation function  $\rm (ACF)$: $${\it \varphi}_{x}(\tau) = \sigma_x^2 \cdot {\rm e}^{- \pi (\tau /{\rm \nabla} \tau_x)^2}.$$ This ACF is shown in the accompanying diagram above. Let the filter be Gaussian with the DC gain  $H_0$  and the equivalent bandwidth  $\Delta f$.  Thus,  for the frequency response,  it can be written: $$H(f) = H_{\rm 0} \cdot{\rm e}^{- \pi (f/ {\rm \Delta} f)^2}.$$ In the course of this task,  the two filter parameters  $H_0$  and  $\Delta f$  are to be dimensioned so that the output signal  $y(t)$  has an ACF corresponding to the diagram below. Notes: • Consider the following Fourier correspondence: $${\rm e}^{- \pi (f/{\rm \Delta} f)^2} \hspace{0.15cm} \bullet\!\!-\!\!\!-\!\!\!\hspace{0.03cm}\circ \hspace{0.15cm}{\rm \Delta} f \cdot {\rm e}^{- \pi ({\rm \Delta} f \hspace{0.03cm} \cdot \hspace{0.03cm} t)^2}.$$ ### Questions 1 What is the standard deviation of the filter input signal? $\sigma_x \ = \$ $\ \rm V$ 2 From the sketched ACF,  also determine the equivalent ACF duration  $\nabla\tau_x$  of the input signal.  How can this be determined in general? $\nabla\tau_x \ = \$ $\ \rm µ s$ 3 What is the power-spectral density  ${\it Φ}_x(f)$  of the input signal?  What is the PSD value at  $f= 0$? ${\it Φ}_x(f=0) \ = \$ $\ \cdot 10^{-9}\ \rm V^2/Hz$ 4 Calculate the PSD  ${\it Φ}_y(f)$  at the filter output in general as a function of  $\sigma_x$,  $\nabla \tau_x$,  $H_0$  and  $\Delta f$.  Which statements are true? The PSD  ${\it Φ}_y(f)$  is also Gaussian. The smaller  $\Delta f$  is,  the wider  ${\it Φ}_y(f)$. $H_0$  only affects the height,  but not the width of  ${\it Φ}_y(f)$. 5 How large must the equivalent filter bandwidth  $\Delta f$  be chosen so that  $\nabla \tau_y = 3 \ \rm µ s$  holds for the equivalent ACF duration? $\Delta f \ = \$ $\ \rm MHz$ 6 How large must one select the DC signal transfer factor  $H_0$  so that the condition  $\sigma_y = \sigma_x$  is fulfilled? $H_0 \ = \$ ### Solution #### Solution (1)  The variance is equal to the ACF value at  $\tau = 0$,  so  $\sigma_x^2 = 0.04 \ \rm V^2$. • From this follows  $\sigma_x\hspace{0.15cm}\underline {= 0.2 \ \rm V}$. (2)  The equivalent ACF duration can be determined via the rectangle of equal area. • According to the sketch,  we obtain  $\nabla \tau_x\hspace{0.15cm}\underline {= 1 \ \rm µ s}$. (3)  The PSD is the Fourier transform of the ACF. • With the given Fourier correspondence holds: $${\it \Phi}_{x}(f) = \sigma_x^2 \cdot {\rm \nabla} \tau_x \cdot {\rm e}^{- \pi ({\rm \nabla} \tau_x \hspace{0.03cm}\cdot \hspace{0.03cm}f)^2} .$$ • At frequency  $f = 0$,  we obtain: $${\it \Phi}_{x}(f = 0) = \sigma_x^2 \cdot {\rm \nabla} \tau_x = \rm 0.04 \hspace{0.1cm} V^2 \cdot 10^{-6} \hspace{0.1cm} s \hspace{0.15cm} \underline{= 40 \cdot 10^{-9} \hspace{0.1cm} V^2 / Hz}.$$ (4)  Solutions 1 and 3  are correct: • In general,  ${\it \Phi}_{y}(f) = {\it \Phi}_{x}(f) \cdot |H(f)|^2$.  
It follows: $${\it \Phi}_{y}(f) = \sigma_x^2 \cdot {\rm \nabla} \tau_x \cdot {\rm e}^{- \pi ({\rm \nabla} \tau_x \cdot f)^2}\cdot H_{\rm 0}^2 \cdot{\rm e}^{- 2 \pi (f/ {\rm \Delta} f)^2} .$$ • By combining the two exponential functions,  we obtain: $${\it \Phi}_{y}(f) = \sigma_x^2 \cdot {\rm \nabla} \tau_x \cdot H_0^2 \cdot {\rm e}^{- \pi\cdot ({\rm \nabla} \tau_x^2 + 2/\Delta f^2 ) \hspace{0.1cm}\cdot f^2}.$$ • Also  ${\it \Phi}_{y}(f)$  is Gaussian and never wider than  ${\it \Phi}_{x}(f)$.  For $f \to \infty$,  the approximation  ${\it \Phi}_{y}(f) \approx {\it \Phi}_{x}(f)$  holds. • As  $\Delta f$  gets smaller,  ${\it \Phi}_{y}(f)$  gets narrower  (so the second statement is false). • $H_0$  actually affects only the PSD height,  but not the width of the PSD. (5)  Analogous to task  (1),  it can be written for the PSD of the output signal  $y(t)$: $${\it \Phi}_{y}(f) = \sigma_y^2 \cdot {\rm \nabla} \tau_y \cdot {\rm e}^{- \pi \cdot {\rm \nabla} \tau_y^2 \cdot f^2 }.$$ • By comparing with the result from  (4)  we get: $${{\rm \nabla} \tau_y^2} = {{\rm \nabla} \tau_x^2} + \frac {2}{{\rm \Delta} f^2}.$$ • Solving the equation for  $\Delta f$  and considering the values  $\nabla \tau_x {= 1 \ \rm µ s}$  as well as  $\nabla \tau_y {= 3 \ \rm µ s}$,  it follows: $${\rm \Delta} f = \sqrt{\frac{2}{{\rm \nabla} \tau_y^2 - {\rm \nabla} \tau_x^2}} = \sqrt{\frac{2}{9 - 1}} \hspace{0.1cm}\rm MHz \hspace{0.15cm} \underline{= 0.5\hspace{0.1cm} MHz} .$$ (6)  The condition  $\sigma_y = \sigma_x$  is equivalent to  $\varphi_y(\tau = 0)= \varphi_x(\tau = 0)$. • Moreover,  since  $\nabla \tau_y = 3 \cdot \nabla \tau_x$  is given,  therefore  ${\it \Phi}_{y}(f= 0) = 3 \cdot {\it \Phi}_{x}(f= 0)$  must also hold. • From this we obtain: $$H_{\rm 0} = \sqrt{\frac{\it \Phi_y (f \rm = 0)}{\it \Phi_x (f = \rm 0)}} = \sqrt {3}\hspace{0.15cm} \underline{=1.732}.$$
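A short numerical check of subtasks (5) and (6), added for convenience; the values are taken from the problem statement.

```python
import numpy as np

sigma_x = 0.2            # V
nabla_tau_x = 1e-6       # s, equivalent ACF duration at the input
nabla_tau_y = 3e-6       # s, required equivalent ACF duration at the output

# (5): from  nabla_tau_y^2 = nabla_tau_x^2 + 2 / Delta_f^2
delta_f = np.sqrt(2.0 / (nabla_tau_y**2 - nabla_tau_x**2))
print(delta_f / 1e6)     # 0.5  (MHz)

# (6): sigma_y = sigma_x requires Phi_y(0) = (nabla_tau_y / nabla_tau_x) * Phi_x(0)
H0 = np.sqrt(nabla_tau_y / nabla_tau_x)
print(H0)                # 1.732...
```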
2022-05-16T11:57:38
{ "domain": "lntwww.de", "url": "https://en.lntwww.de/Aufgaben:Exercise_5.1:_Gaussian_ACF_and_Gaussian_Low-Pass", "openwebmath_score": 0.8776234984397888, "openwebmath_perplexity": 10022.305517807661, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357216481429, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573150835975 }
https://mathoverflow.net/questions/370854/topological-and-algebraic-covering-spaces-in-berkovich-geometry
# Topological and algebraic covering spaces in Berkovich geometry Let $$k$$ be a complete, non-archimedean field, and $$X$$ a Berkovich space over $$k$$ (as nice as you like, for arguments sake let's say strictly $$k$$-analytic, good, and geometrically connected). As discussed in this article of de Jong, covering spaces of $$X$$ come in two slightly different flavours. One the one hand you can take finite etale covers $$Y\rightarrow X$$ as you would for schemes, on the other hand you can take a covering space $$Y\rightarrow |X|$$ of the underlying topological space of $$X$$, and, roughly speaking, use the Berkovich space structure of $$X$$ to put one on $$Y$$. Following de Jong, let us call the first of these 'algebraic' and the second 'topological'. A general covering space is then some kind of mixture of the two. If $$k$$ is not separably closed, then one way of producing algebraic covering spaces is via finite separable extensions of $$k$$: if $$L/k$$ is such an extension then $$X_L \rightarrow X$$ is a finite etale map of Berkovich spaces, where $$X_L$$ denotes the base change of $$X$$ to $$L$$. My question is then the following: Question: Is it possible that $$X_L \rightarrow X$$ is a topological covering space, for some non-trivial extension $$L/k$$? It's not to hard to see this can't be the case if $$X$$ has a $$k$$-rational point (since the fibre of $$X_L\rightarrow X$$ over this point will have cardinality 1), but I'm particularly interested in the case when we might have $$X(k)=\emptyset$$. Concretely, I'm interested in the case when $$X$$ is (the analyitification of) a smooth projective conic over $$k$$, without a rational point, and $$L/k$$ is a quadratic extension over which $$X$$ does admit a rational point. In your particular case, $$X_L$$ has a point, so it is isomorphic to $$P^{1,\mathrm{an}}_L$$, hence simply connected. If your covering $$X_L \to X$$ were a covering, it would then be a universal covering. But we know that Berkovich curves retract by deformation onto graphs, so the topological fundamental group of $$X$$ is a free group. In particular, the universal covering of $$X$$ is either $$X$$ itself or of infinite degree, and we get a contradiction. • Hi Jerome, I was actually about to email you this question, so I'm glad you've popped up here! I think I perhaps didn't explain it very well - the question was more about whether $X_L$ can ever be a topological covering space of $X$ - i.e. a covering space map on the underlying topological spaces. It boils down to the question of whether or not every point of $X$ (of any Type) has precisely $[L:k]$ preimages in $X_L$. It feels like this is unlikely - for conics, I feel as though it should be possible to cook up some Type II point with only one preimage, but I didn't manage to do so. Sep 5 '20 at 7:54
2021-10-24T01:23:47
{ "domain": "mathoverflow.net", "url": "https://mathoverflow.net/questions/370854/topological-and-algebraic-covering-spaces-in-berkovich-geometry", "openwebmath_score": 0.8968958258628845, "openwebmath_perplexity": 86.12836782156928, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.981735721648143, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573150835975 }
https://homework.zookal.com/questions-and-answers/consider-the-following-ma2-process-zt--ut--1ut1-673088890
# Question: consider the following ma2 process zt ut 1ut1... ###### Question details Consider the following MA(2) process: $z_t = u_t + {\alpha}_{1} u_{t-1} + {\alpha}_{2} u_{t-2}$, where $u_t$ is a zero-mean white noise process with variance ${\sigma }^{2}$. (a) Calculate the conditional and unconditional means of ${z}_{t}$, that is, ${E}_{t}\left[{z}_{t+1}\right]$ and E[${z}_{t}$]. (b) Calculate the conditional and unconditional variances of ${z}_{t}$, that is, $Va{r}_{t}\left[{z}_{t+1}\right]$ and Var[${z}_{t}$]. (c) Derive the autocorrelation function of this process for all lags as functions of the parameters ${\alpha }_{1}$ and ${\alpha }_{2}$.
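The page gives no solution. As an illustration, the simulation below (my addition) assumes the process is $z_t = u_t + \alpha_1 u_{t-1} + \alpha_2 u_{t-2}$, as the title and part (c) suggest, and compares sample autocorrelations against the standard closed forms $\rho_1=(\alpha_1+\alpha_1\alpha_2)/(1+\alpha_1^2+\alpha_2^2)$, $\rho_2=\alpha_2/(1+\alpha_1^2+\alpha_2^2)$ and $\rho_k=0$ for $k\geq 3$.

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2, sigma = 0.6, -0.3, 1.0          # illustrative parameter values
n = 200_000

u = rng.normal(0.0, sigma, size=n + 2)
z = u[2:] + a1 * u[1:-1] + a2 * u[:-2]  # z_t = u_t + a1*u_{t-1} + a2*u_{t-2}

def sample_acf(x, k):
    x = x - x.mean()
    return float(np.dot(x[:-k], x[k:]) / np.dot(x, x))

denom = 1 + a1**2 + a2**2
theory = {1: (a1 + a1 * a2) / denom, 2: a2 / denom, 3: 0.0}
for k, rho in theory.items():
    print(k, round(sample_acf(z, k), 3), round(rho, 3))
```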
2021-04-14T16:56:50
{ "domain": "zookal.com", "url": "https://homework.zookal.com/questions-and-answers/consider-the-following-ma2-process-zt--ut--1ut1-673088890", "openwebmath_score": 0.9776621460914612, "openwebmath_perplexity": 1050.403325779801, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357216481429, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573150835974 }
https://math.stackexchange.com/questions/3284093/given-one-of-de-morgans-laws-prove-the-other-from-it-using-equivalences
# Given one of De Morgan's laws, prove the other from it using equivalences. I have one of De Morgan's laws (in propositional logic). I would like to prove the other law from the first using a sequence of equivalences (resolution). One is not allowed to use truth tables or the particular De Morgan's law which we are trying to prove (obviously). How can this be done? $$\lnot (A\land B)\equiv \lnot A \lor \lnot B$$ $$\lnot (A\lor B)\equiv \lnot A \land \lnot B$$ A list of the standard equivalence laws can be found on Wikipedia. Assume $$\lnot (A \land B) \equiv \lnot A \lor \lnot B.$$ Then, starting from $\lnot (A \lor B)$ and using Double Negation on $A$ and $B$, we get $$\lnot (A \lor B) \equiv \lnot (\lnot \lnot A \lor \lnot \lnot B) \equiv \lnot \lnot (\lnot A \land \lnot B) \equiv \lnot A \land \lnot B,$$ where the middle step applies the assumed law to $\lnot A$ and $\lnot B$, and the last step is Double Negation again.
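The equivalence chain above is the intended route. Purely as a numerical sanity check (not a proof, and separate from the exercise's ban on truth tables), one can confirm the target equivalence over all truth assignments:

```python
from itertools import product

for A, B in product((False, True), repeat=2):
    assert (not (A or B)) == ((not A) and (not B))
print("not(A or B) and (not A) and (not B) agree on all four assignments")
```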
2020-04-08T10:09:20
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3284093/given-one-of-de-morgans-laws-prove-the-other-from-it-using-equivalences", "openwebmath_score": 0.9112049341201782, "openwebmath_perplexity": 149.40998468528878, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357216481429, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573150835974 }
https://www.albert.io/ie/sat-math-1-and-2-subject-test/negative-exponents-and-equivalency
Negative Exponents and Equivalency (SATSTM-FVYYM9, Moderate)

If $a$, $b$, and $c$ are nonzero real numbers and if ${a}^{5}{b}^{8}{c}^{6}=\cfrac{3{a}^{4}{b}^{8}}{{c}^{-6}}$, then $a=$ ?

A $1$
B $3$
C $3a$
D $\frac{1}{3}$
E $9a$
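The page does not show a worked solution. Since $b$ and $c$ are nonzero, $b^{8}$ and $c^{6}$ cancel (note that dividing by $c^{-6}$ is the same as multiplying by $c^{6}$), leaving $a^{5}=3a^{4}$, hence $a=3$, choice B. A quick numeric confirmation with arbitrary nonzero $b$, $c$:

```python
a, b, c = 3.0, 1.7, -2.2                 # b and c are arbitrary nonzero values
lhs = a**5 * b**8 * c**6
rhs = 3 * a**4 * b**8 / c**(-6)
print(abs(lhs - rhs) < 1e-9 * abs(lhs))  # True
```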
2016-12-06T21:51:10
{ "domain": "albert.io", "url": "https://www.albert.io/ie/sat-math-1-and-2-subject-test/negative-exponents-and-equivalency", "openwebmath_score": 0.9643025994300842, "openwebmath_perplexity": 1392.5180583257677, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357216481429, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573150835974 }
http://mathoverflow.net/questions/228024/descent-of-sheaves-under-galois-covering
# Descent of sheaves under galois covering Let $\pi: Y\rightarrow X$ be a finite Galois covering between normal projective varieties with Galois group $G$. Let $E$ be a coherent sheaf on $Y$ with a $G$-linearisation, i.e., there are isomorphisms $\lambda_g:E\cong g^*E$ for all $g\in G$ satisfying $\lambda_1=id$ and $\lambda_{gh}=h^*\lambda_g\circ\lambda_h$. Does there always exist a coherent sheaf $F$ on $X$ such that $\pi^*F\cong E$? I know that when $\pi$ is etale, the answer is yes by descent theory along torsors (see Vistoli: "Notes on Grothendieck topologies, fibered categories and descent theory," arXiv preprint math/0412512). But how about the general case? In the proof of Lemma 3.2.2 in the book "D Huybrechts, M Lehn: The Geometry of Moduli Spaces of Sheaves", the authors claim that the answer is yes by descent theory, but don't give us a reference. Which descent theory they used? Could someone show me a proof? Thank you very much! - The answer is no. First note that you misquote the Lemma of Huybrechts-Lehn: they talk about a subsheaf of a sheaf on $Y$ which is already of the form $\pi ^*\mathcal{F}$. For a counter-example, take for $\pi :Y\rightarrow X$ a double covering of smooth projective curves, ramified at a point $p\in Y$, and take $E=\mathcal{O}_Y(p)$. Let $\sigma$ be the nontrivial involution of $Y$ such that $\pi \circ \sigma =\pi$. Clearly $\sigma ^*E\cong E$; choose any $\lambda :\sigma ^*E\stackrel{\sim}{\longrightarrow }E$. Then $\sigma ^*\lambda \circ \lambda$ is an automorphism of $E$, hence multiplication by some $t\in\mathbb{C}^*$; dividing $\lambda$ by $\sqrt{t}$ we may assume $t=1$, so that $\lambda$ gives a $\sigma$-linearization of $E$. But obviously $E$ does not descend since its degree is odd. Could you show me how to prove an invariant subsheaf of $\pi^*F$ can be descent? – Diego Maradona Jan 10 at 3:56 I think the statement in Huybrechts-Lehn is slightly incorrect: one should assume that the subsheaf $\mathcal{G}$ of $\pi ^*\mathcal{F}$ is saturated, that is, the quotient $\pi ^*\mathcal{F}/\mathcal{G}$ is torsion-free. A maximum destabilizing subsheaf has this property, so their proof still works. You want to prove that the natural homomorphism $\pi ^*((\pi _*\mathcal{G})^G)\rightarrow \mathcal{G}$ is an isomorphism in codimension 1. I think this follows by considering the exact sequence $0\rightarrow \mathcal{G}\rightarrow \pi ^*\mathcal{F}\rightarrow \mathcal{Q}\rightarrow 0$. – abx Jan 10 at 18:21
2016-06-25T21:41:32
{ "domain": "mathoverflow.net", "url": "http://mathoverflow.net/questions/228024/descent-of-sheaves-under-galois-covering", "openwebmath_score": 0.9822821617126465, "openwebmath_perplexity": 129.31626850183085, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357213809549, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573149058077 }
http://mathoverflow.net/feeds/question/111637
# Intersection of vector spaces Asked by Abel Stolz, 2012-11-06. Let ${v_{1,j},\ldots, v_{n,j}}$ be a basis of the $n$-dimensional vector space $V$ for $j=1\ldots k$ (and assume $2k\leq n$). Let $V_i$ be the subspace spanned by $v_{i,1},\ldots, v_{i,k}$ for $i=1\ldots n$. (So the dimension of each $V_i$ can be anything between $1$ and $k$.) Is it true that $V_1\cap V_2\cap \ldots \cap V_{2k}=0$ (or likewise any intersection of $2k$ of the $V_i$ is trivial)? (The answer to the question is needed to prove a quite technical lemma, the details of which I prefer not to spread in full length. In any case I want to do something like span "as much of $V$ as possible, using as few $V_i$ as possible".) I do not seem to be able to construct a counterexample, nor to do a proof by induction, though I suspect the latter should be possible. Any pointer to a reference or hint to a simple argument is appreciated! Answer by Peter Mueller, 2012-11-06: I believe the answer is no if $n\ge4$. Here is a counterexample for $n=4$, $k=2$: Let $v$ be the sum $v_{1,1}+v_{2,1}+v_{3,1}+v_{4,1}$ of the vectors of the first basis. Define the second basis by $v_{i,2}=v-v_{i,1}$. Then $v\in V_i$ for $i=1,2,3,4$, a contradiction as $v\ne 0$.
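Peter Mueller's counterexample is easy to verify numerically; here is a small NumPy sketch (my addition):

```python
import numpy as np

n = 4
e = np.eye(n)                                      # first basis: v_{i,1} = e_i
v = e.sum(axis=0)                                  # v = v_{1,1} + v_{2,1} + v_{3,1} + v_{4,1}
second = np.array([v - e[i] for i in range(n)])    # second basis: v_{i,2} = v - v_{i,1}

assert np.linalg.matrix_rank(second) == n          # the second family really is a basis
for i in range(n):
    # v = v_{i,1} + v_{i,2}, so the nonzero vector v lies in every V_i
    assert np.allclose(e[i] + second[i], v)
print("v lies in all four V_i, so their intersection is nontrivial")
```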
2013-05-24T21:35:15
{ "domain": "mathoverflow.net", "url": "http://mathoverflow.net/feeds/question/111637", "openwebmath_score": 0.9412696957588196, "openwebmath_perplexity": 318.6951190540844, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357211137667, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573147280178 }
https://curveball.yoavram.com/likelihood
# Likelihood¶ The likelihood module contains functions used to calculate and visualize the log-likelihood surfaces for models and data in Curveball. Functions that use growth data expect a pandas.DataFrame generated by the ioutils module. Functions that use often results of model fitting require lmfit.model.ModelResult objects. ## Members¶ curveball.likelihood.loglik(t, y, y_sig, f, penalty=None, **params)[source] Computes the log-likelihood of seeing the data given a model assuming normal distributed observation/measurement errors. $\log{L(y | \theta)} = -\frac{1}{2} \sum_i { \log{(2 \pi \sigma_{i}^{2})} + \frac{(y - f(t_i; \theta))^2}{\sigma_{i}^{2}} }$ which is the log-likelihood of seeing the data points $$t_i, y_i$$ with measurement error $$\sigma_i$$ given the model function $$f$$, the model parameters $$\theta$$, and that the measurement error at time $$t_i$$ has a normal distribution with mean 0. Parameters tnp.ndarray one dimensional array of time ynp.ndarray one dimensional array of the means of the observations y_signp.ndarray one dimensional array of standrad deviations of the observations fcallable a function the calculates the expected observations (f(t)) from t and any parameters in params penaltycallable a function that calculates a scalar penalty from the parameters in params to be substracted from the log-likelihood paramsfloats, optional model parameters Returns float the log-likelihood result curveball.likelihood.loglik_r_nu(r_range, nu_range, df, f=<function baranyi_roberts_function at 0x2ad1d7f82c80>, penalty=None, **params)[source] Estimates the log-likelihood surface for $$r$$ and $$\nu$$ given data and a model function. Parameters r_range, nu_rangenumpy.ndarray vectors of floats of $$r$$ and $$\nu$$ values on which to compute the log-likelihood dfpandas.DataFrame data frame with Time and OD columns fcallable, optional model function, defaults to curveball.baranyi_roberts_model.baranyi_roberts_function() penaltycallable, optional if given, the result of penalty will be substracted from the log-likelihood for each parameter set paramsfloats values for the model model parameters used by f Returns np.ndarray two-dimensional array of log-likelihood calculations; value at index i, j will have the value for r_range[i] and nu_range[j] curveball.likelihood.loglik_r_q0(r_range, q0_range, df, f=<function baranyi_roberts_function at 0x2ad1d7f82c80>, penalty=None, **params)[source] Estimates the log-likelihood surface for $$r$$ and $$\nu$$ given data and a model function. Parameters r_range, q0_rangenumpy.ndarray vectors of floats of $$r$$ and $$q_0$$ values on which to compute the log-likelihood dfpandas.DataFrame data frame with Time and OD columns fcallable, optional model function, defaults to curveball.baranyi_roberts_model.baranyi_roberts_function() penaltycallable, optional if given, the result of penalty will be substracted from the log-likelihood for each parameter set paramsfloats values for the model model parameters used by f Returns np.ndarray two-dimensional array of log-likelihood calculations; value at index i, j will have the value for r_range[i] and q0_range[j] curveball.likelihood.plot_loglik(Ls, xrange, yrange, xlabel=None, ylabel=None, columns=4, fig_title=None, normalize=True, ax_titles=None, cmap='viridis', colorbar=True, ax_width=4, ax_height=4, ax=None)[source] Plots one or more log-likelihood surfaces. 
Parameters Lssequence of numpy.ndarray list or tuple of log-likelihood two-dimensional arrays; if one array is given it will be converted to a size 1 list xrange, yrangenp.ndarray values on x-axis and y-axis of the plot (rows and columns of Ls, respectively) xlabel, ylabelstr, optional strings for x and y labels columnsint, optional number of columns in case that Ls has more than one matrice fig_titlestr, optional a title for the whole figure normalizebool, optional if True, all matrices will be plotted using a single color scale ax_titleslist or tuple of str, optional titles corresponding to the different matrices in Ls cmapstr. optional name of a matplotlib colormap (to see list, call matplotlib.pyplot.colormaps()), defaults to viridis colorbarbool, optional if True a colorbar will be added to the plot ax_width, ax_heightint width and height of each panel (one for each matrice in Ls) axmatplotlib axes or numpy.ndarray of axes if given, will plot into ax, otherwise will create a new figure Returns figmatplotlib.figure.Figure figure object axnumpy.ndarray array of axis objects Examples >>> L = loglik_r_nu(rs, nus, df, y0=y0, K=K, q0=q0, v=v) >>> plot_loglik(L0, rs, nus, normalize=False, fig_title=fig_title, xlabel=r'$r$', ylabel=r'$\nu$', colorbar=False) curveball.likelihood.plot_model_loglik(m, df, fig_title=None)[source] Plot the log-ikelihood surfaces for $$\nu$$ over $$r$$ and $$q_0$$ over $$r$$ for given data and model fitting result. Parameters mlmfit.model.ModelResult model for which to plot the log-likelihood surface dfpandas.DataFrame data frame with Time and OD columns used to fit the model fig_titlestr title for the plot Returns figmatplotlib.figure.Figure figure object axnumpy.ndarray array of axis objects Examples >>> m = curveball.models.fit_model(df) >>> curveball.likelihood.plot_model_loglik(m, df) curveball.likelihood.ridge_regularization(lam, **center)[source] Create a penaly function that employs the ridge regularization method: $P = \lambda ||\theta - \theta_0||_2$ where $$\lambda$$ is the regularization scale, $$\theta$$ is the model parameters vector, and $$\theta_0$$ is the model parameters guess vector. This is similar to using a multivariate Gaussian prior distribution on the model parameters with the Gaussian centerd at $$\theta_0$$ and scaled by $$\lambda$$. Parameters lamfloat the penalty factor or regularization scale centerfloats, optional guesses of model parameters Returns callable the penalty function, accepts model parameters as float keyword arguments and returns a float penalty to the log-likelihood Examples >>> penalty = ridge_regularization(1, y=0.1, K=1, r=1) >>> loglik(t, y, y_sig, logistic, penalty=penalty, y0=0.12, K=0.98, r=1.1)
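For readers without the package at hand, the documented log-likelihood formula is easy to re-implement. The sketch below is an illustrative standalone version, not the package's source; the logistic curve is only a stand-in for the model function f.

```python
import numpy as np

def loglik(t, y, y_sig, f, penalty=None, **params):
    # -0.5 * sum( log(2*pi*sig_i^2) + (y_i - f(t_i; theta))^2 / sig_i^2 ), minus optional penalty
    resid = y - f(t, **params)
    ll = -0.5 * np.sum(np.log(2.0 * np.pi * y_sig**2) + resid**2 / y_sig**2)
    if penalty is not None:
        ll -= penalty(**params)
    return ll

def logistic(t, y0=0.1, K=1.0, r=1.0):
    return K / (1.0 + (K / y0 - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 10.0, 50)
y = logistic(t) + 0.01                        # toy "observations"
print(loglik(t, y, np.full_like(t, 0.01), logistic))
```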
2019-07-23T16:17:32
{ "domain": "yoavram.com", "url": "https://curveball.yoavram.com/likelihood", "openwebmath_score": 0.42164260149002075, "openwebmath_perplexity": 6466.517427541712, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357211137666, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573147280177 }
http://math.stackexchange.com/questions/293489/if-psi-h-is-in-l2-for-all-h-in-l2-must-psi-be-essentially-bounded
# If $\psi h$ is in $L^2$ for all $h\in L^2$, must $\psi$ be essentially bounded? If $\psi:[a,b]\to\mathbb C$ is a (measurable) function such that $\psi h$ is in $L^2[a,b]$ for all $h\in L^2[a,b]$, then must $\psi$ be essentially bounded? The converse direction is clear: If $\psi$ is essentially bounded, then $\|\psi h\|_2\leq \|\psi\|_\infty\|h\|_2$ for all $h\in L^2$, and thus the multiplication operator $M_\psi:h\mapsto \psi h$ is defined and bounded. The question here is whether, without prior assumptions on $\psi$, simply having $M_\psi(L^2)\subseteq L^2$ implies that $\psi$ is bounded. - What is the question here? – T. Eskin Feb 3 '13 at 9:26 Prove that the essential supremum of the function ψ is finite. – Alexander Osorio Feb 3 '13 at 9:31 Why do you think it is true? – Michael Greinecker Feb 3 '13 at 9:40 If $\psi h$ is in $L^2$ for all $h\in L^2$, define $M_\psi:L^2\to L^2$ by $M_\psi h=\psi h$. Then $M_\psi$ is a linear operator on $L^2$, and the closed graph theorem can be used to show that $M_\psi$ is bounded. Suppose that $\{(h_n,\psi h_n)\}$ is a sequence in the graph of $M_\psi$ converging to $(h,k)\in L^2\times L^2$. We want to see that $k=\psi h$. This can be seen from the fact that convergence in $L^2$ norm implies pointwise a.e. convergence of a subsequence. (There's a subsequence of $\{h_n\}$ converging pointwise a.e. to $h$, and a subsequence of $\{\psi h_n\}$ converging pointwise a.e. to $k$, and these can be coordinated.) So $M_\psi$ is bounded. Suppose that $|\psi(x)|\geq c$ on a set $E$ of positive measure. Then $\|M_\psi \chi_{E}\|_2\geq c \|\chi_{E}\|_2$, which implies that $c\leq \|M_\psi\|$. Thus $\|\psi\|_\infty\leq \|M_\psi\|$.
2016-02-07T00:14:19
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/293489/if-psi-h-is-in-l2-for-all-h-in-l2-must-psi-be-essentially-bounded", "openwebmath_score": 0.9858115315437317, "openwebmath_perplexity": 49.60194378961777, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357205793903, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573143724383 }
https://blogs.mathworks.com/cleve/2012/11/12/magic-squares-part-3-linear-algebra/?from=kr
# Magic Squares, Part 3, Linear Algebra We know a little bit, but not much, about the linear algebraic properties of magic squares. ### Contents #### Rank Here is my favorite experiment involving the linear algebraic properties of magic squares. Recall that the rank of a matrix is the number of linearly independent rows or columns. If an n -by- n matrix has full rank, that is rank equal to n, then it is nonsingular and has an inverse. Let's look at the rank of the magic squares generated by MATLAB. r = zeros(1,33); for n = 3:33 r(n) = rank(magic(n)); end bar(r) xlabel('n') ylabel('rank') title('rank(magic(n))') snapnow We can clearly see three different cases. Odd order magic squares have full rank, singly even order magic squares have rank equal to roughly half their order, and doubly even order magic squares of any order have rank equal to three. Why is that? It reflects the three different cases in the MATLAB magic square generator. #### Odd Order Recall from last week that we have two different ways of generating the odd order magic square produced by magic(n). De La Loubere's method generates the matrix elements one-by-one along broken diagonals in a manner that seems very unlikely to generate linear dependence between rows or columns. Our new algorithm generates the same magic square with a few array operations. [I,J] = ndgrid(1:n); A = mod(I+J+(n-3)/2,n); B = mod(I+2*J-2,n); M = n*A + B + 1; It is not hard to prove that A and B have full rank and so it is likely that their sum has full rank as well. There are many other magic squares besides the ones generated by MATLAB. Are all the odd ordered ones nonsingular? Frankly, I have no idea. The magic square property tells us very little about linear independence. Does anybody know of a 5-by-5 magic square that is also a singular matrix? I would love to see it. Or, can anybody prove that odd-ordered magic squares must be nonsingular? I would love to see a proof as well. Actually, I better see just one or the other. #### Doubly Even Order For doubly even order, the rank is always three. When I first saw that -- years ago -- I was really surprised. For a long time I had a vague idea that the rank could be explained by the simple algorithm that generates these matrices. But now I'm not so sure. Let's look at the generation of magic(4). Start with the integers 1:16 arranged in a 4-by-4 array. This, of course, is a matrix with rank 1, but we're about to alter it drastically. Mark half of the elements with an asterisk in a pattern like this. 1* 2 3 4* 5 6* 7* 8 9 10* 11* 12 13* 14 15 16* Now regard the marked elements as a one-dimensional array and flip that array end-for-end. This produces magic(4). 16* 2 3 13* 5 11* 10* 8 9 7* 6* 12 4* 14 15 1* It must be easy to describe that indexing operation in matrix terms in a way that generalizes to higher order and explains why the resulting matrix happens to have rank 3. But I don't see how today. #### Order Four What about other 4-by-4 magic squares? Do they all have rank 3? Are there any nonsingular magic squares of order 4? I am able to answer that question because I came across this terrific paper by Bill Press, one of the authors of Numerical Recipes. Did Dürer Intentionally Show Only His Second-Best Magic Square? (I won't spoil Bill's punch line by revealing the answer to the question he poises in his title.) 
In order to investigate Dürer's possible choices for a special magic square, Press describes a seven variable parameterization of the 16 elements in a 4-by-4 array that characterizes all possible magic squares of order 4. (He attributes the parameterization to Maurice Kraitchik.) This approach leads to a program, enumerate.m, that checks the rank of all magic squares of order 4. This is the first time I have ever written a MATLAB code with for loops nested seven deep. It generates 16*15*14*13*12*11*10 = 57657600 matrices. Of these, only 7040 pass the check in the inner block to qualify as legal magic squares. Dividing the final counts by 8 to account for eight-fold symmetry, we find that there are 880 magic squares of order four. Their ranks are

 rank   count
  3      640
  4      240

So, there are nonsingular 4-by-4 magic squares. Other than that, I attach no great significance to the actual counts. The process of finding them is more interesting than the counts themselves. By the way, the entire computation by enumerate4 takes about 415 seconds on my Lenovo X201 2.67 GHz i7 laptop. The first nonsingular 4-by-4 magic square that enumerate4 finds is

  1   2  16  15
 13  14   4   3
 12   7   9   6
  8  11   5  10
2021-11-27T10:51:54
{ "domain": "mathworks.com", "url": "https://blogs.mathworks.com/cleve/2012/11/12/magic-squares-part-3-linear-algebra/?from=kr", "openwebmath_score": 0.558430016040802, "openwebmath_perplexity": 618.5105630647919, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357205793903, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573143724382 }
http://clay6.com/qa/9949/which-of-the-following-is-correct-
# Which of the following is correct? (1) An element of a group can have more than one inverse. (2) If every element of a group is its own inverse, then the group is abelian. (3) The set of all $2\times 2$ real matrices forms a group under matrix multiplication. (4) $(a^{\ast} b)^{-1}=a^{-1} \ast b^{-1}$ for all $a,b\in G$. Statement (1) is false, since an element of a group can have only one inverse. Statement (2) is correct. Statement (3) is false, since inverses need not exist: singular matrices do not possess an inverse. For example, the matrix $\begin {bmatrix} a & a \\a & a \end{bmatrix}$ is a $2 \times 2$ real matrix whose inverse does not exist. Statement (4) is false; the correct identity is $(a^{\ast} b)^{-1}=b^{-1} \ast a^{-1}$ for all $a,b\in G$. Hence (2) is the correct answer.
2016-10-24T12:32:28
{ "domain": "clay6.com", "url": "http://clay6.com/qa/9949/which-of-the-following-is-correct-", "openwebmath_score": 0.7787659764289856, "openwebmath_perplexity": 413.77524185487977, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357205793902, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573143724381 }
https://testbook.com/question-answer/the-current-in-a-circuit-is-measured-as-235-a--5fa52041ed09f18b379209d0
# The current in a circuit is measured as 235 μA and the accuracy of measurement is ± 0.5%. This current passes through a resistor of 35 kΩ ± 0.2%. The voltage is estimated to be 8.23 V. The error in the estimation would be This question was previously asked in ESE Electronics 2014 Paper 1: Official Paper 1. ± 0.06 V 2. ± 0.04 V 3. ± 0.016 V 4. ± 0.1 V Option 1 : ± 0.06 V ## Detailed Solution Concept: Limiting error: • The maximum allowable error in a measurement, specified in terms of the true value, is known as the limiting error. • It gives a range of errors. It is always taken with respect to the true value, so it is a variable error. • If two quantities are added, then their limiting errors also get added. • If two quantities are multiplied or divided, then their percentage limiting errors get added. Calculation: Given that, current (I) = 235 μA ± 0.5% Resistance (R) = 35 kΩ ± 0.2% Voltage (V) = 8.23 V From Ohm’s law, V = IR As the quantities are multiplied, their percentage errors get added. Error in the measurement of voltage = 0.5 + 0.2 = ± 0.7% The error in the measurement of voltage in volts is $$= \frac{{0.7}}{{100}} \times 8.23 = 0.057 \approx 0.06\;V$$
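The same limiting-error arithmetic in a few lines of Python (my addition):

```python
I, R = 235e-6, 35e3            # A, ohm
V = I * R                      # 8.225 V, close to the quoted 8.23 V
rel_err = 0.005 + 0.002        # percentage limiting errors add for a product
print(V, rel_err * V)          # ~8.225, ~0.0576  ->  about +/- 0.06 V
```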
2021-10-23T21:47:44
{ "domain": "testbook.com", "url": "https://testbook.com/question-answer/the-current-in-a-circuit-is-measured-as-235-a--5fa52041ed09f18b379209d0", "openwebmath_score": 0.5660977959632874, "openwebmath_perplexity": 2753.8826941313487, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357205793902, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573143724381 }
http://mathhelpforum.com/differential-geometry/104997-new-question.html
Math Help - New Question 1. New Question A subset A of a metric space X is called uniformly discrete if there exists epsilon > 0 with the property that the distance between two distinct points of A is always greater than or equal to epsilon: ∀ a, b ∈ A, (a != b) ⇒ (dX (a, b) ≥ epsilon). Show that any subset of a uniformly discrete metric space X is closed in X. 2. Originally Posted by donsmith A subset A of a metric space X is called uniformly discrete if there exists epsilon > 0 with the property that the distance between two distinct points of A is always greater than or equal to epsilon: ∀ a, b ∈ A, (a != b) ⇒ (dX (a, b) ≥ epsilon). Show that any subset of a uniformly discrete metric space X is closed in X. Always begin a thread for a new question. Say that $\varepsilon$ is fixed by the problem. If $x$ were a limit point of some set $Y$ then the ball $B\left( {x;\frac{\varepsilon }{2}} \right)$ would have to contain a point of $Y$ distinct from $x$. But that is impossible: any two distinct points of $X$ are at distance at least $\varepsilon$, so the only point of $X$ in that ball is $x$ itself. So $Y$ is closed by default. 3. solved
2015-08-02T12:40:03
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/differential-geometry/104997-new-question.html", "openwebmath_score": 0.7343854308128357, "openwebmath_perplexity": 549.7741997533424, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357205793902, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573143724381 }
https://en.formulasearchengine.com/wiki/Pinhole_camera_model
# Pinhole camera model A diagram of a pinhole camera. The pinhole camera model describes the mathematical relationship between the coordinates of a 3D point and its projection onto the image plane of an ideal pinhole camera, where the camera aperture is described as a point and no lenses are used to focus light. The model does not include, for example, geometric distortions or blurring of unfocused objects caused by lenses and finite sized apertures. It also does not take into account that most practical cameras have only discrete image coordinates. This means that the pinhole camera model can only be used as a first order approximation of the mapping from a 3D scene to a 2D image. Its validity depends on the quality of the camera and, in general, decreases from the center of the image to the edges as lens distortion effects increase. Some of the effects that the pinhole camera model does not take into account can be compensated, for example by applying suitable coordinate transformations on the image coordinates, and other effects are sufficiently small to be neglected if a high quality camera is used. This means that the pinhole camera model often can be used as a reasonable description of how a camera depicts a 3D scene, for example in computer vision and computer graphics. ## The geometry and mathematics of the pinhole camera The geometry of a pinhole camera The geometry related to the mapping of a pinhole camera is illustrated in the figure. (Note: the x1-x2-x3 coordinate system in the figure is left-handed, which can be counterintuitive; a right-handed system may be used instead.) The figure contains the following basic objects • A 3D orthogonal coordinate system with its origin at O. This is also where the camera aperture is located. The three axes of the coordinate system are referred to as X1, X2, X3. Axis X3 is pointing in the viewing direction of the camera and is referred to as the optical axis, principal axis, or principal ray. The 3D plane which intersects with axes X1 and X2 is the front side of the camera, or principal plane. • An image plane where the 3D world is projected through the aperture of the camera. The image plane is parallel to axes X1 and X2 and is located at distance ${\displaystyle f}$ from the origin O in the negative direction of the X3 axis. A practical implementation of a pinhole camera implies that the image plane is located such that it intersects the X3 axis at coordinate -f where f > 0. f is also referred to as the focal length of the pinhole camera. • A point R at the intersection of the optical axis and the image plane. This point is referred to as the principal point or image center. • The projection line of point P into the camera. This is the green line which passes through point P and the point O. • The projection of point P onto the image plane, denoted Q. This point is given by the intersection of the projection line (green) and the image plane. In any practical situation we can assume that ${\displaystyle x_{3}}$ > 0 which means that the intersection point is well defined. The pinhole aperture of the camera, through which all projection lines must pass, is assumed to be infinitely small, a point.
In the literature this point in 3D space is referred to as the optical (or lens or camera) center.[1] Next we want to understand how the coordinates ${\displaystyle (y_{1},y_{2})}$ of point Q depend on the coordinates ${\displaystyle (x_{1},x_{2},x_{3})}$ of point P. This can be done with the help of the following figure which shows the same scene as the previous figure but now from above, looking down in the negative direction of the X2 axis. The geometry of a pinhole camera as seen from the X2 axis In this figure we see two similar triangles, both having parts of the projection line (green) as their hypotenuses. The catheti of the left triangle are ${\displaystyle -y_{1}}$ and f and the catheti of the right triangle are ${\displaystyle x_{1}}$ and ${\displaystyle x_{3}}$. Since the two triangles are similar it follows that ${\displaystyle {\frac {-y_{1}}{f}}={\frac {x_{1}}{x_{3}}}}$ or ${\displaystyle y_{1}=-{\frac {f\,x_{1}}{x_{3}}}}$ A similar investigation, looking in the negative direction of the X1 axis gives ${\displaystyle {\frac {-y_{2}}{f}}={\frac {x_{2}}{x_{3}}}}$ or ${\displaystyle y_{2}=-{\frac {f\,x_{2}}{x_{3}}}}$ This can be summarized as ${\displaystyle {\begin{pmatrix}y_{1}\\y_{2}\end{pmatrix}}=-{\frac {f}{x_{3}}}{\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}}$ which is an expression that describes the relation between the 3D coordinates ${\displaystyle (x_{1},x_{2},x_{3})}$ of point P and its image coordinates ${\displaystyle (y_{1},y_{2})}$ given by point Q in the image plane. ### Rotated image and the virtual image plane The mapping from 3D to 2D coordinates described by a pinhole camera is a perspective projection followed by a 180° rotation in the image plane. This corresponds to how a real pinhole camera operates; the resulting image is rotated 180° and the relative size of projected objects depends on their distance to the focal point and the overall size of the image depends on the distance f between the image plane and the focal point. In order to produce an unrotated image, which is what we expect from a camera, there are two possibilities: • Rotate the coordinate system in the image plane 180° (in either direction). This is the way any practical implementation of a pinhole camera would solve the problem; for a photographic camera we rotate the image before looking at it, and for a digital camera we read out the pixels in such an order that it becomes rotated. • Place the image plane so that it intersects the X3 axis at f instead of at -f and rework the previous calculations. This would generate a virtual (or front) image plane which cannot be implemented in practice, but provides a theoretical camera which may be simpler to analyse than the real one. In both cases the resulting mapping from 3D coordinates to 2D image coordinates is given by ${\displaystyle {\begin{pmatrix}y_{1}\\y_{2}\end{pmatrix}}={\frac {f}{x_{3}}}{\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}}$ (same as before except no minus sign) ## Homogeneous coordinates {{#invoke:main|main}} The mapping from 3D coordinates of points in space to 2D image coordinates can also be represented in homogeneous coordinates. Let ${\displaystyle \mathbf {x} }$ be a representation of a 3D point in homogeneous coordinates (a 4-dimensional vector), and let ${\displaystyle \mathbf {y} }$ be a representation of the image of this point in the pinhole camera (a 3-dimensional vector). 
Then the following relation holds ${\displaystyle \mathbf {y} \sim \mathbf {C} \,\mathbf {x} }$ where ${\displaystyle \mathbf {C} }$ is the ${\displaystyle 3\times 4}$ camera matrix and the ${\displaystyle \,\sim }$ means equality between elements of projective spaces. This implies that the left and right hand sides are equal up to a non-zero scalar multiplication. A consequence of this relation is that also ${\displaystyle \mathbf {C} }$ can be seen as an element of a projective space; two camera matrices are equivalent if they are equal up to a scalar multiplication. This description of the pinhole camera mapping, as a linear transformation ${\displaystyle \mathbf {C} }$ instead of as a fraction of two linear expressions, makes it possible to simplify many derivations of relations between 3D and 2D coordinates. • Entrance pupil, the equivalent location of the pinhole in relation to object space in a real camera. • Exit pupil, the equivalent location of the pinhole in relation to the image plane in a real camera. • Pinhole camera, the practical implementation of the mathematical model described in this article.
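For concreteness, here is a small numeric illustration (my addition) of the two formulations above, the direct division by x3 and the homogeneous 3x4 camera matrix; the focal length and the point are arbitrary example values.

```python
import numpy as np

f = 0.05                                   # focal length (arbitrary example value)
X = np.array([0.3, -0.2, 2.0])             # 3D point in camera coordinates, x3 > 0

# Direct pinhole mapping onto the virtual (front) image plane: y = (f/x3) * (x1, x2)
y = f / X[2] * X[:2]

# Same mapping with homogeneous coordinates and a 3x4 camera matrix
C = np.array([[f, 0.0, 0.0, 0.0],
              [0.0, f, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Y = C @ np.append(X, 1.0)
print(y, Y[:2] / Y[2])                     # identical up to rounding
```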
2023-04-01T16:32:21
{ "domain": "formulasearchengine.com", "url": "https://en.formulasearchengine.com/wiki/Pinhole_camera_model", "openwebmath_score": 0.6815093755722046, "openwebmath_perplexity": 1232.2024544332528, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.981735720045014, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573140168585 }
https://math.stackexchange.com/questions/838970/if-a-non-negative-matrix-dominates-another-non-negative-matrix-does-it-say-anyt
# If a non-negative matrix dominates another non-negative matrix, does it say anything about the relations between their eigenvalues? I was reading an old paper by Wilf: http://www.math.upenn.edu/~wilf/website/Eigenvalues%20of%20a%20graph.pdf In short, he has made some claim about a lower bound for the largest eigenvalue of a non-negative matrix. (Which, by the Perron–Frobenius theorem is non-negative, and it has an all positive eigenvector) And it made great sense to me until I got to one claim in the end, where he discussed another matrix, which was identical to the old one, except he replaced some of its entries by 0s, and claimed that the new matrix' highest eigenvalue can't be higher than the original's, and after spending some time trying to prove that fact, I haven't been able to. Am I missing something? Is it true in general that if a non-negative matrix A dominates a second non-negative matrix B, all of B's positive eigenvalues are no greater than A's? ## migrated from mathoverflow.netJun 18 '14 at 22:07 This question came from our site for professional mathematicians. Probably what is meant is that if $A \geq B$ (entrywise), then the largest eigenvalue of $A$ is at least as large as that of $B$ (when $A$ and $B$ are nonnegative matrices). If $B$ is primitive (that is, some power is strictly positive), then an easy argument with the H-transform shows that the spectral radius of $A$ is strictly greater than that of $B$ if $A \geq B$ and $A \neq B$.
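The monotonicity fact used in the answer (for entrywise-nonnegative matrices, enlarging entries cannot decrease the spectral radius) is easy to probe numerically; a quick random check (my addition, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = lambda M: max(abs(np.linalg.eigvals(M)))

for _ in range(1000):
    B = rng.random((5, 5))                                     # nonnegative matrix
    A = B + rng.random((5, 5)) * (rng.random((5, 5)) < 0.3)    # A >= B entrywise
    assert rho(A) >= rho(B) - 1e-12
print("spectral radius never decreased in 1000 random trials")
```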
2019-06-26T02:17:49
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/838970/if-a-non-negative-matrix-dominates-another-non-negative-matrix-does-it-say-anyt", "openwebmath_score": 0.8870745301246643, "openwebmath_perplexity": 228.98563856471085, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.981735720045014, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573140168585 }
https://www.physicsforums.com/threads/trigonometry-assistance.85561/
# Trigonometry assistance TonyC How do I go about finding the equivalent expression for cot 50 degrees? When I work the problem, I come up with tan 130 degrees. Where am I going wrong? Homework Helper I rather work in radians, if you don't mind. $$50^\circ = \frac{{5\pi }}{{18}}$$ We know that: $$\cot \left( \alpha \right) = \tan \left( {\frac{\pi }{2} - \alpha } \right)$$ So: $$\cot \left( {\frac{{5\pi }}{{18}}} \right) = \tan \left( {\frac{\pi }{2} - \frac{{5\pi }}{{18}}} \right) = \tan \left( {\frac{{2\pi }}{9}} \right) = \tan \left( {40^\circ } \right)$$ TonyC That makes perfect sense now..... The light has come on, thank you Homework Helper No problem
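For completeness, a one-line numeric confirmation of the identity used above (my addition):

```python
import math
print(math.isclose(1.0 / math.tan(math.radians(50)), math.tan(math.radians(40))))   # True
```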
2022-09-26T11:50:45
{ "domain": "physicsforums.com", "url": "https://www.physicsforums.com/threads/trigonometry-assistance.85561/", "openwebmath_score": 0.8195804357528687, "openwebmath_perplexity": 4713.40707556264, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357200450139, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573140168584 }
https://stacks.math.columbia.edu/tag/00T5
Lemma 10.137.5. Let $k$ be a field. Let $S$ be a smooth $k$-algebra. Then $S$ is a local complete intersection. Proof. By Lemmas 10.137.4 and 10.135.11 it suffices to prove this when $k$ is algebraically closed. Choose a presentation $\alpha : k[x_1, \ldots , x_ n] \to S$ with kernel $I$. Let $\mathfrak m$ be a maximal ideal of $S$, and let $\mathfrak m' \supset I$ be the corresponding maximal ideal of $k[x_1, \ldots , x_ n]$. We will show that condition (5) of Lemma 10.135.4 holds (with $\mathfrak m$ instead of $\mathfrak q$). We may write $\mathfrak m' = (x_1 - a_1, \ldots , x_ n - a_ n)$ for some $a_ i \in k$, because $k$ is algebraically closed, see Theorem 10.34.1. By our assumption that $k \to S$ is smooth the $S$-module map $\text{d} : I/I^2 \to \bigoplus _{i = 1}^ n S \text{d}x_ i$ is a split injection. Hence the corresponding map $I/\mathfrak m' I \to \bigoplus \kappa (\mathfrak m') \text{d}x_ i$ is injective. Say $\dim _{\kappa (\mathfrak m')}(I/\mathfrak m' I) = c$ and pick $f_1, \ldots , f_ c \in I$ which map to a $\kappa (\mathfrak m')$-basis of $I/\mathfrak m' I$. By Nakayama's Lemma 10.20.1 we see that $f_1, \ldots , f_ c$ generate $I_{\mathfrak m'}$ over $k[x_1, \ldots , x_ n]_{\mathfrak m'}$. Consider the commutative diagram $\xymatrix{ I \ar[r] \ar[d] & I/I^2 \ar[rr] \ar[d] & & I/\mathfrak m'I \ar[d] \\ \Omega _{k[x_1, \ldots , x_ n]/k} \ar[r] & \bigoplus S\text{d}x_ i \ar[rr]^{\text{d}x_ i \mapsto x_ i - a_ i} & & \mathfrak m'/(\mathfrak m')^2 }$ (proof commutativity omitted). The middle vertical map is the one defining the naive cotangent complex of $\alpha$. Note that the right lower horizontal arrow induces an isomorphism $\bigoplus \kappa (\mathfrak m') \text{d}x_ i \to \mathfrak m'/(\mathfrak m')^2$. Hence our generators $f_1, \ldots , f_ c$ of $I_{\mathfrak m'}$ map to a collection of elements in $k[x_1, \ldots , x_ n]_{\mathfrak m'}$ whose classes in $\mathfrak m'/(\mathfrak m')^2$ are linearly independent over $\kappa (\mathfrak m')$. Therefore they form a regular sequence in the ring $k[x_1, \ldots , x_ n]_{\mathfrak m'}$ by Lemma 10.106.3. This verifies condition (5) of Lemma 10.135.4 hence $S_ g$ is a global complete intersection over $k$ for some $g \in S$, $g \not\in \mathfrak m$. As this works for any maximal ideal of $S$ we conclude that $S$ is a local complete intersection over $k$. $\square$ ## Comments (3) Comment #721 by Keenan Kidwell on In the third line of the proof, the containment $\mathfrak{m}^\prime\subseteq I$ goes the other way. Comment #6688 by WhatJiaranEatsTonight on I think the commutativity has an intuitive explanation. Let f be a polynomial that represents an element in $I/I^2$. The composition of the left and the bottom takes f to $\sum \partial f/\partial x_i(x_i -a_i)$. Notice that the Taylor expansion of f has vanishing constant since $I\subset m$. And the higher items lie in $m^2$. Thus f is equal to the first order items of its Taylor expansion, which is exactly the form as above. Comment #6894 by on @#6688: I sort of agree, but I am not sure it is worth editing what we have now. Can somebody suggest a better way of writing the argument?
2022-08-16T20:10:02
{ "domain": "columbia.edu", "url": "https://stacks.math.columbia.edu/tag/00T5", "openwebmath_score": 0.9225847125053406, "openwebmath_perplexity": 210.7322309810853, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357200450139, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573140168584 }
http://www.physicsforums.com/showthread.php?p=4050384
## commutation between subgroups Hello, let's suppose I have two subgroups $R$ and $T$, and I know that in general they do not commute: that is, $rt\neq tr$ for some $r\in R$, $t\in T$. Is it possible, perhaps after making specific assumptions on R and T, to find some $r'\in R$, and $t'\in T$ such that: $$rt=t'r'$$. This is possible, for example, with some matrix manipulations if R and T are respectively the groups of rotations and translations in 2D. I was wondering if it is possible to find a more general algebraic approach without making explicit how R and T are defined. Hi, mnb96. A few questions: Do the subgroups have trivial intersection? Are they part of any specific supergroup? How about looking at the group table (or, otherwise, how is the group given to you)? Hi Bacle2, thanks for your help. I consider R and T as being subgroups of the supergroup G=RT, and I do not assume that R and T have trivial intersection. However I noticed that if I assume that at least one of the two subgroups is normal, then I could solve the problem. Let's suppose for example that T is a normal subgroup of G=RT, then we have: $$rt=[r,t]tr$$ where [r,t] is the commutator. Thus, $$rt=(rtr^{-1})t^{-1}tr$$ and since T is normal we have: $$rt=t'r$$ where $t'=rtr^{-1}\in T$. If you define T as the group of 2D translations and R as the group of 2D rotations, and observe that T is a normal subgroup of G=RT, the above construction actually yields a well-known result... I just wonder if it is possible to drop the assumption of normality and still come up with some more general result.
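As an illustration of the concluding identity $rt = t'r$ with $t' = rtr^{-1}$ (an addition to the thread), here is the concrete 2D case mentioned in the question, with rotations and translations written as 3x3 homogeneous matrices; NumPy is assumed and the particular angle and vector are arbitrary:

import numpy as np

def rotation(theta):
    # homogeneous 3x3 matrix for a rotation about the origin
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def translation(v):
    # homogeneous 3x3 matrix for a translation by v
    T = np.eye(3)
    T[:2, 2] = v
    return T

theta, v = 0.7, np.array([1.0, -2.0])
r = rotation(theta)
t = translation(v)

# r t != t r in general, but r t = t' r with t' = r t r^{-1},
# and t' is again a translation (by the rotated vector).
t_prime = r @ t @ np.linalg.inv(r)
assert np.allclose(r @ t, t_prime @ r)
assert np.allclose(t_prime, translation(r[:2, :2] @ v))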
2013-05-25T15:35:58
{ "domain": "physicsforums.com", "url": "http://www.physicsforums.com/showthread.php?p=4050384", "openwebmath_score": 0.890704870223999, "openwebmath_perplexity": 350.61048650135325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357195106375, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573136612789 }
https://socratic.org/questions/how-do-you-solve-the-equation-x-2-1-4x-1-2-by-completing-the-square
# How do you solve the equation x^2+1.4x=1.2 by completing the square?

May 19, 2017

$x = 0.6$ or $x = - 2$

#### Explanation:

Divide the coefficient of the $x$ term by $2$ and square it. $\frac{1.4}{2} = 0.7$ ${\left(0.7\right)}^{2} = 0.49$ Now add it to both sides of the equation ${x}^{2} + 1.4 x + 0.49 = 1.2 + 0.49$ This becomes ${\left(x + 0.7\right)}^{2} = 1.69$ Take the square root of both sides $\sqrt{{\left(x + 0.7\right)}^{2}} = \pm \sqrt{1.69}$ $x + 0.7 = \pm 1.3$ $x = \pm 1.3 - 0.7$ $x = 1.3 - 0.7 = 0.6$ $x = - 1.3 - 0.7 = - 2$
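A two-line check of the two roots (an addition, not part of the original answer):

for x in (0.6, -2):
    print(x, x**2 + 1.4*x)   # both evaluate to 1.2 (up to floating-point rounding)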
2021-06-23T23:03:25
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-solve-the-equation-x-2-1-4x-1-2-by-completing-the-square", "openwebmath_score": 0.7574861645698547, "openwebmath_perplexity": 890.2632078077231, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357195106375, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573136612788 }
http://math.stackexchange.com/questions/190968/partitions-of-binary-numbers-into-binary-numbers-with-fixed-digits
# Partitions of binary numbers into binary numbers with fixed digits?

If we are to have (two, for example) binary numbers, such that their sum is $100111010_2$, and given that the first number has 5 ones, and the second number has 3 ones, can I find the numbers that when added together give $100111010_2$ (the two numbers are $11010110_2$ and $01100100_2$, by the way)? My theory is that, for partitions of a binary number $n_2$ into $k$ binary parts with a given constraint on the number of ones in each part, there can be at most $k$ ways in which $n_2$ can be partitioned (without order, not necessarily distinct). Anyone here to prove (or disprove) this?

-

Not true. For instance, suppose you want to write the number with $kn$ binary digits that are all one as a sum of $k$ binary numbers each having $n$ ones. For each digit, let all the summands except one have a zero there and give the remaining summand a one; since each summand must end up with exactly $n$ ones, the number of ways to assign the digits is the multinomial coefficient $\binom{kn}{n,\,n,\,\ldots,\,n}=\frac{(kn)!}{(n!)^k}$. Even if you treat permutations of the summands as equivalent, there are still at least $\frac{(kn)!}{(n!)^k\,k!}$ distinct ways to form the sum, which is much greater than $k$ (exponentially so as $n$ increases).

-
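A small brute-force check of the counter-example (an addition to the thread), for $k = 2$ summands with $n = 2$ ones each and target $1111_2 = 15$; the count already exceeds $k$:

from itertools import combinations

k, n = 2, 2
target = 2 ** (k * n) - 1          # the binary number with k*n ones
# candidate summands: k*n-bit numbers with exactly n ones
candidates = [sum(1 << i for i in ones) for ones in combinations(range(k * n), n)]
ordered = [(a, b) for a in candidates for b in candidates if a + b == target]
unordered = {tuple(sorted(p)) for p in ordered}
print(len(ordered), len(unordered))   # 6 ordered pairs, 3 unordered, already more than k = 2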
2016-06-29T05:55:30
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/190968/partitions-of-binary-numbers-into-binary-numbers-with-fixed-digits", "openwebmath_score": 0.8867445588111877, "openwebmath_perplexity": 108.56530110462221, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357195106375, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573136612788 }
http://mathoverflow.net/questions/160075/are-there-geometrically-formal-manifolds-which-are-not-rationally-elliptic
# Are there geometrically formal manifolds, which are not rationally elliptic?

Formality of a space is meant in the sense of Sullivan, i.e. a space $X$ is called formal if its commutative differential graded algebra of piecewise linear differential forms $(A_{PL}(X),d)$ is weakly equivalent (there is a zig-zag of quasi-isomorphisms) to the rational cohomology algebra $(H^*(X,\mathbb{Q}),0)$ of $X$. Since I'm only interested in manifolds, this is equivalent to the case of weak equivalence between the de Rham algebra of differential forms $(A_{DR}(X),d)$ and the real cohomology algebra of $X$.

A compact oriented manifold $X$ is called geometrically formal if it admits a formal metric, i.e. a Riemannian metric $g$ such that the wedge product of harmonic forms is harmonic again. The typical class of examples are compact symmetric spaces. By using the Hodge decomposition, it's easy to see that geometric formality implies formality. The converse is not true; geometric formality is much stronger than formality. A formal metric has topological consequences which go beyond formality. One can for example show that the Betti numbers of a geometrically formal manifold $X$ are bounded above by those of a torus of the same dimension, e.g. $$b_i(X)\le b_i(T^{dim(X)})={dim(X) \choose i}.$$ This bound on the Betti numbers is also known for rationally elliptic spaces $X$, that is, spaces for which the total dimensions of the rational cohomology $H^*(X,\mathbb{Q})$ and of the rational homotopy $\pi_*(X)\otimes\mathbb{Q}$ are finite. For compact manifolds, this is clearly equivalent to having finite dimensional rational homotopy. There are examples of rationally elliptic compact simply-connected manifolds which are not geometrically formal. (Some of them are generalized symmetric spaces; see On formality of generalised symmetric spaces by D. Kotschick and S. Terzic.)

Are there examples of compact oriented manifolds which are geometrically formal, but not rationally elliptic? Are there even examples of simply connected ones?

Edit: As a side note, besides symmetric spaces, one can show that manifolds with nonnegative curvature operator are geometrically formal. But since those have nonnegative sectional curvature, they won't easily serve as counterexamples (at least in the simply connected case), because of the Bott conjecture, which states that all simply connected manifolds of nonnegative sectional curvature are rationally elliptic.

- Note that examples of this kind cannot be homogeneous spaces or even biquotients, as they are rationally elliptic. – archipelago Mar 11 at 20:43
- Is geometric formality preserved under rational homotopy equivalence? – Igor Belegradek Mar 12 at 3:13
2014-08-23T18:22:40
{ "domain": "mathoverflow.net", "url": "http://mathoverflow.net/questions/160075/are-there-geometrically-formal-manifolds-which-are-not-rationally-elliptic", "openwebmath_score": 0.8564515709877014, "openwebmath_perplexity": 422.4001557832525, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357195106375, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573136612788 }
https://web2.0calc.com/questions/triangle_71
+0

# triangle

The incircle of radius 4 of a triangle ABC touches the side BC at D. If BD=6, DC=10, then what is the area of triangle ABC?

Jun 10, 2021

#1 +524 +2

Okay, let's imagine the scenario as in this diagram. Lengths of the two tangents drawn from the same point are equal: BE = BD = 6 cm, CF = CD = 10 cm, AF = AE = x cm.

Semi-perimeter of the triangle: $$s={AB+BC+CA \over 2}$$ $$= {6+10+10+6+x+x \over 2}$$ $$={32+2x \over 2}$$ $$=16+x$$

Area of △ABC $$=\sqrt{s(s-a)(s-b)(s-c)}$$ $$=\sqrt{(16+x)(16+x-16)(16+x-x-6)(16+x-x-10)}$$ $$=\sqrt{(16+x)60x}$$                                       ...(1)

Also, area of △ABC $$=2 ×$$ area of $$(△AOE+ △OBD+△OCD)$$ $$=2×[({1\over 2}×4×x)+({1\over 2}×4×6)+({1\over 2}×4×10)]$$ $$=4x+24+40$$ $$=4x+64$$                                           ...(2)

Equating eq (1) and (2): $$\sqrt{(16+x)60x}=4x+64$$ Squaring both sides: $$60x(16+x)=16(16+x)^2$$ Since $$16+x>0$$, divide both sides by $$16+x$$: $$60x=16(16+x)$$ $$44x=256$$ $$x={64\over 11}$$

Thus, area of △ABC $$=4x+64$$ $$={256\over 11}+64$$ $$={960\over 11}\approx 87.3$$ sq. units

~ Hope you got it. Thanks.

Jun 11, 2021 edited by amygdaleon305  Jun 11, 2021

#2 +121054 +1

Very nice, amy  !!!!!

Jun 11, 2021

#3 +524 0

Thanks a lot Phill!

amygdaleon305  Jun 12, 2021
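A quick numerical cross-check of the algebra above (an addition, not part of the original thread):

from math import sqrt

x = 64 / 11                      # AE = AF
s = 16 + x                       # semi-perimeter
a, b, c = 16, 10 + x, 6 + x      # BC, CA, AB
heron = sqrt(s * (s - a) * (s - b) * (s - c))
print(heron, 4 * s, 960 / 11)    # all three agree, about 87.27
print(heron / s)                 # inradius recovered: 4.0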
2021-09-18T22:02:26
{ "domain": "0calc.com", "url": "https://web2.0calc.com/questions/triangle_71", "openwebmath_score": 0.9553095102310181, "openwebmath_perplexity": 10298.839529835743, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357195106375, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573136612788 }
http://math.stackexchange.com/questions/69161/open-ball-over-the-real-numbers
# Open ball over the real numbers

In my book, it says that any open ball $B(a,r)$ over the real numbers is equal to the open interval $(a-r,a+r)$. I wonder how I can prove that this is true, only using the metric axioms. If the metric equals $d: \mathbb{R} \times \mathbb{R} \to \mathbb{R}: (a,b) \mapsto |b-a|$, then this is obviously true. I know that a metric represents a distance, but does it necessarily have to equal the actual distance function $d$ I mentioned above? The reason why I ask is that on $\mathbb{R}^n$ you have several metrics, e.g. the Euclidean metric, the maximum metric, ...

-

The statement is predicated on the assumption that $B(a,r)$ is defined in terms of the Euclidean metric. The function $$d:\mathbb{R}\times\mathbb{R}\to\mathbb{R}:(x,y)\mapsto 2\vert x-y\vert$$ is a perfectly good metric on $\mathbb{R}$ that generates the usual topology, but \begin{align*} B_d(a,r) &= \{x\in\mathbb{R}:d(a,x)<r\}\\ &= \{x\in\mathbb{R}:2\vert a-x\vert<r\}\\ &= \left(a-\frac{r}2,a+\frac{r}2\right), \end{align*} not $(a-r,a+r)$.
2014-11-28T19:23:38
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/69161/open-ball-over-the-real-numbers", "openwebmath_score": 0.9990891218185425, "openwebmath_perplexity": 141.99895469703657, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357195106374, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573136612787 }
https://math.stackexchange.com/questions/2797598/maximum-likelihood-estimator-of-fx-theta-x-2-0-theta-leq-x
# Maximum Likelihood Estimator of : $f(x) = \theta x^{-2}, \; \; 0< \theta \leq x < \infty$

Exercise : Find a maximum likelihood estimator of $\theta$ for : $f(x) = \theta x^{-2}, \; \; 0< \theta \leq x < \infty$.

Attempt : $$L(x;\theta) = \prod_{i=1}^n \theta x^{-2} \mathbb{I}_{[\theta, + \infty)}(x_i) = \theta^n \mathbb{I}_{[\theta, + \infty)}(\min x_i)$$

How should one proceed from now on to find an MLE? I think it should be as follows: $$\begin{cases} \theta \; \text{sufficiently large} \\ \min x_i \geq \theta \end{cases} \implies \hat{\theta} = \min x_i$$ Is my approach correct?

• Are you sure $f(x)$ has a maximum? – qwr May 27 '18 at 6:09
• @qwr Yes, it does. Look at the support of the sample $X$. – Rebellos May 27 '18 at 6:10

$$L(x;\theta) = \prod_{i=1}^n \theta x_{\color{red}i}^{-2} \mathbb{I}_{[\theta, + \infty)}(x_i)$$ If $\theta > x_i$ for any of the $x_i$, then the likelihood drops to $0$. Hence we need $\theta \le x_i$ for all $i \in \{ 1, \ldots, n\}$. Also, note that $\theta_1^n \le \theta_2^n$ if and only if $\theta_1 \le \theta_2$ (for positive $\theta$), so the factor $\theta^n$ is increasing in $\theta$. Hence we want $\theta$ to be as big as possible, but it needs to be bounded above by the minimum of the $x_i$. Hence, you are right that $\hat{\theta}=\min_{i \in \{1, \ldots, n\}} x_i$.
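A short simulation illustrating the result (an addition, not part of the original post). The density $\theta x^{-2}$ on $[\theta,\infty)$ is a Pareto density with shape 1 and scale $\theta$, so it can be sampled by inverse-CDF; the sample minimum then sits just above $\theta$. NumPy is assumed, and $\theta = 2.5$ and $n$ are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
theta = 2.5
n = 10_000
# F(x) = 1 - theta/x for x >= theta, so the inverse CDF is x = theta / (1 - u)
sample = theta / (1 - rng.uniform(size=n))
print(sample.min())   # the MLE: close to, and never below, theta = 2.5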
2019-09-21T13:31:36
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2797598/maximum-likelihood-estimator-of-fx-theta-x-2-0-theta-leq-x", "openwebmath_score": 0.9235687255859375, "openwebmath_perplexity": 215.2998510373586, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357195106374, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573136612787 }
http://math.stackexchange.com/questions/273367/norm-form-of-remainder-of-taylors-expansion
# norm form of remainder of Taylor's expansion Let $\mathbf x=(x_1,x_2,\cdots,x_n), \mathbf a=(a_1,a_2,\cdots,a_n)$, Can we write Taylor's expansion of $f(\mathbf x)$ at $\mathbf a$ as $f(\mathbf a)+(f_1(\mathbf a),f_2(\mathbf a),\cdots,f_n(\mathbf a))(\mathbf x-\mathbf a)^T+\frac{1}{2}(\mathbf x-\mathbf a)\left(\begin{array}{cccc}f_{11}(\mathbf a)&f_{12}(\mathbf a)&\cdots&f_{1n}(\mathbf a)\\f_{21}(\mathbf a)&f_{22}(\mathbf a)&\cdots&f_{2n}(\mathbf a)\\\vdots&\vdots&\vdots&\vdots\\f_{n1}(\mathbf a)&f_{n2}(\mathbf a)&\cdots&f_{nn}(\mathbf a)\end{array}\right)(\mathbf x-\mathbf a)^T+o(\|\mathbf x-\mathbf a\|^2)$ where the norm in the remainder is any norm of $\mathbb R^n$. If it can, please refer me to a textbook that explicitly mentions this. Thank you! - This is multivariable Taylor's formula with the remainder in Peano's form. One should assume that $f$ is twice continuously differentiable. The formula is stated in Encyclopedia of Mathematics where textbook references are given, such as
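A small numerical illustration of the Peano remainder (an addition, not part of the original thread), for the arbitrarily chosen example $f(x,y) = e^x \sin y$ expanded at the origin; the ratio of the remainder to $\|\mathbf x - \mathbf a\|^2$ tends to 0:

import numpy as np

# f(x, y) = exp(x) * sin(y) at a = (0, 0): gradient (0, 1), Hessian [[0, 1], [1, 0]],
# so the quadratic Taylor model is y + x*y.
def remainder(h):
    x, y = h
    return np.exp(x) * np.sin(y) - (y + x * y)

for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = t * np.array([1.0, 2.0])
    print(t, abs(remainder(h)) / np.dot(h, h))   # ratio shrinks like t, i.e. remainder is o(|h|^2)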
2016-06-30T01:23:08
{ "domain": "stackexchange.com", "url": "http://math.stackexchange.com/questions/273367/norm-form-of-remainder-of-taylors-expansion", "openwebmath_score": 0.9199670553207397, "openwebmath_perplexity": 199.67130951424315, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357189762612, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573133056991 }
http://underactuated.mit.edu/lyapunov.html
# Underactuated Robotics Algorithms for Walking, Running, Swimming, Flying, and Manipulation Russ Tedrake How to cite these notes   |   Send me your feedback Note: These are working notes used for a course being taught at MIT. They will be updated throughout the Spring 2020 semester. Lecture videos are available on YouTube. # Lyapunov Analysis Optimal control provides a powerful framework for formulating control problems using the language of optimization. But solving optimal control problems for nonlinear systems is hard! In many cases, we don't really care about finding the optimal controller, but would be satisfied with any controller that is guaranteed to accomplish the specified task. In many cases, we still formulate these problems using computational tools from optimization, and in this chapter we'll learn about tools that can provide guaranteed control solutions for systems that are beyond the complexity for which we can find the optimal feedback. There are many excellent books on Lyapunov analysis; for instance Slotine90 is an excellent and very readable reference and Khalil01 can provide a rigorous treatment. In this chapter I will summarize (without proof) some of the key theorems from Lyapunov analysis, but then will also introduce a number of numerical algorithms... many of which are new enough that they have not yet appeared in any mainstream textbooks. # Stability of the Damped Pendulum Recall that the equations of motion of the damped simple pendulum are given by $ml^2 \ddot{\theta} + mgl\sin\theta = -b\dot{\theta},$ which I've written with the damping on the right-hand side to remind us that it is an external torque that we've modeled. These equations represent a simple second-order differential equation; in chapter 2 we discussed at some length what was known about the solutions to this differential equation--in practice we do not have a closed-form solution for $\theta(t)$ as a function of the initial conditions. Since we couldn't provide a solution analytically, in chapter 2 we resorted to a graphical analysis, and confirmed the intuition that there are fixed points in the system (at $\theta = k\pi$ for every integer $k$) and that the fixed points at $\theta = 2\pi k$ are asymptotically stable with a large basin of attraction. The graphical analysis gave us this intuition, but can we actually prove this stability property? In a way that might also work for much more complicated systems? One route forward was from looking at the total system energy (kinetic + potential), which we can write down: $E(\theta,\dot{\theta}) = \frac{1}{2} ml^2\dot{\theta}^2 - mgl \cos\theta.$ Recall that the contours of this energy function are the orbits of the undamped pendulum. A natural route to proving the stability of the downward fixed points is by arguing that energy decreases for the damped pendulum (with $b>0$) and so the system will eventually come to rest at the minimum energy, $E = -mgl$, which happens at $\theta=2\pi k$. Let's make that argument slightly more precise. Evaluating the time derivative of the energy reveals $\frac{d}{dt} E = - b\dot\theta^2 \le 0.$ This is sufficient to demonstrate that the energy will never increase, but it doesn't actually prove that the energy will converge to the minimum when $b>0$ because there are multiple states(not only the minimum) for which $\dot{E}=0$. 
To take the last step, we must observe that set of states with $\dot\theta=0$ is not an invariant set; that if the system is in, for instance $\theta=\frac{\pi}{4}, \dot\theta=0$ that it will not stay there, because $\ddot\theta \neq 0$. And once it leaves that state, energy will decrease once again. In fact, the fixed points are the only subset the set of states where $\dot{E}=0$ which do form an invariant set. Therefore we can conclude that as $t\rightarrow \infty$, the system will indeed come to rest at a fixed point (though it could be any fixed point with an energy less than or equal to the initial energy in the system, $E(0)$). This is an important example. It demonstrated that we could use a relatively simple function -- the total system energy -- to describe something about the long-term dynamics of the pendulum even though the actual trajectories of the system are (analytically) very complex. It also demonstrated one of the subtleties of using an energy-like function that is non-increasing (instead of strictly decreasing) to prove asymptotic stability. Lyapunov functions generalize this notion of an energy function to more general systems, which might not be stable in the sense of some mechanical energy. If I can find any positive function, call it $V(\bx)$, that gets smaller over time as the system evolves, then I can potentially use $V$ to make a statement about the long-term behavior of the system. $V$ is called a Lyapunov function. Recall that we defined three separate notions for stability of a fixed-point of a nonlinear system: stability i.s.L., asymptotic stability, and exponential stability. We can use Lyapunov functions to demonstrate each of these, in turn. # Lyapunov's Direct Method Given a system $\dot{\bx} = f(\bx)$, with $f$ continuous, and for some region ${\cal B}$ around the origin (specifically an open subset of $\mathbf{R}^n$ containing the origin), if I can produce a scalar, continuously-differentiable function $V(\bx)$, such that \begin{gather*} V(\bx) > 0, \forall \bx \in {\cal B} \setminus \{0\} \quad V(0) = 0, \text{ and} \\ \dot{V}(\bx) = \pd{V}{\bx} f(\bx) \le 0, \forall \bx \in {\cal B} \setminus \{0\} \quad \dot{V}(0) = 0, \end{gather*} then the origin $(\bx = 0)$ is stable in the sense of Lyapunov (i.s.L.). [Note: the notation $A \setminus B$ represents the set $A$ with the elements of $B$ removed.] If, additionally, we have $$\dot{V}(\bx) = \pd{V}{\bx} f(\bx) < 0, \forall \bx \in {\cal B} \setminus \{0\},$$ then the origin is (locally) asymptotically stable. And if we have $$\dot{V}(\bx) = \pd{V}{\bx} f(\bx) \le -\alpha V(x), \forall \bx \in {\cal B} \setminus \{0\},$$ for some $\alpha>0$, then the origin is (locally) exponentially stable. Note that for the sequel we will use the notation $V \succ 0$ to denote a positive-definite function, meaning that $V(0)=0$ and $V(\bx)>0$ for all $\bx\ne0$ (and also $V \succeq 0$ for positive semi-definite, $V \prec 0$ for negative-definite functions). The intuition here is exactly the same as for the energy argument we made in the pendulum example: since $\dot{V}(x)$ is always zero or negative, the value of $V(x)$ will only get smaller (or stay the same) as time progresses. Inside the subset ${\cal B}$, for every $\epsilon$-ball, I can choose a $\delta$ such that $|x(0)|^2 < \delta \Rightarrow |x(t)|^2 < \epsilon, \forall t$ by choosing $\delta$ sufficiently small so that the sublevel-set of $V(x)$ for the largest value that $V(x)$ takes in the $\delta$ ball is completely contained in the $\epsilon$ ball. 
Since the value of $V$ can only get smaller (or stay constant) in time, this gives stability i.s.L.. If $\dot{V}$ is strictly negative away from the origin, then it must eventually get to the origin (asymptotic stability). The exponential condition is implied by the fact that $\forall t>0, V(\bx(t)) \le V(\bx(0)) e^{-\alpha t}$. Notice that the system analyzed above, $\dot{\bx}=f(\bx)$, did not have any control inputs. Therefore, Lyapunov analysis is used to study either the passive dynamics of a system or the dynamics of a closed-loop system (system + control in feedback). We will see generalizations of the Lyapunov functions to input-output systems later in the text. # Global Stability The notion of a fixed point being stable i.s.L. is inherently a local notion of stability (defined with $\epsilon$- and $\delta$- balls around the origin), but the notions of asymptotic and exponential stability can be applied globally. The Lyapunov theorems work for this case, too, with only minor modification. # Lyapunov analysis for global stability Given a system $\dot{\bx} = f(\bx)$, with $f$ continuous, if I can produce a scalar, continuously-differentiable function $V(\bx)$, such that \begin{gather*} V(\bx) \succ 0, \\ \dot{V}(\bx) = \pd{V}{\bx} f(\bx) \prec 0, \text{ and} \\ V(\bx) \rightarrow \infty \text{ whenever } ||x||\rightarrow \infty,\end{gather*} then the origin $(\bx = 0)$ is globally asymptotically stable (G.A.S.). If additionally we have that $$\dot{V}(\bx) \preceq -\alpha V(\bx),$$ for some $\alpha>0$, then the origin is globally exponentially stable. The new condition, on the behavior as $||\bx|| \rightarrow \infty$ is known as "radially unbounded", and is required to make sure that trajectories cannot diverge to infinity even as $V$ decreases; it is only required for global stability analysis. # LaSalle's Invariance Principle Perhaps you noticed the disconnect between the statement above and the argument that we made for the stability of the pendulum. In the pendulum example, using the mechanical energy resulted in a Lyapunov function that was only negative semi-definite, but we eventually argued that the fixed points were asymptotically stable. That took a little extra work, involving an argument about the fact that the fixed points were the only place that the system could stay with $\dot{E}=0$; every other state with $\dot{E}=0$ was only transient. We can formalize this idea for the more general Lyapunov function statements--it is known as LaSalle's Theorem. # LaSalle's Theorem Given a system $\dot{\bx} = f(\bx)$ with $f$ continuous. If we can produce a scalar function $V(\bx)$ with continuous derivatives for which we have $$V(\bx) \succ 0,\quad \dot{V}(\bx) \preceq 0,$$ and $V(\bx)\rightarrow \infty$ as $||\bx||\rightarrow \infty$, then $\bx$ will converge to the largest invariant set where $\dot{V}(\bx) = 0$. To be clear, an invariant set, ${\cal G}$, of the dynamical system is a set for which $\bx(0)\in{\cal G} \Rightarrow \forall t>0, \bx(t) \in {\cal G}$. In other words, once you enter the set you never leave. The "largest invariant set" need not be connected; in fact for the pendulum example each fixed point is an invariant set, so the largest invariant set is the union of all the fixed points of the system. There are also variants of LaSalle's Theorem which work over a region. Finding a Lyapunov function which $\dot{V} \prec 0$ is more difficult than finding one that has $\dot{V} \preceq 0$. 
LaSalle's theorem gives us the ability to make a statement about asymptotic stability even in this case. In the pendulum example, every state with $\dot\theta=0$ had $\dot{E}=0$, but only the fixed points are in the largest invariant set. # Relationship to the Hamilton-Jacobi-Bellman equations At this point, you might be wondering if there is any relationship between Lyapunov functions and the cost-to-go functions that we discussed in the context of dynamic programming. After all, the cost-to-go functions also captured a great deal about the long-term dynamics of the system in a scalar function. We can see the connection if we re-examine the HJB equation $0 = \min_\bu \left[ \ell(\bx,\bu) + \pd{J^*}{\bx}f(\bx,\bu). \right]$Let's imagine that we can solve for the optimizing $\bu^*(\bx)$, then we are left with $0 = \ell(\bx,\bu^*) + \pd{J^*}{\bx}f(\bx,\bu^*)$ or simply $\dot{J}^*(\bx) = -\ell(\bx,\bu^*) \qquad \text{vs} \qquad \dot{V}(\bx) \preceq 0.$ In other words, in optimal control we must find a cost-to-go function which matches this gradient for every $\bx$; that's very difficult and involves solving a potentially high-dimensional partial differential equation. By contrast, Lyapunov analysis is asking for much less - any function which is going downhill (at any rate) for all states. This can be much easier, for theoretical work, but also for our numerical algorithms. Also note that if we do manage to find the optimal cost-to-go, $J^*(\bx)$, then it can also serve as a Lyapunov function so long as $\ell(\bx,\bu^*(\bx)) \succeq 0$. Include instability results, as in Briat15 Theorem 2.2.5 # Lyapunov analysis with convex optimization One of the primary limitations in Lyapunov analysis as I have presented it so far is that it is potentially very difficult to come up with suitable Lyapunov function candidates for interesting systems, especially for underactuated systems. ("Underactuated" is almost synonymous with "interesting" in my vocabulary.) Even if somebody were to give me a Lyapunov candidate for a general nonlinear system, the Lyapunov conditions can be difficult to check -- for instance, how would I check that $\dot{V}$ is strictly negative for all $\bx$ except the origin if $\dot{V}$ is some arbitrarily complicated nonlinear function over a vector $\bx$? In this section, we'll look at some computational approaches to verifying the Lyapunov conditions, and even to searching for (the coefficients of) the Lyapunov functions themselves. If you're imagining numerical algorithms to check the Lyapunov conditions for complicated Lyapunov functions and complicated dynamics, the first thought is probably that we can evaluate $V$ and $\dot{V}$ at a large number of sample points and check whether $V$ is positive and $\dot{V}$ is negative. This does work, and could potentially be combined with some smoothness or regularity assumptions to generalize beyond the sample points. Add python bindings for the pendulum energy lp lyapunov example. But in many cases we will be able to do better -- providing optimization algorithms that will rigorously check these conditions for all $\bx$ without dense sampling; these will also give us additional leverage in formulating the search for Lyapunov functions. # Lyapunov analysis for linear systems Let's take a moment to see how things play out for linear systems. 
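Before turning to linear systems, here is a minimal sketch of the sampling idea just mentioned, applied to the damped pendulum's energy function from the opening of this chapter (an addition to these notes; the parameter values are arbitrary, and checking sample points is evidence, not a proof):

import numpy as np

m = l = 1.0
g = 9.81
b = 0.1

def V(q, qd):
    # pendulum energy, shifted so that V = 0 at the downward fixed point
    return 0.5 * m * l**2 * qd**2 + m * g * l * (1 - np.cos(q))

def Vdot(q, qd):
    # dV/dt along trajectories of the damped pendulum (analytically equal to -b*qd**2)
    qdd = -(b * qd + m * g * l * np.sin(q)) / (m * l**2)
    return m * l**2 * qd * qdd + m * g * l * np.sin(q) * qd

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, 10_000)
qd = rng.uniform(-10, 10, 10_000)
print(np.all(V(q, qd) >= 0), np.all(Vdot(q, qd) <= 1e-9))   # True True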
# Lyapunov analysis for stable linear systems Imagine you have a linear system, $\dot\bx = {\bf A}\bx$, and can find a Lyapunov function $$V(\bx) = \bx^T {\bf P} \bx, \quad {\bf P} = {\bf P^T} \succ 0,$$ which also satisfies $$\dot{V}(\bx) = \bx^T {\bf PA} \bx + \bx^T {\bf A}^T {\bf P}\bx \prec 0.$$ Then the origin is globally asymptotically stable. Note that the radially-unbounded condition is satisfied by ${\bf P} \succ 0$, and that the derivative condition is equivalent to the matrix condition $${\bf PA} + {\bf A}^T {\bf P} \prec 0.$$ For stable linear systems the existence of a quadratic Lyapunov function is actually a necessary (as well as sufficient) condition. Furthermore, a Lyapunov function can always be found by finding the positive-definite solution to the matrix Lyapunov equation $${\bf PA} + {\bf A}^T{\bf P} = - {\bf Q},\label{eq:algebraic_lyapunov}$$ for any ${\bf Q}={\bf Q}^T\succ 0$. add an example here. double integrator? re-analyze the LQR output? This is a very powerful result - for nonlinear systems it will be potentially difficult to find a Lyapunov function, but for linear systems it is straight-forward. In fact, this result is often used to propose candidates for non-linear systems, e.g., by linearizing the equations and solving a local linear Lyapunov function which should be valid in the vicinity of a fixed point. # Lyapunov analysis as a semi-definite program (SDP) Lyapunov analysis for linear systems has an extremely important connection to convex optimization. In particular, we could have also formulated the Lyapunov conditions for linear systems above using semi-definite programming (SDP). Semidefinite programming is a subset of convex optimization -- an extremely important class of problems for which we can produce efficient algorithms that are guaranteed find the global optima solution (up to a numerical tolerance and barring any numerical difficulties). If you don't know much about convex optimization or want a quick refresher, please take a few minutes to read the optimization preliminaries in the appendix. The main requirement for this section is to appreciate that it is possible to formulate efficient optimization problems where the constraints include specifying that one or more matrices are positive semi-definite (PSD). These matrices must be formed from a linear combination of the decision variables. For a trivial example, the optimization $$\min_a a,\quad \subjto \begin{bmatrix} a & 0 \\ 0 & 1 \end{bmatrix} \succeq 0,$$ returns $a = 0$ (up to numerical tolerances). The value in this is immediate for linear systems. For example, we can formulate the search for a Lyapunov function for the linear system $\dot\bx = {\bf A} \bx$ by using the parameters ${\bf p}$ to populate a symmetric matrix ${\bf P}$ and then write the SDP: $\find_{\bf p} \quad \subjto \quad {\bf P} \succeq 0, \quad {\bf PA} + {\bf A}^T {\bf P} \preceq 0.$ Note that you would probably never use that particular formulation, since there specialized algorithms for solving the simple Lyapunov equation which are more efficient and more numerically stable. But the SDP formulation does provide something new -- we can now easily formulate the search for a "common Lyapunov function" for uncertain linear systems. # Common Lyapunov analysis for linear systems Suppose you have a system governed by the equations $\dot\bx = {\bf A}\bx$, where the matrix ${\bf A}$ is unknown, but its uncertain elements can be bounded. 
There are a number of ways to write down this uncertainty set; let us choose to write this by describing ${\bf A}$ as the convex combination of a number of known matrices, $${\bf A} = \sum_{i} \beta_i {\bf A}_i, \quad \sum_i \beta_i = 1, \quad \forall i, \beta_i > 0.$$ This is just one way to specify the uncertainty; geometrically it is describing a polygon of uncertain parameters (in the space of elements of ${\bf A}$ with each ${\bf A}_i$ as one of the vertices in the polygon. Now we can formulate the search for a common Lyapunov function using $\find_{\bf p} \quad \subjto \quad {\bf P} \succeq 0, \quad \forall_i, {\bf PA}_i + {\bf A}_i^T {\bf P} \preceq 0.$ The solver will then return a matrix ${\bf P}$ which satisfies all of the constraints, or return saying "problem is infeasible". It can easily be verified that if ${\bf P}$ satisfies the Lyapunov condition at all of the vertices, then it satisfies the condition for every ${\bf A}$ in the set: ${\bf P}(\sum_i \beta_i {\bf A}_i) + (\sum_i \beta_i {\bf A}_i)^T {\bf P} = \sum_i \beta_i ({\bf P A}_i + {\bf A}_i^T {\bf P}) \preceq 0,$ since $\forall i$, $\beta_i > 0$. Note that, unlike the simple Lyapunov equation for a known linear system, this condition being satisfied is a sufficient but not a necessary condition -- it is possible that the set of uncertain matrices ${\bf A}$ is robustly stable, but that this stability cannot be demonstrated with a common quadratic Lyapunov function. You can try this example for yourself in . examples/lyapunov.ipynb As always, make sure that you open up the code and take a look. There are many small variants of this result that are potentially of interest. For instance, a very similar set of conditions can certify "mean-square stability" for linear systems with multiplicative noise (see e.g. Boyd94, § 9.1.1). Add example or exercise based on e.g. Briat15 (LPV) example 1.3.1, showing off how powerful this can be. Also perhaps the parameter-dependent robust stability of his Def 2.3.5. This example is very important because it establishes a connection between Lyapunov functions and (convex) optimization. But so far we've only demonstrated this connection for linear systems where the PSD matrices provide a magical recipe for establishing the positivity of the (quadratic) functions for all $\bx$. Is there any hope of extending this type of analysis to more general nonlinear systems? Surprisingly, it turns out that there is. # Lyapunov analysis for polynomial systems Sums of squares optimization provides a natural generalization of SDP to optimizing over positive polynomials (if you are not familiar, take a moment to read the appendix). This suggests that it may be possible to generalize the optimization approach using SDP to search for Lyapunov functions for linear systems to searching for Lyapunov functions for at least the polynomial systems: $\dot\bx = f(\bx),$ where $f$ is a vector-valued polynomial function. If we parameterize a fixed-degree Lyapunov candidate as a polynomial with unknown coefficients, e.g., $V_\alpha(\bx) = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_1x_2 + \alpha_4 x_1^2 + ...,$ then the time-derivative of $V$ is also a polynomial, and I can formulate the optimization: \begin{align*} \find_\alpha, \quad \subjto \quad& V_\alpha(\bx) \sos \\ & -\dot{V}_\alpha(\bx) = -\pd{V_\alpha}{\bx} f(\bx) \sos. \end{align*} Because this is a convex optimization, the solver will return a solution if one exists. 
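As a numerical aside before the SOS examples, here is a minimal sketch of the linear-systems tools above (an addition to these notes): the Lyapunov equation solved with SciPy, and the common-Lyapunov search posed as an SDP with CVXPY. The libraries and the particular matrices are assumptions chosen for illustration; the text's own notebook, examples/lyapunov.ipynb, uses Drake instead.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov
import cvxpy as cp

# A single stable system: solve P A + A^T P = -Q directly.
A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])
P = solve_continuous_lyapunov(A1.T, -np.eye(2))
print(np.linalg.eigvalsh(P))             # both eigenvalues positive, so V = x^T P x works

# Common Lyapunov function for two "vertex" systems A1, A2, as an SDP.
A2 = np.array([[0.0, 1.0], [-2.0, -0.8]])
Pc = cp.Variable((2, 2), symmetric=True)
eps = 1e-3
constraints = [Pc >> eps * np.eye(2)]
constraints += [Pc @ Ai + Ai.T @ Pc << -eps * np.eye(2) for Ai in (A1, A2)]
cp.Problem(cp.Minimize(cp.trace(Pc)), constraints).solve()
print(Pc.value)                           # a single P certifying both vertices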
# Verifying a Lyapunov candidate via SOS This example is example 7.2 from Parrilo00. Consider the nonlinear system: \begin{align*} \dot{x}_0 =& -x_0 - 2x_1^2 \\ \dot{x}_1 =& -x_1 - x_0 x_1 - 2x_1^3,\end{align*} and the fixed Lyapunov function $V(x) = x_0^2 + 2x_1^2$, test if $\dot{V}(x)$ is negative definite. The numerical solution can be written in a few lines of code, and is a convex optimization. examples/lyapunov.ipynb # Searching for a Lyapunov function via SOS Verifying a candidate Lyapunov function is all well and good, but the real excitement starts when we use optimization to find the Lyapunov function. In the following code, we parameterize $V(x)$ as the polynomial containing all monomials up to degree 2, with the coefficients as decision variables: $$V(x) = c_0 + c_1x_0 + c_2x_1 + c_3x_0^2 + c_4 x_0x_1 + c_5 x_1^2.$$ We will set the scaling (arbitrarily) to avoid numerical issues by setting $V(0)=0$, $V([1,0]) = 1$. Then we write: \begin{align*} \find_{\bc} \ \ \subjto \ \ & V\text{ is sos, } \\ & -\dot{V} \text{ is sos.}\end{align*} examples/lyapunov.ipynb Up to numerical convergence tolerance, it discovers the same coefficients that we chose above (zeroing out the unnecessary terms). It is important to remember that there are a handful of gaps which make the existence of this solution a sufficient condition (for proving that every sub-level set of $V$ is an invariant set of $f$) instead of a necessary one. First, there is no guarantee that a stable polynomial system can be verified using a polynomial Lyapunov function (of any degree, and in fact there are known counter-examples Ahmadi11a) and here we are only searching over a fixed-degree polynomial. Second, even if a polynomial Lyapunov function does exist, there is a gap between the SOS polynomials and the positive polynomials. Despite these caveats, I have found this formulation to be surprisingly effective in practice. Intuitively, I think that this is because there is relatively a lot of flexibility in the Lyapunov conditions -- if you can find one function which is a Lyapunov function for the system, then there are also many "nearby" functions which will satisfy the same constraints. # Lyapunov functions for estimating regions of attraction There is another very important connection between Lyapunov functions and the concept of an invariant set: any sub-level set of a Lyapunov function is also an invariant set. This gives us the ability to use sub-level sets of a Lyapunov function as approximations of the region of attraction for nonlinear systems. # Lyapunov invariant set and region of attraction theorem Given a system $\dot{\bx} = f(\bx)$ with $f$ continuous, if we can find a scalar function $V(\bx) \succ 0$ and a sub-level set $${\cal G}: \{ \bx | V(\bx) < \rho \}$$ on which $$\forall \bx \in {\cal G}, \dot{V}(\bx) \preceq 0,$$ then ${\cal G}$ is an invariant set. By LaSalle, $\bx$ will converge to the largest invariant subset of ${\cal G}$ on which $\dot{V}=0$. Furthermore, if $\dot{V}(\bx) \prec 0$ in ${\cal G}$, then the origin is locally asymptotically stable and the set ${\cal G}$ is inside the region of attraction of this fixed point. Alternatively, if $\dot{V}(\bx) \preceq 0$ in ${\cal G}$ and $\bx = 0$ is the only invariant subset of ${\cal G}$ where $\dot{V}=0$, then the origin is asymptotically stable and the set ${\cal G}$ is inside the region of attraction of this fixed point. 
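As a small aside on the verification example above (example 7.2 from Parrilo00): for that particular system no solver is needed, because $-\dot{V}$ has the explicit sum-of-squares decomposition $-\dot{V} = 2(x_0 + 2x_1^2)^2 + 4 x_1^2$. A short SymPy check of this identity (an addition to these notes):

import sympy as sp

x0, x1 = sp.symbols("x0 x1")
f = [-x0 - 2*x1**2, -x1 - x0*x1 - 2*x1**3]
V = x0**2 + 2*x1**2
Vdot = sp.diff(V, x0)*f[0] + sp.diff(V, x1)*f[1]

# an explicit sum-of-squares certificate for -Vdot
certificate = 2*(x0 + 2*x1**2)**2 + 4*x1**2
print(sp.simplify(-Vdot - certificate))   # 0, so -Vdot is a sum of squares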
# Region of attraction for a one-dimensional system Consider the first-order, one-dimensional system $\dot{x} = -x + x^3.$ We can quickly understand this system using our tools for graphical analysis. In the vicinity of the origin, $\dot{x}$ looks like $-x$, and as we move away it looks increasingly like $x^3$. There is a stable fixed point at the origin and unstable fixed points at $\pm 1$. In fact, we can deduce visually that the region of attraction to the stable fixed point at the origin is $\bx \in (-1,1)$. Let's see if we can demonstrate this with a Lyapunov argument. First, let us linearize the dynamics about the origin and use the Lyapunov equation for linear systems to find a candidate $V(\bx)$. Since the linearization is $\dot{x} = -x$, if we take ${\bf Q}=1$, then we find ${\bf P}=\frac{1}{2}$ is the positive definite solution to the algebraic Lyapunov equation (\ref{eq:algebraic_lyapunov}). Proceeding with $$V(\bx) = \frac{1}{2} x^2,$$ we have $$\dot{V} = x (-x + x^3) = -x^2 + x^4.$$ This function is zero at the origin, negative for $|x| < 1$, and positive for $|x| > 1$. Therefore we can conclude that the sub-level set $V < \frac{1}{2}$ is invariant and the set $x \in (-1,1)$ is inside the region of attraction of the nonlinear system. In fact, this estimate is tight. # Robustness analysis using "common Lyapunov functions" While we will defer most discussions on robustness analysis until later in the notes, there is a simple and powerful idea that can be presented now: the idea of a common Lyapunov function. Imagine that you have a model of a dynamical system but that you are uncertain about some of the parameters. For example, you have a model of a quadrotor, and are fairly confident about the mass and lengths (both of which are easy to measure), but are not confident about the moment of inertia. One approach to robustness analysis is to define a bounded uncertainty, which could take the form of $$\dot{\bx} = f_\alpha(\bx), \quad \alpha_{min} \le \alpha \le \alpha_{max},$$ with $\alpha$ a vector of uncertain parameters in your model. Richer specifications of the uncertainty bounds are also possible, but this will serve for our examples. In standard Lyapunov analysis, we are searching for a function that goes downhill for all $\bx$ to make statements about the long-term dynamics of the system. In common Lyapunov analysis, we can make many similar statements about the long-term dynamics of an uncertain system if we can find a single Lyapunov function that goes downhill for all possible values of $\alpha$. If we can find such a function, then we can use it to make statements with all of the variations we've discussed (local, regional, or global; in the sense of Lyapunov, asymptotic, or exponential). # A one-dimensional system with gain uncertainty Let's consider the same one-dimensional example used above, but add an uncertain parameter into the dynamics. In particular, consider the system: $$\dot{x} = -x + \alpha x^3, \quad \frac{3}{4} < \alpha < \frac{3}{2}.$$ Plotting the graph of the one-dimensional dynamics for a few values of $\alpha$, we can see that the fixed point at the origin is still stable, but the robust region of attraction to this fixed point (shaded in blue below) is smaller than the region of attraction for the system with $\alpha=1$. 
Taking the same Lyapunov candidate as above, $V = \frac{1}{2} x^2$, we have $$\dot{V} = -x^2 + \alpha x^4.$$ This function is zero at the origin, and negative for all $\alpha$ whenever $x^2 > \alpha x^4$, or $$|x| < \frac{1}{\sqrt{\alpha_{max}}} = \sqrt{\frac{2}{3}}.$$ Therefore, we can conclude that $|x| < \sqrt{\frac{2}{3}}$ is inside the robust region of attraction of the uncertain system. Not all forms of uncertainty are as simple to deal with as the gain uncertainty in that example. For many forms of uncertainty, we might not even know the location of the fixed points of the uncertain system. In this case, we can often still use common Lyapunov functions to give some guarantees about the system, such as guarantees of robust set invariance. For instance, if you have uncertain parameters on a quadrotor model, you might be ok with the quadrotor stabilizing to a pitch of $0.01$ radians, but you would like to guarantee that it definitely does not flip over and crash into the ground. # A one-dimensional system with additive uncertainty Now consider the system: $$\dot{x} = -x + x^3 + \alpha, \quad -\frac{1}{4} < \alpha < \frac{1}{4}.$$ Plotting the graph of the one-dimensional dynamics for a few values of $\alpha$, this time we can see that the fixed point is not necessarily at the origin; the location of the fixed point moves depending on the value of $\alpha$. But we should be able to guarantee that the uncertain system will stay near the origin if it starts near the origin, using an invariant set argument. Taking the same Lyapunov candidate as above, $V = \frac{1}{2} x^2$, we have $$\dot{V} = -x^2 + x^4 + \alpha x.$$ This Lyapunov function allows us to easily verify, for instance, that $V \le \frac{1}{3}$ is a robust invariant set, because whenever $V = \frac{1}{3}$, we have $$\forall \alpha \in [\alpha_{min},\alpha_{max}],\quad \dot{V}(x,\alpha) < 0.$$ Therefore $V$ can never start at less than one-third and cross over to greater than one-third. To see this, see that $$V=\frac{1}{3} \quad \Rightarrow \quad x = \pm \sqrt{\frac{2}{3}} \quad \Rightarrow \quad \dot{V} = -\frac{2}{9} \pm \alpha \sqrt{\frac{2}{3}} < 0, \forall \alpha \in \left[-\frac{1}{4},\frac{1}{4} \right].$$ Note that not all sub-level sets of this invariant set are invariant. For instance $V < \frac{1}{32}$ does not satisfy this condition, and by visual inspection we can see that it is in fact not robustly invariant. # Region of attraction estimation for polynomial systems Now we have arrived at the tool that I believe can be a work-horse for many serious robotics applications. Most of our robots are not actually globally stable (that's not because they are robots -- if you push me hard enough, I will fall down, too), which means that understanding the regions where a particular controller can be guaranteed to work can be of critical importance. Sums-of-squares optimization effectively gives us an oracle which we can ask: is this polynomial positive for all $\bx$? To use this for regional analysis, we have to figure out how to modify our questions to the oracle so that the oracle will say "yes" or "no" when we ask if a function is positive over a certain region which is a subset of $\Re^n$. That trick is called the S-procedure. It is closely related to the Lagrange multipliers from constrained optimization, and has deep connections to "Positivstellensatz" from algebraic geometry. # The S-procedure Consider a scalar polynomial, $p(\bx)$, and a semi-algebraic set $g(\bx) \preceq 0$, where $g$ is a vector of polynomials. 
If I can find a polynomial "multiplier", $\lambda(\bx)$, such that $p(\bx) + \lambda^T(\bx) g(\bx) \sos, \quad \text{and} \quad \lambda(\bx) \sos,$ then this is sufficient to demonstrate that $$p(\bx)\ge 0 \quad \forall \bx \in \{ g(\bx) \le 0 \}.$$ To convince yourself, observe that when $g(\bx) \le 0$, it is only harder to be positive, but when $g(\bx) > 0$, it is possible for the combined function to be SOS even if $p(\bx)$ is negative. We can also handle equality constraints with only a minor modification -- we no longer require the multiplier to be positive. If I can find a polynomial "multiplier", $\lambda(\bx)$, such that $p(\bx) + \lambda^T(\bx) g(\bx) \sos$ then this is sufficient to demonstrate that $$p(\bx)\ge 0 \quad \forall \bx \in \{ g(\bx) = 0 \}.$$ Here the intuition is that $\lambda(x)$ can add arbitrary positive terms to help me be SOS, but those terms contribute nothing precisely when $g(x)=0$.

# Region of attraction for the one-dimensional cubic system

Let's return to our example from above: $\dot{x} = -x + x^3$ and try to use SOS optimization to demonstrate that the region of attraction of the fixed point at the origin is $x \in (-1,1)$, using the Lyapunov candidate $V = x^2.$ First, define the multiplier polynomial, $\lambda(x) = c_0 + c_1 x + c_2 x^2.$ Then define the optimization \begin{align*} \find_{\bf c} \quad & \\ \subjto \quad& - \dot{V}(x) - \lambda(x) (1-V(x)) \sos \\ & \lambda(x) \sos \end{align*} You can try this example for yourself in the notebook: examples/lyapunov.ipynb

While the example above only verifies that the one-sub-level set of the pre-specified Lyapunov candidate is negative (certifying the ROA that we already understood), we can generalize the optimization to allow us to search for the largest sub-level set (with the objective using a convex approximation for volume). We can even search for the coefficients of the Lyapunov function using an iteration of convex optimizations. There are a number of variations and nuances in the various formulations, and some basic rescaling tricks that can help make the numerics of the problem better for the solvers.

# Region of Attraction codes in Drake

In Drake, we have packaged most of the work in setting up and solving the sums-of-squares optimization for regions of attraction into a single method RegionOfAttraction(system, context, options). This makes it as simple as, for instance:

x = Variable("x")
sys = SymbolicVectorSystem(state=[x], dynamics=[-x+x**3])
context = sys.CreateDefaultContext()
V = RegionOfAttraction(sys, context)

examples/lyapunov.ipynb

Remember that although we have tried to make it convenient to call these functions, they are not a black box. I highly recommend opening up the RegionOfAttraction method and understanding how it works. There are lots of different options / formulations, and numerous numerical recipes to improve the numerics of the optimization problem.

# Seeding the optimization with linear analysis

LQR gives the cost-to-go which can be used as the Lyapunov candidate. Otherwise, use a Lyapunov equation. You may not even need to search for a better Lyapunov function, but rather just ask the question: what is the largest region of attraction that can be demonstrated for the nonlinear system using the Lyapunov function from linear analysis?

# Rigid-body dynamics are (rational) polynomial

We've been talking a lot in this chapter about numerical methods for polynomial systems. But even our simple pendulum has a $\sin\theta$ in the dynamics. Have I been wasting your time?
Must we just resort to polynomial approximations of the non-polynomial equations? It turns out that our polynomial tools can perform exact analysis of the manipulation equation for almost all of our robots (the most notable exception is a robot with helical/screw joints; see Wampler11). We just have to do a little more work to reveal that structure. Let us first observe that rigid-body kinematics are polynomial (except the helical joint). This is fundamental -- the very nature of a "rigid body" assumption is that Euclidean distance is preserved between points on the body; if $\bp_1$ and $\bp_2$ are two points on a body, then the kinematics enforce that $|\bp_1 - \bp_2|_2^2$ is constant -- these are polynomial constraints. Of course, we commonly write the kinematics in minimal coordinates using $\sin\theta$ and $\cos\theta$. But because of the rigid body assumption, these terms only appear in the simplest forms, and we can simply make new variables $s_i = \sin\theta_i, c_i = \cos\theta_i$, and add the constraint that $s_i^2 + c_i^2 = 1.$ For a more thorough discussion see, for instance, Wampler11 and Sommese05. Since the potential energy of a multi-body system is simply an accumulation of weight times the vertical position for all of the points on the body, the potential energy is polynomial. If configurations (positions) of our robots can be described by polynomials, then velocities can as well: forward kinematics $\bp_i = f(\bq)$ implies that $\dot\bp_i = \frac{\partial f}{\partial \bq}\dot\bq,$ which is polynomial in $s, c, \dot\theta$. Since the kinetic energy of our robot is given by the accumulation of the kinetic energy of all the mass, $T = \sum_i \frac{1}{2} m_i v_i^Tv_i,$ the kinetic energy is polynomial, too (even when we write it with inertial matrices and angular velocities). Finally, the equations of motion can be obtained by taking derivatives of the Lagrangian (kinetic minus potential). These derivatives are still polynomial!

# Global stability of the simple pendulum via SOS

We opened this chapter using our intuition about energy to discuss stability on the simple pendulum. Now we'll replace that intuition with convex optimization (because it will also work for more difficult systems where our intuition fails). Let's change coordinates from $[\theta,\dot\theta]^T$ to $\bx = [s,c,\dot\theta]^T$, where $s \equiv \sin\theta$ and $c \equiv \cos\theta$. Then we can write the pendulum dynamics as $$\dot\bx = \begin{bmatrix} c \dot\theta \\ -s \dot\theta \\ -\frac{1}{m l^2} \left( b \dot\theta + mgls \right) \end{bmatrix}.$$ Now let's parameterize a Lyapunov candidate $V(s,c,\dot\theta)$ as the polynomial with unknown coefficients which contains all monomials up to degree 2: $$V = \alpha_0 + \alpha_1 s + \alpha_2 c + ... \alpha_{9} s^2 + \alpha_{10} sc + \alpha_{11} s\dot\theta.$$ Now we'll formulate the feasibility problem: $\find_{\bf \alpha} \quad \subjto \quad V \sos, \quad -\dot{V} \sos.$ In fact, this is asking too much -- really $\dot{V}$ only needs to be negative when $s^2+c^2=1$. We can accomplish this with the S-procedure, and instead write $\find_{{\bf \alpha},\lambda} \quad \subjto \quad V \sos, \quad -\dot{V} -\lambda(\bx)(s^2+c^2-1) \sos.$ (Recall that $\lambda(\bx)$ is another polynomial with free coefficients which the optimization can use to make terms arbitrarily more positive when $s^2+c^2 \neq 1$.)
Finally, for style points, in the code example in examples/lyapunov.ipynb we ask for exponential stability. As always, make sure that you open up the code and take a look. The result is a Lyapunov function that looks familiar (visualized as a contour plot here): Aha! Not only does the optimization find us coefficients for the Lyapunov function which satisfy our Lyapunov conditions, but the result looks a lot like mechanical energy. In fact, the result is a little better than energy... there are some small extra terms added which prove exponential stability without having to invoke LaSalle's Theorem. The one-degree-of-freedom pendulum did allow us to gloss over one important detail: while the manipulator equations $\bM(\bq) \ddot{\bq} + \bC(\bq, \dot\bq)\dot{\bq} = ...$ are polynomial, in order to solve for $\ddot{\bq}$ we actually have to multiply both sides by $\bM^{-1}$. This, unfortunately, is not a polynomial operation, so in fact the final dynamics of the multibody systems are rational polynomial. Not only that, but evaluating $\bM^{-1}$ symbolically is not advised -- the equations get very complicated very fast. But we can actually write the Lyapunov conditions using the dynamics in implicit form, e.g. by writing $V(\bq,\dot\bq,\ddot\bq)$ and asking it to satisfy the Lyapunov conditions everywhere that $\bM(\bq)\ddot\bq + ... = ... + {\bf B}\bu$ is satisfied, using the S-procedure. # Verifying dynamics in implicit form Typically we write our differential equations in the form $\dot\bx = {\bf f}(\bx, \bu).$ But let us consider for a moment the case where the dynamics are given in the form $${\bf g}(\bx, \bu, \dot\bx ) = 0.$$ This form is strictly more general because I can always write ${\bf g}(\bx,\bu,\dot\bx) = f(\bx,\bu) - \dot\bx$, but importantly here I can also write the bottom rows of ${\bf g}$ as $\bM(\bq)\ddot\bq + \bC(\bq,\dot\bq)\dot\bq - \btau_g - \bB \bu$. This form can also represent differential algebraic equations (DAEs) which are more general than ODEs; $\bg$ could even include algebraic constraints like $s_i^2 + c_i^2 - 1$. Most importantly, for manipulators, ${\bf g}$ can be polynomial, even if ${\bf f}$ would have been rational. Interestingly, we can check the Lyapunov conditions, $\dot{V}(\bx) \le 0$, directly on a system (with no inputs) in its implicit form, $\bg(\bx,\dot\bx)=0$. Simply define a new function $Q(\bx, \bz) = \frac{\partial V(\bx)}{\partial \bx} \bz.$ If we can show $Q(\bx, \bz) \le 0, \forall \bx,\bz \in \{ \bx, \bz | \bg(\bx,\bz) = 0 \}$ using SOS, then we have verified that $\dot{V}(\bx) \le 0$, albeit at the non-trivial cost of adding indeterminates $\bz$ and an additional S-procedure. There are a few things that do break this clean polynomial view of the world. Rotary springs, for instance, if modeled as $\tau = k (\theta_0 - \theta)$ will mean that $\theta$ appears alongside $\sin\theta$ and $\cos\theta$, and the relationship between $\theta$ and $\sin\theta$ is sadly not polynomial. Linear feedback from LQR actually looks like the linear spring, although writing the feedback as $u = -\bK \sin\theta$ is a viable alternative. In practice, you can also Taylor approximate any smooth nonlinear function using polynomials. This can be an effective strategy in practice, because you can limit the degrees of the polynomial, and because in many cases it is possible to provide conservative bounds on the errors due to the approximation.
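As a tiny illustration of that last point (my addition; the interval and the degree are arbitrary choices), the snippet below compares $\sin\theta$ with its degree-3 Taylor polynomial on $[-\pi/2, \pi/2]$ and checks the observed error against the Lagrange remainder bound $|\theta|^5/5!$.

```python
import numpy as np

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)

# degree-3 Taylor polynomial of sin about theta = 0
p3 = theta - theta**3 / 6.0

observed = np.max(np.abs(np.sin(theta) - p3))
bound = (np.pi / 2) ** 5 / 120.0   # Lagrange remainder bound on this interval

print(f"max |sin(theta) - p3(theta)| = {observed:.4f}")
print(f"remainder bound (pi/2)^5/5!  = {bound:.4f}")
assert observed <= bound
```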
One final technique that is worth knowing about is a change of coordinates, often referred to as the stereographic projection, that provides a coordinate system in which we can replace $\sin$ and $\cos$ with polynomials: By projecting onto the line, and using similar triangles, we find that $p = \frac{\sin\theta}{1 + \cos\theta}.$ Solving for $\sin\theta$ and $\cos\theta$ reveals $$\cos\theta = \frac{1-p^2}{1+p^2}, \quad \sin\theta = \frac{2p}{1+p^2}, \quad \text{and} \quad \frac{\partial p}{\partial \theta} = \frac{1+p^2}{2},$$ where $\frac{\partial p}{\partial \theta}$ can be used in the chain rule to derive the dynamics $\dot{p}$. Although the equations are rational, they share the denominator $1+p^2$ and can be treated efficiently in mass-matrix form. Compared to the simple substitution of $s=\sin\theta$ and $c=\cos\theta$, this is a minimal representation (scalar to scalar, no $s^2+c^2=1$ required); unfortunately it does have a singularity at $\theta=\pi$, so likely cannot be used for global analysis. # Valid Lyapunov Function for Global Stability For the system \begin{align*} \dot x_1 &=-\frac{6x_1}{(1+x_1^2)^2}+2x_2, \\ \dot x_2 &=-\frac{2(x_1+x_2)}{(1+x_1^2)^2}, \end{align*} you are given the positive definite function $V(\bx) =\frac{x_1^2}{1 + x_1^2}+ x_2^2$ and told that, for this system, $\dot V(\bx)$ is negative definite over the entire space. Is $V(\bx)$ a valid Lyapunov function to prove global asymptotic stability of the origin for the system described by the equations above? Motivate your answer. # Invariant Sets and Regions of Attraction You are given a dynamical system $\dot \bx = f(\bx)$, with $f$ continuous, which has a fixed point at the origin. Let $B_r$ be a ball of (finite) radius $r > 0$ centered at the origin: $B_r = \{ \bx : \| \bx \| \leq r \}$. Assume you found a continuously-differentiable scalar function $V(\bx)$ such that: $V(0) = 0$, $V(\bx) > 0$ for all $\bx \neq 0$ in $B_r$, and $\dot V(\bx) < 0$ for all $\bx \neq 0$ in $B_r$. Determine whether the following statements are true or false. Briefly justify your answer. 1. $B_r$ is an invariant set for the given system, i.e.: if the initial state $\bx(0)$ lies in $B_r$, then $\bx(t)$ will belong to $B_r$ for all $t \geq 0$. 2. $B_r$ is a subset of the ROA of the fixed point $\bx = 0$, i.e.: if $\bx(0)$ lies in $B_r$, then $\lim_{t \rightarrow \infty} \bx(t) = 0$. # Are Lyapunov Functions Unique? If $V_1(\bx)$ and $V_2(\bx)$ are valid Lyapunov functions that prove global asymptotic stability of the origin, does $V_1(\bx)$ necessarily equal $V_2(\bx)$? # Proving Global Asymptotic Stability Consider the system given by \begin{align*} \dot x_1 &= x_2 - x_1^3, \\ \dot x_2 &= - x_1 - x_2^3. \end{align*} Show that the Lyapunov function $V(\bx) = x_1^2 + x_2^2$ proves global asymptotic stability of the origin for this system. # Control-Lyapunov Function Consider the problem of synthesizing a stabilizing feedback law $\bu = \pi (\bx)$ for the dynamical system $\dot \bx = f(\bx, \bu)$. Lyapunov analysis suggests a very simple approach to this problem: choose a candidate Lyapunov function $V(\bx)$, and design a control law $\pi (\bx)$ such that $V(\bx)$ always decreases along the trajectories of the closed-loop system $\dot \bx = f(\bx, \pi (\bx))$. A function $V(\bx)$ for which such a control law exists is called a control-Lyapunov function. In this exercise, we use this idea to drive a wheeled robot, implementing the controller proposed in Aicardi95.
Similar to this previous example, we use a kinematic model of the robot. We represent with $z_1$ and $z_2$ its Cartesian position and with $z_3$ its orientation. The controls are the linear $u_1$ and angular $u_2$ velocities. The equations of motion read \begin{align*} \dot z_1 &= u_1 \cos z_3, \\ \dot z_2 &= u_1 \sin z_3, \\ \dot z_3 &= u_2.\end{align*} The goal is to design a feedback law $\pi(\bz)$ that drives the robot to the origin $\bz=0$ from any initial condition. As pointed out in Aicardi95, this problem becomes dramatically easier if we analyze it in polar coordinates. As depicted below, we let $x_1$ be the radial and $x_2$ the angular coordinate of the robot, and we define $x_3 = x_2 - z_3$. Analyzing the figure, basic kinematic considerations lead to \begin{align*} \dot x_1 &= u_1 \cos x_3, \\ \dot x_2 &= - \frac{u_1 \sin x_3}{x_1}, \\ \dot x_3 &= - \frac{u_1 \sin x_3}{x_1} - u_2.\end{align*} 1. For the candidate Lyapunov function $V(\bx) = V_1(x_1) + V_2(x_2, x_3)$, with $V_1(x_1) = \frac{1}{2} x_1^2$ and $V_2(x_2, x_3) = \frac{1}{2}(x_2^2 + x_3^2)$, compute the time derivatives $\dot V_1 (\bx, u_1)$ and $\dot V_2(\bx, \bu)$. 2. Show that the choice \begin{align*} u_1 &= \pi_1(\bx) = - x_1 \cos x_3, \\ u_2 &= \pi_2(\bx) = x_3 + \frac{(x_2 + x_3) \cos x_3 \sin x_3}{x_3}, \end{align*} makes $\dot V_1 (\bx, \pi_1(\bx)) \leq 0$ and $\dot V_2 (\bx, \pi(\bx)) \leq 0$ for all $\bx$. (Technically speaking, $\pi_2(\bx)$ is not defined for $x_3=0$. In this case, we let $\pi_2(\bx)$ assume its limiting value $x_2 + 2 x_3$, ensuring continuity of the feedback law.) 3. Explain why Lyapunov's direct method does not allow us to establish asymptotic stability of the closed-loop system. 4. Substitute the control law $\bu = \pi (\bx)$ in the equations of motion, and derive the closed-loop dynamics $\dot \bx = f(\bx, \pi(\bx))$. Use LaSalle's theorem to show (global) asymptotic stability of the closed-loop system. 5. In this python notebook we set up a simulation environment for you to try the controller we just derived. Type the control law from point (b) in the dedicated cell, and use the notebook plot to check your work. # Limitations of SOS Polynomials in Lyapunov Analysis 1. Are there positive definite functions that are not representable as sums of squares? 2. If a fixed point of our dynamical system does not admit a SOS Lyapunov function, what can we conclude about its stability? # ROA Estimation for the Time-Reversed Van der Pol Oscillator In this exercise you will use SOS optimization to approximate the ROA of the time-reversed Van der Pol oscillator (a variation of the classical Van der Pol oscillator which evolves backwards in time). In this python notebook, you are asked to test the following SOS formulations. 1. The one from the example above, augmented with a line search that maximizes the area of the ROA. 2. A single-shot SOS program that can directly maximize the area of the ROA, without any line search. 3. An improved version of the previous, where less SOS constraints are imposed in the optimization problem.
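For the last exercise, a crude baseline is often useful for checking whatever the SOS programs return. The sketch below (my addition; the vector field follows one common convention for the time-reversed Van der Pol oscillator with $\mu = 1$, and the grid, horizon, and tolerances are arbitrary) estimates the ROA by forward-integrating a grid of initial conditions and recording which ones converge to the origin.

```python
import numpy as np

MU = 1.0

def f(x):
    """Time-reversed Van der Pol vector field (one common convention)."""
    x1, x2 = x
    return np.array([-x2, x1 - MU * (1.0 - x1**2) * x2])

def converges_to_origin(x0, dt=0.01, t_final=20.0, tol=1e-3, blowup=1e3):
    """Crude RK4 rollout; True if the state ends up near the origin."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t_final / dt)):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if np.linalg.norm(x) > blowup:
            return False
    return np.linalg.norm(x) < tol

grid = np.linspace(-3.0, 3.0, 41)
inside = [(a, b) for a in grid for b in grid if converges_to_origin((a, b))]
print(f"{len(inside)} of {len(grid)**2} sampled initial conditions converged")
```

Any certified sub-level set produced by the SOS formulations should be contained in (and can be visually compared against) this sampled set.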
2020-07-13T15:24:31
{ "domain": "mit.edu", "url": "http://underactuated.mit.edu/lyapunov.html", "openwebmath_score": 0.9420238733291626, "openwebmath_perplexity": 254.07402936507899, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357189762611, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573133056991 }
https://www.wyzant.com/resources/answers/709959/suppose-that-the-area-of-a-trapezoid-can-be-represented-by-the-expression-3
Katherine R. # Suppose that the area of a trapezoid can be represented by the expression 35/x square feet and lengths of the parallel bases of the trapezoid can be represented by the expressions Suppose that the area of a trapezoid can be represented by the expression 35/x square feet and the lengths of the parallel bases of the trapezoid can be represented by the expressions a=2/x feet and b=8/x+1 feet respectively. If the height is 10 feet, determine the value of x. Then determine the area and the values of the lengths of the parallel bases.
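The thread ends without a posted solution; here is one possible reading checked with SymPy (my sketch, assuming the second base is meant as b = 8/(x+1) and using the trapezoid area formula A = (h/2)(a+b)).

```python
import sympy as sp

x = sp.symbols('x', positive=True)

area = 35 / x        # given area expression, in square feet
a = 2 / x            # first parallel base, in feet
b = 8 / (x + 1)      # second parallel base (assumed grouping), in feet
h = 10               # height, in feet

sols = sp.solve(sp.Eq(area, sp.Rational(1, 2) * h * (a + b)), x)
print(sols)                             # [5/3]
x0 = sols[0]
print(area.subs(x, x0))                 # 21 square feet
print(a.subs(x, x0), b.subs(x, x0))     # bases 6/5 ft and 3 ft
```

Under the other reading, b = 8/x + 1, the same equation forces a negative x, which is why the grouping above seems to be the intended one.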
2020-02-28T12:35:41
{ "domain": "wyzant.com", "url": "https://www.wyzant.com/resources/answers/709959/suppose-that-the-area-of-a-trapezoid-can-be-represented-by-the-expression-3", "openwebmath_score": 0.9361632466316223, "openwebmath_perplexity": 320.70708012119115, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357189762611, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573133056991 }
https://cstheory.stackexchange.com/questions/40192/eigenvalues-of-adjacency-matrix-of-a-connected-bipartite-graph
# Eigenvalues of adjacency matrix of a connected bipartite graph Let $G=(V,E)$ be a connected d-regular bipartite graph with the same number of vertices on both sides of the bipartition. It's known that the largest eigenvalue of its adjacency matrix would be d, and the smallest would be -d. I was wondering if it was true that if the k-th largest eigenvalue here was $\lambda_k$, then the k-th smallest one would be $-\lambda_k$. I seem to be able to show that this is true for the second largest eigenvalue (using the fact that the all-ones vector and the signed indicator vector of the two sides of the bipartition are the eigenvectors for the largest and smallest eigenvalues, that these eigenvalues have multiplicity 1, and then applying the sign-flipping trick). However, this proof does not seem to hold in general.
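A quick numerical sanity check (my addition; the circulant construction below is just one convenient example of a connected d-regular bipartite graph): writing the adjacency matrix in block form with biadjacency block B, the computed spectrum comes out symmetric about zero, which is the pairing the question asks about.

```python
import numpy as np

n, d = 8, 3
# circulant 0/1 biadjacency with d ones per row/column -> d-regular bipartite graph
B = np.zeros((n, n))
for i in range(n):
    for j in range(d):
        B[i, (i + j) % n] = 1.0

# adjacency matrix of the bipartite graph: [[0, B], [B^T, 0]]
A = np.block([[np.zeros((n, n)), B], [B.T, np.zeros((n, n))]])

eigs = np.sort(np.linalg.eigvalsh(A))
print(np.round(eigs, 4))
print("symmetric spectrum:", np.allclose(eigs, -eigs[::-1]))                # True
print("extremes are +/- d:", np.isclose(eigs[-1], d), np.isclose(eigs[0], -d))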
2020-08-07T01:47:36
{ "domain": "stackexchange.com", "url": "https://cstheory.stackexchange.com/questions/40192/eigenvalues-of-adjacency-matrix-of-a-connected-bipartite-graph", "openwebmath_score": 0.908968985080719, "openwebmath_perplexity": 90.0916591131812, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9817357189762611, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.653257313305699 }
http://read.somethingorotherwhatever.com/entry/Powerlawdistributionsinempiricaldata
# Power-law distributions in empirical data • Published in 2007 In the collections Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution -- the part of the distribution representing large but rare events -- and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out. ### BibTeX entry @article{Powerlawdistributionsinempiricaldata, title = {Power-law distributions in empirical data}, abstract = {Power-law distributions occur in many situations of scientific interest and have significant consequences for our understanding of natural and man-made phenomena. Unfortunately, the detection and characterization of power laws is complicated by the large fluctuations that occur in the tail of the distribution -- the part of the distribution representing large but rare events -- and by the difficulty of identifying the range over which power-law behavior holds. Commonly used methods for analyzing power-law data, such as least-squares fitting, can produce substantially inaccurate estimates of parameters for power-law distributions, and even in cases where such methods return accurate answers they are still unsatisfactory because they give no indication of whether the data obey a power law at all. Here we present a principled statistical framework for discerning and quantifying power-law behavior in empirical data. Our approach combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov statistic and likelihood ratios. We evaluate the effectiveness of the approach with tests on synthetic data and give critical comparisons to previous approaches. We also apply the proposed methods to twenty-four real-world data sets from a range of different disciplines, each of which has been conjectured to follow a power-law distribution. In some cases we find these conjectures to be consistent with the data while in others the power law is ruled out.}, url = {http://arxiv.org/abs/0706.1062v2 http://arxiv.org/pdf/0706.1062v2}, year = 2007, author = {Aaron Clauset and Cosma Rohilla Shalizi and M. E. J. Newman}, comment = {}, urldate = {2018-09-24}, archivePrefix = {arXiv}, eprint = {0706.1062}, primaryClass = {physics.data-an}, collections = {Drama!,Probability and statistics} }
2018-12-14T03:05:05
{ "domain": "somethingorotherwhatever.com", "url": "http://read.somethingorotherwhatever.com/entry/Powerlawdistributionsinempiricaldata", "openwebmath_score": 0.574090838432312, "openwebmath_perplexity": 1090.0887303046031, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357184418848, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573129501195 }
https://www.enotes.com/homework-help/function-f-x-x-2-2-using-inverse-function-419753
# For the function `f(x) = x^2 - 2` and using the inverse function, represented by `f^(-1)` , find `f(f^-1(-7))` . `f(x)=x^2-2` , `f(f^(-1)(-7))=?` First, determine the inverse function of f(x). To do so, replace f(x) with y. `y=x^2-2` Then, switch x and y. `x=y^2-2` And solve for y. So, add 2 to both sides. `x+2=y^2-2+2` `x+2=y^2` Also, take the square root of both sides. `+-sqrt(x+2)=sqrt(y^2)` `+-sqrt(x+2)=y` And, replace y with `f^(-1)(x)` to indicate that it is the inverse of f(x). `f^(-1)(x)=+-sqrt(x+2)` Next, solve for the value of `f^(-1)(x)` when x=-7. `f^(-1)(-7)=+-sqrt(-7+2)=+-sqrt(-5)` Since the radicand of the square root is negative, to simplify apply `i=sqrt(-1)` . `f^(-1)(-7)=+-isqrt5` So, when x=-7, the values of the inverse function are: `f^(-1)(-7)=isqrt5` and `f^(-1)(-7)=-isqrt5` Next, plug in the values of the inverse function to f(x) to get `f(f^(-1)(-7))` . This is the same as solving for the values of f(x) when `x=isqrt5` and `x=-isqrt5` . `f(f^(-1)(-7)) = f(isqrt5)` and `f(f^(-1)(-7))=f(-isqrt5)` So when `x=isqrt5` , the value of f(x) is: `f(x)=x^2-2` `f(isqrt5)=(isqrt5)^2-2=5i^2-2` Since i^2=-1, then: `f(isqrt5)=5(-1)-2=-5-2` `f(isqrt5)=-7` And when `x=-isqrt5` , the value of f(x) is: `f(x)=x^2-2` `f(-isqrt5)=(-isqrt5)^2-2=5i^2-2=5(-1)-2=-5-2` `f(-isqrt5)=-7` Since the value of f(x) is the same for both values of x, hence `f(f^(-1)(-7))=-7` . Approved by eNotes Editorial Team
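A quick check of the arithmetic with SymPy (my addition): both square roots of -5 are sent back to -7 by f.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - 2

roots = sp.solve(sp.Eq(f, -7), x)                  # [-sqrt(5)*I, sqrt(5)*I]
print(roots)
print([sp.simplify(f.subs(x, r)) for r in roots])  # [-7, -7]
```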
2022-11-28T19:43:57
{ "domain": "enotes.com", "url": "https://www.enotes.com/homework-help/function-f-x-x-2-2-using-inverse-function-419753", "openwebmath_score": 0.8575648069381714, "openwebmath_perplexity": 1761.0592616416882, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357184418847, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573129501195 }
https://www.beatthegmat.com/in-parallelogram-abcd-point-e-is-the-midpoint-of-side-ad-and-point-f-is-the-midpoint-of-side-cd-if-t326350.html?sid=34df0ebfdd527a52d86fba3bc6571a31
## In parallelogram $$ABCD,$$ point $$E$$ is the midpoint of side $$AD$$ and point $$F$$ is the midpoint of side $$CD.$$ If by Gmat_mission » Sun Sep 12, 2021 8:29 am In parallelogram $$ABCD,$$ point $$E$$ is the midpoint of side $$AD$$ and point $$F$$ is the midpoint of side $$CD.$$ If point $$G$$ is the midpoint of line segment $$ED$$ and point $$H$$ is the midpoint of line segment $$DF,$$ what is the area of triangle $$DGH?$$ (1) Parallelogram $$ABCD$$ has an area of $$96.$$ (2) The parallelogram has sides of lengths $$8$$ and $$12.$$
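For reference, a short sketch of the underlying geometry (my addition, not part of the original thread). Since $$G$$ and $$H$$ are midpoints of $$ED$$ and $$DF,$$ we have $$DG = \tfrac14 DA$$ and $$DH = \tfrac14 DC,$$ and the triangle shares the angle at $$D$$ with the parallelogram, so

$$[DGH] = \tfrac12 \cdot DG \cdot DH \cdot \sin\angle ADC = \tfrac{1}{16} \cdot \tfrac12 \, DA \cdot DC \, \sin\angle ADC = \tfrac{1}{32}\,[ABCD].$$

Statement (1) therefore pins the area down ($$96/32 = 3$$), while statement (2) leaves the angle at $$D,$$ and hence the area, undetermined.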
2021-10-26T10:54:08
{ "domain": "beatthegmat.com", "url": "https://www.beatthegmat.com/in-parallelogram-abcd-point-e-is-the-midpoint-of-side-ad-and-point-f-is-the-midpoint-of-side-cd-if-t326350.html?sid=34df0ebfdd527a52d86fba3bc6571a31", "openwebmath_score": 0.39168715476989746, "openwebmath_perplexity": 195.85926857325725, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357184418847, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573129501194 }
https://algorithmtutor.com/Analysis-of-Algorithm/Solving-Recurrence-Relations-Part-III/
# Solving Recurrence Relations (Part III) In the previous post, we learned the Master method and the Akra-Bazzi method to solve divide and conquer recurrence relations. In this post, we will learn another technique, known as the recursion tree method, to solve divide and conquer recurrences. A recursion tree helps to visualize every step of the recursive calls. It shows the input size and the time taken to process that input at a particular level. Using a recursion tree, we can calculate the work done by the algorithm on each level. Summing the work done on each level gives the overall running time of the algorithm. Recursion trees can be useful to gain intuition about the closed form of a recurrence relation. We use the following steps to solve a recurrence relation using the recursion tree method. 1. Create a recursion tree from the recurrence relation 2. Calculate the work done in each node of the tree 3. Calculate the work done in each level of the tree (this can be done by adding the work done in each node corresponding to that level). 4. Sum the work done over all levels to get the solution. Example 1: Consider a recurrence $$T(n) = 2T(n/2) + n$$ There are 2 recursive calls in the recurrence. That means every node in the tree has two child nodes. The input size for each recursive call is $n/2$, which means each child of a node gets half of the input, i.e. if a parent node has input size $n$, each child will have input size $n/2$. The tree has the following form In this example, each level does an equal amount of work, i.e. $n$, and there are $\log_2 n$ levels, i.e. the height of the tree is $\log_2 n$. This is given in the figure below. The total work is therefore $$n \cdot \log_2 n = \Theta(n\log n)$$ Example 2: Consider $$T(n) = T(n/3) + T(2n/3) + n$$ It has two recursive calls with different input sizes. The left child gets input size $n/3$ and the right child gets input size $2n/3$. As shown in the figure below, the tree is not height balanced, and therefore the height of the tree is the length of the rightmost (longest) path, i.e. $\log_{3/2} n$. Each level of the tree does at most $n$ work, and the first $\log_3 n$ complete levels do exactly $n$ work each, so the total work is $$n \cdot \log_{3/2} n = \Theta(n\log n)$$
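To see the first recurrence's growth concretely, here is a small Python check (my addition): it evaluates $T(n) = 2T(n/2) + n$ with the base case $T(1) = 1$ on powers of two and compares the result against $n\log_2 n$.

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n: int) -> int:
    """T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for k in (4, 8, 12, 16, 20):
    n = 2 ** k
    ratio = T(n) / (n * math.log2(n))
    print(f"n = 2^{k:>2}: T(n) = {T(n):>9}, T(n) / (n log2 n) = {ratio:.3f}")
```

The ratio approaches 1 from above (with this base case it equals exactly $1 + 1/\log_2 n$), matching the $\Theta(n \log n)$ bound read off the tree.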
2019-08-21T08:16:08
{ "domain": "algorithmtutor.com", "url": "https://algorithmtutor.com/Analysis-of-Algorithm/Solving-Recurrence-Relations-Part-III/", "openwebmath_score": 0.8107566833496094, "openwebmath_perplexity": 229.57062106810562, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357179075082, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573125945397 }
https://math.stackexchange.com/questions/1807698/real-projective-space-tangent-space-orientation
# Real projective space, tangent space, orientation In O'Neill's differential geometry text, there are the following problems. The projective plane $P$ is defined as follows. As you know, this $P$ is an abstract surface. So, we have to define tangent vectors on an abstract surface. But this textbook just gives the following definition. Def) Let $\alpha:I \rightarrow M$ be a curve on the abstract surface $M$. The velocity vector (or tangent vector) $\alpha'(t)$ is defined by $\alpha'(t)[f]=(f\circ\alpha)'(t)$ for each differentiable function $f:M\rightarrow \mathbb{R}$. I think the answer to (a) is {$v_p, -v_{-p}$}, where $v_p$ is a tangent vector on $\Sigma$. How can we evaluate this problem? Let $x \in P$, $x=F(y)=F(-y)$, $y\in \Sigma$. Let $v\in T_xP$. Since $dF_y$ and $dF_{-y}$ are isomorphisms ($F$ is a local diffeomorphism), there exist a unique $v_y\in T_y\Sigma$ and a unique $v_{-y}\in T_{-y}\Sigma$ such that $dF_y(v_y)=dF_{-y}(v_{-y})=v$. Since $\Sigma$ is compact and connected and $F$ is continuous, $F(\Sigma)=P$ is compact and connected. Suppose that $P$ is orientable and let $\omega$ be a volume form on $P$. Since $F\circ i=F$ for the antipodal map $i:x\mapsto -x$ of $\Sigma$, the pullback satisfies $i^*(F^*\omega)=F^*\omega$; but $i$ reverses the orientation of $\Sigma$, so $\int_\Sigma i^*(F^*\omega)=-\int_\Sigma F^*\omega$, and hence $\int_\Sigma F^*\omega=0$. This is a contradiction, because $F^*\omega$ is a nowhere-vanishing top form on the connected oriented surface $\Sigma$ ($F$ is a local diffeomorphism), so its integral cannot be zero. • Could you explain why $F$ is a local diffeomorphism? $F$ is defined by $F(p)=${$p,-p$}. – Chris kim Jun 1 '16 at 0:59
2019-10-17T17:48:29
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1807698/real-projective-space-tangent-space-orientation", "openwebmath_score": 0.9842655062675476, "openwebmath_perplexity": 113.31168358180416, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357179075082, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573125945397 }
https://mathoverflow.net/questions/161481/most-regular-way-to-triangulate-mathbbr3
# Most regular way to triangulate $\mathbb{R}^3$? By "regular", I'm going by a property of Delaunay Triangulation, which is to maximize the minimum angle. In $\mathbb{R}^2$, tiling with equilateral triangles gives you a minimum angle of 60 degrees. Any other triangulation would give you a smaller minimum angle, so in this sense, triangulating $\mathbb{R}^2$ with equilateral triangles is the most regular way. What's the best way to triangulate $\mathbb{R}^3$ so that the minimum solid angle of the tetrahedra is maximized? I assume that the dual tessellation of closely-packed spheres would give a good triangulation since that works in $\mathbb{R}^2$. You get some regular tetrahedra and regular octahedra which can be divided into 4 congruent tetrahedra which contain the minimum solid angle. Is there a way to check if this is maximized? • An estimate (rather trivial) would be the solid angle at the corner of a pyramid with a square base, with height 1/2 the side of the base: Six of these form a cube. (This ought not be the best.) We need to further split the pyramid into two tetrahedra (making the solid angle even smaller). – Mirko Mar 26 '14 at 15:41 • I would check what happens by using close-packing of spheres and taking dual tessellations. en.wikipedia.org/wiki/Close-packing_of_equal_spheres – user126154 Mar 26 '14 at 16:17 • Though I understand you do not require that all tetrahedra of the triangulation be congruent, your question seems closely related to the still unsolved problem of classifying the tetrahedra that tile $\mathbb{R}^3$. For a survey of known results, see Egon Schulte's article arxiv.org/pdf/1005.3836.pdf – Wlodek Kuperberg Mar 26 '14 at 22:36
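To attach numbers to the candidates discussed above (my addition), the snippet below computes vertex solid angles with the Van Oosterom-Strackee formula: first for a regular tetrahedron, then for one of the four congruent tetrahedra obtained by cutting a regular octahedron along its vertical axis, the pieces that show up in the tetrahedral-octahedral honeycomb mentioned in the question. The coordinates are one convenient choice.

```python
import numpy as np

def solid_angle(apex, p1, p2, p3):
    """Solid angle at `apex` subtended by triangle (p1, p2, p3),
    via the Van Oosterom-Strackee formula."""
    r1, r2, r3 = (np.asarray(p, dtype=float) - np.asarray(apex, dtype=float)
                  for p in (p1, p2, p3))
    n1, n2, n3 = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
    num = abs(np.dot(r1, np.cross(r2, r3)))
    den = n1 * n2 * n3 + np.dot(r1, r2) * n3 + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1
    return 2.0 * np.arctan2(num, den)

def min_vertex_solid_angle(verts):
    return min(solid_angle(verts[i], *[verts[j] for j in range(4) if j != i])
               for i in range(4))

# regular tetrahedron: every vertex solid angle is arccos(23/27) ~ 0.5513 sr
reg_tet = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print("regular tetrahedron :", min_vertex_solid_angle(reg_tet))

# quarter of a regular octahedron: both poles plus two adjacent equatorial vertices
oct_quarter = [(0, 0, 1), (0, 0, -1), (1, 0, 0), (0, 1, 0)]
print("octahedron quarter  :", min_vertex_solid_angle(oct_quarter))
```

The second number is the minimum solid angle of the proposed tiling, which is the quantity one would try to beat with any other triangulation.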
2019-02-21T13:00:25
{ "domain": "mathoverflow.net", "url": "https://mathoverflow.net/questions/161481/most-regular-way-to-triangulate-mathbbr3", "openwebmath_score": 0.7535644769668579, "openwebmath_perplexity": 497.45857485401825, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357179075082, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573125945397 }
https://www.naftaliharris.com/blog/lasso-polytope-geometry/
# Visualizing Lasso Polytope Geometry June 8, 2014 Some recent research about the lasso exploits a beautiful geometric picture: Suppose you fix the design matrix X and the regularization parameter $\lambda$. For a particular value of y, the n-dimensional response variable, you can then solve the lasso problem and examine the signs of the coefficients. Now if you partition n-dimensional space based upon the signs of the coefficients you'd get if you solved the lasso problem at each value of y, then you'll find that each of the resulting regions are convex polytopes. This is kind of a mouthful, so I made the visualization below to illustrate this partitioning. You can drag x1, x2, and x3 to rotate them, and slide the slider to increase or decrease $\lambda$. For this visualization, the design matrix X has two rows, (so there are two observations and y lives in the two-dimensional plane). X also has three columns, (variables), called x1, x2, and x3, each normalized to have length one. Each of these variables, (ie columns of X), also lives in the plane, and is drawn above. I've labeled each of the regions in the plane with the sign of the coefficients that would result from fitting the lasso with y in that region, and colored them based upon how many variables would be the model. For example, if you fit the lasso using any y in a region labeled "-+0", then the resulting x1 coefficient will be negative, the x2 coefficient will be positive, and the x3 coefficient will be zero. To be concrete, the version of the lasso that I'm using here is the one that solves this optimization problem: $$\min_{\beta} \frac{1}{2}||y - X\beta||_2^2 + \lambda ||\beta||_1$$ If you play around with this picture long enough, you'll find a number of beautiful features of this picture. Each of the ones I describe below also generalizes to more observations, more variables, and variables without norm one: ## Varying $\lambda$ When $\lambda$ is small, notice that almost the whole space becomes blue, (regions with two variables in the model). When $\lambda$ is large, the whole space becomes white, (the null model with no variables in it). This is this picture's representation of the fact that you fit larger models with smaller values of $\lambda$. ## Characterizing the Null Model Polytope The white central polytope is the set of y's resulting in coefficients with all zeros. Notice that each of its faces is orthogonal to one of the x's, and a distance of exactly $\lambda$ away for the origin. You can see this most clearly when $\lambda=1$; at this point, the faces all touch one of the x vectors (or one of the negative x vectors), since each of the x's is normalized to have length one. This feature is used by lasso solvers to determine the value of $\lambda$ when the first variable enters the model; (it's just the largest absolute inner product between y and any of the columns of X). ## "Dimensions" of the Polytopes The blue regions, (those for which the lasso selects two variables), are two-dimensional pie-slices. If you squint, (or make $\lambda$ small), you'll notice that the red regions, (those for which the which the lasso selects just one variable), look approximately like 1-dimensional lines. And of course, the null model is a bounded polytope, which is kind of like a "0-dimensional point", (so long as the variables span n-dimensional space). So if you squint, the polytope associated with a k-dimensional model is "sort of k-dimensional". 
This pattern holds more generally: You can show that for any point y in the region associated with a model of size k, if you travel away from y in the cone generated by the (signed) variables in the model, then you'll still stay in that region. For example, if you start at any point in a red region, and walk in the direction of the associated x, you'll stay in that region. And for any point in a blue region, if you walk in either of the directions indicated by the active variables, you'll also stay in that region. ## The Projection Picture There is substantial structure associated with the model-regions. To see this, pick any point y in the plane, and project it onto the null-model polytope. (That is, find the nearest point on the white null polytope). You'll find that all of the points in a particular model-region project onto the same face of the null-model polytope. Moreover, the dimension of this face is two minus the number of nonzero coefficients in the model, (or more generally, the number of observations minus the number of nonzero coefficients), and the face is formed from the intersection of facets associated with the variables in the model. Tibshirani and Taylor (2012) used this projection picture as a critical component of their proof that the degrees of freedom of the lasso, for a fixed value of $\lambda$, is exactly equal to the expected number of variables in the model. ## Sets of Accessible Models In a forthcoming paper, my friend Amir Sepehri and I also noticed that this projection picture admits a very beautiful description of which (signed) variables can possibly be in a model together: only those which form a face of the convex hull of all the variables and their negations. Using the upper bound theorem for polytopes, we were able to bound the number of possible lasso models. As a simple consequence of this bound, we were also able to prove model-selection inconsistency for the lasso when the number of variables in the true model exceeds half the number of observations, simply because with extremely high probability there is no y at all in n-dimensional space that results in a lasso fit with the signs of the true model.
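To reproduce the flavor of the figure numerically (my sketch; the column directions, grid, regularization level, and iteration count are arbitrary choices, and the solver is a hand-rolled proximal-gradient loop rather than anything from the post), the code below solves the stated lasso problem for every y on a small grid in the plane and tallies the sign patterns of the solutions, which is exactly the partition described above.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimize 0.5 * ||y - X b||^2 + lam * ||b||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - step * (X.T @ (X @ b - y))
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold
    return b

# three unit-norm columns in R^2, as in the visualization
angles = np.array([0.3, 1.6, 2.9])
X = np.stack([np.cos(angles), np.sin(angles)])   # shape (2, 3)
lam = 1.0

patterns = {}
for y1 in np.linspace(-3, 3, 25):
    for y2 in np.linspace(-3, 3, 25):
        b = lasso_ista(X, np.array([y1, y2]), lam)
        key = "".join("+" if v > 1e-6 else "-" if v < -1e-6 else "0" for v in b)
        patterns[key] = patterns.get(key, 0) + 1

for key, count in sorted(patterns.items(), key=lambda kv: -kv[1]):
    print(key, count)
```

The grid points labeled "000" should sit inside the central null-model polytope, the one-variable patterns should be comparatively rare (the thin wedges), and the two-variable patterns should dominate, matching the picture.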
2017-10-18T09:21:27
{ "domain": "naftaliharris.com", "url": "https://www.naftaliharris.com/blog/lasso-polytope-geometry/", "openwebmath_score": 0.771888792514801, "openwebmath_perplexity": 322.19153874048635, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357179075082, "lm_q2_score": 0.6654105587468141, "lm_q1q2_score": 0.6532573125945397 }
https://math.stackexchange.com/questions/2767564/what-is-the-sum-of-a-sinc-function-series-sampled-periodically
# What is the sum of a sinc function series sampled periodically What is the sum of the following sinc series? $$f(k,y)=\sum_{n=-\infty}^{\infty} \frac{\text{sin} \pi(kn-y)}{\pi(kn-y)}$$ where $k$ is an integer greater then zero This question is a generalisation of What is the sum over a shifted sinc function? and not a duplicate. The former question only considers the case for $k=1$ • Welcome to MSE. It will be more likely that you will get an answer if you show us that you made an effort. – José Carlos Santos May 5 '18 at 9:36 • Possible duplicate of What is the sum over a shifted sinc function? – mol3574710n0fN074710n May 5 '18 at 9:40 • As I explained this is not a duplicate but a genertalisation of math.stackexchange.com/questions/1242280/… – Daniel May 5 '18 at 10:16 • By checking my previous answer you'll see that the series converges when $\displaystyle k \in \left(0,2\right)$. Otherwise, you can always write $\displaystyle k$ as $\displaystyle 2\left\lfloor{k \over 2}\right\rfloor + 2\left\{{k \over 2}\right\}$ for any other $\displaystyle k$-value. – Felix Marin May 5 '18 at 23:18 • @FelixMarin I am only interested in k being an integer, and can't relate well what you are saying with what I expect. I don't think the answer is a constant, I am expecting an answer function of $k$ and $y$ – Daniel May 6 '18 at 10:04 I formulate a general solution here to compute $\sum_{n=-\infty}^{n=\infty}\frac{\sin(N\pi(y-\frac{n}{T}))}{\pi(y-\frac{n}{T})}$ for $T<N,$ with $T,N\in\mathbb{\mathbb{R}}^{+}$, and then use the case where $T=1/k$ for an integer $k>1$ and $N=1$ to show that the sum requested in this problem, $f(k,y)\equiv\sum_{n=-\infty}^{\infty}g(kn-y)=\sum_{n=-\infty}^{\infty}g(y-kn)$ for the (even) sinc function $g(y)=\begin{cases} \frac{sin\pi y}{y} & y\not\neq0\\ 1 & y=0 \end{cases}$ is equal to: $f(k,y)=\begin{cases} k & \frac{y}{k}\in\mathbb{Z}\\ \cos(\pi y)+\frac{\sin(\pi y(1-1/k))}{\sin(\pi y/k)} & \frac{y}{k}\notin\mathbb{Z},\,k\,\mathrm{even}\\ \frac{\sin(\pi y)}{\sin(\pi y/k)} & \frac{y}{k}\notin\mathbb{Z},\,k\,\mathrm{odd} \end{cases}$ Let $W_{N}(t)$ be the rectangular window of width $N$ defined in continuous time given by $W_{N}(t)=\begin{cases} 1 & -N/2<t<N/2\\ 1/2 & |t|=N/2\\ 0 & \textrm{else} \end{cases}$ where $t$ is in units of seconds. Its Fourier transform is then the (continuous, aperiodic) sinc function: $\hat{W}_{N}(y)=\begin{cases} \frac{\sin(N\pi y)}{\pi y} & y\neq0\\ N & y=0 \end{cases}$ where the units of $y$ is hz (cycles per second). 
Sampling $W_{N}(t)$ at discrete intervals of $T$ seconds, yields a discrete, aperiodic signal, $w_{N}(t)=\begin{cases} 1 & -N/2<t<N/2\\ 1/2 & |t|=N/2\\ 0 & \textrm{else} \end{cases},t\in\mathbb{Z}T.$ The (continuous, periodic) Discrete Time Fourier Transform of this, $\hat{w}_{N}(y)$ can be derived as folllows: Directly from the definition of DTFT, and making the change of variable $n=t/T$, we have $w_{N}(n)=\begin{cases} 1 & -N/2T<n<N/2T\\ 1/2 & |n|=N/2T\\ 0 & \textrm{else} \end{cases},n\in\mathbb{Z}.$ For $yT\notin\mathbb{Z}$ and $N/T$ is an even integer (implying that $N/2T$ is an integer value, and so $w_{N}(n)$ is evaluated at the endpoints), then: \begin{eqnarray*} \hat{w}_{N}(y) & = & \sum_{n=-\infty}^{\infty}w_{N}(n)e^{-i2\pi yTn}=\frac{1}{2}\left(e^{-iN\pi y}+e^{iN\pi y}\right)+\sum_{n=-\frac{N-2T}{2T}}^{\frac{N-2T}{2T}}e^{-i2\pi yTn}=\\ & =\frac{1}{2}\left(e^{-iN\pi y}+e^{iN\pi y}\right)+ & \frac{e^{i\pi(N-2T)y}-e^{-i\pi Ny}}{1-e^{-i2\pi yT}}=\frac{e^{-i\pi yT}}{e^{-i\pi yT}}\left(\frac{e^{i\pi y(N-T)}-e^{-i\pi y(N-T)}}{e^{i\pi yT}-e^{-i\pi yT}}\right)\\ & =\cos(\pi yN)+ & \frac{\cos(\pi y(N-T))+i\sin(\pi y(N-T))-(\cos(\pi y(N-T))-i\sin(\pi y(N-T)))}{\cos(\pi yT)+i\sin(\pi yT)-(\cos(\pi yT)-i\sin(\pi yT))}\\ & = & \cos(\pi yN)+\frac{2i\sin(\pi y(N-T))}{2i\sin(\pi yT)}=\cos(\pi yN)+\frac{\sin(\pi y(N-T))}{\sin(\pi yT)} \end{eqnarray*} (where the geometric series sum formula used above relies on the assumption that $yT\notin\mathbb{Z}$ so that $e^{-i2\pi yT}\neq1.$ ) If $N/T$ is an even integer but we now assume $yT\in\mathbb{Z}$, then we can write $y=\frac{l}{T}$ for some $l\in\mathbb{Z}$ and in this case we have that \begin{eqnarray*} \hat{w}_{N}(y) & = & \sum_{n=-\infty}^{\infty}w_{N}(n)e^{-i2\pi yTn}=\sum_{n=-\infty}^{\infty}w_{N}(n)e^{-i2\pi ln}=\sum_{n=-\infty}^{\infty}w_{N}(n)\\ & = & \sum_{n=-\frac{N}{2T}}^{\frac{N}{2T}}w_{N}(n)=\frac{N}{T} \end{eqnarray*} If $yT\notin\mathbb{Z}$ and $N/T$ is not even, then, letting $F\equiv Floor\left[N/2T\right]$ and $R=N/Tmod2$, we have that \begin{eqnarray*} \hat{w}_{N}(y) & = & \sum_{n=-\infty}^{\infty}w_{N}(n)e^{-i2\pi yTn}=\sum_{n=-F}^{F}e^{-i2\pi yTn}=\sum_{t=-(N-RT)/2T}^{(N-RT)/2T}\left(e^{-i2\pi yT}\right)^{n}\\ & & =\frac{e^{i\pi(N-RT)y}-e^{-i\pi(N-RT+2T)y}}{1-e^{-i2\pi yT}}=\frac{e^{-i\pi yT}}{e^{-i\pi yT}}\left(\frac{e^{i\pi y(N-RT+T)}-e^{-i\pi y(N-RT+T)}}{e^{i\pi yT}-e^{-i\pi yT}}\right)\\ & & =\frac{\cos(\pi y(N+T(1-R)))+i\sin(\pi y(N+T(1-R)))-(\cos(\pi y(N+T(1-R)))-i\sin(\pi y(N+T(1-R)))}{\cos(\pi yT)+i\sin(\pi yT)-(\cos(\pi yT)-i\sin(\pi yT))}\\ & & =\frac{2i\sin(\pi y(N+T(1-R)))}{2i\sin(\pi yT)}=\frac{\sin(\pi y(N+T(1-R))}{\sin(\pi yT)}\\ \end{eqnarray*} (Note that in the case of $N/T$ integer valued, this gives $R=1$ and so the above would simplify to $\frac{\sin(\pi yN)}{\sin(\pi yT)}$). 
If $N/T$ is not an even integer but we now assume $yT\in\mathbb{Z}$, then we again write $y=\frac{l}{T}$ and in this case we have that \begin{eqnarray*} \hat{w}_{N}(y) & = & \sum_{n=-\infty}^{\infty}w_{N}(n)e^{-i2\pi yTn}=\sum_{n=-\infty}^{\infty}w_{N}(n)e^{-i2\pi ln}=\sum_{n=-\infty}^{\infty}w_{N}(n)\\ & = & \sum_{n=-\frac{N-RT}{2T}}^{\frac{N-RT}{2T}}w_{N}(n)=\frac{N-RT}{T}+1=\frac{N+T(1-R)}{T} \end{eqnarray*} We also know from the Poisson summation formula, that $\hat{w}_{N}(y)$ is equal to the periodic summation of $\hat{W}_{N}(y)$ at periods of $1/T$: $$\hat{w}_{N}(y)=\sum_{n=-\infty}^{n=\infty}\hat{W}_{N}(y-\frac{n}{T})=\begin{cases} N\sum_{n=-\infty}^{n=\infty}g\left(N(y-\frac{n}{T})\right)\end{cases}$$ as the sum of a periodically sinc function was defined above. This is precisely the definition of $f(k,y)$ above for $k=1/T$ and $N=1$, so plugging these into the general solution of DTFT(y)=\begin{cases} N/T & yT\in\mathbb{Z},\,N/T\,\mathrm{even}\\ \frac{N+T(1-R)}{T} & yT\in\mathbb{Z},\,N/T\,\mathrm{not\,even}\\ \cos(\pi yN)+\frac{\sin(\pi y(N-T))}{\sin(\pi yT)} & yT\notin\mathbb{Z},\,N/T\,\mathrm{even}\\ \frac{\sin(\pi y(N+T(1-R))}{\sin(\pi yT)} & yT\notin\mathbb{Z},\,N/T\,\textrm{not}\,\mathrm{even} \end{cases} where $R=NT/P\,\textrm{mod}\,2$ yields the summation is as given above. • I like this answer a lot as it gives a simple answer, but I am a bit "surprised" that it is much simpler than the other answers – Daniel May 15 '18 at 8:02 • I believe the edited solution is now correct. I had previously defined the rectangular window to equal 1 on the interval [-N/2,N/2] rather than (-N/2,N/2) with value 1/2 at +/-N/2. With this correction, spot checks for values of k and y for which the sum can easily be computed explicitly check out; please let me know if you find any counterexamples or mistakes in the above. 
– Lauren Deason May 16 '18 at 3:54 $$f(k,y)=\sum_{n=-\infty}^{\infty} \frac{\sin \pi(kn-y)}{\pi(kn-y)}=\cos(\pi y)\sum_{n=-\infty}^{\infty} \frac{\sin (\pi kn)}{\pi(kn-y)}-\sin(\pi y)\sum_{n=-\infty}^{\infty} \frac{\cos (\pi kn)}{\pi(kn-y)}$$ $\sin(\pi kn)=0\quad$ and $\quad\cos(\pi kn)=(-1)^{kn}$ $$f(k,y)=-\sin(\pi y)\sum_{n=-\infty}^{\infty} \frac{(-1)^{kn}}{\pi(kn-y)}= \frac{\sin(\pi y)}{\pi k}\sum_{n=-\infty}^{\infty} \frac{(-1)^{kn}}{n-\frac{y}{k}}$$ $$\sum_{n=0}^{\infty} \frac{(-1)^{kn}}{n-\frac{y}{k}}=\Phi\left((-1)^k\:,\:1\:,\:-\frac{y}{k}\right)$$ $\Phi(z,s,a)$ is the Lerch function : http://mathworld.wolfram.com/LerchTranscendent.html $$\sum_{n=-\infty}^{0} \frac{(-1)^{kn}}{n-\frac{y}{k}}= \sum_{m=0}^{\infty} \frac{(-1)^{-km}}{-m+\frac{y}{k}} =-\Phi\left((-1)^k\:,\:1\:,\:\frac{y}{k}\right)$$ and the term for $n=0$ is equal to$\quad\frac{1}{0-\frac{y}{k}}=-\frac{k}{y}$ $$\sum_{n=-\infty}^{\infty} \frac{(-1)^{kn}}{n-\frac{y}{k}}=\Phi\left((-1)^k\:,\:1\:,\:-\frac{y}{k}\right)-\Phi\left((-1)^k\:,\:1\:,\:\frac{y}{k}\right)-\left(-\frac{k}{y}\right)$$ $$f(y,k)=-\frac{\sin(\pi y)}{\pi k}\left(\Phi\left((-1)^k\:,\:1\:,\:-\frac{y}{k}\right)-\Phi\left((-1)^k\:,\:1\:,\:\frac{y}{k}\right)+\frac{k}{y}\right)$$ If $\phi(t)$ is arbitrary function and $\Phi(\omega)$ its Fourier transform, then the following identity, known as Poisson's formula is true (see [1] pg.47): $$\sum^{\infty}_{n=-\infty}\phi(t+nT)=\frac{1}{T}\sum^{\infty}_{n=-\infty}e^{2\pi i nt/T}\Phi\left(\frac{2\pi n}{T}\right)\textrm{, }T>0.$$ In the case of your's, the function $\phi(t)=\frac{\sin(at)}{t}$ have Fourier transform $\Phi(t)=\pi X_{[-a,a]}(t)$ ($X_{[-a,a]}(t)$ is the characteristic function in $[-a,a]$, $a>0$). Hence we get $$\sum^{\infty}_{n=-\infty}\frac{\sin(a(t+n T))}{t+nT}=\frac{\pi}{T}\sum^{\infty}_{n=-\infty}e^{2\pi i n t/T}X_{[-a,a]}\left(\frac{2\pi n}{T}\right),$$ where $$X_{[-a,a]}\left(\frac{2\pi n}{T}\right)=\left\{ \begin{array}{cc} 1,\textrm{ if }\left|\frac{2\pi n}{T}\right|\leq a\textrm{, for a certain }n\in\textbf{Z}\\ 0,\textrm{ otherwise } \end{array}\right\}.$$ Consequently we have $$\sum^{\infty}_{n=-\infty}\frac{\sin(a(t+n T))}{t+nT}=\frac{\pi}{T}\sum_{|n|\leq aT/(2\pi)}e^{2\pi i n t/T}.$$ Hence if for example $a=5/2$, $T=3$, then $n=-1,0,1$ and $$\sum^{\infty}_{n=-\infty}\frac{\sin(a(t+n T))}{t+nT}=\sum^{\infty}_{n=-\infty}\frac{\sin(\frac{5}{2}(t+2n))}{t+2n}=$$ $$=e^{2\pi i (-1) t/3}+e^{2\pi i 0 t/3}+e^{2\pi i 1 t/3}=\frac{\pi}{3}\left(1+2\cos\left(\frac{2\pi t}{3}\right)\right).$$ Note that the Fourier transform $\Phi(x)$ of $\phi(t)$ is $$\Phi(x)=\int^{\infty}_{-\infty}\phi(t)e^{-itx}dt$$ References [1]: Athanasios Papoulis. ''The Fourier Integral and its Applications''. McGraw-Hill Book Company, Inc. 
United States of America, 1962 • We have: $$f(k,y)=\sum_{n=-\infty}^{\infty} \frac{\text{sin} \pi(kn-y)}{\pi(kn-y)}$$ We consider first that $k$ is even, that means $\sin{(\pi kn-\pi y)}=\sin{(-\pi y)}$ no matter $n$ $f(k,y)=\frac{\sin(-\pi y)}{\pi}\sum_{n=-\infty}^{\infty} \frac{1}{(kn-y)}$ Let $\delta(n)=\frac{1}{(kn-y)}$, when $n=0$, $\delta=\frac{-1}{y}$ When $n$ is different than zero, $\delta(n)=\frac{1}{kn-y}$ if $n>0$, otherwise $\delta(x)=\frac{-1}{kx+y}$ where $x$ is positive and $n=-x$ which means, $\sum_{n=-\infty}^{\infty} \delta(n)=\frac{-1}{y}+\sum_{n=1}^{\infty}(\frac{1}{kn-y}-\frac{1}{kn+y})$ $$\sum_{n=1}^{\infty}(\frac{1}{kn-y}-\frac{1}{kn+y})=\left [\frac{1}{k}(ln(n-y/k)-ln(n+y/k))\right ]^\infty_1=\left [ \frac{1}{k}ln(\frac{n-y/k}{n+y/k})\right ]^\infty_1=\frac{1}{k}(ln1-ln(1-y/k)+ln(1+y/k))$$ The quantity is positive since $ln(k+y)>ln(k-y)$, that leads to: $$f(k,y)=\frac{\sin(-\pi y)}{\pi}(\frac{-1}{y}+\frac{1}{k}ln(1+y/k)-\frac{1}{k}ln(1-y/k))$$ When $k$ is odd that leaves us with two sub-cases, $n=2l$ and $n=2l+1$. Cases where $n=2l$ we call $f_{even}(k,y)=\frac{\sin(-\pi y)}{\pi}\sum_{n=-\infty}^{\infty} \frac{1}{(2kn-y)}$ because $\sin{(\pi kn-\pi y)}=\sin{(-\pi y)}$, Which gives the same first formula with $n$ substitued by $2n$. If $n=2l+1$ , $\sin{(\pi kn-\pi y)}=\sin{(\pi y)}$ because $kn$ is odd. In such case: $f_{odd}=\frac{\sin(\pi y)}{\pi}\sum_{n=1}^{\infty} (\frac{1}{k(2n+1)-y}-\frac{1}{k(2n+1)+y})$ Merging both cases; \$f=f_{odd}+f_{even}$$=\sin(-\pi y)\left [ \frac{-1}{y}+\frac{1}{2k}ln(\frac{n-y/2k}{n+y/2k})\right ]^\infty_1+\sin(\pi y)\left [ \frac{1}{2k}ln(\frac{n+(k-y)/2k}{n+(k+y)/2k})\right ]^\infty_0$$$$=\frac{\sin(-\pi y)}{\pi}(\frac{-1}{y}+\frac{1}{2k}ln(1+y/2k)-\frac{1}{2k}ln(1-y/2k))+\frac{\sin(\pi y)}{\pi}(\frac{1}{2k}ln(\frac{k+y}{2k})-\frac{1}{2k}ln(\frac{k-y}{2k}))$$
2020-01-21T06:04:48
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/2767564/what-is-the-sum-of-a-sinc-function-series-sampled-periodically", "openwebmath_score": 0.9260016083717346, "openwebmath_perplexity": 631.4348396887883, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357179075082, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573125945396 }
https://socratic.org/questions/how-do-you-simplify-sqrt18-sqrt8-sqrt32
# How do you simplify sqrt18 + sqrt8 - sqrt32? ##### 1 Answer Feb 13, 2016 First, we can simplify each individual radical. $\sqrt{18} = \sqrt{9 \cdot 2} = 3 \sqrt{2}$ $\sqrt{8} = \sqrt{4 \cdot 2} = 2 \sqrt{2}$ $\sqrt{32} = \sqrt{16 \cdot 2} = 4 \sqrt{2}$ So now we have: $3 \sqrt{2} + 2 \sqrt{2} - 4 \sqrt{2}$ Since these have the same radicals, we can add and subtract them as we do with variables. $5 \sqrt{2} - 4 \sqrt{2}$ And our answer is $\sqrt{2}$
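A one-line check with SymPy (my addition), which simplifies each radical the same way as above:

```python
import sympy as sp

print(sp.sqrt(18) + sp.sqrt(8) - sp.sqrt(32))   # sqrt(2)
```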
2021-09-17T19:27:07
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-simplify-sqrt18-sqrt8-sqrt32", "openwebmath_score": 0.7928795218467712, "openwebmath_perplexity": 2153.4436696521093, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357173731318, "lm_q2_score": 0.665410558746814, "lm_q1q2_score": 0.6532573122389599 }
https://usamo.wordpress.com/page/2/
# Algebraic Topology Functors This will be old news to anyone who does algebraic topology, but oddly enough I can't seem to find it all written in one place anywhere, and in particular I can't find the bit about ${\mathsf{hPairTop}}$ at all. In algebraic topology you (for example) associate every topological space ${X}$ with a group, like ${\pi_1(X, x_0)}$ or ${H_5(X)}$. All of these operations turn out to be functors. This isn't surprising, because as far as I'm concerned the definition of a functor is "any time you take one type of object and naturally make another object". The surprise is that these objects also respect homotopy in a nice way; proving this is a fair amount of the "setup" work in algebraic topology. ## 1. Homology, ${H_n : \mathsf{hTop} \rightarrow \mathsf{Grp}}$ Note that ${H_5}$ is a functor $\displaystyle H_5 : \mathsf{Top} \rightarrow \mathsf{Grp}$ i.e. to every space ${X}$ we can associate a group ${H_5(X)}$. (Of course, replace ${5}$ by an integer of your choice.) Recall that: Definition 1 Two maps ${f, g : X \rightarrow Y}$ are homotopy equivalent if there exists a homotopy between them. Thus for a map we can take its homotopy class ${[f]}$ (the equivalence class under this relationship). This has the nice property that ${[f \circ g] = [f] \circ [g]}$ and so on. Definition 2 Two spaces ${X}$ and ${Y}$ are homotopic if there exists a pair of maps ${f : X \rightarrow Y}$ and ${g : Y \rightarrow X}$ such that ${[g \circ f] = [\mathrm{id}_X]}$ and ${[f \circ g] = [\mathrm{id}_Y]}$. In light of this, we can define Definition 3 The category ${\mathsf{hTop}}$ is defined as follows: • The objects are topological spaces ${X}$. • The morphisms ${X \rightarrow Y}$ are homotopy classes of continuous maps ${X \rightarrow Y}$. Remark 4 Composition is well-defined since ${[f \circ g] = [f] \circ [g]}$. Two spaces are isomorphic in ${\mathsf{hTop}}$ if they are homotopic. Remark 5 As you might guess this "quotient" construction is called a quotient category. Then the big result is that: Theorem 6 The induced map ${f_\sharp = H_n(f)}$ of a map ${f: X \rightarrow Y}$ depends only on the homotopy class of ${f}$. Thus ${H_n}$ is a functor $\displaystyle H_n : \mathsf{hTop} \rightarrow \mathsf{Grp}.$ The proof of this is geometric, using the so-called prism operators. In any case, as with all functors we deduce Corollary 7 ${H_n(X) \cong H_n(Y)}$ if ${X}$ and ${Y}$ are homotopic. In particular, the contractible spaces are those spaces ${X}$ which are homotopy equivalent to a point. In which case, ${H_n(X) = 0}$ for all ${n \ge 1}$. ## 2. Relative Homology, ${H_n : \mathsf{hPairTop} \rightarrow \mathsf{Grp}}$ In fact, we also defined homology groups $\displaystyle H_n(X,A)$ for ${A \subseteq X}$. We will now show this is functorial too. Definition 8 Let ${\varnothing \neq A \subset X}$ and ${\varnothing \neq B \subset Y}$ be subspaces, and consider a map ${f : X \rightarrow Y}$. If ${f(A) \subseteq B}$ we write $\displaystyle f : (X,A) \rightarrow (Y,B).$ We say ${f}$ is a map of pairs, between the pairs ${(X,A)}$ and ${(Y,B)}$. Definition 9 We say that ${f,g : (X,A) \rightarrow (Y,B)}$ are pair-homotopic if they are "homotopic through maps of pairs". More formally, a pair-homotopy ${f, g : (X,A) \rightarrow (Y,B)}$ is a map ${F : [0,1] \times X \rightarrow Y}$, which we'll write as ${F_t(X)}$, such that ${F}$ is a homotopy of the maps ${f,g : X \rightarrow Y}$ and each ${F_t}$ is itself a map of pairs.
Thus, we naturally arrive at two categories: • ${\mathsf{PairTop}}$, the category of pairs of toplogical spaces, and • ${\mathsf{hPairTop}}$, the same category except with maps only equivalent up to homotopy. Definition 10 As before, we say pairs ${(X,A)}$ and ${(Y,B)}$ are pair-homotopy equivalent if they are isomorphic in ${\mathsf{hPairTop}}$. An isomorphism of ${\mathsf{hPairTop}}$ is a pair-homotopy equivalence. Then, the prism operators now let us derive Theorem 11 We have a functor $\displaystyle H_n : \mathsf{hPairTop} \rightarrow \mathsf{Grp}.$ The usual corollaries apply. Now, we want an analog of contractible spaces for our pairs: i.e. pairs of spaces ${(X,A)}$ such that ${H_n(X,A) = 0}$ for ${n \ge 1}$. The correct definition is: Definition 12 Let ${A \subset X}$. We say that ${A}$ is a deformation retract of ${X}$ if there is a map of pairs ${r : (X, A) \rightarrow (A, A)}$ which is a pair homotopy equivalence. Example 13 (Examples of Deformation Retracts) 1. If a single point ${p}$ is a deformation retract of a space ${X}$, then ${X}$ is contractible, since the retraction ${r : X \rightarrow \{\ast\}}$ (when viewed as a map ${X \rightarrow X}$) is homotopic to the identity map ${\mathrm{id}_X : X \rightarrow X}$. 2. The punctured disk ${D^2 \setminus \{0\}}$ deformation retracts onto its boundary ${S^1}$. 3. More generally, ${D^{n} \setminus \{0\}}$ deformation retracts onto its boundary ${S^{n-1}}$. 4. Similarly, ${\mathbb R^n \setminus \{0\}}$ deformation retracts onto a sphere ${S^{n-1}}$. Of course in this situation we have that $\displaystyle H_n(X,A) \cong H_n(A,A) = 0.$ ## 3. Homotopy, ${\pi_1 : \mathsf{hTop}_\ast \rightarrow \mathsf{Grp}}$ As a special case of the above, we define Definition 14 The category ${\mathsf{Top}_\ast}$ is defined as follows: • The objects are pairs ${(X, x_0)}$ of spaces ${X}$ with a distinguished basepoint ${x_0}$. We call these pointed spaces. • The morphisms are maps ${f : (X, x_0) \rightarrow (Y, y_0)}$, meaning ${f}$ is continuous and ${f(x_0) = y_0}$. Now again we mod out: Definition 15 Two maps ${f , g : (X, x_0) \rightarrow (Y, y_0)}$ of pointed spaces are homotopic if there is a homotopy between them which also fixes the basepoints. We can then, in the same way as before, define the quotient category ${\mathsf{hTop}_\ast}$. And lo and behold: Theorem 16 We have a functor $\displaystyle \pi_1 : \mathsf{hTop}_\ast \rightarrow \mathsf{Grp}.$ Same corollaries as before. # Notes on Publishing My Textbook Hmm, so hopefully this will be finished within the next 10 years. — An email of mine at the beginning of this project My Euclidean geometry book was published last March or so. I thought I’d take the time to write about what the whole process of publishing this book was like, but I’ll start with the disclaimer that my process was probably not very typical and is unlikely to be representative of what everyone else does. ## Writing the Book ### The Idea I’m trying to pin-point exactly when this project changed from “daydream” to “let’s do it”, but I’m not quite sure; here’s the best I can recount. It was sometimes in the fall of 2013, towards the start of the school year; I think late September. I was a senior in high school, and I was only enrolled in two classes. It was fantastic, because it meant I had lots of time to study math. The superintendent of the school eventually found out, though, and forced me to enroll as an “office assistant” for three periods a day. 
Nonetheless, office assistant is not a very busy job, and so I had lots of time, all the time, every day. Anyways, I had written a bit of geometry material for my math club the previous year, which was intended to be a light introduction. But in doing so I realized that there was much, much more I wanted to say, and so somewhere on my mental to-do list I added “flesh these notes out”. So one day, sitting in the office, after having spent another hour playing StarCraft, I finally got down to this item on the list. I hadn’t meant it to be a book; I was just wanted to finish what I had started the previous year. But sometimes your own projects spiral out of your control, and that’s what happened to me. Really, I hadn’t come up with a brilliant idea that no one had thought of before. To my knowledge, no one had even tried yet. If I hadn’t gone and decided to write this book, someone else would have done it; maybe not right away, but within many years. Indeed, I was honestly surprised that I was the first one to make an attempt. The USAMO has been a serious contest since at least the 1990’s and 2000’s, and the demand for this book certainly existed well before my time. Really, I think this all just goes to illustrate that the Efficient Market Hypothesis is not so true in these kind of domains. ### Setting Out Initially, this text was titled A Voyage in Euclidean Geometry and the filename Voyage.pdf would persist throughout the entire project even though the title itself would change throughout. The beginning of the writing was actually quite swift. Like everyone else, I started out with an empty LaTeX file. But it was different from blank screens I’ve had to deal with in my life; rather than staring in despair (think English essay mode), I exploded. I was bursting with things I wanted to write. It was the result of having years of competitive geometry bottled up in my head. In fact, I still have the version 0 of the table of contents that came to life as I started putting things together. • Angle Chasing (include “Fact 5”) • Centers of the Triangle • The Medial Triangle • The Euler Line • The Nine-Point Circle • Circles • Incircles and Excircles • The Power of a Point • Computational Geometry • All the Areas (include Extended Sine Law, Ceva/Menelaus) • Similar Triangles • Homothety • Stewart’s Theorem • Ptolemy’s Theorem • Some More Configurations (include symmedians) • Simson lines • Incircles and Excenters, Revisited • Midpoints of Altitudes • Circles Again • Inversion • Circles Inscribed in Segments • The Miquel Point (include Brokard, this could get long) • Spiral Similarity • Projective Geometry • Harmonic Division • Brokard’s Theorem • Pascal’s Theorem • Computational Techniques • Complex Numbers • Barycentric Coordinates Of course the table of contents changed drastically over time, but that wasn’t important. The point of the initial skeleton was to provide a bucket sort for all the things that I wanted to cover. Often, I would have three different sections I wanted to write, but like all humans I can only write one thing at a time, so I would have to create section headers for the other two and try to get the first section done as quickly as I could so that I could go and write the other two as well. I did take the time to do some things correctly, mostly LaTeX. 
Some examples of things I did: • Set up proper amsthm environments: earlier versions of the draft had “lemma”, “theorem”, “problem”, “exercise”, “proposition”, all distinct • Set up an organized master LaTeX file with \include’s for the chapters, rather than having just one fat file. • Set up shortcuts for setting up diagrams and so on. • Set up a “hints” system where hints to the problems would be printed in random order at the end of the book. • Set up a special command for new terms (\vocab). At the beginning all it did was made the text bold, but I suspected that later I might it do other things (like indexing). In other words, whenever possible I would pay O(1) cost to get back O(n) returns. Indeed the point of using LaTeX for a long document is so that you can “say what you mean”: you type \begin{theorem} … \end{theorem}, and all the formatting is taken care of for you. Decide you want to change it later, and you only have to change the relevant code in the beginning. And so, for three hours a day, five days a week, I sat in the main office of Irvington High School, pounding out chapter after chapter. I was essentially typing up what had been four years of competition experience; when you’re 17 years old, that’s a big chunk of your life. I spent surprisingly little time revising (before first submission). Mostly I just fired away. I have always heard things about how important it is to rewrite things and how first drafts are always terrible, but I’m glad I ignored that advice at least at the beginning. It was immensely helpful to have the skeleton of the book laid out in a tangible form that I could actually see. That’s one thing I really like about writing; helps you collect your thoughts together. It’s possible that this is part of my writing style; compared to what everyone says I should do, I don’t do very much rewriting. My first and final drafts tend to look pretty similar. I think this is just because when I write something that’s not an English essay, I already have a reasonably good idea what I want to say, and that the process of writing it out does much of the polishing for me. I’m also typically pretty hesitant when I write things: I do a lot of pausing for a few minutes deciding whether this sentence is really what I want before actually writing it down, even in drafts. ### Some Encouragement By late October, I had about 80 or so pages content written. Not that impressive if you think about it; I think it works out to something like 4 pages per day. In fact, looking through my data, I’m pretty sure I had a pretty consistent writing rate of about 30 minutes per page. It didn’t matter, since I had so much time. At this point, I was beginning to think about possibly publishing the book, so it was coming out reasonably well. It was a bit embarrassing, since as far as I could tell, publishing books was done by people who were actually professionals in some way or another. So I reached out to a couple of teachers of mine (not high school) who I knew had published textbooks in one form or another; I politely asked them what their thoughts were, and if they had any advice. I got some gentle encouragement, but also a pointer to self-publishing: turns out in this day and age, there are services like Lulu or CreateSpace that will just let you publish… whatever you want. 
This gave me the guts to keep working on this, because it meant that there was a minimal floor: even if I couldn’t get a traditional publisher, the worst I could do was self-publish through Amazon, which was at any rate strictly better than the plan of uploading a PDF somewhere. So I kept writing. The seasons turned, and by February, the draft was 200 pages strong. In April, I had staked out a whopping 333 pages. ## The Review Process ### Entering the MAA’s Queue I was finally beginning to run out of things I wanted to add, after about six months of endless typing. So I decided to reach out again; this time I contacted a professor (henceforth Z) that I knew, whom I knew well from time at the Berkeley Math Circle. After some discussion, Z agreed to look briefly at an early draft of the manuscript to get a feel for what it was like. I must have exceeded his expectations, because Z responded enthusiastically suggesting that I submit it to the Problem Book Series of the MAA. As it turns out, he was on the editorial board, so in just a few days my book was in the official queue. This was all in April. The review process was scheduled to begin in June, and likely take the entirety of the summer. I was told that if I had a more revised draft before the review that I should also send it in. It was then I decided I needed to get some feedback. So, I reached out to a few of my close friends asking them if they’d be willing to review drafts of the manuscript. This turned out to not go quite as well as I hoped, since • Many people agreed eagerly, but then didn’t actually follow through with going through and reading chapter by chapter. • I was stupid enough to send the entire manuscript rather than excerpts, and thus ran myself a huge risk of getting the text leaked. Fortunately, I have good friends, but it nagged at me for quite a while. Learned my lesson there. That’s not to say it was completely useless; I did get some typos fixed. But just not as many as I hoped. ### The First Review Not very much happened for the rest of the summer while I waited impatiently; it was a long four month wait for me. Finally, in the end of August 2014, I got the comments from the board; I remember I was practicing the piano at Harvard when I saw the email. There had been six reviews. While I won’t quote the exact reviews, I’ll briefly summarize them. 1. There is too much idiosyncratic terminology. 2. This is pretty impressive, but will need careful editing. 3. This project is fantastic; the author should be encouraged to continue. 4. This is well developed; may need some editing of contents since some topics are very advanced. 5. Overall I like this project. That said, it could benefit from some reading and editing. For example, here are some passages in particular that aren’t clear. 6. This manuscript reads well, written at a fairly high level. The motivation provided are especially good. It would be nice if there were some solutions or at least longer hints for the (many) problems in the text. Overall the author should be encouraged to continue. The most surprising thing was how short the comments were. I had expected that, given the review had consumed the entire summer, the reviewers would at least have read the manuscript in detail. But it turns out that mostly all that had been obtained were cursory impressions from the board members: the first four reviews were only a few sentences long! The fifth review was more detailed, but it was essentially a “spot check”. 
I admit, I was really at a loss for how I should proceed. The comments were not terribly specific, and the only real actionable items were to use less extravagant terminology, in response to 1 (I originally had “configuration”, “exercise” vs “problem”, etc.), and to add solutions (in response to 6). When I showed the comments to Z, he commented that while they were positive, they seemed to suggest that publication might not happen anytime soon. So I decided to try submitting a second draft to the MAA, but if that didn’t work I would fall back on the self-publishing route. The reviewers had commented about finding a few typos, so I again enlisted the help of some friends of mine to eliminate them. This time I was a lot smarter. First, I only sent the relevant excerpts that I wanted them to read, and watermarked the PDFs with the names of the recipients. Secondly, this time I paid them as well: specifically, I gave $40 + \min(40, 0.1n^2)$ dollars for each chapter read, where $n$ was the number of errors found. I also gave a much clearer “I need this done by X” deadline. This worked significantly better than my first round of edits. Note to self: people feel more obliged to do a good job if you pay them! All in all my friends probably eliminated about 500 errors. I worked as rapidly as I could, and within four weeks I had the new version. The changes that I made were:

• In response to the first board comment, I eliminated some of the most extravagant terminology (“demonstration”, “configuration”, etc.) in favor of more conventional terms (“example”, “lemma”).
• I picked about 5-10 problems from each chapter and added full solutions for them. This inflated the manuscript by another 70 pages, for a new total of 400 pages.
• Many typos were fixed and small revisions made, thanks to my team of readers.
• Some formatting changes; most notably, I got the idea to put theorems and lemmas in boxes using mdframed (most of my recent olympiad handouts have the same boxes).

I sent this out and sat back.

### The Second Review

What followed was another long waiting process for what again ended up being cursory comments. The delay between the first and second review was definitely the most frustrating part — there seemed to be nothing I could do other than sit and wait. I seriously considered dropping the MAA and self-publishing during this time. I had been told to expect comments back in the spring. Finally, in early April I poked the editorial board again asking whether there had been any progress, and was horrified to find out that the process hadn’t even started, due to a miscommunication. Fortunately, the editor was apologetic enough about the error that she asked the board to try to expedite the process a little. The comments then arrived in mid-May, six weeks afterwards. There were eight reviewers this time. In addition to some stylistic changes suggested (e.g. avoid contractions), here were some of the main comments.

• The main complaint was that I had been a bit too informal. They were right on all counts here: in the draft I had sent, the chapters had opened with some quotes from years of MOP (which confused the board, for obvious reasons) and I had some snarky comments about high school geometry (since I happen to despise the way Euclidean geometry is taught in high school). I found it amusing that no one had brought this up until now, and happily obliged to fix it.
• Some reviewers had pointed out that some of the topics were very advanced.
In fact, one of the reviewers actually recommended against the publication of the book, on the grounds that no one would want to buy it. Fortunately, the book ended up getting accepted anyways.
• In that vein, there were some remarks that this book, although it serves its target audience well, is written at a fairly advanced level.

Some of the reviews were cursory like before, but some of them were line-by-line readings of a random chapter, and so this time I had something more tangible to work with. So I proceeded to make the changes. For the first time, I finally had the brains to start using git to track the changes I made to the book. This was an enormously good idea, and I wish I had done so earlier. Here are some selected changes that were made (the full list of changes is quite long).

• Eliminated a bunch of snarky comments about high school, and the MOP quotes.
• Eliminated about 50 instances of unnecessary future tense.
• Eliminated the real product from the text.
• Added to and significantly improved the index of the book, making it far more complete.
• Fixed more references.
• Changed the title to “Euclidean Geometry in Mathematical Olympiads” (it was originally “Geometra Galactica”).
• Changed the name of Part II from “Dark Arts” to “Analytic Techniques”. (Hehe.)
• Added people to the acknowledgments.
• Changes in formatting: most notably, I changed the font size from 11pt to 10pt to decrease the page count, since my book was already twice as long as many of the other books in the series. This dropped me from about 400 pages back to about 350 pages.
• Fixed about 200 more typos. Thanks to those of you who found them!

I sent out the third draft just as June started, about three weeks after I had received the comments. (I like to work fast.)

### The Last Revisions

There were another two rounds afterwards. In late June, I got a small set of about three pages of additional typos and clarifying suggestions. I sent back the third draft one day later. Six days later, I got back a list of four remaining edits to make. I sent an updated fourth draft 17 minutes after receiving those comments. Unfortunately, it then took another five weeks for the four changes I made to be acknowledged. Finally, in early August, the changes were approved and the editorial board forwarded an official recommendation to the MAA to publish the book.

### Summary of Review Timeline

In summary, the timeline of the review process was
• First draft submitted: April 6, 2014
• Feedback received: August 28, 2014
• Second draft submitted: November 5, 2014
• Feedback received: May 19, 2015
• Third draft submitted: June 23, 2015
• Feedback received: June 29, 2015
• Fourth draft submitted: June 29, 2015
• Official recommendation to MAA made: August 2015

I think with traditional publishers there is a lot of waiting; my understanding is that the editorial board largely consists of volunteers, so this seems inevitable.

## Approval and Onwards

On September 3, 2015, I got the long-awaited message:

It is a pleasure to inform you that the MAA Council on Books has approved the recommendation of the MAA Problem Books editorial board to publish your manuscript, Euclidean Geometry in Mathematical Olympiads.

I got a fairly standard royalty contract from the publisher, which I signed without much thought.

### Editing

I had a total of zero math editors and one copy editor provided. It shows through on the enormous list of errors (and this is after all the mistakes my friends helped me find).
Fortunately, my copy editor was quite good (and I have a lot of sympathy for this poor soul, who had to read every word of the entire manuscript). My Git history indicates that approximately 1000 corrections were made; on average, this is about 2 per page, which sounds about right. I got the corrections on hard copy in the mail: the entire printout of my book, well marked with red ink. Many of the changes fell into a few general categories:

• Capitalization. I was unwittingly inconsistent with “Law of Cosines” versus “Law of cosines” versus “law of cosines”, etc., and my copy editor noticed every one of these. Similarly, the capitalization of section and chapter titles was often not consistent; should I use “Angle Chasing” or “Angle chasing”? The main point is to pick one convention and stick with it.
• My copy editor pointed out every time I used “Problems for this section” and had only one problem.
• Several unnecessary “quotes” and italics were deleted.
• Oxford commas. My god, so many Oxford commas. You just don’t notice when the IMO Shortlist says “the circle through the points E, G, and H” but the European Girls’ Olympiad says “show that KH, EM and BC are concurrent”. I swear there were at least 100 of these in the book. I tried to write a regular expression to find such mistakes, but there were lots of edge cases that came up, and I still had to do many of these manually.
• Inconsistency of em dashes and en dashes. This one worked better with regular expressions.

But of course there were plenty of other mistakes like missing spaces, missing degree spaces, punctuation errors, etc.

### Cover Art

This was handled for me by the publisher: they gave me a choice of five or so designs and I picked one I liked. (If you are self-publishing, this is actually one of the hardest parts of the publishing logistics; you need to design the cover on your own.)

### Proofs

It turns out that after all the hard work I spent on formatting the draft, the MAA has a standard template and had the production team re-typeset the entire book using this format. Fortunately, the publisher’s format is pretty similar to mine, and so there were no huge cosmetic changes. At this point I got the proofs, which are essentially the penultimate draft of the book as it will be sent to the printers.

### Affiliation and Miscellany

There was a bit more back-and-forth with the publisher towards the end. For example, they asked me if I would like my affiliation to be listed as MIT or to not have an affiliation. I chose the latter. I also sent them a bio and photograph, and an author questionnaire asking for some standard details. Marketing was handled by the publisher based on these details.

## The End

Without warning, I got an email on March 25 announcing that the PDF versions of my book were now available on the MAA website. The hard copies followed a few months afterwards. That marked the end of my publication process. If I were to do this sort of thing again, I guess the main decision would be whether to self-publish or go through a formal publisher. The main disadvantage of a formal publisher seems to be the time delay, and possibly also that the royalties are lower than in self-publishing. On the flip side, the advantages of a formal publisher were:

• Having a real copy editor read through the entire manuscript.
• Having a committee of outsiders knock some common sense into me (e.g. not calling the book “Geometra Galactica”).
• Having cover art and marketing completely done for me.
• It’s more prestigious; having a real published book is (for whatever reason) a very nice CV item.

Overall I think publishing formally was the right thing to do for this book, but your mileage may vary. Other advice I would give to my past self, mentioned above already: keep paying O(1) for O(n), use git to keep track of all versions, and be conscious about which grammatical conventions to use (in particular, stay consistent). Here’s a better concluding question: what surprised me about the process, i.e., what was different from what I expected? Here’s a partial list of answers:

• It took even longer than I was expecting. Large committees are inherently slow; this is no slight to the MAA, it is just how these sorts of things work.
• I was surprised that at no point did anyone really check the manuscript for mathematical accuracy. In hindsight this should have been obvious; I expect reading the entire book properly takes at least 1-2 years.
• I was astounded by how many errors there were in the text, be they mathematical, grammatical, or otherwise. During the entire process something like 2000 errors were corrected (admittedly many were minor, like Oxford commas). Yet even as I published the book, I knew that there had to be errors left. But it was still irritating to hear about them post-publication.

All in all, the entire process started in September 2013 and ended in March 2016, which is 30 months. The time was roughly 30% writing, 50% review, and 20% production.

# DNSCrypt Setup with PDNSD

Here are notes for setting up DNSCrypt on Arch Linux, using pdnsd as a DNS cache, assuming the use of NetworkManager. I needed it one day since the network I was using blocked traffic to external DNS servers (parental controls), and the DNS server provided had an outdated entry for hmmt.co. (My dad then pointed out to me I could have just hard-coded the necessary IP address in /etc/hosts, oops.) For the whole process, useful commands to test with are:

• nslookup hmmt.co will tell you the IP used and the server from which it came.
• dig www.hmmt.co gives much more detailed information to this effect. (From bind-tools.)
• dig @127.0.0.1 www.hmmt.co lets you query a specific DNS server (in this case 127.0.0.1).
• drill @127.0.0.1 www.hmmt.co behaves similarly.

First, pacman -S pdnsd dnscrypt-proxy (with sudo ostensibly, but I’ll leave that out here and henceforth). Run systemctl edit dnscrypt-proxy.socket and fill in override.conf with

    [Socket]
    ListenStream=
    ListenDatagram=
    ListenStream=127.0.0.1:40
    ListenDatagram=127.0.0.1:40

Optionally, one can also specify which DNS server to use with systemctl edit dnscrypt-proxy.service. For example, for cs-uswest I write

    [Service]
    ExecStart=
    ExecStart=/usr/bin/dnscrypt-proxy \
        -R cs-uswest

The empty ExecStart= is necessary, since otherwise systemctl will complain about multiple ExecStart commands. This thus configures dnscrypt-proxy to listen on 127.0.0.1, port 40. Now we configure pdnsd to listen on port 53 (default) for cache, and relay cache misses to dnscrypt-proxy. This is accomplished by using the following for /etc/pdnsd.conf:

    global {
        perm_cache = 1024;
        cache_dir = "/var/cache/pdnsd";
        run_as = "pdnsd";
        server_ip = 127.0.0.1;
        status_ctl = on;
        query_method = udp_tcp;
        min_ttl = 15m;       # Retain cached entries at least 15 minutes.
        max_ttl = 1w;        # One week.
        timeout = 10;        # Global timeout option (10 seconds).
        neg_domain_pol = on;
        udpbufsize = 1024;   # Upper limit on the size of UDP messages.
    }

    server {
        label = "dnscrypt-proxy";
        ip = 127.0.0.1;
        port = 40;
        timeout = 4;
        proxy_only = on;
    }

    source {
        owner = localhost;
        file = "/etc/hosts";
    }

Now it remains to change the DNS server from whatever default is used to 127.0.0.1. For NetworkManager users, it is necessary to edit /etc/NetworkManager/NetworkManager.conf to prevent it from overriding this file:

    [main]
    ...
    dns=none

This will cause resolv.conf to be written as an empty file by NetworkManager: in this case, the default 127.0.0.1 is used as the nameserver, which is what we want. Needless to say, one finishes with

    systemctl enable dnscrypt-proxy
    systemctl start dnscrypt-proxy
    systemctl enable pdnsd
    systemctl start pdnsd

# A Sketchy Overview of Green-Tao

These are the notes of my last lecture in the 18.099 discrete analysis seminar. It is a very high-level overview of the Green-Tao theorem. It is a subset of this paper.

## 1. Synopsis

This post is an overview of the proof of:

Theorem 1 (Green-Tao) The prime numbers contain arbitrarily long arithmetic progressions.

Here, Szemerédi’s theorem isn’t strong enough, because the primes have density approaching zero. Instead, one can try to prove the following “relative” result.

Theorem (Relative Szemerédi) Let ${S}$ be a sparse “pseudorandom” set of integers. Then subsets ${A \subseteq S}$ with positive density in ${S}$ have arbitrarily long arithmetic progressions.

In order to do this, we have to accomplish the following.

• Make precise the notion of “pseudorandom”.
• Prove the Relative Szemerédi theorem, and then
• Exhibit a “pseudorandom” set ${S}$ which subsumes the prime numbers.

This post will use the graph-theoretic approach to Szemerédi as in the exposition of David Conlon, Jacob Fox, and Yufei Zhao. In order to motivate the notion of pseudorandom, we return to the graph-theoretic approach to Roth’s theorem, i.e. the case ${k=3}$ of Szemerédi’s theorem.

## 2. Defining the linear forms condition

### 2.1. Review of Roth’s theorem

Roth’s theorem can be phrased in two ways. The first is the “set-theoretic” formulation:

Theorem 2 (Roth, set version) If ${A \subseteq \mathbb Z/N}$ is 3-AP-free, then ${|A| = o(N)}$.

The second is a “weighted” version:

Theorem 3 (Roth, weighted version) Fix ${\delta > 0}$. Let ${f : \mathbb Z/N \rightarrow [0,1]}$ with ${\mathbf E f \ge \delta}$. Then $\displaystyle \Lambda_3(f,f,f) \ge \Omega_\delta(1).$

We sketch the idea of a graph-theoretic proof of the first theorem. We construct a tripartite graph ${G_A}$ on vertices ${X \sqcup Y \sqcup Z}$, where ${X = Y = Z = \mathbb Z/N}$. Then one creates the edges

• ${(x,y)}$ if ${2x+y \in A}$,
• ${(x,z)}$ if ${x-z \in A}$, and
• ${(y,z)}$ if ${-y-2z \in A}$.

This construction is selected so that arithmetic progressions in ${A}$ correspond to triangles in the graph ${G_A}$ (the three values above form a 3-AP with common difference ${-(x+y+z)}$). As a result, if ${A}$ has no 3-AP’s (except trivial ones, where ${x+y+z=0}$), the graph ${G_A}$ has exactly one triangle for every edge. Then, we can use the theorem of Ruzsa-Szemerédi, which states that such a graph ${G_A}$ has ${o(N^2)}$ edges.

### 2.2. The measure ${\nu}$

Now for the generalized version, we start with the second version of Roth’s theorem. Instead of a set ${S}$, we consider a function $\displaystyle \nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}$ which we call a majorizing measure.
Since we are now dealing with ${A}$ of low density, we normalize ${\nu}$ so that $\displaystyle \mathbf E[\nu] = 1 + o(1).$ Our goal is now to show a result of the form:

Theorem (Relative Roth, informally, weighted version) If ${0 \le f \le \nu}$, ${\mathbf E f \ge \delta}$, and ${\nu}$ satisfies a “pseudorandom” condition, then ${\Lambda_3(f,f,f) \ge \Omega_{\delta}(1)}$.

The prototypical example of course is that if ${A \subseteq S \subseteq \mathbb Z/N}$, then we let ${\nu(x) = \frac{N}{|S|} 1_S(x)}$.

### 2.3. Pseudorandomness for ${k=3}$

So, how should we phrase the pseudorandom condition? First, consider the tripartite graph ${G_S}$ defined earlier (with ${A}$ replaced by ${S}$), and let ${p = |S| / N}$; since ${S}$ is sparse we expect ${p}$ small. The main idea that turns out to be correct is:

The number of embeddings of ${K_{2,2,2}}$ in ${G_S}$ is “as expected”, namely ${(1+o(1)) p^{12} N^6}$.

Here ${K_{2,2,2}}$ is actually the ${2}$-blow-up of a triangle. This condition thus gives us control over the distribution of triangles in the sparse graph ${G_S}$: knowing that we have approximately the correct count for ${K_{2,2,2}}$ is enough to control the distribution of triangles. For technical reasons, in fact we want this to be true not only for ${K_{2,2,2}}$ but for all of its subgraphs ${H}$.

Now, let’s move on to the weighted version. Let’s consider a weighted tripartite graph, which we can think of as a collection of three functions \displaystyle \begin{aligned} \mu_{-z} &: X \times Y \rightarrow \mathbb R \\ \mu_{-y} &: X \times Z \rightarrow \mathbb R \\ \mu_{-x} &: Y \times Z \rightarrow \mathbb R. \end{aligned} We think of ${\mu}$ as normalized so that ${\mathbf E[\mu_{-x}] = \mathbf E[\mu_{-y}] = \mathbf E[\mu_{-z}] = 1}$. Then we can define:

Definition 4 A weighted tripartite graph ${\mu = (\mu_{-x}, \mu_{-y}, \mu_{-z})}$ satisfies the ${3}$-linear forms condition if \displaystyle \begin{aligned} \mathbf E_{x^0,x^1,y^0,y^1,z^0,z^1} &\Big[ \mu_{-x}(y^0,z^0) \mu_{-x}(y^0,z^1) \mu_{-x}(y^1,z^0) \mu_{-x}(y^1,z^1) \\ & \mu_{-y}(x^0,z^0) \mu_{-y}(x^0,z^1) \mu_{-y}(x^1,z^0) \mu_{-y}(x^1,z^1) \\ & \mu_{-z}(x^0,y^0) \mu_{-z}(x^0,y^1) \mu_{-z}(x^1,y^0) \mu_{-z}(x^1,y^1) \Big] \\ &= 1 + o(1) \end{aligned} and similarly if any subset of the twelve factors is deleted.

The pseudorandomness condition on ${\nu}$ is then defined via the graph above:

Definition 5 A function ${\nu : \mathbb Z / N \rightarrow \mathbb R_{\ge 0}}$ satisfies the ${3}$-linear forms condition if ${\mathbf E[\nu] = 1 + o(1)}$, and the tripartite graph ${\mu = (\mu_{-x}, \mu_{-y}, \mu_{-z})}$ defined by \displaystyle \begin{aligned} \mu_{-z}(x,y) &= \nu(2x+y) \\ \mu_{-y}(x,z) &= \nu(x-z) \\ \mu_{-x}(y,z) &= \nu(-y-2z) \end{aligned} satisfies the ${3}$-linear forms condition.

Finally, the relative version of Roth’s theorem which we seek is:

Theorem 6 (Relative Roth) Suppose ${\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$ satisfies the ${3}$-linear forms condition. Then for any ${f : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$ bounded above by ${\nu}$ and satisfying ${\mathbf E[f] \ge \delta > 0}$, we have $\displaystyle \Lambda_3(f,f,f) \ge \Omega_{\delta}(1).$

### 2.4. Relative Szemerédi

We of course have:

Theorem 7 (Szemerédi) Suppose ${k \ge 3}$, and ${f : \mathbb Z/N \rightarrow [0,1]}$ with ${\mathbf E[f] \ge \delta}$. Then $\displaystyle \Lambda_k(f, \dots, f) \ge \Omega_{\delta}(1).$

For ${k > 3}$, rather than considering weighted tripartite graphs, we consider a ${(k-1)}$-uniform ${k}$-partite hypergraph.
For example, given ${\nu}$ with ${\mathbf E[\nu] = 1 + o(1)}$ and ${k=4}$, we use the construction \displaystyle \begin{aligned} \mu_{-z}(w,x,y) &= \nu(3w+2x+y) \\ \mu_{-y}(w,x,z) &= \nu(2w+x-z) \\ \mu_{-x}(w,y,z) &= \nu(w-y-2z) \\ \mu_{-w}(x,y,z) &= \nu(-x-2y-3z). \end{aligned} Thus 4-AP’s correspond to the simplex ${K_4^{(3)}}$ (i.e. a tetrahedron). We then consider the two-blow-up of the simplex, and require the same uniformity for all of its subgraphs ${H}$. Here is the compiled version:

Definition 8 A ${(k-1)}$-uniform ${k}$-partite weighted hypergraph ${\mu = (\mu_{-i})_{i=1}^k}$ satisfies the ${k}$-linear forms condition if $\displaystyle \mathbf E_{x_1^0, x_1^1, \dots, x_k^0, x_k^1} \left[ \prod_{j=1}^k \prod_{\omega \in \{0,1\}^{[k] \setminus \{j\}}} \mu_{-j}\left( x_1^{\omega_1}, \dots, x_{j-1}^{\omega_{j-1}}, x_{j+1}^{\omega_{j+1}}, \dots, x_k^{\omega_k} \right)^{n_{j,\omega}} \right] = 1 + o(1)$ for all exponents ${n_{j,\omega} \in \{0,1\}}$.

Definition 9 A function ${\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$ satisfies the ${k}$-linear forms condition if ${\mathbf E[\nu] = 1 + o(1)}$, and $\displaystyle \mathbf E_{x_1^0, x_1^1, \dots, x_k^0, x_k^1} \left[ \prod_{j=1}^k \prod_{\omega \in \{0,1\}^{[k] \setminus \{j\}}} \nu\left( \sum_{i=1}^k (j-i)x_i^{(\omega_i)} \right)^{n_{j,\omega}} \right] = 1 + o(1)$ for all exponents ${n_{j,\omega} \in \{0,1\}}$. This is just the previous condition with the natural ${\mu}$ induced by ${\nu}$.

The natural generalization of relative Szemerédi is then:

Theorem 10 (Relative Szemerédi) Suppose ${k \ge 3}$, and ${\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$ satisfies the ${k}$-linear forms condition. Let ${f : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$ with ${\mathbf E[f] \ge \delta}$, ${f \le \nu}$. Then $\displaystyle \Lambda_k(f, \dots, f) \ge \Omega_{\delta}(1).$

## 3. Outline of proof of Relative Szemerédi

The proof of Relative Szemerédi uses two key facts. First, one replaces ${f}$ with a bounded ${\widetilde f}$ which is near it:

Theorem 11 (Dense model) Let ${\varepsilon > 0}$. There exists ${\varepsilon' > 0}$ such that if:
• ${\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$ satisfies ${\left\lVert \nu-1 \right\rVert^{\square}_r \le \varepsilon'}$, and
• ${f : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$, ${f \le \nu}$
then there exists a function ${\widetilde f : \mathbb Z/N \rightarrow [0,1]}$ such that ${\left\lVert f - \widetilde f \right\rVert^{\square}_r \le \varepsilon}$.

Here we have a new norm, called the cut norm, defined by $\displaystyle \left\lVert f \right\rVert^{\square}_r = \sup_{A_i \subseteq (\mathbb Z/N)^{r-1}} \left\lvert \mathbf E_{x_1, \dots, x_r} f(x_1 + \dots + x_r) 1_{A_1}(x_{-1}) \dots 1_{A_r}(x_{-r}) \right\rvert.$ This is actually an extension of the cut norm defined on an ${r}$-uniform ${r}$-partite hypergraph (not ${(r-1)}$-uniform like before!): if ${g : X_1 \times \dots \times X_r \rightarrow \mathbb R}$ is such a graph, we let $\displaystyle \left\lVert g \right\rVert^{\square}_{r,r} = \sup_{A_i \subseteq X_{-i}} \left\lvert \mathbf E_{x_1, \dots, x_r} g(x_1, \dots, x_r) 1_{A_1}(x_{-1}) \dots 1_{A_r}(x_{-r}) \right\rvert.$ Taking ${g(x_1, \dots, x_r) = f(x_1 + \dots + x_r)}$, ${X_1 = \dots = X_r = \mathbb Z/N}$ gives the analogy.
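For intuition (this remark is my own addition and not in the original notes): unwinding the definition in the smallest case ${r = 2}$ gives $\displaystyle \left\lVert f \right\rVert^{\square}_{2} = \sup_{A, B \subseteq \mathbb Z/N} \left\lvert \mathbf E_{x,y}\, f(x+y)\, 1_{A}(y)\, 1_{B}(x) \right\rvert,$ which, up to normalization, is the classical cut norm of the matrix ${\left(f(x+y)\right)_{x,y}}$ familiar from graph regularity.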
For the second theorem, we define the norm $\displaystyle \left\lVert g \right\rVert^{\square}_{k-1,k} = \max_{i=1,\dots,k} \left( \left\lVert g_{-i} \right\rVert^{\square}_{k-1, k-1} \right).$

Theorem 12 (Relative simplex counting lemma) Let ${\mu}$, ${g}$, ${\widetilde g}$ be ${(k-1)}$-uniform ${k}$-partite weighted hypergraphs on ${X_1 \cup \dots \cup X_k}$. Assume that ${\mu}$ satisfies the ${k}$-linear forms condition, and ${0 \le g_{-i} \le \mu_{-i}}$ for all ${i}$, ${0 \le \widetilde g \le 1}$. If ${\left\lVert g-\widetilde g \right\rVert^{\square}_{k-1,k} = o(1)}$ then $\displaystyle \mathbf E_{x_1, \dots, x_k} \left[ g(x_{-1}) \dots g(x_{-k}) - \widetilde g(x_{-1}) \dots \widetilde g(x_{-k}) \right] = o(1).$

One then combines these two results to prove Relative Szemerédi, as follows. Start with ${f}$ and ${\nu}$ in the theorem. The ${k}$-linear forms condition turns out to imply ${\left\lVert \nu-1 \right\rVert^{\square}_{k-1} = o(1)}$. So we can find a nearby ${\widetilde f}$ by the dense model theorem. Then, we induce ${\mu}$, ${g}$, ${\widetilde g}$ from ${\nu}$, ${f}$, ${\widetilde f}$ respectively. The counting lemma then reduces the bounding of ${\Lambda_k(f, \dots, f)}$ to the bounding of ${\Lambda_k(\widetilde f, \dots, \widetilde f)}$, which is ${\Omega_\delta(1)}$ by the usual Szemerédi theorem.

## 4. Arithmetic progressions in primes

We now sketch how to obtain Green-Tao from Relative Szemerédi. As expected, we need to use the von Mangoldt function ${\Lambda}$. Unfortunately, ${\Lambda}$ is biased (e.g. “all decent primes are odd”). To get around this, we let ${w = w(N)}$ tend to infinity slowly with ${N}$, and define $\displaystyle W = \prod_{p \le w} p.$ In the ${W}$-trick we consider only primes ${1 \pmod W}$. The modified von Mangoldt function is then defined by $\displaystyle \widetilde \Lambda(n) = \begin{cases} \frac{\varphi(W)}{W} \log (Wn+1) & Wn+1 \text{ prime} \\ 0 & \text{else}. \end{cases}$ In accordance with Dirichlet’s theorem, we have ${\sum_{n \le N} \widetilde \Lambda(n) = N + o(N)}$. So, we now need to show:

Proposition 13 Fix ${k \ge 3}$. We can find ${\delta = \delta(k) > 0}$ such that for ${N \gg 1}$ prime, we can find ${\nu : \mathbb Z/N \rightarrow \mathbb R_{\ge 0}}$ which satisfies the ${k}$-linear forms condition as well as $\displaystyle \nu(n) \ge \delta \widetilde \Lambda(n)$ for ${N/2 \le n < N}$.

In that case, we can let $\displaystyle f(n) = \begin{cases} \delta \widetilde\Lambda(n) & N/2 \le n < N \\ 0 & \text{else}. \end{cases}$ Then ${0 \le f \le \nu}$. The presence of ${N/2 \le n < N}$ allows us to avoid “wrap-around issues” that arise from using ${\mathbb Z/N}$ instead of ${\mathbb Z}$. Relative Szemerédi then yields the result.

For completeness, we state the construction. Let ${\chi : \mathbb R \rightarrow [0,1]}$ be supported on ${[-1,1]}$ with ${\chi(0) = 1}$, and define a normalizing constant ${c_\chi = \int_0^\infty \left\lvert \chi'(x) \right\rvert^2 \; dx}$. Inspired by ${\Lambda(n) = \sum_{d \mid n} \mu(d) \log(n/d)}$, we define a truncated ${\Lambda}$ by $\displaystyle \Lambda_{\chi, R}(n) = \log R \sum_{d \mid n} \mu(d) \chi\left( \frac{\log d}{\log R} \right).$ Let ${k \ge 3}$, ${R = N^{k^{-1} 2^{-k-3}}}$. Now, we define ${\nu}$ by $\displaystyle \nu(n) = \begin{cases} \dfrac{\varphi(W)}{W} \dfrac{\Lambda_{\chi,R}(Wn+1)^2}{c_\chi \log R} & N/2 \le n < N \\ 0 & \text{else}. \end{cases}$ This turns out to work, provided ${w}$ grows sufficiently slowly in ${N}$.
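As a quick numerical sanity check of the normalization ${\sum_{n \le N} \widetilde \Lambda(n) = N + o(N)}$, here is a small script (my own addition, not part of the original notes; the parameters ${w = 5}$ and ${N = 20000}$ are arbitrary, and the sympy library is assumed for primality testing):

    from math import log
    from sympy import isprime, totient

    w = 5
    W = 2 * 3 * 5               # product of the primes p <= w
    N = 20000
    c = int(totient(W)) / W     # phi(W) / W

    def lam_tilde(n):
        # Modified von Mangoldt function from the W-trick.
        return c * log(W * n + 1) if isprime(W * n + 1) else 0.0

    total = sum(lam_tilde(n) for n in range(1, N + 1))
    print(total / N)            # roughly 1; the o(1) error is still visible at this size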
# Formal vs Functional Series (OR: Generating Function Voodoo Magic) Epistemic status: highly dubious. I found almost no literature doing anything quite like what follows, which unsettles me because it makes it likely that I’m overcomplicating things significantly. ## 1. Synopsis Recently I was working on an elegant problem which was the original problem 6 for the 2015 International Math Olympiad, which reads as follows: Problem [IMO Shortlist 2015 Problem C6] Let ${S}$ be a nonempty set of positive integers. We say that a positive integer ${n}$ is clean if it has a unique representation as a sum of an odd number of distinct elements from ${S}$. Prove that there exist infinitely many positive integers that are not clean. Proceeding by contradiction, one can prove (try it!) that in fact all sufficiently large integers have exactly one representation as a sum of an even subset of ${S}$. Then, the problem reduces to the following: Problem Show that if ${s_1 < s_2 < \dots}$ is an increasing sequence of positive integers and ${P(x)}$ is a nonzero polynomial then we cannot have $\displaystyle \prod_{j=1}^\infty (1 - x^{s_j}) = P(x)$ as formal series. To see this, note that all sufficiently large ${x^N}$ have coefficient ${1 + (-1) = 0}$. Now, the intuitive idea is obvious: the root ${1}$ appears with finite multiplicity in ${P}$ so we can put ${P(x) = (1-x)^k Q(x)}$ where ${Q(1) \neq 0}$, and then we get that ${1-x}$ on the RHS divides ${P}$ too many times, right? Well, there are some obvious issues with this “proof”: for example, consider the equality $\displaystyle 1 = (1-x)(1+x)(1+x^2)(1+x^4)(1+x^8) \dots.$ The right-hand side is “divisible” by ${1-x}$, but the left-hand side is not (as a polynomial). But we still want to use the idea of plugging ${x \rightarrow 1^-}$, so what is the right thing to do? It turns out that this is a complete minefield, and there are a lot of very subtle distinctions that seem to not be explicitly mentioned in many places. I think I have a complete answer now, but it’s long enough to warrant this entire blog post. Here’s the short version: there’s actually two distinct notions of “generating function”, namely a “formal series” and “functional series”. They use exactly the same notation but are two different types of objects, and this ends up being the source of lots of errors, because “formal series” do not allow substituting ${x}$, while “functional series” do. Spoiler: we’ll need the asymptotic for the partition function ${p(n)}$. ## 2. Formal Series ${\neq}$ Functional Series I’m assuming you’ve all heard the definition of ${\sum_k c_kx^k}$. It turns out unfortunately that this isn’t everything: there are actually two types of objects at play here. They are usually called formal power series and power series, but for this post I will use the more descriptive names formal series and functional series. I’ll do everything over ${\mathbb C}$, but one can of course use ${\mathbb R}$ instead. The formal series is easier to describe: Definition 1 A formal series ${F}$ is an infinite sequence ${(a_n)_n = (a_0, a_1, a_2, \dots)}$ of complex numbers. We often denote it by ${\sum a_nx^n = a_0 + a_1x + a_2x^2 + \dots}$. The set of formal series is denoted ${\mathbb C[ [x] ]}$. This is the “algebraic” viewpoint: it’s a sequence of coefficients. Note that there is no worry about convergence issues or “plugging in ${x}$”. On the other hand, a functional series is more involved, because it has to support substitution of values of ${x}$ and worry about convergence issues. 
So here are the necessary pieces of data:

Definition 2 A functional series ${G}$ (centered at zero) is a function ${G : U \rightarrow \mathbb C}$, where ${U}$ is an open disk centered at ${0}$ or ${U = \mathbb C}$. We require that there exists an infinite sequence ${(c_0, c_1, c_2, \dots)}$ of complex numbers satisfying $\displaystyle \forall z \in U: \qquad G(z) = \lim_{N \rightarrow \infty} \left( \sum_{k=0}^N c_k z^k \right).$ (The limit is taken in the usual metric of ${\mathbb C}$.) In that case, the ${c_i}$ are unique and called the coefficients of ${G}$. This is often written as ${G(x) = \sum_n c_n x^n}$, with the open set ${U}$ suppressed.

Remark 3 Some remarks on the definition of functional series:
• This is enough to imply that ${G}$ is holomorphic (and thus analytic) on ${U}$.
• For experts: note that I’m including the domain ${U}$ as part of the data required to specify ${G}$, which makes the presentation cleaner. Most sources do something with “radius of convergence”; I will blissfully ignore this, leaving this data implicitly captured by ${U}$.
• For experts: perhaps non-standardly, I require ${U \neq \{0\}}$; otherwise I can’t take derivatives, etc.

Thus formal and functional series, despite having the same notation, have different types: a formal series ${F}$ is a sequence, while a functional series ${G}$ is a function that happens to be expressible as an infinite sum within its domain. Of course, from every functional series ${G}$ we can extract its coefficients and make them into a formal series ${F}$. So, for lack of better notation:

Definition 4 If ${F = (a_n)_n}$ is a formal series, and ${G : U \rightarrow \mathbb C}$ is a functional series whose coefficients equal ${F}$, then we write ${F \simeq G}$.

## 3. Finite operations

Now that we have formal and functional series, we can define sums and products. Since these are different types of objects, we will have to run definitions in parallel and then ideally check that they respect ${\simeq}$. For formal series:

Definition 5 Let ${F_1 = (a_n)_n}$ and ${F_2 = (b_n)_n}$ be formal series. Then we set \displaystyle \begin{aligned} (a_n)_n \pm (b_n)_n &= (a_n \pm b_n)_n \\ (a_n)_n \cdot (b_n)_n &= \left( \textstyle\sum_{j=0}^n a_jb_{n-j} \right)_n. \end{aligned} This makes ${\mathbb C[ [x] ]}$ into a ring, with additive identity ${(0,0,0,\dots)}$ and multiplicative identity ${(1,0,0,\dots)}$. We also define the derivative of ${F = (a_n)_n}$ by ${F' = ((n+1)a_{n+1})_n}$.

It’s probably more intuitive to write these definitions as \displaystyle \begin{aligned} \sum_n a_n x^n \pm \sum_n b_n x^n &= \sum_n (a_n \pm b_n) x^n \\ \left( \sum_n a_n x^n \right) \left( \sum_n b_n x^n \right) &= \sum_n \left( \sum_{j=0}^n a_jb_{n-j} \right) x^n \\ \left( \sum_n a_n x^n \right)' &= \sum_n na_n x^{n-1} \end{aligned} and in what follows I’ll start to use ${\sum_n a_nx^n}$ more. But officially, all definitions for formal series are in terms of the coefficients alone; the presence of ${x}$ serves as motivation only.

Exercise 6 Show that if ${F = \sum_n a_nx^n}$ is a formal series, then it has a multiplicative inverse if and only if ${a_0 \neq 0}$.

On the other hand, with functional series, the above operations are even simpler:

Definition 7 Let ${G_1 : U \rightarrow \mathbb C}$ and ${G_2 : U \rightarrow \mathbb C}$ be functional series with the same domain ${U}$. Then ${G_1 \pm G_2}$ and ${G_1 \cdot G_2}$ are defined pointwise. If ${G : U \rightarrow \mathbb C}$ is a functional series (hence holomorphic), then ${G'}$ is defined pointwise.
If ${G}$ is nonvanishing on ${U}$, then ${1/G : U \rightarrow \mathbb C}$ is defined pointwise (and otherwise is not defined). Now, for these finite operations, everything works as you expect: Theorem 8 (Compatibility of finite operations) Suppose ${F}$, ${F_1}$, ${F_2}$ are formal series, and ${G}$, ${G_1}$, ${G_2}$ are functional series ${U \rightarrow \mathbb C}$. Assume ${F \simeq G}$, ${F_1 \simeq G_1}$, ${F_2 \simeq G_2}$. • ${F_1 \pm F_2 \simeq G_1 \pm G_2}$, ${F_1 \cdot F_2 = G_1 \cdot G_2}$. • ${F' \simeq G'}$. • If ${1/G}$ is defined, then ${1/F}$ is defined and ${1/F \simeq 1/G}$. So far so good: as long as we’re doing finite operations. But once we step beyond that, things begin to go haywire. ## 4. Limits We need to start considering limits of ${(F_k)_k}$ and ${(G_k)_k}$, since we are trying to make progress towards infinite sums and products. Once we do this, things start to burn. Definition 9 Let ${F_1 = \sum_n a_n x^n}$ and ${F_2 = \sum_n b_n x^n}$ be formal series, and define the difference by $\displaystyle d(F_1, F_2) = \begin{cases} 2^{-n} & a_n \neq b_n, \; n \text{ minimal} \\ 0 & F_1 = F_2. \end{cases}$ This function makes ${\mathbb C[[x]]}$ into a metric space, so we can discuss limits in this space. Actually, it is a normed vector space obtained by ${\left\lVert F \right\rVert = d(F,0)}$ above. Thus, ${\lim_{k \rightarrow \infty} F_k = F}$ if each coefficient of ${x^n}$ eventually stabilizes as ${k \rightarrow \infty}$. For example, as formal series we have that ${(1,-1,0,0,\dots)}$, ${(1,0,-1,0,\dots)}$, ${(1,0,0,-1,\dots)}$ converges to ${1 = (1,0,0,0\dots)}$, which we write as $\displaystyle \lim_{k \rightarrow \infty} (1 - x^k) = 1 \qquad \text{as formal series}.$ As for functional series, since they are functions on the same open set ${U}$, we can use pointwise convergence or the stronger uniform convergence; we’ll say explicitly which one we’re doing. Example 10 (Limits don’t work at all) In what follows, ${F_k \simeq G_k}$ for every ${k}$. • Here is an example showing that if ${\lim_k F_k = F}$, the functions ${G_k}$ may not converge even pointwise. Indeed, just take ${F_k = 1 - x^k}$ as before, and let ${U = \{ z : |z| < 2 \}}$. • Here is an example showing that even if ${G_k \rightarrow G}$ uniformly, ${\lim_k F_k}$ may not exist. Take ${G_k = 1 - 1/k}$ as constant functions. Then ${G_k \rightarrow 1}$, but ${\lim_k F_k}$ doesn’t exist because the constant term never stabilizes (in the combinatorial sense). • The following example from this math.SE answer by Robert Israel shows that it’s possible that ${F = \lim_k F_k}$ exists, and ${G_k \rightarrow G}$ pointwise, and still ${F \not\simeq G}$. Let ${U}$ be the open unit disk, and set \displaystyle \begin{aligned} A_k &= \{z = r e^{i\theta} \mid 2/k \le r \le 1, \; 0 \le \theta \le 2\pi - 1/k\} \\ B_k &= \left\{ |z| \le 1/k \right\} \end{aligned} for ${k \ge 1}$. By Runge theorem there’s a polynomial ${p_k(z)}$ such that $\displaystyle |p_k(z) - 1/z^{k}| < 1/k \text{ on } A_k \qquad \text{and} \qquad |p_k(z)| < 1/k \text{ on }B_k.$ Then $\displaystyle G_k(z) = z^{k+1} p_k(z)$ is the desired counterexample (with ${F_k}$ being the sequence of coefficients from ${G}$). Indeed by construction ${\lim_k F_k = 0}$, since ${\left\lVert F_k \right\rVert \le 2^{-k}}$ for each ${k}$. Alas, ${|g_k(z) - z| \le 2/k}$ for ${z \in A_k \cup B_k}$, so ${G_k \rightarrow z}$ converges pointwise to the identity function. 
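To make the first bullet of Example 10 concrete, here is a tiny numerical illustration (my own addition, not from the original post; the sample point ${z = 1.5}$ is just an arbitrary choice with ${1 < |z| < 2}$):

    # Formal metric: d(F1, F2) = 2^(-n), where n is the first index at which
    # the coefficient sequences differ (and d = 0 if they agree everywhere).
    def formal_distance(F1, F2):
        length = max(len(F1), len(F2))
        for n in range(length + 1):
            a = F1[n] if n < len(F1) else 0
            b = F2[n] if n < len(F2) else 0
            if a != b:
                return 2.0 ** (-n)
        return 0.0

    one = [1]                                    # the formal series 1
    for k in (1, 5, 10, 20):
        F_k = [1] + [0] * (k - 1) + [-1]         # coefficients of 1 - x^k
        print(k, formal_distance(F_k, one))      # 2^(-k) -> 0, so F_k -> 1 formally

    # But the functions G_k(z) = 1 - z^k on U = {|z| < 2} do not converge
    # pointwise: at z = 1.5, for instance, the values blow up.
    for k in (1, 5, 10, 20):
        print(k, 1 - 1.5 ** k)

So the same data converges in the formal metric while failing to converge pointwise on ${U}$.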
To be fair, we do have the following saving grace:

Theorem 11 (Uniform convergence and existence of both limits is sufficient) Suppose that ${G_k \rightarrow G}$ converges uniformly. Then if ${F_k \simeq G_k}$ for every ${k}$, and ${\lim_k F_k = F}$, then ${F \simeq G}$.

Proof: Here is a proof, adapted from this math.SE answer by Joey Zhou. WLOG ${G = 0}$, and write ${G_n(z) = \sum_k a^{(n)}_k z^k}$ and ${F = \sum_k a_k x^k}$. It suffices to show that ${a_k = 0}$ for all ${k}$. Choose any ${r > 0}$ such that the closed disk ${\{|z| \le r\}}$ is contained in ${U}$. By Cauchy’s integral formula, we have \displaystyle \begin{aligned} \left|a^{(n)}_k\right| &= \left|\frac{1}{2\pi i} \int\limits_{|z|=r}{\frac{G_n(z)}{z^{k+1}}\text{ d}z}\right| \\ & \le\frac{1}{2\pi}(2\pi r)\frac{1}{r^{k+1}}\max\limits_{|z|=r}{|G_n(z)|} \xrightarrow{n\rightarrow\infty} 0 \end{aligned} since ${G_n}$ converges uniformly to ${0}$ on ${U}$. Hence ${\lim_n a^{(n)}_k = 0}$ for each ${k}$. On the other hand, ${\lim_n F_n = F}$ means that for each fixed ${k}$ the coefficient ${a^{(n)}_k}$ equals ${a_k}$ for all sufficiently large ${n}$. Therefore ${a_k = 0}$, as desired. $\Box$

The take-away from this section is that limits are relatively poorly behaved.

## 5. Infinite sums and products

Naturally, infinite sums and products are defined by taking limits of partial sums and partial products. The following example (from math.SE again) shows the nuances of this behavior.

Example 12 (On ${e^{1+x}}$) The expression $\displaystyle \sum_{n=0}^\infty \frac{(1+x)^n}{n!} = \lim_{N \rightarrow \infty} \sum_{n=0}^N \frac{(1+x)^n}{n!}$ does not make sense as a formal series: we observe that for every ${N}$ the constant term of the partial sum changes. But this does converge (uniformly, even) to a functional series on ${U = \mathbb C}$, namely to ${e^{1+x}}$.

Exercise 13 Let ${(F_k)_{k \ge 1}}$ be formal series.
• Show that an infinite sum ${\sum_{k=1}^\infty F_k(x)}$ converges as a formal series exactly when ${\lim_k \left\lVert F_k \right\rVert = 0}$.
• Assume for convenience ${F_k(0) = 1}$ for each ${k}$. Show that an infinite product ${\prod_{k=1}^{\infty} F_k}$ converges as a formal series exactly when ${\lim_k \left\lVert F_k-1 \right\rVert = 0}$.

Now the upshot is that one example of a convergent formal sum is the expression ${\lim_{N} \sum_{n=0}^N a_nx^n}$ itself! This means we can use standard “radius of convergence” arguments to transfer a formal series into a functional one.

Theorem 14 (Constructing ${G}$ from ${F}$) Let ${F = \sum a_nx^n}$ be a formal series and let $\displaystyle r = \frac{1}{\limsup_n \sqrt[n]{|a_n|}}.$ If ${r > 0}$ then there exists a functional series ${G}$ on ${U = \{ |z| < r \}}$ such that ${F \simeq G}$.

Proof: Let ${F_k}$ and ${G_k}$ be the corresponding partial sums of ${a_0x^0}$ to ${a_kx^k}$. Then by the Cauchy-Hadamard theorem, we have ${G_k \rightarrow G}$ uniformly on (compact subsets of) ${U}$. Also, ${\lim_k F_k = F}$ by construction. $\Box$

This works less well with products: for example we have $\displaystyle 1 \equiv (1-x) \prod_{j \ge 0} (1+x^{2^j})$ as formal series, but we can’t “plug in ${x=1}$”.

## 6. Finishing the original problem

We finally return to the original problem: we wish to show that the equality $\displaystyle P(x) = \prod_{j=1}^\infty (1 - x^{s_j})$ cannot hold as formal series. We know that tacitly, this just means $\displaystyle \lim_{N \rightarrow \infty} \prod_{j=1}^N\left( 1 - x^{s_j} \right) = P(x)$ as formal series.

Here is a solution obtained by only considering coefficients, presented by Qiaochu Yuan from this MathOverflow question.
Both sides have constant coefficient ${1}$, so we may invert them; thus it suffices to show we cannot have $\displaystyle \frac{1}{P(x)} = \frac{1}{\prod_{j=1}^{\infty} (1 - x^{s_j})}$ as formal power series. The coefficients on the LHS have asymptotic growth a polynomial times an exponential. On the other hand, the coefficients of the RHS can be shown to have growth both strictly larger than any polynomial (by truncating the product) and strictly smaller than any exponential (by comparing to the growth rate in the case where ${s_j = j}$, which gives the partition function ${p(n)}$ mentioned before). So the two rates of growth can’t match.

# New algebra handouts on my website

For olympiad students: I have now published some new algebra handouts. They are:

• Introduction to Functional Equations, which covers the basic techniques and theory for FE’s typically appearing on olympiads like USA(J)MO.
• Monsters, an advanced handout which covers functional equations that have pathological solutions. It covers in detail the solutions to the Cauchy functional equation.
• Summation, which is a compilation of various types of olympiad-style sums like generating functions and multiplicative number theory.
• English, notes on proof-writing that I used at the 2016 MOP (Mathematical Olympiad Summer Program).

You can download all these (and other handouts) from my MIT website. Enjoy!

# Approximating E3-LIN is NP-Hard

This lecture, which I gave for my 18.434 seminar, focuses on the MAX-E3LIN problem. We prove that approximating it is NP-hard by a reduction from LABEL-COVER.

## 1. Introducing MAX-E3LIN

In the MAX-E3LIN problem, our input is a series of linear equations ${\pmod 2}$ in ${n}$ binary variables, each with three terms. Equivalently, one can think of this as ${\pm 1}$ variables and ternary products. The objective is to maximize the fraction of satisfied equations.

Example 1 (Example of MAX-E3LIN instance) \displaystyle \begin{aligned} x_1 + x_3 + x_4 &\equiv 1 \pmod 2 \\ x_1 + x_2 + x_4 &\equiv 0 \pmod 2 \\ x_1 + x_2 + x_5 &\equiv 1 \pmod 2 \\ x_1 + x_3 + x_5 &\equiv 1 \pmod 2 \end{aligned} \displaystyle \begin{aligned} x_1 x_3 x_4 &= -1 \\ x_1 x_2 x_4 &= +1 \\ x_1 x_2 x_5 &= -1 \\ x_1 x_3 x_5 &= -1 \end{aligned} A diligent reader can check that we may obtain ${\frac34}$ but not ${1}$.

Remark 2 We immediately notice that
• If there’s a solution with value ${1}$, we can find it easily with ${\mathbb F_2}$ linear algebra.
• It is always possible to get at least ${\frac{1}{2}}$ by selecting all-zero or all-one.

The theorem we will prove today is that these “obvious” observations are essentially the best ones possible! Our main result is that improving the above constants to 51% and 99%, say, is NP-hard.

Theorem 3 (Hardness of MAX-E3LIN) The ${\frac{1}{2}+\varepsilon}$ vs. ${1-\delta}$ decision problem for MAX-E3LIN is NP-hard.

This means it is NP-hard to decide whether a MAX-E3LIN instance has value ${\le \frac{1}{2}+\varepsilon}$ or ${\ge 1-\delta}$ (given it is one or the other). A direct corollary of this is that approximating MAX-SAT is also NP-hard.

Corollary 4 The ${\frac78+\varepsilon}$ vs. ${1-\delta}$ decision problem for MAX-SAT is NP-hard.

Remark 5 The constant ${\frac78}$ is optimal in light of a random assignment. In fact, one can replace ${1-\delta}$ with ${1}$, but we don’t do so here.

Proof: Given an equation ${a+b+c=1}$ in MAX-E3LIN, we consider four formulas ${a \lor \neg b \lor \neg c}$, ${\neg a \lor b \lor \neg c}$, ${\neg a \lor \neg b \lor c}$, ${a \lor b \lor c}$.
Either three or four of them are satisfied, with four occurring exactly when ${a+b+c=1}$. One does a similar construction for ${a+b+c=0}$. $\Box$

The hardness of MAX-E3LIN is relevant to the PCP theorem: using MAX-E3LIN gadgets, Håstad was able to prove a very strong version of the PCP theorem, in which the verifier reads just three bits of a proof!

Theorem 6 (Håstad) Let ${\varepsilon, \delta > 0}$. We have $\displaystyle \mathbf{NP} \subseteq \mathbf{PCP}_{\frac{1}{2}+\varepsilon, 1-\delta}(3, O(\log n)).$

In other words, any ${L \in \mathbf{NP}}$ has a (non-adaptive) verifier with the following properties.
• The verifier uses ${O(\log n)}$ random bits, and queries just three (!) bits.
• The acceptance condition is either ${a+b+c=1}$ or ${a+b+c=0}$.
• If ${x \in L}$, then there is a proof ${\Pi}$ which is accepted with probability at least ${1-\delta}$.
• If ${x \notin L}$, then every proof is accepted with probability at most ${\frac{1}{2} + \varepsilon}$.

## 2. Label Cover

We will prove our main result by reducing from LABEL-COVER. Recall LABEL-COVER is played as follows: we have a bipartite graph ${G}$ on vertex set ${U \cup V}$, a set of keys ${K}$ for vertices of ${U}$ and a set of labels ${L}$ for ${V}$. For every edge ${e = \{u,v\}}$ there is a function ${\pi_e : L \rightarrow K}$ specifying a key ${k = \pi_e(\ell) \in K}$ for every label ${\ell \in L}$. The goal is to label the graph ${G}$ while maximizing the number of edges ${e}$ with compatible key-label pairs.

Approximating LABEL-COVER is NP-hard:

Theorem 7 (Hardness of LABEL-COVER) The ${\eta}$ vs. ${1}$ decision problem for LABEL-COVER is NP-hard for every ${\eta > 0}$, provided ${|K|}$ and ${|L|}$ are sufficiently large in terms of ${\eta}$.

So for any ${\eta > 0}$, it is NP-hard to decide whether one can satisfy all edges or fewer than an ${\eta}$ fraction of them.

## 3. Setup

We are going to make a reduction of the following shape. In words this means that
• “Completeness”: If the LABEL-COVER instance is completely satisfiable, then we get a solution of value ${\ge 1 - \delta}$ in the resulting MAX-E3LIN.
• “Soundness”: If the LABEL-COVER instance has value ${\le \eta}$, then we get a solution of value ${\le \frac{1}{2} + \varepsilon}$ in the resulting MAX-E3LIN.

Thus given an oracle for MAX-E3LIN decision, we can obtain ${\eta}$ vs. ${1}$ decision for LABEL-COVER, which we know is hard. The setup for this is quite involved, using a huge number of variables. Just to agree on some conventions:

Definition 8 (“Long Code”) A ${K}$-indexed binary string ${x = (x_k)_k}$ is a ${\pm 1}$ sequence indexed by ${K}$. We can think of it as an element of ${\{\pm 1\}^K}$. An ${L}$-indexed binary string ${y = (y_\ell)_\ell}$ is defined similarly.

Now we initialize ${|U| \cdot 2^{|K|} + |V| \cdot 2^{|L|}}$ variables:
• At every vertex ${u \in U}$, we will create ${2^{|K|}}$ binary variables, one for every ${K}$-indexed binary string. It is better to collect these variables into a function $\displaystyle f_u : \{\pm1\}^K \rightarrow \{\pm1\}.$
• Similarly, at every vertex ${v \in V}$, we will create ${2^{|L|}}$ binary variables, one for every ${L}$-indexed binary string, and collect these into a function $\displaystyle g_v : \{\pm1\}^L \rightarrow \{\pm1\}.$

Next we generate the equations. Here’s the motivation: we want to do this in such a way that given a satisfying labelling for LABEL-COVER, nearly all the MAX-E3LIN equations can be satisfied. One idea is as follows: for every edge ${e}$, letting ${\pi = \pi_e}$,
• Take a ${K}$-indexed binary string ${x = (x_k)_k}$ at random.
• Take an ${L}$-indexed binary string ${y = (y_\ell)_\ell}$ at random.
• Define the ${L}$-indexed binary string ${z = (z_\ell)_\ell}$ by ${z = \left( x_{\pi(\ell)} y_\ell \right)_\ell}$.
• Write down the equation ${f_u(x) g_v(y) g_v(z) = +1}$ for the MAX-E3LIN instance.

Thus, assuming we had a valid labelling of the graph, we could let ${f_u}$ and ${g_v}$ be the dictator functions for the chosen key and label. In that case, ${f_u(x) = x_{\pi(\ell)}}$, ${g_v(y) = y_\ell}$, and ${g_v(z) = x_{\pi(\ell)} y_\ell}$, so the product is always ${+1}$.

Unfortunately, this has two fatal flaws:
1. This means a value-${1}$ instance of LABEL-COVER gives a value-${1}$ instance of MAX-E3LIN; but value-${1}$ instances of MAX-E3LIN are easy to detect (Remark 2), so we need completeness ${1-\delta}$ rather than ${1}$ to have a hope of working.
2. Right now we could also just set all variables to be ${+1}$, satisfying every equation without using the LABEL-COVER structure at all.

We fix these as follows, by using the following equations.

Definition 8 (Equations of reduction) For every edge ${e}$, with ${\pi = \pi_e}$, we alter the construction and say
• Let ${x = (x_k)_k}$ and ${y = (y_\ell)_\ell}$ be random as before.
• Let ${n = (n_\ell)_\ell}$ be a random ${L}$-indexed binary string, drawn from a ${\delta}$-biased distribution (${-1}$ with probability ${\delta}$). And now define ${z = (z_\ell)_\ell}$ by $\displaystyle z_\ell = x_{\pi(\ell)} y_\ell n_\ell .$ The ${n_\ell}$ represent “noise” bits, which resolve the first problem by corrupting each bit of ${z}$ independently with probability ${\delta}$.
• Write down one of the following two equations with ${\frac{1}{2}}$ probability each: \displaystyle \begin{aligned} f_u(x) g_v(y) g_v(z) &= +1 \\ f_u(x) g_v(y) g_v(-z) &= -1. \end{aligned} This resolves the second issue.

This gives a set of ${O(|E|)}$ equations. I claim this reduction works. So we need to prove the “completeness” and “soundness” claims above.

## 4. Proof of Completeness

Given a labeling of ${G}$ with value ${1}$, as described we simply let ${f_u}$ and ${g_v}$ be dictator functions corresponding to this valid labelling. Then as we’ve seen, we will satisfy a ${1 - \delta}$ fraction of the equations in expectation.

## 5. A Fourier Computation

Before proving soundness, we will first need to explicitly compute the probability an equation above is satisfied. Remember we generated an equation for ${e}$ based on random strings ${x}$, ${y}$, ${n}$. For ${T \subseteq L}$, we define $\displaystyle \pi^{\text{odd}}_e(T) = \left\{ k \in K \mid \left\lvert \pi_e^{-1}(k) \cap T \right\rvert \text{ is odd} \right\}.$ Thus ${\pi^{\text{odd}}_e}$ maps subsets of ${L}$ to subsets of ${K}$.

Remark 9 Note that ${|\pi^{\text{odd}}(T)| \le |T|}$ and that ${\pi^{\text{odd}}(T) \neq \varnothing}$ if ${|T|}$ is odd.

Lemma 10 (Edge Probability) The probability that an equation generated for ${e = \{u,v\}}$ is true is $\displaystyle \frac{1}{2} + \frac{1}{2} \sum_{\substack{T \subseteq L \\ |T| \text{ odd}}} (1-2\delta)^{|T|} \widehat g_v(T)^2 \widehat f_u(\pi^{\text{odd}}_e(T)).$

Proof: Omitted for now… $\Box$

## 6. Proof of Soundness

We will go in the reverse direction and show (constructively) that if the resulting MAX-E3LIN instance has a solution with value ${\ge\frac{1}{2}+2\varepsilon}$, then we can reconstruct a solution to LABEL-COVER with value ${\ge \eta}$. (The use of ${2\varepsilon}$ here will be clear in a moment.) This process is called “decoding”. The idea is as follows: if ${S}$ is a small set such that ${\widehat f_u(S)}$ is large, then we can pick a key from ${S}$ at random for ${f_u}$; compare this with the dictator functions where ${\widehat f_u(S) = 1}$ and ${|S| = 1}$. We want to do something similar with ${T}$. Here are the concrete details.
Let ${\Lambda = \frac{\log(1/\varepsilon)}{2\delta}}$ and ${\eta = \frac{\varepsilon^3}{\Lambda^2}}$ be constants (these exact values will arise later).

Definition 11 We say that a nonempty set ${S \subseteq K}$ of keys is heavy for ${u}$ if $\displaystyle \left\lvert S \right\rvert \le \Lambda \qquad\text{and}\qquad \left\lvert \widehat{f_u}(S) \right\rvert \ge \varepsilon.$ Note that there are at most ${\varepsilon^{-2}}$ heavy sets by Parseval.

Definition 12 We say that a nonempty set ${T \subseteq L}$ of labels is ${e}$-excellent for ${v}$ if $\displaystyle \left\lvert T \right\rvert \le \Lambda \qquad\text{and}\qquad S = \pi^{\text{odd}}_e(T) \text{ is heavy.}$ In particular ${S \neq \varnothing}$, so at least one compatible key-label pair is in ${S \times T}$.

Notice that, unlike the case with ${S}$, the criterion for ${T}$ being “good” actually depends on the edge ${e}$ in question! This makes it easier to select keys than to select labels. In order to pick labels, we will have to choose from a ${\widehat g_v^2}$ distribution.

Lemma 13 (At least ${\varepsilon}$ of ${T}$ are excellent) For any edge ${e = \{u,v\}}$, at least ${\varepsilon}$ of the possible ${T}$ according to the distribution ${\widehat g_v^2}$ are ${e}$-excellent.

Proof: Applying an averaging argument to the inequality $\displaystyle \sum_{\substack{T \subseteq L \\ |T| \text{ odd}}} (1-2\delta)^{|T|} \widehat g_v(T)^2 \left\lvert \widehat f_u(\pi^{\text{odd}}(T)) \right\rvert \ge 2\varepsilon$ shows there is at least ${\varepsilon}$ chance that ${|T|}$ is odd and satisfies $\displaystyle (1-2\delta)^{|T|} \left\lvert \widehat f_u(S) \right\rvert \ge \varepsilon$ where ${S = \pi^{\text{odd}}_e(T)}$. In particular, ${(1-2\delta)^{|T|} \ge \varepsilon \iff |T| \le \Lambda}$, and ${\left\lvert \widehat f_u(S) \right\rvert \ge \varepsilon}$. Finally, by Remark 9, ${S}$ is nonempty with ${|S| \le |T| \le \Lambda}$, so ${S}$ is heavy. $\Box$

Now, use the following algorithm.
• For every vertex ${u \in U}$, take the union of all heavy sets, say $\displaystyle \mathcal H = \bigcup_{S \text{ heavy}} S.$ Pick a random key from ${\mathcal H}$. Note that ${|\mathcal H| \le \Lambda\varepsilon^{-2}}$, since there are at most ${\varepsilon^{-2}}$ heavy sets (by Parseval) and each has at most ${\Lambda}$ elements.
• For every vertex ${v \in V}$, select a random set ${T}$ according to the distribution ${\widehat g_v(T)^2}$, and select a random element from ${T}$.

I claim that this works. Fix an edge ${e}$. There is at least an ${\varepsilon}$ chance that ${T}$ is ${e}$-excellent. If it is, then there is at least one compatible pair in ${\mathcal H \times T}$. Hence we conclude the probability of success is at least $\displaystyle \varepsilon \cdot \frac{1}{\Lambda \varepsilon^{-2}} \cdot \frac{1}{\Lambda} = \frac{\varepsilon^3}{\Lambda^2} = \eta.$

(Addendum: it’s pointed out to me this isn’t quite right; the hypothesis gives that the overall fraction of satisfied equations is ${\ge \frac{1}{2}+2\varepsilon}$, but this doesn’t imply the same bound for every individual edge. Thus one likely needs to do another averaging argument.)

# Against the “Research vs. Olympiads” Mantra

There’s a Mantra that you often hear in math contest discussions: “math olympiads are very different from math research”. (For known instances, see O’Neil, Tao, and more. More neutral stances: Monks, Xu.) It’s true. And I wish people would stop saying it.

Every time I’ve heard the Mantra, it set off a little red siren in my head: something felt wrong. And I could never figure out quite why until last July. There was some (silly) forum discussion about how Allen Liu had done extraordinarily well on math contests over the past year.
Then someone says: A: Darn, what math problem can he not do?! B: I’ll go out on a limb and say that the answer to this is “most of the problems worth asking.” We’ll see where this stands in two years, at which point the answer will almost certainly change, but research $\neq$ Olympiads. Then it hit me. ## Ping-pong vs. Tennis Let’s try the following thought experiment. Consider a world-class ping-pong player, call her Sarah. She has a fan-base talking about her pr0 ping-pong skills. Then someone comes along and says: Well, table tennis isn’t the same as tennis. To which I and everyone else reasonable would say, “uh, so what?”. It’s true, but totally irrelevant; ping-pong and tennis are just not related. Maybe Sarah will be better than average at tennis, but there’s no reason to expect her to be world-class in that too. And yet we say exactly the same thing for olympiads versus research. Someone wins the IMO, out pops the Mantra. Even if the Mantra is true when taken literally, it’s implicitly sending the message there’s something wrong with being good at contests and not good at research. So now I ask: just what is wrong with that? To answer this question, I first need to answer: “what is math?”. There’s been a trick played with this debate, and you can’t see it unless you taboo the word “math”. The word “math” can refer to a bunch of things, like: • Training for contest problems like USAMO/IMO, or • Working on open problems and conjectures (“research”). So here’s the trick. The research community managed to claim the name “math”, leaving only “math contests” for the olympiad community. Now the sentence “Math contests should be relevant to math” seems totally innocuous. But taboo the word “math”, and you get “Olympiads should be relevant to research” and then you notice something’s wrong. In other words, since “math” is a substring of “math contests”, it suddenly seems like the olympiads are subordinate to research. All because of an accident in naming. Since when? Everyone agrees that olympiads and research are different things, but it does not then follow that “olympiads are useless”. Even if ping-pong is called “table tennis”, that doesn’t mean the top ping-pong players are somehow inferior to top tennis players. (And the scary thing is that in a world without the name “ping-pong”, I can imagine some people actually thinking so.) I think for many students, olympiads do a lot of good, independent of any value to future math research. Math olympiads give high school students something interesting to work on, and even the training process for a contest such as the IMO carries valuable life lessons: it teaches you how to work hard even in the face of possible failure, and what it’s like to be competitive at an international level (i.e. what it’s like to become really good at something after years of hard work). The peer group that math contests give is also wonderful, and quite similar to the kind of people you’d meet at a top-tier university (and in some cases, they’re more or less the same people). And the problem solving ability you gain from math contests is indisputably helpful elsewhere in life. Consequently, I’m well on record as saying the biggest benefits of math contests have nothing to do with math. There are also more mundane (but valid) reasons (they help get students out of the classroom, and other standard blurbs about STEM and so on). And as a matter of taste I also think contest problems are interesting and beautiful in their own right.
You could even try to make more direct comparisons (for example, I’d guess the average arXiv paper in algebraic geometry gets less attention than the average IMO geometry problem), but that’s a point for another blog post entirely. ## The Right and Virtuous Path Which now leads me to what I think is a culture issue. MOP alumni prior to maybe 2010 or so were classified into two groups. They would either go on to math research, which was somehow seen as the “right and virtuous path“, or they would defect to software/finance/applied math/etc. Somehow there is always this implicit, unspoken message that the smart MOPpers do math research and the dumb MOPpers drop out. I’ll tell you how I realized why I didn’t like the Mantra: it’s because the only time I hear the Mantra is when someone is belittling olympiad medalists. The Mantra says that the USA winning the IMO is no big deal. The Mantra says Allen Liu isn’t part of the “smart club” until he succeeds in research too. The Mantra says that the countless time and energy put into running each year’s MOP are a waste of time. The Mantra says that the students who eventually drop out of math research are “not actually good at math” and “just good at taking tests”. The Mantra even tells outsiders that they, too, can be great researchers, because olympiads are useless anyways. The Mantra is math research’s recruiting slogan. And I think this is harmful. The purpose of olympiads was never to produce more math researchers. If it’s really the case that olympiads and research are totally different, then we should expect relatively few olympiad students to go into research; yet in practice, a lot of them do. I think one could make a case that a lot of the past olympiad students are going into math research without realizing that they’re getting into something totally unrelated, just because the sign at the door said “math”. One could also make a case that it’s very harmful for those that don’t do research, or try research and then decide they don’t like it: suddenly these students don’t think they’re “good at math” any more, they’re not smart enough be a mathematician, etc. But we need this kind of problem-solving skill and talent too much for it to all be spent on computing R(6,6). Richard Rusczyk’s take from Math Prize for Girls 2014 is: When people ask me, am I disappointed when my students don’t go off and be mathematicians, my answer is I’d be very disappointed if they all did. We need people who can think about these complex problems and solve really hard problems they haven’t seen before everywhere. It’s not just in math, it’s not just in the sciences, it’s not just in medicine — I mean, what we’d give to get some of them in Congress! Academia is a fine career, but there’s tons of other options out there: the research community may denounce those who switch out as failures, but I’m sure society will take them with open arms. To close, I really like this (sarcastic) comment from Steven Karp (near bottom): Contest math is inaccessible to over 90% of people as it is, and then we’re supposed to tell those that get it that even that isn’t real math? While we’re at it, let’s tell Vi Hart to stop making videos because they don’t accurately represent math research. Thanks first of all for the many long and thoughtful comments from everyone (both here, on Facebook, in private, and so on). It’s given me a lot to think about. Here’s my responses to some of the points that were raised, which is necessarily incomplete because of the volume of discussion. 1. 
To start off, it was suggested I should explicitly clarify: I do not mean to imply that people who didn’t do well on contests cannot do well in math research. So let me say that now. 2. My favorite comment that I got was that in fact this whole post pattern matches with bravery debates. On one hand you have lots of olympiad students who actually FEEL BAD about winning medals because they “weren’t doing real math”. But on the other hand there are students whose parents tell them to not pursue math as a major or career because of low contest scores. These students (and their parents) would benefit a lot from the Mantra; so I concede that there are indeed good use cases of the Mantra (such as those that Anonymous Chicken, betaveros describe below) and in particular the Mantra is not intrinsically bad. Which of these use is the “common use” probably depends on which tribes you are part of (guess which one I see more?). It’s interesting in that in this case, the two sides actually agree on the basic fact (that contests and research are not so correlated). 3. Some people point out that research is a career while contests aren’t. I am not convinced by this; I don’t think “is a career” is a good metric for measuring value to society, and can think of several examples of actual jobs that I think really should not exist (not saying any names). In addition, I think that if the general public understood what mathematicians actually do for a career, they just might be a little less willing to pay us. I think there’s an interesting discussion about whether contests / research are “valuable” or not, but I don’t think the answer is one-sided; this would warrant a whole different debate (and would derail the entire post if I tried to address it). 4. Some people point out that training for olympiads yields diminishing returns (e.g. learning Muirhead and Schur is probably not useful for anything else). I guess this is true, but isn’t it true of almost anything? Maybe the point is supposed to be “olympiads aren’t everything”, which is agreeable (see below). 5. The other favorite comment I got was from Another Chicken, who points out below that the olympiad tribe itself is elitist: they tend to wall themselves off from outsiders (I certainly do this), and undervalue anything that isn’t hard technical problems. I concede these are real problems with the olympiad community. Again, this could be a whole different blog post. But I think this comment missed the point of this post. It is probably fine (albeit patronizing) to encourage olympiad students to expand; but I have a big problem with framing it as “spend time on not-contests because research“. That’s the real issue with the Mantra: it is often used as a recruitment slogan, telling students that research is the next true test after the IMO has been conquered. Changing the Golden Metric from olympiads to research seems to just make the world more egotistic than it already is. # Vinogradov’s Three-Prime Theorem (with Sammy Luo and Ryan Alweiss) This was my final paper for 18.099, seminar in discrete analysis, jointly with Sammy Luo and Ryan Alweiss. We prove that every sufficiently large odd integer can be written as the sum of three primes, conditioned on a strong form of the prime number theorem. ## 1. Introduction In this paper, we prove the following result: Every sufficiently large odd integer ${N}$ is the sum of three prime numbers. In fact, the following result is also true, called the “weak Goldbach conjecture”. 
Theorem 2 (Weak Goldbach conjecture) Every odd integer ${N \ge 7}$ is the sum of three prime numbers. The proof of Vinogradov’s theorem becomes significantly simpler if one assumes the generalized Riemann hypothesis; this allows one to use a strong form of the prime number theorem (Theorem 9). This conditional proof was given by Hardy and Littlewood in 1923. In 1997, Deshouillers, Effinger, te Riele and Zinoviev showed that the generalized Riemann hypothesis in fact also implies the weak Goldbach conjecture, by improving the bound to ${10^{20}}$ and then exhausting the remaining cases via a computer search. As for unconditional proofs, Vinogradov was able to eliminate the dependency on the generalized Riemann hypothesis in 1937, which is why Theorem 1 bears his name. However, Vinogradov’s bound used the ineffective Siegel-Walfisz theorem; his student K. Borozdin showed that ${3^{3^{15}}}$ is large enough. Over the years the bound was improved, until recently in 2013 when Harald Helfgott claimed the first unconditional proof of Theorem 2, see here. In this exposition we follow Hardy and Littlewood’s approach, i.e. we prove Theorem 1 assuming the generalized Riemann hypothesis, following the exposition of Rhee. An exposition of the unconditional proof by Vinogradov is given by Rouse. ## 2. Synopsis We are going to prove that $\displaystyle \sum_{a+b+c = N} \Lambda(a) \Lambda(b) \Lambda(c) \asymp \frac12 N^2 \mathfrak G(N) \ \ \ \ \ (1)$ where $\displaystyle \mathfrak G(N) \overset{\text{def}}{=} \prod_{p \mid N} \left( 1 - \frac{1}{(p-1)^2} \right) \prod_{p \nmid N} \left( 1 + \frac{1}{(p-1)^3} \right)$ and ${\Lambda}$ is the von Mangoldt function defined as usual. Then so long as ${2 \nmid N}$, the quantity ${\mathfrak G(N)}$ will be bounded away from zero; thus (1) will imply that in fact there are many ways to write ${N}$ as the sum of three primes. The sum (1) is estimated using Fourier analysis. Let us define the following. Definition 3 Let ${\mathbb T = \mathbb R/\mathbb Z}$ denote the circle group, and let ${e : \mathbb T \rightarrow \mathbb C}$ be the exponential function ${\theta \mapsto \exp(2\pi i \theta)}$. For ${\alpha\in\mathbb T}$, ${\|\alpha\|}$ denotes the minimal distance from ${\alpha}$ to an integer. Note that ${|e(\theta)-1|=\Theta(\|\theta\|)}$. Definition 4 For ${\alpha \in \mathbb T}$ and ${x > 0}$ we define $\displaystyle S(x, \alpha) = \sum_{n \le x} \Lambda(n) e(n\alpha).$ Then we can rewrite (1) using ${S}$ as a “Fourier coefficient”: Proposition 5 We have $\displaystyle \sum_{a+b+c=N} \Lambda(a)\Lambda(b)\Lambda(c) = \int_{\alpha \in \mathbb T} S(N, \alpha)^3 e(-N\alpha) \; d\alpha. \ \ \ \ \ (2)$ Proof: We have $\displaystyle S(N,\alpha)^3=\sum_{a,b,c\leq N}\Lambda(a)\Lambda(b)\Lambda(c)e((a+b+c)\alpha),$ so \displaystyle \begin{aligned} \int_{\alpha \in \mathbb T} S(N, \alpha)^3 e(-N\alpha) \; d\alpha &= \int_{\alpha \in \mathbb T} \sum_{a,b,c\leq N}\Lambda(a)\Lambda(b)\Lambda(c)e((a+b+c)\alpha) e(-N\alpha) \; d\alpha \\ &= \sum_{a,b,c\leq N}\Lambda(a)\Lambda(b)\Lambda(c)\int_{\alpha \in \mathbb T}e((a+b+c-N)\alpha) \; d\alpha \\ &= \sum_{a,b,c\leq N}\Lambda(a)\Lambda(b)\Lambda(c)I(a+b+c=N) \\ &= \sum_{a+b+c=N}\Lambda(a)\Lambda(b)\Lambda(c), \end{aligned} as claimed. $\Box$ In order to estimate the integral in Proposition 5, we divide ${\mathbb T}$ into the so-called “major” and “minor” arcs. Roughly, • The “major arcs” are subintervals of ${\mathbb T}$ centered at a rational number with small denominator. • The “minor arcs” are the remaining intervals. These will be made more precise later.
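Before moving on, Proposition 5 itself is easy to sanity-check numerically. The following Python sketch (my own, not part of the paper; the choice ${N = 51}$ and the grid size are arbitrary) computes the left-hand side of (2) by brute force and the right-hand side as a Riemann sum, which is exact here up to floating-point error because the integrand is a trigonometric polynomial of bounded degree.

```python
import cmath
from math import log

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of a prime p, else 0."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return log(p) if n == 1 else 0.0

def S(N, alpha):
    return sum(von_mangoldt(n) * cmath.exp(2j * cmath.pi * n * alpha)
               for n in range(1, N + 1))

N = 51
lhs = sum(von_mangoldt(a) * von_mangoldt(b) * von_mangoldt(N - a - b)
          for a in range(1, N) for b in range(1, N - a))
M = 4 * N   # more sample points than the largest frequency in S(N,alpha)^3 e(-N alpha)
rhs = sum(S(N, j / M) ** 3 * cmath.exp(-2j * cmath.pi * N * j / M) for j in range(M)) / M
print(lhs, rhs.real)   # the two values agree
```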
This general method is called the Hardy-Littlewood circle method, because of the integral over the circle group ${\mathbb T}$. The rest of the paper is structured as follows. In Section 3, we define Dirichlet characters and other number-theoretic objects, and state some estimates for the partial sums of these objects conditioned on the Riemann hypothesis. These bounds are then used in Section 4 to provide corresponding estimates on ${S(x, \alpha)}$. In Section 5 we then define the major and minor arcs rigorously and use the previous estimates to give an upper bound for the integral over both regions. Finally, we complete the proof in Section 6. ## 3. Prime number theorem type bounds In this section, we collect the number-theoretic results that we will need. It is in this section only that we will require the generalized Riemann hypothesis. As a reminder, the notation ${f(x)\ll g(x)}$, where ${f}$ is a complex function and ${g}$ a nonnegative real one, means ${f(x)=O(g(x))}$, a statement about the magnitude of ${f}$. Likewise, ${f(x)=g(x)+O(h(x))}$ simply means that for some ${C}$, ${|f(x)-g(x)|\leq C|h(x)|}$ for all sufficiently large ${x}$. ### 3.1. Dirichlet characters In what follows, ${q}$ denotes a positive integer. Definition 6 A Dirichlet character modulo ${q}$ is a homomorphism ${\chi : (\mathbb Z/q)^\times \rightarrow \mathbb C^\times}$. It is said to be trivial if ${\chi = 1}$; we denote the trivial character by ${\chi_0}$. By slight abuse of notation, we will also consider ${\chi}$ as a function ${\mathbb Z \rightarrow \mathbb C}$ by setting ${\chi(n) = \chi(n \pmod q)}$ for ${\gcd(n,q) = 1}$ and ${\chi(n) = 0}$ for ${\gcd(n,q) > 1}$. Remark 7 The Dirichlet characters modulo ${q}$ form a group of order ${\phi(q)}$ under multiplication, with inverse given by complex conjugation. Note that ${\chi(m)}$ is a ${\phi(q)}$th root of unity for any ${m \in (\mathbb Z/q)^\times}$ (since ${m^{\phi(q)} = 1}$), thus ${\chi}$ takes values on the unit circle. Experts may recognize that the Dirichlet characters are just the elements of the Pontryagin dual of ${(\mathbb Z/q)^\times}$. In particular, they satisfy the orthogonality relation $\displaystyle \frac{1}{\phi(q)} \sum_{\chi \text{ mod } q} \chi(n) \overline{\chi(a)} = \begin{cases} 1 & n = a \pmod q \\ 0 & \text{otherwise} \end{cases} \ \ \ \ \ (3)$ (for ${\gcd(a,q) = 1}$) and thus form an orthonormal basis for functions ${(\mathbb Z/q)^\times \rightarrow \mathbb C}$. ### 3.2. Prime number theorem for arithmetic progressions Definition 8 The generalized Chebyshev function is defined by $\displaystyle \psi(x, \chi) = \sum_{n \le x} \Lambda(n) \chi(n).$ The Chebyshev function is studied extensively in analytic number theory, as it is the most convenient way to phrase the major results of the subject. For example, the prime number theorem is equivalent to the assertion that $\displaystyle \psi(x, \chi_0) = \sum_{n \le x} \Lambda(n) \sim x$ where ${q = 1}$ (thus ${\chi_0}$ is the constant function ${1}$). Similarly, Dirichlet’s theorem actually asserts that for any ${q \ge 1}$, $\displaystyle \psi(x, \chi) = \begin{cases} x + o_q(x) & \chi = \chi_0 \text{ trivial} \\ o_q(x) & \chi \neq \chi_0 \text{ nontrivial}. \end{cases}$ Unfortunately, the error term in these estimates is quite poor (worse than ${x^{1-\varepsilon}}$ for every ${\varepsilon > 0}$). However, by assuming the Riemann Hypothesis for a certain “${L}$-function” attached to ${\chi}$, we can improve the error terms substantially.
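Before stating that improvement, here is a quick numerical check of the orthogonality relation (3) (my own sketch, not part of the paper). For simplicity it only handles a prime modulus ${q}$, where every character arises from a primitive root.

```python
import cmath

def characters_mod_prime(q):
    """All Dirichlet characters modulo a prime q, as dicts n -> chi(n)."""
    def order(g):
        x, k = g % q, 1
        while x != 1:
            x, k = (x * g) % q, k + 1
        return k
    g = next(g for g in range(2, q) if order(g) == q - 1)   # a primitive root mod q
    dlog, x = {}, 1
    for a in range(q - 1):                                   # discrete logs: g^a -> a
        dlog[x] = a
        x = (x * g) % q
    return [{n: (cmath.exp(2j * cmath.pi * j * dlog[n] / (q - 1)) if n % q else 0)
             for n in range(q)}
            for j in range(q - 1)]

q = 7
chars = characters_mod_prime(q)          # phi(7) = 6 characters
for a in range(1, q):
    for n in range(1, q):
        s = sum(chi[n] * chi[a].conjugate() for chi in chars) / (q - 1)
        assert abs(s - (1.0 if n == a else 0.0)) < 1e-9     # relation (3)
print("orthogonality relation (3) verified for q =", q)
```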
Theorem 9 (Prime number theorem for arithmetic progressions) Let ${\chi}$ be a Dirichlet character modulo ${q}$, and assume the Riemann hypothesis for the ${L}$-function attached to ${\chi}$. 1. If ${\chi}$ is nontrivial, then $\displaystyle \psi(x, \chi) \ll \sqrt{x} (\log qx)^2.$ 2. If ${\chi = \chi_0}$ is trivial, then $\displaystyle \psi(x, \chi_0) = x + O\left( \sqrt x (\log x)^2 + \log q \log x \right).$ Theorem 9 is the strong estimate that we will require when putting good estimates on ${S(x, \alpha)}$, and is the only place in which the generalized Riemann Hypothesis is actually required. ### 3.3. Gauss sums Definition 10 For ${\chi}$ a Dirichlet character modulo ${q}$, the Gauss sum ${\tau(\chi)}$ is defined by $\displaystyle \tau(\chi)=\sum_{a=0}^{q-1}\chi(a)e(a/q).$ We will need the following fact about Gauss sums. Lemma 11 Consider Dirichlet characters modulo ${q}$. Then: 1. We have ${\tau(\chi_0) = \mu(q)}$. 2. For any ${\chi}$ modulo ${q}$, ${\left\lvert \tau(\chi) \right\rvert \le \sqrt q}$. ### 3.4. Dirichlet approximation We finally require Dirichlet approximation theorem in the following form. Theorem 12 (Dirichlet approximation) Let ${\alpha \in \mathbb R}$ be arbitrary, and ${M}$ a fixed integer. Then there exists integers ${a}$ and ${q = q(\alpha)}$, with ${1 \le q \le M}$ and ${\gcd(a,q) = 1}$, satisfying $\displaystyle \left\lvert \alpha - \frac aq \right\rvert \le \frac{1}{qM}.$ ## 4. Bounds on ${S(x, \alpha)}$ In this section, we use our number-theoretic results to bound ${S(x,\alpha)}$. First, we provide a bound for ${S(x,\alpha)}$ if ${\alpha}$ is a rational number with “small” denominator ${q}$. Lemma 13 Let ${\gcd(a,q) = 1}$. Assuming Theorem 9, we have $\displaystyle S(x, a/q) = \frac{\mu(q)}{\phi(q)} x + O\left( \sqrt{qx} (\log qx)^2 \right)$ where ${\mu}$ denotes the Möbius function. Proof: Write the sum as $\displaystyle S(x, a/q) = \sum_{n \le x} \Lambda(n) e(na/q).$ First we claim that the terms ${\gcd(n,q) > 1}$ (and ${\Lambda(n) \neq 0}$) contribute a negligibly small ${\ll \log q \log x}$. To see this, note that • The number ${q}$ has ${\ll \log q}$ distinct prime factors, and • If ${p^e \mid q}$, then ${\Lambda(p) + \dots + \Lambda(p^e) = e\log p = \log(p^e) < \log x}$. So consider only terms with ${\gcd(n,q) = 1}$. To bound the sum, notice that \displaystyle \begin{aligned} e(n \cdot a/q) &= \sum_{b \text{ mod } q} e(b/q) \cdot \mathbf 1(b \equiv an) \\ &= \sum_{b \text{ mod } q} e(b/q) \left( \frac{1}{\phi(q)} \sum_{\chi \text{ mod } q} \chi(b) \overline{\chi(an)} \right) \end{aligned} by the orthogonality relations. Now we swap the order of summation to obtain a Gauss sum: \displaystyle \begin{aligned} e(n \cdot a/q) &= \frac{1}{\phi(q)} \sum_{\chi \text{ mod } q} \overline{\chi(an)} \left( \sum_{b \text{ mod } q} \chi(b) e(b/q) \right) \\ &= \frac{1}{\phi(q)} \sum_{\chi \text{ mod } q} \overline{\chi(an)} \tau(\chi). 
\end{aligned} Thus, we swap the order of summation to obtain that \displaystyle \begin{aligned} S(x, \alpha) &= \sum_{\substack{n \le x \\ \gcd(n,q) = 1}} \Lambda(n) e(n \cdot a/q) \\ &= \frac{1}{\phi(q)} \sum_{\substack{n \le x \\ \gcd(n,q) = 1}} \sum_{\chi \text{ mod } q} \Lambda(n) \overline{\chi(an)} \tau(\chi) \\ &= \frac{1}{\phi(q)} \sum_{\chi \text{ mod } q} \tau(\chi) \sum_{\substack{n \le x \\ \gcd(n,q) = 1}} \Lambda(n) \overline{\chi(an)} \\ &= \frac{1}{\phi(q)} \sum_{\chi \text{ mod } q} \overline{\chi(a)} \tau(\chi) \sum_{\substack{n \le x \\ \gcd(n,q) = 1}} \Lambda(n)\overline{\chi(n)} \\ &= \frac{1}{\phi(q)} \sum_{\chi \text{ mod } q} \overline{\chi(a)} \tau(\chi) \psi(x, \overline\chi) \\ &= \frac{1}{\phi(q)} \left( \tau(\chi_0) \psi(x, \chi_0) + \sum_{1 \neq \chi \text{ mod } q} \overline{\chi(a)} \tau(\chi) \psi(x, \overline\chi) \right). \end{aligned} Now applying both parts of Lemma 11 in conjunction with Theorem 9 gives \displaystyle \begin{aligned} S(x,\alpha) &= \frac{\mu(q)}{\phi(q)} \left( x + O\left( \sqrt x (\log qx)^2 \right) \right) + O\left( \sqrt x (\log x)^2 \right) \\ &= \frac{\mu(q)}{\phi(q)} x + O\left( \sqrt{qx} (\log qx)^2 \right) \end{aligned} as desired. $\Box$ We then provide a bound when ${\alpha}$ is “close to” such an ${a/q}$. Lemma 14 Let ${\gcd(a,q) = 1}$ and ${\beta \in \mathbb T}$. Assuming Theorem 9, we have $\displaystyle S(x, a/q + \beta) = \frac{\mu(q)}{\phi(q)} \left( \sum_{n \le x} e(\beta n) \right) + O\left( (1+\|\beta\|x) \sqrt{qx} (\log qx)^2 \right).$ Proof: For convenience let us assume ${x \in \mathbb Z}$. Let ${\alpha = a/q + \beta}$. Let us denote ${\text{Err}(x, \alpha) = S(x,\alpha) - \frac{\mu(q)}{\phi(q)} x}$, so by Lemma 13 we have ${\text{Err}(x,\alpha) \ll \sqrt{qx}(\log x)^2}$. We have \displaystyle \begin{aligned} S(x, \alpha) &= \sum_{n \le x} \Lambda(n) e(na/q) e(n\beta) \\ &= \sum_{n \le x} e(n\beta) \left( S(n, a/q) - S(n-1, a/q) \right) \\ &= \sum_{n \le x} e(n\beta) \left( \frac{\mu(q)}{\phi(q)} + \text{Err}(n, \alpha) - \text{Err}(n-1, \alpha) \right) \\ &= \frac{\mu(q)}{\phi(q)} \left( \sum_{n \le x} e(n\beta) \right) + \sum_{1 \le m \le x-1} \left( e( (m+1)\beta) - e( m\beta ) \right) \text{Err}(m, \alpha) \\ &\qquad + e(x\beta) \text{Err}(x, \alpha) - e(0) \text{Err}(0, \alpha) \\ &\le \frac{\mu(q)}{\phi(q)} \left( \sum_{n \le x} e(n\beta) \right) + \left( \sum_{1 \le m \le x-1} \|\beta\| \text{Err}(m, \alpha) \right) + \text{Err}(0, \alpha) + \text{Err}(x, \alpha) \\ &\ll \frac{\mu(q)}{\phi(q)} \left( \sum_{n \le x} e(n\beta) \right) + \left( 1+x\left\| \beta \right\| \right) O\left( \sqrt{qx} (\log qx)^2 \right) \end{aligned} as desired. $\Box$ Thus if ${\alpha}$ is close to a fraction with small denominator, the value of ${S(x, \alpha)}$ is bounded above. We can now combine this with the Dirichlet approximation theorem to obtain the following general result. Corollary 15 Suppose ${M = N^{2/3}}$ and suppose ${\left\lvert \alpha - a/q \right\rvert \le \frac{1}{qM}}$ for some ${\gcd(a,q) = 1}$ with ${q \le M}$. Assuming Theorem 9, we have $\displaystyle S(x, \alpha) \ll \frac{x}{\varphi(q)} + x^{\frac56+\varepsilon}$ for any ${\varepsilon > 0}$. Proof: Apply Lemma 14 directly. $\Box$ ## 5. Estimation of the arcs We’ll write $\displaystyle f(\alpha) \overset{\text{def}}{=} S(N,\alpha)=\sum_{n \le N} \Lambda(n)e(n\alpha)$ for brevity in this section. Recall that we wish to bound the right-hand side of (2) in Proposition 5. 
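The dichotomy that drives the rest of the argument is easy to see numerically: ${|f(\alpha)|}$ is of size comparable to ${N}$ near rationals with small denominator (with main term roughly ${\frac{\mu(q)}{\phi(q)} N}$, as in Lemma 13), and much smaller at a generic ${\alpha}$; the next subsection formalizes this as the split into major and minor arcs. Here is a quick Python illustration (my own, not part of the paper; the value of ${N}$ and the sample points are arbitrary).

```python
import cmath
from math import log

def von_mangoldt(n):
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)
    while n % p == 0:
        n //= p
    return log(p) if n == 1 else 0.0

def f(N, alpha):
    """f(alpha) = S(N, alpha)."""
    return sum(von_mangoldt(n) * cmath.exp(2j * cmath.pi * n * alpha)
               for n in range(1, N + 1))

N = 2000
for alpha, label in [(0.0, "a/q = 0/1"), (0.5, "a/q = 1/2"),
                     (1 / 3, "a/q = 1/3"), (2 ** 0.5 - 1, "generic alpha")]:
    print(f"{label:14s} |f| = {abs(f(N, alpha)):7.1f}")
# Roughly N, N, and N/2 at the three rationals (|mu(q)|/phi(q) = 1, 1, 1/2),
# but much smaller at the generic point.
```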
We split ${[0,1]}$ into two sets, which we call the “major arcs” and the “minor arcs.” To do so, we use Dirichlet approximation, as hinted at earlier. In what follows, fix \displaystyle \begin{aligned} M &= N^{2/3} \\ K &= (\log N)^{10}. \end{aligned} ### 5.1. Setting up the arcs Definition 16 For ${q \le K}$ and ${\gcd(a,q) = 1}$, ${1 \le a \le q}$, we define $\displaystyle \mathfrak M(a,q) = \left\{ \alpha \in \mathbb T \mid \left\lvert \alpha - \frac aq \right\rvert \le \frac{1}{qM} \right\}.$ These will be the major arcs. The union of all major arcs is denoted by ${\mathfrak M}$. The complement is denoted by ${\mathfrak m}$. Equivalently, for any ${\alpha}$, consider ${q = q(\alpha) \le M}$ as in Theorem 12. Then ${\alpha \in \mathfrak M}$ if ${q \le K}$ and ${\alpha \in \mathfrak m}$ otherwise. Proposition 17 ${\mathfrak M}$ is composed of finitely many disjoint intervals ${\mathfrak M(a,q)}$ with ${q \le K}$. The complement ${\mathfrak m}$ is nonempty. Proof: Note that if ${q_1, q_2 \le K}$ and ${a/q_1 \neq b/q_2}$ then ${\left\lvert \frac{a}{q_1} - \frac{b}{q_2} \right\rvert \ge \frac{1}{q_1q_2} \ge \frac{1}{K^2}}$, which for large ${N}$ far exceeds ${\frac{1}{q_1 M} + \frac{1}{q_2 M}}$, so distinct major arcs are disjoint. $\Box$ In particular both ${\mathfrak M}$ and ${\mathfrak m}$ are measurable. Thus we may split the integral in (2) over ${\mathfrak M}$ and ${\mathfrak m}$. This integral will have large magnitude on the major arcs, and small magnitude on the minor arcs, so over the whole interval ${[0,1]}$ it will have large magnitude. ### 5.2. Estimate of the minor arcs First, we note the well known fact ${\phi(q) \gg q/\log q}$. Note also that if ${q=q(\alpha)}$ as in the last section and ${\alpha}$ is on a minor arc, we have ${q > (\log N)^{10}}$, and thus ${\phi(q) \gg (\log N)^{9}}$. As such, Corollary 15 yields that ${f(\alpha) \ll \frac{N}{\phi(q)}+N^{.834} \ll \frac{N}{(\log N)^9}}$. Now, \displaystyle \begin{aligned} \left\lvert \int_{\mathfrak m}f(\alpha)^3e(-N\alpha) \; d\alpha \right\rvert &\le \int_{\mathfrak m}\left\lvert f(\alpha)\right\rvert ^3 \; d\alpha \\ &\ll \frac{N}{(\log N)^9} \int_{0}^{1}\left\lvert f(\alpha)\right\rvert ^2 \;d\alpha \\ &=\frac{N}{(\log N)^9}\int_{0}^{1}f(\alpha)f(-\alpha) \; d\alpha \\ &=\frac{N}{(\log N)^9}\sum_{n \le N} \Lambda(n)^2 \\ &\ll \frac{N^2}{(\log N)^8}, \end{aligned} using the well known bound ${\sum_{n \le N} \Lambda(n)^2 \ll N \log N}$. This bound of ${\frac{N^2}{(\log N)^8}}$ will be negligible compared to lower bounds for the major arcs in the next section. ### 5.3. Estimate on the major arcs We show that $\displaystyle \int_{\mathfrak M}f(\alpha)^3e(-N\alpha) d\alpha \asymp \frac{N^2}{2} \mathfrak G(N).$ By Proposition 17 we can split the integral over each interval and write $\displaystyle \int_{\mathfrak M} f(\alpha)^3e(-N\alpha) \; d\alpha = \sum_{q \le (\log N)^{10}}\sum_{\substack{1 \le a \le q \\ \gcd(a,q)=1}} \int_{-1/qM}^{1/qM}f(a/q+\beta)^3e(-N(a/q+\beta)) \; d\beta.$ Then we apply Lemma 14, which gives \displaystyle \begin{aligned} f(a/q+\beta)^3 &= \left(\frac{\mu(q)}{\phi(q)}\sum_{n \le N}e(\beta n) \right)^3 \\ &+\left(\frac{\mu(q)}{\phi(q)}\sum_{n \le N}e(\beta n)\right)^2 O\left((1+\|\beta\|N)\sqrt{qN} \log^2 qN\right) \\ &+\left(\frac{\mu(q)}{\phi(q)}\sum_{n \le N}e(\beta n)\right) O\left((1+\|\beta\|N)\sqrt{qN} \log^2 qN\right)^2 \\ &+O\left((1+\|\beta\|N)\sqrt{qN} \log^2 qN\right)^3. \end{aligned} Now, we can do casework on the side of ${N^{-.9}}$ that ${\|\beta\|}$ lies on.
• If ${\|\beta\| \gg N^{-.9}}$, we have ${\sum_{n \le N}e(\beta n) \ll \frac{2}{|e(\beta)-1|} \ll \frac{1}{\|\beta\|} \ll N^{.9}}$, and ${(1+\|\beta\|N)\sqrt{qN} \log^2 qN \ll N^{5/6+\varepsilon}}$, because certainly we have ${\|\beta\|<1/M=N^{-2/3}}$. • If on the other hand ${\|\beta\|\ll N^{-.9}}$, we have ${\sum_{n \le N}e(\beta n) \ll N}$ obviously, and ${O(1+\|\beta\|N)\sqrt{qN} \log^2 qN) \ll N^{3/5+\varepsilon}}$. As such, we obtain $\displaystyle f(a/q+\beta)^3 \ll \left( \frac{\mu(q)}{\phi(q)}\sum_{n \le N}e(\beta n) \right)^3 + O\left(N^{79/30+\varepsilon}\right)$ in either case. Thus, we can write \displaystyle \begin{aligned} &\qquad \int_{\mathfrak M} f(\alpha)^3e(-N\alpha) \; d\alpha \\ &= \sum_{q \le (\log N)^{10}} \sum_{\substack{1 \le a \le q \\ \gcd(a,q)=1}} \int_{-1/qM}^{1/qM} f(a/q+\beta)^3e(-N(a/q+\beta)) \; d\beta \\ &= \sum_{q \le (\log N)^{10}} \sum_{\substack{1 \le a \le q \\ \gcd(a,q)=1}} \int_{-1/qM}^{1/qM}\left[\left(\frac{\mu(q)}{\phi(q)}\sum_{n \le N}e(\beta n)\right)^3 + O\left(N^{79/30+\varepsilon}\right)\right]e(-N(a/q+\beta)) \; d\beta \\ &=\sum_{q \le (\log N)^{10}} \frac{\mu(q)}{\phi(q)^3} S_q \left(\sum_{\substack{1 \le a \le q \\ \gcd(a,q)=1}} e(-N(a/q))\right) \left( \int_{-1/qM}^{1/qM}\left(\sum_{n \le N}e(\beta n)\right)^3e(-N\beta) \; d\beta \right ) \\ &\qquad +O\left(N^{59/30+\varepsilon}\right). \end{aligned} just using ${M \le N^{2/3}}$. Now, we use $\displaystyle \sum_{n \le N}e(\beta n) = \frac{1-e(\beta N)}{1-e(\beta)} \ll \frac{1}{\|\beta\|}.$ This enables us to bound the expression $\displaystyle \int_{1/qM}^{1-1/qM}\left (\sum_{n \le N}e(\beta n)\right) ^ 3 e(-N\beta)d\beta \ll \int_{1/qM}^{1-1/qM}\|\beta\|^{-3} d\beta = 2\int_{1/qM}^{1/2}\beta^{-3} d\beta \ll q^2M^2.$ But the integral over the entire interval is \displaystyle \begin{aligned} \int_{0}^{1}\left(\sum_{n \le N}e(\beta n) \right)^3 e(-N\beta)d\beta &= \int_{0}^{1} \sum_{a,b,c \le N} e((a+b+c-N)\beta) \\ &\ll \sum_{a,b,c \le N} \mathbf 1(a+b+c=N) \\ &= \binom{N-1}{2}. \end{aligned} Considering the difference of the two integrals gives $\displaystyle \int_{-1/qM}^{1/qM}\left(\sum_{n \le N}e(\beta n) \right)^3 e(-N\beta) \; d\beta - \frac{N^2}{2} \ll q^2 M^2 + N \ll (\log N)^c N^{4/3},$ for some absolute constant ${c}$. For brevity, let $\displaystyle S_q = \sum_{\substack{1 \le a \le q \\ \gcd(a,q)=1}} e(-N(a/q)).$ Then \displaystyle \begin{aligned} \int_{\mathfrak M} f(\alpha)^3e(-N\alpha) \; d\alpha &= \sum_{q \le (\log N)^{10}} \frac{\mu(q)}{\phi(q)^3}S_q \left( \int_{-1/qM}^{1/qM}\left(\sum_{n \le N}e(\beta n)\right)^3e(-N\beta) \; d\beta \right ) \\ &\qquad +O\left(N^{59/30+\varepsilon}\right) \\ &= \frac{N^2}{2}\sum_{q \le (\log N)^{10}} \frac{\mu(q)}{\phi(q)^3}S_q + O((\log N)^{10+c} N^{4/3}) + O(N^{59/30+\varepsilon}) \\ &= \frac{N^2}{2}\sum_{q \le (\log N)^{10}} \frac{\mu(q)}{\phi(q)^3} + O(N^{59/30+\varepsilon}). \end{aligned} . The inner sum is bounded by ${\phi(q)}$. So, $\displaystyle \left\lvert \sum_{q>(\log N)^{10}} \frac{\mu(q)}{\phi(q)^3} S_q \right\rvert \le \sum_{q>(\log N)^{10}} \frac{1}{\phi(q)^2},$ which converges since ${\phi(q)^2 \gg q^c}$ for some ${c > 1}$. 
So $\displaystyle \int_{\mathfrak M} f(\alpha)^3e(-N\alpha) \; d\alpha = \frac{N^2}{2}\sum_{q = 1}^\infty \frac{\mu(q)}{\phi(q)^3}S_q + O(N^{59/30+\varepsilon}).$ Now, since ${\mu(q)}$, ${\phi(q)}$, and ${\sum_{\substack{1 \le a \le q \\ \gcd(a,q)=1}} e(-N(a/q))}$ are multiplicative functions of ${q}$, and ${\mu(q)=0}$ unless ${q}$ is squarefree, \displaystyle \begin{aligned} \sum_{q = 1}^\infty \frac{\mu(q)}{\phi(q)^3} S_q &= \prod_p \left(1+\frac{\mu(p)}{\phi(p)^3}S_p \right) \\ &= \prod_p \left(1-\frac{1}{(p-1)^3} \sum_{a=1}^{p-1} e(-N(a/p))\right) \\ &= \prod_p \left(1-\frac{1}{(p-1)^3}\sum_{a=1}^{p-1} (p\cdot \mathbf 1(p|N) - 1)\right) \\ &= \prod_{p|N}\left(1-\frac{1}{(p-1)^2}\right) \prod_{p \nmid N}\left(1+\frac{1}{(p-1)^3}\right) \\ &= \mathfrak G(N). \end{aligned} So, $\displaystyle \int_{\mathfrak M} f(\alpha)^3e(-N\alpha) \; d\alpha = \frac{N^2}{2}\mathfrak{G}(N) + O(N^{59/30+\varepsilon}).$ When ${N}$ is odd, $\displaystyle \mathfrak{G}(N) = \prod_{p|N}\left(1-\frac{1}{(p-1)^2}\right)\prod_{p \nmid N}\left(1+\frac{1}{(p-1)^3}\right)\geq \prod_{m\geq 3}\left(\frac{m-2}{m-1}\frac{m}{m-1}\right)=\frac{1}{2},$ so that we have $\displaystyle \int_{\mathfrak M} f(\alpha)^3e(-N\alpha) \; d\alpha \asymp \frac{N^2}{2}\mathfrak{G}(N),$ as desired. ## 6. Completing the proof Because the integral over the minor arc is ${o(N^2)}$, it follows that $\displaystyle \sum_{a+b+c=N} \Lambda(a)\Lambda(b)\Lambda(c) = \int_{0}^{1} f(\alpha)^3 e(-N\alpha) d \alpha \asymp \frac{N^2}{2}\mathfrak{G}(N) \gg N^2.$ Consider the set ${S_N}$ of integers ${p^k\leq N}$ with ${k>1}$. We must have ${p \le N^{\frac{1}{2}}}$, and for each such ${p}$ there are at most ${O(\log N)}$ possible values of ${k}$. As such, ${|S_N| \ll\pi(N^{1/2}) \log N\ll N^{1/2}}$. Thus $\displaystyle \sum_{\substack{a+b+c=N \\ a\in S_N}} \Lambda(a)\Lambda(b)\Lambda(c) \ll (\log N)^3 |S|N \ll\log(N)^3 N^{3/2},$ and similarly for ${b\in S_N}$ and ${c\in S_N}$. Notice that summing over ${a\in S_N}$ is equivalent to summing over composite ${a}$, so $\displaystyle \sum_{p_1+p_2+p_3=N} \Lambda(p_1)\Lambda(p_2)\Lambda(p_3) =\sum_{a+b+c=N} \Lambda(a)\Lambda(b)\Lambda(c) + O(\log(N)^3 N^{3/2}) \gg N^2,$ where the sum is over primes ${p_i}$. This finishes the proof. # First drafts of Napkin up! EDIT: Here’s a July 19 draft that fixes some of the glaring issues that were pointed out. This morning I finally uploaded the first drafts of my Napkin project, which I’ve been working on since December 2014. See the Napkin tab above for a listing of all drafts. Napkin is my personal exposition project, which unifies together a lot of my blog posts and even more that I haven’t written on yet into a single coherent narrative. It’s written for students who don’t know much higher math, but are curious and already are comfortable with proofs. It’s especially suited for e.g. students who did contests like USAMO and IMO. There are still a lot of rough edges in the draft, but I haven’t been able to find much time to work on it this whole calendar year, and so I’ve finally decided the perfect is the enemy of the good and it’s about time I brought this project out of the garage. I’d much appreciate any comments, corrections, or suggestions, however minor. Please let me know! I do plan to keep updating this draft as I get comments, though I can’t promise that I’ll be very fast in doing so. I. Basic Algebra and Topology II. Linear Algebra and Multivariable Calculus III. Groups, Rings, and More IV. Complex Analysis V. Quantum Algorithms VI. 
Algebraic Topology I: Homotopy VII. Category Theory VIII. Differential Geometry IX. Algebraic Topology II: Homology X. Algebraic NT I: Rings of Integers XI. Algebraic NT II: Galois and Ramification Theory XII. Representation Theory XIII. Algebraic Geometry I: Varieties XIV. Algebraic Geometry II: Schemes XV. Set Theory I: ZFC, Ordinals, and Cardinals XVI. Set Theory II: Model Theory and Forcing (I’ve also posted this on Reddit to try and grab a larger audience. We’ll see how that goes.)
https://scicomp.stackexchange.com/questions/31224/richardsons-iteration-gradient-method-and-spectral-radius
# Richardson's Iteration, Gradient Method and Spectral Radius Richardson's iteration introduce a scalar $$\alpha$$ to the update formula: $$\textbf{x}^{(k+1)} = \textbf{x}^{(k)} + \alpha \textbf{r}^{(k)}$$ And compute $$\alpha$$ by minimizing the spectral radius: $$\min_{\omega}\,\rho(B) = \min_{\omega}\, \rho(I-\omega A)\, .$$ As $$I$$, and $$A$$ do not change over the iterations, it might seem it exists only one (i.e. equal for all the iterations) $$\omega$$ making the max eigenvalue minimal, hence the spectral radius minimal. Since I know gradient method and conjugate gradient perform better by setting $$\alpha$$ dynamically, I am wondering what am I missing here. Is there any way to see through the spectral radius expression that gradient and conjugate gradient iterations converge faster? • The key insight is that the spectral radius of $I-\omega A$ is a conservative measure of convergence rate. One can show that picking the optimal $\omega$ over $k$ iterations amounts to solving an order-$k$ polynomial minimization problem over $\|Ax-b\|$ subject to $x = p(A)b$. You might find my answer on MO helpful: mathoverflow.net/questions/232132/… – Richard Zhang Mar 12 '19 at 20:36
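To make the comparison concrete, here is a small NumPy sketch (mine, not from the thread; the matrix, the prescribed spectrum, and the iteration count are arbitrary). It runs fixed-parameter Richardson with the spectral-radius-optimal $\omega$, steepest descent with the dynamically chosen $\alpha_k = r_k^T r_k / r_k^T A r_k$, and conjugate gradient on the same SPD system. Richardson and steepest descent converge at broadly similar linear rates governed by the condition number, while CG (which, as the comment above explains, implicitly optimizes over a whole polynomial in $A$ rather than a single scalar per step) is far faster.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(1.0, 100.0, n)            # prescribed spectrum, condition number 100
A = Q @ np.diag(eigs) @ Q.T                  # SPD matrix
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

omega = 2.0 / (eigs.min() + eigs.max())      # minimizes rho(I - omega*A)
x_rich, x_sd = np.zeros(n), np.zeros(n)
x_cg, r_cg = np.zeros(n), b.copy()
p = r_cg.copy()

for _ in range(200):
    # Richardson: the same scalar step every iteration
    x_rich += omega * (b - A @ x_rich)
    # steepest descent: step length recomputed from the current residual
    r = b - A @ x_sd
    x_sd += (r @ r) / (r @ (A @ r)) * r
    # conjugate gradient
    Ap = A @ p
    alpha = (r_cg @ r_cg) / (p @ Ap)
    x_cg += alpha * p
    r_new = r_cg - alpha * Ap
    p = r_new + (r_new @ r_new) / (r_cg @ r_cg) * p
    r_cg = r_new

for name, x in [("Richardson", x_rich), ("steepest descent", x_sd), ("CG", x_cg)]:
    print(f"{name:18s} error = {np.linalg.norm(x - x_star):.2e}")
```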
https://proofwiki.org/wiki/Rule_of_Simplification/Sequent_Form/Formulation_1
# Rule of Simplification/Sequent Form/Formulation 1 ## Theorem $\text {(1)}: \quad$ $\ds p \land q$ $\ds$ $\ds \vdash \ \$ $\ds p$ $\ds$ $\text {(2)}: \quad$ $\ds p \land q$ $\ds$ $\ds \vdash \ \$ $\ds q$ $\ds$ ### Form 1 $\ds p \land q$ $\ds$ $\ds \vdash \ \$ $\ds p$ $\ds$ ### Form 2 $\ds p \land q$ $\ds$ $\ds \vdash \ \$ $\ds q$ $\ds$ ## Proof We apply the Method of Truth Tables. $\begin{array}{|ccc||c|c|} \hline p & \land & q & p & q \\ \hline F & F & F & F & F \\ F & F & T & F & T \\ T & F & F & T & F \\ T & T & T & T & T \\ \hline \end{array}$ As can be seen, when $p \land q$ is true so are both $p$ and $q$. $\blacksquare$
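For readers who like to cross-check such tableau proofs in a proof assistant, both forms are immediate from the projections of a conjunction; in Lean 4 syntax (my own aside, not part of this page):

```lean
example (p q : Prop) (h : p ∧ q) : p := h.1   -- Form 1
example (p q : Prop) (h : p ∧ q) : q := h.2   -- Form 2
```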
https://math.stackexchange.com/questions/4455692/is-it-possible-that-linear-transform-changes-interval-data-to-ratio
# Is it possible that linear transform changes interval data to ratio? I'm going through a stats intro and got puzzled by the concept of interval and ratio data. Celsius temperatures are an example of interval data that cannot be multiplied or divided, because there is no true 0. Yet, a simple linear transformation changes Celsius into Kelvin, where 0 is defined. Kelvin temperatures are divided and multiplied all over physics, so they are clearly ratio data. To me it seems counterintuitive that a simple linear transformation allows for the definition of a whole new mathematical operation. Is it just me and there is actually nothing strange about it, or am I missing something? Think of a hyperplane $H$ that does not go through the origin, e.g. $\{x\in \mathbb R^n \mid x^Tc=b\neq0\}$, where $c$ is a constant vector. Then obviously you cannot add or scale vectors from $H$ and stay in $H$ -- the result will be outside $H$; this is the same situation as Celsius. But after applying the affine transform $x'=x-\frac{b}{\|c\|^2}c$ to each point, the image $\{x'\in \mathbb R^n \mid x'^Tc=0\}$ is a linear subspace, and now you can add and scale vectors within it.
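A concrete numeric illustration of the answer above (my own, using the usual conversion $T_K = T_C + 273.15$): $40^\circ$C looks like "twice as hot" as $20^\circ$C, but $\frac{40 + 273.15}{20 + 273.15} = \frac{313.15}{293.15} \approx 1.07$, so the apparent ratio $2$ is an artifact of where Celsius happens to put its zero. Ratios of Kelvin temperatures, by contrast, are meaningful because any other absolute scale is a purely multiplicative rescaling (e.g. Rankine, $T_R = 1.8\,T_K$), which leaves all ratios unchanged; this is exactly the "plane through the origin" situation, where scaling stays inside the space.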
http://mathhelpforum.com/algebra/196549-show-polynomial-f-x-2x-1-has-multiplicative-inverse-z_4-x-print.html
# Show that the polynomial f(x)=2x+1 has a multiplicative inverse in Z_4 [x]. $\left[f(x)\right]^2=(2x+1)^2=4x^2+4x+1=1$ in $\mathbb Z_4[x].$ So $f(x)$ is its own inverse in $\mathbb Z_4[x].$
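As a quick computational cross-check (my own sketch, not part of the original post), one can multiply the coefficient lists and reduce mod $4$:

```python
def poly_mul_mod(f, g, m):
    """Multiply polynomials given as coefficient lists (constant term first), mod m."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % m
    return out

f = [1, 2]                    # the polynomial 2x + 1
print(poly_mul_mod(f, f, 4))  # [1, 0, 0], i.e. (2x+1)^2 = 1 in Z_4[x]
```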
https://mathoverflow.net/questions/251395/when-k-mathbbf-q-finite-field-x-always-has-k-rational-point-and-so/251399
# When $k = \mathbb{F}_q$ finite field, $X$ always has $k$-rational point, and so $A \simeq X$? Let $k$ be an arbitrary field. Let $(A, e)$ be an abelian variety over $k$, and let $X$ be a torsor for $A$, i.e. $X$ is a proper smooth $k$-variety, and there is an $A$-action acting $:A \times X \to X$ such that for any $k$-scheme $L$ and a point $x \in X(L)$, the induced "orbit" map $A_L \to X_L$ given by $a \mapsto a + x$ is an isomorphism. When $k = \mathbb{F}_q$ is a finite field, how do I see that $X$ always has a $k$-rational point, and thus $A \simeq X$? This is a theorem of Lang's from 1956. Here's an online document giving a proof (in the form $H^1(A,k)=0$): Lecture 14: Galois Cohomology of Abelian Varieties over Finite Fields, William Stein. http://wstein.org/edu/2010/582e/lectures/582e-2010-02-12/582e-2010-02-12.pdf Stein notes that there is a "more modern proof" in the first few sections of Chapter VI of Serre's Algebraic Groups and Class Fields. The original article is Serge Lang, "Abelian varieties over finite fields," Proceedings of the National Academy of Sciences 41.3 (1955): 174-176. It is available at http://www.pnas.org/content/41/3/174.short. It's very short, but phrased in the "old-style" Weil language of algebraic geometry. Here's a quick sketch of a proof with $k=\mathbb F_q$. Choosing a point of $X(\overline{k})$, we can make $X$ into an abelian variety over $\overline{k}$. Then the $q$-power Frobenius map $\phi_q:X\to X$ is the composition of an isogeny and a translation, say $\phi_q(x)=f(x)+x_0$ with $f:X\to X$ an isogeny (defined over $\overline{k}$) and $x_0\in X(\overline{k})$. The fact that $\phi_q$ is inseparable implies that $f$ is inseparable, and hence $(1-f)^*$ acts as the identity map on differentials. Thus $1-f$ has finite kernel, so it is surjective, and thus there is a point $x_1\in X(\overline k)$ satisfying $(1-f)(x_1)=x_0$. This implies that $\phi_q(x_1)=x_1$, and hence that $x_1\in X(k)$. A bit of overkill, but it follows from the Weil conjectures. The structure of cohomology ($H^i = \wedge^i H^1$) is computed over the algebraic closure and it follows that the number of points is $\prod(\alpha_i-1)$ where the $\alpha_i$ are the eigenvalues of Frobenius on $H^1$ so $|\alpha_i| = q^{1/2}$ and the product is therefore not zero. This is a theorem of Serge Lang, proved in a paper called "Abelian varieties over finite fields".
https://cameramath.com/Questions?category=Arithmetic
Still have math questions? Q: When Sally went bowling, her scores were 108, 72, and 95. If she bowls a 4th game, what will her score need to be to give her an average of 91? Q: Last month, the online price of a powered ride-on car was $250. This month, the online price is$330. What is the percent of increase for the price of the car? Q: Billy's car gets 25 miles per gallon of gas. If Wilmington, DE is 150 miles away, how many gallons of gas is he going to need to get to Wilmington and come back home? Q: Which list of numbers is ordered from least to greatest value? $$A.\frac{1}{20},4.1\times10^{-1},4.8\%,\sqrt{4}$$ $$B.4.8\%,\frac{1}{20},4.1\times10^{-1},\sqrt{4}$$ $$C.\frac{1}{20},\sqrt{4},4.1\times10^{-1},4.8\%$$ $$D.4.1\times10^{-1},\frac{1}{20},\sqrt{4},4.8\%$$ Q: In the auditorium there are second third ,fourth and fifth graders. One sixth of the students are fifth graders, one third are fourth graders and one fourth of the remaining students are second graders. If there are 96 students in the auditorium, how many are second graders? what fraction of the students are second graders Q: Hayden says that $$\frac{3}{2}$$ is a whole number because it does not repeat as a decimal. Is he correct? Why or why not? Q: In 2015, the number of graduating seniors taking the ACT exam was 1,924,887. In 2011, a total of 1,612,665 graduating seniors took the exam. By what percent did the number increase over this period of time? Q: Convert $$132\text{ cm}^2$$ into square inches. When appropriate, round your answer off to the nearest thousandth. Q: The price of a technology stock has risen to $9.85 today. Yesterday's price was$9.72. Find the percentage increase. Round your answer to the nearest tenth of a percent. Q: Mr. Kendrick had a $100 bill to buy meat for a family gathering. He bought two briskets for$13.50 each, eight pounds of sausage of $31.94, and six chickens for$20.16. How much change should Mr. kendrick receive from his $100 bill? Q: Simplify: $$8(-9-5x)+4b$$ use $$x=9$$ and $$b=-9$$ a. None of these answers are correct. b. -1044 c. -468 d. -396 e. 252 Q: What is the GCF of two numbers or expressions that have no prime factors in common. Explain. Q: How many meters are there in $$50\frac{1}{4}$$ kilometers? a. 5025 m b. 5250 m c. 50025 m d. 50250 m Q: At a shelter, 15% of the dogs are puppies. There are 60 dogs at the shelter. How many are puppies? Q: The manufacturing cost of an air-conditioning unit$544, and the full-replacement extended warranty costs $113. If the manufacturer sells 506,970 units with extended warranties and must replace 20% of them as a result, how much will the replacement costs be? (Note: Assume manufacturing costs and replacement costs are the same.) a.$55,158,336 b. $11,457,522 c.$2,129,274 d. $66,717,252 Q: Ryan has 10 quarters. Kelly has exactly seven times as much money as Ryan. They will combine their money to buy a game. The game costs$27. How much more money do they need to save? Q: Sam wants to buy a remote control dune buggy that is 30% off. The original price is $200.11. What is amount of discount(nearest penny)? Q: The men's 110-meter hurdles is an event in the Olympic games. The distance from the starting line to the first hurdle is 13.72 meters. The distance from the first hurdle to the second hurdle is 9.14 meters. What is the total distance from the starting line to the second hurdle? Q: In a city, the record monthly high temperature for March is $$56^{\circ}\text{F}$$. The record monthly low temperature for March is $$-4^{\circ}\text{F}$$. 
What is the range of temperatures for the month of march? Q: x+9=10 Q: Makayla has$740 to spend at a bicycle store for some new gear and biking outfits. Assume all prices listed include tax. She buys a new bicycle for $465.34. She buys 4 bicycle reflectors for$13.25 each and a pair of bike gloves for $18.12. She plans to spend some or all of the money she has left to buy new biking outfits for$25.69 each. What is the greatest number of outfits Makayla can buy with the money that's left over?
https://math.stackexchange.com/questions/2367685/proving-argument-of-complex-number-properties/2367689
# Proving argument of complex number properties. How to prove $\arg\left(\frac{z_1}{z_2}\right) = \arg(z_1) - \arg(z_2)$ for $z_1, z_2 \neq 0$ ? 1. If you know about the polar representation of complex numbers, you can write $z_1 = r_1 e^{i \theta_1}$, $z_2 = r_2 e^{i \theta_2}$ where $r_j = |z_j|$, $\theta_j = \arg(z_j)$ for $j=1,2$. Then $z_1/z_2 = (r_1/r_2) e^{i(\theta_1-\theta_2)}$, so $\theta_1-\theta_2$ is an argument for $z_1/z_2$. 2. If you don't know yet about polar representation, you can use trigonometric identities. Say $z_j = x_j + i y_j$. Then $$\frac{z_1}{z_2} = \frac{(x_1+iy_1)(x_2-iy_2)}{x_2^2+y_2^2}=\frac{(x_1x_2+y_1y_2)+i(y_1x_2-x_1y_2)}{x_2^2+y_2^2}.$$ Write $x_j = r_j \cos \theta_j$, $y_j = r_j \sin \theta_j$ for $j=1,2$. After pulling out some common factors of $r_1 r_2$, you will see some terms like $\cos \theta_1 \cos \theta_2 + \sin \theta_1 \sin \theta_2$, which by a trigonometric identity is equal to $\cos(\theta_1-\theta_2)$; and $\sin \theta_1 \cos \theta_2 - \cos \theta_1 \sin \theta_2$, which by another identity is equal to $\sin(\theta_1-\theta_2)$. From this you can conclude again that $\theta_1-\theta_2$ is an argument for $z_1/z_2$ (well, after you have filled in some steps that I skipped, including taking care of $r_j$ factors). 3. Finally, if you really want to prove that $\arg(z_1/z_2) = \arg(z_1) - \arg(z_2)$, then, then please be careful! For example if you follow the convention that arguments should be single values in the interval $[0,2\pi)$, then the statement you want to prove is false! If $\arg(z_1) < \arg(z_2)$, then you will have $\arg(z_1/z_2) = \arg(z_1)-\arg(z_2)+2\pi$. It is okay if you think of argument as a multivalued function (then the statement you want to prove is an equality of sets; it is true, but you might want to think carefully about what it actually means, including what the subtraction $\arg z_1 - \arg z_2$ means). [Thanks for the comment below that brought this possible interpretation to my attention, I overlooked it before.] So please work with a more careful formulation of the statement. Then you should be able to find a proof using either one of the ideas suggested above. • If $\arg(z)$ denotes the muli-valued argument of $z$, then $\arg(z_1/z_2)=\arg(z_1)-\arg(z_2)$. The equality is interpreted as a set equality. – Mark Viola Jul 22 '17 at 4:13 This is equivalent to $arg(z_1) + arg(z_2) = arg(z_1 z_2)$. Write $z_1 = r_1 e^{i arg(z_1)}$ and $z_2 = r_2 e^{i arg(z_2)}$.
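The warning in point 3 is easy to see numerically. Here is a small Python sketch (mine, not from the thread) using `cmath.phase`, which returns the principal argument in $[-\pi, \pi]$:

```python
import cmath
from math import pi

z1 = complex(-1, 0.5)     # principal argument close to +pi
z2 = complex(-1, -0.5)    # principal argument close to -pi

lhs = cmath.phase(z1 / z2)               # about -0.927
rhs = cmath.phase(z1) - cmath.phase(z2)  # about +5.356
print(lhs, rhs, round((rhs - lhs) / (2 * pi)))  # equal only after subtracting one full turn of 2*pi
```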
https://beta.book.xogeny.com/behavior/arrays/oned/
Found an issue with the book? Report it on Github. # One-Dimensional Heat Transfer¶ Our previous discussion on State Space introduced matrices and vectors. The focus was primarily on mathematical aspects of arrays. In this section, we will consider how arrays can be used to represent something a bit more physical, the one-dimensional spatial distribution of variables. We’ll look at several features in Modelica that are related to arrays and how they allow us to compactly express behavior involving arrays. Our problems will center around the a simple heat transfer problem. Consider a one-dimensional rod like the one shown below: ## Deriving Equations¶ Before getting into the math, there is an important point worth making. Modelica is a language for representing lumped systems. What this means is that the behavior must be expressed in terms of ordinary differential equations (ODEs) or in some cases differential-algebraic equations (DAEs). But Modelica does not include any means for describing partial differential equations (i.e., equations that involve the gradient of variables in spatial directions). As such, in this section we will derive the ordinary differential equations that result when we discretize a rod into discrete pieces and then model the behavior of each of these discrete (lumped) pieces. With that out of the way, let us consider the heat balance for each discrete section of this rod. First, we have the thermal capacitance of the section. This can expressed as: where is the mass of the section, is the capacitance (a material property) and is the temperature of the section. We can further describe the mass as: where is the material density and is the volume of the section. Finally, we can express the volume of the section as: where is the cross-sectional area of the section, which is assumed to be uniform, and is the length of the section. For this example, we will assume the rod is composed of equal size pieces. In this case, we can define the segment length, , to be: We will also assume that the cross-sectional area is uniform along the length of the rod. As such, the mass of each segment can be given as: In this case, the thermal capacitance of each section would be: This, in turn, means that the net heat gained in that section at any time will be: where we assume that , and don’t change with respect to time. That covers the thermal capacitance. In addition, we will consider two different forms of heat transfer. The first form of heat transfer we will consider is convection from each section to some ambient temperature, . We can express the amount of heat lost from each section as: where is the convection coefficient and is the surface area of the section. The other form of heat transfer is conduction to neighboring sections. Here there will be two contributions, one lost to the section, if it exists, and the other lost to the section, if it exists. These can be represented, respectively, as: Using these relations, we know that the heat balance for the first element would be: Similarly, the heat balance for the last element would be: Finally, the heat balance for all other elements would be: ## Implementation¶ We start by defining types for the various physical quantities. This will give us the proper units and, depending on the tool, allows us to do unit checking on our equations. 
Our type definitions are as follows: type Temperature=Real(unit="K", min=0); type ConvectionCoefficient=Real(unit="W/K", min=0); type ConductionCoefficient=Real(unit="W.m-1.K-1", min=0); type Mass=Real(unit="kg", min=0); type SpecificHeat=Real(unit="J/(K.kg)", min=0); type Density=Real(unit="kg/m3", min=0); type Area=Real(unit="m2"); type Volume=Real(unit="m3"); type Length=Real(unit="m", min=0); We will also define several parameters to describe the rod we are simulating: parameter Integer n=10; parameter Length L=1.0; parameter Density rho=2.0; parameter ConvectionCoefficient h=2.0; parameter ConductionCoefficient k=10; parameter SpecificHeat C=10.0; parameter Temperature Tamb=300 "Ambient temperature"; Given these parameters, we can compute the areas and volume for each section in terms of the parameters we have already defined using the following declarations: parameter Area A_c = pi*R^2, A_s = 2*pi*R*L; parameter Volume V = A_c*L/n; Finally, the only array in this problem is the temperature of each section (since this is the only quantity that actually varies along the length of the rod): Temperature T[n]; This concludes all the declarations we need to make. Now let’s consider the various equations required. First, we need to specify the initial conditions for the rod. We will assume that , and the initial temperatures of all other sections can be linearly interpolated between these two end conditions. This is captured by the following equation: initial equation T = linspace(200,300,n); where the linspace operator is used to create an array of n values that vary linearly between 200 and 300. Recall from our State Space examples that we can include equations where the left hand side and right hand side expressions are vectors. This is another example of such an equation. Finally, we come to the equations that describe how the temperature in each section changes over time: equation rho*V*C*der(T[1]) = -h*A_s*(T[1]-Tamb)-k*A_c*(T[1]-T[2])/(L/n); for i in 2:(n-1) loop rho*V*C*der(T[i]) = -h*A_s*(T[i]-Tamb)-k*A_c*(T[i]-T[i-1])/(L/n)-k*A_c*(T[i]-T[i+1])/(L/n); end for; rho*V*C*der(T[end]) = -h*A_s*(T[end]-Tamb)-k*A_c*(T[end]-T[end-1])/(L/n); The first equation corresponds to the heat balance for section , the last equation corresponds to the heat balance for section and the middle equation covers all other sections. Note the use of end as a subscript. When an expression is used to evaluate a subscript for a given dimension, end represents the size of that dimension. In our case, we use end to represent the last section. Of course, we could use n in this case, but in general, end can be very useful when the size of a dimension is not already associated with a variable. Also note the use of a for loop in this model. A for loop allows the loop index variable to loop over a range of values. In our case, the loop index variable is i and the range of values is through . The general syntax for a for loop is: for <var> in <range> loop // statements end for; where <range> is a vector of values. A convenient way to generate a sequence of values is to use the range operator, :. The value before the range operator is the initial value in the sequence and the value after the range operator is the final value in the sequence. So, for example, the expression 5:10 would generate a vector with the values 5, 6, 7, 8, 9 and 10. Note that this includes the values used to specify the range. 
When a for loop is used in an equation section, each iteration of the for loop generates a new equation for each equation inside the for loop. So in our case, we will generate equations corresponding to values of i between 2 and n-1. Putting all this together, the complete model would be:

    model Rod_ForLoop "Modeling heat conduction in a rod using a for loop"
      type Temperature=Real(unit="K", min=0);
      type ConvectionCoefficient=Real(unit="W/K", min=0);
      type ConductionCoefficient=Real(unit="W.m-1.K-1", min=0);
      type Mass=Real(unit="kg", min=0);
      type SpecificHeat=Real(unit="J/(K.kg)", min=0);
      type Density=Real(unit="kg/m3", min=0);
      type Area=Real(unit="m2");
      type Volume=Real(unit="m3");
      type Length=Real(unit="m", min=0);
      constant Real pi = 3.14159;
      parameter Integer n=10;
      parameter Length L=1.0;
      parameter Length R=0.1 "Rod radius (declaration and value assumed; the original listing uses R without declaring it)";
      parameter Density rho=2.0;
      parameter ConvectionCoefficient h=2.0;
      parameter ConductionCoefficient k=10;
      parameter SpecificHeat C=10.0;
      parameter Temperature Tamb=300 "Ambient temperature";
      parameter Area A_c = pi*R^2, A_s = 2*pi*R*L;
      parameter Volume V = A_c*L/n;
      Temperature T[n];
    initial equation
      T = linspace(200,300,n);
    equation
      rho*V*C*der(T[1]) = -h*A_s*(T[1]-Tamb)-k*A_c*(T[1]-T[2])/(L/n);
      for i in 2:(n-1) loop
        rho*V*C*der(T[i]) = -h*A_s*(T[i]-Tamb)-k*A_c*(T[i]-T[i-1])/(L/n)-k*A_c*(T[i]-T[i+1])/(L/n);
      end for;
      rho*V*C*der(T[end]) = -h*A_s*(T[end]-Tamb)-k*A_c*(T[end]-T[end-1])/(L/n);
    end Rod_ForLoop;

Note: we've included pi as a literal constant in this model. Later in the book, we'll discuss how to properly import common Constants.

Simulating this model yields a solution for each of the nodal temperatures. Note how the temperatures are initially distributed linearly (as we specified in our initial equation section).

## Alternatives

It turns out that there are several ways we can generate the equations we need. Each has its own advantages and disadvantages depending on the context. We'll present them here just to demonstrate the possibilities. The choice of which one leads to the most understandable equations is up to the model developer.

One array feature we can use to make these equations slightly simpler is called an array comprehension. An array comprehension flips the for loop around so that we take a single equation and add some information at the end indicating that the equation should be evaluated for different values of the loop index variable. In our case, we can represent our equation section using array comprehensions as follows:

    equation
      rho*V*C*der(T[1]) = -h*A_s*(T[1]-Tamb)-k*A_c*(T[1]-T[2])/(L/n);
      rho*V*C*der(T[2:n-1]) = {-h*A_s*(T[i]-Tamb)-k*A_c*(T[i]-T[i-1])/(L/n)-k*A_c*(T[i]-T[i+1])/(L/n) for i in 2:(n-1)};
      rho*V*C*der(T[end]) = -h*A_s*(T[end]-Tamb)-k*A_c*(T[end]-T[end-1])/(L/n);

We could also combine the array comprehension with some if expressions to nullify contributions to the heat balance that don't necessarily apply. In that case, we can simplify the equation section to the point where it contains one (admittedly multi-line) equation:

    equation
      rho*V*C*der(T) = {-h*A_s*(T[i]-Tamb)
                        -(if i==1 then 0 else k*A_c/(L/n)*(T[i]-T[i-1]))
                        -(if i==n then 0 else k*A_c/(L/n)*(T[i]-T[i+1]))
                        for i in 1:n};

Recall, from several previous examples, that Modelica supports vector equations. In these cases, when the left hand and right hand side are vectors of the same size, we can use a single (vector) equation to represent many scalar equations.
We can use this feature to simplify our equations as follows:

    equation
      rho*V*C*der(T[1]) = -h*A_s*(T[1]-Tamb)-k*A_c*(T[1]-T[2])/(L/n);
      rho*V*C*der(T[2:n-1]) = -h*A_s*(T[2:n-1]-fill(Tamb,n-2))-k*A_c*(T[2:n-1]-T[1:n-2])/(L/n)-k*A_c*(T[2:n-1]-T[3:n])/(L/n);
      rho*V*C*der(T[end]) = -h*A_s*(T[end]-Tamb)-k*A_c*(T[end]-T[end-1])/(L/n);

Note that, in the middle equation, the convection term is written with the slice T[2:n-1] and fill(Tamb,n-2) (a vector of n-2 copies of the ambient temperature), since no scalar loop index i is in scope in a vector equation.

Note that when a vector variable like T has a range of subscripts applied to it, the result is a vector containing the components indicated by the values in the subscript. For example, the expression T[2:4] is equivalent to {T[2], T[3], T[4]}. The subscript expression doesn't need to be a range. For example, T[{2,5,9}] is equivalent to {T[2], T[5], T[9]}.

Finally, let us consider one last way of refactoring these equations. Imagine we introduced three additional vector variables (using a Heat type for the heat flow associated with each section; a complete sketch including such a type is given after the conclusion below):

    Heat Qconv[n];
    Heat Qleft[n];
    Heat Qright[n];

Then we can write these three equations (again using vector equations) to define the heat lost to the ambient, previous section and next section in the rod:

    Qconv = {-h*A_s*(T[i]-Tamb) for i in 1:n};
    Qleft = {(if i==1 then 0 else -k*A_c*(T[i]-T[i-1])/(L/n)) for i in 1:n};
    Qright = {(if i==n then 0 else -k*A_c*(T[i]-T[i+1])/(L/n)) for i in 1:n};

This allows us to express the heat balance for each section using a vector equation that doesn't include any subscripts:

    rho*V*C*der(T) = Qconv+Qleft+Qright;

## Conclusion

In this section, we've seen various ways that we can use vector variables and vector equations to represent one-dimensional heat transfer. Of course, this vector related functionality can be used for a wide range of different problem types. The goal of this section was to introduce several features to demonstrate the various options that are available to the developer when working with vectors.
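As referenced above, here is one way the refactored equations could be assembled into a complete model. This is a sketch, not a listing from the book: the model name, the Heat type, and the rod radius parameter R (and its value) are assumptions added so that the fragment is self-contained.

    model Rod_Refactored "Sketch: rod heat conduction using intermediate heat-flow vectors"
      type Temperature=Real(unit="K", min=0);
      type ConvectionCoefficient=Real(unit="W/K", min=0);
      type ConductionCoefficient=Real(unit="W.m-1.K-1", min=0);
      type SpecificHeat=Real(unit="J/(K.kg)", min=0);
      type Density=Real(unit="kg/m3", min=0);
      type Area=Real(unit="m2");
      type Volume=Real(unit="m3");
      type Length=Real(unit="m", min=0);
      type Heat=Real(unit="W");  // assumed definition for the heat-flow terms
      constant Real pi = 3.14159;
      parameter Integer n=10;
      parameter Length L=1.0;
      parameter Length R=0.1 "Rod radius (assumed value)";
      parameter Density rho=2.0;
      parameter ConvectionCoefficient h=2.0;
      parameter ConductionCoefficient k=10;
      parameter SpecificHeat C=10.0;
      parameter Temperature Tamb=300 "Ambient temperature";
      parameter Area A_c = pi*R^2, A_s = 2*pi*R*L;
      parameter Volume V = A_c*L/n;
      Temperature T[n];
      Heat Qconv[n], Qleft[n], Qright[n];
    initial equation
      T = linspace(200,300,n);
    equation
      // Heat exchanged with the ambient and with the two neighboring sections
      Qconv = {-h*A_s*(T[i]-Tamb) for i in 1:n};
      Qleft = {(if i==1 then 0 else -k*A_c*(T[i]-T[i-1])/(L/n)) for i in 1:n};
      Qright = {(if i==n then 0 else -k*A_c*(T[i]-T[i+1])/(L/n)) for i in 1:n};
      // Heat balance for every section, written without subscripts
      rho*V*C*der(T) = Qconv+Qleft+Qright;
    end Rod_Refactored;

The model is balanced: the 4n unknowns (T, Qconv, Qleft, Qright) are matched by the four vector equations of size n, and the initial equation fixes the n states in T.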
2019-08-23T11:15:06
{ "domain": "xogeny.com", "url": "https://beta.book.xogeny.com/behavior/arrays/oned/", "openwebmath_score": 0.7521411180496216, "openwebmath_perplexity": 705.1747806897664, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357248544007, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.6532573107030921 }
https://api-project-1022638073839.appspot.com/questions/what-are-the-set-of-values-for-which-this-equation-has-real-distinct-roots
# What are the set of values for which this equation has real distinct roots?

## $2x^{2} + 3kx + k = 0$

Apr 18, 2018

$k < 0 \text{ or } k > \frac{8}{9}$

#### Explanation:

To determine the nature of the roots, use the discriminant $\Delta = b^{2} - 4ac$:

• If $\Delta > 0$ then real distinct roots
• If $\Delta = 0$ then real and equal roots
• If $\Delta < 0$ then complex roots

Here $\Delta > 0$ is required.

$2x^{2} + 3kx + k = 0$ is in standard form, with $a = 2$, $b = 3k$ and $c = k$.

$\Rightarrow \Delta = (3k)^{2} - (4 \times 2 \times k) = 9k^{2} - 8k$

$\Rightarrow 9k^{2} - 8k > 0$

The left side is a quadratic with positive leading coefficient and zeros at $k = 0$ and $k = \frac{8}{9}$.

graph{9x^2-8x [-10, 10, -5, 5]}

Thus it is positive when $k < 0$ or $k > \frac{8}{9}$.

$k \in (-\infty, 0) \cup \left(\frac{8}{9}, \infty\right)$

Apr 18, 2018

$k < 0$ or $k > \frac{8}{9}$

#### Explanation:

$2x^{2} + 3kx + k = 0$ is a quadratic equation. To find the roots $x$ of a quadratic equation $ax^{2} + bx + c = 0$ we use the formula

$x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}$

where the term $b^{2} - 4ac$ is the discriminant.

If the discriminant

$b^{2} - 4ac > 0$ $\rightarrow$ the equation has two real solutions.

$b^{2} - 4ac < 0$ $\rightarrow$ the equation has no real solutions.

$b^{2} - 4ac = 0$ $\rightarrow$ the equation has one real solution.

For $2x^{2} + 3kx + k = 0$:

$a = 2$, $b = 3k$, $c = k$

Substituting into the discriminant gives $9k^{2} - (4)(2)(k)$, so in order to get real distinct roots of the function we need the discriminant to be positive:

$9k^{2} - 8k > 0$

$k(9k - 8) > 0$

$k < 0$ or $k > \frac{8}{9}$
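As a quick check of the answer (not part of either explanation above): for $k = 1 > \frac{8}{9}$ the equation becomes $2x^{2} + 3x + 1 = (2x+1)(x+1) = 0$, which has the two distinct real roots $x = -\frac{1}{2}$ and $x = -1$; for $k = \frac{1}{2}$, which lies in $\left[0, \frac{8}{9}\right]$, the discriminant is $9k^{2} - 8k = \frac{9}{4} - 4 = -\frac{7}{4} < 0$, so the roots are not real.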
2020-04-03T21:50:18
{ "domain": "appspot.com", "url": "https://api-project-1022638073839.appspot.com/questions/what-are-the-set-of-values-for-which-this-equation-has-real-distinct-roots", "openwebmath_score": 0.7959893345832825, "openwebmath_perplexity": 4116.037549160033, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357243200245, "lm_q2_score": 0.6654105521116445, "lm_q1q2_score": 0.6532573103475127 }
http://ams.org/bookstore?fn=20&arg1=tb-de&ikey=GSM-129
Classical Methods in Ordinary Differential Equations: With Applications to Boundary Value Problems

Stuart P. Hastings, University of Pittsburgh, PA, and J. Bryce McLeod, Oxford University, England, and University of Pittsburgh, PA

2012; 373 pp; hardcover
Volume: 129
ISBN-10: 0-8218-4694-9
ISBN-13: 978-0-8218-4694-0
List Price: US$63
Member Price: US$50.40
Order Code: GSM/129

This text emphasizes rigorous mathematical techniques for the analysis of boundary value problems for ODEs arising in applications. The emphasis is on proving existence of solutions, but there is also a substantial chapter on uniqueness and multiplicity questions and several chapters which deal with the asymptotic behavior of solutions with respect to either the independent variable or some parameter. These equations may give special solutions of important PDEs, such as steady state or traveling wave solutions. Often two, or even three, approaches to the same problem are described. The advantages and disadvantages of different methods are discussed. The book gives complete classical proofs, while also emphasizing the importance of modern methods, especially when extensions to infinite dimensional settings are needed. There are some new results as well as new and improved proofs of known theorems. The final chapter presents three unsolved problems which have received much attention over the years. Both graduate students and more experienced researchers will be interested in the power of classical methods for problems which have also been studied with more abstract techniques. The presentation should be more accessible to mathematically inclined researchers from other areas of science and engineering than most graduate texts in mathematics.
2013-05-25T01:16:47
{ "domain": "ams.org", "url": "http://ams.org/bookstore?fn=20&arg1=tb-de&ikey=GSM-129", "openwebmath_score": 0.22488011419773102, "openwebmath_perplexity": 963.37711974325, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357243200244, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.6532573103475126 }
https://socratic.org/questions/how-do-you-write-the-complex-number-in-trigonometric-form-4i
# How do you write the complex number in trigonometric form 4i?

Sep 30, 2016

$4i = 4\operatorname{cis}\left(\frac{\pi}{2}\right)$. For the general form, $4i = 4\operatorname{cis}\left(\left(2k+\tfrac{1}{2}\right)\pi\right)$, $k = 0, \pm 1, \pm 2, \pm 3, \ldots$

#### Explanation:

Any complex number in rectangular cartesian form is $z = (x, y) = x + iy$, with real part $x$ and imaginary part $y$, where $x$ and $y$ are real.

In polar form, $(x, y)$ becomes $r(\cos \theta + i \sin \theta)$ or, in brief, $r \operatorname{cis} \theta$.

The conversion uses $r = \sqrt{x^{2} + y^{2}} \geq 0$, $\cos \theta = \frac{x}{\sqrt{x^{2} + y^{2}}}$ and $\sin \theta = \frac{y}{\sqrt{x^{2} + y^{2}}}$.

Here, $x = 0$ and $y = 4$, and so $r = \sqrt{0 + 4^{2}} = 4$ (the principal square root), $\cos \theta = 0$ and $\sin \theta = \frac{4}{4} = 1$.

The value of $\theta$ is $\frac{\pi}{2}$. The general value is $2k\pi + \frac{\pi}{2}$, $k = 0, \pm 1, \pm 2, \pm 3, \ldots$

All of these values point in the same direction, so, seemingly, the general form might be viewed as irrelevant. Yet, for rotation problems, with $\theta = \theta(t) = ct$, the same point is reached once in every cycle of period $2\pi$, and so the general form cannot be ignored:

$4i = 4\operatorname{cis}\left(\left(2k+\tfrac{1}{2}\right)\pi\right)$, $k = 0, \pm 1, \pm 2, \pm 3, \ldots$
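As a quick check (not part of the original answer): $4\operatorname{cis}\left(\frac{\pi}{2}\right) = 4\left(\cos \frac{\pi}{2} + i \sin \frac{\pi}{2}\right) = 4(0 + i \cdot 1) = 4i$, so the trigonometric form recovers the original number.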
2020-02-19T03:39:06
{ "domain": "socratic.org", "url": "https://socratic.org/questions/how-do-you-write-the-complex-number-in-trigonometric-form-4i", "openwebmath_score": 0.8237181901931763, "openwebmath_perplexity": 2267.035179966452, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357243200245, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.6532573103475126 }
https://www.lmfdb.org/knowledge/show/group.outer_aut
group.outer_aut

If $G$ is a group, its inner automorphism group is a normal subgroup of the full automorphism group. Then, the outer automorphism group of $G$ is $\mathrm{Out}(G) = \mathrm{Aut}(G)/\mathrm{Inn}(G).$
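For example (an illustration, not part of the original knowl): if $G$ is abelian, then $\mathrm{Inn}(G)$ is trivial and $\mathrm{Out}(G) \cong \mathrm{Aut}(G)$; for $G = S_3$, every automorphism is inner, so $\mathrm{Out}(S_3)$ is trivial.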
2020-07-15T02:38:38
{ "domain": "lmfdb.org", "url": "https://www.lmfdb.org/knowledge/show/group.outer_aut", "openwebmath_score": 0.7336868643760681, "openwebmath_perplexity": 3304.2759700213733, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357243200245, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.6532573103475126 }
https://math.stackexchange.com/questions/3878452/continous-piecewise-function/3878485
# Continuous Piecewise Function [closed]

I recently found this question: find $$c$$ and $$d$$ such that $$f(x) = \begin{cases} cx+4d & x<2\\ x^{2}+4 & 2\leq x\leq 3 \\ dx^{2}+\frac{2x}{c}+1 & x>3 \end{cases}$$ is continuous everywhere. How should I solve this?

• Welcome to Math SE! Please modify your question to include your own thoughts/attempts at solving this problem. This site isn't a homework service and as it stands your question is unlikely to be met with helpful responses. – DMcMor Oct 23 at 18:35

Since all "pieces" of the function are continuous, it remains to make the function continuous at the junctions between the pieces. We must have: $$\lim_{x\rightarrow 2^{-}}(cx + 4d) = \lim_{x\rightarrow 2^{+}}( x^{2} + 4)$$ $$\lim_{x\rightarrow 3^{-}}(x^{2} + 4) = \lim_{x\rightarrow 3^{+}}(dx^{2} + \frac{2x}{c} + 1)$$ Evaluating the limits: $$2c + 4d = 8$$ $$13 = 9d + \frac{6}{c} + 1$$ Thus: $$c = 4-2d$$ $$13 = 9d + \frac{3}{2-d}+1$$ $$12(2-d) = 9d(2-d) + 3$$ $$9d^{2} - 30d + 21 = 0$$ $$3(3d-7)(d-1) = 0$$ $$d = 1,\frac{7}{3}$$ Then, substituting: $$\boxed{(c,d) = (2,1),(-\frac{2}{3},\frac{7}{3})}$$ We may verify that both pairs solve the original system of equations.

Use the middle definition to get the values $$f(2)=8$$ and $$f(3)=13$$. Then get a pair of equations from continuity: $$2c+4d=8$$ and $$9d+\frac{6}{c}+1=13$$. Solve for $$c$$ and $$d$$.
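As a check on the second pair (the verification is not spelled out above): with $$(c,d) = \left(-\tfrac{2}{3}, \tfrac{7}{3}\right)$$ the first condition gives $$2c + 4d = -\tfrac{4}{3} + \tfrac{28}{3} = 8$$ and the second gives $$9d + \frac{6}{c} + 1 = 21 - 9 + 1 = 13,$$ so both junction conditions hold, just as they do for $$(c,d) = (2,1).$$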
2020-11-29T14:34:18
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3878452/continous-piecewise-function/3878485", "openwebmath_score": 0.7599393725395203, "openwebmath_perplexity": 368.2271241524912, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9817357237856482, "lm_q2_score": 0.6654105521116443, "lm_q1q2_score": 0.653257309991933 }