Cutting a Möbius strip down the middle Why does the result of cutting a Möbius strip down the middle lengthwise have two full twists in it? I can account for one full twist--the identification of the top left corner with the bottom right is a half twist; similarly, the top right corner and bottom left identification contributes another half twist. But where does the second full twist come from? Explanations with examples or analogies drawn from real life much appreciated. edit: I'm pasting J.M.'s Mathematica code here (see his answer), modified for version 5.2. twist[{f_, g_}, a_, b_, u_] := {Cos[u] (a + f Cos[b u] - g Sin[b u]), Sin[u] (a + f Cos[b u] - g Sin[b u]), g Cos[b u] + f Sin[b u]}; With[{a = 3, b = 1/2, f = 1/2}, Block[{$DisplayFunction = Identity}, g1 = ParametricPlot3D[Evaluate[Append[twist[{f - v, 0}, a, b, u], {EdgeForm[], FaceForm[SurfaceColor[Red], SurfaceColor[Blue]]}]], {u, 0, 2 Pi}, {v, 0, 2 f}, Axes -> None, Boxed -> False]; g2 = ParametricPlot3D[Evaluate[Append[twist[{f - v, 0}, a, b, u], EdgeForm[]]], {u, 0, 4 Pi}, {v, 0, 2 f/3}, Axes -> None, Boxed -> False]; g3 = ParametricPlot3D[Evaluate[Append[twist[{f - v, 0}, a, b, u], {EdgeForm[], FaceForm[SurfaceColor[Red], SurfaceColor[Blue]]}]], {u, 0, 2 Pi}, {v, 2 f/3, 4 f/3}, Axes -> None, Boxed -> False, PlotPoints -> 105]]; GraphicsArray[{{g1, Show[g2, g3]}}]];
Observe that the boundary of a Möbius strip is a circle. When you cut, you create more boundary; this is in fact a second circle. During this process, the Möbius strip loses its non-orientability. Make two Möbius strips with paper and some tape. Cut one and leave the other uncut. Now take each and draw a line down the middle. The line will come back and meet itself on the Möbius strip; on the cut Möbius strip, it won't.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 2 }
How to calculate point y with given point x of an angled line I dropped out of school too early, I guess, but I bet you guys can help me here. I've got a sloped line starting from point a(0|130) and ending at b(700|0). I need an equation to calculate the y-coordinate when the point x is given, e.g. 300. Can someone help me please? Sorry for asking such a dumb question; I can't find any answer here, probably just too silly to get the math slang ;)
You want the two point form of a linear equation. If your points are $(x_1,y_1)$ and $(x_2,y_2)$, the equation is $y-y_1=(x-x_1)\frac{y_2-y_1}{x_2-x_1}$. In your case, $y=-\frac{130}{700}(x-700)$
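For example, plugging the asker's value $x=300$ into the formula above (a quick numerical check, added for illustration): $$y=-\frac{130}{700}\,(300-700)=\frac{130\cdot 400}{700}=\frac{520}{7}\approx 74.3.$$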
{ "language": "en", "url": "https://math.stackexchange.com/questions/67602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is the zero factorial one, i.e., $0!=1$? Possible Duplicate: Prove $0! = 1$ from first principles Why does $0! = 1$? I was wondering why $0! = 1$. Can anyone please help me understand it. Thanks.
Answer 1: The "empty product" is (in general) taken to be 1, so that formulae are consistent without having to look over your shoulder. Take logs and it is equivalent to the empty sum being zero. Answer 2: $(n-1)! = \frac {n!} n$ applied with $n=1$ Answer 3: Convention - for the reasons above, it works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does the exclamation mark do? I've seen this but never knew what it does. Can any one let me in on the details? Thanks.
For completeness: Although in mathematics the $!$ almost always refers to the factorial function, you often see it in quasi-mathematical contexts with a different meaning. For example, in many programming languages it is used to mean negation, for example in Java the expression !true evaluates to false. It is also commonly used to express inequality, for example in the expression 1 != 2, read as '1 is not equal to 2'. It is also used in some functional languages to denote a function that modifies its input, as in the function set! in Scheme, which sets its first argument to the value of its second argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/67801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 1 }
How would I solve $\frac{(n - 10)(n - 9)(n - 8)\times\ldots\times(n - 2)(n - 1)n}{11!} = 12376$ for some $n$ without brute forcing it? Given this equation: $$ \frac{(n - 10)(n - 9)(n - 8)\times\ldots\times(n - 2)(n - 1)n}{11!} = 12376 $$ How would I find $n$? I already know the answer to this, all thanks to Wolfram|Alpha, but just knowing the answer isn't good enough for me. I want to know how I would go about figuring out the answer without having to multiply each term and then use algebra. I was hoping that there might be a more clever way of doing this.
$n(n-1)\cdots(n-10)/11! = 2^3 \cdot 7 \cdot 13 \cdot 17$. It's not hard to see that $11! = 2^8 3^4 5^2 7^1 11^1$; this is apparently known as de Polignac's formula although I didn't know the name. Therefore $n(n-1) \cdots (n-10) = 2^{11} 3^4 5^2 7^2 11^1 13^1 17^1$. In particular $17$ appears in the factorization but $19$ does not. So $17 \le n < 19$. By checking the exponent of $7$ we see that $n = 17$ (so we have $(17)(16)\cdots(7)$, which includes $7$ and $14$), not $n = 18$. Alternatively, there's an analytic solution. Note that $n(n-1) \cdots (n-10) < (n-5)^{11}$ but that the two sides are fairly close together. This is because $(n-a)(n-(10-a)) < (n-5)^2$. So we have $$ n(n-1) \cdots (n-10)/11! = 12376 $$ and using the inequality we get $$ (n-5)^{11}/11! > 12376 $$ where we expect the two sides to be reasonably close. Solving for $n$ gives $$ n > (12376 \times 11!)^{1/11} + 5 \approx 16.56.$$ Now start trying values of $n$ that are greater than $16.56$; the first one is $17$, the answer. Implicit in here is the approximation $$ {n \choose k} \approx {(n-(k-1)/2)^k \over k!} $$ which comes from replacing every factor of the product $n(n-1)\cdots(n-k+1)$ by the middle factor.
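As a quick sanity check (my addition, not part of the original argument), note that the left-hand side $n(n-1)\cdots(n-10)/11!$ is just $\binom{n}{11}$, so a short Python scan confirms the value found above:

from math import comb

# the left-hand side is the binomial coefficient C(n, 11)
print([n for n in range(11, 40) if comb(n, 11) == 12376])  # prints [17]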
{ "language": "en", "url": "https://math.stackexchange.com/questions/67849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 6 }
Square root of differential operator If $D_x$ is the differential operator, e.g. $D_x x^3=3 x^2$, how can I find out what the operator $Q_x=(1+(k D_x)^2)^{-1/2}$ does to a (differentiable) function $f(x)$? ($k$ is a real number.) For instance, what is $Q_x x^3$?
It probably means this: Expand the expression $(1+(kt)^2)^{-1/2}$ as a power series in $t$, getting $$ a_0 + a_1 t + a_2 t^2 + a_3 t^3 + \cdots, $$ and then put $D_x$ where $t$ was: $$ a_0 + a_1 D_x + a_2 D_x^2 + a_3 D_x^3 + \cdots $$ and then apply that operator to $x^3$: $$ a_0 x^3 + a_1 D_x x^3 + a_2 D_x^2 x^3 + a_3 D_x^3 x^3 + \cdots. $$ All of the terms beyond the ones shown here will vanish, so you won't have an infinite series.
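To make this concrete, here is the worked example for $Q_x x^3$ (my own computation following the recipe above, not part of the original answer). The binomial series gives $$(1+(kt)^2)^{-1/2} = 1 - \tfrac{1}{2}k^2 t^2 + \tfrac{3}{8}k^4 t^4 - \cdots,$$ and since $D_x^2\, x^3 = 6x$ and $D_x^4\, x^3 = 0$, only two terms survive: $$Q_x\, x^3 = x^3 - \tfrac{1}{2}k^2 \cdot 6x = x^3 - 3k^2 x.$$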
{ "language": "en", "url": "https://math.stackexchange.com/questions/67904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Does taking closure preserve finite index subgroups? Let $K \leq H$ be two subgroups of a topological group $G$ and suppose that $K$ has finite index in $H$. Does it follow that $\bar{K}$ has finite index in $\bar{H}$ ?
The answer is yes in general, and here is a proof, which is an adaptation of MartianInvader's: Let $K$ have finite index in $H$, with coset reps. $h_1,\ldots,h_n$. Since multiplication by any element of $G$ is a homeomorphism from $G$ to itself (since $G$ is a topological group), we see that each coset $h_i \overline{K}$ is a closed subset of $G$, and hence so is their union $h_1\overline{K} \cup \cdots \cup h_n \overline{K}$. Now this closed set contains $H$, and hence contains $\overline{H}$. Thus $\overline{H}$ is contained in the union of finitely many $\overline{K}$ cosets, and hence contains $\overline{K}$ with finite index. [Note: I hadn't paid proper attention to Keenan and Kevin's comments on MartianInvader's answer when I wrote this, and this answer essentially replicates the content of their comments.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/68034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Condition For Existence Of Phase Flows I am a bit confused about the existence of one-parameter groups of diffeomorphisms/phase flows for various types of ODE's. Specifically, there is a problem in V.I. Arnold's Classical Mechanics text that asks to prove that a positive potential energy guarantees a phase flow, and also one asking to prove that $U(x) = -x^4$ does not define a phase flow, and these have me thinking. Consider the following two (systems of) differential equations: $\dot x(t) = y(t)$, $\dot y(t) = 4x(t)^3$ and $\dot a(t) = b(t)$, $\dot b(t) = -4a(t)^3$. Both phase flows might, as far as I see it, have issues with the fact that the functions $\dot y(t)$ and $\dot b(t)$ have inverses which are not $C^\infty$ everywhere. However, the $(x,y)$ phase flow has an additional, apparently (according to Arnold's ODE text) more important issue: it approaches infinity in a finite time. Why, though, do I care about the solutions "blowing up" more than I care about the vector fields' differentiability issues? $\textbf{What is, actually, the criterion for the existence of a phase flow, given a (sufficiently differentiable) vector field?}$
How does Arnold define "phase flow"? As far as I know, part of the definition of a flow requires the solutions to exist for all $t > 0$. If they go to $\infty$ in a finite time, they don't exist after that. On the other hand, I don't see why not having a $C^\infty$ inverse would be an issue.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Expectation of supremum Let $x(t)$ a real valued stochastic process and $T>0$ a constant. Is it true that: $$\mathbb{E}\left[\sup_{t\in [0,T]} |x(t)|\right] \leq T \sup_{t\in [0,T]} \mathbb{E}\left[|x(t)|\right] \text{ ?}$$ Thanks for your help.
Elaboration on the comment by Zhen: just consider $x(t) = 1$ a.s. for all $t$ and $T = 0.5$. Then the left-hand side equals $1$, while the right-hand side equals $T \cdot 1 = 0.5$, so the proposed inequality fails.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Notations involving squiggly lines over horizontal lines Is there a symbol for "homeomorphic to"? I looked on Wikipedia, but it doesn't seem to mention one? Also, for isomorphism, is the symbol a squiggly line over an equals sign? What is the symbol with a squiggly line over just one horizontal line? Thanks.
I use $\cong$ for isomorphism in a category, which includes both homeomorphism and isomorphism of groups, etc. I have seen $\simeq$ used to mean homotopy equivalence, but I don't know how standard this is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 1 }
Determining the truth value of a statement I am stuck with the following question: Determine the truth value of each of the following statements (a statement is a sentence that evaluates to either true or false, but you cannot be indecisive). If 2 is even then New York has a large population. Now I don't get what "truth value" means here. I would be thankful if someone could help me out. Thanks in advance.
If X Then Y is an implication. In other words, the truth of X implies the truth of Y. The "implies" operator is defined in exactly this manner. Google "implies operator truth table" to see the definition for every combination of values. Most importantly, think about why it's defined in this manner by substituting in place of X and Y statements that you know to be either true or false. One easy way to summarise the definition is that either X is false (in which case it doesn't matter what the second value is), or Y is true. So applying this to your statement: 2 is indeed even (so now that X is true, we only need to check that Y is true to conclude that the implication is valid). NY does indeed have a large population, and so we conclude that the implication is valid and the sentence as a whole is true!
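For reference, here is the truth table just described (standard material, added for convenience; the only false row is a true hypothesis with a false conclusion): $$\begin{array}{cc|c} X & Y & X \Rightarrow Y \\ \hline T & T & T \\ T & F & F \\ F & T & T \\ F & F & T \end{array}$$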
{ "language": "en", "url": "https://math.stackexchange.com/questions/68318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
summation of x * (y choose x) binomial coefficients What does this summation simplify to? $$ \sum_{x=0}^{y} \frac{x}{x!(y-x)!} $$ I was able to realize that it is equivalent to the summation of $x\dbinom{y}{x}$ if you divide and multiply by $y!$, but I am unsure of how to further simplify. Thanks for the help!
Using generating function technique as in answer to your other question: Using $g_1(t) = t \exp(t) = \sum_{x=0}^\infty t^{x+1} \frac{1}{x!} = \sum_{x=0}^\infty t^{x+1} \frac{x+1}{(x+1)!} = \sum_{x=-1}^\infty t^{x+1} \frac{x+1}{(x+1)!} = \sum_{x=0}^\infty t^{x} \frac{x}{x!}$ and $g_2(t) = \exp(t)$. $$ \sum_{x=0}^{y} x \frac{1}{x!} \frac{1}{(y-x)!} = [t]^y ( g_1(t) g_2(t) ) = [t]^y ( t \exp(2 t) ) = \frac{2^{y-1}}{(y-1)!} = \frac{y 2^{y-1}}{y!} $$
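A quick numerical check of the final identity (my own verification in Python, not part of the answer):

from math import factorial

def lhs(y):
    return sum(x / (factorial(x) * factorial(y - x)) for x in range(y + 1))

print(all(abs(lhs(y) - y * 2**(y - 1) / factorial(y)) < 1e-9 for y in range(1, 16)))  # prints True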
{ "language": "en", "url": "https://math.stackexchange.com/questions/68384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Why is this map a homeomorphism? A few hours ago a user posted a link to this pdf: There was a discussion about Proposition 3.2.8. I read it, and near the end, there is a map given $$ \bigcap_{i_1,\dots,i_n,\dots\in\{0,1\}}X_{i_1,\dots,i_n,\dots}\mapsto (i_1,\dots,i_n,\dots). $$ And it says this is a homeomorphism. Is there a more explicit explanation of why it's a homeomorphism?
If you examine the construction of $C$, you’ll see that each set $Y_{i_1,\dots,i_n}$ is the closure of a certain open ball; to simplify the notation, let $B_{i_1,\dots,i_n}$ be that open ball. The map in question is a bijection that takes $B_{i_1,\dots,i_n}\cap C$ to $$\{(j_1,j_2,\dots)\in\{0,1\}^{\mathbb{Z}^+}: j_1=i_1, j_2=i_2,\dots,j_n=i_n\},$$ which is a basic open set in the product $\{0,1\}^{\mathbb{Z}^+}$. Every open subset of $C$ is a union of sets of the form $B_{i_1,\dots,i_n}\cap C$, so the map is open. Every open set in the product $\{0,1\}^{\mathbb{Z}^+}$ is a union of sets of the form $$\{(j_1,j_2,\dots)\in\{0,1\}^{\mathbb{Z}^+}: j_1=i_1, j_2=i_2,\dots,j_n=i_n\},$$ so the map is continuous. Finally, a continuous, open bijection is a homeomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a rule of integration that corresponds to the quotient rule? When teaching the integration method of u-substitution, I like to emphasize its connection with the chain rule of integration. Likewise, the intimate connection between the product rule of derivatives and the method of integration by parts comes up in discussion. Is there an analogous rule of integration for the quotient rule? Of course, if you spot an integral of the form $\int \left (\frac{f(x)}{g(x)} \right )' = \int \frac{g(x) \cdot f(x)' - f(x) \cdot g(x)'}{\left [ g(x)\right ]^2 }$, then the antiderivative is obvious. But is there another form/manipulation/"trick"?
I guess you could arrange an analog to integration by parts, but making students learn it would be superfluous. $$ \int \frac{du}{v} = \frac{u}{v} + \int \frac{u}{v^2} dv.$$
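For completeness, here is where the formula comes from (a short derivation, not in the original answer): the quotient rule gives $$d\!\left(\frac{u}{v}\right) = \frac{v\,du - u\,dv}{v^2} = \frac{du}{v} - \frac{u}{v^2}\,dv,$$ and integrating both sides and rearranging yields $\int \frac{du}{v} = \frac{u}{v} + \int \frac{u}{v^2}\,dv$.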
{ "language": "en", "url": "https://math.stackexchange.com/questions/68505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 5, "answer_id": 1 }
Gaussian Elimination Does simple Gaussian elimination work on all matrices? Or are there cases where it doesn't? My guess is yes, it works on all kinds of matrices, but somehow I remember my teacher pointing out that it doesn't work on all matrices. I'm not sure, though, because I have been given a lot of methods, and maybe I have mixed them all up.
Gaussian elimination without pivoting works only for matrices all whose leading principal minors are non-zero. See http://en.wikipedia.org/wiki/LU_decomposition#Existence_and_uniqueness.
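A minimal example (mine, to illustrate the criterion): $$A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$ is invertible, but its first leading principal minor is $0$, so elimination without pivoting divides by a zero pivot at the very first step, while a single row swap makes the system trivial.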
{ "language": "en", "url": "https://math.stackexchange.com/questions/68613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
On the GCD of a Pair of Fermat Numbers I've been working with the Fermat numbers recently, but this problem has really tripped me up. If the Fermat numbers are defined by $f_a=2^{2^a}+1$, then how can we say that for an integer $b<a$, $\gcd(f_b,f_a)=1$?
Claim. $f_n=f_0\cdots f_{n-1}+2$. The result holds for $f_1$: $f_0=2^{2^0}+1 = 2^1+1 = 3$, $f_1=2^{2}+1 = 5 = 3+2$. Assume the result holds for $f_n$. Then $$\begin{align*} f_{n+1} &= 2^{2^{n+1}}+1\\ &= (2^{2^n})^2 + 1\\ &= (f_n-1)^2 +1\\ &= f_n^2 - 2f_n +2\\ &= f_n(f_0\cdots f_{n-1} + 2) -2f_n + 2\\ &= f_0\cdots f_{n-1}f_n + 2f_n - 2f_n + 2\\ &= f_0\cdots f_n + 2, \end{align*}$$ which proves the formula by induction. $\Box$ Now, let $d$ be a common factor of $f_b$ and $f_a$. Then $d$ divides $f_0\cdots f_{a-1}$ (because it's a multiple of $f_b$) and divides $f_a$. That means that it divides $$f_a - f_0\cdots f_{a-1} = (f_0\cdots f_{a-1}+2) - f_0\cdots f_{a-1} = 2;$$ but $f_a$ and $f_b$ are odd, so $d$ is an odd divisor of $2$. Therefore, $d=\pm 1$. So $\gcd(f_a,f_b)=1$.
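The coprimality claim is easy to spot-check numerically (my own sanity check, not part of the proof):

from math import gcd

def fermat(n):
    return 2**(2**n) + 1

# pairwise gcds of the first several Fermat numbers are all 1
print(all(gcd(fermat(a), fermat(b)) == 1 for a in range(1, 7) for b in range(a)))  # True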
{ "language": "en", "url": "https://math.stackexchange.com/questions/68653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
The chain rule for a function to $\mathbf{C}$ Let $f:U\longrightarrow \mathbf{C}$ be a holomorphic function, where $U$ is a Riemann surface, e.g., $U=\mathbf{C}$, $U=B(0,1)$ or $U$ is the complex upper half plane, etc. For $a$ in $\mathbf{C}$, let $t_a:\mathbf{C} \longrightarrow \mathbf{C}$ be the translation by $a$, i.e., $t_a(z) = z-a$. What is the difference between $df$ and $d(t_a\circ f)$ as differential forms on $U$? My feeling is that $df = d(t_a\circ f)$, but why?
The forms will be different if $a\not=0$, namely if $\mathrm{d} f = w(z) \mathrm{d}z$ locally, then $\mathrm{d}\left( t_a \circ f\right) = w(z-a) \mathrm{d} z$. Added: Above, I was using the following, unconventional definition for the composition, $(t_a \circ f)(z) = f(t_a(z)) = f(z-a)$. The conventional definition, though, is $(t_a \circ f)(z) = t_a(f(z)) = f(z)-a$. With this definition $\mathrm{d} (t_a \circ f) = \mathrm{d}(f-a) = \mathrm{d} f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/68768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve $t_{n}=t_{n-1}+t_{n-3}-t_{n-4}$? I missed the lectures on how to solve this, and it's really kicking my butt. Could you help me out with solving this? Solve the following recurrence exactly. $$ t_n = \begin{cases} n, &\text{if } n=0,1,2,3, \\ t_{n-1}+t_{n-3}-t_{n-4}, &\text{otherwise.} \end{cases} $$ Express your answer as simply as possible using the $\Theta$ notation.
Let's tackle it the general way. Define the ordinary generating function: $$ T(z) = \sum_{n \ge 0} t_n z^n $$ Writing the recurrence as $t_{n + 4} = t_{n + 3} + t_{n + 1} - t_n$, the properties of ordinary generating functions give: $$ \begin{align*} \frac{T(z) - t_0 - t_1 z - t_2 z^2 - t_3 z^3}{z^4} &= \frac{T(z) - t_0 - t_1 z - t_2 z^2}{z^3} + \frac{T(z) - t_0}{z} - T(z) \\ T(z) &= \frac{z}{(1 - z)^2} \end{align*} $$ This has the coefficient: $$ t_n = [z^n] \frac{z}{(1 - z)^2} = [z^{n - 1}] \frac{1}{(1 - z)^2} = (-1)^{n - 1} \binom{-2}{n - 1} = n $$
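The closed form $t_n = n$, hence $t_n = \Theta(n)$ (the form the exercise asks to report), is easy to double-check by running the recurrence directly; this short loop is my own verification, not part of the answer:

t = [0, 1, 2, 3]  # the initial values t_0, ..., t_3
for n in range(4, 30):
    t.append(t[n - 1] + t[n - 3] - t[n - 4])
print(t == list(range(30)))  # True, i.e. t_n = n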
{ "language": "en", "url": "https://math.stackexchange.com/questions/68822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Sum of a series of minimums I need to find the sum of the following minimums. Is there any way to solve it? $$\min\left\{2,\frac{n}2\right\} + \min\left\{3,\frac{n}2\right\} + \min\left\{4,\frac{n}2\right\} + \cdots + \min\left\{n+1, \frac{n}2\right\}=\sum_{i=1}^n \min(i+1,n/2)$$
If $n$ is even, your sum splits as $$\sum_{i=1}^{\frac{n}{2}-2} \min\left(i+1,\frac{n}{2}\right)+\frac{n}{2}+\sum_{i=\frac{n}{2}}^{n} \min\left(i+1,\frac{n}{2}\right)=\sum_{i=1}^{\frac{n}{2}-2} (i+1)+\frac{n}{2}+\frac{n}{2}\sum_{i=\frac{n}{2}}^{n} 1$$ If $n$ is odd, you can perform a similar split: $$\sum_{i=1}^{\frac{n-3}{2}} (i+1)+\frac{n}{2}\sum_{i=\frac{n-1}{2}}^{n} 1$$
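Both splits are easy to verify by brute force (my own check; note that the even-$n$ split as written needs $n \ge 4$ so that the middle term at $i = n/2 - 1$ exists):

def direct(n):
    return sum(min(i + 1, n / 2) for i in range(1, n + 1))

def split(n):
    if n % 2 == 0:
        h = n // 2
        return sum(i + 1 for i in range(1, h - 1)) + h + h * (h + 1)
    return sum(i + 1 for i in range(1, (n - 3) // 2 + 1)) + (n / 2) * ((n + 3) // 2)

print(all(direct(n) == split(n) for n in range(3, 60)))  # True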
{ "language": "en", "url": "https://math.stackexchange.com/questions/68873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that $\cos(x)$ is identically zero using integration by parts Consider $$\int\cos(t-x)\sin(x)dx,$$ where $t$ is a constant. Evaluating the integral by parts, let \begin{align} u = \cos(t-x),\ dv = \sin(x), \\ du = \sin(t-x),\ v = -\cos(x), \end{align} so $$ \int\cos(t-x)\sin(x)dx = -\cos(t-x)\cos(x) - \int\sin(t-x)\cdot-\cos(x)dx. $$ Evaluating the integral on the right by parts again (with a slight abuse of notation), \begin{align} u = \sin(t-x),&\quad dv = -\cos(x), \\ du = -\cos(t-x),&\quad v = -\sin(x), \end{align} we get \begin{align} \int\cos(t-x)\sin(x)dx &= -\cos(t-x)\cos(x) - \left( -\sin(t-x)\sin(x)-\int\cos(t-x)\sin(x)dx\right) \\ &= -\cos(t-x)\cos(x) + \sin(t-x)\sin(x) + \int\cos(t-x)\sin(x)dx, \end{align} and subtracting the integral from both sides, we obtain the dazzling new identity $$\sin(t-x)\sin(x)-\cos(t-x)\cos(x)=0$$ for all $t$ and $x$! Pushing it further, the LHS expression is $-\cos(t)$, and as $t$ was just an arbitrary constant, this implies $\cos(x)$ is identically zero! Now I obviously know something's wrong here. But what, and where? Where's the flaw in my reasoning? P.S. I can evaluate the integral to get the proper answer lol. But this was rather interesting.
A standard trigonometric identity says that $$\sin(t-x)\sin(x)-\cos(t-x)\cos(x)$$ is equal to $$ -\cos((t-x)+x) $$ and that is $-\cos t$. As a function of $x$, this is a constant, i.e. since there's no "$x$" in this expression, it doesn't change as $x$ changes. Since the "dazzling new identity", if stated correctly, would say, not that the expression equals $0$, but that the expression is constant, it seems your derivation is correct. Except that you wrote "$=0$" where you needed "$=\text{constant}$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/68926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Primes sum ratio Let $$G(n)=\begin{cases}1 &\text{if }n \text{ is a prime }\equiv 3\bmod17\\0&\text{otherwise}\end{cases}$$ And let $$P(n)=\begin{cases}1 &\text{if }n \text{ is a prime }\\0&\text{otherwise.}\end{cases}$$ How to prove that $$\lim_{N\to\infty}\frac{\sum\limits_{n=1}^N G(n)}{\sum\limits_{n=1}^N P(n)}=\frac1{16}$$ And what is $$\lim_{N\to\infty} \frac{\sum\limits_{n=1}^N n\,G(n)}{\sum\limits_{n=1}^N n\,P(n)}?$$ And what is $O(f(n))$ of the fastest growing function $f(n)$ such that the following limit exists: $$\lim_{N\to\infty} \frac{\sum\limits_{n=1}^N f(n)\,G(n)}{\sum\limits_{n=1}^N f(n)\,P(n)}$$ And does this all follow directly from the asymptotic equidistribution of primes modulo most things, if such a thing were known? And is it known?
The first sum follows from the Siegel–Walfisz theorem. Summation by parts on the second sum should yield, for large $N$: $$\frac{\sum\limits_{n=1}^N n\,G(n)}{\sum\limits_{n=1}^N n\,P(n)}=\frac{(N\sum\limits_{n=1}^N G(n))-\sum\limits_{n=1}^{N-1}\sum\limits_{k=0}^{n} G(k)}{\sum\limits_{n=1}^N n\,P(n)}=\frac{(N\sum\limits_{n=1}^N P(n)/16)-\sum\limits_{n=1}^{N-1}\sum\limits_{k=0}^{n} P(k)/16}{\sum\limits_{n=1}^N n\,P(n)}=\frac1{16}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/68981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 1 }
Confusion about a specific notation In the following symbolic mathematical statement $n \in \omega $, what does $\omega$ stand for? Does it have something to do with the continuum, or is it just another way to denote the set of natural numbers?
The notation $\omega$ comes from the ordinals, and it denotes the least ordinal number which is not finite. The von Neumann ordinals are transitive sets which are well ordered by $\in$. We can define these sets by induction: (1) $0=\varnothing$; (2) $\alpha+1 = \alpha\cup\{\alpha\}$; (3) if $\beta$ is a limit and all $\alpha<\beta$ were defined, then $\displaystyle\beta=\bigcup_{\alpha<\beta}\alpha$. That is to say that after we have defined all the natural numbers, we define $\omega=\{0,1,2,3,\ldots\}$; then we can continue if we wish and define $\omega+1 = \omega\cup\{\omega\}$ and so on. In set theory it is usual to use $\omega$ to denote the least infinite ordinal, as well as the set of finite ordinals. It still relates to the continuum, since $\mathcal P(\omega)$ is of cardinality continuum, since $\omega$ is countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is a good book for learning math, from middle school level? Which books are recommended for learning math from the ground up and review the basics - from middle school to graduate school math? I am about to finish my masters of science in computer science and I can use and understand complex math, but I feel like my basics are quite poor.
It depends what your level is and what you're interested in. I think a book that's not about maths but uses maths is probably more interesting for most people. I've noticed this in undergraduates as well: give someone a course using the exact same maths but with the particulars of their subject area subbed in, and they'll like it much better. Examples being speech therapy, economics, criminal justice, meteorology, kinesiology, ecology, philosophy, audio engineering…. [image: cohomology of the tribar] That said, for me a great introduction was Penrose's The Road to Reality. It pulls no punches, unlike many popular physics books. I've always been interested in "the deep structure of the universe/reality", so that was a topic in line with my advice from above. But also Penrose is an excellent writer and takes the time to draw pictures.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63", "answer_count": 15, "answer_id": 11 }
Equation of a rectangle I need to graph a rectangle on the Cartesian coordinate system. Is there an equation for a rectangle? I can't find it anywhere.
I recently came across a parametric form for a rectangle that I had not seen before: $$ \begin{align} x(u) &= \frac{1}{2}\cdot w\cdot \mathrm{sgn}(\cos(u)),\\ y(u) &= \frac{1}{2}\cdot h\cdot \mathrm{sgn}(\sin(u)),\quad (0 \leq u \leq 2\pi) \end{align} $$ where $w$ is the width of the rectangle and $h$ is its height. I have used this in modelling parametric ruled surfaces, where it seems to be rather handy.
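A note on how this actually draws a rectangle (my reading, with example values of mine): the formula only ever takes the four corner values $(\pm w/2, \pm h/2)$, plus points on the sides where a sign vanishes, and it is the segments that a plotting routine draws between consecutive samples that trace out the four sides. A minimal matplotlib sketch:

import numpy as np
import matplotlib.pyplot as plt

w, h = 4.0, 2.0  # example width and height
u = np.linspace(0.0, 2.0 * np.pi, 2001)
x = 0.5 * w * np.sign(np.cos(u))
y = 0.5 * h * np.sign(np.sin(u))
plt.plot(x, y)  # the jumps between corner values are drawn as the sides
plt.axis("equal")
plt.show()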
{ "language": "en", "url": "https://math.stackexchange.com/questions/69099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 9, "answer_id": 1 }
Integrating $\int \frac{1}{1+e^x} dx$ I wish to integrate $$\int_{-a}^a \frac{dx}{1+e^x}.$$ By symmetry, the above is equal to $$\int_{-a}^a \frac{dx}{1+e^{-x}}$$ Now multiply by $e^x/e^x$ to get $$\int_{-a}^a \frac{e^x}{1+e^x} dx$$ which integrates to $$\log(1+e^x) |^a_{-a} = \log((1+e^a)/(1+e^{-a})),$$ which is not correct. According to Wolfram, we should get $$2a + \log((1+e^{-a})/(1+e^a)).$$ Where is the mistake? EDIT: Mistake found: was using log on calculator, which is base 10.
Both answers are equal. Split your answer into $\log(1+e^a)-\log(1+e^{-a})$, and write this as $$\begin{align*}\log(e^a(1+e^{-a}))-\log(e^{-a}(1+e^a))&=\log e^a+\log(1+e^{-a})-\log e^{-a}-\log(1+e^a)\\ &=2a+\log((1+e^{-a})/(1+e^a))\end{align*}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/69179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
How does one prove if a multivariate function is constant? Suppose we are given a function $f(x_{1}, x_{2})$. Does showing that $\frac{\partial f}{\partial x_{i}} = 0$ for $i = 1, 2$ imply that $f$ is a constant? Does this hold if we have $n$ variables instead?
Yes, it does, as long as the function is continuous on a connected domain and the partials exist (let's not get into anything pathological here). And the proof is exactly the same as in the one-variable case. (If there are two points whose values we want to compare, they lie on the same line. Use the multivariable mean value theorem to show that they must have the same value. This proof is easier if you know directional derivatives and/or believe that you can assume that the partial derivative in the direction of this line is zero because all other basis-partials are zero.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/69294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The law of sines in hyperbolic geometry What is the geometrical meaning of the constant $k$ in the law of sines, $\frac{\sin A}{\sinh a} = \frac{\sin B}{\sinh b} = \frac{\sin C}{\sinh c}=k$ in hyperbolic geometry? I know the meaning of the constant only in Euclidean and spherical geometry.
As given by Will Jagy, $k$ must be inside the argument: $$ \frac{\sin A}{\sinh(a/k)} = \frac{\sin B}{\sinh(b/k)} = \frac{\sin C}{\sinh(c/k)} $$ This is the law of sines of hyperbolic trigonometry, where $k$ is the pseudoradius and the constant Gauss curvature is $K= -1/k^2$. Please also refer to "pan-geometry", a set of relations mirrored from spherical to hyperbolic trigonometry, typified by $(\sin,\cos) \to (\sinh,\cosh)$, in Roberto Bonola's book on non-Euclidean geometry. There is nothing imaginary about the pseudoradius. It is as real, palpable, and solid as the radius of a sphere in spherical trigonometry, now that hyperbolic geometry has been so firmly established. I wish the practice of using $K=-1$ were done away with, always using $K = -1/k^2$ or $K = -1/a^2$ instead.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Why are samples always taken from iid random variables? In most mathematical statistics textbook problems, a question typically asks: given that you have $X_1, X_2, \ldots, X_n$ iid from a random sample with pdf (some pdf). My question is: why can't the sample come from one random variable such as $X_1$, since $X_1$ itself is a random variable? Why do you need the sample to come from multiple iid random variables?
A random variable is something that has one definite value each time you do the experiment (whatever you define "the experiment" to be), but possibly a different value each time you do it. If you collect a sample of several random values, the production of all those random values must -- in order to fit the structure of the theory -- be counted as part of one single experiment. Therefore, if you had only one variable, there couldn't be any different values in your sample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How many different combinations of $X$ sweaters can we buy if we have $Y$ colors to choose from? How many different combinations of $X$ sweaters can we buy if we have $Y$ colors to choose from? According to my teacher, the right way to think about this problem is to think of partitioning $X$ identical objects (sweaters) into $Y$ different categories (colors). Well, this idea does yield the right answer, but I just couldn't convince myself about this way of thinking; to be precise, I couldn't link the wording of the problem to this approach. Could anybody throw some more light on this?
The classical solution to this problem is as follows: Order the $Y$ colors. Write $n_1$ zeroes if there are $n_1$ sweaters of the first color. Write a single one. Write $n_2$ zeroes, where $n_2$ is the number of sweaters of the second color. Write a single one, and so on. You get a string of length $X+Y-1$ that has exactly $X$ zeroes and $Y-1$ ones. The map that I have described above is a 1-1 correspondence between the set of different combinations of $X$ sweaters with $Y$ colors and the set of such binary strings. Now, each string is uniquely determined by the positions of the ones in the string. How many ways are there to place $Y-1$ ones in an array of length $X+Y-1$? Does this help? Edit: With my reasoning, you arrive exactly at Brian Scott's formula. He uses bars where I have ones and stars where I have zeroes.
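A small numerical illustration of the correspondence (my own example values):

from math import comb
from itertools import combinations_with_replacement

X, Y = 5, 3  # e.g. 5 sweaters, 3 colors
brute = sum(1 for _ in combinations_with_replacement(range(Y), X))
print(brute, comb(X + Y - 1, Y - 1))  # both print 21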
{ "language": "en", "url": "https://math.stackexchange.com/questions/69465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Closed form for a pair of continued fractions What is $1+\cfrac{1}{2+\cfrac{1}{3+\cfrac{1}{4+\cdots}}}$ ? What is $1+\cfrac{2}{1+\cfrac{3}{1+\cdots}}$ ? It does bear some resemblance to the continued fraction for $e$, which is $2+\cfrac{2}{2+\cfrac{3}{3+\cfrac{4}{4+\cdots}}}$. Another thing I was wondering: can all transcendental numbers be expressed as infinite continued fractions containing only rational numbers? Of course for almost all transcendental numbers there does not exist any method to determine all the numerators and denominators.
I don't know if either of the continued fractions can be expressed in terms of common functions and constants. However, all real numbers can be expressed as continued fractions containing only integers. The continued fractions terminate for rational numbers, repeat for quadratic algebraic numbers, and neither terminate nor repeat for other reals. Shameless plug: There are many references out there for continued fractions. I wrote a short paper that is kind of dry and covers only the basics (nothing close to the results that J. M. cites), but it goes over the results that I mentioned.
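Neither fraction is given a closed form above, but both are easy to evaluate numerically; here is a small sketch of mine (the truncation depth is an arbitrary choice):

def cf_a(depth=25):
    # 1 + 1/(2 + 1/(3 + ...)), truncated at `depth`
    v = float(depth)
    for k in range(depth - 1, 0, -1):
        v = k + 1.0 / v
    return v

def cf_b(depth=25):
    # 1 + 2/(1 + 3/(1 + ...)), truncated at `depth`
    v = 1.0
    for k in range(depth, 1, -1):
        v = 1.0 + k / v
    return v

print(cf_a())  # about 1.4331
print(cf_b())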
{ "language": "en", "url": "https://math.stackexchange.com/questions/69519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 0 }
What is wrong with my reasoning regarding finding volumes by integration? The problem from the book is (this is Calculus 2 stuff): Find the volume common to two spheres, each with radius $r$, if the center of each sphere lies on the surface of the other sphere. I put the center of one sphere at the origin, so its equation is $x^2 + y^2 + z^2 = r^2$. I put the center of the other sphere on the $x$-axis at $r$, so its equation is $(x-r)^2 + y^2 + z^2 = r^2$. By looking at the solid down the $y$- or $z$-axis it looks like a football. By looking at it down the $x$-axis, it looks like a circle. So, the spheres meet along a plane as can be confirmed by setting the two equations equal to each other and simplifying until you get $x = r/2$. So, my strategy is to integrate down the $x$-axis from 0 to $r/2$, getting the volume of the cap of one of the spheres and just doubling it, since the solid is symmetric. In other words, I want to take circular cross-sections along the $x$-axis, use the formula for the area of a circle to find their areas, and add them all up. The problem with this is that I need to find an equation for $r$ in terms of $x$, and it has to be quadratic rather than linear, otherwise I'll end up with the volume of a cone rather than a sphere. But when I solve for, say, $y^2$ in one equation, plug it into the other one, and solve for $r$, I get something like $r = \sqrt{2 x^2}$, which is linear.
The analytic geometry of $3$-dimensional space is not needed to solve this problem. In particular, there is no need for the equations of the spheres. All we need is some information about the volumes of solids of revolution. Draw two circles of radius $1$, one with center $(0,0)$, the other with center $(1,0)$. (Later we can scale everything by the linear factor $r$, which scales volume by the factor $r^3$.) The volume we are looking for is twice the volume obtained by rotating the part of the circle $x^2+y^2=1$, from $x=1/2$ to $x=1$, about the $x$-axis. (This is clear if we have drawn a picture). So the desired volume, in the case $r=1$, is $$2\int_{1/2}^1 \pi y^2\,dx.$$ There remains some work to do, but it should be straightforward. Comment: There are other approaches. Volumes of spherical caps were known well before the formal discovery of the calculus. And if we do use calculus, we can also tackle the problem by taking slices perpendicular to the $y$-axis, or by using the "shell" method. It is worthwhile experimenting with at least one of these methods. The algebra gets a little more complicated.
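Carrying out the remaining work (my own completion of the computation above): with $y^2 = 1-x^2$, $$2\int_{1/2}^1 \pi(1-x^2)\,dx = 2\pi\left[x-\frac{x^3}{3}\right]_{1/2}^1 = 2\pi\left(\frac{2}{3}-\frac{11}{24}\right) = \frac{5\pi}{12},$$ so after scaling by $r^3$ the common volume is $\dfrac{5\pi r^3}{12}$.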
{ "language": "en", "url": "https://math.stackexchange.com/questions/69581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proving two lines trisect a line A question from my vector calculus assignment. Geometry, anything visual, is by far my weakest area. I've been literally staring at this question for hours in frustration and I give up (and I do mean hours). I don't even know where to start... not feeling good over here. Question: In the diagram below $ABCD$ is a parallelogram with $P$ and $Q$ the midpoints of the sides $BC$ and $CD$, respectively. Prove $AP$ and $AQ$ trisect $BD$ at the points $E$ and $F$ using vector methods. Image: Hints: Let $a = OA$, $b = OB$, etc. You must show $ e = \frac{2}{3}b + \frac{1}{3}d$, etc. I figured as much without the hints. Also I made $D$ the origin and simplified to $f = td$ for some $t$. And $f = a + s(q - a)$ for some $s$, and $q = \frac{c}{2}$ and so on... but I'm just going in circles. I have no idea what I'm doing. There are too many variables... I am truly frustrated and feeling dumb right now. Any help is welcome. I'm going to go watch Dexter and forget how dumb I'm feeling.
Note that EBP and EDA are similar triangles. Since 2BP=AD, it follows that 2EB=ED, and thus 3EB=BD. Which is to say, AP trisects BD.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
What is an easy way to prove that $r > s > 0$ implies $x^r > x^s$? I have been using simple inequalities of fractional powers on a positive interval and keep abusing the inequality for $x>1$. I was just wondering if there is a nice way to prove the inequality in a couple of lines: Let $x \in [1,\infty)$ and $r,s \in \mathbb{R}$. What is an easy way to prove that $r > s > 0$ implies $x^r > x^s$?
If you accept that $x^y\gt 1$ for $x\gt 1$ and $y \gt 0$, then $x^r=x^{r-s}x^s \gt x^s$ for $x\gt 1$ and $r \gt s$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is the class of cardinals totally ordered? In a Wikipedia article http://en.wikipedia.org/wiki/Aleph_number#Aleph-one I encountered the following sentence: "If the axiom of choice (AC) is used, it can be proved that the class of cardinal numbers is totally ordered." But isn't the class of ordinals totally ordered (in fact, well-ordered) without the axiom of choice? Being a subclass of the class of ordinals, isn't the class of cardinals obviously totally ordered?
If I understand the problem correctly, it depends on your definition of cardinal. If you define the cardinals as initial ordinals, then your argument works fine, but without choice you cannot show that every set is equinumerous to some cardinal. (Since AC is equivalent to every set being well-orderable.) On the other hand, if you have some definition which implies that each set is equinumerous to some cardinal number, then without choice you cannot show that any two sets (any two cardinals) are comparable. (AC is equivalent to: For two sets $A$, $B$ there exists either an injective map $A\to B$ or an injective map $B\to A$. It is listed as one of equivalent forms if AC at wiki.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/69774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 3 }
How to prove that proj(proj(b onto a) onto a) = proj(b onto a)? How to prove that proj(proj(b onto a) onto a) = proj(b onto a)? It makes perfect sense conceptually, but I keep going in circles when I try to prove it mathematically. Any help would be appreciated.
If they are vectors in ${\mathbb R}^n$, you can do it analytically too. You have $$proj_{\bf a}({\bf b}) = ({\bf b} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}$$ So if ${\bf c}$ denotes $proj_{\bf a}({\bf b})$ then $$proj_{\bf a}({\bf c}) = ({\bf c} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}$$ $$= \bigg(({\bf b} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}\cdot {{\bf a} \over ||{\bf a}||}\bigg) {{\bf a} \over ||{\bf a}||}$$ Factoring out constants this is $$\bigg(({\bf b} \cdot {{\bf a} \over ||{\bf a}||^3}) ({\bf a} \cdot {\bf a})\bigg) {{\bf a} \over ||{\bf a}||}$$ $$= ({\bf b} \cdot {{\bf a} \over ||{\bf a}||^3}) ||{\bf a}||^2 {{\bf a} \over ||{\bf a}||}$$ $$ = ({\bf b} \cdot {{\bf a} \over ||{\bf a}||}) {{\bf a} \over ||{\bf a}||}$$ $$= proj_{\bf a}({\bf b}) $$
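The idempotence is also quick to confirm numerically (my own check, using random vectors):

import numpy as np

def proj(b, a):
    # projection of b onto a, as in the formula above
    return (b @ a) / (a @ a) * a

rng = np.random.default_rng(0)
a = rng.standard_normal(5)
b = rng.standard_normal(5)
c = proj(b, a)
print(np.allclose(proj(c, a), c))  # True: projecting again changes nothing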
{ "language": "en", "url": "https://math.stackexchange.com/questions/69834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Counting Number of k-tuples Let $A = \{a_1, \dots, a_n\}$ be a collection of distinct elements and let $S$ denote the collection of all $k$-tuples $(a_{i_1}, \dots a_{i_k})$ where $i_1, \dots i_k$ is an increasing sequence of numbers from the set $\{1, \dots n \}$. How can one prove rigorously, and from first principles, that the number of elements in $S$ is given by $n \choose k$?
We will show that the number of ways of selecting a subset of $k$ distinct objects from a pool of $n$ of them is given by the binomial coefficient $$ \binom{n}{k} = \frac{n!}{k!(n-k)!}. $$ I find this proof easiest to visualize. First imagine permuting all the $n$ objects in a sequence; this can be done in $n!$ ways. Given a permutation, we pick the first $k$ objects, and we are done. But wait! We overcounted... * *Since we are only interested in the subset of $k$ items, the ordering of the first $k$ items in the permutation does not matter. Remember that these can be arranged in $k!$ ways. *Similarly, the remaining $n-k$ items that we chose to discard are also ordered in the original permutation. Again, these $n-k$ items can be arranged in $(n-k)!$ ways. So to handle the overcounting, we simply divide our original answer by these two factors, resulting in the binomial coefficient. But honestly, I find this argument slightly dubious, at least the way I wrote it. Are we to take on faith that we have taken care of all overcounting? And, why exactly are we dividing by the product of $k!$ and $(n-k)!$ (and not some other function of these two numbers)? One can make the above argument a bit more rigorous in the following way. Denote by $S_n$, $S_k$ and $S_{n-k}$ be the set of permutations of $n$, $k$ and $n-k$ objects respectively. Also let $\mathcal C(n,k)$ be the $k$-subsets of a set of $n$ items. Then the above argument is essentially telling us that There is a bijection between $S_n$ and $\mathcal C(n,k) \times S_k \times S_{n-k}$. Here $\times$ represents Cartesian product. The formal description of the bijection is similar to the above argument: specify the subset formed by the first $k$ items, specify the arrangement among the first $k$ items, specify the arrangement among the remaining $n-k$ items. (The details are clear, I hope. :)) Given this bijection, we can then write: $$ |S_n| = |\mathcal C(n,k)| \cdot |S_k| \cdot |S_{n-k}|, $$ which is exactly what we got before.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
ODE question: $y'+A(t) y =B(t)$, with $y(0)=0, B>0$ implies $y\ge 0$; another proof? I am trying to prove that the solution of the differential equation $$y'+A(t) y =B(t)$$ with initial condition $y(0)=0$, under the assumption that $B\ge 0$, is non-negative for all $t\ge 0$, i.e. $y(t)\ge 0$ for all $t\ge 0$. The proof I know is constructive: first consider the homogeneous ODE $x'+\frac{1}{2} A(t) x=0$ and let $u'=\frac{B}{x^2}$ with $u(0)=0$. Then $y=u x^2$ satisfies the original ODE for $y$ (indeed, $(ux^2)' = u'x^2 + 2uxx' = B - A\,ux^2$). Clearly, by the construction of $y$, $y\ge 0$ for $t\ge 0$. But this proof is not natural in my opinion; there is no reason (at least that I can see) to construct such $x(t)$ and $u(t)$. So is there any other way to prove this fact?
A natural approach is to start from the special case where $A(t)=0$ for every $t\geqslant0$. Then the ODE reads $y'(t)=B(t)$ hence $y'(t)\geqslant0$ for every $t\geqslant0$ hence $y$ is nondecreasing. Since $y(0)\geqslant0$, this proves that $y(t)\geqslant0$ for every $t\geqslant0$. One can deduce the general case from the special one. This leads to consider $z(t)=C(t)y(t)$ and to hope that $z'(t)$ is a multiple of the LHS of the ODE. As you know, defining $C(t)=\exp\left(\int\limits_0^tA(s)\mathrm ds\right)$ fits the bill since $(C\cdot y)'=C\cdot (y'+A\cdot y)$ hence one is left with the ODE $z'=C\cdot B$. Since $C(t)>0$, $C(t)\cdot B(t)\geqslant0$ hence the special case for the RHS $C\cdot B$ yields the inequality $z(t)\geqslant z(0)$ and since $z(0)=y(0)\geqslant0$, one gets $y(t)\geqslant C(t)^{-1}y(0)\geqslant0$ and the proof is finished. Edit The same trick applies to more general functions $A$ and $B$. For example, replacing $A(t)$ by $A(t,y(t))$ and $B(t)$ by $B(t,y(t))$ and assuming that $B(s,x)\geqslant0$ for every $s\geqslant0$ and every $x\geqslant0$, the same reasoning yields $y(t)\geqslant C(t)^{-1}y(0)\geqslant0$ with $C(t)=\exp\left(\int\limits_0^tA(s,y(s))\mathrm ds\right)$. The only modification is that one must now assume that $t\geqslant0$ belongs to the maximal interval $[0,t^*)$ where $y$ may be defined. Solutions of such differential equations when $A$ does depend on $y(t)$ may explode in finite time hence $t^*$ may be finite but $y(t)\geqslant0$ on $0\leqslant t<t^*$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Functions with subscripts? In the equation: $f_\theta(x)=\theta_1x$ Is there a reason that $\theta$ might be a subscript of $f$ and not either a second parameter or left out of the left side of the equation altogether? Does it differ from the following? $f(x,\theta)=\theta_1x$ (I've been following the Machine Learning class and the instructor uses this notation that I've not seen before)
As you note, this is mostly notational choice. I might call $\theta$ a parameter, rather than an independent variable. That is to say, you are meant to think of $\theta$ as being fixed, and $x$ as varying. As an example (though I am not sure of the context in which you saw this notation), maybe you are interested in describing the collection of functions $$f(x) = x^2+c,$$ where $c$ is a real number. I might call this function $f_c(x)$, so that I can later say that for $c\leq 0$ the function $f_c$ has real roots, while for $c>0$ it has none. I think these statements would be much more opaque if I made them about the function $f(x,c)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/69988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Mathematical reason for the validity of the equation: $S = 1 + x^2 \, S$ Given the geometric series: $1 + x^2 + x^4 + x^6 + x^8 + \cdots$ We can recast it as: $S = 1 + x^2 \, (1 + x^2 + x^4 + x^6 + x^8 + \cdots)$, where $S = 1 + x^2 + x^4 + x^6 + x^8 + \cdots$. This recasting is possible only because there is an infinite number of terms in $S$. Exactly how is this mathematically possible? (Related, but not identical, question: General question on relation between infinite series and complex numbers).
The $n$th partial sum of your series is $$ \begin{align*} S_n &= 1+x^2+x^4+\cdots +x^{2n}= 1+x^2(1+x^2+x^4+\cdots +x^{2n-2})\\ &= 1+x^2S_{n-1} \end{align*} $$ Assuming your series converges you get that $$ \lim_{n\to\infty}S_n=\lim_{n\to\infty}S_{n-1}=S. $$ Thus $S=1+x^2S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Not homotopy equivalent to a 3-manifold w/ boundary Let $X_g$ be the wedge of $g$ copies of the circle $S^1$ where $g>1$. Prove that $X_g \times X_g$ is not homotopy equivalent to a 3-manifold with boundary.
If it is a homotopy equivalent to a $3$-manifold $M$, looks at the homology long exact sequence for the pair $(M,\partial M)$ with $\mathbb Z_2$-coefficients. By Poincare duality, $H_i(M)\cong H_{3-i}(M,\partial M)$. You also know the homology groups of $M$, since you know those of $X$. If $\partial M$ has $c$ components, the fact that the long exact sequence has trivial euler characteristic allows you to compute that the rank of $H_1(\partial M;\mathbb Z_2)$ is equal to $2c+4g-2g^2-2<2c$. On the other hand, as you noted $\pi_2$ must be trivial. Yet any boundary component which is a sphere or $\mathbb{RP}^2$ will contribute some $\pi_2$, since it will represent a map of $S^2$ that doesn't bound a map of a $3$-ball to one side. Thus each boundary component has at least two homology generators, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Bijections from $A$ to $B$, where $A$ is the set of subsets of $[n]$ that have even size and $B$ is the set of subsets of $[n]$ that have odd size Let $A$ be the set of subsets of $[n]$ that have even size, and let $B$ be the set of subsets of $[n]$ that have odd size. Establish a bijection from $A$ to $B$. The following bijection is suggested for $n=3$: $$\matrix{A: & \{1,2\} & \{1,3\} & \{2,3\} & \varnothing\\ B: & \{1,2,3\} & \{1\} & \{2\} & \{3\}}$$ I know that first we have to establish a function that is both surjective and injective so that it is bijective. I don't know where to take a step from here in the right direction. So I need a bit of guidance. Something suggested is let f be the piecewise function: $$f(x) = x \setminus \{n\}\text{ if }n \in x\text{ and }f(x) = x\cup\{n\}\text{ if }n \notin x$$
For $n$ an odd positive integer, there is a natural procedure. The mapping that takes any subset $E$ of $[n]$ with an even number of elements to its complement $[n]\setminus E$ is a bijection from $A$ to $B$. Dealing with even positive $n$ is messier. Here is one way. The subsets of $[n]$ can be divided into two types: (i) subsets that do not contain $n$ and (ii) subsets that contain $n$. (i) If $E$ is a subset of $[n]$ with an even number of elements, and does not contain $n$, map $E$ to $[n-1]\setminus E$. Call this map $\phi$. (ii) If $E$ is a subset of $n$ with an even number of elements, and contains $n$, map $E$ to $\{n\} \cup \phi^{-1}(E\setminus\{n\})$.
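For comparison, the map suggested in the question's hint (toggle membership of $n$) is also easy to test exhaustively; this little script of mine checks it for $n=4$:

from itertools import combinations

def toggle(s, n):
    # remove n if present, else add it; this flips the parity of |s|
    return s - {n} if n in s else s | {n}

n = 4
all_subsets = [set(c) for k in range(n + 1) for c in combinations(range(1, n + 1), k)]
A = [s for s in all_subsets if len(s) % 2 == 0]
B = [s for s in all_subsets if len(s) % 2 == 1]
images = [toggle(s, n) for s in A]
print(len(A) == len(B) and all(s in B for s in images))  # True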
{ "language": "en", "url": "https://math.stackexchange.com/questions/70243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Question regarding upper bound of fixed-point function The problem is to estimate the value of $\sqrt[3]{25}$ using fixed-point iteration. Since $\sqrt[3]{25} = 2.924017738$, I start with $p_0 = 2.5$. A sloppy C++ program yields an approximation to within $10^{-4}$ in $14$ iterations.

#include <cmath>
#include <iostream>
using namespace std;

// iteration function g(x) = 5 / sqrt(x)
double fx( double x ) {
    return 5.0 / sqrt( x );
}

void fixed_point_algorithm( double p0, double accuracy ) {
    double p1;
    int n = 0;
    do {
        n++;
        p1 = fx( p0 );  // next iterate
        cout << n << ": " << p1 << endl;
        if( abs( p1 - p0 ) <= accuracy ) {  // stop once successive iterates agree
            break;
        }
        p0 = p1;
    } while( true );
    cout << "n = " << n << ", p_n = " << p1 << endl;
}

int main() {
    fixed_point_algorithm( 2.5, 0.0001 );
}

Then I tried to solve it mathematically using these two fixed-point theorems: Fixed-point Theorem Let $g \in C[a,b]$ be such that $g(x) \in [a,b]$, for all $x$ in $[a,b]$. Suppose, in addition, that $g'$ exists on $(a,b)$ and that a constant $0 < k < 1$ exists with $$|g'(x)| \leq k, \text{ for all } x \in (a, b)$$ Then, for any number $p_0$ in $[a,b]$, the sequence defined by $$p_n = g(p_{n-1}), n \geq 1$$ converges to the unique fixed point in $[a,b]$. Corollary If $g$ satisfies the hypotheses of Theorem 2.4, then bounds for the error involved in using $p_n$ to approximate $p$ are given by $$|p_n - p| \leq k^n \max\{p_0 - a, b - p_0\}$$ and $$|p_n - p| \leq \dfrac{k^n}{1-k}|p_1 - p_0|, \text{ for all } n \geq 1$$ I picked the interval $[2.5, 3.0]$, $$g(x) = \dfrac{5}{\sqrt{x}}$$ $$g'(x) = \dfrac{-5}{2 \cdot x^{3/2}}$$ Plugging in several values in $(2.5, 3.0)$ convinced me that $x = 2.5$ yields the largest value of $k$. $$\implies \lim_{x\to 2.5} \bigg|\dfrac{-5}{2\cdot x^{3/2}} \bigg| = \dfrac{\sqrt{10}}{5}$$ So I chose $k = \dfrac{\sqrt{10}}{5}$, where $p_1 = g(p_0) = \sqrt{10}$. Then I solved for $n$ in the inequality: $$ 10^{-4} \leq |p_n - p| \leq \dfrac{k^n}{1-k}|p_1 - p_0|$$ $$\dfrac{\bigg(\dfrac{\sqrt{10}}{5}\bigg)^n}{1-\dfrac{\sqrt{10}}{5}}|\sqrt{10} - 2.5| \geq 10^{-4}$$ And I got $n \approx 18$, which is odd :(. From my understanding, fixed-point iteration converges quite fast, so a gap of $4$ iterations seems significant. Then I tried to vary the interval to see if the result could come closer to $14$, but I couldn't find any interval that worked. So I guess either my upper bound is wrong or I didn't fully understand the theorem. Can anyone give me a hint? Thank you,
If I understand this right, $p_n$ converges to a fixed point of $g$. Taking $g(x)=\sqrt5/x$ as you have done, the fixed point of $g$ is not the $\root3\of{25}$ that you are after, but rather it is $\root4\of5$. So it's no wonder everything is going haywire.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proving an inequality between functions: are the bounds sufficient if both strictly increase and are concave? I would like to show that $$f(n) > g(n)$$ for all $n$ within a certain range. If I can show that both $f(n)$ and $g(n)$ are strictly increasing with $n$, and that both are strictly concave, and that $f(n) > g(n)$ at both the lower and upper bounds of $n$, is that sufficient?
No. Consider, for example, $f(x)=1+12x-x^2$ and $g(x)=20x-10x^2$ between $0$ and $1$. Both are increasing and concave there, and $f>g$ at both endpoints, yet $f(x)-g(x)=9x^2-8x+1$ is negative in the middle, e.g. $f(4/9)-g(4/9)=-7/9$. Plotted by Wolfram Alpha.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding inverse cosh I am trying to find $\cosh^{-1}1$. I end up with something that looks like $e^y+e^{-y}=2x$. I followed the formula correctly, so I believe that is correct up to this point. I then plug in $1$ for $x$ and I get $e^y+e^{-y}=2$, which, according to my mathematical knowledge, is still correct. From here I have absolutely no idea what to do, as anything I do gives me an incredibly complicated problem or the wrong answer.
Start with $$\cosh(y)=x.$$ Since $$\cosh^2(y)-\sinh^2(y)=1,$$ i.e. $$x^2-\sinh^2(y)=1,$$ we get (taking $y\ge 0$, hence the positive square root) $$\sinh(y)=\sqrt{x^2-1}.$$ Now add $\cosh(y)=x$ to both sides to make $$\sinh(y)+\cosh(y) = \sqrt{x^2-1} + x.$$ The left-hand side simplifies to $\exp(y)$, so the answer is $$y=\ln\left(\sqrt{x^2-1}+x\right).$$
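Applying this to the original question: with $x=1$ the formula gives $\cosh^{-1} 1 = \ln\left(\sqrt{1-1}+1\right) = \ln 1 = 0$, consistent with $\cosh 0 = 1$.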
{ "language": "en", "url": "https://math.stackexchange.com/questions/70500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Span of permutation matrices The set $P$ of $n \times n$ permutation matrices spans a subspace of dimension $(n-1)^2+1$ within, say, the $n \times n$ complex matrices. Is there another description of this space? In particular, I am interested in a description of a subset of the permutation matrices which will form a basis. For $n=1$ and $2$, this is completely trivial -- the set of all permutation matrices is linearly independent. For $n=3$, the dimension of their span is $5$, and any five of the six permutation matrices are linearly independent, as can be seen from the following dependence relation: $$ \sum_{M \in P} \det (M) \ M = 0 $$ So even in the case $n=4$, is there a natural description of a $10$ matrix basis?
As user1551 points out, your space is the span of all "magic matrices" -- all $n\times n$ matrices for which every row and column sum is equal to the same constant (depending on the matrix). As an algebra this is isomorphic to $\mathbb{C} \oplus M_{n-1}(\mathbb{C})$. You can think of this as the image in $\operatorname{End}_{\mathbb{C}}(\mathbb{C}^n)$ of the natural representation of $S_n$ on $n$ points -- perhaps this is where your question comes from. The representation decomposes as the direct sum of the trivial rep and an $(n-1)$-dimensional irreducible. The permutation matrices coming from the permutations $1$, $(1,r)$, $(1,r,s)$ for $1\neq r \neq s \neq 1$ form a basis of this space. To see that they are linearly independent, consider the first rows then the first columns of the corresponding matrices.
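As a sanity check on the count: the list consists of $1$ identity, $n-1$ transpositions $(1,r)$, and $(n-1)(n-2)$ three-cycles $(1,r,s)$, for a total of $1+(n-1)+(n-1)(n-2) = 1+(n-1)^2$ permutations, matching the dimension $(n-1)^2+1$ of the span.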
{ "language": "en", "url": "https://math.stackexchange.com/questions/70569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
Connected components of subspaces vs. space If $Y$ is a subspace of $X$, and $C$ is a connected component of $Y$, then C need not be a connected component of $X$ (take for instance two disjoint open discs in $\mathbb{R}^2$). But I read that, under the same hypothesis, $C$ need not even be connected in $X$. Could you please provide me with an example, or point me towards one? Thank you. SOURCE http://www.filedropper.com/manifolds2 Page 129, paragraph following formula (A.7.16).
Isn't it just false? The image of a connected subspace under the inclusion $Y\longrightarrow X$ is connected...
{ "language": "en", "url": "https://math.stackexchange.com/questions/70628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Countable subadditivity of the Lebesgue measure Let $\lbrace F_n \rbrace$ be a sequence of sets in a $\sigma$-algebra $\mathcal{A}$. I want to show that $$m\left(\bigcup F_n\right)\leq \sum m\left(F_n\right)$$ where $m$ is a countably additive measure defined for all sets in a $\sigma$-algebra $\mathcal{A}$. I think I have to use the monotonicity property somewhere in the proof, but I don't know how to start. I'd appreciate a little help. Thanks. Added: From Hans' answer I make the following additions. From the construction given in Hans' answer, it is clear that $\bigcup F_n = \bigcup G_n$ and $G_n \cap G_m = \emptyset$ for all $m\neq n$. So $$m\left(\bigcup F_n\right)=m\left(\bigcup G_n\right) = \sum m\left(G_n\right).$$ Also from the construction, we have $G_n \subset F_n$ for all $n$ and so by monotonicity, we have $m\left(G_n\right) \leq m\left(F_n\right)$. Finally we would have $$\sum m(G_n) \leq \sum m(F_n),$$ and the result follows.
Given a union of sets $\bigcup_{n = 1}^\infty F_n$, you can create a disjoint union of sets as follows. Set $G_1 = F_1$, $G_2 = F_2 \setminus F_1$, $G_3 = F_3 \setminus (F_1 \cup F_2)$, and so on. Can you see what $G_n$ needs to be? Using $m(\bigcup_{n = 1}^\infty G_n)$ and monotonicity, you can prove $m(\bigcup_{n = 1}^\infty F_n) \leq \sum_{n = 1}^\infty m(F_n)$.
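Explicitly, the general term is $G_n = F_n \setminus \bigcup_{k=1}^{n-1} F_k$: the sets $G_n$ are pairwise disjoint, satisfy $G_n \subset F_n$, and have $\bigcup_n G_n = \bigcup_n F_n$, which is exactly what the addendum to the question uses.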
{ "language": "en", "url": "https://math.stackexchange.com/questions/70676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
A ring element with a left inverse but no right inverse? Can I have a hint on how to construct a ring $A$ such that there are $a, b \in A$ for which $ab = 1$ but $ba \neq 1$, please? It seems that square matrices over a field are out of question because of the determinants, and that implies that no faithful finite-dimensional representation must exist, and my imagination seems to have given up on me :)
Take the ring of linear operators on the space of polynomials. Then consider (formal) integration and differentiation. Integration is injective but not surjective. Differentiation is surjective but not injective.
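To make the example concrete, here is a minimal Python sketch (my illustration, not part of the original answer) modeling polynomials as coefficient lists; it checks that differentiating after integrating is the identity, while integrating after differentiating is not:

# Polynomials as coefficient lists: [a0, a1, a2, ...] <-> a0 + a1*x + a2*x^2 + ...
def differentiate(p):
    # formal derivative: sum of k*a_k x^(k-1)
    return [k * c for k, c in enumerate(p)][1:] or [0]

def integrate(p):
    # formal antiderivative with constant term 0
    return [0] + [c / (k + 1) for k, c in enumerate(p)]

p = [1, 2, 3]                          # 1 + 2x + 3x^2
print(differentiate(integrate(p)))     # [1.0, 2.0, 3.0]: D after I is the identity
print(integrate(differentiate([1])))   # [0, 0.0], the zero polynomial, not [1]

So with $a$ = differentiation and $b$ = integration we get $ab = 1$ but $ba \neq 1$, exactly as described.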
{ "language": "en", "url": "https://math.stackexchange.com/questions/70777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 2, "answer_id": 0 }
normal subgroups of an infinite product of groups I have a question regarding the quotient of an infinite product of groups. Suppose $(G_{i})_{i \in I}$ are abelian groups, where $|I|$ is infinite and each $G_i$ has a normal subgroup $N_i$. Is it true in general that $$\prod_{i \in I} G_i/ \prod_{i \in I} N_i \cong \prod_{i\in I} G_i/N_i$$ More specifically, is it true that $$\prod_{p_i \text{ prime}} \mathbb{Z}_{p_i} / \prod_{p_i \text{ prime}} p_i^{e_i}\mathbb{Z}_{p_{i}} \cong \prod_{p_i \text{ prime},\,e_{i} < \infty} \mathbb{Z}/p_{i}^{e_i}\mathbb{Z} \times \prod_{p_i \text{ prime},\,e_{i} = \infty}\mathbb{Z}_{p_i}$$ where $\mathbb{Z}_{p_i}$ stands for the $p_i$-adic integers, $p_i^\infty \mathbb{Z}_{p_i}=0$, and all $e_i$ belong to $\mathbb{N} \cup \{\infty\}$. Any help would be appreciated.
Here is a slightly more general statement. Let $(X_i)$ be a family of sets, and $X$ its product. For each $i$ let $E_i\subset X_i^2$ be an equivalence relation. Write $x\ \sim_i\ y$ for $(x,y)\in E_i$. Let $E$ be the product of the $E_i$. There is a canonical bijection between $X^2$ and the product of the $X_i^2$. Thus $E$ can be viewed as a subset of $X^2$. Write $x\sim y$ for $(x,y)\in E$. Let $x,y$ be in $X$. The following is clear: Lemma 1. We have $x\sim y\ \Leftrightarrow\ x_i\ \sim_i\ y_i\ \forall\ i$. In particular $\sim$ is an equivalence relation on $X$. Define $f_i:X\to X_i/E_i$ by mapping $x$ to the canonical image of $x_i$. Let $$ f:X\to\prod\ (X_i/E_i) $$ be the map attached to the family $(f_i)$. CLAIM. The map $f$ induces a bijection $g$ from $X/E$ to $\prod(X_i/E_i)$. Let $x,y$ be in $X$. Lemma 2. We have $x\sim y\ \Leftrightarrow\ f(x)=f(y)$. Proof: This follows from Lemma 1. Conclusion: The map $f$ induces an injection $g$ from $X/E$ to $\prod(X_i/E_i)$. It only remains to prove that $g$ is surjective. To do this, let $a$ be in $\prod(X_i/E_i)$. For each $i$ choose a representative $x_i\in X_i$ of $a_i$, put $x:=(x_i)$, and check the equality $f(x)=a$.
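To land back at the original question: when each $X_i = G_i$ is a group and $E_i$ is the coset relation of $N_i$, the map $f$ above is a componentwise group homomorphism, so the bijection $g$ is in fact an isomorphism $\prod_{i} G_i / \prod_{i} N_i \cong \prod_{i} (G_i/N_i)$. The displayed $p$-adic decomposition then follows by taking $N_i = p_i^{e_i}\mathbb{Z}_{p_i}$, since $\mathbb{Z}_{p_i}/p_i^{e_i}\mathbb{Z}_{p_i} \cong \mathbb{Z}/p_i^{e_i}\mathbb{Z}$ for $e_i < \infty$.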
{ "language": "en", "url": "https://math.stackexchange.com/questions/70820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Cardinality of Borel sigma algebra It seems it's well known that if a sigma algebra is generated by countably many sets, then the cardinality of it is either finite or $c$ (the cardinality of continuum). But it seems hard to prove it, and actually hard to find a proof of it. Can anyone help me out?
It is easy to prove that the $\sigma$-algebra is either finite or has cardinality at least $2^{\aleph_0}$. One way to prove that it has cardinality at most $2^{\aleph_0}$, without explicitly using transfinite recursion, is the following. It is easy to see that it is enough to prove this upper bound for a "generic" $\sigma$-algebra, e.g., for the Borel $\sigma$-algebra of $\{0,1\}^{\omega}$, or for the Borel $\sigma$-algebra of the Baire space $\mathcal{N} = \omega^{\omega}$. Note that $\mathcal{N}$ is a Polish space, so we can talk about analytic subsets of $\mathcal{N}$. Every Borel subset is an analytic subset of $\mathcal{N}$ (in fact, $A \subseteq \mathcal{N}$ is Borel if and only if $A$ and $\mathcal{N} \setminus A$ are both analytic). So it is enough to prove that $\mathcal{N}$ has $2^{\aleph_0}$ analytic subsets. Now use the theorem stating that every analytic subset of $\mathcal{N}$ is the projection of a closed subset of $\mathcal{N} \times \mathcal{N}$. Since $\mathcal{N} \times \mathcal{N}$ has a countable basis of open subsets, it has $2^{\aleph_0}$ open subsets, so it has $2^{\aleph_0}$ closed subsets. So $\mathcal{N}$ has $2^{\aleph_0}$ analytic subsets. The proof using transfinite recursion might be simpler, but I think the analytic subset description gives a slightly different, kind of direct ("less transfinite") view on the Borel sets, that could be useful to know.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73", "answer_count": 2, "answer_id": 1 }
What Implications Can be Drawn from a Binomial Distribution? Hello everyone, I understand how to calculate a binomial distribution or how to identify when it has occurred in a data set. My question is: what does it imply when this type of distribution occurs? Let's say, for example, you are a student in a physics class and the professor states that the distribution of grades on the first exam throughout all sections was a binomial distribution, with typical class averages of around 40 to 50 percent. How would you interpret that statement?
Lets say for example you are a student in a physics class and the professor states that the distribution of grades on the first exam throughout all sections was a binomial distribution. With typical class averages of around 40 to 50 percent. How would you interpret that statement? Most likely the professor was talking loosely and his statement means that the histogram of percentage scores resembled the bell-shaped curve of a normal density function with average or mean value of $40\%$ to $50\%$. Let us assume for convenience that the professor said the average was exactly $50\%$. The standard deviation of scores would have to be at most $16\%$ or so to ensure that only a truly exceptional over-achiever would have scored more than $100\%$. As an aside, in the US, raw scores on the GRE and SAT are processed through a (possibly nonlinear) transformation so that the histogram of reported scores is roughly bell-shaped with mean $500$ and standard deviation $100$. The highest reported score is $800$, and the smallest $200$. As the saying goes, you get $200$ for filling in your name on the answer sheet. At the high end, on the Quantitative GRE, a score of $800$ ranks only at the $97$-th percentile. What if the professor had said that there were no scores that were a fraction of a percentage point, and that the histogram of percentage scores matched a binomial distribution with mean $50$ exactly? Well, the possible percentage scores are $0\%$, $1\%, \ldots, 100\%$ and so the binomial distribution in question has parameters $(100, \frac{1}{2})$ with $P\{X = k\} = \binom{100}{k}/2^{100}$. So, if $N$ denotes the number of students in the course, then for each $k, 0 \leq k \leq 100$, $N\cdot P\{X = k\}$ students had a percentage score of $k\%$. Since $N\cdot P\{X = k\}$ must be an integer, and $P\{X = 0\} = 1/2^{100}$, we conclude that $N$ is an integer multiple of $2^{100}$. I am aware that physics classes are often large these days, but having $2^{100}$ in one class, even if it is subdivided into sections, seems beyond the bounds of plausibility! So I would say that your professor had his tongue firmly embedded in his cheek when he made the statement.
{ "language": "en", "url": "https://math.stackexchange.com/questions/70937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Lesser-known integration tricks I am currently studying for the GRE math subject test, which heavily tests calculus. I've reviewed most of the basic calculus techniques (integration by parts, trig substitutions, etc.) I am now looking for a list or reference for some lesser-known tricks or clever substitutions that are useful in integration. For example, I learned of this trick $$\int_a^b f(x) \, dx = \int_a^b f(a + b -x) \, dx$$ in the question Showing that $\int\limits_{-a}^a \frac{f(x)}{1+e^{x}} \mathrm dx = \int\limits_0^a f(x) \mathrm dx$, when $f$ is even I am especially interested in tricks that can be used without an excessive amount of computation, as I believe (or hope?) that these will be what is useful for the GRE.
When integrating rational functions by partial fractions decomposition, the trickiest type of antiderivative that one might need to compute is $$I_n = \int \frac{dx}{(1+x^2)^n}.$$ (Integrals involving more general quadratic factors can be reduced to such integrals, plus integrals of the much easier type $\int \frac{x \, dx}{(1+x^2)^n}$, with the help of substitutions of the form $x \mapsto x+a$ and $x \mapsto ax$.) For $n=1$, we know that $I_1 = \int \frac{dx}{1+x^2} = \arctan x + C$, and the usual suggestion for finding $I_n$ for $n \ge 2$ is to work one's way down to $I_1$ using the reduction formula $$ I_n = \frac{1}{2(n-1)} \left( \frac{x}{(1+x^2)^{n-1}} + (2n-3) \, I_{n-1} \right) . $$ However, this formula is not easy to remember, and the computations become quite tedious, so the lesser-known trick that I will describe here is (in my opinion) a much simpler way. From now on, I will use the abbreviation $$T=1+x^2.$$ First we compute $$ \frac{d}{dx} \left( x \cdot \frac{1}{T^n} \right) = 1 \cdot \frac{1}{T^n} + x \cdot \frac{-n}{T^{n+1}} \cdot 2x = \frac{1}{T^n} - \frac{2n x^2}{T^{n+1}} = \frac{1}{T^n} - \frac{2n (T-1)}{T^{n+1}} \\ = \frac{1}{T^n} - \frac{2n T}{T^{n+1}} + \frac{2n}{T^{n+1}} = \frac{2n}{T^{n+1}} - \frac{2n-1}{T^n} . $$ Let us record this result for future use, in the form of an integral: $$ \int \left( \frac{2n}{T^{n+1}} - \frac{2n-1}{T^n} \right) dx = \frac{x}{T^n} + C . $$ That is, we have $$ \begin{align} \int \left( \frac{2}{T^2} - \frac{1}{T^1} \right) dx &= \frac{x}{T} + C ,\\ \int \left( \frac{4}{T^3} - \frac{3}{T^2} \right) dx &= \frac{x}{T^2} + C ,\\ \int \left( \frac{6}{T^4} - \frac{5}{T^3} \right) dx &= \frac{x}{T^3} + C ,\\ &\vdots \end{align} $$ With the help of this, we can easily compute things like $$ \begin{align} \int \left( \frac{1}{T^3} + \frac{5}{T^2} - \frac{2}{T} \right) dx &= \int \left( \frac14 \left( \frac{4}{T^3} - \frac{3}{T^2} \right) + \frac{\frac34 + 5}{T^2} - \frac{2}{T} \right) dx \\ &= \int \left( \frac14 \left( \frac{4}{T^3} - \frac{3}{T^2} \right) + \frac{23}{4} \cdot \frac12 \left( \frac{2}{T^2} - \frac{1}{T^1} \right) + \frac{\frac{23}{8}-2}{T} \right) dx \\ &= \frac14 \frac{x}{T^2} + \frac{23}{8} \frac{x}{T} + \frac{7}{8} \arctan x + C . \end{align} $$ Of course, the relation that we are using, $2n \, I_{n+1} - (2n-1) \, I_n = \frac{x}{T^n}$, really is the reduction formula in disguise. However, the trick is: (a) to derive the formula just by differentiation (instead of starting with an integral where the exponent is one step lower than the one that we're interested in, inserting a factor of $1$, integrating by parts, and so on), and (b) to leave the formula in its "natural" form as it appears when differentiating (instead of solving for $I_{n+1}$ in terms of $I_n$), which results in a structure which is easier to remember and a more pleasant way of organizing the computations.
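The worked example is easy to sanity-check by machine. Here is a short sketch using Python's SymPy (assuming it is available; not part of the original answer), which differentiates the claimed antiderivative and verifies that the difference from the integrand simplifies to zero:

import sympy as sp

x = sp.symbols('x')
T = 1 + x**2
F = x/(4*T**2) + 23*x/(8*T) + sp.Rational(7, 8)*sp.atan(x)   # claimed antiderivative
integrand = 1/T**3 + 5/T**2 - 2/T
print(sp.simplify(sp.diff(F, x) - integrand))                # expect 0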
{ "language": "en", "url": "https://math.stackexchange.com/questions/70974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "169", "answer_count": 8, "answer_id": 6 }
Consequences of the Langlands program I have been reading the book Fearless Symmetry by Ash and Gross.It talks about Langlands program, which it says is the conjecture that there is a correspondence between any Galois representation coming from the etale cohomology of a Z-variety and an appropriate generalization of a modular form, called an “automorphic representation". Even though it appears to be interesting, I would like to know that are there any important immediate consequences of the Langlands program in number theory or any other field. Why exactly are the mathematicians so excited about this?
There are many applications of the Langlands program to number theory; this is why so many top-level researchers in number theory are focusing their attention on it. One such application (proved six or so years ago by Clozel, Harris, and Taylor) is the Sato--Tate conjecture, which describes rather precisely the deviation of the number of mod $p$ points on a fixed elliptic curve $E$, as the prime $p$ varies, from the "expected value" of $1 + p$. Further progress in the Langlands program would give rise to analogous distribution results for other Diophantine equations. (The key input is the analytic properties of the $L$-functions mentioned in Jeremy's answer.) At a slightly more abstract level, one can think of the Langlands program as providing a classification of Diophantine equations in terms of automorphic forms. At a more concrete level, it is anticipated that such a classification will be a crucial input to the problem of developing general results on solving Diophantine equations. (E.g. all results in the direction of the Birch--Swinnerton-Dyer conjecture take as input the modularity of the elliptic curve under investigation.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/71113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 1 }
Proof of dividing fractions $\frac{a/b}{c/d}=\frac{ad}{bc}$ For dividing two fractional expressions, how does the division sign turn into multiplication? Is there a step by step proof which proves $$\frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \cdot \frac{d}{c}=\frac{ad}{bc}?$$
Suppose $\frac{a}{b}$ and $\frac{c}{d}$ are fractions. That is, $a$, $b$, $c$, $d$ are whole numbers and $b\ne0$, $d\ne0$. In addition we require $c\ne0$. Let $\frac{a}{b}\div\frac{c}{d}=A$. Then by the definition of division of fractions, $A$ is the unique fraction such that $A\times\frac{c}{d}=\frac{a}{b}$. However, $(\frac{a}{b}\times\frac{d}{c})\times\frac{c}{d}=\frac{a}{b}\times(\frac{d}{c}\times\frac{c}{d})=\frac{a}{b}\times(\frac{dc}{cd})=\frac{a}{b}\times(\frac{dc}{dc})=\frac{a}{b}$. Then by uniqueness (of $A$), $A=\frac{a}{b}\times\frac{d}{c}$.
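A concrete instance, to see the uniqueness argument in action: $\frac34 \div \frac25 = \frac34 \times \frac52 = \frac{15}{8}$, and indeed $\frac{15}{8}\times\frac25 = \frac{30}{40} = \frac34$, confirming that $\frac{15}{8}$ is the unique fraction $A$ with $A\times\frac25=\frac34$.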
{ "language": "en", "url": "https://math.stackexchange.com/questions/71157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 5 }
Proof that series diverges Prove that $\displaystyle\sum_{n=1}^\infty\frac{1}{n(1+1/2+\cdots+1/n)}$ diverges. I think the only way to prove this is to find another series to compare using the comparison or limit tests. So far, I have been unable to find such a series.
This answer is similar in spirit to Didier Piau's answer. The following theorem is a very useful tool: Suppose that $a_k > 0$ form a decreasing sequence of real numbers. Then $$\sum_{k=1}^\infty a_k$$ converges if and only if $$\sum_{k=1}^\infty 2^k a_{2^k}$$ converges. Applying this to the problem in hand we are reduced to investigating the convergence of $$\sum_{k=1}^\infty \frac{1}{1 + 1/2 + \dots + 1/2^k}$$ But one easily sees that $$1 + 1/2 + \dots + 1/2^k \le 1 + 2 \cdot 1/2 + 4 \cdot 1/4 + \dots + 2^{k-1}\cdot1/2^{k-1} + 1/2^k \le k + 1.$$ Because $$\sum_{k=1}^\infty \frac{1}{k+1}$$ diverges, we are done.
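(For reference, the tool quoted above is the Cauchy condensation test, and the bound $1+\frac12+\cdots+\frac1{2^k}\le k+1$ comes from grouping the terms into dyadic blocks, each of which contributes at most $1$.)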
{ "language": "en", "url": "https://math.stackexchange.com/questions/71215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
On the meaning of being algebraically closed The definition of algebraic number is that $\alpha$ is an algebraic number if there is a nonzero polynomial $p(x)$ in $\mathbb{Q}[x]$ such that $p(\alpha)=0$. By algebraic closure, every nonconstant polynomial with algebraic coefficients has algebraic roots; then, there will be also a nonconstant polynomial with rational coefficients that has those roots. I feel uncomfortable with the idea that the root of a polynomial with algebraic coefficients is again algebraic; why are we sure that for every polynomial in $\mathbb{\bar{Q}}[x]$ we could find a polynomial in $\mathbb{Q}[x]$ that has the same roots? I apologize if I'm asking something really trivial or my question comes from a big misunderstanding of basic concepts.
Let $p(x) = a_0+a_1x+\cdots +a_{n-1}x^{n-1} + x^n$ be a polynomial with coefficients in $\overline{\mathbb{Q}}$. For each $i$, $0\leq i\leq n-1$, let $a_i=b_{i1}, b_{i2},\ldots,b_{im_i}$ be the $m_i$ conjugates of $a_i$ (that is, the "other" roots of the monic irreducible polynomial with coefficients in $\mathbb{Q}$ that has $a_i$ as a root). Now let $F = \mathbb{Q}[b_{11},\ldots,b_{n-1,m_{n-1}}]$. This field is Galois over $\mathbb{Q}$. Let $G=\mathrm{Gal}(F/\mathbb{Q})$. Now consider $$q(x) = \prod_{\sigma\in G} \left( \sigma(a_0) + \sigma(a_1)x+\cdots + \sigma(a_{n-1})x^{n-1} + x^n\right).$$ This is a polynomial with coefficients in $F$, and any root of $p(x)$ is also a root of $q(x)$ (since one of the elements of $G$ is the identity, so one of the factors of $q(x)$ is $p(x)$). The key observation is that if you apply any element $\tau\in G$ to $q(x)$, you get back $q(x)$ again: $$\begin{align*} \tau(q(x)) &= \tau\left(\prod_{\sigma\in G} \left( \sigma(a_0) + \sigma(a_1)x+\cdots + \sigma(a_{n-1})x^{n-1} + x^n\right)\right)\\ &= \prod_{\sigma\in G} \left( \tau\sigma(a_0) +\tau\sigma(a_1)x+\cdots + \tau\sigma(a_{n-1})x^{n-1} + x^n\right)\\ &= \prod_{\sigma'\in G} \left( \sigma'(a_0) + \sigma'(a_1)x+\cdots + \sigma'(a_{n-1})x^{n-1} + x^n\right)\\ &= q(x). \end{align*}$$ That means that the coefficients of $q(x)$ must lie in the fixed field of $G$. But since $F$ is Galois over $\mathbb{Q}$, the fixed field is $\mathbb{Q}$. That is: $q(x)$ is actually a polynomial in $\mathbb{Q}[x]$. Thus, every root of $p(x)$ is the root of a polynomial with coefficients in $\mathbb{Q}$. For an example of how this works, suppose you have $p(x) = x^3 - (2\sqrt{3}+\sqrt{5})x + 3$. The conjugate of $\sqrt{3}$ is $-\sqrt{3}$; the conjugate of $\sqrt{5}$ is $-\sqrt{5}$. The field $\mathbb{Q}[\sqrt{3},\sqrt{5}]$ already contains all the conjugates, and the Galois group over $\mathbb{Q}$ has four elements: the one that maps $\sqrt{3}$ to itself and $\sqrt{5}$ to $-\sqrt{5}$; the one that maps $\sqrt{3}$ to $-\sqrt{3}$ and $\sqrt{5}$ to itself; the one that maps $\sqrt{3}$ to $-\sqrt{3}$ and $\sqrt{5}$ to $-\sqrt{5}$; and the identity. So $q(x)$ would be the product of $x^3 - (2\sqrt{3}+\sqrt{5})x + 3$, $x^3 - (-2\sqrt{3}+\sqrt{5})x+3$, $x^3 - (2\sqrt{3}-\sqrt{5})x + 3$, and $x^3 - (-2\sqrt{3}-\sqrt{5})x + 3$. If you multiply them out, you get $$\begin{align*} \Bigl( &x^3 - (2\sqrt{3}+\sqrt{5})x + 3\Bigr)\Bigl( x^3 + (2\sqrt{3}+\sqrt{5})x+3\Bigr)\\ &\times \Bigl(x^3 - (2\sqrt{3}-\sqrt{5})x + 3\Bigr)\Bigl( x^3 + (2\sqrt{3}-\sqrt{5})x + 3\Bigr)\\ &= \Bigl( (x^3+3)^2 - (2\sqrt{3}+\sqrt{5})^2x^2\Bigr)\Bigl((x^3+3)^2 - (2\sqrt{3}-\sqrt{5})^2x^2\Bigr)\\ &=\Bigl( (x^3+3)^2 - 17x^2 - 4\sqrt{15}x^2\Bigr)\Bigl( (x^3+3)^2 - 17x^2 + 4\sqrt{15}x^2\Bigr)\\ &= \Bigl( (x^3+3)^2 - 17x^2\Bigr)^2 - 240x^4, \end{align*}$$ which has coefficients in $\mathbb{Q}$.
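Since the hand expansion is easy to get wrong (the cross term is $2\cdot 2\sqrt{3}\cdot\sqrt{5} = 4\sqrt{15}$), here is a quick symbolic cross-check in Python/SymPy (assuming SymPy is available; my addition, not part of the original answer):

import sympy as sp

x = sp.symbols('x')
r3, r5 = sp.sqrt(3), sp.sqrt(5)
q = sp.Integer(1)
for s in (1, -1):
    for t in (1, -1):
        q *= x**3 + (2*s*r3 + t*r5)*x + 3      # the four conjugate factors
q = sp.expand(q)
print(q)                                       # all coefficients come out rational
print(sp.expand(((x**3 + 3)**2 - 17*x**2)**2 - 240*x**4 - q))   # expect 0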
{ "language": "en", "url": "https://math.stackexchange.com/questions/71267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 1 }
Questions about cosets: "If $aH\neq Hb$, then $aH\cap Hb=\emptyset$"? Let $H$ be a subgroup of group $G$, and let $a$ and $b$ belong to $G$. Then, it is known that $$ aH=bH\qquad\text{or}\qquad aH\cap bH=\emptyset $$ In other words, $aH\neq bH$ implies $aH\cap bH=\emptyset$. What can we say about the statement "If $aH\neq Hb$, then $aH\cap Hb=\emptyset$" ? [EDITED:] What I think is that when $G$ is Abelian, this can be true since $aH=Ha$ for any $a\in G$. But what if $G$ is non-Abelian? How should I go on?
It is sometimes true and sometimes false. For example, if $H$ is a normal subgroup of $G$, then it is true. If $H$ is the subgroup generated by the permutation $(12)$ inside $G=S_3$, the symmetric group of degree $3$, then $(123)H\neq H(132)$, yet $(13)\in(123)H\cap H(132)$
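Spelling the example out (composing permutations right-to-left, so $(123)(12)$ means "apply $(12)$ first"): $(123)(12)=(13)$, so $(123)H=\{(123),(13)\}$; and $(12)(132)=(13)$, so $H(132)=\{(132),(13)\}$. The two cosets are distinct, yet both contain $(13)$, so they are neither equal nor disjoint.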
{ "language": "en", "url": "https://math.stackexchange.com/questions/71335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
The limit of locally integrable functions If ${f_i} \in L_{\rm loc}^1(\Omega )$ with $\Omega $ an open set in ${\mathbb R^n}$, and ${f_i}$ are uniformly bounded in ${L^1}$ for every compact set, is it necessarily true that there is a subsequence of ${f_i}$ converging weakly to a regular Borel measure?
Take $K_j$ a sequence of compact sets such that their interiors grow to $\Omega$. That is, $\mathrm{int}(K_j) \uparrow \Omega$. Let $f_i^0$ be a sub-sequence of $f_i$ such that $f_i^0|_{K_0}$ converges to a Borel measure $\mu_0$ over $K_0$. For each $j > 0$, take a sub-sequence $f_i^j$ of $f_i^{j-1}$ converging to a Borel measure $\mu_j$ over $K_j$. It is evident, from the concept of convergence, that for $k \leq j$, and any Borel set $A \subset K_k$, $\mu_j(A) = \mu_k(A)$. Now, define $\mu(A) = \lim \mu_j(A \cap K_j)$. And take the sequence $f_j^j$. For any continuous $g$ with compact support $K$, there exists $k$ such that $K \subset \mathrm{int}(K_k)$ (why?). Then, since for $j \geq k$, $f_j^j|_{K_k}$ is a sequence that converges to $\mu_k$, $$ \int_{K_k} g f_j^j \,\mathrm{d}x \rightarrow \int_{K_k} g \,\mathrm{d}\mu_k = \int g \,\mathrm{d}\mu. $$ That is, $f_j^j \rightarrow \mu$.
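A remark on the extraction step, which the argument takes for granted: the sub-sequences exist because the measures $f_i\,\mathrm{d}x$ restricted to $K_j$ form a norm-bounded sequence in $C(K_j)^*$ (this is the uniform $L^1$ bound on compact sets), and norm-bounded sequences in the dual of the separable Banach space $C(K_j)$ have weak-* convergent subsequences by the sequential Banach–Alaoglu theorem; the weak-* limit is a regular Borel measure by the Riesz representation theorem.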
{ "language": "en", "url": "https://math.stackexchange.com/questions/71405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Large $n$ asymptotic of $\int_0^\infty \left( 1 + x/n\right)^{n-1} \exp(-x) \, \mathrm{d} x$ While thinking of 71432, I encountered the following integral: $$ \mathcal{I}_n = \int_0^\infty \left( 1 + \frac{x}{n}\right)^{n-1} \mathrm{e}^{-x} \, \mathrm{d} x $$ Eric's answer to the linked question implies that $\mathcal{I}_n \sim \sqrt{\frac{\pi n}{2}} + O(1)$. How would one arrive at this asymptotic from the integral representation, without reducing the problem back to the sum ([added] i.e. expanding $(1+x/n)^{n-1}$ into series and integrating term-wise, reducing the problem back to the sum solve by Eric) ? Thanks for reading.
Interesting. I've got a representation $$ \mathcal{I}_n = n e^n \int_1^\infty t^{n-1} e^{- nt}\, dt $$ which can be obtained from yours by the change of variables $t=1+\frac xn$. After some fiddling one can get $$ 2\mathcal{I}_n= n e^n \int_0^\infty t^{n-1} e^{- nt}\, dt+o(\mathcal{I}_n)= n^{-n} e^n \Gamma(n+1)+\ldots=\sqrt{2\pi n}+\ldots. $$
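A remark on where the factor $2$ comes from: the integrand $t^{n-1}e^{-nt}=e^{(n-1)\log t-nt}$ is maximized at $t=\frac{n-1}{n}\to 1$, essentially at the left endpoint of $[1,\infty)$. Laplace's method approximates it near the peak by a Gaussian of width of order $1/\sqrt{n}$; since the peak lies only $1/n$ away from the endpoint, the integral over $[1,\infty)$ captures asymptotically half of the mass of the integral over $(0,\infty)$, which is exactly the step $2\mathcal{I}_n = n e^n \int_0^\infty t^{n-1} e^{-nt}\, dt + o(\mathcal{I}_n)$.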
{ "language": "en", "url": "https://math.stackexchange.com/questions/71447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
moment-generating function of the chi-square distribution How do we find the moment-generating function of the chi-square distribution? I really couldn't figure it out. The integral is $$E[e^{tX}]=\frac{1}{2^{r/2}\Gamma(r/2)}\int_0^\infty x^{(r-2)/2}e^{-x/2}e^{tx}dx.$$ I'm going over it for a while but can't seem to find the solution. By the way, the answer should be $$(1-2t)^{(-r/2)}.$$
In case you have not yet figured it out, the value of the integral follows by simple scaling of the integrand. First, assume $t < \frac{1}{2}$, then change variables $y = (1-2t)x$, i.e. $x = \frac{y}{1-2t}$: $$ \int_0^\infty x^{(r-2)/2} \mathrm{e}^{-x/2}\mathrm{e}^{t x}\mathrm{d}x = \int_0^\infty x^{r/2} \mathrm{e}^{-\frac{(1-2 t) x}{2}} \, \frac{\mathrm{d}x}{x} = \left(1-2 t\right)^{-r/2} \int_0^\infty y^{r/2} \mathrm{e}^{-\frac{y}{2}} \, \frac{\mathrm{d}y}{y} $$ The integral in $y$ equals the normalization constant $2^{r/2}\,\Gamma(r/2)$, and the value of the m.g.f., $(1-2t)^{-r/2}$, follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/71516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
A couple of problems involving divisibility and congruence I'm trying to solve a few problems and can't seem to figure them out. Since they are somewhat related, maybe solving one of them will give me the missing link to solve the others. $(1)\ \ $ Prove that there's no $a$ so that $ a^3 \equiv -3 \pmod{13}$ So I need to find $a$ so that $a^3 \equiv 10 \pmod{13}$. From this I get that $$a \equiv (13k+10)^{1/3} \pmod{13} $$ If I can prove that there's no $k$ so that $ (13k+10)^{1/3} $ is an integer then the problem is solved, but I can't seem to find a way of doing this. $(2)\ \ $ Prove that $a^7 \equiv a \pmod{7} $ If $a= 7q + r \rightarrow a^7 \equiv r^7 \pmod{7} $. I think the next step should be $ r^7 \equiv r \pmod{7} $, but I can't figure out why that would hold. $(3)\ \ $ Prove that $ 7 | a^2 + b^2 \longleftrightarrow 7| a \quad \textbf{and} \quad 7 | b$ Left to right is easy but I have no idea how to do right to left since I know nothing about what 7 divides except from the stated. Any help here would be much appreciated. There are a lot of problems I can't seem to solve because I don't know how to prove that a number is or isn't an integer, like in problem 1, and also quite a few that are similar to problem 3, but I can't seem to find a solution. Any help would be much appreciated.
HINT $\rm\ (2)\quad\ mod\ 7\!:\ \{\pm 1,\:\pm 2,\:\pm3\}^3\equiv\: \pm1\:,\:$ so squaring yields $\rm\ a^6\equiv 1\ \ if\ \ a\not\equiv 0\:.$ $\rm(3)\quad \ mod\ 7\!:\ \ if\ \ a^2\equiv -b^2\:,\:$ then, by above, cubing yields $\rm\: 1\equiv -1\ $ for $\rm\ a,b\not\equiv 0\:.$ $\rm(1)\quad \ mod\ 13\!:\ \{\pm1,\:\pm3,\:\pm4\}^3 \equiv \pm 1,\ \ \{\pm2,\pm5,\pm6\}^3\equiv \pm 5\:,\: $ and neither is $\rm\:\equiv -3\:.$ If you know Fermat's little theorem or a little group theory then you may employ such to provide more elegant general proofs - using the above special cases as hints.
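A quick machine check of the cube residues used in the hint (a sketch, using nothing but built-in Python):

print(sorted({pow(a, 3, 13) for a in range(1, 13)}))   # [1, 5, 8, 12], i.e. {+-1, +-5} mod 13
print(sorted({pow(a, 3, 7) for a in range(1, 7)}))     # [1, 6], i.e. {+-1} mod 7
print(-3 % 13)                                         # 10, which is not among the cubes mod 13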
{ "language": "en", "url": "https://math.stackexchange.com/questions/71583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$11$ divisibility We know that $1331$ is divisible by $11$. As per the $11$ divisibility test, we can say $1331$ is divisible by $11$. However, the test does not give us the quotient. If we subtract each unit digit in the following way, we can see the quotient when $1331$ is divided by $11$. $1331 \implies 133 -1= 132$ $132 \implies 13 - 2= 11$ $11 \implies 1-1 = 0$, which is divisible by $11$. Moreover, arranging all the subtracted unit digits from bottom to top gives $121$, which is the quotient. I want to know why this method works. Please write a proof.
HINT $\ $ Specialize $\rm\ x = 10\ $ below $$\rm(x+1)\ (a_n\ x^n +\:\cdots\:+a_1\ x + a_0)\ =\ a_n\ x^{n+1}+ (a_n+a_{n-1})\ x^{n}+\:\cdots\:+(a_1+a_0)\ x+ a_0$$
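Instantiating the hint at $x=10$ with the example from the question: $11\times 121 = (10+1)(1\cdot 10^2 + 2\cdot 10 + 1) = 1\cdot 10^3 + (1+2)\cdot 10^2 + (2+1)\cdot 10 + 1 = 1331$. Read backwards, each subtraction $1331\to 133-1$, $132\to 13-2$, $11\to 1-1$ peels one digit of the quotient off the units place, which is why collecting the subtracted digits from bottom to top rebuilds the quotient $121$.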
{ "language": "en", "url": "https://math.stackexchange.com/questions/71638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove there is no element of order 6 in a simple group of order 168 Let $G$ be a simple group of order 168. Let $n_p$ be the number of Sylow $p$ subgroups in $G$. I have already shown: $n_7 = 8$, $n_3 = 28$, $n_2 \in \left\{7, 21 \right\}$ Need to show: $n_2 = 21$ (showing there is no element of order 6 In $G$ will suffice) Attempt so far: If $P$ is a Sylow-2 subgroup of $G$, $|P| = 8$. Assume for contradiction that $n_2 = 7$. Then the normalizer $N(P)$ has order 24. Let $k_p$ be the number of Sylow-$p$ subgroups in $N(P)$. Then $k_3 \in \left\{1,4 \right\}$ and $k_2 \in \left\{1,3 \right\}$. Then I showed $k_3 = 4, k_2 = 1$. Counting argument shows there is an element of order 6 in $N(P)$, and thus in $G$ too. I don't know how to proceed from here. I am told that there cannot be an element of order 6 in $G$, but I don't know how to prove it, if someone could help me prove this I would very much appreciate it. Can someone help me?
If there is an element of order 6, then that centralizes the Sylow $3$-subgroup $P_3$ generated by its square. You have already shown that $|N(P_3)|=168/n_3=6$. Therefore the normalizer of any Sylow $3$-subgroup would have to be cyclic of order 6, and an element of order 6 belongs to exactly one such normalizer. Thus your group would have $56=2\cdot n_3$ elements of order $3$, $56=2\cdot n_3$ elements of order $6$, $48=6\cdot n_7$ elements of order $7$, and therefore only eight other elements. Those eight would have to form a Sylow $2$-subgroup, and that would be unique, so...
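(To finish the thought: that Sylow $2$-subgroup would then be unique, hence normal, contradicting the simplicity of $G$. So $G$ has no element of order $6$, and consequently $n_2 = 21$.)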
{ "language": "en", "url": "https://math.stackexchange.com/questions/71768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Correlation Coefficient between these two random variables Suppose that $X$ is real-valued normal random variable with mean $\mu$ and variance $\sigma^2$. What is the correlation coefficient between $X$ and $X^2$?
Hint: You are trying to find: $$\frac{E\left[\left(X^2-E\left[X^2\right]\right)\left(X-E\left[X\right]\right)\right]}{\sqrt{E\left[\left(X^2-E\left[X^2\right]\right)^2\right]E\left[\left(X-E\left[X\right]\right)^2\right]}}$$ For a normal distribution the raw moments are * *$E\left[X^1\right] = \mu$ *$E\left[X^2\right] = \mu^2+\sigma^2$ *$E\left[X^3\right] = \mu^3+3\mu\sigma^2$ *$E\left[X^4\right] = \mu^4+6\mu^2\sigma^2+3\sigma^4$ so multiply out, substitute and simplify.
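Carrying the hint through (a worked check, assuming $\sigma>0$): $E\left[\left(X^2-E[X^2]\right)\left(X-E[X]\right)\right]=E[X^3]-E[X]E[X^2]=\mu^3+3\mu\sigma^2-\mu(\mu^2+\sigma^2)=2\mu\sigma^2$, and $E\left[\left(X^2-E[X^2]\right)^2\right]=E[X^4]-\left(E[X^2]\right)^2=4\mu^2\sigma^2+2\sigma^4$, so $$\frac{2\mu\sigma^2}{\sqrt{\left(4\mu^2\sigma^2+2\sigma^4\right)\sigma^2}}=\frac{\sqrt{2}\,\mu}{\sqrt{2\mu^2+\sigma^2}}.$$ In particular, the correlation is $0$ when $\mu=0$, even though $X$ and $X^2$ are dependent.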
{ "language": "en", "url": "https://math.stackexchange.com/questions/71832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Why is the math for negative exponents so? This is what we are taught: $$5^{-2} = \left({\frac{1}{5}}\right)^{2}$$ but I don't understand why we take the inverse of the base when we have a negative exponent. Can anyone explain why?
For natural numbers $n$, $m$, we have $x^nx^m=x^{n+m}$. If you want this rule to be preserved when defining exponentiation by all integers, then you must have $x^0x^n = x^{n+0} = x^n$, so that you must define $x^0 = 1$. And then, arguing similarly, you have $x^nx^{-n} = x^{n-n}=x^0=1$, so that $x^{-n}=1/x^n$. Now, you can try to work out for yourself what $x^{1/n}$ should be, if we want to preserve the other exponentiation rule $(x^n)^m = x^{nm}$.
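For instance, the rule applied to the question's example: $5^{2}\cdot 5^{-2}=5^{2-2}=5^{0}=1$, which forces $5^{-2}=\frac{1}{5^{2}}=\left(\frac{1}{5}\right)^{2}$.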
{ "language": "en", "url": "https://math.stackexchange.com/questions/71891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
Representation of this function using a single formula without conditions Is it possible to represent the following function with a single formula, without using conditions? If not, how to prove it? $F(x) = \begin{cases}u(x), & x \le 0, \ v(x) & x > 0 \end{cases}$ So that it will become something like that: $F(x) = G(x)$ With no conditions? I need it for further operations like derivative etc.
Note: This answers the original question, asking whether a formula like $F(x)=G(u(x),v(x))$ might represent the function $F$ defined as $F(x) = u(x)$ if $x \leqslant 0$ and $F(x)=v(x)$ if $x > 0$. The OP finally reacted to remarks made by several readers that another answer did not address this, by modifying the question, which made the other answer fit (post hoc) the question. Just to make sure @Rasmus's message got through: for any set $E$ with at least two elements, there can exist no function $G:E\times E\to E$ such that for every pair of functions $u:\mathbb R\to E$ and $v:\mathbb R\to E$ and every $x$ in $\mathbb R$, one has $G(u(x),v(x))=u(x)$ if $x\leqslant0$ and $G(u(x),v(x))=v(x)$ if $x>0$.
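The one-line argument, for completeness: take $a\neq b$ in $E$ and the constant functions $u\equiv a$, $v\equiv b$. Then $G(u(x),v(x))=G(a,b)$ is the same value for every $x$, but it would have to equal $a$ for $x\leqslant 0$ and $b$ for $x>0$, which is impossible since $a\neq b$.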
{ "language": "en", "url": "https://math.stackexchange.com/questions/71951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A graph with less than 10 vertices contains a short circuit? Lately I read an old paper by Paul Erdős and L. Pósa ("On the maximal number of disjoint circuits of a graph") and stumbled across the following step in a proof (I changed it a bit to be easier to read): It is well known and easy to show that every (undirected) graph with $n < 10$ vertices, where all vertices have a degree $\geq 3$, contains a circuit of at most $4$ edges. I would be very happy if someone could enlighten me as to how this is simple and why he can conclude it; maybe there are some famous formulas for graphs that make this trivial? For the ones interested, he also mentions that a counterexample for $n=10$ is the Petersen graph.
Because every vertex has degree $\ge 2$, there must be at least one cycle. Consider, therefore, a cycle of minimal length; call this length $n$. Because each vertex in the cycle has degree $\ge 3$, it is connected to at least one vertex apart from its two neighbors in the cycle. That cannot be a non-neighbor member of the cycle either, because then the cycle wouldn't be minimal. If the graph has $V$ vertices and $n > V/2$, then due to the pigeonhole principle two vertices in the cycle must share a neighbor outside of the cycle. We can then construct a new cycle that contains these two edges and half of the original cycle, and this will be shorter than the original cycle unless each half of the original cycle has length at most $2$. Therefore, a graph with $V$ vertices each having degree $\ge 3$ must have a cycle of length at most $\max(4,\lfloor V/2\rfloor)$.
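Tying this back to the quoted claim: if $V\le 9$ then $\lfloor V/2\rfloor\le 4$, so the bound gives a cycle of length at most $4$, exactly as Erdős and Pósa assert; and the Petersen graph, with $10$ vertices, all degrees equal to $3$, and girth $5$, shows that the hypothesis $n<10$ cannot be dropped.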
{ "language": "en", "url": "https://math.stackexchange.com/questions/72076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Express this curve in the rectangular form Express the curve $r = \dfrac{9}{4+\sin \theta}$ in rectangular form. And what is the rectangular form? If I get the expression in rectangular form, how am I able to convert it back to polar coordinate?
what is the rectangular form? It is the $y=f(x)$ expression of the curve in the $x,y$ coordinate system (see picture). It can also be the implicit form $F(x,y)=F(x,f(x))\equiv 0$. Steps: 1) transformation of polar into rectangular coordinates (also known as Cartesian coordinates) (see picture) $$x=r\cos \theta ,$$ $$y=r\sin \theta ;$$ 2) from trigonometry and from 1) $r=\sqrt{x^2+y^2}$ $$\sin \theta =\frac{y}{r}=\frac{y}{\sqrt{ x^{2}+y^{2}}};$$ 3) substitution in the given equation $$r=\frac{9}{4+\sin \theta }=\dfrac{9}{4+\dfrac{y}{\sqrt{x^{2}+y^{2}}}}=9\dfrac{\sqrt{x^{2}+y^{2}}}{4\sqrt{x^{2}+y^{2}}+y};$$ 4) from 1) $r=\sqrt{x^2+y^2}$, equate $$9\frac{\sqrt{x^{2}+y^{2}}}{4\sqrt{ x^{2}+y^{2}}+y}=\sqrt{x^{2}+y^{2}};$$ 5) simplify to obtain the implicit equation $$4\sqrt{x^{2}+y^{2}}+y-9=0;$$ 6) Rewrite it as $$4\sqrt{x^{2}+y^{2}}=9-y,$$ square it (which may introduce extraneous solutions, also in this question), rearrange as $$16x^{2}+15y^{2}+18y-81=0,$$ and solve for $y$ $$y=-\frac{3}{5}\pm \frac{4}{15}\sqrt{81-15x^{2}}.$$ 7) Check for extraneous solutions. if I get the expression in rectangular form, how am I able to convert it back to polar coordinate? The transformation of rectangular to polar coordinates is $$r=\sqrt{x^{2}+y^{2}}, \qquad \theta =\arctan \frac{y}{x}\qquad \text{in the first quadrant},$$ or rather $\theta =\operatorname{atan2}(y,x)$ to take into account a location different from the first quadrant (see the Wikipedia article on atan2). As commented by J.M. the curve is an ellipse. Here is the plot I got using the equation $16x^{2}+15y^{2}+18y-81=0$.
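A quick symbolic sanity check of the implicit equation, in Python/SymPy (a sketch, assuming SymPy is available):

import sympy as sp

theta = sp.symbols('theta')
r = 9 / (4 + sp.sin(theta))
x, y = r*sp.cos(theta), r*sp.sin(theta)
print(sp.simplify(16*x**2 + 15*y**2 + 18*y - 81))   # expect 0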
{ "language": "en", "url": "https://math.stackexchange.com/questions/72123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
General Lebesgue Dominated Convergence Theorem In Royden (4th edition), it says one can prove the General Lebesgue Dominated Convergence Theorem by simply replacing $g-f_n$ and $g+f_n$ with $g_n-f_n$ and $g_n+f_n$. I proceeded to do this, but I feel like the proof is incorrect. So here is the statement: Let $\{f_n\}_{n=1}^\infty$ be a sequence of measurable functions on $E$ that converge pointwise a.e. on $E$ to $f$. Suppose there is a sequence $\{g_n\}$ of integrable functions on $E$ that converge pointwise a.e. on $E$ to $g$ such that $|f_n| \leq g_n$ for all $n \in \mathbb{N}$. If $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $g_n$ = $\int_E$ $g$, then $\lim\limits_{n \rightarrow \infty}$ $\int_E$ $f_n$ = $\int_E$ $f$. Proof: $$\int_E (g-f) = \liminf \int_E g_n-f_n.$$ By the linearity of the integral: $$\int_E g - \int_E f = \int_E g-f \leq \liminf \int_E g_n -f_n = \int_E g - \liminf \int_E f_n.$$ So, $$\limsup \int_E f_n \leq \int_E f.$$ Similarly for the other one. Am I missing a step or is it really a simple case of replacing.
You made a mistake: $$\liminf \int (g_n-f_n) = \int g-\limsup \int f_n$$ not $$\liminf \int (g_n-f_n) = \int g-\liminf \int f_n.$$ Here is the proof. By Fatou's lemma (applicable since $g_n-f_n\geq 0$) and the hypothesis $\int g_n \to \int g$, $$\int (g-f)\leq \liminf \int (g_n-f_n)=\int g -\limsup \int f_n$$ which means that $$\limsup \int f_n\leq \int f$$ Also, since $g_n+f_n\geq 0$, Fatou's lemma gives $$\int (g+f)\leq \liminf \int(g_n+f_n)=\int g + \liminf \int f_n$$ which means that $$\int f\leq \liminf \int f_n$$ i.e. $$\limsup \int f_n\leq \int f\leq \liminf\int f_n\leq \limsup \int f_n$$ So they are all equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 3, "answer_id": 1 }
Existence of least squares solution to $Ax=b$ Does a least squares solution to $Ax=b$ always exist?
If you think of the least squares problem geometrically, the answer is obviously "yes", by definition. Let me try to explain why. For the sake of simplicity, assume the number of rows of $A$ is greater than or equal to the number of its columns and it has full rank (i.e., its columns are linearly independent vectors). Without these hypotheses the answer is still "yes", but the explanation is a little bit more involved. If you have a system of linear equations $$ Ax = b \ , $$ you can look at it as the following equivalent problem: does the vector $b$ belong to the span of the columns of $A$? That is, $$ Ax = b \qquad \Longleftrightarrow \qquad \exists \ x_1, \dots , x_n \quad \text{such that }\quad x_1a_1 + \dots + x_na_n = b \ . $$ Here, $a_1, \dots , a_n$ are the columns of $A$ and $x = (x_1, \dots , x_n)^t$. If the answer is "yes", then the system has a solution. Otherwise, it doesn't. So, in this latter case, when $b\notin \mathrm{span }(a_1, \dots , a_n)$, that is, when your system has no solution, you "change" your original system for another one which by definition has a solution. Namely, you change vector $b$ for the nearest vector $b' \in \mathrm{span }(a_1, \dots , a_n)$. This nearest vector $b'$ is the orthogonal projection of $b$ onto $\mathrm{span }(a_1, \dots , a_n)$. So the least squares solution to your system is, by definition, the solution of $$ Ax = b' \ , \qquad\qquad\qquad (1) $$ and your original system, with this change and the aforementioned hypotheses, becomes $$ A^t A x = A^tb \ . \qquad\qquad\qquad (2) $$ EDIT. Formula (1) becomes formula (2) taking into account that the matrix of the orthogonal projection onto the span of columns of $A$ is $$ P_A = A(A^tA)^{-1}A^t \ . $$ (See Wikipedia.) So, $b' = P_Ab$. And, if you put this into formula (1), you get $$ Ax = A(A^tA)^{-1}A^tb \qquad \Longrightarrow \qquad A^tAx = A^tA(A^tA)^{-1}A^tb = A^tb \ . $$ That is, formula (2).
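To see the normal equations $(2)$ in action, here is a small NumPy sketch (hypothetical data, my own illustration); the matrix has full column rank, so $A^tA$ is invertible:

import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])        # 3x2, full column rank
b = np.array([0., 1., 1.])      # b is not in the column span of A

x_normal = np.linalg.solve(A.T @ A, A.T @ b)     # solve A^t A x = A^t b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]   # library least squares
print(x_normal, x_lstsq)                         # both give [1/6, 1/2]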
{ "language": "en", "url": "https://math.stackexchange.com/questions/72222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 0 }
How to show that $\frac{f}{g}$ is measurable Here is my attempt to show that $\frac{f}{g}~,g\neq 0$ is a measurable function, if $f$ and $g$ are measurable function. I'd be happy if someone could look if it's okay. Since $fg$ is measurable, it is enough to show that $\frac{1}{g}$ is measurable. $$ \left\{x\;\left|\;\frac{1}{g}\lt \alpha \right\}\right.=\left\{x\;\left|\;g \gt \frac{1}{\alpha} \right\}\right., \qquad g\gt 0,\quad \alpha \in \mathbb{R},$$ which is measurable, since the right hand side is measurable. Also, $$ \left\{x\;\left|\;\frac{1}{g}\lt \alpha \right\}\right.= \left\{x\;\left|\;g\lt\frac{1}{\alpha} \right\}\right.,\qquad g\lt 0,\quad \alpha \in \mathbb{R},$$ which is measurable since the right hand side is measurable. Therefore, $\frac{1}{g}$ is measurable, and so $\frac{f}{g}$ is measurable.
Using the fact that $fg$ is a measurable function and in view of the identity $$\frac{f}{g}=f\frac{1}{g}$$ it suffices to show that $1/g$ (with $g\not=0$) is measurable on $E$. Indeed, $$E\left(\frac{1}{g}<\alpha\right)=\left\{\begin{array}{lll} E\left(g<0\right) & \quad\text{if }\alpha=0\\ E\left(g>1/\alpha\right)\cup E\left(g<0\right) & \quad\text{if }\alpha>0\\ E\left(g>1/\alpha\right)\cap E\left(g<0\right) & \quad\text{if }\alpha<0 \end{array}\right.$$ Hence, $f/g$ ($g$ vanishing nowhere on $E$) is a measurable function on $E$. For a better understanding of what is going on, I suggest plotting the function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Cyclic group with exactly 3 subgroups: itself $\{e\}$ and one of order $7$. Isn't this impossible? Suppose a cyclic group has exactly three subgroups: $G$ itself, $\{e\}$, and a subgroup of order $7$. What is $|G|$? What can you say if $7$ is replaced with $p$ where $p$ is a prime? Well, I see a contradiction: the order should be $7$, but that is only possible if there are only two subgroups. Isn't it impossible to have three subgroups that fit this description? If G is cyclic of order n, then $\frac{n}{k} = 7$. But if there are only three subgroups, and one is of order 1, then 7 is the only factor of n, and $n = 7$. But then there are only two subgroups. Is this like a trick question? edit: nevermind. The order is $7^2$ right?
Hint: which numbers $n$ have no divisors other than $1$, $7$, and $n$ itself?
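To spell the hint out: the subgroups of a cyclic group of order $n$ correspond bijectively to the divisors of $n$, so having exactly three subgroups means $n$ has exactly three divisors, which forces $n=p^2$ for a prime $p$. Since one subgroup has order $7$, we get $|G|=7^2=49$; replacing $7$ by a general prime $p$ gives $|G|=p^2$.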
{ "language": "en", "url": "https://math.stackexchange.com/questions/72432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it ever $i$ time? I am asking this question as a response to reading two different questions: Is it ever Pi time? and Are complex number real? So I ask, is it ever $i$ time? Could we arbitrarily define time as following the imaginary line instead of the real one? (NOTE: I have NO experience with complex numbers, so I apologize if this is a truly dumb question. It just followed from my reading, and I want to test my understanding)
In the Wikipedia article titled Paul Émile Appell, we read that "He discovered a physical interpretation of the imaginary period of the doubly periodic function whose restriction to real arguments describes the motion of an ideal pendulum." The interpretation is this: The real period is the real period. The maximum deviation from vertical is $\theta$. The imaginary period is what the period would be if that maximum deviation were $\pi - \theta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Generating coordinates for 'N' points on the circumference of an ellipse with fixed nearest-neighbor spacing I have an ellipse with semimajor axis $A$ and semiminor axis $B$. I would like to pick $N$ points along the circumference of the ellipse such that the Euclidean distance between any two nearest-neighbor points, $d$, is fixed. How would I generate the coordinates for these points? For what range of $A$ and $B$ is this possible? As a clarification, all nearest-neighbor pairs should be of fixed distance $d$. If one populates the ellipse by sequentially adding nearest neighbors in, say, a clockwise fashion, the first and last point added should have a final distance $d$.
I will assume that $A$, $B$ and $N$ are given, and that $d$ is unknown. There is always a solution. Let $L$ be the perimeter of the ellipse. An obvious constraint is $N\,d<L$. Take $d\in(0,L/N)$. As explained in Gerry Myerson's answer, pick a point $P_1$ on the ellipse, and then pick points $P_2,\dots,P_{N+1}$ such that $P_{i+1}$ is clockwise from $P_i$ and the euclidean distance between $P_i$ and $P_{i+1}$ is $d$. If $d$ is small, $P_{N+1}$ will be short of $P_1$, while if $d$ is large, it will "overpass" $P_1$. In any case, the position of $P_{N+1}$ is a continuous function of $d$. By continuity, there will be a value of $d$ such that $P_{N+1}=P_1$, which closes the chain of $N$ points $P_1,\dots,P_N$. It is also clear that this value is unique. To find $P_{N+1}$ for a given $d$ you need to solve $N$ quadratic equations. To compute the value of $d$, you can use the bisection method. Edit: TonyK's objections can be taken care of if $N=2\,M$ is even. Take $P_1=(A,0)$ and follow the procedure to find points $P_2,\dots,P_{M+1}$ in the upper semiellipse such that $P_{i+1}$ is clockwise of $P_i$ and at distance $d$, and $P_{M+1}=(-A,0)$. Then the sought solution is $P_1,\dots,P_{M+1},\sigma(P_M),\dots,\sigma(P_2)$, where $\sigma(P)$ is the point symmetric to $P$ with respect to the axis $y=0$. If $N=2\,M+1$ is odd, I believe that there is also a symmetric solution, but I have to think about it.
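Here is a rough numerical sketch of the described procedure in Python/NumPy (my own illustration, not part of the original answer): an inner bisection advances one chord of length $d$ along the ellipse, and an outer bisection tunes $d$ until the $N$ chords close up. It assumes $d$ stays small enough that the chord length is monotone over the inner search bracket.

import numpy as np

def pt(t, A, B):
    return np.array([A*np.cos(t), B*np.sin(t)])

def next_param(t, d, A, B):
    # smallest t' > t with |pt(t') - pt(t)| = d, found by bisection
    f = lambda s: np.linalg.norm(pt(s, A, B) - pt(t, A, B)) - d
    lo, hi = t, t + np.pi            # antipodal chord is >= 2*min(A, B) > d
    for _ in range(60):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5*(lo + hi)

def closure_gap(d, N, A, B):
    t = 0.0
    for _ in range(N):               # march N chords of length d
        t = next_param(t, d, A, B)
    return t - 2*np.pi               # > 0 means we overshot the starting point

def solve_points(N, A, B):
    s = np.linspace(0, 2*np.pi, 20001)   # crude perimeter estimate for the bracket
    L = np.sum(np.hypot(np.diff(A*np.cos(s)), np.diff(B*np.sin(s))))
    lo, hi = 1e-9, L/N
    for _ in range(60):              # outer bisection on d
        d = 0.5*(lo + hi)
        lo, hi = (d, hi) if closure_gap(d, N, A, B) < 0 else (lo, d)
    d = 0.5*(lo + hi)
    t, params = 0.0, [0.0]
    for _ in range(N - 1):
        t = next_param(t, d, A, B)
        params.append(t)
    return d, [pt(u, A, B) for u in params]

d, pts = solve_points(12, A=3.0, B=1.0)
print(d, np.linalg.norm(pts[0] - pts[-1]))   # the closing chord should also be ~d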
{ "language": "en", "url": "https://math.stackexchange.com/questions/72555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
Simple use of a permutation rule in calculating probability I have the following problem from DeGroot: A box contains 100 balls, of which 40 are red. Suppose that the balls are drawn from the box one at a time at random, without replacement. Determine (1) the probability that the first ball drawn will be red. (2) the probability that the fifteenth ball drawn will be red, and (3) the probability that the last ball drawn will be red. For the first question: * *the probability that the first ball drawn will be red should be: $$\frac{40!}{(100-40)!}$$ *the probability that the fifteenth ball will be red: $$\frac{60! \times 40!}{(60-14)! \times (40-1)}$$ *the probability that the last ball drawn will be red: $$\frac{60! \times 40!}{100}$$ Am I on the right track with any of these?
The answers are (a) $40/100$; (b) $40/100$; (c) $40/100$. (a) Since balls tend to roll around, let us imagine instead that we have $100$ cards, with the numbers $1$ to $100$ written on them. The cards with numbers $1$ to $40$ are red, and the rest are blue. The cards are shuffled thoroughly, and we deal the top card. There are $100$ cards, and each of them is equally likely to be the top card. Since $40$ of the cards are red, it follows that the probability of "success" (first card drawn is red) is $40/100$. (b) If we think about (b) the right way, it should be equally clear that the probability is $40/100$. The probability that any particular card is the fifteenth one drawn is the same as the probability that it is the first one drawn: all permutations of the $100$ cards are equally likely. It follows that the probability that the fifteenth card drawn is red is $40/100$. (c) First, fifteenth, eighty-eighth, last, it is all the same, since all permutations of the cards are equally likely. Another way: We look again at (a), using a more complicated sample space. Again we use the card analogy. There are $\binom{100}{40}$ ways to decide on the locations of the $40$ red cards (but not which red cards occupy these locations). All these ways are equally likely. In how many ways can we choose these $40$ locations so that Position $1$ is one of the chosen locations? We only need to choose $39$ locations, and this can be done in $\binom{99}{39}$ ways. So the desired probability is $$\frac{\binom{99}{39}}{\binom{100}{40}}.$$ Compute. The top is $\frac{99!}{39!60!}$ and the bottom is $\frac{100!}{40!60!}$. Divide. There is a lot of cancellation. We quickly get $40/100$. Now count the number of ways to find locations for the reds that include Position $15$. An identical argument shows that the number of such choices is $\binom{99}{39}$, so again we get probability $40/100$. Another other way: There are other reasonable choices of sample space. The largest natural one is the set of $100!$ permutations of our $100$ cards. Now we count the permutations that have a red in a certain position, say Position $15$. The red in Position $15$ can be chosen in $40$ ways. For each of these ways, the remaining $99$ cards can be arranged in $99!$ ways, for a total of $(40)(99!)$. Thus the probability that we have a red in Position $15$ is $$\frac{(40)(99!)}{100!},$$ which quickly simplifies to $40/100$. The same argument works for any other specific position.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to get apparent linear diameter from angular diameter Say I have an object, whose actual size is 10 units in diameter, and it is 100 units away. I can find the angular diameter as such: $2\arctan(5/100) = 5.725\ $ radians. Can I use this angular diameter to find the apparent linear size (that is, the size it appears to be to my eye) in generic units at any given distance?
It appears you are using the wrong angular units: $2\;\tan^{-1}\left(\frac{5}{100}\right)=5.7248$ degrees $=0.099917$ radians. The formula you cite above is valid for a flat object perpendicular to the line of sight. If your object is a sphere, the angular diameter is given by $2\;\sin^{-1}\left(\frac{5}{100}\right)=5.7320$ degrees $=0.100042$ radians. Usually, the angular size is referred to as the apparent size. Perhaps you want to find the actual size of the object which has the same apparent size but lies at a different distance. In that case, as joriki says, just multiply the actual distance by $\frac{10}{100}$ to get the actual diameter. This is a result of the "similar triangles" rule used in geometry proofs. Update: In a comment to joriki's answer, the questioner clarified that what they want is to know how the apparent size varies with distance. The formulae for the angular size come from the diagram above: for the flat object: $\displaystyle\tan\left(\frac{\alpha}{2}\right)=\frac{D/2}{r}$; for the spherical object: $\displaystyle\sin\left(\frac{\alpha}{2}\right)=\frac{D/2}{r}$
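A worked instance: the question's object ($D=10$ at $r=100$) subtends $\alpha = 2\tan^{-1}(5/100)\approx 0.0999$ radians. A flat object at $r=50$ subtending the same angle has $D=2r\tan(\alpha/2)=100\tan(0.04996)\approx 5$ units: half the distance and half the size give the same angular (apparent) size, which is precisely the similar-triangles rule mentioned above.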
{ "language": "en", "url": "https://math.stackexchange.com/questions/72666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove for which $n \in \mathbb{N}$: $9n^3 - 3 ≤ 8^n$ A homework assignment requires me to find out and prove using induction for which $n ≥ 0$ $9n^3 - 3 ≤ 8^n$ and I have conducted multiple approaches and consulted multiple people and other resources with limited success. I appreciate any hint you can give me. Thanks in advance.
Let $f(n)=9n^3-3$, and let $g(n)=8^n$. We compute a little, to see what is going on. We have $f(0) \le g(0)$; $f(1)\le g(1)$; $f(2) > g(2)$; $f(3) \le g(3)$; $f(4) \le g(4)$. Indeed $f(4)=573$ and $g(4)=4096$, so it's not even close. The exponential function $8^x$ ultimately grows incomparably faster than the polynomial $9x^3-3$. So it is reasonable to conjecture that $9n^3-3 \le 8^n$ for every non-negative integer $n$ except $2$. We will show by induction that $9n^3-3 \le 8^n$ for all $n \ge 3$. It is natural to work with ratios. We show that $$\frac{8^n}{9n^3-3} \ge 1$$ for all $n \ge 3$. The result certainly holds at $n=3$. Suppose that for a given $n \ge 3$, we have $\frac{8^n}{9n^3-3} \ge 1$. We will show that $\frac{8^{n+1}}{9(n+1)^3-3} \ge 1$. Note that $$\frac{8^{n+1}}{9(n+1)^3-3}=8 \frac{9n^3-3}{9(n+1)^3-3}\frac{8^n}{9n^3-3}.$$ By the induction hypothesis, we have $\frac{8^n}{9n^3-3} \ge 1$. So all we need to do is to show that $$8 \frac{9n^3-3}{9(n+1)^3-3} \ge 1,$$ or equivalently that $$\frac{9(n+1)^3-3}{9n^3-3} \le 8.$$ If $n\ge 3$, the denominator is greater than $8n^3$, and the numerator is less than $9(n+1)^3$. Thus, if $n \ge 3$, then $$\frac{9(n+1)^3-3}{9n^3-3} <\frac{9}{8}\frac{(n+1)^3}{n^3}=\frac{9}{8}\left(1+\frac{1}{n}\right)^3.$$ But if $n \ge 3$, then $(1+1/n)^3\le (1+1/3)^3<2.5$, so $\frac{9}{8}(1+1/n)^3<8$, with lots of room to spare.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Real elliptic curves in the fundamental domain of $\Gamma(2)$ An elliptic curve (over $\mathbf{C}$) is real if its j-invariant is real. The set of real elliptic curves in the standard fundamental domain of $\mathrm{SL}_2(\mathbf{Z})$ can be explicitly described. In the standard fundamental domain $$F(\mathbf{SL}_2(\mathbf{Z}))=\left\{\tau \in \mathbf{H} : -\frac{1}{2} \leq \Re(\tau) \leq 0, \vert \tau \vert \geq 1 \right\} \cup \left\{ \tau \in \mathbf{H} : 0 < \Re(\tau) < \frac{1}{2}, \vert \tau \vert > 1\right\},$$ the set of real elliptic curves is the boundary of this fundamental domain together with the imaginary axis lying in this fundamental domain. Let $F$ be the standard fundamental domain for $\Gamma(2)$. Can one describe what the set of real elliptic curves in this fundamental domain looks like? Of course, the set of real elliptic curves in $F(\mathbf{SL}_2(\mathbf{Z}))$ is contained in the set of real elliptic curves in $F$. But there should be more.
Question answered in the comments by David Loeffler. I'm not sure which fundamental domain for $\Gamma(2)$ you consider to be "standard"? But whichever one you go for, it'll just be the points in your bigger domain whose $SL_2(\Bbb{Z})$ orbit contains a point of the set you just wrote down.
{ "language": "en", "url": "https://math.stackexchange.com/questions/72798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Order of finite fields is $p^n$ Let $F$ be a finite field. How do I prove that the order of $F$ is always $p^n$, where $p$ is prime?
A slight variation on caffeinmachine's answer that I prefer, because I think it shows more of the structure of what's going on. Let $F$ be a finite field (which thus has characteristic $p$, a prime). (1) Every nonzero element of $F$ has order $p$ in the additive group $(F,+)$, so $(F,+)$ is a $p$-group. (2) A finite group is a $p$-group iff it has order $p^n$ for some positive integer $n$. The first claim is immediate, by the distributive property of the field. Let $x \in F$, $x \neq 0_F$. We have $$p \cdot x = p \cdot (1_{F}\, x) = (p \cdot 1_{F})\, x = 0.$$ Moreover $p$ is the smallest positive integer for which this occurs, by the definition of the characteristic of a field. So $x$ has order $p$. The part that we need of the second claim is a well-known corollary of Cauchy's theorem (the reverse direction is just an application of Lagrange's theorem).
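A small worked example to make the two claims concrete (added for illustration): $\mathbb{F}_9 \cong \mathbb{F}_3[t]/(t^2+1)$ is a field with $9 = 3^2$ elements, since $t^2+1$ is irreducible over $\mathbb{F}_3$ (because $-1$ is not a square mod $3$). Every nonzero element $a+bt$ satisfies $3\cdot(a+bt)=0$, so $(\mathbb{F}_9,+)\cong \mathbb{Z}/3 \times \mathbb{Z}/3$ is a $3$-group, of order $3^2$ as the theorem predicts. By contrast, no field with $6$ elements can exist, because $6$ is not a prime power.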
{ "language": "en", "url": "https://math.stackexchange.com/questions/72856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68", "answer_count": 6, "answer_id": 3 }
Variance of sample variance? What is the variance of the sample variance? In other words I am looking for $\mathrm{Var}(S^2)$. I have started by expanding out $\mathrm{Var}(S^2)$ into $E(S^4) - [E(S^2)]^2$. I know that $[E(S^2)]^2$ is $\sigma^4$. And that is as far as I got.
Maybe this will help. Let's suppose the samples are taken from a normal distribution. Then using the fact that $\frac{(n-1)S^2}{\sigma^2}$ is a chi squared random variable with $(n-1)$ degrees of freedom, we get $$\begin{align*} \text{Var}~\frac{(n-1)S^2}{\sigma^2} & = \text{Var}~\chi^{2}_{n-1} \\ \frac{(n-1)^2}{\sigma^4}\text{Var}~S^2 & = 2(n-1) \\ \text{Var}~S^2 & = \frac{2(n-1)\sigma^4}{(n-1)^2}\\ & = \frac{2\sigma^4}{(n-1)}, \end{align*}$$ where we have used the fact that $\text{Var}~\chi^{2}_{n-1}=2(n-1)$. Hope this helps.
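A quick Monte Carlo check of this formula (added illustration; the parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, trials = 10, 2.0, 200_000

samples = rng.normal(0.0, sigma, size=(trials, n))
s2 = samples.var(axis=1, ddof=1)   # unbiased sample variance S^2 per trial

print(s2.var())                    # empirical Var(S^2), close to...
print(2 * sigma**4 / (n - 1))      # ...the theoretical 2*sigma^4/(n-1) ~ 3.556
```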
{ "language": "en", "url": "https://math.stackexchange.com/questions/72975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101", "answer_count": 6, "answer_id": 1 }
Find $a$ and $b$ for which this expression $x\bullet y=(a+x)(b+y)$ is associative I need to find $a$ and $b$ for which this expression is associative: $$x\bullet y=(a+x)(b+y)$$ It is also known that $$x,y\in \mathbb{Z}.$$ So first I write: $$(x\bullet y)\bullet z=x\bullet (y\bullet z)$$ Then I expand both sides, and after a little algebra I get: $$ab^2+b^2x+az+abz+bxz=a^2b+a^2z+bx+abx+axz$$ And I don't know what to do now. I tried substituting $x=0$ and $z=1$, but nothing better happened. Could you please share some ideas? EDIT: Sorry, but I forgot to mention that I need to find the specific $a$ and $b$ for which the operation is associative.
Assuming that you still want the element $1 \in \mathbb{Z}$ to be a multiplicative identity element with your new multiplication, here is an alternative way of quickly seeing that $a=b=0$. For an arbitrary $y\in \mathbb{Z}$ we shall have: $y=1 \bullet y=(a+1)(b+y)=ab+ay+b+y=(a+1)b+(a+1)y$ This holds for every $y$ if and only if $a+1=1$ and $(a+1)b=0$. The unique solution is $a=0$ and $b=0$.
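A brute-force confirmation over a small range (added; illustrative only, not a proof) that with $a=b=0$ the operation reduces to ordinary multiplication, which is associative with identity $1$:

```python
from itertools import product

def op(x, y, a=0, b=0):
    return (a + x) * (b + y)

# Associativity and the identity property on a small window of integers.
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x, y, z in product(range(-5, 6), repeat=3))
assert all(op(1, x) == x == op(x, 1) for x in range(-5, 6))
print("a = b = 0 passes both checks")
```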
{ "language": "en", "url": "https://math.stackexchange.com/questions/73064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Calculating the exponential of a $4 \times 4$ matrix Find $e^{At}$, where $$A = \begin{bmatrix} 1 & -1 & 1 & 0\\ 1 & 1 & 0 & 1\\ 0 & 0 & 1 & -1\\ 0 & 0 & 1 & 1\\ \end{bmatrix}$$ So, let me just find $e^{A}$ for now and I can generalize later. I notice right away that I can write $$A = \begin{bmatrix} B & I_{2} \\ 0_{22} & B \end{bmatrix}$$ where $$B = \begin{bmatrix} 1 & -1\\ 1 & 1\\ \end{bmatrix}$$ I'm sort of making up a method here and I hope it works. Can someone tell me if this is correct? I write: $$A = \mathrm{diag}(B,B) + \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$$ Call $S = \mathrm{diag}(B,B)$, and $N = \begin{bmatrix}0_{22} & I_{2}\\ 0_{22} & 0_{22}\end{bmatrix}$. I note that $N^2$ is $0_{44}$, so $$e^{N} = \frac{N^{0}}{0!} + \frac{N}{1!} + \frac{N^2}{2!} + \cdots = I_{4} + N + 0_{44} + \cdots = I_{4} + N$$ and that $e^{S} = \mathrm{diag}(e^{B}, e^{B})$ and compute: $$e^{A} = e^{S + N} = e^{S}e^{N} = \mathrm{diag}(e^{B}, e^{B})\cdot[I_{4} + N]$$ This reduces the problem to finding $e^B$, which is much easier. Is my logic correct? I just started writing everything as a block matrix and proceeded as if nothing about the process of finding the exponential of a matrix would change. But I don't really know the theory behind this I'm just guessing how it would work.
Consider $M(t) = \exp(t A)$, and as you noticed, it has the block upper-triangular form $$ M(t) = \left(\begin{array}{cc} \exp(t B) & n(t) \\ 0_{2 \times 2} & \exp(t B) \end{array} \right). $$ Notice that $M^\prime(t) = A \cdot M(t)$, and this results in the following differential equation for the matrix $n(t)$: $$ n^\prime(t) = \mathbb{I}_{2 \times 2} \cdot \exp(t B) + B \cdot n(t) $$ which translates into $$ \frac{\mathrm{d}}{\mathrm{d} t} \left( \exp(-t B) n(t) \right) = \mathbb{I}_{2 \times 2} $$ which, together with the initial condition $n(0) = 0_{2 \times 2}$ (since $M(0)$ is the identity), is to say that $n(t) = t \exp(t B)$.
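A numerical sanity check of this block structure (added; uses SciPy's matrix exponential). Incidentally, $B = I + J$ with $J = \left(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right)$ the rotation generator, so $\exp(tB) = e^t\left(\begin{smallmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{smallmatrix}\right)$ if a fully explicit answer is wanted.

```python
import numpy as np
from scipy.linalg import expm

B = np.array([[1.0, -1.0],
              [1.0,  1.0]])
A = np.block([[B, np.eye(2)],
              [np.zeros((2, 2)), B]])

t = 0.7  # arbitrary test value
M = expm(t * A)

assert np.allclose(M[:2, :2], expm(t * B))      # diagonal blocks are exp(tB)
assert np.allclose(M[2:, 2:], expm(t * B))
assert np.allclose(M[:2, 2:], t * expm(t * B))  # top-right block is t*exp(tB)
print("block structure confirmed")
```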
{ "language": "en", "url": "https://math.stackexchange.com/questions/73112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Index notation for tensors: is the spacing important? While reading physics textbooks I often come across notation like this; $$J_{\alpha}{}^{\beta},\ \Gamma_{\alpha \beta}{}^{\gamma}, K^\alpha{}_{\beta}.$$ Notice the spacing in indices. I don't understand why they do not write simply $J_{\alpha}^\beta, \Gamma_{\alpha \beta}^\gamma, K^\alpha_{\beta}$.
It's important to keep track of the ordering if you want to use a metric to raise and lower indices freely (without explicitly writing out $g_{ij}$'s all the time). For example (using Penrose abstract index notation), if you raise the index $a$ on the tensor $K_{ab}$, then you get $K^a{}_b (=g^{ac} K_{cb})$, whereas if you raise the index $a$ on the tensor $K_{ba}$, you get $K_b{}^a (=g^{ac}K_{bc})$. Since the tensors $K^a{}_b$ and $K_b{}^a$ act differently on $X_a Y^b$ (unless $K$ happens to be symmetric, i.e., $K_{ab}=K_{ba}$), one doesn't want to denote them both by $K^a_b$.
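A tiny numerical illustration of the point (added; flat metric $g = I$ assumed, so raising and lowering does not change components and only the ordering matters):

```python
import numpy as np

K = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # components K_{ab}, deliberately non-symmetric
X = np.array([1.0, 0.0])
Y = np.array([0.0, 1.0])

print(X @ K @ Y)    # K^a{}_b X_a Y^b  -> 1.0
print(X @ K.T @ Y)  # K_b{}^a X_a Y^b  -> 0.0, a genuinely different tensor
```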
{ "language": "en", "url": "https://math.stackexchange.com/questions/73171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
Solve for equation algebraically Is it possible to write the following function as $H(x) = {}$ 'some expression'? $$D(x) = H(x) + H(x-1)$$ Edit: Hey everyone, thanks for all the great responses, and just to clarify, $H(x)$ and $D(x)$ are always going to be polynomials; I wasn't sure if that made too big of a difference, so I didn't mention it before. Edit 2: I'm sincerely sorry if I was too general; I've only really used polynomial equations in my education thus far, so I didn't realize they might not seem as routine to everyone else. Once again I'm very sorry if I didn't give everyone an opportunity to answer the question because of my vagueness. I sincerely appreciate all of your answers and time.
$H(x)$ could be defined as anything on $[0,1)$, then the relation $$ D(x) = H(x) + H(x-1)\tag{1} $$ would define $H(x)$ on the rest of $\mathbb{R}$. For example, for $x\ge0$, $$ H(x)=(-1)^{\lfloor x\rfloor}H(x-\lfloor x\rfloor)+\sum_{k=0}^{\lfloor x\rfloor-1}(-1)^kD(x-k)\tag{2} $$ and for $x<0$, $$ H(x)=(-1)^{\lfloor x\rfloor}H(x-\lfloor x\rfloor)+\sum_{k=\lfloor x\rfloor}^{-1}(-1)^kD(x-k)\tag{3} $$ Polynomial solutions: Some interest was shown for polynomial solutions when $D(x)$ is a polynomial. As was done in another recent answer, we can use Taylor's Theorem to show that $$ e^{-\mathcal{D}}P(x)=P(x-1)\tag{4} $$ where $\mathcal{D}=\dfrac{\mathrm{d}}{\mathrm{d}x}$, at least for polynomial $P$. Using $(4)$, $(1)$ becomes $$ D(x)=\left(I+e^{-\mathcal{D}}\right)H(x)\tag{5} $$ Noting that $$ (1+e^{-x})^{-1}=\tfrac{1}{2}+\tfrac{1}{4}x-\tfrac{1}{48}x^3+\tfrac{1}{480}x^5+\dots\tag{6} $$ we can get a polynomial solution of $(1)$ with $$ H(x)=\left(\tfrac{1}{2}I+\tfrac{1}{4}\mathcal{D}-\tfrac{1}{48}\mathcal{D}^3+\tfrac{1}{480}\mathcal{D}^5+\dots\right)D(x)\tag{7} $$ Polynomial example: For example, if $D(x)=x^2$, then using $(7)$, $H(x)=\tfrac{1}{2}x^2+\tfrac{1}{2}x$. Check: $$\left(\tfrac{1}{2}x^2+\tfrac{1}{2}x\right)+\left(\tfrac{1}{2}(x-1)^2+\tfrac{1}{2}(x-1)\right)=x^2$$
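A quick symbolic check of formula $(7)$ (added illustration) on the cubic $D(x)=x^3$, where the operator series terminates after the $\mathcal{D}^3$ term:

```python
import sympy as sp

x = sp.symbols('x')
D = x**3

# Apply (1/2)I + (1/4)D - (1/48)D^3 to D(x); higher derivatives vanish.
H = sp.expand(D / 2 + sp.diff(D, x) / 4 - sp.diff(D, x, 3) / 48)
print(H)                                      # x**3/2 + 3*x**2/4 - 1/8
print(sp.simplify(H + H.subs(x, x - 1) - D))  # 0, so H(x) + H(x-1) = x^3
```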
{ "language": "en", "url": "https://math.stackexchange.com/questions/73222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }
Limiting distribution of sum of normals How would I go about solving this problem below? I am not exactly sure where to start. I know that I need to make use of the Lebesgue Dominated Convergence theorem as well. Thanks for the help. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution that is $N(\mu, \sigma^2)$ where $\sigma^2 > 0$. Show that $Z_n = \sum\limits_{i=1}^n X_i$ does not have a limiting distribution.
Another way to show $P(Z_n\le z)\to0$ for all $z$, as Did suggests: Note that $Z_n\sim N(n\mu, n\sigma^2)$, so for any fixed real number $z$, $P(Z_n\le z)=\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{z-n\mu}{\sqrt{2n\sigma^2}}\right)\right]\to0$ as $n\to\infty$ when $\mu>0$, since then $\frac{z-n\mu}{\sqrt{2n\sigma^2}}\to-\infty$ and $\lim_{x\to-\infty}\operatorname{erf}(x)=-1$. Hence the limit of the CDFs is equal to $0$ everywhere, which is not a CDF. (If $\mu<0$ the limit is identically $1$, and if $\mu=0$ it is identically $\frac12$; neither of these is a CDF either, so $Z_n$ has no limiting distribution in any case.) [I use the definition of erf from the Wikipedia page: https://en.wikipedia.org/wiki/Normal_distribution]
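A numerical illustration of the CDF collapse (added; arbitrary parameters with $\mu>0$):

```python
import numpy as np
from scipy.stats import norm

mu, sigma, z = 1.0, 1.0, 5.0
for n in [1, 10, 100, 1000]:
    # Z_n ~ N(n*mu, n*sigma^2), so its CDF at the fixed point z shrinks to 0.
    print(n, norm.cdf(z, loc=n * mu, scale=np.sqrt(n) * sigma))
```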
{ "language": "en", "url": "https://math.stackexchange.com/questions/73259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
How to integrate this trigonometric function? The question is $ \displaystyle \int{ \frac{1-r^{2}}{1-2r\cos(\theta)+r^{2}}} d\theta$. I know the Weierstrass substitution should be used to solve it, but I have no idea how it works.
There's a Wikipedia article about this technique: Weierstrass substitution. Notice that what you've got here is $\displaystyle\int\frac{d\theta}{a+b\cos\theta}$. The factor $1-r^2$ pulls out, and $a=1+r^2$ and $b=-2r$.
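For completeness, here is a sketch of where the substitution leads (added; assuming $|r|<1$). With $t=\tan(\theta/2)$ one has $\cos\theta = \frac{1-t^2}{1+t^2}$ and $d\theta = \frac{2\,dt}{1+t^2}$, so $$\int \frac{d\theta}{a+b\cos\theta} = \int \frac{2\,dt}{(a+b)+(a-b)t^2} = \frac{2}{\sqrt{a^2-b^2}}\arctan\left(\sqrt{\frac{a-b}{a+b}}\,\tan\frac{\theta}{2}\right)+C \qquad (a>|b|).$$ Here $a=1+r^2$ and $b=-2r$ give $a^2-b^2=(1-r^2)^2$, $a-b=(1+r)^2$ and $a+b=(1-r)^2$, so the prefactor $1-r^2$ cancels and $$\int \frac{1-r^2}{1-2r\cos\theta+r^2}\,d\theta = 2\arctan\left(\frac{1+r}{1-r}\tan\frac{\theta}{2}\right)+C.$$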
{ "language": "en", "url": "https://math.stackexchange.com/questions/73350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove $\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n} - 1) = 0$? I want to show that $$\lim_{n \to \infty} \sqrt{n}(\sqrt[n]{n}-1) = 0$$ and my assistant teacher gave me the hint to find a proper estimate for $\sqrt[n]{n}-1$ in order to do this. I know how one shows that $\lim_{n \to \infty} \sqrt[n]{n} = 1$: to do this, we can write $\sqrt[n]{n} = 1+x_n$, raise both sides to the $n$-th power and then use the binomial theorem (or, to be more specific, the term with the second power). However, I don't see how this or any other term (i.e. the first or the $n$-th) could be used here. What estimate am I supposed to find, or is there even a simpler way to show this limit? Thanks for any answers in advance.
The OP's attempt can be pushed to get a complete proof. $$ n = (1+x_n)^n \geq 1 + nx_n + \frac{n(n-1)}{2} x_n^2 + \frac{n(n-1)(n-2)}{6} x_n^3 > \frac{n(n-1)(n-2) x_n^3}{6} > \frac{n^3 x_n^3}{8}, $$ provided $n$ is "large enough" 1. Therefore, (again, for large enough $n$,) $x_n < 2 n^{-2/3}$, and hence $\sqrt{n} x_n < 2n^{-1/6}$. Thus $\sqrt{n} x_n$ approaches $0$ by the sandwich (squeeze) theorem. 1In fact, you should be able to show that for all $n \geq 12$, we have $$ \frac{n(n-1)(n-2)}{6} > \frac{n^3}{8} \iff \left( 1-\frac1n \right) \left( 1- \frac2n \right) \geq \frac34. $$
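A numerical illustration of the resulting bound (added; not part of the proof):

```python
# sqrt(n)*(n^(1/n) - 1) versus the bound 2*n^(-1/6) from the argument above.
for n in [10, 10**3, 10**6, 10**9]:
    value = n**0.5 * (n**(1.0 / n) - 1.0)
    bound = 2.0 * n**(-1.0 / 6.0)
    print(n, value, bound)   # both columns tend to 0, with value < bound
```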
{ "language": "en", "url": "https://math.stackexchange.com/questions/73403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
Fractional cardinalities of sets Is there any extension of the usual notion of cardinalities of sets such that there are sets with fractional cardinalities such as $5/2$, i.e. a set with $2.5$ elements? What would be an example of such a set? Basically, is there any consistent set theory where there is a set whose cardinality is less than that of {cat,dog,fish} but greater than that of {47,83}?
One can extend the notion of cardinality to include negative and non-integer values by using the Euler characteristic and homotopy cardinality. For example, the space of finite sets has homotopy cardinality $e=\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\dotsi$. The idea is to sum over each finite set, inversely weighted by the size of their symmetry group. John Baez discusses this in detail on his blog. He has plenty of references, as well as lecture notes, course notes, and blog posts about the topic here. The first sentence on the linked page: "We all know what it means for a set to have 6 elements, but what sort of thing has -1 elements, or 5/2? Believe it or not, these questions have nice answers." -Baez
{ "language": "en", "url": "https://math.stackexchange.com/questions/73470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 3 }
A number when successively divided by $9$, $11$ and $13$ leaves remainders $8$, $9$ and $8$ respectively A number when successively divided by $9$, $11$ and $13$ leaves remainders $8$, $9$ and $8$ respectively. The answer is $881$, but how? Any clue about how this is solved?
First, when the number is divided by $9$, the remainder is $8$, so $N = 9x+8$. Similarly, next $x = 11y+9$, and $y=13z+8$. So $N = 99y+89 = 99(13z+8)+89 = 1287z+792+89 = 1287z+881$. So $N$ is of the form $1287\cdot(\text{a whole number})+881$. If you need to find the minimum such number, it is $881$. Hope that helps.
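A one-line verification of the answer (added illustration):

```python
n = 881
for d in (9, 11, 13):
    q, r = divmod(n, d)
    print(f"{n} = {d}*{q} + {r}")
    n = q  # successive division: the next divisor is applied to the quotient
# 881 = 9*97 + 8, 97 = 11*8 + 9, 8 = 13*0 + 8, giving remainders 8, 9, 8.
```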
{ "language": "en", "url": "https://math.stackexchange.com/questions/73532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
existence and uniqueness of Hermite interpolation polynomial What are the proofs of existence and uniqueness of the Hermite interpolation polynomial? Suppose $x_{0},\ldots,x_{n}$ are distinct nodes and, for $i=1,\ldots,n$, the $m_{i}$ are natural numbers. Prove that there exists a unique polynomial $H_{N}$ of degree $N=m_{1}+\cdots+m_{n}-1$ satisfying $H_{N}^{(k)}(x_i)=y_{i}^{(k)}$ for $k=0,1,\ldots,m_{i}$ and $i=0,1,\ldots,n$.
I think you've got your indices mixed up a bit; they're sometimes starting at $0$ and sometimes at $1$. I'll assume that the nodes are labeled from $1$ to $n$ and the first $m_i$ derivatives at $x_i$ are determined, that is, the derivatives from $0$ to $m_i-1$. A straightforward proof consists in showing how to construct a basis of polynomials $P_{ik}$ that have non-zero $k$-th derivative at $x_i$ and zero for all other derivatives and nodes. For given $i$, start with $k=m_i-1$ and set $$P_{i,m_i-1}=(x-x_i)^{m_i-1}\prod_{j\ne i}(x-x_j)^{m_j}\;.$$ Then decrement $k$ in each step. Start with $$Q_{i,k}=(x-x_i)^k\prod_{j\ne i}(x-x_j)^{m_j}\;,$$ which has zero derivatives up to $k$ at $x_i$, and subtract out multiples of the $P_{i,k'}$ with $k'\gt k$, which have already been constructed, to make the $k'$-th derivatives at $x_i$ with $k'\gt k$ zero. Doing this for all $i$ yields a basis whose linear combinations can have any given values for the derivatives. Uniqueness follows from the fact that the number of these polynomials is equal to the dimension $d=\sum_i m_i$ of the vector space of polynomials of degree up to $d-1$. Since the $P_{ik}$ are linearly independent, there's no more room for one more that also satisfies one of the conditions, since it would have to be linearly independent of all the others.
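For a computational counterpart (added; this relies on SciPy's documented convention that repeated nodes in KroghInterpolator supply successive derivative values):

```python
from scipy.interpolate import KroghInterpolator

# Two nodes: x=0 with value and first derivative (m_1 = 2), x=1 with value
# only (m_2 = 1). The interpolant has degree at most m_1 + m_2 - 1 = 2.
xi = [0.0, 0.0, 1.0]
yi = [1.0, 2.0, 0.0]      # H(0) = 1, H'(0) = 2, H(1) = 0

H = KroghInterpolator(xi, yi)
print(H([0.0, 1.0]))               # [1. 0.]
print(H.derivative(0.0, der=1))    # 2.0  (here H(x) = 1 + 2x - 3x^2)
```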
{ "language": "en", "url": "https://math.stackexchange.com/questions/73617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Norm of adjoint operator in Hilbert space Suppose $H$ is a Hilbert space and let $T \in B(H,H)$ where in our notation $B(H,H)$ denotes the set of all linear continuous operators $H \rightarrow H$. We defined the adjoint of $T$ as the unique $T^* \in B(H,H)$ such that $\langle Tx,y \rangle = \langle x, T^*y\rangle$ for all $x,y$ in $H$. I proved its existence as follows: Fix $y \in H$. Put $\Phi_y: H \rightarrow \mathbb{C}, x \mapsto \langle Tx,y \rangle$. This functional is continuous since $|\langle Tx, y\rangle | \leq ||Tx||\; ||y|| \leq ||T||\; ||x||\; ||y||$. Therefore we can apply the Riesz-Fréchet theorem which gives us the existence of a vector $T^*y \in H$ such that for all $x \in H$ we have $\langle Tx, y\rangle = \langle x, T^* y\rangle$. I now have to prove that $||T^*|| = ||T||$. I can show $||T^*|| \leq ||T||$: Since the Riesz theorem gives us an isometry we have $||T^*y|| = ||\Phi_y||_{H*} = \sup_{||x||\leq 1} |\langle Tx, y\rangle| \leq ||T||\;||y||$ and thus $||T^*|| \leq ||T||$. However, I do not see how to prove the other inequality without using consequences of Hahn-Banach or similar results. It seems like I am missing some quite simple point. Any help is very much appreciated! Regards, Carlo
Why don't you look at what $T^{**}$ is ...
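Spelling out the hint (added sketch): applying the same Riesz construction to $T^*$ yields $T^{**}\in B(H,H)$ with $\langle T^*x, y\rangle = \langle x, T^{**}y\rangle$ for all $x,y\in H$, and $\|T^{**}\|\le\|T^*\|$ by the inequality you already proved. But $\langle x, T^{**}y\rangle = \langle T^*x, y\rangle = \overline{\langle y, T^*x\rangle} = \overline{\langle Ty, x\rangle} = \langle x, Ty\rangle$ for all $x$, so $T^{**}=T$. Hence $\|T\| = \|T^{**}\| \le \|T^*\|$, which combined with $\|T^*\|\le\|T\|$ gives $\|T^*\| = \|T\|$.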
{ "language": "en", "url": "https://math.stackexchange.com/questions/73670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Does Zorn's Lemma imply the existence of a (not unique) maximal prolongation of any solution of an ODE? Let a map $F:(x,y)\in\mathbb{R}\times\mathbb{R}^n\mapsto F(x,y)\in\mathbb{R}^n$ be given. Let us denote by $\mathcal{P}$ the set whose elements are the solutions of the ODE $y'=F(x,y)$, i.e. the differentiable maps $u:J\to\mathbb{R}^n$, where $J$ is some open interval in $\mathbb{R}$, such that $u'(t)=F(t,u(t))$ for all $t\in J$. Let $\mathcal{P}$ be endowed with the ordering by extension. In order to prove that any element of $\mathcal{P}$ is extendable to a (not necessarily unique) maximal element, without any particular hypotheses on $F$, I was wondering whether Zorn's lemma can be used.
Yes, Zorn's Lemma should be all you need. Take the set of partial solutions that extend your initial solution, and order them by the subset relation under the common definition of a function as the set of pairs $\langle x, f(x)\rangle$. Then the union of all functions in a chain will be another partial solution, so Zorn's Lemma applies.
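To flesh out the verification slightly (added sketch): if $\{u_\alpha : J_\alpha \to \mathbb{R}^n\}$ is a chain extending the given solution, the domains $J_\alpha$ are pairwise nested, so $J=\bigcup_\alpha J_\alpha$ is again an open interval, and the union $u$ of the graphs is a well-defined function on $J$ that agrees with some $u_\alpha$ on a neighbourhood of each point of $J$; being a solution is a local property, so $u'(t)=F(t,u(t))$ on all of $J$ and $u$ is an upper bound in $\mathcal{P}$. Note that no hypothesis on $F$ (continuity, etc.) is used anywhere: "maximal" here only means "not properly extendable", and says nothing about the behaviour of the solution near the endpoints of its interval.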
{ "language": "en", "url": "https://math.stackexchange.com/questions/73760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove an inequality by Induction: $(1-x)^n + (1+x)^n < 2^n$ Could you give me some hints, please, for the following problem. Given $x \in \mathbb{R}$ such that $|x| < 1$. Prove by induction the following inequality for all $n \geq 2$: $$(1-x)^n + (1+x)^n < 2^n$$ $1$ Basis: $$n=2$$ $$(1-x)^2 + (1+x)^2 < 2^2$$ $$(1-2x+x^2) + (1+2x+x^2) < 2^2$$ $$2+2x^2 < 2^2$$ $$2(1+x^2) < 2^2$$ $$1+x^2 < 2$$ $$x^2 < 1 \implies |x| < 1$$ $2$ Induction Step: $n \rightarrow n+1$ $$(1-x)^{n+1} + (1+x)^{n+1} < 2^{n+1}$$ $$(1-x)(1-x)^n + (1+x)(1+x)^n < 2·2^n$$ I tried to split it into $3$ cases: $x=0$ (then it's true), $-1<x<0$ and $0<x<1$. Could you please tell me how I should move on? And do I need the binomial theorem here? Thank you in advance.
The proof by induction is natural and fairly straightforward, but it’s worth pointing out that induction isn’t actually needed for this result if one has the binomial theorem at hand: Corrected: $$\begin{align*} (1-x)^n+(1+x)^n &= \sum_{k=0}^n\binom{n}k (-1)^kx^k + \sum_{k=0}^n \binom{n}k x^k\\ &= \sum_{k=0}^n\binom{n}k \left((-1)^k+1\right)x^k\\ &= 2\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}x^{2k}\\ &< 2\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\tag{1}\\ &= 2\cdot 2^{n-1}\tag{2}\\ &= 2^n, \end{align*}$$ where the inequality in $(1)$ holds because $|x|< 1$, and $(2)$ holds for $n>0$ because $\sum\limits_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}$, the number of subsets of $[n]=\{1,\dots,n\}$ of even cardinality, is equal to the number of subsets of $[n]$ of odd cardinality.
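A quick numerical confirmation on a grid (added; illustrative only):

```python
import numpy as np

xs = np.linspace(-0.999, 0.999, 2001)   # sample points with |x| < 1
for n in range(2, 8):
    lhs = (1 - xs)**n + (1 + xs)**n
    assert np.all(lhs < 2**n), n
print("(1-x)^n + (1+x)^n < 2^n holds on the sampled grid")
```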
{ "language": "en", "url": "https://math.stackexchange.com/questions/73783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Solving a recurrence using substitutions I have to solve this recurrence using substitutions: $(n+1)(n-2)a_n=n(n^2-n-1)a_{n-1}-(n-1)^3a_{n-2}$ with $a_2=a_3=1$. The only useful substitution that I see is $b_n=(n+1)a_n$, but I don't know how to go on. Could you help me, please?
So if $b_n=(n+1)a_n$, then $b_{n-1}=na_{n-1}$, and $b_{n-2}=(n-1)a_{n-2}$, and you equation becomes $$(n-2)b_n=(n^2-n-1)b_{n-1}-(n-1)^2b_{n-2}$$ which is a little simpler than what you started with, though I must admit I can't see offhand any easy way to get to a solution from there. Are you working from a text or some notes that have other examples of solving via substitution? Maybe there's a hint in some other example as to how best to proceed.
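One way to finish from here (an added sketch, not part of the original answer): set $c_n = b_n - b_{n-1}$. Subtracting $(n-2)b_{n-1}$ from both sides of the simplified recurrence gives $(n-2)(b_n - b_{n-1}) = (n^2-2n+1)b_{n-1} - (n-1)^2 b_{n-2} = (n-1)^2(b_{n-1}-b_{n-2})$, i.e. $c_n = \frac{(n-1)^2}{n-2}\,c_{n-1}$. From $a_2=a_3=1$ we get $b_2=3$, $b_3=4$, $c_3=1$, so telescoping the ratio gives $$c_n = \prod_{k=4}^{n}\frac{(k-1)^2}{k-2} = \frac{\big((n-1)!/2\big)^2}{(n-2)!} = \frac{(n-1)\,(n-1)!}{4}.$$ Since $(k-1)(k-1)! = k!-(k-1)!$, the sum telescopes too: $$b_n = b_3 + \sum_{k=4}^{n} c_k = 4 + \frac{n!-3!}{4} = \frac{n!+10}{4}, \qquad a_n = \frac{n!+10}{4(n+1)}.$$ Quick checks: $a_2 = 12/12 = 1$, $a_3 = 16/16 = 1$, and $a_4 = 34/20$ indeed satisfies the original recurrence ($10\,a_4 = 4\cdot 11\cdot 1 - 27 = 17$).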
{ "language": "en", "url": "https://math.stackexchange.com/questions/73844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is $\sum_{n=0}^{\infty}2^n$ equal to $-1$? Why? Possible Duplicate: Divisibility with sums going to infinity From Wikipedia and Minute Physics I see that the sum would be $-1$. I find this challenging to understand: how does a sum of positive integers end up being negative?
It's all about the principle of analytic continuation. The function $f(z)=\sum_{n=0}^\infty z^n$ defines an analytic function in the unit disk, equal to the meromorphic function $g(z)=1/(1-z)$. Note that the equality $f\equiv g$ holds only in the disk, where $f$ converges absolutely. Despite this, if we naively wanted to assign any value to $f(z)$ outside of $\vert z \vert < 1$, an intuitive choice would be to set $f(2)=g(2)= -1$. Moreover, the theory of complex functions implies that this is somehow the only possible choice (for this function, at least). The principle is called analytic continuation, and it is incredibly important in many areas of mathematics, most notably complex analysis.
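A complementary viewpoint (added): there is one completion of $\mathbb{Q}$ in which the identity is literal rather than regularized. In the $2$-adic numbers $\mathbb{Q}_2$, the partial sums satisfy $\sum_{n=0}^{N-1}2^n = 2^N-1$, and $|2^N|_2 = 2^{-N}\to 0$, so the series genuinely converges to $-1$ there. No sum of positive reals is being claimed to be negative; the two statements simply live in different completions of $\mathbb{Q}$, and in $\mathbb{R}$ the series diverges.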
{ "language": "en", "url": "https://math.stackexchange.com/questions/73907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is $(0, 0)$ not a minimum of $f(x, y) = (y-3x^2)(y-x^2)$? There is an exercise in my problem set about these functions: $$f(x, y) = (y-3x^2)(y-x^2) = 3 x^4-4 x^2 y+y^2$$ $$g(t) = f(vt) = f(at, bt); a, b \in \mathbf{R}$$ It asks me to prove that $t = 0$ is a local minimum of $g$ for all $a, b \in \mathbf{R}$. I did it easily: $$g(t) = 3 a^4 t^4-4 a^2 t^2 b t+b^2 t^2$$ $$g'(t) = 2 b^2 t-12 a^2 b t^2+12 a^4 t^3$$ $$g''(t) = 2 b^2-24 a^2 b t+36 a^4 t^2$$ It is a critical point: $$g'(0) = 0; \forall a, b$$ It is a local minimum for all $a, b$: $$g''(0) = 2b^2 > 0; \forall b \ne 0$$ and $$b = 0 \implies g(t) = 3 a^4 t^4$$ which has only one minimum, at $0$, and no maximum. However, it also asks me to prove that $(0, 0)$ is not a local minimum of $f$. How can this be possible? I mean, if $(0, 0)$ is a minimum over every straight line that passes through it, then, at this point, $f$ should be increasing in all directions, no?
Draw the set of points in the $xy$-plane where $f(x,y) = 0$. Then look at the regions of the plane that are created and figure out in which of them $f(x,y)$ is positive and in which $f(x,y)$ is negative. From there, you should be able to prove that $(0,0)$ is neither a local maximum nor a local minimum point. (Hint: Using your sketch, is there any disk centered at $(0,0)$ in which $f(x,y)$ takes its minimum value at $(0,0)$?) As important as understanding why the phenomenon you are observing is happening (no local minimum at the origin even though the function restricted to any straight line through the origin has a local minimum there) is to figure out how you would construct such a function and why the example given is, in some sense, the simplest kind of example you could construct. The idea used here is very similar to creating an example which shows that $\lim_{(x,y) \rightarrow (0,0)} f(x,y)$ may not exist even if the limit exists along all straight lines approaching the origin. What you're really learning in this example is that the seemingly reasonable intuition that we would naturally have that we can understand a function of two variables near a point by understanding all of the functions of one variable obtained by restricting the function to lines through that point is misguided -- in particular, it fails if we are trying to find local maxima and minima.
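To make the hint concrete (added example): along the parabola $y = 2x^2$, which lies between the two curves $y=x^2$ and $y=3x^2$, we get $f(x,2x^2) = (2x^2-3x^2)(2x^2-x^2) = -x^4 < 0$ for $x\neq 0$, while along $y=0$ we get $f(x,0) = 3x^4 > 0$ for $x\neq 0$. Since $f(0,0)=0$, every disk centered at the origin contains both larger and smaller values, so $(0,0)$ is neither a local minimum nor a local maximum. The curve $y=2x^2$ is not a straight line, which is exactly why the one-variable restrictions $g(t)$ never detect the negative values.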
{ "language": "en", "url": "https://math.stackexchange.com/questions/73949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Statistics: Maximising expected value of a function of a random variable An agent wishes to solve his optimisation problem: $ \mbox{max}_{\theta} \ \ \mathbb{E}U(\theta S_1 + (w - \theta) + Y)$, where $S_1$ is a random variable, $Y$ a contingent claim and $U(x) = x - \frac{1}{2}\epsilon x^2$. My problem is: how do I 'get rid' of '$\mathbb{E}$' to get something I can work with? Thanks
Expanding the comment by Ilya: $$\mathbb{E}\,U(\theta S_1 + (w - \theta) + Y) =\mathbb{E} (\theta S_1 + (w - \theta) + Y) - \frac{\epsilon}{2} \mathbb{E} \left((\theta S_1 + (w - \theta) + Y)^2\right) $$ is a quadratic polynomial in $\theta$ with negative leading coefficient (for $\epsilon > 0$). Its unique point of maximum is found by setting the derivative to $0$.
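Carrying this out explicitly (added sketch): write $A = S_1-1$ and $B = w+Y$, so $\theta S_1 + (w-\theta) + Y = \theta A + B$. Then $$\mathbb{E}\,U(\theta A+B) = \theta\,\mathbb{E}A + \mathbb{E}B - \frac{\epsilon}{2}\left(\theta^2\,\mathbb{E}A^2 + 2\theta\,\mathbb{E}[AB] + \mathbb{E}B^2\right),$$ and setting the $\theta$-derivative to zero gives (assuming $\epsilon>0$ and $\mathbb{E}A^2>0$) $$\theta^\ast = \frac{\mathbb{E}A - \epsilon\,\mathbb{E}[AB]}{\epsilon\,\mathbb{E}A^2} = \frac{\mathbb{E}S_1 - 1 - \epsilon\,\mathbb{E}\big[(w+Y)(S_1-1)\big]}{\epsilon\,\mathbb{E}\big[(S_1-1)^2\big]}.$$ All the expectations here are plain numbers, which is exactly what "getting rid of $\mathbb{E}$" amounts to.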
{ "language": "en", "url": "https://math.stackexchange.com/questions/74035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are any two Cantor sets, "Fat" and "Standard", Diffeomorphic to each Other? All: I know any two Cantor sets, "fat" and "standard" (middle-third), are homeomorphic to each other. Still, are they diffeomorphic to each other? I think yes, since they are both $0$-dimensional manifolds (###), and any two $0$-dimensional manifolds are diffeomorphic to each other. Still, I saw an argument somewhere claiming that the two are not diffeomorphic. The argument is along these lines: letting $f$ be the characteristic function of the standard Cantor set $C$, $f$ integrates to $0$, since $C$ has (Lebesgue) measure zero; but if $g$ were a diffeomorphism into a fat Cantor set $C'$, then $f(g(x))$ is the indicator function for $C'$, so its integral is positive. And (my apologies, I don't remember the TeX for an integral and I don't have enough points to look at someone else's edit; if someone could please let me know) by the chain rule, the change of variables $\int_0^1 f(g(x))g'(x)dx$ should equal $\int_a^b f(x)dx$, but $g'(x)>0$ and $f(g(x))>0$. So the change-of-variables formula is contradicted by the assumption of the existence of the diffeomorphism $g$ between $C$ and $C'$. Is this right? (###)EDIT: I realized after posting (simultaneously with "Lost in Math"*) that the Cantor sets $C$ are not $0$-dimensional manifolds (for one thing, $C$ has no isolated points). The problem then becomes, as someone posted in the comments, one of deciding if there is a differentiable map $f:[0,1]\rightarrow [0,1]$ taking $C$ to $C'$ with a differentiable inverse. *I mean, who isn't, right?
I think I found an answer to my question, coinciding with the idea in Ryan's last paragraph: absolute continuity takes sets of measure zero to sets of measure zero. A diffeomorphism defined on $[0,1]$ is Lipschitz continuous, since it has a bounded first derivative (by continuity of $f'$ and compactness of $[0,1]$), so it is absolutely continuous, and therefore it takes sets of measure zero to sets of measure zero. Hence no diffeomorphism of $[0,1]$ can carry the standard Cantor set, which has measure zero, onto a fat Cantor set, which has positive measure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Explicit formula for Fermat's 4k+1 theorem Let $p$ be a prime number of the form $4k+1$. Fermat's theorem asserts that $p$ is a sum of two squares, $p=x^2+y^2$. There are different proofs of this statement (descent, Gaussian integers,...). And recently I've learned there is the following explicit formula (due to Gauss): $x=\frac12\binom{2k}k\pmod p$, $y=(2k)!x\pmod p$ ($|x|,|y|<p/2$). But how to prove it? Remark. In another thread Matt E also mentions a formula $$ x=\frac12\sum\limits_{t\in\mathbb F_p}\left(\frac{t^3-t}{p}\right). $$ Since $\left(\dfrac{t^3-t}p\right)=\left(\dfrac{t-t^{-1}}p\right)=(t-t^{-1})^{2k}\mod p$ (and $\sum t^i=0$ when $0<i<p-1$), this is, actually, the same formula (up to a sign).
Here is a high level proof. I assume it can be done in a more elementary way. Chapter 3 of Silverman's Arithmetic of Elliptic Curves is a good reference for the ideas I am using. Let $E$ be the elliptic curve $y^2 = x^3+x$. By a theorem of Hasse (often attributed to Weil), the number of points on $E$ over $\mathbb{F}_p$ is $p- \alpha- \overline{\alpha}$ where $\alpha$ is an algebraic integer satisfying $\alpha \overline{\alpha} =p$, and the bar is complex conjugation. (If you count the point at $\infty$, then the formula should be $p - \alpha - \overline{\alpha} +1$.) Let $p \equiv 1 \mod 4$. We will establish two key claims: Claim 1: $\alpha$ is of the form $a+bi$, for integers $a$ and $b$, and Claim 2: $2a \equiv \binom{(p-1)/2}{(p-1)/4} \mod p$. So $a^2+b^2 = p$ and $a \equiv \frac{1}{2} \binom{(p-1)/2}{(p-1)/4} \pmod p$, matching the formula in the question (note that $(p-1)/2 = 2k$ and $(p-1)/4 = k$). Proof sketch of Claim 1: Let $R$ be the endomorphism ring of $E$ over $\mathbb{F}_p$. Let $j$ be a square root of $-1$ in $\mathbb{F}_p$. Two of the elements of $R$ are $F: (x,y) \mapsto (x^p, y^p)$ and $J: (x,y) \mapsto (-x,jy)$. Note that $F$ and $J$ commute; this uses that $j^p = j$, which is true because $p \equiv 1 \mod 4$. So $F$ and $J$ generate a commutative subring of $R$. If you look at the list of possible endomorphism rings of elliptic curves, you'll see that such a subring must be of rank $\leq 2$, and $J$ already generates a subring of rank $2$. (See section 3.3 in Silverman.) So $F$ is integral over the subring generated by $J$. That ring is $\mathbb{Z}[J]/\langle J^2=-1 \rangle$, which is integrally closed. So $F$ is in that ring, meaning $F = a+bJ$ for some integers $a$ and $b$. If you understand the connection between Frobenius actions and points of $E$ over $\mathbb{F}_p$, this shows that $\alpha = a+bi$. Proof sketch of Claim 2: The number of points on $E$ over $\mathbb{F}_p$ is congruent modulo $p$ to the negative of the coefficient of $x^{p-1}$ in $(x^3+x)^{(p-1)/2}$ (section 3.4 in Silverman; the sign comes from $\sum_{x \in \mathbb{F}_p} x^{p-1} \equiv -1 \bmod p$). This coefficient is $\binom{(p-1)/2}{(p-1)/4}$. So $$- \alpha - \overline{\alpha} \equiv -\binom{(p-1)/2}{(p-1)/4} \mod p$$ or $$2a \equiv \binom{(p-1)/2}{(p-1)/4} \mod p$$ as desired. Remark: This is closely related to the formula Matt E mentions. For $u \in \mathbb{F}_p$, the number of square roots of $u$ in $\mathbb{F}_p$ is $1+\left( \frac{u}{p} \right)$. So the number of points on $E$ is $$p+\sum_{x \in \mathbb{F}_p} \left( \frac{x^3+x}{p} \right).$$ This is essentially Matt's sum; if you want, you could use the elliptic curve $y^2 = x^3-x$ in order to make things exactly match, although that would introduce some signs in other places. So your remark gives another (morally, the same) proof of Claim 2.
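A quick numerical check of the resulting formula (added; uses Python's built-in modular inverse):

```python
from math import comb, factorial

def centered(a, p):
    """Representative of a mod p lying in (-p/2, p/2)."""
    a %= p
    return a - p if a > p // 2 else a

# Gauss: for p = 4k+1, x = binom(2k,k)/2 mod p and y = (2k)! * x mod p
# (both centered) satisfy x^2 + y^2 = p.
for p in [5, 13, 17, 29, 37, 41]:
    k = (p - 1) // 4
    x = centered(comb(2 * k, k) * pow(2, -1, p), p)
    y = centered(factorial(2 * k) * x, p)
    print(p, x, y, x * x + y * y == p)   # True for every prime listed
```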
{ "language": "en", "url": "https://math.stackexchange.com/questions/74132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 4, "answer_id": 3 }
what's the relationship between a.s. continuous and m.s. continuous? Suppose that $X(t)$ is a stochastic process on $T$ with $EX(t)^2<+\infty$. We give two kinds of continuity of $X(t)$: (1) $X(t)$ is continuous a.s.; (2) $X(t)$ is m.s. continuous, i.e. $\lim\limits_{\triangle t \rightarrow 0}E(X(t+\triangle t)-X(t))^2=0$. Then, what is the relationship between these two kinds of continuity?
I don't know if there is a clear relation between the two concepts. For example, Brownian motion satisfies 1 and 2, but a Poisson process only satisfies 2 (although it satisfies a weaker form of condition 1, namely continuity in probability). The question is: what do you want to do with those processes? Regards
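To see the Poisson claim concretely (added computation): if $N(t)$ is a Poisson process with rate $\lambda$, then $N(t+\Delta t)-N(t)\sim \mathrm{Poisson}(\lambda\,\Delta t)$, so $$E\big(N(t+\Delta t)-N(t)\big)^2 = \lambda\,\Delta t + (\lambda\,\Delta t)^2 \longrightarrow 0 \quad (\Delta t\to 0),$$ and $N$ is m.s. continuous, even though almost every sample path has jumps, so a.s. continuity fails. In the other direction, even with $EX(t)^2<\infty$, a.s. continuity need not give m.s. continuity: rare but very large excursions can keep the second moment of the increments from vanishing. So neither condition implies the other in general.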
{ "language": "en", "url": "https://math.stackexchange.com/questions/74203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }