The time it takes an object to fall through a hole drilled through the Earth's center. By Newton's law of attraction, the motion is governed by the differential equation $$\frac {dv}{dt} = -\frac {gr}{R}$$ where $r$ is the distance from the center of the Earth, $R$ is the radius of the Earth, and $g$ is the acceleration due to gravity. What is the time that it takes for an object dropped through one end of the hole to reach the other end? Solving the differential equation for $t$ in terms of $v$ is futile because when the ball reaches the other end its velocity is $0$. I also tried expressing $v$ as a function of $r$ via $$\frac{dv}{dt} = \left(\frac{dv}{dr}\right)\left(\frac{dr}{dt}\right) = v \frac{dv}{dr}$$ but it did not help: while I can calculate the speed of the object halfway through, at that point $r=0$, so I cannot compute the time for one fourth of an oscillation (the period of one oscillation being the time it takes for the object to get back to the end of the hole from which it was dropped).
Write it as $\frac {d^2r}{dt^2}+\frac {gr}R=0$ and you have a harmonic oscillator. The angular frequency is $\sqrt {\frac {g}R}$ and the period is $2 \pi\sqrt{\frac { R}{g}}$.
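As a quick numerical sanity check of that closed form (a sketch only, assuming the standard values $g \approx 9.81\,\mathrm{m/s^2}$ and $R \approx 6.371\times 10^{6}\,\mathrm{m}$, which are not given in the question), the end-to-end travel time is half a period:

```python
# Half-period of the tunnel-through-the-Earth oscillator, T/2 = pi*sqrt(R/g).
# g and R are assumed standard values for the Earth, not data from the question.
import math

g = 9.81        # m/s^2
R = 6.371e6     # m
half_period = math.pi * math.sqrt(R / g)
print(half_period / 60)   # roughly 42 minutes from one end of the hole to the other
```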
{ "language": "en", "url": "https://math.stackexchange.com/questions/496501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Two exercises on set theory about Cantor's equation and the von Neumann hierarchy Good evening to all. I have two exercises that I tried to solve without much success: 1) Is it true or false that if $\kappa$ is an uncountable cardinal number then $\omega^\kappa = \kappa$, where the exponentiation is ordinal exponentiation? I found out that the smallest solution of this Cantor equation is $\epsilon_0$, but it is countable. I was thinking about $\omega_1$, but I don't know how to calculate something like $\omega^{\omega_1}$ and see whether or not it equals $\omega_1$. 2) Does there exist an ordinal number $\alpha > \omega$ satisfying $\alpha \times \alpha \subseteq V_\alpha$, where $V_\alpha$ is the von Neumann hierarchy? Thanks in advance
You really just have to apply the definitions, in both cases, and see what happens. For the first one, recall that the definition in the case of exponentiation is as follows: $$\alpha^0=1;\ \alpha^{\beta+1}=\alpha^\beta\cdot\alpha;\ \alpha^\delta=\sup\{\alpha^\gamma\mid\gamma<\delta\}.$$ Since $\kappa$ is a limit ordinal, we have that $\omega^\kappa=\sup\{\omega^\delta\mid\delta<\kappa\}$. It is a nontrivial claim that one should prove first, but $|\omega^\delta|=|\delta|$ for every infinite ordinal $\delta$. Now it's simple: since $\kappa$ is a cardinal, we have that $|\omega^\delta|=|\delta|<\kappa$ for $\delta<\kappa$, and so the supremum in the definition has to be $\kappa$. As for the second problem, note that $\alpha\times\alpha=\{\{\{x\},\{x,y\}\}\mid x,y\in\alpha\}$. So you just have to figure out for which $\alpha$ every $\{\{x\},\{x,y\}\}$ with $x,y\in\alpha$ has rank smaller than $\alpha$. (Hint: what are the ranks of $\{x\}$ and $\{x,y\}$?)
{ "language": "en", "url": "https://math.stackexchange.com/questions/496591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Bounds for $\binom{n}{cn}$ with $0 < c < 1$. Are there really good upper and lower bounds for $\binom{n}{cn}$ when $c$ is a constant $0 < c < 1$? I know that $\left(\frac{1}{c^{cn}}\right) \leq \binom{n}{cn} \leq \left(\frac{e}{c}\right)^{cn}$.
Hint: as $n$ goes to infinity, you can approximate $\binom{n}{cn}$ by the entropy function as follows: $$ \binom{n}{cn}\approx e^{nH(c)}, $$ where $H(c)=-c\log(c)-(1-c)\log(1-c)$ (natural logarithm).
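A quick numerical illustration of this hint (a sketch only; it assumes natural logarithms in $H$ and rounds $cn$ to the nearest integer):

```python
# Compare log C(n, cn) with n*H(c); their ratio approaches 1 as n grows.
import math

def H(c):
    return -c * math.log(c) - (1 - c) * math.log(1 - c)

c = 0.3
for n in (10, 100, 1000, 10000):
    k = round(c * n)
    exact = math.log(math.comb(n, k))
    print(n, exact, n * H(c), exact / (n * H(c)))
```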
{ "language": "en", "url": "https://math.stackexchange.com/questions/496707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
If A and B are compact then so is A+B Suppose we have a topological vector space $X$ and $A, B\subset X$. We define $A+B$ to be the set of the sums $a+b$ where $a\in A$ and $b\in B$. We should prove that $A+B$ is also compact if $A$ and $B$ are compact. But the union of arbitrarily many compact sets isn't compact in general. Thus: why is the statement still true?
The function $F: X \times X \to X$ defined by $F(x,y)=x+y$ is continuous, and $A \times B$ is compact in $X \times X$. The image of a compact set under a continuous function is compact, and $F(A \times B) = A+B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/496786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sum of low rank tensors How large can the rank of the sum of $k$ low-rank $m\times m\times\dots \times m$ tensors, each of rank $t$, be? Is there a good upper bound?
Rank $t$ means there are $t$ (but no more) linearly independent rows/columns. If the $t$ independent rows of the first matrix are also linearly independent from the $t$ independent rows of the second one, you get rank $2t$, and so on. Therefore, your answer is $r_{MAX} = \min \{ m, kt \}$. Without knowing anything more about these matrices, their sum will have rank anywhere between $0$ and $r_{MAX}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/496852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Conversion Question A heavy rain fell on a city for 26 minutes, at a rate of 3.9mm/hr. If the area of the city is 244km^2, how many gallons of water fell on the city that day?
First convert minutes into hours. So we have: $26 \text{ min} = \frac{26}{60}\text{h} = \frac{13}{30}\text{h}$ Now multiply the time by the rate to find how much rain fell: $\frac{13}{30} \times \frac{39}{10} = \frac{507}{300}\text{mm}$. If the rain didn't soak into the ground and stayed on the surface, then the depth of the "pool" would be $\frac{507}{300}\text{mm}$. Now multiply that by the area to get the volume. First we convert everything to meters, because the meter is the "middle" unit between kilometer and millimeter: $\frac{507}{300 000} \times 244 000 000 = 412 360\ m^3$ Now just convert to gallons: $$412 360\ m^3 \approx 108933987 \text{ gallons}$$
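A short numerical re-check of that arithmetic (a sketch; it assumes US gallons of about 3.785 litres, which is why the last digits may differ slightly from the figure above):

```python
# Rain volume: depth (mm -> m) times area (km^2 -> m^2), then m^3 -> US gallons.
depth_m = (26 / 60) * 3.9 / 1000          # 26 min at 3.9 mm/hr, converted to metres
area_m2 = 244 * 1_000_000                 # 244 km^2 in m^2
volume_m3 = depth_m * area_m2
gallons = volume_m3 * 1000 / 3.785411784  # litres per m^3 divided by litres per gallon
print(volume_m3, gallons)                 # about 412 360 m^3 and 1.09e8 gallons
```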
{ "language": "en", "url": "https://math.stackexchange.com/questions/496971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Better solution to an elementary number theory problem I found the following problem about elementary number theory: Alice designed a program such that it takes an integer $n>1$, and then it factors it as $a_0^{e_0}a_1^{e_1}a_2^{e_2}\cdots a_n^{e_n}$. It then calculates $r=a_0e_0+a_1e_1+\cdots+a_ne_n+1$ and repeats the process with $r$. Show that it always ends in a periodic sequence, and find every possible period. My solution: Define $f(x)$ as the result of one iteration of the program, and write $\rightarrow$ for an iteration. Lemma 1.1: For every natural $x\ge3$, we have $x^2>2x+2$. Proof: $$x-1\ge2\implies x^2\ge2x+3>2x+2$$ Lemma 1.2: For all naturals $s\ge2$ and $x\ge3$, we have $x^s>sx+2$. Proof: Inductive hypothesis: $x^s\ge sx+2$ for every natural $x\ge3$. Therefore $$x^{s+1}\ge sx^2+2x > sx+x+2=x(s+1)+2$$ Using Lemma 1.1 as the base case, we finish the proof. Lemma 1.3: For every pair of different primes $p,q$, we have $f(pq)\le pq$. Proof: Let wlog $p<q$. Then $p\ge2$, $q\ge3$, $p\neq q$, so $$p-1\ge1$$ $$q-1\ge2$$ $$pq-q-p+1\ge2$$ $$pq\ge q+p+1$$ $$pq\ge f(pq)$$ And equality is attained iff $p=2,q=3$. Lemma 1.4: For every $x\in\mathbb{N}$ with more than $2$ prime factors and at least two different ones, we have $f(x)<x$. Proof: From Lemma 1.3 we know that if $p$ and $q$ are different primes, $$pq\ge q+p+1$$ Multiply both sides by a prime $x$: $$xpq\ge x(q+p)+x>p+q+1+x$$ So $xpq>f(xpq)$ for every prime $x$. Since we want to show that $f(ypq)<ypq$ $\forall y\ge2$, we can multiply by its prime factors $x$ a finite number of times and likewise obtain the result. Then, from Lemma 1.2 we know that $p^k>f(p^k)+1$ for all $k\ge2$, $p\ge3$. But we don't know what happens when $k=1$ or $p=2$. If $p=2$ then $$f(2^k)=2k+1$$ Since we know $2^3\ge 2\cdot3+2$, we use the proof of Lemma 1.2 to verify $2^s>2s+2$ for $s>3$. Then we are left with $$f(2)=3\rightarrow 4\rightarrow 5\rightarrow 6\rightarrow 6$$ $$f(4)=5\rightarrow 6\rightarrow 6$$ $$f(8)=7\rightarrow 8$$ And $2^s>f(2^s)+1$ for $s>3$. So if $p+1$ is a number of the form $2^s$ where $s>3$, then $p\rightarrow p+1=2^s\rightarrow 2s+1<2^s-1=p$, and we note that after two iterations the number has decreased. If $k=1$, then we have $$f(p)=p+1$$ If $p\neq2$ (we have dealt with that case before), then $f(p)=2k$ for some integer $k\ge2$. Then, either $2k$ is a power of $2$ or it is not. If it is, then we have dealt with that case before and we are done. If it is not, then $2k$ has two or more prime factors. If it has exactly $2$, we know from Lemma 1.3 that equality is achieved iff $2k=6$ and that the iteration decreases otherwise. If it has more than $2$, since we have dealt with perfect powers before we know it has at least two different factors, and Lemma 1.4 says there are no further equality cases and the iteration is decreasing. Answer: $7\rightarrow8\rightarrow7$ and $6\rightarrow6$. Question: Was there an easier, faster way to solve this?
Lemma 1: If $ x $ is a prime, then $f(x) = x+1$. Lemma 2: If $x = mn$ (not necessarily coprime), then $f(mn) - 1 = [f(m) - 1] + [f(n) - 1 ]$. I consider this the crux of the function. This is easily proved (once you know it). Now check that $f(4) = 5$, $f(6) = 6$ and $f(8) = 7$. Lemma 3: $f(x) \leq x+1$. Lemma 4: If $ x\geq 9$ is composite, then $f(x) \leq x-2 $. Let $x=mn$, then we want to show that $f(mn) \leq f(m) + f(n) -1 \leq m+n+1 \leq mn-2.$ The last inequality is true because $(m-1)(n-1) \geq 4$. Lemma 5: If $x \geq 9$, then $f(f(x)) \leq x-1 < x$. Corollary: Every sequence of $f^{(i)}(x)$ is eventually periodic. Corollary: If $x \geq 9$, then $x$ doesn't appear in a periodic sequence. We just have to check $f(x)$ for $x=1$ to 8 to find the various periodic sequences.
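As a purely computational illustration of the conclusion (not part of the proof), one can iterate the map directly; the sketch below assumes sympy is available for the factoring step.

```python
# Iterate f(n) = sum(p*e over the prime factorization p^e of n) + 1 and report
# the cycle that each starting value eventually falls into.
from sympy import factorint

def f(n):
    return sum(p * e for p, e in factorint(n).items()) + 1

def cycle(n):
    seen = []
    while n not in seen:
        seen.append(n)
        n = f(n)
    return seen[seen.index(n):]

print({frozenset(cycle(n)) for n in range(2, 200)})   # two cycles: {6} and {7, 8}
```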
{ "language": "en", "url": "https://math.stackexchange.com/questions/497071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to prove $\displaystyle\sum_{k=1}^{n}\sin\frac{1}{(k+1)^2}\le\ln{2}$ Show that $$\sum_{k=1}^{n}\sin\dfrac{1}{(k+1)^2}\le\ln{2}$$ I think this is a nice inequality, and the idea is maybe to use $$\sin{x}<x$$ so that $$\sum_{k=1}^{n}\sin{\dfrac{1}{(k+1)^2}}<\sum_{k=1}^{n}\dfrac{1}{(k+1)^2}<\dfrac{\pi^2}{6}-1\approx 0.644<\ln{2}$$ But this problem is from a middle school students' competition, so they don't know $$\sum_{k=1}^{\infty}\dfrac{1}{k^2}=\dfrac{\pi^2}{6}$$ so I think this problem has other nice methods? Thank you; this problem is from http://tieba.baidu.com/p/2600561301
Once you noticed that $\sin x\le x$ you do not need to know the exact value of $\sum_{k=1}^{\infty}\frac{1}{k^2}.$ Instead, you can approximate it by evaluating the first few terms and estimating the tail. More precisely, $$\sum_{k=2}^{n}\frac{1}{k^2}=\left(\frac{1}{4}+\frac{1}{9}+\frac{1}{16}\right)+\frac{1}{5^2}+\cdots+\frac{1}{n^2}\le 0.4236...+\frac{1}{4\cdot 5}+\frac{1}{5\cdot6}+\cdots+\frac{1}{(n-1)\cdot n}=$$ $$=0.4236...+\frac{1}{4}-\frac{1}{n}\le 0.68< \ln 2.$$
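For what it is worth, a direct numerical check of the bound (an illustration only, not a proof):

```python
# Partial sums of sin(1/(k+1)^2) stay well below ln 2.
import math

s = sum(math.sin(1 / (k + 1) ** 2) for k in range(1, 100_000))
print(s, math.log(2))   # roughly 0.64 versus 0.693...
```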
{ "language": "en", "url": "https://math.stackexchange.com/questions/497138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate the possible combinations for an eight-character password Can anyone help me with this question: Calculate the possible combinations for: 1) An eight-character password consisting of upper- and lowercase letters and at least one numeric digit $(0–9)$? 2) A ten-character password consisting of upper- and lowercase letters and at least one numeric digit $(0–9)$? Thanks
Hint: For both parts: one character can be one of 62 (= 26[A-Z]+26[a-z]+10[0-9]) symbols, and choosing one character of the password is independent of choosing the other characters. Then: #(alphanumeric passwords containing at least one numeric digit) = #(all alphanumeric passwords) - #(alphanumeric passwords which contain no digit)
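A small sketch of the resulting counts (assuming exactly the 62-symbol alphabet described above, with "at least one digit" handled by the complementary count):

```python
# Passwords over 62 symbols that contain at least one digit.
total_8 = 62**8 - 52**8      # eight characters: about 1.65e14
total_10 = 62**10 - 52**10   # ten characters: about 6.9e17
print(total_8, total_10)
```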
{ "language": "en", "url": "https://math.stackexchange.com/questions/497257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
discretization of probability measures Suppose I am given a probability measure $\nu$ over the positive reals. For a fixed $n\in\mathbb{N}$, we set $\lambda := \frac{1}{n}$ and $A_n:=\{\lambda k, k=0,\dots\}$. Now we look at a certain discretization of $\nu$ on $A_n$: $$\nu_n(\{0\}):= \int_0^\lambda (1-nx)d\nu (x) \\ \nu_n(\{\lambda k\}):=\int_{(k-1)\lambda}^{(k+1)\lambda}(1-|nx-k|)d\nu(x)$$ If we have a function $g:A_n\to\mathbb{R}$, I want to show that the following equation holds: $$\int g(x)d\nu_n(x)=\int F^n(g)d\nu(x) \tag{1}$$ where $F^n(g):=(1-\kappa)g(\lfloor nx\rfloor \lambda)+\kappa g((\lfloor nx\rfloor +1)\lambda)$. Note that for continuous $g$, we have $F^n(g)\to g$ pointwise. For $(1)$, we surely need the result that if you have a measure of the form $\mu_f=\int fd\mu$, then $\int gd\mu_f=\int gf d\mu$. Writing the LHS out, I do not see why this should equal the RHS. Thanks in advance for your help.
Both sides of (1) are linear functionals of the sequence $(g(x))_{x\in A_n}$, the LHS because the support of $\nu_n$ is included in $A_n$ and the RHS because $F^n(g)$ depends on $(g(x))_{x\in A_n}$ only. Fix some $k\geqslant0$. The coefficient of $g(k/n)$ on the LHS is $\nu_n(\{k/n\})$. The coefficient of $g(k/n)$ on the RHS is $(1-\kappa)\nu(B_k)+\kappa\nu(B_{k-1})$, where $B_i=\{x\mid\lfloor nx\rfloor=i\}=[i/n,(i+1)/n)$. By definition, $$ \nu_n(\{k/n\})=\int_{-1/n}^{1/n}(1-n|x|)\mathrm d\nu(x). $$ I fail to see how this can coincide with $$ (1-\kappa)\nu(B_k)+\kappa\nu(B_{k-1})=\int_{-1/n}^{1/n}u_\kappa(x)\mathrm d\nu(x), \qquad u_\kappa(x)=(1-\kappa)\mathbf 1_{x\gt0}+\kappa\mathbf 1_{x\lt0}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/497328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $\frac{1}{(n+1)!}(1+\frac{1}{(n+1)}+\frac{1}{(n+1)^2}+\frac{1}{(n+1)^3}+\cdots)=\frac{1}{n!n}$ Show that $\frac{1}{(n+1)!}(1+\frac{1}{(n+1)}+\frac{1}{(n+1)^2}+\frac{1}{(n+1)^3}+\cdots)=\frac{1}{n!n}$, (here $n$ is a natural number) Maybe easy, but I cannot see it. Thanks in advance! Alexander
Since $$1+x+x^2+\cdots=\frac1{1-x}\qquad\text{for }|x|<1,$$ and here $x=\frac1{n+1}<1$, we have $$1+\frac1{n+1}+\frac1{(n+1)^2}+\cdots=\frac1{1-\frac1{n+1}}=\frac1{\frac{n+1-1}{n+1}}=\frac{n+1}{n}$$ Now, $$\frac1{(n+1)!}\cdot\left(1+\frac1{n+1}+\frac1{(n+1)^2}+\cdots\right)=\frac1{(n+1)!}\cdot\frac{n+1}{n}=\frac1{n!n}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/497429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Expressing a $3\times 3$ determinant as the product of four factors I am attempting to express the determinant below as a product of four linear factors $$\begin{vmatrix} a & bc & b+c\\ b & ca & c+a\\ c & ab & a+b\\ \end{vmatrix} = a\begin{vmatrix} ca & c+a\\ ab & a+b\\ \end{vmatrix} - bc\begin{vmatrix} b & c+a\\ c & a+b\\ \end{vmatrix} +(b+c)\begin{vmatrix} b & ca\\ c & ab\\ \end{vmatrix}$$ This is as far as I get before it gets too messy $$ =a^3(c-b)-bc\{(b-c)(b+c)+a(b-c)\}+a(b-c)(b+c)^2 $$ But I can't seem to arrive at the answer in the book, which is given as $$ (a-b)(b-c)(c-a)(a+b+c) $$ Am I doing something wrong? I have been stuck on this question for three days. Thanks in advance!
Start by adding the 1st to the 3rd column to create a column of $a+b+c$'s. Then subtract 3rd row from 2nd & 1st ones to make two out of three entries in that column zero. Now expand wrt that column.
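If it helps to double-check the book's answer before redoing the algebra, here is a small symbolic verification (a sketch assuming sympy is available):

```python
# Verify that the determinant factors as (a-b)(b-c)(c-a)(a+b+c).
import sympy as sp

a, b, c = sp.symbols('a b c')
M = sp.Matrix([[a, b*c, b + c],
               [b, c*a, c + a],
               [c, a*b, a + b]])
print(sp.factor(M.det()))   # equals (a-b)(b-c)(c-a)(a+b+c), possibly with signs arranged differently
```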
{ "language": "en", "url": "https://math.stackexchange.com/questions/497498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to solve $e^x = 2$ I know that $\ln(x)$ is the inverse of the exponential function $e^x$. So I thought that $$ e^x=2 \Leftrightarrow x = \ln(2) $$ but my calculator says $x = \ln(2) + 2 i \pi n$, where $n \in \mathbb{Z}$. What have $e^x$ and $\ln(x)$ to do with the unit circle?
This comes from the complex analysis ideas. If we know that $x$ is real valued, then clearly $x = \log 2$. However if $x$ is allowed to be complex valued, things become trickier. We know that for any $k\in\mathbb{Z}$, $\,\,e^{i\theta + 2k\pi i} = e^{i\theta}$. You can work this out yourself with Euler's formula. So the issue is that any complex number has infinitely many equivalent expressions (when you write it in polar form). What this boils down to is the fact that if you write your complex number in polar form, if you add $2\pi$ to the angle, you end up at the same exact complex number. Since we don't know exactly which range of $\theta$ you wanted for the complex number, we can't specify one value of $\theta$. So take your example. $$e^x = e^{x+2k\pi i} = 2.$$ Let's rewrite $2$ slightly as $e^{\log 2 + 2l\pi i}$ for $l\in\mathbb{Z}$ and move it over to the left. We get $$e^{x+2k\pi i - \log 2 - 2l\pi i} = 1 = e^{2m\pi i}$$ And so we know that $x+2k\pi i - \log 2 - 2l\pi i = 2m\pi i$ for some integer $m$. Since we don't care about the individual values of $k,l,m$ let's forget about them and just group them together. What we end up with is $x = \log 2 + 2n\pi i$ like you have. Ostensibly the problem is that since there is no unique representation for a complex number in polar form (which implicitly invokes the complex exponential), there cannot possibly be any unique way to take logarithms. Inverses of multivalued functions are ill-conceived. To get around this, what we usually do is a priori restrict ourselves to certain ranges of $\theta$ (say $[0,2\pi)$). In this case, there is a unique polar expression and we can easily do the inversion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/497559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Predicate Logic Translating "All But One" I need to translate an English sentence including the phrase "all but one" into predicate logic. The sentence is: "All students but one have an internet connection." I'm not sure how to show "all but one" in logic. I could say $\forall x ((x \neq a) \rightarrow I(x))$ $I(x)$ being "$x$ has an internet connection" But that clearly wouldn't work in this case, as we don't know which student it is. I could say that $\exists x(\neg I(x))$ But it doesn't seem like that has the same meaning. Thanks in advance for any help you can give!
"For all but one $\;x\;$, $\;P(x)\;$ holds" is the same as "there exists a unique $\;x\;$ such that $\;\lnot P(x)\;$ holds. Normally the notation $\;\exists!\;$ is used for "there exists a unique" (just like $\;\exists\;$ is used for "there exists some"). If your answer is allowed to use $\;\exists!\;$, then the above gives you the answer. If not, then there are different ways to write $\;\exists!\;$ in terms of $\;\exists\;$ and $\;\forall\;$. The one I like best, which also results in the shortest formula, can be found in another answer of mine.
{ "language": "en", "url": "https://math.stackexchange.com/questions/497639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Lemma for the construction of the reciprocity map I do not understand the highlighted part in the following proof, namely that $N(\tilde x)=1$. To give some context, this proof is taken from Neukirch's Algebraic Number Theory, where $\tilde K$ indicates the maximal unramified extension of $K$ (and the same for $L$). It would suffice to know that $\sigma_n$ and $\tau_i$ commute, but I don't know if it's true in general...
Ok, solved, they commute because $\sigma\in G(\tilde{L}\mid L)$ and $\tau_i\in G(\tilde L\mid\tilde K)$, which are two normal subgroups of $G(\tilde L\mid K)$ whose intersection is trivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/497706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
lcm in $\mathbb{Z}[\sqrt{-5}]$ does not exist I need to show that the lcm of $2$ and $1+\sqrt{-5}$ does not exist in $\mathbb{Z}[\sqrt{-5}]$. I have no idea how to start; I was thinking about when an lcm can fail to exist!
Let $a=x+y\sqrt{-5}$ be an LCM of $2$ and $1+\sqrt{-5}$. Then $(a)$ is equal to the ideal $I:=(2) \cap (1+\sqrt{-5})$. It can be seen that $I$ has index $12$ in $\mathbb{Z}[\sqrt{-5}]$. But $(a)$ has index $Nm(a)=x^2+5y^2$, which cannot be $12$, since $x^2+5y^2=12$ has no integer solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/497775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Prove $3|n(n+1)(n+2)$ by induction I tried proving inductively but I didn't really go anywhere. So I tried: Let $3|n(n+1)(n+2)$. Then $3|n^3 + 3n^2 + 2n \Longrightarrow 3|n(n(n+3) + 2)$ But then?
Consider the binomial $(x+1)^{n+2}$. The coefficient of the $x^3$ term is $${n+2\choose 3}={(n+2)!\over 3!(n-1)!}={n(n+1)(n+2)\over 6}$$ Every coefficient of $(x+1)^n$ is an integer for $n$ an integer, therefore $6|n(n+1)(n+2)$ and thus $3|n(n+1)(n+2)$. Note that this mechanism can apply to any integer, including showing that $1000|n(n+1)(n+2)\cdots(n+998)(n+999)$. As noted elsewhere, this property of the binomial coefficients is provable by induction, which demonstrates that an inductive proof is truly the proper way to show the question's property. In binomial terms, the inductive step occurs by noting that $${n\choose k-1}+{n\choose k}={n+1\choose k}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/497859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 4 }
Prime and Factorization, prime divisor property Let $p$ be prime. Then if $p|ab$ then $p|a$ or $p|b$. Proof: Suppose $p$ does not divide $a$. Then $\gcd (a,p) = 1$ since $p$ is prime. $$ 1 = ma + np $$ $$ b = mab +npb$$ Since $p|mab$ and $p|npb$ then $p|b$. I have a problem understanding that $p|mab$; can anyone show me how this works?
Note that the initial condition is $p|ab$, so from this it follows that $ab = pk$, where $k$ is some integer. Assuming that $\gcd(p,a) = 1$, it follows from the Bezout Lemma that $$1 = ma + np$$ $$b = mab + npb$$ This is something that you've already done; now make the substitution and get: $$b = mpk + npb$$ $$b = p(mk + nb)$$ Because the term in the parentheses is an integer, it follows that $p|b$ Q.E.D.
{ "language": "en", "url": "https://math.stackexchange.com/questions/498028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Mean of probability distribution function. The current chapter I am working on is continuous random variables. I know that the mean value of a continuous random variable is: $$ E[X] =\int_{-\infty}^{\infty} xf(x) dx $$ That being said, my question is to find $E[X]$ from the following table: $$ \begin{array}{c|ccc} X & -3 & 6 & 9 \\ \hline f(x) & 1/6 & 1/2 & 1/3 \end{array} $$ I want to confirm that this is in fact a DISCRETE question simply included in my continuous problem set and thus $E[X]$ will equal the following: $$E[X] = (-3 * 1/6) + (6 * 1/2) + (9 * 1/3) = 5.5$$ Additionally, I have solved $E[X^2]$ to equal the following: $$E[X^2] = ((-3)^2 * 1/6) + (6^2 * 1/2) + (9^2 * 1/3) = 46.5$$ In summary, my concern is that this seemingly discrete variable has been placed in my continuous problem set and I would like to confirm both that conclusion and my methodology for calculating my means. Thanks to all!
Yes, it is right. However, and just for fun, there is an alternative definition of $E[X]$: $$\mu_X=E[X]=\int\limits_{0}^{+\infty}{\big(1-F_X(x)\big)\,dx} -\int\limits_{-\infty}^{0}{F_X(x)\,dx}$$ and one can prove that it is equivalent to the other definition (a good exercise). This definition is general for any random variable, discrete or continuous or neither. All you need is to compute $F(x)$ from $f(x)$, and you are done.
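A crude numerical illustration of that alternative formula on the three-point distribution from the question (a sanity check only, using a simple midpoint Riemann sum):

```python
# E[X] via  integral_0^inf (1 - F(x)) dx  -  integral_{-inf}^0 F(x) dx
# for P(X = -3) = 1/6, P(X = 6) = 1/2, P(X = 9) = 1/3.
import numpy as np

xs, ps = np.array([-3.0, 6.0, 9.0]), np.array([1/6, 1/2, 1/3])

def F(t):                     # cumulative distribution function
    return ps[xs <= t].sum()

h = 1e-3
pos = sum((1 - F(t)) * h for t in np.arange(h / 2, 9, h))     # 1 - F vanishes past x = 9
neg = sum(F(t) * h for t in np.arange(-3 + h / 2, 0, h))      # F vanishes below x = -3
print(pos - neg)                                              # approximately 5.5 = E[X]
```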
{ "language": "en", "url": "https://math.stackexchange.com/questions/498096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Verifying finite simple groups The classification of finite simple groups required thousands of pages in journals. The end result is that a finite group is simple if and only if it is on a list of 26 sporadic groups and several families of groups. Usually in classification theorems proving that the items on the list do what they're supposed to do is far simpler than proving that the list is complete. Is that the case here? How hard would it be for someone who only knows basic group theory to verify that the groups on the list really are finite simple groups? Update: To break the question up a bit, which parts of the verification would be easiest, hardest, tedious but elementary, etc.? For example, the prime cyclic groups are simple and trivial to verify.
Probably the best source for this would be the (graduate level) textbook The Finite Simple Groups by R.A. Wilson. It is under 300 pages and covers all of the finite simple groups. It proves simplicity of all of them. It proves existence and uniqueness of nearly all of them. It describes interesting structure of most of them. I have found its explanations to be fairly simple and not to require a lot of background. If you are only interested in some of the finite simple groups (the alternating, the classical, the chevalley, the sporadics) then there is usually a better set of books (different sets of books for each type), but if you are interested in all of them and want any hope of finishing in a timely manner, then this is the book for you!
{ "language": "en", "url": "https://math.stackexchange.com/questions/498162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Show that the equation $x^2+xy-y^2=3$ does not have integer solutions. Show that the equation $$x^2+xy-y^2=3$$ does not have integer solutions. I solved the equation for $x$: $x=\displaystyle \frac{-y\pm\sqrt{y^2+4(y^2+3)}}{2}$ $\displaystyle =\frac{-y\pm\sqrt{5y^2+12}}{2}$ I was then trying to show that $\sqrt{5y^2+12}$ can not be an integer using $r^2\equiv 12 \pmod{5y^2}$. I got stuck here.
Note that $$x^2+xy-y^2=(x-2y)^2+5(xy-y^2)=(x-2y)^2\qquad({\rm mod}\>5)\ .$$ But $$0^2=0,\quad(\pm1)^2=1,\quad(\pm2)^2=-1\qquad({\rm mod}\>5)\ ,$$ which implies that $3$ is not a quadratic residue modulo $5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/498236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Volume calculation with n-variables integrals Given $A=(a_{i,j})_{1\le i,j \le n}$ an invertible matrix of size $n \times n$, and given $T$ the domain in $\mathbb{R}^n$ defined by the following inequalities: $\alpha_i \le \sum_{j=1}^{n}{a_{i,j}x_j} \le \beta_i$. (a). How can I calculate the volume $V(T)$? (b). Given $f(x)=\sum_{i=1}^n{c_i x_i}$ for constants $c_1,...,c_n$, how can I prove that $\int_T{f\,dT}=\frac{V(T)}{2}\sum_{i=1}^{n}{d_i(\beta_i + \alpha_i)}$, where $d_1,...,d_n$ are constants? For (a): I tried to substitute the variables ($x_i$ becomes $y_i$ so that $y=Bx$, where $B$ is the matrix of partial derivatives with respect to each variable), and then I got stuck with the Jacobian calculation, but I think that the integral will be $V(T)=(\beta_1 - \alpha_1)(\beta_2 - \alpha_2)...(\beta_n - \alpha_n)$ Thank you!
Note that $x \in T$ iff $Ax \in \prod_{i=1}^n [\alpha_i, \beta_i]$, that is $AT = \prod_{i=1}^n [\alpha_i, \beta_i]$ so we have, by the integral transformation formula \begin{align*} V(T) &= \int_T \,dx\\ &= \int_{AT} |\det A^{-1}|\, dy \\ &= \frac 1{|\det A|} \cdot \prod_{i=1}^n (\beta_i - \alpha_i) \end{align*} To integrate the linear $f$ we argue along the same lines \begin{align*} \int_T f(x)\, dx &= \int_{AT} f(A^{-1}y)\cdot |\det A^{-1}|\, dy\\ \end{align*} Now note, that $y \mapsto f(A^{-1}y)$ is linear as a composition of two linear functions, so there are $d_i$ such that $f(A^{-1}y) = \sum_i d_iy_i$ for each $y$. This gives \begin{align*} \int_T f(x)\, dx &= \int_{AT} f(A^{-1}y)\cdot |\det A^{-1}|\, dy\\ &= \frac 1{|\det A|} \int_{\prod_j [\alpha_j, \beta_j]} \sum_i d_iy_i\, dy\\ &= \sum_i \frac {d_i}{|\det A|} \int_{\prod_j [\alpha_j, \beta_j]} y_i\, dy\\ &= \sum_i \frac{d_i}{|\det A|} (\beta_1 - \alpha_1) \cdots (\beta_{i-1}-\alpha_{i-1}) \cdot (\beta_{i+1} - \alpha_{i+1}) \cdots (\beta_n -\alpha_n) \cdot \int_{\alpha_i}^{\beta_i} y_i\, dy_i\\ &= \sum_i \frac{d_i}{|\det A|} (\beta_1 - \alpha_1) \cdots (\beta_{i-1}-\alpha_{i-1}) \cdot (\beta_{i+1} - \alpha_{i+1}) \cdots (\beta_n -\alpha_n) \cdot \frac{\beta_i^2 - \alpha_i^2}2\\ &= \sum_i \frac{d_i}{2|\det A|} \prod_j (\beta_j - \alpha_j) \cdot (\beta_i + \alpha_i)\\ &= \frac{V(T)}2 \cdot \sum_i d_i(\alpha_i + \beta_i) \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/498315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f:\mathbb{R}\to\mathbb{R}$ is defined as $f(x)=n\forall x=n\in\mathbb{N}$ Let $f:\mathbb{R}\to\mathbb{R}$ be defined as $ f(x) := \begin{cases} x, & \text{if}\ x\in \mathbb N,\\\\ 0, & \text{else,} \end{cases} $ and $T=\mathbb{N}\cup\{n+1/n:n\in\mathbb{N}\}$. The function $f$ is continuous on $\mathbb{N}$ with respect to the usual metric on the reals, as any function is continuous as every point is an isolated point. And $f$ is discontinuous on $T$ as $|f(n)-f(n+1/n)|=n>\epsilon=1/2$ say. Am I right?
Actually, your proof is correct but not written down properly. Let $\epsilon = \frac{1}{2}$; then we claim there is no $\delta > 0$ with $$|f(x) - f(y)| < \epsilon \qquad \forall\ |x-y| < \delta$$ Let $\delta_0$ be such a choice and choose $N := \lceil \frac{1}{\delta_0} \rceil + 1$; then $$\left|N - \left(N+\frac{1}{N}\right)\right| = \frac{1}{N} < \delta_0$$ But $$\left|f(N) - f\left(N+\frac{1}{N}\right)\right| = N > \epsilon$$ So $f$ is not uniformly continuous on $T$, but it is locally continuous. In short: $f$ is (uniformly) continuous on $\mathbb N$, locally continuous on $T$, and, as a function on $\mathbb R$, discontinuous at every $x\in\mathbb N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/498385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Damped oscillation fit We have some measurement data (plot omitted). The expected behavior of the data is a damped oscillation: $$y=a e^{d t} \cos(\omega t+\phi) + k$$ Where: $t$ Current time $y$ Current deflection $a$ Amplitude $d$ Damping factor $\omega$ Angular velocity $\phi$ Phase shift $k$ Offset The task is to fit the 5 parameters to match the real data. Our current approach does the following: - Find start values for all 5 parameters - Place the values into a system of equations - Iterate until the error gets below a given value In most cases this works well. But in some cases it fails (breaking off after 100 iterations). Now there are two possible options: 1) Suppose that the data is 'too bad' and give up 2) Find a better solution Does anyone have an idea of different ways to solve this?
You are trying to solve the harmonic inversion problem. That website contains code and programs for it.
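Since the question already describes an iterative least-squares fit, here is a minimal sketch of that approach using scipy.optimize.curve_fit (the synthetic data, noise level, and start values below are made up for illustration; this is not the harmonic-inversion code the answer refers to):

```python
# Least-squares fit of y = a*exp(d*t)*cos(w*t + phi) + k to noisy data.
import numpy as np
from scipy.optimize import curve_fit

def damped_osc(t, a, d, w, phi, k):
    return a * np.exp(d * t) * np.cos(w * t + phi) + k   # d < 0 for a decaying signal

# Synthetic measurements standing in for the real data.
t = np.linspace(0, 10, 500)
y = damped_osc(t, 2.0, -0.3, 4.0, 0.5, 1.0) + 0.05 * np.random.randn(t.size)

p0 = [1.0, -0.1, 3.5, 0.0, 0.0]                 # rough start values
params, cov = curve_fit(damped_osc, t, y, p0=p0, maxfev=10000)
print("fitted a, d, omega, phi, k:", params)
```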
{ "language": "en", "url": "https://math.stackexchange.com/questions/498472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
How to prove $\cos 36^{\circ} = (1+ \sqrt 5)/4$? Given $4 \cos^2 x -2\cos x -1 = 0$. Use this to show that $\cos 36^{\circ} = (1+ \sqrt 5)/4$, $\cos 72^{\circ} = (-1+\sqrt 5)/4$ Your help is greatly appreciated! Thanks
Hint: Look at the Quadratic Formula: The solution to $ax^2+bx+c=0$ is $x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}$ The equation is based on the fact that $$ \cos(5x)=16\cos^5(x)-20\cos^3(x)+5\cos(x) $$ and that $\cos(5\cdot36^\circ)=-1$ to get $$ 16\cos^5(36^\circ)-20\cos^3(36^\circ)+5\cos(36^\circ)+1=0 $$ Factoring yields $$ (\cos(36^\circ)+1)(4\cos^2(36^\circ)-2\cos(36^\circ)-1)^2=0 $$ We know that $\cos(36^\circ)+1\ne0$; therefore, $$ 4\cos^2(36^\circ)-2\cos(36^\circ)-1=0 $$ Deciding between the two roots of this equation is a matter of looking at the signs of the roots. For $\cos(72^\circ)$, use the identity $\cos(2x)=2\cos^2(x)-1$.
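A one-line numerical confirmation of the two closed forms (an illustration only):

```python
# cos 36 deg and cos 72 deg against (1 + sqrt(5))/4 and (sqrt(5) - 1)/4.
import math

print(math.cos(math.radians(36)), (1 + math.sqrt(5)) / 4)   # both approx 0.809017
print(math.cos(math.radians(72)), (math.sqrt(5) - 1) / 4)   # both approx 0.309017
```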
{ "language": "en", "url": "https://math.stackexchange.com/questions/498600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What is an odd function? I'm reading this term (odd function) in my numerical analysis book, but I have never heard of this. What does it mean that an function is odd ?
A function is said to be odd if changing the sign of the variable changes the sign of the function (keeping the absolute value the same). It is even if changing the sign of the variable does not change the function. We express this mathematically as: Odd: $f(-x)=-f(x)$ Even: $f(-x)=f(x)$ It is sometimes useful to know that any function can be expressed as the sum of an even function and an odd function $f(x)=\left(\frac{f(x)+f(-x)}{2}\right)+\left(\frac{f(x)-f(-x)}{2}\right)$ The "odd"ness of a function is therefore an antisymmetric property, while evenness is a symmetric property. There are occasions in other contexts where it is useful to split something into symmetric and antisymmetric parts. So although the terminology may seem a little odd, it actually expresses something which is quite deep, and introduces an idea which can be fruitful. For example determinants are antisymmetric functions. If the symmetric integral of an odd function exists, it is equal to zero, which sometimes saves a lot of work if you spot it - i.e. $\int_{-a}^{a}f(x)dx=0$ if $f(x)$ is an odd function. It is useful to know that the function $\sin x$ is odd, and $\cos x$ is even. The hyperbolic functions $\sinh x$ and $\cosh x$ are the decomposition of the exponential function $e^x$ into its odd and even parts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/498670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Given a covariance matrix, generate a Gaussian random variable Given an $M \times M$ desired covariance matrix, $R$, and a desired number of sample vectors, $N$, calculate an $N \times M$ Gaussian random matrix, $X$. Not really sure what to do here. You can calculate the joint pdf given a mean, $\mu$, and covariance. So a $2 \times 2$ covariance matrix is defined as: $$ \text{Cov}[X] = \begin{bmatrix}\text{Cov}(X_1) & \text{Cov}(X_1,X_2) \\ \text{Cov}(X_2,X_1) & \text{Cov}(X_2)\end{bmatrix} $$ But I'm not sure how to get the mean from that.
To generate one vector $u\in\mathbb{R}^M$, first generate any vector $v$ from $\mathcal{N}(0,I)$ (or $M$ independent normally distributed variables with mean $0$ and variance $1$). We now need a matrix $L$ such that $LL^T=R$; the easiest way to get one is a Cholesky decomposition. Now $u=Lv\sim\mathcal{N}(0,R)$
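A minimal numpy sketch of that recipe (it assumes the desired covariance $R$ is symmetric positive definite so that the Cholesky factor exists; the example matrix is made up):

```python
# Draw N samples from N(0, R) using a Cholesky factor of R.
import numpy as np

def gaussian_samples(R, N, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(R)                  # lower triangular, L @ L.T == R
    V = rng.standard_normal((N, R.shape[0]))   # rows ~ N(0, I)
    return V @ L.T                             # rows ~ N(0, R)

R = np.array([[2.0, 0.8],
              [0.8, 1.0]])
X = gaussian_samples(R, 100_000)
print(np.cov(X, rowvar=False))                 # close to R
```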
{ "language": "en", "url": "https://math.stackexchange.com/questions/498747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
combinatorics: Repeating a procedure, will $0$ ever show up? On a black board we have written the numbers $1$ $2$ $...$ $50$ in a list. Each time we clear two numbers and write their difference instead. We continue this until there is only one number left. Is it possible that the number is zero? I guess that the answer must be negative but I don't know why. The question is from a math competition for 8th graders.
When we clear two numbers and write their difference two cases could happen: If $a$ and $b$ (the two numbers we've cleared) have the same parity then $a-b$ will be even. Otherwise, $a-b$ will be odd. Now, from $1$ to $50$ there are exactly $25$ odd numbers and $25$ even numbers. If the first case happens, then we either will have $23$ odd numbers and $25$ even numbers, or we will have $25$ odd numbers and $23$ even numbers. In both cases after adding $a-b$ we will have an odd number of odds in our sum. So the parity of the sum will be odd. (Just look at what we will have mod $2$). If the parity is different, then we will have $24$ odd numbers and $24$ even numbers. But $a-b$ is odd in this case, therefore again the number of odds will be odd and the parity of the sum will be odd. Now apply this idea to each step and you'll see that the answer is negative because what we'll have is odd but 0 is even.
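A randomized illustration of this parity invariant (it only demonstrates the claim for random choices; the argument above is the actual proof):

```python
# Repeatedly replace two numbers by their (absolute) difference; the parity of
# the final number never changes, so starting from 1..50 it is always odd.
import random

def final_number(nums):
    nums = list(nums)
    while len(nums) > 1:
        a = nums.pop(random.randrange(len(nums)))
        b = nums.pop(random.randrange(len(nums)))
        nums.append(abs(a - b))
    return nums[0]

print({final_number(range(1, 51)) % 2 for _ in range(1000)})   # always {1}, never 0
```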
{ "language": "en", "url": "https://math.stackexchange.com/questions/498830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Did I write down the derivative product rule correctly for $g(x)=(f(x))^2$ Suppose that $f(4)=5$ and $f'(4)=5$ . Use the product rule to determine the value of $g'(4)$ where $g(x)=(f(x))^2$ So I'm writing this problem as: $g'(x)=(f(x))\frac{d}{dx}f(x)+\frac{d}{dx}f(x)(f(x))$ If anybody can verify that I wrote it down correctly, I would really appreciate it.
Yes, you've written it correctly. Note that after simplification, the result is $2f(x) f'(x)$, which also agrees with what the chain rule would say in this context; in particular, $g'(4)=2f(4)f'(4)=2\cdot 5\cdot 5=50$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/499003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Defining a set using a predicate I encountered this notation in a proof and I'm a bit confused on the message it's trying to convey: $ B = \{ n \in \mathbb N \mid \neg P(n) \} $ Here's the notation itself within the greater context of the proof (or proof 'fragment' more like): ... $ P(n): $ predicate-definition Assume $ \exists n \in \mathbb N, \neg P(n) $ Then $ B = \{ n \in \mathbb N \mid \neg P(n) \} \;\;\;\;\;\;\;\; $ #Since $ B $ is non empty and a subset of $ \mathbb N $ it must have a least element Then $ \forall b \in B, \Rightarrow \neg P(b) $ Then $ \exists c \in B, \forall d \in B, c \leq d, \Rightarrow c - 1 \notin B \Rightarrow P(c - 1) $ ... My question is, what exactly is $ B $ ? Is it the set of all the natural numbers that make $ P(n) $ false?
That is exactly right. The notation $X=\{x\ |\ y\}$ means that $X$ is the set of all $x$ such that condition (or conditions) $y$ are satisfied. $x$ and $y$ can be more complex statements, as is the case in your example
{ "language": "en", "url": "https://math.stackexchange.com/questions/499096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove the limit $\lim_{n\to\infty}\left(\sqrt{n^2+1}-n\right)=0$. I need to prove that this converges to 0. Using the definition of the limit of a sequence helps for the normal problems, but for this I believe the triangle inequality is used at some point.... I let $S_n= \sqrt{n^2+1}-n$. Then $|S_n -S|<\epsilon \rightarrow |\sqrt{n^2+1}-n-0|<\epsilon$. I get to $n^2+1<(\epsilon+n)(\epsilon+n)$ and I'm stuck. By the definition I'm trying to find $n>N$.... How can I solve this by contradiction? (since the above led me nowhere)
Try this: $$0 < \frac{\sqrt{n^2+1}-n}{1} = \frac{(\sqrt{n^2+1}-n)(\sqrt{n^2+1}+n)}{\sqrt{n^2+1}+n} = \frac{1}{\sqrt{n^2+1}+n} < \frac{1}{2n}$$ and then prove that the transformed sequence converges to zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/499182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Projective space is not affine I read a proof that the projective space $\mathbb P_{R}^{n}$ is not affine for $n>0$ (Remark 3.14, p. 72, Algebraic Geometry I by Wedhorn, Gortz). It says that the canonical ring homomorphism from $R$ to $\Gamma(\mathbb P_{R}^{n}, \mathcal{O}_{\mathbb P_{R}^{n}})$ is an isomorphism. This implies that for $n>0$ the scheme $\mathbb P_{R}^{n}$ is not affine, since otherwise $\mathbb P_{R}^{n}=\operatorname{Spec} R$. I am not clear about why it is impossible to have $\mathbb P_{R}^{n}=\operatorname{Spec} R$. And in what sense is the $=$ meant here?
First, let us review the definition of an affine scheme. An affine scheme $X$ is a locally ringed space isomorphic to $\operatorname{Spec} A$ for some commutative ring $A$. This means that if one knows one has an affine scheme $X$, then all one has to do to recover $A$ such that $X=\operatorname{Spec} A$ is to take global sections of the structure sheaf, ie $A\cong\Gamma(X,\mathcal{O}_X)$. In order to prove that $\mathbb{P}^n_R$ is not affine, it suffices to show that $\operatorname{Spec}(\Gamma(\mathbb{P}^n_R,\mathcal{O}_{\mathbb{P}^n_R}))\cong \operatorname{Spec} R$ is not isomorphic to $\mathbb{P}^n_R$. This is due to a dimension argument- assume $R$ is noetherian, and $\dim R=d$. Then $\dim\mathbb{P}^n_R=d+n$, as $\dim R[x_1,\cdots,x_n]=d+n$. Unless $n=0$, the two cannot be isomorphic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/499264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Why does $\mathbf{Var}(X) = \mathbf{Var}(-X)$ for random variable $X$? Question from UCLA Math GRE study packet, Problem Set 2, Number 4: http://www.math.ucla.edu/~cmarshak/GREProb.pdf Let $X$ and $Y$ be random variables. Which of the following is always true? \begin{align} ...\\ (II) \ \mathbf{Var}(X) = \mathbf{Var}(-X)\\ ...\\ ...\\ \end{align} Answer says (II) is True. Why?
Let $X$ be a random variable and $\alpha \in \mathbb R$. We have \begin{align*} {\rm Var}(\alpha X) &= \def\E{\mathbb E}\E[(\alpha X)^2] - \E[\alpha X]^2\\ &= \E[\alpha^2 X^2] - \bigl(\alpha \E[X]\bigr)^2\\ &= \alpha^2 \bigl(\E[X^2] - \E[X]^2\bigr)\\ &= \alpha^2 {\rm Var}(X) \end{align*} In your case $\alpha = -1$, and $$ {\rm Var}(-X) = (-1)^2 {\rm Var}(X) = {\rm Var}(X). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/499341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Finding $\lim_{x\to 0}\frac{\tan x(1- \cos x)}{3x^2}$ How do I find the following limit: $$ \lim_{x\to 0}\frac{\tan x(1- \cos x)}{3x^2} $$ without using L'Hopital's rule? The reason I'm making a point of not using L'Hopital is that if I run the limit through Wolfram Alpha that's the method it uses, but we haven't gone through that yet so I'm guessing I should use something else. I don't really know what to do here. I've done quite a number of exercises on limits by now and I almost always get it right and know immediately what to do, but not this time. The only thing I can think of is to use $\tan x=\frac{\sin x}{\cos x}$. But I can't say that helps me much... That only gives me $$ \lim_{x\to 0}\frac{\tan x(1- \cos x)}{3x^2}=\lim_{x\to 0}\frac{\sin x(\cos^{-1}x- 1)}{3x^2} $$ Does that help me? What should I do next? Or should I start with something different?
Hint: $$1-\cos(x) = 2\sin(x/2)^2. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/499434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$|f|$ constant implies $f$ constant? If $f$ is an analytic function on a domain $D$ and $|f|=C$ is constant on $D$, why does this imply that $f$ is constant on $D$? Why is the image of $f$ not the circle of radius $C$?
The equation can be written as $f(z)\overline{f}(\overline{z})=C^2$. So $\overline{f}(\overline{z})=C^2/f(z)$ is an analytic function of $z$. That can only be analytic if it is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/499529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Why is $\sin(xy)/y$ continuous? My mates and I have been crunching this question for a while now. While we know that $\sin(xy)$ is continuous, $1/y$, as the other part of the function, clearly has a continuity gap at $y = 0$, though the function can be continued at $y = 0$ with $f(x,0) = 0$; why is that? We tried some things but are not getting to the important step that proves the matter.
The expression $$f(x,y):={\sin(xy)\over y}$$ is at face value undefined when $y=0$, but wait: When $y\ne 0$ one has the identity $${\sin(xy)\over y}=\int_0^x\cos(t\>y)\ dt\ .$$ Here the right side is obviously a continuous function of $x$ and $y$ in all of ${\mathbb R}^2$. It follows that the given $f$ can be extended continuously to the full plane in a unique way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/499572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
How can I show that $\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}^n = \begin{pmatrix} 1 & n \\ 0 & 1\end{pmatrix}$? Well, the original task was to figure out what the following expression evaluates to for any $n$. $$\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}^{\large n}$$ By trying out different values of $n$, I found the pattern: $$\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}^{\large n} = \begin{pmatrix} 1 & n \\ 0 & 1\end{pmatrix}$$ But I have yet to figure out how to prove it algebraically. Suggestions?
Powers of matrices occur in solving recurrence relations. If you write $$ \begin{pmatrix} x_{n+1} \\ y_{n+1} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix} \begin{pmatrix} x_n \\ y_n \end{pmatrix} $$ then clearly $$ \begin{pmatrix} x_{n} \\ y_{n} \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix}^n \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} $$ and also $$ x_{n+1}=x_n+y_n, \qquad y_{n+1}=y_n $$ from which you get $$ x_n = x_0 + n\ y_0, \qquad y_n = y_0 $$ The first column of $A^n$ is given by taking $x_0=1$ and $y_0=0$, and so is $(1 \ 0)^T$. The second column is given by taking $x_0=0$ and $y_0=1$, and so is $(n \ 1)^T$.
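A quick numerical spot-check of the pattern (an illustration only; the recurrence and induction arguments above are the proof):

```python
# Powers of [[1, 1], [0, 1]] match [[1, n], [0, 1]].
import numpy as np

A = np.array([[1, 1], [0, 1]])
for n in range(1, 6):
    print(n, np.linalg.matrix_power(A, n).tolist())   # [[1, n], [0, 1]]
```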
{ "language": "en", "url": "https://math.stackexchange.com/questions/499646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 7, "answer_id": 6 }
Explicit Formula for a Recurrence Relation for {2, 5, 9, 14, ...} (Chartrand Ex 6.46[b]) Consider the sequence $a_1 = 2, a_2 = 5, a_3 = 9, a_4 = 14,$ etc... (a) The recurrence relation is: $a_1 = 2$ and $a_n = a_{n - 1} + (n + 1) \; \forall \;n \in \mathbb{Z}_{\geq 2}$. (b) Conjecture an explicit formula for $a_n$. (Proof for conjecture pretermitted here) I wrote out some $a_n$ to try to cotton on to an idea or pattern. It seems bootless. $\begin{array}{cc} a_2 = 5 & a_3 = 9 & a_4 = 14 & a_5 = 20 & a_6 = 27\\ \hline \\ 5 = 2 + (2 + 1) & 9 = 5 + (3 + 1) & 14 = 9 + (4 + 1) & 20 = 14 + (5 + 1) & 27 = 20 + (6 + 1) \\ \end{array}$ The snippy answer only says $a_n = (n^2 + 3n)/2$. Thus, could someone please explicate the (missing) motivation or steps towards this conjecture? How and why would one envision this?
Write out the series for $a_{n}$ to start with. We have that $a_{n} = a_{n-1} + (n+1)\\ \quad = a_{n-2} + n + (n+1) \\ \quad = \ldots \\ \quad = a_{1} + 3 + 4 + \ldots + (n+1) \\ \quad = 2 + 3 + 4 + \ldots + (n+1) \\ \quad = \displaystyle \left(\sum_{i=1}^{n+1} i \right) - 1 \\ \quad = (n+1)(n+2)/2 - 1, \quad (\text{using the value for the sum of the integers between 1 and}\ n + 1) \\ \quad = (n^{2} + 3n)/2 + 1 -1 \\ \quad = (n^{2} + 3n)/2$
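A tiny check that the conjectured closed form agrees with the recurrence for small $n$ (an illustration only):

```python
# a_1 = 2, a_n = a_{n-1} + (n + 1); compare with (n^2 + 3n) / 2.
a = 2
assert a == (1 * 1 + 3 * 1) // 2
for n in range(2, 21):
    a += n + 1
    assert a == (n * n + 3 * n) // 2
print("closed form matches for n = 1..20")
```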
{ "language": "en", "url": "https://math.stackexchange.com/questions/499728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
sum of irrational numbers - are there nontrivial examples? I know that the sum of irrational numbers does not have to be irrational. For example $\sqrt2+\left(-\sqrt2\right)$ is equal to $0$. But what I am wondering is there any example where the sum of two irrational numbers isn't obviously rational like an integer and yet after, say 50 digits after the decimal point it turns out to be rational. Are there any such examples with something non-trivial going on behind the scenes?
If $a+b=q$, where $a,b\notin\mathbb{Q}$ and $q\in \mathbb{Q}$, then $a=q-b$, so just choose a rational number with a sufficiently long period of decimals and you will get what you want. On the other hand, this is still quite trivial, since here we just sum up $b$ and $q-b$ (in your question, $q=0$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/499784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Value and simplify I want to find the value of and simplify the square root of 36. The square root of 36 is 6, but I would like to know how to find the value and simplify it.
To find the square root of $37$ (say) involves a fair bit of calculation, and you will never get a numerically exact answer. But the situation is much different for $36$. For a rigorous proof that $\sqrt{36}=6$, all you need to do is (1) observe that $6^2=36$ and (2) note that $6$ is positive. Generally, a perfectly legitimate way to solve a mathematical problem is simply to guess the answer, if you are lucky enough to do so, and then verify that your guess is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/499962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Möbius tranformation taking reals to reals can be written with real coefficients I'm working on this one (from Ahlfors' Complex Analysis): A Fractional Linear Transformation of form $\displaystyle T(z) = \frac{a z + b}{c z + d}$ which takes the real numbers into the real numbers can be written in a way where all the coefficients are real. I'm pretty sure I know a way to get the answer. First, the thing is a constant exactly when $ad - bc = 0$. Then you're done. If not, some coefficient is not zero. in that case, you can change variables and stuff around in a way so that you can assume WLOG $a \ne 0$. Then, you pick three different real-valued inputs and show that all the three remaining coefficients $b, c, d$ are real-valued. My question is: Is there an easier way to do this, without using more advanced complex analysis (which I don't know yet)
It turns out that this is not an answer to the OP's question. I will leave this answer nevertheless; perhaps someone might find it useful for something. This was my fault for not reading the question more carefully. I will start the answer with an exercise. We will assume throughout that $a\neq 0$, and $ad-bc\neq 0$. Throughout, $L$ will denote a linear fractional transformation that sends the extended reals to the extended reals and that satisfies the conditions above. Exercise: Let $(z_1,z_2,z_3)$ and $(w_1,w_2,w_3)$ be two triples of extended complex numbers (the complex numbers with the point at infinity included). Then there is a unique linear fractional transformation $L$ such that $L(z_i)=w_i,i=1,2,3.$ Now to take on the literal statement of the question, if a linear fractional transformation takes the reals to the reals, then it is a linear function (and takes infinity to infinity), but this is not so interesting. More interesting (the one I assume that you want to prove) is that an LFT that takes the extended reals to the extended reals (the reals with infinity included) may be written to have all real coefficients. Since, by your remark, we may assume $a\neq 0$, multiply the numerator and denominator by $\frac{1}{a}$, so now without loss of generality, we may assume that $a=1$. We will now invoke our exercise. If we say where we send $\{0,1,\infty\}$, we completely determine the LFT. Again, if we send infinity to infinity, we have a linear function. Let us assume that we send infinity to some real number. If $L(z)=\frac{z+b}{cz+d}$, then $L(\infty)=\frac{1}{c}$. If we assume that $L$ sends extended reals to extended reals, then $\frac{1}{c}$ is real and therefore so is $c$. Likewise $L(0)=\frac{b}{d}$, which is also real if $d\neq 0$ (we will also need to deal with the case when $d=0$). Since our linear fractional transformation does not send infinity to infinity, the number $-\frac{d}{c}$ is real, since $L(-\frac{d}{c})=\infty$. Since $c$ is real, so is $d$. Since $\frac{b}{d}$ is real, so is $b$, since $d$ is real. Now suppose that $d=0$. Then the LFT is of the form $\frac{z+b}{cz}=\frac{1}{c}+\frac{b}{cz}$. We still know that $c$ is real; the previous argument for this did not require $d\neq 0$. Now pick $z=1$ and plug it into $\frac{1}{c}+\frac{b}{cz}$, which concludes this argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/500031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Any hint for this calculus optimization problem? What should I use? We have a wire mesh of 1000 m to fence 2 regions, one circular and one square. Say how the mesh should be cut so that: a) The sum of the areas of both fenced regions is maximum. b) The sum of the areas of both fenced regions is minimum. I don't know if I should use Lagrange multipliers or if it's only a one-variable calculus problem. Any hint or idea would be very appreciated. Thank you.
Yes, you can use Lagrange multipliers and yes, it can be expressed as a $1$-variable problem. Your pick. Let $x$ be the radius of the circle and $y$ the side of the square. We have the constraint $$2\pi x+4y=1000\tag{1}.$$ We want to maximize/minimize $$\pi x^2+y^2\tag{2}$$ subject to Condition (1). Now use Lagrange multipliers. Things should go smoothly. One must not forget to check the endpoints $x=0$ and $y=0$. Or else we can use (1) to say solve for $y$ in terms of $x$, and substitute for $y$ in (2). We then have a one-variable problem, to be solved in the usual introduction to calculus way, or some other way. We get a quadratic in $x$, with somewhat messy coefficients.
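If it helps to see the one-variable route carried out concretely, here is a sketch with sympy (taking $x$ as the circle's radius as in the answer; the endpoints are $x=0$, all mesh on the square, and $y=0$, i.e. $x=500/\pi$, all mesh on the circle):

```python
# Maximize/minimize pi*x**2 + y**2 subject to 2*pi*x + 4*y = 1000.
import sympy as sp

x = sp.symbols('x', nonnegative=True)
y = (1000 - 2 * sp.pi * x) / 4                       # solve the constraint for y
area = sp.pi * x**2 + y**2
critical = sp.solve(sp.Eq(sp.diff(area, x), 0), x)   # interior candidate
for c in critical + [sp.Integer(0), 500 / sp.pi]:    # add the two endpoints
    print(c, float(area.subs(x, c)))
# the interior critical point gives the minimum (about 3.5e4 m^2);
# y = 0, everything on the circle, gives the maximum (about 8.0e4 m^2)
```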
{ "language": "en", "url": "https://math.stackexchange.com/questions/500140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that absolute convergence implies unconditional convergence In the proof of "absolute convergence implies unconditional convergence" for a convergent series $\sum_{n=1}^{\infty}a_n$, we take a partial sum of the first $n$ terms of both the original series ($S_n$) and the rearranged series ($S_n'$) and compare them. Because the original series converges, we get some $N$ from the Cauchy criterion. Now I choose "$n$ large enough" such that $\{a_1,a_2,\dots,a_{N-1}\} \subseteq \{a_1',a_2',\dots,a_{n}'\}$. Then, if we compare both partial sums, the remaining $a_i$s are all for $i \geq N$, but still some $a_i'$s are remaining. The book claims that $$|\sum_{i=N}^{n}a_i-\sum_{i=N}^{n}a_i'| \leq |\sum_{i=N}^{n}|a_i||.$$ I could not understand how these $a_i'$s are getting removed. An infinite series is called "unconditionally convergent" if every rearrangement of it converges to the same value. If the series is absolutely convergent, then it can be shown that all rearrangements converge to the same value; in fact, the above theorem proves that.
If your text actually writes $|\sum_{i=N}^{n}a_i-\sum_{i=N}^{n}a_i'| \leq |\sum_{i=N}^{n}|a_i||$ then it is indeed mistaken. (Note also that even on its own the right hand side is strangely written: why do we need the outside absolute value?) For instance, suppose $a_N = \ldots = a_n = 0$. Then the inequality implies $\sum_{i=N}^n a_i' = 0$, but the assumptions do not give us that. I would suggest a somewhat different proof, namely the one which is given in Theorem 14.7 of these notes. Put $A = \sum_{n=0}^{\infty} a_n$. Fix $\epsilon > 0$ and let $N_0 \in \mathbb{N}$ be such that $\sum_{n=N_0}^{\infty} |a_n| < \epsilon$. Let $M_0 \in \mathbb{N}$ be sufficiently large so that the terms $a_{\sigma(0)},\ldots,a_{\sigma(M_0)}$ include all the terms $a_0,\ldots,a_{N_0-1}$ (and possibly others). Then for all $M \geq M_0$, $$|\sum_{n=0}^{M} a_{\sigma(n)} - A| = |\sum_{n=0}^{M} a_{\sigma(n)} - \sum_{n=0}^{\infty} a_n| \leq \sum_{n=N_0}^{\infty} |a_n| < \epsilon.$$ Indeed: by our choice of $M$ we know that the terms $a_0,\ldots,a_{N_0-1}$ appear in both $\sum_{n=0}^M a_{\sigma(n)}$ and $\sum_{n=0}^{\infty} a_n$ and thus get cancelled; some further terms may or may not be cancelled, but by applying the triangle inequality and summing the absolute values we get an upper bound by assuming no further cancellation. This shows $\sum_{n=0}^{\infty} a_{\sigma(n)} = \lim_{M \rightarrow \infty} \sum_{n=0}^M a_{\sigma(n)} = A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/500231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Problem: What is the remainder of $a^{72} \mod 35$ if $a$ is a whole number not having $5$ or $7$ as divisors. I have the following problem: Problem: What is the remainder of $a^{72} \mod 35$ if $a$ is a whole number not having $5$ or $7$ as divisors. If $a$ cannot be divided by $5$ or $7$ it cannot be divided by $35$, so the remainder is not $0$. However the remainder can be everything between $1$ and $34$ excluding numbers divisible by $5$ or $7$ ? Thanks for your time.
Since $a$ and $35$ are coprime ($\gcd(a, 35) = 1$), use Euler's theorem: $$a^{\phi(n)} \equiv 1 \pmod{n}.$$ Here $\phi(35)=\phi(5)\,\phi(7)=4\cdot 6=24$, so you get $$a^{\phi(35)} = a^{24} \equiv 1 \pmod{35}.$$ Thus, $$a^{72} = (a^{24})^3 \equiv 1^3 \equiv 1 \pmod{35}.$$ So $1$ is your remainder. Example: set $a = 24$. http://www.calculatorpro.com/calculator/modulo-calculator/
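A quick numerical sanity check of this conclusion (a sketch in Python; nothing beyond the statement above is assumed):

```python
from math import gcd

# a^72 = (a^24)^3 should leave remainder 1 mod 35 whenever gcd(a, 35) = 1
print(all(pow(a, 72, 35) == 1 for a in range(1, 1000) if gcd(a, 35) == 1))  # True
```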
{ "language": "en", "url": "https://math.stackexchange.com/questions/500295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that if $n>10$ then $\sum_{d\mid n}\phi(\phi(d))<\frac{3}5n$ Prove that if $n>10$ then $$\sum_{d\mid n}\phi(\phi(d))<\frac{3}5n,$$ where $\phi(n)$ is Euler's totient function.
We start with the identity: $$n=\sum_{d|n}\phi(d).$$ In order to prove it, just note that the right-hand side is a multiplicative function and therefore it is enough to check equality for prime powers only. Now the key point is to note that if $d|n$ then $\phi(d)|\phi(n)$ and therefore the left-hand side of our inequality is the sum of $\phi(m)$ where $m$ runs over some divisors of $\phi(n).$ In other words, $$\sum_{d\mid n}\phi(\phi(d))=\sum_{m|\phi(n)}\phi(m)-S=\phi(n)-S,$$ where $S$ is the sum over those divisors of $\phi(n)$ that are not of the form $\phi(d),$ $d|n.$ Now, if $n=p_1^{\alpha_1}\cdot p_2^{\alpha_2}\cdots p_k^{\alpha_k}$ then $\phi(n)=p_1^{\alpha_1-1}\cdot p_2^{\alpha_2-1}\cdots p_k^{\alpha_k-1}(p_1-1)\cdots(p_k-1)$ and those divisors that come from $\phi(d)$ are all of the form $m=p_1^{\beta_1}\cdot p_2^{\beta_2}\cdots p_k^{\beta_k}\prod_{i}(p_i-1).$ So if $n\ne 2^m$ then the divisor $D=\frac{p_1^{\alpha_1-1}\cdot p_2^{\alpha_2-1}\cdots p_k^{\alpha_k-1}(p_1-1)\cdots(p_k-1)}{2}$ is in $S$ and we can estimate: $$\phi(n)-S\le \phi(n)/2\le \frac{n}{2}< \frac{3}{5}n.$$ You are left to check $n=2^m$, which can easily be done directly.
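The claimed bound is also easy to check numerically for small $n$; here is a quick sketch (assuming SymPy is available for the totient and divisor functions):

```python
from sympy import totient, divisors

def lhs(n):
    # sum of phi(phi(d)) over the divisors d of n
    return sum(totient(totient(d)) for d in divisors(n))

print(all(5 * lhs(n) < 3 * n for n in range(11, 2001)))  # expected: True
```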
{ "language": "en", "url": "https://math.stackexchange.com/questions/500383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to calculate this complex integral $\int_0^\infty \frac{1}{q+i}e^{-(q+b)^2}\text{d}q$? (Please Help) I want to carry out the following integration $$\int_0^\infty \frac{1}{q+i}e^{-(q+b)^2}\text{d}q$$ which is trivial if calculated numerically with any value for b. But I really need to get an analytic expression for this integral. I would really appreciate it if you can help with this integral. Or if you can tell it's not possible to carry it out analytically, that is also helpful. Thanks in advance Huijie
$$\text{res}\left(\sqrt{\pi } e^{\frac{1}{4} z (4 b+z)} \text{erfc}\left(b+\frac{z}{2}\right)\cdot e^{-i z} \left(-2 \text{Ci}(z)-2 i \text{Si}(z)-2 \log (-z)+2 \log (z)-i \pi \right),\ \{z,\alpha \}\right)$$ Sorry, my LaTeX does not render correctly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/500456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Suppose that $G$ is a group with the property that for every choice of elements in $G$, $axb=cxd$ implies $ab=cd$. Prove that $G$ is Abelian. Suppose that $G$ is a group with the property that for every choice of elements in $G$, $axb=cxd$ implies $ab=cd$. Prove that $G$ is Abelian. (Middle cancellation implies commutativity). I am having trouble with this homework problem. The way I started was: Suppose $G$ is a group for which middle cancellation holds. Multiply $x$ by its inverse, $x^{-1}$ using middle cancellation so you have $ab=cd$. Thus $G$ is Abelian. I am unsure if this is the way to approach it.
Let $a,b\in G$. Then you have $(bab^{-1})ba(e)=(e)ba(a)=ba^2$, so by hypothesis you can conclude that $bab^{-1}e=ea$, that is $bab^{-1}=a$, which implies $ab=(bab^{-1})b=ba$, so $G$ is abelian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/500530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Prove that $\mathbb{A}^1 - \{p_1, \dots, p_n\}$ and $\mathbb{A}^1 - \{q_1, \dots, q_m\}$ are not isomorphic for $n \neq m$ I want to prove that $\mathbb{A}^1 - \{p_1, \dots, p_n\}$ and $\mathbb{A}^1 - \{q_1, \dots, q_m\}$ are not isomorphic for $n \neq m$, where the $p$'s and $q$'s are points. This is one of those theorems that seems intuitively obvious, but I don't know how to prove. Perhaps by induction?
I suppose $k$ is algebraically closed or at least that the points $p_i, q_j$ have coordinates in $k$. Otherwise the proof is more complicated. If they are isomorphic, then there exists an isomorphism of $k$-algebras ($k$ is the ground field): $$\phi: k[t, 1/(t-p_1), \dots, 1/(t-p_n)] \simeq k[t, 1/(t-q_1), \dots, 1/(t-q_m)].$$ This gives an isomorphism of their groups of units $$ k^* (t-p_1)^{\mathbb Z}\cdots(t-p_n)^{\mathbb Z} \simeq k^*(t-q_1)^{\mathbb Z}\cdots(t-q_m)^{\mathbb Z}$$ which is the identity on $k^*$ because $\phi$ is the identity on $k$. Taking the quotient by $k^*$ we obtain an isomorphism of groups $$ (t-p_1)^{\mathbb Z}\cdots(t-p_n)^{\mathbb Z} \simeq (t-q_1)^{\mathbb Z}\cdots(t-q_m)^{\mathbb Z},$$ hence $\mathbb Z^n\simeq \mathbb Z^m$ and $n=m$. Note that in general (when $n\ge 3$), $n=m$ is not sufficient to ensure the existence of an isomorphism between the two curves.
{ "language": "en", "url": "https://math.stackexchange.com/questions/500603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Re-write $1 \cdot x$ to $x$. Given the following bi-directional re-write rules (where $1$ is a constant, $^{-1}$ is a unary operator, $\cdot$ is a binary operator, and $x,y,z$ are arbitrary terms): $$\begin{align*} x \cdot 1 &= x \\ x \cdot (y \cdot z) &= (x\cdot y) \cdot z \\ x \cdot x^{-1} &= 1 \end{align*}$$ we're asked to prove that $1 \cdot x = x$ (i.e., there is a chain $t_0 \to t_1 \to t_2 \to \cdots \to t_n$ with $t_0 = 1\cdot x$, $t_n = x$, and $t_i \to t_{i+1}$ meaning one of the 3 equations above re-writes $t_i$ to $t_{i+1}$ (in either direction)). After staring at this for a while I'm beginning to doubt whether or not this is possible... can anyone a) confirm this is indeed possible and b) potentially nudge me in the right direction?
Another method of proof would be: Using the fact that $x = (x^{-1})^{-1}$, then $x^{-1} \cdot x = 1$ from axiom 3. \begin{align*} &1 \cdot 1 = 1 \\ &1 \cdot (x \cdot x^{-1}) = 1 \\ &(1 \cdot x) \cdot x^{-1} = x \cdot x^{-1} \\ &((1 \cdot x) \cdot x^{-1}) \cdot x = (x \cdot x^{-1}) \cdot x \\ &(1 \cdot x) \cdot (x^{-1} \cdot x) = x \cdot (x^{-1} \cdot x) \\ &(1 \cdot x) \cdot 1 = x \cdot 1 \\ &1 \cdot x = x \\ \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/500660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Proof by induction in categorical terms Given a category cartesian closed $C$ and a functor $F : C \to C$, I consider the initial object in the category of $F$-algebras. This initial object $\mu F$ seems to codify an "inductive object" in $C$. Now I'm trying to prove some property that I would "normally" prove by induction on the structure of $\mu F$. Let's suppose that the category is well-pointed. I want to prove a property that goes "for all $x,y : 1 \to \mu F$, $P(x,y)$" such that $P(x,y)$ is some mathematical statement in the language of category theory. In "set mathematics" I'd use induction on the structure of the set: first prove it for the base cases, and then use the induction hypothesis in the other cases. How can I prove this using the initiality of $\mu F$ rather than induction on its structure?
Martin's answer may be a bit deceptive, since it reduces to usual induction. But there is a brighter side to the story: if you only want to define a map from $\mu F$, to some object $A$, then all you need is to equip $A$ with $F$-algebra structure and let initiality of $\mu F$ apply. This is similar to defining a function by pattern-matching in a functional programming language. To extend from maps to predicates, I expect you'll need your ambient category to model the logic as well. Maybe by moving to some topos over $C$? I'm no expert on this, so cannot really say more, sorry.
{ "language": "en", "url": "https://math.stackexchange.com/questions/500721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Need an example of a Riemann-integrable function that is not continuous Every continuous function is Riemann integrable, but continuity is certainly not necessary, so I need an example of a Riemann-integrable function that is not continuous. I don't know anything about measure.
Define $f$ to be identically $0$ on $\Bbb{R}$, except that $f(0) = 1$. To see that this is Riemann integrable, note that the lower sums are all $0$ (suppose we're integrating on $[-1, 1]$, for clarity). But the upper sums can be made arbitrarily small, by choosing small intervals around $0$. More generally, a bounded function $f$ is Riemann integrable if and only if its set of points of discontinuity has Lebesgue measure $0$; so something like Thomae's function, which is discontinuous on $\Bbb{Q}$, is still Riemann-integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/500796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is $\lim_{x \to c}g(f(x)) = g(\lim_{x \to c}f(x))$ In this theorem (from the continuity section of the first chapter of a calculus textbook) If $g$ is continuous at $b$, and $\lim_{x \to c}f(x)=b$, then $\lim_{x\to c}g(f(x)) = g(\lim_{x \to c}f(x))=g(b)$. I would like an explanation (and proof) of $\lim_{x \to c}g(f(x)) = g(\lim_{x \to c}f(x))$.
The theorem mentioned is one of the most useful in calculating various limits. As an example if we need to calculate the limit of $\{f(x)\}^{g(x)}$ when $x \to a$ then we normally take logs. Say the limit is $L$ then $\log L = \log(\lim_{x \to a}\{f(x)\}^{g(x)})$ and then we exchange $\log$ and the limit operation. This is justified only because of this theorem. When we understand the usefulness and power of some result it makes us more curious to know the proof. So let's us now understand why the theorem is true. I will avoid use of $\epsilon$ and $\delta$ as they tend to lose the expressive charm of language and ideas. We are provided that $f(x) \to b$ as $x \to c$. This means that we can make the value of $f(x)$ arbitrarily close to $b$ by taking values of $x$ sufficiently close to $c$. Note that $\epsilon$ and $\delta$ are used to quantify the terms mentioned in italics in the last sentence. Next we are given that $g(x)$ is continuous at $b$. This means that we make values of $g(x)$ arbitrarily close to $g(b)$ by taking $x$ sufficiently close $b$. Next consider that while letting this $x \to b$ so that $g(x) \to g(b)$ we only try to use values of $x$ which form the values of the function $f$. Thus rather than $x$ tending to $b$ in any manner we prefer to have values of $x$ such that $x = f(t)$ (or $x$ takes values from range of $f$). Now we want values of $x$ sufficiently close to $b$ which can be done by taking values of $t$ sufficiently close to $c$. It thus follows that the value of $g(x) = g(f(t))$ can be made arbitrarily close to $g(b)$ by taking values of $t$ sufficiently close to $c$. In symbols $\lim_{t \to c}g(f(t)) = g(b) = g(\lim_{t \to c}f(t))$. While reading the above the reader may fail to understand why continuity of $g$ at $b$ is necessary. In that case let's assume that $g$ is not continuous at $b$ and ask what happens then? Well then we may have the case that $g(x) \to L$ as $x \to b$, but $L \neq g(b)$ and then we only have $\lim_{t \to c}g(f(t)) = L$. So please understand that in any case we must have $\lim_{t \to c}g(f(t)) = \lim_{x \to b}g(x)$ provided these limits exist. In the special case when $g$ is continuous at $b$ we can write this limit as $g(b)$ and noting that $b = \lim_{t \to c}f(t)$ we can further write $$\lim_{t \to c}g(f(t)) = \lim_{x \to b}g(x) = g(b) = g(\lim_{t \to c}f(t))$$ So in the above identity the first and last equality always holds. It is the middle equality which needs continuity of $g$ at $b$. Update: After having a look at this question I want to add some detail for the case when function $g$ is not continuous. In this case for the above argument to work it is essential that $f(x) \neq b$ in a certain neighborhood of $c$ (except possibly at $c$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/500867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Construct a Liapunov function for this system Construct a Liapunov function for the system (Determine the stability of $x \equiv 0$): I have an example:$$\begin{cases} & \mathrm { } \dot{x}= -x^3+xy^2\\ & \mathrm { } \dot{y}= -2x^2y-y^3 \end{cases} \tag{1}$$ Here's my solution: * *Let's try $V(x,y)=ax^2+by^2$. Then we have: $\dot{V}(x,y)=-2ax^4+2(a-2b)x^2y^2-2by^4$ * *When $a-2b<0$, for instance $a=b=1$. We have $V(x,y)=x^2+y^2$ such that: $$V(0,0)=0,V(x,y)>0, \forall (x,y) \ne (0,0)\ \text{and} \ \dot{V}(x,y)<0$$ *Hence, $x=y=0$ is asymptotically stable. ============================================================ What about the system: $$\begin{cases} & \mathrm{ } \dot{x}= y-3x^3\\ & \mathrm{ } \dot{y}= -x-7y^3 \end{cases} \tag{2}$$ How can we construct a Liapunov function for this system (Determine the stability of $x \equiv 0$ of the system). I'm sorry I fixed it!
I doubt that there is a closed-form global Liapunov function. Note that besides the centre at $(0,0)$ you have critical points at $(\pm \sqrt{3}/3,\mp 7 \sqrt{3}/9)$, which I think are saddle points. EDIT: It looks like e.g. $$ V(x,y) = {x}^{2}+{y}^{2}+ 8.75\,x{y}^{3}+3\,{x}^{2}{y}^{2}+ 5.25\,{x}^{3}y$$ is a Liapunov function for $(x,y)$ near $(0,0)$. EDIT: How could I find this? By symmetry, I wanted something invariant under $ x \to -x,\; y \to -y$, so terms of even total degree in $x$,$y$. So start with $x^2 + y^2 + \sum_{i=0}^4 a_i x^i y^{4-i}$. The lowest-order terms of $\dot{V}$ are of total degree $4$ in $x,y$. I then substituted $x = \cos(s)$, $y = \sin(s)$ into those terms, and chose $a_0, \ldots, a_4$ so that the result was negative for all $s \in [0,2\pi]$. The one above is far from the only possible choice (and I don't remember exactly why I chose that particular one).
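One can at least check numerically that this candidate works close to the origin; here is a sketch (Python/SymPy, sampling small circles around $(0,0)$ -- the radii and sample counts are arbitrary choices):

```python
import math
import sympy as sp

x, y = sp.symbols('x y', real=True)
V = x**2 + y**2 + sp.Rational(35, 4)*x*y**3 + 3*x**2*y**2 + sp.Rational(21, 4)*x**3*y
Vdot = sp.diff(V, x)*(y - 3*x**3) + sp.diff(V, y)*(-x - 7*y**3)  # derivative along the flow

bad = []
for r in (0.05, 0.1, 0.2):
    for k in range(60):
        t = 2*math.pi*k/60
        px, py = r*math.cos(t), r*math.sin(t)
        if Vdot.subs({x: px, y: py}) >= 0:
            bad.append((px, py))
print(bad)  # expected: [], i.e. Vdot < 0 at every sampled point near the origin
```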
{ "language": "en", "url": "https://math.stackexchange.com/questions/500932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Letting $S(m)$ be the digit sum of $m$, then $\lim_{n\to\infty}S(3^n)=\infty$? For any $m\in\mathbb N$, let $S(m)$ be the digit sum of $m$ in the decimal system. For example, $S(1234)=1+2+3+4=10, S(2^5)=S(32)=5$. Question 1 : Is the following true? $$\lim_{n\to\infty}S(3^n)=\infty.$$ Question 2 : How about $S(m^n)$ for $m\ge 4$, except some trivial cases? Motivation : I've got the following : $$\lim_{n\to\infty}S(2^n)=\infty.$$ Proof : The point of this proof is that there exists a non-zero digit between the ${m+1}^{th}$ digit and the ${4m}^{th}$ digit. If $$2^n=A\cdot{10}^{4m}+B, B\lt {10}^m, 0\lt A,$$ then $2^n\ge {10}^{4m}\gt 2^{4m}$ implies $n\gt 4m$. Hence, the left side is divisible by $2^{4m}$. Also, $B$ must be divisible by $2^{4m}$ because ${10}^{4m}=2^{4m}\cdot 5^{4m}$. However, since $$B\lt {10}^m\lt {16}^m=2^{4m},$$ $B$ cannot be divisible by $2^{4m}$ if $B\not=0$. If $B=0$, then the right side is divisible by $5$ but the left side is not divisible by $5$. Hence, we now know that there is a non-zero digit between the ${m+1}^{th}$ digit and the ${4m}^{th}$ digit. Since $2^n$ is not divisible by $5$, the first digit is not $0$. There exists a non-zero digit between the second digit and the fourth digit. Again, there exists a non-zero digit between the $5^{th}$ digit and the ${16}^{th}$ digit. By the same argument as above, if $2^n$ has more than $4^k$ digits, then $S(2^n)\ge k+1$. Hence, $$n\log {2}\ge 4^k-1\ \ \Rightarrow \ \ S(2^n)\ge k+1.$$ Now we know that $$\lim_{n\to\infty}S(2^n)=\infty$$ as desired. Now the proof is completed. However, I've been facing difficulty with the $m=3$ case. I've got $\limsup S(3^n)=\infty$. Proof : Suppose that $3^n$ has $m$ digits. Letting $l=\varphi({10}^m)+n$, then $$3^l-3^n=3^n(3^{\varphi({10}^m)}-1).$$ Since this is divisible by ${10}^m$, we know that the last $m$ digits of $3^l$ are equal to those of $3^n$. Hence, we get $\limsup S(3^n)=\infty$. However, I can't get $\liminf S(3^n)$. Can anyone help?
I think the following result is common knowledge: If $m$ is not a power of $10$, then for any positive integer $X$, there exists a power of $m$ which has a decimal expansion starting with $X$. Proof idea: In other words, we should prove that $X \cdot 10^s \leq m^n < (X+1) \cdot 10^{s}$ for some positive integers $s$ and $n$. This inequality is equivalent to $s + \log_{10}(X) \leq n \log_{10}(m) < s + \log_{10}(X+1)$. Now the set of fractional parts of $n \alpha$, $n \in \mathbb{Z}_+$, is dense in $[0,1]$ when $\alpha$ is irrational. Using this fact we can choose the fractional part of $n \log_{10}(m)$ so that the above inequality holds (note that $\log_{10}(m)$ is irrational when $m$ is not a power of $10$). In particular, taking $X = \underbrace{11\cdots1}_{k\text{ ones}}$ gives a power of $m$ whose decimal expansion starts with $k$ ones, so its digit sum is at least $k$. Thus $\limsup S(m^n) = \infty$ when $m$ is not a power of $10$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/501019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
How to prove: $a+b+c\le a^2+b^2+c^2$, if $abc=1$? Let $a,b,c \in \mathbb{R}$, and $abc=1$. What is the simple(st) way to prove inequality $$ a+b+c \le a^2+b^2+c^2. $$ (Of course, it can be generalized to $n$ variables).
By replacing $a, b, c$ by $|a|, |b|, |c|$ if needed, we may assume they are non-negative. Then just apply Jensen inequality (or AM-GM inequality) to deduce that $$ a+b+c = \sum_{\text{cyclic}} a^{4/3}b^{1/3}c^{1/3} \leq \sum_{\text{cyclic}} \frac{4}{6}a^{2} + \frac{1}{6}b^{2} + \frac{1}{6}c^{2} = a^2 + b^2 + c^2. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/501106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
What is the largest value of $n$ for which $2n + 1$ is a factor of $122 + n^{2}$? Given that $n$ is a natural number, what is its largest value such that $2n + 1$ is a factor of $122 + n^{2}$?
Note that $4\cdot(n^2+122) - (2n+1)(2n-1)=489$. Hence if $n^2+122$ is a multiple of $2n+1$, we also need $2n+1\mid 489=2\cdot 244+1$. On the other hand, $n=244$ does indeed lead to $n^2+122 = 122\cdot(2n+1)$, so the largest $n$ is indeed $244$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/501190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
solving second order differential equation Good evening, I'm looking for the solutions of a differential equation of the type $$x^3y''(x)+(ax^3+bx^2+cx+d)y(x) =0.$$ Thanks in advance.
Hint: $x^3y''(x)+(ax^3+bx^2+cx+d)y(x)=0$ $\dfrac{d^2y}{dx^2}+\left(a+\dfrac{b}{x}+\dfrac{c}{x^2}+\dfrac{d}{x^3}\right)y=0$ Let $r=\dfrac{1}{x}$ , Then $\dfrac{dy}{dx}=\dfrac{dy}{dr}\dfrac{dr}{dx}=-\dfrac{1}{x^2}\dfrac{dy}{dr}=-r^2\dfrac{dy}{dr}$ $\dfrac{d^2y}{dx^2}=\dfrac{d}{dx}\left(-r^2\dfrac{dy}{dr}\right)=\dfrac{d}{dr}\left(-r^2\dfrac{dy}{dr}\right)\dfrac{dr}{dx}=\left(-r^2\dfrac{d^2y}{dr^2}-2r\dfrac{dy}{dr}\right)\left(-\dfrac{1}{x^2}\right)=\left(-r^2\dfrac{d^2y}{dr^2}-2r\dfrac{dy}{dr}\right)(-r^2)=r^4\dfrac{d^2y}{dr^2}+2r^3\dfrac{dy}{dr}$ $\therefore r^4\dfrac{d^2y}{dr^2}+2r^3\dfrac{dy}{dr}+(dr^3+cr^2+br+a)y=0$
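The change of variables can be double-checked symbolically with any concrete test function; a sketch using SymPy (the test function $y(r)=re^r$ is an arbitrary choice):

```python
import sympy as sp

x, r = sp.symbols('x r', positive=True)
g = r * sp.exp(r)                                   # an arbitrary test function y(r)

lhs = sp.diff(g.subs(r, 1/x), x, 2)                 # d^2/dx^2 of y(1/x)
rhs = (r**4*sp.diff(g, r, 2) + 2*r**3*sp.diff(g, r)).subs(r, 1/x)
print(sp.simplify(lhs - rhs))                       # 0, confirming d^2y/dx^2 = r^4 y'' + 2 r^3 y'
```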
{ "language": "en", "url": "https://math.stackexchange.com/questions/501269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Dart Board Probability On a Dart board, with different areas labeled as: A, B, C, D, and each area different sizes. The probabilities of each area are: P(A)=25%, P(B)=50%, P(C)=12.5% and P(D)=12.5% What is P(~C or B)? I don't understand the "not C or B" term. If it is not C then it includes B already. My guess would be 87.5%
You understood it correctly. Besides recognizing that mathematically "or" includes both being true, another point was to read it as P((~C) or B) as opposed to P(~(C or B)), which would be 37.5%
{ "language": "en", "url": "https://math.stackexchange.com/questions/501344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lagrange basis functions as bases of Polynomials Space Suppose $L$ be a Vector Space of Polynomials of $x$ of degree $\leq n-1$ with coefficients in the field $\mathbb{K}$. Define $$g_i(x) :=\prod _ {{j=1},{j\neq i}}^n \frac{x-a_j}{a_i-a_j}$$ Show that the polynomials $g_1(x), g_2(x),...,g_n(x)$ form a basis of L. Furthermore, show that coordinates of polynomial $f$ in this basis are $\{f(a_1),f(a_2),...,f(a_n)\}.$ To show that the polynomials are the bases, I need to show that they span $L$ and that they are linearly independent. I thought showing that any element in the set $\{1,x,x^2,...,x^{n-1}\}$ belongs to the span of $\{g_1(x), g_2(x),...,g_n(x)\}$ would be enough to show the $g_1(x), g_2(x),...,g_n(x)$ spans $L.$ But I don't know how to do this! Also, linear independence seems to be tougher!
Choose an arbitrary $f\in L$. Let $$\tilde{f}(x) = \sum_{i = 1}^{n}f(a_i)g_i(x)\text{.} $$ Since $g_i(a_j)=\delta_{ij}$, for every $x\in \{a_1,\dots, a_n\}$ we have $f(x) = \tilde{f}(x)$, so the polynomial $p= f - \tilde{f}$ has $n$ distinct zeros while $\deg p \leq n-1$; hence $p$ is the zero polynomial. So the $g_i$ span $L$. We know that $\dim L = n$, so they must also be linearly independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/501407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Find zeros of this function: $$(3\tan(x)+4\cot(x))\cdot\sin(2x)$$ Do I have to multiply them and solve, or one by one, like: $$(3\tan(x)+4\cot(x))=0$$and$$\sin(2x)=0.$$
Observe that $\displaystyle(3\tan x+4\cot x)=0\implies 3\tan x+\frac4{\tan x}=0\iff3\tan^2x+4=0$, which is impossible for real $x$. If $\sin2x=0$, then $2x=n\pi$ where $n$ is any integer. If $n$ is even $=2m$ (say), then $2x=2m\pi, x=m\pi,\cot x=\cot m\pi=\frac{\cos m\pi}{\sin m\pi}=\frac{(-1)^m}0$, hence not finite. If $n$ is odd $=2m+1$ (say), then $2x=(2m+1)\pi, x=\frac{(2m+1)\pi}2,\tan x=\tan\frac{(2m+1)\pi}2=\frac{(-1)^m}0$, hence not finite. So, we don't have any real solution, which will be more evident below. Method $1:$ On multiplication, $\displaystyle(3\tan x+4\cot x)\sin2x=\left(3\frac{\sin x}{\cos x}+4\frac{\cos x}{\sin x}\right)2\sin x\cos x=6\sin^2x+8\cos^2x$. Using $\cos2x=2\cos^2x-1=1-2\sin^2x$, $6\sin^2x+8\cos^2x=3(1-\cos2x)+4(1+\cos2x)=7+\cos2x$. Do you know that for real $y,-1\le \cos y\le 1$? Method $2:$ Using $\displaystyle\sin2x=\frac{2\tan x}{1+\tan^2x}$ and $\displaystyle\cot x=\frac1{\tan x}$, $\displaystyle(3\tan x+4\cot x)\sin2x=\left(3t+\frac4t\right)\frac{2t}{1+t^2}=\frac{2(3t^2+4)}{1+t^2}$ where $t=\tan x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/501504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Find $\lim_{x\to 0} \frac{\tan16x}{\sin2x}$ Find $\lim_{x\to 0} \frac{\tan16x}{\sin2x}$ I'm a little confused on limit trig. Am i suppose to simplify tan or do I use the derivative quotient rule? Please Help!!!
Recall the following limits: * *$\lim_{x \to 0}\dfrac{\sin(ax)}{x} = a$ *$\lim_{x \to 0}\dfrac{\tan(bx)}{x} = b$ Note that $$\dfrac{\tan(16x)}{\sin(2x)} = \dfrac{\dfrac{\tan(16x)}{x}}{\dfrac{\sin(2x)}x}$$ Can you finish it off now?
{ "language": "en", "url": "https://math.stackexchange.com/questions/501609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Find all points where the tangent line has slope 1. Let $f(x)=x-\cos(x)$. Find all points on the graph of $y=f(x)$ where the tangent line has slope 1. (In each answer $n$ varies among all integers.) So far I've used the sum rule for derivatives, which gives $1+\sin(x)$. So do I put in $1$ for $x$ in $\sin(x)$? Please help!!
$f(x) = x - \cos x \tag{1}$ $f'(x) = 1 + \sin x \tag{2}$ $f'(x) = 1 \Rightarrow 1 = 1 + \sin x \Rightarrow \sin x = 0 \tag{3}$ $\sin x = 0 \Leftrightarrow x = k\pi, \tag{4}$ for $k \in \Bbb Z$, the set of integers. Hope this helps. Cheerio, and, as always, Fiat Lux!!!
{ "language": "en", "url": "https://math.stackexchange.com/questions/501678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
What is the probability that the student knew the answer to at least one of the two questions? A student takes a true-false examination containing 20 questions. On looking at the examination the student finds that he knows the answer to 10 of the questions, which he proceeds to answer correctly. He then randomly answers the remaining 10 questions. The instructor selects 2 of the questions at random and finds that the student answered both questions correctly. What is the probability that the student knew the answer to at least one of the two questions?
When complete, the following table shows the probabilities of getting $2$, $1$, or $0$ of the two questions right depending on the kinds of questions. $K$ refers to a question with a known answer and $U$ to a question with an unknown answer; the first letter is for the first question graded, and the second is for the second question graded. I’ve omitted most of the entries, leaving them for you to compute. $$\begin{array}{c|cc} &2&1&0\\ \hline KK&1&0&0\\ KU&a&&0\\ UK&b&&0\\ UU&c&&1/4 \end{array}$$ The student got both questions right, for a score of $2$, so the entries that you need are those in the first column: you need to determine what fraction of the total $1+a+b+c$ is contributed by the $KK$ case.
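If you want to sanity-check the number you get from the completed table, a small Monte Carlo simulation is easy to write; here is a sketch in Python (it simply mimics the story in the problem):

```python
import random

def trial():
    known   = [True]*10 + [False]*10                    # 10 known, 10 guessed questions
    correct = [True]*10 + [random.random() < 0.5 for _ in range(10)]
    i, j = random.sample(range(20), 2)                  # instructor grades two random questions
    if correct[i] and correct[j]:                       # condition on both being correct
        return known[i] or known[j]                     # was at least one of them actually known?
    return None

outcomes = [t for t in (trial() for _ in range(200_000)) if t is not None]
print(sum(outcomes) / len(outcomes))                    # empirical estimate of the answer
```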
{ "language": "en", "url": "https://math.stackexchange.com/questions/501835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove that $12 \mid n^2 - 1$ if $\gcd(n,6)=1$ Prove that $12 \mid n^2 - 1$ if $\gcd(n,6)=1$. I know I have to use Fermat's Little Theorem for this but I am unsure how to do this problem.
As is mentioned in answers above, $gcd(n,6)=1$ clearly implies that $n$ is odd. But then each of $n+1$ and $n-1$ are even, so $4$ divides $(n+1)(n-1)=n^2-1$. Thus it is only left to show that $3$ also divides $n^2-1$. But given any three consecutive integers, we know that $3$ must divide exactly one of them. Applying this to $n-1,n,n+1$, and using the fact that $3$ cannot divide $n$ (since $gcd(n,6)=1$), we get the desired result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/501895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
How can I solve these Modular problems? Very basic question, but how can I solve this? $7x+9y \equiv 0 \bmod 31$ and $2x-5y \equiv 2 \bmod 31$.
Adding to Peter's answer, if you're asked to solve this by hand, this particular modular arithmetic is easy because $31$ is prime. Multiplication and addition tables fall out very easily and the only part where you'd be required to do some computation is when you find the inverse of a number.
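Since the modulus is tiny, even a brute-force search settles it; a sketch in Python (no cleverness assumed):

```python
solutions = [(x, y) for x in range(31) for y in range(31)
             if (7*x + 9*y) % 31 == 0 and (2*x - 5*y) % 31 == 2]
print(solutions)  # [(29, 5)]
```

By hand, you would instead eliminate one variable and multiply by the inverse of the remaining coefficient modulo $31$, which is exactly the kind of prime-field computation described above.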
{ "language": "en", "url": "https://math.stackexchange.com/questions/501968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the smallest value of $x^2+y^2$ when $x+y=6$? If $ x+y=6 $ then what is the smallest possible value for $x^2+y^2$? Please show me the working to show where I am going wrong! Cheers
The most reasonable and concrete approach is to find the minimum using derivatives, as shown by DeltaLima. There are also graphical ways to see it: the level sets $x^2+y^2=K$ are circles centred at the origin, and $x+y=6$ is a straight line. The smallest $K$ for which the circle still meets the line (the circle tangent to the line) gives the minimum; along the line, $K$ can be made arbitrarily large, so there is no maximum.
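For completeness, here is the one-variable computation made explicit (substituting the constraint $y=6-x$ into the objective): $$x^2+y^2=x^2+(6-x)^2=2x^2-12x+36=2(x-3)^2+18\ge 18,$$ with equality when $x=y=3$, so the smallest possible value is $18$.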
{ "language": "en", "url": "https://math.stackexchange.com/questions/502034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 9, "answer_id": 7 }
If $x_1<x_2$ and $x_n=\frac{1}{2}(x_{n-2}+x_{n-1})$ for $n>2$, show that $(x_n)$ is convergent. If $x_1<x_2$ are arbitrary real numbers, and $x_n=\frac{1}{2}(x_{n-2}+x_{n-1})$ for $n>2$, show that $(x_n)$ is convergent. What is the limit? The back of my textbook says that $\lim(x_n)=\frac{1}{3}x_1+\frac{2}{3}x_2$. I was thinking that if I show that the sequence is monotone increasing by induction, then I can "guess" (from the back of my textbook) that there is an upperbound of the limit and show by induction that all elements in $x_n$ are between $x_1$ and the upperbound and so it's bounded. Then by the monotone convergence theorem say it's convergent. I'm not sure how to show that it is monotone, and then after showing convergence finding the limit, without magically guessing it.
A related problem. To prove convergence, note that $$ x_n=\frac{1}{2}(x_{n-2}+x_{n-1})\implies x_n-x_{n-1}=\frac{1}{2}(x_{n-2}-x_{n-1}) $$ $$ \implies |x_n-x_{n-1}|=\frac{1}{2}|x_{n-1}-x_{n-2}|<\frac{2}{3}|x_{n-1}-x_{n-2}|. $$ This proves that the sequence is a contraction and hence convergent by the Fixed point theorem. To solve it, then assume the solution $x_n=r^n$ and subs back in the equation and solve the resulting polynomial in $r$.
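A quick numerical illustration of both the convergence and the limit $\frac13 x_1+\frac23 x_2$ (a sketch in Python with arbitrary starting values):

```python
x1, x2 = 0.3, 5.0                      # any starting pair
xs = [x1, x2]
for _ in range(60):
    xs.append((xs[-2] + xs[-1]) / 2)
print(xs[-1], x1/3 + 2*x2/3)           # both ≈ 3.43333...
```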
{ "language": "en", "url": "https://math.stackexchange.com/questions/502100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 1 }
Exponential Function as an Infinite Product Is there any representation of the exponential function as an infinite product (where there is no maximal factor in the series of terms which essentially contributes)? I.e. $$\mathrm e^x=\prod_{n=0}^\infty a_n,$$ and by the sentence in brackets I mean that the $a_n$'s are not just mostly equal to $1$ or pairwise canceling away. The product is infinite but its factors don't contain a subseqeunce of $1$, if that makes sense. There is of course the limit definition as powers of $(1+x/n)$., but these are no definite $a_n$'s, which one could e.g. divide out.
Amazingly, the exponential function can be represented as an infinite product of a product! That result was shown in the 2006 paper "Double Integrals and Infinite Products For Some Classical Constants Via Analytic Continuations of Lerch's Transendent" by Jesus Guillera and Jonathan Sondow. It is proven in Theorem 5.3 that $$e^x=\prod_{n=1}^\infty \left(\prod_{k=1}^n (kx+1)^{(-1)^{k+1} {{n}\choose{k}}}\right) ^{1/n}$$ I dunno, this was too cool not to show you.
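The partial products can be checked numerically; here is a sketch in Python (working with logarithms so the huge binomial exponents stay manageable -- note the convergence is quite slow):

```python
import math

def partial_product(x, N):
    log_total = 0.0
    for n in range(1, N + 1):
        inner = sum((-1)**(k + 1) * math.comb(n, k) * math.log(k*x + 1)
                    for k in range(1, n + 1))
        log_total += inner / n
    return math.exp(log_total)

for N in (1, 5, 10, 20, 30):
    print(N, partial_product(1.0, N))  # increases slowly towards e ≈ 2.71828
```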
{ "language": "en", "url": "https://math.stackexchange.com/questions/502160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 9, "answer_id": 2 }
Monotonic sequence and limit For $x_n = \frac{1}{2+x_{n-1}}$ where $x_1 =1/2$, show that the sequence is monotonic and find its limit. What I first did was finding $x_{n+1}$, which equals $\frac{1}{2+x_n}$; then $x_{n+2}=\frac{1}{2+\frac{1}{2+x_n}}=\frac{x_n+2}{2x_n+5}$ thus it does eventually get smaller hence $x_n>x_{n+1}$. How can I finish this?
The single most useful thing to do here is to draw on a same picture the graphs of the functions $u:x\mapsto1/(2+x)$ and $v:x\mapsto x$, say for $x$ in $(0,1)$. The rest follows by inspection... Since $u$ is decreasing from $u(0)\gt0$ to $u(\infty)=0$, $u$ has a unique fixed point, say $x^*$. Since $u(x_1)\lt x_1$, one knows that $x_1\gt x^*$. Drawing on our picture the segments from $(x_1,0)$ to $(x_1,x_2)$ to $(x_2,x_2)$ to $(x_2,x_3)$ and so on, one sees that: * *the sequence $(x_{2n-1})$ is decreasing and $x_{2n-1}\gt x^*$ for every $n$, *the sequence $(x_{2n})$ is increasing and $x_{2n}\lt x^*$ for every $n$, *the whole sequence $(x_{n})$ is neither decreasing nor increasing, *and the whole sequence $(x_{n})$ converges to $x^*$. Numerically, $x^*=\sqrt2-1\approx.414$. Edit: To show the last item, call $L$ the limit of $(x_{2n})$ and $M$ the limit of $(x_{2n-1})$, then $M=u(L)$ and $L=u(M)$ hence $L=u\circ u(L)$ and $M=u\circ u(M)$. Computing $u\circ u$, one sees that $x=u\circ u(x)$ is equivalent to $x=(2+x)/(5+2x)$, that is, $x^2+2x=1$, that is, $x=\pm\sqrt2+1$. Since $L$ and $M$ are positive, this shows that $L=M=x^*$ hence $(x_n)$ converges.
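A few iterations make the picture above concrete (a sketch in Python):

```python
import math

xs = [0.5]
for _ in range(30):
    xs.append(1 / (2 + xs[-1]))

print(xs[:6])                      # 0.5, 0.4, 0.4166..., 0.4137..., ... (odd/even terms straddle the limit)
print(xs[-1], math.sqrt(2) - 1)    # both ≈ 0.4142135...
```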
{ "language": "en", "url": "https://math.stackexchange.com/questions/502244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
What is the integral of $e^{-x^2/2}$ over $\mathbb{R}$ What is the value of $$\int_{-\infty}^{\infty}e^{-x^2/2}dx\,?$$ My working so far: I got $-e^{-\frac12 x^2}/x$, to be evaluated from negative infinity to infinity. What is the value of this? Not sure how to carry on from here. Thank you.
$$\left(\int\limits_{-\infty}^\infty e^{-\frac12x^2}dx\right)^2=\int\limits_{-\infty}^\infty e^{-\frac12x^2}dx\int\limits_{-\infty}^\infty e^{-\frac12y^2}dy=\int\limits_{-\infty}^\infty\int\limits_{-\infty}^\infty e^{-\frac12(x^2+y^2)}dxdy=$$ Change now to polar coordinates: $$=\int\limits_0^{2\pi}\int\limits_0^\infty re^{-\frac12r^2}drd\theta=\left.-2\pi e^{-\frac12r^2}\right|_0^\infty=2\pi$$ So your integral equals $\;\sqrt{2\pi}\;$
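A numerical cross-check of the value (a sketch using SciPy's quadrature; nothing else is assumed):

```python
import numpy as np
from scipy.integrate import quad

value, error = quad(lambda t: np.exp(-t**2 / 2), -np.inf, np.inf)
print(value, np.sqrt(2 * np.pi))  # both ≈ 2.5066282746...
```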
{ "language": "en", "url": "https://math.stackexchange.com/questions/502313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
A single gate defined in Conjunctive Normal Form This might sound like a silly question but I just want to make sure I'm not getting confused. I understand that CNF is essentially converting the logic into AND's of OR's, so for example... ~a AND (b OR c) in CNF would be (a AND b) OR (a AND c). My question is, what would a single and gate be defined as in CNF. So for example if it had 2 inputs and 1 output, x y z . Would x AND y in CNF just be x AND y? Thanks.
CNF stands for Conjunctive Normal Form. A boolean expression is in CNF if it is an AND of OR expressions where every OR has nothing but literals or inverted literals as inputs. The OR expressions for CNF are commonly called clauses. Your example: The CNF of x AND y consists of two single-literal clauses, one is x and one is y.
{ "language": "en", "url": "https://math.stackexchange.com/questions/502418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
About the ratio of the areas of a convex pentagon and the inner pentagon made by the five diagonals I've thought about the following question for a month, but I'm facing difficulty. Question : Letting $S{^\prime}$ be the area of the inner pentagon made by the five diagonals of a convex pentagon whose area is $S$, then find the max of $\frac{S^{\prime}}{S}$. $\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ It seems that a regular pentagon and its affine images would give the max. However, I don't have any good idea without tedious calculations. Can anyone help? Update : I crossposted to MO.
Not a complete answer, and surely not elegant, but here's an approach. The idea is to consider "polygonal conics" in the following sense. Take the cone $K$ given by \begin{equation} K:\qquad x^2+y^2=z^2 \end{equation} and slice it with the plane $z=1$. We get a circle $\gamma$, and let's inscribe in this circle a regular pentagon, $P$ (The idea actually works with a regular $n$-gon with $n=2k+1$... should it be called $n$-oddgon?). The diagonals of this pentagon define a smaller pentagon $P'$. Let also $\gamma'$ be the circle in which $P'$ is itself inscribed in. Let also $C,C',S,S'$ be the areas of $\gamma,\gamma',P,P'$ respectively. We have \begin{equation} \frac{C'}{C}=\frac{S'}{S} \end{equation} since all these areas depend only on the radiuses of $\gamma$ and $\gamma'$. Let's now slice the initial cone and the cone $K'$ generated by the circle $\gamma'$ with a general plane \begin{equation} (x-x_0)\cos\theta+(z-z_0)\sin\theta=0 \end{equation} (Informally, it's the plane passing through $(x_0,0,z_0)$ and its normal vector lies in the plane $y=0$ and makes an angle $\theta$ with the $x$-axis). We get two nested conics. Considering only the cases when these conics are ellipses, we can compute the ratio of their areas (area of the inner ellipse over area of the outer one) and one finds that the maximum of this value occurs when both are circles. When only the inner conic is an ellipse, the ratio is zero. Considering now the "pentagonal cones" obtained by extruding the original nested pentagons along the two cones $K$ and $K'$ and reasoning in a similar way as before, we have that the ratio between the areas of the nested "pentagonal conics" has a maximum when the pentagons are regular. A problem with this approach is that I don't have a clue of what class of pentagons we are dealing with, as it may be too narrow to be of any real interest for the problem. UPDATE: in the case of a pentagon, since five points in general position determine a conic, I guess we have covered all the possible cases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/502496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 5, "answer_id": 2 }
normal distribution derivation In this derivation: http://www.sonoma.edu/users/w/wilsonst/Papers/Normal/default.html how do these equal? $$ -k\int (x-\mu) dx = -\frac{k}{2} (x-\mu)^2$$ Isn't this the case? $$ -k\int (x-\mu) dx = -\frac{kx^2}{2} + k\mu x$$
The answer given comes from one antiderivative of $x-\mu$. Your answer comes from another antiderivative of $x-\mu$. The two differ by a constant, so both of them are correct antiderivatives. Neither of them is the general antiderivative of $x-\mu$. The general antiderivative of $x-\mu$ can be written as $\frac{1}{2}(x-\mu)^2+C$ or as $\frac{x^2}{2}-\mu x+C$, where in each case the $C$ is an arbitrary constant of integration. Your source presumably found the first form more convenient.
{ "language": "en", "url": "https://math.stackexchange.com/questions/502559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculations with permutations: show that $(1,2,3)^2(5,7)^2=(1,3,2)$ How can I show $(1,2,3)^2(5,7)^2=(1,3,2)$? And, specifically, what does $(1,2,3)^2$ and $(5,7)^2$ equal individually?
Note that with $(5, 7)^2 = (5, 7)(5, 7)$, $\quad (5 \to 7 \to 5)$ and $(7 \to 5 \to 7),\quad $ which gives us $(5, 7)(5, 7) = (1)$, the identity permutation. Any two-cycle, squared, gives us the identity permutation: it's an order two permutation. Since the first two squared cycles are disjoint from one another, you can simply compute $$(1, 2, 3)^2 \cdot (5, 7)^2 = (1, 2, 3)(1, 2, 3)\cdot(1) = (1, 2, 3)(1, 2, 3) = (1, 3, 2)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/502623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Compound Interest -Confirming Answer- "When you get a "30 year fixed rate mortgage" on a house, you borrow a certain amount of money at a certain interest rate. You then make the same monthly payment for 30 years. At the end of that time, the loan is fully paid off. The interest on the loan is compounded monthly. Suppose the amount of money you borrow is n dollars, and that the interest rate, compounded monthly, is r%. Find a formula in terms of n and r that gives you the amount of the monthly payment." I get my answer to be: $\frac{(n * (1 + r / (100 * 12))^{360})}{360} $ Can anyone confirm that this is correct?
We give a derivation of the correct formula for the monthly payments. We will assume that the first monthly payment is made a month after the loan is issued. We also assume that the interest rate $r$ is given as a percentage. Let $P$ be the monthly payment. The present value of a payment of $P$ made $k$ months from now is $$P\cdot \frac{1}{1+\frac{r}{(100)(12)}}.$$ For brevity, call $1+\frac{r}{(100)(12)}$ by the name $b$. The present value (PV) of the $360$ payments is $$P\left(\frac{1}{b}+\frac{1}{b^2}+\frac{1}{b^3}+\cdots+\frac{1}{b^{360}}\right).$$ By the formula for the sum of a finite geometric series, this is $$P\cdot\frac{b^{360}-1}{(b-1)b^{360}}.$$ Set this equal to $n$ and solve for $P$. We get $$P=n\cdot (b-1)\frac{b^{360}}{b^{360}-1}.$$ Remark: Your calculation took the initial amount $n$ owed, and calculated what this debt would grow to in $30$ years. Call this amount $H$, for huge. Then you divided $H$ by $360$ to get the monthly payment. However, our debt does not remain $n$ for $30$ years. As we make payments, the amount owed decreases. The rate of decrease is quite slow at first, since most of our monthly payments go to interest. But after a while, we are paying significant amounts off the principal each month.
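For concreteness, the formula can be wrapped in a small function and sanity-checked by amortizing the loan month by month; a sketch in Python (the $200{,}000$ loan at $5\%$ is just an illustrative choice):

```python
def monthly_payment(principal, annual_rate_percent, months=360):
    b = 1 + annual_rate_percent / (100 * 12)          # monthly growth factor
    return principal * (b - 1) * b**months / (b**months - 1)

P = monthly_payment(200_000, 5.0)
balance = 200_000.0
for _ in range(360):                                  # accrue one month of interest, then pay
    balance = balance * (1 + 5.0 / 1200) - P
print(round(P, 2), round(balance, 2))                 # payment ≈ 1073.64; final balance ≈ 0
```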
{ "language": "en", "url": "https://math.stackexchange.com/questions/502697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Maximize the determinant Over the class $S$ of symmetric $n$ by $n$ matrices such that the diagonal entries are +1 and off diagonals are between $-1$ and $+1$ (inclusive/exclusive), is $$\max_{A \in S} \det A = \det(I_n)$$ ?
For the three by three case I have the following. Let $A$ be generally defined as $$A= \pmatrix{ 1& x& y\\ x& 1&z\\ y&z& 1\\ }. $$ Then the determinant is $$|A| = \begin{vmatrix} 1& z \\ z& 1 \end{vmatrix} - x\begin{vmatrix} x& z \\ y& 1 \end{vmatrix} + y\begin{vmatrix} x& 1 \\ y& z \end{vmatrix} = 1-z^2 - x(x-zy) + y(xz-y) = 1-x^2-y^2-z^2+2xyz.$$ Since $2xyz \le x^2 + (yz)^2$, this gives $$|A| \le 1-y^2-z^2+y^2z^2 = (1-y^2)(1-z^2) \le 1 = \det(I_3).$$
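A quick symbolic check of this expansion (a sketch assuming SymPy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
A = sp.Matrix([[1, x, y],
               [x, 1, z],
               [y, z, 1]])
print(sp.expand(A.det()))  # 1 - x**2 - y**2 - z**2 + 2*x*y*z, up to the order of terms
```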
{ "language": "en", "url": "https://math.stackexchange.com/questions/502771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to show a continuous cannot differ a characteristic function by measure $0$? How to show that there is no continuous function on $\mathbb R$ such that it differs from $\chi_{[0,1]}$, the characteristic function of $[0,1]$, by a measure (Lebesgue measure) of $0$?
Hint: Let $f$ be such function. For any $\delta > 0$, $(-\delta, 0)$ must contain an element $x$ such that $f(x) = 0$. Similarly, $(0, \delta)$ must contain an element $x$ such that $f(x) = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/502845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $a_0=1$, and $a_n$ is defined by $a_n=a_{n+1}+a_{n+2}$, find $a_n$. This is not a homework problem, though it is in my textbook as a practice problem that intrigues me enough to try it. I've got some idea how to solve it but I don't know how to prove my hypothesis. The question reads exactly as follows: Suppose $a_0,a_1,a_2,a_3,\dots,a_n$ is a sequence of positive real numbers such that $a_0=1$ and $a_n=a_{n+1}+a_{n+2}$, $n\geq 0$. Find $a_n$. My first idea was to find a pattern to work with. I figured out equations for the first 4 terms in the sequence: \begin{align} a_0&=1=a_1+a_2=(3a_4+2a_5)+(2a_4+a_5)=5a_4+3a_5\\ a_1&=a_2+a_3=(2a_4+a_5)+(a_4+a_5)=3a_4+2a_5\\ a_2&=a_3+a_4=(a_4+a_5)+a_4=2a_4+a_5\\ a_3&=a_4+a_5 \end{align} From this, it would appear that the equation for $a_n$ is something of the form $a_n=la_{n+1}+ma_{n+2}$ for some $l,m\in\mathbb{N}$ and some $a_{n+1},a_{n+2}\in\mathbb{R}^{+}$. Unless I'm wrong, it looks like I can deduce that $a_0$ can be written as a linear combination of two variables $x$ and $y$, and the coefficients $l$ and $m$ appear (though it is unproven) to be coprime. If this is the case, then there should exist an $x$ and $y$ to satisfy the equation $1=lx+my$... But I've only learned how to do this when $x$ and $y$ are restricted to be any value in $\mathbb{Z}$... and I'm clearly restricted to positive real numbers. So how might I be able to tackle this?
Write it in the usual way with decreasing subscripts as $$ a_{n+2} + a_{n+1} - a_n = 0. $$ Whatever you might want to call it is $$ \lambda^2 + \lambda - 1 = 0. $$ If this has distinct roots then $a_n = B \lambda_1^n + C \lambda_2^n $ for real or complex constants $B,C$ depending how it turns out. So, $$ \lambda = \frac{-1 \pm \sqrt 5}{2}, $$ or $$ \lambda_1 = \frac{-1 + \sqrt 5}{2} \approx 0.618, \; \; \lambda_2 = \frac{-1 - \sqrt 5}{2} \approx -1.618. $$ If the coefficient of $\lambda_2$ were nonzero, that term would eventually overwhelm the $\lambda_1$ term, resulting in (eventually) alternating negative and positive $a_n.$ We are told the $a_n$ stay positive forever. So $a_n = B \lambda_1^n.$ Since $a_0 = 1$ we must have $$ a_n = \left( \frac{-1 + \sqrt 5}{2} \right)^n. $$ EDIT: it is easy enough to see that the set of sequences solving $a_{n+2} + a_{n+1} - a_n = 0$ make a vector space; you can add two sequences together, you can multiply by a constant, and so on. For differential equations, there is a fair amount involved in showing the dimension of the vector space. But we have difference equations, and the dimension is exactly two, simply because knowing $a_0$ and $a_1$ completely determines the sequence. Put another way, define a basis of two sequences, call them $x,y,$ so $$ x_0 = 1, x_1 = 0; \; \; x_{n+2} + x_{n+1} - x_n = 0,$$ $$ y_0 = 0, y_1 = 1; \; \; y_{n+2} + y_{n+1} - y_n = 0.$$ Therefore, if I can display two linearly independent sequences (it suffices to check at subscripts $0,1$) then i have another basis. TUESDAY. Note from comment above: if I had a problem with a repeated root, some constant $\beta$ and sequences solving $$ z_{n+2} - 2 \beta z_{n+1} + \beta^2 z_n = 0, $$ my characteristic equation would be $$ \lambda^2 - 2 \beta \lambda + \beta^2 = (\lambda - \beta)^2 = 0. $$ A basis, of two sequences is $\{\beta^n, \; n \, \beta^n \}$ so that any specific solution is $$ z_n = B \, \beta^n + C \, n \, \beta^n. $$ It's worth checking that both sequences in my basis really work!
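A quick numerical confirmation that this closed form satisfies the recursion and stays positive (a sketch in Python):

```python
import math

lam = (math.sqrt(5) - 1) / 2                 # the root with 0 < lambda < 1
a = [lam**n for n in range(20)]              # candidate solution, a_0 = 1
print(all(abs(a[n] - (a[n+1] + a[n+2])) < 1e-12 for n in range(18)))  # True
print(all(t > 0 for t in a))                                          # True
```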
{ "language": "en", "url": "https://math.stackexchange.com/questions/503021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proving $ f(x) = x^2 $ is not uniformly continuous on the real line This is homework problem and the very premise has me stumped. It's in a text on PDE. The exercise says to show that $ f(x) = x^2 $ is not uniformly continuous on the real line. But every definition I know says that it is a continuous function, and unless you attach some special condition, like restricting the interval or making it a periodic function (perhaps saying $f(x-2) = f(x)$ or some such) it's by definition continuous. There's always a derivative since $f'(x) = 2x$. The preceding chapter is about Drichelet and the like, as an extension of Fourier series, so I am guessing that a Fourier expansion does something here but every proof of the proposition seems to have nothing to do with Fourier series in the slightest. So I am pretty lost here. This whole question seems utterly nonsensical.
Here is a more general approach to solving this problem. Let me formulate and prove a theorem: Theorem: Let $E = [a,+\infty)$, let $f:E \rightarrow \mathbb{R}$ be differentiable on $E$, and suppose $$\displaystyle{\lim_{x \to \infty}} f'(x) = \infty.$$ Then $f$ is not a uniformly continuous function. Proof: Suppose the function $f$ is uniformly continuous. Let $\epsilon = 1$ and let $\delta>0$ satisfy the definition of uniform continuity. From $\displaystyle{\lim_{x \to \infty}} f'(x) = \infty$ it follows that $$\exists m\geqslant a: \forall x \geqslant m \\ \left|f'(x)\right| > \frac{2}{\delta}$$ Let $x_1=m, x_2=m+\frac{\delta}{2}.$ Using Lagrange's mean value theorem on $[x_1,x_2]$ we have $$\left|f(x_1)-f(x_2)\right| = \frac{\delta}{2}\left|f'(\zeta)\right| \\ \text{for some } \zeta \in[x_1,x_2] $$ Since $\zeta \geqslant m$, we have $\left|f'(\zeta)\right|>\frac{2}{\delta}$, from which $$\left| f(x_1) - f(x_2)\right| > 1 = \epsilon.$$ This contradicts the uniform continuity of the function $f$. Thus, $f$ is not uniformly continuous. ∎ So, your function is $f(x)=x^2$. Its derivative is $$\frac{d(x^2)}{dx} = 2x.$$ The limit of this derivative at infinity is $$\displaystyle{\lim_{x \to \infty}} (2x) = \infty.$$ So, by the theorem, this function is not uniformly continuous on $[0,+\infty)$. The same argument can be used for $(-\infty,0]$. Finally, since a function that is uniformly continuous on $\mathbb{R}$ would also be uniformly continuous on $[0,+\infty)$, it follows that your function is not uniformly continuous on the real line $\mathbb{R}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/503093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 2, "answer_id": 0 }
Suppose that $f:[a,b] \to [a,b]$ is continuous. Prove that there is at least one fixed point in $[a,b]$ - that is, $x$ such that $f(x) = x$. Suppose that $f:[a,b] \to [a,b]$ is continuous. Prove that there is at least one fixed point in $[a,b]$ - that is, $x$ such that $f(x) = x$. I haven't a clue where to even start on this one.
Consider the function $g(x) = f(x) - x$. Then $$g(a) = f(a) - a \ge a - a = 0$$ while $$g(b) = f(b) - b \le b - b = 0$$ So by the intermediate value theorem, ...?
{ "language": "en", "url": "https://math.stackexchange.com/questions/503184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Graphing Hyperbolas I know that a Hyperbola is in the form of: $$\dfrac{(x-h)^2}{a^2}-\dfrac{(y-k)^2}{b^2}=1$$ But how would I graph it? I know that a Hyperbola has two asymptotes that the graph gets infinitely close to but will never touch, is there a way to find the asymptotes with that equation? and is the asymptotes the only thing you need to graph a hyperbola?
To add to Kaster's answer, there is a handy construction elaborated in this link. In the form $\dfrac{x^2}{a^2}-\dfrac{y^2}{b^2}=1$ the hyperbola has its fundamental rectangle with corners at $(a,b), (a,-b), (-a,b), (-a,-b)$, and the diagonals of this rectangle are the asymptotes. Points $(a,0), (-a,0)$ are the vertices. Knowing the asymptotes and the vertices, the hyperbola is defined unambiguously. If you want to have a more precise graph when drawing by hand, you may want to calculate additional points for the hyperbola, though. Now, in the form $\dfrac{(x-h)^2}{a^2}-\dfrac{(y-k)^2}{b^2}=1$ this whole construction is simply shifted $h$ units in the positive $x$ direction and $k$ units in the positive $y$ direction. The most general form $A_{xx}x^2 + 2A_{xy}xy + A_{yy}y^2 + 2B_x x + 2B_y y + C = 0$ is a bit more problematic, though, since the fundamental rectangle may be rotated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/503278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Three numbers, one of the number's digit sum is equal to two other digit difference So as the title says I need three numbers witch has this quality : one of the numbers digit sum is equal to other two number differnce e.g. I 68 II 52 III 97 third number digit sum is 16 and its equal to I and II difference which is 16. So my question is this - is there a posibility that there is three nubmers that apply this quality more than once among the same numbers? And if there is a such set, how is there some way to get it, and if there is not then why?
Consider 3 two-digit numbers A, B and C: $$A = 10*a1 + a2\\ B = 10*b1 + b2\\ C = 10*c1 + c2$$ (in your example: A = 68 = 10*6 + 8). The relation you are describing: $$\begin{cases} a1+a2 = C-B\\ b1+b2 = C-A \end{cases}$$ $$\begin{cases} C = a1+a2+B\\ C = b1+b2+A \end{cases}$$ $$a1+a2+B = b1+b2+A$$ $$a1+a2 + 10*b1+b2 = b1+b2 + 10*a1+a2$$ $$a1 + 10*b1 = b1 + 10*a1$$ $$a1=b1$$ So if a1 = b1 then you will always be able to find a number C that matches the relation. For example, a1=b1=5, a2=3, b2=9: $$A = 53\\ B = 59\\ => C = 67 $$ Or, in Vedran Šego's example, a1=b1=0.
{ "language": "en", "url": "https://math.stackexchange.com/questions/503373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Row and Column Picture of a 3 x 3 Singular Matrix (Strang P43, 2.1.32) Suppose $\mathbf{u}$ and $\mathbf{v}$ are the first two columns of a 3 by 3 matrix $A$. Which third columns $\mathbf{w}$ would make this matrix singular? Describe a typical column picture of $A\mathbf{x} = \mathbf{b}$ in that singular case, and a typical row picture (for a random $\mathbf{b}$). $\boxed{\text{(P38) Column picture:}}$ $A\mathbf{x} = \mathbf{b}$ asks for a combination of columns to produce $\mathbf{b}$. $\boxed{\text{(P38) Row picture:}}$ Each equation in $A\mathbf{x} = \mathbf{b}$ gives a line (n = 2) or a plane (n = 3) or a "hyperplane" (n > 3). They intersect at the solution or solutions, if any. Answer: $A$ is singular when its third column $\mathbf{w}$ is a combination $c\mathbf{u} + d\mathbf{v}$ of the first columns. $\color{red}{\Large{[}}$ A typical column picture has $\mathbf{b}$ outside the plane of $\mathbf{u, v, w}. \color{red}{\Large{]}}$ $\color{#0070FF}{\Large{[}}$ A typical row picture has the intersection line of two planes parallel to the third plane. $\color{#0070FF}{\Large{]}}$ Then no solution. "Singularity" hasn't been formally defined, but P27 identifies it with dependent columns. So the third column depends on the first two, ie: $\mathbf{w} =c\mathbf{u} + d\mathbf{v} \; \forall \; c, d \in \mathbb{R}$. Moreover, $\mathbf{w}$ is a plane on which the vectors $\mathbf{u, v}$ lie. I seize this paragraph. $\large{1.}$ How and why is the red bracket true? $\qquad \large{2.}$ How and why is the blue bracket true? Please mind that this question is from but Section 2.1 of IoLA, 4th ed, by Strang, so please keep answers rudimentary.
Your first question: the red bracket is true because $\mathbf{b}$ is randomly chosen, so it can lie either on the plane of $\mathbf{u}, \mathbf{v}, \mathbf{w}$ or outside it. Furthermore, a random $\mathbf{b}$ is far more likely to lie outside that plane than on it, since the plane is only a two-dimensional slice of three-dimensional space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/503440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove irreducibility of a polynomial Let $m$ be an integer, squarefree, $m\neq 1$. Prove that $x^3-m$ is irreducible in $\mathbb{Q}[X]$. My thoughts: since $m$ is squarefree, i have the prime factorization $m=p_1\cdots p_k$. Let $p$ be any of the primes dividing $m$. Then $p$ divides $m$, $p$ does not divide the leading coefficient, $p^2$ does not divide $m$. Hence $x^3-m$ is irreducible over $\mathbb{Q}$ by Eisenstein. Questions: 1) Do you think it's correct? 2) Is there some different way to prove irreducibility of this polynomial. Thanks to all.
$x^3-m$ is reducible iff it has a factor of degree 1 iff it has a root iff $m$ is a cube. In particular, $x^3-m$ is irreducible when $m$ is squarefree.
{ "language": "en", "url": "https://math.stackexchange.com/questions/503511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Show that a matrix $A$ is singular if and only if $0$ is an eigenvalue. I can't find the missing link between singularity and zero eigenvalues as is stated in the following proposition: A matrix $A$ is singular if and only if $0$ is an eigenvalue. Could anyone shed some light?
If $0$ is an eigenvalue, then there exists a nonzero vector $v$ in your space such that $Av = 0$. If your matrix is $4\times4$ with one $0$ eigenvalue and you write the images of the basis vectors in coordinates with respect to the eigenvectors, you get: $$(v11, v12, v13, 0)$$ $$(v21, v22, v23, 0)$$ $$(v31, v32, v33, 0)$$ $$(v41, v42, v43, 0)$$ You can see it's singular because: * *these image vectors cannot possibly span a 4-dimensional space; they all lie in a hyperplane, a 3-dimensional subspace of the 4-dimensional space (it would be a line in two dimensions, a plane in three dimensions) *any point not on that hyperplane cannot be described as a combination of the image vectors above, so not all points can be reached through multiplication by $A$ *so the transformation given by $A$ cannot always be inverted, because there is not always a point $x$ such that $Ax=y$ (for example, a $y$ not on the hyperplane) *so $A$ is singular: it cannot be inverted in general. Hope this lays out the reasoning clearly enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/503585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 8, "answer_id": 2 }
Conditional Probability Question EDIT!!! a) The probability that any child in a certain family will have blue eyes is 1/4, and this feature is inherited independently by different children in the family. If there are five children in the family and it is known that at least one of these children has blue eyes, what is the probability that at least three of the children have blue eyes? b) Consider a family with the five children just described. If it is known that the youngest child in the family has blue eyes, what is the probability that at least three of the children have blue eyes? Hello! I'm pretty sure I understand part (a), but I'm not sure about part b. Since the child is actually distinguished in this case, does it change the denominator? So since its distinguished who actually has the blue eyes, namely, the young child, does the denominator just become 1/4? Actually it turns out that the denominator is actually 1. Can someone please explain? Please note that the answers in (a) and (b) are actually different. They are NOT the same. It turns out that knowing who the child is actually simplifies the problem and removes the necessity of using conditionals. Thanks in advance! Part a: Let: A= event that at least 3 children have blue eyes B= event that at least 1 child has blue eyes $\therefore A \subset B$ $\Pr(A \mid B)=\cfrac{\Pr(A \cap B)}{\Pr(B)}=\cfrac{\Pr(A)}{\Pr(B)}=\cfrac{ \sum\limits_{i=3}^5 \binom{5}{i} \cdot (0.25)^i \cdot 0.75^{5-i}}{1-0.75^5}=0.1357\tag{1}$ Part b: A=event youngest child has blue eyes B=event at least 3 children have blue eyes $\Pr(A \mid B)=\cfrac{ \sum\limits_{i=2}^4 \binom{4}{i} \cdot (0.25)^i \cdot 0.75^{4-i}}{1/4??}\tag{2}$
I know that a lot of time has passed, but FWIW here's my take. b) Consider a family with the five children just described. If it is known that the youngest child in the family has blue eyes, what is the probability that at least three of the children have blue eyes? Here "it is known" means we treat the youngest child's blue eyes as certain, i.e. $P(\text{youngest child has blue eyes}) = 1$, which is why the denominator is just $1$. But the event you now need is no longer "at least three of the five children have blue eyes": given that the youngest has blue eyes, it is "at least two of the remaining four children have blue eyes." Now you can sum across your probabilities as usual: $$ \Pr(\text{at least three blue-eyed} \mid \text{youngest blue-eyed}) = \dfrac{\sum^4_{i=2}\binom{4}{i}\left(\frac{1}{4}\right)^i\left(\frac{3}{4}\right)^{4-i}}{1} $$ Another way of thinking about the problem is to define a new scenario: you have a group of 4 children, each with probability $\frac14$ of having blue eyes, and you want $p_2$, the probability that at least 2 of them have blue eyes. There is no longer any conditional probability; finding $p_2$ answers the question posed at the beginning.
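A small script (my own check, with names chosen for illustration) reproduces both numbers exactly with fractions: part (a) gives $\approx 0.1357$ as in the question, and part (b) gives $67/256 \approx 0.2617$.

```python
from fractions import Fraction
from math import comb

p = Fraction(1, 4)

# Part (a): P(at least 3 blue-eyed | at least 1 blue-eyed) among 5 children
num_a = sum(comb(5, i) * p**i * (1 - p)**(5 - i) for i in range(3, 6))
den_a = 1 - (1 - p)**5
print(num_a / den_a, float(num_a / den_a))   # 106/781 ≈ 0.1357

# Part (b): the youngest is known to have blue eyes, so we need
# at least 2 of the remaining 4 children to have blue eyes
part_b = sum(comb(4, i) * p**i * (1 - p)**(4 - i) for i in range(2, 5))
print(part_b, float(part_b))                 # 67/256 ≈ 0.2617
```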
{ "language": "en", "url": "https://math.stackexchange.com/questions/503641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Find the steady state temperature of the rod A rod occupying the interval $0 \leq x \leq L$ is subject to the heat source $f(x) = 0$ for $0 < x < L/2$, and $f(x) = H$ for $L/2 < x < L$, where $H > 0$. (1) The rod satisfies the heat equation $$u_t = u_{xx} + f(x)$$ and its ends are kept at zero temperature. Find the steady-state temperature of the rod. (2) Which point is the hottest, and what is the temperature there? Attempt: I only know that in the steady state $u_t=0$, and that the ends are kept at zero temperature. Where should I go from here?
For the steady state, you are solving $$u_t=0 \implies u_{xx}=-f(x)$$ The general solution to this, given $f$, is pretty straightforward: $$u(x) = \begin{cases} c_1 x+c_2 & x \in \left (0,\frac{L}{2}\right)\\-\frac12 H x^2 + d_1 x+d_2 & x \in \left (\frac{L}{2},L\right) \end{cases} $$ Yes, we have different constants in each branch of the integration interval. We thus need four conditions. Two come from the boundary conditions: $$u(0)=u(L)=0$$ The other two come from continuity requirements for $u$ and its derivative (otherwise, the heat flow is discontinuous, not what should happen in a steady-state solution). Thus, $$c_1 \frac{L}{2}+c_2 = -\frac18 H L^2 + d_1 \frac{L}{2} + d_2$$ $$c_1 = -\frac12 H L + d_1$$ Four equations, four unknowns, solve. It's not as bad as you may think. For example, $c_2=0$. The rest fall out from simple substitutions. The result I get is $$u(x) = \begin{cases} \frac18 H L x & x \in \left (0,\frac{L}{2}\right)\\-\frac12 H x^2 + \frac{5}{8} H L x-\frac18 H L^2 & x \in \left (\frac{L}{2},L\right) \end{cases} $$ EDIT To see what the above temperature looks like, rewrite the above result in the following form: $$f(y) = y \,\theta \left ( \frac12-y\right) - (4 y-1)(y-1)\, \theta \left (y- \frac12\right) $$ where $\theta$ is the Heaviside step function, $y=x/L$, and $f(y) = 8 u(L y)/(H L^2)$. Here is a plot of $f(y)$:
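The plot image from the original answer is not reproduced in this text. The short sketch below (assuming numpy and matplotlib are available) regenerates the dimensionless profile $f(y)$; note that the quadratic branch peaks at $y=5/8$, which by the formula above is where the rod is hottest.

```python
import numpy as np
import matplotlib.pyplot as plt

y = np.linspace(0.0, 1.0, 401)
f = np.where(y <= 0.5, y, -(4*y - 1)*(y - 1))   # dimensionless steady-state profile

plt.plot(y, f)
plt.axvline(5/8, linestyle="--")   # vertex of the quadratic branch: the hottest point
plt.xlabel("y = x/L")
plt.ylabel("f(y) = 8 u(Ly) / (H L^2)")
plt.title("Steady-state temperature profile")
plt.show()
```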
{ "language": "en", "url": "https://math.stackexchange.com/questions/503716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fourier series of $f(x)=x^2$ in $x∈[0,2\pi]$ and in $x∈[−\pi,\pi]$? Fourier series - what is the difference between the Fourier series of $f(x)=x^2$ in $x∈[0,2\pi]$ and in $x∈[−\pi,\pi]$?
Conceptually, Fourier series try to express a periodic function in terms of sines and cosines. Consider the two situations you mentioned here, letting $f(x) = x^2$. If your periodic function is $f$ from $[0,2\pi]$, then it would be just the right part of the parabola from 0 to $2\pi$ repeated again and again. But if you evaluate f from $-\pi$ to $\pi$, then the function looks like a full parabola from $-\pi$ to $\pi$, which, if repeated, creates a completely different periodic function than the other case. Therefore, the domain is very important in defining what exactly the periodic function you are trying to express in Fourier's domain.
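For reference, here are the two expansions written out (standard results, quoted from memory, so worth checking against a table): on $[-\pi,\pi]$ the periodic extension is even, so only cosines appear, while on $[0,2\pi]$ the extension has jumps at the endpoints and sine terms with slowly decaying coefficients show up.

$$x^2 \sim \frac{\pi^2}{3} + 4\sum_{n=1}^{\infty}\frac{(-1)^n}{n^2}\cos(nx), \qquad x\in[-\pi,\pi],$$

$$x^2 \sim \frac{4\pi^2}{3} + \sum_{n=1}^{\infty}\left(\frac{4}{n^2}\cos(nx) - \frac{4\pi}{n}\sin(nx)\right), \qquad x\in[0,2\pi].$$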
{ "language": "en", "url": "https://math.stackexchange.com/questions/503796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does every $p$-group of odd order admit fixed point free automorphisms? Does every $p$-group of odd order admit fixed point free automorphisms? equivalently, Given an odd order $p$-group $P$, is there a group $C$ such that we can form a Frobenius group $P\rtimes C$? Note that this is not true for $p$-groups of even order, for example $Q_8$ and $C_4$. But I cannot think of an example with odd order.
The following family of $p$-groups provides counterexamples: $$G = \langle a, b, c,d |a^{p^n}=b^{p^4}=c^{p^4}=d^{p^2}=1, [a,b]=[a,c]=b^{p^2}, [a,d]=c^{p^2}, [b,c]=a^{p^{n-2}}, [b,d]=c^{p^2}, [c,d]=c^{p^2} \rangle$$ with $p$ odd, and $n>3$. For such a group $G$, every automorphism is central, that is, $\operatorname{Aut}(G)$ acts trivially on $G/\operatorname{Z}(G)$. It is easy to see that the number of central automorphisms in that case (actually, for any finite group with no abelian direct factor) is equal to the order of $\operatorname{Hom}\left(G/G',\operatorname{Z}(G)\right)$. Thus $\operatorname{Aut}(G)$ is a $p$-group, so there is no automorphism acting fixed point freely on $G$. The above example is due to V. Jain, P. Rai and M. Yadav. As mentioned by Steve D, it is proved by U. Martin and G. Helleloid that (in some sense) almost all finite $p$-groups have an automorphism group of $p$-power order, so 'almost all' $p$-groups have no fixed point free automorphisms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/503891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Hint on power sum coefficients Please do not give anything more than a tiny hint for this question. I know that there is a well-known formula for $$\sum_{i=1}^n i^k,$$ where $k$ is any non-negative integer. I have been able to prove that in fact it is a polynomial in $n$, $$\sum_{i=1}^n i^k = \sum_{j=0}^{k+1} a_j n^j,$$ with high-order term $\frac 1 {k+1} n^{k+1}$ and zero constant term. In the process, I found a rather awkward method of calculating the rest of the coefficients. I'm now trying to figure out what the rest of them are. So far, I've gotten $$a_j = \frac{k!}{j!(k-j+1)!}-\sum_{m=j+1}^{k+1}a_{m}\frac{m!}{j!(m-j+1)!},$$ where $a_j$ is the coefficient of $n^j$ (for $j\le k$). Can someone give me a tiny hint on how to proceed? Please do not go and tell me what the coefficients are, or how the rest of the proof goes, or anything like that.
One hint is that it is far easier to do this if you replace $i^k$ by another polynomial of degree $k$ in $i$, namely by$~\binom ik$. Check that you can find $\sum_{i=0}^n\binom ik$ easily. Then it is theoretically only a question of transforming the basis $[1,i,i^2,\ldots]$ of the polynomial functions in$~i$ to the basis $[\binom i0=1,\binom i1=i,\binom i2=\frac{i(i-1)}2,\ldots]$ and back. In practice this messes the concrete values up considerably.
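Without giving away the power-sum coefficients themselves, the "easy" sum the hint relies on can at least be checked numerically; the snippet below is just such a sanity check (my own, and it does reveal the closed form of that one auxiliary sum, so skip it if you want to find it yourself).

```python
from math import comb

# Check the auxiliary identity: sum_{i=0}^{n} C(i, k) has a simple closed form.
for k in range(0, 5):
    for n in range(k, 12):
        lhs = sum(comb(i, k) for i in range(n + 1))   # comb(i, k) = 0 when i < k
        rhs = comb(n + 1, k + 1)
        assert lhs == rhs
print("sum_{i=0}^{n} C(i,k) == C(n+1, k+1) verified for small n, k")
```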
{ "language": "en", "url": "https://math.stackexchange.com/questions/503986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
mean value theorem on an open interval I know that the conditions for the mean value theorem state $f$ must be continuous on $[a,b]$ and differentiable on $(a,b)$. What happens if we change the condition to $f$ is continuous on $ (a,b)$ but not at the endpoints?
Given $a < b$: for any $\epsilon_{a} > 0$ and any $\epsilon_{b} > 0$ such that $\epsilon_{a} + \epsilon_{b} < b - a$, there exists $\xi$ with $a + \epsilon_{a} < \xi < b - \epsilon_{b}$ such that $$ \frac{f\left(b - \epsilon_{b}\right) - f\left(a + \epsilon_{a}\right)}{b - a - \epsilon_{a} - \epsilon_{b}} = f'\left(\xi\right). $$ In other words, the mean value theorem still applies on every closed subinterval $[a+\epsilon_a,\,b-\epsilon_b]$, even though nothing can be said about the endpoints $a$ and $b$ themselves.
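One concrete illustration (my own example, not from the answer above): take $f(x)=\sin(1/x)$ on $(0,1)$. It is continuous and differentiable there but has no limit at $0$, so the classical statement with endpoints $a=0$, $b=1$ cannot even be formulated; still, for every admissible $\epsilon_a,\epsilon_b$ the shrunken-interval version applies:

$$\frac{\sin\!\left(\frac{1}{1-\epsilon_b}\right)-\sin\!\left(\frac{1}{\epsilon_a}\right)}{1-\epsilon_a-\epsilon_b}=-\frac{\cos(1/\xi)}{\xi^{2}}\quad\text{for some }\xi\in(\epsilon_a,\,1-\epsilon_b).$$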
{ "language": "en", "url": "https://math.stackexchange.com/questions/504211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Summation Index Confusion I've seen various confusing variations of the summation signs, so can anyone give me clarification on them? I understand the most common ones from calculus class, for example: $$\sum^\infty_{i=1}\frac{1}{i}\to\infty$$ The rest is very confusing, as all the indices and whatnot are on the bottom of the summation, sometimes with nothing on the top. For example, the multinomial theorem: $$(x_1+\dots +x_k)^n=\sum_{a_1,a_2,\dots ,a_k\geq 0} {n\choose{a_1,\dots ,a_k}}x_1^{a_1}x_2^{a_2}\cdots x_k^{a_k}$$ Or another formula to calculate $(1+y)^n$ when $n$ isn't necessarily a natural number: $$(1+x)^n=\sum_{i\geq0}{n\choose{i}}x^i$$ There are also cases of the following exemplar types of summations: $$\underset{i\text{ odd}}{\sum_{i=0}^n}\dots$$ $$\sum_{a_1=n}^n\dots$$ Can someone please explain what all these indications of summation indices mean? Thank you.
TZakrevskiy’s answer covers all of your examples except the one from the multinomial theorem. That one is abbreviated, and you simply have to know what’s intended or infer it from the context: the summation is taken over all $k$-tuples $\langle a_1,\ldots,a_k\rangle$ of non-negative integers satisfying the condition that $a_1+a_2+\ldots+a_k=n$. If the condition were written out in full, the summation would look like $$\huge\sum_{{a_1,\ldots,a_k\in\Bbb N}\atop{a_1+\ldots+a_k=n}}\ldots\;.$$ (Note that my $\Bbb N$ includes $0$.) Added example: Let $n=2$ and $k=3$. The ordered triples $\langle a_1,a_2,a_3\rangle$ that satisfy $a_1+a_2+a_3=2$ are: $$\begin{array}{ccc} a_1&a_2&a_3\\ \hline 0&0&2\\ 0&2&0\\ 2&0&0\\ 0&1&1\\ 1&0&1\\ 1&1&0 \end{array}$$ Thus, the sum in question is $$\begin{align*} \binom2{0,0,2}x_1^0x_2^0x_3^2&+\binom2{0,2,0}x_1^0x_2^2x_3^0+\binom2{2,0,0}x_1^2x_2^0x_3^0\\ &+\binom2{0,1,1}x_1^0x_2^1x_3^1+\binom2{1,0,1}x_1^1x_2^0x_3^1+\binom2{1,1,0}x_1^1x_2^1x_3^0\;. \end{align*}$$
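If you want to double-check the worked example mechanically, a CAS expansion agrees with the six terms listed above; the sketch below assumes sympy is available.

```python
from math import factorial
from sympy import symbols, expand, simplify

x1, x2, x3 = symbols('x1 x2 x3')

# Rebuild the multinomial sum for n = 2, k = 3 over all (a1, a2, a3) with a1 + a2 + a3 = 2
n = 2
total = 0
for a1 in range(n + 1):
    for a2 in range(n - a1 + 1):
        a3 = n - a1 - a2
        coeff = factorial(n) // (factorial(a1) * factorial(a2) * factorial(a3))
        total += coeff * x1**a1 * x2**a2 * x3**a3

print(simplify(total - expand((x1 + x2 + x3)**2)))   # prints 0: the two expressions agree
```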
{ "language": "en", "url": "https://math.stackexchange.com/questions/504262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Taylor Expanding an Integrand Just want to be sure of this: If I taylor expand an integrand about 0 then truncate it to say linear order, then integrate this truncation, does the integral evaluate to a function which is an accurate representation of the original integral, but only around 0? If so why is this? Cheers!
Taylor's theorem says (I skip the hypotheses): $$f(x) = f(0)+f'(0)x+\frac{f''(\xi)}{2}x^2.$$ The point is that $\xi$ depends on $x$, and in general this dependence is very nasty. You can integrate both sides of the equation with respect to $x$ over some interval, but from the right-hand side you only get an approximation of the integral, with an error controlled by the remainder term, so the approximation is good only when the interval stays close to $0$.
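A tiny numerical experiment (my own illustration) shows what "accurate only near $0$" means in practice: integrating the linear truncation $1+x$ of $e^x$ over $[0,h]$ gives $h+h^2/2$, and the error behaves like $h^3/6$, so it is excellent for small $h$ and poor for large $h$.

```python
from math import exp

for h in [1.0, 0.5, 0.1, 0.01]:
    exact  = exp(h) - 1          # integral of e^x over [0, h]
    approx = h + h**2 / 2        # integral of the linear Taylor truncation 1 + x
    err = exact - approx
    print(f"h={h:5}  error={err:.3e}  error/h^3={err / h**3:.4f}")  # ratio -> 1/6 as h -> 0
```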
{ "language": "en", "url": "https://math.stackexchange.com/questions/504326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding the Domain and Range of a function composition I'm having trouble finding the domain and range of a function composition. $f(x) = x^2 - 3x$ $g(x) = \sqrt{x}$ $(g \circ f)(x) = g(f(x)) = \sqrt{(x^2 - 3x)}$ How do I find the domain and range of $(g \circ f)(x)$? (I know the answer because it's in the back of the book, but please tell me how?) Thanks in advance!
The outer function $g:\ y\mapsto\sqrt{\mathstrut y}$ is defined when $y\geq 0$. The inner function $f:\ x\mapsto y:=x(x-3)$ is $\geq0$ when $x\leq0$ or $x\geq 3$, and is $<0$ when $0<x<3$. It follows that the domain of $g\circ f$ is ${\mathbb R}\setminus\>]0,3[\ $. Since the graph of $f$ is a parabola it follows that $f$ takes all values $y\in{\mathbb R}_{\geq0}$ when $x$ goes from $3$ to $\infty$, and the same is true when $x$ goes from $-\infty$ to $0$. Since $g$ maps the set ${\mathbb R}_{\geq0}$ onto itself we therefore conclude that the range of $g\circ f$ is ${\mathbb R}_{\geq0}$, whereby each value is taken exactly two times.
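A quick numerical scan (my own check) is consistent with this: $\sqrt{x^2-3x}$ returns NaN exactly at sample points in $(0,3)$, and the values grow without bound as $|x|$ grows, covering all of $[0,\infty)$.

```python
import numpy as np

x = np.linspace(-5, 8, 27)                   # step 0.5, so 0 and 3 are included
with np.errstate(invalid="ignore"):          # silence the warning for negative arguments
    y = np.sqrt(x**2 - 3*x)

for xi, yi in zip(x, y):
    print(f"x={xi:6.2f}  g(f(x))={yi:6.2f}") # NaN exactly when 0 < x < 3
```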
{ "language": "en", "url": "https://math.stackexchange.com/questions/504417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding time with given distance and acceleration. Help Needed A plane accelerates from rest at a constant rate of $5.00 \, \frac{m}{s^2}$ along a runway that is $1800\;m$ long. Assume that the plane reaches the required takeoff velocity at the end of the runway. What is the time $t_{TO}$ needed to take off? I tried to find time by using only distance and acceleration, so $\frac{v_f-v_i}{a} = \frac{1800}{5} = 360 \, s$. However, this is incorrect. I also tried taking the square root of the answer, however that too is incorrect. Any help would be appreciated.
The acceleration is $a=5$, and you start from rest at position zero on the runway. The runway length is $L = 1800$. Your speed will be $v(t) = \int_0^t a d \tau = at$. (Starting from rest means $v(0) = 0$.) Your position will be $x(t) = \int_0^t v(\tau) d \tau = \int_0^t a \tau d \tau = a \frac{t^2}{2}$. (Starting from 'position zero' means we are taking $x(0) = 0$.) So we need to solve $L = x(t) = a \frac{t^2}{2}$ for $t$. This gives $t = \sqrt{\frac{2L}{a}} = \sqrt{720}$ ($\approx 27$ seconds).
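Plugging in the numbers, a quick check of the arithmetic:

```python
from math import sqrt

a, L = 5.0, 1800.0
t = sqrt(2 * L / a)          # from L = a t^2 / 2
v = a * t                    # takeoff speed, for interest
print(f"t = {t:.1f} s, v = {v:.1f} m/s")   # ≈ 26.8 s and ≈ 134 m/s
```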
{ "language": "en", "url": "https://math.stackexchange.com/questions/504494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Proving that every countable metric space is disconnected? Any hints on how I should go about this? Thanks.
The statement is false if $|X|=1$, so countable here is probably supposed to be understood as countably infinite. HINT: Let $X$ be the space, and let $x,y\in X$ with $x\ne y$. Let $r=d(x,y)>0$. Use the fact that $X$ is countable to show that there must be an $s\in(0,r)$ such that $$\{z\in X:d(x,z)=s\}=\varnothing\;.$$ If $B(x,s)=\{z\in X:d(x,z)<s\}$, show that $B(x,s)$ and $X\setminus B(x,s)$ are non-empty open sets in $X$, and conclude that $X$ is not connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/504588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Simplest or nicest proof that $1+x \le e^x$ The elementary but very useful inequality that $1+x \le e^x$ for all real $x$ has a number of different proofs, some of which can be found online. But is there a particularly slick, intuitive or canonical proof? I would ideally like a proof which fits into a few lines, is accessible to students with limited calculus experience, and does not involve too much analysis of different cases.
For $x>0$ we have $e^t>1$ for $0<t<x$. Hence, $$x=\int_0^x1dt \color{red}{\le} \int_0^xe^tdt =e^x-1 \implies 1+x\le e^x$$ For $x<0$ we have $e^{t} <1$ for $x <t<0$, so $$-x=\int^0_x1dt \color{red}{\ge} \int^0_xe^tdt =1-e^x \implies 1+x\le e^x$$ For $x=0$ both sides equal $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/504663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "113", "answer_count": 27, "answer_id": 9 }
Simplest proof that $\binom{n}{k} \leq \left(\frac{en}{k}\right)^k$ The inequality $\binom{n}{k} \leq \left(\frac{en}{k}\right)^k$ is very useful in the analysis of algorithms. There are a number of proofs online but is there a particularly elegant and/or simple proof which can be taught to students? Ideally it would require the minimum of mathematical prerequisites.
$\binom{n}{k}\leq\frac{n^k}{k!}=\frac{n^k}{k^k}\frac{k^k}{k!}\leq\frac{n^k}{k^k}\sum_m\frac{k^m}{m!}=(en/k)^k$. (I saw this trick in some answer on this site, but can't recall where.)
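A brute-force check over small values (my own sanity test) confirms the inequality and also shows how loose it can be:

```python
from math import comb, e

worst = 0.0
for n in range(1, 40):
    for k in range(1, n + 1):
        bound = (e * n / k) ** k
        assert comb(n, k) <= bound
        worst = max(worst, comb(n, k) / bound)
print(f"max ratio C(n,k) / (en/k)^k observed: {worst:.4f}")   # always below 1
```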
{ "language": "en", "url": "https://math.stackexchange.com/questions/504707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proof: $\tan(x)$ is surjective from $(-\pi/2,\pi/2)$ onto $\mathbb R$ To prove $\tan(x)$ defined on $]-π/2;π/2[$ is injective I take the derivative of $\tan(x)$ to get $\sec(x)^2$. This shows that $\tan(x)$ is monotonic (strictly) increasing which implies it is injective. However how do I show it is surjective ? That every single real number corresponds to some number in the domain of $\tan(x)$ ?
The above use of the intermediate value theorem uses $\overline{\mathbb R}$ to accomplish its task, but also seems to beg the question by asserting the one-sided limits of tangent at $-\frac{\pi}{2}$ and $\frac{\pi}{2}$. Here is a proof that uses neither geometry nor $\overline{\mathbb R}$. The proof does, however, use various derivative rules, including L'Hôpital's rule, as well as the intermediate value theorem (all of which can be derived in a noncircular manner). For every $x\in\mathbb R$ there exists $k\in\mathbb N$ with $2^k \ge x$ (proof found elsewhere). We can use this fact to show that tangent is surjective onto $\mathbb R$ from the domain $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ without resorting to geometry. The derivative of $f(x) = \frac{\tan(\frac{\pi/2+x}{2})}{\tan(x)}$ is, after plenty of simplification, $\frac{1}{\cos(\frac{\pi/2+x}{2})\sin(x)} \left(\frac{\cos(x)}{2\cos(\frac{\pi/2+x}{2})} - \frac{\sin(\frac{\pi/2+x}{2})}{\sin(x)}\right)$. This derivative can be seen to be negative on the interval $(\pi/4,\pi/2)$. The limit of $f$ as $x$ approaches $\pi/2$ is $2$, easily found using the chain rule and L'Hôpital's rule. Since $f$ is decreasing on $(\pi/4,\pi/2)$ and tends to $2$ at the right endpoint, every $x$ in the open interval $(\pi/4,\pi/2)$ satisfies $f(x) \ge 2$. This is when the pieces start to fall into place. Form the sequence $a_i$ defined by $a_1 = \pi/4$ and $a_j = (\pi/2 + a_{j-1})/2$ for $j>1$. We've already shown that $\frac{\tan(a_j)}{\tan(a_{j-1})} \ge 2$ for all $j > 1$. Since $\tan(a_1)=1$, it follows by induction that $\tan(a_j)\ge 2^{j-1}$, so for every real $r\ge 1$ there is some $i_1$ with $\tan(a_{i_1}) \ge 2^{i_1-1} \ge r$ (using the fact about powers of $2$ mentioned above). Since $\tan(-x) = -\tan(x)$, the same principle applies to any real $r \le -1$: there is an $i_2$ with $\tan(-a_{i_2}) \le -2^{i_2-1} \le r$. These two values are exactly what the intermediate value theorem needs to show that there must exist some $k$ in $(-a_{i_2},a_{i_1})$ with $\tan(k) = r$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/504797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Open subgroups of $\mathbb{R}$ Let $G$ be a nonempty open subset of $\mathbb{R}$ (with usual topology on $\mathbb{R}$) such that $x,y\in G$ implies that $x-y\in G$. Show that $G=\mathbb{R}$. Clearly $0\in G$. Now how to show that all real numbers are there in $G$? Please help.
Hint: Every nonempty open subset of $\Bbb R$ contains at least one open interval.
{ "language": "en", "url": "https://math.stackexchange.com/questions/504845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }