How to do this Math induction problem? Show that:
$$\frac n3 + \frac n9 + \frac {n}{27} + \cdots = \frac n2.$$
When I start with $\frac 13 + \frac 19 + \frac {1}{27}$ it leads to a number close to $.5$ but it's not exactly $.5$.
| You have an infinite geometric series $\frac n3 + \frac n9 + \frac n{27}+\dots=n\sum_{i=1}^\infty\frac 1{3^i}$. Using the terminology in the linked article, we have $a=\frac 13$ (the first term) and $r=\frac 13$ (the ratio between successive terms). As long as $|r| \lt 1$ this infinite sum converges to $\frac a{1-r}=\frac {\frac 13}{1-\frac 13}=\frac 12$, so your series sums to $\frac n2$. Your partial sums come close to $.5$ but never reach it: the value $\frac 12$ is attained only in the limit.
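A quick numerical sanity check of the closed form (my own sketch; the function name is made up):

```python
# Partial sums of n/3 + n/9 + n/27 + ...; the closed form a/(1-r)
# with a = n/3 and r = 1/3 predicts a limit of n/2.
def partial_sum(n, terms):
    return sum(n / 3**i for i in range(1, terms + 1))

print(partial_sum(1, 50))   # approaches 0.5, but no finite stage reaches it
print(partial_sum(6, 50))   # approaches 3.0
```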
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/504922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
What is $\int\log(\sin x)~dx$? I know that the value of the integral of $\cot(x)$ is $\log|\sin x|+C$ .
But what about:
$$\int\log(\sin x)~dx$$
Is there any easy way to find an antiderivative for this? Thanks.
| using $$\ln \sin x =-\ln 2-\sum_{n=1}^{\infty}\frac{\cos(2nx)}{n}, \ x \in [0,\pi].$$
$$\int \ln(\sin x)dx=-\ln(2)\int dx-\sum^{\infty}_{n=1}\frac{1}{n}\int (\cos 2nx)dx$$
$$=-\ln(2)\cdot x-\sum^{\infty}_{n=1}\frac{\sin(2nx)}{2n^2}+\mathcal{C}$$
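One can check the series antiderivative numerically against direct integration of $\ln\sin x$ over a subinterval of $(0,\pi)$; this is my own sketch (function names and the interval $[0.5, 1]$ are arbitrary choices):

```python
import math

def F(x, terms=20000):
    # Truncated series antiderivative: -x ln 2 - sum_n sin(2nx) / (2n^2)
    return -math.log(2) * x - sum(math.sin(2 * n * x) / (2 * n**2)
                                  for n in range(1, terms + 1))

def simpson(f, a, b, slices=1000):
    # Composite Simpson's rule for the direct integral
    h = (b - a) / slices
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, slices))
    return s * h / 3

a, b = 0.5, 1.0
direct = simpson(lambda x: math.log(math.sin(x)), a, b)
print(F(b) - F(a), direct)   # the two values should agree closely
```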
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/504983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Commutator of $x$ and $p^2$ I have a question:
If I have to find the commutator $[x, p^2]$ (with $p= {h\over i}{d \over dx} $) the right answer is:
$[x,p^2]=x p^2 - p^2x = x p^2 -pxp + pxp - p^2x = [x,p]p + p[x,p] = 2hip$
But why can't I say:
$[x,p^2]=x p^2 - p^2x = - x h^2{d^2 \over dx^2} + h^2 {d^2 \over dx^2}x = 0$ ?
Thank you for your reply.
| What you describe is a quite common situation which pops up when dealing with commutators of operators. On an appropriate space of functions $\mathcal D$ (like an $L^2$-space or the Schwartz space etc...), the operators $x$ and $p$ are given by
$$x(f)(x):=xf(x), $$
$$p(f)(x):=\frac{h}{i}\frac{df}{dx}, $$
for all $f\in \mathcal D$ and $x$ in the domain of $f$. In other words, $x(f)$ and $p(f)$ are elements in $\mathcal D$, i.e. functions. In particular $\frac{df}{dx}$ is the derivative of $f$ w.r.t. $x$ at the point $x$, by convention.
The commutator
$$[x,p]$$
is the operator that, evaluated at any $f$, gives the function $[x,p](f)$ s.t.
$$[x,p](f)(x):=\frac{h}{i}x\frac{df}{dx}-\frac{h}{i}\frac{d}{dx}(xf)=
\frac{h}{i}x\frac{df}{dx}-\frac{h}{i}x\frac{d}{dx}(f)-\frac{h}{i}f(x)=
-\frac{h}{i}f(x), $$
or $[x,p](f)=-\frac{h}{i}f$.
Similarly,
$$[x,p^2]$$
is the operator that, once evaluated at any $f\in \mathcal D$, gives the function
$[x,p^2](f)$, with
$$[x,p^2](f)(x)=-h^2x\frac{d^2f}{dx^2}+h^2\frac{d}{dx}\left(\frac{d}{dx} (xf) \right)=
-h^2x\frac{d^2f}{dx^2}+h^2\frac{d}{dx}\left(f+x\frac{df}{dx} \right)=
-h^2x\frac{d^2f}{dx^2}+h^2\frac{df}{dx}+
h^2\frac{d}{dx}\left(x\frac{df}{dx}\right)=\\
-h^2x\frac{d^2f}{dx^2}+h^2\frac{df}{dx}+
h^2\frac{df}{dx}+ h^2x\frac{d^2f}{dx^2}=2h^2\frac{df}{dx}.$$
Equivalently
$$[x,p^2](f)(x)=2hip(f)(x)$$
or
$$[x,p^2](f)=2hip(f)$$
as expected.
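The operator identity $(xf)'' - x f'' = 2f'$ underlying the computation (it is $[x,p^2]f = 2hip\,f$ after dividing out $h^2$, with $\hbar$ set to $1$) can be verified by finite differences; the test function and step size here are arbitrary choices of mine:

```python
import math

def f(x):
    return math.exp(-x * x)

def d1(g, x, h=1e-4):
    # central first difference
    return (g(x + h) - g(x - h)) / (2 * h)

def d2(g, x, h=1e-4):
    # central second difference
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

x0 = 0.7
lhs = d2(lambda t: t * f(t), x0) - x0 * d2(f, x0)   # (xf)'' - x f''
rhs = 2 * d1(f, x0)                                 # 2 f'
print(lhs, rhs)   # agree to within finite-difference error
```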
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Show that $\gcd(a + b, a^2 + b^2) = 1$ or $2$ if $\gcd(a, b)=1$ Show that $\gcd(a + b, a^2 + b^2) = 1$ or $2$ if $\gcd(a, b)=1$.
I have absolutely no clue where to start and what to do, please provide complete proof and answer.
| We first show that there is no odd prime $p$ that divides both $a+b$ and $a^2+b^2$.
For if $p$ divides both, then $p$ divides $(a+b)^2-(a^2+b^2)$, so $p$ divides $2ab$. Since $p$ is odd, it divides one of $a$ or $b$, say $a$. But then since $p$ divides $a+b$, it must divide $b$. This contradicts the fact that $a$ and $b$ are relatively prime.
So the only possible common divisors of $a+b$ and $a^2+b^2$ are powers of $2$. If $a$ is even and $b$ is odd (or the other way around), then $a+b$ is odd, and the greatest common divisor of $a+b$ and $a^2+b^2$ is therefore $1$.
If $a$ and $b$ are both odd, then $a^2+b^2\equiv 2\pmod{8}$, and therefore the highest power of $2$ that divides $a^2+b^2$ is $2^1$. So in that case $\gcd(a+b,a^2+b^2)=2$.
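An exhaustive check of the claim on small coprime pairs (my own sketch):

```python
from math import gcd

# gcd(a+b, a^2+b^2) is 1 or 2 for coprime a, b,
# and it is 2 exactly when a and b are both odd.
for a in range(1, 60):
    for b in range(1, 60):
        if gcd(a, b) == 1:
            g = gcd(a + b, a * a + b * b)
            assert g in (1, 2)
            assert (g == 2) == (a % 2 == 1 and b % 2 == 1)
print("checked all coprime pairs below 60")
```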
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Use $\epsilon-\delta$ definition of limit to prove that $\displaystyle \lim_{x \to 0} x \lfloor \frac{1}{x} \rfloor = 1$. I was trying to write some nice problems for applying $\epsilon-\delta$ definition to give it to my friend but then I realized that I couldn't solve some of them either. This is one of them:
Use $\epsilon-\delta$ definition of limit to prove that
$$ \lim_{x \to 0} x \lfloor \frac{1}{x} \rfloor = 1$$
It's easy to show that this is true by using the squeeze (sandwich) theorem, but I'm looking for an $\epsilon-\delta$ proof.
Also, a similar problem could be:
$$ \lim_{n \to \infty} \frac{[nx]}{n}=x$$
Again it's obvious that this is true by using the squeeze theorem, but I'm looking for an elementary proof that uses nothing but just the definition of the limit of a sequence.
| Sorry, too long for a comment; I may delete this in the future. Here is exactly what I mean by my comment.
Assume that on some interval $I=(a-b,a+b)$ around $a$ we have.
$$f(x) \leq g(x) \leq h(x) \, \forall x\in I \backslash \{ a \}$$
and the outside limits are easy, meaning that you can prove with $\epsilon-\delta$ that $\lim_{x \to a} f(x) =\lim_{x \to a} h(x)= L$. Then you get for free a proof with $\epsilon-\delta$ that $\lim_{x \to a} g(x)= L$.
Indeed
$$f(x) \leq g(x) \leq h(x) \Rightarrow f(x) -L \leq g(x)-L \leq h(x)-L \Rightarrow $$
$$\left|g(x)-L \right| \leq \max\{ \left| f(x) -L \right| , \left| h(x) -L \right| \} (*)$$
Now, pick an $\epsilon >0$, pick the corresponding $\delta_1$ for $f$ and $\delta_2$ for $h$, and set $\delta = \min \{ \delta_1, \delta_2 \}$.
Then if $0 < |x-a | < \delta$ you have $0 < |x-a | < \delta_1$ and $0 < |x-a | < \delta_2$, so
$$ \left| f(x) -L \right| < \epsilon, \left| h(x) -L \right| < \epsilon \,,$$
and if you plug these in $(*)$ you are done.
For this problem, the heuristic reason why I think that, no matter what the approach is, if it is simple it is a hidden squeeze theorem argument: the simplest way of relating $\lfloor y \rfloor$ to $y$ for all real $y$ at once is $y-1 \leq \lfloor y \rfloor \leq y$. Moreover, the bounds can be attained, so you can't improve it. But once you do that, it becomes exactly the argument I included.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Certainty that one has found all of the socks in a pile Suppose that I have a pile of $n$ socks, and, of these, $2k$ are "mine." Each of the socks that is mine has a mate (so that there are $k$ pairs of my socks). I know $n$, but not $k$. Assume that all possible values of $k$ are equally likely.
Now, I draw socks from the pile, uniformly and at random. After I draw a sock, I keep it if it is mine, and discard it otherwise.
If, after I have drawn $m$ of my socks from the pile, I have drawn only pairs of my socks (that is, if I have drawn one of my socks, I have also drawn its mate), how confident am I that I have drawn all $2k$ of my socks from the pile?
| Suppose that after $m$ draws you have exactly $j$ pairs of your socks and no singletons. Then, given $K=k,m,n$ and provided that $j \le k$, $2j \le m$, and $2k \le n$, the probability of this event is
$$\Pr(J=j \text{ pairs and no singletons}| K=k,m,n) = {k \choose j}{n-2k \choose m-2j}\big/ {n \choose m}$$
and so the probability that $J=j$ given $K=j,m,n$ is
$$\Pr(J=j \text{ pairs and no singletons}| K=j,m,n) = {n-2j \choose m-2j}\big/ {n \choose m}.$$
You have suggested a flat prior for $K$, so the posterior probability given you have only have pairs is
$$\Pr(K=j| J=j \text{ pairs and no singletons}, m,n) = \frac{\displaystyle{n-2j \choose m-2j}}{\displaystyle\sum_{k=j}^{\lfloor n/2 \rfloor} {k \choose j}{n-2k \choose m-2j}}.$$
If you do not know $j$ and you are just observing that the number of your odd socks is $0$ after $m$ draws from the pile, then the calculation becomes a little more complicated, with extra sums:
$$\Pr(K=J| \text{no singletons}, m,n) = \frac{\displaystyle \sum_{j=0}^{\lfloor m/2 \rfloor} {n-2j \choose m-2j}}{\displaystyle\sum_{j=0}^{\lfloor m/2 \rfloor}\displaystyle\sum_{k=j}^{\lfloor n/2 \rfloor} {k \choose j}{n-2k \choose m-2j}}.$$
If your prior for $K$ was not flat, it could easily be introduced into the results above to give
$${k \choose j}{n-2k \choose m-2j} \Pr(K=k) \big/ {n \choose m}\, , \, \, {n-2j \choose m-2j} \Pr(K=j) \big/ {n \choose m} \, ,$$
$$\frac{\displaystyle {n-2j \choose m-2j}\Pr (K=j)}{\displaystyle\sum_{k=j}^{\lfloor n/2 \rfloor} {k \choose j}{n-2k \choose m-2j}\Pr (K=k)} \text{ and } \frac{\displaystyle \sum_{j=0}^{\lfloor m/2 \rfloor} {n-2j \choose m-2j}\Pr (K=j)}{\displaystyle\sum_{j=0}^{\lfloor m/2 \rfloor}\displaystyle\sum_{k=j}^{\lfloor n/2 \rfloor} {k \choose j}{n-2k \choose m-2j}\Pr (K=k)}$$ respectively.
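The hypergeometric count $\binom{k}{j}\binom{n-2k}{m-2j}$ at the heart of these formulas can be cross-checked by brute-force enumeration on a small instance (my own sketch; the instance $n=8$, $k=2$, $m=4$, $j=1$ is an arbitrary choice):

```python
from itertools import combinations
from math import comb

n, k, m, j = 8, 2, 4, 1
mine = [(0, 1), (2, 3)]      # my k pairs; socks 4..7 belong to others

hits = 0
for draw in combinations(range(n), m):
    s = set(draw)
    pairs = sum(1 for p in mine if set(p) <= s)
    singles = sum(1 for p in mine if len(set(p) & s) == 1)
    if singles == 0 and pairs == j:
        hits += 1

formula = comb(k, j) * comb(n - 2 * k, m - 2 * j)
print(hits, formula)   # the enumeration matches the binomial count
```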
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why are strict inequalities stronger than non-strict inequalities? I'm working with induction proofs involving inequalities and I am encountering example proofs that wish to show things of the sort $n! \le n^n$ for every positive integer. The proof given in the inductive step is $(n+1)! = (n+1)\cdot n! \le (n+1)n^n \lt (n+1)(n+1)^n = (n+1)^{n+1}$. I'm not quite clear why the strict inequality was admissible in the proof, if I'm trying to show $n! \le n^n$ rather than $n! \lt n^n$. Thanks for any insight.
| You can replace $<$ with $\leq$, but not $\leq$ with $<$.
For example, we have $1 < 2$, so $1 \leq 2$ is also true.
On the other hand, we have $1 \leq 1$, but it's not true that $1 < 1$.
Also, you proved $n! < n^n$ for $n > 1$, but this is not true for every positive integer, since for $n=1$ we get the equality. But it's true that $n! \leq n^n$ for every positive integer $n$.
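A quick check of both claims for small $n$ (my own sketch):

```python
from math import factorial

# n! <= n^n holds for every positive integer, with equality only at n = 1.
for n in range(1, 21):
    assert factorial(n) <= n**n
    if n > 1:
        assert factorial(n) < n**n
assert factorial(1) == 1**1
print("checked n = 1 .. 20")
```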
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Surprising identities / equations What are some surprising equations/identities that you have seen, which you would not have expected?
This could be complex numbers, trigonometric identities, combinatorial results, algebraic results, etc.
I'd request to avoid 'standard' / well-known results like $ e^{i \pi} + 1 = 0$.
Please write a single identity (or group of identities) in each answer.
I found this list of Funny identities, in which there is some overlap.
| This one really surprised me:
$$\int_0^{\pi/2}\frac{dx}{1+\tan^n(x)}=\frac{\pi}{4}$$
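The identity holds for every $n$ (substitute $x \mapsto \pi/2 - x$ and add the two copies of the integral); a numerical check by the midpoint rule, my own sketch:

```python
import math

def integral(n, slices=100000):
    # Midpoint rule on (0, pi/2); midpoints avoid the endpoint where tan blows up.
    h = (math.pi / 2) / slices
    total = 0.0
    for i in range(slices):
        x = (i + 0.5) * h
        total += h / (1 + math.tan(x)**n)
    return total

for n in (1, 2, 3, 7):
    print(n, integral(n))   # pi/4 = 0.785398... each time
```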
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "350",
"answer_count": 108,
"answer_id": 24
} |
Surprising identities / equations What are some surprising equations/identities that you have seen, which you would not have expected?
This could be complex numbers, trigonometric identities, combinatorial results, algebraic results, etc.
I'd request to avoid 'standard' / well-known results like $ e^{i \pi} + 1 = 0$.
Please write a single identity (or group of identities) in each answer.
I found this list of Funny identities, in which there is some overlap.
| Some zeta-identies have been much surprising to me.
Let's denote the value $\zeta(s)-1$ as $\zeta_1(s)$ then
$$ \small \begin{array} {}
1 \zeta_1(2) &+&1 \zeta_1(3)&+&1 \zeta_1(4)&+&1 \zeta_1(5)&+& ... &=&1\\
1 \zeta_1(2) &+&2 \zeta_1(3)&+&3 \zeta_1(4)&+&4 \zeta_1(5)&+& ... &=&\zeta(2)\\
& &1 \zeta_1(3)&+&3 \zeta_1(4)&+&6 \zeta_1(5)&+& ... &=&\zeta(3)\\
& & & &1 \zeta_1(4)&+&4 \zeta_1(5)&+& ... &=&\zeta(4)\\
& & & & & &1 \zeta_1(5)&+& ... &=&\zeta(5)\\
... & & & & & & & &... &= & ...
\end{array}
$$
There are very similar stunning alternating-series relations:
$$ \small \begin{array} {}
1 \zeta_1(2) &-&1 \zeta_1(3)&+&1 \zeta_1(4)&-&1 \zeta_1(5)&+& ... &=&1/2\\
& &2 \zeta_1(3)&-&3 \zeta_1(4)&+&4 \zeta_1(5)&-& ... &=&1/4\\
& & & &3 \zeta_1(4)&-&6 \zeta_1(5)&+& ... &=&1/8\\
& & & & & &4 \zeta_1(5)&-& ... &=&1/16\\
... & & & & & & & &... &= & ...
\end{array}
$$
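The first two rows of the non-alternating table can be checked numerically by truncating both the $s$-sum and the defining $k$-sum of $\zeta_1$; my own sketch:

```python
# Row 1: zeta_1(2) + zeta_1(3) + zeta_1(4) + ... = 1,
# Row 2: 1*zeta_1(2) + 2*zeta_1(3) + 3*zeta_1(4) + ... = zeta(2),
# where zeta_1(s) = zeta(s) - 1 = sum_{k >= 2} k^(-s).
def zeta1(s, kmax=10000):
    return sum(k**-s for k in range(2, kmax + 1))

row1 = sum(zeta1(s) for s in range(2, 60))
row2 = sum((s - 1) * zeta1(s) for s in range(2, 60))
print(row1)   # close to 1
print(row2)   # close to zeta(2) = pi^2/6 = 1.6449...
```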
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "350",
"answer_count": 108,
"answer_id": 56
} |
Surprising identities / equations What are some surprising equations/identities that you have seen, which you would not have expected?
This could be complex numbers, trigonometric identities, combinatorial results, algebraic results, etc.
I'd request to avoid 'standard' / well-known results like $ e^{i \pi} + 1 = 0$.
Please write a single identity (or group of identities) in each answer.
I found this list of Funny identities, in which there is some overlap.
| This is slightly contrived, but consider a situation where you have two balls, of mass $M$ and $m$, where $M=16\times100^N\times m$ for some integer $N$. The balls are placed in a line in front of a wall, with the lighter ball between the wall and the heavier ball.
We push the heavy ball towards the lighter one and the wall. The balls are assumed to collide elastically with the wall and with each other. The smaller ball bounces off the larger ball, hits the wall and bounces back. At this point there are two possible solutions: the balls collide with each other infinitely many times until the larger ball reaches the wall (assume they have no size), or the collisions from the smaller ball eventually cause the larger ball to turn around and start heading in the other direction - away from the wall.
In fact, it is the second scenario which occurs: the larger ball eventually heads away from the wall. Denote by $p(N)$ the number of collisions between the two balls before the larger one changes direction, and gaze in astonishment at the values of $p(N)$ for various $N$:
\begin{align}
p(0)&=3\\
p(1)&=31\\
p(2)&=314\\
p(3)&=3141\\
p(4)&=31415\\
p(5)&=314159\\
\end{align}
and so on. $p(N)$ is the first $N+1$ digits of $\pi$!
This can be made to work in other bases in the obvious way.
See 'Playing Pool with $\pi$' by Gregory Galperin.
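The counting can be reproduced by simulating velocities only: while the heavy ball still moves toward the wall, collisions strictly alternate ball–ball, wall, so positions never need to be tracked. This sketch is mine, and it reads "before the larger one changes direction" as excluding the collision that reverses the heavy ball:

```python
def p(N):
    # Heavy ball of mass M = 16 * 100^N pushed (v = -1) toward a light
    # ball (mass 1, at rest) sitting between it and the wall.
    M, m = 16 * 100**N, 1.0
    v_big, v_small = -1.0, 0.0
    count = 0
    while True:
        # elastic collision between the two balls (simultaneous update)
        v_big, v_small = (((M - m) * v_big + 2 * m * v_small) / (M + m),
                          ((m - M) * v_small + 2 * M * v_big) / (M + m))
        if v_big >= 0:          # this collision turned the heavy ball around
            return count        # ...and is not counted
        count += 1
        v_small = -v_small      # light ball rebounds elastically off the wall

print([p(N) for N in range(3)])  # [3, 31, 314]
```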
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "350",
"answer_count": 108,
"answer_id": 88
} |
Using characteristic function to deduce convergence of Bernoulli random variables Let $Y_1, Y_2,...$ be a sequence of independent Bernoulli(0.5) random variables and $X_n = \sum_{i=1}^{n} Y_i 2^{-i}$. I need to use the characteristic function to deduce that $X_n$ converges in distribution and determine the limiting distribution. Also need to determine if $X_n$ converges in other senses.
I know in general, the Bernoulli Characteristic function is
\begin{align}
\varphi(t) = 1 - p + pe^{it}
\end{align}
But beyond that, I am really lost on how to use it to show convergence.
| Bernoulli RVs are bounded, so the convergence even occurs pointwise, hence a.s., hence in probability, hence in distribution. Judging by the way the question was asked, I think your instructor wants a very specific answer. The limiting RV, in any of the senses above, is uniformly distributed on $[0, 1]$. You can sort of tell that's the case because, if you think about what you're doing, at each step you are specifying the $n$th binary digit of the limit, and you are doing so with equal probability for $0$ or $1$. To prove this rigorously, I think (but am not completely sure) that you can imitate the proof of the convergence condition for an infinite product. Heuristically, you compute the log of the product you are taking, do a Taylor expansion, show that second-order and higher contributions disappear in the limit, and then you're done.
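The binary-digit heuristic can be made concrete by enumerating all $2^n$ equally likely outcomes of $(Y_1,\dots,Y_n)$ and looking at the empirical CDF of $X_n$ (my own sketch):

```python
from itertools import product

# X_n takes the values k / 2^n, k = 0 .. 2^n - 1, each exactly once --
# a discrete uniform grid converging in distribution to Uniform[0, 1].
n = 12
vals = [sum(b * 2**-i for i, b in enumerate(bits, start=1))
        for bits in product((0, 1), repeat=n)]
cdf = lambda t: sum(v <= t for v in vals) / len(vals)
print(cdf(0.25), cdf(0.5), cdf(0.9))   # near 0.25, 0.5, 0.9
```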
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Last two digits of $2^{11212}(2^{11213}-1)$
What are the last two digits of the perfect numbers $2^{11212}(2^{11213}-1)$?
I know that if $2^n-1$ is a prime, then $2^{n-1}(2^n-1)$ is a perfect number and that every even perfect number can be written in the form $2^n(2^n-1)$ where $2^n-1$ is prime. I'm not sure how to use this information though.
| We want the remainder when the product is divided by $100$. The remainder on division by $4$ is $0$, so all we need is the remainder on division by $25$.
Note that $\varphi(25)=20$. So by Euler's Theorem, $2^{20}\equiv 1\pmod{25}$. It is easier to note that $2^{10}\equiv -1\pmod{25}$.
It follows that $2^{11212}\equiv -4\pmod{25}$. The same idea shows that $2^{11213}-1\equiv -9\pmod{25}$. Thus our product is congruent to $36$, or equivalently $11$, modulo $25$.
Now solve $x\equiv 0\pmod{4}$, $x\equiv 11\pmod{25}$. The solution is $36$ modulo $100$.
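The arithmetic is easy to confirm with modular exponentiation, since only residues mod $100$ (and mod $25$ for the intermediate steps) matter; my own sketch:

```python
# Direct check of the final answer and of the intermediate congruences.
last_two = (pow(2, 11212, 100) * (pow(2, 11213, 100) - 1)) % 100
print(last_two)   # 36

assert pow(2, 11212, 25) == 25 - 4            # 2^11212 = -4 (mod 25)
assert (pow(2, 11213, 25) - 1) % 25 == 25 - 9  # 2^11213 - 1 = -9 (mod 25)
```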
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Calculus in ordered fields Is there any ordered field smaller than the set of real numbers in which we can do calculus, even if with many restrictions?
If not, why not?
| I remember reading in Körner's book, "A Companion to Analysis", he discusses problems with calculus over $\mathbb{Q}$, and ordered fields it seems. I think there was a problem with continuity over $\mathbb{Q}$ and of course the fact it's not complete. It might be worth a look if you've access to it.
Section 1.3
and in Chapter 1, I think, he deals with ordered fields anyway:
http://books.google.ie/books?id=H3zGTvmtp74C&printsec=frontcover&source=gbs_atb#v=onepage&q&f=false
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Nilpotent matrix in $3$ dimensional vector space This is a part of a long problem and I'm stuck in two questions of it.
Let $E$ a $3$ dimensional $\mathbb R$- vector space and $g\in\mathcal{L}(E)$ such that $g^3=0$. So the first question is to prove that
$$\dim\left(\ker g\cap \mathrm{Im}\ g\right)\leq 1$$
and the second is: assume that $g^2\ne 0$, prove that if $\ker g=\ker g^2$ then $E=\ker g\oplus \mathrm{Im}\ g$ and deduce a contradiction, hence determinate $\dim\ker g$. Thanks for any help.
| For the first part, $\dim(\operatorname{im} g) + \dim(\ker g) = 3$ by rank–nullity.
The possible combinations are $0 + 3$, $1 + 2$, $2 + 1$, $3 + 0$, and in each case the intersection, being a subspace of both, has dimension at most $\min$ of the two, which never exceeds $1$.
For the second part:
\begin{align*}
g^3(x) &= 0,\, \forall x \in E \\
g^2(g(x)) &= 0 ,\, \forall x \in E \\
&\Rightarrow \text{ im } g \subset \ker g^2 \quad (1)
\end{align*}
Assume $\ker g = \ker g^2$ then
\begin{align*}
(1) &\Rightarrow \text{ im } g \subset \ker g \\
&\Rightarrow g(g(x)) = 0,\, \forall x \in E \\
&\Rightarrow g^2 = 0
\end{align*}
This is a contradiction, as $g^2 \neq 0$. $\quad (2)$
\begin{align*}
g^2(x) &= g(g(x)) = 0,\, \forall x \in \ker g \\
&\Rightarrow \ker g \subset \ker g^2 \\
&\Rightarrow \dim (\ker g) \leq \dim (\ker g^2) \\
(2) &\Rightarrow \dim (\ker g) < \dim (\ker g^2) \quad (3)
\end{align*}
Clearly, as $g^2 \neq 0$, we have with (1) and (3):
\begin{align*}
1 &\leq \dim (\ker g) < \dim (\ker g^2) \leq 2 \\
&\Rightarrow \dim (\ker g) = 1
\end{align*}
Addendum 1: Sometimes playing with an example helps to see what's going on. Find the kernel and image of $A$ and $A^2$ in the following example in $\mathbb R^3$.
$A = \begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}$
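For this $A$ one can verify $A^3=0$, $\dim\ker A = 1$, and $\dim\ker A^2 = 2$ by direct computation; a plain-Python sketch of mine (no libraries):

```python
# The suggested example: A is nilpotent of index 3, with ker A = span(e1)
# and ker A^2 = span(e1, e2).
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(X, v):
    return [sum(X[i][k] * v[k] for k in range(3)) for i in range(3)]

A2 = matmul(A, A)
print(matmul(A2, A))          # the zero matrix: A^3 = 0
print(matvec(A, [1, 0, 0]))   # e1 lies in ker A
print(matvec(A2, [0, 1, 0]))  # e2 lies in ker A^2 (and e1 does too)
```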
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that a function does not have a limit as $x\rightarrow \infty $ Problem statement:
Prove that the function $f(x)=\sin x$ does not have a limit as $x\rightarrow \infty $.
Progress:
I want to construct an $\varepsilon$-$\delta$ proof of this, so I first begin by stating what it would mean for the limit to exist:
$\lim_{x\rightarrow \infty}f(x)=l$ if $\forall \varepsilon >0 \exists K>0: x>K \Rightarrow \left | f(x)-l \right |<\varepsilon$
Now we want to prove a negation(correct?), that is:
$\forall \varepsilon >0 \nexists K>0: x>K \Rightarrow \left | \sin x-l \right |<\varepsilon$. But how do I prove that there doesn't exist such a K?
I do know that $\left | \sin x -l \right |$ can be simplified (say using the triangle inequality), but then I am no longer finding a contradiction; I am actually constructing a direct proof.
| In terms of writing the logical negation, it is best to use a universal quantifier before the $x$:
$$
\forall \varepsilon >0 \exists K>0: \forall x>K,\ \left | f(x)-l \right |<\varepsilon.
$$
The logical negation of this statement is
$$
\exists \varepsilon >0 \forall K>0: \exists x>K,\ \left | f(x)-l \right |>\varepsilon.
$$
If you think about what this means, it means that you need to be able to find arbitrarily large $x$ with $|f(x)-l|$ bigger than a specified quantity.
For your concrete example, you can fix $\varepsilon=1/2$, say. Then, given any $K$, as the values of the sine range from $-1$ to $1$ in every interval of length $2\pi$, you can certainly find $x$ with $|\sin(x)-l|>1/2$, whatever number $l$ is.
Once you understand this, in practice what you need to do is find two sequences $\{x_k\}$, $\{y_k\}$, both going to infinity, such that $\{\sin x_k\}$ and $\{\sin y_k\}$ converge to different values.
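Such a pair of sequences is easy to exhibit numerically (my own sketch):

```python
import math

# Two sequences going to infinity along which sin takes different limits,
# so lim_{x -> inf} sin x cannot exist.
xs = [math.pi / 2 + 2 * math.pi * k for k in range(1, 8)]   # sin -> 1
ys = [math.pi * k for k in range(1, 8)]                     # sin -> 0
print([math.sin(x) for x in xs])
print([math.sin(y) for y in ys])
```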
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
a game of coloring edges of graph I have a clique of size 5 which is partially colored (black or white). I have to color the remaining edges so that each triangle has either 1 or 3 black edges. How should I go about coloring the graph, or how can I tell that this is not possible?
| Hint: Given a coloring of the outer cycle of the clique, each edge of the form $v_{i} v_{i+2}$ is uniquely determined.
It then remains to check that triangles of the form $v_i , v_{i+2}, v_{i+4} $ satisfy the conditions.
Hint: Assume that such a coloring is possible. Label the black edges $-1$ and the white edges $1$. Then we know that the product of the labels over each triangle is $-1$. Use this to get conditions on the edges.
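Since $K_5$ has only $2^{10}$ colorings, the full solution set can also be found by brute force, which is a practical way to test whether a given partial coloring extends (my own sketch):

```python
from itertools import combinations, product

# Enumerate all black/white colorings of K5's edges and keep those in
# which every triangle has exactly 1 or 3 black edges (1 = black).
edges = list(combinations(range(5), 2))
triangles = list(combinations(range(5), 3))

def tri_edges(t):
    a, b, c = t
    return [(a, b), (a, c), (b, c)]

valid = []
for colors in product((0, 1), repeat=len(edges)):
    col = dict(zip(edges, colors))
    if all(sum(col[e] for e in tri_edges(t)) in (1, 3) for t in triangles):
        valid.append(colors)

print(len(valid))             # number of valid complete colorings
print((1,) * 10 in valid)     # all-black works: every triangle has 3 black edges
```

To test a partial coloring, filter `valid` for colorings agreeing with it on the pre-colored edges.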
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/505927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that : $\tan 40 + \tan 60 + \tan 80 = \tan 40 \cdot \tan 60 \cdot \tan 80$ I started from the left-hand side as $\sqrt 3 + \tan 2(20) + \tan 4(20)$. But that brought me a lot of terms to solve, ending in $\frac{9\tan 20 - 48\tan^3 20 - 50\tan^5 20 - 16\tan^7 20 + \tan^9 20}{1 - 7\tan^2 20 + 7\tan^4 20 - \tan^6 20}$, which is very tedious to solve. If someone can help me by teaching me how to begin, I would be very grateful.
| Generally, if $\alpha+\beta+\gamma=180^\circ$ then $\tan\alpha+\tan\beta+\tan\gamma=\tan\alpha\tan\beta\tan\gamma$.
That is because
$$
0 = \tan180^\circ = \tan(\alpha+\beta+\gamma) = \text{a certain fraction}.
$$
The numerator in the fraction is $\tan\alpha+\tan\beta+\tan\gamma-\tan\alpha\tan\beta\tan\gamma$.
Just apply the usual formula for the tangent of a sum and you'll get this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Stuck on a particular 2nd order non-homogeneous ODE ($y'' + 4y' + 5y = t^2$)
$y'' + 4y' + 5y = t^2$
So I solve $r^2 + 4r + 5 = 0$, giving $r = -2 \pm i$. So $y_c = e^{-2t}(C_1\cos t + C_2\sin t)$. For $y_p(t)$ I pick $At^2$. So $2A + 8At + 5At^2 = t^2$. I have been stuck on finding a particular solution for 30 minutes now.
My first question: Is there any particular method for finding $y_p(t)$?
Secondly, how to find $y_p(t)$ here?
| You should use method of undetermined coefficients.
Take
$$y_{p}(t)=at^2+bt+c$$
then
$$
y'_{p}(t)=2at+b
$$
$$
y''_{p}(t)=2a
$$
and substitute it into the differential equation
$$2a+8at+4b+5at^2+5bt+5c=t^2$$
from equality of polynomials we have
$5a=1,8a+5b=0,2a+4b+5c=0$
from here we can find $a=\frac{1}{5},b=\frac{-8}{25},c=\frac{22}{125}$.
So
$$
y_{p}(t)=\frac{1}{5}t^2+\frac{-8}{25}t+\frac{22}{125}
$$
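The particular solution is easy to confirm by substituting it back into the ODE at a few points (my own sketch):

```python
# Check that y_p(t) = t^2/5 - 8t/25 + 22/125 satisfies y'' + 4y' + 5y = t^2.
def y(t):   return t**2 / 5 - 8 * t / 25 + 22 / 125
def yp(t):  return 2 * t / 5 - 8 / 25
def ypp(t): return 2 / 5

for t in (0.0, 1.0, -2.5, 10.0):
    residual = ypp(t) + 4 * yp(t) + 5 * y(t) - t**2
    assert abs(residual) < 1e-9
print("y_p satisfies the ODE")
```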
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Divisibility involving root of unity Let $p$ be a prime number and $\omega$ be a $p$-th root of unity. Suppose $a_0,a_1, \dots, a_{p-1}, b_0, b_1, \dots, b_{p-1}$ be integers such that $a_0 \omega^0+a_1 \omega^1+ \dots a_{p-1} \omega^{p-1}$ and $b_0 \omega^0 + b_1 \omega^1 + \dots b_{p-1} \omega^{p-1}$ are also integers.
Prove that $(a_0 \omega^0+a_1 \omega^1+ \dots a_{p-1} \omega^{p-1})-(b_0 \omega^0 + b_1 \omega^1 + \dots b_{p-1} \omega^{p-1})$ is divisible by $p$ if and only if $p$ divides all of $a_0-b_0$, $a_1-b_1$, $\dots$, $a_{p-1}-b_{p-1}$.
| Irreducible polynomial over $\mathbb Z[X]$
Consider the factorization $X^p-1=(X-1)\Phi_p$ where $\Phi_p$ is the cyclotomic polynomial defined to contain all primitive $p$'th roots of unity over $\mathbb C$. Dividing both sides by $(X-1)$ we get
$$
\Phi_p=X^{p-1}+X^{p-2}+...+1
$$
Since all cyclotomic polynomials are irreducible over $\mathbb Q[X]$ they are in particular irreducible over $\mathbb Z[X]$. So given $\omega\neq 1$ that is a $p$'th root of unity we then know that $\omega$ is a root of $\Phi_p$ and that if $\omega$ is a root of some other polynomial $f\in\mathbb Z[X]$ then $\Phi_p$ divides $f$.
The coefficients of each subexpression
Now if $a_0\omega^0+a_1\omega^1+...+a_{p-1}\omega^{p-1}=k$ is an integer then $\omega$ is a root of
$$
f:=(a_0-k)+a_1 X+...+a_{p-1} X^{p-1}\in\mathbb Z[X]
$$
so $\Phi_p$ divides $f$ in $\mathbb Z[X]$. Hence there must exist $s\in\mathbb Z$ so that $f=s\cdot\Phi_p$ showing that
$$
a_0-k=a_1=...=a_{p-1}=s
$$
A similar argument shows that we must have some $t\in\mathbb Z$ so that $b_0-m=b_1=...=b_{p-1}=t$ in order to have $b_0\omega^0+b_1\omega^1+...+b_{p-1}\omega^{p-1}=m\in\mathbb Z$.
The coefficients of the difference
With the above we see that
$$
\begin{align}
&(a_0\omega^0+a_1\omega^1+...+a_{p-1}\omega^{p-1})-(b_0\omega^0+b_1\omega^1+...+b_{p-1}\omega^{p-1})\\
&=(s+k-(t+m))\omega^0+(s-t)\omega^1+...+(s-t)\omega^{p-1}\\
&=k-m
\end{align}
$$
but this actually suggests that the statement we are trying to prove is wrong in the first place, for this holds regardless of $s$ and $t$, so you can always choose $k-m$ divisible by $p$ without having $a_i-b_i\equiv s-t\equiv 0$ mod $p$. This seems to me a contradiction to the statement we are trying to prove! Please correct me if I am mistaken.
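A concrete numerical instance of this objection (my own example, with $p=3$): take $a=(4,1,1)$ and $b=(0,0,0)$; the two integer values differ by $3$, which $p$ divides, yet $a_0-b_0=4$ is not divisible by $3$.

```python
import cmath

p = 3
w = cmath.exp(2j * cmath.pi / p)     # a primitive p-th root of unity

a = (4, 1, 1)                        # value: 4 + w + w^2 = 3
b = (0, 0, 0)                        # value: 0
val_a = sum(c * w**i for i, c in enumerate(a))
val_b = sum(c * w**i for i, c in enumerate(b))

print(val_a.real, val_b.real)        # 3 and 0, up to rounding
# The difference 3 - 0 is divisible by p = 3, yet a_0 - b_0 = 4 is not.
```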
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is there a finite number of ways to change the operation in a ring and retain a ring? Consider, for instance, the ring $(\mathbb{Z}, +, \cdot)$. We can create a new ring $(\mathbb{Z}, \oplus, \odot)$ by defining
$$ a \oplus b = a + b - 1 \quad \text{and} \quad a \odot b = a + b - ab. $$
There are also other operations that we could use as 'addition' and 'multiplication' (for example we could define $a \oplus b = a + b - 1$ and $a \odot b = ab - (a + b) + 2$, etc.)
My question is: is there a finite number of different operations $\oplus$ and $\odot$ that we could define on $\mathbb{Z}$ and retain the ring structure, or are there infinitely many such operations? what about on $\mathbb{R}$? what about on $\mathbb{Z} / n \mathbb{Z}$?
| On $\Bbb {Z/nZ}$ there must be finitely many as there are only finitely many operations on a finite set.
Your examples arise by transporting the ring structure of $\Bbb Z$ along a bijection (replacing each element $x$ with a shifted copy) and seeing what happens to the operations. The shift is arbitrary, so you have an infinite number of choices on $\Bbb {Z, R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Differentiability of $\sin ^2(x+y)+i\cos ^2(x+y)$ I want to find the set of all points $z\in\mathbb{C}$ such that $f:\mathbb{C}\to\mathbb{C}$ given by $f(x+iy)=\sin ^2(x+y)+i\cos ^2(x+y)$ is differentiable at $z$.
Isn't it true that $f$ is differentiable at all points of $\mathbb{C}$? Because $\sin$ and $\cos$ are differentiable... Am I wrong?
Thanks.
| The total differential is straightforward:
$$ df(z) = 2 \sin(x+y) \cos(x+y) (dx + dy) - 2 i \cos(x+y) \sin(x+y) (dx + dy) $$
Also,
$$ dx = \frac{dz + d\bar{z}}{2} \qquad \qquad dy = \frac{dz - d\bar{z}}{2i} $$
Checking differentiability is the same thing as checking that the coefficient on $d\bar{z}$ is zero, if you write it in the $\{ dz, d\bar{z} \}$ basis. Equivalently, you set $dz=0$ (so that $dx + i dy = 0$) and check if the total differential is zero.
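In the equivalent Cauchy–Riemann formulation, one computes $u_x = u_y = \sin(2(x+y))$ and $v_x = v_y = -\sin(2(x+y))$ for $u = \sin^2(x+y)$, $v = \cos^2(x+y)$; then $u_x = v_y$ forces $\sin(2(x+y)) = 0$, i.e. differentiability only on the lines $x + y = k\pi/2$. A quick check of mine:

```python
import math

# Cauchy-Riemann defect |u_x - v_y| + |u_y + v_x| for f = u + iv.
def cr_defect(x, y):
    s = math.sin(2 * (x + y))
    u_x, u_y, v_x, v_y = s, s, -s, -s
    return abs(u_x - v_y) + abs(u_y + v_x)

print(cr_defect(0.3, 0.4))                 # nonzero: not differentiable here
print(cr_defect(0.3, math.pi / 2 - 0.3))   # zero: on a line x + y = pi/2
```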
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Odds of choosing n members of an x-sized subset of a y-sized set. Let's say I have a bag of 8 rocks. 3 rocks are red, the rest are black. I choose, randomly, with replacement, 3 rocks. What are the odds that at least one that I choose is red?
It seems at first pass that the odds, if I draw just one time, are 3/8. If I draw twice, would the odds be 6/8? It doesn't seem so, since drawing 3 times would by that reasoning give 9/8 > 1.0, so somewhere I've gone wrong.
To expand to the general case, how do I determine the odds of choosing n members of an x-sized subset of a y-sized set, if I draw z times with replacement? The real-world problem I'm really trying to solve involves figures on the order of thousands.
Apologies if some of my terminology is off. Been a few years since I took discrete...
| Hint: Consider the complement. What is the probability that all rocks are black?
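Spelling the hint out numerically for the rock example (my own sketch; the function name is made up):

```python
from fractions import Fraction

def at_least_one(subset, total, draws):
    # P(at least one hit in `draws` draws with replacement)
    # = 1 - P(every draw misses) = 1 - ((total - subset) / total)^draws
    return 1 - Fraction(total - subset, total)**draws

print(at_least_one(3, 8, 1))   # 3/8
print(at_least_one(3, 8, 3))   # 387/512 -- not 9/8
```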
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Alternative Creative Proofs that $A_4$ has no subgroups of order 6 Since I've been so immersed in group theory this semester, I have decided to focus on a certain curious fact: $A_4$ has no subgroups of order $6$.
While I know how to prove this statement, I am interested in seeing what you guys can offer in terms of unique and creative proofs of this statement!
Proofs without words would be interesting too, although I'm not sure that is possible.
| So suppose we have a group $G$ of order $12$ with a normal subgroup $H$ of order $4$ and a subgroup $K$ of order $6$; $K$ has index $2$, so it is normal. Then $H\cap K$ is a normal subgroup. It has order $2$ (if $H\cap K=1$, then $|HK|=|H||K|=24>|G|$). Thus $|G/(H\cap K)|=6$, and it has a unique Sylow $3$-subgroup, so it is not $S_3$. The only alternative is that it is cyclic. Because a quotient of $G$ contains an element of order $6$, so does $G$. But $A_4$ has no element of order $6$, so it has no subgroup of order $6$.
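For what it's worth, both facts — no element of order $6$ and no order-$6$ subgroup — are small enough to verify by exhaustive search (my own sketch):

```python
from itertools import combinations, permutations

# Build A4 as the even permutations of {0, 1, 2, 3}.
def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

A4 = [p for p in permutations(range(4)) if parity(p) == 0]
compose = lambda p, q: tuple(p[q[i]] for i in range(4))

def order(p):
    e, k, q = tuple(range(4)), 1, p
    while q != e:
        q, k = compose(p, q), k + 1
    return k

# Element orders in A4 are only 1, 2, 3.
print({order(p) for p in A4})

# A 6-element subset containing the identity that is closed under
# composition would be a subgroup of order 6; none exists.
e = tuple(range(4))
others = [p for p in A4 if p != e]
closed = [s for s in combinations(others, 5)
          if all(compose(a, b) in set(s) | {e} for a in s for b in s)]
print(len(closed))   # 0: no subgroup of order 6
```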
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Examples of uncountable subgroups, but with countably many cosets. My question is pretty much as stated in the title: what examples are there (or does there exist) uncountable subgroups of a group but which have countably-infinite many cosets. I can only think of examples in the other direction (countable subgroups giving uncountably-many cosets).
| The uncountable subgroup of $\Bbb Z^{[0,1]}$ of maps $[0,1]\to\Bbb Z$ vanishing at $0$ has index $|\Bbb Z|=\aleph_0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to find the range of $\sqrt {x^2-5x+4}$ where x is real. How to find the range of $$\sqrt {x^2-5x+4}$$ where $x$ is real.
What I've tried:
Let $\sqrt{x^2-5x+4}=y$ and solve for real $x$; as $x$ is real, the discriminant must satisfy $D\geq0$. Solving, I get $y^2\geq\frac{-9}{4}$, which I suppose implies all real $y$. But on Wolfram Alpha it says $y\geq0$. What am I doing wrong?
Any help appreciated.
Thank you.
| Note that the domain of the function is $(-\infty,1]\cup [4, \infty)$, since
$$ y=\sqrt{(x-1)(x-4)}, $$
which tells you that $y\geq 0$
Note: Compare with the function
$$y=\sqrt{x}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How is it possible that $\infty!=\sqrt{2\pi}$? I read from here that:
$$\infty!=\sqrt{2\pi}$$
How is this possible ?
$$\infty!=1\times2\times3\times4\times5\times\ldots$$
But
\begin{align}
1&=1\\
1\times2&=2\\
1\times2\times3&=6\\
&~\vdots\\
1\times2\times3\times\ldots\times50&=3.0414093201713376\times10^{64}
\end{align}
This is obviously increasing, so how does factorial of $\infty$ become $\sqrt{2\pi}$ ?
| It is taken from
$$ 1\cdot2\cdot3\cdot \ldots \cdot n= n!$$
This is the exponential of
$$ \ln(1)+\ln(2)+\ln(3)+ \ldots + \ln(n) = \ln(n!) $$
Now if you write formally the derivative of the Dirichlet-series for zeta then you have
$$ \zeta'(s) = {\ln(1) \over 1^s}+{\ln(1/2) \over 2^s} +{\ln(1/3) \over 3^s} + \ldots $$
This converges for some $s$, and from there it can be analytically continued to $s=0$ as well, where the formal expression reduces to
$$ \zeta'(0) = -(\ln(1) +\ln(2) +\ln(3) + \ldots )$$
which is then formally identical to $ - \lim_{n \to \infty} \ln(n!)$ .
Now the slope of zeta at zero can numerically be approximated and gives a concrete number $\zeta'(0) \approx -0.91893...$. It can also analytically be determined to equal $\ln(1/\sqrt{2\pi})$ .
Finally, since the formal expressions coincide (except for the sign), one takes the exponential of $-\zeta'(0)$ as the "regularized" value of the infinite factorial: $e^{-\zeta'(0)}=\sqrt{2\pi}$.
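The constant $-\zeta'(0)=\tfrac12\ln(2\pi)\approx0.918938$ is the same constant that appears in Stirling's formula $\ln(n!)\approx n\ln n-n+\tfrac12\ln(2\pi n)$, which can be checked numerically with the standard library (`lgamma(n+1)` computes $\ln(n!)$):

```python
import math

for n in (10, 100, 1000):
    # constant term of Stirling's approximation for ln(n!)
    c = math.lgamma(n + 1) - (n * math.log(n) - n + 0.5 * math.log(n))
    print(n, c)  # tends to 0.918938...
print(0.5 * math.log(2 * math.pi))  # 0.918938...
```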
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Support of Convolution and Smoothing
I just want to know how it follows that $v^{\epsilon} \in C^{\infty}(\bar{V})$? I could see how $v^{\epsilon} \in C^{\infty}(V)$ by using the translations, but I'm having difficulty seeing how it extends to $\bar{V}$, since it says that $u_{\epsilon}(x) := u(x^{\epsilon}) \text{ for } x \text{ in } V$, we are mollifying on $V$. I also don't think we can use the same type of extension as in the previous question I asked, since previously we extended a function that had already become zero to zero on the rest of the domain. Do you have any idea of how this follows? Thanks for the assistance.
| $\DeclareMathOperator \supp {supp}$For functions on $\mathbb R^n$ we have in general $\supp (f*g) \subset \supp f+\supp g$, where $A+B = \{a + b \mid a \in A, b \in B\}$. To see this note that $x$ must be in this set for $f(x-y)g(y)$ to be non-zero.
Thus we have $\supp f^\epsilon \subset \supp f + B(0,\epsilon) = \{x \mid d(x,\supp f)<\epsilon\}$, so $\bigcap_{\epsilon > 0} \supp f^\epsilon \subset \supp f$ (which I guess is what you were trying to get at with your "limit").
Intuitively we expect $\supp f \subset \supp f^\epsilon$, and this is true if $\eta(0)>0, \eta\ge 0, f \ge 0$. (I'm assuming $\eta$ is at least continuous here; usually we take it to be smooth.) In general, however, I do not think this always holds.
The only time we would have $\supp f ^\epsilon \subset \supp f$ is when $\supp f$ is the whole space: there will always be some "leakage" of $f^\epsilon$ over the boundary of $\supp f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expressing the trace of $A^2$ by trace of $A$ Let $A$ be a a square matrix. Is it possible
to express $\operatorname{trace}(A^2)$ by means of $\operatorname{trace}(A)$ ? or at least
something close?
| It is not. Consider two $2\times 2$ diagonal matrices, one with diagonal $\{1,-1\}$ and one with diagonal $\{0,0\}$. They have the same trace, but their squares have different traces.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/506962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 1
} |
How to integrate $\int \frac{1}{\sin^4(x)\cos^4(x)}\,\mathrm dx$? How can I integrate $$\int \frac{1}{\sin^4(x)\cos^4(x)}\,\mathrm dx.$$
So I know that for this one we have to use a trigonometric identity or a substitution. Integration by parts is probably not going to help. Can someone please point out what should I do to evaluate this integral?
Thanks!
| Hint: Substitute $t=\tan x$.
Then $\frac{dx}{\sin^4 x \cos^4 x} = \frac{(1+t^2)^3}{t^4}dt$.
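A numerical spot-check of that transformation (with $dt=(1+t^2)\,dx$, the pointwise claim is $\frac{1}{\sin^4x\cos^4x}=\frac{(1+t^2)^3}{t^4}\,(1+t^2)$):

```python
import math

for x in (0.3, 0.7, 1.2):
    t = math.tan(x)
    lhs = 1 / (math.sin(x) ** 4 * math.cos(x) ** 4)
    rhs = (1 + t * t) ** 3 / t ** 4 * (1 + t * t)  # dt/dx = 1 + t^2
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
print("substituted integrand matches")
```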
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
A wrong proof that the kernel and image are always complementary Let $E$ be a vector space, $f\colon E\rightarrow E$ an endomorphism. Let $A=\ker(f)\oplus \operatorname{im}(f)$; that is $$A=\{x\in E\;|\; \text{there exists a unique}\; (a,b)\in \ker(f)\times\operatorname{im}(f), x=a+b\}.$$ I have a proof that $A=E$ which clearly need not be true, but i don't see what went wrong in my proof. thanks for your help!
Proof. $A$ is obviously a subspace of $E$. Moreover $$\dim(A)=\dim(\ker(f)\oplus \operatorname{im}(f))=\dim(\ker(f))+\dim(\operatorname{im}(f))$$ hence by the rank–nullity theorem $$\dim(A)=\dim(E)$$ hence $$A=E.$$
| The direct sum of two subspaces of $E$ is an abstract vector space that has a canonical map to $E$, but that map can fail to be injective, and it will precisely when the two subspaces have non-zero intersection. So your mistake is in assuming that the direct sum is a subspace of $E$.
More precisely, if $W_1,W_2$ are subspaces of $E$, then there is $W_1+W_2$, the sum of the two subspaces, which is by definition the subset of $E$ consisting of all sums $w_1+w_2$ with $w_1\in W_1$ and $w_2\in W_2$. This is a subspace of $E$. Then there is the direct sum, $W_1\oplus W_2$, which is, by definition, the set $W_1\times W_2$ with scalar multiplication done component-wise. There is a linear map $W_1\oplus W_2\rightarrow E$ given by $(w_1,w_2)\mapsto w_1+w_2$, so the image is $W_1+W_2\subseteq E$, but there could be a kernel. Namely, if $w$ is a non-zero element of $W_1\cap W_2$, then $(w,-w)\in W_1\oplus W_2$ is non-zero and maps to $w-w=0$. Conversely, if $(w_1,w_2)\mapsto 0$, then $w_1=-w_2\in W_1\cap W_2$. So the induced surjective linear map $W_1\oplus W_2\rightarrow W_1+W_2$ will be an isomorphism if and only if $W_1\cap W_2=\{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Proving convergence Using the definition Use the definition of convergence to prove if $x_n$ converges to $5$, then
$\frac{x_n+1}{\sqrt{x_n-1}}$ converges to $3$.
| There is an almost automatic but not entirely pleasant procedure. Let $y_n=\frac{x_n+1}{(x_n-1)^{1/2}}$. We will need to examine $|y_n-3|$.
Note that
$$y_n-3=\frac{x_n+1}{(x_n-1)^{1/2}}-3=\frac{(x_n+1)-3(x_n-1)^{1/2}}{(x_n-1)^{1/2}}.$$
Multiply top and bottom by $(x_n+1)+3(x_n-1)^{1/2}$. This is the usual "rationalizing the numerator" trick.
After some manipulation we get
$$y_n-3=\frac{(x_n-2)(x_n-5)}{(x_n-1)^{1/2}((x_n+1)+3(x_n-1)^{1/2} )}.\tag{1}$$
Now we need to make some estimates. Ultimately, given $\epsilon\gt 0$, we will need to choose $N$ so that if $n\ge N$ then $|y_n-3|\lt \epsilon$.
Observe first that there is an $N_1$ such that if $n\ge N_1$ then $|x_n-5|\lt 1$. That means that if $n\ge N_1$, we have $4\lt x_n\lt 6$.
In particular, $(x_n-1)^{1/2}\gt \sqrt{3}\gt 1$, and $(x_n+1)+3(x_n-1)^{1/2}\gt 8$. Also, $|x_n-2|\lt 4$. It follows from Equation (1) that if $n\ge N_1$, then
$$|y_n-3| \lt \frac{4}{(1)(8)}|x_n-5|.$$
By the fact that the sequence $(x_n)$ converges, there is an $N_2$ such that if $n\ge N_2$ then $|x_n-5|\lt \frac{(1)(8)}{4}\epsilon$.
Finally, let $N=\max(N_1,N_2)$. If $n\ge N$, then, putting things together, we find that
$$|y_n-3|\lt \epsilon.$$
Remark: We were using only the definition of convergence. If we relax and use some theorems, all we need to say is that the function $\frac{x+1}{(x-1)^{1/2}}$ is continuous at $x=5$.
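The inequality derived above, $|y_n-3|\lt\frac{4}{8}|x_n-5|$ whenever $4\lt x_n\lt 6$, is easy to spot-check numerically:

```python
import math

def y(x):
    return (x + 1) / math.sqrt(x - 1)

assert abs(y(5) - 3) < 1e-12       # the limit value at x = 5
for k in range(1, 20):             # sample x over (4, 6)
    x = 4 + k / 10
    assert abs(y(x) - 3) <= 0.5 * abs(x - 5) + 1e-12
print("bound |y - 3| <= (1/2)|x - 5| holds on the sample")
```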
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expected shortest path in random graph Consider all connected graphs with $n$ verticies where each vertex connects to $k$ other verticies. We choose such a graph at random. What is the expected value of the shortest path between two random points? What is the expected value of the maximal shortest path?
If an exact solution would be too complicated, I would also appreciate an approximate solution for $k<<n$.
This seems like a hard problem. For $k=2$ you can do exact calculations because the graph is a single cycle. For a random $k$-connected graph where $ k << n$ and $k > 2$, you can argue that for a given starting vertex $v$, there will be an expected $\Theta(k(k-1)^{m-1})$ vertices $w$ that will have a shortest path of length $m$ from $v$ to $w$ when $k$ and $m$ are small, so you should get something like $\log_{k-1} n$ or very close for the expected value of the shortest path length between two random points when $k << n$. Arguing for the expected length of the maximal shortest path seems trickier, but I would be surprised if it wasn't also $O(\log_k n)$ for fixed $k > 2$ when $n$ is large.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Question about Fermat's Theorem I'm trying to find $2^{25} \mod 21 $. By Fermat's theorem, $2^{20} \cong_{21} 1 $. Therefore, $2^{25} = 2^{20}2^{5} \cong_{21} 2^5 = 32 \cong_{21} 11 $. However, the answer in my book is $2$! What am I doing wrong?
Also, I would like to ask what are the last two digits of $1 + 7^{162} + 5^{121} \times 3^{312} $
Thanks for your help.
| By Euler's Theorem, we have $2^{12}\equiv 1\pmod{21}$. That is because $\varphi(21)=(2)(6)=12$. Thus $2^{25}=2^{12\cdot 2}\cdot 2^1\equiv 2\pmod{21}$.
If we want to use Fermat's Theorem, we work separately modulo $3$ and modulo $7$.
We have $2^2\equiv 1\pmod{3}$, and therefore $2^{25}=2^{2\cdot 12}\cdot 2^1\equiv 2\pmod{3}$.
Similarly, $2^6\equiv 1\pmod{7}$ and therefore $2^{25}\equiv 2\pmod{7}$.
It follows that $2^{25}\equiv 2\pmod{21}$.
Added: For the last two digits of $1+7^{162}+(5^{121})(3^{312})$, it will be enough to evaluate modulo $4$ and modulo $25$.
Modulo $4$: We have $7\equiv -1\pmod{4}$, so $7^{162}\equiv 1\pmod{4}$. Similarly, $5^{121}\equiv 1\pmod{4}$ and $3^{312}\equiv 1\pmod{4}$. Adding up, we get the sum is $\equiv 3\pmod{4}$.
Modulo $25$: We don't have to worry about the messy last term. Note that $7^2\equiv -1\pmod{25}$, so $7^{160}\equiv 1\pmod{25}$. Thus $7^{162}\equiv -1\pmod{25}$. Thus our sum is congruent to $0$ modulo $25$.
Finally, we want the multiple of $25$ between $0$ and $75$ which is congruent to $3$ modulo $4$. A quick scan shows the answer is $75$.
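Both results are quick to confirm with Python's three-argument `pow` (modular exponentiation):

```python
print(pow(2, 25, 21))  # 2, matching the book's answer

# last two digits of 1 + 7^162 + 5^121 * 3^312
last_two = (1 + pow(7, 162, 100) + pow(5, 121, 100) * pow(3, 312, 100)) % 100
print(last_two)  # 75
```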
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
For any finite group, there is a homomorphism whose image is simple This is for homework. The question asks
"Show that, for any finite group $G$, there is a homomorphism $f$ such that $f(G)$ is simple."
My thought was this. Since $G$ is finite, there are only a finite number of normal subgroups of $G$, call them $N_1, \dotsc, N_k$. Now consider the canonical homomorphism $\pi : G \to G / N_1 \times \dotsb \times N_k$. Call $G / N_1 \times \dotsb \times N_k = Q$. I have shown in a previous exercise that $H$ is a normal subgroup of $G$ if and only if $\pi(H)$ is a normal subgroup of $Q$. So, say there is some normal subgroup $N$ of $Q$. By that exercise, this implies $\pi^{-1}(N)$ is a normal subgroup of $G$, and thus $\pi^{-1}(N) = N_i$ for some $i$. Now $N = \pi(N_i)$. But isn't $\pi(N_i)$ a trivial subgroup of $Q$? Am I done at this point?
| If $G$ is already simple then any isomorphism will do, so assume $G$ is not simple. Because $G$ is finite, there must exist a maximal nontrivial proper normal subgroup $N$, meaning $N \neq 1$ and whenever
$N\leq H \leq G$ with $H$ normal in $G$ then $H=G$ or $H=N$. Now show that $G/N$ must be simple.
The problem with your argument is that there is nothing stopping $Q$ being trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How to prove $(1+x)^n\geq 1+nx+\frac{n(n-1)}{2}x^2$ for all $x\geq 0$ and $n\geq 1$? I've got most of the inductive work done but I'm stuck near the very end. I'm not so great with using induction when inequalities are involved, so I have no idea how to get what I need...
\begin{align}
(1+x)^{k+1}&\geq (1+x)\left[1+kx+\frac{k(k-1)}{2}x^2\right]\\
&=1+kx+\frac{k(k-1)}{2}x^2+x+kx^2+\frac{k(k-1)}{2}x^3\\
&=1+(k+1)x+kx^2+\frac{k(k-1)}{2}x^2+\frac{k(k-1)}{2}x^3
\end{align}
And here's where I have no clue how to continue. I thought of factoring out $kx^2$ from the remaining three terms, but I don't see how that can get me anywhere.
| If $n=1$, it is trivial. Suppose it is true for $n$. We will show that this formula is true for $n+1$.
$$
\begin{aligned}
(1+x)^{n+1}&=(1+x)^n(1+x)\\
&\ge \left( 1+nx+\frac{n(n-1)}{2} x^2 \right)(1+x)\\
&= 1+nx+\frac{n(n-1)}{2}x^2 +x+nx^2 +\frac{n(n-1)}{2} x^3\\
&= 1+(n+1)x+\frac{n^2-n+2n}{2}x^2+\frac{n(n-1)}{2} x^3\\
&= 1+(n+1)x +\frac{n^2+n}{2}x^2+\frac{n(n-1)}{2} x^3\\
&\ge 1+(n+1)x +\frac{n(n+1)}{2}x^2\\
&= 1+(n+1)x +\frac{(n+1)(n+1-1)}{2}x^2
\end{aligned}
$$
so we get desired result.
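A quick exact spot-check of the inequality with rational arithmetic (no substitute for the induction, of course):

```python
from fractions import Fraction

for n in range(1, 20):
    for num in range(0, 30):  # x = 0.0, 0.1, ..., 2.9
        x = Fraction(num, 10)
        assert (1 + x) ** n >= 1 + n * x + Fraction(n * (n - 1), 2) * x ** 2
print("inequality holds on the sampled grid")
```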
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
A probability problem on limit theorem (!) Each of the 300 workers of a factory takes his lunch in one of the three competing restaurants (equally likely, so with probability $1/3$). How many seats should each restaurants have so that, on average, at most one in 20 customers will remain unseated?
How can I approach this?
It is not clear to me why it is given as an exercise in the limit theorems chapter.
| That this is an exercise on the limit theorems chapter is actually a clue. Let $n=300$ denote the number of workers, $s$ the number of seats in each restaurant and $X$ the number of workers trying to get a seat in restaurant 1. Then $X-s$ customers remain unseated at restaurant 1 if $X\gt s$, and none if $X\leqslant s$ hence the mean number of customers who remain unseated at restaurant 1 is $E[X-s;X\gt s]$. By symmetry, the mean number of customers who remain unseated at any of the three restaurants is $m=3\cdot E[X-s;X\gt s]$ and one asks for $s$ such that $m\leqslant20$.
The distribution of $X$ is binomial $(n,\frac13)$ hence its mean is $\frac13n$ and its variance is $\frac29n$. If $n$ is large enough for the central limit theorem to apply, then
$$
X=\tfrac13n+\tfrac13\sqrt{2n}\cdot Y,
$$
where $Y$ is roughly standard normal. Defining
$$
s=\tfrac13n+\tfrac13\sqrt{2n}\cdot r,
$$ one gets $m=\sqrt{2n}\cdot E[Y-r;Y\gt r]$. Since $Y$ is approximately standard normal, for every $r$,
$$
E[Y-r;Y\gt r]\approx\varphi(r)-r(1-\Phi(r)),
$$
hence one solves numerically $\varphi(r)-r(1-\Phi(r))=\frac{20}{\sqrt{2n}}=\frac13\sqrt6$. The root is $r^*\approx-0.665$, which yields $s^*=\frac13n+\frac13\sqrt{2n}\cdot r^*\approx100-\frac{20}9\sqrt6\approx94.56$.
Finally, if the gaussian approximation applies, each restaurant should have at least $95$ seats.
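The numerical root-finding can be reproduced with only the standard library ($\Phi$ via `math.erf`); a sketch:

```python
import math

def phi(r):   # standard normal density
    return math.exp(-r * r / 2) / math.sqrt(2 * math.pi)

def Phi(r):   # standard normal cdf
    return 0.5 * (1 + math.erf(r / math.sqrt(2)))

def g(r):     # E[Y - r; Y > r] for standard normal Y, decreasing in r
    return phi(r) - r * (1 - Phi(r))

target = 20 / math.sqrt(600)   # 20 / sqrt(2n) with n = 300
lo, hi = -2.0, 0.0             # g(-2) > target > g(0)
for _ in range(60):            # bisection
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > target else (lo, mid)
r_star = (lo + hi) / 2
s_star = 300 / 3 + math.sqrt(600) / 3 * r_star
print(r_star, s_star)  # r* ~ -0.665, s* ~ 94.6, so 95 seats
```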
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to calculate a Bell number (Bell[n] mod 95041567) quickly enough? How to calculate a Bell number (Bell[n] mod 95041567) quickly enough?
Here $n$ may be as big as $2^{31}$.
Bell number is http://oeis.org/A000110
| Wikipedia (https://en.wikipedia.org/wiki/Bell_number) states Touchard's congruence for a prime power $p^m$ as
$$B_{p^m + n} = m B_n + B_{n+1} \pmod{p}$$
Using this and the Chinese remainder theorem you should be able to calculate the numbers pretty quickly.
More details: One straightforward way is to take $m=1$ in the congruence. Then we are left with the linear recurrence relation $B_{p+n} = B_n + B_{n+1}$. This can be solved efficiently by transforming it into a matrix and using exponentiation by squaring (https://en.wikipedia.org/wiki/Exponentiation_by_squaring). For example: Let $p=5$, then
$$\begin{pmatrix} B_{n+1} \\ B_{n+2} \\ B_{n+3} \\ B_{n+4} \\ B_{n+5} \end{pmatrix}=\begin{pmatrix} 0 & 1 & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0& 1 & 0 \\0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} B_n \\ B_{n+1} \\ B_{n+2} \\ B_{n+3} \\ B_{n+4} \end{pmatrix}.$$
To calculate $B_n \pmod{5}$ for some big $n$ just raise the matrix in the middle to the $n$th power, multiply by $\begin{pmatrix} B_0 & B_1 & B_2 & B_3 & B_4 \end{pmatrix}^T$ and look at the first entry in the resulting vector.
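A sketch of this for $p=5$ (handling the full modulus $95041567$ would additionally need its prime factorization and the CRT, which I leave aside):

```python
def mat_mult(A, B, p):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def mat_pow(M, e, p):  # exponentiation by squaring
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mult(R, M, p)
        M = mat_mult(M, M, p)
        e >>= 1
    return R

def bell_mod5(n):
    # companion matrix of the recurrence B_{k+5} = B_k + B_{k+1} (mod 5)
    M = [[0, 1, 0, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 0, 0, 1],
         [1, 1, 0, 0, 0]]
    v = [1, 1, 2, 5 % 5, 15 % 5]  # B_0 .. B_4 reduced mod 5
    R = mat_pow(M, n, 5)
    return sum(R[0][j] * v[j] for j in range(5)) % 5

print([bell_mod5(n) for n in range(8)])  # [1, 1, 2, 0, 0, 2, 3, 2]
```

Since the matrix power uses squaring, `bell_mod5(2**31)` costs only $O(\log n)$ matrix multiplications.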
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing an integral is unbounded Let $$F(x)=x^2\sin\left(\frac{1}{x^2}\right)\quad\forall x\in(0,1]$$ and $$G(x)=F'(x)=2x\sin\left(\frac{1}{x^2}\right)-2\frac{1}{x}\cos\left(\frac{1}{x^2}\right)\quad\forall (0,1].$$ I want to show that $$\lim_{t\searrow0}\int_t^1\left|G(x)\right|\,\mathrm{d}x=+\infty.$$
I've had a hard time proving this, given that the values of $|G(x)|$ oscillate between $0$ and an ever increasing number at an ever increasing frequency as $x$ approaches zero, making it hard to bound the integral from below to establish that it is unbounded.
Any hints would be greatly appreciated.
| The $2x\sin\left(\dfrac{1}{x^2}\right)$ part is bounded, and can therefore be ignored when investigating the convergence or divergence of the integral.
So let's look at the integral
$$\int\limits_{a_{k+1}}^{a_k} \left\lvert \frac{2}{x}\cos \left(\frac{1}{x^2}\right)\right\rvert\, dx$$
where the $a_k$ are the successive zeros of $\cos (1/x^2)$, $a_k = 1/\sqrt{(k-\frac12)\pi}$.
We can estimate it from below by shrinking the interval, so that $\cos (1/x^2)$ stays away from $0$, say
$$I_k = \int\limits_{b_k}^{c_k} \left\lvert\frac{2}{x}\cos\left(\frac{1}{x^2}\right) \right\rvert\,dx$$
with
$$b_k = \frac{1}{\sqrt{(k+\frac13)\pi}},\quad c_k = \frac{1}{\sqrt{(k-\frac13)\pi}},$$
so that $\lvert\cos (1/x^2)\rvert \geqslant \frac12$ for $b_k\leqslant x \leqslant c_k$. That yields
$$I_k \geqslant \frac{1}{c_k}\left(c_k - b_k\right).$$
For $k$ sufficiently large, you have constants independent of $k$ such that
$$\frac{1}{c_k} \geqslant A\sqrt{k},\quad c_k - b_k \geqslant \frac{B}{k\sqrt{k}},$$
so $\sum_{k\geqslant K} I_k$ is bounded below by a harmonic series for some $K$ sufficiently large, hence
$$\lim_{t\to 0} \int_t^1 \lvert G(x)\rvert\,dx = +\infty.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using the definition of the derivative to prove a constant function I am presented with the following task:
"Let $f$ be a function defined on the interval $I$. All we know about $f$ is that there is a constant $K$ such that
$$|f(a) - f(b)| \leq K|a-b|^2$$
for all $a, b \in I$. Show that $f$ is constant on $I.$ (Hint: calculate the derivative using the definition of the derivative first.)"
I am utterly confused. The Mean-Value Theorem crosses my mind while looking at the equation. I believe that I am supposed to somehow prove that $f(a) - f(b) = 0$ for all $a, b$, thus proving that the function is constant on $I$, but I'm having a hard time seeing how to make any progress in that direction.
| Suppose for example that $0\in I$ and $1\in I$.
Then
$$
|f(1)-f(0)|=\bigg|\sum_{k=0}^{n-1} f(\frac{k+1}{n})-f(\frac{k}{n})\bigg|
\leq \sum_{k=0}^{n-1} \big|f(\frac{k+1}{n})-f(\frac{k}{n})\big|
\leq \sum_{k=0}^{n-1} K\frac{1}{n^2}=\frac{K}{n}
$$
Since this holds for any $n$, we deduce $f(1)=f(0)$.
You can do this for any pair other than $(0,1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to prove the limit of a sequence using "$\epsilon-N$" I think I have a proper understanding of the general procedure, but I'm having difficulty manipulating my inequality so that I can isolate $n$ by itself. Sadly I wasn't given many examples to model my answer on.
Prove that $\displaystyle\lim_{n\to\infty}\frac{n+1}{n^2+1}=0$
So I'm given $L=0$. I then look at the inequality
$$\left| \frac{n+1}{n^2+1}-0\right|<\epsilon$$
but I have no idea how to isolate $n$. The best I can come up with, which may be the right idea, is to use another function $f$ such that
$$\left|\frac{n+1}{n^2+1}\right|<f<\epsilon$$
and then work with that. But my idea of using $f=\lvert n+1\rvert$ seems to have me a bit stuck too.
| Since the given sequence is positive for all $n\geq 1$, we can drop the absolute value signs.
Consider the inequality:
\begin{equation*}
\frac{n+1}{n^2+1}<\epsilon
\end{equation*}
This is a messy inequality, as solving for $n$ would be rather difficult. Instead we shall find an upper bound for the numerator $(n+1)$ and a lower bound on the denominator $(n^2+1)$ so that we can construct a new sequence $f_{n}$ such that $$\frac{n+1}{n^{2}+1}<f_{n}<\epsilon.$$
Since $n+1$ behaves like $n$ at very large numbers, we shall try to find a value for $b$ such that $bn>n+1$. And since $n^{2}+1$ behaves like $n^{2}$ for very large numbers, we shall try to find a value for $c$ such that $cn^{2}<n^{2}+1$. $n+1<n+n=2n$ and $n^{2}+1>\frac{1}{2}n^{2}$ for all $n\geq 1$.
Now, consider the new inequality:
\begin{align*}
\frac{n+1}{n^{2}+1}<\frac{2n}{\frac{1}{2}n^{2}}<\frac{4}{n}<\epsilon
\end{align*}
We have
\begin{equation*}
\frac{4}{n}<\epsilon\iff \frac{4}{\epsilon}<n
\end{equation*}
Let $\displaystyle N(\epsilon)=\lfloor4/\epsilon\rfloor$. Therefore, $\forall \epsilon>0,\ \exists N(\epsilon)=\lfloor4/\epsilon\rfloor\ni\forall n>N(\epsilon)$
\begin{align*}
\frac{n+1}{n^{2}+1}&<\frac{4}{n}<\frac{4}{N(\epsilon)+1}<\frac{4}{4/\epsilon}=\epsilon
\end{align*}
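The bounding chain can be tested numerically, e.g. with $\epsilon=0.01$, so $N(\epsilon)=400$:

```python
eps = 0.01
N = int(4 / eps)  # floor(4 / eps) = 400
for n in range(1, 5000):
    assert (n + 1) / (n ** 2 + 1) < 4 / n  # comparison sequence bound
    if n > N:
        assert (n + 1) / (n ** 2 + 1) < eps
print("bounds hold for n = 1 .. 4999")
```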
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/507897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 0
} |
If $ a \mid bc $ then $\frac{a}{\gcd(a,b)} \mid c$? Prove or reject this statement:
If $ a \mid bc $ then $\displaystyle \frac{a}{\gcd(a,b)} \mid c$
| Hint: If $a\mid bc$ and $(a,b)=1$ then $a\mid c$. Use this to show:
$$
\displaystyle a \mid bc \implies \frac{a}{\gcd(a,b)} \mid \frac{b}{\gcd(a,b)}\times c \implies \frac{a}{\gcd(a,b)}\mid c
$$
Because $\gcd(\frac{a}{\gcd(a,b)},\frac{b}{\gcd(a,b)})=1$.
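An exhaustive check of the statement over a small range (a sketch using `math.gcd`):

```python
from math import gcd
from itertools import product

for a, b, c in product(range(1, 25), repeat=3):
    if (b * c) % a == 0:               # a | bc
        assert c % (a // gcd(a, b)) == 0
print("verified for all a, b, c in 1..24")
```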
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
A round table probability question Hi guys I am writing my P exam for the second time and I remembered two question that confused me when writing the exam. I asked my prof. but it confused him as well. For simplicity I will ask one question here and post the other one after.
So if you could please help me I would really be greatful.
Question:
You have 3 Actuaries, 4 Social Workers, and 3 clients. They all sit at a ROUND table. What is the probability that NO actuaries sit beside each other.
My attempt:
I labelled them 3 A, 4 S, 3 C. Then I labelled 1 to 10 on a circle. I placed an A in one spot, and since no other actuaries may sit beside each other we have 7 spaces left. So now how can we place the other 2 actuaries so they don't sit beside each other?
Then I got confused. Please help out.
Thank you
| The way you are asking the question it seems to me that whether it is social workers or clients has no relevance. So basically we have 10 people seated around a round table and 3 specific people must not sit next to each other.
So label the seats 1 through 10 as you suggested: You can choose 3 out of 10 in $\binom{10}{3}=120$ ways.
You can construct all ways to choose 3 non-adjacent seats by choosing $3$ out of $7$, say $i<j<k\in\{1,2,...,7\}$, and mapping them to $i+1,j+2,k+3$, which gives three seats among $\{2,3,...,10\}$ separated by at least one seat each. This produces $\binom{7}{3}=35$ successful patterns.
But this does not account for patterns where the first seat chosen is $1$. These patterns can be constructed by fixing the first seat as $i=1$, then choosing $j,k\in\{2,3,...,7\}$ and mapping these to $j+1,k+2$ in $\{3,...,9\}$. This produces another $\binom{6}{2}=15$ successful patterns.
To sum up, we have $35+15=50$ successful patterns out of $120$ possible patterns. Thus we have
$$
P=\frac{50}{120}=\frac{5}{12}
$$
probability of success.
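A brute-force count over all $\binom{10}{3}=120$ seat patterns confirms the $50$ successful ones:

```python
from itertools import combinations

total = good = 0
for trio in combinations(range(10), 3):
    total += 1
    # seats are adjacent on the cycle iff they differ by 1 mod 10
    if all((x - y) % 10 not in (1, 9) for x, y in combinations(trio, 2)):
        good += 1
print(good, total)  # 50 120
```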
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Showing that rationals have Lebesgue measure zero. I have been looking at examples showing that the set of all rationals have Lebesgue measure zero. In examples, they always cover the rationals using an infinite number of open intervals, then compute the infinite sum of all their lengths as a sum of a geometric series. For example, see this proof.
However, I was wondering if I could simply define an interval $(q_n - \epsilon, q_n + \epsilon )$ around each rational number $q_n$. Since there is a countable number of such intervals, the Lebesgue measure must be bounded above by a countable sum of their Lebesgue measures (subadditivity), i.e. $\mu(\mathbb{Q}) \leq \mu(\bigcup^{\infty}_{i=1} (q_n - \epsilon, q_n + \epsilon)) = \sum^{\infty}_{i=1} \mu((q_n - \epsilon, q_n + \epsilon))$.
Then, argue that each individual term is $\mu((q_n - \epsilon, q_n + \epsilon)) < 2\epsilon$ and is thus zero since the epsilon is arbitrary?
| Define an interval $\left(q_n-\frac{\epsilon}{2^n},\,q_n+\frac{\epsilon}{2^n}\right)$
around each rational number $q_n$.
For $\epsilon>0$, $\mu\left(\left(q_n-\frac{\epsilon}{2^n},\,q_n+\frac{\epsilon}{2^n}\right)\right)=2\left(\frac{\epsilon}{2^n}\right)$,
and $\sum^{\infty}_{n=1} 2\,\frac{\epsilon}{2^n}=2\epsilon$.
Since $\epsilon$ is arbitrary, $\mu(\mathbb{Q}) = 0$.
I think this simple trick works.
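The geometric sum behind the trick, checked numerically for one $\epsilon$:

```python
eps = 1e-3
# total length of the covering intervals: sum of 2*eps/2^n over n >= 1
total_length = sum(2 * eps / 2 ** n for n in range(1, 60))
print(total_length)  # ~ 2 * eps = 0.002
```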
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 2,
"answer_id": 1
} |
About a proof that functions differing from a measurable function on a null set are measurable Consider the following problem:
Say $f: M \to \mathbb{R}$ is a measurable function, $M$ a measurable set, and $g : M \to \mathbb{R}$ such that $ Y = \{ x : f(x) \neq g(x) \} $ is a null set. We want to show $g$ is measurable.
So the solution would be to consider the difference $d = g - f$. This function $d$ is zero except on a null set. Consider the following set
$$
\{x : d(x) > a \} =
\begin{cases}
N, & \text{if }a \geq 0 \\
N^c, & \text{if } a < 0
\end{cases}
$$
where $N$ is a null set. Then it follows that $d$ is measurable and therefore $g$ is measurable. However, the set my teacher considered still feels dirty to me. Why is $\{x : d(x) > a \}$ equal to the complement of the null set $N$ when $a < 0$?
| The expression that troubles you:
$$\{x: d(x) > a\} = \begin{cases}N&: a \ge 0\\N^c&: a < 0 \end{cases}$$
is really an abuse of notation. What is meant is something like:
$$\{x: d(x) > a\} = \begin{cases}N_a&: a \ge 0\\N_a^c&: a < 0 \end{cases}$$
because the null set $N_a$ varies with $a$, in the following way:
$$N_a = \begin{cases}\{x: d(x) > a\}&: a \ge 0\\ \{x: d(x) \le a\} &: a < 0\end{cases}$$
(This definition of $N_a$ is a consequence of the asserted equality above.)
The idea is that for each choice of $a$, we have that:
$$N_a \subseteq \{x: d(x) \ne 0\} = \{x: f(x) \ne g(x)\}$$
(check it!) and we know by assumption that the latter is a null set. So each $N_a$ is a subset of a null set.
So, provided $N_a$ is actually measurable, it must have $\mu(N_a) = 0$ by monotonicity of the measure $\mu$.
The way the solution is set up indicates that indeed each $N_a$ is measurable. Measure spaces having this property, where each subset of a null set is also a measurable set (and hence a null set), are called complete measure spaces.
Every measure space can be modified in a nonessential way to become a complete measure space; however the proof of this completion theorem for measure spaces is a bit technical so you may want to take it for granted as a justification for the assumption in your question.
Coming back to the original question, if we look at the case where $a <0$, we have that $\{x: d(x) > a\}$ is the complement of the null set $N_a = \{x :d(x) \le a\}$. It thus does not bear any relation to the null set $N_{a'} = \{x:d(x) >a'\}$ for any $a' > 0$.
You are therefore right to conclude that the solution as written is "dirty", because it obscures the fact that the null set $N$ depends on $a$ in a nontrivial way.
I hope this clarifies the matter for you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show that the given equation has at least one complex root $a$ s.t. $|a|>12$ How do I show that the equation $x^3+10x^2-100x+1729=0$ has at least one complex root $a$ such that $|a|>12$?
| The function necessarily has three complex roots, which we'll call $\alpha$, $\beta$ and $\gamma$. Hence your polynomial can be factored as
$$(x - \alpha) (x - \beta) (x - \gamma) = x^3 + 10x^2 - 100x + 1729$$
Expanding the left hand side, we find that
$$-\alpha \beta \gamma = 1729$$
If it were true that all three roots had moduli less than or equal to $12$, we would have
$$1729 = |\alpha| |\beta| |\gamma| \le 12^3 = 1728$$
a contradiction.
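A quick numeric sanity check of this argument (a Python sketch, not part of the original answer): the polynomial changes sign between $-20$ and $-19$, so it has a real root there, whose modulus already exceeds $12$.

```python
# Numeric sanity check (a sketch): p has a real root between -20 and -19,
# so some root has modulus greater than 12; also note 12^3 = 1728 < 1729.

def p(x):
    return x**3 + 10 * x**2 - 100 * x + 1729

# Sign change on [-20, -19] guarantees a real root there (IVT).
lo, hi = -20.0, -19.0
assert p(lo) < 0 < p(hi)

# Bisection to locate the root (p is increasing on this interval).
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2

print(abs(root) > 12)   # the real root's modulus exceeds 12
print(12**3)            # 1728, the key inequality 1728 < 1729
```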
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Range of $S = \frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots+\frac{1}{\sqrt{n}}$ If $\displaystyle S = \frac{1}{\sqrt{1}}+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{3}}+\cdots+\frac{1}{\sqrt{n}}$ and $n\in \mathbb{N}$, then the range of $S$ is
$\underline{\bf{My \;\; Try}}$:: For the lower bound: $\sqrt{n}\geq \sqrt{r}\;\; \forall\, 1\le r\le n$
So $\displaystyle \frac{1}{\sqrt{r}}\geq \frac{1}{\sqrt{n}}$ Now Adding from $r = 1$ to $r=n$
$\displaystyle \sum_{r=1}^{n}\frac{1}{\sqrt{r}}\geq \frac{n}{\sqrt{n}} = \sqrt{n}\Rightarrow \sum_{r=1}^{n}\frac{1}{\sqrt{r}}\geq\sqrt{n}$
Now how can I find an upper bound? Help required.
Thanks
| When we say a series is bounded, we mean it is bounded by a CONSTANT. As Gerry wrote above, lower bound is $1$ and there is no upper bound.
If you are familiar with the harmonic series, you can see that each term of $S$ is greater than or equal to the corresponding term of the harmonic series, since $\frac{1}{\sqrt{r}}\ge\frac{1}{r}$ for $r\ge 1$. We all know the harmonic series diverges, and hence there is no upper bound for $S$.
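A small numeric illustration of the lower bound (a sketch; the sample values of $n$ are chosen for illustration only):

```python
# Sketch: check the lower bound sum_{r=1}^{n} 1/sqrt(r) >= sqrt(n),
# and watch the partial sums keep growing: no constant upper bound.
import math

def S(n):
    return sum(1 / math.sqrt(r) for r in range(1, n + 1))

for n in (1, 10, 100, 10_000):
    assert S(n) >= math.sqrt(n)

print(S(100), S(10_000))   # the sums keep growing without bound
```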
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $a<S_n-[S_n]<b$ for infinitely many $n$
Let $S_n=1+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{n}$, where $n$ is a positive integer. Prove that for any real numbers $a,b,0\le a\le b\le 1$, there exist infinite many $n\in\mathbb{N}$ such that
$$a<S_n-[S_n]<b$$
where $[x]$ represents the largest integer not exceeding $x$.
This problem is the last problem from the 2012 China Second Round (a high school math competition). I think this problem may have nicer methods, maybe using analytic methods.
| Let $i\geqslant1/(b-a)$ and $k=\lceil S_i\rceil$. Since the sequence $(S_j)_{j\geqslant1}$ is unbounded, some values of this sequence are greater than $k+a$, hence $n=\min\{j\mid S_j\gt k+a\}$ is well defined and finite. Then $S_{n-1}\leqslant k+a\lt S_{n}$ and $n\gt i$ since $S_i\lt k+a$ hence
$$
S_{n}=S_{n-1}+1/n\lt k+a+1/i\leqslant k+b.
$$
Thus, $k+a\lt S_{n}\lt k+b$, in particular, $\lfloor S_{n}\rfloor=k$ and $a\lt S_{n}-\lfloor S_{n}\rfloor\lt b$.
For every $i$ large enough, this provides some $n\gt i$ such that $a\lt S_{n}-\lfloor S_{n}\rfloor\lt b$. To get infinitely many indexes $n$ such that $a\lt S_{n}-\lfloor S_{n}\rfloor\lt b$, just iterate the construction.
This approach works for every unbounded sequence $(S_n)_n$ such that $S_{n+1}-S_n\to0$.
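The construction above can be carried out numerically; here is a sketch (the interval endpoints $a=0.3$, $b=0.35$ are my own illustrative choice, not from the problem):

```python
# Sketch of the construction: scan harmonic partial sums S_n and record
# the indices n whose fractional part lands in (a, b).
import math

a, b = 0.3, 0.35          # illustrative choice of the target interval
hits = []
S = 0.0
for n in range(1, 200_000):
    S += 1 / n
    frac = S - math.floor(S)
    if a < frac < b:
        hits.append(n)

print(len(hits))   # infinitely many exist in theory; plenty in this range
```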
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Proof without using Extreme value theorem? $$\text{Let } f:[a,b]\to \mathbb{R} \text{ be continuous and strictly positive. } $$
$$ \text{prove } c= \inf f(\,(a,b)\,)\neq 0$$
Is there a way other than using the extreme value theorem?
One way might be to show $c= \min f( [a,b] )$, and therefore positive, but I'm not sure how to proceed (without the theorem).
If $c=0$, intuition tells me there must be a point $x_0 \in [a,b]$ such that $\lim_{x \to x_0} f(x) = 0$, which contradicts $f$ being continuous and strictly positive...
| Suppose that $\inf\{f(x):x\in(a,b)\}=0$. Then, for each positive integer $n$, there is some $x_n\in(a,b)$ such that $$(*)\quad 0<f(x_n)<\frac{1}{n}.$$ Since $[a,b]$ is compact and is the closure of $(a,b)$, the sequence $(x_n)$ has a subsequence $(x_{n_k})$ that converges to some $x^*\in[a,b]$. Since $f$ is continuous, $$\lim_{k\to\infty}f(x_{n_k})=f(x^*).$$ But this limit must be zero because of $(*)$, implying that $f(x^*)=0$, which is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Matrix Algebra Question (Linear Algebra) Find all values of $a$ such that $A^3 = 2A$, where
$$A = \begin{bmatrix} -2 & 2 \\ -1 & a \end{bmatrix}.$$
The matrix I got for $A^3$ at the end didn't match up, but I probably made a multiplication mistake somewhere.
| The matrix $A$ is clearly never a multiple of the identity, so its minimal polynomial is of degree$~2$, and equal to its characteristic polynomial, which is $P=X^2-(a-2)X+2-2a$. Now the polynomial $X^3-2X$ annihilates $A$ if and only if it is divisible by the minimal polynomial $P$. Euclidean division of $X^3-2X$ by $P$ gives quotient $X+a-2$ and remainder $(a^2-2a)X-(a-2)(2-2a)$. This remainder is clearly$~0$ if $a=2$ and for no other values of $a$, so $A^3=2A$ holds precisely for that value.
You can in fact avoid doing the Euclidean division by observing, since $X$ divides $X^3-2X$, that for $P$ to divide $X^3-2X$, unless $P$ itself has a factor $X$ (which happens for $a=1$ but divisibility then fails anyway), $P$ must divide (and in fact be equal to) $(X^3-2X)/X=X^2-2$; this happens for $a=2$.
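A quick computational check of the conclusion (a sketch, not part of the original answer): with $a=2$ one even finds $A^2=2I$, which makes $A^3=2A$ immediate.

```python
# Quick check (a sketch): with a = 2 the matrix satisfies A^3 = 2A,
# and in fact A^2 = 2I, which makes the identity obvious.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = 2
A = [[-2, 2], [-1, a]]
A2 = matmul(A, A)
A3 = matmul(A2, A)

assert A2 == [[2, 0], [0, 2]]                       # A^2 = 2I
assert all(A3[i][j] == 2 * A[i][j] for i in range(2) for j in range(2))
print("A^3 = 2A holds for a = 2")
```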
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Connected, but it is not continuous at some point(s) of I I have a mathematics problem, but I have no idea how to approach it. Please help me...
The problem is "Give an example of a function $f(x)$ defined on an interval $I$ whose graph is connected, but it is not continuous at some point(s) of $I$".
In my idea, a solution is topologist's sin curve.
\begin{gather*}
f\colon[0, 1] \to [-1, 1] \\\\
f(x) = \begin{cases}
\sin(1/x) &(x\neq 0)\\
0 &(x=0).
\end{cases}
\end{gather*}
Is the graph of this function connected in $\mathbb{R}^2$, even though the function is not continuous at $x=0$?
Please help me...
| This is a good example. The graph of the topologist's sine curve, which includes the point $(0,0)$ as you have indicated, is indeed connected. However, it is not continuous. To see this, try and produce a sequence of points $x_n$ converging to $0$ for which $\sin(1/x_n) = 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/508952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
A problem about dual basis I've got a problem:
Let $V$ be the vector space of all functions from a set $S$ to a field $F$:
$(f+g)(x) = f(x) + g(x)\\
(\lambda f)(x) = \lambda f(x)$
Let $W$ be any $n$-dimensional subspace of $V$. Show that there exist points $x_1, x_2, \dots, x_n$ in $S$ and functions $f_1, f_2, \dots, f_n$ in $W$ such that $f_i(x_j) = \delta_{ij}$
I'm confused about the set $S$. In the case where $S$ has fewer than $n$ elements, how can I choose $x_1, x_2, \dots, x_n$? Can anyone help me solve this problem?
thank you.
| This is not a dual basis question. The dual of a vector space $V$ over field $k$ is the set of all linear functions from $V$ into $k$.
In your case the functions do not have to be linear, or even continuous. An example of what you have is $\mathbb R^n$, which can be thought of as the set of all functions from the set $\lbrace 1,2, \ldots n\rbrace$ into the field $\mathbb R$. These functions take the number $i$ and return the vector's $i^{th}$ component. We just usually write the vectors in the form $ \left( \begin{array}{ccc}
f(1) \\
f(2) \\
... \\
f(n) \end{array} \right)$. Now imagine we are actually working in the vector space $\mathbb R^n$ and that it contains an $m$ dimensional subspace. What does this tell us about the size of $n$? And what kind of vectors exist in a space this large?
The dual to the space $\mathbb R^n$ of column vectors, is the space of length $n$ row vectors. These work like functions on column vectors through matrix multiplication. The row vectors are functions $v: \lbrace 1,2, \ldots, n \rbrace \to \mathbb R$. The dual basis elements are (linear) functions $\alpha: \mathbb R^n \to \mathbb R$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If a and b are group elements and ab ≠ ba, prove that aba ≠ identity. Q: If a and b are group elements and ab ≠ ba, prove that aba ≠ identity.
I began by stating my theorem, then assumed ab ≠ ba. Then I tried a few inverse law manipulations, which worked in a sense, however they brought me nowhere, as I couldn't conclude my proof concretely. Suggestions? Solutions?
| Try proving the contrapositive; that is, if $aba=e$ then $ab=ba$. First of all, multiply both sides of $aba=e$ on the left by $a^{-1}$ to get $ba=a^{-1}$; now multiply both sides of this on the right by $a^{-1}$ to get $b=a^{-2}$. Now, can you see why $b$ commutes with $a$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 0
} |
Does $\int_a^b f'(\gamma(t))\gamma'(t)\,dt$ depend on the path for meromorphic functions? Given a meromorphic function $f:\mathbb C\to\mathbb C$ and a smooth curve $\gamma:[a,b]\to\Gamma\subset\mathbb C$ with $\gamma(a)\neq\gamma(b)$, I am tempted to think the fundamental theorem of calculus yields
$$\int_a^b\underbrace{f'(\gamma(t))\gamma'(t)\,dt}_{=df} = f(\gamma(t))\Big|_{t=a}^b (*)$$
independently of the chosen path. However, taking a closed integral through both $\gamma(a)$ and $\gamma(b)\neq\gamma(a)$, the residue theorem $\oint f'(\gamma(t))\gamma'(t)\,dt = 2\pi i\sum_k\operatorname{Res}_k(f')$ proves me wrong when the closed curve surrounds a singularity of $f'$ since the two paths connecting $\gamma(a)$ and $\gamma(b)$ would otherwise cancel out the residue. So now my question is:
What correction term needs to be added to $(*)$? Can that be fixed at all, or did I miss something?
| The correction term can be given in terms of the winding numbers $n(k,\gamma)$ of the directed curve $\gamma$ about the poles $k$ of $f$. We have
$$\int_a^b f'(\gamma(t))\gamma'(t) dt = f(\gamma(t)) \bigg\vert_{a}^b +2\pi i \sum_k n(k,\gamma)\mathrm{Res}_k(f').$$
Intuitively, the winding number counts how many times a curve wraps around a single point. For simple closed curves, this number will always be $0$, $1$, or $-1$, depending on whether or not the curve encloses the given point, and what direction the parametrization is given in. Note that this formula recovers the Residue theorem (these results are pretty much equivalent, if you throw some topology under the rug), if we suppose that $\gamma$ is a closed curve.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to solve a recursive equation I have been given a task to solve the following recursive equation
\begin{align*}
a_1&=-2\\
a_2&= 12\\
a_n&= -4a_{n-1}-4a_{n-2}, \quad n \geq 3.
\end{align*}
Should I start by rewriting $a_n$ or is there some kind of approach to solve these?
I tried rewriting it to a Quadratic Equation (English isn't my native language, sorry if this is incorrect). Is this the right approach, if so how do I continue?
\begin{align*}
a_n&= -4a_{n-1}-4a_{n-2}\\
x^2&= -4x-4\\
0&= x^2 + 4x + 4
\end{align*}
| $\displaystyle{\large a_{1} = -2\,\quad a_{2} = 12,\quad
a_{n} = -4a_{n - 1} - 4a_{n - 2}\,,\quad n \geq 3}$.
$$
\sum_{n = 3}^{\infty}a_{n}z^{n}
=
-4\sum_{n = 3}^{\infty}a_{n - 1}z^{n}
-
4\sum_{n = 3}^{\infty}a_{n - 2}z^{n}
=
-4\sum_{n = 2}^{\infty}a_{n}z^{n + 1}
-
4\sum_{n = 1}^{\infty}a_{n}z^{n + 2}
$$
$$
\Psi\left(z\right) - a_{1}z - a_{2}z^{2}
=
-4z\left[\Psi\left(z\right) - a_{1}z\right]
-
4z^{2}\Psi\left(z\right)
$$
where $\displaystyle{\Psi\left(z\right) \equiv \sum_{n = 1}^{\infty}a_{n}z^{n}.
\quad
z \in {\mathbb C}.\quad \left\vert z\right\vert < {1 \over 2}}$.
\begin{align}
\Psi\left(z\right)
&=
{\left(4a_{1} +a_{2}\right)z^{2} + a_{1}z \over 4z^{2} + 4z + 1}
=
{4z^{2} - 2z \over 4z^{2} + 4z + 1}
=
{4z^{2} - 2z \over \left(1 + 2z\right)^{2}}
=
\left(z - 2z^{2}\right)\,{{\rm d} \over {\rm d} z}\left(1 \over 1 + 2z\right)
\\[3mm]&=
\left(z - 2z^{2}\right)\,{{\rm d} \over {\rm d} z}\sum_{n = 0}^{\infty}\left(-\right)^{n}2^{n}z^{n}
=
\left(z - 2z^{2}\right)\sum_{n = 1}^{\infty}\left(-\right)^{n}2^{n}nz^{n - 1}
\\[3mm]&=
\sum_{n = 1}^{\infty}\left(-1\right)^{n}2^{n}nz^{n}
-
\sum_{n = 1}^{\infty}\left(-1\right)^{n}2^{n + 1}nz^{n + 1}
=
\sum_{n = 1}^{\infty}\left(-1\right)^{n}2^{n}nz^{n}
-
\sum_{n = 2}^{\infty}\left(-1\right)^{n - 1}2^{n}\left(n - 1\right)z^{n}
\\[3mm]&=
-2z
+
\sum_{n = 2}^{\infty}\left(-1\right)^{n}2^{n}nz^{n}
-
\sum_{n = 2}^{\infty}\left(-1\right)^{n - 1}2^{n}\left(n - 1\right)z^{n}
\\[3mm]&=
-2z
+
\sum_{n = 2}^{\infty}\left(-1\right)^{n}\left(2n - 1\right)2^{n}z^{n}
\end{align}
$$
\Psi\left(z\right)
=
\sum_{n = 1}^{\infty}a_{n}z^{n}
=
\sum_{n = 1}^{\infty}\left(-1\right)^{n}\left(2n - 1\right)2^{n}z^{n}
$$
$$
\begin{array}{|c|}\hline\\
\color{#ff0000}{\large\quad%
a_{n}
\color{#000000}{\large\ =\ }
\left(-1\right)^{n}\left(2n - 1\right)2^{n}\,,
\qquad\qquad
n = 1, 2, 3, \ldots
\quad}
\\ \\ \hline
\end{array}
$$
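A quick check of the boxed closed form against the original recurrence (a sketch, not part of the original answer):

```python
# Sketch: verify the closed form a_n = (-1)^n (2n - 1) 2^n against the
# recurrence a_n = -4 a_{n-1} - 4 a_{n-2} with a_1 = -2, a_2 = 12.

def closed(n):
    return (-1) ** n * (2 * n - 1) * 2 ** n

a = {1: -2, 2: 12}
for n in range(3, 30):
    a[n] = -4 * a[n - 1] - 4 * a[n - 2]

assert all(a[n] == closed(n) for n in range(1, 30))
print(a[10], closed(10))   # both values agree
```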
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Show that when $BA = I$, the solution of $Ax=b$ is unique I'm just getting back into having to do linear algebra and I am having some trouble with some elementary questions, any help is much appreciated.
Suppose that $A = [a_{ij}]$ is an $m\times n$ matrix and $B = [b_{ij}]$ and is an $n\times m$ matrix and $BA = I$ the identity matrix that is $n \times n$.
Show that if for some $b \in \Bbb{R}^m$ the equation $Ax = b$ has a solution, then the solution is unique.
| The usual method for solving uniqueness problems is generally this: assume you have two solutions, say $x$ and $y$. Then do manipulation, use theorems, whatever, and somehow show that $x=y$.
In your case, if $Ax=b$ and $Ay=b$, then multiply both sides on the left by $B$. Then $B(Ax)=Bb$ and $B(Ay)=Bb$. Can you take it from here?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Newton's Method - Slow Convergence I'm using Newton's method to find the root of the equation $\frac{1}{2}x^2+x+1-e^x=0$ with $x_0=1$. Clearly the root is $x=0$, but it takes many iterations to reach this root. What is the reason for the slow convergence? Thanks for any help :)
| Newton's method has quadratic convergence near simple zeros, but the derivative of $\frac{1}{2}x^2+x+1-e^x$ at $x=0$ is zero and so $x=0$ is not a simple zero.
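Since $x=0$ is a zero of multiplicity $3$ here ($f(0)=f'(0)=f''(0)=0$, $f'''(0)=-1\ne 0$), Newton's method converges only linearly, with error ratio tending to $(m-1)/m = 2/3$. A numeric sketch (the iteration count is an illustrative choice):

```python
# Sketch: Newton's method on f(x) = x^2/2 + x + 1 - e^x from x_0 = 1.
# The root x = 0 has multiplicity 3, so convergence is linear with
# successive-error ratio tending to (m-1)/m = 2/3.
import math

def f(x):
    return 0.5 * x * x + x + 1 - math.exp(x)

def fprime(x):
    return x + 1 - math.exp(x)

x = 1.0
for _ in range(15):
    x_prev = x
    x = x - f(x) / fprime(x)

ratio = x / x_prev           # successive error ratio (the root is 0)
print(x, ratio)              # x is still far from 0; ratio is near 2/3
```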
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Convergence of partial sums and their inverses
If a sequence $s_{k}$ of partial sums converges to a nonzero limit, and we assume that $s_{k} \neq 0$ for all $k$ $\epsilon$ $\mathbb{N}$, then also the sequence $\left \{ \frac{1}{s_{k}} \right \}$ converges.
In my book, $s_{k}$ is defined as $\sum_{j = 1}^{k}\frac{a_{j}}{10^{j}}$
which is a decimal expansion.
I can't immediately see why this sequence converges - Maybe I'm just braindead today, but I can't think of any examples in my head that are making sense to me. Can anyone point me in the right direction about why this is true?
| In general, if $\{a_n\}$ is any convergent sequence with a limit $a\neq 0$, then $\dfrac{1}{a_n}$ converges to $\dfrac{1}{a}$.
Proof. Let $\epsilon>0$. Then
$$
\left|\frac{1}{a_n}-\frac{1}{a}\right|=\left|\frac{a-a_{n}}{aa_n}\right|
$$
Since $a_n\to a$ as $n\to\infty$, we can choose a positive integer $N_1$ such that $|a-a_n|\leq \dfrac{|a|}{2}$ for all $n\ge N_1$. This implies $|a_n|\ge\dfrac{|a|}{2}$ for all $n\ge N_1$. Now choose a positive integer $N_2$ such that $|a-a_n|\leq \dfrac{a^2}{2}\epsilon$ for all $n\ge N_2$. Let $N=\max(N_1, N_2)$. It follows that
$$
\left|\frac{1}{a_n}-\frac{1}{a}\right|=\left|\frac{a-a_{n}}{aa_n}\right|=\frac{|a-a_n|}{|a||a_n|}\leq \frac{\frac{a^2}{2} \epsilon}{\frac{a^2}{2}}=\epsilon
$$
for each $n\ge N$. So $\dfrac{1}{a_n}\to\dfrac{1}{a}$ as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Means and Variances For a laboratory assignment, if the equipment is
working, the density function of the observed outcome
$X$ is
$$
f(x) = \begin{cases} 2(1-x), & 0 <x<1, \\
0, & \text{otherwise.} \end{cases}
$$
Find the variance and standard deviation of $X$.
We know that the variance is related to the mean and the second moment. I am stuck on how to set up the integrals for both of them.
| The integrals are
$$
\text{mean} = \mu = \int_0^1 x f(x) \, dx
$$
and
$$
\text{variance} = \sigma^2 = \int_0^1 (x-\mu)^2 f(x)\,dx.
$$
In order to evaluate the second integral, one must find $\mu$ by evaluating the first integral.
The second integral is the variance $\sigma^2$; the standard deviation is the square root of that.
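The two integrals can be checked numerically (a sketch; the exact values work out to $\mu = 1/3$ and $\sigma^2 = 1/18$, which the quadrature below reproduces):

```python
# Sketch: evaluate both integrals by the midpoint rule for
# f(x) = 2(1 - x) on [0, 1]; exact values are mu = 1/3, sigma^2 = 1/18.
import math

N = 100_000
h = 1 / N
xs = [(i + 0.5) * h for i in range(N)]

def f(x):
    return 2 * (1 - x)

mu = sum(x * f(x) for x in xs) * h
var = sum((x - mu) ** 2 * f(x) for x in xs) * h
sigma = math.sqrt(var)

print(mu, var, sigma)   # ~0.3333, ~0.05556, ~0.2357
```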
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is a chain complete lattice complete? If every chain in a lattice is complete (we take the empty set to be a chain), does that mean that the lattice is complete? If yes, why?
My intuition says yes, and the reasoning is that we should somehow be able to define the supremum of any subset of the lattice to be the same as the supremum of some chain related to that subset, but I've not been able to make more progress. Any suggestions?
| If $L$ is not complete, it has a subset with no join; among such subsets let $A$ be one of minimal cardinality, say $A=\{a_\xi:\xi<\kappa\}$ for some cardinal $\kappa$. For each $\eta<\kappa$ let $A_\eta=\{a_\xi:\xi\le\eta\}$; $|A_\eta|<\kappa$, so $A_\eta$ has a join $b_\eta$. Clearly $b_\xi\le b_\eta$ whenever $\xi\le\eta<\kappa$, so $\{b_\xi:\xi<\kappa\}$ is a chain; indeed, with a little more argumentation we can assume that the chain is a strictly increasing $\kappa$-sequence. Now let $b=\bigvee_{\xi<\kappa}b_\xi$ and show that $b=\bigvee A$ to get a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Proof of equivalence of algebraic and geometric dot product? Geometrically the dot product of two vectors gives the angle between them (or the cosine of the angle to be precise). Algebraically, the dot product is a sum of products of the vector components between the two vectors.
However, both formulae look quite different but compute the same result. What is an easy/intuitive proof showing the equivalence between the two definitions?
I.e., demonstrate that $a_x b_x + a_y b_y + a_z b_z = \lvert a \rvert \lvert b \rvert \cos{\theta}$
| TL;DR: So, in reality, while they do look different they actually do the exact same thing, thus outputting the same result. It's just that one is in regular coordinates and the other is in polar coordinates, that's all.
Let us consider the 2D case for simplicity.
Imagine we have a vector $\mathbf{r} = (x,y)$. We can represent this vector in polar coordinates as:
\begin{align}
\mathbf{r} = (||\mathbf{r}||\cos(\theta), ||\mathbf{r}||\sin(\theta))
\end{align}
Consider two vectors $\mathbf{r_1}$, $\mathbf{r_2}$ with components $x_1$, $x_2$, $y_1$, $y_2$, $\theta_1$, $\theta_2$, then
$\mathbf{r_1} \cdot \mathbf{r_2} = (x_1, y_1) \cdot (x_2, y_2) = x_1x_2 + y_1 y_2$
and (in polar)
\begin{align}
\mathbf{r_1} \cdot \mathbf{r_2} &= (||\mathbf{r_1}||\cos(\theta_1), ||\mathbf{r_1}||\sin(\theta_1)) \cdot (||\mathbf{r_2}||\cos(\theta_2), ||\mathbf{r_2}||\sin(\theta_2))\\
&= (||\mathbf{r_1}||)(||\mathbf{r_2}||) \cos(\theta_1) \cos(\theta_2) + (||\mathbf{r_1}||)(||\mathbf{r_2}||) \sin(\theta_1) \sin(\theta_2)\\
&= (||\mathbf{r_1}||)(||\mathbf{r_2}||) [\cos(\theta_1) \cos(\theta_2) + \sin(\theta_1) \sin(\theta_2)]
\end{align}
Using the trigonometric identity $\cos(\alpha - \beta) = \cos(\alpha) \cos(\beta) + \sin(\alpha) \sin(\beta)$ yields:
\begin{align}
\mathbf{r_1}
\cdot \mathbf{r_2} = (||\mathbf{r_1}||)(||\mathbf{r_2}||) \cos(\theta_1 - \theta_2)
\end{align}
The difference in angle between both vectors is $\Delta \theta = \theta_1 - \theta_2 $. Therefore,
\begin{align}
\mathbf{r_1} \cdot \mathbf{r_2} = x_1x_2 + y_1y_2 = (||\mathbf{r_1}||)(||\mathbf{r_2}||)\cos(\Delta\theta)
\end{align}.
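A numeric check of this equality (a sketch; the sample vectors are arbitrary illustrative choices):

```python
# Sketch: check numerically that x1*x2 + y1*y2 equals
# |r1| |r2| cos(theta1 - theta2) for a few sample vectors.
import math

vectors = [((3.0, 4.0), (1.0, 2.0)), ((-1.0, 0.5), (2.0, -3.0))]
for (x1, y1), (x2, y2) in vectors:
    algebraic = x1 * x2 + y1 * y2
    n1 = math.hypot(x1, y1)
    n2 = math.hypot(x2, y2)
    dtheta = math.atan2(y1, x1) - math.atan2(y2, x2)
    geometric = n1 * n2 * math.cos(dtheta)
    assert abs(algebraic - geometric) < 1e-12

print("algebraic and polar forms agree")
```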
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 2,
"answer_id": 0
} |
A continuous function that when iterated, becomes eventually constant Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a continuous function, and let $c$ be a number. Suppose that for all $x \in \mathbb{R}$, there exists $N_x > 0$ such that $f^n(x) = c$ for all $n \geq N_x$. Is it possible that $f^n$ (this is $f$ composed with itself $n$ times) is not constant for any $n$?
I don't know if this is true or false. So far I haven't made much progress besides trivial stuff (in an example $c$ is the unique fixed point of $f$, we have $f(x) > x$ for all $x > c$ or $f(x) < x$ for all $x > c$, etc..)
| Take for example
$$f_k(x) = \begin{cases}0 &\text{ for } x \geq -k \\ x+k & \text{ otherwise}\end{cases}$$
which are all continuous and we have $f_1^n = f_n$ which is not constant.
I hope this helps $\ddot\smile$
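The claim $f_1^n = f_n$ can be verified on sample points (a sketch; the sample points and the range of $n$ are illustrative choices):

```python
# Sketch: verify on sample points that iterating f_1 n times gives f_n.

def f(k, x):
    return 0.0 if x >= -k else x + k

for x in [-10.5, -3.0, -1.0, 0.0, 2.5]:
    y = x
    for n in range(1, 8):
        y = f(1, y)              # one more application of f_1
        assert y == f(n, x)      # matches f_n applied directly
print("f_1 iterated n times equals f_n on all samples")
```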
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/509775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
If the index $n$ of a normal subgroup $K$ is finite, then $g^n\in K$ for each $g$ in the group. Let $K \unlhd G$ be a normal subgroup of some group $G$ and let $|G/K|=n<\infty$. I want to show that $g^n\in K$ for all $g\in G$.
Let $g\in G$; if $g\in K$, then $g^n\in K$ and we are done. If $g\notin K$, consider the set of left cosets
$$
C=\{K, gK,g^2K,...,g^{n-1}K\}
$$
I want to show that these cosets are all distinct and hence $C=G/K$; then I want to show that $g^nK=K$, so $g^n\in K$.
Suppose
$$
g^lK=g^mK
$$
for some $m,l<n$, then $g^{m-l}\in K$. I am not sure how to proceed from there.
You don't need to go that far. Since $|G/K|=n<\infty$, the order of the element $gK$ in the quotient group $G/K$ divides $n$ by Lagrange's theorem. Hence for any $gK$ in $G/K$, $(gK)^n= g^nK=K$, which answers your question.
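A small computational illustration (my own example, not from the answer): take $G=S_3$ and $K=A_3$, so $n = |G/K| = 2$; the statement says $g^2$ must be an even permutation for every $g\in S_3$.

```python
# Illustration (my own example): G = S_3, K = A_3, index n = 2.
# The statement says g^2 lies in A_3 (is even) for every g in S_3.
from itertools import permutations

def compose(p, q):            # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def is_even(p):               # parity via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return inv % 2 == 0

S3 = list(permutations(range(3)))
assert all(is_even(compose(g, g)) for g in S3)
print("g^2 lies in A_3 for every g in S_3")
```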
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
If $\frac{\cos x}{\cos y}=\frac{a}{b}$ then $a\tan x +b\tan y$ equals If $$\frac{\cos x}{\cos y}=\frac{a}{b}$$ Then $$a \cdot\tan x +b \cdot\tan y$$ equals (options below):
(a) $(a+b) \cot\frac{x+y}{2}$
(b) $(a+b)\tan\frac{x+y}{2}$
(c) $(a+b)(\tan\frac{x}{2} +\tan\frac{y}{2})$
(d) $(a+b)(\cot\frac{x}{2}+\cot\frac{y}{2})$
My approach :
$$\frac{\cos x}{\cos y} = \frac{a}{b} $$
[ Using componendo and dividendo ]
$$\frac{\cos x +\cos y}{\cos x -\cos y} = \frac{a+b}{a-b}$$
$$=\frac{2\cos\frac{x+y}{2}\cos\frac{x-y}{2}}{2\sin\frac{x+y}{2}\sin\frac{y-x}{2}}$$
I'm stuck; I'd appreciate any suggestions. Thanks.
| So, we have $$\frac a{\cos x}=\frac b{\cos y}=\frac{a+b}{\cos x+\cos y}$$
$$\implies a\tan x+b\tan y=\frac a{\cos x}\cdot\sin x+\frac b{\cos y}\cdot\sin y$$
Putting the values of $\displaystyle\frac a{\cos x},\frac b{\cos y}$
$$a\tan x+b\tan y=(\sin x+\sin y)\frac{(a+b)}{\cos x+\cos y}$$
Now, $\displaystyle \cos x+\cos y=2\cos\frac{x+y}2\cos\frac{x-y}2$ and
$\displaystyle \sin x+\sin y=2\sin\frac{x+y}2\cos\frac{x-y}2$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How many ways to go from $(0,0)$ to $(20,10)$ if precisely $2$ right moves need to be made in a row? Just to clarify: This is the number of ways to go from point $(0,0)$ to point $(20,10)$ if the only directions allowed are right and up. The catch: Each of the ways must include precisely (only) $1$ instance of a "double right".
I know that all of the ways to go from $(0,0)$ to $(20,10)$ can be found by $C(30,20)$ or $C(30,10)$, but I'm having trouble wrapping my head around how to only include ways that have precisely $1$ double right.
| Imagine that your 'long right' was instead a single rightward step. Then, rather than going from $(0,0)$ to $(20,10)$ instead you would move from $(0,0)$ to $(20-k+1,10)$, where $k$ is the length of the 'long right' — but you would do it without any two consecutive rightward moves. What's more, since there's only one 'long right', we can replace any of the $j=20-k+1$ rightward moves by it and get a unique configuration (no two such configs can collide); this means that for each way of getting from $(0,0)$ to $(j, 10)$ without consecutive rightward moves, there are $j$ ways with one 'long right' move. So once we have the quantity $C_j$ of such ways, we can calculate the sum $\sum_kjC_j$ over all $k$ to get the final answer. In fact, since every value of $k$ corresponds to a unique value of $j$ and we never directly use the value of $k$, we may as well make our sum be over $j$ instead of $k$, and we'll consider it to be summed that way from here on out.
Now, let's look at that quantity $C_j$. Since now between any two rightward moves we know that there's at least one upward move, we can eliminate the first upward move between each of the $j-1$ pairs of consecutive moves to get a configuration where we instead have $j$ rightward moves and $10-(j-1)$ upward moves with no restrictions on them; contrariwise, corresponding to any one of these configurations of $j$ rightward moves and $10-(j-1)$ upward moves we have a unique configuration of $j$ rightward moves and $10$ upward moves by just re-inserting the $(j-1)$ upward moves we took out, one between each pair of rightward moves. (You may have seen this under the name 'Stars and Bars' at some point). Can you take it from here to see what $C_j$ must be, and then finish by calculating the original sum?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Infinite Geometric Series I'm currently stuck on this question:
What is the value of c if $\sum_{n=1}^\infty (1 + c)^{-n}$ = 4 and c > 0?
This appears to be an infinite geometric series with a = 1 and r = $(1 + c)^{-1}$, so if I plug this all into the sum of infinite geometric series formula $S = \frac{a}{1 - r}$, then I get the following:
$4 = \frac{1}{1 - (1 + c)^{-1}}$, which eventually lets me solve c = $\frac{1}{3}$. But this answer isn't right. Can someone help me out? Thanks in advance!
| HINT:
As $n$ starts with $1$ not $0$
$\displaystyle a=(1+c)^{-1}=\frac1{1+c},$ not $(1+c)^0=1$
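Following the hint, with $a = r = \frac{1}{1+c}$ the closed form gives $\frac{a}{1-r} = \frac{1}{c}$, so $\frac{1}{c}=4$ and $c=\frac14$. A numeric sketch (the number of partial-sum terms is an illustrative choice):

```python
# Sketch: with a = r = 1/(1+c), the sum is a/(1-r) = 1/c, so 1/c = 4
# gives c = 1/4.  Check the series numerically at c = 0.25.
c = 0.25
partial = sum((1 + c) ** (-n) for n in range(1, 200))
print(partial)   # ~4.0
```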
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Low Level Well-Ordering Principle Proof Suppose that I have k envelopes, numbered $0,1,...,k−1$, such that envelope i contains $2^i$ dollars. Using the well-ordering principle, prove the following claim.
Claim: for any integer $0 ≤ n < 2^k$, there is a set of envelopes that contain exactly n dollars between them.
How would I tackle this problem? I am stumped. I know that the well-ordering principle states that a non-empty set will always have a smallest element in the set, but I don't exactly know how to apply it to write the proof.
Let $S$ denote the values in $\{0,1,2,\ldots,2^k-1\}$ that cannot be expressed with the envelopes. $0$ is not in $S$, because "no envelopes" has zero dollars. $S$ is well-ordered because it is a subset of the naturals; if it is nonempty it has a smallest element $s$. Hence $0,1,2,\ldots,s-1$ can all be expressed with combinations of envelopes, but $s$ cannot be. You need to use the combination of these two facts to get a contradiction; hence $S$ is empty and your theorem is proved.
If you need a hint for how to get that contradiction, let me know.
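The underlying fact is just binary representation, which can be checked exhaustively for a small $k$ (a sketch; $k=5$ is an illustrative choice):

```python
# Sketch (binary representation): for k = 5, every n in {0,...,2^k - 1}
# is the sum of a subset of the envelope values 2^i, i = 0..k-1.
k = 5
values = [2 ** i for i in range(k)]        # envelopes 0..k-1

for n in range(2 ** k):
    chosen = [v for i, v in enumerate(values) if (n >> i) & 1]
    assert sum(chosen) == n

print("every n < 2^k is covered by some subset of envelopes")
```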
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Prove that $x^2 + xy + y^2 \ge 0$ by contradiction Using the fact that $x^2 + 2xy + y^2 = (x + y)^2 \ge 0$, show that the assumption $x^2 + xy + y^2 < 0$ leads to a contradiction...
So do I start off with...
"Assume that $x^2 + xy + y^2 <0$, then blah blah blah"?
It seems true...because then I go $(x^2 + 2xy + y^2) - (x^2 + xy + y^2) \ge 0$.
It becomes $2xy - xy \ge 0$, then $xy \ge 0$.
How is this a contradiction?
I think I'm missing some key point.
| By completing square,
$$x^2+xy+y^2 = x^2+2x\frac{y}{2}+\frac{y^2}{4} + \frac{3y^2}{4} = \left(x+\frac{y}{2}\right)^2+\frac{3y^2}{4}\ge 0$$
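A quick numeric check of the completed-square identity and the nonnegativity claim (a sketch; the sample points are arbitrary illustrative choices):

```python
# Sketch: check the completed-square identity and the nonnegativity
# claim on a handful of sample points.
samples = [(-2.0, 3.0), (1.5, -1.5), (0.0, 0.0), (-0.7, -0.7), (2.0, 5.0)]
for x, y in samples:
    lhs = x * x + x * y + y * y
    rhs = (x + y / 2) ** 2 + 3 * y * y / 4
    assert abs(lhs - rhs) < 1e-12
    assert lhs >= 0

print("x^2 + xy + y^2 >= 0 on all samples")
```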
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 4
} |
Is there a continuous bijection between an interval $[0,1]$ and a square: $[0,1] \times [0,1]$? Is there a continuous bijection from $[0,1]$ onto $[0,1] \times [0,1]$?
That is with $I=[0,1]$ and $S=[0,1] \times [0,1]$, is there a continuous bijection
$$
f: I \to S?
$$
I know there is a continuous surjection $g:C \to I$ from the Cantor set $C$ onto $[0,1]$.
The square $S$ is compact so there is a continuous function
$$
h: C \to S.
$$
But this leads nowhere.
Is there a way to construct such an $f$?
I ask because I have a continuous functional $F:S \to \mathbb R$.
For numerical reasons, I would like to convert it into the functional
$$
G: I \to \mathbb R, \\
G = F \circ f ,
$$
so that $G$ is continuous.
| No, such a bijection from the unit interval $I$ to the unit square $S$ cannot exist. Since $I$ is compact and $S$ is Hausdorff, a continuous bijection would be a homeomorphism (see here). But in $I$ there are only two non-cut-points, whereas in $S$ each point is a non-cut-point.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 1
} |
Why isn't removing zero rows an elementary operation? My prof taught us that during Gaussian Elimination, we can perform three elementary operations to transform the matrix:
1) Multiply a row by a non-zero constant
2) Add or subtract rows
3) Interchanging rows
In addition to those, why isn't removing zero rows an elementary operation? It doesn't affect the system in any way. Define a zero row to be a row with no leading variable.
For example, isn't $\begin{bmatrix}a & b & k\\c & d & m\end{bmatrix} \rightarrow
\begin{bmatrix}a & b & k\\c & d & m\\0 & 0 & 0\end{bmatrix}$ a valid transformation, since both matrices represent the same system?
| Here is a slightly more long-range answer: a matrix corresponds to a linear operator $T:V\to W$ where $V$ and $W$ are vector spaces with some chosen bases. The elementary row operations (or elementary column operations) then correspond to changing the basis of $W$ or of $V$ to give an equivalent matrix: one which represents the same linear operator but with the bases changed around. Under this correspondence you can get all the possible matrices corresponding to the linear operator $T$ by doing elementary row and column operations.
Adding or removing a row of zeroes will not give you a matrix corresponding to the linear operator $T$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Cool mathematics I can show to calculus students. I am a TA for theoretical linear algebra and calculus course this semester. This is an advanced course for strong freshmen.
Every discussion section I am trying to show my students (give them as a series of exercises that we do on the blackboard) some serious math that they can understand and appreciate. For example, when we were talking about cross products, I showed them the isomorphism between $\mathbb{R}^3$ with cross product and Lie algebra $so(3)$. Of course I didn't use fancy language, but we have proved Jacobi identity for cross product, then we picked a basis in skew-symmetric matrices $3\times 3$ and checked that when you commute these three basis vectors, you get exactly the "right hand rule" for cross product. So these two things are the same.
For the next recitation the topics are 1) eigenvalues and eigenvectors; 2) real quadratic forms.
Can you, please, recommend some cool things that we can discuss with them so that they will learn about eigenvalues and forms, but without doing just boring calculations? By "cool things" I mean some problems coming from serious mathematics, but when explained properly they might be a nice exercise for the students.
Moreover, do you know if there is a book with serious math given in accessible for freshmen form (in the form of exercises would be absolutely great!)?
Thank you very much!
| I myself did not study graph theory yet but I do know that if you consider the adjacency matrix of a graph, then there are interesting things with its eigenvalues and eigenvectors.
You can also show how to solve a system of ordinary differential equations of the form
$$
{x_1}'(t) = a_{11}x_1(t) + a_{12}x_2(t) + a_{13}x_3(t)\\
{x_2}'(t) = a_{21}x_1(t) + a_{22}x_2(t) + a_{23}x_3(t)\\
{x_3}'(t) = a_{31}x_1(t) + a_{32}x_2(t) + a_{33}x_3(t)
$$
using linear algebra. Let
$$
x(t) =
\begin{pmatrix}
{x_1}(t)\\
{x_2}(t)\\
{x_3}(t)
\end{pmatrix}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
A =
\begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{pmatrix}.
$$
Then the system can be written as $x'(t) = Ax(t)$. You can diagonalize (assuming it's possible) to get an invertible matrix $Q$ and a diagonal matrix $D$ so that $A = QDQ^{-1}$. Then $x'(t) = QDQ^{-1}x(t)$, or $Q^{-1}x'(t) = DQ^{-1}x(t)$. Let $y(t) = Q^{-1}x(t)$. Then $y'(t) = Dy(t)$. Since $D$ is diagonal, this is now a system of equations that are independent of one another, each of which can be solved easily. I thought this was a nice application. This was in my linear algebra book by Friedberg, Insel and Spence.
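As a concrete illustration, here is a small Python sketch; the $2\times2$ matrix, initial condition and hand-computed diagonalization are my own example, not from the book. It checks the closed form obtained by diagonalizing against brute-force numerical integration:

```python
import math

# Example: A = [[0, 1], [-2, -3]] has eigenvalues -1, -2, with A = Q D Q^{-1},
# Q = [[1, 1], [-1, -2]] (columns are eigenvectors) and Q^{-1} = [[2, 1], [-1, -1]].
def x_closed(t):
    # y(0) = Q^{-1} x(0) = (2, -1) for x(0) = (1, 0); each y_i evolves independently
    y = (2 * math.exp(-t), -1 * math.exp(-2 * t))
    # x(t) = Q y(t)
    return (y[0] + y[1], -y[0] - 2 * y[1])

def x_rk4(t, steps=10000):
    # classical Runge-Kutta integration of x' = A x as an independent cross-check
    h = t / steps
    x = [1.0, 0.0]
    f = lambda v: [v[1], -2 * v[0] - 3 * v[1]]
    for _ in range(steps):
        k1 = f(x)
        k2 = f([x[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f([x[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f([x[i] + h * k3[i] for i in range(2)])
        x = [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
    return tuple(x)

print(x_closed(1.0))  # ≈ (0.6004, -0.4651), matching the integrator
```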
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 6,
"answer_id": 0
} |
Error Term for Fourier Series? Suppose I have a piecewise smooth $2 \pi$-periodic function $f$ on $\mathbf{R}$ with a Fourier series $\sum_{n \in \mathbf{Z}}a_n e^{inx}$, a number $x_0 \in \mathbf R$, and $N>0$. I would like an upper bound for
$|f(x_0)-\sum_{n=-N}^N a_n e^{inx_0}|$. For example, if $f$ is a periodic function so that $f(x)=x$ on $(-\pi,\pi)$, and I require a partial Fourier series which is within $\frac{1}{2}$ of $f(x_0)$ at, say, $x_0=1$, I want to know how far I should go. Ideally, I would like an answer in the spirit of estimating the error term for Taylor series.
Fourier series have a spirit quite different from Taylor series, as they are nonlocal. Their behavior is affected by everything that goes on in the domain of definition. To get an explicit estimate, you can take a proof of pointwise convergence $s_N(f;x)\to f(x)$ and try to make it quantitative. For example, take Theorem 8.14 in Rudin's Principles of Mathematical Analysis (3rd edition):
Fix $x$ and suppose there are constants $\delta>0$ and $M<\infty$ such that $$|f(x+t)-f(x)| \le M|t|$$ whenever $|t|<\delta$. Then $s_N(f;x)\to f(x)$.
Proof: Define $$g(t)=\frac{f(x-t)-f(x)}{\sin (t/2)}$$ so that
$$s_N(f;x)-f(x)= \frac{1}{2\pi}\int_{-\pi}^\pi \left[g(t)\cos \frac{t}{2}\right]\sin Nt\,dt + \frac{1}{2\pi}\int_{-\pi}^\pi \left[g(t)\sin \frac{t}{2}\right]\cos Nt\,dt $$
For Rudin, an application of the Riemann-Lebesgue lemma ends the proof here. But that doesn't give an error estimate.
Instead, integrate by parts, turning the integrals into
$$ \frac{1}{2\pi N}\int_{-\pi}^\pi \frac{d}{dt} \left(g(t)\cos \frac{t}{2}\right)\cos Nt\,dt - \frac{1}{2\pi N}\int_{-\pi}^\pi \frac{d}{dt}\left(g(t)\sin \frac{t}{2}\right)\sin Nt\,dt $$
plus boundary terms (coming from discontinuities of $g$), each of which also has $N$ in the denominator and is bounded by $\sup |g|$. My (rough) estimate is
$$|s_N(f;x)-f(x)|\le \frac{k}{\pi N}\sup |g|+ \frac{1}{N} \sup \left| \left(g(t)\cos \frac{t}{2}\right)' \right| + \frac{1}{N} \sup \left| \left(g(t)\sin \frac{t}{2}\right)' \right|$$
where $k$ is the number of discontinuities of $g$. Since you know $g$, you can find $N$ that will achieve the required precision.
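For the example in the question (the $2\pi$-periodic extension of $f(x)=x$, whose Fourier coefficients work out to $b_n = 2(-1)^{n+1}/n$), one can simply tabulate the partial-sum error at $x_0=1$ with a small Python sketch; already $N=2$ gets within $\frac12$:

```python
import math

def s_N(x, N):
    # partial Fourier sum of the 2π-periodic extension of f(x) = x on (-π, π)
    return sum(2 * (-1) ** (n + 1) * math.sin(n * x) / n for n in range(1, N + 1))

x0 = 1.0
for N in (1, 2, 5, 10, 50):
    print(N, abs(x0 - s_N(x0, N)))
```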
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/510853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is "observation"? Often in mathematical writing one encounters texts like ''..we observe that this-and-that..''. Also one may find a review report basically saying ''..the paper is just a chain of observations...''.
What is an ''observation'' and how it differ from ''true results''? Is it possible to turn a nice theorem with a nice proof just to an ''observation'' with, say, some nice definitions? Does a ''real result'' turn to an ''observation'' if it is represented in a way that makes it easy to, well, observe?
I'm asking because I face papers that seem to be just ''chains of observations'' but in the same time I have seen some papers rejected for being ''just observations''. I start to feel it has something to do with the way the core ideas are presented. Or maybe some smart definitions make certain results somewhat ''too easy'' to be mathematically interesting? (Or maybe I'm just not capable to tell the difference between math and ''math''...)
| Typically, one says that something is observed as a synonym for it being obvious or at least not something that they intend to prove. An observation isn't a result per se, but an implied result often left up to the reader.
Papers can contain a lot of observations, as long as the reviewers don't feel that this is being intellectually dishonest and skirting meaningful issues. That said, sometimes the technical issues result from the definitions themselves, and historically, many 'real results' have been made 'observations' by a better use of definitions, and sometimes this is exactly the point.
When doing mathematics research, I've found that discovering the definitions or frameworks that make the results somewhat "too easy" is really the crux of the issue and this is often the place where real progress is made.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Heat Equation identity with dirichlet boundary condition Show an energy identity for the heat equation with convection and Dirichlet boundary condition.
$$u_t -ku_{xx}+Vu_x=0 \qquad 0<x<1, t>0$$
$$u(0,t) = u(1,t)=0 \qquad t>0$$
$$u(x,0) = \phi(x) \qquad 0<x<1$$
Attempt: I think I can apply maximum principle, but dont know how to approach it.
Thanks!!
You can use the separation of variables technique, but you should pay attention to choosing a suitable separation parameter.
Let $u(x,t)=X(x)T(t)$ ,
Then $X(x)T'(t)-kX''(x)T(t)+VX'(x)T(t)=0$
$X(x)T'(t)=kX''(x)T(t)-VX'(x)T(t)$
$X(x)T'(t)=(kX''(x)-VX'(x))T(t)$
$\dfrac{T'(t)}{T(t)}=\dfrac{kX''(x)-VX'(x)}{X(x)}=-\dfrac{4k^2n^2\pi^2+V^2}{4k}$
$\begin{cases}\dfrac{T'(t)}{T(t)}=-\dfrac{4k^2n^2\pi^2+V^2}{4k}\\kX''(x)-VX'(x)+\dfrac{4k^2n^2\pi^2+V^2}{4k}X(x)=0\end{cases}$
$\begin{cases}T(t)=c_3(s)e^{-\frac{t(4k^2n^2\pi^2+V^2)}{4k}}\\X(x)=\begin{cases}c_1(s)e^{\frac{Vx}{2k}}\sin n\pi x+c_2(s)e^{\frac{Vx}{2k}}\cos n\pi x&\text{when}~n\neq0\\c_1xe^{\frac{Vx}{2k}}+c_2e^{\frac{Vx}{2k}}&\text{when}~n=0\end{cases}\end{cases}$
$\therefore u(x,t)=C_1xe^{\frac{2Vx-V^2t}{4k}}+C_2e^{\frac{2Vx-V^2t}{4k}}+\sum\limits_{n=0}^\infty C_3(n)e^{\frac{2Vx-t(4k^2n^2\pi^2+V^2)}{4k}}\sin n\pi x+\sum\limits_{n=0}^\infty C_4(n)e^{\frac{2Vx-t(4k^2n^2\pi^2+V^2)}{4k}}\cos n\pi x$
$u(0,t)=0$ :
$C_2e^{-\frac{V^2t}{4k}}+\sum\limits_{n=0}^\infty C_4(n)e^{-\frac{t(4k^2n^2\pi^2+V^2)}{4k}}=0$
$\sum\limits_{n=0}^\infty C_4(n)e^{-\frac{t(4k^2n^2\pi^2+V^2)}{4k}}=-C_2e^{-\frac{V^2t}{4k}}$
$\sum\limits_{n=0}^\infty C_4(n)e^{-kn^2\pi^2t}=-C_2$
$C_4(n)=\begin{cases}-C_2&\text{when}~n=0\\0&\text{otherwise}\end{cases}$
$\therefore u(x,t)=C_1xe^{\frac{2Vx-V^2t}{4k}}+C_2e^{\frac{2Vx-V^2t}{4k}}+\sum\limits_{n=0}^\infty C_3(n)e^{\frac{2Vx-t(4k^2n^2\pi^2+V^2)}{4k}}\sin n\pi x-C_2e^{\frac{2Vx-V^2t}{4k}}=C_1xe^{\frac{2Vx-V^2t}{4k}}+\sum\limits_{n=1}^\infty C_3(n)e^{\frac{2Vx-t(4k^2n^2\pi^2+V^2)}{4k}}\sin n\pi x$
$u(1,t)=0$ :
$C_1e^{\frac{2V-V^2t}{4k}}=0$
$C_1=0$
$\therefore u(x,t)=\sum\limits_{n=1}^\infty C_3(n)e^{\frac{2Vx-t(4k^2n^2\pi^2+V^2)}{4k}}\sin n\pi x$
$u(x,0)=\phi(x)$ :
$\sum\limits_{n=1}^\infty C_3(n)e^{\frac{Vx}{2k}}\sin n\pi x=\phi(x)$
$\sum\limits_{n=1}^\infty C_3(n)\sin n\pi x=\phi(x)e^{-\frac{Vx}{2k}}$
$C_3(n)=2\int_0^1\phi(x)e^{-\frac{Vx}{2k}}\sin n\pi x~dx$
$\therefore u(x,t)=2\sum\limits_{n=1}^\infty\int_0^1\phi(x)e^{-\frac{Vx}{2k}}\sin n\pi x~dx~e^{\frac{2Vx-t(4k^2n^2\pi^2+V^2)}{4k}}\sin n\pi x$
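As a numerical sanity check on the final formula, each separated mode should satisfy the PDE and vanish at $x=0$ and $x=1$. A small Python sketch (the constants $k$, $V$ and the sample points are arbitrary choices of mine):

```python
import math

k, V = 0.7, 1.3  # arbitrary positive constants

def u(x, t, n=1):
    # a single separated mode from the series above, taking C3(n) = 1
    expo = (2 * V * x - t * (4 * k * k * n * n * math.pi ** 2 + V * V)) / (4 * k)
    return math.exp(expo) * math.sin(n * math.pi * x)

def residual(x, t, h=1e-4):
    # central finite differences for u_t - k u_xx + V u_x, which should vanish
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / (h * h)
    return u_t - k * u_xx + V * u_x

print(residual(0.3, 0.5), u(0.0, 0.5), u(1.0, 0.5))  # all effectively zero
```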
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How do I prove this transcendental equation has a solution? I am trying to prove that for the following equation, there is a B that solves it (c is a constant):
$1-B = e^{-cB}$
I understand this is a transcendental equation, but how do I prove there is a B that solves it?
I need a non-zero solution. c > 1
Note that for $c \gt 1$ and $B$ slightly greater than zero, $1-B \gt e^{-cB}$ (you can compare the derivatives at $B=0$ to show this). At $B=1$ we have $1-B \lt e^{-cB}$, so by the Intermediate Value Theorem there must be a point in $(0,1)$ where they cross.
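Once existence is established, the root is easy to compute by bisection on $g(B)=(1-B)-e^{-cB}$. A Python sketch ($c=2$ is just an illustrative choice):

```python
import math

def solve_B(c, tol=1e-12):
    # g is positive just above 0 (since g'(0) = c - 1 > 0) and negative at B = 1,
    # so bisection on (0, 1) converges to the nonzero root
    g = lambda B: (1 - B) - math.exp(-c * B)
    lo, hi = 1e-9, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(solve_B(2.0))  # ≈ 0.7968
```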
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Number theory problem in induction Without using the fundamental theorem of algebra (i.e. the prime factorization
theorem), show directly that every positive integer is uniquely representable as the product
of a non-negative power of two (possibly $2^0=1$) and an odd integer.
| Existence: Every time we take a positive integer and divide it by $2$, it gets smaller. So by the Well-Ordering Principle (equivalently, by the nonexistence of positive integers $n_1 > n_2 > ... n_k > ...$), eventually this process has to stop: thus we've written $x = 2^a y$ with $y$ not divisible by $2$ (which I assume is your definition of "odd". It's a short argument involving the Division Theorem to show that an integer is odd if and only if it's of the form $2k+1$.)
Uniqueness: Suppose $x = 2^{a_1} y = 2^{a_2} z$ with $0 \leq a_1 \leq a_2$ and $y,z \in \mathbb{Z}$ and $y$ and $z$ not divisible by $2$. Then $y = 2^{a_2-a_1} z$. Since $y$ is not divisible by $2$ we must have $a_2 - a_1 = 0$, i.e., $a_1 = a_2$; since $\mathbb{Z}$ is an integral domain -- i.e., $AB = AC$ and $A \neq 0$ implies $B = C$ -- we conclude $y = z$.
I claim that essentially the same reasoning proves a much more general result.
Proposition: Let $R$ be an integral domain satisfying the ascending chain condition on principal ideals (ACCP) -- e.g. a Noetherian domain -- and let $p$ be a nonzero, nonunit element of $R$. Then every nonzero element $x \in R$ can be written as $p^a y$ with $p \nmid y$, and if $p^{a_1} y_1 = p^{a_2} y_2$, then $a_1 = a_2$ and $y_1 = y_2$.
Indeed, the uniqueness part of the argument holds verbatim with $2$ replaced by $p$.
Existence: If $x$ is not divisible by $p$, then we may take $a_1 = 0$ and $y =x$. Otherwise, we may write $x = p y_1$. If $y_1$ is not divisible by $p$, then again we're done; if not, we can write $y_1 = p y_2$, so $x = p^2 y_2$. If at some point we reach $x = p^a y_a$ with $y_a$ not divisible by $p$, then we're done. If not then $(y_1) \subsetneq (y_2) \subsetneq \ldots$ is an infinite strictly ascending chain of principal ideals, contradiction.
Thus we've associated a sort of $\operatorname{ord}_p$ function to any nonzero nonunit in any domain satisfying ACCP. This function is a discrete valuation if and only if $p$ is a prime element.
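For the integer case $p=2$, the existence argument is literally an algorithm: divide out factors of $2$ until the cofactor is odd. A quick Python sketch:

```python
def two_adic(x):
    """Write a positive integer x as 2**a * y with y odd."""
    a = 0
    while x % 2 == 0:
        x //= 2
        a += 1
    return a, x

assert two_adic(96) == (5, 3)  # 96 = 2^5 · 3
assert all(2 ** a * y == n and y % 2 == 1
           for n in range(1, 1000) for a, y in [two_adic(n)])
print(two_adic(96))  # (5, 3)
```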
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Under what conditions is the identity |a-c| = |a-b| + |b-c| true? As the title suggests, I need to find out under what conditions the identity |a-c| = |a-b| + |b-c| is true.
I really have no clue as to where to start it. I know that I must know under what conditions the two sides of a triangle are equal to the remaining one. However, I really can't figure out when is that true. Would anyone care to enlighten me as to how would I go about doing this?
If $a<c$, we need $a\le b\le c$, and if $c<a$, we need $c\le b\le a$; under those conditions the identity is always true. So for any $b$ in the interval $[a,c]$ or $[c,a]$ (endpoints included), the identity $|a-c| = |a-b| + |b-c|$ holds.
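A brute-force check over a small integer grid (Python sketch) confirms the characterization, and shows that the endpoint cases $b=a$ and $b=c$ satisfy the identity as well:

```python
from itertools import product

def identity_holds(a, b, c):
    return abs(a - c) == abs(a - b) + abs(b - c)

def between(a, b, c):
    return min(a, c) <= b <= max(a, c)

# the identity holds exactly when b lies (inclusively) between a and c
assert all(identity_holds(a, b, c) == between(a, b, c)
           for a, b, c in product(range(-5, 6), repeat=3))
print("verified on all triples in {-5,...,5}^3")
```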
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Determining if a sequence of functions is a Cauchy sequence? Show that the space $C([a,b])$ equipped with the $L^1$-norm $||\cdot||_1$ defined by $$ ||f||_1 = \int_a^b|f(x)|dx ,$$
is incomplete.
I was given a counter example to disprove the statement:
Let $f_n$ be the sequence of functions:
$$f_n(x) = \begin{cases} 0 & x\in\left[a,\frac{a+b}{2}\right)\\
n\left(x-\frac{a+b}{2}\right) & x\in\left[\frac{a+b}{2},\frac{a+b}{2}+\frac{1}{n}\right)\\
1 & x\in \left[\frac{a+b}{2}+\frac{1}{n},b\right] \end{cases}.$$
This is a Cauchy sequence (for $n$ large enough that $\frac{a+b}{2}+\frac{1}{n}\le b$) that converges to a discontinuous function.
My question is:
How do I see that such a sequence of functions is cauchy? My thought was that the $||\cdot||_1$ will determine the differences in area under the curve for each function, so that $||f_n-f_m||\leq \frac{(b-a)}{2}$. Is this correct?
No, that is not correct. You need to be able to make $\lVert f_n - f_m \rVert$ arbitrarily small for sufficiently large $n,m$, and $(b-a)/2$ is a fixed number. However, you do have the right idea: try to find a bound for $\lVert f_n - f_m \rVert$ for $m \geq n$ by bounding the measure of the set on which the difference is nonzero, multiplied by the maximum difference between the functions on that set.
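Carrying that hint out numerically (Python sketch; I take $a=0$, $b=1$, so the ramp sits at $1/2$): the difference $f_m-f_n$ is supported on an interval of length $1/n$ and bounded by $1$, and the distance works out to $\frac{1}{2n}-\frac{1}{2m}$ for $m\ge n$, which can be made arbitrarily small:

```python
def f(n, x):
    # the ramp functions above with a = 0, b = 1, so the jump develops at 1/2
    if x < 0.5:
        return 0.0
    if x < 0.5 + 1.0 / n:
        return n * (x - 0.5)
    return 1.0

def l1_dist(n, m, steps=100000):
    # midpoint-rule approximation of the integral of |f_n - f_m| over [0, 1]
    h = 1.0 / steps
    return h * sum(abs(f(n, (i + 0.5) * h) - f(m, (i + 0.5) * h)) for i in range(steps))

# the exact distance works out to 1/(2n) - 1/(2m) for m >= n, which -> 0
print(l1_dist(10, 40))  # ≈ 1/20 - 1/80 = 0.0375
```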
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proving $f(C) \setminus f(D) \subseteq f(C \setminus D)$ and disproving equality Let $f: A\longrightarrow B$ be a function.
1)Prove that for any two sets, $C,D\subseteq A$ , we have $f(C) \setminus f(D)\subseteq f(C\setminus D)$.
2)Give an example of a function $f$, and sets $C$,$D$, for which $f(C) \setminus f(D) \neq f(C\setminus D)$
First time exposed to sets.. how would I go about proving this? What assumptions should I make?
|
Here, $f: A \rightarrow B$ is in green and $\{S_i\} = S_i$ for $S = A, B$ and $i = 1, 2.$
Ignore $\{b_2\} = B_2$ in this picture. This picture proves that
$ f(A_1) - f(A_2) \neq f(A_1 - A_2)$. Incidentally, the same picture works for Is this proof correct for : Does $F(A)\cap F(B)\subseteq F(A\cap B) $ for all functions $F$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
How to express a vector as a linear combination of others? I have 3 vectors, $(0,3,1,-1), (6,0,5,1), (4,-7,1,3)$, and using Gaussian elimination I found that they are linearly dependent. The next question is to express each vector as a linear combination of the other two. Different resources say just to use Gaussian elimination, but I just end up with a matrix in RREF. How can I find different vectors as a linear combination of others?
| Let's look at Gaussian elimination:
\begin{align}
\begin{bmatrix}
0 & 6 & 4 \\
3 & 0 & -7 \\
1 & 5 & 1 \\
-1 & 1 & 3
\end{bmatrix}
\xrightarrow{\text{swap row 1 and 3}}{}&
\begin{bmatrix}
1 & 5 & 1 \\
3 & 0 & -7 \\
0 & 6 & 4 \\
-1 & 1 & 3
\end{bmatrix}\\
\xrightarrow{R_2-3R_1}{}&
\begin{bmatrix}
1 & 5 & 1 \\
0 & -15 & -10 \\
0 & 6 & 4 \\
-1 & 1 & 3
\end{bmatrix}\\
\xrightarrow{R_4+R_1}{}&
\begin{bmatrix}
1 & 5 & 1 \\
0 & -15 & -10 \\
0 & 6 & 4 \\
0 & 6 & 4
\end{bmatrix}\\
\xrightarrow{-\frac{1}{15}R_2}{}&
\begin{bmatrix}
1 & 5 & 1 \\
0 & 1 & 2/3 \\
0 & 6 & 4 \\
0 & 6 & 4
\end{bmatrix}\\
\xrightarrow{R_3-6R_2}{}&
\begin{bmatrix}
1 & 5 & 1 \\
0 & 1 & 2/3 \\
0 & 0 & 0 \\
0 & 6 & 4
\end{bmatrix}\\
\xrightarrow{R_4-6R_2}{}&
\begin{bmatrix}
1 & 5 & 1 \\
0 & 1 & 2/3 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}\\
\xrightarrow{R_1-5R_2}{}&
\begin{bmatrix}
1 & 0 & -7/3 \\
0 & 1 & 2/3 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}\\
\end{align}
If $v_1$, $v_2$ and $v_3$ are your vectors, this says that
$$
v_3=-\frac{7}{3}v_1+\frac{2}{3}v_2
$$
because elementary row operations don't change linear relations between the columns.
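The relation can be confirmed with exact rational arithmetic, for instance in Python:

```python
from fractions import Fraction as F

v1 = [F(0), F(3), F(1), F(-1)]
v2 = [F(6), F(0), F(5), F(1)]
v3 = [F(4), F(-7), F(1), F(3)]

combo = [F(-7, 3) * x + F(2, 3) * y for x, y in zip(v1, v2)]
assert combo == v3
print("v3 = -7/3 * v1 + 2/3 * v2 confirmed")
```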
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that there exist infinitely many primes of the form $6k-1$ This is a question on the text book that i have no way to deal with. Can anyone help me?
Show that there exist infinitely many primes of the form $6k-1$
Hint: Suppose $p_1, \dots, p_n$ were all the primes of the form $6k-1$, and let $N = 6p_1\cdots p_n-1$, which is also of the form $6k-1$. Then $N$ is divisible by neither $2$, $3$, nor any $p_i$ (why?). If every prime factor of $N$ had the form $6k+1$, then $N$ itself would be $\equiv 1 \pmod 6$ (why?). So $N$ has a prime factor of the form $6k-1$ that is not on the list, a contradiction.
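One subtlety: $N$ itself need not be prime (for the list $\{5, 11\}$, $N = 6\cdot 55 - 1 = 329 = 7\cdot 47$), but it always has a prime factor of the form $6k-1$ outside the list, which is what the contradiction needs. A quick Python check, using trial-division factoring on some sample lists of my own:

```python
import math

def prime_factors(n):
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

for ps in ([5], [5, 11], [5, 11, 17], [5, 11, 17, 23, 29]):
    N = 6 * math.prod(ps) - 1
    fs = prime_factors(N)
    # N ≡ 5 (mod 6), so not all of its prime factors can be ≡ 1 (mod 6),
    # and none of the listed primes divides N (each divides N + 1)
    assert any(p % 6 == 5 and p not in ps for p in fs), (ps, N, fs)
print("every sample list yields a new prime of the form 6k - 1")
```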
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/511955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Creating an alternating sequence of positive and negative numbers
TL; DR -> How does one create a series where at an arbitrary $nth$ term, the number will become negative.
I'm learning a lot of mathematics again, primarily because there are such wonderful resources available on the internet to learn. On this journey, I've stumbled across some very interesting sequences, for example:
$$ a_n = \{ 1, -1, 1, -1, 1, -1, ... \tag{1} \}$$
And this is one example of an interesting divergent sequence, which can be generated by the formula:
$$a_n = (-1)^{n+1} \tag{2}$$
Now this is a sequence that you can easily create, how would one create a series where you can have the $-1$ appear at an arbitrary $nth$ term?
For example:
$$ a_n = \{ 1, 1, -1, 1, 1, -1, ... \tag{3}\}$$
How would one attempt to define the series on $(3)$?
| $a_n=\frac13-\frac23\cos\frac{2n\pi}3-\frac23\cos\frac{4n\pi}3$
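A quick numerical check of this closed form (Python):

```python
import math

def a(n):
    return 1/3 - 2/3 * math.cos(2 * n * math.pi / 3) - 2/3 * math.cos(4 * n * math.pi / 3)

print([round(a(n)) for n in range(1, 10)])  # [1, 1, -1, 1, 1, -1, 1, 1, -1]
```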
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
If $M\oplus M$ is free, is $M$ free? If $M$ is a module over a commutative ring $R$ with $1$, does $M\oplus M$ free, imply $M$ is free? I thought this should be true but I can't remember why, and I haven't managed to come up with a counterexample.
I apologize if this has already been answered elsewhere.
| This would mean that there is no element of order 2 in a $K$-group, which is clearly not correct. To find an example, you can try to find a ring $R$ such that $K_0(R)$ has an element of order 2. As an example related to my research interest, I can say $K_0(C(\mathbb{RP}^2))= \mathbb{Z}\oplus \mathbb{Z}/2\mathbb{Z}$, see Karoubi 1978, IV.6.47.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 1
} |
Calculating a SQRT digit-by-digit? I need to calculate the SQRT of $x$ to $y$ decimal places. I'm dealing with $128$-bit precision, giving a limit of $28$ decimal places. Obviously, if $\,y > 28$, the Babylonian method, which I'm currently using, becomes futile, as it simply doesn't offer $y$ decimal places.
My question to you is, can I calculate the Square Root digit-by-digit? For example, calculating the $n$-th digit, assuming I already have digit $\,n - 1$. I'm well aware this may be impossible, as I've not found any such method through Google yet, just checking here before I give up in search of arbitrary precision methods.
| This method is perhaps not very practical, and I don't know how to properly typeset (or even explain) this method, but I will show in an example how to compute $\sqrt{130}$ digit by digit. It is very similar to ordinary long division. (If someone has ideas on how to improve the typesetting, please show me!)
Step 1:
Insert delimiters, grouping the digits two by two from the right:
$$\sqrt{1/30}$$
Step 2:
Start from with the leftmost digit (-group). What is the square root of $1$? It is $1$, so the first digit is $1$. Put $1$ in a memory column (to the left in this example). Subtract $1^2=1$, and move down the next digit group,
$$
\begin{array}{rcl}
1 & \qquad & \sqrt{1/30}=1\ldots \\
+1 & & -1 \\
\overline{\phantom{+}2} & & \overline{\phantom{-}030}
\end{array}
$$
Step 3
Add a symbol $x$ to (two places in) the memory column:
$$
\begin{array}{rcl}
1\phantom{1} & \qquad & \sqrt{1/30}=1\ldots \\
+1\phantom{1} & & -1 \\
\overline{\phantom{+}2x} & & \overline{\phantom{-0}30} \\
\phantom{}x
\end{array}
$$
We want to find a digit $x$ such that $x\cdot 2x$ is as large as possible, but below $30$ (our current remainder). This $x$ will be the next digit in the result. In this case, we get $x=1$ ($x=3$ would for example give $3\cdot23=69$, which is too much), so we replace $x$ with $1$ in the memory column and put a $1$ in the result. Finish the step by subtracting $1\cdot 21=21$ from the remainder, and moving down the next digit group (which is $00$, since all the decimals are zero in our case)
$$
\begin{array}{rcl}
1\phantom{1} & \qquad & \sqrt{1/30}=11\ldots \\
+1\phantom{1} & & -1 \\
\overline{\phantom{+}21} & & \overline{\phantom{-0}30} \\
\phantom{+2}1 & & \phantom{}-21 \\
\overline{\phantom{+}22} & & \overline{\phantom{-00}900}
\end{array}
$$
As we have come to moving down decimals, we should also add a decimal point to the result.
Step 4
Add a symbol $x$ to (two places in) the memory column:
$$
\begin{array}{rcl}
1\phantom{1x} & \qquad & \sqrt{1/30}=11.\ldots \\
+1\phantom{1x} & & -1 \\
\overline{\phantom{+}21}\phantom{x} & & \overline{\phantom{-0}30} \\
\phantom{+2}1\phantom{x} & & \phantom{}-21 \\
\overline{\phantom{+}22x} & & \overline{\phantom{-00}900} \\
\phantom{+22}x & &
\end{array}
$$
Which digit $x$ now makes $x\cdot 22x$ as large as possible, but less than $900$? The answer is $x=4$, which is the next digit in the result.
$$
\begin{array}{rcl}
1\phantom{1x} & \qquad & \sqrt{1/30}=11.4\ldots \\
+1\phantom{1x} & & -1 \\
\overline{\phantom{+}21}\phantom{x} & & \overline{\phantom{-0}30} \\
\phantom{+2}1\phantom{x} & & \phantom{}-21 \\
\overline{\phantom{+}224} & & \overline{\phantom{-00}900} \\
\phantom{22}+4 & & \phantom{0}-896 \\
\overline{\phantom{+}228} & & \overline{\phantom{-0000}400}
\end{array}
$$
Subtract, move down the next digit group, add the memory column, ...
Step n
Imitate what we did in step 4.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
How can I find the formula used to produce this number? In a game, each character has different attributes with values to them. The attributes are things like Strength and Speed and are graded on a scale of 1-100.
The game uses a formula to produce an overall number. I want to make a program that can use the formula to produce the overall, but I do not know the formula used in the game to produce it. I have 14 examples of attributes and the overall value given.
The Numbers I have -
In previous games, there were 10 different attributes, the sum of which was divided by 10 with 10 added to the result.
Edit: A friend gave me the formula so I'll just post it here.
((Striking Power+Grappling Power+Durability+Charisma+1)÷8.25)
+
((Submission+Striking Defense+Grappling Defense+Speed+Toughness+1)÷12.25)
+
((Jumping+Agility+Adrenaline+Recovery+Tag Team+2)÷25.25)
| That particular function could have been discovered by linear regression, although you might have needed a few data points.
In general, reverse engineering a formula from data is a very hard and involved process, requiring lots of experimentation and some amount of experience in interpreting the results. Although sometimes you get lucky and things are simple and don't take much work.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Finding the value of $ \sum_{n=5}^{204} (n - 2) $ Is there a generalized formula for finding a sum such as this one? I'm going over an old quiz for a programming class but I'm not able to solve it:
$$ \sum_{n=5}^{204} (n - 2) $$
I know this is probably dead simple, but I'm seriously lacking on the math side of computer science.
the only thing that you have to know is that:
$$
\sum_{k=0}^{n} k = \frac{n(n+1)}{2}
$$
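Applied to the sum in the question, $\sum_{n=5}^{204}(n-2)=\sum_{k=3}^{202}k = \frac{202\cdot203}{2}-(1+2)=20500$; a quick Python check:

```python
direct = sum(n - 2 for n in range(5, 205))  # range end is exclusive
closed = 202 * 203 // 2 - (1 + 2)           # sum_{k=1}^{202} k minus the first two terms
print(direct, closed)  # 20500 20500
assert direct == closed
```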
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Prove if $a\mid b$ and $b\mid a$, then $|a|=|b|$, $a, b$ are integers. From the assumption, we can say $b=ak$, $k$ an integer, and $a=bm$, $m$ an integer.
Intuitively, this conjecture makes sense. But I can't make further step.
| From what you wrote, $a=akm$, so $a(1-km)=0$. Also $b=bmk$, so $b(1-km)=0$. Thus either $a=b=0$ (and hence $|a|=|b|$), or $mk=1$. The units in $\mathbb Z$ are of course only $+1$ and $-1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Assignment: Find $a$ and $b$ such that a piecewise function is continuous I'm having trouble solving a problem given in an assignment:
If the following function $f(x)$ is continuous for all real numbers $x$, determine the values of $a$ and $b$.
$$
f(x)=\begin{cases}
a\sin(x)+b~~~~~x\le 0\\
x^2+a~~~~~~~~~~0<x\le 1\\
b\cos(2\pi x)+a~~~~~~1<x
\end{cases}
$$
I've found through guesswork that $a$ and $b$ are both $1$, although I'm not sure how to prove this.
| Notice that regardless of which values we give to the constants $a$ and $b$, the three functions $f_1(x) = a\sin x + b$, $f_2(x) = x^2+a$, and $f_3(x) = b\cos(2\pi x) + a$ are all continuous, and so the only points at which $f(x)$ can be discontinuous are the points $x = 0$ and $x = 1$ (where $f$ changes from being equal to $f_1$ to $f_2$ and $f_2$ to $f_3$, respectively). So from the definition of continuity at a point, we need to choose $a$ and $b$ that make
$$
\lim_{x\to x_0^-}f(x) = f(x_0) = \lim_{x\to x_0^+} f(x)
$$
true at the points $x_0 = 0$ and $x_0 = 1$.
Do you see how to proceed from here? Try to use the definition of $f$ and the above equalities to create two equations in the unknowns $a$ and $b$. I'll sketch how to do it below at $x_0 = 0$ as a spoiler.
[Spoiler: Interpreting the above equations at $x_0 = 0$]
From the meaning of the above limits and the definition of $f$, you should show that we need to choose $a$ and $b$ to ensure $b = a\sin(0) + b = \lim_{x\to 0^-}f(x) = f(0) = \lim_{x\to 0^+}f(x) = 0^2 + a = a$ is true. (Now setup a similar equation at $x_0 = 1$.)
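Numerically, $a=b=1$ (the values forced by the matching conditions) indeed makes the one-sided limits agree at both seams. A quick Python check:

```python
import math

a = b = 1.0  # the values obtained from matching limits at x = 0 and x = 1

def f(x):
    if x <= 0:
        return a * math.sin(x) + b
    if x <= 1:
        return x * x + a
    return b * math.cos(2 * math.pi * x) + a

eps = 1e-8
print(abs(f(-eps) - f(eps)), abs(f(1 - eps) - f(1 + eps)))  # both tiny
```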
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Independence of a random variable $X$ from itself In our lecture on probability, my professor made the comment that "a random variable X is not independent from itself." (Here he was specifically talking about discrete random variables.) I asked him why that was true. (My intuition suggests two counterexamples: $X \equiv 0$, and $X$ such that $$m_X(x) = \begin{cases}1, &\text{ if } x = x_0\\ 0, &\text{ if }x \neq x_0\end{cases}$$.)
In these cases, it seems that $\mathbb{P}(X \leq x_1 , X \leq x_2) = \mathbb{P}(X \leq x_1) \cdot \mathbb{P}(X \leq x_2)$.
My professor's response was, "The independence from or dependence of $X$ on itself depends on the definition of the joint distribution function $m_{X,X}$, which is essentially arbitrary."
Can someone help me to understand this?
| The only events that are independent of themselves are those with probability either $0$ or $1$. That follows from the fact that a number is its own square if and only if it's either $0$ or $1$. The only way a random variable $X$ can be independent of itself is if for every measurable set $A$, either $\Pr(X\in A)=1$ or $\Pr(X\in A)=0$. That happens if and only if $X$ is essentially constant, meaning there is some value $x_0$ such that $\Pr(X=x_0)=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 2,
"answer_id": 0
} |
Generating Pythagorean Triples S.T. $b = a+1$ I am looking for a method to generate Pythagorean Triples $(a,b,c)$. There are many methods listed on Wikipedia but I have a unique constraint that I can't seem to integrate into any of the listed methods.
I need to generate Pythagorean Triples $(a,b,c)$ such that:
$$a^2 + b^2 = c^2$$
$$a\lt b\lt c \,; \quad a,b,c \in \Bbb Z^+$$
and
$$b=a+1$$
Is there a way to modify one of the listed methods to include this constraint?
| We give a way to obtain all solutions. It is not closely connected to the listed methods. However, the recurrence we give at the end can be expressed in matrix form, so has a structural connection with some methods in your linked list.
We want $2a^2+2a+1$ to be a perfect square $z^2$. Equivalently, we want $4a^2+4a+2=2z^2$, that is $(2a+1)^2-2z^2=-1$.
This is a Pell equation. One can give a recurrence for the solutions. One can also give a closed form that has a similar shape to the Binet closed form for the Fibonacci numbers.
Added: We can for example get all solutions by expressing $(1+\sqrt{2})^{2n+1}$, where $n$ is an integer, in the form $s+t\sqrt{2}$, where $s$ and $t$ are integers. Then $z=t$ and $2a+1=s$.
One can get a closed form from this by noting that $(1-\sqrt{2})^{2n+1}=s-t\sqrt{2}$. That gives us
$$s=\frac{(1+\sqrt{2})^{2n+1} + (1-\sqrt{2})^{2n+1}}{2}.$$
There is a similar formula for $t$.
Remark: The following recurrence is probably more useful than the closed form.
If $(1+\sqrt{2})^{2n+1}=s_n+t_n\sqrt{2}$, then $(1+\sqrt{2})^{2n+3}=s_{n+1}+t_{n+1}\sqrt{2}$, where
$$s_{n+1}=3s_n+4t_n,\qquad t_{n+1}=2s_n+3t_n.$$
This will let you quickly compute the first dozen or so solutions (the numbers grow fast). We start with $n=0$, which gives a degenerate triangle. For $n=1$, we get $s_1=7$, $t_1=5$, which gives the $(3,4,5)$ triangle. We get $s_2=41, t_2=29$, giving the triple $(20,21,29)$. And so on.
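The recurrence is easy to turn into a short program. Here is a sketch (the function name is my own) that emits the first few triples with $b=a+1$, starting from the degenerate $n=0$ solution:

```python
# Sketch of the recurrence s_{n+1} = 3 s_n + 4 t_n, t_{n+1} = 2 s_n + 3 t_n,
# starting from (s_0, t_0) = (1, 1); then a = (s - 1) / 2 and c = t.
def triples_b_equals_a_plus_1(count):
    """First `count` Pythagorean triples (a, a + 1, c)."""
    out = []
    s, t = 1, 1  # n = 0 corresponds to the degenerate triple (0, 1, 1)
    while len(out) < count:
        s, t = 3 * s + 4 * t, 2 * s + 3 * t
        out.append(((s - 1) // 2, (s - 1) // 2 + 1, t))
    return out

print(triples_b_equals_a_plus_1(4))
# [(3, 4, 5), (20, 21, 29), (119, 120, 169), (696, 697, 985)]
```

Note how quickly the numbers grow, in line with the closed form: each triple is roughly $3+2\sqrt2\approx 5.8$ times the previous one.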
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
What is the purpose of implication in this scenario? Consider the case:
Let:
*
*S(x) = “x is a student”
*F(x) = “x is a faculty member”
*A(x, y) = “x has asked y a question”
*Dx and Dy = the domains of $x$ and $y$, consisting of all people associated with your school.
Use quantifiers to express this statement:
Some student has never been asked a question by a faculty member.
Attempted Solution:
Exist x, All y ( S(x) AND F(y) AND (NOT A(y, x)) )
Book Solution:
Exist x ( S(x) AND All y ( F(y) -> NOT A(y, x) ) )
What is the point of using implication here? Would my answer also be correct?
| Your attempted solution is wrong on one point, and equivalent on the other.
The placement of the "All y" part is arbitrary between the two options - as $S(x)$ does not depend on $y$, it can be placed on either side of it (for lack of a better explanation).
For the implication, your implication is inaccurate - it implies that $F(y)$ for all $y$, as it can only be true if ALL of the conditions are true, since you used "and" for all of them.
On the other hand, here's how you'd write the book's answer without explicitly using implication:
$$\exists x \text{ s.t. } (S(x) \land \forall y, (\lnot F(y) \lor \lnot A(y,x)))$$
The English version of this is "There is a person who is a student and who, if you look at all other people, they are either not faculty or have not asked the student a question".
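To see concretely that the book's formula and the attempted one are not equivalent, one can brute-force every interpretation over a tiny domain (a sketch of my own; the function names `book` and `attempt` are ad-hoc labels):

```python
# Enumerate every interpretation of S, F, A over a two-person domain and
# compare the two formulas.
from itertools import product

D = (0, 1)

def book(S, F, A):
    # Exists x ( S(x) AND All y ( F(y) -> NOT A(y, x) ) )
    return any(S[x] and all((not A[y][x]) if F[y] else True for y in D)
               for x in D)

def attempt(S, F, A):
    # Exists x, All y ( S(x) AND F(y) AND NOT A(y, x) )
    return any(all(S[x] and F[y] and not A[y][x] for y in D) for x in D)

mismatches = []
for S in product((False, True), repeat=2):
    for F in product((False, True), repeat=2):
        for bits in product((False, True), repeat=4):
            A = (bits[0:2], bits[2:4])
            if book(S, F, A) != attempt(S, F, A):
                mismatches.append((S, F, A))

# e.g. with one student and no faculty at all, `book` is true, but `attempt`
# demands F(y) for every y, so it is false.
print(len(mismatches) > 0)  # True
```

The mismatching models are exactly those where some $y$ is not a faculty member, which is the point made above about "and" versus implication.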
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Literary statements that are false as mathematics I recently wanted to use
the title of the famous short story
"Everything that Rises must Converge"
in a poem of mine.
However, the mathematician in me
insisted on changing it to
"Everything that Rises, if the rise is bounded, must Converge".
Are there other literary quotations
that are false mathematically,
and how can they be changed to make them true?
Note:
Attempts to use
"To be or not to be"
will be dealt with
most severely.
| The test instructions, “Draw a circle around the correct answer.” really means, “Draw a Jordan curve around the correct answer.” - or, even more accurately, “Draw an approximation of a Jordan curve around the correct answer.”
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/512915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 16,
"answer_id": 8
} |
Need to prove that $(S,\cdot)$ defined by the binary operation $a\cdot b = a+b+ab$ is an abelian group on $S = \Bbb R \setminus \{-1\}$. So basically this proof centers around proving that (S,*) is a group, as it's quite easy to see that it's abelian as both addition and multiplication are commutative. My issue is finding an identity element, other than 0. Because if 0 is the identity element, then this group won't have inverses.
The set explicitly excludes -1, which I found to be its identity element, which makes going about proving that this is a group mighty difficult.
| I believe you meant to write $S=\mathbb{R}\backslash\{-1\}$
$0$ is indeed the identity element since for any $a\in S$, $a * 0=a+0+a.0=a$
For $b$ to be the inverse of $a$, we require $a * b=0$.
Hence $a+b+a.b=0$
$b+a.b=-a$
$b(1+a)=-a$
$b=\frac{-a}{1+a}$
which is fine, since $a$ can't be $-1$ (since it's not an element of $S$).
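A quick numerical spot-check of the identity and the inverse formula (my own sketch, not part of the proof — floating point, so the inverse check uses a tolerance):

```python
# Operation a * b = a + b + a b on R \ {-1}; identity 0, inverse -a / (1 + a).
import random

def op(a, b):
    return a + b + a * b

def inv(a):
    return -a / (1 + a)

random.seed(0)
for _ in range(1000):
    a = random.uniform(-10, 10)
    if abs(a + 1) < 1e-6:      # stay inside S = R \ {-1}
        continue
    assert op(a, 0) == a                      # 0 is the identity
    assert abs(op(a, inv(a))) < 1e-7          # inverse works (up to rounding)
    b = random.uniform(-10, 10)
    assert op(a, b) == op(b, a)               # commutative
print("group axioms hold on the sampled points")
```

Associativity can be checked the same way, or verified symbolically: $(a*b)*c = a+b+c+ab+ac+bc+abc$, which is symmetric in $a,b,c$.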
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
How many connected graphs on 4 vertices contain no triangle? How many connected graphs on 4 vertices are there that contain no triangle?
I am thinking the answer might be 2: one is a square, the other is a straight path. But I am not sure.
| I assume that you mean simple graphs, i.e., graphs with no loops or multiple edges. Every tree on $4$ vertices satisfies your condition, because a tree has no cycles at all, and you’ve missed the tree with a vertex of degree $3$. If the graph is not a tree, it must contain a cycle, that cycle must be a $4$-cycle, and it’s easy to check that adding any edge to a $4$-cycle creates a triangle. Thus, the graphs are the $4$-cycle and the trees on $4$ vertices.
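This can be confirmed by brute force (a standard-library sketch of my own): enumerate all $2^6$ edge sets on 4 labelled vertices, keep the connected triangle-free ones, and count isomorphism classes.

```python
from itertools import combinations, permutations

V = range(4)
EDGES = list(combinations(V, 2))          # the 6 possible edges

def connected(es):
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for a, b in es:
            v = b if a == u else a if b == u else None
            if v is not None and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == 4

def triangle_free(es):
    s = set(es)
    return not any({(a, b), (b, c), (a, c)} <= s
                   for a, b, c in combinations(V, 3))

def canon(es):
    # smallest relabelling: a cheap canonical form, feasible for 4 vertices
    return min(tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in es))
               for p in permutations(V))

classes = {canon(es)
           for k in range(len(EDGES) + 1)
           for es in combinations(EDGES, k)
           if connected(es) and triangle_free(es)}
print(len(classes))  # 3: the path, the star, and the 4-cycle
```

The three classes have edge counts 3, 3, and 4, matching the two trees and the $4$-cycle.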
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
One question to know if the number is 1, 2 or 3 I've recently heard a riddle, which looks quite simple, but I can't solve it.
A girl thinks of a number which is 1, 2, or 3, and a boy then gets to ask just one question about the number. The girl can only answer "Yes", "No", or "I don't know," and after the girl answers it, he knows what the number is. What is the question?
Note that the girl is professional in maths and knows EVERYTHING about these three numbers.
EDIT: The person who told me this just said the correct answer is:
"I'm also thinking of a number. It's either 1 or 2. Is my number less than yours?"
| I am thinking of a positive integer. Is your number, raised to my number and then increased in $1$, a prime number?
$$1^n+1=2\rightarrow \text{Yes}$$
$$2^n+1=\text{a possible Fermat prime, depending on } n\rightarrow \text{I don't know}$$
$$3^n+1=2k\rightarrow \text{No}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "298",
"answer_count": 39,
"answer_id": 1
} |
One question to know if the number is 1, 2 or 3 I've recently heard a riddle, which looks quite simple, but I can't solve it.
A girl thinks of a number which is 1, 2, or 3, and a boy then gets to ask just one question about the number. The girl can only answer "Yes", "No", or "I don't know," and after the girl answers it, he knows what the number is. What is the question?
Note that the girl is professional in maths and knows EVERYTHING about these three numbers.
EDIT: The person who told me this just said the correct answer is:
"I'm also thinking of a number. It's either 1 or 2. Is my number less than yours?"
| Along the lines of "the open problem" method -
Define $$ f(n) = \pi^{n-1}\mathrm{e}^{\pi(n-1)}.$$
Where $n$ is the number chosen by the girl. The question is, is $f(n)$ irrational?
If $n=1$, $f(1) = 1$, so the answer is "No".
If $n=2$, $f(2) = \pi\mathrm{e}^{\pi}$, the answer is "Yes".
If $n=3$, $f(3) = \pi^2\mathrm{e}^{2\pi}$, so the answer is "I don't know".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "298",
"answer_count": 39,
"answer_id": 33
} |
Whether polynomials $(t-1)(t-2),(t-2)(t-3),(t-3)(t-4),(t-4)(t-6)$ are linearly independent. Question is to check if :
$(t-1)(t-2),(t-2)(t-3),(t-3)(t-4),(t-4)(t-6)\in \mathbb{R}[t]$ are linearly independent.
Instead of writing linear combination and considering coefficient equations, I would like to say in the following way :
set of all polynomials of degree $\leq 2$ is a vector space over $\mathbb{R}$ with basis $1,t,t^2$
and there are 4 polynomials in the collection $(t-1)(t-2),(t-2)(t-3),(t-3)(t-4),(t-4)(t-6)\in \mathbb{R}[t]$
any collection of $n+1$ elements in a vector space of dimension $n$ is linearly dependent.
Thus, the collection $\{(t-1)(t-2),(t-2)(t-3),(t-3)(t-4),(t-4)(t-6)\}$ is linearly dependent in $\mathbb{R}[t]$
I would be thankful if someone can say whether this justification is correct.
I would also be thankful if someone wants to say something more about this kind of checking.
| If you assume that the last one is a linear combination of the first three, that is, that we can write
$$
(t-4)(t-6) = a(t-1)(t-2) + b(t-2)(t-3) + c(t-3)(t-4)
$$
and expand the four polynomials, then you get the three equations
$$
\begin{cases}a + b + c = 1 & \text {from } t^2 \\-3a -5b-7c = -10 & \text{from } t \\ 2a + 6b + 12c = 24 & \text{from } 1\end{cases}
$$
Solving these equations then reveals that $a = \frac{3}{2}$, $b = -\frac{9}{2}$ and $c = 4$, so we get
$$
(t-4)(t-6) = \frac{3}{2}(t-1)(t-2) -\frac{9}{2}(t-2)(t-3) + 4(t-3)(t-4)
$$
Of course, there were no guarantee that we could write the last polynomial as a linear combination of the others. It could've been the case that $(t-1)(t-2)$ could be written as a linear combination of $(t-2)(t-3)$ and $(t-3)(t-4)$, and that $(t-4)(t-6)$ was linearly independent of the rest of them. If that were the case, then the set of equations above would've had no solution, and we would've had to swap which function we're trying to write as a linear combination of the others.
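As a sanity check (my own sketch, using exact rational arithmetic), one can expand each quadratic into its coefficient tuple and confirm the combination:

```python
from fractions import Fraction

def poly(r1, r2):
    """Coefficients (constant, linear, quadratic) of (t - r1)(t - r2)."""
    return (Fraction(r1 * r2), Fraction(-(r1 + r2)), Fraction(1))

p12, p23, p34 = poly(1, 2), poly(2, 3), poly(3, 4)
p46 = poly(4, 6)
a, b, c = Fraction(3, 2), Fraction(-9, 2), Fraction(4)

combo = tuple(a * u + b * v + c * w for u, v, w in zip(p12, p23, p34))
print(combo == p46)  # True: the four quadratics are linearly dependent
```

Using `Fraction` avoids any floating-point doubt about whether the coefficients match exactly.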
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
differentiation, critical number and graph sketching Consider the graph of the function $$f(x) = x^2-x-12$$
(a) Find the equation of the secant line joining the points $(-2, -6)$ and $(4, 0)$.
(b) Use the Mean Value Theorem to determine a point c in the interval $(-2, 4)$ such
that the tangent line at c is parallel to the secant line.
(c) Find the equation of the tangent line through c.
(d) Sketch the graph of $f$, the secant line, and the tangent line on the same axes.
I used $m= \dfrac{y_2-y_1}{x_2-x_1}$ to get the slope.
$$m =\frac{0-(-6)}{4-(-2)}=\dfrac66=1$$
or $m=\dfrac{f(b)-f(a)}{b-a}$ and it gave me $1$
then differentiating:
$$f'(x)= 2x-1=1 \leadsto 2x=1+1\leadsto 2x=2\leadsto x=1$$
| a) Say the point $(x,y)$ is on the secant line between points $(-2,-6)$, $(4,0)$. So the equation of the secant line will be
$$\frac{y-(-6)}{x-(-2)}=\frac{0-(-6)}{4-(-2)}\Rightarrow\frac{y+6}{x+2}=1\Rightarrow y=x-4$$
As you see the slope of the secant line is $1$.
b) If the tangent line is parallel to secant line then the slope of tangent line at $c$ has same slope with secant line that is:
$$f'(c)=1\Rightarrow2c-1=1\Rightarrow c=1$$
Now if you substitute $c=1$ into $f$ you will find $f(c)=-12$. So the tangent line is tangent to $f$ at $(1,-12)$
c) The equation of tangent line at $(x_0,y_0)$ is $y-y_0=f'(x_0)(x-x_0)$. So the equation of tangent at $(1,-12)$ will be
$$y-(-12)=f'(1)(x-1)\Rightarrow y+12=x-1\Rightarrow y=x-13$$
d) For plotting you can use wolframalpha
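A few quick numeric checks of parts (a)–(c) (a sketch of my own): the difference $f(x)-(x-13)=(x-1)^2$ has a double root at $x=1$, which is exactly what tangency means.

```python
# Secant through (-2, -6) and (4, 0) is y = x - 4; tangent at c = 1 is
# y = x - 13.
def f(x):
    return x * x - x - 12

def secant(x):
    return x - 4

def tangent(x):
    return x - 13

assert f(-2) == secant(-2) == -6 and f(4) == secant(4) == 0
assert f(1) == tangent(1) == -12
assert all(f(x) - tangent(x) == (x - 1) ** 2 for x in range(-50, 51))
print("secant and tangent both have slope 1; tangent touches at (1, -12)")
```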
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove any continuous function can be approximated in the $L^\infty$ norm Prove that any function, continuous on an interval of $\mathbb R$, can be approximated by polynomials, arbitrarily closely in the $L^{\infty}$ norm (this is the Bernstein–Weierstrass theorem). Let $f$ be a continuous function on $[0,1]$. The $n$-th Bernstein polynomial is:
\begin{align}
\displaystyle B_n(x)=\sum_{k=0}^{n}\binom{n}{k}x^k(1-x)^{n-k}f\left(\frac{k}{n}\right)
\end{align}
| Here we start off with a very simple LOTUS (Law of the unconcious statistician) problem. So first we start off with:
$$\displaystyle S_n(x)=\frac{B^{(n,x)}}{n}$$
Here $\displaystyle B^{(n,x)}$ is a binomial random variable with parameters $n$ and $x$. We must prove that $\displaystyle B_n(x)=\mathbb E(f(S_n(x)))$, or:
$$\displaystyle \sum_{k=0}^{n}\binom{n}{k}x^k(1-x)^{n-k}f\left(\frac{k}{n}\right)=\mathbb E\left(f\left(S_n(x)\right)\right) $$
By LOTUS we have:
$$\mathbb E\left(f\left( S_n\right)\right)=\sum_{k=0}^{n}\mathbb P\left(S_n=\tfrac{k}{n}\right)f\left(\tfrac{k}{n}\right)$$
Then noting that the PMF of the underlying binomial gives:
$$\mathbb P\left(S_n=\tfrac{k}{n}\right)=\binom{n}{k}x^k(1-x)^{n-k}$$
We get:
$$\sum_{k=0}^{n}\binom{n}{k}x^k(1-x)^{n-k}f\left(\frac{k}{n}\right)=\mathbb E\left(f\left(S_n\right)\right)$$
And with this we can prove that: $\displaystyle ||B_n-f||_{L_{\infty}([0,1])}\rightarrow 0$ as $n\rightarrow \infty$. $f$ is real, defined and continuous on $[0,1]$. Thus as n goes to infinity:
$$\displaystyle \left|\left|\sum_{k=0}^{n}\binom{n}{k}x^k(1-x)^{n-k}f\left(\frac{k}{n}\right)-f(x)\right|\right|_{L_{\infty}([0,1])}\rightarrow 0$$
So let us begin. Consider the random variable $\displaystyle f\left(\frac{S_n}{n}\right)$ from the previous problem. The expected value of this polynomial is a Bernstein polynomial, or:
$$\displaystyle \sum_{k=0}^{n}\binom{n}{k}p^k(1-p)^{n-k} f\left(\frac{k}{n}\right)$$
Now let us consider continuity and the law of large numbers. By uniform continuity we can fix an $\epsilon > 0$ and find $\alpha > 0$ s.t. for all $0\leq x,y\leq 1$ with $|x-y|\leq\alpha$ we have $|f(x)-f(y)|<\epsilon$. Thus now by the law of large numbers we can say, $\displaystyle \exists n_0\in \mathbb Z$, independent of the parameter $p$, s.t.
$$\displaystyle \mathbb P_n\left(\left|\frac{S_n}{n}-p\right|>\alpha\right)< \epsilon\hspace{5mm}\forall n\geq n_0$$
Thusly:
$$\displaystyle \left|\mathbb E_n\left[f\left(\frac{S_n}{n}\right)\right]-f(p)\right|=\left|\sum_{k=0}^{n}\left(f\left(\frac{k}{n}\right)-f(p)\right)\mathbb P_n(S_n=k)\right|$$
Working out the absolute values:
$$\displaystyle \leq \sum_{\left|\frac{k}{n}-p\right|\leq \alpha}\left|f\left(\frac{k}{n}\right)-f(p)\right|\mathbb P_n(S_n=k)+\sum_{\left|\frac{k}{n}-p\right|> \alpha}\left|f\left(\frac{k}{n}\right)-f(p)\right|\mathbb P_n(S_n=k)$$
Then bounding the first sum by uniform continuity and the second by $2\sup_{0\leq x\leq 1}|f(x)|$, this is at most:
$$\displaystyle \leq \sum_{\left|\frac{k}{n}-p\right|\leq \alpha}\epsilon \mathbb P_n(S_n=k)+\sum_{\left|\frac{k}{n}-p\right|> \alpha}2 \sup_{0\leq x \leq 1}|f(x)|\mathbb P_{n}(S_n=k)$$
Simplifying:
$$\displaystyle = \epsilon \mathbb P_n\left(\left|\frac{S_n}{n}-p\right|\leq \alpha\right)+2\sup_{0\leq x\leq 1}|f(x)|\mathbb P_n\left(\left|\frac{S_n}{n}-p\right|>\alpha\right)$$
Therefore in conclusion we get for every $n\geq n_0$,
$$\displaystyle \left|\mathbb E_n\left[f\left(\frac{S_n}{n}\right)\right]-f(p)\right|\leq\epsilon+2\epsilon \sup_{0\leq x\leq 1}|f(x)|$$
Which clearly shows that we can make $\displaystyle \left|\mathbb E_n\left[f\left(\frac{S_n}{n}\right)\right]-f(p)\right|$ arbitrarily small, uniformly in $p$.
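A numerical illustration (my own sketch, not part of the proof): the sup-norm error of $B_n$ against a continuous but non-smooth $f$ shrinks as $n$ grows.

```python
from math import comb

def bernstein(f, n, x):
    return sum(comb(n, k) * x ** k * (1 - x) ** (n - k) * f(k / n)
               for k in range(n + 1))

f = lambda x: abs(x - 0.5)                 # continuous on [0, 1], not smooth
grid = [i / 200 for i in range(201)]

def sup_error(n):
    return max(abs(bernstein(f, n, x) - f(x)) for x in grid)

errors = [sup_error(n) for n in (5, 20, 80)]
print(errors)  # strictly decreasing toward 0
```

The decay is slow (of order $1/\sqrt{n}$ for this $f$), which matches the probabilistic picture: the error at $x$ is controlled by how concentrated $S_n/n$ is around $x$.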
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expectation values of male-male, male-female, female-female pairs N people are seated at random to form a circle, among whom N1 are male. What are the expected numbers of male-male, male-female, and female-female nearest-neighbor pairs?
| Label the chairs $1$ to $N$, counterclockwise, and assume $N\gt 1$. Let $X_i=1$ if the person in Chair $i$ is male and his neighbour in the counterclockwise direction is male. Let $X_i=0$ otherwise. Then the number $X$ of male-male nearest neighbour (unordered) pairs is given by
$$X=X_1+X_2+\cdots+X_N.$$
The $E(X_i)$ are all the same. So by the linearity of expectation we have $E(X)=NE(X_1)$.
The probability that $X_1=1$ (and hence the expectation of $X_1$) is the probability the people in chairs $1$ and $2$ are both male. This probability is $\frac{N_1}{N}\cdot\frac{N_1-1}{N-1}$.
It follows that $E(X)=\frac{N_1(N_1-1)}{N-1}$.
The calculations for the other two questions are similar.
Remark: Alternately, one can do a Total Expectation calculation. I prefer the indicator random variable version.
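An exact enumeration check of the formula (my own sketch) for a small case, $N=6$, $N_1=3$: average the male-male pair count over all $\binom{6}{3}$ equally likely seat assignments.

```python
from fractions import Fraction
from itertools import combinations

N, N1 = 6, 3

def mm_pairs(male_seats):
    s = set(male_seats)
    return sum(1 for i in range(N) if i in s and (i + 1) % N in s)

counts = [mm_pairs(seats) for seats in combinations(range(N), N1)]
avg = Fraction(sum(counts), len(counts))
print(avg)  # 6/5, matching N1 (N1 - 1) / (N - 1)
```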
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can the limit of a product exist if neither of its factors exist? Show an example where neither $\lim\limits_{x\to c} f(x)$ or $\lim\limits_{x\to c} g(x)$ exists but $\lim\limits_{x\to c} f(x)g(x)$ exists.
Sorry if this seems elementary, I have just started my degree...
Thanks in advance.
| If you accept divergence as well, $f(x)=g(x)=1/x$ has no limit at $0$, but $1/x^2$ diverges to infinity as $x$ goes to $0$.
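If you want the product's limit to exist in the ordinary (finite) sense, here is a bounded variant (my own example, different from the divergent one above): a sign-like step function whose square is constant.

```python
# f(x) = g(x) = step function: no limit at 0, but f * g == 1 everywhere.
def f(x):
    return 1.0 if x >= 0 else -1.0

samples = [10.0 ** -k for k in range(1, 8)]
left = {f(-t) for t in samples}            # {-1.0}: left limit is -1
right = {f(t) for t in samples}            # {1.0}:  right limit is +1
product_vals = {f(t) * f(t) for t in samples} | {f(-t) * f(-t) for t in samples}
print(left, right, product_vals)           # {-1.0} {1.0} {1.0}
```

Since the one-sided values disagree, $\lim_{x\to 0} f(x)$ does not exist, yet $f\cdot f$ is identically $1$, so its limit at $0$ is $1$.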
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/513822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 4
} |