What is the difference between a Definition and a Theorem? This may get into a discussion, but I have a homework problem and it tells me there is a difference between a definition and a theorem. I don't know how to differentiate the two in this question:
Consider the domain of all quadrilaterals. Let
A(x) = "x has four right angles."
R(x) = "x is a rectangle."
Write the meaning of each mathematical statement in predicate logic, keeping in mind the logical distinction between definitions and theorems.
(a) Definition. A quadrilateral is a rectangle if it has four right angles.
(b) Theorem. A quadrilateral is a rectangle if it has four right angles.
| A definition describes the exact meaning of a term, whereas a theorem is a statement that has been proved.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
} |
Unsolved problems in number theory What are the most interesting examples of unsolved problems in number theory which an 18 year old can understand?
| I think nearly everything one does in number theory is understandable if you follow the story far enough. Even the most difficult theorem usually makes sense in special cases if you understand the great theorems that came before it.
But yeah, there is a wealth of immediately understandable problems in number theory that remain unsolved. One that hasn't been mentioned yet is the twin prime conjecture, which states that there are infinitely many pairs of primes that are $2$ apart, such as $(3,5),(5,7),(11,13),\dots$.
There are also problems that have been solved...assuming other results that haven't been proved. My favourite of these is the congruent number problem. This asks which positive integers can be the areas of right-angled triangles with rational sides. For example, $(3,4,5)$ has area $6$, yet no right-angled triangle with rational sides has area $1$ (this is not obvious).
In studying this problem you get led through a rich area of number theory, up to modern-day research on elliptic curves and modular forms. As mentioned above, this problem has been solved...assuming a really interesting but technical result called the Birch and Swinnerton-Dyer conjecture, an extremely difficult problem that is worth a million dollars to whoever solves it!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
Limit of a Sum Involving Binomial Coefficients I would like to prove that $$\dfrac{\sqrt{n}}{{{2n \choose n}^2}} \cdot \sum_{j=0}^n {n \choose j}^4$$ converges to $\sqrt{\dfrac{2}{\pi}}$ as $n \to \infty$.
Evaluating the sum in Mathematica for large values of $n$ suggests that $\sqrt{\dfrac{2}{\pi}}$ is indeed the correct limit.
| It is enough to use the Central Limit Theorem. For large values of $n$, the binomial distribution converges to a normal distribution, and by Stirling's approximation
$$ \binom{2n}{n}\sim\frac{4^n}{\sqrt{\pi n}}$$
holds, hence your limit is just the reciprocal of:
$$ \int_{-\infty}^{+\infty}e^{-2x^2}\,dx = \sqrt{\frac{\pi}{2}}.$$
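A quick numerical sanity check of this limit (my addition, not part of the original answer), using exact integer arithmetic for the binomial sums:

```python
import math
from fractions import Fraction

def ratio(n):
    # sqrt(n) * sum_j C(n,j)^4 / C(2n,n)^2, computed exactly, then floated
    num = sum(math.comb(n, j)**4 for j in range(n + 1))
    den = math.comb(2*n, n)**2
    return math.sqrt(n) * float(Fraction(num, den))

target = math.sqrt(2 / math.pi)  # ~0.7978845608
```

For $n = 400$ the ratio is already quite close to $\sqrt{2/\pi}$.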
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Strong Induction Proof: Fibonacci number even if and only if 3 divides index
The Fibonacci sequence is defined recursively by $F_1 = 1, F_2 = 1, \; \& \; F_n = F_{n−1} + F_{n−2} \; \text{ for } n ≥ 3.$ Prove that $2 \mid F_n \iff 3 \mid n.$
Proof by Strong Induction : $\bbox[5px,border:1px solid green]{\color{green}{n = 1 }}$ $2 \mid F_1$ is false. Also, $3
\mid 1$ is false.
The biconditional [False $\iff$ False] is true.
$\bbox[5px,border:1px solid green]{\color{green}{\text{Induction Hypothesis}}}$
Assume that $2 \mid F_i \iff 3 \mid i$ for every integer $i$ with $1 ≤ i ≤ k$.
$\bbox[5px,border:1px solid green]{\color{green}{k + 1 \text{th Case}}} \;$ To prove:
$\quad 2 \mid F_{k + 1} \iff 3 \mid k + 1.$
$\bbox[5px,border:1px solid green]{\color{green}{n = k + 1 = 2}} \;$
$2 \mid F_2$ is false. Also, $3
\mid 2$ is false. So [False $\iff$ False] is true.
Hence assume that $k + 1 ≥ 3.$ We now consider three cases:
$\bbox[5px,border:1px solid green]{\color{green}{\text{Case 1: } k + 1 = 3q}}$ Thus $3 \require{cancel}\cancel{\mid} k$ and $3 \require{cancel}\cancel{\mid} (k − 1)$. By the ind hyp, $3 \require{cancel}\cancel{\mid} k
\iff F_k$ odd & $3 \require{cancel}\cancel{\mid} (k − 1) \iff F_{k - 1}$ odd. Since $F_{k+1} = F_k + F_{k−1}$, thus $F_{k+1}$ = odd + odd = even.
$\bbox[5px,border:1px solid green]{\color{green}{\text{Case 2: } k + 1 = 3q + 1}}$ Thus $3 | k$ and $3 \require{cancel}\cancel{\mid} (k − 1).$ By the ind hyp, $3 | k \iff F_k$ even & $3 \require{cancel}\cancel{\mid} (k − 1) \iff F_{k - 1}$ odd. Thus $F_{k+1}$ odd.
$\bbox[5px,border:1px solid green]{\color{green}{{\text{Case 3: }} k + 1 = 3q + 2}}$ Thus $3 \require{cancel}\cancel{\mid} k$ and $3 | (k −1).$ By the ind hyp, $3 \require{cancel}\cancel{\mid} k \iff F_k$ odd and $3 \mid (k − 1) \iff F_{k - 1}$ even. Thus $F_{k+1}$ odd. $\blacksquare$
$\Large{1.}$ Does the proof clinch the $(\Leftarrow)$ of the $(k + 1)$th case?
$\Large{2.}$ Since the recursion involves $n$, $n - 1$, and $n - 2$, the recursion's "time lag" is $3$ here.
So shouldn't $3$ base cases be checked?
$\Large{3.}$ Further to #2, shouldn't it be "assume $k + 1 \geq \cancel{3} 4$" instead?
$\Large{4.}$ Shouldn't the $n = k + 1 = 2$ case precede the induction hypothesis?
Reference: Exercise 6.35, p. 152 of Mathematical Proofs, 2nd ed., by Chartrand et al.
Supplement to peterwhy's Answer:
$\Large{1.1.}$ I wrongly believed that all 3 Cases proved the $\Leftarrow$. I now see that Case 1 is $\Leftarrow$ via a direct proof, while Cases 2 and 3 are $\Rightarrow$ via proof by contraposition. Nonetheless, how would one know in advance to start from $3 \mid n$ for both directions of the proof?
| Since the period of $2$ in base $\phi^2$ is three places long $= 0.10\phi\; 10\phi \dots$, and the Fibonacci numbers represent the repunits of base $\phi^2$, it follows that $2$ divides every third Fibonacci number, in the same way that $37$ divides every third repunit in decimal (i.e. $111$, $111111$, $111111111$, etc.).
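Whichever viewpoint one prefers, the parity pattern itself is easy to verify numerically; here is a short script (my addition) checking $2 \mid F_n \iff 3 \mid n$ for the first $300$ indices:

```python
def fib_parities(count):
    # (n, F_n mod 2) for n = 1..count, via the Fibonacci recurrence mod 2
    a, b = 1, 1  # parities of F_1, F_2
    out = [(1, a), (2, b)]
    for n in range(3, count + 1):
        a, b = b, (a + b) % 2
        out.append((n, b))
    return out

parities = fib_parities(300)
```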
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Polynomials $P(x)$ satisfying $P(2x-x^2) = (P(x))^2$ I am looking (to answer a question here) for all polynomials $P(x)$ satisfying the functional equation given in the title.
It is not hard to notice (given that one instinctively wants to complete the square in $P(2x - x^2)$ ) that $P(x) = 1-x$ is a solution; and the product relation
$$P(2x-x^2) = P(x)^2, Q(2x-x^2) = Q(x)^2 \implies (P\cdot Q)(2x-x^2) = (P\cdot Q)(x)^2 $$
shows that $(1-x)^n$ are all solutions as well.
Apart from the constant solutions $0$ and $1$, is it true that $(1-x)^n$ are all solutions? Could somebody give me a hint as to how to prove this (or a counterexample)?
Edit: I haven't had luck finding a formula for the $n$th derivatives of $P(x)$ at one in the case $P(1) = 1$, though the first two are zero.
| Here's an answer that has the same content as @Did's, but organized differently:
Let $f(x)=2x-x^2$ and also let $u(x)=1-x$, thought of as a transformation of the (real) line. Let $g=u^{-1}\circ f\circ u$, which you compute to be $g(x)=x^2$. Now let $Q=P\circ u$, so that $P=Q\circ u^{-1}$ and the requirement $P^2=P\circ f$ gets translated to $(Q\circ u^{-1})^2=Q\circ u^{-1}\circ u\circ g\circ u^{-1}=Q\circ g\circ u^{-1}$. This is requiring $g\circ Q\circ u^{-1}=Q\circ g\circ u^{-1}$, in other words $g\circ Q=Q\circ g$. We’re asking which polynomials commute with $x^2$ in the sense of substitution. But the only such polynomials are the pure monomials $Q=x^n$ (and the constant zero, of course). Resubstitute and get @Did’s result.
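A quick symbolic check (my addition) that $(1-x)^n$ really does satisfy $P(2x-x^2)=P(x)^2$; the key identity is $1-(2x-x^2)=(1-x)^2$:

```python
from sympy import symbols, expand

x = symbols('x')

def satisfies(P):
    # True iff P(2x - x^2) == P(x)^2 as polynomials
    return expand(P.subs(x, 2*x - x**2) - P**2) == 0

checks = [satisfies((1 - x)**n) for n in range(6)]
```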
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Dice probability problem collection I tag these questions as homework, as they are older exam questions that reappear every year.
Can you try to solve/explain how to solve with a method some of these?
If something is answered in an old post please post the link.
1. We toss 2 dice. Using probability-generating functions compute the probability that their sum is 4.
2. We roll two different dice 10 times. What's the probability that the first die shows a number greater than the second die three times*?
*I think they mean that this happens three times (and they don't care about the precedence of those 3 times out of 10)
3. Consider a random experiment in which we repeatedly roll two dice until we get a sum of $10$ for the first time. Let $X$ be the random variable that counts the number of repetitions until the end of the experiment. Compute the probability-generating function of $X$.
To community: I'm familiar with many terms of probability theory so with a good analysis I think I might understand your answers.
Reminder: their difficulty may seem easy to medium, but not everyone is great at this.
| 1:
The probability generating function for the total of a single die is
$$
G(z) = \frac16 (z+z^2+\cdots+z^6)
$$
The probability generating function of a sum is just the product of the two probability generating functions. So, the probability generating function for the sum of two dice will be
$$
G_2(z)=G(z)\cdot G(z)=\left[G(z)\right]^2 =
\left[\frac16 (z+z^2+\cdots+z^6)\right]^2
$$
The answer to your question will be the coefficient of $z^4$.
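To extract the coefficient explicitly, convolve the single-die coefficient list with itself; a small check (my addition) confirming that the coefficient of $z^4$ is $\frac{3}{36}=\frac{1}{12}$:

```python
from fractions import Fraction

die = [Fraction(0)] + [Fraction(1, 6)] * 6  # coefficient of z^k, k = 0..6

def convolve(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

two_dice = convolve(die, die)  # PGF coefficients for the sum of two dice
p_sum_4 = two_dice[4]
```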
2:
If we roll two fair dice, there are three possibilities: the two results are equal, the first is greater than the second, or the second is greater than the first. These probabilities sum to $1$, and the latter two events have equal probability. The probability that the two dice are equal is $\frac16$, so there's a $\frac56$ chance that one of the other two occurs, which means that the latter two events each have a $\frac5{12}$ chance of occurring.
The probability that the first one is greater than the other is then simply $p=\frac5{12}$. The probability that this occurs $3$ times out of $10$ trials total is
$$
\binom{10}{3}\left(p\right)^{3}\left(1-p\right)^{7}
$$
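As a sanity check (my addition): enumerating the $36$ equally likely outcomes confirms $p = \frac{15}{36} = \frac{5}{12}$, and the binomial probability comes out to about $0.1995$:

```python
from fractions import Fraction
from itertools import product
from math import comb

wins = sum(1 for a, b in product(range(1, 7), repeat=2) if a > b)
p = Fraction(wins, 36)                  # 15/36 = 5/12
prob = comb(10, 3) * p**3 * (1 - p)**7  # exactly 3 successes in 10 trials
```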
3:
First of all, we need the probability that two dice sum to $10$. This, as it turns out, is $p=\frac{3}{36}=\frac{1}{12}$. The probability of getting a sum of $10$ on the first try is then
$$
p = \frac{1}{12}
$$
The probability of getting the first sum of $10$ on the second try is the probability of not getting $10$ the first try times the probability of getting $10$ on the second, which is
$$
(1-p)p = \frac{1}{12}\times\frac{11}{12}=\frac{11}{12^2}
$$
The probability of getting the first sum of $10$ on the $n^{th}$ try (extending the above logic) is
$$
(1-p)^{n-1}p=\frac1{12}\times\left(\frac{11}{12}\right)^{n-1}
$$
So, our probability generating function is simply
$$
\begin{align}
G(z) &= \sum_{n=1}^\infty P(X=n)z^n \\&=
\sum_{n=1}^\infty \frac1{12}\cdot\left(\frac{11}{12}\right)^{n-1} z^n \\&=
\frac{z}{12} \sum_{n=1}^\infty \left(\frac{11z}{12}\right)^{n-1}
\end{align}
$$
This expression can be simplified by noting that this is a geometric series.
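Summing the geometric series gives the closed form $G(z)=\frac{z/12}{1-\frac{11z}{12}}=\frac{z}{12-11z}$ for $|z|<\frac{12}{11}$. A symbolic check (my addition) that the series coefficients of this closed form match $P(X=n)=\frac1{12}\left(\frac{11}{12}\right)^{n-1}$:

```python
from sympy import symbols, series, Rational

z = symbols('z')
G = z / (12 - 11*z)                  # closed form of the PGF
poly = series(G, z, 0, 6).removeO()  # Taylor expansion up to z^5

def P(n):
    # geometric probability P(X = n)
    return Rational(1, 12) * Rational(11, 12)**(n - 1)

match = all(poly.coeff(z, n) == P(n) for n in range(1, 6))
```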
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Trying to understand an integral algebraically and conceptually $f'(t) = \frac{Ae^t}{(0.02A+e^t)^2}$ It is given that $f'(t) = \frac{Ae^t}{(0.02A+e^t)^2}$ is the population's rate of growth.
It is also given that the population at $t=0$ is given as 6.
Our goal is to find the time $t$ where the population hits $30$.
I want to solve for the constant $A$, but I am not sure how.
My understanding is that in order to do so, we just take the integral of $f'(t)$ and plug in $f(0)=6$, but I ended up only finding the arbitrary constant of integration from the indefinite integral.
Can someone help me out ?
According to the book I have $A = 46.15$, found from $\lim_{t \to \infty}f(t)=30$,
which I couldn't understand conceptually; why can we do that?
| Well,
$$f(t)-f(0) = A \int_0^t dt' \frac{e^{t'}}{(0.02 A+e^{t'})^2}$$
To evaluate the integral, substitute $u=e^{t'}$ and get
$$f(t)-6 = A \int_1^{e^{t}} \frac{du}{(u+0.02 A)^2} = A \left [\frac{1}{1+0.02 A}-\frac{1}{e^t+0.02 A} \right ]$$
Now you gave us
$$\lim_{t \to \infty} f(t)=30$$
so that
$$30-6=24=\frac{A}{1+0.02 A} \implies 24 + \frac{12}{25} A=A \implies A = \frac{600}{13} \approx 46.15$$
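One can confirm the value of $A$ with exact rational arithmetic (my addition):

```python
from fractions import Fraction

A = Fraction(600, 13)
lhs = A / (1 + Fraction(2, 100) * A)  # A / (1 + 0.02 A)
```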
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Notation of exponential function? What is the difference between this notation of the exponential function
$$(1+\frac{1}{n})^n \rightarrow e \\ \mathbf{as} \\ n \rightarrow \infty$$
and this notation:
$$\lim_{n\rightarrow \infty} (1+\frac{x}{n})^n$$
Why is there a variable $x$ in the second expression, and a $1$ in the first? That would make these limits not the same, yet Wikipedia presents both in connection with the exponential function.
| Indeed the limits are not the same. The second one is $e^x$; only if you choose $x=1$ do you get the first limit. The second one is more general.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
when does $f(a)^{f(b)}=f(a^b)$? First, $\text{f}\left( 1 \right)=1$, because
$\text{f}\left( a \right)^{\text{f}\left( 1 \right)}=\text{f}\left( a^1 \right)=\text{f}\left( a \right)$, and
$\log_{\text{f}\left( a \right)} \text{f}\left( a \right)^{\text{f}\left( 1 \right)} =\text{f}\left( 1 \right) \log_{\text{f}\left( a \right)} \text{f}\left( a \right) =1$
If $\text{f}\left( 2 \right) = c$ then $\text{f}\left( 2 \right)^{\text{f}\left( 2 \right)}=\text{f}\left( 4 \right)$ so $\text{f}\left( 4 \right)=c^c$ so there should be only one
family of functions as a solution.
So if $\overbrace{x^{\text{...}^{x}}}^{\text{n times}}=2$ then $\overbrace{{\text{f}\left( x \right)}^{\text{...}^{{\text{f}\left( x \right)}}}}^{\text{n times}}={\text{f}\left( 2 \right)}$
then the $n$th super root of $c={\text{f}\left( x \right)}$, $n=\text{slog}_x2$
so ${\text{f}\left( x \right)}$=the $\text{slog}_x2$ root of c.
I am not confident in my answer, so if someone could look it over
that would be great.
Super roots and slogs http://en.wikipedia.org/wiki/Tetration#Inverse_relations
| Assume $a,b>0$ so that the RHS makes sense. Let $\mathcal{P}=(0,1),\;\mathcal{Q}=(1,\infty)$, noting that $f(1)=1$.
If $f(s)=1$ for some $s\in\mathcal{P}$, then for all $y,\;f(s^y)=1 \Rightarrow f(r)=1$ over $\mathcal{P}$. However, then for $y\in \mathcal{P}:\;f(x^y)=f(x) \Rightarrow f=\mathcal{C}$, and hence $f=1$ over $\mathbb{R}^+$. Similarly if $f(s)=1$ for some $s\in\mathcal{Q}$ then $f=1$ over $\mathbb{R}^+$. In a similar vein, suppose there exist distinct $x_0,y_0\in \mathcal{P}$ such that $f(x_0)=f(y_0)\neq 1$ then $f(x_0)^{f(r)}=f(y_0)^{f(r)} \Rightarrow f(x_0^r)=f(y_0^r)$ which implies $f(x)=\mathcal{C}$ and since $\mathcal{C}=\mathcal{C}^{ \mathcal{C}}\Rightarrow \mathcal{C}=\pm 1$ we have $f=1$ over $\mathcal{P}$, which, as previously demonstrated, implies $f=1$ over $\mathbb{R}^+$. Similarly for the case $x_0,y_0\in\mathcal{Q}$. Hence if $f$ is not injective it is $1$ everywhere.
Naturally, we now assume $f$ is injective.
Observe that $f(z^{y^x})=f( z)^{f(y)^{f(x)}}\Rightarrow f(z^{y^n})=f( z)^{f(y)^{f(n)}} $ but also that inductively, we have $f(z^{y^n})=f(z)^{f(y)^n}$. Combining these facts, $f(n)=n$ for positive integers $n$. Hence we have $f(x^n)=f(x)^n$. Now suppose that $f(x)>x$ over $(a,b)\subset\mathcal{Q},$ then by the preceding statement, $f(x^n)>x^n$ over $(a^n,b^n)$ and hence the inequality is satisfied over arbitrarily large intervals, which contradicts $f(n)=n$. Similarly if $f(x)<x$ over any subinterval of $\mathcal{Q}$. Hence we must have $f(x)=x$ over $\mathcal{Q}$. Now assuming $f(x)>x$ over $\mathcal{P}$, our preceding proposition would imply that for $y\in\mathcal{Q}$ and $x\in\mathcal{P}:\; y^x=f(y^x)<f(y)^{f(x)}=f(y^x)$ which is absurd. Similarly if $f(x)<x$ over $\mathcal{P}$. Hence $f(x)=x$ over $\mathcal{P}$ as well.
Therefore $f(x)=1$ or $f(x)=x.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/488986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Compositeness of $n^4+4^n$ My coach said that for all positive integers $n$, $n^4+4^n$ is never a prime number.
So we memorized this for future use in math competitions. But I don't understand: why is it true?
| You can work $\bmod 5$:
As Jossie said, if $n$ is even, then both terms are even, so $n^4+4^n$ is even and greater than $2$, hence composite. (Note also that $n=1$ gives $1^4+4^1=5$, which is prime, so the claim should be read as $n \ge 2$.)
Now suppose $n$ is odd and $5 \nmid n$. By Fermat's little theorem (for $p$ prime and $(a,p)=1$ we have $a^{p-1} \equiv 1 \pmod p$), here $n^4 \equiv 1 \pmod 5$. Also $4^2 = 16 \equiv 1 \pmod 5$, so for odd $n$, $4^n \equiv 4 \pmod 5$. Hence
$$n^4 + 4^n \equiv 1 + 4 \equiv 0 \pmod 5,$$
and since $n^4+4^n > 5$, it is composite.
(The remaining case, $n$ odd and divisible by $5$, is not covered by this congruence argument; it can be handled with the Sophie Germain identity $a^4+4b^4=(a^2+2b^2-2ab)(a^2+2b^2+2ab)$, taking $a=n$ and $b=2^{(n-1)/2}$.)
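A script (my addition) exhibiting an explicit nontrivial factor of $n^4+4^n$ for every $2 \le n \le 60$: the factor is $2$ for even $n$; $5$ for odd $n$ not divisible by $5$; and, in the remaining case, a factor coming from the Sophie Germain identity $a^4+4b^4=(a^2+2b^2-2ab)(a^2+2b^2+2ab)$ with $a=n$, $b=2^{(n-1)/2}$:

```python
def nontrivial_factor(n):
    # returns a factor f of N = n^4 + 4^n with 1 < f < N (assumes n >= 2)
    if n % 2 == 0:
        return 2                    # N is even and greater than 2
    if n % 5 != 0:
        return 5                    # n^4 + 4^n = 1 + 4 = 0 (mod 5)
    b = 2**((n - 1) // 2)           # 4^n = 4*b^4 for odd n
    return n*n + 2*b*b - 2*n*b      # Sophie Germain factor

factors = {n: nontrivial_factor(n) for n in range(2, 61)}
```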
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Formula for summation of integer division series Consider '\' to be the integer division operator, i.e.,
$a$ \ $b = \lfloor a / b\rfloor$
Is there a formula to compute the following summation:
N\1 + N\2 + N\3 + ... + N\N
| This is not a closed form, but an alternate characterization of this sum is
$$
\sum_{k=1}^n\lfloor n/k\rfloor=\sum_{k=1}^nd(k)\tag{1}
$$
where $d(k)$ is the number of divisors of $k$. This can be seen by noticing that, as $n$ increases by $1$, $\lfloor n/k\rfloor$ increases by $1$ exactly when $k\mid n$:
$$
\begin{array}{c|cc}
\lfloor n/k\rfloor&1&2&3&4&5&6&k\\
\hline\\
0&0&0&0&0&0&0\\
1&\color{#C00000}{1}&0&0&0&0&0\\
2&\color{#C00000}{2}&\color{#C00000}{1}&0&0&0&0\\
3&\color{#C00000}{3}&1&\color{#C00000}{1}&0&0&0\\
4&\color{#C00000}{4}&\color{#C00000}{2}&1&\color{#C00000}{1}&0&0\\
5&\color{#C00000}{5}&2&1&1&\color{#C00000}{1}&0\\
6&\color{#C00000}{6}&\color{#C00000}{3}&\color{#C00000}{2}&1&1&\color{#C00000}{1}\\
n
\end{array}
$$
In the table above, each red entry indicates that $k\mid n$, and each red entry is $1$ greater than the entry above it. Thus, the sum of each row increases by $1$ for each divisor of $n$.
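The identity $(1)$ is easy to confirm computationally (my addition):

```python
def floor_sum(n):
    # left-hand side: sum of floor(n/k) for k = 1..n
    return sum(n // k for k in range(1, n + 1))

def d(k):
    # number of divisors of k (naive count)
    return sum(1 for j in range(1, k + 1) if k % j == 0)

def divisor_sum(n):
    # right-hand side: sum of d(k) for k = 1..n
    return sum(d(k) for k in range(1, n + 1))
```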
A simple upper bound is given by
$$
n(\log(n)+\gamma)+\frac12\tag{2}
$$
This is because we have the following bound for the $n^\text{th}$ Harmonic Number:
$$
H_n\le\log(n)+\gamma+\frac1{2n}\tag{3}
$$
where $\gamma$ is the Euler-Mascheroni Constant.
Research Results
After looking into this a bit, I found that the Dirichlet Divisor Problem involves estimating the exponent $\theta$ in the approximation
$$
\sum_{k=1}^nd(k)=n\log(n)+(2\gamma-1)n+O\left(n^\theta\right)
$$
Dirichlet showed that $\theta\le\frac12$ and Hardy showed that $\theta\ge\frac14$.
There is no closed form known for $(1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Identity in Thom spaces. Let $T$ be the one-point compactification, $E$ a real vector bundle, $\epsilon$ the trivial line bundle and $\Sigma$ the suspension operation. How can I prove that
$$ T(\epsilon \oplus E) \simeq \Sigma T E\,\,\,\, ?$$
| Try to prove:
1) The Thom space of the trivial line bundle is $S^1$
2) For 'nice' spaces the one point compactification satisfies $(X \times Y)_+ = X_+ \wedge Y_+$
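Putting the two hints together (a sketch I added, assuming the base space is compact, so that the Thom space $TE$ is the one-point compactification $E^+$ of the total space):

```latex
T(\epsilon \oplus E)
  = (\mathbb{R} \times E)^{+}        % total space of \epsilon \oplus E is \mathbb{R} \times E
  \simeq \mathbb{R}^{+} \wedge E^{+} % using (X \times Y)^{+} \simeq X^{+} \wedge Y^{+}
  \simeq S^{1} \wedge TE
  = \Sigma TE.
```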
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is it true that $ \sum \limits_{i=1}^{\infty} f(i) = \lim_{n \to \infty} \sum \limits_{i=1}^{n} f(i) $? Is the equality below true?
$$ \sum \limits_{i=1}^{\infty} f(i) = \lim_{n \to \infty} \sum \limits_{i=1}^{n} f(i) $$
| Since it wasn't specified, I'm assuming that the codomain of $f$ is something in which the symbols used make sense.
The equality $\displaystyle \sum \limits_{i=1}^{\infty} f(i) = \lim_{n \to \infty} \sum_{i=1}^{n} f(i)$ holds by definition, if $\displaystyle \lim \limits_{n \to \infty} \sum \limits_{i=1}^{n} f(i)$ exists.
If the limit doesn't exist, then the LHS is devoid of meaning and the 'equality' is neither true nor false; it simply doesn't make sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Minimum number of colors I just read an old book today, and it stated that mathematicians are still unable to answer "What is the minimum number of colours needed to paint a map such that adjacent countries will not have the same colour?" So do mathematicians now know the answer, or is it still unknown?
| The 4-color theorem has been proven. In my "Graphs and Digraphs" book by Chartrand and Lesniak (4ed 2005), the story is told that in 1890 Heawood proved the 5-color theorem as a result of spotting an error in a flawed 4-color theorem by Kempe a decade earlier. After 1890 we had the 5-color theorem, and the 4-color conjecture for many years.
It was not until June 21, 1976 that the 4-color theorem was actually proved by Appel and Haken. Anyway, textbooks changed a little after that, but not much, as the 5-color theorem is doable in about one textbook page, whereas the way in which Appel and Haken proved the 4-color theorem was computer-intensive and not conducive to inclusion in a chapter on graph colorings.
This is the most likely explanation for your book. It was probably written before 1976. Note that I would not throw the book away or think of it as obsolete. We still cannot fit a proof of the 4-color theorem on one page of a textbook, although finding less computer dependent ways to prove 4-color has been a source of active research. Also note that the 5-color theorem proof is still a favorite of graph theory students due to its elegance and relative simplicity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Constructing irreducible polynomials over the Polynomial Ring Well, my question is essentially:
Let $R$ be a Factorial Ring (UFD, basically) and let $p$ be a prime element in $R$.
Let $d$ be an integer larger than 2, and let
$f(t) = t^d + c_{d-1} t^{d-1} + ... + c_0$ be a polynomial belonging to $R[t]$. Let $n \ge 1$ be an integer.
Show that $g(t) = f(t) + \displaystyle \frac{p}{p^{nd}}$ is irreducible in $K[t]$, where $K$ is the quotient field of $R$.
Okay, well, I know that it is enough, from Gauss' Lemma, to show that (after multiplying through by $p^{nd}$) $g$ has no non-trivial factorizations in $R$ itself.
EDIT: My apologies, I forgot that this holds specifically for primitive polynomials, and no such specification was given. Nevertheless, perhaps, I suppose the gcd of the coefficients could be factored out?
I also tried thinking of $g$ as essentially just a translation of the graph of $f$ slightly upward, and that $g$ converges to $f$ as $n \to \infty$, but I don't know exactly what to make of that, because, well, as $n$ gets larger, it starts to inch towards the roots of $f$, and would no longer be irreducible, but perhaps I've confused myself.
(As an auxiliary question, there was a previous exercise which asked us to prove that if $R = \mathbb{Z}$ and $f$ has $m$ real roots, then, "for sufficiently large $n$", $g$ also has $m$ real roots. This should be fairly apparent, I should think, from the graph? But if not, how would one rigorously establish this?)
Anyway, thanks a lot.
| As $p$ is a unit in $K$, the irreducibility of $g(t)$ is equivalent to the irreducibility of $h(t):=p^{nd}g(t)$. Write
$$h(t)=(p^nt)^{d}+c_{d-1}p^n (p^nt)^{d-1}+\cdots + c_1p^{(d-1)n}(p^nt)+p(1+p^{nd-1}c_0).$$
Now apply the Eisenstein criterion to the polynomial
$$ H(T)=T^d+c_{d-1}p^n T^{d-1}+\cdots + c_1p^{(d-1)n}T+p(1+p^{nd-1}c_0)$$
and Gauss's lemma to conclude that $H(T)\in K[T]$ is irreducible. As $h(t)$ is obtained from $H(T)$ by the invertible change of variables $T=p^nt$, it is also irreducible.
Edit: I forgot the auxiliary question. It is false: consider $f(t)=t^2$ and $p$ a prime number.
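A concrete instance (my own example, not from the original answer): take $R=\mathbb{Z}$, $p=2$, $n=1$, $d=3$ and $f(t)=t^3$, so that $h(t)=p^{nd}g(t)=8t^3+2$. Its factorization over $\mathbb{Q}$ has a single irreducible factor, as expected:

```python
from sympy import symbols, factor_list

t = symbols('t')
h = 8*t**3 + 2
content, factors = factor_list(h)  # pulls out content 2, leaves 4*t**3 + 1
```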
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to compute the integral $ \int\frac{1}{x\sqrt{x^2 +3x}}dx$ Given the problem:
$$ \int\frac{1}{x\sqrt{x^2 +3x}}dx, $$
what is the best solution for this?
I am thinking about solving this problem by using :
$$ u = x+3;\qquad x = u-3; $$
So that we get :
$ \int\frac{1}{x\sqrt{x}\sqrt{x+3}} dx$, then $ \int\frac{1}{(u-3)^{3/2}(u)^{1/2}} du$,
then
$ \int (u)^{-1/2} (u-3)^{-3/2} du$.
Am I right so far? or is there a better method? Thanks.
| I have not done an integral this cumbersome in quite a while. The technique that immediately stands out to me is trigonometric substitution. While I have performed the integration correctly on paper, I would appreciate someone being on the hunt for typesetting errors. Buckle seatbelt...
We have
$$\int \frac{1}{x\sqrt{x^2+3x}} dx.$$
Now we complete the square on the radicand to get
$$\int \frac{1}{x\sqrt{x^2+3x}} dx=\int \frac{1}{x\sqrt{\left( x+\frac{3}{2} \right)^2-\frac{9}{4}}} dx.$$
For our substitution, let
\begin{align*}
&x+\frac{3}{2}=\frac{3}{2}\sec(\theta) \\
\Rightarrow & x=\frac{3}{2}(\sec(\theta)-1) \\
\Rightarrow & dx=\frac{3}{2}\sec(\theta)\tan(\theta)d \theta. \\
&\left( x+\frac{3}{2} \right)^2=\frac{9}{4}\sec^2(\theta) \\
\Rightarrow &\left( x+\frac{3}{2} \right)^2-\frac{9}{4}=\frac{9}{4}\sec^2(\theta)-\frac{9}{4} \\
& \qquad \qquad \qquad \quad= \frac{9}{4}\tan^2(\theta).
\end{align*}
We now make our substitutions and then integrate with respect to $\theta$.
\begin{align*}
\int \frac{1}{x\sqrt{\left( x+\frac{3}{2} \right)^2-\frac{9}{4}}} dx &=\int\frac{\frac{3}{2}\sec(\theta)\tan(\theta) d \theta}{\frac{3}{2}(\sec(\theta)-1)\sqrt{\frac{9}{4}\tan^2(\theta)}} \\
&=\frac{2}{3}\int \frac{\sec(\theta)\tan(\theta) d \theta}{\tan(\theta)(\sec(\theta)-1)} \\
&=\frac{2}{3}\int\frac{\sec(\theta)d \theta}{\sec(\theta)-1} \\
&=... \\
&=\frac{2}{3}\int \left( \csc^2(\theta)+\csc(\theta)\cot(\theta) \right) d\theta \\
&=\frac{2}{3}\left( -\cot(\theta)-\csc(\theta) \right) +c.
\end{align*}
Finally we back substitute (a right triangle helps). We started with the substitution
$$x+\frac{3}{2}=\frac{3}{2}\sec(\theta) \Rightarrow \sec(\theta)=\frac{2x+3}{3}.$$
If we form a right triangle with $3$ adjacent to $\theta$ and $2x+3$ on the hypotenuse (this follows as $\sec(\theta)=\frac{2x+3}{3}$), then we find that the opposite side is $2\sqrt{x^2+3x}.$
Thus the back substitution becomes (reading straight from the right triangle),
\begin{align*}
&\frac{2}{3}\left( -\cot(\theta)-\csc(\theta) \right) +c \\
=&\frac{2}{3}\left( -\frac{3}{2\sqrt{x^2+3x}} -\frac{2x+3}{2\sqrt{x^2+3x}}\right) +c \\
=& ... \\
=& -\frac{2\sqrt{x^2+3x}}{3x}+c.
\end{align*}
I will be happy to include the derivations of the trigonometric moves, and this final simplification upon request where I inserted "..." into the process. Transforming $$\frac{\sec(\theta)}{\sec(\theta)-1}=\csc^2(\theta)+\csc(\theta)\cot(\theta)$$ was not what I would call trivial (took me 6 steps or so). A picture of the right triangle for the back substitution would also enhance this, and I could also add that later unless someone else feels more motivated right now.
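One way to double-check the whole computation (my addition) is to differentiate the claimed antiderivative and compare it with the integrand:

```python
from sympy import symbols, sqrt, diff, simplify

x = symbols('x', positive=True)
antideriv = -2*sqrt(x**2 + 3*x) / (3*x)
integrand = 1 / (x * sqrt(x**2 + 3*x))
difference = simplify(diff(antideriv, x) - integrand)  # should be 0
```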
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
How to Make an Introductory Class in Set Theory and Logic Exciting I am teaching a "proof techniques" class for sophomore math majors. We start out defining sets and what you can do with them (intersection, union, cartesian product, etc.). We then move on to predicate logic and simple proofs using the rules of first order logic. After that we prove simple math statements via direct proof, contrapositive, contradiction, induction, etc. Finally, we end with basic, but important concepts, injective/surjective, cardinality, modular arithmetic, and relations.
I am having a hard time keeping the class interested in the beginning set theory and logic part of the course. It is pretty dry material. What types of games or group activities might be both more enjoyable than my lectures and instructive?
| Maybe start with some riddles from http://en.wikipedia.org/wiki/Charles_Lutwidge_Dodgson or http://en.wikipedia.org/wiki/Raymond_Smullyan (both wrote books full of logic riddles).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Is every symmetric positive semi-definite matrix a covariance of some multivariate distribution? One can easily prove that every covariance matrix is positive semi-definite. I come across many claims that the converse is also true; that is,
Every symmetric positive semi-definite matrix is a covariance matrix of some multivariate distribution.
Is it true? If it is, how can we prove it?
| The wikipedia article on covariance matrices answers that (the excerpt below is taken verbatim from that article):
From the identity just above, let $\mathbf{b}$ be a $(p \times 1)$ real-valued vector, then:
$$\operatorname{var}(\mathbf{b}^{\rm T}\mathbf{X}) = \mathbf{b}^{\rm T} \operatorname{var}(\mathbf{X}) \mathbf{b},$$
which must always be nonnegative, since it is the variance of a real-valued random variable; combined with the symmetry of the covariance matrix's definition, it follows that only a positive-semidefinite matrix can be a covariance matrix. The answer to the converse question, whether every symmetric positive semi-definite matrix is a covariance matrix, is "yes". To see this, suppose $\mathbf{M}$ is a $p\times p$ positive-semidefinite matrix. From the finite-dimensional case of the spectral theorem, it follows that $\mathbf{M}$ has a nonnegative symmetric square root, which can be denoted by $\mathbf{M}^{1/2}$. Let $\mathbf{X}$ be any $p\times 1$ column vector-valued random variable whose covariance matrix is the $p\times p$ identity matrix. Then:
$$\operatorname{var}(\mathbf{M}^{1/2}\mathbf{X}) = \mathbf{M}^{1/2} (\operatorname{var}(\mathbf{X})) \mathbf{M}^{1/2} = \mathbf{M}.$$
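The construction in the last paragraph can be carried out numerically; here is a sketch (my addition) using NumPy, with the square root $\mathbf{M}^{1/2}$ obtained from an eigendecomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# any symmetric positive semi-definite matrix (here B B^T for a random B)
B = rng.standard_normal((4, 4))
M = B @ B.T

# nonnegative symmetric square root via the spectral theorem
w, V = np.linalg.eigh(M)
M_half = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

# X has identity covariance, so Y = M^{1/2} X has covariance M
X = rng.standard_normal((4, 200_000))
Y = M_half @ X
empirical_cov = np.cov(Y)
```

With $2\times 10^5$ samples the empirical covariance matches $\mathbf{M}$ to within sampling error.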
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
How to evaluate this limit: $\lim_{x\to 0}\frac{\sqrt{x+1}-1}{x} = \frac12$? How do I evaluate $$\lim_{x\to 0}\frac{\sqrt{x+1}-1}{x}?$$ I know the answer is $\frac{1}{2}$, but I got this question wrong. I think you have to multiply by the conjugate of the numerator? Can someone explain the steps? I must be doing something wrong when I multiply.
| Method 1 (basic)
$$\frac{\sqrt{x+1} - 1}{x} \stackrel{\sqrt{x+1}^2 = |x+1|}{=} \frac{|x + 1| - 1}{x (\sqrt{x+1} + 1)} \stackrel{x+1 \geq 0, \text{ for well-def.}}{=} \frac{1}{\sqrt{x + 1} + 1} \to \frac{1}{1+1} = \frac{1}{2}$$
as $x\to 0$, because $x\mapsto \sqrt{x}$ is continuous (and so the limit can be "used as input")
Method 2 (derivative)
$$\frac{\sqrt{x+1} - 1}{x} = \frac{f(1 + x) - f(1)}{x}$$
where $f(y) = \sqrt{y}$, thus the limit is
$$f'(1) = \frac{1}{2 \sqrt{1}} = \frac{1}{2}$$
Method 3 (l'Hospital)
Since the form is $\frac{0}{0}$, l'Hospital can be applied and gives
$$\lim_{x\to 0} \frac{\sqrt{x+1} - 1}{x} = \lim_{x\to 0} \frac{\frac{1}{2\sqrt{x+1}}}{1} = \lim_{x\to 0} \frac{1}{2\sqrt{x + 1}} = \frac{1}{2 \sqrt{1}} = \frac{1}{2}$$
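A quick numerical sanity check of the limit (a sketch, evaluating the quotient at points approaching $0$):

```python
def quotient(x):
    return ((x + 1) ** 0.5 - 1) / x

# evaluate at x = 10^-1, 10^-2, ..., 10^-7
values = [quotient(10.0 ** -k) for k in range(1, 8)]
print(values[-1])  # very close to 0.5
```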
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Proof: $a = x^2 - y^2, x - y > 1 \implies a$ is not a prime number I am having trouble doing a proof on the subject:
$a = x^2 - y^2, x - y > 1 \implies a$ is not a prime number.
$a, x, y \in Z$
The book has not yet introduced the theory of congruences, so it should be a 'fairly' simple proof i guess ?
Thanks for your time
| $x^2-y^2 = (x-y)(x+y)$
If $x-y>1$, then (for nonnegative $x$ and $y$) $x+y \ge x-y > 1$ as well,
so $x^2-y^2$ is divisible by both $x-y$ and $x+y$, each a proper factor, and hence is not a prime number.
My concern is that the problem doesn't say whether $x$ and $y$ are natural numbers; if they are not, this claim would not be correct as stated.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Infinite amount of additions, finite sum? I suggest it's a popular question, so if it was asked already (I couldn't find it anyway), close this question instead of downvoting, thanks!
Let's check this addition: $\sum_{n=0}^{\infty}\frac{1}{2^n}=2$
It looks like $1 + \frac12 + \frac14 + \frac18 + \frac1{16} + ... $ for every natural $n$.
How can the sum of this (and other similar) additions be finite? I tried to understand the concept of infinite, and if a variable is finite, then it always can be bigger than a value we give for it.
In the case of the example, each time I increase $n$ (I make it closer to infinity), I always add a number to the sum. The bigger $n$ is, the smaller this number is, so the sum always increases by a slightly smaller amount.
I got this fact as an answer even at the university. But I still don't understand it... $n$ always increases, so the sum also always increases! Infinity is infinity, it can always be a bigger value, no matter what we write for $n$. For an incredibly big value, the sum must be bigger. I mean, we can prove that for a given $n$, the sum becomes bigger than 1.7, or 1.8, or 1.95, et cetera.
For an incredibly big $n$ value, even if it's bigger than anything we can even display (a googolplex, for example), the sum should be bigger than 2.
...at least, theoretically. I don't get it. I've heard some "real-life" examples like eating $\frac1n$ table of chocolate every day, or the ancient Greek story (maybe from Archimedes) about the race of the turtle (or whatever), but it didn't help in understanding.
Is there anyone who can explain it on an understandable way?
| An infinite number of summands is not really a problem, if the summands decrease quickly enough. For example
$$\lim_{n\to\infty} \sum_{k=1}^n \frac{1}{k^2} = \frac{\pi^2}{6}<\infty$$
But
$$\lim_{n\to\infty} \sum_{k=1}^n \frac{1}{k} = \infty$$
So for the convergence of series (which is what you are puzzled about), there are a lot of so-called convergence tests for the summands $a_k$ to determine, whether
$$\lim_{n\to\infty} s_n := \lim_{n\to\infty} \sum_{k=1}^n a_k$$
converges.
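The contrast between the two series above can be seen numerically (a sketch in Python; these are only partial sums, since a computer can never add infinitely many terms):

```python
# partial sum of sum 1/2^n: stays pinned at 2
geometric = sum(1 / 2 ** n for n in range(60))

# partial sums of the harmonic series: keep growing without bound
harmonic_small = sum(1 / k for k in range(1, 1001))
harmonic_large = sum(1 / k for k in range(1, 1000001))

print(geometric, harmonic_small, harmonic_large)
```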
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
Show that there are exactly two values in $\{0, 1, ..., N - 1\}$ satisfying $x^{2} \equiv a \pmod{N}$.
Fix a positive integer $N > 1$. We say that $a$ is a quadratic residue modulo $N$ if there exists $x$ such that $a \equiv x^{2} \pmod{N}$.
Let $N$ be an odd prime and $a$ be a non-zero quadratic residue modulo $N$. Show that there are exactly two values in $\{0, 1, ..., N - 1\}$ satisfying $x^{2} \equiv a \pmod{N}$.
This means to prove that there are exactly two values of $x$ that satisfy $x^{2} \equiv a \pmod{N}$.
I think I should prove that there are at most two values and at least two values that satisfy the above constraints.
I know that because $a$ is a quadratic residue modulo $N$, $a \equiv x^{2} \pmod{N}$, and given in the problem is $x^{2} = a \pmod{N}$.
I can combine these two equations using modular arithmetic to either $x^{2}a = x^{2}a \pmod{N}$ or $x^{2} + a = x^{2} + a \pmod{N}$.
I don't know if this is on the right track or how to continue the proof.
This is a homework question, so I'd be a grateful for a hint of some sort that nudges me in the right direction.
| Few remarks just to remind you some theorems about congruences when the modulus is prime:
1- If $p$ is a prime number and $P(x)$ is a polynomial of degree $k$, then $P(x) \equiv 0 \pmod{p}$ has at most $k$ number of solutions.
2- when $p$ is a prime number, $\mathbb{Z}/p\mathbb{Z}$ is an integral domain. That means whenever $ab \equiv 0 \pmod{p}$ then $a \equiv 0 \pmod{p}$ or $b \equiv 0 \pmod{p}$. This can be equivalently said as $p \mid ab \implies p \mid a$ or $p \mid b$.
3- Every finite integral domain is a field (This is more related to abstract algebra than elementary number theory though).
4- Notice that if $n>1$ then $2 \equiv 0 \pmod{n}$ if and only if $n=2$. As a corollary, $-1 \equiv 1 \pmod{n}$ if and only if $n=2$.
The rest is now clear as André Nicolas has said:
If $a$ is a quadratic residue, then by definition there is at least one $x$ such that $x^2 \equiv a \pmod{N}$. But also $(-x)^2 \equiv a \pmod{N}$. Therefore if $x$ and $-x$ are different then we have found at least two solutions; also, because of remark 1, we know that we can have at most 2 solutions, because $P(x)=x^2-a$ is of degree 2, and that proves there are exactly 2 solutions.
To show that $x \not\equiv -x \pmod{N}$ assume the opposite. If $x \equiv -x \pmod{N}$ then $2x \equiv 0 \pmod{N}$, but $2 \not\equiv 0 \pmod{N}$ because $N \neq 2$. That implies $x \equiv 0 \pmod{N}$.But that forces $a$ to be $0$ modulo $N$ which is contradiction.
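A brute-force check for a small prime (a sketch; $N = 11$ is an arbitrary choice) confirms that every non-zero quadratic residue has exactly two square roots modulo $N$:

```python
N = 11  # an odd prime
square_roots = {
    a: [x for x in range(N) if (x * x - a) % N == 0]
    for a in range(1, N)
}
# keep only the a that are quadratic residues (those with at least one root)
residues = {a: roots for a, roots in square_roots.items() if roots}
print(residues)  # each residue maps to a pair {x, N - x}
```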
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/489975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Linear Algebra- Proof of trace property How can I prove that $\text{tr}(A)=\text{tr}(A^T)$ using the fact that $\text{trace}(A)= a_{11} +a_{22} +... +a_{nn}$?
| If $A$ has coefficients $(A)_{ij}=a_{ij}$ then by definition $A^t$ has as coefficients $(A^t)_{ij}=a_{ji}$. But in particular when $i=j$; $$(A)_{ii}=a_{ii}=(A^t)_{ii}$$
For example, if $$A=\begin{pmatrix}1&6&2\\7&-2&-9\\-1&0&2\end{pmatrix}$$ then $$A^t=\begin{pmatrix}1&7&-1\\6&-2&0\\2&-9&2\end{pmatrix}$$
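This can be sanity-checked numerically with the example matrix above (a minimal sketch using plain Python lists):

```python
A = [[1, 6, 2], [7, -2, -9], [-1, 0, 2]]

def trace(M):
    # sum of the diagonal entries
    return sum(M[i][i] for i in range(len(M)))

def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

print(trace(A), trace(transpose(A)))  # both equal 1
```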
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
perimeter of square inscribed in the triangle In the figure given below, PQR is a triangle with sides PQ=10, PR=17, QR=21. ABCD is a square inscribed in the triangle. I want to find the perimeter of square ABCD, that is, to find the length of side AB. But by using basic high school geometry concepts, not trigonometry.
I have dropped a perpendicular to side QR, and by using Heron's formula I found its length to be 8, but I am confused about what to do next. So, please help me.
Any other solutions within the above limitation (basic high school concepts, no trigonometry) are welcome.
THANKS...............
| Area of Triangle $PQR = 84$ by using semiperimeter formula. From this we get height of Triangle as 8.
Say, side of square $= x$, hence $AB=BC=CD=AD=x$
Now take small triangle $APB$, in this height will be $8-x$. Area of Triangle = $\frac{x\cdot(8-x)}{2}$ -----(1)
Now, Area of Trapezoid $ABRQ = \frac{(21+x)\cdot x}{2}$ -------(2)
As we know, adding (1) & (2), we get area of Triangle $PQR$ i.e 84.
Hence, $$8x-x^2+21x+x^2 = 84\cdot2$$
$$29x = 168$$
$$x = \frac{168}{29} \approx 5.79$$
Hence the perimeter of square $ABCD = 4x = \frac{672}{29} \approx 23.17.$
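The computation can be verified numerically (a sketch; it recomputes the area by Heron's formula, the altitude onto QR, and the side $x$ from $29x = 168$):

```python
import math

a, b, c = 10, 17, 21          # PQ, PR, QR
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
h = 2 * area / c              # altitude onto QR
x = c * h / (c + h)           # side of the square, equivalent to 29x = 168
print(area, h, x, 4 * x)      # 84, 8, ~5.79, ~23.17
```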
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Lorentz force and Newton's second law Here's the question.
Consider a particle of mass $m$ that carries a charge $q$. Suppose that the particle is under the influence of both and electric field $\mathbf{E}$ and a magnetic field $\mathbf{B}$ so that the particle's trajectory is described by the path $\mathbf{x}(t)$ for $a\le{t}\le{b}$. Then the total force acting on the particle is given by the Lorentz force
$$\mathbf{F}=q(\mathbf{E}+\mathbf{v}\times\mathbf{B}),$$
Where $\mathbf{v}$ denotes the velocity of the trajectory.
(a) Using Newton's second law of motion ($\mathbf{F}=m\mathbf{a}$), where $\mathbf{a}$ denotes the acceleration of the particle, show that
$$m\mathbf{a}\cdot\mathbf{v}=q\mathbf{E}\cdot\mathbf{v}$$
So a quick substitution yields
$$m\mathbf{a}\cdot\mathbf{v}=\mathbf{F}\cdot\mathbf{v}=q(\mathbf{E}+\mathbf{v}\times\mathbf{B})\cdot\mathbf{v}=(q\mathbf{E}+q(\mathbf{v}\times\mathbf{B}))\cdot\mathbf{v}$$
So i think the next step is
$$=q\mathbf{E}\cdot\mathbf{v}+q(\mathbf{v}\times\mathbf{B})\cdot\mathbf{v}$$
And since the dot product of those last two is $0$, we get are result. But this is where I'm having trouble (new to line integrals)
(b) If the particle moves with a constant speed, use (a) to show that $\mathbf{E}$ does no work along the path of the particle
So
$$\int_C{q\mathbf{E}\cdot\mathbf{v}}=\int_C{m\mathbf{a}\cdot\mathbf{v}}=\int_C{m\frac{d\mathbf{v}}{dt}}\cdot\mathbf{v}$$
But I'm not sure how to proceed...
| The problem is that not only the force produced by the electric field acts on the particle, if this were so then the particle would accelerate (it would be subject to a non-zero force, this is N. Second law), so there is a force countering that of the electric field ${\bf F} = -q{\bf E}$, only then can ${\bf v}$ be constant. The work done by such force is
$$W= \int_{t_a}^{t_b} {\bf F \cdot v }dt = -q\int_a^b {\bf E}\cdot d{\bf s} $$
The second integral shows that this can be interpreted as work done by (or rather against) the electric field.
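The algebraic fact behind part (a) — that the magnetic contribution $(\mathbf v\times\mathbf B)\cdot\mathbf v$ vanishes — is easy to verify numerically (a sketch with arbitrarily chosen integer vectors):

```python
def cross(u, w):
    return [u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0]]

def dot(u, w):
    return sum(ui * wi for ui, wi in zip(u, w))

v = [1, -2, 3]   # arbitrary sample velocity
B = [4, 2, -1]   # arbitrary sample magnetic field
work_rate = dot(cross(v, B), v)   # (v x B) . v
print(work_rate)  # 0 — the magnetic force does no work
```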
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve for $z$ in $z^5=32$ This was the last question on my Year 11 Complex Numbers/Matrices Exam
Name all 5 possible values for $z$ in the equation $z^5=32$
I could only figure out $2$. How would I go about figuring this on paper?
| We have $z^5=2^5$ or $$\left(\frac{z}{2}\right)^5=1,$$ so the solutions are $$z = 2e^{\frac{2k\pi i}{5}},$$ where $k=0,1,2,3,4.$
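A quick numerical check of the five roots (a sketch using Python's `cmath`):

```python
import cmath

# the five fifth roots of 32: z = 2 * exp(2*pi*i*k/5), k = 0..4
roots = [2 * cmath.exp(2j * cmath.pi * k / 5) for k in range(5)]
print(roots[0])  # (2+0j), the unique real root
```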
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 0
} |
Fundamental group of product of schemes Is the étale fundamental group of the product of two schemes $X_1$ and $X_2$ related to the étale fundamental groups of $X_1$ and $X_2$ individually?
| The following is proven SGA 1, X.1.7:
Let $k$ be an algebraically closed field, and $X,Y$ connected $k$-schemes such that $X$ is proper and $Y$ is locally noetherian. Let $x,y$ be geometric points of $X,Y$ with values in the same algebraically closed field. Then the canonical homomorphism of profinite groups $\pi_1^{ét}(X \times_k Y,(x,y)) \to \pi_1^{ét}(X,x) \times \pi_1^{ét}(Y,y)$ is an isomorphism.
We really need properness, since the isomorphism fails for $X=Y=\mathbb{A}^1_{\mathbb{F}_p}$ (see SGA 1, X.1.10).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What happens to the rank of a matrix when a singular matrix is multiplied by a non-singular one? I need to prove that for any rectangular matrix $A$ and any non-singular matrices $P$, $Q$ of appropriate sizes,
rank($PAQ$)= rank($A$)
I know that when a singular matrix is multiplied by a non-singular one, the result is a singular matrix. However, that says nothing directly about the rank. If $A$ is singular with rank 2, why is rank($PAQ$) $= 2$ and not some other number?
| Hint: multiplying matrices amounts to compose linear maps. In other words, suppose $A$ is an $n\times m$ matrix, i.e. a linear map $f_A:\mathbb R^m\to\mathbb R^n$. Then, to perform the product $PAQ$ is the same as to give a factorization of $f_{PAQ}$ as $$\mathbb R^m\overset{f_Q}{\longrightarrow}\mathbb R^m\overset{f_A}{\longrightarrow}\mathbb R^n\overset{f_P}{\longrightarrow}\mathbb R^n,$$ where $f_Q$ and $f_P$ are isomorphisms. Hence,
$$\textrm{rank}(PAQ):=\dim\,(\textrm{im}\,(f_P\circ f_A\circ f_Q))=\dim\,(\textrm{im}\,f_A)=:\textrm{rank}\,A.$$
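A small numerical check of the claim (a sketch; `rank` below is a naive Gaussian elimination over exact fractions, and $A$, $P$, $Q$ are arbitrary sample matrices):

```python
from fractions import Fraction

def rank(M):
    # row-reduce over exact rationals and count the pivots
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3], [2, 4, 6]]              # rank 1
P = [[1, 1], [0, 1]]                    # non-singular 2x2
Q = [[1, 0, 0], [2, 1, 0], [0, 0, 3]]   # non-singular 3x3
PAQ = matmul(matmul(P, A), Q)
print(rank(A), rank(PAQ))  # both 1
```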
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
The power set of the intersection of two sets equals the intersection of the power sets of each set It would be great if someone could verify this proof.
Theorem:
$\mathcal{P}(A \cap B) = \mathcal{P}(A) \cap \mathcal{P}(B)$
Proof:
First I prove that $\mathcal{P}(A \cap B) \subseteq \mathcal{P}(A) \cap \mathcal{P}(B)$. Take any subset $X \subseteq A \cap B$. Then $X \in \mathcal{P}(A \cap B)$. Also, $X \subseteq A \wedge X \subseteq B$, meaning that $X \in \mathcal{P}(A) \wedge X \in \mathcal{P}(B)$. This, again, means that $X \in \mathcal{P}(A) \cap \mathcal{P}(B)$, and this proves the statement.
Then I prove that $\mathcal{P}(A) \cap \mathcal{P}(B) \subseteq \mathcal{P}(A \cap B)$. Again, take any subset $X \subseteq A \cap B$. Using just the same arguments as above, this is also a truth.
The theorem follows.
| In the fullest generality, consider a nonempty set $I$ and a family $A$ of sets indexed by $I$ (such that we will write $A_i$ for the set corresponding to index $i \in I$).
We then have:
$$X \in \bigcap_{i \in I} \mathscr{P}(A_i) \Longleftrightarrow (\forall i)(i \in I \Rightarrow X \subseteq A_i) \Longleftrightarrow X \subseteq \bigcap_{i \in I} A_i \Longleftrightarrow X \in \mathscr{P}\left(\bigcap_{i \in I}A_i\right)$$
and the equality
$$\bigcap_{i \in I}\mathscr{P}(A_i)=\mathscr{P}\left(\bigcap_{i \in I}A_i\right)$$
follows from the axiom of extensionality.
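The equality is easy to spot-check for two concrete finite sets (arbitrary choices, a sketch):

```python
from itertools import combinations

def powerset(s):
    # the set of all subsets of s, as frozensets so they are hashable
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

A, B = {1, 2, 3}, {2, 3, 4}
lhs = powerset(A & B)
rhs = powerset(A) & powerset(B)
print(lhs == rhs)  # True
```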
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 4
} |
Find largest $k$ such that the diophantine equation $ax+by=k$ does not have nonnegative solution. It is given that $a$ and $b$ are coprime positive integers. My question is, what is the largest integer $k$ such that the diophantine equation $ax+by=k$ does not have any solution where $x$ and $y$ are nonnegative integers ?
| Let $a,b$ be positive integers with $\gcd(a,b)=1$. Let $S=\langle a,b \rangle=\{ax+by: x, y \in {\mathbb Z}_{\ge 0}\}$ denote the numerical semigroup generated by $a,b$. Then the gap set of $S$,
$$ G(S) = {\mathbb Z}_{\ge 0} \setminus S $$
is a finite set. The largest element in $G(S)$ is denoted by $F(S)$, and called the Frobenius number of $S$.
It is well known that $F(S)=ab-a-b$. Here is a quick and self-contained proof.
If $ab-a-b \in S$, then $ab-a-b=ax+by$ with $x,y \in {\mathbb Z}_{\ge 0}$. But then $a(x+1)=b(a-1-y)$, so that $b \mid (x+1)$ since $\gcd(a,b)=1$. Analogously, $a \mid (y+1)$ since $\gcd(a,b)=1$.
Since $x,y \ge 0$, $x \ge b-1$ and $y \ge a-1$. But then $ax+by \ge a(b-1)+b(a-1)>ab-a-b$. Thus, $ab-a-b \in G(S)$.
We now show that $n>ab-a-b$ implies $n \notin G(S)$.
Since $\gcd(a,b)=1$, there exist $r,s \in \mathbb Z$ such that $n=ar+bs$. Note that the transformations $r \mapsto r \pm b$, $s \mapsto s \mp a$ lead to another pair of solutions to $ax+by=n$. So if $r \notin \{0,1,2,\ldots,b-1\}$, by repeated simultaneous applications of this pair of transformations, we can ensure $r \in \{0,1,2,\ldots,b-1\}$. So now assume $n=ar_0+bs_0$ with $0 \le r_0<b$. Then
$$ s_0 = \dfrac{n-ar_0}{b} > \dfrac{ab-a-b-ar_0}{b} = \dfrac{a(b-1-r_0)-b}{b} \ge \dfrac{-b}{b} = -1. $$
Thus, $s_0 \ge 0$, and so $n \in S$. $\blacksquare$
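For example, for $a=3$, $b=5$, the gap set and the Frobenius number $ab-a-b=7$ can be computed by brute force (a sketch):

```python
a, b = 3, 5
limit = a * b
# all values a*x + b*y with nonnegative x, y, up to the limit
S = {x * a + y * b
     for x in range(limit + 1) for y in range(limit + 1)
     if x * a + y * b <= limit}
gaps = sorted(n for n in range(limit + 1) if n not in S)
print(gaps)  # [1, 2, 4, 7] — the non-representable values
```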
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that an uncountable set X is equivalent to X\Y where Y is a denumerable subset of X How do I prove this?
The problem contains the following "hint": Prove that $X \setminus Y$ contains a denumerable family of pairwise disjoint denumerable subsets.
I am not sure how that proves cardinal equivalence. My first instinct was to try to show that a bijection exists between X and X\Y, but that may be extremely difficult which would be why the hint was given.
Any help would be appreciate, and also carefully describe how the solution was found.
Thank you.
| Let $g\colon \omega\to Y$ be a bijection.
Find an injection $f\colon \omega\to X\setminus Y$ recursively as follows: Assume we have already found an injective map $f_n\colon n\to X\setminus Y$ (the case $n=0$ being trivial). Then
$$ x\mapsto\begin{cases}f_n(x)&\text{if }x<n,\\g(x-n)&\text{if }x\ge n\end{cases}$$
defines an injection $\omega\to X$. By assumption, this is not a bijection, hence there exists some $x_n\notin Y\cup \operatorname{im}f_n$. Define $f_{n+1}\colon n+1\to X\setminus Y$ per $f_{n+1}(n)=x_n$ and $f_{n+1}(x)=f_n(x)$ otherwise.
Ultimately define $f(n)=f_{n+1}(n)$. Now you can define $h\colon X\to X\setminus Y$ per
$$ h(x)=\begin{cases}f(2g^{-1}(x))&\text{if }x\in Y,\\
f(2f^{-1}(x)+1)&\text{if }x\in \operatorname{im}f,\\
x&\text{otherwise}.\end{cases}$$
Why is this a bijection?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What should be added to $x^4 + 2x^3 - 2x^2 + x - 1$ to make it exactly divisible by $x^2 + 2x - 3$? I'm a ninth grader, so please try to explain the answer in simple terms.
I can't fully understand the explanation in my book.
It just assumes that the expression that should be added has a degree of 1.
I apologize if this question is too simple or just stupid but this is a genuine doubt.
| You can get to a quartic divisible by $x^2+2x-3$ by writing
$$\begin{align}
x^4+2x^3-2x^2+x-1+\text{something}&=(x^2+2x-3)(x^2+ax+b)\cr
&=x^4+(a+2)x^3+(2a+b-3)x^2+(2b-3a)x-3b\cr
\end{align}$$
which leads to
$$\text{something} = ax^3+(2a+b-1)x^2+(2b-3a-1)x+(1-3b)$$
for any coefficients $a$ and $b$ (of the quotient) that your heart desires. What the book presumably has in mind is to make the "something" of degree as small as possible. You can obviously get rid of the $ax^3$ by setting $a=0$ and then the $(2a+b-1)x^2$ by setting $b=1$. This leaves
$$\text{something}=(2b-3a-1)x+(1-3b)= x-2$$
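One can confirm by long division that adding $x-2$ makes the quartic exactly divisible (a sketch; `polydiv` is a naive polynomial division written just for this check):

```python
def polydiv(num, den):
    # long division; coefficients listed from highest degree down
    num, quot = num[:], []
    while len(num) >= len(den):
        c = num[0] / den[0]
        quot.append(c)
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)
    return quot, num

# (x^4 + 2x^3 - 2x^2 + x - 1) + (x - 2) = x^4 + 2x^3 - 2x^2 + 2x - 3
quot, rem = polydiv([1, 2, -2, 2, -3], [1, 2, -3])
print(quot, rem)  # quotient x^2 + 1, remainder 0
```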
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 5
} |
Proving that all sets of size n have $2^n$ subsets I've tried to construct a proof for this using recursion. My knowledge of recursion and set theory in general is quite limited so I'd appreciate some feedback!
The claim in symbolic logic: $\forall n \in \mathbb N, \exists u \in U, S(u) \wedge |u| = n \Rightarrow \mathcal |P(u)| = 2^n$ where:
$U:$ the set of everything
$S(x):$ x is a set
Initial Value
Assume $n = 0$
Then $\exists u \in U, |u| = 0$
Then $ u = \varnothing $
Then $ |\mathcal P(u)| = | \{ \varnothing \} | = 1 = 2^0 $
Prove: $\forall n \in \mathbb N, \exists u, x \in U, [S(u) \wedge (x \notin u) \wedge (|u| = n) \wedge \mathcal |P(u)| = 2^n] \Rightarrow [(|u \cup \{x\}| = n + 1) \wedge (|\mathcal P(u \cup \{x\})| = 2^{n + 1})]$
Assume $|u| = n$
Then $ |\mathcal P(u)| = 2^n$
Then $ |u \cup \{x\} | = n + 1$ because $ |\{x\}| = 1$
Then $ \mathcal |P(u \cup \{x\})| = |\mathcal P(u)| * |\mathcal P(\{x\})| = 2^n * 2 = 2^{n+1} $
Is this convincing enough or do I need to add more?
EDIT: Fixed notation
| You have proved it for sets of size 0.
Now you have to prove:
if a set of size $n$ has $2^n$ different subsets,
then a set of size $n+1$ has $2^{n+1}$ different subsets,
and then you can use induction.
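The statement itself is easy to spot-check by brute force for small $n$ (a sketch):

```python
from itertools import combinations

def n_subsets(s):
    # count all subsets of s, by size
    return sum(1 for r in range(len(s) + 1) for _ in combinations(s, r))

counts = [n_subsets(set(range(n))) for n in range(7)]
print(counts)  # [1, 2, 4, 8, 16, 32, 64]
```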
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/490946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
If Q-Cauchy sequences must limit to a rational, how can they construct the reals? I'm currently in a graduate Math for Economists course, and we spent last week learning how to construct the reals from the rationals using Cauchy sequences and their equivalence classes. I know that I'm missing something, because from the definitions presented to me, I fail to see how the Q-Cauchy sequences can represent the irrationals.
We define each real number,r, as the equivalence class of Cauchy sequences with r as their limit. Yet, without the reals, we only have Q convergence. So a sequence Q-Converges if for all $n>N, |a_n-L|< E,$ where $L,E$ are rational.
So, how can we define $\pi$, or $\sqrt{2}$? There is no Q-Cauchy sequence which Q-converges to anything other than a rational.
| I'm not sure I understand your question properly but here goes..
It is not true that all Cauchy sequences in $\mathbb{Q}$ converge to a limit in $\mathbb Q$. For example consider $a_{n} = F_{n+1} / F_n$, where $F_n$ is the $n$th Fibonacci number. It is well known that this converges to the golden ratio $\phi = \dfrac{1+\sqrt{5}}{2}$ which is irrational. There is an 'elegant' sequence of rationals that converges to $\sqrt{2}$ but I can't think of it off the top of my head.
In other words, the metric space of rational numbers is not complete, however adding irrational numbers to it turns it into a complete metric space.
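The Fibonacci-ratio example can be seen numerically (a sketch; 40 steps is an arbitrary cutoff — every term of the sequence is rational, yet the limit is the irrational $\phi$):

```python
a, b = 1, 1
ratios = []
for _ in range(40):
    a, b = b, a + b
    ratios.append(b / a)   # F_{n+1} / F_n, always a ratio of integers

phi = (1 + 5 ** 0.5) / 2
print(ratios[-1], phi)
```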
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Teaching the concept of a function. I am doing a class for at risk high school math students on the concept of a function. I have seen all the Internet lesson plans and different differentiated instruction plans. The idea of a function as a machine has always sat well with me, so I was thinking of playing off that. Are there any "out of the box" ideas that perhaps someone used or saw or knows that might hit home?
| Some everyday concepts could help. Such as
In a restaurant menu (f=food item, p=price of item):
Is f a function of p? Is p a function of f?
On back of a mailed envelop (s=street address, z=5-digit zip code):
Is s a function of z? Is z a function of s?
In a teacher's grade book (n=name of student who took a test, g= grade of student)
Is n a function of g? Is g a function of n?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Calculus: Area Between Two Curves Find the area between the two curves $y = x^2 + ax$ and $y = 2x$ between $x=1$ and $x=2$, given that $a>2$
I found the antiderivative of $x^2+ax-2x$ and included the area between $x=1$ and $x=2$ which is $\dfrac{3a-4}{6}-\dfrac{6a-4}{3}$. I don't understand what $(a>2)$ means in the problem.
| Hint: the area between two curves $f(x)$ and $g(x)$ can be found by the formula
$$A=\int_c^d{[f(x)-g(x)]dx}$$
It seems the reason it's asking you for $a>2$ is it will cancel the $2x$ if $a=2$. Perhaps you should plot the graphs for different values of $a$.
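Assuming a sample value $a = 3$ (chosen just for illustration), the closed form $\int_1^2 \big(x^2+(a-2)x\big)\,dx = \frac73 + \frac32(a-2)$ can be checked against a midpoint-rule approximation (a sketch):

```python
def area(a, n=100000):
    # midpoint rule for the integral of x^2 + a*x - 2*x over [1, 2]
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        x = 1 + (k + 0.5) * h
        total += (x ** 2 + (a - 2) * x) * h
    return total

a = 3.0
exact = 7 / 3 + 1.5 * (a - 2)   # antiderivative evaluated at the endpoints
print(area(a), exact)
```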
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
increasing union of finitely generated submodules of M need not be finitely generated Show by an example that an increasing union of finitely generated submodules of M need not be finitely generated.
I was thinking about $R[x_1,x_2,x_3,....]$. Then if we consider the ideal $<x_1>$ , does it form a submodule?
$f(x_1,x_2).g(x_1)\notin<x_1>$, where $f(x_1,x_2)\in R[x_1,x_2,x_3,....]$ and $g(x_1)\in <x_1>$.
So it is not closed under multiplication and hence is not a submodule of $R[x_1,x_2,x_3,....]$
Is this correct?
Can you help me to find an answer for this problem?
| You don't quite say what $M$ is. I'm assuming it's a module over a commutative ring $R$. If $M$ is required to be finitely generated, then in order to get any non-(finitely generated) submodule of $M$ we need $R$ to be non-Noetherian, and your example seems as simple as any.
On the other hand, you don't say that you need $M$ to be finitely generated. If you don't require finite generation of $M$, there are simpler examples: take $R = \mathbb{Z}$, $M = \mathbb{Q}$:
$\mathbb{Z} \subsetneq \frac{1}{2} \mathbb{Z} \subsetneq \frac{1}{6} \mathbb{Z} \subsetneq \ldots \subsetneq \frac{1}{n!} \mathbb{Z} \subsetneq \ldots \subset \mathbb{Q}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $ (n+1)(n+2)\cdots(2n)$ is divisible by $ 2^n$, but not by $ 2^{n+1}$ darij grinberg's note:
I. Niven, H. S. Zuckerman, H. L. Montgomery, An introduction to the theory of numbers, 5th edition 1991, §1.4, problem 4 (b)
Let $n$ be a nonnegative integer. Show that $ (n+1)(n+2)\cdots(2n)$ is divisible by $ 2^n$, but not by $ 2^{n+1}$.
I have no idea how to prove this. Can anyone help me through the proof. Thanks.
| You can do it by induction. The base case is easy. For the induction step, suppose the result is true for $n=k$. So we assume that we know that
$$(k+1)(k+2)\cdots(2k)\tag{1}$$
is divisible by $2^k$ but not by $2^{k+1}$.
Now the product when $n=k+1$ is
$$(k+2)(k+3)\cdots(2k)(2k+1)(2k+2).\tag{2}$$
To get from the product (1) to the product (2), we multiply (1) by $\frac{(2k+1)(2k+2)}{k+1}=2(2k+1)$. Thus the product (2) has "one more $2$" than the product (1).
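The claim is also easy to spot-check directly, by computing the exact power of $2$ dividing the product (a sketch):

```python
def v2(m):
    # the exponent of 2 in the factorization of m
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k

for n in range(1, 15):
    product = 1
    for j in range(n + 1, 2 * n + 1):
        product *= j
    assert v2(product) == n   # divisible by 2^n but not 2^(n+1)
print("checked n = 1..14")
```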
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
Is $\{\emptyset\}$ a subset of $\{\{\emptyset\}\}$? $\{\emptyset\}$ is a set containing the empty set. Is $\{\emptyset\}$ a subset of $\{\{\emptyset\}\}$?
My hypothesis is yes by looking at the form of "the superset $\{\{\emptyset\}\}$" which contains "the subset $\{\emptyset\}$".
| For such abstract questions, it is important that you stick to the definitions of the involved notions.
By definition, $A$ is a subset of $B$ if every element contained in $A$ is also contained in $B$.
Now we look at $A = \{\emptyset\}$ and $B = \{\{\emptyset\}\}$.
$A$ has exactly one element, namely $\emptyset$.
This is not an element of $B$, since the only element of $B$ is $\{\emptyset\}$ and $\emptyset \neq \{\emptyset\}$ (see below).
So $A$ is not a subset of $B$.
Why $\emptyset \neq \{\emptyset\}$?
By definition, two sets are equal if they contain the same elements. However, $\emptyset$ is the empty set without any element, but $\{\emptyset\}$ is a $1$-element set with the element $\emptyset$.
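The distinction can even be illustrated with Python's `frozenset` (a sketch; `<=` is the subset test):

```python
empty = frozenset()
A = frozenset({empty})   # the set {∅}
B = frozenset({A})       # the set {{∅}}
print(A <= B, empty == A)  # False False
```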
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 5,
"answer_id": 3
} |
Inverse Function of $ f(x) = \frac{1-x^3}{x^3} $ I need to show that this function has a continuous inverse function and find this inverse function.
$$ f(x) = \frac{1-x^3}{x^3} $$
Defined on $ (1,\infty) $
I think I need to check for bijectivity. Don't know how.
I tried to solve the equation for $x$, but somehow I only end up with $ -\frac{1}{x^3} = -1 -y $ and don't know how to get to $x = ...$
Maybe there is no inverse function!? Or maybe just on the defined area? I don't know.
| The original function $f$ takes an $x$ and spits out $y=f(x)$. The inverse is a function $g$ that takes any $y$ from the range of $f$ and spits out $x=g(y)$ such that $g(f(x))=x$ for any $x$ in the domain. So you want to solve for $y$ - you already almost got the correct answer. Once you get $x=g(y)$ for some function $g$, check that $g(f(x)) = x$ to make sure.
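Carrying this out for the given $f$ (a sketch; $g$ below is the candidate inverse $g(y)=(1+y)^{-1/3}$ obtained by solving $y = 1/x^3 - 1$ for $x$):

```python
def f(x):
    return (1 - x ** 3) / x ** 3

def g(y):
    # candidate inverse on the domain (1, oo), where f maps into (-1, 0)
    return (1 + y) ** (-1 / 3)

checks = [abs(g(f(x)) - x) for x in (1.5, 2.0, 10.0)]
print(g(f(2.0)))  # 2.0 up to rounding
```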
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Find an example of non-locally finite collection I got stuck on this problem and can't find any hint to solve this. Hope some one can help me. I really appreciate.
Give an example of a collection of sets $A$ that is not locally finite, such that the collection $B = \{\bar{X} | X \in A\}$ is locally finite.
Note: Every element in $B$ must be unique, so maybe there exist 2 distinct sets $A_{1}, A_{2} \in A$, but have $\bar{A_{1}} = \bar{A_{2}}$. So that's why this problem would be right even though $A \subset \bar{A}$.
Thanks everybody.
| Let $Q$ be the rationals in $\mathbb R$, and let $A$ be the collection of sets of the form $\{a+\sqrt p : a \in Q\}$, $p \in \mathbb N$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Taylor series of fraction addition of common function What is the typical trick for finding the taylor series of a common function that is in the denominator when adding a constant.
eg:
$$f(x)=\frac{1}{e^x-c}$$
I know you can write $f(x)=\frac{e^x}{e^{2x}-ce^x}$ and then invoke
$$\operatorname{Taylor}\left(\frac{f(x)}{g(x)}\right)=\frac{\operatorname{Taylor}(f(x))}{\operatorname{Taylor}(g(x))}$$
but I feel that there might be an easier way to evaluate $f(x)$
Any hint would be appreciated
Edit:
the origin of this question is to do integrals like
$$\int_a^b (\frac{8\pi hc}{\lambda^5}\frac{1}{e^{\frac{hc}{\lambda kt}}-1})\,d\lambda$$
and limits for the same function
$$\lim_{\lambda \to +\lambda_0}{\frac{8\pi hc}{\lambda^5}\frac{1}{e^{\frac{hc}{\lambda kt}}-1}}$$
| If the problem for you is that the function is in the denominator, why not just take it as the numerator, i.e. re-write
$f(x) = \frac{1}{e^{x} - 2}$
as
$f(x) = (e^{x} - 2)^{-1}$
and proceed as
$f'(x) = -(e^{x} -2)^{-2}\cdot e^{x} \\
f''(x) = 2(e^{x} -2)^{-3}\cdot e^{2x} - (e^{x} -2)^{-2}\cdot e^{x} \\
f'''(x) = -6(e^{x}-2)^{-4}\cdot e^{3x} + 4(e^{x} -2)^{-3}\cdot e^{2x} + 2(e^{x} -2)^{-3}\cdot e^{2x} - (e^{x} -2)^{-2}\cdot e^{x} \\
\ldots$
where you should be able to write the derivatives more compactly
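Alternatively, the Taylor coefficients of $1/(e^x-c)$ can be generated numerically from those of $e^x-c$ by the standard reciprocal-power-series recursion (a sketch with the sample value $c=2$):

```python
from math import exp, factorial

# Taylor coefficients of e^x - 2 about 0
N = 8
d = [1 / factorial(k) for k in range(N)]
d[0] -= 2

# reciprocal series: from sum_k d_k c_{n-k} = 0 for n >= 1
c = [0.0] * N
c[0] = 1 / d[0]
for n in range(1, N):
    c[n] = -sum(d[k] * c[n - k] for k in range(1, n + 1)) / d[0]

x = 0.1   # well inside the radius of convergence (distance ln 2 to the pole)
approx = sum(c[n] * x ** n for n in range(N))
exact = 1 / (exp(x) - 2)
print(approx, exact)
```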
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\log_{2}(7)$ is irrational
Prove that $\log_{2}(7)$ is an irrational number.
My Attempt:
Assume that $\log_{2}(7)$ is a rational number. Then we can express $\log_{2}(7)$ as $\frac{p}{q}$, where $p,q\in \mathbb{Z}$ and $q\neq 0$. This implies that $7^q = 2^p$, where either $p,q>0$ or $p,q<0$.
My question is this: why can't we count for $p,q<0$? My textbook's author counts only for $p,q>0$. Could someone explain the reasoning being used by the author?
| Consider the case $\log_27=p/q$ where $p, q<0$. You'll still have
$$
7^q=2^p
$$
and thus you'll have
$$
\frac{1}{7^{-q}}=\frac{1}{2^{-p}}
$$
which is the same as saying
$$
7^{-q}=2^{-p}
$$
and since $-q$ and $-p$ are both positive, you're back in the case where both exponents are positive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
$e^{\ln(-2)} = -2$ but $\ln(-2) = \ln 2+i\pi$. How does this work? I'm messing with exponential growth functions. I noticed that I can write
$y(t)=y(0)\alpha^t$ as
$y(t)=y(0)e^{\ln(\alpha)t}$
(and then I can go ahead and replace $\ln(\alpha)$ with $\lambda$.)
But how do I handle when the $\alpha$ in $\ln(\alpha)$ is negative? WolframAlpha simply evaluates
$e^{\ln(-2)}$ as $-2$
but how do I get avoid the whole negative number issue?
WolframAlpha also calls log the natural logarithm, which is confusing. :/
| With Euler's formula, $e^{i\theta}=\cos \theta+i\sin\theta$. This is just a nice identity, proven to work for complex numbers.
Any complex number can be written then as :$$z=re^{i\theta}$$
where $r$ is its modulus.
Now take $\ln$ of that:
$$\ln z=\ln (r e^{i\theta})=\ln r+\ln e^{i\theta}=\ln r+i\theta $$
Now, because $e^{i\theta}=e^{i(\theta+2k\pi)}$, we have in general,
$$\ln z=\ln r+i(\theta+2k\pi)$$
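Numerically, Python's `cmath` module works with the principal branch ($k=0$); this snippet (my own illustration of the formulas above) shows why WolframAlpha reports $e^{\ln(-2)}=-2$ even though $\ln(-2)$ is complex:

```python
import cmath
import math

z = cmath.log(-2)                      # principal branch: ln 2 + i*pi
assert abs(z.real - math.log(2)) < 1e-12
assert abs(z.imag - math.pi) < 1e-12

# exponentiating recovers -2, so e^{ln(-2)} = -2 holds on this branch
assert abs(cmath.exp(z) + 2) < 1e-12

# the other branches ln 2 + i(pi + 2k*pi) exponentiate to the same value
w = z + 2j * math.pi
assert abs(cmath.exp(w) + 2) < 1e-9
```

Note that `cmath.log` (like most software) silently picks the principal branch, which is the source of the "natural logarithm" naming confusion mentioned in the question.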
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/491952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Difference between Deformation Retraction and Retraction I am currently reading through Hatcher's Algebraic Topology book. I am having some trouble understanding the difference between a deformation retraction and just a retraction. Hatcher defines them as follows:
A deformation retraction of a space $X$ onto a subspace $A$ is a family of maps $f_t:X \to X$, $t \in I$, such that $f_0=\mathbb{1}$ (the identity map), $f_1(X)=A$, and $f_t|A=\mathbb{1}$ for all $t$.
A retraction of $X$ onto $A$ is a map $r:X \to X$ such that $r(X)=A$ and $r|A=\mathbb{1}$.
Is the notion of time the important characteristic that sets the two ideas apart? (It seems that the definition of deformation retraction utilizes time in its definition, whereas retraction seems to not.)
Any insight is appreciated. Also, if anyone have additional suggested reading material to help with concepts in Algebraic topology, that would be much appreciated.
| The difference between a retraction and a deformation retraction does have to do with the "notion of time" as you suggest.
Here's a strong difference between the two:
1) For any $x_0 \in X$, $\{x_0\} \subset X$ has a retract. Choose $r : X \to \{x_0\}$ to be the unique map to the one-point set. Then, certainly, $r(x_0) = x_0$.
2) However, $\{x_0\} \subset X$ only has a deformation retraction if $X$ is contractible. To see, why, notice there has to be a family of maps $f_t : X \to X$ such that $f_0(x) = x$, $f_1(x) = x_0$, and $f_{t}(x_0) = x_0$ for every $t$. This gives a homotopy from $id_X$ to the constant map at $x_0$, which makes $X$ contractible.
In fact, showing a deformation retract from $X$ onto a subspace $A$ always exhibits that $A$ and $X$ are homotopy equivalent, whereas $A$ being a retract of $X$ is weaker. (But often, still useful! Two spaces being homotopy equivalent is very strong indeed!)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 2,
"answer_id": 0
} |
what is the proper contour for $\int_{-\infty}^{\infty}\frac{e^z}{1+e^{nz}}dz$ what is the proper contour for $$\int_{-\infty}^{\infty}\frac{e^z}{1+e^{nz}}dz:2\leq n$$
I tried with a rectangle contour, but the problem I faced is how to make the contour contain all the poles ($1+e^{nz}=0$ for every $z=\frac{(2k+1)\pi i}{n}$, $k\in \mathbb{Z}$), or how to avoid them.
| Set $w=nz$, then your integral is equal to $\displaystyle\frac{1}{n}\int_{\mathbb R}\frac{e^{w/n}}{1+e^w}\,dw=\frac{I}{n}$. Now, for $M>0$ big, consider the following rectangle contour: $$t_1\in[-M,M],\\ M+t_2i,\,t_2\in[0,2\pi],\\t_3+2\pi i,\,t_3\in[M,-M],\\-M+t_4i,t_4\in[2\pi,0].$$ On the second line, $$|I|\leq\int_0^{2\pi}\left|\frac{e^{M/n+ti/n}}{1+e^{M+ti}}\right|\,dt\leq\int_0^{2\pi}\frac{e^{M/n}}{|e^{M+ti}|-1}\,dt\leq2\pi\frac{e^{M/n}}{e^M-1},$$ which goes to $0$ as $M\to\infty$. Similarly for the fourth line. On the third line, we get $$-\int_{-M}^M\frac{e^{t/n+2\pi i/n}}{1+e^{t+2\pi i}}\,dt\to-e^{2\pi i/n}I,$$ as $M\to\infty$. Now, the residue on $\pi i$ is equal to $-e^{\pi i/n}$, so you have that $$I(1-e^{2\pi i/n})=-2\pi ie^{\pi i/n}\Rightarrow I=\frac{2\pi i}{e^{\pi i/n}-e^{-\pi i/n}}=\frac{\pi}{\sin(\pi/n)},$$ therefore your integral is equal to $\displaystyle\frac{\pi/n}{\sin(\pi/n)}$.
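As a numerical sanity check of the closed form $\frac{\pi/n}{\sin(\pi/n)}$ (my own verification, using SciPy; the integrand is rewritten with `logaddexp` to avoid floating-point overflow at large $|x|$):

```python
import numpy as np
from scipy.integrate import quad

for n in (2, 3, 5):
    # e^x / (1 + e^{nx}) written stably as exp(x - log(1 + e^{nx}))
    f = lambda x, n=n: float(np.exp(x - np.logaddexp(0.0, n * x)))
    val, _ = quad(f, -np.inf, np.inf)
    expected = (np.pi / n) / np.sin(np.pi / n)
    assert abs(val - expected) < 1e-7
```

For $n=2$ this gives $\pi/2$, matching the residue computation above.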
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Finding the sum of two numbers given the difference of LCM and GCD.
Let $l$ and $h$ denote the LCM and the GCD of the two numbers, so that $l-h=57$.
Let the two numbers be $a$ and $b$.
Then,
$ab=lh$.
How do I proceed further? Please give me just hints. They have asked for the minimum value of the sum, so I think I have to consider different possibilities.
| You want to minimize $lh$ subject to the condition $l-h=57$. So $l=57+h$, and the formula you want to minimize is $lh=(57+h)h$. Since we are looking at natural numbers, this will be the smallest possible when $h=1$, and then $lh=(57+1)\times1=58$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
what's the maximum volume of a cone with a 10 cm generator I have no idea how to solve this. I tried writing down the volume expression depending on the angle, $ v= \frac{1}{3}\pi r^2h $, knowing that $ r= 10\sin\theta $ and $h= 10\cos\theta $, so $ V= \frac{1000}{3}\pi(\sin^2\theta \cos\theta) $; that way the volume would depend on the angle of the generator to the axis. I thought to differentiate the function of the volume in order to find the maximum value: $ \frac{dv}{d\theta}= \frac{1000\pi}{3}\sin\theta ( 3 \cos^3\theta -1) $. This is simplifying as much as I can, but I can't see a way to solve it for zero.
| The angle seems fine to me as a variable, but since the formula contains $r^2$ the function should be $v(\theta)=\cfrac {1000\pi}3\sin^2 \theta \cos \theta$. Then you need to take care over the differentiation.
Now $\sin^2 \theta \cos \theta=(1-\cos^2\theta) \cos \theta=\cos \theta -\cos^3\theta$
So $\cfrac {dv}{d \theta}=\cfrac {1000\pi}3\left(-\sin \theta +3\sin\theta\cos^2\theta\right)$
And this expression is zero if $\sin \theta (3\cos^2\theta-1)=0$, which happens if $\sin \theta =0$ or $\cos \theta =\pm\cfrac 1{\sqrt 3}$. You just have to identify the maximum.
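To identify the maximum numerically (the closed-form value $\frac{2000\pi}{9\sqrt 3}$ below is my own evaluation at $\cos\theta = 1/\sqrt 3$, not stated in the answer), a quick SciPy check:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Volume as a function of the half-angle theta, for a 10 cm generator
v = lambda t: (1000 * np.pi / 3) * np.sin(t) ** 2 * np.cos(t)

# Maximize v on (0, pi/2) by minimizing its negative
res = minimize_scalar(lambda t: -v(t), bounds=(0, np.pi / 2), method='bounded')
theta = res.x

assert abs(np.cos(theta) - 1 / np.sqrt(3)) < 1e-4       # cos(theta) = 1/sqrt(3)
assert abs(v(theta) - 2000 * np.pi / (9 * np.sqrt(3))) < 1e-5
```

The critical points $\sin\theta=0$ give the degenerate cone, so only $\cos\theta=1/\sqrt 3$ is relevant on $(0,\pi/2)$.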
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What do the curvature and torsion measure? Consider a smooth surface for simplicity. What does its curvature measure? What does its Gaussian/Riemannian curvature measure? What does its torsion measure?
What does the Ricci curvature measure?
| If you search the questions on MSE and/or MO, I think you will find some pretty good insights into these topics; if I recall correctly, these topics have come up more than once; many times, in fact. For example, this one might be a good place to start:
Geometric interpretation of connection forms, torsion forms, curvature forms, etc
Good luck in your searches!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Use Laplace transform to solve the following initial–value problems. Use Laplace transform to solve the following initial–value problem.
$y′′′′ + 2y′′ + y = 0, y(0) = 1, y′(0) = −1, y′′(0) = 0, y′′′(0) = 2$
Answer
$s^4 L(s) - s^3y(0) -s^2 y'(0) - s y''(0) - y'''(0) +2[s^2L(s)-sy(0)-y'(0)] +L(s) = 0$
I get the partial fraction part and got stuck, need help!
$L(s) =\frac{s^3 - s^2 +2s}{s^4 +2s^2 +1}= \frac{s-1}{s^2 +1}+\frac{s+1}{(s^2+1)^2} \:\:$Factorising the denominator I get: $(s^2+1)^2$
Please let me know if I'm heading in the wrong direction here.
| I guess it is the last fraction, $\frac{1+s}{(s^2+1)^2}$, that causes you problems.
For the (one sided) Laplace transform we have the transform pairs
$$\sin{t}\stackrel{\mathcal{L}}{\longmapsto}\frac{1}{s^2+1},$$
$$tf(t)\stackrel{\mathcal{L}}{\longmapsto}-F'(s),$$
and
$$f'(t)\stackrel{\mathcal{L}}{\longmapsto}sF(s).$$
The first two mean that
$$t\sin{t}\stackrel{\mathcal{L}}{\longmapsto}\frac{2s}{(s^2+1)^2},$$
and the third rule gives a hint on how to get rid of the $s$ in the numerator.
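Carrying the hints through (the closed form for $y(t)$ below is my own computation from these transform pairs, not stated in the answer), SymPy can confirm the resulting solution satisfies the ODE and all four initial conditions:

```python
import sympy as sp

t = sp.symbols('t')

# Inverting (s-1)/(s^2+1) + (s+1)/(s^2+1)^2 term by term gives (my computation):
y = sp.cos(t) - sp.sin(t) / 2 + t * (sp.sin(t) - sp.cos(t)) / 2

# It satisfies y'''' + 2y'' + y = 0 ...
assert sp.simplify(sp.diff(y, t, 4) + 2 * sp.diff(y, t, 2) + y) == 0

# ... and the initial conditions y(0)=1, y'(0)=-1, y''(0)=0, y'''(0)=2
ics = [sp.diff(y, t, k).subs(t, 0) for k in range(4)]
assert [sp.simplify(v) for v in ics] == [1, -1, 0, 2]
```

The $t\sin t$ and $t\cos t$ pieces come exactly from the $(s+1)/(s^2+1)^2$ fraction, as the transform pairs above suggest.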
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Group theory with analysis I've studied group theory up to isomorphism.
Topics include : Lagrange's theorem, Normal subgroups, Quotient groups, Isomorphism theorems.
I have also done metric spaces and real analysis properly. Can you recommend any good topic to be presented in a short discussion? A good proof of an interesting problem would be highly appreciated (e.g., any subgroup of $(\mathbb{R},+)$ is either cyclic or dense). Is there any such problem which relates number theory to metric spaces or real analysis?
Thanks in advance.
| If you have covered elementary point set topology a possibility might be to discuss basics of topological groups. For example, show how having a (continuous) group structure on a topological space simplifies the coarsest separation axioms ($T_0$ implies Hausdorff). Not a cool theorem, but may be the first encounter with homogeneity to some of your audience.
If you want to discuss number theory and metrics, then I would consider the Kronecker approximation theorem. Time permitting, include the (IMHO) cool application: given any finite string of decimal digits, such as $31415926535$, there is an integer exponent $n$ such that the decimal expansion of $2^n$ begins with that string of digits
$$
2^n=31415926535.........?
$$
The downside of that is that metric properties take a back seat. You only need the absolute value on the real line and the pigeon hole principle.
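For a short prefix, the claimed application can be checked by brute force (the helper below is my own illustration; Kronecker's theorem is what guarantees the loop terminates for any digit string):

```python
def pow2_with_prefix(prefix: str, limit: int = 5000) -> int:
    """Smallest n < limit such that 2**n written in base 10 starts with prefix.

    Kronecker's approximation theorem guarantees such an n exists for ANY
    digit string: log10(2) is irrational, so the fractional parts of
    n*log10(2) are dense in [0, 1).
    """
    p = 1
    for n in range(1, limit):
        p *= 2
        if str(p).startswith(prefix):
            return n
    raise ValueError("not found below limit")

n = pow2_with_prefix("31")
assert str(2 ** n).startswith("31")
```

Longer prefixes (like the full $31415926535$) need far larger exponents, which is where the logarithm/pigeonhole argument replaces brute force.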
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Limit $\lim_{n\to\infty}\frac{n^3+2}{n^2+3}$ Find the limit and prove your answer is correct
$$\lim_{n\to\infty}\frac{n^3+2}{n^2+3}$$
By divide everything by $n^3$ I got
$$\lim_{n\to\infty}\frac{n^3+2}{n^2+3}=\frac10 $$
which is undefined. So I conclude that there is no limit for this sequence. However, I don't know how to prove it
| Hint:
$$\frac{n^3+2}{n^2+3}=\frac{n^3+3n-3n +2 }{n^2+3}= \frac{n(n^2+3)}{n^2+3} +\frac{2-3n}{n^2+3}=n +\frac{2-3n}{n^2+3} $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Why $f(x) = \sqrt{x}$ is a function? Why is $f(x) = \sqrt{x}$ a function (as I found in my textbook), since for example the square root of $25$ has two different outputs ($-5,5$), and a function is defined as "A function from A to B is a rule of correspondence that assigns to each element in set A exactly one element in B"? So is $f(x) = \sqrt{x}$ not a function?
| I will assume that in this portion of your textbook it is assumed that $x \in \mathbb{R}$, and with that condition $f(x)=\sqrt{x}$ is certainly a function. Specifically $f:[0,\infty) \rightarrow [0,\infty)$. It meets the formal definition of a function (not one to many).
Your confusion is due to an inappropriate extrapolation of reasoning. Specifically this:
You know that $x^2=25 \Rightarrow x= \pm \sqrt{25}$ by the square root property. However this function in no way involves taking the square root of both sides of an equality. It is just a function $f(x)=\sqrt{x}$, and the domain is $x \geq 0$ by virtue of the fact that you are living in the real number system in this portion of your textbook.
Now $f(x)= \pm \sqrt{x}$ is certainly not a function. For example, if the question were "Let $y^2=x$. Is $y$ a function of $x$?" You would say no in this case as $y= \pm \sqrt{x}$, and you would choose $x=25$ to counter the definition of a function.
Note, your statement "the square root of $25$ has two different outputs" is false. There is only one output. However, if $y^2=25$, then $y$ has two different solutions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 2,
"answer_id": 1
} |
Is this an analog of the mean value theorem for vector-valued functions? I was reading through various proofs of the multi-dimensional analogues of the mean value theorem. Suppose we have a $C^1$ function $f: \mathbb{R}^n \supseteq U \to \mathbb{R}^m$. I had thought there was a theorem that
*
*Given a ball $B\subset U$, $x,y\in B$, there exists a point $\xi \in B$ such that $f(x)-f(y)= Df(\xi)(x-y)$,
although in general $\xi$ will not lie on the line between $x$ and $y$. But the way I had remembered to prove it is incorrect. Is this a theorem at all?
Another related theorem is that
2. Suppose $C\subset U$ is convex. Given $x,y \in C$, suppose $\|Df(\xi)\| \leq \eta$ for all $\xi $ on the line between $x$ and $y$. Then $\|f(x) - f(y)\| \leq \eta \|x-y\|$.
The proofs I'm reading for #2 are somewhat complicated (e.g. Rudin PMA pp. 113 and 219), and I wondered if there was a simpler one just leveraging #1.
Thanks for clarifying this for me.
| There is nothing like your proposed theorem (1). In one dimension, the MVT says that if you cover $d$ miles in $t$ hours, you must at some time have been traveling at $\frac dt$ miles per hour. But in more than one dimension this is false, because you can drive all over town and return home, and so have a net velocity of zero, without ever stopping the car. In one dimension, say on a long driveway, you have less room to maneuver, and have to stop the car in order to turn it around.
A specific counterexample is $f : \Bbb R \to \Bbb R^2$ given by $t\mapsto (\cos t, \sin t)$. Then $f(2\pi)-f(0) = (0,0)$, but $(0,0)$ is nowhere in the range of $Df$, since $\left|Df\right|$ is everywhere $1$. Here the car is driving around in circles. Even though the car never stopped, its net velocity after a complete trip around is still zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Block Decomposition of a linear map on $\Lambda^2TM$ I'm trying an exercise from Peter Petersen's book, and I did the following:
Let $*$ be the Hodge star operator. I know that $\Lambda^2TM$ decomposes into the $+1$ and $-1$ eigenspaces $\Lambda^+TM$ and $\Lambda^-TM$ of $*$.
I know that, if $e_1,e_2,e_3,e_4$ is an oriented orthonormal basis, then
$$e_1\wedge e_2\pm e_3\wedge e_4\in\Lambda^{\pm}TM$$
$$e_1\wedge e_3\pm e_4\wedge e_2\in\Lambda^{\pm}TM$$
$$e_1\wedge e_4\pm e_2\wedge e_3\in\Lambda^{\pm}TM$$
What i can't prove is:
Thus, any linear map $L:\Lambda^2TM\to\Lambda^2TM$ has a block decomposition
$$\begin{bmatrix}
A&B \\
C&D
\end{bmatrix}$$
$$A:\Lambda^+TM\to\Lambda^+TM$$
$$B:\Lambda^-TM\to\Lambda^+TM$$
$$C:\Lambda^+TM\to\Lambda^-TM$$
$$D:\Lambda^-TM\to\Lambda^-TM$$
| Presumably $M$ is a $4$-manifold here, since you are working with $\Lambda^2 TM$. More generally, if $M$ is $2n$-dimensional with $n$ even (so that $\ast^2 = 1$ on middle-degree forms), the Hodge star gives rise to a decomposition $\Lambda^n TM = \Lambda^+ TM \oplus \Lambda^- TM$.
First, note that any $\omega \in \Lambda^2 TM$ can be written as
$$\omega = \tfrac{1}{2}(\omega + \ast \omega) + \tfrac{1}{2}(\omega - \ast \omega) = \omega^+ + \omega^- \in \Lambda^+ TM \oplus \Lambda^- TM.$$
If $L: \Lambda^2 TM \longrightarrow \Lambda^2 TM$ is linear, we can once again write
$$L(\omega) = L(\omega^+ + \omega^-) = L(\omega^+) + L(\omega^-).$$
But now both $L(\omega^+)$ and $L(\omega^-)$ can be decomposed with respect to the direct sum:
$$L(\omega^+) = \tfrac{1}{2}(L(\omega^+) + \ast L(\omega^+)) + \tfrac{1}{2}(L(\omega^+) - \ast L(\omega^+)) = L(\omega^+)^+ + L(\omega^+)^-$$
and similarly
$$L(\omega^-) = L(\omega^-)^+ + L(\omega^-)^-.$$
Then the block decomposition of $L$ is simply
$$L = \begin{pmatrix} A & B \\ C & D \end{pmatrix},$$
$$A(\omega^+) = L(\omega^+)^+, \quad B(\omega^-) = L(\omega^-)^+,$$
$$C(\omega^+) = L(\omega^+)^-, \quad D(\omega^-) = L(\omega^-)^-.$$
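In coordinates this is easy to see concretely. Here is a NumPy sketch (my own illustration, using the ordered basis $e_{12},e_{13},e_{14},e_{23},e_{24},e_{34}$ of $\Lambda^2\mathbb{R}^4$, with the signs worked out from $e_{12}\wedge e_{34}=\mathrm{vol}$, $e_{13}\wedge e_{24}=-\mathrm{vol}$, $e_{14}\wedge e_{23}=\mathrm{vol}$):

```python
import numpy as np

# Coordinates on Lambda^2 of R^4 in the ordered basis e12, e13, e14, e23, e24, e34.
# Hodge star: *e12 = e34, *e13 = -e24, *e14 = e23 (and * is symmetric here).
idx = {'12': 0, '13': 1, '14': 2, '23': 3, '24': 4, '34': 5}
H = np.zeros((6, 6))
H[idx['34'], idx['12']] = H[idx['12'], idx['34']] = 1.0
H[idx['24'], idx['13']] = H[idx['13'], idx['24']] = -1.0
H[idx['23'], idx['14']] = H[idx['14'], idx['23']] = 1.0

assert np.allclose(H @ H, np.eye(6))          # * is an involution on Lambda^2

# Projections onto the +1 and -1 eigenspaces (each 3-dimensional)
Pp, Pm = (np.eye(6) + H) / 2, (np.eye(6) - H) / 2
assert np.linalg.matrix_rank(Pp) == np.linalg.matrix_rank(Pm) == 3

# Any linear map L on Lambda^2 splits into the four blocks A, B, C, D
rng = np.random.default_rng(0)
L = rng.standard_normal((6, 6))
A, B = Pp @ L @ Pp, Pp @ L @ Pm               # land in Lambda^+
C, D = Pm @ L @ Pp, Pm @ L @ Pm               # land in Lambda^-
assert np.allclose(A + B + C + D, L)
```

The projectors $P^\pm=\tfrac12(1\pm\ast)$ are exactly the maps $\omega\mapsto\omega^\pm$ from the answer.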
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Principal Component Analysis in Face Recognition - number of Eigenvalues? I'm at the beginning of learning about the PCA as it is applied in the field of face recognition (Eigenface algorithm) and I came about the following question:
"You're using a training set of 80 images (150x150 pixels). After visualizing the Eigenvalues you decide to keep 40% of the Eigenvectors. What dimension do the resulting projected images (template) have?"
Now, since I think the number of Eigenvalues calculated from a data set is equal to the number of dimensions of that data set, I'd say you'd get images with 9000 dimensions because the dimension of the training images is 150x150=22500 and I'd keep 40% of those.
So is this assumtion correct? Or does the number of Eigenvalues differ from the dimension of the input images ?
Thank you, if you need clarification on the question, just ask.
| Yes. That's correct. Although 40% of eigenvectors is not very meaningful. It's better to talk about how much variance is captured, i.e., the proportion of the sum of eigenvalues.
See Eckart–Young theorem to understand how eigenvalues related to reconstruction error.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How is a simple birth process time-homogeneous? Why is it that a simple birth process is time-homogeneous?
The incidence of a birth in a small time interval depends on the population size at the time of start of the interval. Doesn't this population size clearly depend on the time (since the population grows over time)?
Or is the time factor irrelevant, since another population (also undergoing simple birth process) at an earlier time might have a larger population? Even so, within the same population, population size still does depend on the time, so I'm still unsure how we can say that a simple birth process is time-homogeneous.
| The notion of time-homogeneity of a stochastic process $(X_t)$ can refer to the invariance of its distribution, that is, to the fact that the distribution of $X_t$ does not depend on $t$ and more generally to the fact that, for every set $T$ of nonnegative time indices, the distribution of $X_{t+T}=(X_{t+s})_{s\in T}$ does not depend on $t$.
Or it can refer to the invariance of its evolution, that is, to the fact that the conditional distribution of $X_{t+1}$ conditionally on $X_t$ does not depend on $t$ and more generally to the fact that, for every set $T$ of nonnegative time indices, the conditional distribution of $X_{t+T}$ conditionally on $X_t$ does not depend on $t$.
A simple birth process is time-homogeneous in the second sense (which is the most frequently used of the two).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/492980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Placing Ts on the $x$-axis
A "T" consists of two perpendicular intervals $\{c\}\times[0,a]$ and $[b,d]\times \{a\}$ (with $b<c<d$) on the plane. We say that the T is placed on point $c$. Is it possible to place non-intersecting T's on all real numbers on the $[0,1]$ interval of the $x$-axis?
I believe the answer should be "no". Suppose it were possible. We can assume that each T has equal left-halfwidth and right-halfwidth (i.e. half of the horizontal line of the T.) For each real number $x\in[0,1]$, let $h_x$ denote the height of its T and $w_x$ denote its halfwidth. Then two real numbers $x,y\in[0,1]$ have intersecting T's if $h_y< h_x$ and $|x-y|\le w_y$, and vice versa. So every time we have $h_x>h_y$, we must have $|x-y|>w_y$. How can we get a contradiction?
| This may not actually address your question, but it may help you think about it in a different way:
For any set of finitely many $x_i \in [0,1]$, we know you can construct a set of open intervals $I_i$ such that $x_i \in I_i$ and $x_i \notin I_j \;\forall j < i$. This is fairly easy to see: a really easy way to construct all of these intervals is to put all our $x_i$'s in ascending order, assign the first $x$ the interval $(-\infty,\infty)$, and each subsequent $x_i$ the interval $(x_{i-1}, \infty)$. It is clear that this works for any finite sequence of $x$ values.
To show how we manage this for an infinite number of values, I am going to put the sequence together like this: $\frac{1}{2}, \frac{1}{4}, \frac{3}{4}, \frac{1}{8}, \frac{3}{8}, \frac{5}{8}, \frac{7}{8}...$
We can then see that we can assign each number an interval like this: $(-\infty,\infty),(-\infty,\frac{1}{2}),(\frac{1}{2},\infty),(-\infty,\frac{1}{4}),(\frac{1}{4},\frac{3}{4}),(\frac{3}{4},\infty)...$
I'm not quite sure what this sort of sequence is called, but if we tile them this way then, in addition to showing we can do this for all numbers, we get a really cool fractal structure.
This has a lot of similarity to what the problem describes, and you may be able to do the same thing with the T's (except for the problem that the T's have closed intervals...).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Evaluating $\int_a^b \frac12 r^2\ \mathrm d\theta$ to find the area of an ellipse I'm finding the area of an ellipse given by $\frac{x^2}{a^2}+\frac{y^2}{b^2} = 1$. I know the answer should be $\pi ab$ (e.g. by Green's theorem). Since we can parameterize the ellipse as $\vec{r}(\theta) = (a\cos{\theta}, b\sin{\theta})$, we can write the polar equation of the ellipse as $r = \sqrt{a^2 \cos^2{\theta}+ b^2\sin^2{\theta}}$. And we can find the area enclosed by a curve $r(\theta)$ by integrating
$$\int_{\theta_1}^{\theta_2} \frac12 r^2 \ \mathrm d\theta.$$
So we should be able to find the area of the ellipse by
$$\frac12 \int_0^{2\pi} a^2 \cos^2{\theta} + b^2 \sin^2{\theta} \ \mathrm d\theta$$
$$= \frac{a^2}{2} \int_0^{2\pi} \cos^2{\theta}\ \mathrm d\theta + \frac{b^2}{2} \int_0^{2\pi} \sin^2{\theta} \ \mathrm d\theta$$
$$= \frac{a^2}{4} \int_0^{2\pi} 1 + \cos{2\theta}\ \mathrm d\theta + \frac{b^2}{4} \int_0^{2\pi} 1- \cos{2\theta}\ \mathrm d\theta$$
$$= \frac{a^2 + b^2}{4} (2\pi) + \frac{a^2-b^2}{4} \underbrace{\int_0^{2\pi} \cos{2\theta} \ \mathrm d\theta}_{\text{This is $0$}}$$
$$=\pi\frac{a^2+b^2}{2}.$$
First of all, this is not the area of an ellipse. Second of all, when I plug in $a=1$, $b=2$, this is not even the right value of the integral, as Wolfram Alpha tells me.
What am I doing wrong?
| HINT:
Putting $x=r\cos\theta,y=r\sin\theta$
$$\frac {x^2}{a^2}+\frac{y^2}{b^2}=1,$$
$$r^2=\frac{a^2b^2}{b^2\cos^2\theta+a^2\sin^2\theta}=b^2\frac{\sec^2\theta}{\frac{b^2}{a^2}+\tan^2\theta}$$
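A quick numerical check of the hint, and of why the original computation gave $\pi\frac{a^2+b^2}{2}$ instead (my own verification with SciPy):

```python
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 2.0

# Polar equation of the ellipse (origin at the centre), from the hint:
r2 = lambda t: a**2 * b**2 / (b**2 * np.cos(t)**2 + a**2 * np.sin(t)**2)
area, _ = quad(lambda t: 0.5 * r2(t), 0, 2 * np.pi)
assert abs(area - np.pi * a * b) < 1e-8           # = pi*a*b, as expected

# The question used r(theta) of the parametrisation (a cos t, b sin t), which
# is NOT the polar equation; that integrand indeed gives pi*(a^2 + b^2)/2.
bad, _ = quad(lambda t: 0.5 * (a**2 * np.cos(t)**2 + b**2 * np.sin(t)**2),
              0, 2 * np.pi)
assert abs(bad - np.pi * (a**2 + b**2) / 2) < 1e-8
```

The point is that in the parametrisation $(a\cos\theta,b\sin\theta)$ the parameter $\theta$ is not the polar angle of the point, so $|\vec r(\theta)|$ is not the polar $r(\theta)$.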
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
How to solve this ODE $tT' + 2T = 0$.
I'm solving for $T$ here, and I think that $T'$ is actually $T'(t)$, but I'm not sure, because the above is how my instructor emailed it to us; I also don't know why the independent variable is multiplying $T'$.
I used the characteristic equation and got that $T = ce^{(-2/t)t}$, which I'm very certain is not right.
| This is a separable DE. Rewrite it as $\dfrac{T'}{T}=-\dfrac{2}{t}$, and integrate.
We get
$$\ln |T|=-2\ln|t|+C.$$
Take the exponential of both sides. We get $|T|=\dfrac{K}{t^2}$. If you know an "initial" value (the value of $T$ at some given $t_0$, you can find $K$.
Remark: The $T$, as you expected, is supposed to be a function of $t$, and the $T'$ refers to differentiation with respect to $t$. It would have been clearer if the equation had been written as $t\dfrac{dT}{dt}+2T=0$.
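As a check (my own addition), SymPy's `dsolve` reproduces the same general solution $T=K/t^2$:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
T = sp.Function('T')

# Solve t*T'(t) + 2*T(t) = 0
sol = sp.dsolve(t * T(t).diff(t) + 2 * T(t), T(t))

# Expect T(t) = C1 / t**2, i.e. t**2 * T(t) is constant
assert sp.simplify((sol.rhs * t**2).diff(t)) == 0
```

An initial condition at some $t_0$ then pins down the constant, exactly as the answer describes.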
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $B$ is nilpotent.
Let $A$ and $B$ be complex matrices with $AB^2-B^2A=B$. Prove that $B$ is nilpotent.
By the way: This problem is from American Mathematical Monthly, Problem 10339,and this solution post 1996 American Mathematical Monthly, page 907.
My question: This problem has other nice solutions? Thank you.
| Multiply both sides by $B^{n-1}$ on the right so the given condition is that $B^n = AB^{n+1} - B^2 A B^{n-1}.$ The trace of a product is invariant under cyclic permutations of the factors, so the trace of the right hand side is zero. The trace of $B^n$ is zero for all $n\geq 1$ so $B$ is nilpotent.
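A concrete instance (the pair $A,B$ below is my own hand-built example, not from the problem) illustrating both the identity and the trace argument:

```python
import numpy as np

# A hand-built (hypothetical) pair with A B^2 - B^2 A = B:
B = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])      # the 3x3 shift matrix
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., -1., 0.]])

assert np.allclose(A @ B @ B - B @ B @ A, B)

# tr(B^n) = tr(A B^{n+1}) - tr(B^2 A B^{n-1}) = 0 for every n >= 1 ...
for n in range(1, 4):
    assert abs(np.trace(np.linalg.matrix_power(B, n))) < 1e-12

# ... which forces B to be nilpotent; here B^3 = 0.
assert np.allclose(np.linalg.matrix_power(B, 3), 0)
```

Over $\mathbb{C}$, vanishing of $\operatorname{tr}(B^n)$ for all $n\geq 1$ forces every eigenvalue of $B$ to be zero, which is the final step of the argument above.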
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Ideals generated by roots of polynomials Let $\alpha$ be a root of $x^3-2x+6$. Let $K=\mathbb{Q}[\alpha]$ and denote by $\mathscr{O}_K$ the ring of integers of $K$. Now consider the ideal generated by $(4,\alpha^2,2\alpha,\alpha -3)$ in $\mathscr{O}_K$. I want to prove that this ideal is actually the entire ring $\mathscr{O}_K$. Do I need to know what $\alpha$ is to do this? Is there any theorem which could help?
| Hint/steps for one route that caught my eye. There may be a shorter one out there. Call that ideal $I$. Observe that $\alpha \in {\cal O}_K$.
*
*Show that $\alpha+1\in I$.
*Show that $\alpha^2-1\in I$.
*Show that $1\in I$.
Remark: this is my second suggestion - the first suggestion had 5 steps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
A Modular Diophantine Equation $a = (N \bmod c)\bmod d$
$b = (N \bmod d)\bmod c$
That is, $a$ and $b$ are the remainders of $N$ when divided by $c$ and $d$ in the two different orders.
What can we say about $N$ if $a,b,c,d$ are known and $N < cd$?
| Without loss of generality, assume that $c<d$. Moreover assume that $x \bmod n$ denotes the unique representative in $[0,n-1]$.
The first remark is that $a=N \bmod c$, because (since $c<d$) this value is already reduced modulo $d$. For the other reduction, we only know that $N \bmod d=b+tc$, where $$0\leq t \leq \frac{(d-1)-b}{c}.$$
Now, from $N \bmod c$ and $N \bmod d$ you can easily reconstruct $N$ using the CRT (since $0\leq N <cd$).
In addition, if $N_t$ denotes the solution obtained for some acceptable value of $t$, we have $N_t=N_0+tc$.
Here is an example, take $c=11$, $d=101$, $a=2$ and $b=3$. Since $(d-4)/c\approx 8.8$, we have $t\in [0;8]$. This yields the following set of solutions:
$$
S=\{508,519,530,541,552,563,574,585,596\}
$$
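The example can be verified by brute force (my own check):

```python
c, d, a, b = 11, 101, 2, 3

# Brute force every N in [0, cd) against the two double reductions
sols = [N for N in range(c * d)
        if (N % c) % d == a and (N % d) % c == b]

assert sols == [508, 519, 530, 541, 552, 563, 574, 585, 596]

# Consecutive solutions differ by c, matching N_t = N_0 + t*c
assert all(y - x == c for x, y in zip(sols, sols[1:]))
```

This confirms both the solution set $S$ above and the structural claim $N_t = N_0 + tc$.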
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Factorizing a difference of two $n$-th powers How can it be proved that
$$a^n-b^n=\displaystyle\prod_{j=1}^{n}(a-\omega^j b)$$
where $\omega=e^{\frac{2\pi i}{n}}$ is a primitive $n$-th root of $1$?
| Hint: $$a^n-b^n = b^n((a/b)^n-1).$$
Now you can represent a polynomial as a product of terms $(x-x_i)$ where $x_i$ are its roots.
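The identity is easy to test numerically (my own check, using only the stdlib `cmath` module):

```python
import cmath

def root_product(a, b, n):
    """Evaluate the product over j = 1..n of (a - w^j b), w = exp(2*pi*i/n)."""
    w = cmath.exp(2j * cmath.pi / n)
    p = 1
    for j in range(1, n + 1):
        p *= a - w**j * b
    return p

# Compare with a^n - b^n for a few sample values
for a, b, n in [(3, 2, 5), (2, 1, 3), (1.5, -0.7, 8)]:
    assert abs(root_product(a, b, n) - (a**n - b**n)) < 1e-9
```

For instance $a=3$, $b=2$, $n=5$ gives $3^5-2^5=211$ on both sides, up to floating-point error.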
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that the field F is a vector space over itself. How can I prove that a field F is a vector space over itself?
Intuitively, it seems obvious because the definition of a field is
nearly the same as that of a vector space, just with scalers instead
of vectors.
Here's what I'm thinking:
Let V = { (a) | a in F } describe the vector space for F. Then I just show
that vector addition is commutative, associative, has an identity and
an inverse, and that scalar multiplication is distributary,
associative, and has an identity.
Example 1: Commutativity of addition, Here x,y are in V
(x)+(y) = (y)+(x)
(x+y)=(y+x): vector addition
x + (X+y) = (X + x)+y: associative property
Example 2: Additive inverse
x,y,0 in V
(x)+(y)=(0)
(x+y)=(0) vector addition
Let y=-x in V
(X+-x)=(0) substitute
(0)=(0) simplify
I don't know if I'm going in the right direction with this, although
it seems like it should be a pretty simple proof. I think mostly I'm
having trouble with the notation.
Any help would be greatly appreciated! Thanks in advance!
| If in the axioms of vector spaces you assume that the vector space is the same as the field, and you identify vector addition and scalar multiplication respectively with addition and multiplication in the field, you will see that all axioms are contained in the set of axioms of a field. There is nothing more to check than this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Need help with $\int_0^\infty\left(\pi\,x+\frac{S(x)\cos\frac{\pi x^2}2-C(x)\sin\frac{\pi x^2}2}{S(x)^2+C(x)^2}\right)dx$ Let
$$I=\int_0^\infty\left(\pi\,x+\frac{S(x)\cos\frac{\pi x^2}2-C(x)\sin\frac{\pi x^2}2}{S(x)^2+C(x)^2}\right)dx,\tag1$$
where
$$S(x)=-\frac12+\int_0^x\sin\frac{\pi t^2}2dt,\tag2$$
$$C(x)=-\frac12+\int_0^x\cos\frac{\pi t^2}2dt\tag3$$
are shifted Fresnel integrals.
Mathematica and Maple return the integral $I$ unevaluated. Numerical integration suggests that
$$I\stackrel?=-\frac\pi4,\tag4$$
but I was not able to prove it.
So, I ask for your help with this problem.
| Notice the integrand can be rewritten as:
$$\pi x + \frac{S C' - C S'}{S^2 + C^2}
= \pi x + \frac{1}{2i}\left(\frac{C' - iS'}{C - iS} - \frac{C' + iS'}{C + iS}\right)
= \pi x + \frac{1}{2i} \log\left(\frac{C-iS}{C+iS}\right)'
$$
The integral is equal to
$$\lim_{x\to\infty} \left[\frac{\pi t^2}{2} + \frac{1}{2i}\log\left(\frac{C(t)-iS(t)}{C(t)+iS(t)}\right)\right]_0^x
=\lim_{x\to\infty} \left(\frac{\pi x^2}{2} + \Im ( \log M(x) )\right)
$$
where $\displaystyle\quad M(x) = \frac{C(x)-iS(x)}{C(0)-iS(0)}$.
Using
$$\int_0^{\infty} \cos \frac{\pi t^2}{2} dt = \int_0^{\infty} \sin \frac{\pi t^2}{2} dt = \frac12,$$
we can simplify $M(x)$ to
$$\begin{align}
\frac{\color{red}{-}\int_{x}^{\infty} e^{-i\frac{\pi}{2} t^2}dt}{\color{red}{-\frac12+\frac12 i}}
= & \sqrt{2}e^{i(\color{red}{\frac{\pi}{4}}-\color{blue}{\frac{\pi}{2}x^2})}\int_0^{\infty}
e^{-i(\frac{\pi}{2} t^2 + \pi x t)} dt\\
\sim &
\frac{\sqrt{2}}{\color{green}{i} \pi x} e^{i(\color{red}{\frac{\pi}{4}}-\color{blue}{\frac{\pi}{2}x^2})}( 1 + O(\frac{1}{x}) )
\quad\text{ for large } x.
\end{align}
$$
From this, we find the integral equals to
$$\lim_{x\to\infty} \left( \frac{\pi x^2}{2} + \color{red}{\frac{\pi}{4}}
- \color{green}{\frac{\pi}{2}} - \color{blue}{\frac{\pi x^2}{2}} + O(\frac{1}{x}) \right) = -\frac{\pi}{4}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
Will the rules of calculus stay the same when a real-valued function is defined over infinite number of variables? So the question would be:
*
*Can we ever talk about a real-valued function that is defined over an infinite number of variables?
*Will the rules of calculus remain the same for such functions as described in 1.?
| There is a theory of "calculus on Banach spaces", which comes close to what you are asking for. See for example Wikipedia:Fréchet derivatives for some information about Fréchet and Gateaux derivatives for functions $f : U \to V$ where $U$ and $V$ are Banach spaces.
Some things remain the same in the infinite dimensional setting, but some things are fundamentally different. For example the closed unit ball in a Banach space is compact if and only if the space is finite dimensional.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is there a triangle like this? This is my question that I posted at https://mathematica.stackexchange.com/questions/32338/is-there-a-triangle-like-this "I want to find the numbers $a$, $b$, $c$, $d$ of the function $y = \dfrac{a x + b}{c x + d}$ so that the triangle $ABC$ with three points $A$, $B$, $C$ have integer coordinates and lies on the graph of the given function, then the centroid of the triangle $ABC$ (have also integer coordinates) is also lies on the graph". The answer at that site is yes. But he said "I think I was just lucky to get an integer solution since I haven't imposed all the constraints".
| My student found a function $y = \dfrac{93 x + 6}{x+2}$. Then, $A(-6,138)$, $B(8,75)$, $C(-62,96)$ and centroid $G(-20,103)$.
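A quick check in Python (not part of the original answer) that all three vertices and the centroid lie on $y=\frac{93x+6}{x+2}$, using exact rational arithmetic:

```python
from fractions import Fraction

def on_curve(x, y):
    # y = (93x + 6) / (x + 2), computed exactly with rationals
    return Fraction(93 * x + 6, x + 2) == y

A, B, C = (-6, 138), (8, 75), (-62, 96)
G = (sum(p[0] for p in (A, B, C)) // 3, sum(p[1] for p in (A, B, C)) // 3)

assert all(on_curve(x, y) for x, y in (A, B, C, G))
print(G)  # → (-20, 103)
```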
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can I explain this formally? Let $f: \Bbb R \to \Bbb R$ and $x \in \Bbb R$. Suppose that $\lim_{y \to x+} f(y)$ exists as a real number. If there is an $r \in \Bbb R$ such that
$$\lim_{y \to x+} f(y) > r$$
then there exists $n \in \Bbb N$ (dependind on $x$ and $r$) such that
$$f(z)>r$$
whenever $x<z<x+1/n$.
I think this is intuitively obvious, but I'm trying to understand this in a formal way, maybe with the definition of limit of a function.
| If $L$ is the limit, let $\epsilon = L - r$. Then by definition of a limit, there exists a $\delta > 0$ such that whenever $z \in (x, x + \delta)$, we have $|f(z) - L| < \epsilon$. In particular, it follows that $f(z) > r$ for all such $z$.
Now just choose $n$ sufficiently large that $\frac{1}{n} < \delta$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Matrix Multiplication Problem I'm working on the following problem and I can't seem to come up with the right answer.
$$ \text{Let}: A^{-1} =
\begin{bmatrix}
1 & 2 & 1 \\
0 & 3 & 1 \\
4 & 1 & 2 \\
\end{bmatrix}
$$
Find a matrix such that:
$$ ACA =
\begin{bmatrix}
2 & 1 & 3 \\
-1 & 2 & 2 \\
2 & 1 & 4 \\
\end{bmatrix}
$$
Could someone point me in the right direction? Thanks!
| Let $$ B =
\begin{bmatrix}
2 & 1 & 3 \\
-1 & 2 & 2 \\
2 & 1 & 4 \\
\end{bmatrix}
$$
$ACA = B$ if and only if $A^{-1}ACA = A^{-1}B$ if and only if $A^{-1}ACAA^{-1} = A^{-1}BA^{-1}$. Now, multiplication between matrices is not commutative but it is associative! Hence you have:
$$A^{-1}ACAA^{-1} = (A^{-1}A)C(AA^{-1}) = C$$
Then to find such a matrix $C$ you just need to calculate $A^{-1}BA^{-1}$, which is something you can do explicitly.
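A sketch of the computation in Python (hypothetical helper names; exact rational arithmetic via `fractions`, so the final assertion $ACA=B$ is exact):

```python
from fractions import Fraction

def mat(rows):
    return [[Fraction(v) for v in r] for r in rows]

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def inverse(M):
    # Gauss-Jordan elimination over the rationals
    n = len(M)
    aug = [row[:] + [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

Ainv = mat([[1, 2, 1], [0, 3, 1], [4, 1, 2]])
B    = mat([[2, 1, 3], [-1, 2, 2], [2, 1, 4]])
A    = inverse(Ainv)
C    = mul(mul(Ainv, B), Ainv)          # C = A^{-1} B A^{-1}
assert mul(mul(A, C), A) == B           # ACA = B
```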
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/493964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The order of permutation groups and alternating groups The question was:
True or False: $\forall{n}\in{\mathbb{N}}$ the group $S_n$ and $A_n$ have different sizes.
My answer is False. That is since both $A_1 =(\text{id})$ and $S_1 =(\text{id})$.
Can any one confirm my answer please? Thank you very much.
| Of course $S_n$ in general is much larger than $A_n$. $A_n$ is the group of permutations with even parity only, while $S_n$ is the group of all permutations of $\{1,\cdots,n\}$. Not all permutations are of even parity, e.g., consider
$$(1,2)\circ(1,3)\circ(1,5)$$
a product of three transpositions, hence a permutation of odd parity (in fact a $4$-cycle). In general any cycle of even length does not belong to $A_n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Drawing 3 cards of same suit and 2 of a different suit What is the probability of having 3 cards of the same suit and $2$ cards of the same suit (but a different suit than the first three) in a $5$ card hand from a standard $52$ card deck?
What is the difference between solving this problem using approach (1) and (2). Assuming we divide both by ${52 \choose 5}$
$(1)$
$$
{4 \choose 1}\cdot{13 \choose 3}\cdot{3 \choose 1}\cdot{13 \choose 2}
$$
$(2)$
$$
{4 \choose 2}\cdot{13 \choose 3}\cdot{13 \choose 2}
$$
Why can't I just pick the two suits all at once? Why do they have to be separated out? In the first approach I don't have the problem of duplicates. A {10_Hearts, J_Hearts, K_Hearts, 3_Spades, Q_Spades} doesn't equal {3_Spades, Q_Spades, X_spades, 10_Hearts, J_Hearts}
| Select the suit with 2 cards (1 out of 4), select their values (2 out of 13), then select the suit with 3 cards (1 out of the remaining 3) and then the 3 values (3 out of 13) for a total of
$$
\binom{4}{1} \cdot \binom{13}{2} \cdot \binom{3}{1} \cdot \binom{13}{3} = 267696
$$
Approach $(2)$ undercounts by a factor of $2$: $\binom{4}{2}$ chooses the two suits as an unordered pair and never records which of them supplies the three cards, whereas the ordered choice $\binom{4}{1}\binom{3}{1}=12=2\binom{4}{2}$ does.
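Both the formula and its relation to approach $(2)$ can be checked in Python, together with a brute-force cross-check on a smaller deck (a $4$-suit, $5$-rank deck is assumed just to keep the enumeration fast):

```python
from itertools import combinations
from math import comb

# full 52-card deck: the counting formula from the answer
assert 4 * comb(13, 2) * 3 * comb(13, 3) == 267696
# approach (2) gives exactly half, since an unordered pair of suits
# does not record which suit supplies the three cards
assert 267696 == 2 * comb(4, 2) * comb(13, 3) * comb(13, 2)

# brute-force cross-check on a reduced 4-suit, 5-rank deck
deck = [(suit, rank) for suit in range(4) for rank in range(5)]
count = sum(
    1 for hand in combinations(deck, 5)
    if sorted(sum(1 for s, _ in hand if s == suit) for suit in range(4)) == [0, 0, 2, 3]
)
assert count == 4 * comb(5, 3) * 3 * comb(5, 2)  # = 1200
```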
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Binomial Expansion involving two terms? How would you find the 4th term in the expansion $(1+2x)^2 (1-6x)^{15}$?
Is there a simple way to do so?
Any help would be appreciated
| $$(1+2x)^2 (1-6x)^{15}=\sum_{i=0}^{2}\binom{2}{i}(2x)^i\sum_{j=0}^{15}\binom{15}{j}(-6x)^j=$$
using $i+j=3$ for fourth term we get
$$\sum_{i+j=3}\binom{2}{i}2^i\binom{15}{j}(-6)^jx^3=$$
$$=\left(\binom{15}{3}(-6)^3+4\binom{15}{2}(-6)^2+4\binom{15}{1}(-6)^1\right)x^3=$$
$$=(-98280+15120-360)x^3=-83520x^3$$
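A quick check by convolving coefficient lists (a small sketch, not from the original answer):

```python
def polymul(p, q):
    # convolution of coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [1, 4, 4]                      # (1 + 2x)^2
q = [1]
for _ in range(15):                # build (1 - 6x)^15 by repeated multiplication
    q = polymul(q, [1, -6])

coeffs = polymul(p, q)
assert coeffs[3] == -83520         # the fourth term is the x^3 coefficient
```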
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Sum of Fourier Series I need to find the Fourier Series for $f\in \mathcal{C}_{st}$ that is given by
$$f(x)=\begin{cases}0,\quad-\pi<x\le 0\\ \cos(x),\quad0\le x<\pi\end{cases}.$$
in the interval $]-\pi,\pi[$ and give the sum of the series for $x=p\pi,p\in\mathbb{Z}$.
What I know:
If $f(x)=\sum_{n=-\infty}^{\infty}\alpha_ne^{inx}$ on $[-\pi,\pi]$, then
$\alpha_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx}dx$.
Questions:
Can I use the above formula (the intervals are different)?
Should I use integration by parts?
To compute the sum, do I just substitute $x=p\pi$ in?
| Here is how you advance
$$ \alpha_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx}dx= \frac{1}{2\pi}\int_{-\pi}^{0}(0)e^{-inx}dx + \frac{1}{2\pi}\int_{0}^{\pi} \cos(x)e^{-inx}dx$$
$$ \implies \alpha_n = -{\frac {in \left( {{\rm e}^{-i\pi \,n}}+1 \right) }{2\pi({n}^{2}-1)}}.$$
The case $n=1$ can be obtained from the above formula as
$$ \alpha_1=\lim_{n\to 1}\alpha_{n}=\frac{1}{4}. $$
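A direct numerical evaluation of the defining integral confirms the values worked out by hand: $\alpha_0=0$, $\alpha_1=\frac14$, $\alpha_2=-\frac{2i}{3\pi}$, and $\alpha_n=0$ for odd $n\ne\pm1$ (there $e^{-i\pi n}+1=0$):

```python
import cmath, math

def alpha(n, steps=100000):
    # α_n = (1/2π) ∫_{-π}^{π} f(t) e^{-int} dt ; since f vanishes on (-π, 0],
    # only ∫_0^π cos(t) e^{-int} dt contributes (midpoint rule)
    h = math.pi / steps
    s = sum(math.cos((k + 0.5) * h) * cmath.exp(-1j * n * (k + 0.5) * h)
            for k in range(steps))
    return s * h / (2 * math.pi)

assert abs(alpha(0)) < 1e-7
assert abs(alpha(1) - 0.25) < 1e-7
assert abs(alpha(3)) < 1e-7
assert abs(alpha(2) - (-2j / (3 * math.pi))) < 1e-7
```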
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If $S$ is a subring of $R$ then $\operatorname{char}(S)\leq\operatorname{char}(R)$, when $S$ and $R$ have finite characteristic $\newcommand{\ch}{\operatorname{char}}$
a) Let $S$ be a subring of a ring $R$ and let $\ch (S)$ and $\ch (R)$ be finite. Then $\ch (S) \le \ch (R)$. Could someone give me a hint.
b) Prove that if $S$ and $R$ have the same unity then $\ch (R)$ and $\ch (S)$ are equal.
| $\newcommand{\ch}{\operatorname{ch}}$Let us start with the second part. If a ring has a unity, then its characteristic is the additive period of the unity. So if $\mathbf{r}$ and its subring $\mathbf{s}$ have the same unity, then they have the same characteristic.
In a ring that may or may not have a unity, the characteristic is the (possibly infinite) least common multiple (lcm) of the periods of its elements. If $\ch(\mathbf{r}) = n$ is finite, then $n x = 0$ for all elements of $x \in \mathbf{r}$, and thus for all elements $x \in \mathbf{s} \subseteq \mathbf{r}$. Thus the period of all elements of $\mathbf{s}$ divides $n$, and so the lcm $m = \ch(\mathbf{s})$ of all these periods divides $n = \ch(\mathbf{r})$.
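A concrete illustration (an assumed example, not from the original answer): in $R=\mathbb Z/6\mathbb Z$ the subset $S=\{0,2,4\}$ is a subring whose unity is $4\ne 1$, with $\operatorname{ch}(S)=3$ dividing $\operatorname{ch}(R)=6$:

```python
from math import gcd

def char_of(elems, n):
    # characteristic = lcm of the additive periods; in Z/nZ the period
    # of a is n / gcd(a, n)
    c = 1
    for a in elems:
        p = n // gcd(a, n)
        c = c * p // gcd(c, p)
    return c

S = [0, 2, 4]
# S is closed under + and * mod 6, and 4 acts as its unity
assert all((a + b) % 6 in S and (a * b) % 6 in S for a in S for b in S)
assert all((4 * s) % 6 == s for s in S)
assert char_of(S, 6) == 3
assert char_of(range(6), 6) == 6
assert char_of(range(6), 6) % char_of(S, 6) == 0   # ch(S) divides ch(R)
```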
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Result $a \equiv b \mod m$ or $c \equiv d \mod m$ is false $\Rightarrow ac \equiv cd \mod m$ is false? Root extraction $\mod m$ I know that $a \equiv b \pmod{m}$ and $c \equiv d \pmod{m}$ implies $ac \equiv bd \pmod{m}$.
However, can one show that if $a \equiv b \pmod{m}$ is false, then:
$ac \equiv bd \pmod{m}$ is false, when $c \equiv d \pmod{m}$ is false.
and
$ac \equiv bd \pmod{m}$ is false, when $c \equiv d \pmod{m}$ is true.
That is the product of two congruences modulo $m$ is always false, if one of them is false ?
I need this result to understand how to extract roots modulo $m$.
$x^k \equiv b \pmod{m}$ has the solution $b^u$, since one can find $u, w$ such that:
$x^k \equiv b \Rightarrow b^u \equiv (x^k)^u \equiv x^{1+\theta(m)w}\equiv x \pmod{m}$ where $\theta$ denote Euler's Totient function.
Here I see that $x$ is congruent to $b^u$ and $(x^k)^u$ but how to I do tell if $b^u \equiv (x^k)^u \Rightarrow x^k \equiv b \pmod{m}$ ?
Thanks for your time.
| Isn't this as simple as noting that $1\cdot k \equiv k\cdot 1\pmod m$ for all $k$? So if we take $k$ such that $k\not\equiv 1\pmod m$ then we have a counter-example to your claim.
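A tiny brute-force search (illustrative only) shows such counterexamples abound; the simplest is $a=d=1$, $b=c=k$:

```python
m = 5
found = [(a, b, c, d)
         for a in range(m) for b in range(m)
         for c in range(m) for d in range(m)
         if a != b and c != d and (a * c) % m == (b * d) % m]
assert (1, 2, 2, 1) in found   # 1·2 ≡ 2·1 (mod 5) although 1 ≢ 2 and 2 ≢ 1
assert len(found) > 0
```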
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Is there a name for this particular linear fractional transformation? Is there a conventional name for this function?
$$
\begin{align}
g(t) & = \frac{1+it}{1-it} \\[15pt]
& = \frac{1-t^2}{1+t^2} + i\frac{2t}{1+t^2}.
\end{align}
$$
This function comes up from time to time.
It enjoys these properties:
*
*(A restriction of) it is a homeomorphism from $\mathbb R\cup\{\infty\}$ to $\{z\in\mathbb C:|z|=1\}$ (where $\infty$ the infinity of complex analysis, so the former space is the one-point compactification of the real line).
*${}$
$$
\begin{align}
\text{If $x,y\in\mathbb R$ and $g(t)=x+iy$ then }g(-t) & = \phantom{-}x-iy, \\[10pt]
\text{and }g(1/t) & = -x+iy.
\end{align}
$$
*$\text{If $t=\tan\frac\theta2$ then }g(t) = e^{i\theta} = \cos\theta+i\sin\theta.\tag{1}$
*Via $(1)$ and the consequence that $d\theta=2\,dt/(1+t^2)$, it reduces certain differential equations involving rational functions of $\sin\theta$ and $\cos\theta$ to differential equations involving rational functions of $t$, and in particular reduces antidifferentiation of rational functions of $\sin\theta$ and $\cos\theta$ to antidifferentiation of rational functions of $t$.
*It is a bijection from $\mathbb Q\cup\{\infty\}$ to the set of rational points on the circle (thus showing that infinitely many of those exist, and consequently infinitely many primitive Pythagorean triples exist).
*As pointed out by "Did" in comments below, it is a conformal bijection between the Poincaré halfplane $\{z\in\mathbb C\mid \Im(z)>0\}$ and the open unit disk $\{z\in\mathbb C\mid |z|<1\}$ (and also between the complementary half-plane and the complement of the open unit disk).
Is there a conventional name for this function?
| It is a rational equivalence between the line and the circle. That's what an algebraic geometer would say in any case. More precisely, it is a birational map.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Laplace transform of product of $\sinh(t)$ and $\cos(t)$ If I have a function $f(t)=\sinh(t)\cos(t)$ how would I go about finding the Laplace transform? I tried putting it into the integral defining Laplace transformation:
$$
F(s)= \int_0^\infty \mathrm{e}^{-st}\sinh(t)\cos(t)\,\mathrm{d}t
$$
But this integral looks very hairy to me. Can $\sinh(t)\cos(t)$ be rewritten as something more manageable perhaps?
| Write $\sinh t = \dfrac{e^{t}-e^{-t}}{2}$, so that, taking the constant outside and splitting the transform,
$$\mathcal{L}[\sinh t\cos t]=\frac12\left(\mathcal{L}[e^{t}\cos t]-\mathcal{L}[e^{-t}\cos t]\right).$$
First solve $\mathcal{L}[\cos t]=\dfrac{s}{s^{2}+1}$. Then, using the shifting property $\mathcal{L}[e^{at}f(t)]=F(s-a)$,
$$\mathcal{L}[e^{t}\cos t]=\frac{s-1}{(s-1)^{2}+1},\qquad\mathcal{L}[e^{-t}\cos t]=\frac{s+1}{(s+1)^{2}+1}.$$
Half the difference of these two transforms is the answer.
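A numerical sanity check of the closed form $\frac12\left[\frac{s-1}{(s-1)^2+1}-\frac{s+1}{(s+1)^2+1}\right]$, which also simplifies to $\frac{s^2-2}{s^4+4}$ (a sketch, with a plain trapezoid rule):

```python
import math

def laplace_numeric(f, s, T=40.0, n=100000):
    # trapezoid approximation of ∫_0^T e^{-st} f(t) dt; the tail beyond T
    # is negligible for s > 1 since the integrand decays like e^{-(s-1)t}
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

def closed_form(s):
    return 0.5 * ((s - 1) / ((s - 1) ** 2 + 1) - (s + 1) / ((s + 1) ** 2 + 1))

for s in (2.0, 3.0, 5.0):
    num = laplace_numeric(lambda t: math.sinh(t) * math.cos(t), s)
    assert abs(num - closed_form(s)) < 1e-5
    assert abs(closed_form(s) - (s ** 2 - 2) / (s ** 4 + 4)) < 1e-12
```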
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Prove that $\int \limits_0^{2a}f(x)dx=\int_0^a[f(x)+f(2a-x)]dx$ How do I attempt this? What is the aim of the proof?
I've been trying silly things with the LHS and the RHS but cannot produce anything of use.
Can someone offer a very slight hint on proceeding?
Thanks
I did this so far:
$\displaystyle \int \limits_0^{2a}f(x)dx=\int \limits_0^a[f(x)+f(2a-x)]dx = \int \limits_0^a f(x)dx + \int \limits_0^a f(2a-x)dx$
| $$\int_0^a f(2a-x)dx = \int_a^{2a} f(y)dy.$$
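Numerically, the identity holds for any integrable test function; a small sketch with an arbitrary choice $f(x)=e^x\sin x$ and $a=1$:

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson rule (n must be even)
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))
    return s * h / 3

f = lambda x: math.exp(x) * math.sin(x)
a = 1.0
lhs = simpson(f, 0, 2 * a)
rhs = simpson(lambda x: f(x) + f(2 * a - x), 0, a)
assert abs(lhs - rhs) < 1e-10
```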
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Expectation conditioned on a sub sigma field Let $X$ and $Y$ be two integrable random variables defined on the same probability space $(\Omega,\mathcal F,\mathbf P)$
Let $\mathcal A$ be a sub-sigma-field such that $X$ is $\mathcal A$-measurable.
*
*Show that $E(Y|A)=X$ implies $E(Y|X)=X$
*Show by counter-example that $E(Y|X)=X$ does not imply that $E(Y|A)=X$
| Because $X$ is $A-$measurable, $\sigma(X)\subseteq A$ therefore:
$$
\mathbb{E}(Y|X)=\mathbb{E}[\mathbb{E}(Y|A)|X]=\mathbb{E}[X|X]=X.
$$
For the next part, suppose that $\sigma(X)\subsetneq A$; then there is a set $G\notin \sigma(X)$ with $G\in A$, and suppose $\mathbb{E}[\mathbb{1}_G|X]=\mathbb{P}(G|X)=p>0$ is constant. Consider $Y=\frac{1}{p}X\mathbb{1}_G$. Then $\mathbb{E}[Y|X]=X$; however, $\mathbb{E}[Y|A]=\frac{X}{p}\mathbb{E}[\mathbb{1}_G|A]=\frac{1}{p}X\mathbb{1}_G$, which is not equal to $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding the Determinant of a $3\times 3$ matrix. Show that:
$$
\begin{vmatrix}
x & a & b \\
x^2 & a^2 & b^2 \\
a+b & x+b & x+a \\
\end{vmatrix} = (b - a)(x-a)(x-b)(x+a+b)
$$
I tried expanding the whole matrix out, but it looks like a total mess. Does anyone have an idea how this could be simplified?
| You can expand the determinant and look for "shortcuts"; my solution:
$$
\begin{vmatrix}
x & a & b \\
x^2 & a^2 & b^2 \\
a+b & x+b & x+a \\
\end{vmatrix} =
x \begin{vmatrix}
a^2 & b^2\\
x+b & x+a\\
\end{vmatrix}-a\begin{vmatrix}
x^2 & b^2\\
a+b & x+a\\
\end{vmatrix}+b\begin{vmatrix}
x^2 & a^2\\
a+b & x+b\\
\end{vmatrix}
$$
now just expand the matrices determinant:
$$=x(a^2(x+a)-b^2(x+b))-a(x^2(x+a)-b^2(a+b))+b(x^2(x+b)-a^2(a+b))=$$
open the brackets:
$$a^2x^2+a^3x-b^2x^2-b^3x-ax^3-a^2x^2+a^2b^2+ab^3+bx^3+b^2x^2-a^3b-a^2b^2=
$$
eliminate opposite sign expressions :
$$
a^3x-b^3x-ax^3+ab^3+bx^3-a^3b=
$$
$$x(a^3-b^3)-x^3(a-b)-ab(a^2-b^2)=$$
extract common divisor :
$$[b-a](x^3-x(a^2+b^2+ab)+ab(a+b))=$$
$$[b-a](x^3-x((a+b)^2-ab)+ab(a+b))=$$
$$[b-a](x(x^2-(a+b)^2)+ab(x+a+b))=[b-a][x+a+b](x(x-a-b)+ab)=[b-a][x+a+b](x^2-x(a+b)+ab)=[b-a][x+a+b][x-a][x-b].
$$
$$
\square
$$
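The identity can be sanity-checked with exact integer arithmetic at a few sample points (a quick sketch, not part of the original answer):

```python
def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for x, a, b in [(2, 3, 5), (-1, 4, 7), (10, -3, 6), (0, 1, -2)]:
    M = [[x, a, b],
         [x * x, a * a, b * b],
         [a + b, x + b, x + a]]
    assert det3(M) == (b - a) * (x - a) * (x - b) * (x + a + b)
```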
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Why can't I take the square root of both sides when finding the roots of a quadratic equation? Is $x^2 - x - 12 = 0$ equivalent to $x = \sqrt{x + 12}$?
I started with
$x^2 - x - 12 = 0$
and made the following changes:
$x^2 - x - 12 = 0$
$x^2 = x + 12$
$x = \sqrt{x + 12}$
From here I can eyeball it and see that x = 4 and x = -3 are solutions.
I know there is a better way to find the roots, but I was told that $x^2 - x - 12 = 0$ and $x = \sqrt{x + 12}$ are not equivalent. If they are not, why not?
| It's not equivalent for two reasons:
*
*We don't know that $x + 12$ is non-negative, so it might not be valid to take square roots
*$\sqrt{x + 12}$ is always non-negative by definition, provided it's defined. So $x = -3$ is not actually a solution to this new equation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
When multiplying 2 positive integers, why when we round the larger number up and the smaller number down, the product will be lower and vice versa. I experimented with 2 digits multiplication and found out that when we rounded the larger number up and the smaller number down, the product will be lower. When we rounded the larger number down and the smaller number up, the product will be higher.
For example (We first multiply 97 by 84):
$97\times84=8148$
$98\times83=8134$
$99\times82=8118$
$96\times85=8160$
$95\times86=8170$
$94\times87=8178$
$90\times90=8100$
$90\times91=8190$
My questions are:
1. How can we explain this? Do we need to use any inequality such as the Geometric Mean or something?
2. It seems that $90\times91=8190$ is the highest, if we we plot a graph can we explain why is it the maximum?
3. Is it a general property of integer, are there any other systems with other operations that have this property, can we explain it more abstractly?
Thank you!
| Consider $a+b=2k$ and assume WLOG $a\ge b$
Then $a=k+x$, $b=k-x$ for any $x\ge0$
Finally, $ab=k^2-x^2$, and since
$$x^2\ge0\implies k^2-x^2\le k^2$$
And equality is achieved when $x=0$, that is, $a=b$. Finally, we note that "rounding $a$ down and $b$ up" is precisely minimizing $x$, hence maximizing the product.
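For the numbers in the question ($a+b=181$), enumerating every split confirms the most balanced pair maximizes the product:

```python
pairs = [(a, 181 - a) for a in range(1, 181)]
best = max(pairs, key=lambda p: p[0] * p[1])
assert sorted(best) == [90, 91] and 90 * 91 == 8190
# less balanced splits give smaller products, matching the table above
assert 97 * 84 == 8148 and 99 * 82 == 8118 and 95 * 86 == 8170
```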
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/494975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Why does the Method of Successive Approximations for a Differential Equation work? Time dependent perturbation theory in quantum mechanics is often derived using the Method of Successive Approximations for a Differential Equation. I have not seen an explanation or a more rigorous outline of proof as to why this should work?
Normally we start with this equation:
$$
i\hbar \dot d_f=\sum_n\langle f^0|H^1(t)|n^0\rangle e^{i\omega_{fn} t}d_n(t)
$$
and then we say that to zeroth order the RHS is 0. and then we plug that into the first order and so on, as outlined in Shankar:
To zeroth order, we ignore the right-hand side of Eq. (18.2.5) completely, because of the explicit $H^1$, and get
$$\dot d_f=0 \tag{18.2.7}$$
in accordance with our expectations. To first order, we use the zeroth-order $d_n$ in the right-hand side because $H_1$is itself of first order. This gives us the first-order equation
$$
\dot d_f(t)=\frac{-i}\hbar\langle f^0|H^1(t)|i^0\rangle e^{i\omega_{fi} t} \tag{18.2.8}
$$
the solution to which, with the right initial conditions, is
$$
d_f(t) = \delta_{fi} - \frac{i}\hbar\int_0^t \langle f^0|H^1(t')|i^0\rangle e^{i\omega_{fi} t'}\text dt' \tag{18.2.9}
$$
But how could we explain why this should work more rigorously?
| Cleaning stuff up, here are your givens:
$$i\frac{\partial{|\psi(t)\rangle}}{\partial t}= {H(t)}{|\psi(t)\rangle},\quad{|\psi(t_0)\rangle}=|\psi\rangle$$
Now, I claim that this series is a solution:
$$|\psi(t)\rangle=\left(1+(-i)\int_{t_0}^tdt_1 {H(t_1)}+(-i)^2\int_{t_0}^tdt_1\int_{t_0}^{t_1}dt_2{H(t_1)H(t_2)}+...\right)|\psi\rangle$$
Check it, by differentiating by $t$, and noticing how $-iH(t)$ arises as a common factor. That's what is usually called a T-exponent. And you are asking about the first term in the series.
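The claim can also be tested numerically in a toy model (a sketch with $\hbar=1$ and an assumed non-commuting $2\times2$ Hamiltonian $H(t)=\sigma_x+t\,\sigma_z$): truncating the series after the second-order term already matches the exact solution of $i\dot\psi=H(t)\psi$ to $O(\tau^3)$.

```python
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def smul(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def mvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

def H(t):  # H(t1) and H(t2) do not commute for t1 != t2
    return [[t, 1.0], [1.0, -t]]

tau, n = 0.1, 4000
dt = tau / n
I1 = [[0.0, 0.0], [0.0, 0.0]]   # ∫_0^tau H(t1) dt1
I2 = [[0.0, 0.0], [0.0, 0.0]]   # ∫_0^tau H(t1) ∫_0^{t1} H(t2) dt2 dt1
G = [[0.0, 0.0], [0.0, 0.0]]    # running inner integral ∫_0^t H
for k in range(n):
    t = (k + 0.5) * dt          # midpoint rule
    Ht = H(t)
    Gmid = madd(G, smul(0.5 * dt, Ht))
    I1 = madd(I1, smul(dt, Ht))
    I2 = madd(I2, smul(dt, mmul(Ht, Gmid)))
    G = madd(G, smul(dt, Ht))

Ident = [[1.0, 0.0], [0.0, 1.0]]
U2 = madd(Ident, madd(smul(-1j, I1), smul(-1.0, I2)))   # (-i)^2 = -1
psi_series = mvec(U2, [1.0, 0.0])

# reference: solve i psi' = H(t) psi with a fine midpoint (RK2) integrator
psi, t, m = [1.0 + 0j, 0.0 + 0j], 0.0, 20000
h = tau / m
for _ in range(m):
    k1 = [-1j * c for c in mvec(H(t), psi)]
    mid = [psi[i] + 0.5 * h * k1[i] for i in range(2)]
    k2 = [-1j * c for c in mvec(H(t + 0.5 * h), mid)]
    psi = [psi[i] + h * k2[i] for i in range(2)]
    t += h

err = max(abs(psi_series[i] - psi[i]) for i in range(2))
assert err < 1e-3                  # second-order truncation error is O(tau^3)
assert abs(psi[0] - 1.0) > 4e-3    # the zeroth-order guess alone is much worse
```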
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How do the terms “countable” and “uncountable” not assume the continuum hypothesis?
*
*Every countable set has cardinality $\aleph_0$.
*The next larger cardinality is $\aleph_1$.
*Every uncountable set has cardinality $\geq 2^{\aleph_0}$
Now, an infinite set can only be countable or uncountable, so how does this concept not negate the possible existence of a set $S$ such that $\aleph_0<|S|<2^{\aleph_0}?$
I am guessing the issue is that claim $(3)$ is in fact not necessarily true. If so, I would be glad to hear more on why this is not the case. I've only started looking into this area of mathematics recently, so pardon me if my question is naive.
| We say that a set is finite if there is a bijection between the set and a proper initial segment of the natural numbers. We say that a set is infinite if it is not finite.
Similarly, we say that a set is countable if there is an injection from that set into the set of natural numbers. We say that it is uncountable if it is not countable. That is all.
The continuum hypothesis is a statement about a particular set and its cardinality. It has nothing to do with the definition of countable, uncountable or cardinals in general.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
homomorphism from $S_3$ to $\mathbb Z/3\mathbb Z$
TRUE/FALSE TEST:
*
*There is a non-trivial group homomorphism from $S_3$ to $\mathbb Z/3\mathbb Z.$
My Attempt:
True: Choose $a,b\in S_3$ such that $|a|=3,|b|=2.$
Then $S_3=\{1,a,a^2,b,ba,ba^2\}.$
Define $f:S_3\to\mathbb Z/3\mathbb Z:b^ia^j\mapsto j+3\mathbb Z$
Then $f$ is a nontrivial homomorphism.
Is the attempt correct?
| HINTS:
*
*The image of a homomorphism is a subgroup of the co-domain. Does $\mathbb{Z}_3$ have any subgroups except $\{\bar{0}\}$ and itself?
*Since $|S_3|=3!=6$ and $|\mathbb{Z}_3|=3$ then any function $\varphi: S_3 \to \mathbb{Z}_3$ will not be one-to-one. So, the kernel must be non-trivial.
*If $\varphi: S_3 \to \mathbb{Z}_3$ is a homomorphism that doesn't send everything to $\bar{0} \in \mathbb{Z}_3$ then it must be surjective (According to what I said in 1). Now, what happens if you use the first isomorphism theorem? Note that $\operatorname{ker{\varphi}}$ is a normal subgroup of $S_3$ but $S_3$ has no normal subgroups of order 2.
I have actually given you more information than you need, but to sum it up, there are no non-trivial homomorphisms from $S_3$ to $\mathbb{Z}_3$. In a fancy way, they write this as $\operatorname{Hom}(S_3,\mathbb{Z}_3)=\{e\}$ where $e: S_3 \to \mathbb{Z}_3$ is defined as $e(\sigma)=\bar{0}$ for any $\sigma \in S_3$.
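The conclusion $\operatorname{Hom}(S_3,\mathbb{Z}_3)=\{e\}$ can be confirmed by brute force over all $3^6$ maps (a small sketch, composing permutations right-to-left):

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))          # a permutation σ as a tuple: i ↦ σ[i]

def compose(s, t):                         # (s ∘ t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

homs = []
for values in product(range(3), repeat=6): # every map S3 → Z/3Z
    phi = dict(zip(S3, values))
    if all((phi[compose(s, t)] - phi[s] - phi[t]) % 3 == 0
           for s in S3 for t in S3):
        homs.append(phi)

assert len(homs) == 1                      # only the trivial homomorphism survives
assert set(homs[0].values()) == {0}
```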
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Minimal Counterexample for False Prime-Generating Quadratic Polynomials (Chartrand Ex 7.66)
Factor the quadratic: $n^2 \pm n + 41 = n(n \pm 1) + 41 = n\left[(n \pm 1) + \cfrac{41}{n}\right]$.
So if we find at least one $n$ such that $\frac{41}{n}$ is an integer, or equivalently an $n$ such that $n \mid 41$,
then we'll have proven by counterexample that these 2 quadratics generate primes only for all $0 \leq n \leq \text{some natural number}$. By inspection, $n = 41$ works as one such value.
I see that $n = 41$ is one counterexample for $n^2 - n + 41$. Still, for $n^2 + n + 41$, the minimal counterexample is $n = 40$ because $\quad 40^2 + 40 + 41 \; = \; 1681 = \; 41^2$ = composite number.
$\Large{\text{1.}}$ What divulges/uncloaks $n = 40$ as the minimal counterexample for $n^2 + n + 41$?
$\Large{\text{2.}}$ How and why would one divine/previse to factor (as above) $n^2 \pm n + 41 = n(n \pm 1) + 41 $?
$\Large{\text{3.}}$ How and why would one divine/previse the failure of $n^2 \pm n + 41$ for some $n$?
Supplementary dated Jan 7 2014:
$\Large{\text{2.1.}}$ I still don't register the factoring. Customarily, I'd factor out $a_0$ as so: $\color{green}{f(a_0)=a_0\left[a_m (a_0)^{m-1} + \ldots + a_1 + 1\right]}.$
Yet $Q2$ compels: $\color{brown}{f(a_0)=a_0\left[a_m (a_0)^{m-1} + \ldots + a_1\right] + a_0}.$?
$\Large{\text{3.1.}}$ If the green is composite, then the green contradicts the proposition that a polynomial generates primes. For the green to be composite, $\color{green}{a_0} \neq \pm 1$.
Still, wouldn't you also need $\color{green}{\left[a_m (a_0)^{m-1} + \ldots + a_1 + 1\right]} \neq \pm 1$ ?
| *
*Nothing written so far actually proves $n=40$ is the minimal counterexample for $n^2+n+41.$ All that has been shown is that the factorization $n(n+1)+41$ makes it clear that $n=40$ is a counterexample, while the second factorization you wrote makes it clear that $n=41$ is also a counterexample.
*For a similar reason to what I write below for 3.
*For any polynomial $f(n)=a_m n^m + \ldots a_1 n + a_0,$ putting $n=a_0$ shows that $a_0$ divides $f(a_0),$ so if we have such a polynomial where $a_0 \neq \pm 1$ then we automatically know we have a counterexample at $n=|a_0|.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Product polynomial in $\mathbb{F}_7$ I need to compute the product polynomial $$(x^3+3x^2+3x+1)(x^4+4x^3+6x^2+4x+1)$$
when the coefficients are regarded as elements of the field $\mathbb{F}_7$.
I just want someone to explain to me what does it mean when a cofficient (let us take 3 for example) is in $\mathbb{F}_7$ ? I know that $\mathbb{F}_7= \mathbb{Z}/7\mathbb{Z}$
| We use $3$ as shorthand for the coset $$7\mathbb{Z}+3=\{\ldots,-11,-4,3,10,17,\ldots\}$$ in $\mathbb{Z}/7\mathbb{Z}$. Since $$\cdots=7\mathbb{Z}-11=7\mathbb{Z}-4=7\mathbb{Z}+3=7\mathbb{Z}+10=7\mathbb{Z}+17=\cdots,$$ when working in $\mathbb{Z}/7\mathbb{Z}$, we have $$\cdots=-11=-4=3=10=17=\cdots.$$
Practically, this means if $k$ arises as a coefficient, we can replace $k$ with $k \text{ mod } 7$.
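Carrying out the multiplication with reduction mod $7$ (note that the factors are $(x+1)^3$ and $(x+1)^4$, so by the freshman's dream the product is $(x+1)^7\equiv x^7+1$ in $\mathbb{F}_7[x]$):

```python
def polymul_mod(p, q, m):
    # coefficient lists, lowest degree first, reduced mod m
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % m
    return out

p = [1, 3, 3, 1]          # x^3 + 3x^2 + 3x + 1 = (x + 1)^3
q = [1, 4, 6, 4, 1]       # x^4 + 4x^3 + 6x^2 + 4x + 1 = (x + 1)^4
assert polymul_mod(p, q, 7) == [1, 0, 0, 0, 0, 0, 0, 1]   # x^7 + 1 over F_7
```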
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
} |
Valid Sobolev Norm on $\mathbb{R}$? I have seen many questions along this line, but none quite answered my question as far as I could tell.
On all of $\mathbb{R}$, is the Sobolev norm ever defined as follows
$$\|f\|_{W_2^k(\mathbb{R})} := \|f\|_{L_2(\mathbb{R})}+|f|_{W_2^k(\mathbb{R})},$$
where $|f|_{W_2^k(\mathbb{R})}:=\|f^{(k)}\|_{L_2(\mathbb{R})}$ denotes the usual seminorm?
Usually you see this definition for domains satisfying the uniform cone condition or something like that.
| Yes, this is a perfectly fine definition on $\mathbb R$. The purpose of the uniform (interior) cone condition is to make sure that all points of the domain are uniformly easy to approach "from the inside"; this enables us to control lower order derivatives globally, by integrating higher order derivatives (Poincaré's inequality). When the domain is all of $\mathbb R$ or $\mathbb R^n$, access "from the inside" isn't an issue at all. Depending on how the uniform cone condition is stated, one even may be able to say that it holds vacuously when the boundary is empty.
By the way, you can easily check that this definition gives control on all derivatives, by using the Fourier transform. The $L^2$ norms with weights $1$ and $|\xi|^k$ together control all $L^2$ norms with weights $|\xi|^j$, $0< j<k$.
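To spell out that last remark (a sketch, up to the normalization of the Fourier transform): for $0<j<k$ one has the pointwise bound
$$|\xi|^{2j}\le\max\left(1,|\xi|^{2k}\right)\le 1+|\xi|^{2k},$$
so by Plancherel
$$\|f^{(j)}\|_{L_2}^2=c\int_{\mathbb R}|\xi|^{2j}|\hat f(\xi)|^2\,d\xi\le c\int_{\mathbb R}\left(1+|\xi|^{2k}\right)|\hat f(\xi)|^2\,d\xi=\|f\|_{L_2}^2+\|f^{(k)}\|_{L_2}^2,$$
so the norm $\|f\|_{L_2}+\|f^{(k)}\|_{L_2}$ controls every intermediate seminorm $\|f^{(j)}\|_{L_2}$.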
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Strange delta function I don't know what to do when I see a delta-function of the following sort appear in an integral (3d-spherical here):
$$\delta^3(r\sin \theta - r_0).$$
E.g. the argument is a function of two of the variables.
I'm familiar with the standard properties of delta-fns. Are there some tricks I should know about this?
| In this answer, it is shown that when composing the dirac delta with $g(x)$, we get
$$
\int_{\mathbb{R}^n} f(x)\,\delta(g(x))\,\mathrm{d}x=\int_{\mathcal{S}}\frac{f(x)}{|\nabla g(x)|}\,\mathrm{d}\sigma(x)
$$
where $\mathcal{S}$ is the surface on which $g(x)=0$ and $\mathrm{d}\sigma(x)$ is standard surface measure on $\mathcal{S}$.
In your question, $\mathcal{S}$ is the surface where $r\sin(\theta)-r_0=0$ and therefore,
$$
|\nabla g(r,\theta,\phi)|=\sqrt{1+\cos^2(\theta)\tan^2(\phi)}
$$
$$
\mathrm{d}\sigma(x)=\frac{r_0^2}{\sin^3(\theta)}\sqrt{1-\sin^2(\theta)\sin^2(\phi)}\,\mathrm{d}\theta\,\mathrm{d}\phi
$$
So
$$
\begin{align}
&\int_{\mathbb{R}^n}f(r,\theta,\phi)\,\delta(r\sin(\theta)-r_0)\,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\phi\\
&=\int_{\mathcal{S}}f(r,\theta,\phi)\frac{r_0^2}{\sin^3(\theta)}\frac{\sqrt{1-\sin^2(\theta)\sin^2(\phi)}}{\sqrt{1+\cos^2(\theta)\tan^2(\phi)}}\,\mathrm{d}\theta\,\mathrm{d}\phi
\end{align}
$$
Computation of $|\nabla g|$
The Jacobian from $(r,\theta,\phi)$ to $(x,y,z)$ is
$$
\mathcal{J}=\left[
\begin{array}{ccc}
\cos (\theta ) \cos (\phi ) & -r \cos (\phi ) \sin (\theta ) & -r \cos (\theta ) \sin
(\phi ) \\
\cos (\phi ) \sin (\theta ) & r \cos (\theta ) \cos (\phi ) & -r \sin (\theta ) \sin
(\phi ) \\
\sin (\phi ) & 0 & r \cos (\phi )
\end{array}
\right]
$$
and its inverse is
$$
\mathcal{J}^{-1}=\frac1r\left[
\begin{array}{ccc}
r \cos (\theta ) \cos (\phi ) & r \cos (\phi ) \sin (\theta ) & r \sin (\phi ) \\
-\sec (\phi ) \sin (\theta ) & \cos (\theta ) \sec (\phi ) & 0 \\
-\cos (\theta ) \sin (\phi ) & -\sin (\theta ) \sin (\phi ) & \cos (\phi )
\end{array}
\right]
$$
Since
$$
\begin{align}
\mathrm{d}(r\sin(\theta)-r_0)
&=\begin{bmatrix}\sin(\theta)&r\cos(\theta)&0\end{bmatrix}
\begin{bmatrix}\mathrm{d}r\\\mathrm{d}\theta\\\mathrm{d}\phi\end{bmatrix}\\
&=\begin{bmatrix}\sin(\theta)&r\cos(\theta)&0\end{bmatrix}\mathcal{J}^{-1}
\begin{bmatrix}\mathrm{d}x\\\mathrm{d}y\\\mathrm{d}z\end{bmatrix}
\end{align}
$$
we simply compute the length
$$
\begin{align}
|\nabla g|
&=\Big|\begin{bmatrix}\sin(\theta)&r\cos(\theta)&0\end{bmatrix}\mathcal{J}^{-1}\Big|\\
&=\sqrt{1+\cos^2(\theta)\tan^2(\phi)}
\end{align}
$$
Computation of $\mathrm{d}\sigma$
On $\mathcal{S}$, $r=\frac{r_0}{\sin(\theta)}$, therefore,
$$
\mathcal{J}=\left[
\begin{array}{ccc}
\cos (\theta ) \cos (\phi ) & -r_0 \cos (\phi ) & -r_0 \cot (\theta ) \sin
(\phi ) \\
\cos (\phi ) \sin (\theta ) & r_0 \cot (\theta ) \cos (\phi ) & -r_0 \sin
(\phi ) \\
\sin (\phi ) & 0 & \frac{r_0}{\sin (\theta )} \cos (\phi )
\end{array}
\right]
$$
Therefore, the change in position induced by changes of $\theta$ and $\phi$ is
$$
\begin{bmatrix}\mathrm{d}x\\\mathrm{d}y\\\mathrm{d}z\end{bmatrix}
=\mathcal{J}\begin{bmatrix}\mathrm{d}r\\\mathrm{d}\theta\\\mathrm{d}\phi\end{bmatrix}
=\mathcal{J}\begin{bmatrix}-\frac{r_0\cos(\theta)}{\sin^2(\theta)}&0\\1&0\\0&1\end{bmatrix}
\begin{bmatrix}\mathrm{d}\theta\\\mathrm{d}\phi\end{bmatrix}
$$
We simply compute the length of the cross product
$$
\left|\,\mathcal{J}\begin{bmatrix}-\frac{r_0\cos(\theta)}{\sin^2(\theta)}\\1\\0\end{bmatrix}
\times
\mathcal{J}\begin{bmatrix}0\\0\\1\end{bmatrix}\,\right|
=\frac{r_0^2}{\sin^3(\theta)}\sqrt{1-\sin^2(\theta)\sin^2(\phi)}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can this intuition give a proof that an isometry $f:X \to X$ is surjective for compact metric space $X$? A prelim problem asked to prove that if $X$ is a compact metric space, and $f:X \to X$ is an isometry (distance-preserving map), then $f$ is surjective. The official proof given used sequences/convergent subsequences and didn't appeal to my intuition.
When I saw the problem, my immediate instinct was that an isometry should be "volume-preserving" as well, so the volume of $f(X)$ should be equal to the volume of $X$, which should mean surjectivity if $X$ is compact. The notion of "volume" I came up with was the minimum number of $\epsilon$-balls needed to cover $X$ for given $\epsilon > 0$.
If $f$ were not surjective, then because $f$ is continuous there must be a point $y \in X$ and $\delta > 0$ so that the ball $B_\delta(y)$ is disjoint from $f(X)$. I wanted to choose $\epsilon$ in terms of $\delta$, use the fact that an isometry carries $\epsilon$-balls to $\epsilon$-balls, and show that, given a minimum-size cover of $X$ with $\epsilon$-balls, a cover of $X$ with one fewer $\epsilon$-ball could be found if $f(X) \cap B_\delta(y) = \emptyset$, giving a contradiction. Can someone see a way to make this intuition work?
| That's a nice idea for a proof. I think perhaps it works well to turn it inside out, so to speak:
Lemma. Assuming $X$ is a compact metric space, for each $\delta>0$ there is a finite upper bound to the number of points in $X$ with a pairwise distance $\ge\delta$. (Let us call such a set of points $\delta$-separated.)
Proof. $X$ is totally bounded, so there exists a finite set $N$ of points in $X$ so that every $x\in X$ is closer than $\delta/2$ to some member of $N$. Any two points in $B_{\delta/2}(x)$ are closer together than $\delta$, so there cannot be a $\delta$-separated set with more members than $N$.
Now let $f\colon X\to X$ be an isometry and not onto. Let $x\in X\setminus f(X)$, and let $\delta>0$ be the distance from $x$ to $f(X)$. Let $E\subseteq X$ be a $\delta$-separated set with the largest possible number of members. Then $f(E)\cup\{x\}$ is such a set with more members. Contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
} |
How to solve this geometry problem?
I have been given 2 squares here, and I am supposed to find the ratio (area of shaded region / area of outer square).
Obviously there are four right-angled triangles here; once we have their side lengths we can calculate their areas and get the final answer.
My book's answer says that all of the triangles are 3-4-5 triangles, but I can't figure out how to get the measurements of the base and altitude in the triangles labelled with altitude 3 and base 4 respectively. Can anyone tell me how to calculate the other sides of the triangles from the given information?
| Well, the obvious answer is that they are all $3,4,5$ triangles, but how might we see that?
Let's look at the angles which meet at the corner between the marked $3$ and $4$. Let the angle in the triangle with $3$ be $\alpha$; then the angle in the triangle with $4$ is $90^{\circ}-\alpha$, because the unshaded area is a square and all its angles are right angles. You can trace the angles round and see that all the triangles are similar. And the fact that the outer figure is a square means they are all the same size (congruent).
Once that is established - and I agree that it needs to be checked, rather than assumed - you can use the $3,4,5$ fact with confidence.
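Assuming, as the book indicates, four congruent $3$-$4$-$5$ triangles around the inner square (identifying the shaded region with the four triangles is an assumption here, since the figure is not shown), the arithmetic checks out exactly:

```python
from fractions import Fraction

leg_a, leg_b = 3, 4
hyp_sq = leg_a**2 + leg_b**2       # 25, so the hypotenuse is 5 (a 3-4-5 triangle)
assert hyp_sq == 5**2

outer_side = leg_a + leg_b         # 7: each side of the outer square is split 3 + 4
outer_area = outer_side**2         # 49
inner_area = hyp_sq                # inner square side = hypotenuse, so area 25
triangles_area = 4 * (leg_a * leg_b) // 2   # 24

# Consistency: inner square + four triangles tile the outer square exactly
assert inner_area + triangles_area == outer_area
# Ratio of the four triangles to the outer square
assert Fraction(triangles_area, outer_area) == Fraction(24, 49)
```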
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
$\lambda : (x,y,x^2+\lambda y^2)$ contains straight lines For which $\lambda\in \mathbb{R}$ is $$\varphi(x,y) = (x,y,x^2+\lambda y^2)$$ a ruled surface, i.e. one that can also be parametrized as
$$ \mathbb{R}^2 \ni (t,u) \mapsto g(t) + uw(t) $$
with $g$ a differentiable curve and $w$ a differentiable vector field?
| If $\lambda<0$, then $\lambda=-k^2$ and $x^2+\lambda y^2=(x-ky)(x+ky)$.
Set $u=x+ky$ and $v=x-ky$. Then $x=\frac{u+v}{2}$, $y=\frac{u-v}{2k}$ and $z=uv$.
Then:
$$
(u,v)\rightarrow \; (\frac{u+v}{2},\frac{u-v}{2k},uv)=(\frac{u}{2},\frac{u}{2k},0)+v\,(\frac{1}{2},\frac{-1}{2k},u)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dedekind cuts: Showing that the set B has no smallest element Let $A$ and $B$ be subset of the rational numbers defined as follows:
$A = \{p \in \mathbb{Q} \mid p\leq0 \lor p^2<2\}$
$B = \{q \in \mathbb{Q} \mid q>0 \land q^2 >2\}$
*
*Show that the set $B$ has no smallest element.
*Show that the set $A$ fails to have a least upper bound property
My attempt at question 1: we try to find an element $x \in \mathbb{Q}$ of the form $x = q - \delta$, where $q \in B$ and $\delta > 0$, so that $x < q$. I then need $x^2 = (q - \delta)^2 > 2$, so that $x \in B$.
This is the point where I get stuck. I'm trying to define $\delta$ in terms of $q$ and then expand the term $(q-\delta)^2$ to show that it is greater than 2. I am not too sure if this is the correct and most efficient way to prove that $B$ has no smallest element.
Help and advice is very much appreciated. Thank you
| We try to discover something that might work.
Suppose we are given a positive rational $r$ such that $r^2\gt 2$. We want to produce a smaller positive rational $s$ such that $s^2\gt 2$.
We will produce $s$ by taking a little off $r$, say by using $s=r-\epsilon$, where $\epsilon$ is a small positive rational.
So we need to make sure that $(r-\epsilon)^2$ is still $\gt 2$.
Calculate. We have
$$(r-\epsilon)^2-2=(r^2-2)-2r\epsilon+\epsilon^2\gt (r^2-2)-2r\epsilon.$$
If we can make sure that $(r^2-2)-2r\epsilon\ge 0$ we will have met our goal. That can be done by choosing $\epsilon=\frac{r^2-2}{2r}$. That leads to the choice
$$s=r-\frac{r^2-2}{2r}=\frac{r^2+2}{2r}$$
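To see the recipe in action with exact rational arithmetic (an illustration; the starting value $r = 3/2$ is an arbitrary choice):

```python
from fractions import Fraction

def smaller_element(r):
    """Given rational r > 0 with r^2 > 2, return s = (r^2 + 2)/(2r)."""
    return (r * r + 2) / (2 * r)

r = Fraction(3, 2)            # r^2 = 9/4 > 2, so r is in B
assert r > 0 and r * r > 2
s = smaller_element(r)
assert s == Fraction(17, 12)
assert 0 < s < r              # strictly smaller than r ...
assert s * s > 2              # ... yet still in B: s^2 = 289/144 > 2
```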
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability of winning the game 1-2-3-4-5-6-7-8-9-10-J-Q-K A similar question to mine was answered here on stackexchange:
Probability of winning the game "1-2-3"
However, I am unable to follow the formulas so perhaps someone could show the calculation and the way they arrived at it to answer this question.
My card game is very similar, except the counting of cards goes all the way from 1 (Ace) through to King and then restarts. If you can make it through an entire deck of cards without hitting the same ranked card you win. So, for example, if you call out Ace - 2 -3 - 4 - 5 - 6, etc. and hit "7" when you've just called out "7", you lose.
What is the chance of winning this card game?
| I think you'll find what you want in the paper Frustration solitaire by Doyle, Grinstead, and Snell at http://arxiv.org/pdf/math/0703900.pdf -- it looks like they get the answer
$$\begin{align}
{R_{13} \over 52!} &= {4610507544750288132457667562311567997623087869 \over 284025438982318025793544200005777916187500000000}\cr
\cr
&= 0.01623272746719463674 \ldots
\end{align}$$
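A quick consistency check on the quoted numbers (both the fraction and the decimal expansion are taken from the paper; the code merely confirms they agree):

```python
from fractions import Fraction

p = Fraction(
    4610507544750288132457667562311567997623087869,
    284025438982318025793544200005777916187500000000,
)
# The decimal expansion quoted above
assert abs(float(p) - 0.01623272746719463674) < 1e-15
# Roughly a 1-in-62 chance of winning
assert 61 < 1 / float(p) < 62
```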
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/495991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Gradient of a dot product The wikipedia formula for the gradient of a dot product is given as
$$\nabla(a\cdot b) = (a\cdot\nabla)b +(b\cdot \nabla)a + a\times(\nabla\times b)+ b\times (\nabla \times a)$$
However, I also found the formula $$\nabla(a\cdot b) = (\nabla a)\cdot b + (\nabla b)\cdot a $$
So... what is going on here? The second formula seems much easier. Are these equivalent?
| Since there are not many signs that one may easily use in mathematical notations, many of these symbols are overloaded. In particular, the dot "$\cdot$" is used in the first formula to denote the scalar product of two vector fields in $\mathbb R^3$ called $a$ and $b$, while in the second formula it denotes the usual product of the functions $a$ and $b$. This means that both formulae are valid, but each one is so only in its proper context.
(It is scary to see that the answers and comments that were wrong collected the most votes!)
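The first (vector-field) formula can be checked numerically with finite differences; the concrete fields $a$ and $b$ below are arbitrary choices for illustration:

```python
# Finite-difference check of grad(a.b) = (a.grad)b + (b.grad)a + a x curl b + b x curl a.
# The fields a, b and the sample point are arbitrary; tolerance reflects O(h^2) error.
h = 1e-6

def a(pt):
    x, y, z = pt
    return (y * z, x * x, x * y)

def b(pt):
    x, y, z = pt
    return (x + y, y * z, x * z)

def partial(f, pt, i):
    """Central-difference partials of each component: (d_i f_0, d_i f_1, d_i f_2)."""
    hi = list(pt); hi[i] += h
    lo = list(pt); lo[i] -= h
    return tuple((u - v) / (2 * h) for u, v in zip(f(tuple(hi)), f(tuple(lo))))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def grad_scalar(f, pt):
    out = []
    for i in range(3):
        hi = list(pt); hi[i] += h
        lo = list(pt); lo[i] -= h
        out.append((f(tuple(hi)) - f(tuple(lo))) / (2 * h))
    return tuple(out)

def curl(f, pt):
    d = [partial(f, pt, i) for i in range(3)]   # d[i][j] = d_i f_j
    return (d[1][2] - d[2][1], d[2][0] - d[0][2], d[0][1] - d[1][0])

def directional(u, f, pt):
    """(u . grad) f, component-wise."""
    d = [partial(f, pt, i) for i in range(3)]
    return tuple(u[0]*d[0][j] + u[1]*d[1][j] + u[2]*d[2][j] for j in range(3))

pt = (0.7, -1.3, 2.1)
ap, bp = a(pt), b(pt)
lhs = grad_scalar(lambda q: dot(a(q), b(q)), pt)
parts = [directional(ap, b, pt), directional(bp, a, pt),
         cross(ap, curl(b, pt)), cross(bp, curl(a, pt))]
rhs = tuple(sum(t[j] for t in parts) for j in range(3))
assert all(abs(x - y) < 1e-4 for x, y in zip(lhs, rhs))
```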
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/496060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53",
"answer_count": 4,
"answer_id": 3
} |
Use every non-abelian group of order 6 has a non-normal subgroup of order 2 to classify groups of order 6.
Prove that every non-abelian group of order $6$ has a non-normal subgroup of order $2$. Use this to classify groups of order $6$.
I proved that every non-abelian group of order 6 has a nonnormal subgroup of order 2, but then how can I use this to classify groups of order 6?
| Assume $H$ is a non-normal subgroup of order $2$.
Consider Action of $G$ on set of left cosets of $H$ by left multiplication.
let $\{g_iH :1\leq i\leq 3\}$ be cosets of $H$ in $G$.
(please convince yourself that there will be three distinct cosets)
we now consider the action $\eta : G\times\{g_iH :1\leq i\leq 3\} \rightarrow \{g_iH :1\leq i\leq 3\}$ by left multiplication.
i.e., take an element $g\in G$ and consider $g.g_iH$ as there are only three distinct sets we get $g.g_iH = g_jH$ for some $j\in \{1,2,3\}$
In this manner, each element $g\in G$ takes the coset $g_iH$ (represented by $i$) to a coset $g_jH$ (represented by $j$),
i.e., each $g$ gives a map $\{1,2,3\} \rightarrow \{1,2,3\}$, which is a bijection,
so we obtain a homomorphism $\eta : G\rightarrow S_3$.
We know that $\ker(\eta)$ is normal in $G$ and is contained in $H$.
Since $|H|=2$, either $\ker(\eta)=H$ or $\ker(\eta)=(1)$; the former would make $H$ normal in $G$, contradicting our assumption, so $\ker(\eta)=(1)$, i.e., $\eta$ is injective.
i.e., we have $G$ as a subgroup(isomorphic copy) of $S_3$. But, $|G|=|S_3|=6$. Thus, $G\cong S_3$.
So, for any non abelian group $G$ of order $6$ we have $G\cong S_3$.
For an abelian group of order $6$ we already know that $G$ is cyclic and $G\cong \mathbb{Z}_6$.
So, the only non-isomorphic groups of order $6$ are $S_3$ and $\mathbb{Z}_6$.
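As a concrete illustration of the ingredients above (a sketch, with $S_3$ realized as permutations of $\{0,1,2\}$): the subgroup generated by a transposition has order $2$ and is not normal, while $A_3$ is normal:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)), permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
identity = (0, 1, 2)
H = {identity, (1, 0, 2)}            # order-2 subgroup generated by the transposition (0 1)

def is_normal(H, G):
    return all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)

assert not is_normal(H, S3)          # H is NOT normal in S3
# ...whereas the alternating subgroup A3 (order 3) IS normal:
A3 = {identity, (1, 2, 0), (2, 0, 1)}
assert is_normal(A3, S3)
```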
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/496096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 0
} |
Can an integer of the form $4n+3$ written as a sum of two squares? Let $u$ be an integer of the form $4n+3$, where $n$ is a positive integer. Can we find integers $a$ and $b$ such that $u = a^2 + b^2$? If not, how to establish this for a fact?
| I'll write another argument with more group theoretic flavor in my opinion. Suppose that $p=4k+3$ is a prime number and you can write $p=x^2+y^2$. then $x^2+y^2 \equiv 0 \pmod{p} \iff x^2 \equiv -y^2 \pmod{p} \iff (xy^{-1})^2 \equiv -1 \pmod{p}$. Therefore $t=xy^{-1}$ is a solution of $x^2 \equiv -1 \pmod{p}$.
Now consider the group $\mathbb{Z}^*_p$ which consists of all non-zero residues in mod $p$ under multiplication of residues. $|G|=(4k+3)-1=4k+2$. Therefore, by a group theory result (you can also use a weaker theorem in number theory called Fermat's little theorem), for any $a \in \mathbb{Z}^*_p: a^{|G|}=1$, i.e. $a^{4k+2}=1$.
We know that there exists $x=t$ in $\mathbb{Z}^*_p$ such that $x^2 = -1$, hence, $x^4 = 1$. But this means that $\operatorname{ord}(x) \mid |G| \implies 4 \mid 4k+2$. But $4 \mid 4k$ and therefore $4 \mid 4k+2 - 4k = 2$ which is absurd. This contradiction means that it's not possible to write $p=x^2+y^2$ for $x,y \in \mathbb{Z}$.
EDIT: I should also add that any integer of the form $4k+3$ will have a prime factor of the form $4k+3$. The reason is, if none of its factors are of this form, then all of its prime factors must be of the form $4k+1$. But you can easily check that $(4k+1)(4k'+1)=4k''+1$ which leads us to a contradiction. This is how you can generalize what I said to the case when $n=4k+3$ is any natural number.
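Both facts are easy to confirm by brute force (an illustrative check):

```python
import math

# Squares mod 4 are only 0 or 1, so a^2 + b^2 mod 4 lies in {0, 1, 2} -- never 3.
assert {(a * a) % 4 for a in range(100)} == {0, 1}

def is_sum_of_two_squares(n):
    a = 0
    while a * a <= n:
        b2 = n - a * a
        if math.isqrt(b2) ** 2 == b2:
            return True
        a += 1
    return False

# No integer of the form 4k + 3 is a sum of two squares...
assert not any(is_sum_of_two_squares(n) for n in range(3, 2000, 4))
# ...while e.g. 5 = 1 + 4 and 13 = 4 + 9 are.
assert is_sum_of_two_squares(5) and is_sum_of_two_squares(13)
```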
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/496255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 0
} |
Showing that $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$ contains multiplicative inverses
Why must $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$ -- the set of all polynomials in $\sqrt{2}$ and $\sqrt{3}$ with rational coefficients -- contain multiplicative inverses?
I have gathered that every element of $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$ takes form $a + b\sqrt{2} + c\sqrt{3} + d\sqrt{2}\sqrt{3}$ for some $a,b,c,d \in \mathbb{Q}$, and I have shown that $\mathbb{Q}[\sqrt{2}]$ and $\mathbb{Q}[\sqrt{3}]$ are both fields, but it's not clear to me why this allows for general multiplicative inverses to exist in $\mathbb{Q}[\sqrt{2}, \sqrt{3}]$.
| Let $K$ be an extension field of the field $F$ and let $0\ne a\in K$ be algebraic over $F$; then $F[a]$ contains the inverse of $a$.
Indeed, if $f(X)=c_0+c_1X+\dots+c_{n-1}X^{n-1}+X^n$ is the minimal polynomial of $a$ over $F$, then it's irreducible, so $c_0\ne 0$ and
$$
c_0+c_1a+\dots+c_{n-1}a^{n-1}+a^n=0.
$$
Multiply by $c_0^{-1}a^{-1}$ to get
$$
a^{-1}=-c_0^{-1}(c_1+\dots+c_{n-1}a^{n-2}+a^{n-1})
$$
so $a^{-1}\in F[a]$.
If now $0\ne b\in F[a]$, you also have $F[b]\subseteq F[a]$. Since $F[b]$ is an $F$-subspace of $F[a]$, it's finite dimensional over $F$; therefore $b$ is algebraic over $F$ and, by what we showed above, $b^{-1}\in F[b]\subseteq F[a]$.
Now you can apply this to $\mathbb{Q}[\sqrt{2},\sqrt{3}]$, which is equal to $F[\sqrt{3}]$ where $F=\mathbb{Q}[\sqrt{2}]$, which is a field. Then also $F[\sqrt{3}]$ is a field by the same reason. You can take $K=\mathbb{C}$, of course.
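Here is a numerical illustration of the inverse recipe, using the (well-known) minimal polynomial $x^4-10x^2+1$ of $a=\sqrt2+\sqrt3$:

```python
import math

a = math.sqrt(2) + math.sqrt(3)

# Minimal polynomial: a^4 - 10 a^2 + 1 = 0  (so c0 = 1, c1 = 0, c2 = -10, c3 = 0)
assert abs(a**4 - 10 * a**2 + 1) < 1e-9

# The recipe a^{-1} = -c0^{-1}(c1 + c2*a + c3*a^2 + a^3) gives 10a - a^3,
# which indeed equals sqrt(3) - sqrt(2).
inv = 10 * a - a**3
assert abs(inv * a - 1) < 1e-9
assert abs(inv - (math.sqrt(3) - math.sqrt(2))) < 1e-9
```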
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/496334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Find solution of PDE $x \frac {\partial z} {\partial x} +y \frac {\partial z} {\partial y}=z$
Problem: Find the solution of the Cauchy problem for the first-order PDE $x \frac {\partial z} {\partial x} +y \frac {\partial z} {\partial y}=z$ on $D= \{(x,y,z): x^2 +y^2 \neq 0,\ z>0\}$, with initial condition $x^2+y^2=1$, $z=1$.
Solution: Using Lagrange's method we get
$\phi \left ( \frac{x}{y} \right )={y \over z} $ where $\phi$ is some arbitrary function.
But I don't know how to apply the above initial conditions.
| This is Euler's equation for homogeneous functions of degree $k=1$. Hence we know that $z( \lambda x, \lambda y)=\lambda z(x,y)$ for all $\lambda>0$. Since $z \equiv 1$ on the unit circle, we find that $$z(x,y)=z \left( \sqrt{x^2+y^2}\frac{(x,y)}{\sqrt{x^2+y^2}} \right)=\sqrt{x^2+y^2} \;z \left( \frac{(x,y)}{\sqrt{x^2+y^2}} \right)=\sqrt{x^2+y^2}.$$
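One can check directly that $z=\sqrt{x^2+y^2}$ satisfies both the PDE and the initial condition, using the exact partials $z_x=x/\sqrt{x^2+y^2}$ and $z_y=y/\sqrt{x^2+y^2}$ (illustrative sketch; sample points are arbitrary):

```python
import math

def z(x, y):
    return math.hypot(x, y)

def check_pde(x, y, tol=1e-12):
    # x*z_x + y*z_y = (x^2 + y^2)/sqrt(x^2 + y^2) = z, exactly
    r = math.hypot(x, y)
    zx, zy = x / r, y / r
    return abs(x * zx + y * zy - z(x, y)) < tol

assert all(check_pde(x, y) for x, y in [(1.0, 2.0), (-3.5, 0.7), (0.0, 4.0)])
# Initial condition: z = 1 on the unit circle x^2 + y^2 = 1
for t in range(8):
    th = 2 * math.pi * t / 8
    assert abs(z(math.cos(th), math.sin(th)) - 1.0) < 1e-12
```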
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/496432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |