Differential equation of $y = e^{rx}$ I am trying to find what values of $r$ in $y = e^{rx}$ satisfy $2y'' + y' - y = 0$.
I thought I was being clever and knew how to do this so this is how I proceeded.
$$y' = re^{rx}$$
$$y'' = r^2 e^{rx}$$
$$2(r^2 e^{rx}) +re^{rx} -e^{rx} = 0 $$
I am not sure how to proceed from here. The biggest thing I am confused about is that I am working with a variable $x$, with no input conditions at all, and a variable $r$ (the constant), so how do I do this?
| Here's the best part: $e^{rx}$ is never zero. Thus, if we factor that out, it is simply a quadratic in $r$ that remains.
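For completeness: after factoring out $e^{rx}$, the remaining quadratic and its roots are
$$2r^2 + r - 1 = (2r-1)(r+1) = 0 \quad\Longrightarrow\quad r = \tfrac{1}{2} \ \text{ or } \ r = -1.$$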
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Infinite products - reference needed! I am looking for a small treatment of basic theorems about infinite products; surprisingly enough, they are nowhere to be found after googling a little. The reason for this is that I am beginning to read Davenport's Multiplicative Number Theory, and the treatment of $L$-functions in there requires one to understand convergence/absolute convergence of infinite products, which I know little about. Most importantly I'd like to know why
$$
\prod (1+|a_n|) \to a < \infty \quad \Longrightarrow \quad \prod (1+ a_n) \to b \neq 0.
$$
I believe I'll need more properties of products later on, so just a proof of this would be appreciated but I'd also need the reference.
Thanks in advance,
| I will answer your question of why
$$
\prod (1+|a_n|) \to a < \infty \quad \Longrightarrow \quad \prod (1+ a_n) \to b \neq 0.
$$
We will first prove that if $\sum \lvert a_n \rvert < \infty$, then the product $\prod_{n=1}^{\infty} (1+a_n)$ converges. Note that the condition you have $\prod (1+|a_n|) \to a < \infty$ is equivalent to the condition that $\sum \lvert a_n \rvert < \infty$, which can be seen from the inequality below.
$$\sum \lvert a_n \rvert \leq \prod (1+|a_n|) \leq \exp \left(\sum \lvert a_n \rvert \right)$$
Further, we will also show that the product converges to $0$ if and only if one of its factors is $0$.
If $\sum \lvert a_n \rvert$ converges, then there exists some $M \in \mathbb{N}$ such that for all $n > M$, we have that $\lvert a_n \rvert < \frac12$. Hence, we can write $$\prod (1+a_n) = \prod_{n \leq M} (1+a_n) \prod_{n > M} (1+a_n)$$ Throwing away the finitely many terms till $M$, we are interested in the infinite product $\prod_{n > M} (1+a_n)$. We can define $b_n = a_{n+M}$ and hence we are interested in the infinite product $\prod_{n=1}^{\infty} (1+b_n)$, where $\lvert b_n \rvert < \dfrac12$. The complex logarithm satisfies $1+z = \exp(\log(1+z))$ whenever $\lvert z \rvert < 1$ and hence $$ \prod_{n=1}^{N} (1+b_n) = \prod_{n=1}^{N} e^{\log(1+b_n)} = \exp \left(\sum_{n=1}^N \log(1+b_n)\right)$$
Let $f(N) = \displaystyle \sum_{n=1}^N \log(1+b_n)$. By the Taylor series expansion, we can see that $$\lvert \log(1+z) \rvert \leq 2 \lvert z \rvert$$ whenever $\lvert z \rvert < \frac12$. Hence, $\lvert \log(1+b_n) \rvert \leq 2 \lvert b_n \rvert$. Now since $\sum \lvert a_n \rvert$ converges, so does $\sum \lvert b_n \rvert$ and hence so does $\sum \lvert \log(1+b_n) \rvert$. Hence, $\lim_{N \rightarrow \infty} f(N)$ exists. Call it $F$. Now since the exponential function is continuous, we have that $$\lim_{N \to \infty} \exp(f(N)) = \exp(F)$$
This also shows why the limit of the infinite product $\prod_{n=1}^{\infty}(1+a_n)$ cannot be $0$, unless one of its factors is $0$. From the above, we see that $\prod_{n=1}^{\infty}(1+b_n)$ cannot be $0$, since $\lvert F \rvert < \infty$. Hence, if the infinite product $\prod_{n=1}^{\infty}(1+a_n)$ is zero, then we have that $\prod_{n=1}^{M}(1+a_n) = 0$. But this is a finite product, and it can be $0$ if and only if one of its factors is zero.
Most often this is all that is needed when you are interested in the convergence of the product expressions for the $L$ functions.
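As a quick numerical illustration of the theorem (a minimal Python sketch I am adding, not part of the original answer), take $a_n = (-1)^n/(n+1)^2$, so that $\sum \lvert a_n \rvert < \infty$; the partial products then settle to a nonzero limit:

```python
# Partial products of prod(1 + a_n) with a_n = (-1)^n / (n+1)^2.
# Since sum |a_n| < infinity, the product should converge to a nonzero limit.
prod = 1.0
for n in range(1, 200001):
    prod *= 1 + (-1) ** n / (n + 1) ** 2
print(prod)  # a nonzero limit, consistent with the theorem above
```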
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 1
} |
Matrix with no eigenvalues Here is another problem from Golan.
Problem: Let $F$ be a finite field. Show there exists a symmetric $2\times 2$ matrix over $F$ with no eigenvalues in $F$.
| The solution is necessarily split into two cases, because the theory of quadratic equations has a different appearance in characteristic two as opposed to odd characteristic.
Let $p=\mathrm{char}\, F$. Assume first that $p>2$. Consider the matrix
$$
M=\pmatrix{a&b\cr b&c\cr}.
$$
Its characteristic equation is
$$
\lambda^2-(a+c)\lambda+(ac-b^2)=0.\tag{1}
$$
The discriminant of this equation is
$$
D=(a+c)^2-4(ac-b^2)=(a-c)^2+(2b)^2.
$$
By choosing $a,c,b$ cleverly we see that we can arrange the quantities $a-c$ and $2b$ to have any value that we wish. It is a well-known fact that in a finite field of odd characteristic, any element can be written as a sum of two squares. Therefore we can arrange $D$ to be a non-square proving the claim in this case.
If $p=2$, then equation $(1)$ has roots in $F$ if and only if $\operatorname{tr}((ac-b^2)/(a+c)^2)=0$.
By selecting $a$ and $c$ to be any distinct elements of $F$, we can then select $b$ in such a way that this trace condition is not met, and the claim follows in this case also.
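For a concrete sanity check, here is a small brute-force Python sketch (my addition; it simply tests the characteristic polynomial $\lambda^2-(a+c)\lambda+(ac-b^2)$ for roots in $\mathbb{F}_p$):

```python
import itertools

p = 7  # try p = 2 as well for the characteristic-two case

def has_eigenvalue(a, b, c):
    # does the characteristic polynomial of [[a, b], [b, c]] have a root mod p?
    return any((x * x - (a + c) * x + (a * c - b * b)) % p == 0 for x in range(p))

witness = next((a, b, c) for a, b, c in itertools.product(range(p), repeat=3)
               if not has_eigenvalue(a, b, c))
print("[[a, b], [b, c]] with no eigenvalues in F_p:", witness)
```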
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Notation for an indecomposable module. If $V$ is a 21-dimensional indecomposable module for a group algebra $kG$ (21-dimensional when considered as a vector space over $k$), which has a single submodule of dimension 1, what is the most acceptable notation for the decomposition of $V$, as I have seen both $1\backslash 20$ and $20/1$ used (or are both equally acceptable)?
| My feeling is that this notation is not sufficiently standard for you to use either choice without explanation, hence whichever choice you make, you should signal it carefully in your paper. Given that, either choice looks fine to me.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Numerical Analysis over Finite Fields Notwithstanding that it isn't numerical analysis if it's over finite fields, what topics that are traditionally considered part of numerical analysis still have some substance to them if the reals are replaced with finite fields or an algebraic closure thereof? Perhaps using Hamming distance as a metric for convergence purposes, with convergence of an iteration in a discrete setting just meaning that the Hamming distance between successive iterations becomes zero, i.e. the algorithm has a fixed point.
I ask about still having substance because I suspect that in the finite-field setting, numerical-analysis topics will mostly either not make sense, or be trivial.
| The people who factor large numbers using sieve algorithms (the quadratic sieve, the special and general number field sieves) wind up with enormous (millions by millions) systems of linear equations over the field of two elements, and they need to put a lot of thought into the most efficient ways to solve these systems if they want their algorithms to terminate before the sun does.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
In which case $M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$ is true? Usually for modules $M_1,M_2,N$
$$M_1 \times N \cong M_2 \times N \Rightarrow M_1 \cong M_2$$
is false. I'm just curious: are there any cases or additional conditions under which it becomes true?
James B.
| A standard result in this direction is the Krull-Schmidt Theorem:
Theorem (Krull-Schmidt for modules) Let $E$ be a nonzero module that has both ACC and DCC on submodules (that is, $E$ is both artinian and noetherian). Then $E$ is a direct sum of (finitely many) indecomposable modules, and the direct summands are unique up to a permutation and isomorphism.
In particular:
Corollary. If $M_1$, $M_2$, and $N$ are both noetherian and artinian, and $M_1\times N \cong M_2\times N$, then $M_1\cong M_2$.
Proof. Both $M_1\times N$ and $M_2\times N$ satisfy the hypothesis of the Krull-Schmidt Theorem; decompose $M_1$, $M_2$, and $N$ into a direct sum of indecomposable modules. The uniqueness clause of Krull-Schmidt yields the isomorphism of $M_1$ with $M_2$. $\Box$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Why is there no functor $\mathsf{Group}\to\mathsf{AbGroup}$ sending groups to their centers? The category $\mathbf{Set}$ contains as its objects all small sets and arrows all functions between them. A set is "small" if it belongs to a larger set $U$, the universe.
Let $\mathbf{Grp}$ be the category of small groups and morphisms between them, and $\mathbf{Abs}$ be the category of small abelian groups and its morphisms.
I don't see what it means to say there is no functor $f: \mathbf{Grp} \to \mathbf{Abs}$ that sends each group to its center, when $U$ isn't even specified. Can anybody explain?
| This is very similar to Arturo Magidin's answer, but offers another point of view.
Consider the dihedral group $D_n=\mathbb Z_n\rtimes \mathbb Z_2$ with $2\nmid n$ (so that $Z(D_n)=1$). From the splitting lemma we get a short exact sequence
$$1\to\mathbb Z_n\rightarrow D_n\xrightarrow{\pi} \mathbb Z_2\to 1$$
and an arrow $\iota\colon \mathbb Z_2\to D_n$ such that $\pi\circ \iota=1_{\mathbb Z_2}$.
Hence the composite morphism
$$\mathbb Z_2\xrightarrow{\iota} D_n\xrightarrow{\pi}\mathbb Z_2$$
is an iso and would be mapped by the centre to an iso
$$\mathbb Z_2\to 1\to \mathbb Z_2$$
which is impossible. (One can also recognize a split mono and a split epi above and analyze how they behave under an arbitrary functor.)
Therefore the centre can't be functorial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 2,
"answer_id": 0
} |
Perturbation problem This is a Mathematica exercise that I have to do, where $y(x) = x - \epsilon \sin(2y)$, and it wants me to express the solution $y$ of the equation as a power series in $\epsilon$.
| We're looking for a perturbative expansion of the solution $y(x;\epsilon)$ to
$$ y(x) = x-\epsilon \sin(2y)$$
I don't know if you're asking for an expansion to all orders. If so, I have no closed form to offer. But, for illustration, the first four terms in the series may be found as follows by use of the addition theorem (this procedure may be continued to any desired order). Expand $$y(x;\epsilon)=y_o(x)+\epsilon y_1(x)+\epsilon^2 y_2(x)+\epsilon^3 y_3(x) + o(\epsilon^3).$$
Then clearly, $y_o(x)=x$, and $$\epsilon \, y_1(x) + o(\epsilon) = -\epsilon\sin\bigl(2x+2\epsilon y_1+o(\epsilon)\bigr) = -\epsilon\sin(2x)\underbrace{\cos\bigl(2\epsilon y_1+o(\epsilon)\bigr)}_{\sim 1} - \epsilon\cos(2x)\underbrace{\sin\bigl(2\epsilon y_1+o(\epsilon)\bigr)}_{\sim 2\epsilon y_1 = O(\epsilon)}.$$
This implies $y_1(x)= -\sin(2x)$. Similarly, at the next two orders we find $y_2(x)=2\sin(2x)\cos(2x)=\sin(4x)$ and $y_3(x)=2\sin^3(2x)-2\cos(2x)\sin(4x)$, if I haven't made a mistake in the algebra. Hence
$$y(x;\epsilon) = x - \epsilon \sin(2x) + \epsilon^2 \sin(4x)+\epsilon^3 \left(2\sin^3(2x)-2\cos(2x)\sin(4x)\right) +o(\epsilon^3). $$
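Since the question mentions Mathematica, here is the analogous computation as a small sympy sketch (my addition): iterate the fixed point $y \mapsto x - \epsilon \sin(2y)$, truncating in $\epsilon$; each pass gains one order of accuracy.

```python
import sympy as sp

x, eps = sp.symbols("x epsilon")

y = x  # zeroth order: y_0 = x
for _ in range(3):  # each fixed-point iteration is accurate to one more order
    y = sp.series(x - eps * sp.sin(2 * y), eps, 0, 4).removeO()

print(sp.expand(y))  # coefficients of eps^0 .. eps^3
```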
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Orthonormal basis Consider $\mathbb{R}^3$ together with inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2x_1 y_1+x_2 y_2+3 x_3 y_3$. Use the Gram-Schmidt procedure to find an orthonormal basis for $W=\text{span} \left\{(-1, 1, 0), (-1, 1, 2) \right\}$.
I don't get how the inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2 x_1 y_1+x_2 y_2+3 x_3 y_3$ would affect the approach to solving this question. When I did the Gram-Schmidt procedure, I got $v_1=(-1, 1, 0)$ and $v_2=(0, 0, 2)$, but then realized that you have to do something with the inner product before finding the orthonormal basis. Can someone please help me?
Update: So far I got $\{(\frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, 0), (0, 0, \frac{2}{\sqrt{12}})\}$ as my orthonormal basis but I'm not sure if I am doing it right with the given inner product.
| The choice of inner product defines the notion of orthogonality.
The usual notion of being "perpendicular" depends on the notion of "angle" which turns out to depend on the notion of "dot product".
If you change the way we measure the "dot product" to give a more general inner product then we change what we mean by "angle", and so have a new notion of being "perpendicular", which in general we call orthogonality.
So when you apply the Gram-Schmidt procedure to these vectors you will NOT necessarily get vectors that are perpendicular in the usual sense (their dot product might not be $0$).
Let's apply the procedure.
It says that to get an orthogonal basis we start with one of the vectors, say $u_1 = (-1,1,0)$ as the first element of our new basis.
Then we do the following calculation to get the second vector in our new basis:
$u_2 = v_2 - \frac{\langle v_2, u_1\rangle}{\langle u_1, u_1\rangle} u_1$
where $v_2 = (-1,1,2)$.
Now $\langle v_2, u_1\rangle = 3$ and $\langle u_1, u_1\rangle = 3$ so that we are given:
$u_2 = v_2 - u_1 = (0,0,2)$.
So your basis is correct. Let's check that these vectors are indeed orthogonal. Remember, this is with respect to our new inner product. We find that:
$\langle u_1, u_2\rangle = 2(-1)(0) + (1)(0) + 3(0)(2) = 0$
(here we also happened to get a basis that is perpendicular in the traditional sense; this was lucky).
Now, is the basis orthonormal? (In other words, are these unit vectors?) No they aren't, so to get an orthonormal basis we must divide each by its length. Now this is not the length in the usual sense of the word, because yet again this is something that depends on the inner product you use. The usual Pythagorean way of finding the length of a vector is:
$||x||=\sqrt{x_1^2 + \cdots + x_n^2} = \sqrt{x \cdot x}$
It is just the square root of the dot product with itself. So with more general inner products we can define a "length" via:
$||x|| = \sqrt{\langle x,x\rangle}$.
With this length we see that:
$||u_1|| = \sqrt{2(-1)(-1) + (1)(1) + 3(0)(0)} = \sqrt{3}$
$||u_2|| = \sqrt{2(0)(0) + (0)(0) + 3(2)(2)} = 2\sqrt{3}$
(notice how these are different to what you would usually get using the Pythagorean way).
Thus an orthonormal basis is given by:
$\{\frac{u_1}{||u_1||}, \frac{u_2}{||u_2||}\} = \{(\frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, 0), (0,0,\frac{1}{\sqrt{3}})\}$
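To make the role of the inner product concrete, here is a small numpy sketch of the same computation (my addition): the weighted inner product is $\langle x,y\rangle = x^\top W y$ with $W = \operatorname{diag}(2,1,3)$.

```python
import numpy as np

W = np.diag([2.0, 1.0, 3.0])  # encodes <x, y> = 2*x1*y1 + x2*y2 + 3*x3*y3

def inner(x, y):
    return x @ W @ y

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        u = v - sum(inner(v, b) * b for b in basis)  # remove projections
        basis.append(u / np.sqrt(inner(u, u)))       # normalize in the W-norm
    return basis

for b in gram_schmidt([np.array([-1.0, 1.0, 0.0]), np.array([-1.0, 1.0, 2.0])]):
    print(b)  # (-1,1,0)/sqrt(3) and (0,0,1/sqrt(3)), as derived above
```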
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Evaluating $\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp(n x-\frac{x^2}{2}) \sin(2 \pi x)\ dx$ I want to evaluate the following integral ($n \in \mathbb{N}\setminus \{0\}$):
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp\left(n x-\frac{x^2}{2}\right) \sin(2 \pi x)\ dx$$
Maple and WolframAlpha tell me that this is zero and I also hope it is zero, but I don't see how I can argue for it.
I thought of rewriting the sine via $\displaystyle \sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$ or using Euler's identity on $\exp(n x-\frac{x^2}{2})$. However, in both ways I am stuck...
Thanks for any hint.
| $$I = \frac{e^{n^2/2}}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} e^{-\frac{(x-n)^2}{2}} \sin (2\pi x) \, dx \stackrel{x \,\mapsto\, x+n}{=} \frac{e^{n^2/2}}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx$$
Now divide the integral into two parts:
$$\int_{-\infty}^{+\infty} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx = \int_{-\infty}^{0} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx + \int_{0}^{+\infty} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx$$
Take one of them and substitute $t=-x$:
$$\int_{-\infty}^{0} e^{-\frac{x^2}{2}} \sin (2\pi x) \, dx = -\int_0^{+\infty}e^{-\frac{t^2}{2}} \sin (2\pi t) \, dt$$
Because these integrals are finite, i.e.:
$$\int_0^{+\infty} \left| e^{-\frac{t^2}{2}} \sin (2\pi t) \right| \, dt \le \int_0^{+\infty}e^{-\frac{t^2}{2}} \, dt = \sqrt{\frac{\pi}{2}}$$
We can write $I = 0$ and we are not dealing with any kind of indeterminate form like $\infty - \infty$.
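A quick numerical sanity check (a Python sketch I am adding, assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad

n = 2
f = lambda x: np.exp(n * x - x**2 / 2) * np.sin(2 * np.pi * x) / np.sqrt(2 * np.pi)
val, err = quad(f, -np.inf, np.inf, limit=200)
print(val, err)  # value ~ 0, within the reported quadrature error
```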
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the result of sum $\sum\limits_{i=0}^n 2^i$
Possible Duplicate:
the sum of powers of $2$ between $2^0$ and $2^n$
What is the result of
$$2^0 + 2^1 + 2^2 + \cdots + 2^{n-1} + 2^n\ ?$$
Is there a formula on this? and how to prove the formula?
(It is actually to compute the time complexity of a Fibonacci recursive method.)
| Let us take a particular example that is large enough to illustrate the general situation. Concrete experience should precede the abstract.
Let $n=8$. We want to show that $2^0+2^1+2^2+\cdots +2^8=2^9-1$. We could add up on a calculator, and verify that the result holds for $n=8$. However, we would not learn much during the process.
We will instead look at the sum written backwards, so at
$$2^8+2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0.$$
A kangaroo is $2^9$ feet from her beloved $B$. She takes a giant leap of $2^8$ feet. Now she is $2^8$ feet from $B$. She takes a leap of $2^7$ feet. Now she is $2^7$ feet from $B$. She takes a leap of $2^6$ feet. And so on. After a while she is $2^1$ feet from $B$, and takes a leap of $2^0$ feet, leaving her $2^0$ feet from $B$.
The total distance she has covered is $2^8+2^7+2^6+\cdots+2^0$. It leaves her $2^0$ feet from $B$, and therefore
$$2^8+2^7+2^6+\cdots+2^0+2^0=2^9.$$
Since $2^0=1$, we obtain by subtraction that $2^8+2^7+\cdots +2^0=2^9-1$.
We can write out the same reasoning without the kangaroo. Note that
$2^0+2^0=2^1$, $2^1+2^1=2^2$, $2^2+2^2=2^3$, and so on until $2^8+2^8=2^9$.
Therefore
$$(2^0+2^0)+2^1+2^2+2^3+2^4+\cdots +2^8=2^9.$$
Subtract the front $2^0$ from the left side, and $2^0$, which is $1$, from the right side, and we get our result.
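The identity the argument proves, $2^0+2^1+\cdots+2^n = 2^{n+1}-1$, is a one-line check in code (my addition):

```python
for n in range(20):
    assert sum(2**i for i in range(n + 1)) == 2**(n + 1) - 1
print("verified for n = 0..19")
```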
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 7,
"answer_id": 3
} |
Integral of $\int 2\,\sin^{2}{x}\cos{x}\,dx$ I am asked as a part of a question to integrate $$\int 2\,\sin^{2}{x}\cos{x}\,dx$$
I managed to integrate it using integration by inspection:
$$\begin{align}\text{let } y&=\sin^3 x\\
\frac{dy}{dx}&=3\,\sin^2{x}\cos{x}\\
\text{so }\int 2\,\sin^{2}{x}\cos{x}\,dx&=\frac{2}{3}\sin^3x+c\end{align}$$
However, looking at my notebook the teacher did this:
$$\int -\left(\frac{\cos{3x}-\cos{x}}{2}\right)dx$$
And arrived to this result:
$$-\frac{1}{6}\sin{3x}+\frac{1}{2}\sin{x}+c$$
I'm pretty sure my answer is correct as well, but I'm curious to find out how he rewrote the integrand in a form we can integrate.
| Another natural approach is the substitution $u=\sin x$.
The path your instructor chose is less simple. We can rewrite $\sin^2 x$ as $1-\cos^2x$, so we are integrating $2\cos x-2\cos^3 x$. Now use the identity $\cos 3x=4\cos^3 x-3\cos x$ to conclude that $2\cos^3 x=\frac{1}{2}\left(\cos 3x+3\cos x\right)$.
Remark: The identity $\cos 3x=4\cos^3 x-3\cos x$ comes up occasionally, for example in a formula for solving certain classes of cubic equations. The same identity comes up when we are proving that the $60^\circ$ angle cannot be trisected by straightedge and compass.
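For reference, the teacher's starting form most likely comes from a product-to-sum identity; a short derivation (my reconstruction, not stated in the thread):
$$2\sin^2 x\cos x = \sin x\,(2\sin x\cos x) = \sin x\sin 2x = \frac{\cos(2x-x)-\cos(2x+x)}{2} = -\left(\frac{\cos 3x - \cos x}{2}\right).$$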
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 3
} |
A series with prime numbers and fractional parts Considering $p_{n}$, the $n$th prime number, compute the limit:
$$\lim_{n\to\infty} \left\{ \dfrac{1}{p_{1}} + \frac{1}{p_{2}}+\cdots+\frac{1}{p_{n}} \right\} - \{\log{\log n } \}$$
where $\{ x \}$ denotes the fractional part of $x$.
| This is by no means a complete answer, but a sketch of how one might possibly go about it. To get the constant, you need some careful computations.
First get an asymptotic for $ \displaystyle \sum_{n \leq x} \dfrac{\Lambda(n)}{n}$ as $\log(x) - \gamma + o(1)$.
To get this asymptotic, you need Stirling's formula and the fact that $\psi(x) = x + o(x)$ i.e. the PNT.
Then relate $ \displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\Lambda(p)}{p}$ to $ \displaystyle \sum_{n \leq x} \dfrac{\Lambda(n)}{n}$. Essentially, you can get $$ \displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\Lambda(p)}{p} = \displaystyle \sum_{n \leq x} \dfrac{\Lambda(n)}{n} + C + o(1)$$
Getting this constant $C$ is also hard. You can try your luck with partial summation.
Then relate $ \displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac1{p}$ to $\displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\Lambda(p)}{p}$ i.e. $\displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac{\log(p)}{p}$ by partial summation to get $$\displaystyle \sum_{\overset{p \leq x}{p- \text{prime}}} \dfrac1{p} = \log(\log(x)) + C_1 + o(1)$$
You might need to invoke the PNT here as well. The $C_1$ here obviously depends on the constant $C$ and the constant $-\gamma$ from before.
EDIT: Thinking about it, there might be a way to avoid the PNT to get the constant.
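Numerically, $\sum_{p\le x} 1/p - \log\log x$ does settle down quickly (a Python sketch I am adding; the limiting value is the Meissel-Mertens constant $\approx 0.2615$):

```python
import math
from sympy import primerange

for x in [10**4, 10**5, 10**6]:
    s = sum(1.0 / p for p in primerange(2, x))
    print(x, s - math.log(math.log(x)))  # approaches ~0.26149...
```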
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is $A^{q+2}=A^2$ in $M_2(\mathbb{Z}/p\mathbb{Z})$? I'm wondering why it is that for $q=(p^2-1)(p^2-p)$, $A^{q+2}=A^2$ for any $A\in M_2(\mathbb{Z}/p\mathbb{Z})$.
It's not hard to see that $GL_2(\mathbb{Z}/p\mathbb{Z})$ has order $(p^2-1)(p^2-p)$, and so $A^q=1$ if $A\in GL_2(\mathbb{Z}/p\mathbb{Z})$, and so the equation holds in that case. But if $A$ is not invertible, why does the equality still hold?
| If $A$ is not invertible, then its characteristic polynomial is either $x^2$ or $x(x-a)$ for some nonzero $a\in\mathbb{Z}/p\mathbb{Z}$. In the former case, by the Cayley-Hamilton Theorem we have $A^2 = 0$, hence $A^{q+2}=A^2$. In the latter case, the matrix is similar to a diagonal matrix, with $0$ in one diagonal entry and $a$ in the other. So, up to conjugation, we have
$$A^{q+2}=\left(\begin{array}{cc}
0 & 0\\
0 & a\end{array}\right)^{q+2} = \left(\begin{array}{cc}
0 & 0\\
0 & a^{q+2}
\end{array}\right).$$
But $a^{p} = a$. Since $q = p^4 -p^3-p^2 + p$, we have
$$a^{q} = \frac{a^{p^4}a^p}{a^{p^3}a^{p^2}} = 1$$
so $a^{q+2} = a^2$, hence $A^{q+2}=A^2$.
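A brute-force verification for a small prime (my addition; it checks every one of the $p^4$ matrices, invertible or not):

```python
import itertools
import numpy as np

p = 3
q = (p**2 - 1) * (p**2 - p)

def matpow_mod(A, k):
    # square-and-multiply for 2x2 matrices over Z/pZ
    R, A = np.eye(2, dtype=np.int64), A % p
    while k:
        if k & 1:
            R = (R @ A) % p
        A = (A @ A) % p
        k >>= 1
    return R

for entries in itertools.product(range(p), repeat=4):
    A = np.array(entries, dtype=np.int64).reshape(2, 2)
    assert np.array_equal(matpow_mod(A, q + 2), matpow_mod(A, 2))
print(f"A^(q+2) == A^2 for all {p**4} matrices over F_{p}")
```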
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Parametric equation of a cone.
I have a cone with vertex $(a, b, c)$ and base circle with center $(x_0,y_0)$ and radius $R$ in the $z=0$ plane. I can't work out a parametric representation of the three-dimensional region inside the cone. Any suggestions, please?
| The parametric equation of the circle is:
$$
\gamma(u) = (x_0 + R\cos u, y_0 + R\sin u, 0)
$$
Each point on the cone lies on a line that passes through $p(a, b, c)$ and a point on the circle. Therefore, the direction vector of such a line is:
$$
\gamma(u) - p = (x_0 + R\cos u, y_0 + R\sin u, 0) - (a, b, c) = (x_0 - a + R\cos u, y_0 - b + R\sin u, -c)
$$
And since the line passes through $p(a, b, c)$, the parametric equation of the line is:
$$
p + v\left(\gamma(u) - p\right) = (a, b, c) + v \left((x_0 - a + R\cos u, y_0 - b + R\sin u, -c)\right)
$$
Simplify to get the parametric equation of the cone:
$$
\sigma(u, v) = \left(a(1-v) + v(x_0 + R\cos u), b(1-v) + v(y_0 + R\sin u), c(1 - v)\right)
$$
Here is a plot of the cone for $p(0, 0, 2)$, $(x_0, y_0) = (-1, -1)$ and $R = 1$, when $u$ scans $[0, 2\pi]$ and $v$ scans $[0, 1]$.
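The plot can be reproduced with a short matplotlib sketch (my addition, using the same parameters):

```python
import numpy as np
import matplotlib.pyplot as plt

a, b, c = 0.0, 0.0, 2.0       # vertex p
x0, y0, R = -1.0, -1.0, 1.0   # base circle

u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 60), np.linspace(0, 1, 20))
X = a * (1 - v) + v * (x0 + R * np.cos(u))
Y = b * (1 - v) + v * (y0 + R * np.sin(u))
Z = c * (1 - v)

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, alpha=0.7)
plt.show()
```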
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/158987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Signature of a manifold as an invariant Could you help me to see why signature is a HOMOTOPY invariant? Definition is below (from Stasheff)
The *signature (index)* $\sigma(M)$ of a compact and oriented $n$-manifold $M$ is defined as follows. If $n=4k$ for some $k$, we choose a basis $\{a_1,...,a_r\}$ for $H^{2k}(M^{4k}, \mathbb{Q})$ so that the *symmetric* matrix $[\langle a_i \smile a_j, \mu\rangle]$ is diagonal. Then $\sigma (M^{4k})$ is the number of positive diagonal entries minus the number of negative ones. Otherwise (if $n$ is not a multiple of $4$) $\sigma(M)$ is defined to be zero.
| You should be using a more invariant definition of the signature. First, cohomology and Poincaré duality are both homotopy invariant. It follows that the abstract vector space $H^{2k}$ equipped with the intersection pairing is a homotopy invariant. Now I further claim that the signature is an invariant of real vector spaces equipped with a nondegenerate bilinear pairing (this is just Sylvester's law of inertia). So after tensoring with $\mathbb{R}$ the conclusion follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Positive Semi-Definite matrices and subtraction I have been wondering about this for some time, and I haven't been able to answer the question myself. I also haven't been able to find anything about it on the internet. So I will ask the question here:
Question: Assume that $A$ and $B$ both are positive semi-definite. When is $C = (A-B)$ positive semi-definite?
I know that I can figure it out for given matrices, but I am looking for a necessary and sufficient condition.
It is of importance when trying to find solutions to conic-inequality systems, where the cone is the cone generated by all positive semi-definite matrices. The question I'm actually interested in finding nice result for are:
Let $x \in \mathbb{R}^n$, and let $A_1,\ldots,A_n,B$ be positive semi-definite. When is
$(\sum^n_{i=1}x_iA_i) - B$
positive semi-definite?
I feel the answer to my first question should yield the answer to the latter. I am looking for something simpler than actually calculating the eigenvalues.
| There's a form of Sylvester's criterion for positive semi-definiteness, which unfortunately requires a lot more computations than the better known test for positive definiteness. Namely, all principal minors (not just the leading ones) must be nonnegative. Principal minors are obtained by deleting some of the rows and the same-numbered columns.
The book Matrix Analysis by Horn and Johnson is the best reference for positive (semi)definiteness that I know.
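In code, the semidefinite version of Sylvester's criterion looks like this (a Python sketch I am adding; the exponential number of minors is exactly the extra cost mentioned above):

```python
import itertools
import numpy as np

def is_psd(M, tol=1e-10):
    # all principal minors (every index subset) must be nonnegative
    n = M.shape[0]
    for size in range(1, n + 1):
        for idx in itertools.combinations(range(n), size):
            if np.linalg.det(M[np.ix_(idx, idx)]) < -tol:
                return False
    return True

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.eye(2)
print(is_psd(A - B))  # True: A - B has eigenvalues 0 and 2
```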
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Traces of all positive powers of a matrix are zero implies it is nilpotent Let $A$ be an $n\times n$ complex nilpotent matrix. Then we know that because all eigenvalues of $A$ must be $0$, it follows that $\text{tr}(A^n)=0$ for all positive integers $n$.
What I would like to show is the converse, that is,
if $\text{tr}(A^n)=0$ for all positive integers $n$, then $A$ is nilpotent.
I tried to show that $0$ must be an eigenvalue of $A$, then try to show that all other eigenvalues must be equal to 0. However, I am stuck at the point where I need to show that $\det(A)=0$.
May I know of the approach to show that $A$ is nilpotent?
| If the eigenvalues of $A$ are $\lambda_1$, $\dots$, $\lambda_n$, then the eigenvalues of $A^k$ are $\lambda_1^k$, $\dots$, $\lambda_n^k$. It follows that if all powers of $A$ have zero trace, then $$\lambda_1^k+\dots+\lambda_n^k=0\qquad\text{for all $k\geq1$.}$$ Using Newton's identities to express the elementary symmetric functions of the $\lambda_i$'s in terms of their power sums, we see that all the coefficients of the characteristic polynomial of $A$ (except that of greatest degree, of course) are zero. This means that $A$ is nilpotent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 3,
"answer_id": 1
} |
What is the correct way to solve $\sin(2x)=\sin(x)$
I've found two different ways to solve this trigonometric equation
$\begin{align*}
\sin(2x)=\sin(x) \Leftrightarrow \\\\ 2\sin(x)\cos(x)=\sin(x)\Leftrightarrow \\\\ 2\sin(x)\cos(x)-\sin(x)=0 \Leftrightarrow\\\\ \sin(x) \left[2\cos(x)-1 \right]=0 \Leftrightarrow \\\\ \sin(x)=0 \vee \cos(x)=\frac{1}{2} \Leftrightarrow\\\\ x=k\pi \vee x=\frac{\pi}{3}+2k\pi \vee x=\frac{5\pi}{3}+2k\pi \space, \space k \in \mathbb{Z}
\end{align*}$
The second way was:
$\begin{align*}
\sin(2x)=\sin(x)\Leftrightarrow \\\\ 2x=x+2k\pi \vee 2x=\pi-x+2k\pi\Leftrightarrow \\\\ x=2k\pi \vee3x=\pi +2k\pi\Leftrightarrow \\\\x=2k\pi \vee x=\frac{\pi}{3}+\frac{2k\pi}{3} \space ,\space k\in \mathbb{Z}
\end{align*}$
What is the correct one?
Thanks
| These answers are equivalent and both are correct. Placing angle $x$ on a unit circle, your first decomposition gives all angles at the far west and east sides, then all the angles $60$ degrees north of east, then all the angles $60$ degrees south of east.
Your second decomposition takes all angles at the far east side first. Then it takes all angles spaced one-third around the circle starting at 60 degrees north of east. You have the same solution set either way.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |
Motivation for Koszul complex Koszul complex is important for homological theory of commutative rings.
However, it's hard to guess where it came from.
What was the motivation for Koszul complex?
| In this answer I would rather focus on why the Koszul complex is so widely used. In abstract terms, the Koszul complex arises as the easiest way to combine an algebra with a coalgebra in the presence of quadratic data. You can find the modern generalization of the Koszul duality described in Aaron's comment by reading Loday and Vallette, Algebraic Operads (mostly chapters 2-3).
To my knowledge the Koszul complex is extremely useful because you can use it even with certain $A_\infty$-structures arising from deformation quantization of Poisson structures, and you can relate it to the other "most used resolution in homological algebra", i.e. the bar resolution.
For a quick review of this fact, please check my answer in Homotopy equivalent chain complexes
As you can see, it is a flexible object which has the property of being extremely "explicit". This has helped its diffusion in the mathematical literature a lot.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 0
} |
Generating function of Lah numbers Let $L(n,k)\!\in\!\mathbb{N}_0$ be the Lah numbers. We know that they satisfy
$$L(n,k)=L(n\!-\!1,k\!-\!1)+(n\!+\!k\!-\!1)L(n\!-\!1,k)$$
for all $n,k\!\in\!\mathbb{Z}$. How can I prove
$$\sum_nL(n,k)\frac{x^n}{n!}=\frac{1}{k!}\Big(\frac{x}{1-x}\Big)^k$$
without using the explicit formula $L(n,k)\!=\!\frac{n!}{k!}\binom{n-1}{k-1}$?
Attempt 1: $\text{LHS}=\sum_nL(n\!-\!1,k\!-\!1)\frac{x^n}{n!}+\sum_n(n\!+\!k\!-\!1)L(n\!-\!1,k)\frac{x^n}{n!}\overset{i.h.}{=}?$
Attempt 2: $\text{RHS}\overset{i.h.}{=}$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n,k\!-\!1)\frac{x^n}{n!}=$ $\frac{1}{k}\frac{x}{1-x}\sum_nL(n\!-\!1,k\!-\!1)\frac{x^{n-1}}{(n-1)!}=$
$\frac{1}{k(1-x)}\sum_nn\big(L(n,k)-(n\!+\!k\!-\!1)L(n\!-\!1,k)\big)\frac{x^n}{n!}=?$
| We have
\begin{align}
f_k(x)&:=\sum_{n\in\Bbb Z}L(n,k)\frac{x^n}{n!}\\
&=\sum_{n\in \Bbb Z}L(n-1,k-1)\frac{x^n}{n!}+\sum_{n\in \Bbb Z}(n+k-1)L(n-1,k)\frac{x^n}{n!}\\
&=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+\sum_{j\in \Bbb Z}(j+1+k-1)L(j,k)\frac{x^{j+1}}{(j+1)!}\\
&=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{j!}+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}\\
&=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+xf_k(x)+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}
\end{align}
hence
$$(1-x)f_k(x)=\sum_{j\in \Bbb Z}L(j,k-1)\frac{x^{j+1}}{(j+1)!}+(k-1)\sum_{j\in \Bbb Z}L(j,k)\frac{x^{j+1}}{(j+1)!}.$$
Now we take the derivatives to get
$$-f_k(x)+(1-x)f'_k(x)=f_{k-1}(x)+(k-1)f_k(x)$$
hence
$$(1-x)f'_k(x)-kf_k(x)=f_{k-1}(x).$$
Multiplying by $(1-x)^{k-1}$ and using the formula for $f_{k-1}$, we get
$$(1-x)^kf'_k(x)-k(1-x)^{k-1}f_k(x)=\frac{x^{k-1}}{(k-1)!}$$
so
$$((1-x)^kf_k(x))'=\frac{x^{k-1}}{(k-1)!}.$$
Integrating, we get the wanted result up to another term (namely $C(1-x)^{-k}$), but the constant $C$ must vanish, as one sees from the value at $0$ and the initial definition of the Lah numbers.
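A quick check of the identity using only the recurrence (a sympy sketch I am adding; it deliberately avoids the closed form):

```python
import sympy as sp
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def L(n, k):
    # Lah recurrence: L(n,k) = L(n-1,k-1) + (n+k-1) L(n-1,k), with L(0,0) = 1
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    return L(n - 1, k - 1) + (n + k - 1) * L(n - 1, k)

x = sp.symbols("x")
k, N = 3, 9
lhs = sum(L(n, k) * x**n / factorial(n) for n in range(N))
rhs = sp.series((x / (1 - x))**k / factorial(k), x, 0, N).removeO()
print(sp.expand(lhs - rhs))  # 0
```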
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Prove the convergence/divergence of $\sum \limits_{k=1}^{\infty} \frac{\tan(k)}{k}$ Can it be easily proved that the following series converges or diverges?
$$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$
I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.
| A proof that the sequence $\frac{\tan(n)}{n}$ does not have a limit for $n\to \infty$ is given in this article (Sequential tangents, Sam Coskey). This, of course, implies that the series does not converge.
The proof, based on this paper by Rosenholtz (*), uses the continued fraction of $\pi/2$, and, essentially, it shows that it's possible to find a subsequence such that $\tan(n_k)$ is "big enough", by taking numerators of the truncated continued fraction ("convergents").
(*) "Tangent Sequences, World Records, π, and the Meaning of Life: Some Applications of Number Theory to Calculus", Ira Rosenholtz - Mathematics Magazine Vol. 72, No. 5 (Dec., 1999), pp. 367-376
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 3,
"answer_id": 1
} |
Proving the Möbius formula for cyclotomic polynomials We want to prove that
$$ \Phi_n(x) = \prod_{d|n} \left( x^{\frac{n}{d}} - 1 \right)^{\mu(d)} $$
where $\Phi_n(x)$ in the n-th cyclotomic polynomial and $\mu(d)$ is the Möbius function defined on the natural numbers.
We were instructed to do it by the following stages:
Using induction we assume that the formula is true for $n$ and we want to prove it for $m = n p^k$ where $p$ is a prime number such that $p\not{|}n$.
a) Prove that $$\prod_{\xi \in C_{p^k}}\xi = (-1)^{\phi(p^k)} $$ where $C_{p^k}$ is the set of all primitive $p^k$-th roots of unity, and $\phi$ is the Euler function. I proved that.
b) Using the induction hypothesis show that
$$ \Phi_m(x) = (-1)^{\phi(p^k)} \prod_{d|n} \left[ \prod_{\xi \in C_{p^k}} \left( (\xi^{-1}x)^{\frac{n}{d}} - 1 \right) \right]^{\mu(d)} $$
c) Show that
$$ \prod_{\xi \in C_{p^k}} \left( (\xi^{-1}x)^{\frac{n}{d}} - 1 \right) = (-1)^{\phi(p^k)} \frac{x^{\frac{m}{d}}-1}{x^{\frac{m}{pd}} - 1} $$
d) Use these results to prove the formula by substituting c) into b).
I am stuck in b) and c).
In b) I tried to use the recursion formula $$ x^m - 1 = \prod_{d|m}\Phi_d(x) $$ and
$$ \Phi_m(x) = \frac{x^m-1}{ \prod_{\stackrel{d|m}{d<m}} \Phi_d(x)} . $$
In c) I tried expanding the product by Newton's binom using $\phi(p^k) = p^k ( 1 - 1/p)$. I also tried replacing the product by $\xi \mapsto [ \exp(i2\pi / p^k) ]^j$ and let $j$ run on numbers that don't divide $p^k$. In both way I got stuck.
I would appreciate help here.
| I found a solution that is easy to understand for those who want to know how to solve without following the steps given in the problem.
First, we have a formula
$$
x^{n}-1=\prod_{d\mid n}\Phi_{d}(x)
$$
Then, by taking the logarithm on both sides,
$$
\log(x^{n}-1)=\log\Big(\prod_{d\mid n}\Phi_{d}(x)\Big)=\sum_{d\mid n}\log\Phi_{d}(x)
$$
Now, we use the Möbius inversion formula by taking $f_{n}=\log\Phi_{n}$ and $F_{n}=\sum_{d\mid n}\log\Phi_{d}$.
\begin{align*}
\log(\Phi_{n}(x)) & = \sum_{d\mid n}\mu\left(\frac{n}{d}\right)\log(x^{d}-1)\\
& = \sum_{d\mid n}\log(x^{d}-1)^{\mu(n/d)}\\
& = \log\prod_{d\mid n}(x^{d}-1)^{\mu(n/d)}
\end{align*}
Hence, we have
$$
\Phi_{n}(x)=\prod_{d\mid n}(x^{d}-1)^{\mu(n/d)}
$$
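The formula is easy to test in sympy (my addition):

```python
import sympy as sp
from sympy.ntheory import mobius

x = sp.symbols("x")
for n in [6, 12, 30]:
    prod = sp.Mul(*[(x**d - 1)**mobius(n // d) for d in sp.divisors(n)])
    assert sp.simplify(prod - sp.cyclotomic_poly(n, x)) == 0
print("Mobius formula verified for n = 6, 12, 30")
```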
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
Partial Integration - Where did I go wrong? For homework, I need $\int \frac{x}{(x-1)^2} dx$ as an intermediate result. Using integration by parts, I differentiate $x$ and integrate $\frac{1}{(x-1)^2}$, getting: $$ \frac{-x}{x-1} + \int \frac{1}{x-1} dx = \ln(x-1)+\frac{x}{x-1} $$
WolframAlpha tells me this is wrong (it gives me $\frac{1}{1-x}$ where I have $\frac{x}{x-1}$). If I and WA disagree the error is usually somewhere on my side. Unfortunately WA uses partial fractions there instead of partial integration, so I'm not sure which step I screwed up. Supposedly $\int f'g dx = fg - \int fg' dx$ right?
(I leave the constant +C out because it's not relevant for the problem I need this for).
| The result is $\ln|x-1| - \frac x{x-1} + C$, where $C$ is some constant.
If $C=1$ you get $\ln |x-1| + \frac 1{1-x}$.
The final result can be expressed as:
$$F(x) = \ln |x-1| + \frac 1{1-x} + \lambda$$ where $\lambda$ is some constant.
Precisely:
$$F(x) = \ln (x-1) + \frac 1{1-x} + \lambda$$ on the interval $]1,+\infty[$,
and:
$$F(x) = \ln (1-x) + \frac 1{1-x} + \lambda$$ on the interval $]-\infty ,1[$.
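A one-line derivative check of this answer (a sympy sketch I am adding):

```python
import sympy as sp

x = sp.symbols("x")
F = sp.log(x - 1) - x / (x - 1)
print(sp.simplify(sp.diff(F, x) - x / (x - 1)**2))  # 0
```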
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to calculate all the four solutions to $(p+5)(p-1) \equiv 0 \pmod {16}$? This is a kind of plain question, but I just can't get something.
Consider the congruence, for a prime number $p$: $(p+5)(p-1) \equiv 0\pmod {16}$.
How come, in addition to the solutions
$$\begin{align*}
p &\equiv 11\pmod{16}\\
p &\equiv 1\pmod {16}
\end{align*}$$
we also have
$$\begin{align*}
p &\equiv 9\pmod {16}\\
p &\equiv 3\pmod {16}\ ?
\end{align*}$$
Where do the last two come from? Are there always 4 solutions? I can see that they satisfy the equation, but how can I calculate them?
Thanks
| First note that $p$ has to be odd. Else, $(p+5)$ and $(p-1)$ are both odd.
Let $p = 2k+1$. Then we need $16 \vert (2k+6)(2k)$ i.e. $4 \vert k(k+3)$.
Since $k$ and $k+3$ are of opposite parity, we need $4|k$ or $4|(k+3)$.
Hence, $k = 4m$ or $k = 4m+1$. This gives us $ p = 2(4m) + 1$ or $p = 2(4m+1)+1$.
Hence, we get that $$p = 8m +1 \text{ or }8m+3$$ which is what your claim is as well.
EDIT
You have obtained the first two solutions i.e. $p = 16m+1$ and $p=16m + 11$ by looking at the cases $16 \vert (p-1)$ (or) $16 \vert (p+5)$ respectively.
However, note that you are leaving out the following possibilities.
- $2 \mid (p+5)$ and $8 \mid (p-1)$. This combination also implies $16 \mid (p+5)(p-1)$.
- $4 \mid (p+5)$ and $4 \mid (p-1)$. This combination also implies $16 \mid (p+5)(p-1)$.
- $8 \mid (p+5)$ and $2 \mid (p-1)$. This combination also implies $16 \mid (p+5)(p-1)$.
Out of the above possibilities, the second one can be ruled out: if $4 \mid (p+5)$ and $4 \mid (p-1)$, then $4 \mid ((p+5)-(p-1))$, i.e. $4 \mid 6$, which is not possible.
The first possibility gives us $p = 8m+1$, while the third possibility gives us $p = 8m+3$.
Combining this with your answer, we get that $$p = 8m +1 \text{ or }8m+3$$
In general, when you want to analyze $a \mid bc$, you need to write $a = d_1 d_2$, where $d_1,d_2 \in \mathbb{Z}$, and then look at the cases $d_1 \mid b$ and $d_2 \mid c$.
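Since there are only $16$ residues to try, a brute-force check confirms the four solutions (my addition):

```python
print([p for p in range(16) if ((p + 5) * (p - 1)) % 16 == 0])  # [1, 3, 9, 11]
```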
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 0
} |
constructive proof of the infinitude of primes There are infinitely many prime numbers. Euclid gave a constructive proof as follows.
For any set of prime numbers $\{p_1,\ldots,p_n\}$, the prime factors of $p_1\cdot \ldots \cdot p_n +1$ do not belong to the set $\{p_1,\ldots,p_n\}$.
I'm wondering if the following can be made into a constructive proof too.
Let $p_1 = 2$. Then, for $n\geq 2$, define $p_n$ to be a prime number in the interval $(p_{n-1},p_{n-1} + \delta_n]$, where $\delta_n$ is a real number depending only on $n$. Is such a $\delta_n$ known?
Note that this would be a constructive proof once we find a $\delta_n$ because finding a prime number in $(p_{n-1},p_{n-1}+\delta]$ can be done in finite time algorithmically.
For some reason I believe such a $\delta_n$ is not known.
In this spirit, is it known that we can't take $\delta_n = 10n$ for example?
| As noted in the comments, we can take $\delta_n=p_{n-1}$ (Bertrand's postulate). In fact, there are improvements on that in the literature. But if you want something really easy to prove, you can take $\delta_n$ to be the factorial of $p_{n-1}$, since that gives you an interval which includes Euclid's $p_1\times p_2\times\cdots\times p_{n-1}+1$ and therefore includes a new prime.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Is there any geometric way to characterize $e$? Let me explain it better: after this question, I've been looking for a way to put famous constants in the real line in a geometrical way -- just for fun. Putting $\sqrt2$ is really easy: constructing a $45^\circ$-$90^\circ$-$45^\circ$ triangle with unitary sides will make me have an idea of what $\sqrt2$ is. Extending this to $\sqrt5$, $\sqrt{13}$, and other algebraic numbers is easy using Trigonometry; however, it turned difficult working with some transcendental constants. Constructing $\pi$ is easy using circumferences; but I couldn't figure out how I should work with $e$. Looking at
the area under the graph of $1/x$ made me realize that $e$ is the point $\omega$ such that $\displaystyle\int_1^{\omega}\frac{1}{x}dx = 1$. However, I don't have any other ideas. And I keep asking myself:
Is there any way to "see" $e$ geometrically? And more: is it true that one can build any real number geometrically? Any help will be appreciated. Thanks.
| Another approach might be finding a polar curve such that its tangent line forms a constant angle with the segment from $(0,0)$ to $(\theta,\rho(\theta))$. The solution is the logarithmic spiral, defined by
$$\rho =c_0 e^{a\theta}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58",
"answer_count": 14,
"answer_id": 0
} |
Diameter of wheel
If a wheel travels 1 mile in 1 minute at a rate of 600 revolutions per minute, what is the diameter of the wheel in feet? The answer to this question is 2.8 feet.
Could someone please explain how to solve this problem ?
| The distance travelled by a wheel in one revolution is nothing but its circumference. If the diameter of the wheel is $d$, then the distance travelled by the wheel in one revolution is $\pi d$.
It does $600$ revolutions per minute i.e. it travels a distance of $600 \times \pi d$ in one minute. We are also given that it travels $1$ mile in a minute.
Hence, we have that $$600 \times \pi d = 1 \text{ mile} = 5280 \text{ feet}\implies d = \dfrac{5280}{600 \pi} \text{ feet} \approx 2.801127 \text{ feet}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
$\gcd(n!+1,(n+1)!)$ The recent post didn't really provide sufficient help. It was too vague, and most of it went over my head.
Anyway, I'm trying to find the $\gcd(n!+1,(n+1)!)$.
First I let $d=ab\mid(n!+1)$ and $d=ab\mid(n+1)n!$ where $d=ab$ is the GCD.
From $ab\mid(n+1)n!$ I get $a\mid(n+1)$ and $b|n!$.
Because $b\mid n!$ and $ab\mid(n!+1)$, $b$ must be 1.
Consequently, $a\mid(n!+1)$ and $a\mid(n+1)$.
So narrowing down options for $a$ should get me an answer. At this point I've tried to somehow bring it around and relate it to Wilson's theorem as this problem is from that section of my textbook, but I seem to be missing something. This is part of independent study, though help of any kind is appreciated.
| By Euclid, $(k,\,k+1)=1\ \Rightarrow\ (pk,\,k+1) = (p,\,k+1)$, which equals $p$ if $p$ is prime and $k=(p-1)!$, by Wilson's theorem. See here for the case where $p = n+1$ is composite.
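A quick empirical check of this answer (a Python sketch I am adding): the gcd is $n+1$ exactly when $n+1$ is prime, and $1$ otherwise.

```python
from math import factorial, gcd

for n in range(1, 12):
    print(n, gcd(factorial(n) + 1, factorial(n + 1)))
# gcd = n + 1 when n + 1 is prime (Wilson), and 1 when n + 1 is composite
```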
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Automorphisms of the field of complex numbers Using AC one may prove that there are $2^{\mathfrak{c}}$ field automorphisms of the field $\mathbb{C}$. Certainly, only the identity map is $\mathbb{C}$-linear ($\mathbb{C}$-homogeneous) among them, but are all these automorphisms $\mathbb{R}$-linear?
| An automorphism of $\mathbb C$ must take $i$ into $i$ or $-i$. Thus an automorphism that is $\mathbb R$-linear must be the identity or conjugation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Why do mathematicians care so much about zeta functions? Why is it that so many people care so much about zeta functions? Why do people write books and books specifically about the theory of Riemann Zeta functions?
What is its purpose? Is it just to develop small areas of pure mathematics?
| For one thing, the Riemann zeta function has many interesting properties. No one knew a closed form for $\zeta (2)$ until Euler famously found it, along with the values at all the even positive integers:
$$\zeta(2n) = (-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2(2n)!}$$
However, to this day, no nice closed form is known for values in the form $\zeta(2n+1)$.
Another major need for the zeta function relates to the Riemann hypothesis. This conjecture is fairly simple to understand. It essentially hypothesizes that the nontrivial zeros of the zeta function have real part $1/2$. This hypothesis, if proven true, has major implications in number theory and the distribution of primes.
The Riemann zeta function also occurs in many fields and appears occasionally when evaluating different equations, just as many other functions do.
Lastly, the sum
$$\sum_{n=1}^{\infty} \frac{1}{n^s}$$
is a very natural one to try and study and evaluate and is especially interesting because of the above-mentioned properties and more.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 1
} |
norm for estimating the error of the numerical method In most of the books on numerical methods and finite difference methods, the error is measured in a discrete $L^2$ norm. I was wondering if people do the same in a Sobolev norm. I have never seen that done and I want to know why no one uses it.
To be more specific, look at $$Au=f,$$ where $A_h$ is some approximation of $A$ and $U$ is the numerical solution of the discrete system $A_hU=f$. If we plug the actual solution $u$ into $A_hU=f$ and subtract, we have $$A_h(u-U)=\tau$$ for $\tau$ the local truncation error. Thus I have the error equation $$e=A_h^{-1}\tau.$$ What problems would I face if I used a discrete Sobolev norm?
| For one thing, it's a question of what norm measures how "accurate" the solution is. Which of the two error terms would you rather have: $0.1\sin(x)$ or $0.0001\sin(10000x)$? The first is smaller in the Sobolev norm, the second is smaller in the $L^2$ norm.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/159998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are There Any Symbols for Contradictions? Perhaps this question has been answered already, but I am not aware of any existing answer. Is there any international icon or symbol for showing a contradiction, or reaching a contradiction, in mathematical contexts? An analogous case is the symbol showing that someone has reached the end of the proof of a theorem (i.e., the tombstone symbol ∎, Halmos).
| The symbols are:
$\top$ for truth (example: $100 \in \mathbb{R} \to \top$)
and $\bot$ for false (example: $\sqrt{2} \in \mathbb{Q} \to \bot$)
In Latex, \top is $\top$ and \bot is $\bot$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 12,
"answer_id": 11
} |
Continued fraction question I have been given a continued fraction for a number $x$:
$$x = 1+\frac{1}{1+}\frac{1}{1+}\frac{1}{1+}\cdots$$
How can I show that $x = 1 + \frac{1}{x}$? I played around some with the first few convergents of this continued fraction, but I don't get close.
| Just look at it. OK, if you want something more proofy-looking: if $x_n$ is the $n$'th convergent, then $x_{n+1} = 1 + 1/x_n$. Take limits.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Integration of $\int\frac{1}{x^{4}+1}\mathrm dx$ I don't know how to integrate $\displaystyle \int\frac{1}{x^{4}+1}\mathrm dx$. Do I have to use trigonometric substitution?
| I think you can do it this way.
\begin{align*}
\int \frac{1}{x^4 +1} \, dx & = \frac{1}{2} \int\frac{2}{1+x^{4}} \, dx \\
&= \frac{1}{2} \int\frac{(1-x^{2}) + (1+x^{2})}{1+x^{4}} \, dx \\
&= \frac{1}{2} \int \frac{1-x^2}{1+x^{4}} \, dx + \frac{1}{2} \int \frac{1+x^{2}}{1+x^{4}} \, dx \\
&= -\frac{1}{2} \int \frac{1-\frac{1}{x^2}}{\left(x+\frac{1}{x}\right)^{2} - 2} \, dx + \text{same trick}
\end{align*}
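Carrying the trick to the end (my addition, with $u = x + \frac{1}{x}$ and $v = x - \frac{1}{x}$ in the two integrals respectively):
$$\int \frac{dx}{x^{4}+1} = -\frac{1}{2}\int\frac{du}{u^{2}-2} + \frac{1}{2}\int\frac{dv}{v^{2}+2} = \frac{1}{4\sqrt{2}}\ln\left|\frac{u+\sqrt{2}}{u-\sqrt{2}}\right| + \frac{1}{2\sqrt{2}}\arctan\frac{v}{\sqrt{2}} + C.$$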
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 20,
"answer_id": 12
} |
Apply Cauchy-Riemann equations on $f(z)=z+|z|$? I am trying to check whether the function $f(z)=z+|z|$ is analytic by using the Cauchy-Riemann equations.
I made
$z = x +jy$
and therefore
$$f(z)= (x + jy) + \sqrt{x^2 + y^2}$$
put into $f(z) = u+ jv$ form:
$$f(z)= x + \sqrt{x^2 + y^2} + jy$$
where
$u = x + \sqrt{x^2 + y^2}$
and that
$v = y$
Now I need to apply the Cauchy-Riemann equation, but don't know how would I go about doing that.
Any help would be much appreciated.
| In order for your function to be analytic, it must satisfy the Cauchy-Riemann equations (right? it's good to think about why this is true). So, what are the equations?
Well, $\dfrac{\partial u}{\partial x} = \dfrac{\partial v}{\partial y}$.
Does this hold? Or you could consider $\dfrac{\partial u}{\partial y} = -\dfrac{\partial v}{\partial x}$.
If either of these equations do not hold, then the function is not analytic.
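Carrying this out for the given $u$ and $v$ (my addition): with $r=\sqrt{x^2+y^2}$,
$$\frac{\partial u}{\partial x} = 1 + \frac{x}{r}, \quad \frac{\partial v}{\partial y} = 1, \qquad \frac{\partial u}{\partial y} = \frac{y}{r}, \quad -\frac{\partial v}{\partial x} = 0,$$
so the equations force $x=0$ and $y=0$ simultaneously, and at the origin $|z|$ is not differentiable; hence $f$ is analytic nowhere.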
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Evaluating $\lim\limits_{n\to\infty} e^{-n} \sum\limits_{k=0}^{n} \frac{n^k}{k!}$ I'm supposed to calculate:
$$\lim_{n\to\infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!}$$
By using WolframAlpha, I might guess that the limit is $\frac{1}{2}$, which is a pretty interesting and nice result. I wonder in which ways we may approach it.
| Edited. I justified the application of the dominated convergence theorem.
By a simple calculation,
$$ \begin{align*}
e^{-n}\sum_{k=0}^{n} \frac{n^k}{k!}
&= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k (n-k)! \\
(1) \cdots \quad &= \frac{e^{-n}}{n!} \sum_{k=0}^{n}\binom{n}{k} n^k \int_{0}^{\infty} t^{n-k}e^{-t} \, dt\\
&= \frac{e^{-n}}{n!} \int_{0}^{\infty} (n+t)^{n}e^{-t} \, dt \\
(2) \cdots \quad &= \frac{1}{n!} \int_{n}^{\infty} t^{n}e^{-t} \, dt \\
&= 1 - \frac{1}{n!} \int_{0}^{n} t^{n}e^{-t} \, dt \\
(3) \cdots \quad &= 1 - \frac{\sqrt{n} (n/e)^n}{n!} \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du.
\end{align*}$$
We remark that
- In $\text{(1)}$, we utilized the famous formula $n! = \int_{0}^{\infty} t^n e^{-t} \, dt$.
- In $\text{(2)}$, the substitution $t + n \mapsto t$ is used.
- In $\text{(3)}$, the substitution $t = n - \sqrt{n}u$ is used.
Then in view of the Stirling's formula, it suffices to show that
$$\int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du \xrightarrow{n\to\infty} \sqrt{\frac{\pi}{2}}.$$
The idea is to introduce the function
$$ g_n (u) = \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \mathbf{1}_{(0, \sqrt{n})}(u) $$
and apply pointwise limit to the integrand as $n \to \infty$. This is justified once we find a dominating function for the sequence $(g_n)$. But notice that if $0 < u < \sqrt{n}$, then
$$ \log g_n (u)
= n \log \left(1 - \frac{u}{\sqrt{n}} \right) + \sqrt{n} u
= -\frac{u^2}{2} - \frac{u^3}{3\sqrt{n}} - \frac{u^4}{4n} - \cdots \leq -\frac{u^2}{2}. $$
From this we have $g_n (u) \leq e^{-u^2 /2}$ for all $n$ and $g_n (u) \to e^{-u^2 / 2}$ as $n \to \infty$. Therefore by dominated convergence theorem and Gaussian integral,
$$ \int_{0}^{\sqrt{n}} \left(1 - \frac{u}{\sqrt{n}} \right)^{n}e^{\sqrt{n}u} \, du = \int_{0}^{\infty} g_n (u) \, du \xrightarrow{n\to\infty} \int_{0}^{\infty} e^{-u^2/2} \, du = \sqrt{\frac{\pi}{2}}. $$
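A probabilistic aside (my addition): the quantity $e^{-n}\sum_{k=0}^{n} n^k/k!$ is exactly $P(X_n \le n)$ for $X_n \sim \operatorname{Poisson}(n)$, so the limit $1/2$ is the central limit theorem evaluated at the mean. A quick check, assuming scipy is available:

```python
from scipy.stats import poisson

for n in [10, 100, 10000, 1000000]:
    print(n, poisson.cdf(n, n))  # slowly approaches 0.5
```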
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "257",
"answer_count": 9,
"answer_id": 7
} |
$k$th power of ideal of germs We denote the set of germs at $m$ by $\bar{F_m}$. A germ $f$ has a well defined value $f(m)$ at $m$, namely the value at $m$ of any representative of the germ. Let $F_m\subseteq \bar{F_m}$ be the set of germs which vanish at $m$. Then $F_m$ is an ideal of $\bar{F_m}$; let $F_m^k$ denote its $k$th power. Could anyone tell me what the elements of the ideal $F_m^k$ look like? They are said to be all finite linear combinations of $k$-fold products of elements of $F_m$, but I don't get this. It is also said that these form a chain $\bar{F_m}\supsetneq F_m\supsetneq F_m^2\supsetneq\dots$
| If $x_1,\ldots,x_n$ are local coordinates at the point $m$, then any smooth germ at $m$ has an associated Taylor series in the coordinates $x_i$. The power $F_m^k$ is precisely the set of germs whose degree $k-1$ Taylor polynomial vanishes, i.e. whose Taylor series has no non-zero terms of degree $< k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Well-definition of multiplicity of a root of a polynomial over a ring. Let $R$ be a commutative ring with identity, let $\displaystyle f=\sum_{k=0}^{n}a_k x^k \in R[x]$ and $r\in R$. If $f=(x-r)^m g$, $m\in\mathbb{N}$ and $g\in R[x]$ with $g(r)\neq 0$, then the root $r$ is said to have *multiplicity* $m$. If the multiplicity is $1$ the root is called simple.
How to prove that the above definition of the multiplicity of a root is well defined?
| More generally, for $\rm\!\ c\in R\!\ $ any ring, every $\rm\!\ r\ne 0\,$ may be written uniquely in the form $\rm\!\ r = c^n\,\! b,\,$ where $\rm\,c\nmid b,\,$ assuming $\rm\,c\,$ is cancellable, and only $0\,$ is divisible by arbitrarily high powers of $\rm\,c.\,$ Indeed, by hypothesis there exists a largest natural $\rm\,n\,$ such that $\rm\,c^n\,|\,r,\,$ hence $\rm\,r = c^n\, b,\ c\nmid b.\,$ Suppose $\rm\,r = c^k\:\! d,\ c\nmid d.\,$ If $\rm\,k < n,\,$ then cancelling $\rm\,c^k\,$ in $\rm\,c^n\:\! b = c^k\:\!d\,$ yields $\rm\,c^{n-k}\:\!b = d,\,$ so $\rm\,c\,|\,d,\,$ contra hypothesis. Thus $\rm\,k = n,\,$ so cancelling $\rm\,c^k\,$ yields $\rm\,b = d,\,$ thus uniqueness. $\bf\small QED$
Your special case follows by applying the above to the cancellable element $\, {\rm c} = x-r\in R[x],\,$ which clearly satisfies the bounded divisibility hypothesis: $\,(x\!-\!r)^n\,|\,f\ne0\,\Rightarrow\, n\le \deg\ f.$
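For illustration, here is a minimal Python sketch of the procedure implicit in the definition: repeatedly divide by $x-r$ (synthetic division) until a nonzero remainder, which is the value $g(r)$, appears; the uniqueness proved above is exactly what guarantees that the count $m$ does not depend on how the divisions are performed. The function names here are made up for the example:

```python
def divide_by_linear(cs, r):
    # synthetic division of f by (x - r); cs lists coefficients of f,
    # highest degree first; returns (quotient coefficients, remainder f(r))
    q = [cs[0]]
    for a in cs[1:]:
        q.append(a + r * q[-1])
    return q[:-1], q[-1]

def multiplicity(cs, r):
    # largest m with (x - r)^m dividing f; assumes f is not the zero polynomial
    m = 0
    q, rem = divide_by_linear(cs, r)
    while rem == 0:
        m, cs = m + 1, q
        q, rem = divide_by_linear(cs, r)
    return m

# f = (x - 2)^3 (x^2 + 1) = x^5 - 6x^4 + 13x^3 - 14x^2 + 12x - 8
print(multiplicity([1, -6, 13, -14, 12, -8], 2))   # 3
```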
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Cutting cake into 5 equal pieces
If a cake is cut into $5$ equal pieces, each piece would be $80$ grams
heavier than when the cake is cut into $7$ equal pieces. How heavy is
the cake?
How would I solve this problem? Do I have to try to find an algebraic expression for this? $5x = 7y + 400$?
| The first step is to turn the word problem into an equation; one-fifth of the cake is $80$ grams heavier than one-seventh of the cake, so one-fifth of the cake equals one-seventh of the cake plus 80. "The cake" (specifically its mass) is $x$, and we can work from there:
$$\dfrac{x}{5} = \dfrac{x}{7}+80$$
$$\dfrac{x}{5} - \dfrac{x}{7} = 80$$
Here comes the clever bit; multiply each fraction by a form of one that will give both fractions the same denominator:
$$\dfrac{7}{7}\cdot \dfrac{x}{5} - \dfrac{5}{5}\cdot\dfrac{x}{7} = 80$$
$$\dfrac{7x}{35} - \dfrac{5x}{35} = 80$$
$$\dfrac{2x}{35} = 80$$
$$2x = 2800$$
$$x = 1400$$
You can check your answer by plugging it into the original equation; if the two sides are indeed equal the answer is correct:
$$\dfrac{1400}{5} = \dfrac{1400}{7} + 80$$
$$280 = 200 + 80$$
$$\ \ \ \ \ \ \ \ \ \ \ \ 280 = 280 \ \ \text{<-- yay!}$$
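For what it's worth, a computer algebra system reproduces the same answer in one line; this is just a check, using sympy:

```python
from sympy import symbols, solve, Eq

x = symbols('x')
print(solve(Eq(x/5, x/7 + 80), x))   # [1400]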
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 9,
"answer_id": 8
} |
number prime to $bc$ with system of congruences Can you please help me to understand why all numbers $x$ prime to $bc$ are all the solutions of this system?
$$\begin{align*}
x&\equiv k\pmod{b}\\
x&\equiv t\pmod{c}
\end{align*}$$
Here $k$ is prime to $b$, and $t$ is prime to $c$.
| Hint $\rm\quad \begin{eqnarray} x\equiv k\,\ (mod\ b)&\Rightarrow&\rm\:(x,b) = (k,b) = 1 \\
\rm x\equiv t\,\,\ (mod\ c)\,&\Rightarrow&\rm\:(x,c) =\, (t,c) = 1\end{eqnarray} \Bigg\}\ \Rightarrow\ (x,bc) = 1\ \ by\ Euclid's\ Lemma$
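A brute-force check of the claim for one concrete (and arbitrarily chosen) pair $b, c$: the residues mod $bc$ that reduce to something coprime to $b$ and to something coprime to $c$ are exactly the residues coprime to $bc$:

```python
from math import gcd

b, c = 9, 10   # arbitrary moduli for the experiment
lhs = {x for x in range(b * c)
       if gcd(x % b, b) == 1 and gcd(x % c, c) == 1}
rhs = {x for x in range(b * c) if gcd(x, b * c) == 1}
print(lhs == rhs)   # True
```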
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Probably simple factoring problem I came across this in a friend's 12th grade math homework and couldn't solve it. I want to factor the following trinomial:
$$3x^2 -8x + 1.$$
How to solve this is far from immediately clear to me, but it is surely very easy. How is it done?
| A standard way of factorizing, when it is hard to guess the factors, is by completing the square.
\begin{align}
3x^2 - 8x + 1 & = 3 \left(x^2 - \dfrac83x + \dfrac13 \right)\\
& (\text{Pull out the coefficient of $x^2$})\\
& = 3 \left(x^2 - 2 \cdot \dfrac43 \cdot x + \dfrac13 \right)\\
& (\text{Multiply and divide by $2$ the coefficient of $x$})\\
& = 3 \left(x^2 - 2 \cdot \dfrac43 \cdot x + \left(\dfrac43 \right)^2 - \left(\dfrac43 \right)^2 + \dfrac13 \right)\\
& (\text{Add and subtract the square of half the coefficient of $x$})\\
& = 3 \left(\left(x - \dfrac43 \right)^2 - \left(\dfrac43 \right)^2 + \dfrac13 \right)\\
& (\text{Complete the square})\\
& = 3 \left(\left(x - \dfrac43 \right)^2 - \dfrac{16}9 + \dfrac13 \right)\\
& = 3 \left(\left(x - \dfrac43 \right)^2 - \dfrac{16}9 + \dfrac39 \right)\\
& = 3 \left(\left(x - \dfrac43 \right)^2 - \dfrac{13}9\right)\\
& = 3 \left(\left(x - \dfrac43 \right)^2 - \left(\dfrac{\sqrt{13}}3 \right)^2\right)\\
& = 3 \left(x - \dfrac43 + \dfrac{\sqrt{13}}3\right) \left(x - \dfrac43 - \dfrac{\sqrt{13}}3\right)\\
& (\text{Use $a^2 - b^2 = (a+b)(a-b)$ to factorize})
\end{align}
The same idea works in general.
\begin{align}
ax^2 + bx + c & = a \left( x^2 + \dfrac{b}ax + \dfrac{c}a\right)\\
& = a \left( x^2 + 2 \cdot \dfrac{b}{2a} \cdot x + \dfrac{c}a\right)\\
& = a \left( x^2 + 2 \cdot \dfrac{b}{2a} \cdot x + \left( \dfrac{b}{2a}\right)^2 - \left( \dfrac{b}{2a}\right)^2 + \dfrac{c}a\right)\\
& = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \left( \dfrac{b}{2a}\right)^2 + \dfrac{c}a\right)\\
& = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \dfrac{b^2}{4a^2} + \dfrac{c}a\right)\\
& = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \left(\dfrac{b^2-4ac}{4a^2} \right)\right)\\
& = a \left( \left( x + \dfrac{b}{2a}\right)^2 - \left(\dfrac{\sqrt{b^2-4ac}}{2a} \right)^2\right)\\
& = a \left( x + \dfrac{b}{2a} + \dfrac{\sqrt{b^2-4ac}}{2a}\right) \left( x + \dfrac{b}{2a} - \dfrac{\sqrt{b^2-4ac}}{2a}\right)\\
\end{align}
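One can let sympy confirm that the factored form equals the original trinomial (a sanity check of the arithmetic above):

```python
from sympy import symbols, expand, sqrt, Rational

x = symbols('x')
lhs = 3*x**2 - 8*x + 1
rhs = 3*(x - Rational(4, 3) + sqrt(13)/3)*(x - Rational(4, 3) - sqrt(13)/3)
print(expand(rhs - lhs))   # 0
```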
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
A question about harmonic form of trigonometric functions. The question is:
i) Find the maximum and minimum values.
ii) the smallest non-negative value of x for which this occurs.
$12\cos(a)-9\sin(a)$
I think it should be changed into the form $R\cos(a+x)$, which gives $15\cos(a+36.87^\circ)$, and I get the answers i) $+15$ / $-15$; ii) $323.13$ ($360-36.87$) / $143.13$ ($180-36.87$).
But the answer given by the book is "i) $15$, $-15$; ii) $306.87$, $143.13$".
I'm really confused by that answer. Am I wrong?
BTW, I'm self-studying A-level further pure mathematics, but the book (written by Brian and Mark Gaulter, published by Oxford University Press) I have seems not very helpful,
so I truly hope someone can recommend some books/websites for self-learning.
| We review the (correct) procedure that you went through. We have $12^2+9^2=15^2$, so we rewrite our expression as
$$15\left(\frac{12}{15}\cos a -\frac{9}{15}\sin a\right).$$
Now if $b$ is any angle whose cosine is $\frac{12}{15}$ and whose sine is $\frac{9}{15}$, we can rewrite our expression as
$$15\left(\cos a \cos b -\sin a \sin b\right),$$
that is, as $15\cos(a+b)$.
The maximum value of the cosine function is $1$, and the minimum is $-1$. So the maximum and minimum of our expression are $15$ and $-15$ respectively. The only remaining problem is to decide on the appropriate values of $a$.
For the maximum, $a+b$ should be (in degrees) one of $0$, $360$, $-360$, $720$, $-720$, and so on. The angle $b$ is about $36.87$ plus or minus a multiple of $360$. So we can get the desired kind of sum $a+b$ by choosing $a\approx 360-36.87$, about $323.13$.
It is not hard to do a partial verification our answer by calculator. If you compute $12\cos a -9\sin a$ for the above value of $a$, you will get something quite close to $15$. The book's value gives something smaller, roughly $14.4$. The book's value is mistaken. It was obtained by pressing the wrong button on the calculator, $\sin^{-1}$ instead of $\cos^{-1}$.
For the minimum, we want $a+b$ to be $180$ plus or minus a multiple of $360$. Thus $a$ is approximately $180-36.87$.
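A quick numerical check of the candidate answers (plain Python, angles in degrees) confirms the book's misprint:

```python
import math

def f(deg):
    a = math.radians(deg)
    return 12*math.cos(a) - 9*math.sin(a)

print(f(323.13))   # ~15.0: our value attains the maximum
print(f(306.87))   # ~14.4: the book's value falls short
print(f(143.13))   # ~-15.0: the minimum, where both agree
```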
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Compute: $\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$ I try to solve the following sum:
$$\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$$
I'm very curious about the possible approaching ways that lead us to solve it. I'm not experienced with these sums, and any hint, suggestion is very welcome. Thanks.
| Here's another approach.
It depends primarily on the properties of telescoping series, partial fraction expansion, and the following identity for the $m$th harmonic number
$$\begin{eqnarray*}
\sum_{k=1}^\infty \frac{1}{k(k+m)}
&=& \frac{1}{m}\sum_{k=1}^\infty \left(\frac{1}{k} - \frac{1}{k+m}\right) \\
&=& \frac{1}{m}\sum_{k=1}^m \frac{1}{k} \\
&=& \frac{H_m}{m},
\end{eqnarray*}$$
where $m=1,2,\ldots$.
Then,
$$\begin{eqnarray*}
\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}
&=& \sum_{k=1}^{\infty} \frac{1}{k}
\sum_{n=1}^{\infty} \frac{1}{n(n+k+2)} \\
&=& \sum_{k=1}^{\infty} \frac{1}{k} \frac{H_{k+2}}{k+2} \\
&=& \frac{1}{2} \sum_{k=1}^{\infty}
\left( \frac{H_{k+2}}{k} - \frac{H_{k+2}}{k+2} \right) \\
&=& \frac{1}{2} \sum_{k=1}^{\infty}
\left( \frac{H_k +\frac{1}{k+1}+\frac{1}{k+2}}{k} - \frac{H_{k+2}}{k+2} \right) \\
&=& \frac{1}{2} \sum_{k=1}^{\infty}
\left( \frac{H_k}{k} - \frac{H_{k+2}}{k+2} \right)
+ \frac{1}{2} \sum_{k=1}^{\infty}
\left(\frac{1}{k(k+1)} + \frac{1}{k(k+2)}\right) \\
&=& \frac{1}{2}\left(H_1 + \frac{H_2}{2}\right)
+ \frac{1}{2}\left(H_1 + \frac{H_2}{2}\right) \\
&=& \frac{7}{4}.
\end{eqnarray*}$$
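A numerical check: truncating both sums at $N=1000$ (an arbitrary cutoff) already lands close to $7/4$; the remaining gap is the slowly decaying tail:

```python
N = 1000
total = sum(1.0 / (k * n * (k + n + 2))     # k^2 n + 2nk + n^2 k = k n (k + n + 2)
            for k in range(1, N + 1) for n in range(1, N + 1))
print(total, 7/4)   # ~1.74 vs 1.75; the gap is the truncated tail
```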
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
what is the use of derivatives Can anyone explain to me what the use of derivatives is in real life? When and where do we use derivatives? I know they can be used to find rates of change, but why? My thinking was that in real life most of the things we deal with are not linear functions, and derivatives help turn real-life functions into linear ones,
e.g. converting a parabola into a linear function: $x^{2}\rightarrow 2x$.
But then I find this: the derivative of $\sin{x}$ is $\cos{x}$.
Why can't we use $\sin{x}$ itself to solve the equation? What's the purpose of using its derivative $\cos{x}$?
Please forgive me if I have asked a stupid question; I want to improve my fundamentals in calculus.
| There can also be economic interpretations of derivatives. For example, let's assume that there is a function which measures the utility from consumption,
$U(C)$, where $C$ is the consumption. It is straightforward to say that your utility increases with consumption. This means that when you increase your consumption marginally by one unit, you will have an increase in your utility, which you can find by taking the derivative of this function. (It is not a partial derivative, because the only argument of the function is consumption.)
So you will have ;
$\frac{dU(C)}{dC} > 0 $
Intuitively, the increase in utility becomes smaller the more you consume. Take the simplest example: you have one coke and you drink it. The second coke, drunk just after, still provides utility, but less of it. So your utility grows ever more slowly as consumption rises; this can be formulated with the second derivative of the utility function as follows:
$\frac{d^{2}U(C)}{dC^{2}} < 0$
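A tiny sympy illustration with one concrete concave utility function (the choice $U=\sqrt{C}$ is just for the example):

```python
from sympy import symbols, sqrt, diff

C = symbols('C', positive=True)
U = sqrt(C)                 # an example utility function
print(diff(U, C))           # 1/(2*sqrt(C)) > 0: marginal utility is positive
print(diff(U, C, 2))        # -1/(4*C**(3/2)) < 0: diminishing marginal utility
```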
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 4
} |
Tensor product of sets The cartesian product of two sets $A$ and $B$ can be seen as a tensor product.
Are there examples for the tensor product of two sets $A$ and $B$ other than the usual cartesian product ?
The context is the following: assume one has a set-valued presheaf $F$ on a monoidal category, knowing $F(A)$ and $F(B)$ how does one define $F(A \otimes B)$ ?
| Simply being in a monoidal category is a rather liberal condition on the tensor product; it tells you very little about what the tensor product actually looks like.
Here is a (perhaps slightly contrived?) example:
Let $C$ be the category of vector spaces over a finite field $\mathbb F_p$ with linear transformations. The vector space tensor product makes this into a monoidal category with $\mathbb F_p$ itself as the unit. $C^{op}$ is then also a monoidal category, and the ordinary forgetful functor is a Set-valued presheaf on $C^{op}$.
However, $F(A\otimes B)$ cannot be the cartesian product $F(A)\times F(B)$, because $F(A)\times F(B)$ has the wrong cardinality when $A$ and $B$ are finite.
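Concretely, counting elements shows the mismatch (the specific numbers are just an example): for $\mathbb F_p$-spaces of dimensions $m$ and $n$, the tensor product has $p^{mn}$ elements while the cartesian product of underlying sets has $p^{m+n}$:

```python
p, m, n = 3, 2, 3
print(p**(m * n), p**m * p**n)   # 729 vs 243: different whenever mn != m + n
```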
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $a$ in $R$ is prime, then $(a+P)$ is prime in $R/P$. Let $R$ be a UFD and $P$ a prime ideal. Here we are defining a UFD with primes and not irreducibles.
Is the following true and what is the justification?
If $a$ in $R$ is prime, then $(a+P)$ is prime in $R/P$.
| I think that's wrong. If $a \in P$ holds, then $a + P = 0 + P \in R/P$, and therefore $a+P$ is not prime (prime elements are nonzero by convention).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Intersection of compositum of fields with another field Let $F_1$, $F_2$ and $K$ be fields of characteristic $0$ such that $F_1 \cap K = F_2 \cap K = M$, the extensions $F_i / (F_i \cap K)$ are Galois, and $[F_1 \cap F_2 : M ]$ is finite. Then is $[F_1 F_2 \cap K : M]$ finite?
| No. First, the extension $\mathbb{C}(z)/\mathbb{C}$ is Galois for any $z$ that is transcendental over $\mathbb{C}$: if $p(z)$ is a nonconstant polynomial, and $\alpha$ is a root and $\beta$ is a nonroot, then the automorphism $\sigma\colon z\mapsto z+\alpha-\beta$ maps $z-\alpha$ to $z-\beta$, so that $p(z)$ has a factor of $z-\alpha$ and no factor of $z-\beta$ but $\sigma p$ has a factor of $z-\beta$, so $p\neq\sigma p$. Thus, no polynomial is fixed by all automorphisms. If $\frac{p(z)}{q(z)}$ is a rational function with $q(z)$ nonconstant, then a similar argument shows that we can find a $\sigma$ that "moves the zeros" of $q$, so that $p(z)/q(z)$ will have a pole at $\alpha$ but no pole at $\beta$, while $\sigma p/\sigma q$ has a pole at $\beta$. Thus, the fixed field of $\mathrm{Aut}(\mathbb{C}(z)/\mathbb{C})$ is $\mathbb{C}$, hence the extension is Galois.
Now, let $x$ and $y$ be transcendental over $\mathbb{C}$. Take $F_1=\mathbb{C}(x)$, $F_2=\mathbb{C}(y)$, $K=\mathbb{C}(xy)$, all subfields of $\mathbb{C}(x,y)$. Then $M=\mathbb{C}$, so $F_i/M$ is Galois; $F_1\cap F_2=\mathbb{C}$, so $[F_1\cap F_2\colon M]=1$. But $F_1F_2\cap K = \mathbb{C}(x,y)\cap \mathbb{C}(xy) = \mathbb{C}(xy)$, and $\mathbb{C}(xy)$ is of infinite degree over $M=\mathbb{C}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/160969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tangent line of parametric curve I have not seen a problem like this so I have no idea what to do.
Find an equation of the tangent to the curve at the given point by two methods, without elimiating parameter and with.
$$x = 1 + \ln t,\;\; y = t^2 + 2;\;\; (1, 3)$$
I know that $$\dfrac{dy}{dx} = \dfrac{\; 2t\; }{\dfrac{1}{t}}$$
But this give a very wrong answer. I am not sure what a parameter is or how to eliminate it.
| One way to do this is by considering the parametric form of the curve:
$(x,y)(t) = (1 + \log t, t^2 + 2)$, so $(x,y)'(t) = (\frac{1}{t}, 2t)$
We need to find the value of $t$ when $(x,y)(t) = (1 + \log t, t^2 + 2) = (1,3)$, from where we deduce $t=1$. The tangent line at $(1,3)$ has direction vector $(x,y)'(1) = (1,2)$, and since it passes through the point $(1,3)$, its parametric equation is given by $s \mapsto (1,2)s + (1,3)$.
Another way (I suppose this is eliminating the parameter) would be to express $y$ in terms of $x$ (this can't be done for any curve, but in this case it is possible). We solve for $t$: $x = 1 + \log t \Rightarrow t = e^{x-1}$, so $y = t^2 + 2 = (e^{x-1})^2 + 2 = e^{2x-2} + 2$. The tangent line has slope $\frac{dy}{dx}=y_x$ evaluated at $1$: we have $y_x=2e^{2x-2}$ and $y_x(1)=2e^0 = 2$, so the line has equation $y=2x +b$. Also, it passes through the point $(1,3)$, so we can solve for $b$: $3 = 2 \cdot 1 + b \Rightarrow b = 1$. Then, the equation of the tangent line is $y = 2x + 1$.
Note that $s \mapsto (1,2)s + (1,3)$ and $y = 2x + 1$ are the same line, expressed in different forms.
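The whole computation can be checked with sympy; the variable names below are arbitrary:

```python
from sympy import symbols, log, diff, solve

t, x = symbols('t x')
X, Y = 1 + log(t), t**2 + 2

print(solve(Y - 3, t))     # [-1, 1]; X = 1 forces t = 1 (the log needs t > 0)
slope = (diff(Y, t) / diff(X, t)).subs(t, 1)
print(slope)               # 2, so the tangent line is y = 2x + 1
```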
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
A $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ Could anyone help me show that a $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ cannot be injective?
| If we remove three points from the domain $\mathbb{R}^2$, it remains connected, so its image under a continuous injective function is connected. In $\mathbb{R}$ the connected sets are intervals, and an interval has at most two endpoints, so removing three points from an interval disconnects it; by injectivity, the image of the punctured plane is exactly the original image minus three points. So there cannot exist a continuous injective function from $\mathbb{R}^2$ to $\mathbb{R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Limit of exponentials Why is $n^n (n+m)^{-{\left(n+m\over 2\right)}}(n-m)^{-{\left(n-m\over 2\right)}}$ asymptotically equal to $\exp\left(-{m^2\over 2n}\right)$ as $n,m\to \infty$?
| Let $m = n x$.
Take the logarithm:
$$
n \log(n) - n \frac{1+x}{2} \left(\log n + \log\left(1+x\right) \right) - n \frac{1-x}{2} \left( \log n + \log\left(1-x\right) \right)
$$
Notice that all the terms with $\log(n)$ cancel out, so we are left with
$$
-\frac{n}{2} \left( (1+x) \log(1+x) + (1-x) \log(1-x) \right)
$$
It seems that you need to assume that $x$ is small here, meaning that $ m \ll n$. Then, using the Taylor series expansion of the logarithm:
$$
(1+x) \log(1+x) + (1-x) \log(1-x) = (1+x) \left( x - \frac{x^2}{2} + \mathcal{o}(x^2) \right) + (1-x) \left(-x - \frac{x^2}{2} + \mathcal{o}(x^2)\right) = x^2 + \mathcal{o}(x^3)
$$
Hence the original expression, asymptotically, equals
$$
\exp\left( -\frac{n}{2} x^2 + \mathcal{o}(n x^3)\right) = \exp\left(- \frac{m^2}{2n} + \mathcal{o}\left(\frac{m^3}{n^2}\right) \right)
$$
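A numerical spot check (with $m \ll n^{2/3}$ so that the error term stays small; the sample values are arbitrary):

```python
import math

def lhs(n, m):
    # n^n (n+m)^{-(n+m)/2} (n-m)^{-(n-m)/2}, computed via logarithms
    L = n*math.log(n) - (n+m)/2*math.log(n+m) - (n-m)/2*math.log(n-m)
    return math.exp(L)

for n, m in [(10**4, 10), (10**6, 100), (10**8, 1000)]:
    print(lhs(n, m), math.exp(-m**2/(2*n)))   # the two columns agree closely
```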
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Looking for some simple topology spaces such that $nw(X)\le\omega$ and $|X|>2^\omega$ I believe there are topological spaces whose network weight is at most $\omega$ while their cardinality is greater than $2^\omega$ (strictly greater, possibly much larger).
*
*Network: a family $N$ of subsets of a topological space $X$ is a network for $X$ if for every point $x \in X$ and any nbhd $U$ of $x$ there exists an $M \in N$ such that $x \in M \subset U$.
Here I want to look for some simple topological spaces which are familiar to us. However, a slightly more complex topological space is also welcome!
Thanks for any help:)
| If $X$ is $T_0$ and $nw(X)=\omega$ (network weight) then $|X|\leq 2^\omega$. Let $\mathcal N$ be a countable network. For each $x\in X$ consider $N_x=\{N\in\mathcal N: x\in N\}$. Since $X$ is $T_0$ it follows that $N_x\ne N_y$ for $x\ne y$. Thus, $|X|\leq |P(\mathcal N)|=2^\omega$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does an injective endomorphism of a finitely-generated free R-module have nonzero determinant? Alternately, let $M$ be an $n \times n$ matrix with entries in a commutative ring $R$. If $M$ has trivial kernel, is it true that $\det(M) \neq 0$?
This math.SE question deals with the case that $R$ is a polynomial ring over a field. There it was observed that there is a straightforward proof when $R$ is an integral domain by passing to the fraction field.
In the general case I have neither a proof nor a counterexample. Here are three general observations about properties that a counterexample $M$ (trivial kernel but zero determinant) must satisfy. First, recall that the adjugate $\text{adj}(M)$ of a matrix $M$ is a matrix whose entries are integer polynomials in those of $M$ and which satisfies
$$M \,\text{adj}(M) = \det(M)\, I.$$
If $\det(M) = 0$ and $\text{adj}(M) \neq 0$, then some column of $\text{adj}(M)$ lies in the kernel of $M$. Thus:
If $M$ is a counterexample, then $\text{adj}(M) = 0$.
When $n = 2$, we have $\text{adj}(M) = 0 \Rightarrow M = 0$, so this settles the $2 \times 2$ case.
Second observation: recall that by Cayley-Hamilton $p(M) = 0$ where $p$ is the characteristic polynomial of $M$. Write this as
$$M^k q(M) = 0$$
where $q$ has nonzero constant term. If $q(M) \neq 0$, then there exists some $v \in R^n$ such that $w = q(M) v \neq 0$, hence $M^k w = 0$ and one of the vectors $w, Mw, M^2 w,\dots, M^{k-1} w$ necessarily lies in the kernel of $M$. Thus if $M$ is a counterexample we must have $q(M) = 0$ where $q$ has nonzero constant term.
Now for every prime ideal $P$ of $R$, consider the induced action of $M$ on $F^n$, where $F = \overline{ \text{Frac}(R/P) }$. Then $q(\lambda) = 0$ for every eigenvalue $\lambda$ of $M$. Since $\det(M) = 0$, one of these eigenvalues over $F$ is $0$, hence it follows that $q(0) \in P$. Since this is true for all prime ideals, $q(0)$ lies in the intersection of all the prime ideals of $R$, hence
If $M$ is a counterexample and $q$ is defined as above, then $q(0)$ is nilpotent.
This settles the question for reduced rings. Now, $\text{det}(M) = 0$ implies that the constant term of $p$ is equal to zero, and $\text{adj}(M) = 0$ implies that the linear term of $p$ is equal to zero. It follows that if $M$ is a counterexample, then $M^2 \mid p(M)$. When $n = 3$, this implies that
$$q(M) = M - \lambda$$
where $\lambda$ is nilpotent, so $M$ is nilpotent and thus must have nontrivial kernel. So this settles the $3 \times 3$ case.
Third observation: if $M$ is a counterexample, then it is a counterexample over the subring of $R$ generated by the entries of $M$, so
We may assume WLOG that $R$ is finitely-generated over $\mathbb{Z}$.
| Lam's Exercises in modules and rings includes the following result of McCoy: a homogeneous system $Mx=0$ over a commutative ring has a nontrivial solution if and only if $\det M$ is a zero-divisor,
which tells us that your determinant is not a zero-divisor.
The paper where McCoy does that is [Remarks on divisors of zero, MAA Monthly 49 (1942), 286--295]. If you have JStor access, this is at http://www.jstor.org/stable/2303094
There is a pretty corollary there: a square matrix is a zero-divisor in the ring of matrices over a commutative ring iff its determinant is a zero divisor.
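A toy instance of that corollary over $\mathbb Z/4\mathbb Z$ (chosen just to make the zero-divisor visible): the matrix below has determinant $2$, a zero-divisor mod $4$, and a brute-force search finds a nonzero kernel vector:

```python
mod = 4
M = [[2, 0],
     [0, 1]]
det = (M[0][0]*M[1][1] - M[0][1]*M[1][0]) % mod
kernel = [(a, b) for a in range(mod) for b in range(mod)
          if (a, b) != (0, 0)
          and all((M[i][0]*a + M[i][1]*b) % mod == 0 for i in range(2))]
print(det, kernel)   # 2 [(2, 0)]
```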
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 4,
"answer_id": 0
} |
What is the total number of combinations of 5 items together when there are no duplicates? I have 5 categories - A, B, C, D & E.
I want to basically create groups that reflect every single combination of these categories without there being duplicates.
So groups would look like this:
*
*A
*B
*C
*D
*E
*A, B
*A, C
*A, D
*A, E
*B, C
*B, D
*B, E
*C, D
.
.
.
etc.
This sounds like something I would use the binomial coefficient $n \choose r$ for, but I am quite fuzzy on calculus and can't remember exactly how to do this.
Any help would be appreciated.
Thanks.
| There are $\binom{5}{1}$ combinations with 1 item, $\binom{5}{2}$ combinations with $2$ items,...
So, you want :
$$\binom{5}{1}+\cdots+\binom{5}{5}=\left(\binom{5}{0}+\cdots+\binom{5}{5}\right)-1=2^5-1=31$$
I used that $$\sum_{k=0}^n\binom{n}{k}=(1+1)^n=2^n$$
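Since the question came from a program, here is the enumeration itself with Python's itertools; the count matches $2^5-1=31$:

```python
from itertools import combinations

items = ["A", "B", "C", "D", "E"]
groups = [c for r in range(1, len(items) + 1)
          for c in combinations(items, r)]
print(len(groups))   # 31
print(groups[:8])    # ('A',), ('B',), ..., ('A', 'B'), ('A', 'C'), ...
```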
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 1
} |
Course for self-study I have basically completed a good deal of Single Variable Calculus from Spivak's Calculus, and since I leave school in May next year, I intend to put in some effort to pick up college mathematics. I am a bit confused as to what to study next. I did buy Herstein's Topics in Algebra.
Question: So can anyone please tell me what I should study and in what order or what constitutes a coherent course of study .I am open to various suggestions!
Thanks in advance.
| Hoffman and Kunze is a great book. If you understand the abstractions in Spivak's book, you should be able to handle it. The problems are great. I have worked many of them.
Go there next.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How can I convert between powers for different number bases? I am writing a program to convert between megabits per second and mebibits per second; a user would enter 1 Mebibit p/s and get 1.05 Megabits p/s as the output.
These are two units of computer data transfer rate. A megabit (an SI decimal unit) is 1,000,000 bits, or 10^6. A mebibit (an IEC binary prefix) is 1,048,576 bits, or 2^20.
A user will specify if they have given a number in mega-or-mebi bits per second. So I need to know, how can I convert between these two powers? If the user inputs "1" and selects "mebibits" as the unit, how can I convert from this base 2 number system to the base 10 number system for "megabits"?
Thank you for reading.
| If you have $x \text{ Mebibits p/s}$, since a Mebibit is $\displaystyle \frac{2^{20}}{10^6} = 1.048576$ Megabits, you have to multiply by $1.048576$, getting $1.048576x \text{ Megabits p/s}$. Likewise, if you have $y\text{ Megabits p/s}$, since a Megabit is $\displaystyle \frac{10^6}{2^{20}} = 0.95367431640625$ Mebibits, you have to multiply by $0.95367431640625$, getting $0.95367431640625y\text{ Mebibits p/s}$. Round up as necessary.
To find these conversion factors, you can see that a Mebibit is $2^{20}$ bits, and a Megabit is $10^6$ bits. Therefore a Mebibit is $\displaystyle 2^{20} \text{ bits} = \frac{2^{20}}{10^6} \text{ Megabits}$, and the other direction is analogous.
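In code this is a single multiplication either way; the function and unit strings below are made-up names for the example:

```python
MEBI, MEGA = 2**20, 10**6

def convert(value, unit):
    # unit names the *input* unit; the result is in the other unit
    if unit == "mebibit":
        return value * MEBI / MEGA    # mebibits -> megabits
    return value * MEGA / MEBI        # megabits -> mebibits

print(convert(1, "mebibit"))   # 1.048576
print(convert(1, "megabit"))   # 0.95367431640625
```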
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the distribution of a random variable that is the product of the two normal random variables ? What is the distribution of a random variable that is the product of the two normal random variables ?
Let $X\sim N(\mu_1,\sigma_1), Y\sim N(\mu_2,\sigma_2)$
and $Z=XY$
That is, what is its probability density function, its expected value, and its variance ?
I'm kind of stuck and I can't find a satisfying answer on the web.
If anybody knows the answer, or a reference or link, I would be really thankful...
| For the special case that both Gaussian random variables $X$ and $Y$ have zero mean and unit variance, and are independent, the answer is that $Z=XY$ has the probability density $p_Z(z)={\rm K}_0(|z|)/\pi$. The brute force way to do this is via the transformation theorem:
\begin{align}
p_Z(z)&=\frac{1}{2\pi}\int_{-\infty}^\infty{\rm d}x\int_{-\infty}^\infty{\rm d}y\;{\rm e}^{-(x^2+y^2)/2}\delta(z-xy) \\
&= \frac{1}{\pi}\int_0^\infty\frac{{\rm d}x}{x}{\rm e}^{-(x^2+z^2/x^2)/2}\\
&= \frac{1}{\pi}{\rm K}_0(|z|) \ .
\end{align}
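This density is easy to confirm by simulation (scipy provides $K_0$ as scipy.special.k0); here we compare the empirical probability of $|Z|<1/2$ with the integral of the claimed density, as a check rather than a proof:

```python
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

rng = np.random.default_rng(0)
z = rng.standard_normal(10**6) * rng.standard_normal(10**6)

p_emp = np.mean(np.abs(z) < 0.5)
p_thy, _ = quad(lambda t: 2 * k0(t) / np.pi, 1e-12, 0.5)  # density is even in z
print(p_emp, p_thy)   # the two values agree closely
```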
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 5,
"answer_id": 0
} |
What's an intuitive explanation of the max-flow min-cut theorem? I'm about to read the proof of the max-flow min-cut theorem that helps solve the maximum network flow problem. Could someone please suggest an intuitive way to understand the theorem?
| Imagine a complex pipeline with a common source and common sink. You start to pump the water up, but you can't exceed some maximum flow. Why is that? Because there is some kind of bottleneck, i.e. a subset of pipes that transfer the fluid at their maximum capacity; you can't push more through. This bottleneck will be precisely the minimum cut, i.e. the set of edges that block the flow. Please note that there may be more than one minimum cut. If you find one, then you know the maximum flow; knowing the maximum flow, you know the capacity of the cut.
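If you want to experiment with the theorem rather than prove it, networkx computes both sides; the tiny graph below is an arbitrary example, and equality of the two printed numbers is exactly max-flow min-cut:

```python
import networkx as nx

G = nx.DiGraph()
for u, v, cap in [("s", "a", 3), ("s", "b", 2), ("a", "b", 1),
                  ("a", "t", 2), ("b", "t", 3)]:
    G.add_edge(u, v, capacity=cap)

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, _ = nx.minimum_cut(G, "s", "t")
print(flow_value, cut_value)   # 5 5: both equal the bottleneck capacity
```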
Hope that explains something ;-)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Irreducible polynomial over an algebraically closed field of characteristic distinct from 2 Let $k$ be an algebraically closed field such that $\textrm{char(k)} \neq 2$ and let $n$ be a fixed positive integer greater than $3$
Suppose that $m$ is a positive integer such that $3 \leq m \leq n$.
Is it always true that $f(x_{1},x_{2},\ldots,x_{m})=x_{1}^{2}+x_{2}^{2}+\cdots+x_{m}^{2}$ is irreducible over $k[x_{1},x_{2},\ldots,x_{n}]$?
I think yes. For $m=3$ we need to check that $f(x,y,z)=x^{2}+y^{2}+z^{2}$ is irreducible, yes? can't we use Eisenstein as follows?
Note $y+iz$ divides $y^{2}+z^{2}$ and $y+iz$ is irreducible over $k[y,z]$ and $(y+iz)^{2}$ does not divide $y^{2}+z^{2}$.
Therefore $f(x,y,z)=x^{2}+y^{2}+z^{2}$ is irreducible. Now we induct on $m$. Suppose the result holds for $m$ and let us show
it holds for $m+1$.
So we must look at the polynomial $x_{1}^{2}+\cdots+x_{m}^{2}+x_{m+1}^{2}$. Consider the ring $k[x_{m+1}][x_{1},..,x_{m}]$,
we have a monic polynomial; by hypothesis $x_{1}^{2}+\cdots+x_{m}^{2}$ is irreducible over $k[x_{1},\ldots,x_{m}]$, and $(x_{1}^{2}+\cdots+x_{m}^{2})^{2}$ does not divide $x_{1}^{2}+\cdots+x_{m}^{2}$, so Eisenstein applies again and we are done.
Question(s): Is this OK? In case not, can you please provide a proof?
| Let $A$ be a UFD.
Let $a$ be a non-zero square-free non-unit element of $A$.
Then $X^n - a \in A[X]$ is irreducible by Eisenstein.
$Y^2 + Z^2 = (Y + iZ)(Y - iZ)$ is square-free in $k[Y, Z]$.
Hence $X^2 + Y^2 + Z^2$ is irreducible in $k[X, Y, Z]$ by the above result.
Let $m \gt 2$.
By the induction hypothesis, $X_{1}^{2}+\cdots+X_{m}^{2}$ is irreducible in $k[X_{1},\ldots,X_{m}]$.
Hence $X_{1}^{2}+\cdots+X_{m+1}^{2}$ is irreducible in $k[X_{1},\ldots,X_{m+1}]$ by the above result.
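For small cases, sympy can corroborate this over $\mathbb{Q}(i)$ (a stand-in for the algebraically closed field, enough to exhibit the square-free factorization and the irreducibility):

```python
from sympy import symbols, factor, I

x, y, z = symbols('x y z')
print(factor(y**2 + z**2, extension=I))          # (y - I*z)*(y + I*z)
print(factor(x**2 + y**2 + z**2, extension=I))   # stays unfactored: irreducible
```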
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Algorithm for computing Smith normal form in an algebraic number field of class number 1 Let $K$ be an algebraic number field of class number 1.
Let $\frak{O}$ be the ring of algebraic integers in $K$.
Let $A$ be a nonzero $m\times n$ matrix over $\frak{O}$.
Since $\frak{O}$ is a PID, $A$ has Smith normal form $S$.
I'm looking for an algorithm to compute $S$.
It seems to me that we need to solve Bézout's identity.
If it's too difficult, we may assume K is a quadratic number field of class number 1.
| This is routine and is already implemented in several computer algebra systems, including Sage, Pari and (I think) Magma. (I wrote the Sage version some while back). As you point out, the standard existence proof for Smith form is completely algorithmic once you know how to find a GCD of two elements, which any of the above packages will do for you.
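The Bézout/gcd primitive the question mentions is the only nontrivial ingredient. For the norm-Euclidean cases among the class-number-1 quadratic fields (for instance $\mathbb Z[i]$), it is the classical Euclidean algorithm with nearest-integer division; here is a self-contained sketch, with Gaussian integers encoded as integer pairs (all names are made up for the example):

```python
def gdivmod(a, b):
    # nearest-integer division in Z[i]: returns q, r with a = q*b + r, N(r) < N(b)
    ax, ay = a; bx, by = b
    nb = bx*bx + by*by
    nx, ny = ax*bx + ay*by, ay*bx - ax*by      # a * conj(b)
    qx = (2*nx + nb) // (2*nb)                 # nearest integer to nx/nb
    qy = (2*ny + nb) // (2*nb)
    r = (ax - (qx*bx - qy*by), ay - (qx*by + qy*bx))
    return (qx, qy), r

def ggcd(a, b):
    # Euclidean algorithm in Z[i]; terminates since N(r) < N(b) at each step
    while b != (0, 0):
        a, b = b, gdivmod(a, b)[1]
    return a

print(ggcd((5, 0), (3, 1)))   # (-1, -2), i.e. -1-2i, an associate of 2 - i
```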
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/161950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
When does the vanishing wedge product of two forms require one form to be zero? Let $\alpha$ and $\beta$ be two complex $(1,1)$ forms defined as:
$\alpha = \alpha_{ij} dx^i \wedge d\bar x^j$
$\beta= \beta_{ij} dx^i \wedge d\bar x^j$
Let's say, I know the following:
1) $\alpha \wedge \beta = 0$
2) $\beta \neq 0$
I want to somehow show that the only way to achieve (1) is by forcing $\alpha = 0$.
Are there general known conditions on the $\beta_{ij}$ for this to happen?
The only condition I could think of is if all the $\beta_{ij}$ are the same. However, this is a bit too restrictive. I'm also interested in the above problem when $\beta$ is a $(2,2)$ form.
| \begin{eqnarray}
\alpha\wedge\beta
&=&\sum_{i,j,k,l}\alpha_{ij}\beta_{kl}dx^i\wedge d\bar{x}^j\wedge dx^k\wedge d\bar{x}^l\cr
&=&(\sum_{i<k,j<l}+\sum_{i<k,l<j}+\sum_{k<i,j<l}+\sum_{k<i,l<j})\alpha_{ij}\beta_{kl}dx^i\wedge d\bar{x}^j\wedge dx^k\wedge d\bar{x}^l\cr
&=&\sum_{i<k,j<l}(-\alpha_{ij}\beta_{kl}+\alpha_{kj}\beta_{il}+\alpha_{il}\beta_{kj}-
\alpha_{kl}\beta_{ij})dx^i\wedge dx^k\wedge d\bar{x}^j\wedge d\bar{x}^l.
\end{eqnarray}
Thus
$$
\alpha\wedge\beta=0 \iff \alpha_{ij}\beta_{kl}+
\alpha_{kl}\beta_{ij}=\alpha_{kj}\beta_{il}+\alpha_{il}\beta_{kj}
\quad \forall \ i<k,j<l.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If a function has a finite limit at infinity, does that imply its derivative goes to zero? I've been thinking about this problem: Let $f: (a, +\infty) \to \mathbb{R}$ be a differentiable function such that $\lim\limits_{x \to +\infty} f(x) = L < \infty$. Then must it be the case that $\lim\limits_{x\to +\infty}f'(x) = 0$?
It looks like it's true, but I haven't managed to work out a proof. I came up with this, but it's pretty sketchy:
$$
\begin{align}
\lim_{x \to +\infty} f'(x) &= \lim_{x \to +\infty} \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} \\
&= \lim_{h \to 0} \lim_{x \to +\infty} \frac{f(x+h)-f(x)}{h} \\
&= \lim_{h \to 0} \frac1{h} \lim_{x \to +\infty}[f(x+h)-f(x)] \\
&= \lim_{h \to 0} \frac1{h}(L-L) \\
&= \lim_{h \to 0} \frac{0}{h} \\
&= 0
\end{align}
$$
In particular, I don't think I can swap the order of the limits just like that. Is this correct, and if it isn't, how can we prove the statement? I know there is a similar question already, but I think this is different in two aspects. First, that question assumes that $\lim\limits_{x \to +\infty}f'(x)$ exists, which I don't. Second, I also wanted to know if interchanging limits is a valid operation in this case.
| Let a function oscillate between $y=1/x$ and $y=-1/x$ in such a way that its slope oscillates between $1$ and $-1$. Draw the picture. It's easy to see that such functions exist. Then the function approaches $0$ but the slope doesn't approach anything.
One could ask: If the derivative also has a limit, must it be $0$? And there, I think, the answer is "yes".
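A standard concrete instance of this construction is $f(x)=\sin(x^2)/x$: it tends to $0$, but its derivative keeps oscillating with bounded, non-vanishing amplitude. A quick numeric look:

```python
import numpy as np

x = np.array([10.0, 100.0, 1000.0])
f = np.sin(x**2) / x
fp = 2*np.cos(x**2) - np.sin(x**2) / x**2   # f'(x)
print(f)    # tends to 0
print(fp)   # keeps wandering through [-2, 2]: no limit
```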
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 6,
"answer_id": 5
} |
Techniques of Integration w/ absolute value I cannot solve this integral.
$$
\int_{-3}^3 \frac{x}{1+|x|} ~dx
$$
I have tried rewriting it as:
$$
\int_{-3}^3 1 - \frac{1}{x+1}~dx,
$$
From which I obtain:
$$
x - \ln|x+1|\;\;\bigg\vert_{-3}^{\;3}
$$
My book (Stewart's Calculus 7e) has the answer as 0, and I can intuitively see this from a graph of the function, it being symmetric about the origin (an odd function), but I cannot analytically solve this.
| You have an odd function integrated over an interval symmetric about the origin. The answer is $0$. This is, as you point out, intuitively reasonable from the geometry. A general analytic argument is given in a remark at the end.
If you really want to calculate, break up the region into two parts, where (i) $x$ is positive and (ii) where $x$ is negative. For $x$ positive, you are integrating $\frac{x}{1+x}$. For $x$ negative, you are integrating $\frac{x}{1-x}$, since for $x\le 0$ we have $|x|=-x$. Finally, calculate
$$\int_{3}^0 \frac{x}{1-x}\,dx\quad\text{and}\quad\int_0^3 \frac{x}{1+x}\,dx$$
and add. All that work for nothing!
Remarks: $1.$ In general, when absolute values are being integrated, breaking up into appropriate parts is the safe way to go. Trying to handle both at the same time carries too high a probability of error.
$2.$ Let $f(x)$ be an odd function. We want to integrate from $-a$ to $a$, where $a$ is positive. We look at
$$\int_{-a}^0 f(x)\,dx.$$
Let $u=-x$. Then $du=-dx$, and $f(u)=-f(x)$. So our integral is
$$\int_{u=a}^0 (-1)(-f(u))\,du.$$
the two minus signs cancel. Changing the order of the limits, we get $-\int_0^a f(u)\,du$. so this is just the negative of the integral over the interval from $0$ to $a$. That gives us the desired cancellation.
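One further remark: if one only wants numerical confirmation of the cancellation (as opposed to the exact computation above), a quadrature call suffices:

```python
from scipy.integrate import quad

val, err = quad(lambda x: x / (1 + abs(x)), -3, 3)
print(val)   # 0, up to quadrature error
```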
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Dixon's random squares algorithm: a step in the proof of its subexp. running time I am currently working to understand Dixon's running time proof of his subexp integer factorization algorithm (random squares).
But unfortunately I can not follow him at a certain point in his work. His work is available for free here: http://www.ams.org/journals/mcom/1981-36-153/S0025-5718-1981-0595059-1/home.html
My problem occurs on the last page (page 6 of the file, named page 260), where he states that in case of success the execution of step 1 is bounded by $4hv+2$, where $h$ is the number of primes smaller than $v$, and $v$ is a fixed integer depending on $n$ (the number we want to factorize). I really do not have a clue where this bound comes from. It might have something to do with the expected values of some steps in the algorithm, but this bound seems not to be probabilistic, as far as I understand him.
I guess this problem might only be comprehensible when you already read the whole paper. I am hoping to fluke here.
Best regards!
Robert
| The $4hv+2$ bound is indeed deterministic, but only applies when we're not in a "bad" case. So the question we need to ask is how "bad" cases are defined.
I think the idea of the author is that the previous paragraph can be read by relaxing the condition $N=v^2+1$ to any $N\ge 4hv$, and in particular $N=4hv$. But then we need to change the last line of the paragraph:
$$2c^{-1}+2^{-h}=O(vN^{-1})+O(n^{-1})=O(h^{-1})$$
This means that we also need to change the next sentence so that it reads
All but $O(h^{-1})$ of the $A_L$ will have $N_1\le 4hv+2$.
instead of $O(v^{-1})$. I don't think there is any reason for choosing $4hv+2$ instead of $4hv+1$.
But this doesn't matter in the grand scheme of things, because it still allows us to write the bound
$$\begin{align}
&O(h^{-1}(N+1)h\log n + (4hv+2)h\log n)\quad\text{(*)}\\
=&O(N\log n)+O(vh^2\log n)\\
=&O(vh^2\log n)\\
=&O(v^3)
\end{align}$$
(*): in the original text, $n\log n$ is a typo and should of course read $h\log n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Preimage of a point by a non-constant harmonic function on $\mathbb{R}$ is unbounded
Let $u$ be a non-constant harmonic function on $\mathbb{R}$. Show that $u^{-1}(c)$ is unbounded.
I am not getting what theorem or result to apply. Could anyone help me?
| Let $u(x)=x$, for all $x \in \mathbb{R}$. Then $u''(x)=0$ for all $x$. But $u^{-1}(\{c\}) = \{c\}$. I think somebody is cheating you :-)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
how to change polar coordinate into cartesian coordinate using transformation matrix I would like to change $(3,4,12)$ in $xyz$ coordinate to spherical coordinate using the following relation
It is from this link. I do not understand the significance of this matrix (if not for coordinate transformation) or how it is derived. Also please check my previous question building transformation matrix from spherical to cartesian coordinate system. Please, I need your insight to build up my understanding.
Thank you.
EDIT::
I understand that $ \left [ A_x \sin \theta\cos \phi \hspace{5 mm} A_y \sin \theta\sin\phi \hspace{5 mm} A_z\cos\theta\right ]$ gives $A_r$ but how is other coordinates $ (A_\theta, A_\phi)$ equal to their respective respective rows from Matrix multiplication?
| The transformation from Cartesian to polar coordinates is not a linear function, so it cannot be achieved by means of a matrix multiplication.
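For the coordinates of the point themselves (as opposed to vector components), the nonlinear formulas are the usual ones; for $(3,4,12)$:

```python
import math

x, y, z = 3.0, 4.0, 12.0
r = math.sqrt(x*x + y*y + z*z)
theta = math.acos(z / r)       # polar angle, measured from the z-axis
phi = math.atan2(y, x)         # azimuthal angle
print(r, math.degrees(theta), math.degrees(phi))   # 13.0, ~22.62, ~53.13
```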
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
How can I calculate this limit: $\lim\limits_{x\rightarrow 2}\frac{2-\sqrt{2+x}}{2^{\frac{1}{3}}-(4-x)^\frac{1}{3}}$? I was given this limit to solve, without using L'Hospital rule. It's killing me !! Can I have the solution please ?
$$\lim_{x\rightarrow 2}\frac{2-\sqrt{2+x}}{2^{\frac{1}{3}}-(4-x)^\frac{1}{3}}$$
| The solution below may come perilously close to the forbidden L'Hospital's Rule, though the Marquis is not mentioned.
To make things look more familiar, change signs, and then divide top and bottom by $x-2$. The expression we want the limit of becomes
$$\frac{\sqrt{2+x}-2}{x-2} \cdot \frac{x-2}{(4-x)^{1/3}-2^{1/3}}.$$
We recognize the first part of the product as the "difference quotient" $\frac{f(x)-f(a)}{x-a}$ where $f(x)=\sqrt{2+x}$ and $a=2$.
We recognize the second part of the product as the reciprocal of the difference quotient $\frac{g(x)-g(a)}{x-a}$ where $g(x)=(4-x)^{1/3}$ and $a=2$.
Now take the derivatives.
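Numerically approaching $x=2$ confirms the value $-\tfrac{3\cdot 2^{2/3}}{4}\approx -1.1906$ that the two derivatives produce:

```python
f = lambda x: (2 - (2 + x)**0.5) / (2**(1/3) - (4 - x)**(1/3))

for h in [1e-3, 1e-5, 1e-7]:
    print(f(2 + h))
print(-3 * 2**(2/3) / 4)   # the limit
```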
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
A limit involving polynomials Let be the polynomial:
$$P_n (x)=x^{n+1} - (x^{n-1}+x^{n-2}+\cdots+x+1)$$
I want to prove that it has a single positive real root we'll denote by $x_n$, and then to compute:
$$\lim_{n\to\infty} x_{n}$$
| Since it's not much more work, let's study the roots in $\mathbb{C}$.
Note that $x=1$ is not a solution unless $n=1$, since $P_n(1) = 1-n$.
Since we are interested in the limit $n\to\infty$, we can assume $x\ne 1$.
Sum the geometric series,
$$\begin{eqnarray*}
P_n (x) &=& x^{n+1} - (x^{n-1}+x^{n-2}+\cdots+x+1) \\
&=& x^{n+1} - \frac{x^n-1}{x-1}.
\end{eqnarray*}$$
The roots will satisfy
$$x_n^{n}(x_n^2-x_n-1) = -1.$$
(Addendum: If there are concerns about convergence of the sum, think of summing the series as a shorthand that reminds us that $(x-1)P_n(x) = x^{n}(x^2-x-1) + 1$ for all $x$.)
If $0\le |x_n|<1$, $\lim_{n\to\infty} x^n = 0$, thus, in the limit, there are no complex roots in the interior of the unit circle.
If $|x_n|>1$, $\lim_{n\to\infty} 1/x^n = 0$, thus, in the limit, the roots must satisfy
$$x_n^2 - x_n - 1 = 0.$$
There is one solution to this quadratic equation with $|x_n|>1$, it is real and positive,
$$x_n = \frac{1}{2}(1+\sqrt{5}).$$
This is the golden ratio.
It is the only root exterior to the unit circle.
The rest of the roots must lie on the boundary of the unit circle.
Figure 1. Contour plot of $|P_{15}(x+i y)|$.
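The picture is easy to reproduce numerically: for moderate $n$, numpy's root finder shows one root of modulus $\approx\varphi$ and the rest hugging the unit circle (the choice $n=40$ is arbitrary):

```python
import numpy as np

n = 40
coeffs = [1, 0] + [-1] * n        # x^{n+1} - x^{n-1} - ... - x - 1, highest first
roots = np.roots(coeffs)
mods = np.abs(roots)
print(mods.max())                 # ~1.618..., the golden ratio
print(np.sort(mods)[:-1].max())   # ~1: the remaining roots crowd the unit circle
```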
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Which theories and concepts exist where one calculates with sets? Recently I thought about concepts for calculating with sets instead of numbers. There you might have axioms like:
*
*For every $a\in\mathbb{R}$ (or $a\in\mathbb{C}$) we identify the term $a$ with $\{a\}$.
*For any operator $\circ$ we define $A\circ B := \{a\circ b : a\in A\land b\in B\}$.
*For any function $f$ we define $f(A) := \{ f(a) : a\in A \}$. (More general: For a function $f(x_1,\ldots, x_n)$ we define $f(A_1,\ldots, A_n):= \{f(a_1,\ldots, a_n): a_1\in A_1 \land \dots \land a_n\in A_n \}$).
*One has to find a good definition for $f^{-1}(A)$ which might be the inverse image of $A$.
((3.) is just the normal definition of the image and (2.) is a special case of (3.))
Now I am interested to learn about theories and concepts where one actually calculates with sets (similar to the above axioms).
After a while I found interval arithmetic. What theories or approaches do you know?
Because there will not be just one answer to my question, I will accept the answer with the most upvotes.
Update: The theories do not have to follow the above axioms. It's okay when they make there own definitions how a function shall act on sets. It is just important that you calculate with sets in the theory, concept or approach.
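For finite sets, axioms (2) and (3) are a one-line lift in code, which may help make the definitions concrete (the names here are arbitrary):

```python
def lift(op, A, B):
    # axiom (2): pointwise lift of a binary operation to sets
    return {op(a, b) for a in A for b in B}

A, B = {1, 2}, {10, 20}
print(lift(lambda a, b: a + b, A, B))   # {11, 12, 21, 22}
print(lift(lambda a, b: a * b, A, B))   # {10, 20, 40}: 2*10 = 1*20 collapse
```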
| I like Minkowski addition, aka vector addition. It is a basic operation in the geometry of convex sets. See: zonotopes & zonoids, Brunn-Minkowski inequality, polar sets... and here's a neat inequality for an arbitrary convex set $A\subset\mathbb R^n$:
$$
\mathrm{volume}\,(A-A)\le \binom{2n}{n}\mathrm{volume}\,(A)
$$
with equality when $A$ is a simplex. (Due to Rogers and Shepard, see here)
The case $n=1$ isn't nearly as exciting.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Describing multivariable functions So I am presented with the following question:
Describe and sketch the largest region in the $xy$-plane that corresponds to the domain of the function:
$$g(x,y) = \sqrt{4 - x^2 - y^2} \ln(x-y).$$
Now to be I can find different restrictions like $4 - x^2 - y^2 \geq 0$... but I'm honestly not even sure where to begin this question! Any help?
| So, you need two things:
$$ 4 - x^2 - y^2 \geq 0 $$
to make the square root work, and also
$$ x-y > 0 $$
to make the logarithm work.
You will be graphing two regions in the $xy$-plane, and your answer will be the area which is in both regions.
A good technique for graphing a region given by an inequality is to first replace the inequality by an equality. For the first region this means
$$ 4 - x^2 - y^2 = 0$$
$$ 4 = x^2 + y^2 $$
Therefore, we're talking about the circle of radius two centered at the origin. The next question to answer: do we want the inside or outside of that circle? To determine that, we use a test point: pick any point not on the circle and plug it into the inequality. I'll choose $(x,y) = (5,0)$. Note that this point is on the outside of the circle.
$$ 4 - x^2 - y^2 \geq 0 $$
$$ 4 - 5^2 - 0^2 \geq 0 $$
That's clearly false, so we do not want the outside of the circle. Our region is the inside of the circle. Shade that lightly on your drawing.
Now, do the line by the same algorithm.
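A membership test in code mirrors the two inequalities; the sample points are chosen to probe each constraint:

```python
def in_domain(x, y):
    return 4 - x**2 - y**2 >= 0 and x - y > 0

print(in_domain(1, 0))    # True: inside the circle and below the line y = x
print(in_domain(0, 1))    # False: above the line
print(in_domain(2, 1))    # False: outside the circle
```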
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to determine if 2 points are on opposite sides of a line How can I determine whether the 2 points $(a_x, a_y)$ and $(b_x, b_y)$ are on opposite sides of the line $(x_1,y_1)\to(x_2,y_2)$?
| Writing $A$ and $B$ for the points in question, and $P_1$ and $P_2$ for the points determining the line ...
Compute the "signed" areas of the $\triangle P_1 P_2 A$ and $\triangle P_1 P_2 B$ via the formula (equation 16 here)
$$\frac{1}{2}\left|\begin{array}{ccc}
x_1 & y_1 & 1 \\
x_2 & y_2 & 1 \\
x_3 & y_3 & 1
\end{array}\right|$$
with $(x_3,y_3)$ being $A$ or $B$. The points $A$ and $B$ will be on opposite sides of the line if the areas differ in sign, which indicates that the triangles are being traced-out in different directions (clockwise vs counterclockwise).
You can, of course, ignore the "$1/2$", as it does not affect the sign of the values. Be sure to keep the row-order consistent in the two computations, though.
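In code the determinant collapses to a cross product, and the opposite-side test is just a sign comparison:

```python
def side(p1, p2, p):
    # twice the signed area of the triangle (p1, p2, p)
    return (p2[0]-p1[0])*(p[1]-p1[1]) - (p2[1]-p1[1])*(p[0]-p1[0])

def opposite_sides(p1, p2, a, b):
    return side(p1, p2, a) * side(p1, p2, b) < 0

print(opposite_sides((0, 0), (1, 1), (0, 1), (1, 0)))   # True
print(opposite_sides((0, 0), (1, 1), (0, 1), (1, 2)))   # False: same side
```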
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
smooth quotient Let $X$ be a smooth projective curve over the field of complex numbers. Assume it comes with an action of $\mu_3$. Could someone explain to me why is the quotient $X/\mu_3$ a smooth curve of genus zero?
| No, the result is not true for genus $1$.
Consider an elliptic curve $E$ (which has genus one) and a non-zero point $a\in E$ of order three .
Translation by $a$ is an automorphism $\tau_a:E\to E: x\mapsto x+a$ of order $3$ of the Riemann surface $E$.
It generates a group $G=\langle \tau_a\rangle\cong \mu_3$ of order $3$ acting freely on $E$ and the quotient $E/G$ is an elliptic curve, a smooth curve of genus one and not of genus zero.
Remarks
a) There are eight choices for $a$, since the $3$-torsion of $E$ is isomorphic to $\mathbb Z/3\mathbb Z\times \mathbb Z/3\mathbb Z$.
b) The quotient morphism $E\to E/G$ is a non-ramified covering.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Direct limit of localizations of a ring at elements not in a prime ideal For a prime ideal $P$ of a commutative ring $A$, consider the direct limit of the family of localizations $A_f$ indexed by the set $A \setminus P$ with partial order $\le$ such that $f \le g$ iff $V(f) \subseteq V(g)$. (We have for such $f \le g$ a natural homomorphism $A_f \to A_g$.) I want to show that this direct limit, $\varinjlim_{f \not\in P} A_f$, is isomorphic to the localization $A_P$ of $A$ at $P$. For this I consider the homomorphism $\phi$ that maps an equivalence class $[(f, a/f^n)] \mapsto a/f^n$. (I denote elements of the disjoint union $\sqcup_{f \not\in P} A_f$ by tuples $(f, a/f^n)$.) Surjectivity is clear, because for any $a/s \in A_P$ with $s \not\in P$, we have the class $[(s, a/s)] \in \varinjlim_{f \not\in P} A_f$ whose image is $a/s$. For injectivity, suppose we have a class $[(f, a/f^n)]$ whose image $a/f^n = 0/1 \in A_P$. Then there exists $t \notin P$ such that $ta = 0$. We want to show that $[(f, a/f^n)]] = [(f, 0/1)]$, which I believe is equivalent to finding a $g \notin P$ such that $V(f) \subseteq V(g)$ and $g^ka = 0$ for some $k \in \mathbb{N}$. Well, $t$ seems to almost work, but I couldn’t prove that $V(f) \subseteq V(t)$, so maybe we need a different $g$? Or am I using the wrong map entirely?
| If $a/f^n \in A_f$ is mapped to $0$ in $A_p,$ then there is a $g \not \in p,$ s.t. $ga=0,$ therefore, $a/f^n=0 \in A_{gf}.$ Hence the injectivity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
principal value as distribution, written as integral over singularity Let $C_0^\infty(\mathbb{R})$ be the set of smooth functions with compact support on the real line $\mathbb{R}.$ Then, the map
$$\operatorname{p.\!v.}\left(\frac{1}{x}\right)\,: C_0^\infty(\mathbb{R}) \to \mathbb{C}$$
defined via the Cauchy principal value as
$$ \operatorname{p.\!v.}\left(\frac{1}{x}\right)(u)=\lim_{\varepsilon\to 0+} \int_{\mathbb{R}\setminus [-\varepsilon;\varepsilon]} \frac{u(x)}{x} \, \mathrm{d}x \quad\text{ for }u\in C_0^\infty(\mathbb{R})$$
Now why is
$$ \lim_{\varepsilon\to 0+} \int_{\mathbb{R}\setminus [-\varepsilon;\varepsilon]} \frac{u(x)}{x} \, \mathrm{d}x = \int_0^{+\infty} \frac{u(x)-u(-x)}{x}\, \mathrm{d}x $$
true, and why is the integral well defined?
| We can write
$$I(\varepsilon):=\int_{\Bbb R\setminus [-\varepsilon,\varepsilon]}\frac{u(x)}xdx=\int_{-\infty}^{-\varepsilon}\frac{u(x)}xdx+\int_{\varepsilon}^{\infty}\frac{u(x)}xdx.$$
In the first integral of the RHs, we do the substitution $t=-x$, then
$$I(\varepsilon)=-\int_{\varepsilon}^{+\infty}\frac{u(t)}tdt+\int_{\varepsilon}^{\infty}\frac{u(x)}xdx=\int_{\varepsilon}^{+\infty}\frac{u(t)-u(-t)}tdt.$$
Now we can conclude, since, by the fundamental theorem of calculus, the integral $\int_0^{+\infty}\frac{u(t)-u(-t)}tdt$ is convergent. Indeed,
$$u(t)-u(-t)=\int_{-t}^tu'(s)ds=\left[su'(s)\right]_{-t}^t-\int_{-t}^tsu''(s)ds\\=
t(u'(t)+u'(-t))-\int_{-t}^tsu''(s)ds$$
hence, for $0<t\leq 1$
$$\frac{|u(t)-u(-t)|}t\leq 2\sup_{|s|\leq 1}|u'(s)|+2\sup_{|s|\leq 1}|u''(s)|.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Closest point of line segment without endpoints I know of a formula to determine shortest line segment between two given line segments, but that works only when endpoints are included. I'd like to know if there is a solution when endpoints are not included or if I'm mixing disciplines incorrectly.
Example : Line segment $A$ is from $(1, 1)$ to $(1, 4)$ and line segment $B$ is from $(0, 0)$ to $(0, 2)$, so shortest segment between them would be $(0, 1)$ to $(1, 1)$. But of line segment $A$ did not include those end points, how would that work since $(1, 1)$ is not part of line segment $A$?
| There would not be a shortest line segment. Look at line segments from $(0,1)$ to points very close to $(1,1)$ on the segment that joins $(1,1)$ and $(1,4)$. These segments get shorter and shorter, approaching length $1$ but never reaching it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/162996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
First order ordinary differential equations involving powers of the slope
Are there any general approaches to differential equations like
$$x-x\ y(x)+y'(x)\ (y'(x)+x\ y(x))=0,$$
or that equation specifically?
The problem seems to be the term $y'(x)^2$. Solving the equation for $y'(x)$ like a quadratic equation gives some expression $y'(x)=F(y(x),x)$, where $F$ is "not too bad" as it involves small polynomials in $x$ and $y$ and roots of such objects. That might be a starting point for a numerical approach, but I'm actually more interested in theory now.
$y(x)=1$ is a stationary solution. Plugging in $y(x)\equiv 1+z(x)$ and taking a look at the new equation makes me think functions of the form $\exp{(a\ x^n)}$ might be involved, but that's only speculation. I see no symmetry whatsoever and dimensional analysis fails.
| Hint:
$x-xy+y'(y'+xy)=0$
$(y')^2+xyy'+x-xy=0$
$(y')^2=x(y-1-yy')$
$x=\dfrac{(y')^2}{y-1-yy'}$
Follow the method in http://science.fire.ustc.edu.cn/download/download1/book%5Cmathematics%5CHandbook%20of%20Exact%20Solutions%20for%20Ordinary%20Differential%20EquationsSecond%20Edition%5Cc2972_fm.pdf#page=234:
Let $t=y'$ ,
Then $\left(1+\dfrac{t^3(t-1)}{y^2(t-1)+1}\right)\dfrac{dy}{dt}=\dfrac{2t^2}{y(1-t)-1}+\dfrac{yt^3}{y(1-t^2)-1}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is it true that $\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}$? I came across an equality, which states that
If $D\subset\mathbb{R}^n, n\geq 2$ is compact, for each $ f\in C(D)$, we have the following equality
$$\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}.$$
Actually I can not judge if it is right. Can anyone tell me if it is right, and if so, how to prove it?
Thanks a lot.
| I assume that you are assuming that the max and min exist, or that $f(x)$ is continuous, which in turn guarantees that the max and min exist, since $D$ is compact.
First note that if we have $a \leq y \leq b$, then $\vert y \vert \leq \max\{\vert a \vert, \vert b \vert\}$, where $a,b,y \in \mathbb{R}$. Hence, $$\min_D f(x) \leq f(x) \leq \max_D f(x) \implies \vert f(x) \vert \leq \max\{\vert \max_D f(x) \vert, \vert \min_D f(x) \vert\}, \, \forall x$$
Hence, we have that $$\max_D \vert f(x) \vert \leq \max\{\vert \max_D f(x) \vert, \vert \min_D f(x) \vert\}$$
Now we need to prove the inequality the other way around. Note that we have $$\vert f(x) \vert \leq \max_D \vert f(x) \vert$$ This implies $$-\max_D \vert f(x) \vert \leq f(x) \leq \max_D\vert f(x) \vert$$
This implies
$$-\max_D \vert f(x) \vert \leq \max_D f(x) \leq \max_D\vert f(x) \vert$$
$$-\max_D \vert f(x) \vert \leq \min_D f(x) \leq \max_D\vert f(x) \vert$$
Hence, we have that $$\vert \max_D f(x) \rvert \leq \max_D\vert f(x) \vert$$ $$\vert \min_D f(x) \rvert \leq \max_D\vert f(x) \vert$$
The above two can be put together as $$\max \{\vert \max_D f(x) \rvert, \vert \min_D f(x) \rvert \} \leq \max_D\vert f(x) \vert$$
Hence, we can now conclude that $$\max \{\vert \max_D f(x) \rvert, \vert \min_D f(x) \rvert \} = \max_D\vert f(x) \vert$$
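A quick grid check on one concrete function (the particular $f$ and domain are arbitrary):

```python
import numpy as np

x = np.linspace(0, 2*np.pi, 100001)
f = np.sin(x) - 0.5
print(np.abs(f).max())                   # 1.5
print(max(abs(f.max()), abs(f.min())))   # 1.5: the two sides agree
```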
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Sum of Natural Number Ranges? Given a positive integer $n$, some positive integers $x$ can be represented as follows:
$$1 \le i \le j \le n$$
$$x = \sum_{k=i}^{j}k$$
Given $n$ and $x$ determine if it can be represented as the above sum (if $\exists{i,j}$), and if so determine the $i$ and $j$ such that the sum has the smallest number of terms. (minimize $j-i$)
I am not sure how to approach this. Clearly closing the sum gives:
$$x = {j^2 + j - i^2 + i \over 2}$$
But I'm not sure how to check if there are integer solutions, and if there are to find the one with smallest $j-i$.
| A start: Note that
$$2x=j^2+j-i^2+i=(j+i)(j-i+1).$$
The two numbers $j+i$ and $j-i$ have the same parity (both are even or both are odd). So we must express $2x$ as a product of two numbers of different parities.
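Turning this hint into an algorithm (a sketch; the function name and search order are my own): iterate over the candidate factor $v=j-i+1$, the number of terms, in increasing order, so the first valid factorization automatically minimizes $j-i$.

```python
def min_range_sum(x, n):
    """Find (i, j) with 1 <= i <= j <= n, i + ... + j == x and j - i minimal."""
    m = 2 * x
    v = 1                                  # v = j - i + 1, the number of terms
    while v * v <= m:
        if m % v == 0:
            u = m // v                     # u = j + i
            if (u + v) % 2 == 1:           # u and v must have opposite parity
                i, j = (u - v + 1) // 2, (u + v - 1) // 2
                if 1 <= i <= j <= n:
                    return i, j            # first hit: minimal v, hence minimal j - i
        v += 1
    return None

print(min_range_sum(10, 4))   # (1, 4): 1+2+3+4 = 10
```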
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
for what value of $a$ has equation rational roots? Suppose that we have following quadratic equation containing some constant $a$
$$ax^2-(1-2a)x+a-2=0.$$
We have to find all integers $a$,for which this equation has rational roots.
First I have tried to determine for which $a$ this equation has a real solution, so I calculated the discriminant (also I guessed that $a$ must not be equal to zero, because in this situation it would be a linear form, namely $-x-2=0$ with the trivial solution $x=-2$).
So $$D=(1-2a)^2-4a(a-2)$$ if we simplify,we get $$D=1-4a+4a^2-4a^2+8a=1+4a$$
So we have the roots $x_1=\dfrac{(1-2a)-\sqrt{1+4a}}{2a}$ and $x_2=\dfrac{(1-2a)+\sqrt{1+4a}}{2a}$.
Because we are interested in rational numbers, it is clear that $1+4a$ must be a perfect square ($a=0$ is not in the solution set, as noted above). I have tried $a=2$, $a=6$, $a=12$, but are these the only values I need, or how can I find all such $a$?
| Edited in response to Simon's comment.
Take any odd number; you can write it as $2m+1$, for some integer $m$; its square is $4m^2+4m+1$; so if $a=m^2+m$ then, no matter what $m$ is, you get rational roots.
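A quick check of this claim, using the corrected root formula above (the loop bound is arbitrary): for $a=m^2+m$ we have $1+4a=(2m+1)^2$, so the discriminant is a perfect square and both roots are rational.

```python
from fractions import Fraction
from math import isqrt

for m in range(1, 6):
    a = m * m + m                       # a = 2, 6, 12, 20, 30, ...
    disc = 1 + 4 * a                    # equals (2m + 1)^2, a perfect square
    r = isqrt(disc)
    assert r * r == disc
    roots = [Fraction(1 - 2*a - r, 2*a), Fraction(1 - 2*a + r, 2*a)]
    print(a, roots)                     # both roots are rational
```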
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why does the Gauss-Bonnet theorem apply only to an even number of dimensions? One can use the Gauss-Bonnet theorem in 2D or 4D to deduce topological characteristics of a manifold by doing integrals over the curvature at each point.
First, why isn't there an equivalent theorem in 3D? Why can't the theorem be proved for an odd number of dimensions (i.e., what part of the proof prohibits such a generalization)?
Second and related, if there was such a theorem, what interesting and difficult problem would become easy/inconsistent? (the second question is intentionally vague, no need to answer if it is not clear)
| First, for a discussion involving Chern's original proof, check here, page 18.
I think the reason is that the original Chern-Gauss-Bonnet theorem can be treated topologically as
$$
\int_{M} e(TM)=\chi_{M}
$$
and for odd-dimensional manifolds, the Euler class is "almost zero", since $e(TM)+e(TM)=0$. So it vanishes in de Rham cohomology. On the other hand, $\chi_{M}=0$ if $\dim(M)$ is odd. So the theorem trivially holds in the odd-dimensional case.
Another perspective is through the Atiyah-Singer index theorem. The Gauss-Bonnet theorem can be viewed as a special case involving the index of the de Rham Dirac operator:
$$
D=d+d^{*}
$$
But on odd-dimensional manifolds, the index of $D$ is zero. Therefore both the left and right hand sides of Gauss-Bonnet are zero.
I heard via street rumor that there is some hope to "twist" the Dirac operator in K-theory, so that the index theorem gives non-trivial results for odd dimensions. But this can be rather involved, and is not my field of expertise. One expert on this is Daniel Freed, whom you may contact on this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Is the range of this operator closed? I think I am stuck with showing closedness of the range of a given operator. Given a sequence $(X_n)$ of closed subspaces of a Banach space $X$. Define $Y=(\oplus_n X_n)_{\ell_2}$ and set $T\colon Y\to X$ by $T(x_n)_{n=1}^\infty = \sum_{n=1}^\infty \frac{x_n}{n}$. Is the range of $T$ closed?
| The range is not necessarily closed. For example, take $X=(\oplus_{n \in \mathbb{N}} X_n)_{\ell_2}=Y$:
if $T(Y)$ is closed, then $T(Y)$ is a Banach space. $T$ is a continuous bijective map from $Y$ onto $T(Y)$, so it is a homeomorphism (open mapping theorem). But $T^{-1}$ is not continuous, because $T^{-1}x_n=nx_n$ for $x_n \in X_n$.
So $T(Y)$ is not closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Projection onto Singular Vector Subspace for Singular Value Decomposition I am not very sophisticated in my linear algebra, so please excuse any messiness in my terminology, etc.
I am attempting to reduce the dimensionality of a dataset using Singular Value Decomposition, and I am having a little trouble figuring out how this should be done. I have found a lot of material about reducing the rank of a matrix, but not reducing its dimensions.
For instance, if I decompose using SVD: $A = USV^T$, I can reduce the rank of the matrix by eliminating singular values below a certain threshold and their corresponding vectors. However, doing this returns a matrix of the same dimensions of $A$, albeit of a lower rank.
What I actually want is to be able to express all of the rows of the matrix in terms of the top principal components (so an original 100x80 matrix becomes a 100x5 matrix, for example). This way, when I calculate distance measures between rows (cosine similarity, Euclidean distance), the distances will be in this reduced dimension space.
My initial take is to multiply the original data by the singular vectors: $AV_k$. Since $V$ represents the row space of $A$, I interpret this as projecting the original data into a subspace of the first $k$ singular vectors of the SVD, which I believe is what I want.
Am I off base here? Any suggestions on how to approach this problem differently?
| If you want to do the $100\times80$ to $100\times5$ conversion, you can just multiply $U$ with the reduced $S$ (after eliminating low singular values). What you will be left with is a $100\times80$ matrix, but the last $75$ columns are $0$ (provided your singular value threshold left you with only $5$ values). You can just eliminate the columns of $0$ and you will be left with $100\times5$ representation.
The above $100\times5$ matrix can be multiplied with the $5\times80$ matrix obtained by removing the last $75$ rows of $V$ transpose, this results in the approximation of $A$ that you are effectively using.
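A minimal NumPy sketch of both constructions (the sizes are illustrative): the asker's projection $AV_k$ and the product $U_k S_k$ agree, since $AV_k = U_k\,\mathrm{diag}(s_1,\dots,s_k)$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # U: 100x80, s: (80,), Vt: 80x80
k = 5

rows_reduced = U[:, :k] * s[:k]     # 100x5: the U_k S_k representation
rows_projected = A @ Vt[:k].T       # 100x5: projection onto top-k right singular vectors
assert np.allclose(rows_reduced, rows_projected)

A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # rank-k approximation, still 100x80
```

Distances (cosine, Euclidean) computed between rows of `rows_reduced` are then distances in the reduced $k$-dimensional space, which is what the question asks for.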
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
$p=4n+3$ never has a Decomposition into $2$ Squares, right? Primes of the form $p=4k+1\;$ have a unique decomposition as sum of squares $p=a^2+b^2$ with $0<a<b\;$, due to Thue's Lemma.
Is it correct to say that primes of the form $p=4n+3$ never have a decomposition into $2$ squares, because the sum of the quadratic residues $a^2+b^2$ with $a,b\in \Bbb{N}$ satisfies
$$
a^2 \bmod 4 +b^2 \bmod 4 \le 2?
$$
If so, are there alternate ways to prove it?
| Yes, as it has been pointed out, if $a^2+b^2$ is odd, one of $a$ and $b$ must be even and the other odd. Then
$$
(2m)^2+(2n+1)^2=(4m^2)+(4n^2+4n+1)=4(m^2+n^2+n)+1\equiv1\pmod{4}
$$
Thus, it is impossible to have $a^2+b^2\equiv3\pmod{4}$.
In fact, suppose that a prime $p\equiv3\pmod{4}$ divides $a^2+b^2$. Since $p$ cannot be written as the sum of two squares, $p$ is also a prime over the Gaussian integers. Therefore, since $p\,|\,(a+ib)(a-ib)$, we must also have that $p\,|\,a+ib$ or $p\,|\,a-ib$, either of which implies that $p\,|\,a$ and $p\,|\,b$. Thus, the exponent of $p$ in the factorization of $a^2+b^2$ must be even.
Furthermore, each positive integer whose prime factorization contains each prime $\equiv3\pmod{4}$ to an even power is a sum of two squares.
Using the result about quadratic residues in this answer, for any prime $p\equiv1\pmod{4}$, we get that $-1$ is a quadratic residue $\bmod{p}$. That is, there is an $x$ so that
$$
x^2+1\equiv0\pmod{p}\tag{1}
$$
This means that
$$
p\,|\,(x+i)(x-i)\tag{2}
$$
since $p$ can divide neither $x+i$ nor $x-i$, $p$ is not a prime in the Gaussian integers, so it must be the product of two Gaussian primes (any more, and we could find a non-trivial factorization of $p$ over the integers). That is, we can write
$$
p=(u+iv)(u-iv)=u^2+v^2\tag{3}
$$
Note also that
$$
2=(1+i)(1-i)=1^2+1^2\tag{4}
$$
Suppose $n$ is a positive integer whose prime factorization contains each prime $\equiv3\pmod{4}$ to even power. Each factor of $2$ and each prime factor $\equiv1\pmod{4}$ can be split into a pair of conjugate Gaussian primes. Each pair of prime factors $\equiv3\pmod{4}$ can be split evenly. Thus, we can split the factors into conjugate pairs:
$$
n=(a+ib)(a-ib)=a^2+b^2\tag{5}
$$
For example,
$$
\begin{align}
90
&=2\cdot3^2\cdot5\\
&=(1+i)\cdot(1-i)\cdot3\cdot3\cdot(2+i)\cdot(2-i)\\
&=[(1+i)3(2+i)]\cdot[(1-i)3(2-i)]\\
&=(3+9i)(3-9i)\\
&=3^2+9^2
\end{align}
$$
Thus, we have shown that a positive integer is the sum of two squares if and only if each prime $\equiv3\pmod{4}$ in its prime factorization occurs with even exponent.
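A computational restatement of this criterion (using SymPy's `factorint`; the brute-force cross-check is my own addition):

```python
from sympy import factorint

def is_sum_of_two_squares(n):
    # every prime p = 3 (mod 4) must occur to an even power
    return all(e % 2 == 0 for p, e in factorint(n).items() if p % 4 == 3)

def brute(n):
    root = int(n**0.5)
    return any(a*a + b*b == n for a in range(root + 1) for b in range(a, root + 1))

assert all(is_sum_of_two_squares(n) == brute(n) for n in range(1, 500))
print(is_sum_of_two_squares(90))   # True: 90 = 3^2 + 9^2
```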
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
How many subsets are there? I'm having trouble simplifying the expression for how many sets I can possibly have.
It's a very specific problem for which the specifics don't actually matter, but for $q$, some power of $2$ greater than $4$, I have a set of $q - 3$ elements. I am finding all subsets which contain at least two elements and at most half ($q / 2$, which would then also be a power of $2$) of the elements. I know that the total number of subsets which satisfy these conditions is (sorry my TeX may be awful),
$$\sum\limits_{i=2}^{\frac{q}{2}} \binom{q-3}{i}= \sum\limits_{i=0}^{\frac{q}{2}} \binom{q-3}{i}- (q - 3 + 1)$$
but I'm having a tough time finding a closed-form expression for this summation. I'm probably missing something, but it has stumped me this time.
| You're most of the way there; now just shave a couple of terms from the upper limit of your sum. Setting $r=q-3$ for clarity, and noting that $\lfloor r/2\rfloor = q/2-2$ (and that $r$ is odd since $q$ is a power of $2$),
$$\sum_{i=0}^{q/2}\binom{r}{i} = \binom{r}{q/2}+\binom{r}{q/2-1} + \sum_{i=0}^{\lfloor r/2\rfloor}\binom{r}{i}$$
And the last sum should be easy to compute, since by symmetry it's one half of $\displaystyle\sum_{i=0}^r\binom{r}{i}$...
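Assembling the steps gives the closed form $2^{r-1}+\binom{r}{q/2}+\binom{r}{q/2-1}-(r+1)$ for the original count. A quick sanity check (my own assembly of the hint):

```python
from math import comb

for q in (8, 16, 32, 64):
    r = q - 3
    direct = sum(comb(r, i) for i in range(2, q // 2 + 1))
    closed = 2**(r - 1) + comb(r, q // 2) + comb(r, q // 2 - 1) - (r + 1)
    assert direct == closed
    print(q, direct)
```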
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Examples of logs with other bases than 10 From a teaching perspective, sometimes it can be difficult to explain how logarithms work in Mathematics. I came to the point where I tried to explain binary and hexadecimal to someone who did not have a strong background in Mathematics. Are there some common examples that can be used to explain this?
For example (perhaps this is not the best), but we use tally marks starting from childhood. A complete set marks five tallies and then a new set is made. This could be an example of a log with a base of 5.
| The most common scales of non-decimal logarithms are the musical scales.
For example, the octave is a doubling of frequency over 12 semitones. The harmonics are based on integer ratios, where the logarithms of 2, 3, and 5 correspond approximately to 12, 19 and 28 semitones. One can do things like look at the ratios represented by the black keys or the white keys on a piano keyboard. The black keys are a more basic set than the white keys (they are all repeated in the white keys, with two additions).
The brightness of stars is measured in steps of 0.4 dex (i.e. 5 magnitudes = a factor of 100), while there is the decibel scale (where the same numbers represent power ratios via $10\log_{10}$ vs amplitude ratios via $20\log_{10}$).
The ISO R40 series is a series of decimal preferred numbers; the steps are in terms of $40\log_{10}$, which is very close to the semitone scale.
One can, for example, start with crude inequalities like $5<6$ and, on a graph of $x=\log_2(3)$ vs $y=\log_2(5)$, draw each such inequality as a line that restricts the point given by the true values of $\log_2(3)$ and $\log_2(5)$ to the region above or below it. One finds that this converges quite rapidly, using inequalities involving numbers less than 100.
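To make the semitone arithmetic above concrete, here is a one-line conversion (the helper name is my own):

```python
import math

def semitones(ratio):
    # equal-tempered semitones spanned by a frequency ratio
    return 12 * math.log2(ratio)

print(semitones(2))      # 12.0  (octave)
print(semitones(3))      # ~19.02, the "19 semitones" above
print(semitones(5))      # ~27.86, the "28 semitones" above
print(semitones(3 / 2))  # ~7.02  (perfect fifth)
```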
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Integrating multiple times. I am having a problem integrating the equation below. If I integrate it w.r.t. $x$, then w.r.t. $y$, and then w.r.t. $z$, the answer comes out to be 0, but the actual answer is 52. Please help out. Thanks
| $$\begin{align}
\int_1^3 (6yz^3+6x^2y)\,dx &= \left[6xyz^3+2x^3y\right]_{x=1}^3
=12yz^3+52y
\\
\int_0^1 (12yz^3+52y)\,dy &=\left[6y^2z^3+26y^2\right]_{y=0}^1
=6z^3+26
\\
\int_{-1}^1 (6z^3+26)\,dz &= \left[\frac{3}{2}z^4+26z\right]_{z=-1}^1 = 52 .
\end{align}$$
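For reference, the same computation can be checked in SymPy:

```python
from sympy import symbols, integrate

x, y, z = symbols('x y z')
print(integrate(6*y*z**3 + 6*x**2*y, (x, 1, 3), (y, 0, 1), (z, -1, 1)))  # 52
```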
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Saturated Boolean algebras in terms of model theory and in terms of partitions Let $\kappa$ be an infinite cardinal. A Boolean algebra $\mathbb{B}$ is said to be $\kappa$-saturated if there is no partition (i.e., collection of elements of $\mathbb{B}$ whose pairwise meet is $0$ and least upper bound is $1$) of $\mathbb{B}$, say $W$, of size $\kappa$. Is there any relationship between this and the model theoretic meaning of $\kappa$-saturated (namely that all types over sets of parameters of size $<\kappa$ are realized)?
| As far as I know, there is no connection; it's just an unfortunate clash of terminology. It's especially unfortunate because the model-theoretic notion of saturation comes up in the theory of Boolean algebras. For example, the Boolean algebra of subsets of the natural numbers modulo finite sets is $\aleph_1$-saturated but (by a definitely non-trivial result of Hausdorff) not $\aleph_2$-saturated, in the model-theoretic sense, even if the cardinality of the continuum is large.
When (complete) Boolean algebras are used in connection with forcing, it is customary to say "$\kappa$-chain condition" instead of "$\kappa$-saturated" in the antichain sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Common factors of the ideals $(x - \zeta_p^k)$, $x \in \mathbb Z$, in $\mathbb Z[\zeta_p]$ I'm trying to understand a proof of the following Lemma (regarding Catalan's conjecture):
Lemma:
Let $x\in \mathbb{Z}$, let $p$ and $q$ be odd primes with $q\neq p$, $G:=\text{Gal}(\mathbb{Q}(\zeta_p):\mathbb{Q})$, $x\equiv 1\pmod{p}$ and $\lvert x\rvert >q^{p-1}+q$. Then the map $\phi:\{\theta\in\mathbb{Z}:(x-\zeta_p)^\theta\in\mathbb{Q}(\zeta_p)^{*q}\}\rightarrow\mathbb{Q}(\zeta_p)$, $\ $ $\phi(\theta)=\alpha$ such that $\alpha^q=(x-\zeta_p)^\theta$, is injective.
I don't understand the following step in the proof:
The ideals $(x-\sigma(\zeta_p))_{\sigma \in G}$ in $\mathbb{Z}[\zeta_p]$ have at most the factor $(1-\zeta_p)$ in common. Since $x\equiv 1\pmod{p}$ the ideals do have this factor in common.
Could you please explain to me why these two statements are true?
| In the world of ideals gcd is the sum. Pick any two ideals $(x-\sigma(\zeta_p))$ and $(x-\sigma'(\zeta_p))$. Then $\sigma(\zeta_p)-\sigma'(\zeta_p)$ will be in the sum ideal. This is a difference of two distinct powers of $\zeta_p$, so generates the same ideal as one of the $1-\zeta_p^k$, $0<k<p$. All of these generate the same prime ideal as $(1-\zeta_p)$, because the quotient
$$\frac{1-\zeta_p^k}{1-\zeta_p}=\frac{1-\zeta_p^k}{1-(\zeta_p^k)^{k'}}=\left(\frac{1-(\zeta_p^k)^{k'}}{1-\zeta_p^k}\right)^{-1}$$
is manifestly a unit in the ring $\mathbb{Z}[\zeta_p]$ (here $kk'\equiv1\pmod p$).
Because the ideal $(1-\zeta_p)$ is a non-zero prime ideal, hence maximal,
this gives the first claim you asked about.
The second claim follows from the well known fact that up to a unit factor (i.e. as a principal ideal) $(p)$ is the power $(1-\zeta_p)^{p-1}$. If $x=mp+1$, $m$ a rational integer, then
$(1-\zeta_p)$ divides both $mp$ and $(1-\sigma(\zeta_p))$, and therefore also $x-\sigma(\zeta_p)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/163900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Silly question about Fourier Transform What is the Fourier Transform of :
$$\sum_{n=1}^N A_ne^{\large-a_nt} u(t)~?$$
This is a time domain function, how can I find its Fourier Transform (continuous not discrete) ?
| Tips:
* The Fourier transform is linear; $$\mathcal{F}\left\{\sum_l a_lf_l(t)\right\}=\sum_l a_l\mathcal{F}\{f_l(t)\}.$$
* Plug $e^{-ct}u(t)$ into $\mathcal{F}$ and then discard part of the region of integration ($u(t)=0$ when $t<0$):
$$\int_{-\infty}^\infty e^{-ct}u(t)e^{-2\pi i st}dt=\int_0^\infty e^{-(c+2\pi is)t}dt=? $$
Now put these two together.
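If you want to check the remaining integral symbolically, here is a SymPy sketch (the positivity assumption on $c$ is needed for convergence; depending on the version, SymPy may wrap the result in a Piecewise):

```python
from sympy import symbols, integrate, exp, pi, I, oo, simplify

t, s = symbols('t s', real=True)
c = symbols('c', positive=True)   # assume Re(c) > 0 so the integral converges

F = integrate(exp(-(c + 2*pi*I*s)*t), (t, 0, oo))
print(simplify(F))                # 1/(c + 2*I*pi*s)
```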
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What does this mean: $\mathbb{Z_{q}^{n}}$? I can't understand the notation $\mathbb{Z}_{q}^{n} \times \mathbb{T}$ as defined below. As far as I know $\mathbb{Z_{q}}$ comprises all integers modulo $q$. But with $n$ as a power symbol I can't understand it. Also: $\mathbb{R/Z}$, what does it denote?
"... $ \mathbb{T} = \mathbb{R}/\mathbb{Z} $ the additive group on reals modulo one. Denote by $ A_{s,\phi} $ the distribution on $ \mathbb{Z}^n_q \times \mathbb{T}$ ..."
| You write "afaik $\mathbb Z_q$ comprises..." You have to be careful here what is meant by this notation. There are two common options:
1) $\mathbb Z_q$ is the ring of integers modulo $q$. Many people think this would be better written as $\mathbb Z/q$, to avoid confusion with 2). However, it is not uncommon to write $\mathbb Z_q$.
2) The ring of $q$-adic integers, i.e. formal power series in $q$ with coefficients in $\mathbb Z\cap [0,q-1]$.
In your quote, meaning 1) is intended, and the superscript in $\mathbb Z_q^n$ just denotes the $n$-fold Cartesian product: vectors of length $n$ with entries in $\mathbb Z_q$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to solve $3x+\sin x = e^x$ How doese one solve this equation?
$$
3x+\sin x = e^x
$$
I tried graphing it and could only find approximate solutions, not the exact solutions.
My friends said to use Newton-Raphson, Lagrange interpolation, etc., but I don't know what these are as they are much beyond the high school syllabus.
| Sorry, but a caution about these numerical values: they depend on the angle mode used for $\sin$.
In radians (the standard convention for this equation), $f(1)=3(1)+\sin(1)-e^{1}\approx 1.1232$, not $0.299170578$.
And
$f(0.5)=3(0.5)+\sin(0.5)-e^{0.5}\approx 0.3307>0$, not $-0.1399947352$.
(The values $0.299170578$ and $-0.1399947352$ are what one gets by evaluating $\sin(1)$ and $\sin(0.5)$ in degrees.)
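Since the question mentions Newton-Raphson: the method just iterates $x \leftarrow x - f(x)/f'(x)$. A minimal sketch (radians throughout; the starting guesses are my own):

```python
import math

def f(x):  return 3*x + math.sin(x) - math.exp(x)
def df(x): return 3 + math.cos(x) - math.exp(x)

def newton(x, steps=25):
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

print(newton(0.0))   # ~0.3604 (root between 0 and 0.5, where f changes sign)
print(newton(2.0))   # ~1.8899 (second root; f changes sign again near 1.9)
```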
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Asymptotic behavior of the expression: $(1-\frac{\ln n}{n})^n$ when $n\rightarrow\infty$ The well known results states that:
$\lim_{n\rightarrow \infty}(1-\frac{c}{n})^n=(1/e)^c$ for any constant $c$.
I need the following limit: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n$.
Can I prove it in the following way? Let $x=\frac{n}{\ln n}$, then we get: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\lim_{x\rightarrow \infty}(1-\frac{1}{x})^{x\ln n}=(1/e)^{\ln n}=\frac{1}{n}$.
So, $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\frac{1}{n}$.
I see that this is wrong to have an expression with $n$ after the limit. But how to show that the asymptotic behavior is $1/n$?
Thanks!
| According to the comments, your real aim is to prove that $x_n=n\left(1-\frac{\log n}n\right)^n$ has a non degenerate limit.
Note that $\log x_n=\log n+n\log\left(1-\frac{\log n}n\right)$ and that $\log(1-u)=-u+O(u^2)$ when $u\to0$ hence $n\log\left(1-\frac{\log n}n\right)=-\log n+O\left(\frac{(\log n)^2}n\right)$ and $\log x_n=O\left(\frac{(\log n)^2}n\right)$.
In particular, $\log x_n\to0$, hence $x_n\to1$, that is,
$$
\left(1-\frac{\log n}n\right)^n\sim\frac1n.
$$
Edit: In the case at hand, one knows that $\log(1-u)\leqslant-u$ for every $u$ in $[0,1)$. Hence $\log x_n\leqslant0$ and, for every $n\geqslant1$,
$$
\left(1-\frac{\log n}n\right)^n\leqslant\frac1n.
$$
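A numerical illustration of both the equivalence and the upper bound (my own check):

```python
import math

for n in (10**2, 10**4, 10**6, 10**8):
    print(n, n * (1 - math.log(n) / n)**n)   # tends to 1 from below
```

The printed values increase toward 1, consistent with $x_n\to1$ and $x_n\leqslant1$.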
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 0
} |
How to find $\lim\limits_{n\rightarrow \infty}\frac{(\log n)^p}{n}$ How to solve $$\lim_{n\rightarrow \infty}\frac{(\log n)^p}{n}$$
| Apply L'Hospital's rule $\,[p]+1\,$ times to $\,\displaystyle{f(x):=\frac{\log^px}{x}}$:
$$\lim_{x\to\infty}\frac{\log^px}{x}\stackrel{L'H}=\lim_{x\to\infty}p\frac{\log^{p-1}(x)}{x}\stackrel{L'H}=\lim_{x\to\infty}p(p-1)\frac{\log^{p-2}(x)}{x}\stackrel{L'H}=...\stackrel{L'H}=$$
$$\stackrel{L'H}=\lim_{x\to\infty}p(p-1)(p-2)...(p-[p])\frac{\log^{p-[p]-1}(x)}{x}=0$$
since $\,p-[p]-1<0\,$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 4
} |
Calculate $\lim\limits_{ x\rightarrow 100 }{ \frac { 10-\sqrt { x } }{ x+5 } }$ $$\lim_{ x\rightarrow 100 }{ \frac { 10-\sqrt { x } }{ x+5 } } $$
Could you explain how to do this without using a calculator and using basic rules of finding limits?
Thanks
| I suppose that you asked this question not because it's difficult, but because you don't yet know the rules for handling limits.
First of all you need to know what a limit is, what the indeterminate forms are, why they are indeterminate, what the meaning behind that word is (e.g. $\frac{\infty}{\infty}$), and what to look for when facing a limit. (Here nothing is indeterminate: the denominator is nonzero at $x=100$, so you can simply substitute, getting $\frac{10-\sqrt{100}}{100+5}=0$.) You should start by learning these basic things, and you may also experiment with a computer, plotting a function near the critical values you are interested in. The best way for you is probably to receive an elementary explanation (this is possible), but I don't know which book to recommend for it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Markov and independent random variables This is a part of an exercise in Durrett's probability book.
Consider the Markov chain on $\{1,2,\cdots,N\}$ with $p_{ij}=1/(i-1)$ when $j<i, p_{11}=1$ and $p_{ij}=0$ otherwise. Suppose that we start at point $k$. We let $I_j=1$ if $X_n$ visits $j$. Then $I_1,I_2,\cdots,I_{k-1}$ are independent.
I don't find it obvious that $I_1,\cdots,I_{k-1}$ are independent. It is possible to prove the independence if we calculate all $P(\cap_{j\in J\subset\{1,\cdots,k-1\}}I_j)$, but this work is long and tedious. Since the independence was written as an obvious thing in this exercise, I assume that there is an easier way.
| For any $j$, observe that $X_{3}|X_{2}=j-1,X_{1}=j$ has the same distribution as $X_{2}|X_{2} \neq j-1, X_{1}=j$. Since $X_{2}=j-1$ iff $I_{j-1}=1$, by Markovianity conclude that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=j$.
Let's prove by induction that $I_{j-1}$ independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=k$.
I) $j=k$ follows straight from the first paragraph.
II) Now assume $I_{a-1}$ independent of $(I_{a-2},\ldots,I_{1})$ for all $a \geq j+1$. Thus, $(I_{k-1},\ldots,I_{j})$ is independent of $(I_{j-1},\ldots,I_{1})$. Hence, in order to prove that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ we can condition on $(I_{k-1}=1,\ldots,I_{j}=1)$. This is the same as conditioning on $(X_{2}=k-1,\ldots,X_{k-j+1}=j)$. By Markovianity and temporal homogeneity, $(X_{k-j+2}^{\infty}|X_{k-j+1}=j,\ldots,X_{1}=k)$ is identically distributed to $(X_{2}^{\infty}|X_{1}=j)$. Using the first paragraph, we know that $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=j$. Hence, by the equality of distributions, $I_{j-1}$ is independent of $(I_{j-2},\ldots,I_{1})$ given that $X_{1}=k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Difference between power law distribution and exponential decay This is probably a silly one, I've read in Wikipedia about power law and exponential decay. I really don't see any difference between them. For example, if I have a histogram or a plot that looks like the one in the Power law article, which is the same as the one for $e^{-x}$, how should I refer to it?
| $$
\begin{array}{rl}
\text{power law:} & y = x^{(\text{constant})}\\
\text{exponential:} & y = (\text{constant})^x
\end{array}
$$
That's the difference.
As for "looking the same", they're pretty different: Both are positive and go asymptotically to $0$, but with, for example $y=(1/2)^x$, the value of $y$ actually cuts in half every time $x$ increases by $1$, whereas, with $y = x^{-2}$, notice what happens as $x$ increases from $1\text{ million}$ to $1\text{ million}+1$. The amount by which $y$ gets multiplied is barely less than $1$, and if you put "billion" in place of "million", then it's even closer to $1$. With the exponential function, it always gets multiplied by $1/2$ no matter how big $x$ gets.
Also, notice that with the exponential probability distribution, you have the property of memorylessness.
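The "multiplier per unit step" contrast above is easy to see numerically (a small illustration; the helper name is mine):

```python
def step_ratio(f, x):
    # factor by which f changes when x increases by 1
    return f(x + 1) / f(x)

exponential = lambda x: 0.5 ** x
power_law = lambda x: x ** -2.0

for x in (1, 10, 100, 1000):
    print(x, step_ratio(exponential, x), step_ratio(power_law, x))
# exponential: always exactly 0.5; power law: (x/(x+1))^2 creeps up toward 1
```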
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 5,
"answer_id": 3
} |
Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$ I want to expand and test this $\{(n^3+1)^{1/3} - n\}$ for convergence/divergence.
The edited version is: Test for convergence $\sum_{n=1}^{\infty}\{(n^3+1)^{1/3} - n\}$
| Let $y=1/n$; then $\lim_{n\to \infty}{(n^3+1)}^{1/3}-n=\lim_{y\to0^+}\frac{(1+y^3)^{1/3}-1}{y}$. Using L'Hopital's Rule, this limit evaluates to $0$. Hence, the terms converge to $0$.
You can also see that as $n$ increases, the importance of the $1$ in the expression ${(n^3+1)}^{1/3}-n$ decreases and $(n^3+1)^{1/3}$ approaches $n$. Hence, the expression converges to $0$ (as verified by the limit).
For the series part, as @siminore pointed out, the difference ${(n^3+1)}^{1/3}-n$ is of the order of $1/n^2$; therefore the sum is of the order of $\sum_{1}^{\infty}1/n^2 = {\pi}^2/6$. Thus the series is bounded and hence converges.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Finding a point having the radius, chord length and another point Me and a friend have been trying to find a way to get the position of a second point (B on the picture) having the first point (A), the length of the chord (d) and the radius (r).
It must be possible, right? We know the solution will be two possible points, but since it's a semicircle we also know the x coordinate of B will have to be lower than A's, and the y coordinate must always be greater than 0.
Think you can help?
Here's a picture to illustrate the example:
Thanks in advance!
| There are two variables $(a,b)$ the coordinates of B. Since B lies on the circle,it satisfies the equation of the circle. Also,the distance of $B$ from $A$ is $d$.You can apply distance formula to get an equation from this condition.Now you have two variables and two equations from two conditions,you can solve it now yourself.
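A sketch of exactly that setup (assuming, as the picture suggests, a circle centred at the origin; the numbers in the call are placeholders):

```python
from sympy import symbols, solve

def second_point(ax, ay, r, d):
    a, b = symbols('a b', real=True)
    # B lies on the circle, and is at distance d from A
    return solve([a**2 + b**2 - r**2,
                  (a - ax)**2 + (b - ay)**2 - d**2], [a, b])

print(second_point(5, 0, 5, 5))   # two symmetric points; keep the one with b > 0
```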
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Implicit Differentiation $y''$ I'm trying to find $y''$ by implicit differentiation of this problem: $4x^2 + y^2 = 3$
So far, I was able to get $y'$ which is $\frac{-4x}{y}$
How do I go about getting $y''$? I am kind of lost on that part.
| You have $$y'=-\frac{4x}y\;.$$ Differentiate both sides with respect to $x$:
$$y''=-\frac{4y-4xy'}{y^2}=\frac{4xy'-4y}{y^2}\;.$$
Finally, substitute the known value of $y'$:
$$y''=\frac4{y^2}\left(x\left(-\frac{4x}y\right)-y\right)=-\frac4{y^2}\cdot\frac{4x^2+y^2}y=-\frac{4(4x^2+y^2)}{y^3}\;.$$
But from the original equation we know that $4x^2+y^2=3$, so in the end we have
$$y''=-\frac{12}{y^3}\;.$$
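As a sanity check, SymPy's implicit-differentiation helper reproduces both derivatives (a sketch):

```python
from sympy import symbols, simplify
from sympy.geometry.util import idiff

x, y = symbols('x y')
eq = 4*x**2 + y**2 - 3

print(idiff(eq, y, x))               # -4*x/y
print(simplify(idiff(eq, y, x, 2)))  # equivalent to -4*(4*x**2 + y**2)/y**3
# substituting the constraint 4*x**2 + y**2 = 3 then gives -12/y**3
```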
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Derivative of $f(x)= (\sin x)^{\ln x}$ I am just wondering if I went about solving this correctly.
I am trying to find the derivative of $f(x)= (\sin x)^{\ln x}$
Here is what I got below.
$$f(x)= (\sin x)^{\ln x}$$
$$f'(x)=\ln x(\sin x) \Rightarrow f'(x)=\frac{1}{x}\cdot\sin x + \ln x \cdot \cos x$$
Would that be the correct solution?
| It's instructive to look at this particular logarithmic-differentiation situation generally:
$$\begin{align}
y&=u^{v}\\[0.5em]
\implies \qquad \ln y &= v \ln u & \text{take logarithm of both sides}\\[0.5em]
\implies \qquad \frac{y^{\prime}}{y} &= v \cdot \frac{u^{\prime}}{u}+v^{\prime}\ln u & \text{differentiate}\\
\implies \qquad y^{\prime} &= u^{v} \left( v \frac{u^{\prime}}{u} + v' \ln u \right) & \text{multiply through by $y$, which is $u^{v}$} \\
&= v \; u^{v-1} u^{\prime} + u^{v} \ln u \; v^{\prime} & \text{expand}
\end{align}$$
Some (most?) people don't bother with the "expand" step, because right before that point the exercise is over anyway and they just want to move on. (Plus, generally, we like to see things factored.) Even so, look closely at the parts you get when you do bother:
$$\begin{align}
v \; u^{v-1} \; u^{\prime} &\qquad \text{is the result you'd expect from the Power Rule if $v$ were constant.} \\[0.5em]
u^{v} \ln u \; v^{\prime} &\qquad \text{is the result you'd expect from the Exponential Rule if $u$ were constant.}
\end{align}$$
So, there's actually a new Rule here: the Function-to-a-Function Rule is the "sum" of the Power Rule and Exponential Rule!
Knowing FtaF means you can skip the logarithmic differentiation steps. For example, your example:
$$\begin{align}
\left( \left(\sin x\right)^{\ln x} \right)^{\prime} &= \underbrace{\ln x \; \left( \sin x \right)^{\ln x - 1} \cos x}_{\text{Power Rule}} + \underbrace{\left(\sin x\right)^{\ln x} \; \ln \sin x \; \frac{1}{x}}_{\text{Exponential Rule}}
\end{align}$$
As I say, we generally like things factored, so you might want to manipulate the answer thusly,
$$
\left( \left(\sin x\right)^{\ln x} \right)^{\prime} = \left( \sin x \right)^{\ln x} \left( \frac{\ln x \cos x}{\sin x} + \frac{\ln \sin x}{x} \right) = \left( \sin x \right)^{\ln x} \left( \ln x \cot x + \frac{\ln \sin x}{x} \right)
$$
Another example:
$$\begin{align}
\left( \left(\tan x\right)^{\exp x} \right)^{\prime} &= \underbrace{ \exp x \; \left( \tan x \right)^{\exp x-1} \; \sec^2 x}_{\text{Power Rule}} + \underbrace{ \left(\tan x\right)^{\exp x} \ln \tan x \; \exp x}_{\text{Exponential Rule}} \\
&= \exp x \; \left( \tan x \right)^{\exp x} \left( \frac{\sec^2 x}{\tan x} + \ln \tan x \right) \\
&= \exp x \; \left( \tan x \right)^{\exp x} \left( \sec x \; \csc x + \ln \tan x \right)
\end{align}$$
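Both examples can be verified symbolically; here is a sketch for the first one (assuming $0<x<\pi$ so that $\sin x>0$ and the logarithms are real):

```python
from sympy import symbols, sin, cos, log, diff, simplify

x = symbols('x', positive=True)
f = sin(x)**log(x)

# the FtaF answer: Power Rule part + Exponential Rule part
ftaf = log(x)*sin(x)**(log(x) - 1)*cos(x) + sin(x)**log(x)*log(sin(x))/x
print(simplify(diff(f, x) - ftaf))   # 0
```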
Note. Be careful invoking FtaF in a class --especially on a test-- where the instructor expects (demands) that you go through the log-diff steps every time. (Of course, learning and practicing those steps is worthwhile, because they apply to situations beyond FtaF.) On the other hand, if you explain FtaF to the class, you could be a hero for saving everyone a lot of effort with function-to-a-function derivatives.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 0
} |