Finding a generating function of the sequence $(2,2,2,3,3,2,2,2,3,3...)$ I need to find generating function of sequence: $(2,2,2,3,3,2,2,2,3,3,2,2,2,3,3...)$
My attempt so far:
I tried splitting the sequence into 2 new sequences: $(2,2,2,0,0,2,2,2,0,0,...)$ and $(0,0,0,3,3,0,0,0,3,3...).$
Then I found that the generating function for the first sequence is $(2+2x+2x^2)/(1-x^5)$, and for the second sequence $(3x^3+3x^4)/(1-x^5)$.
Then the answer is just the sum of these 2 sequences. Is this good?
| It's periodic with period $5$, so start with
$$\frac{1}{1-x^5}$$
and the numerator is $2,2,2,3,3$, so
$$\frac{2+2x+2x^2+3x^3+3x^4}{1-x^5}$$
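As a quick sanity check, one can expand this rational function numerically; `gf_coeffs` below is a small helper written for this note (not part of the original answer), assuming a denominator of the form $1-x^p$.

```python
def gf_coeffs(numer, p, n):
    # Expand numer(x) / (1 - x^p) = numer(x) * (1 + x^p + x^{2p} + ...)
    # and return the first n series coefficients.
    out = [0] * n
    for i, c in enumerate(numer):
        for k in range(i, n, p):
            out[k] += c
    return out

# numerator 2 + 2x + 2x^2 + 3x^3 + 3x^4, denominator 1 - x^5
print(gf_coeffs([2, 2, 2, 3, 3], 5, 12))
```

The output repeats the block $2,2,2,3,3$, matching the target sequence.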
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4622540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Do 1.$\sum_{n=2}^{\infty} \frac{\cos{n}}{n^\frac{2}{3} + (-1)^n}$ 2. $\sum_{n=1}^\infty(e^{\cos n/n}−\cos\frac{1}n)$ converge? Does
$$ 1. \hspace{8mm} \sum_{n=2}^{\infty} \frac{\cos{n}}{n^\frac{2}{3} + (-1)^n} $$
and
$$ 2. \hspace{8mm} \sum_{n=1}^{\infty} \left(e^\frac{\cos{n}}{n} - \cos\frac{1}{n}\right) $$
converge? In $\sum_{n=2}^{\infty} \frac{\cos{n}}{n^\frac{2}{3} + (-1)^n}$, I split the general term into two factors: $\cos n$, whose partial sums are bounded, and $\frac{1}{n^\frac{2}{3} + (-1)^n}$, where $(-1)^n$ is negligible next to $n^\frac{2}{3}$ for large $n$, so I compare with $\frac{1}{n^\frac{2}{3}}$, which decreases monotonically to $0$.
By Dirichlet's test, the series then converges. Is that right?
But I don't have any ideas about the second series.
| For the first one you have
$$\frac{\cos(n)}{n^{2/3}+(-1)^n} = \frac{\cos(n)}{n^{4/3}-1}\,\left(n^{2/3}-(-1)^n\right) = \frac{\cos(n)}{n^{2/3}-n^{-2/3}} - \frac{(-1)^n\cos(n)}{n^{4/3}-1}$$
and so the first term yields a convergent series by the Dirichlet test, since $\left(n^{2/3}-n^{-2/3}\right)^{-1}$ decreases to $0$. The second term gives an absolutely convergent series, since $|\cos(n)| \le 1$.
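A numeric spot-check of this decomposition (a sketch with hypothetical helper names; note that both resulting terms carry the factor $\cos(n)$):

```python
import math

def lhs(n):
    return math.cos(n) / (n ** (2 / 3) + (-1) ** n)

def split(n):
    # cos(n)/(n^{2/3} - n^{-2/3})  minus  (-1)^n cos(n)/(n^{4/3} - 1)
    return (math.cos(n) / (n ** (2 / 3) - n ** (-2 / 3))
            - (-1) ** n * math.cos(n) / (n ** (4 / 3) - 1))

for n in range(2, 200):
    assert abs(lhs(n) - split(n)) < 1e-9
print("decomposition verified for n < 200")
```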
For the second you could argue as follows:
$$\sum_{n=1}^{\infty} \left(e^\frac{\cos{n}}{n} - \cos\frac{1}{n}\right) = \sum_{n=1}^\infty \left(1 + \frac{\cos(n)}{n} + O(1/n^2) - 1 - O(1/n^2) \right) \\= \sum_{n=1}^\infty \left( \frac{\cos(n)}{n} + O(1/n^2) \right)$$
The $O$-term clearly yields an absolutely convergent series, while the first term is a convergent Fourier series (by Dirichlet's test, since the partial sums of $\cos(n)$ are bounded and $1/n$ decreases to $0$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4625043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Solve the recurrence relation for $T(n) = 2T(n/2) + n − 1, n > 1, T(1) = 0, n = 2^k$ Even though it's pretty similar to other questions, I'm confusing myself with the answer because I always end up with:
$$
T(2^k)=\log_2n-\log_2n
$$
which doesn't seem right at all. Plus, I'm not sure whether the final big theta notation would be $\Theta(\log_2n)$ or $\Theta(1)$.
I don't know what else to do. Any help would be appreciated.
EDIT: Here's the obvious extremely wrong attempt:
\begin{align*}
A(2^k )&=2A(2^{k-1} )+2^k-1\\
&=(2A(2^{k-2} )+2^k-1)+2^k-1
&&=2A(2^{k-2} )+2(2^k)-2\\
&=(2A(2^{k-3} )+2^k-1)+2(2^k)-2
&&=2A(2^{k-3} )+3(2^k)-3
\end{align*}
$$
A(2^k)=2A(2^{k-i} )+i(2^k)-i
$$
sub in $i = k$:
\begin{align*}
A(2^k)&=2A(2^{k-k} )+k(2^k)-k \\
&=2A(2^0 )+k(2^k)-k && \\
&=\log_2n-\log_2n\\
\end{align*}
| You need to be more careful about your substitutions and expanding. Observe that:
\begin{align*}
A(2^k)
&= 2A(2^{k-1}) + 2^k - 1 = 2 [ 2A(2^{k-2}) + 2^{k-1} - 1] + 2^k - 1 \\
&= 2^2A(2^{k-2}) + 2(2^k) - (1 + 2) = 2^2[ 2A(2^{k-3}) + 2^{k-2} - 1] + 2(2^k) - (1 + 2) \\
&= 2^3A(2^{k-3}) + 3(2^k) - (1 + 2 + 2^2) = 2^3[ 2A(2^{k-4}) + 2^{k-3} - 1] + 3(2^k) - (1 + 2 + 2^2) \\
&= 2^4A(2^{k-4}) + 4(2^k) - (1 + 2 + 2^2 + 2^3)
\end{align*}
Do you see the pattern? Can you take it from here?
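Following the hint, the pattern is $A(2^k) = 2^iA(2^{k-i}) + i(2^k) - (2^i-1)$; at $i=k$ this gives $A(2^k) = k\,2^k - 2^k + 1$, i.e. $T(n) = n\log_2 n - n + 1 = \Theta(n\log n)$. A short check of that closed form against the recurrence (my own sketch, not part of the answer):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = 0, T(n) = 2 T(n/2) + n - 1 for n = 2^k
    return 0 if n == 1 else 2 * T(n // 2) + n - 1

for k in range(15):
    n = 2 ** k
    assert T(n) == n * k - n + 1  # closed form n*log2(n) - n + 1
print("T(n) = n log2(n) - n + 1 for n = 2^k")
```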
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4625310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove $d \mid a^2b+1$ and $d \mid b^2a+1 \implies d \mid a^3+1$ and $d \mid b^3+1$ Prove for all $a, b, d$ integers greater than zero, that when $d$ is a divisor of both $a^2b+1$ and $b^2a+1$ then $d$ is also a divisor of both $a^3+1$ and $b^3+1$
There is a trivial case when $a = b$, but I found that there are more solutions, for example: $a = 1, b = 3, d = 2$. I tried to use equation $a^3+1=(a+1)(a^2-a+1)$ but I am not sure how to progress.
| Since $d \mid a^2\ b+1$ and $d \mid a\ b^2+1$ then $d \mid (1-b\ a^2)(a^2\ b+1)+a^3(a\ b^2+1) = a^3+1$
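The identity behind this answer, $(1-a^2b)(a^2b+1)+a^3(ab^2+1)=a^3+1$, and the divisibility claim itself can both be checked by brute force (a sketch, not part of the original answer):

```python
for a in range(1, 30):
    for b in range(1, 30):
        # the algebraic identity used in the answer
        assert (1 - b * a * a) * (a * a * b + 1) + a ** 3 * (a * b * b + 1) == a ** 3 + 1
        for d in range(1, 50):
            if (a * a * b + 1) % d == 0 and (b * b * a + 1) % d == 0:
                assert (a ** 3 + 1) % d == 0 and (b ** 3 + 1) % d == 0
print("claim verified for a, b < 30 and d < 50")
```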
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4626166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are there any ways to convert inverse trigonometric values to radicals? When we solve a cubic equation $ax^3+bx^2+cx+d=0$, the roots are supposed to be expressible in radicals, real or complex. However, if the discriminant is less than 0, the solution usually ends up with roots represented by inverse trigonometric functions. For example, the three roots of $x^3−4x+1=0$ are all in trigonometric form. The equation $x^3−2x+1=0$ has one rational root and two other roots that come out in radical form if solved by factorization, or as inverse trigonometric values if solved by Cardano's trigonometric method. Comparing their decimals shows that the roots obtained by the two methods are equal. My question is: are there any general ways to convert these inverse trigonometric values to radicals?
| For a cubic equation whose discriminant is less than zero, Cardano's method expresses the roots as trigonometric functions of angles that involve inverse trigonometric values. For example, for $x^3−2x+1=0$:
\begin{cases} x_1=2\sqrt{\dfrac{2}{3}} \cos \bigg[ \dfrac{1}{3}\cdot \arccos\big(-\dfrac{3}{4}\sqrt{\dfrac{3}{2}}\big)\bigg] \\ x_2=2\sqrt{\dfrac{2}{3}} \cos \bigg[ \dfrac{1}{3}\cdot \arccos\big(-\dfrac{3}{4}\sqrt{\dfrac{3}{2}}\big)+\dfrac{2\pi}{3}\bigg] \\ x_3=2\sqrt{\dfrac{2}{3}} \cos \bigg[ \dfrac{1}{3}\cdot \arccos\big(-\dfrac{3}{4}\sqrt{\dfrac{3}{2}}\big)+\dfrac{4\pi}{3} \bigg] \end{cases}
However, the roots may be given in a very clean radical form if solved by factorization method.
\begin{cases} x_1 = 1\\x_2 =-\dfrac{1}{2}-\dfrac{\sqrt{5}}{2} \\ x_3=-\dfrac{1}{2}+\dfrac{\sqrt{5}}{2} \end{cases}
If one of the roots is rational, the trigonometric forms could be converted to radical form.
In this case, given $\cosθ = -\dfrac{3}{4}\sqrt{\dfrac{3}{2}}$
, then by using the identity $4\cos^3θ-3\cos θ-\cos3θ =0$, $\cos\dfrac{θ}{3}=\dfrac{1}{2}\sqrt{\dfrac{3}{2}}$
Then $x_2 = 2\sqrt{\dfrac{2}{3}} \cos \bigg[ \dfrac{θ}{3} +\dfrac{2\pi}{3}\bigg] =-\dfrac{1}{2}-\dfrac{\sqrt{5}}{2}$
For those cubic equations without rational root, for example, $ x^3−4x+1=0$, the roots look like
\begin{cases} x_1=4\sqrt{\dfrac{1}{3}}\cos \bigg[\dfrac{1}{3}\cdot \arccos\big(-\dfrac{3}{16}\sqrt{3}\big)\bigg] \\ x_2=4\sqrt{\dfrac{1}{3}} \cos \bigg[ \dfrac{1}{3}\cdot \arccos\big(-\dfrac{3}{16}\sqrt{3}\big)+\dfrac{2\pi}{3}\bigg] \\ x_3=4\sqrt{\dfrac{1}{3}} \cos \bigg[ \dfrac{1}{3}\cdot \arccos\big(-\dfrac{3}{16}\sqrt{3}\big)+\dfrac{4\pi}{3} \bigg] \end{cases}
and for this broad class of cases the question remains unresolved.
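For the worked example one can at least check numerically that the trigonometric and radical forms agree (a sketch; the factorization $x^3-2x+1=(x-1)(x^2+x-1)$ supplies the radical roots):

```python
import math

theta = math.acos(-(3 / 4) * math.sqrt(3 / 2))
trig = sorted(2 * math.sqrt(2 / 3) * math.cos(theta / 3 + 2 * math.pi * j / 3)
              for j in range(3))
# roots of (x - 1)(x^2 + x - 1) = x^3 - 2x + 1
radical = sorted([1.0, (-1 - math.sqrt(5)) / 2, (-1 + math.sqrt(5)) / 2])

for t, r in zip(trig, radical):
    assert abs(t - r) < 1e-9
print("trigonometric and radical roots agree")
```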
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4627428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Min-Max points with lagrange multipliers "Find the min and max points of the function
$$f(x) = x^ 2 +yz-5$$
on the sphere
$$x^2+y^2+z^2=1$$
using Lagrange multipliers".
I'm having problem with equations while finding the points:
$$\nabla f(x) = \nabla g(x) \lambda$$
$$ 2x = 2x \lambda $$
$$z = 2y \lambda $$
$$y = 2z \lambda $$
$$ x^2+y^2+z^2-1 = 0 $$
| Then, if $\lambda=0$, the stationarity equations force $(x,y,z)=(0,0,0)$, contradicting $x^2+y^2+z^2-1=0$; so $\lambda\not=0$. If $x\not=0$, then $2x=2x\lambda$ gives $\lambda=1$, and then $z=2y=2(2z)$ gives $z=0$ while $y=2z=2(2y)$ gives $y=0$, so $(x,y,z,\lambda)=(\pm 1,0,0,1)$. If instead $x=0$, then $z=2y\lambda=2(2z\lambda)\lambda$ gives $4\lambda^2=1$, i.e. $\lambda=\pm \frac{1}{2}$. If $\lambda =-\frac{1}{2}$ we get $(x,y,z,\lambda)=(0,\pm \frac{1}{\sqrt{2}}, \mp \frac{1}{\sqrt{2}}, -\frac{1}{2})$; if $\lambda=\frac{1}{2}$ we get $(x,y,z,\lambda)=(0,\pm\frac{1}{\sqrt{2}},\pm\frac{1}{\sqrt{2}},\frac{1}{2})$. Thus, critical points are given by
$\boxed{(\pm 1,0,0)}$, $\boxed{(0,\pm\frac{1}{\sqrt{2}},\pm\frac{1}{\sqrt{2}})}$ and $\boxed{(0,\pm\frac{1}{\sqrt{2}},\mp\frac{1}{\sqrt{2}})}$. Then we evaluate $f(x,y,z)=x^2+yz-5$ at these points in order to find the extreme values.
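Evaluating $f$ at the candidate points (my own sketch of the final step, not from the answer):

```python
import math

def f(x, y, z):
    return x ** 2 + y * z - 5

s = 1 / math.sqrt(2)
points = [(1, 0, 0), (-1, 0, 0), (0, s, s), (0, -s, -s), (0, s, -s), (0, -s, s)]

for p in points:  # all candidates lie on the unit sphere
    assert abs(sum(c * c for c in p) - 1) < 1e-12

vals = [f(*p) for p in points]
# maximum -4 at (±1,0,0), minimum -11/2 at (0, ±1/√2, ∓1/√2)
print(max(vals), min(vals))
```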
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4628122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show that $\ln{x}=2\sum_{n=0}^{\infty}\frac{1}{(2n+1)}\left(\frac{x-1}{x+1}\right)^{2n+1}$?
How do I show that $\displaystyle\ln{x}=2\sum_{n=0}^{\infty}\frac{1}{(2n+1)}\left(\frac{x-1}{x+1}\right)^{2n+1},\ x>0$?
I have tried using the Maclaurin series of $\ln(1+x)$ and replacing $x$ with $\dfrac{x-1}{x+1}$ in it, but that didn't work.
| Simplify the expression inside the sum using an integral of a much more elementary function in the following way:
\begin{align*}
2\sum_{n=0}^{\infty}\frac 1{2n+1}\left(\frac{x-1}{x+1}\right)^{2n+1}&=
2\sum_{n=0}^{\infty}\int_0^{\frac{x-1}{x+1}}y^{2n}\,\mathrm dy=
2\int_0^{\frac{x-1}{x+1}}\sum_{n=0}^{\infty}y^{2n}\,\mathrm dy=
2\int_0^{\frac{x-1}{x+1}}\frac{\mathrm dy}{1-y^2}\\[10pt]
&=\left.\ln\left(\frac{1+y}{1-y}\right)\right|_{y=0}^{y=\frac{x-1}{x+1}}=\ln x
\end{align*}
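A quick numeric check of the resulting identity (a sketch; `log_series` is a name made up here):

```python
import math

def log_series(x, terms=100):
    t = (x - 1) / (x + 1)
    # 2 * sum_{n>=0} t^(2n+1) / (2n+1); |t| < 1 for every x > 0
    return 2 * sum(t ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

for x in [0.25, 0.5, 1.0, 2.0, 10.0]:
    assert abs(log_series(x) - math.log(x)) < 1e-12
print("series matches ln(x) for sample x > 0")
```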
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4628541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Turn a real number (with a complex closed form) into its trigonometric form This post outlines 'fake' complex numbers: real numbers with a complex closed form, which usually come from the roots of unfactorable cubics (the case I need right now) or from things like $i^i = e^{-\frac{\pi}{2}}$. In his own answer the author gives the example $\sqrt[3]{1+i \sqrt{7}}+\sqrt[3]{1-i \sqrt{7}}$, which at first glance doesn't look like it could be simplified any further, then states:
Using Euler's formula we can find a trigonometric representation of this number. $2 \sqrt{2} \cos{\left(\frac{\tan^{-1}{\left(\sqrt{7}\right)}}{3}\right)}$
Which when put into WolframAlpha is confirmed, they are both approximately equal to $2.602$
How did he do that? Euler's formula, $e^{ix} = \cos(x) + i\sin(x)$, doesn't seem relevant: the example has no clear $a+ib$ form (and it shouldn't, because it is a real number, so $b$ would just be $0$ and $a$ would be the answer anyway), and there is an inverse tangent in the expression. Is that somehow covered by Euler's formula too? How does he just go from $A$ to $B$ like it's common knowledge?
Question applies to the example but it would be nice to have further advice on how to turn other fake complex numbers into real forms like this without WolframAlpha.
| Notice that there is a clearly written complex number inside each of the cube roots, so we can express those in polar form via Euler's formula:
$$\begin{eqnarray} 1 + i\sqrt{7} & = & re^{i \theta} \\
& = & r \cos \theta + ir \sin \theta \\
r \cos \theta & = & 1 \\
r \sin \theta & = & \sqrt{7} \\
r^2(\cos^2 \theta + \sin^2 \theta) & = & 1^2 + \sqrt{7}^2 \\
r^2 & = & 8 \\
\tan \theta & = & \frac{\sin \theta}{\cos \theta} \\
& = & \frac{r \sin \theta}{r \cos \theta} \\
& = & \frac{\sqrt{7}}{1} \\
& = & \sqrt{7}\end{eqnarray}$$
So $1 + i \sqrt{7} = \sqrt{8} e^{i \tan^{-1} \sqrt{7}}$, and similarly we can find $1 - i \sqrt{7} = \sqrt{8} e^{i \tan^{-1} -\sqrt{7}} = \sqrt{8}e^{-i \tan^{-1} \sqrt{7}}$. We could then just say $1 + i\sqrt{7} = \sqrt{8} e^{i \theta}$ and $1 - i\sqrt{7} = \sqrt{8}e^{-i\theta}$, where $\tan \theta = \sqrt{7}$.
Then when we take the cube roots (presumably taking the principal cube root, since technically there are three choices), we have $\sqrt[3]{1 + i \sqrt{7}} = \sqrt[3] {\sqrt{8} e^{i \theta}} = \sqrt{2} e^{i \frac{\theta}{3}}$, by applying exponent rules. Likewise, $\sqrt[3]{1 - i \sqrt{7}} = \sqrt{2} e^{-i \frac{\theta}{3}}$.
Finally, to get to the final result, we can use the complex definition of cosine (or just expand out the exponentials via Euler's formula in the reverse direction and note that the imaginary parts cancel), giving the required expression.
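The whole chain can be verified numerically with principal cube roots (a sketch, not part of the original answer):

```python
import math

a = (1 + 1j * math.sqrt(7)) ** (1 / 3)  # principal cube roots
b = (1 - 1j * math.sqrt(7)) ** (1 / 3)
s = a + b

closed = 2 * math.sqrt(2) * math.cos(math.atan(math.sqrt(7)) / 3)
assert abs(s.imag) < 1e-12          # the sum is real
assert abs(s.real - closed) < 1e-12
print(closed)  # ≈ 2.602, matching the approximation in the question
```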
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4630629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Convergence of sum with binomial coefficient I found this exercise in a book and I don't know how to start:
With the help of probabilistic methods, prove that
$$
\lim_{n\rightarrow\infty} \frac{1}{4^n} \sum_{k=0}^n \binom{2n}{k} = 1/2
$$
One cannot use the binomial theorem because the sum only goes to $n$ and not to $2n$. Besides that, am I supposed to use one of the limit theorems from probability theory? If so, which one, and how should I approach this exercise?
| We obtain
\begin{align*}
\color{blue}{\sum_{k=0}^n\binom{2n}{k}}&=\frac{1}{2}\sum_{k=0}^{n-1}\binom{2n}{k}+\binom{2n}{n}+\frac{1}{2}\sum_{k=n+1}^{2n}\binom{2n}{k}\tag{1}\\
&=\frac{1}{2}\sum_{k=0}^{2n}\binom{2n}{k}+\frac{1}{2}\binom{2n}{n}\\
&\,\,\color{blue}{=\frac{1}{2}2^{2n}+\frac{1}{2}\binom{2n}{n}}\tag{2}
\end{align*}
In (1) we use the symmetry $\binom{2n}{k}=\binom{2n}{2n-k}$. We use Stirling's approximation formula
\begin{align*}
n!\sim \left(\frac{n}{e}\right)^n\sqrt{2\pi n}
\end{align*}
and get
\begin{align*}
\color{blue}{\binom{2n}{n}}=\frac{(2n)!}{n!n!}
&\sim\left(\frac{2n}{e}\right)^{2n}\sqrt{4\pi n}\left(\frac{e}{n}\right)^{2n}\frac{1}{2\pi n}\\
&\,\,\color{blue}{\sim 4^n\frac{1}{\sqrt{\pi n}}}\tag{3}
\end{align*}
We conclude from (2) and (3)
\begin{align*}
\color{blue}{\frac{1}{4^n}}\color{blue}{\sum_{k=0}^n\binom{2n}{k}}
&=\frac{1}{4^n}\left(\frac{1}{2}2^{2n}+\frac{1}{2}\binom{2n}{n}\right)\\
&=\frac{1}{2}+\frac{1}{2}\,\frac{1}{4^n}\binom{2n}{n}\\
&\sim\frac{1}{2}+\frac{1}{2}\frac{1}{\sqrt{\pi n}}\color{blue}{\underset{n\to\infty }{\longrightarrow} \frac{1}{2}}
\end{align*}
and the claim follows.
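The convergence and the $\frac{1}{2\sqrt{\pi n}}$ correction term can be observed numerically with exact arithmetic (a sketch; the helper name is mine):

```python
from fractions import Fraction
from math import comb, sqrt, pi

def ratio(n):
    # (1/4^n) * sum_{k=0}^{n} C(2n, k), computed exactly
    return Fraction(sum(comb(2 * n, k) for k in range(n + 1)), 4 ** n)

for n in [10, 100, 1000]:
    excess = float(ratio(n)) - 0.5
    # the excess over 1/2 is (1/2) C(2n,n)/4^n ~ 1/(2 sqrt(pi n))
    assert abs(excess - 1 / (2 * sqrt(pi * n))) < 0.1 / n
print(float(ratio(1000)))  # ≈ 0.509, creeping toward 1/2
```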
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4631006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Solve $\dfrac{4}{x+3}-\dfrac{2}{x+1}=\dfrac{5}{2x+6}-\dfrac{2\frac{1}{2}}{2x+2}$ Solve $\dfrac{4}{x+3}-\dfrac{2}{x+1}=\dfrac{5}{2x+6}-\dfrac{2\frac{1}{2}}{2x+2}$
$\Rightarrow \dfrac{4}{x+3}-\dfrac{2}{x+1}=\dfrac{5}{2x+6}-\dfrac{5(2x+2)}{2} \ \ \ ...(1)$
$\Rightarrow \dfrac{4}{x+3}-\dfrac{2}{x+1}=\dfrac{5}{2(x+3)}-5(x+1) \ \ \ ...(2)$
$\Rightarrow \dfrac{4(x+1)-2(x+3)}{(x+1)(x+3)}=\dfrac{5-10(x+1)(x+3)}{2(x+3)} \ \ \ ...(3)$
$\Rightarrow \dfrac{8(x+1)-4(x+3)}{(x+1)}=5-10(x+1)(x+3) \ \ \ ...(4)$
So far computationally all looks correct, however if I keep going obviously on the right hand side there will be a $x^3$ term and then it becomes quite complicated and tedious. I'm wondering if I missed a simplification somewhere or there's something else I can do to avoid a third power of $x$ showing up. Thanks for the help.
| Solution:
Notice that $2x+6=2(x+3)\ne0$ and $2x+2=2(x+1)\ne 0$, so we need $x\ne-3$ and $x\ne -1$, and
\begin{align}
&\frac{4}{x+3}-\frac{2}{x+1}=\frac{5}{2x+6}-\frac{\frac{5}{2}}{2x+2}\\
\Leftrightarrow~~~~&\frac{2\times4}{2(x+3)}-\frac{2\times2}{2(x+1)}=\frac{5}{2x+6}-\frac{\frac{5}{2}}{2x+2}\\
\Leftrightarrow~~~~&\color{red}{\frac{8}{2x+6}}-\color{blue}{\frac{4}{2x+2}}=\color{red}{\frac{5}{2x+6}}-\color{blue}{\frac{\frac{5}{2}}{2x+2}}\\
\Leftrightarrow~~~~&\color{red}{\frac{8}{2x+6}}-\color{red}{\frac{5}{2x+6}}=\color{blue}{\frac{4}{2x+2}}-\color{blue}{\frac{\frac{5}{2}}{2x+2}}\\
\Leftrightarrow~~~~&\color{red}{\frac{3}{2x+6}}=\color{blue}{\frac{\frac{3}{2}}{2x+2}}\\
\Leftrightarrow~~~~&3(2x+2)=\frac{3}{2}(2x+6)\\
\Leftrightarrow~~~~&6x+6=3x+9\\
\Leftrightarrow~~~~&3x=3\\
\Leftrightarrow~~~~&x=1.
\end{align}
This completes the solution.
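As a sanity check of the final value (a sketch using exact rational arithmetic; the helper names are mine):

```python
from fractions import Fraction

def lhs(x):
    return Fraction(4, x + 3) - Fraction(2, x + 1)

def rhs(x):
    return Fraction(5, 2 * x + 6) - Fraction(5, 2) / (2 * x + 2)

assert lhs(1) == rhs(1) == 0   # x = 1 solves the equation
assert lhs(2) != rhs(2)        # a non-solution, for contrast
print("x = 1 verified")
```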
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4634064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the roots of the equation $(1+\tan^2x)\sin x-\tan^2x+1=0$ which satisfy the inequality $\tan x<0$ Find the roots of the equation $$(1+\tan^2x)\sin x-\tan^2x+1=0$$ which satisfy the inequality $$\tan x<0$$
Should I solve the equation first and then try to find which of the roots satisfy the inequality? Should I use $\tan x$ in the solution itself and obtain only the needed roots?
I wasn't able to see how the inequality can be used beforehand, so I solved the equation. For $x\ne \dfrac{\pi}{2}(2k+1),k\in\mathbb{Z}$, the equation is $$\dfrac{\sin^2x+\cos^2x}{\cos^2x}\sin x-\left(\dfrac{\sin^2x}{\cos^2x}-\dfrac{\cos^2x}{\cos^2x}\right)=0$$ which is equivalent to $$\sin x+\cos2x=0\\\sin x+1-2\sin^2x=0\\2\sin^2x-\sin x-1=0$$ which gives for the sine $-\dfrac12$ or $1$. Then the solutions are $$\begin{cases}x=-\dfrac{\pi}{6}+2k\pi\\x=\dfrac{7\pi}{6}+2k\pi\end{cases}\cup x=\dfrac{\pi}{2}+2k\pi$$ How do I use $\tan x<0$?
| Let $\sin x = s$; the given equation reduces to:
$$2s^2-s-1=0,~(s-1)(2s+1)=0$$
we have, in the first full rotation of the radius vector,
$$s=1\to x= \frac{\pi}{2}$$
$$s=-\frac{1}{2}\to x= \frac{7 \pi}{6},~\frac{11 \pi}{6}$$
The value $x=\frac{\pi}{2}$ is excluded ($\tan x$ is undefined there), and of the remaining two only $x= \dfrac{11 \pi}{6}$ has a negative tangent.
This and its co-terminals $ x= 11 \pi/6 +2 k \pi$ satisfy the given equation.
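Both candidate angles can be checked against the original equation and the sign condition (a sketch):

```python
import math

def g(x):
    # (1 + tan^2 x) sin x - tan^2 x + 1
    t2 = math.tan(x) ** 2
    return (1 + t2) * math.sin(x) - t2 + 1

for k in range(-2, 3):
    x = 11 * math.pi / 6 + 2 * math.pi * k
    assert abs(g(x)) < 1e-9 and math.tan(x) < 0

# the other root from sin x = -1/2 has positive tangent and is rejected
assert abs(g(7 * math.pi / 6)) < 1e-9 and math.tan(7 * math.pi / 6) > 0
print("roots: x = 11*pi/6 + 2k*pi")
```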
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4639727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Show that if $x,y>0$, $\left(\frac{x^3+y^3}{2}\right)^2≥\left(\frac{x^2+y^2}{2}\right)^3$ Through some rearrangement of the inequality and expansion, I have been able to show that the inequality is equivalent to
$$x^6-3x^4y^2+4x^3y^3-3x^2y^4+y^6≥0$$
However, I am not sure how to prove the above or if expansion and rearrangement are even correct steps.
| Use that for positive $a$ and $b$
$$a^2\ge b^3\iff \left(a^2\right)^\frac16\ge \left(b^3\right)^\frac16\iff a^{\frac13}\ge b^{\frac12}$$
then refer to the generalized mean inequality that is
$$\left(\frac{x^3+y^3}{2}\right)^\frac13≥\left(\frac{x^2+y^2}{2}\right)^\frac12\iff \left(\frac{x^3+y^3}{2}\right)^2≥\left(\frac{x^2+y^2}{2}\right)^3$$
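A numeric spot-check of the inequality over random positive pairs (a sketch):

```python
import random

random.seed(1)
for _ in range(10000):
    x = random.uniform(0.01, 100.0)
    y = random.uniform(0.01, 100.0)
    lhs = ((x ** 3 + y ** 3) / 2) ** 2
    rhs = ((x ** 2 + y ** 2) / 2) ** 3
    assert lhs >= rhs * (1 - 1e-12)  # tolerance for float rounding near x = y
print("inequality holds on all samples")
```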
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4641593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Compute the integral $\int_{-\infty}^\infty \frac{\sin(x) (1+\cos(x))}{x(2+\cos(x))} dx$ Background
I recently found out about Lobachevsky's integral formula, so I tried to create a problem on my own for which I'd be able to apply this formula. The problem is presented below.
Problem
Compute the integral $\int_{-\infty}^\infty \frac{\sin(x) (1+\cos(x))}{x(2+\cos(x))} dx$
Attempt
Define $f(x) := \frac{1+\cos(x)}{2+\cos(x)}$, which most definitely is $\pi$-periodic.
The integral is, using our notation above, on the form
$$I = \int_{-\infty}^\infty \frac{\sin(x)}{x}f(x) dx.$$
The integrand is even, so we might as well compute
$$ I = 2 \int_{0}^\infty \frac{\sin(x)}{x}f(x) dx.$$
We will now have to make use of a theorem.
Lobachevsky's integral formula states that if $f(x)$ is a continuous $\pi$-periodic function then we have that $$ \int_0^\infty \frac{\sin(x)}{x}f(x) dx= \int_0^{\pi/2} f(x) dx.$$
Substituting our $f(x)$ yields
$$ \int_0^{\pi/2} \frac{1+\cos(x)}{2+\cos(x)} dx = \pi/2 - \int_0^{\pi/2}\frac{1}{2+\cos(x)}dx $$
where
$$I_2 = \int_0^{\pi/2}\frac{1}{2+\cos(x)}dx = \int_0^{\pi/2}\frac{\sec^2(x/2)}{3+\tan^2(x/2)}dx.$$
Letting $ u = \tan(x/2)/\sqrt{3}$, for which $du = \sec^2(x/2)/(2\sqrt{3})dx$, therefore gives us:
$$ I_2 = \int_0^{1/\sqrt{3}}\frac{2\sqrt{3}}{3u^2+3}\,du = \frac{\pi}{3\sqrt{3}}.$$
Finally we can compute $I$ to
$$I = 2\left(\frac{\pi}{2} - \frac{\pi}{3\sqrt{3}}\right) = \frac{\pi(3\sqrt{3}-2)}{3\sqrt{3}}.$$
I've tried calculating this integral in Desmos where it gives me $0$ when I calculate the integrand on the interval $(-\infty, \infty)$, and something negative for $(0,\infty)$. This contradicts my answer.
I also tried typing it into Wolfram, without success. Can anyone confirm the validity of my result?
| Utilize the series
$$\frac{p\sin x}{1+2p\cos x+p^2}=-\sum_{k=1}^\infty (-p)^{k}\sin kx
$$
with $p=2-\sqrt3$ to express
\begin{align}
\frac{\sin x}{2+\cos x}= -2\sum_{k=1}^\infty (\sqrt3-2)^k\sin kx
\end{align}
Then, integrate
\begin{align}
&\int_{-\infty}^\infty \frac{\sin x}x\frac{1+\cos x}{2+\cos x}dx\\
=& \ \int_{-\infty}^\infty \frac{\sin x}xdx- \int_{-\infty}^\infty \frac{1}x\frac{\sin x}{2+\cos x}dx\\
=&\ \pi +4\sum_{k=1}^\infty (\sqrt3-2)^k\int_0^\infty \frac{\sin kx}xdx\\
=&\ \pi + 2\pi \sum_{k=1}^\infty (\sqrt3-2)^k= \frac\pi{\sqrt3}
\end{align}
where the summation is geometric.
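The series expansion and the closing geometric sum can both be checked numerically (a sketch; $P=\sqrt3-2$, $|P|\approx 0.27$):

```python
import math

P = math.sqrt(3) - 2  # |P| < 1, so the sine series converges geometrically

def series(x, terms=80):
    return -2 * sum(P ** k * math.sin(k * x) for k in range(1, terms + 1))

for x in [0.3, 1.0, 2.5, -1.7]:
    assert abs(series(x) - math.sin(x) / (2 + math.cos(x))) < 1e-12

# the geometric sum in the last line: pi + 2 pi sum_k P^k = pi / sqrt(3)
assert abs(math.pi + 2 * math.pi * P / (1 - P) - math.pi / math.sqrt(3)) < 1e-12
print("series identity and final value confirmed")
```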
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4645216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Relation between full elliptic integrals of the first and third kind I am working on a calculation involving the Ronkin function of a hyperplane in 3-space.
I get a horrible matrix with complete elliptic integrals as entries. A priori I know that the matrix is symmetric, and that gives me a relation between complete elliptic integrals of the first and third kind.
I cannot find transformations in the literature that explain the relation, and I think I need one in order to simplify my matrix.
The relation
With the notation
$\operatorname{K}(k) = \int_0^{\frac{\pi}{2}}\frac{d\varphi}{\sqrt{1-k^2\sin^2\varphi}},$
$\qquad$ $\Pi(\alpha^2,k)=\int_0^{\frac{\pi}{2}}\frac{d\varphi}{(1-\alpha^2\sin^2\varphi)\sqrt{1-k^2\sin^2\varphi}}$
$k^2 = \frac{(1+a+b-c)(1+a-b+c)(1-a+b+c)(-1+a+b+c)}{16abc},\quad a,b,c > 0$
the following is true:
$2\frac{(1+a+b-c)(1-a-b+c)(a-b)}{(a-c)(b-c)}\operatorname{K}(k)+$
$(1-a-b+c)(1+a-b-c)\Pi\left( \frac{(1+a-b+c)(-1+a+b+c)}{4ac},k\right) +$
$\frac{(a+c)(1+b)(1-a-b+c)(-1-a+b+c)}{(a-c)(-1+b)}\Pi\left( \frac{(1+a-b+c)(-1+a+b+c)(a-c)^2}{4ac(1-b)^2},k\right)+$
$(1-a-b+c)(-1+a+b+c)\Pi\left( \frac{(1-a+b+c)(-1+a+b+c)}{4ac},k\right)+$
$\frac{(1+a)(b+c)(-1-a+b+c)(-1+a+b-c)}{(1-a)(c-b)}\Pi\left( \frac{(1-a+b+c)(-1+a+b+c)(b-c)^2}{4ac(1-a)^2},k\right)$
$==0$.
Is there some addition formula or transformation between elliptic integrals of the first and third kind that will explain this?
| Have you tried MGfun from Frédéric Chyzak? It is a (Maple) package to deal with computations in multivariate Ore Algebras. The main application of this is exactly the kind of problem you pose: finding relations between (multivariate) holonomic functions. It is a generalization of WZ theory to the multi-variate case.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/14898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Any sum of 2 dice with equal probability The question is the following: can one create two nonidentical loaded 6-sided dice such that, when one throws both dice and sums their values, the probability of every sum (from 2 to 12) is the same? I say nonidentical because it's easy to verify that with identical loaded dice it's not possible.
Formally: Let's say that $q_{i}$ is the probability that we throw $i$ on the first die and $p_{i}$ is the same for the second die. $p_{i},q_{i} \in [0,1]$ for all $i \in 1\ldots 6$. The question is that with these constraints are there $q_{i}$s and $p_{i}$s that satisfy the following equations:
$ q_{1} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{2} + q_{2} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{3} + q_{2} \cdot p_{2} + q_{3} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{4} + q_{2} \cdot p_{3} + q_{3} \cdot p_{2} + q_{4} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{5} + q_{2} \cdot p_{4} + q_{3} \cdot p_{3} + q_{4} \cdot p_{2} + q_{5} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{6} + q_{2} \cdot p_{5} + q_{3} \cdot p_{4} + q_{4} \cdot p_{3} + q_{5} \cdot p_{2} + q_{6} \cdot p_{1} = \frac{1}{11}$
$ q_{2} \cdot p_{6} + q_{3} \cdot p_{5} + q_{4} \cdot p_{4} + q_{5} \cdot p_{3} + q_{6} \cdot p_{2} = \frac{1}{11}$
$ q_{3} \cdot p_{6} + q_{4} \cdot p_{5} + q_{5} \cdot p_{4} + q_{6} \cdot p_{3} = \frac{1}{11}$
$ q_{4} \cdot p_{6} + q_{5} \cdot p_{5} + q_{6} \cdot p_{4} = \frac{1}{11}$
$ q_{5} \cdot p_{6} + q_{6} \cdot p_{5} = \frac{1}{11}$
$ q_{6} \cdot p_{6} = \frac{1}{11}$
I don't really know how to start with this. Any suggestions are welcome.
| I believe the following punchline to the generating function argument doesn’t depend on whether the number of sides on each die is even or odd. Plug $z = \zeta = \exp 2 \pi i /11$ into the purported equality $1+z+z^2+\dots+z^{10} = p(z) q(z)$ with $p,q$ quintic polynomials with nonnegative coefficients. The LHS vanishes but the RHS can’t since all terms contributing to $p(\zeta)$ lie in the upper half-plane and similarly for $q$.
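The punchline can be illustrated numerically (a sketch; coefficients are bounded away from $0$ only to make the assertion robust):

```python
import cmath, math, random

zeta = cmath.exp(2j * math.pi / 11)

# 1 + z + ... + z^10 vanishes at zeta ...
assert abs(sum(zeta ** k for k in range(11))) < 1e-12

# ... but a quintic with nonnegative coefficients cannot: each term
# c_k * zeta^k has argument 2*pi*k/11 in [0, 10*pi/11], so its imaginary
# part is nonnegative, and strictly positive for k >= 1.
random.seed(0)
for _ in range(1000):
    c = [0.01 + random.random() for _ in range(6)]
    p = sum(ck * zeta ** k for k, ck in enumerate(c))
    assert p.imag > 0 and abs(p) > 1e-6
print("no such quintic vanishes at zeta")
```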
| {
"language": "en",
"url": "https://mathoverflow.net/questions/41310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 1
} |
sums of rational squares It is a well-known fact that if an integer is a sum of two rational squares then it is a sum of two integer squares. For example, Cohen vol. 1 page 314 prop. 5.4.9. Cohen gives a short proof that relies on Hasse-Minkowski, but he attributes the theorem (without reference) to Fermat, who didn't have Hasse-Minkowski available. So my question is, how did Fermat prove this theorem? and part 2 of the question is, what is the simplest direct proof? I googled for this result and found a manuscript with a proof that doesn't use Hasse-Minkowski, but it's not very short.
| In his early days, Fermat realized that a natural number that can
be written as a sum of two rational squares actually is a sum of
two integral squares, but he did not come back to this claim
when eventually he discovered the proof of the Two-Squares
Theorem. The result in question can be proved with the methods
available to Fermat, as I will show here.
Theorem 1 If $n$ is a sum of two rational squares, then
every prime $q = 4n+3$ divides $n$ an even number of times.
Theorem 2 Every prime number $p = 4n+1$ is the sum of two
integral squares.
Now we invoke the product formula for sums of two squares
$$ (a^2 + b^2)(c^2 + d^2) = (ac-bd)^2 + (ad+bc)^2. $$
It implies that every product of prime numbers $4n+1$ and some
power of $2$ can be written as a sum of two integral squares,
and multiplying through by squares of primes $q = 4n+3$, the
claim follows.
Proof of Theorem 1
We will show that primes $p = 4n+3$ do not divide a sum of two
coprime squares.
Assume that $p \mid x^2 + y^2$ with $\gcd(x,y) = 1$.
Reducing $x$ and $y$ modulo $p$ we may assume that
$-p/2 < x, y < p/2$; cancelling possible common divisors
we then have $p \mid x^2 + y^2$ with $\gcd(x,y) = 1$ and
$x^2 + y^2 < \frac12 p^2$.
If $x$ and $y$ are both odd, then the identity
$$ \Big(\frac{x+y}2\Big)^2 + \Big(\frac{x-y}2\Big)^2 = \frac{x^2+y^2}2 $$
allows us to remove any remaining factor of $2$ from the sum, and we
may therefore assume that $a^2 + b^2 = pr$ for some odd number $r < p$.
Since $a$ and $b$ then have different parity, $a^2 + b^2$ must have the
form $4n+1$, and therefore the number $r$ must have the form $4n+2$.
But then $r$ must have at least one prime factor $q \equiv 3 \bmod 4$,
and since $a^2 + b^2 < p^2$, we must have $q \le r < p$.
Thus if $p \equiv 3 \bmod 4$ divides a sum of two coprime squares,
then there must be some prime $q \equiv 3 \bmod 4$ less than $p$ with the
same property. Applying descent we get a contradiction.
Proof of Theorem 2
*The prime $p$ divides a sum of two squares.
For every prime $p = 4n+1$ there is an integer $x$ such that
$p$ divides $x^2+1$.
By Fermat's Theorem, the prime $p$ divides
$$ a^{p-1}-1 = a^{4n}-1 = (a^{2n}-1)(a^{2n}+1). $$
If we can show that there is an integer $a$ for which $p$
does not divide the first factor we are done, because
then $p \mid (a^n)^2+1$. By Euler's criterion it is sufficient
to choose $a$ as a quadratic nonresidue modulo $p$. Equivalently
we may observe that the polynomial $x^{2n}-1$ has at most $2n$
roots modulo $p$.
*The descent. The basic idea is the following: Assume that the prime number
$p = 4n+1$ is not a sum of two squares. Let $x$ be an integer
such that $p \mid x^2 + 1$. Reducing $x$ modulo $p$ shows that
we may assume that $p \mid x^2+1$ for some even integer $x$ with
$0 < x < p$. This implies that $x^2+1 = pm$ for some integer
$m < p$. Fermat then shows that there must be som prime divisor
$q$ of $m$ such that $q$ is not a sum of two squares; since
prime divisors of sums of two coprime squares have the form
$4n+1$, Fermat now has found a prime number $q = 4n+1$ strictly
less than $p$ that is not a sum of two squares. Repeating this
step eventually shows that $p = 5$ is not a sum of two squares,
which is nonsense since $5 = 1^2 + 2^2$.
Assume that $p = 4n+1$ is not a sum of two squares.
We know that $pm = x^2 + 1$ is a sum of two squares for some
odd integer $m < p$. By Theorem 1, $m$ is only divisible
by primes of the form $4k+1$. We now use the following
Lemma Assume that $pm = x^2 + y^2$ for coprime integers $x, y$,
and let $q$ be a prime dividing $m$, say $m = qm_1$. If $q = a^2 + b^2$
is a sum of two squares, then $pm_1 = x_1^2 + y_1^2$ is also a sum of two
squares.
Applying this lemma repeatedly we find that if every $q \mid m$ is a
sum of two squares, then so is $p$ in contradiction to our assumption.
Thus there is a prime $q = 4k+1$, strictly smaller than $p$, which is
not a sum of two squares, and now descent takes over.
It remains to prove the Lemma. To this end, we have to perform
the division $\frac{x^2+y^2}{a^2+b^2}$. By the product formula
$$ (a^2+b^2)(c^2+d^2) = (ac-bd)^2 + (bc+ad)^2 $$
we have to find integers $c, d$ such that $x = ac-bd$ and $y = bc+ad$.
Since $q \mid (a^2 + b^2)$ and $q \mid (x^2 + y^2)$ we also have
$$ q \mid (a^2+b^2)x^2 - (x^2+y^2)b^2
= a^2x^2 - b^2y^2 = (ax-by)(ax+by). $$
Since $q$ is prime, it must divide one of the factors. Replacing $b$
by $-b$ if necessary we may assume that $q$ divides $ax-by$. We know
that $q$ divides $a^2 + b^2$ as well as $pm = x^2 + y^2$, hence $q^2$
divides $(ax - by)^2 + (ay+bx)^2$ by the product formula. Since $q^2$
divides the first summand (because $q \mid ax-by$), it must divide the second as well, and we
have $pm_1 = \big(\frac{ax-by}q\big)^2 + \big(\frac{ay+bx}q\big)^2$.
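Both theorems can be spot-checked by brute force for small primes (a sketch, plainly not Fermat's method):

```python
from math import gcd, isqrt

def two_square(p):
    # representations p = a^2 + b^2 with 0 <= a <= b
    return [(a, isqrt(p - a * a)) for a in range(isqrt(p // 2) + 1)
            if isqrt(p - a * a) ** 2 == p - a * a]

primes = [p for p in range(3, 200)
          if all(p % d for d in range(2, isqrt(p) + 1))]
for p in primes:
    if p % 4 == 1:
        assert two_square(p), p  # Theorem 2
    else:
        # Theorem 1: p = 4n+3 divides no sum of two coprime squares
        for x in range(1, p):
            for y in range(1, p):
                if gcd(x, y) == 1:
                    assert (x * x + y * y) % p != 0
print("both theorems confirmed for primes below 200")
```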
| {
"language": "en",
"url": "https://mathoverflow.net/questions/88539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 6,
"answer_id": 3
} |
Smallest length of {0,1} vectors to satisfy some orthogonality conditions Let $n$ be a positive integer.
The output of the problem is another positive integer $r$ which must be as small as possible.
I want to construct $2n$ binary vectors $x_i\in\{0,1\}^r$ and $y_i\in\{0,1\}^r$, with $i\in\{1,...,n\}$, which must respect the following conditions:
*$x_i^Ty_i=0$ with $i\in\{1,...,n\}$,
*$x_{i+1}^Ty_i=0$ with $i\in\{1,...,n-1\}$, and $x_1^Ty_n=0$ (it is cyclic).
*$x_{j}^Ty_i>0$ with $i\in\{1,...,n-1\}$, $j\in\{1,...,n\}\setminus \{i,i+1\}$.
All the vectors $x_i$ must be distinct and all the vectors $y_i$ must be distinct.
For a given number $n$ of vectors, what is the smallest dimension $r=f(n)$ for which it is possible to construct vectors $x_i,y_i$ respecting the
conditions described above?
An obvious upper bound is $f(n)\leq n$.
Since the $x_i$ must be distinct, a lower bound is $f(n)\geq \lceil \log_2(n)\rceil$.
However, this lower bound seems rather weak.
Example for $n=7$, there is a solution with $r=6$ (but apparently, not with $r=5$):
*
*$x_1=\begin{pmatrix}0&0&1&0&1&1\end{pmatrix}^T$ $\hspace{1cm }$ $y_1=\begin{pmatrix}0&1&0&1&0&0\end{pmatrix}^T$.
*$x_2=\begin{pmatrix}1&0&0&0&1&1\end{pmatrix}^T$ $\hspace{1cm }$ $y_2=\begin{pmatrix}0&1&1&0&0&0\end{pmatrix}^T$.
*$x_3=\begin{pmatrix}1&0&0&1&0&1\end{pmatrix}^T$ $\hspace{1cm }$ $y_3=\begin{pmatrix}0&1&0&0&1&0\end{pmatrix}^T$.
*$x_4=\begin{pmatrix}0&0&1&1&0&1\end{pmatrix}^T$ $\hspace{1cm }$ $y_4=\begin{pmatrix}1&0&0&0&1&0\end{pmatrix}^T$.
*$x_5=\begin{pmatrix}0&1&1&1&0&0\end{pmatrix}^T$ $\hspace{1cm }$ $y_5=\begin{pmatrix}1&0&0&0&0&1\end{pmatrix}^T$.
*$x_6=\begin{pmatrix}0&1&0&1&1&0\end{pmatrix}^T$ $\hspace{1cm }$ $y_6=\begin{pmatrix}1&0&1&0&0&0\end{pmatrix}^T$.
*$x_7=\begin{pmatrix}0&1&0&0&1&1\end{pmatrix}^T$ $\hspace{1cm }$ $y_7=\begin{pmatrix}1&0&0&1&0&0\end{pmatrix}^T$.
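The example above can be verified mechanically; the following checks all three conditions (including the cyclic wrap-around) and the distinctness requirement:

```python
xs = [[0,0,1,0,1,1],[1,0,0,0,1,1],[1,0,0,1,0,1],[0,0,1,1,0,1],
      [0,1,1,1,0,0],[0,1,0,1,1,0],[0,1,0,0,1,1]]
ys = [[0,1,0,1,0,0],[0,1,1,0,0,0],[0,1,0,0,1,0],[1,0,0,0,1,0],
      [1,0,0,0,0,1],[1,0,1,0,0,0],[1,0,0,1,0,0]]
n = 7
dot = lambda u, v: sum(p * q for p, q in zip(u, v))
for i in range(n):
    assert dot(xs[i], ys[i]) == 0                 # x_i . y_i = 0
    assert dot(xs[(i + 1) % n], ys[i]) == 0       # x_{i+1} . y_i = 0 (cyclic)
    for j in range(n):
        if j not in (i, (i + 1) % n):
            assert dot(xs[j], ys[i]) > 0          # every other product is positive
assert len({tuple(v) for v in xs}) == n and len({tuple(v) for v in ys}) == n
```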
| The $\log_2 n$ lower bound is actually within a constant factor of optimal for large $n$.
Now let $x_1, \dots, x_n$ be formed by randomly setting each coordinate of each $x_i$ to $1$ with probability $\frac{1}{3}$ (with the events $x_i(k)=1$ independent for all $1 \leq i \leq n$ and $1 \leq k \leq r$). Let the vectors $y_1, \dots, y_n$ be defined by
$$y_i(k)=\left\{\begin{array}{cc} 1 & \textrm{ if } x_i(k)=x_{i+1}(k)=0 \\
0 & \textrm{ otherwise } \end{array} \right.$$
The first two conditions are immediately satisfied by our definition of $y$. What remains to check is that $x_j^T y_i>0$ for $j \notin \{i, i+1\}$.
Note that for these $(i,j)$ the vectors $x_j$ and $y_i$ are independent. So for any $k$, we have
$$P(x_j(k)=y_i(k)=1)=\frac{1}{3} \left(\frac{2}{3}\right)^2 = \frac{4}{27}$$
Multiplying over all coordinates and taking the union bound over all pairs $(i,j)$, we have that the probability the third condition is violated is at most
$$n^2 \left(\frac{23}{27} \right)^r$$
So if $r=c \log n$ for $c>\log_{27/23} 2 \approx 4.323$ and $n$ is sufficiently large, then the probability some condition is violated tends to $0$, so there's a set of vectors which works.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/228514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Something interesting about the quintic $x^5 + x^4 - 4 x^3 - 3 x^2 + 3 x + 1=0$ and its cousins (Update):
Courtesy of Myerson's and Elkies' answers, we find a second simple cyclic quintic for $\cos\frac{\pi}{p}$ with $p=10m+1$ as,
$$F(z)=z^5 - 10 p z^3 + 20 n^2 p z^2 - 5 p (3 n^4 - 25 n^2 - 625) z + 4 n^2 p(n^4 - 25 n^2 - 125)=0$$
where $p=n^4 + 25 n^2 + 125$. Its discriminant is
$$D=2^{12}5^{20}(n^2+7)^2n^4(n^4 + 25 n^2 + 125)^4$$
Finding a simple parametric cyclic quintic was one of the aims of this post.
(Original post):
We have
$$x^5 + x^4 - 4 x^3 - 3 x^2 + 3 x + 1=0,\quad\quad x =\sum_{k=1}^{2}\,\exp\Bigl(\tfrac{2\pi\, i\, 10^k}{11}\Bigr)$$
and so on for prime $p=10m+1$. Let $A$ be this class of quintics with $x =\sum_{k=1}^{2m}\,\exp\Bigl(\tfrac{2\pi\, i\, n^k}{p}\Bigr).$ I was trying to find a pattern to the coefficients of this infinite family, perhaps something similar to the Diophantine equation $u^2+27v^2=4N$ for the cubic case.
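Since $10 \equiv -1 \pmod{11}$, the displayed root is just $2\cos\frac{2\pi}{11}$, whose Galois conjugates are $2\cos\frac{2\pi j}{11}$, $j=1,\dots,5$; a quick numerical check that all five satisfy the quintic:

```python
import math

def quintic(x):
    return x**5 + x**4 - 4*x**3 - 3*x**2 + 3*x + 1

# the five conjugates 2*cos(2*pi*j/11), j = 1..5, should all be roots
for j in range(1, 6):
    assert abs(quintic(2 * math.cos(2 * math.pi * j / 11))) < 1e-9
```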
First, we can depress these (get rid of the $x^{n-1}$ term ) by letting $x=\frac{y-1}{5}$ to get the form,
$$y^5+ay^3+by^2+cy+d=0$$
Call the depressed form of $A$ as $B$.
Questions:
*
*Is it true that for $B$, there is always an ordering of its roots such that$$\small y_1 y_2 + y_2 y_3 + y_3 y_4 + y_4 y_5 + y_5 y_1 - (y_1 y_3 + y_3 y_5 + y_5 y_2 + y_2 y_4 + y_4 y_1) = 0$$
*Do its coefficients $a,b,c,d$ always obey the Diophantine relations,
$$a^3 + 10 b^2 - 20 a c= 2z_1^2$$ $$5 (a^2 - 4 c)^2 + 32 a b^2 = z_2^2$$ $$(a^3 + 10 b^2 - 20 a c)\,\big(5 (a^2 - 4 c)^2 + 32 a b^2\big) = 2z_1^2z_2^2 = 2(a^2 b + 20 b c - 100 a d)^2$$ for integer $z_i$?
I tested the first forty such quintics and they answer the two questions in the affirmative. But is it true for all prime $p=10m+1$?
| I think the depressed quintic in the question is what Emma Lehmer called the reduced quintic in her paper, The quintic character of 2 and 3, Duke Math. J. Volume 18, Number 1 (1951), 11-18, MR0040338. In the proof of Theorem 4, she writes that the reduced quintic is $$F(z)=z^5-10pz^3-5pxz^2-5p\left({x^2-125w^2\over4}-p\right)z+p^2x-{x^3+625(u^2-v^2)w\over8}$$ where $16p=x^2+50u^2+50v^2+125w^2$, $xw=v^2-u^2-4uv$ determines $x$ uniquely. You might plug these into your conjectured diophantine relations, to see whether they hold.
EDIT:
A slightly different formula is given by Berndt and Evans, The determination of Gauss sums, Bull Amer Math Soc 5, no. 2 (1981) 107-129, on page 119 (formula 5.2). It goes
$$
F(z)=z^5-10pz^3-5pxz^2-5p\left({x^2-125w^2\over4}-p\right)z+p^2x-p{x^3+625(u^2-v^2)w\over8}
$$
the difference being the factor of $p$ in the last term. Berndt and Evans note that Lehmer inadvertently omitted this factor.
Update: (by OP)
Using the edited coefficients of $F(z)$ by Berndt and Evans, if we express it as,
$$z^5+az^3+bz^2+cz+d=0$$
and taking into account $16p=x^2+50u^2+50v^2+125w^2,\,x=(v^2-u^2-4uv)/w$, then
$$a^3 + 10 b^2 - 20a c =2\times125^2p^2w^2=2z_1^2$$
$$5(a^2 - 4c)^2 + 32a b^2 = 500^2 p^2(u^2 - u v - v^2)^2=z_2^2$$
$$(a^3 + 10 b^2 - 20 a c)\,\big(5 (a^2 - 4 c)^2 + 32 a b^2\big) = 2(a^2 b + 20 b c - 100 a d)^2$$
which confirms all three Diophantine relations by the OP.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/255416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Special values of the modular J invariant A special value:
$$
J\big(i\sqrt{6}\;\big) = \frac{(14+9\sqrt{2}\;)^3\;(2-\sqrt{2}\;)}{4}
\tag{1}$$
I wrote $J(\tau) = j(\tau)/1728$.
How up-to-date is the Wikipedia listing of known special values for the modular $j$ invariant?
Value (1) is not on it.
Alternatively, is there a compilation of special values elsewhere?
The value (1) is related to this hypergeometric value, too:
$$
{}_2F_1\left(\frac{1}{6},\frac{1}{3};1;\frac{1}{2}\right) =
\eta(i\sqrt{6}\;)^2 \,2^{1/2} \,3^{3/4} \,(1+\sqrt{2}\;)^{1/6}
\tag{2}$$
Here, $\eta(\tau)$ is the Dedekind eta function
(I came to this while working on my unanswered question
at math.SE)
| I do not know enough about CM (complex multiplication?) or
class field theory to tell whether Joe's answer is
sensible or feasible. And (so far) we have received no
references as I had hoped. So eventually I came up with
a proof, perhaps more elementary.
Write $\tau_1 = i\sqrt{6}/6$. Then
$$
\frac{-1}{\tau_1} = i\sqrt{6} = 6\tau_1 .
$$
Now both $j(\tau)$ and $j(6\tau)$ are
modular functions for the group $\Gamma_0(6)$.
Thus, they are algebraically related to each other.
And since
$$
j(6\tau_1) = j\left(\frac{-1}{\tau_1}\right) = j(\tau_1),
$$
if we put these into the algebraic relation we get
an algebraic equation for $j(\tau_1)$.
Evaluating $j(\tau_1)$ numerically, we can tell which zero of
that equation is the right one.
To determine the algebraic relationship between
$j(\tau)$ and $j(6\tau)$, I chose the Hauptmodul
$$
j_{6E}(\tau) = \frac{\eta(2\tau)^3\eta(3\tau)^9}
{\eta(\tau)^3\eta(6\tau)^9}
= q^{-1} + 3 + 6q + 4q^2 - 3 q^3 +\dots
$$
Then $j(\tau)$ and $j(6\tau)$ are both rational functions of it.
My computer calculates—writing $x=j_{6E}(\tau)$:
\begin{align*}
j(\tau) &= \frac{(x+4)^3(x^3+228 x^2 + 48 x + 64)^3}
{x^2(x-8)^6(x+1)^3}
\\
j(6\tau) &=\frac{x^6(8/x^3+12/x^2+6/x-1)^3(1-2/x)^3}
{8/x^3+15/x^2+6/x-1}
\end{align*}
When $\tau=\tau_1$, these are equal. Equating them,
we get an equation to solve for $x$.
Of degree $10$. (It factors somewhat.)
Maple doesn't numerically evaluate eta functions directly.
But we can write them in terms of theta functions
$$
j_{6E}(\tau) =
{\frac {{{\rm e}^{-2\,i\pi \,\tau}}
\theta_4 \left( \pi \,\tau,{{\rm e}^{6\,i\pi \,\tau}}
\right) ^{3}
\theta_4 \left(
\frac{3}{2}\,\pi \,\tau,{{\rm e}^{9\,i\pi \,\tau}}
\right) ^{9}}{ \theta_4 \left( \frac{1}{2}\,\pi \,
\tau,{{\rm e}^{3\,i\pi \,\tau}} \right) ^{3}
\theta_4 \left( 3\,\pi \,\tau,{{\rm e}^{18\,i\pi \,\tau}}
\right) ^{9}}}
$$
which Maple does numerically evaluate.
We get
$$
j_{6E}(\tau_1) \approx 16.48528137423857 .
$$
The only zero of our polynomial that matches this is
$x = 8 + 6\sqrt{2}$. Plugging it in, we get
\begin{align*}
j(i\sqrt{6}\,) &= 2417472+1707264\sqrt{2}
=2^6 \;3^3 \;(1+\sqrt{2}\,)^5\;(3\sqrt{2}-1)^3
\\
J(i\sqrt{6}\,) &= 1399+988\sqrt{2}=
(1+\sqrt{2}\,)^5\;(3\sqrt{2}-1)^3 .
\end{align*}
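As a sanity check, $j(i\sqrt6)$ can be recomputed from the first few terms of the Fourier expansion $j = q^{-1} + 744 + 196884\,q + \cdots$; at this nome the omitted terms are negligible:

```python
import math

def j_qexp(tau_im, terms=(744, 196884, 21493760, 864299970)):
    """Truncated Fourier expansion of j at tau = i*tau_im (first few coefficients)."""
    q = math.exp(-2 * math.pi * tau_im)
    return 1 / q + sum(c * q**k for k, c in enumerate(terms))

j_val = j_qexp(math.sqrt(6))
assert abs(j_val - (2417472 + 1707264 * math.sqrt(2))) < 1e-6 * j_val
assert abs(j_val - 1728 * (1 + math.sqrt(2))**5 * (3 * math.sqrt(2) - 1)**3) < 1e-6 * j_val
```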
| {
"language": "en",
"url": "https://mathoverflow.net/questions/260786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 0
} |
Need to prove an assertion for two subsets of natural numbers Let $P\ne Q$ be an arbitrary pair of primes, $M=PQ$.
$S$ = sum of all $m<M$ co-prime to $M$ such that the equation $Px+Qy=m\ (1)$ has a solution in natural numbers.
$s$ = sum of all $m<M$ co-prime to $M$ such that this equation is not solvable.
Is the assertion $S-s=\dfrac{(P^2-1)(Q^2-1)}{6}$ right?
| Consider the set
$$T = \{ (x,y)\ :\ 1\leq x\leq Q,\ 1\leq y\leq P,\ Px+Qy\leq PQ\}.$$
It is easy to see that
$$S = \sum_{(x,y)\in T} (Px+Qy).$$
Switching to double summation in $S$, we get
$$S = \sum_{y=1}^{P} \sum_{x=1}^{X_y} (Px+Qy) = \sum_{y=1}^{P} \left( QyX_y + P\,\frac{X_y(X_y+1)}2 \right),$$
where
$$X_y = \left\lfloor \frac{PQ-Qy}P\right\rfloor = Q + \frac{-Qy - Z_y}P$$
and
$$Z_y = -Qy\bmod P.$$
Hence,
$$S=\sum_{y=1}^{P} \left(\frac{PQ^2+PQ}2 - \frac{Q}{2}y - (Q+\frac{1}{2})Z_y + \frac{1}{2P}Z_y^2 - \frac{Q^2}{2P}y^2\right).$$
Since $(P,Q)=1$, $Z_y$ runs over $\{0,1,\dots,P-1\}$ when $y$ runs over $\{1,2,\dots,P\}$.
Therefore,
$$S = \frac{P^2Q^2+P^2Q}2-\frac{QP(P+1)}{2}-(Q+\frac{1}{2})\frac{P(P-1)}2+\frac{(P-1)P(2P-1)}{12P}-\frac{Q^2P(P+1)(2P+1)}{12P}$$
$$=\frac{(P-1)(Q-1)(4PQ+P+Q+1)}{12}.$$
Now, the sum of all $m<PQ$ coprime to $PQ$ equals
$$S+s=\frac{PQ(PQ-1)}2 - P\frac{Q(Q-1)}2 - Q\frac{P(P-1)}2=\frac{PQ(P-1)(Q-1)}2,$$
implying that
$$s = \frac{PQ(P-1)(Q-1)}2 - S = \frac{(P-1)(Q-1)(2PQ-P-Q-1)}{12}.$$
Finally,
$$S-s = \frac{(P-1)(Q-1)(4PQ+P+Q+1)}{12} - \frac{(P-1)(Q-1)(2PQ-P-Q-1)}{12}$$
$$=\frac{(P^2-1)(Q^2-1)}6$$
as expected.
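A brute-force confirmation for small prime pairs (here solvability means solutions with $x, y \geq 1$, matching the set $T$ above):

```python
from math import gcd

def S_minus_s(P, Q):
    total = 0
    for m in range(1, P * Q):
        if gcd(m, P * Q) == 1:
            # Px + Qy = m with x >= 1 and y = (m - Px)/Q >= 1
            solvable = any((m - P * x) % Q == 0 for x in range(1, (m - Q) // P + 1))
            total += m if solvable else -m
    return total

for P, Q in [(3, 5), (3, 7), (5, 7), (5, 11), (7, 11)]:
    assert S_minus_s(P, Q) == (P * P - 1) * (Q * Q - 1) // 6
```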
| {
"language": "en",
"url": "https://mathoverflow.net/questions/280486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Limit of a Combinatorial Function I need help with the following problem, proposed by Iurie Boreico:
Two players, $A$ and $B$, play the following game: $A$ divides an $n \times n $ square into strips of unit width (and various integer lengths). After that, player $B$ picks an integer $k$, $1 \leq k \leq n$, and removes all strips of length $k$. Let $l(n)$ be the largest area that $B$ can remove, regardless how $A$ divides the square into strips. Evaluate
$$ \lim_{n \to \infty} \frac{l(n)}{n}. $$
Some progress that I made on this problem:
Observe that there is at most $l(n)/k$ strips of length $k$. So,
$l(n)n - \sum_{k = 1}^n l(n) \pmod k = \sum_{k = 1}^n k \lfloor l(n)/k \rfloor \geq n^2$.
Now I'm stumped on how to asymptotically bound the left hand side, and I cannot find this problem posted anywhere online. Any solutions, observations, progress is appreciated.
| Here is another way to get the bound obtained by Fedor Petrov.
One observation is that from the covering of the board using strips of size $1 \times i$ and $1 \times n-i$ to cover row $r$ for rows $1 \leq r \leq n$, then $l(n) \leq (n-1) + (n-1) = 2n-2$. From the Pigeonhole principle, we also have that $n \leq l(n)$. So, $n \leq l(n) \leq 2n-2$.
As stated in the OP, we have that
$$ l(n)n - \sum_{k = 1}^n l(n) \pmod k \geq n^2. $$
We will try to get a bound on the second sum. To simplify notation, we let $l(n) = L$. Splitting the sum into $1 \leq k \leq L/2$ and $L/2 < k \leq n$, we have that
\begin{align*}
\sum_{n \geq k > L/2} l(n) \pmod k & = \sum_{n \geq k > L/2} L - k \\
& = \left ( n - \frac{L}{2} \right ) \left ( \frac{3L}{4} - \frac{n}{2} \right) + O(L) \\
& = nL - \frac{n^2}{2} - \frac{3L^2}{8} + O(L).
\end{align*}
For the second sum, we have that
\begin{align*}
\sum_{k \leq L/2} L \pmod k & = \sum_{i = 2}^L \sum_{\frac{L}{i+1} < k \leq \frac{L}{i}} L \pmod k\\
& = \sum_{i = 2}^L \sum_{\frac{L}{i+1} < k \leq \frac{L}{i}} L - ik \\
& = \sum_{i = 2}^L \frac{L^2}{2i(i+1)^2} + L \cdot O\left (\frac{1}{i+1} \right) \\
& = O(L \log L) + \frac{L^2}{2} \sum_{i = 2}^L \frac{1}{i(i+1)^2}.
\end{align*}
Since
$$\sum_{i = 2}^L \frac{1}{i(i+1)^2} = \sum_{i = 2}^\infty \frac{1}{i(i+1)^2} - \sum_{i \geq L+1} \frac{1}{i(i+1)^2} = \frac{7}{4} - \frac{\pi^2}{6} + O \left (\frac{1}{L^2}\right ).$$
So,
$$\sum_{k \leq L/2} L \pmod k = O(L \log L) + \left ( \frac{7}{8} - \frac{\pi^2}{12} \right ) L^2.$$
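The series value used here can be confirmed directly: partial fractions give $\frac{1}{i(i+1)^2} = \frac1i - \frac1{i+1} - \frac1{(i+1)^2}$, and the tail telescopes against $\sum 1/j^2$. A quick check:

```python
import math
from fractions import Fraction

# exact partial-fraction identity behind the evaluation
for i in range(2, 50):
    assert Fraction(1, i * (i + 1)**2) == \
        Fraction(1, i) - Fraction(1, i + 1) - Fraction(1, (i + 1)**2)

# numerical value of the sum from i = 2 (tail beyond 2*10^5 is ~1/(2N^2))
partial = sum(1 / (i * (i + 1)**2) for i in range(2, 200000))
assert abs(partial - (7 / 4 - math.pi**2 / 6)) < 1e-8
```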
Adding our two expressions together, we have that
$$\sum_{k = 1}^n L \pmod k = nL - \frac{n^2}{2} + \left ( \frac{1}{2} - \frac{\pi^2}{12} \right ) L^2 + O(L \log L).$$
Hence,
\begin{align*}
nL - \sum_{k = 1}^n l(n) \pmod k & \geq n^2 \\
\left ( \frac{\pi^2}{12} - \frac{1}{2} \right ) L^2 + O(L \log L) & \geq \frac{n^2}{2} \\
\left ( \frac{L}{n} \right )^2 & \geq \frac{6}{\pi^2 - 6} + o(1) \\
\frac{L}{n} & \geq \sqrt{ \frac{6}{\pi^2 - 6} } + o(1).
\end{align*}
Taking the limit as $n \to \infty$, we find that the limit is $\boxed{\geq \sqrt{6/(\pi^2 - 6)}}$.
The only upper bound I've arrived at is $2$ which can be achieved by the first observation.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/298670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Strengthened version of Isoperimetric inequality with n-polygon Let $ABCD$ be a convex quadrilateral with side lengths $a, b, c, d$ and area $S$. The main result in our paper is equivalent to:
\begin{equation} a^2+b^2+c^2+d^2 \ge 4S + \frac{\sqrt{3}-1}{\sqrt{3}}\sum{(a-b)^2}\end{equation}
where $\sum{(a-b)^2}=(a-b)^2+(a-c)^2+(a-d)^2+(b-c)^2+(b-d)^2+(c-d)^2$ and note that $\frac{\sqrt{3}-1}{\sqrt{3}}$ is the best constant for this inequality.
Applying the same form to an $n$-sided polygon, I propose the following conjecture:
Let $A_1A_2...A_n$ be a convex polygon with side lengths $a_1, a_2, ..., a_n$ and area $S$; then:
\begin{equation} \sum_i^n{a_i^2} \ge 4\tan{\frac{\pi}{n}}S + k\sum_{i < j}{(a_i-a_j)^2}\end{equation}
I guess that $k=\tan{\frac{\pi}{n}}-\tan{\frac{\pi}{n+2}}$
I am looking for a proof of the inequality above.
Note that using Isoperimetric inequality we can prove that:
\begin{equation} \sum_i^n{a_i^2} \ge 4\tan{\frac{\pi}{n}}S\end{equation}
Case n=3:
*
*Hadwiger–Finsler inequality, $k=1 \approx 1.00550827956352=\tan{\frac{\pi}{3}}-\tan{\frac{\pi}{5}}$:
\begin{equation}a^{2} + b^{2} + c^{2} \geq 4 \sqrt{3}S+ (a - b)^{2} + (b - c)^{2} + (c - a)^{2}\end{equation}
Case n=4: our paper, $k=\frac{\sqrt{3}-1}{\sqrt{3}}$, $k=\tan{\frac{\pi}{4}}-\tan{\frac{\pi}{6}}$
\begin{equation} a^2+b^2+c^2+d^2 \ge 4S + \frac{\sqrt{3}-1}{\sqrt{3}}\sum{(a-b)^2}\end{equation}
See also:
*
*Weitzenböck's inequality
*Hadwiger–Finsler inequality
*Isoperimetric inequality.
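The $n=4$ case can at least be spot-checked numerically. The sketch below samples cyclic quadrilaterals (a subfamily of the convex ones) with $k=\frac{\sqrt3-1}{\sqrt3}$; this is evidence only, not a proof:

```python
import math, random

K4 = (math.sqrt(3) - 1) / math.sqrt(3)

def quad_slack(pts, k=K4):
    """a^2+b^2+c^2+d^2 - 4S - k * (sum of squared side differences); claimed >= 0."""
    s = [math.dist(pts[i], pts[(i + 1) % 4]) for i in range(4)]
    S = 0.5 * abs(sum(pts[i][0] * pts[(i + 1) % 4][1]
                      - pts[(i + 1) % 4][0] * pts[i][1] for i in range(4)))
    diffs = sum((s[i] - s[j])**2 for i in range(4) for j in range(i + 1, 4))
    return sum(x * x for x in s) - 4 * S - k * diffs

random.seed(1)
for _ in range(2000):
    ang = sorted(random.uniform(0, 2 * math.pi) for _ in range(4))
    pts = [(math.cos(t), math.sin(t)) for t in ang]   # vertices on a circle => convex
    assert quad_slack(pts) > -1e-9
```

Equality is attained by the square, which the construction approaches only in the limit.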
| The keyword you are looking for is "quantitative isoperimetric inequalities". The case of polygons was solved in the following paper: https://arxiv.org/abs/1402.4460
The "quantitative" term in the paper above is not necessarily of the form you seek. It involves the standard deviation of the radii and central angles.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/299056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Primality test for generalized Fermat numbers This question is successor of Primality test for specific class of generalized Fermat numbers .
Can you provide a proof or a counterexample for the claim given below?
Inspired by Lucas–Lehmer–Riesel primality test I have formulated the following claim:
Let $P_m(x)=2^{-m}\cdot((x-\sqrt{x^2-4})^m+(x+\sqrt{x^2-4})^m)$ .
Let $F_n(b)= b^{2^n}+1$ where $b$ is an even natural number and $n\ge2$. Let $a$ be a natural number greater than two such that $\left(\frac{a-2}{F_n(b)}\right)=-1$ and $\left(\frac{a+2}{F_n(b)}\right)=-1$, where $\left(\frac{\cdot}{\cdot}\right)$ denotes the Jacobi symbol. Let $S_i=P_b(S_{i-1})$ with $S_0 = P_{b/2}(P_{b/2}(a)) \bmod F_n(b)$. Then $F_n(b)$ is prime if and only if $S_{2^n-2} \equiv 0 \pmod{F_n(b)}$.
You can run this test here. A list of generalized Fermat primes sorted by base $b$ can be found here. I have tested this claim for many random values of $b$ and $n$ and there were no counterexamples.
A command line program that implements this test can be found here.
Android app that implements this test can be found here .
Python script that implements this test can be found here.
Mathematica notebook that implements this test can be found here.
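For reference, here is a minimal Python transcription of the claim. It uses the identity $P_m(x) = V_m(x,1)$, the Lucas sequence with $V_0=2$, $V_1=x$, $V_j = xV_{j-1}-V_{j-2}$, which follows from $P_m(z+z^{-1}) = z^m + z^{-m}$ (function names are mine):

```python
def jacobi(a, n):                     # Jacobi symbol (a/n), n odd positive
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def lucas_v(m, x, N):
    """V_m mod N with V_0 = 2, V_1 = x, V_j = x*V_{j-1} - V_{j-2}, i.e. P_m(x) mod N."""
    v0, v1 = 2, x % N
    for bit in bin(m)[2:]:            # fast doubling: V_2k = V_k^2-2, V_2k+1 = V_k*V_k+1-x
        if bit == '0':
            v0, v1 = (v0*v0 - 2) % N, (v0*v1 - x) % N
        else:
            v0, v1 = (v0*v1 - x) % N, (v1*v1 - 2) % N
    return v0

def gf_test(b, n):
    N = b**(2**n) + 1
    a = next(a for a in range(3, N)
             if jacobi(a - 2, N) == -1 and jacobi(a + 2, N) == -1)
    s = lucas_v(b // 2, lucas_v(b // 2, a, N), N)
    for _ in range(2**n - 2):
        s = lucas_v(b, s, N)
    return s == 0

assert gf_test(2, 2) and gf_test(2, 3) and gf_test(6, 2)   # 17, 257, 1297 are prime
assert not gf_test(8, 2)                                   # 8^4 + 1 = 4097 = 17 * 241
```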
| This is a partial answer.
This answer proves that if $F_n(b)$ is prime, then $S_{2^n-2} \equiv 0 \pmod{F_n(b)}$.
Proof :
Let $N:=F_n(b)=b^{2^n}+1$. It can be proven by induction that
$$S_i\equiv 2^{-b^{i+2}/4}(p^{b^{i+2}/4}+q^{b^{i+2}/4})\pmod N\tag1$$
where $p=a-\sqrt{a^2-4},q=a+\sqrt{a^2-4}$.
From $(1)$, we get, using $\sqrt{a\pm\sqrt{a^2-4}}=\frac 1{\sqrt 2}(\sqrt{a+2}\pm\sqrt{a-2})$,
$$\begin{align}&2^{N+1}\cdot S_{2^n-2}^2-2^{N+2}
\\\\&\equiv \left(\sqrt{a+2}+\sqrt{a-2}\right)\left(\sqrt{a+2}-\sqrt{a-2}\right)^{N}
\\&\qquad\qquad +\left(\sqrt{a+2}-\sqrt{a-2}\right)\left(\sqrt{a+2}+\sqrt{a-2}\right)^{N}\pmod N
\\\\&\equiv \sqrt{a+2}\left(\left(\sqrt{a+2}-\sqrt{a-2}\right)^{N}
+\left(\sqrt{a+2}+\sqrt{a-2}\right)^{N}\right)
\\&\qquad\qquad-\sqrt{a-2}\left(\left(\sqrt{a+2}+\sqrt{a-2}\right)^{N}
-\left(\sqrt{a+2}-\sqrt{a-2}\right)^{N}\right)\pmod N
\\\\&\equiv \sqrt{a+2}\sum_{k=0}^{N}\binom Nk(\sqrt{a+2})^{N-k}((-\sqrt{a-2})^k+(\sqrt{a-2})^k)
\\&\qquad\qquad -\sqrt{a-2}\sum_{k=0}^{N}\binom Nk(\sqrt{a+2})^{N-k}((\sqrt{a-2})^k-(-\sqrt{a-2})^k)\pmod N
\\\\&\equiv \sum_{j=0}^{(N-1)/2}\binom{N}{2j}(a+2)^{(N-2j+1)/2}\cdot 2(a-2)^j
\\&\qquad\qquad -\sum_{j=1}^{(N+1)/2}\binom{N}{2j-1}(a+2)^{(N-2j+1)/2}\cdot 2(a-2)^j\pmod N
\\\\&\equiv 2(a+2)\cdot\left(\frac{a+2}{N}\right)-2(a-2)\cdot\left(\frac{a-2}{N}\right)\pmod N
\\\\&\equiv 2(a+2)\cdot (-1)-2(a-2)\cdot (-1)\pmod N
\\\\&\equiv -8\pmod N
\end{align}$$
So, we get
$$2^{N+1}\cdot S_{2^n-2}^2-2^{N+2}\equiv -8\pmod N$$
It follows from $2^{(N-1)/2}\equiv 1\pmod N$ that
$$S_{2^n-2}\equiv 0\pmod{F_n(b)}$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/311457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Matrix logarithm for d-dimensional cyclic permutation matrix I want to find the matrix $\hat{H}_d$ which, when exponentiated, leads to a d-dimensional cyclic permutation transformation matrix.
I have solutions for d=2:
$$
\hat{U}_2 =\left( \begin{matrix}
0 & 1 \\
1 & 0 \\
\end{matrix} \right)=-i \exp\left(i\hat{H_2}\right) \to \\
\hat{H}_2 =\frac{\pi}{2}\left( \begin{matrix}
0 & 1 \\
1 & 0 \\
\end{matrix} \right)
$$
d=3:
$$
\hat{U}_3 =\left( \begin{matrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
1 & 0 & 0\\
\end{matrix} \right)=-i \exp\left(i\hat{H_3}\right) \to \\
\hat{H}_3 =\frac{\pi}{3}\left( \begin{matrix}
-\frac{1}{2} & \left(1+\frac{i}{\sqrt{3}} \right) & \left(1-\frac{i}{\sqrt{3}} \right) \\
\left(1-\frac{i}{\sqrt{3}} \right) & -\frac{1}{2} & \left(1+\frac{i}{\sqrt{3}} \right)\\
\left(1+\frac{i}{\sqrt{3}} \right) & \left(1-\frac{i}{\sqrt{3}} \right) & -\frac{1}{2}
\end{matrix} \right)
$$
d=4:
$$
\hat{U}_4 =\left( \begin{matrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 \\
\end{matrix} \right)=-i \exp\left(i\hat{H_4}\right) \to \\
\hat{H}_4 =\frac{\pi}{4}\left( \begin{matrix}
1 & (1-i) & -1 & (1+i) \\
(1+i) & 1 & (1-i) & -1\\
-1 & (1+i) & 1 & (1-i)\\
(1-i) & -1 & (1+i) & 1 \\
\end{matrix} \right)
$$
Unfortunately, I am missing a general form of $H_d$ for $d>4$, so my question is:
Question: How does $\hat{H}_d$ look when I want that $\hat{U}_d=-i \exp\left(i \hat{H}_d \right)$ is a d-dimensional cyclic permutation transformation?
| The article below presents the general form:
https://arxiv.org/abs/2001.11909
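For completeness, here is a sketch of one standard construction (my own assumption; the paper's branch and normalization may differ): diagonalize the shift $U_d$ by the DFT vectors $v_j = (\omega^{jm})_m$ with $\omega = e^{2\pi i/d}$, choose eigenphases $\theta_j$ with $e^{i\theta_j} = i\,\omega^j$, and set $\hat H_d[m][n] = \frac1d\sum_j \theta_j\,\omega^{j(m-n)}$:

```python
import cmath, math

def cyclic_H(d):
    """Hermitian H_d with -i * exp(i H_d) equal to the d-dimensional cyclic shift."""
    w = cmath.exp(2j * math.pi / d)
    theta = [cmath.phase(1j * w**j) for j in range(d)]   # principal branch in (-pi, pi]
    return [[sum(theta[j] * w**(j * (m - n)) for j in range(d)) / d
             for n in range(d)] for m in range(d)]

def expm(A, terms=60):
    """Matrix exponential by plain Taylor series (adequate for these small norms)."""
    d = len(A)
    out = [[float(m == n) for n in range(d)] for m in range(d)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        term = [[sum(term[m][l] * A[l][n] for l in range(d)) / k
                 for n in range(d)] for m in range(d)]
        out = [[out[m][n] + term[m][n] for n in range(d)] for m in range(d)]
    return out

for d in range(2, 7):
    H = cyclic_H(d)
    E = expm([[1j * z for z in row] for row in H])
    for m in range(d):
        for n in range(d):
            target = 1.0 if n == (m + 1) % d else 0.0    # U_d, the cyclic shift matrix
            assert abs(-1j * E[m][n] - target) < 1e-9
```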
| {
"language": "en",
"url": "https://mathoverflow.net/questions/338533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Natural number solutions for equations of the form $\frac{a^2}{a^2-1} \cdot \frac{b^2}{b^2-1} = \frac{c^2}{c^2-1}$ Consider the equation $$\frac{a^2}{a^2-1} \cdot \frac{b^2}{b^2-1} = \frac{c^2}{c^2-1}.$$
Of course, there are solutions to this like $(a,b,c) = (9,8,6)$.
Is there any known approximation for the number of solutions $(a,b,c)$, when $2 \leq a,b,c \leq k$ for some $k \geq 2$?
More generally, consider the equation $$\frac{a_1^2}{a_1^2-1} \cdot \frac{a_2^2}{a_2^2-1} \cdot \ldots \cdot \frac{a_n^2}{a_n^2-1} = \frac{b_1^2}{b_1^2-1} \cdot \frac{b_2^2}{b_2^2-1}\cdot \ldots \cdot \frac{b_m^2}{b_m^2-1}$$
for some natural numbers $n,m \geq 1$. Similarly to the above question, I ask myself if there is any known approximation to the number of solutions $(a_1,\ldots,a_n,b_1,\ldots,b_m)$, with natural numbers $2 \leq a_1, \ldots, a_n, b_1, \ldots, b_m \leq k$ for some $k \geq 2$. Of course, for $n = m$, all $2n$-tuples are solutions, where $(a_1,\ldots,a_n)$ is just a permutation of $(b_1,\ldots,b_n)$.
| Here's another infinite family. Let $x,y$ be positive integers such that $x^2-2y^2=\pm1$ – there are infinitely many such pairs. Let $a=x^2$, $b=2y^2$, $c=xy$, then a little algebra will show that $(a,b,c)$ satisfy the equation in the title.
E.g., $x=3$, $y=2$ leads to $(9,8,6)$, and $x=7$, $y=5$ yields $(49,50,35)$, two triples already found by Dmitry, while $x=17$, $y=12$ gets us $(289,288,204)$.
This infinite family is much thinner than the one in the other answer.
[I seem to have become disconnected from the account under which I posted the other answer.]
EDIT: A third infinite family. $$a=4n(n+1)(n^2+n-1),\ b=(2n+1)(2n^2+2n-1),\ c=2(2n+1)(n^2+n-1)$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/359481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Two remarkable weighted sums over binary words This question builds off of the previous MO question Number of collinear ways to fill a grid.
Let $A(m,n)$ denote the set of binary words $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_{m+n-2})$ consisting of $m-1$ $0's$ and $n-1$ $1's$. Evidently $\#A(m,n) = \binom{m+n-2}{m-1}$.
For $\alpha \in A(m,n)$ and $1\leq i \leq m+n-2$, set
$$ b^\alpha_i := \#\{1\leq j < i\colon \alpha_i\neq\alpha_j\} +1.$$
$$ c^\alpha_i := \#\{1\leq j \leq i\colon \alpha_i=\alpha_j\}=(i+1)-b^\alpha_i.$$
The resolution of the above-linked question implies that
$$ \sum_{\alpha \in A(m,n)} \frac{b^\alpha_1b^\alpha_2 \cdots b^\alpha_{m+n-2}}{b^\alpha_{m+n-2} (b^\alpha_{m+n-2}+b^\alpha_{m+n-3})\cdots (b^\alpha_{m+n-2}+b^\alpha_{m+n-3}+\cdots+b^\alpha_1) } = \frac{mn}{(m+n-1)!}$$
Meanwhile, in this answer, it is explained that
$$ \sum_{\alpha \in A(m,n)} \frac{1}{c^\alpha_{m+n-2} (c^\alpha_{m+n-2}+c^\alpha_{m+n-3})\cdots (c^\alpha_{m+n-2}+c^\alpha_{m+n-3}+\cdots+c^\alpha_1) } = \frac{2^{m-1}2^{n-1}}{(2m-2)! (2n-2)!}$$
Considering the similarities of these two remarkable weighted sums over binary words, we ask:
Question: Is there a more general formula which specializes to the above two formulas?
EDIT:
Since for any $\alpha \in A(m,n)$ we have
$$ \{c_1^{\alpha},c_2^{\alpha},\ldots,c_{m+n-2}^{\alpha}\} = \{1,2,\ldots,m-1,1,2,\ldots,n-1\},$$
we can rewrite that second sum to be
$$ \sum_{\alpha \in A(m,n)} \frac{c^\alpha_1 c^\alpha_2 \cdots c^\alpha_{m+n-2}}{c^\alpha_{m+n-2} (c^\alpha_{m+n-2}+c^\alpha_{m+n-3})\cdots (c^\alpha_{m+n-2}+c^\alpha_{m+n-3}+\cdots+c^\alpha_1) } = \frac{2^{m-1}(m-1)!2^{n-1}(n-1)!}{(2m-2)! (2n-2)!},$$
to be even more similar to the first sum.
If we set
$$ d^{\alpha}_i = xb^{\alpha}_i+yc^{\alpha}_i,$$
then the above commentary explains that
$$ \sum_{\alpha \in A(m,n)} \frac{d^\alpha_1 d^\alpha_2 \cdots d^\alpha_{m+n-2}}{d^\alpha_{m+n-2} (d^\alpha_{m+n-2}+d^\alpha_{m+n-3})\cdots (d^\alpha_{m+n-2}+d^\alpha_{m+n-3}+\cdots+d^\alpha_1)}$$
has a product formula for $x,y\in \{0,1\}$. Maybe it has a product formula for general $x,y$.
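Both product formulas are easy to confirm exactly for small $m,n$; in the sketch below, `term` computes the common telescoping denominator, and the names are mine:

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def term(v):
    num, den, run = 1, 1, 0
    for x in reversed(v):      # v_L, then v_L + v_{L-1}, ... in the denominator
        num *= x
        run += x
        den *= run
    return Fraction(num, den)

def weighted_sums(m, n):
    L = m + n - 2
    s_b = s_c = Fraction(0)
    for ones in combinations(range(L), n - 1):
        ones = set(ones)
        alpha = [int(i in ones) for i in range(L)]
        b = [sum(alpha[j] != alpha[i] for j in range(i)) + 1 for i in range(L)]
        c = [i + 2 - b[i] for i in range(L)]    # c_i = (i+1) - b_i with 1-indexed i
        s_b += term(b)
        s_c += term(c)
    return s_b, s_c

for m, n in [(2, 2), (2, 3), (3, 3), (2, 4), (3, 4)]:
    s_b, s_c = weighted_sums(m, n)
    assert s_b == Fraction(m * n, factorial(m + n - 1))
    assert s_c == Fraction(2**(m-1) * factorial(m-1) * 2**(n-1) * factorial(n-1),
                           factorial(2*m - 2) * factorial(2*n - 2))
```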
| One can perhaps look at the bivariate generating functions,
$$
\sum_{m,n \geq 0} \frac{x^m y^n mn}{(m+n-1)!}
$$
and
$$
\sum_{m,n \geq 0} \frac{x^m y^n 2^{m+n-2}}{(2m-2)!(2n-2)!}.
$$
Mathematica expresses these as
$$
\frac{x y \left(e^x x^3-e^x x^2 y+x e^y y^2-2 e^x x y+2 x e^y y-e^y y^3\right)}{(x-y)^3}
$$
and
$$
\frac{1}{2} x y \left(\cosh \left(\sqrt{2} \sqrt{x}\right) \left(\sqrt{2} \sqrt{y} \sinh \left(\sqrt{2} \sqrt{y}\right)+2 \cosh \left(\sqrt{2} \sqrt{y}\right)\right)+\sinh \left(\sqrt{2} \sqrt{x}\right) \left(\sqrt{x y} \sinh \left(\sqrt{2} \sqrt{y}\right)+\sqrt{2} \sqrt{x} \cosh \left(\sqrt{2} \sqrt{y}\right)\right)\right).
$$
There is not a lot of similarities here, but maybe there is a natural interpolation...
| {
"language": "en",
"url": "https://mathoverflow.net/questions/372678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Improving the lower bound $I(n^2) > \frac{2(q-1)}{q}$ when $q^k n^2$ is an odd perfect number Let $N = q^k n^2$ be an odd perfect number with special prime $q$ satisfying $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q,n)=1$.
Define the abundancy index
$$I(x)=\frac{\sigma(x)}{x}$$
where $\sigma(x)$ is the classical sum of divisors of $x$.
Since $q$ is prime, we have the bounds
$$\frac{q+1}{q} \leq I(q^k) < \frac{q}{q-1},$$
which implies, since $N$ is perfect, that
$$\frac{2(q-1)}{q} < I(n^2) = \frac{2}{I(q^k)} \leq \frac{2q}{q+1}.$$
We now prove the following claim:
CLAIM: $$I(n^2) > \bigg(\frac{2(q-1)}{q}\bigg)\bigg(\frac{q^{k+1} + 1}{q^{k+1}}\bigg)$$
PROOF: We know that
$$\frac{\sigma(n^2)}{q^k}=\frac{2n^2}{\sigma(q^k)}=\frac{2n^2 - \sigma(n^2)}{\sigma(q^k) - q^k}=\gcd(n^2,\sigma(n^2)),$$
(since $\gcd(q^k,\sigma(q^k))=1$).
However, we have
$$\sigma(q^k) - q^k = 1 + q + \ldots + q^{k-1} = \frac{q^k - 1}{q - 1},$$
so that we obtain
$$\frac{\sigma(n^2)}{q^k}=\frac{\bigg(q - 1\bigg)\bigg(2n^2 - \sigma(n^2)\bigg)}{q^k - 1}=\sigma(n^2) - \bigg(q - 1\bigg)\bigg(2n^2 - \sigma(n^2)\bigg)$$
$$=q\sigma(n^2) - 2(q - 1)n^2.$$
Dividing both sides by $qn^2$, we get
$$I(n^2) - \frac{2(q-1)}{q} = \frac{I(n^2)}{q^{k+1}} > \frac{1}{q^{k+1}}\cdot\frac{2(q-1)}{q},$$
which implies that
$$I(n^2) > \bigg(\frac{2(q-1)}{q}\bigg)\bigg(\frac{q^{k+1} + 1}{q^{k+1}}\bigg).$$
QED.
To illustrate the improved bound:
(1) Unconditionally, we have
$$I(n^2) > \frac{2(q-1)}{q} \geq \frac{8}{5} = 1.6.$$
(2) Under the assumption that $k=1$:
$$I(n^2) > 2\bigg(1 - \frac{1}{q}\bigg)\bigg(1 + \left(\frac{1}{q}\right)^2\bigg) \geq \frac{208}{125} = 1.664.$$
(3) However, it is known that under the assumption $k=1$, we actually have
$$I(q^k) = 1 + \frac{1}{q} \leq \frac{6}{5} \implies I(n^2) = \frac{2}{I(q^k)} \geq \frac{5}{3} = 1.\overline{666}.$$
Here are my questions:
(A) Is it possible to improve further on the unconditional lower bound for $I(n^2)$?
(B) If the answer to Question (A) is YES, my next question is "How"?
I did notice that
$$\frac{2(q-1)}{q}+\frac{2}{q(q+1)}=I(n^2)=\frac{2q}{q+1}$$
when $k=1$.
| Here is a way to come up with an improved lower bound for $I(n^2)$, albeit in terms of $q$ and $n$:
We write
$$I(n^2) - \frac{2(q - 1)}{q} = \frac{I(n^2)}{q^{k+1}} = \frac{\sigma(n^2)}{q^k}\cdot\frac{1}{qn^2} > \frac{1}{qn^2},$$
from which it follows that
$$I(n^2) > \frac{2(q - 1)}{q} + \frac{1}{qn^2}.$$
This improved lower bound for $I(n^2)$, which does not contain $k$, can then be used to produce an upper bound for $k$.
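Both identities used above become exact rational identities once $\sigma(q^k)\sigma(n^2)=2q^kn^2$ is imposed, and can be checked with exact arithmetic (the placeholder for $n^2$ need not come from an actual odd perfect number for the algebra to hold):

```python
from fractions import Fraction
import random

random.seed(0)
for _ in range(200):
    q = random.choice([5, 13, 17, 29, 37])        # q = 1 (mod 4)
    k = random.choice([1, 5, 9, 13])              # k = 1 (mod 4)
    n2 = Fraction(random.randint(2, 10**9))       # stands in for n^2
    sigma_qk = Fraction(q**(k + 1) - 1, q - 1)
    sigma_n2 = 2 * q**k * n2 / sigma_qk           # forced by sigma(N) = 2N
    I_n2 = sigma_n2 / n2
    # the key identity in the proof of the CLAIM
    assert sigma_n2 / q**k == q * sigma_n2 - 2 * (q - 1) * n2
    # the derived gap, after dividing by q*n^2
    assert I_n2 - Fraction(2 * (q - 1), q) == I_n2 / q**(k + 1)
```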
| {
"language": "en",
"url": "https://mathoverflow.net/questions/382050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to prove this high-degree inequality Let $x$,$y$,$z$ be positive real numbers which satisfy $xyz=1$. Prove that:
$(x^{10}+y^{10}+z^{10})^2 \geq 3(x^{13}+y^{13}+z^{13})$.
And there is a similar question: Let $x$,$y$,$z$ be positive real numbers which satisfy the inequality
$(2x^4+3y^4)(2y^4+3z^4)(2z^4+3x^4) \leq(3x+2y)(3y+2z)(3z+2x)$. Prove this inequality:
$xyz\leq 1$.
| Another way. By my previous post it's enough to prove that:
$$(x^5+y^5+z^5)^2\geq3xyz(x^7+y^7+z^7)$$ for positive $x$, $y$ and $z$.
Indeed, let $x^5+y^5+z^5=\text{constant}$ and $x^7+y^7+z^7=\text{constant}$.
Thus, by Vasc's EV Method (see here: Cîrtoaje - The equal variable method Corollary 1.8(b)) it's enough to prove the last inequality (because it's homogeneous) for $y=z=1$, which gives
$$(x^5+2)^2\geq3x(x^7+2)$$ or
$$(x-1)^2(x^8+2x^7-2x^5-4x^4-2x^3+2x+4)\geq0$$ and the rest is smooth.
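The factorization can be certified by exact evaluation at more integer points than the degree, and the reduced symmetric inequality spot-checked at random (a sketch, not a proof of the general case):

```python
import random

def lhs(x):
    return (x**5 + 2)**2 - 3 * x * (x**7 + 2)

def octic(x):
    return x**8 + 2*x**7 - 2*x**5 - 4*x**4 - 2*x**3 + 2*x + 4

# 13 integer sample points exceed the degree (10), so agreement proves the identity
for x in range(-6, 7):
    assert lhs(x) == (x - 1)**2 * octic(x)

# randomized check of the homogeneous inequality from the reduction
random.seed(0)
for _ in range(2000):
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    assert (x**5 + y**5 + z**5)**2 >= 3*x*y*z*(x**7 + y**7 + z**7) * (1 - 1e-12)
```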
| {
"language": "en",
"url": "https://mathoverflow.net/questions/385942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
} |
$1^2+17^2+19^2=3\cdot 7\cdot 31$ $1^2+17^2+19^2=3\cdot 7\cdot 31$.
$3$, $7$ and $31$ are the first three Mersenne primes
Let $a$, $b$ and $c$ be positive integers.
Let $M_n$ denote the n-th Mersenne prime.
Can it be proven that there are infinitely many positive $a,b,c$ such that:
$a^2+b^2+c^2=P_n$, where $P_n$ is the product of the first $n$ Mersenne primes?
| For $n \geq 3$, the product of the first $n$ Mersenne primes is congruent to $3 \times (-1)^{n-1}$ (mod $8$), since every Mersenne prime other than $3$ is congruent to $7$ (mod $8$). On the other hand, Legendre's three-square theorem asserts that the only positive integers which are not expressible as the sum of three integer squares are integers of the form $4^{a}(8b+7)$ with $a,b$ integers. No integer of the latter form is congruent to $\pm 3$ (mod $8$), so for $n \geq 3$, the product of the first $n$ Mersenne primes is expressible as the sum of three integer squares (but clearly, each such number is so expressible in only finitely many ways).
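Both ingredients of the argument are easy to confirm for the first few products:

```python
from math import isqrt

def three_positive_squares(N):
    """A representation N = a^2+b^2+c^2 with 1 <= a <= b <= c, or None."""
    for a in range(1, isqrt(N) + 1):
        for b in range(a, isqrt(N - a*a) + 1):
            rem = N - a*a - b*b
            if rem >= b*b and isqrt(rem)**2 == rem:
                return a, b, isqrt(rem)
    return None

mersenne = [3, 7, 31, 127]
prod = 1
for n, p in enumerate(mersenne, start=1):
    prod *= p
    if n >= 3:
        assert prod % 8 == (3 if n % 2 else 5)   # i.e. 3*(-1)^(n-1) mod 8
    assert three_positive_squares(prod) is not None
```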
| {
"language": "en",
"url": "https://mathoverflow.net/questions/394299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Number of 1's in binary expansion of $a_n = \frac{2^{\varphi(3^n)}-1}{3^n}$ My question is about the Hamming Weight (or number of 1's in binary expansion) of $a_n = \frac{2^{\varphi(3^n)}-1}{3^n}$ A152007
For example, $a_3 = 9709 = (10010111101101)_2 $ has nine 1's in its binary expansion
I guess the answer is $3^{(n-1)}$ but I can't prove it
Is that correct?
| It is, indeed, correct. Notice first that $2-(-1)=3$ is divisible by $3$, so by the lifting-the-exponent lemma the number
$$
A=\frac{2^{3^{n-1}}-(-1)^{3^{n-1}}}{3^n}=\frac{2^{3^{n-1}}+1}{3^n}
$$
is an integer. Notice also that for $n>0$ it has less than $3^{n-1}$ binary digits. Assume that it has $m$ binary digits. We have
$$
\frac{2^{\varphi(3^{n})}-1}{3^n}=\frac{2^{2\cdot 3^{n-1}}-1}{3^n}=A(2^{3^{n-1}}-1).
$$
Next, let $l=3^{n-1}-m$, $B=2^l-1$ and $C=2^m-A$. We claim that your number's binary expansion is a concatenation of $A-1$, $B$ and $C$, where we also pad the expansion of $C$ with leading zeros until it has exactly $m$ digits. So, we should have
$$
a_n=C+2^mB+2^{m+l}(A-1).
$$
This equality is true, because
$$
C+2^mB+2^{m+l}(A-1)=2^m-A+2^m(2^l-1)+2^{m+l}(A-1)=
$$
$$
=(A-1)(2^{m+l}-1)+2^{m+l}-1=A(2^{m+l}-1)=A(2^{3^{n-1}}-1).
$$
Now, if the sum of digits of $A-1$ is equal to $s$, then the sum of digits of $C$ is $m-s$ and the sum of digits of $B$ is, of course, $l$, so the sum of digits of $a_n$ is
$$
s+m-s+l=m+l=3^{n-1},
$$
as needed.
For the particular case $n=3$, described in the post, $A=10011_2=19=\frac{2^9+1}{27}$ and $C=2^5-A=13=01101_2$ (notice the added zero)
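The claim, and the digit-sum bookkeeping above, are easy to confirm for the first several $n$ (`hamming_a` is my name for the sequence):

```python
def hamming_a(n):
    """a_n = (2**phi(3**n) - 1) // 3**n, using phi(3**n) = 2 * 3**(n-1)."""
    e = 2 * 3**(n - 1)
    num = 2**e - 1
    assert num % 3**n == 0
    return num // 3**n

for n in range(1, 11):
    assert bin(hamming_a(n)).count('1') == 3**(n - 1)
```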
| {
"language": "en",
"url": "https://mathoverflow.net/questions/396297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Irrationality of this trigonometric function I'd like to prove the following conjecture.
Let $x = \frac{p}{q}\pi$ be a rational angle ($p,q$ integers, $q \geq 1$).
Then
$f(x) = \frac{2}{\pi} \arccos{\left(2\cos^4(2x)-1 \right)}$
is irrational if $x$ is not an integer multiple of $\frac{\pi}{8}$. Is this true? Is the other direction true too?
I've tried to use the standard manipulations, Chebyshev polynomials, etc. (see https://math.stackexchange.com/questions/398687/how-do-i-prove-that-frac1-pi-arccos1-3-is-irrational), but am otherwise quite stuck.
Some evaluations just to see a trend (we can restrict domain of $x$ to be $[0,\pi/2]$):
\begin{align}
x & = \frac{\pi}{2} \implies f(x)= 0 \text{ (Rational) }, \\
x & = \frac{\pi}{3}, \implies f(x) = 2\arccos(-7/8)/\pi, \\
x & = \frac{\pi}{4}, \implies f(x) = 2 \text{ (Rational) }, \\
x & = \frac{\pi}{5}, \implies f(x) = 2\arccos(-3(3+\sqrt{5})/16)/\pi, \\
x & = \frac{2\pi}{5}, \implies f(x) = 2\arccos(3(-3+\sqrt{5})/16)/\pi, \\
x & = \frac{\pi}{6}, \implies f(x) = 2\arccos(-7/8)/\pi, \\
x & = \frac{\pi}{7}, \implies f(x) = 2\arccos(-1+2\sin^4(3\pi/14))/\pi \\
x & = \frac{2\pi}{7}, \implies f(x) = 2\arccos(-1+2\sin^4(\pi/14))/\pi \\
x & = \frac{3\pi}{7}, \implies f(x) = 2\arccos(-1+2\sin^4(5\pi/14))/\pi \\
x & = \frac{\pi}{8}, \frac{3\pi}{8}, \implies f(x) = 4/3 \text{ (Rational) }\\
\cdots
\end{align}
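The tabulated values are easy to spot-check numerically; a short script (my addition, not part of the original question) confirming several of them:

```python
import math

def f(x):
    """f(x) = (2/pi) * arccos(2*cos(2x)^4 - 1)."""
    return 2 * math.acos(2 * math.cos(2 * x)**4 - 1) / math.pi

pi = math.pi
assert math.isclose(f(pi / 2), 0, abs_tol=1e-12)           # rational value 0
assert math.isclose(f(pi / 4), 2)                          # rational value 2
assert math.isclose(f(pi / 8), 4 / 3)                      # rational value 4/3
assert math.isclose(f(3 * pi / 8), 4 / 3)                  # rational value 4/3
assert math.isclose(f(pi / 3), 2 * math.acos(-7 / 8) / pi)
assert math.isclose(f(pi / 6), 2 * math.acos(-7 / 8) / pi)
assert math.isclose(f(pi / 5), 2 * math.acos(-3 * (3 + math.sqrt(5)) / 16) / pi)
```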
| Lemma. Let $n$ be a positive integer such that each number in the open interval $(n/4,3n/4)$ is not coprime with $n$. Then $n\in \{1,4,6\}$.
Proof.
*
*If $n=2m+1$ is odd, and $m\geqslant 1$, then $m\in (n/4,3n/4)$.
*If $n=4m+2$ and $m>1$, then $2m-1\in (n/4,3n/4)$
*If $n=2$, then $1\in (n/4,3n/4)$
*If $n=4m$ and $m>1$, then $2m-1\in (n/4,3n/4)$
Now assume that $f(x)=4t$ with rational $t$. Then $2\cos^42x-1=\cos (2\pi t)$ and $2\cos^42x=2\cos^2(\pi t)$, $\cos \pi t=\pm \cos^2 2x$. Replacing $t$ by $t+1$ if necessary, we may suppose that $\cos \pi t=\cos^22x$. Let $\pi t=2\pi a/b$, $2x=2\pi c/b$ for integers $a,b,c$ with no common factor (not necessarily pairwise coprime). In other words, let $b$ be a minimal positive integer for which $\pi tb$, $2xb$ are both divisible by $2\pi$. Denote $w=\exp(2\pi i/b)$. In these notations we get $$\frac{w^a+w^{-a}}2=\frac{(w^c+w^{-c})^2}4.$$
This is a polynomial relation for $w$ with rational coefficients. Thus, all algebraic conjugates of $w$ satisfy it as well. These algebraic conjugates are all primitive roots of unity of degree $b$ (here I use the well-known fact that the cyclotomic polynomial $\Phi_b$ is irreducible). So, we are allowed to replace $w$ by $w^m$, where $\gcd (m,b)=1$. The RHS remains non-negative, therefore so does the LHS, and we get
$\cos (2\pi am/b)\geqslant 0$ for all $m$ coprime to $b$.
This is rare. Namely, denote $a=da_1$, $b=db_1$ where $d=\gcd(a,b)$ and $\gcd(a_1,b_1)=1$. Let $k$ be an arbitrary integer coprime with $b_1$. Denote
$m=k+Nb_1$, where $N$ equals the product of those prime divisors of $d$ which do not divide $k$. Now each prime divisor of $d$ divides exactly one of the numbers $k$ and $Nb_1$, therefore it does not divide their sum $m$. Clearly $m$ is also coprime with $b_1$, so altogether $\gcd(m,b)=1$. Therefore $\cos (2\pi a_1k/b_1)=\cos(2\pi a_1m/b_1)=\cos (2\pi am/b)$ is non-negative. Next, $a_1k$ takes all residues modulo $b_1$ which are coprime to $b_1$. It follows that there are no residues coprime with $b_1$ in the interval $(b_1/4,3b_1/4)$. Thus by the Lemma we get $b_1\in \{1,4,6\}$, and $\cos \pi t=\cos 2\pi a/b=\cos 2\pi a_1/b_1\in \{0,1/2,1\}$ and $\cos 2x=\pm \sqrt{\cos \pi t}\in \{0,\pm \sqrt{2}/2,\pm 1\}$. This indeed implies that $2x$ is divisible by $\pi/4$; in other words, $x$ is divisible by $\pi/8$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/397285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
EM-wave equation in matter from Lagrangian Note
I am not sure if this post is of relevance for this platform, but I already asked the question in Physics Stack Exchange and in Mathematics Stack Exchange without success.
Setup
Let's suppose a homogeneous dielectric with a (spatially) local dielectric response function $\underline{\underline{\epsilon}} (\omega)$ (in general a tensor), such that we have the linear response relation $$\mathbf{D}(\mathbf{x}, \omega) = \underline{\underline{\epsilon}} \left(\omega \right) \mathbf{E}(\mathbf{x}, \omega) \, ,$$ for the displacement field $\mathbf{D}$ in the dielectric.
We can now write down a Lagrangian in Fourier space, describing the EM-field coupling to the dielectric body
$$\mathcal{L}=\frac{1}{2}\left[\mathbf{E}^{*}\left(x,\omega\right) \cdot (\underline{\underline{\epsilon}} \left(\omega \right)-1) \mathbf{E}\left(x,\omega\right)+|\mathbf{E}|^{2}\left(x,\omega\right)-|\mathbf{B}|^{2}\left(x,\omega\right) \right] \, .$$
If we choose a gauge
\begin{align}
\mathbf{E} &= \frac{i \omega}{c} \mathbf{A} \\
\mathbf{B} &= \nabla \times \mathbf{A} \, ,
\end{align}
such that we can write the Lagrangian (suppressing arguments) in terms of the vector potential $\mathbf{A}$ as
$$\mathcal{L} =\frac{1}{2}\left[\frac{\omega^2}{c^2} \mathbf{A}^{*} \cdot (\underline{\underline{\epsilon}}- \mathbb{1}) \mathbf{A}+ \frac{\omega^2}{c^2} |\mathbf{A}|^{2}-|\nabla \times \mathbf{A}|^{2}\right] \, . $$
And consequently we have the physical action
$$
S[\mathbf{A}] = \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \mathcal{L} \left(\mathbf{A}\right) \, .
$$
Goal
My goal is to derive the EM-wave equation for the electric field in the dielectric medium.
Idea
So my ansatz is the following: If we use Hamilton's principle, we want the first variation of the action to be zero
\begin{align}
0 = \delta S[\mathbf{A}] &= \left.\frac{\mathrm{d}}{\mathrm{d} \varepsilon} S[\mathbf{A} + \varepsilon \mathbf{h}] \right|_{\varepsilon=0} \\
&= \left.\frac{\mathrm{d}}{\mathrm{d} \varepsilon} \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \mathcal{L} (\mathbf{A} + \varepsilon \mathbf{h}) \right|_{\varepsilon=0} \\ &= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \frac{\omega^2}{c^2} \mathbf{A}^* \cdot ({\underline{\underline{\epsilon}}}-\mathbb{1}) \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{h}^* \cdot ({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} + \frac{\omega^2}{c^2} \mathbf{A}^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{h}^* \cdot \mathbf{A} \\ &\quad \quad \quad \quad \quad \quad \quad \quad- (\nabla \times \mathbf{A}^* ) \cdot ( \nabla \times \mathbf{h}) - (\nabla \times \mathbf{h}^* ) \cdot ( \nabla \times \mathbf{A}) \Bigg) \\
&= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] \cdot \mathbf{h}^* + \frac{\omega^2}{c^2} \mathbf{A}^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{A} \cdot \mathbf{h}^* \\ &\quad \quad \quad \quad \quad \quad \quad \quad- (\nabla \times \nabla \times \mathbf{A}^* ) \cdot \mathbf{h} - (\nabla \times \nabla \times \mathbf{A} ) \cdot \mathbf{h}^* \Bigg) \\
&= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \underbrace{\left[ \frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* + \frac{\omega^2}{c^2} \mathbf{A}^* - \nabla \times \nabla \times \mathbf{A}^* \right]}_{\stackrel{!}{=} 0} \cdot \mathbf{h} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \underbrace{\left[ \frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] + \frac{\omega^2}{c^2} \mathbf{A} - \nabla \times \nabla \times \mathbf{A} \right]}_{\stackrel{!}{=} 0} \cdot \mathbf{h}^* \Bigg) \, ,
\end{align}
for all $\mathbf{h}(\mathbf{x}, \omega)$. And consequently we get the equations
\begin{align}
\frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* + \frac{\omega^2}{c^2} \mathbf{A}^* - \nabla \times \nabla \times \mathbf{A}^* &= 0 \\
\frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] + \frac{\omega^2}{c^2} \mathbf{A} - \nabla \times \nabla \times \mathbf{A} &= 0 \, .
\end{align}
If we suppose a lossy dielectric body, such that $\underline{\underline{\epsilon}}^{\dagger} \neq \underline{\underline{\epsilon}}$, the equations are in contradiction.
Time-domain
An analogues derivation in the time-domain (which I can post here on request), yields the wave equation (in Fourier space)
$$\frac{\omega^2}{2 c^2} (\underline{\underline{\epsilon}}-1) \mathbf{A} + \frac{\omega^2}{2 c^2} (\underline{\underline{\epsilon}}^{\dagger} -1) \mathbf{A} + \frac{\omega^2}{c^2} \mathbf{A} - \left( \nabla \times \nabla \times \mathbf{A} \right) = 0 \, .$$
This result also does not resemble the expected one when $\underline{\underline{\epsilon}}^{\dagger} \neq \underline{\underline{\epsilon}}$.
Question
What went wrong in the calculation?
| In the Lagrangian he obtained, there is a term that reflects the interaction of the electromagnetic field with matter, which has the form of equation (2) in this article (to within a multiplicative factor: the derivation of the energy per unit volume), but with a different factor which contains the tensor and its Hermitian conjugate.
We do not know how he obtained this Lagrangian.
For more information, there is this link: https://www.worldscientific.com/doi/pdf/10.1142/9789811203145_0001 where the relations (1,94), (1,95) express the negligent (absence) of the absorption.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/404380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Combinatorics: on the number of Celtic knots in an n × m frame Everybody has seen a Celtic knot like the ones below.
Mathematically speaking a rectangular Celtic knot of size $(n, m)$ may be built as:
We draw the boundaries of a $2n \times 2m$ rectangle.
Then we draw some barriers, which are horizontal or vertical segments whose starting, ending and crossing points all have integer even-sum coordinates.
Let $K_{n, m}$ be the number of Celtic knots of size $(n, m)$, two knots that differ only by rotations and reflections are considered the same.
Of course $K_{0, 0} = 1$, $K_{1, 1} = 1$, $K_{n, m} = K_{m, n}$.
I have been able to calculate $K_{1, n}$, but failed to calculate $K_{2, 2}$.
Any idea?
Explanation of the examples:
In #8 the green barrier starts at point $(0, 2)$ whose sum is even, and ends at point $(4, 2)$ whose sum is even. The orange barrier starts at point $(1, 1)$ whose sum is even and ends at point $(3, 1)$ whose sum is even.
In #18 the green barrier starts at point $(1, 3)$ whose sum is even, and ends at point $(3, 3)$ whose sum is even. The orange barriers start at point $(3, 1)$ whose sum is even and ends at point $(3, 3)$ whose sum is even.
In the third example, the green and the red barriers cross each other at point $(2, 3)$ whose sum is odd, hence not valid.
The 21 $K_{2, 2}$ I know:
Examples of the construction:
Celtic knots:
| Thanks to Brian, I've been able to solve the general problem.
I wrote a 9-page document to explain it all, unfortunately I cannot upload it here, so I give you the final results (the formulas could be written with just one formula but it would be a lot more confusing)
Non Square Celtic Knot
$\begin{array} {|c|r|l|}\hline
n \text{ even} / m \text{ even} & K_{(n, m)} = & \dfrac{3^{2nm - (n+m)} + 3^\frac{2nm - (n+m)}{2} + 3^\frac{2nm - m}{2} + 3^\frac{2nm - n}{2}}{4} \\ \hline
n \text{ even} / m \text{ odd} & K_{(n, m)} = & \dfrac{ 3^{2nm - (n+m)}+ 3^\frac{2nm - (n+m)+1}{2} + 3^\frac{2nm - m -1}{2} + 3^\frac{2nm - n}{2}}{4} \\ \hline
n \text{ odd} / m \text{ even} & K_{(n, m)} = & \dfrac{3^{2nm - (n+m)} + 3^\frac{2nm - (n+m)+1}{2} + 3^\frac{2nm - m}{2} + 3^\frac{2nm - n - 1}{2}}{4} \\ \hline
n \text{ odd} / m \text{ odd} & K_{(n, m)} = & \dfrac{ 3^{2nm - (n+m)} + 3^\frac{2nm - (n+m)}{2} + 3^\frac{2nm - m -1}{2} + 3^\frac{2nm - n - 1}{2}}{4} \\ \hline
\end{array}$
Square Celtic Knots
$\begin{array} {|c|r|l|}\hline
\rule[-3ex]{0pt}{8ex}n \text{ even} & K_{(n, n)} = & \dfrac{3^{2n^2 - 2n} + 2\cdot 3^{\frac{n(n-1)}{2}} + 3\cdot 3^{n(n-1)} + 2\cdot 3^{\frac{2n^2-n}{2}}}{8} \\ \hline
\rule[-3ex]{0pt}{8ex}n \text{ odd} & K_{(n, n)} = & \dfrac{3^{2n^2 - 2n} + 2\cdot 3^{\frac{n(n-1)}{2}} + 3\cdot 3^{n(n-1)} + 2\cdot 3^{\frac{2n^2-n-1}{2}}}{8} \\ \hline
\end{array}$
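A direct transcription of these closed forms into Python (my own sketch; it routes the square case to the square formulas) reproduces the counts quoted above, e.g. $K_{1,1}=1$ and $K_{2,2}=21$:

```python
# Transcription of the closed forms above (all integer divisions are exact).
def K_rect(n, m):
    if n == m:
        return K_square(n)
    E = 3**(2*n*m - (n + m))
    if n % 2 == 0 and m % 2 == 0:
        S = 3**((2*n*m - (n + m)) // 2) + 3**((2*n*m - m) // 2) + 3**((2*n*m - n) // 2)
    elif n % 2 == 0:  # n even, m odd
        S = 3**((2*n*m - (n + m) + 1) // 2) + 3**((2*n*m - m - 1) // 2) + 3**((2*n*m - n) // 2)
    elif m % 2 == 0:  # n odd, m even
        S = 3**((2*n*m - (n + m) + 1) // 2) + 3**((2*n*m - m) // 2) + 3**((2*n*m - n - 1) // 2)
    else:             # both odd
        S = 3**((2*n*m - (n + m)) // 2) + 3**((2*n*m - m - 1) // 2) + 3**((2*n*m - n - 1) // 2)
    return (E + S) // 4

def K_square(n):
    E = 3**(2*n*n - 2*n) + 2 * 3**(n*(n - 1) // 2) + 3 * 3**(n*(n - 1))
    if n % 2 == 0:
        E += 2 * 3**((2*n*n - n) // 2)
    else:
        E += 2 * 3**((2*n*n - n - 1) // 2)
    return E // 8

assert K_square(1) == 1               # K_{1,1} = 1
assert K_square(2) == 21              # matches "The 21 K_{2,2} I know"
assert K_rect(2, 3) == K_rect(3, 2)   # symmetry K_{n,m} = K_{m,n}
```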
| {
"language": "en",
"url": "https://mathoverflow.net/questions/412380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Resultant of linear combinations of Chebyshev polynomials of the second kind The Chebyshev polynomial $U_n(x)$ of the second kind is characterized by
$$
U_n(\cos\theta)=\frac{\sin(n+1)\theta}{\sin(\theta)}.
$$
It seems that
$$\operatorname*{Res}_x \left( U_n(x)+tU_{n-1}(x),\sum_{k=0}^{n-1}U_k(x) \right) =(-1)^{\frac{n(n-1)}{2}} t^{\left\lfloor\frac{n}{2} \right\rfloor}2^{n(n-1)},$$
where Res denotes the resultant of two polynomials.
I do not know how to prove this "equality". Is it a known result?
Similar results appeared in Dilcher and Stolarsky [1], Theorem 2, and in Jacobs, Rayes and Trevisan [2].
References
[1] Karl Dilcher and Kenneth Stolarsky, "Resultants and discriminants of Chebyshev and related polynomials", Transactions of the American Mathematical Society, 357, pp. 965-981 (2004), MR2110427, Zbl 1067.12001.
[2] David P. Jacobs, Mohamed O. Rayes, and Vilmar Trevisan, "The resultant of Chebyshev polynomials", Canadian Mathematical Bulletin 54, No. 2, 288-296 (2011), MR2884245, Zbl 1272.12006.
| Since $U_n + t U_{n-1}$ is of degree $n$ and $\sum_{k=0}^{n-1} U_k$ is of degree $n-1$ with leading coefficient $2^{n-1}$, the resultant factors as
$$ 2^{n(n-1)} (-1)^{n(n-1)} \prod_{j=1}^{n-1} (U_n(x_j) + t U_{n-1}(x_j))$$
where $x_1,\dots,x_{n-1}$ are the zeroes of $\sum_{k=0}^{n-1} U_k$.
Fortunately, these zeroes can be located explicitly using the usual trigonometric addition and subtraction identities. Telescoping the trig identity $\sin k \theta = -\frac{\cos\left(k+\frac{1}{2}\right) \theta - \cos\left(k-\frac{1}{2}\right) \theta}{2 \sin \frac{\theta}{2} }$ we conclude that
$$ \sum_{k=0}^{n-1} U_k(\cos \theta) = -\frac{\cos\left(\left(n+\frac{1}{2}\right) \theta\right) - \cos\left(\frac{\theta}{2}\right)}{2 \sin \theta \sin \frac{\theta}{2}} = \frac{\sin\left(\frac{n}{2} \theta\right) \sin\left(\frac{n+1}{2} \theta\right)}{2 \cos \frac{\theta}{2} \sin^2 \frac{\theta}{2}}$$
and so the $n-1 = \lfloor \frac{n}{2} \rfloor + \lfloor \frac{n-1}{2} \rfloor$ zeroes of $\sum_{k=0}^{n-1} U_k$ take the form $\cos( \frac{2\pi j}{n+1} )$ for $1 \leq j < (n+1)/2$ and $\cos( \frac{2\pi j}{n} )$ for $1 \leq j < n/2$.
Since the first class $\cos( \frac{2\pi j}{n+1} )$ of zeroes are also zeroes of $U_n$, and the second class $\cos( \frac{2\pi j}{n} )$ are zeroes of $U_{n-1}$, the resultant therefore simplifies to
$$ 2^{n(n-1)} (-1)^{n(n-1)} t^{\lfloor \frac{n}{2} \rfloor}
\prod_{1 \leq j < \frac{n+1}{2}} U_{n-1}\left( \cos\left( \frac{2\pi j}{n+1} \right) \right)
\prod_{1 \leq j < \frac{n}{2}} U_{n}\left( \cos\left( \frac{2\pi j}{n} \right) \right).$$
But
$$ U_{n-1}\left( \cos\left( \frac{2\pi j}{n+1} \right) \right) = \left. \sin\left(n\frac{2\pi j}{n+1}\right) \right/ \sin\left(\frac{2\pi j}{n+1}\right) = -1$$
and similarly
$$ U_{n}\left( \cos\left( \frac{2\pi j}{n} \right) \right) = \left. \sin\left((n+1)\frac{2\pi j}{n}\right) \right/ \sin\left(\frac{2\pi j}{n}\right) = +1$$
and the claim then follows after counting up the signs.
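The two evaluation facts at the end of the argument, together with the claimed location of the zeros of $\sum_{k<n} U_k$, can be verified numerically (a sketch I added, not part of the original answer):

```python
import math

def U(n, x):
    """Chebyshev U_n(x) via U_n(cos t) = sin((n+1)t) / sin(t), for sin(t) != 0."""
    t = math.acos(x)
    return math.sin((n + 1) * t) / math.sin(t)

for n in range(3, 9):
    for j in range(1, n // 2 + 1):              # first class: 1 <= j < (n+1)/2
        x = math.cos(2 * math.pi * j / (n + 1))     # a zero of U_n
        assert abs(sum(U(k, x) for k in range(n))) < 1e-8   # zero of sum_{k<n} U_k
        assert abs(U(n - 1, x) + 1) < 1e-8                  # U_{n-1} = -1 there
    for j in range(1, (n - 1) // 2 + 1):        # second class: 1 <= j < n/2
        x = math.cos(2 * math.pi * j / n)           # a zero of U_{n-1}
        assert abs(sum(U(k, x) for k in range(n))) < 1e-8   # zero of sum_{k<n} U_k
        assert abs(U(n, x) - 1) < 1e-8                      # U_n = +1 there
```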
| {
"language": "en",
"url": "https://mathoverflow.net/questions/427585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Trivial (?) product/series expansions for sine and cosine In an old paper of Glaisher, I find the following formulas:
$$\dfrac{\sin(\pi x)}{\pi x}=1-\dfrac{x^2}{1^2}-\dfrac{x^2(1^2-x^2)}{(1.2)^2}-\dfrac{x^2(1^2-x^2)(2^2-x^2)}{(1.2.3)^2}-\cdots$$
$$\cos(\pi x/2)=1-x^2-\dfrac{x^2(1^2-x^2)}{(1.3)^2}-\dfrac{x^2(1^2-x^2)(3^2-x^2)}{(1.3.5)^2}-\cdots$$
These are trivial since the $n$th partial sum is equal to the $n$th partial product of the product formulas for the sine and cosine.
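This telescoping can be confirmed exactly in rational arithmetic; here is a short check (mine, not Glaisher's) of the first identity, namely that the partial sums of the $\sin(\pi x)/(\pi x)$ series equal the partial products of $\prod_k (1 - x^2/k^2)$:

```python
from fractions import Fraction as F

x2 = F(3, 7)**2          # an arbitrary rational value of x^2
partial_sum = F(1)
partial_product = F(1)
inner = x2               # running product x^2 * (1^2 - x^2) * ... * ((k-1)^2 - x^2)
fact = 1                 # k!
for k in range(1, 12):
    fact *= k
    if k > 1:
        inner *= (k - 1)**2 - x2
    partial_sum -= inner / fact**2
    partial_product *= 1 - x2 / F(k)**2
    assert partial_sum == partial_product   # exact equality at every step
```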
But he also gives
$$\sin(\pi x/2)=x-\dfrac{x(x^2-1^2)}{3!}+\dfrac{x(x^2-1^2)(x^2-3^2)}{5!}-\cdots$$
$$\cos(\pi x/2)=1-\dfrac{x^2}{2!}+\dfrac{x^2(x^2-2^2)}{4!}-\dfrac{x^2(x^2-2^2)(x^2-4^2)}{6!}+\cdots$$
$$\dfrac{\sin(\pi x/3)}{\sqrt{3}/2}=x-\dfrac{x(x^2-1^2)}{3!}+\dfrac{x(x^2-1^2)(x^2-2^2)}{5!}-\cdots$$
$$\cos(\pi x/3)=1-\dfrac{x^2}{2!}+\dfrac{x^2(x^2-1^2)}{4!}-\dfrac{x^2(x^2-1^2)(x^2-2^2)}{6!}+\cdots$$
Apparently he considers them trivial. If they are, please explain and feel free to
downvote me.
| You can derive at least some of these formulas from expansions of $\sin x\theta$ and $\cos x\theta$ as Taylor series in $\sin\theta$,
$$\sin x\theta=x\sin \theta-\frac{x(x^2-1)}{3!}\sin^3\theta+\frac{x(x^2-1)(x^2-3^2)}{5!}\sin^5\theta+-\cdots$$
$$\cos x\theta=1-\frac{x^2}{2!}\sin^2 \theta+\frac{x^2(x^2-2^2)}{4!}\sin^4\theta-\frac{x^2(x^2-2^2)(x^2-4^2)}{6!}\sin^6\theta+-\cdots$$
(For a worked out proof, see this MSE post.)
Substituting $\theta=\pi/2$ gives the $\sin(\pi x/2)$ and $\cos(\pi x/2)$ series. Substituting $\theta=\pi/3$ gives slightly different series than in the OP...
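For odd integer $x$ the sine expansion terminates and recovers the classical multiple-angle polynomial (e.g. $\sin 3\theta = 3\sin\theta - 4\sin^3\theta$); a quick numerical check of this (my sketch, covering only the sine series):

```python
import math

def sin_series(x, s, terms=30):
    """x*s - x(x^2-1)/3! * s^3 + x(x^2-1)(x^2-3^2)/5! * s^5 - ...  (s plays sin(theta))."""
    total, coeff = 0.0, float(x)
    for m in range(terms):
        total += (-1)**m * coeff / math.factorial(2*m + 1) * s**(2*m + 1)
        coeff *= x*x - (2*m + 1)**2   # becomes 0 once 2m+1 = x, so the series terminates
    return total

for n in (3, 5, 7):
    for theta in (0.3, 0.9, 1.4):
        assert math.isclose(sin_series(n, math.sin(theta)), math.sin(n * theta), abs_tol=1e-12)
```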
| {
"language": "en",
"url": "https://mathoverflow.net/questions/432026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Lucas–Lehmer test and triangle of coefficients of Chebyshev's In the Lucas–Lehmer test with $ \quad p \quad $ an odd prime.
we know that $ \quad S_0=4 \quad $ and $ \quad S_i=S_{i-1}^2-2 \quad $ for $\quad i>0 \quad$
$M_p=2^p-1 \quad$ is prime if $ \quad S_{p-2} \equiv 0 \bmod {(2^p-1)}$
After some observations I found a link with the triangle of coefficients of Chebyshev polynomials, OEIS A053122.
$a(n,k)=(-1)^{n-k}\left(\begin{matrix} n+1+k \\ 2 \cdot k+1 \end{matrix}\right)$
from which
$S_{i+1} \cdot S_{i+2}\cdot \quad \ldots \quad \cdot S_{i+m} = \sum\limits_{k=0}^{2^m-1}{(-1)^{k+1}\left(\begin{matrix} 2^m+k \\ 2 \cdot k+1 \end{matrix}\right) \cdot (S_i^2)^k }$
Can anyone help me find any reference or pointer on how to derive this result?
| By the composition property of Chebyshev polynomials $T_m(T_n(x))=T_{mn}(x)$. Since $x^2-2 = 2T_2(\tfrac{x}2)$, we have $S_i = 2T_{2^i}(2)$ for all $i\geq 0$.
Furthermore, since $T_{2^k}(x) = \frac{U_{2^{k+1}-1}(x)}{2U_{2^{k}-1}(x)}$, we have
\begin{split} S_{i+1}\cdot S_{i+2}\cdots S_{i+m} &= 2^m T_{2^{i+1}}(2)\cdot T_{2^{i+2}}(2)\cdots T_{2^{i+m}}(2) \\
&=\frac{U_{2^{m+i+1}-1}(2)}{U_{2^{i+1}-1}(2)} \\
&= U_{2^m-1}(T_{2^{i+1}}(2))\\
&= \frac{U_{2^{m+1}-1}(T_{2^i}(2))}{2T_{2^i}(2)}\\
&= \frac{U_{2^{m+1}-1}(\tfrac{S_i}2)}{S_i}.
\end{split}
and it remains to use an explicit expression for $U_{2^{m+1}-1}(x)$ to obtain the formula in question.
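The identity from the question is easy to test by machine for small $i$ and $m$ (a check I added; it is not a proof):

```python
from math import comb

# S_0 = 4, S_i = S_{i-1}^2 - 2
S = [4]
for _ in range(7):
    S.append(S[-1]**2 - 2)

# S_{i+1} * ... * S_{i+m} = sum_{k=0}^{2^m-1} (-1)^(k+1) C(2^m+k, 2k+1) (S_i^2)^k
for i in range(0, 3):
    for m in range(1, 4):
        lhs = 1
        for j in range(i + 1, i + m + 1):
            lhs *= S[j]
        rhs = sum((-1)**(k + 1) * comb(2**m + k, 2*k + 1) * (S[i]**2)**k
                  for k in range(2**m))
        assert lhs == rhs
```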
| {
"language": "en",
"url": "https://mathoverflow.net/questions/439313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Relation between full elliptic integrals of the first and third kind I am working on a calculation involving the Ronkin function of a hyperplane in 3-space.
I get a horrible matrix with full elliptic integrals as entries. A priori I know that the matrix is symmetric, and that gives me a relation between full elliptic integrals of the first and third kind.
I cannot find transformations in the literature that explain the relation, and I think I need one in order to simplify my matrix.
The relation
With the notation
$\operatorname{K}(k) = \int_0^{\frac{\pi}{2}}\frac{d\varphi}{\sqrt{1-k^2\sin^2\varphi}},$
$\qquad$ $\Pi(\alpha^2,k)=\int_0^{\frac{\pi}{2}}\frac{d\varphi}{(1-\alpha^2\sin^2\varphi)\sqrt{1-k^2\sin^2\varphi}}$
$k^2 = \frac{(1+a+b-c)(1+a-b+c)(1-a+b+c)(-1+a+b+c)}{16abc},\quad a,b,c > 0$
the following is true:
$2\frac{(1+a+b-c)(1-a-b+c)(a-b)}{(a-c)(b-c)}\operatorname{K}(k)+$
$(1-a-b+c)(1+a-b-c)\Pi\left( \frac{(1+a-b+c)(-1+a+b+c)}{4ac},k\right) +$
$\frac{(a+c)(1+b)(1-a-b+c)(-1-a+b+c)}{(a-c)(-1+b)}\Pi\left( \frac{(1+a-b+c)(-1+a+b+c)(a-c)^2}{4ac(1-b)^2},k\right)+$
$(1-a-b+c)(-1+a+b+c)\Pi\left( \frac{(1-a+b+c)(-1+a+b+c)}{4ac},k\right)+$
$\frac{(1+a)(b+c)(-1-a+b+c)(-1+a+b-c)}{(1-a)(c-b)}\Pi\left( \frac{(1-a+b+c)(-1+a+b+c)(b-c)^2}{4ac(1-a)^2},k\right)$
$=0$.
Is there some addition formula or transformation between elliptic integrals of the first and third kind that will explain this?
| Are you sure that $a$, $b$, and $c$ have no relations among them?
I ask this because I attempted to (painstakingly!) copy your proposed equation over to Mathematica (taking care of the fact that the functions in it take the modulus $m=k^2$ as argument), and I keep getting complex results instead of 0.
On the other hand, your problem might actually be easier to solve if you use Carlson's symmetric elliptic integrals. Maybe you can try re-expressing the Legendre forms as Carlson forms?
| {
"language": "en",
"url": "https://mathoverflow.net/questions/14898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
solution of the equation $a^2+pb^2-2c^2-2kcd+(p+k^2)d^2=0$ i am wondering if there is a complete solution for the equation $a^2+pb^2-2c^2-2kcd+(p+k^2)d^2=0$ in which $a,b,c,d,k$ are integer(not all zero) and $p$ is odd prime.
| For the equation:
$$a^2+pb^2+(p+k^2)z^2=2c^2+2kcz$$
If $k$ in the problem is arbitrary, and $p$ is of the form $p=\frac{t^2}{2}-1$,
Then the solution can be written:
$$a=\pm{t}n^2+2(tpr\mp(p+1)kj)ns-(2p(p+1)kjr\pm{t}((p+1)(p+k^2)j^2+pr^2))s^2$$
$$b=\pm{t}n^2-2(tr\pm(p+1)kj)ns+(2(p+1)kjr\mp{t}((p+1)(p+k^2)j^2+pr^2))s^2$$
$$z=2(p+1)j((p+1)kjs-tn)s$$
$$c=(p+1)(n^2+((p+1)(p+k^2)j^2+pr^2)s^2)$$
$n,s,j,r$ are integers that we may choose freely.
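This family can be machine-checked. The following loop (my addition, using the upper choice of signs and $t=4$, so that $p=\frac{t^2}{2}-1=7$ is an odd prime) verifies that the parametrization satisfies the equation for a range of small parameter values:

```python
# Spot-check (upper signs) of the parametric family above for t = 4, p = t^2/2 - 1 = 7.
t = 4
p = t*t // 2 - 1                      # p = 7, an odd prime
for k in range(0, 4):
    for n in range(1, 3):
        for s in range(1, 3):
            for j in range(0, 3):
                for r in range(1, 3):
                    a = t*n*n + 2*(t*p*r - (p + 1)*k*j)*n*s \
                        - (2*p*(p + 1)*k*j*r + t*((p + 1)*(p + k*k)*j*j + p*r*r))*s*s
                    b = t*n*n - 2*(t*r + (p + 1)*k*j)*n*s \
                        + (2*(p + 1)*k*j*r - t*((p + 1)*(p + k*k)*j*j + p*r*r))*s*s
                    z = 2*(p + 1)*j*((p + 1)*k*j*s - t*n)*s
                    c = (p + 1)*(n*n + ((p + 1)*(p + k*k)*j*j + p*r*r)*s*s)
                    assert a*a + p*b*b + (p + k*k)*z*z == 2*c*c + 2*k*c*z
```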
If the number can be represented as $p=3k^2-t^2$, this solution applies when the coefficients are related through the Pell equation $t^2-3k^2=-p$.
To simplify the calculations we make the following substitution.
$$x=(\pm{t}-2k)n^2+2j(t\mp3k)ns-(2kj^2+2kpe^2\pm{t}(pe^2-2j^2))s^2$$
$$y=(\pm{t}-2k)n^2+2j(2t\mp3k)ns-(8kj^2+2kpe^2\pm{t}(pe^2-2j^2))s^2$$
$$r=2e(tn-3kjs)s$$
$$f=n^2+(pe^2-2j^2)s^2$$
Then the solution can be written:
$$a=pr^2+(p+k^2)f^2-xy$$
$$b=r(x+y)$$
$$z=f(x+y)$$
$$c=pr^2+(p+k^2)f^2+x^2$$
$n,s,e,j$ are integers that we may choose freely.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/38354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Any sum of 2 dice with equal probability The question is the following: Can one create two nonidentical loaded 6-sided dice such that when one throws with both dice and sums their values the probability of any sum (from 2 to 12) is the same? I said nonidentical because it's easy to verify that with identical loaded dice it's not possible.
Formally: Let's say that $q_{i}$ is the probability that we throw $i$ on the first die and $p_{i}$ is the same for the second die. $p_{i},q_{i} \in [0,1]$ for all $i \in 1\ldots 6$. The question is whether, with these constraints, there are $q_{i}$s and $p_{i}$s that satisfy the following equations:
$ q_{1} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{2} + q_{2} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{3} + q_{2} \cdot p_{2} + q_{3} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{4} + q_{2} \cdot p_{3} + q_{3} \cdot p_{2} + q_{4} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{5} + q_{2} \cdot p_{4} + q_{3} \cdot p_{3} + q_{4} \cdot p_{2} + q_{5} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{6} + q_{2} \cdot p_{5} + q_{3} \cdot p_{4} + q_{4} \cdot p_{3} + q_{5} \cdot p_{2} + q_{6} \cdot p_{1} = \frac{1}{11}$
$ q_{2} \cdot p_{6} + q_{3} \cdot p_{5} + q_{4} \cdot p_{4} + q_{5} \cdot p_{3} + q_{6} \cdot p_{2} = \frac{1}{11}$
$ q_{3} \cdot p_{6} + q_{4} \cdot p_{5} + q_{5} \cdot p_{4} + q_{6} \cdot p_{3} = \frac{1}{11}$
$ q_{4} \cdot p_{6} + q_{5} \cdot p_{5} + q_{6} \cdot p_{4} = \frac{1}{11}$
$ q_{5} \cdot p_{6} + q_{6} \cdot p_{5} = \frac{1}{11}$
$ q_{6} \cdot p_{6} = \frac{1}{11}$
I don't really know how to start with this. Any suggestions are welcome.
| You can write a polynomial that encodes the probabilities for each die:
$$ P(x) = p_1 x^1 + p_2 x^2 + p_3 x^3 + p_4 x^4 + p_5 x^5 + p_6 x^6 $$
and similarly
$$ Q(x) = q_1 x^1 + q_2 x^2 + q_3 x^3 + q_4 x^4 + q_5 x^5 + q_6 x^6. $$
Then the coefficient of $x^n$ in $P(x) Q(x)$ is exactly the probability that the sum of your two dice is $n$. As Robin Chapman points out, you want to know if it's possible to have
$$ P(x) Q(x) = (x^2 + \cdots + x^{12})/11 $$
where $P$ and $Q$ are both sixth-degree polynomials with positive coefficients and zero constant term.
For simplicity, I'll let $p(x) = P(x)/x, q(x) = Q(x)/x$. Then we want
$$ p(x) q(x) = (1 + \cdots + x^{10})/11 $$
where $p$ and $q$ are now fifth-degree polynomials. We can rewrite the right-hand side to get
$$ p(x) q(x) = {(x^{11}-1) \over 11(x-1)} $$
or
$$ 11 (x-1) p(x) q(x) = x^{11} - 1. $$
The roots of the right-hand side are the eleventh roots of unity. Therefore the roots of $p$ must be five of the eleventh roots of unity which aren't equal to one, and the roots of $q$ must be the other five.
But the coefficients of $p$ and $q$ are real, which means that their roots must occur in complex conjugate pairs. So $p$ and $q$ must have even degree! Since five is not even, this is impossible.
(This proof would work if you replace six-sided dice with any even-sided dice. I suspect that what you want is impossible for odd-sided dice, as well, but this particular proof doesn't work.)
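The crux — that $1 + x + \cdots + x^{10}$ has no real zeros, so it cannot split into two real factors of odd degree — can be illustrated numerically (my sketch, not part of the original answer):

```python
import cmath

def poly(x):
    """1 + x + x^2 + ... + x^10"""
    return sum(x**k for k in range(11))

for k in range(1, 11):
    root = cmath.exp(2j * cmath.pi * k / 11)   # a nontrivial 11th root of unity
    assert abs(poly(root)) < 1e-9              # it really is a zero of the polynomial
    assert abs(root.imag) > 0.25               # and it is not real
```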
| {
"language": "en",
"url": "https://mathoverflow.net/questions/41310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 2
} |
Is the set of undecidable problems decidable? I would like to know if the set of undecidable problems (within ZFC or other standard system of axioms) is decidable (in the same sense of decidable). Thanks in advance, and I apologize if the question is too basic.
| For every consistent recursively axiomatizable theory $T$ (one of which $ZFC$ is widely believed to be) there exists an integer number $K$ such that the following Diophantine equation (where all letters except $K$ are variables) has no solutions over non-negative integers, but this fact cannot be proved in $T$:
\begin{align}&(elg^2 + \alpha - bq^2)^2 + (q - b^{5^{60}})^2 + (\lambda + q^4 - 1 - \lambda b^5)^2 + \\
&(\theta + 2z - b^5)^2 + (u + t \theta - l)^2 + (y + m \theta - e)^2 + (n - q^{16})^2 + \\
&((g + eq^3 + lq^5 + (2(e - z \lambda)(1 + g)^4 + \lambda b^5 + \lambda b^5 q^4)q^4)(n^2 - n) + \\
&(q^3 - bl + l + \theta \lambda q^3 + (b^5-2)q^5)(n^2 - 1) - r)^2 + \\
&(p - 2w s^2 r^2 n^2)^2 + (p^2 k^2 - k^2 + 1 - \tau^2)^2 + \\
&(4(c - ksn^2)^2 + \eta - k^2)^2 + (r + 1 + hp - h - k)^2 + \\
&(a - (wn^2 + 1)rsn^2)^2 + (2r + 1 + \phi - c)^2 + \\
&(bw + ca - 2c + 4\alpha \gamma - 5\gamma - d)^2 + \\
&((a^2 - 1)c^2 + 1 - d^2)^2 + ((a^2 - 1)i^2c^4 + 1 - f^2)^2 + \\
&(((a + f^2(d^2 - a))^2 - 1) (2r + 1 + jc)^2 + 1 - (d + of)^2)^2 + \\
&(((z+u+y)^2+u)^2 + y-K)^2 = 0.
\end{align}
The set of numbers with this property is not recursively enumerable (let alone decidable). But curiously enough, one can effectively construct an infinite sequence of such numbers from the axioms of $T$.
The equation is derived from Universal Diophantine Equation by James P. Jones.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/81429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Question on a Basel-like sum Hello all,
I have happened upon the following sum:
$ 1^2 + \Big(1 \times \frac{1}{3} + \frac{1}{3} \times 1 \Big)^2 + \Big(1 \times \frac{1}{5} + \frac{1}{3} \times \frac{1}{3} + \frac{1}{5} \times 1 \Big)^2 + \Big(1 \times \frac{1}{7} + \frac{1}{3} \times \frac{1}{5} + \frac{1}{5} \times \frac{1}{3} + \frac{1}{7} \times 1 \Big)^2 + \dots = \frac{\pi^4}{32} $
However, the proof I know is fairly unilluminating: it more or less just falls from the sky. I'm wondering whether anyone can see another way to prove it, in particular whether it is implied by Euler's Basel sum for odd integers:
$ 1+\frac{1}{3^2} + \frac{1}{5^2} + \ldots = \frac{\pi^2}{8}$
The new sum looks like some sort of convolution of Euler's sum, and the value of the first is double the square of the value of the second. If anyone can shed any light, I'll be much obliged.
P.S. For the proof known to me, see
http://www.tandfonline.com/doi/abs/10.1080/10652469.2012.689301
http://arxiv.org/abs/1205.2458
| Interesting problem. Here is my version.
$$\begin{align}
f(u) &= u+ \frac{u^3}{3}+\frac{u^5}{5}+\dots = \frac{1}{2}\log\frac{1+u}{1-u}
\cr
f(u)^2 &= u^2 + \left(1\cdot\frac{1}{3}+\frac{1}{3}\cdot 1\right)u^4 +
\left(1\cdot \frac{1}{5}+\frac{1}{3}\cdot\frac{1}{3}+\frac{1}{5}\cdot 1\right)u^6+\dots
\end{align}$$
out of time now, more later...
added
OK, Noam has now done much of my intended solution. Continuing (avoiding $L$ which I don't know):
$$
\frac{1}{2\pi}\int_{-\pi}^{\pi} f(e^{ix})^2 f(e^{-ix})^2 dx =
1^2 + \left(1\cdot\frac{1}{3}+\frac{1}{3}\cdot 1\right)^2 +
\left(1\cdot \frac{1}{5}+\frac{1}{3}\cdot\frac{1}{3}+\frac{1}{5}\cdot 1\right)^2+\dots
=A
$$
So by trigonometry and symmetry
$$
A=\frac{1}{128\pi}\int_{0}^{\pi/2} \left(\left(\log\cot^2\frac{x}{2}\right)^2+\pi^2\right)^2dx
$$
change variables $z=\log(\cot(x/2))$ to get
$$
A = \frac{1}{128\pi}\int_0^\infty(4z^2+\pi^2)^2 \mathrm{sech} z dz
$$
Then consult Gradshteyn & Ryzhik 3.532 for the identities
$$
\int_0^\infty \mathrm{sech}z dz = \frac{\pi}{2},
\int_0^\infty z^2\mathrm{sech}z dz = \frac{\pi^3}{8},
\int_0^\infty z^4\mathrm{sech}z dz = \frac{5\pi^5}{32},
$$
which can be plugged in to get
$$
A = \frac{\pi^4}{32} .
$$
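The value $A=\pi^4/32$ can also be confirmed by summing the original series directly. Writing the $n$th coefficient as $c_n=\sum_{k=0}^{n-1}\frac{1}{(2k+1)(2n-2k-1)}$, partial fractions give $c_n=\frac1n\sum_{k=0}^{n-1}\frac{1}{2k+1}$, which makes the partial sums cheap to compute (a numerical sketch I added, not part of the answer above):

```python
import math

# Sum of c_n^2 with c_n = H_n / n, where H_n = 1 + 1/3 + ... + 1/(2n-1).
total = 0.0
H = 0.0
for n in range(1, 200001):
    H += 1.0 / (2*n - 1)
    total += (H / n)**2

# The tail after 200000 terms is of order (log N)^2 / N, well below 1e-3.
assert abs(total - math.pi**4 / 32) < 1e-3
```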
| {
"language": "en",
"url": "https://mathoverflow.net/questions/102974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A closed form of infinite products of complex zeros involving $\Im(\rho_n)$. Does a proof of this closed form imply RH? Building on this question scaling the imaginary part of $\rho$s in infinite products, I like to conjecture that:
$$\displaystyle \prod_{n=1}^\infty \left(1- \frac{s}{\mu_n} \right) \left(1- \frac{s}{1-\mu_n} \right)\left(1- \frac{s}{\overline{\mu_n}} \right) \left(1- \frac{s}{\overline{1-\mu_n}} \right)$$
with $\mu_n = a + \Im(\rho_n)x i$ and $a,x \in \mathbb{R},x \ne 0, s \in \mathbb{C}$ and $\rho_n$ the n-th non-trivial zero of $\zeta(z)$,
has the following closed form:
$$\displaystyle H(s,a,x):= \frac{\xi(\frac12 - \frac{a}{x} + \frac{s}{x})}{\xi(\frac12 - \frac{a}{x})} \frac{\xi(\frac12 - \frac{a}{x} + \frac{1}{x} - \frac{s}{x})}{\xi(\frac12 - \frac{a}{x}+ \frac{1}{x})}$$
where $\xi(z) = \frac12 z(z-1) \pi^{-\frac{z}{2}} \Gamma(\frac{z}{2}) \zeta(z)$ is the Riemann xi-function.
If this formula is correct, the 'constructed' zeros $\mu_n$ can be stretched/condensed vertically via $x$ on the imaginary axis and shifted left/right on the real line via $a$. In all cases they would yield an entire function expressed by this closed form (think of it as a reversed application of the Weierstrass factorization theorem, i.e. starting with products of 'constructed' zeros).
Further factorization also seems possible with:
$$\frac{\xi(\frac12 - \frac{a}{x} + \frac{s}{x})}{\xi(\frac12 - \frac{a}{x})} = \prod_{n=1}^\infty \left(1- \frac{s}{\mu_n} \right) \left(1-\frac{s}{\overline{\mu_n}} \right)$$
and
$$\frac{\xi(\frac12 - \frac{a}{x} + \frac{1}{x} - \frac{s}{x})}{\xi(\frac12 - \frac{a}{x}+ \frac{1}{x})} = \prod_{n=1}^\infty \left(1- \frac{s}{1-\mu_n} \right) \left(1- \frac{s}{\overline{1-\mu_n}} \right)$$
When $a=\frac12$ and $x=1$, the formula correctly reduces to:
$$\frac{\xi(s)}{\xi(0)} = \prod_{\rho} \left(1- \frac{s}{\rho} \right) \left(1- \frac{s}{1-\rho} \right)$$
from which the known Hadamard product for $\zeta(s)$ can been derived.
Unfortunately I do not have a proof for this formula; however, I have rigorously checked it against many 'brute force' calculations using the first 2 million $\rho$s (all correct results, but accurate up to 5 decimals at most). I manufactured the formula by replicating the symmetry of the closed form for $\mu_n = a + n x i$ (i.e. running through the integers rather than $\Im(\rho_n)$, see the linked question). Since, to date, all non-trivial zeros appear to lie on the critical line, I have used $\frac12$ as the "source" for all zeros for different $a$'s, i.e. $\frac12 - \frac{a}{x} + \frac{s}{x}$ just inserts $\frac12$ when $\Re(s)=a$. I guess I have thereby implicitly assumed the RH in constructing the formula.
My questions:
*
*Is this a known closed form?
*Does a proof of this closed form imply the RH, i.e. does it "force" the Hadamard product into a "straight jacket" that only allows it to be valid when all $a=\Re(\rho_n)=\frac12$ ?
UPDATE:
Assuming RH is true, I believe that I
have found a nice proof for the
equation in the OP. Since "the
comments section is too small to
contain it", I decided to put it as an
answer to my own question to round it up.
| Below is a proposed proof that, assuming the RH, the following equation is true:
$$\displaystyle \frac{\xi(\frac12 - \frac{a}{x} + \frac{s}{x})}{\xi(\frac12 - \frac{a}{x})} = \prod_{n=1}^\infty \left(1- \frac{s}{\mu_n} \right) \left(1- \frac{s}{\overline{\mu_n}} \right)$$
where $\mu_n = a + i x \gamma_n$ and $\gamma_n = \Im(\rho_n)$, with $\rho_n$ the n-th non-trivial zero of $\zeta(s)$.
Take $t, \gamma_n \in \mathbb{R},a,x,s \in \mathbb{C}$ with $x \ne 0$ and $\gamma_n > 0$.
Starting from Hadamard's proof that:
$$\xi(s) = \xi(0) \prod_{n=1}^\infty \left(1- \frac{s}{\rho_n} \right) \left(1- \frac{s}{1-\rho_n} \right) \qquad (1)$$
with $\xi(s) = \frac12 s(s-1) \pi^{-\frac{s}{2}} \Gamma(\frac{s}{2}) \zeta(s)$ being the Riemann xi-function.
Now we assume the RH and all $\Re(\rho_n)=\frac12$. Take $s=\dfrac{a+ i t}{x}$.
This gives:
$$\xi\left(\dfrac{a+ i t}{x}\right) = \xi(0) \prod_{n=1}^\infty \left(1- \frac{\frac{a}{x} +\frac{i t}{x}}{\frac12 + i \gamma_n} \right) \left(1- \frac{\frac{a}{x} +\frac{i t}{x}}{\frac12 - i \gamma_n} \right)$$
and can be expanded into:
$$\xi\left(\dfrac{a+ i t}{x}\right) = \xi(0) \prod_{n=1}^\infty \left(\frac{\frac12 - \frac{a}{x} + i \gamma_n -\frac{i t}{x}}{\frac12 + i \gamma_n} \right) \left(\frac{\frac12 - \frac{a}{x} - i \gamma_n - \frac{i t}{x}}{\frac12 - i \gamma_n} \right)$$
By multiplying each factor with $\dfrac{(\frac{a}{x} + i \gamma_n)}{(\frac{a}{x} + i \gamma_n)}$ and $\dfrac{(\frac{a}{x} - i \gamma_n)}{(\frac{a}{x} - i \gamma_n)}$ respectively, we get:
$$\displaystyle \xi\left(\dfrac{a+ i t}{x}\right) = \xi(0) \prod_{n=1}^\infty \left(\frac{\frac{a}{x} + i \gamma_n}{\frac12 + i \gamma_n} \right) \left(\frac{\frac{a}{x} - i \gamma_n}{\frac12 - i \gamma_n} \right) \prod_{n=1}^\infty \left(\frac{\frac12 - \frac{a}{x} + i \gamma_n -\frac{i t}{x}}{\frac{a}{x} + i \gamma_n} \right) \left(\frac{\frac12 - \frac{a}{x} - i \gamma_n - \frac{i t}{x}}{\frac{a}{x} - i \gamma_n} \right) $$
$$\displaystyle = \xi(0) \prod_{n=1}^\infty \left(1- \frac{\frac12 -\frac{a}{x}}{\frac12 + i \gamma_n} \right) \left(1- \frac{\frac12-\frac{a}{x}}{\frac12 - i \gamma_n} \right) \prod_{n=1}^\infty \left(1- \frac{\frac{2a}{x}-\frac12+\frac{i t}{x}}{\frac{a}{x} + i \gamma_n} \right) \left(1- \frac{\frac{2a}{x}-\frac12+\frac{i t}{x}}{\frac{a}{x}- i \gamma_n} \right)$$
By now injecting $\dfrac12 - \dfrac{a}{x}$ into equation (1), this can be simplified as:
$$\xi\left(\dfrac{a+ i t}{x}\right) = \xi\left(\frac12 - \frac{a}{x}\right) \prod_{n=1}^\infty \left(1- \frac{\frac{2a}{x}-\frac12+\frac{i t}{x}}{\frac{a}{x} + i \gamma_n}\right) \left(1- \frac{\frac{2a}{x}-\frac12+\frac{i t}{x}}{\frac{a}{x}- i \gamma_n} \right)$$
and since $\dfrac{2a}{x} + \dfrac{i t}{x} = s + \dfrac{a}{x}$ this can be rewritten into:
$$\xi\left(\dfrac{a+ i t}{x}\right) = \xi\left(\frac12 - \frac{a}{x}\right) \prod_{n=1}^\infty \left(1- \frac{x(s+\frac{a}{x}-\frac12)}{a+ i x \gamma_n}\right) \left(1- \frac{x (s+\frac{a}{x}-\frac12)}{a- i x \gamma_n} \right)$$
so that we can now say that:
$$\xi\left(\frac12 - \frac{a}{x} + \frac{s}{x}\right) = \xi\left(\frac12 - \frac{a}{x}\right) \prod_{n=1}^\infty \left(1- \frac{s}{a+ i x \gamma_n} \right) \left(1- \frac{s}{a- i x \gamma_n} \right)$$
and the desired result is obtained:
$$\displaystyle \frac{\xi(\frac12 - \frac{a}{x} + \frac{s}{x})}{\xi(\frac12 - \frac{a}{x})} = \prod_{n=1}^\infty \left(1- \frac{s}{\mu_n} \right) \left(1- \frac{s}{\overline{\mu_n}} \right)$$
When starting from $s = \dfrac{1-(a+i t)}{x}$ the equivalent result is $\displaystyle \prod_{n=1}^\infty \left(1- \frac{s}{1- \mu_n} \right) \left(1- \frac{s}{\overline{1-\mu_n}} \right)$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/117874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Writing a matrix as a sum of two invertible matrices Let $n\geq 2$. Is it true that any $n\times n$ matrix with entries from a given ring (with identity) can be written as a sum of two invertible matrices with entries from the same ring ?
| This may help. Any $2\times 2$ matrix is a sum of four units:
$$ \begin{pmatrix} x & y \\ z & t \end{pmatrix} = \begin{pmatrix} x & 1 \\ -1 & 0 \end{pmatrix} + \begin{pmatrix} 0 & -1 \\ 1 & t \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ z & -1 \end{pmatrix} + \begin{pmatrix} -1 & y \\ 0 & 1 \end{pmatrix}$$
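Over $\mathbb{Z}$ this is easy to machine-check: each summand has determinant $\pm 1$ (hence is invertible over any commutative ring with identity), and the four pieces sum to the target matrix. A quick sketch, with arbitrary sample entries:

```python
def mat_sum(mats):
    # entrywise sum of a list of 2x2 matrices
    return [[sum(m[i][j] for m in mats) for j in range(2)] for i in range(2)]

def det(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

checks = []
for (x, y, z, t) in [(7, -3, 4, 11), (0, 0, 0, 0), (1, 2, 3, 4)]:
    parts = [[[x, 1], [-1, 0]],
             [[0, -1], [1, t]],
             [[1, 0], [z, -1]],
             [[-1, y], [0, 1]]]
    checks.append(mat_sum(parts) == [[x, y], [z, t]]
                  and all(det(p) in (1, -1) for p in parts))
```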
| {
"language": "en",
"url": "https://mathoverflow.net/questions/141382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
On integers as sums of three integer cubes revisited It is easy to find binary quadratic form parameterizations $F(x,y)$ to,
$$a^3+b^3+c^3+d^3 = 0\tag{1}$$
(See the identity (5) described in this MSE post.) To solve,
$$x_1^3+x_2^3+x_3^3 = 1\tag{2}$$
in the integers, all one has to do is to check if one term of $(1)$ can be solved as a Pell-like equation $F_i(x,y) = 1$. For example, starting with the cubes of the taxicab number 1729 as $a,b,c,d = 1,-9,12,-10$, we get,
$$a,b = x^2-11xy+9y^2,\;-9x^2+11xy-y^2$$
$$c,d = 12x^2-20xy+10y^2,\;-10x^2+20xy-12y^2$$
We can then solve $a = x^2-11xy+9y^2 = \pm 1$ since it can be transformed to the Pell equation $p^2-85q^2 = \pm 1$, thus giving an infinite number of integer solutions to $(2)$.
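As a sanity check, the parameterization and the resulting solutions of $(2)$ can be verified mechanically; a sketch in Python (search ranges are arbitrary):

```python
def quad(x, y):
    # the binary quadratic forms from the post
    a = x*x - 11*x*y + 9*y*y
    b = -9*x*x + 11*x*y - y*y
    c = 12*x*x - 20*x*y + 10*y*y
    d = -10*x*x + 20*x*y - 12*y*y
    return a, b, c, d

# (1): a^3 + b^3 + c^3 + d^3 = 0 identically in x, y
identity_ok = all(sum(v**3 for v in quad(x, y)) == 0
                  for x in range(-10, 11) for y in range(-10, 11))

# whenever a = 1, moving a^3 across gives x1^3 + x2^3 + x3^3 = 1
cube_sols = [(-b, -c, -d)
             for x in range(-30, 31) for y in range(-30, 31)
             for (a, b, c, d) in [quad(x, y)] if a == 1]
```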
Question: How easy is it to find a quadratic form parameterization to,
$$x_1^3+x_2^3+x_3^3 = Nx_4^3\tag{3}$$
for $N$ a non-cube integer? I'm sure one can see where I'm getting at. If one can solve,
$$x_4 = c_1x^2+c_2xy+c_3y^2 = \pm 1\tag{4}$$
as a Pell-like equation, then that would prove that,
$$x_1^3+x_2^3+x_3^3 = N\tag{5}$$
is solvable in the integers in an infinite number of ways. (So far, this has only been shown for $N = 1,2$). The closest I've found is a cubic identity for $N = 3$ in a 2010 paper by Choudhry,
$$\begin{aligned}
x_1 &= 2x^3+3x^2y+3xy^2\\
x_2 &= 3x^2y+3xy^2+2y^3\\
x_3 &= -2x^3-3x^2y-3xy^2-2y^3\\
x_4 &= xy(x+y)
\end{aligned}$$
which is a special case of eq.58 in the paper.
Anybody knows how to find a quadratic form parametrization to $(3)$? (If one can be found, hopefully $(4)$ can also be solved.)
| Perhaps if you start with my three-rational-cubes identity
$$
ab^2 = \biggl(\frac{(a^2+3b^2)^3+(a^2-3b^2)(6ab)^2}{6a(a^2+3b^2)^2}\biggr)^{\!3}
- \biggl(\frac{(a^2+3b^2)^2-(6ab)^2}{6a(a^2+3b^2)}\biggr)^{\!3}
- \biggl(\frac{(a^2-3b^2)6ab^2}{(a^2+3b^2)^2}\biggr)^{\!3}
$$
you might be able to find something?
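The identity can be verified exactly with rational arithmetic; a sketch (it needs $a \neq 0$; over $\mathbb{Q}$ the denominators vanish only when $a = b = 0$):

```python
from fractions import Fraction as F

def identity_holds(a, b):
    a, b = F(a), F(b)
    p = a*a + 3*b*b   # a^2 + 3b^2
    q = a*a - 3*b*b   # a^2 - 3b^2
    r = 6*a*b         # 6ab
    t1 = (p**3 + q*r**2) / (6*a*p**2)
    t2 = (p**2 - r**2) / (6*a*p)
    t3 = q*6*a*b*b / p**2
    return t1**3 - t2**3 - t3**3 == a*b*b

checked = all(identity_holds(a, b)
              for a in range(-5, 6) for b in range(-5, 6) if a != 0)
```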
| {
"language": "en",
"url": "https://mathoverflow.net/questions/142654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Solving the quartic equation $r^4 + 4r^3s - 6r^2s^2 - 4rs^3 + s^4 = 1$ I'm working on solving the quartic Diophantine equation in the title. Calculations in maxima imply that the only integer solutions are
\begin{equation}
(r,s) \in \{(-3, -2), (-2, 3), (-1, 0), (0, -1), (0, 1), (1, 0), (2, -3), (3, 2)\}.
\end{equation}
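That search is easy to reproduce; a sketch of a brute-force check over $|r|, |s| \le 50$:

```python
expected = {(-3, -2), (-2, 3), (-1, 0), (0, -1), (0, 1), (1, 0), (2, -3), (3, 2)}
found = {(r, s)
         for r in range(-50, 51) for s in range(-50, 51)
         if r**4 + 4*r**3*s - 6*r**2*s**2 - 4*r*s**3 + s**4 == 1}
```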
Evidently, all the pairs in the set above are solutions, and furthermore if $(r,s)$ is a solution then so is $(-r,-s)$; hence we only need to prove that there are no solutions with $r > 3$. I have factored the equation as both
\begin{align}
4r^2s(3s-2r) &= (r-s)^4-1 = (r-s-1)(r-s+1)\bigl((r-s)^2+1\bigr)
\end{align}
and
\begin{align}
4rs^2(3r+2s) &= (r+s)^4-1 = (r+s-1)(r+s+1)\bigl((r+s)^2+1\bigr),
\end{align}
but don't know where to go from that point. I am hoping there is an elementary solution, even if it's not particularly "simple".
Any help would be appreciated.
Thanks,
Kieren.
EDIT: Note that for all known solutions, $\lvert r + s\rvert = 1$ or $\lvert r - s\rvert = 1$.
EDIT: In a comment, Peter M. pointed out that this can be written as the Pell equation
$$
(r^2+2rs-s^2)^2-2(2rs)^2=1.
$$
Curiously — and perhaps not coincidentally — the fundamental solution to that Pell equation is $(3,2)$, which is also the largest [conjectured] positive integer solution. As the fundamental solution in this case is $(r^2+2rs-s^2,2rs)$, whereas the largest integer solution is $(r,s)$, perhaps there's a way of using that to force some sort of descent or contradiction?
EDIT: Adding $4r^4$ and $4s^4$ to both sides of the equation and factoring yields, respectively
\begin{align*}
(r-s)^2(r+s)(r+5s) &= (2s^2-2s+1)(2s^2+2s+1)
\end{align*}
and
\begin{align*}
(r-s)(r+s)^2(5r-s) &= (2r^2-2r+1)(2r^2+2r+1)
\end{align*}
Note that, in each case, the two factors on the right-hand side are relatively prime (because they're odd, and evidently $\gcd(r,s)=1$). So far, this is the most interesting factorization I've found.
EDIT: Considering the equation modulo $r-s$ and modulo $r+s$, one can (I believe) prove that if a prime $p \mid (r-s)(r+s)$, then $p \equiv 1\!\pmod{4}$.
EDIT: Still holding out for an elementary proof. In addition to the restriction
$$p \mid (r-s)(r+s) \implies p \equiv 1\!\pmod{4},$$
I've found the following list of divisibility restrictions:
\begin{align}
r &\mid (s-1)(s+1)(s^2+1) \\
s &\mid (r-1)(r+1)(r^2+1) \\
(r-s) &\mid (4r^4+1) \\
(r+s) &\mid (4s^4+1) \\
(r+s)^2 &\mid (4r^4+1) \\
(r-s)^2 &\mid (4s^4+1) \\
(r-s-1) &\mid 4(s-2)s(s+1)^2 \\
(r-s+1) &\mid 4(s-1)^2s(s+2) \\
(r+s-1) &\mid 4(s-3)(s-1)s^2 \\
(r+s+1) &\mid 4s^2(s+1)(s+3),
\end{align}
as well as a host of other [less immediately compelling] restrictions. Based on this, I'm hoping to prove that one of $r-s$ or $r+s$ must be $\pm 1$; bonus if I can show that the other divides $5$.
EDIT: I can show that $4s > 3r$. Calculations in maxima suggest that no numbers $r,s$ with $4 \le r \le 13000$ and $r > s \ge 1$ and $r$ odd and $s$ even and $r-s>1$ and $4s>3r$ also satisfy the six divisibility requirements
\begin{align}
r &\mid (s-1)(s+1)(s^2+1) \\
s &\mid (r-1)(r+1)(r^2+1) \\
(r-s-1) &\mid 4(s-2)s(s+1)^2 \\
(r-s+1) &\mid 4(s-1)^2s(s+2) \\
(r+s-1) &\mid 4(s-3)(s-1)s^2 \\
(r+s+1) &\mid 4s^2(s+1)(s+3).
\end{align}
Note that I didn't even need all of the congruences in the previous list. Next I'll run $r$ up to $10^6$ or so. Hopefully, though, I can obtain an algebraic proof of all of this!
EDIT: So far, the best bounds I can prove are $4/3 < r/s < 3/2$.
| Inspired by @PeterMueller, I believe I found a proof that $r = 3$.
Because of how this equation was obtained in the first place, I can assume $s \ge 2$ is even, and $r \ge s+1$ is odd. Writing $s=2v$ and $r=2v+2t+1$, substituting, and factoring yields
\begin{align}
2v(v-2t-1)(2v+2t+1)^2 &= t(t+1)(2t^2+2t+1),
\end{align}
at which point a reverse substitution gives
\begin{align}
(v-2t-1)sr^2 &= t(t+1)(2t^2+2t+1). \qquad(\star)
\end{align}
For any hypothetical solution with $r > 3$, we have $3s > 2r$ (shown earlier). A quick calculation then yields $t < (r-2)/4$, which can be shown to contradict ($\star$).
Does that look right?
Thanks!
Kieren.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/143599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
Taylor series coefficients This question arose in connection with A hard integral identity on MATH.SE.
Let
$$f(x)=\arctan{\left (\frac{S(x)}{\pi+S(x)}\right)}$$
with $S(x)=\operatorname{arctanh} x -\arctan x$, and let
$$f(x)=\sum_{n=0}^\infty a_nx^n=\frac{2}{3\pi}x^3-\frac{4}{9\pi^2}x^6+\frac{2}{7\pi}x^7+\frac{16}{81\pi^3}x^9-\frac{8}{21\pi^2}x^{10}+\ldots$$
be its Taylor series expansion at $x=0$. Some numerical evidence suggests the following interesting properties of the $a_n$ coefficients ($b_n$, $c_n$, $d_n$, $\tilde{c}_n$, $\tilde{d}_n$ are some positive rational numbers, $k>0$ is an integer):
1) $a_n=0$, for $n=4k$.
2) $a_n=\frac{2/n}{\pi}-\frac{b_n}{\pi^5}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+3$.
3) $a_n=-\frac{c_n}{\pi^2}+\frac{d_n}{\pi^6}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+2$.
4) $a_n=\frac{\tilde{c}_n}{\pi^3}-\frac{\tilde{d}_n}{\pi^7}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+1$, $k>1$.
How can these properties (if correct) be proved?
P.S. We have
$$\arctan{\left(1+\frac{2S}{\pi}\right)}-\frac{\pi}{4}=\arctan{\left(\frac{1+2S/\pi-1}{1+(1+2S/\pi)}\right)}=\arctan{\left(\frac{S}{\pi+S}\right)} .$$
Using
$$\arctan(1+x)=\frac{\pi}{4}+\frac{1}{2}x-\frac{1}{4}x^2+\frac{1}{12}x^3-\frac{1}{40}x^5+\frac{1}{48}x^6-\frac{1}{112}x^7+\ldots$$
we get
$$\arctan{\left(\frac{S}{\pi+S}\right)}=\frac{S}{\pi}-\frac{S^2}{\pi^2}+\frac{2S^3}{3\pi^3}-\frac{4S^5}{5\pi^5}+\frac{4S^6}{3\pi^6}-\frac{8S^7}{7\pi^7}+\ldots$$
This proves 2), 3) and 4), because
$$S=2\left(\frac{x^3}{3}+\frac{x^7}{7}+\frac{x^{11}}{11}+\ldots\right)=2\sum_{k=0}^\infty \frac{x^{4k+3}}{4k+3} .$$
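These expansions pass a quick numerical sanity check against the coefficients listed above; a sketch (the tolerance only needs to beat the first omitted term, which is of order $x^{11}$):

```python
from math import atan, atanh, pi

def f(x):
    S = atanh(x) - atan(x)
    return atan(S / (pi + S))

def taylor10(x):
    # partial sum of the Taylor series through x^10, with the listed coefficients
    return (2/(3*pi))*x**3 - (4/(9*pi**2))*x**6 + (2/(7*pi))*x**7 \
           + (16/(81*pi**3))*x**9 - (8/(21*pi**2))*x**10

max_err = max(abs(f(x) - taylor10(x)) for x in (0.02, 0.05, 0.1))
```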
To prove 1), we need to prove the analogous property for $\arctan(1+x)$ and the proof can be based on the formula
$$\frac{d^n}{dx^n}(\arctan x)=\frac{(-1)^{n-1}(n-1)!}{(1+x^2)^{n/2}}\sin{\left (n\,\arcsin{\left(\frac{1}{\sqrt{1+x^2}}\right)}\right )}$$
proved in K. Adegoke and O. Layeni, The Higher Derivatives Of The Inverse Tangent Function and Rapidly Convergent BBP-Type Formulas For Pi, Applied Mathematics E-Notes, 10(2010), 70-75, available at http://www.math.nthu.edu.tw/~amen/2010/090408-2.pdf. This formula enables us to get a closed-form expression
$$\arctan{\left(\frac{S}{\pi+S}\right)}=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\,2^{n/2}\,\sin{\left(\frac{n\pi}{4}\right)}\,\frac{S^n}{\pi^n} .$$
So the initial questions are no longer relevant. However, I'm still interested to know whether one can calculate in closed form the integral $$\int\limits_0^1 S^n(x)\frac{dx}{x} .$$
| $\newcommand{\Catalan}{\operatorname{Catalan}}$
I made an attempt for $\int_0^1 S^2\;dx/x$, but with limited success.
Let
$$
q_1 := \frac{1}{16}\left(
\ln \left( 1-i \right) {\pi }^{2}+16\,\zeta \left( 3 \right) -4\,i
\ln \left( 1+i \right) \pi \,\ln \left( 2 \right) +i{\pi }^{3}+4\,i
\ln \left( 1-i \right) \pi \,\ln \left( 2 \right) \\
-2\,\ln \left( 1+
i \right) {\pi }^{2}-4\,i \left( \ln \left( 1-i \right) \right) ^{2}
\pi -16\,i\ln \left( 1+i \right) {\it \Catalan}+10\,i \left( \ln
\left( 2 \right) \right) ^{2}\pi
\\
+8\,\pi \,{\it \Catalan}-2\,{\pi }^{2}\ln \left( 2 \right) +{\pi }^{2}\ln \left( i\sqrt {2}+\sqrt {2}+2 \right) +{\pi }^{2}\ln \left( 2-\sqrt {2}-i\sqrt {2} \right)
\\
+20\,\ln \left( 2 \right) {\rm Li}_2 \left(2 \right) -20\,{\rm Li}_3 \left(2 \right) -8\,{\rm Li}_3 \left(-i \right) -8\,
{\rm Li}_3 \left(i \right) +16\,i\ln \left( 1-i \right) {\it
\Catalan}
\\
+4\,i \left( \ln \left( 1+i \right) \right) ^{2}\pi -2\,{
\pi }^{3}\right) \approx −0.4990969
$$
and
$$
q_2 :=
\sum _{m=1}^{\infty }{\frac {\Psi \left( (m+1)/2\right) -\Psi
\left(m/2 \right) }{ 2\left( 2\,m-1 \right) ^{2}}} \approx 0.7416483,
$$
Then
$$
\int_0^1({\rm arctanh}\; x - \arctan x)^2\frac{dx}{x} = q_1+q_2 \approx 0.2425514
$$
added
Combining the above with Eckhard's alternate version, we get the interesting equation
$$
q_2 =
\frac{\left(
\ln \left( 2 \right) \right) ^{2}\pi}{16} +{\frac {5}{64}}\,{\pi }^{3}
-{ \Catalan}\,\ln \left( 2 \right)
-2
\,{\rm Im} \; {\rm Li_3} \left( \frac{1+i}{2} \right)
$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/155263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Inequality for an integral involving $ \exp $, $ \sin $ and $ \cos $ Let $ t > 0 $ and $ k \in \{ 0,1,2,\ldots \} $. Does the following inequality hold?
$$
\int_{k + 1/2}^{k + 3/2}
\frac{x \sin(2 \pi x)}{1 + 2 e^{2 \pi t} \cos(2 \pi x) + e^{4 \pi t}}
\mathrm{d}{x}
\leq \frac{1}{2 \pi} \cdot \frac{1}{(1 - e^{2 \pi t})^{2}}.
$$
Such an inequality appears in the study of Selberg $ \zeta $-functions.
| The inequality is true, and follows upon integrating by parts. The integral is
$$
\int_{k+1/2}^{k+3/2} x \, d\Big( -\frac{\log \big(1+2 e^{2\pi t} \cos(2\pi x) +e^{4\pi t}\big)}{4\pi e^{2\pi t}} \Big)
$$
and integration by parts gives
$$
= \frac{1}{4\pi e^{2\pi t}} \int_{k+1/2}^{k+3/2} \log \frac{1+2e^{2\pi t} \cos (2\pi x) + e^{4\pi t}}{1-2e^{2\pi t} +e^{4\pi t}} dx.
$$
Using $\log (1+y) \le y$, the above is
$$
\le \frac{1}{4\pi e^{2\pi t}} \int_{k+1/2}^{k+3/2} \frac{(2+2\cos(2\pi x))e^{2\pi t}}{(1-e^{2\pi t})^2} dx = \frac{1}{2\pi} \frac{1}{(1-e^{2\pi t})^2}.
$$
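A crude numerical spot check of the inequality (midpoint rule; the values of $t$ and $k$ below are arbitrary samples):

```python
from math import sin, cos, exp, pi

def lhs(t, k, n_steps=20000):
    # midpoint-rule approximation of the integral over [k+1/2, k+3/2]
    e2 = exp(2*pi*t)
    e4 = e2*e2
    h = 1.0 / n_steps
    total = 0.0
    for i in range(n_steps):
        x = k + 0.5 + (i + 0.5)*h
        total += x*sin(2*pi*x) / (1 + 2*e2*cos(2*pi*x) + e4)
    return total*h

def rhs(t):
    return 1.0 / (2*pi*(1 - exp(2*pi*t))**2)

checks = [lhs(t, k) <= rhs(t) + 1e-9
          for t in (0.05, 0.1, 0.5) for k in (0, 1, 5)]
```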
| {
"language": "en",
"url": "https://mathoverflow.net/questions/184745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Sum of Stirling numbers with exponents I am having trouble with the following sum
$\sum_{i=0}^n\binom{n}{i}S(i,m)3^i$, where $S(i,m)$ is the Stirling number of the second kind (the number of all partitions of $i$ elements into $m$ nonempty sets).
Below, the following sum was obtained:
$f(n,m)=\frac{1}{m!}\sum_{k=0}^m(-1)^{m-k}\binom{m}{k}(1+3k)^n$. Is there a further simplification of this expression?
| here is a table of $f(n,m)$ for small values of $m$ and $n\geq m$ (note that $f(n,m)=0$ for $n<m$):
$f(n,1)=-1+4^n$
$f(n,2)=\tfrac{1}{2}\left(1-2\cdot 4^n+7^n\right)$
$f(n,3)=\tfrac{1}{6}\left(-1+3\cdot 4^n-3\cdot 7^n+10^n\right)$
$f(n,4)=\tfrac{1}{24}\left(1-4\cdot 4^n+6\cdot 7^n-4\cdot 10^n+13^n\right)$
$f(n,5)=\tfrac{1}{120}\left(-1+5\cdot 4^n-10\cdot 7^n+10\cdot 10^n-5\cdot 13^n+16^n\right)$
$f(n,6)=\tfrac{1}{720}\left(1-6\cdot 4^n +15\cdot 7^n-20\cdot 10^n+15\cdot 13^n-6\cdot 16^n+19^n\right)$
there is a pattern, which as Todd Trimble points out is
$$f(n,m)=\frac1{m!} \sum_{k=0}^m (-1)^{m-k} \binom{m}{k} (1 + 3k)^n$$
note: Mathematica gets $f(n,2)$ wrong... [SE post]
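Both expressions can be compared exactly for small parameters; a sketch using the standard recurrence $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$:

```python
from math import comb, factorial

def stirling2(n, k):
    # Stirling numbers of the second kind via the usual recurrence
    row = [1] + [0]*k
    for _ in range(n):
        row = [0] + [j*row[j] + row[j - 1] for j in range(1, k + 1)]
    return row[k]

ok = True
for n in range(9):
    for m in range(n + 1):
        lhs = sum(comb(n, i)*stirling2(i, m)*3**i for i in range(n + 1))
        num = sum((-1)**(m - k)*comb(m, k)*(1 + 3*k)**n for k in range(m + 1))
        ok = ok and (num % factorial(m) == 0) and (lhs == num // factorial(m))
```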
| {
"language": "en",
"url": "https://mathoverflow.net/questions/220479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Rate of Convergence of Borwein Algorithm for computing Pi In the book "Pi and the AGM" (1987), the authors Jonathan Borwein and Peter Borwein introduced a magical algorithm to compute $\pi$. However, there is a step that I couldn't understand and for which I couldn't find any proof or explanation. It is related to the rate of convergence of the algorithm.
The Borwein brothers suggested two sequences $\{x_n\}_{ n\ge 0}$ and $\{y_n\}_{ n\ge 1}$ as follows:
\begin{equation}
x_{0} := \sqrt{2} ~~~~,~~~~y_{1}:= 2^{1/4}
\end{equation}
\begin{equation}
x_{n+1}=\frac{\sqrt{x_n} ~+~ 1/\sqrt{x_n} }{2} ~~~,~~~
y_{n+1}=\frac{y_{n} \sqrt{x_n}~+~ 1/\sqrt{x_n}}{y_{n} +1}
\end{equation}
Then my question is about a proof of the following inequality:
\begin{equation} \label{eq}
\forall n \in \mathbb{N}~,~~~\frac{x_n -1}{y_n -1} < 2- \sqrt{3}
\end{equation}
How can I prove this inequality?
| Even for different starting values around $1$, both $x_n$ and $y_n$ rapidly approach $1$.
Define $\alpha_n = x_n-1$ and $\beta_n = y_n-1$. These will be very small quantities. $\beta_5 \lt 10^{-40}$ and $\alpha_5$ is smaller. We want to estimate $\alpha_n/\beta_n$.
$$\sqrt{x_n} = \sqrt{1+\alpha_n} = 1+\frac{1}{2}\alpha_n - \frac{1}{8}\alpha_n^2 + O(\alpha_n^3).$$
$$1/\sqrt{x_n} = 1/\sqrt{1+\alpha_n} = 1-\frac{1}{2}\alpha_n +\frac{3}{8} \alpha_n^2 + O(\alpha_n^3)$$
$$x_{n+1} = \frac{1}{2}(\sqrt{x_n} + 1/\sqrt{x_n}) = 1+\frac{1}{8}\alpha_n^2 +O(\alpha_n^3) $$
So, $\alpha_{n+1} = \frac{1}{8} \alpha_n^2 + O(\alpha_n^3)$.
$$\begin{eqnarray}y_{n+1} &=& \frac{y_n\sqrt{x_n} + \sqrt{x_n} - \sqrt{x_n} + 1/\sqrt{x_n}}{y_n+1} \newline &=& \sqrt{x_n} + \frac{-\sqrt{x_n} +1/\sqrt{x_n}}{y_n+1} \newline &=&1+\frac{1}{2}\alpha_n-\frac{1}{8}\alpha_n^2 + O(\alpha_n^3) + (-\alpha_n+\frac{1}{2}\alpha_n^2+O(\alpha_n^3))(\frac{1}{2} - \frac{1}{4}\beta_n + O(\beta_n^2)) \newline &=& 1 + \frac{1}{8}\alpha_n^2 + \frac{1}{4}\alpha_n\beta_n + O(\beta_n^3)\end{eqnarray}$$
So, $\beta_{n+1} = \frac{1}{8}\alpha_n^2 + \frac{1}{4}\alpha_n\beta_n + O(\beta_n^3)$.
Instead of $\alpha_n/\beta_n$, consider the reciprocal.
$\beta_{n+1}/\alpha_{n+1} = 1 + 2 \beta_n/\alpha_n + O(\alpha_n).$ This ratio grows exponentially, and we only need to establish that it is greater than $1/(2-\sqrt{3}) = 2+\sqrt{3} \lt 4$.
For example, $\beta_4/\alpha_4 = 99.53, \beta_5/\alpha_5 = 200.06$.
To turn this into a rigorous argument, you can use effective instead of asymptotic estimates for $\sqrt{1+\alpha_n}$, $1/\sqrt{1+\alpha_n}$, and $1/(2+\beta_n)$ when $\alpha_n$ and $\beta_n$ are small.
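Along these lines, the claimed bound $\frac{x_n-1}{y_n-1} < 2-\sqrt{3}$ can be spot-checked at high precision with `decimal`; a sketch for the first few iterates:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
x = Decimal(2).sqrt()          # x_0 = sqrt(2)
y = Decimal(2).sqrt().sqrt()   # y_1 = 2**(1/4)
bound = 2 - Decimal(3).sqrt()  # 2 - sqrt(3)

ratios = []
for n in range(1, 6):
    s = x.sqrt()
    x = (s + 1/s) / 2                 # x_n from x_{n-1}
    ratios.append((x - 1) / (y - 1))  # (x_n - 1)/(y_n - 1)
    s = x.sqrt()
    y = (y*s + 1/s) / (y + 1)         # y_{n+1}, which uses x_n
```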
| {
"language": "en",
"url": "https://mathoverflow.net/questions/228068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A surprising conjecture about twin primes Just for fun, I began to play with numbers having two distinct digits. I noticed that in most cases, if you consider the numbers $AB$ and $BA$ (written in base $10$), these have few common divisors: for example $13$ and $31$ are coprime, and $47$ and $74$ are coprime. Obviously this is not always the case, because one can take non-coprime digits; however, I realized that, for $0 \le a<b \le 9$, the quantity $$\gcd (10a+b, 10b+a)$$ is never too big. Using brute force I computed
$$\max \{ \gcd (10a+b, 10b+a) : 0 \le a<b \le 9\} = \gcd (48,84)=12$$
After that, I passed to an arbitrary base $n \ge 2$, and considered
$$f(n)= \max \{ \gcd (an+b, bn+a) : 0 \le a<b \le n-1 \}$$
For example $f(2)=1$ and $f(3)=2$.
Considering $n \ge 4$,
I noticed that, picking $a=2, b=n-3$ we have
$$2n+(n-3) = 3(n-1)$$
$$(n-3)n+2 = (n-2)(n-1)$$
so that $f(n)$ has a trivial lower bound
$$(n-1) \le \gcd (2n+(n-3), (n-3)n+2) \le f(n) $$
(which holds for $n=2,3$ as well).
A second remark is
$$\gcd (0n+b, bn+0) = b \le n-1$$
so that we can restrict ourselves to the case $a \neq 0$: in other words
$$f(n)=\max \{ \gcd (an+b, bn+a) : 1 \le a<b \le n-1 \}$$
I wrote a very simple program which computes the value of $f(n)$ for $n \le 400$, selecting those numbers such that $f(n)=n-1$. Surprisingly, I found out that many numbers appeared:
$$4, 6, 12, 18, 30, 42, 60, 72, 102, 108, 138, 150, 180, 192, 198, 228, 240, 270, 282, 312, 348$$
More surprisingly, these turned out to be the numbers between pairs of twin primes!
What is going on here?
| Suppose $n-1$ and $n+1$ are both primes.
$\gcd(an+b,bn+a)$ divides $an+b - (bn+a) = (a-b)(n-1)$.
There are two cases. If $n-1$ divides $\gcd(an+b,bn+a)$ then $b=n-1-a$ so $an+b= (n-1) (a+1)$ and $bn+a=(n-1)(b+1)$, so $\gcd(an+b,bn+a) = (n-1)\gcd(a+1,b+1)$.
$(a+1)+(b+1)=n+1$. Because $n+1$ is prime, two numbers that sum to it must be relatively prime (any common prime factor would be a prime factor of $n+1$, so would be $n+1$, but $a+1$ and $b+1$ are both less than $n+1$.) So in this case $\gcd(an+b,bn+a) = n-1$.
On the other hand, because $n-1$ is prime, if $n-1$ does not divide $\gcd(an+b,bn+a)$ then $\gcd(an+b,bn+a)$ divides $a-b$ and so is at most $n-2$.
So in this case the maximum value is $n-1$, attained whenever $a+b=n-1$.
If $n+1$ is not prime you can get greater than $n-1$ in the first case by taking a prime $\ell$ dividing $n+1$, setting $a=\ell-1$, $b=n-\ell$ for a gcd of $\ell (n-1)$.
If $n-1$ is not prime but instead $n-1= cd$ with $c \leq d$, you can set $a=d+1$, $b=(c-1)(d+1) \leq cd <n$ so that $an+b= (d+1) (cd+1) + (c-1)(d+1) = c d^2 +2cd + c= c(d+1)^2$ and $bn+a = (d+1)\bigl((c-1)(cd+1)+1\bigr) = c(d+1)(cd-d+1)$ are both divisible by $c (d+1) > n-1$, so the gcd is divisible by $c(d+1)$ and hence greater than $n-1$.
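The dichotomy above can be confirmed by recomputing $f(n)$ for small $n$; a sketch:

```python
from math import gcd

def f(n):
    # f(n) = max gcd(an+b, bn+a) over 0 <= a < b <= n-1
    return max(gcd(a*n + b, b*n + a)
               for a in range(n) for b in range(a + 1, n))

def is_prime(m):
    return m > 1 and all(m % p for p in range(2, int(m**0.5) + 1))

hits = [n for n in range(4, 80) if f(n) == n - 1]
twin_midpoints = [n for n in range(4, 80)
                  if is_prime(n - 1) and is_prime(n + 1)]
```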
| {
"language": "en",
"url": "https://mathoverflow.net/questions/257822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 1,
"answer_id": 0
} |
Periodic tilings of the plane by regular polygons Let $A$ be a tiling of $\mathbb{R}^{2}$ using regular polygons. Assume that the tiling is edge-to-edge. Assume also that there are two directions of periodicity, so that $\mathbf{u},\mathbf{v}\in \mathbb{R}^{2}$ are linearly independent vectors, and $A+\mathbf{u}=A+\mathbf{v}=A$.
Question: Must there always exist orthogonal directions of periodicity? That is, must there always exist non-zero vectors $\mathbf{u},\mathbf{v}\in \mathbb{R}^{2}$ such that $\mathbf{u}\cdot \mathbf{v}=0$, and $A+\mathbf{u}=A+\mathbf{v}=A$?
Note that the assumption of regularity is necessary, since we can tile the plane with identical parallelograms in such a way that there are no orthogonal directions of periodicity. It is also necessary that we have an edge-to-edge tiling, since otherwise we can construct an example with identical squares that does not have orthogonal directions of periodicity.
Does this follow from some feature of the wallpaper groups?
| I claim that the tiling https://upload.wikimedia.org/wikipedia/commons/6/66/5-uniform_310.svg does not admit an orthogonal period.
The basis for the period lattice is given by the two vectors $$ v_1 = \begin{pmatrix} 3 + \sqrt{3} \\ -1 \end{pmatrix}, v_2 = \begin{pmatrix} 1/2 \\ (2+\sqrt{3})/2 \end{pmatrix}. $$ So if there is an orthogonal period, then there have to be integers $a_1, a_2, b_1, b_2$ such that $\langle a_1 v_1 + a_2 v_2, b_1 v_1 + b_2 v_2 \rangle = 0$ and $(a_1, a_2), (b_1, b_2) \neq (0,0)$. Expanding out, we can write this as
$$(13 + 6\sqrt{3}) a_1 b_1 + \frac{1}{2} (a_1 b_2 + a_2 b_1) + (2 + \sqrt{3}) a_2 b_2 = 0. $$
Since $\sqrt{3}$ is irrational, we then obtain
\begin{align*} 0 &= 6 a_1 b_1 + a_2 b_2, \\ 0 &= 26 a_1 b_1 + (a_1 b_2 + a_2 b_1) + 4 a_2 b_2. \end{align*}
Multiplying the first equation by $-25 + \sqrt{7}$ and the second equation by $6$, then adding them, gives
\begin{align*} 0 &=(6 + 6 \sqrt{7}) a_1 b_1 + 6(a_1 b_2 + a_2 b_1) + (-1 + \sqrt{7}) a_2 b_2 \\ &= (-1 + \sqrt{7}) ((1 + \sqrt{7}) a_1 + a_2) ((1 + \sqrt{7}) b_1 + b_2). \end{align*}
Then again by irrationality of $\sqrt{7}$, either $a_1 = a_2 = 0$ or $b_1 = b_2 = 0$.
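The conclusion can also be confirmed by brute force on the two integer equations; a sketch over a small box:

```python
# search for nontrivial integer solutions of the two equations above
R = range(-12, 13)
nontrivial = [(a1, a2, b1, b2)
              for a1 in R for a2 in R for b1 in R for b2 in R
              if (a1, a2) != (0, 0) and (b1, b2) != (0, 0)
              and 6*a1*b1 + a2*b2 == 0
              and 26*a1*b1 + a1*b2 + a2*b1 + 4*a2*b2 == 0]
```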
| {
"language": "en",
"url": "https://mathoverflow.net/questions/263692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
Dividing squares by sums This question is out of curiosity and came to me while thinking about another MO question, which is linked below.
Question: Do there exist positive integers $a,b,c$ such that $\gcd(a,b,c) =1 $ and each of $\frac{a^2}{b+c},\frac{b^2}{a+c},$ and $\frac{c^2}{a+b}$
are also integers?
My question was inspired by this MO question where MAEA2 asked about coprime positive integer solutions to:
$$\begin{equation}\frac{a^2}{b+c} + \frac{b^2}{a+c} + \frac{c^2}{a+b} \in \mathbb{Z}\tag{1}\end{equation}$$
My naive self started looking for solutions to (1) via my question above, but could not find any. Jeremy Rouse has given an excellent answer using elliptic curves, but none of the points produced satisfy my question.
Note if we ask for each of $\frac{a^n}{b+c},\frac{b^n}{a+c},$ and $\frac{c^n}{a+b}$ to be integers there is clearly no solution for $n = 1,$ and for $n \geq 3$ we can take $a = 3, b = 5,$ and $c = 22$ since
$$\begin{align*}
\frac{3^n}{5+22} &= \frac{3^n}{3^3}\\
\frac{5^n}{3+22} &= \frac{5^n}{5^2}\\
\frac{22^n}{3+5} &= \frac{2^n 11^n}{2^3}.
\end{align*}.$$
Also if we remove the positive condition or the $\gcd$ condition we can find solutions like $(1,-2,3)$ or $(2,2,2)$ as well as many others.
| First, notice that such $a$, $b$, $c$ must be pairwise coprime (e.g., if prime $p\mid \gcd(a,b)$, then $(a+b)\mid c^2$ implies $p\mid c$, a contradiction to $\gcd(a,b,c)=1$).
As divisors of pairwise coprime numbers, $a+b$, $a+c$, $b+c$ are also pairwise coprime.
Now, since $(a+b)\mid c^2$, $(a+c)\mid b^2$, $(b+c)\mid a^2$, each of $a+b$, $a+c$, $b+c$ divides
$$D = (a+b)^2 + (a+c)^2 + (b+c)^2 - a^2 - b^2 - c^2.$$
Then their product $(a+b)(a+c)(b+c)$ must also divide $D$.
Without loss of generality, assume that $a\leq b\leq c$ and so $a+b\leq a+c\leq b+c$. Then
$(a+b)(a+c)(b+c) \leq D < 3(b+c)^2$ and hence
$$(a+b)c < (a+b)(a+c) < 3(b+c) \leq 6c,$$
implying that $a+b < 6$.
So, there is a finite number of cases to consider. It is easy to check that none of them gives a solution. Namely, for any fixed $a,b$, possible $c$ must belong to the finite set:
$$\{ d-b\ :\ d\mid a^2\} \cap \{ d-a\ :\ d\mid b^2\}.$$
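The finite check can be carried out in a few lines; a sketch (using the normalization $a \le b \le c$):

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

solutions = []
for a in range(1, 5):
    for b in range(a, 5):
        if a + b >= 6:
            continue
        # c must lie in both finite candidate sets from the last display
        cands = ({d - b for d in divisors(a*a)}
                 & {d - a for d in divisors(b*b)})
        for c in cands:
            if (c >= b and gcd(gcd(a, b), c) == 1
                    and a*a % (b + c) == 0
                    and b*b % (a + c) == 0
                    and c*c % (a + b) == 0):
                solutions.append((a, b, c))
```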
| {
"language": "en",
"url": "https://mathoverflow.net/questions/264405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
Combinatorial identity: $\sum_{i,j \ge 0} \binom{i+j}{i}^2 \binom{(a-i)+(b-j)}{a-i}^2=\frac{1}{2} \binom{(2a+1)+(2b+1)}{2a+1}$ In my research, I found this identity and, as far as I have checked, it certainly appears to be correct. But I can't give a proof of it.
Could someone help me?
This is the identity:
let $a$ and $b$ be two positive integers; then:
$\sum_{i,j \ge 0} \binom{i+j}{i}^2 \binom{(a-i)+(b-j)}{a-i}^2=\frac{1}{2} \binom{(2a+1)+(2b+1)}{2a+1}$.
| Let us denote
$$S=\sum_{i,j \ge 0} \binom{i+j}{i}^2 \binom{(a-i)+(b-j)}{a-i}^2.$$
First, let $s=i+j$ so that
$$S = \sum_{s\geq 0}\sum_{i=0}^s \binom{s}{i}^2 \binom{a+b-s}{a-i}^2.$$
Consider the generating function
$$F(x,y) = \sum_{s,i} \binom{s}{i}^2 x^i y^s = (1-2y+y^2-2xy-2xy^2+x^2y^2)^{-1/2}.$$
Then $S$ is nothing else but the coefficient of $x^a y^{a+b}$ in
$$F(x,y)^2 = (1-2y+y^2-2xy-2xy^2+x^2y^2)^{-1}$$
$$ = \frac{1}{4y\sqrt{x}}\left(\frac{1}{1-y(1+x+2\sqrt{x})} - \frac{1}{1-y(1+x-2\sqrt{x})}\right)$$
$$ = \frac{1}{4y\sqrt{x}}\left(\frac{1}{1-y(1+\sqrt{x})^2} - \frac{1}{1-y(1-\sqrt{x})^2}\right).$$
(derivation simplified)
The coefficient of $y^{a+b}$ equals
$$[y^{a+b}]\ F(x,y)^2
=\frac{1}{4} \frac{(1+\sqrt{x})^{2(a+b+1)} - (1-\sqrt{x})^{2(a+b+1)}}{\sqrt{x}}.$$
Now we trivially conclude that
$$S = [x^ay^{a+b}]\ F(x,y)^2 = \frac{1}{2}\binom{2(a+b+1)}{2a+1}.$$
UPDATE. Alternatively to computing the coefficient of $x^ay^{a+b}$, one can follow the venue of Fedor Petrov's proof. This way one needs to consider the generating function
$$G(x,y) = \sum_{m,n}\binom{m}{n} x^ny^m = \frac{1}{1-y-xy}$$
and verify that
$$8xy^2F(x^2,y^2)^2 = G(x,y) + G(x,-y) - G(-x,y) - G(-x,-y).$$
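The identity itself is also easy to confirm by machine for small $a,b$ (terms with $i>a$ or $j>b$ vanish, so the sum is finite); a sketch:

```python
from math import comb

identity_ok = all(
    2*sum(comb(i + j, i)**2 * comb((a - i) + (b - j), a - i)**2
          for i in range(a + 1) for j in range(b + 1))
    == comb((2*a + 1) + (2*b + 1), 2*a + 1)
    for a in range(7) for b in range(7)
)
```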
| {
"language": "en",
"url": "https://mathoverflow.net/questions/283540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 3,
"answer_id": 2
} |
Algebraic inequalities on different means If $a^2+b^2+c^2+d^2=1$, where $a,b,c,d>0$, prove or disprove
\begin{equation*}
\begin{aligned}
(a+b+c+d)^8&\geq 2^{12}abcd;\\
a+b+c+d+\frac{1}{2(abcd)^{1/4}}&\geq 3.
\end{aligned}
\end{equation*}
Can you suggest a general algorithm for this type of problem? Thanks.
| Put
\begin{align*}
f(a,b,c,d) &= \frac{(a+b+c+d)^8}{2^{12}abcd} \\
g(a,b,c,d) &= \frac{a+b+c+d}{3} + \frac{1}{6(abcd)^{1/4}}
\end{align*}
so the conjecture is that $f,g\geq 1$. I used Maple to search randomly for places where $f$ is as small as possible, then used Maple's fsolve() to find a local minimimum of $f$ numerically, then used Maple's identify() function to see whether there was any simple closed expression for the local minimum. I found that
$$ f\left(\frac{1}{2\sqrt{3}},\frac{1}{2\sqrt{3}},\frac{1}{2\sqrt{3}},\frac{3}{2\sqrt{3}}\right) = \frac{3^5}{2^8} = \frac{243}{256} < 1;
$$
this is a local minimum and probably a global minimum.
Numerical methods also indicate that $g$ has a local minimum at a point of the form $(a,a,a,d)$. This must have $3a^2+d^2=1$, so for some $t\in(0,1)$ we have
\begin{align*}
a &= \frac{1}{\sqrt{3}} \frac{1-t^2}{1+t^2} \\
d &= \frac{2t}{1+t^2}.
\end{align*}
Define $G(t)=g(a,a,a,d)$, with $a$ and $d$ as above. We find that for $t_0=2-\sqrt{3}$ we have $G(t_0)=1$ and $G'(t_0)=G''(t_0)=0$, so this is not a local minimum. Instead, there is a local minimum at a point $t_1\simeq 0.3868865016$ which is algebraic of degree 21 over $\mathbb{Q}(\sqrt{3})$, with $G(t_1)\simeq 0.99970268716$.
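Both observations are easy to confirm numerically. The following sketch (my own check, not part of the answer) verifies that the displayed point lies on the sphere with $f=243/256<1$, and that $t_0=2-\sqrt3$ corresponds to the symmetric point $a=d=\tfrac12$ where $G(t_0)=1$:

```python
from math import sqrt

def f(a, b, c, d):
    return (a + b + c + d)**8 / (2**12 * a * b * c * d)

def g(a, b, c, d):
    return (a + b + c + d) / 3 + 1 / (6 * (a * b * c * d)**0.25)

# the critical point of f found above
u = 1 / (2 * sqrt(3))
pt = (u, u, u, 3 * u)
assert abs(sum(x * x for x in pt) - 1) < 1e-12   # satisfies a^2+b^2+c^2+d^2 = 1
assert abs(f(*pt) - 243 / 256) < 1e-12           # f = 243/256 < 1

# t0 = 2 - sqrt(3) gives the symmetric point a = d = 1/2, where G(t0) = 1
t0 = 2 - sqrt(3)
a = (1 - t0**2) / (sqrt(3) * (1 + t0**2))
d = 2 * t0 / (1 + t0**2)
assert abs(a - 0.5) < 1e-12 and abs(d - 0.5) < 1e-12
assert abs(g(a, a, a, d) - 1) < 1e-12
```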
| {
"language": "en",
"url": "https://mathoverflow.net/questions/302192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Bounding a sum of products of binomial coefficients I am trying to understand the following sums for $k\le n$ :
$$
\sum_{s=0}^{k} \begin{pmatrix} 2n-s/2\\ s\end{pmatrix}\begin{pmatrix} 2n-3s/2\\ k-s\end{pmatrix}
$$
$$
\sum_{s=0}^{k} \begin{pmatrix} 2n-s\\ s\end{pmatrix}\begin{pmatrix} 2n-s\\ k-s\end{pmatrix}
$$
More precisely, I want to know if there is an $\alpha$, respectively $\beta$, such that for any $\epsilon > 0$ and sufficiently big $n$, we have
$$
\begin{pmatrix} 4n-(\alpha+\epsilon) k\\ k\end{pmatrix}
\le \sum_{s=0}^{k} \begin{pmatrix} 2n-s/2\\ s\end{pmatrix}\begin{pmatrix} 2n-3s/2\\ k-s\end{pmatrix}
\le \begin{pmatrix} 4n-(\alpha-\epsilon) k\\ k\end{pmatrix},
$$
respectively
$$
\begin{pmatrix} 4n-(\beta+\epsilon) k\\ k\end{pmatrix}
\le \sum_{s=0}^{k} \begin{pmatrix} 2n-s\\ s\end{pmatrix}\begin{pmatrix} 2n-s\\ k-s\end{pmatrix}
\le \begin{pmatrix} 4n-(\beta-\epsilon) k\\ k\end{pmatrix} .
$$
I am also fine with having rational function factors or rational powers of such factors in the bounding terms.
One can see
$$
\begin{pmatrix} 4n-2k\\ k\end{pmatrix} \le \sum_{s=0}^{k} \begin{pmatrix} 2n-k/2\\ s\end{pmatrix}\begin{pmatrix} 2n-3k/2\\ k-s\end{pmatrix}
\le \sum_{s=0}^{k} \begin{pmatrix} 2n-s/2\\ s\end{pmatrix}\begin{pmatrix} 2n-3s/2\\ k-s\end{pmatrix}\le \sum_{s=0}^{k} \begin{pmatrix} 2n\\ s\end{pmatrix}\begin{pmatrix} 2n\\ k-s\end{pmatrix}
= \begin{pmatrix} 4n\\ k\end{pmatrix},
$$
hence if there is such $\alpha$ then $0\le \alpha \le 2$.
| Suppose that $k$ is fixed. The sums represent polynomials of degree $k$ in $n$. To answer your question, it's enough to compute two first leading terms of these polynomials. Let me do that for the second sum (the first sum is treated similarly), replacing $2n$ with $n$.
We have
\begin{split}
S(n,k)&:=\sum_{s=0}^{k} \binom{n-s}s \binom{n-s}{k-s} \\
&= \sum_{s=0}^{k} \frac{n^s - \frac{s(3s-1)}2n^{s-1}}{s!}\cdot \frac{n^{k-s} - \frac{(k-s)(k+s-1)}2n^{k-s-1}}{(k-s)!} + O(n^{k-2})\\
&=\frac1{k!}\sum_{s=0}^{k}\binom{k}s \big(n^k - (\frac{k(k-1)}2+s^2) n^{k-1}\big) + O(n^{k-2})\\
&=\frac{2^k}{k!}n^k - \frac{2^{k-2} (3k-1)}{(k-1)!}n^{k-1} + O(n^{k-2}).
\end{split}
Since
$$\binom{2n-(\beta\pm\epsilon)k}{k} = \frac{2^k}{k!}n^k - \frac{2^{k-2}((2(\beta\pm\epsilon)+1)k-1)}{(k-1)!}n^{k-1} + O(n^{k-2}),$$
it follows with necessity that $\beta=1$. Then from
$$S(n,k) - \binom{2n-(1\pm\epsilon)k}{k} = \pm \epsilon\frac{2^{k-1} k}{(k-1)!}n^{k-1} + O(n^{k-2})$$
it follows that the proposed bounds do hold for large enough $n$.
P.S. Just in case, $S(n,k)$ has the following generating function:
$$\sum_{n,k\geq 0} S(n,k)z^n t^k = \frac1{1-(1+t)z-t(1+t)z^2}.$$
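The generating function can be cross-checked directly (my own sketch): the denominator implies the recurrence $C_n(t)=(1+t)C_{n-1}(t)+t(1+t)C_{n-2}(t)$ for the coefficient polynomials $C_n(t)=\sum_k S(n,k)t^k$, which we compare against the defining sum:

```python
from math import comb

def S(n, k):
    # valid for 0 <= k <= n, so all binomial arguments are non-negative
    return sum(comb(n - s, s) * comb(n - s, k - s) for s in range(k + 1))

def poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(m)]

# coefficient polynomials of F = 1/(1-(1+t)z-t(1+t)z^2):
# C_n = (1+t) C_{n-1} + (t+t^2) C_{n-2},  C_{-1} = 0, C_0 = 1
C_prev, C_cur = [0], [1]
for n in range(1, 9):
    C_prev, C_cur = C_cur, poly_add(poly_mul([1, 1], C_cur),
                                    poly_mul([0, 1, 1], C_prev))
    for k in range(n + 1):
        assert C_cur[k] == S(n, k)
```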
| {
"language": "en",
"url": "https://mathoverflow.net/questions/304151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that this expression is greater than 1/2 Let $0<x < y < 1$ be given. Prove
$$4x^{2}+4y^{2}-4xy-4y+1 + \frac{4}{\pi^2}\Big[
\sin^{2}(\pi x)+ \sin^{2}(\pi y) + \sin^{2}[\pi(y-x)] \Big] \geq \frac{1}{2}$$
I have been working on this problem for a while now and I have hit a wall. I have plotted this and it seems to be true. I also tried to prove that it is greater than some other fraction such as $\frac{2}{5}$ or something. So far, I have tried doing partial derivative with respect to x but unfortunately, it is very hard to find the roots as it involve equation involving $\sin(2 \pi x)$, $\sin[2 \pi (x-y)]$ and $x$. I think Taylor Series expansion would look very ugly. I would really appreciate it if anyone has any suggestions on what to try or any branch of math I can read on, as I am not sure how to deal with this kind of expression except the elementary tool I have.
| By Max Alekseyev's hint we need to prove that $\sum\limits_{cyc}f(a)\geq\frac{3}{4},$
where $f(x)=x^2+\frac{2}{\pi^2}\sin^2\pi x,$ $a$, $b$ and $c$ are positives such that $a+b+c=1.$
We have $$f''(x)=4\left(\frac{1}{2}+\cos2\pi x\right),$$
which gives that $f$ is a convex function on $\left[0,\frac{1}{3}\right]$ and on $\left[\frac{2}{3},1\right]$ and $f$ is a concave function on $\left[\frac{1}{3},\frac{2}{3}\right]$.
Now, since it's impossible that $0<a\leq\frac{1}{3}<b\leq\frac{2}{3}<c<1$ and two of variables are placed on $\left(\frac{2}{3},1\right]$,
it's enough to consider two following cases.
*
*$\{a,b\}\subset\left[0,\frac{1}{3}\right]$.
Let $\frac{a+b}{2}=x$.
Thus, by Jensen:
$$\sum_{cyc}f(a)\geq2f\left(\frac{a+b}{2}\right)+f(c)=2f(x)+f(1-2x)=$$
$$=2x^2+\frac{4}{\pi^2}\sin^2\pi x+(1-2x)^2+\frac{2}{\pi^2}\sin^2\pi(1-2x).$$
Id est, it's enough to prove that
$$2x^2+\frac{4}{\pi^2}\sin^2\pi x+(1-2x)^2+\frac{2}{\pi^2}\sin^2\pi(1-2x)\geq\frac{3}{4}$$ or
$$6x^2-4x+\frac{1}{4}+\frac{2}{\pi^2}(2\sin^2\pi x+\sin^22\pi x)\geq0,$$ which is very strong, but true.
*$\{a,b\}\subset\left[\frac{1}{3},\frac{2}{3}\right]$ and $a\geq b$.
Thus, since $\left(a+b-\frac{1}{3},\frac{1}{3}\right)\succ(a,b),$ by Karamata we obtain:
$$\sum_{cyc}f(a)\geq f\left(a+b-\frac{1}{3}\right)+f\left(\frac{1}{3}\right)+f(c)=$$
$$=f\left(\frac{2}{3}-c\right)+f\left(\frac{1}{3}\right)+f(c)=$$
$$=2c^2-\frac{4}{3}c+\frac{5}{9}+\frac{3}{2\pi^2}+\frac{2}{\pi^2}\left(\sin^2\left(\frac{2\pi}{3}-\pi c\right)+\sin^2\pi c\right)\geq\frac{3}{4},$$
where the last inequality is true for all $c\in\left(0,\frac{1}{3}\right).$
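As a numerical cross-check of the key claim $\sum_{cyc}f(a)\geq\frac{3}{4}$ on the simplex $a+b+c=1$ (my own sketch, not part of the proof), random sampling stays safely above $3/4$:

```python
from math import pi, sin
from random import seed, random

def f(x):
    return x * x + (2 / pi**2) * sin(pi * x)**2

seed(0)
worst = min(
    f(a) + f(b) + f(c)
    for _ in range(20000)
    for u, v in [sorted((random(), random()))]
    for a, b, c in [(u, v - u, 1 - v)]      # uniform point on the simplex
)
assert worst >= 0.75
```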
| {
"language": "en",
"url": "https://mathoverflow.net/questions/336472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
Natural number solutions for equations of the form $\frac{a^2}{a^2-1} \cdot \frac{b^2}{b^2-1} = \frac{c^2}{c^2-1}$ Consider the equation $$\frac{a^2}{a^2-1} \cdot \frac{b^2}{b^2-1} = \frac{c^2}{c^2-1}.$$
Of course, there are solutions to this like $(a,b,c) = (9,8,6)$.
Is there any known approximation for the number of solutions $(a,b,c)$, when $2 \leq a,b,c \leq k$ for some $k \geq 2.$
More generally, consider the equation $$\frac{a_1^2}{a_1^2-1} \cdot \frac{a_2^2}{a_2^2-1} \cdot \ldots \cdot \frac{a_n^2}{a_n^2-1} = \frac{b_1^2}{b_1^2-1} \cdot \frac{b_2^2}{b_2^2-1}\cdot \ldots \cdot \frac{b_m^2}{b_m^2-1}$$
for some natural numbers $n,m \geq 1$. Similarly to the above question, I ask myself if there is any known approximation to the number of solutions $(a_1,\ldots,a_n,b_1,\ldots,b_m)$, with natural numbers $2 \leq a_1, \ldots, a_n, b_1, \ldots, b_m \leq k$ for some $k \geq 2$. Of course, for $n = m$, all $2n$-tuples are solutions, where $(a_1,\ldots,a_n)$ is just a permutation of $(b_1,\ldots,b_n)$.
| The above equation, restated below, has the following solution:
$\frac{a^2}{a^2-1} \cdot \frac{b^2}{b^2-1} = \frac{c^2}{c^2-1}$
$a=9w(2p-1)(18p-7)$
$b=4w(72p^2-63p+14)$
$c=3w(72p^2-63p+14)$
where $w=\frac{1}{36p^2-7}$.
For $p=0$ we get:
$(a,b,c)=(9,8,6)$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/359481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Faster than Euler's substitution. How to derive this formula? I wish someone could help me derive this expression. ($K$ is a constant coefficient. $P_n(x)$ is a polynomial function of degree n.)
$$
\int\frac{P_n(x)\mathrm{d}x}{\sqrt{ax^2+bx+c}} \equiv P_{n-1}(x) \cdot\sqrt{ax^2+bx+c} + K\cdot\int\frac{\mathrm{d}x}{\sqrt{ax^2+bx+c}}, (a\neq0)
$$
After finding the derivatives of both sides it is easy to find the coefficients of the polynomial $P_{n-1}(x)$. Then we are left with this simple integral:
$$\int\frac{\mathrm{d}x}{\sqrt{ax^2+bx+c}}$$
| This is a special case of a technique known as Hermite reduction (here is the original article from 1872). It can be derived by means of the following identity,
$$\frac{d}{dx}\left(x^{n-1}\sqrt{ax^2+bx+c}\right)=\frac{c (n-1) x^{n-2}+b(n-1/2) x^{n-1}}{ \sqrt{a x^2+b x+c}}+\frac{a n x^n}{\sqrt{a x^2+b x+c}}.$$
So for $a\neq 0$ and $n\geq 1$ we have
$$\int \frac{x^n}{\sqrt{ax^2+bx+c}}dx=\frac{1}{an}x^{n-1}\sqrt{ax^2+bx+c}+\int\frac{P_{n-1}}{\sqrt{a x^2+b x+c}}.$$
We can now repeat this reduction on the integral with the polynomial $P_{n-1}(x)$ in the numerator, until we reach $P_0$, producing the identity in the OP.
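The reduction is easy to mechanize. Below is a small implementation sketch of my own (not from the answer) that applies the identity repeatedly with exact rational arithmetic, returning the polynomial coefficients of $P_{n-1}$ and the constant $K$:

```python
from fractions import Fraction as Fr

def hermite_reduce(P, a, b, c):
    """Given coefficients P (P[i] multiplies x^i), return (Q, K) with
       int P(x)/sqrt(a x^2+b x+c) dx
         = Q(x)*sqrt(a x^2+b x+c) + K * int dx/sqrt(a x^2+b x+c)."""
    P = [Fr(p) for p in P]
    Q = [Fr(0)] * max(len(P) - 1, 1)
    for n in range(len(P) - 1, 0, -1):      # peel off the top degree each time
        coef = P[n]
        if coef:
            Q[n - 1] += coef / (a * n)
            P[n - 1] -= coef * b * (n - Fr(1, 2)) / (a * n)
            if n >= 2:
                P[n - 2] -= coef * c * (n - 1) / (a * n)
            P[n] = 0
    return Q, P[0]

# int x^2/sqrt(x^2+1) dx = (x/2) sqrt(x^2+1) - (1/2) int dx/sqrt(x^2+1)
Q, K = hermite_reduce([0, 0, 1], 1, 0, 1)
assert Q == [0, Fr(1, 2)] and K == Fr(-1, 2)

# int x^3/sqrt(x^2-1) dx = (2/3 + x^2/3) sqrt(x^2-1)
Q, K = hermite_reduce([0, 0, 0, 1], 1, 0, -1)
assert Q == [Fr(2, 3), 0, Fr(1, 3)] and K == 0
```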
| {
"language": "en",
"url": "https://mathoverflow.net/questions/388720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Three squares in a rectangle One of my colleagues gave me the following problem about 15 years ago:
Given three squares inside a 1 by 2 rectangle, with no two squares overlapping, prove that the sum of side lengths is at most 2. (The sides of the squares and the rectangle need not be parallel to each other.)
I couldn't find a solution or even a source for this problem. Does anyone know about it? I was told that this was like an exercise in combinatorial geometry for high school contests such as IMO, but I'm not sure if this problem was listed in such competitions. Has anyone heard of this problem, or does anyone know how to solve it?
Any kind of help will be appreciated.
| Since the squares are convex, we can draw lines which separate them. In particular, if two separating lines go from $(b-a,0)$ to $(b,1)$ and from $(c,1)$ to $(c+d,0)$, then we can prove the result in terms of those lines and those variables.
So: let the rectangle go from $(0,0)$ to $(2,1)$. Let $A$ be the leftmost square (or one such square). Let $C$ be the rightmost square (or one such square). Let $B$ be the other square.
Draw a line separating $A$ and $B$, and let $(b,1)$ and $(b-a,0)$ be its intersections with the lines $y=1$ and $y=0$.
Draw a line separating $B$ and $C$, and let $(c,1)$ and $(c+d,0)$ be its intersections with the lines $y=1$ and $y=0$.
Reasoning as in user21820's answer, we assume wlog that:
*
*$0<a$, so $A$ is left and above the line separating $A$ and $B$;
*$0<d$, so $C$ is right and above the line separating $B$ and $C$;
*$0<b$ and $c<2$ and $a-b+c+d>0$, so $B$ is below both lines.
(Since the lines may leave the rectangle, we do not assume $b-a>0$ or $b<2$ or $c>0$ or $c+d<2$.)
Lemma (proved at the end):
\begin{align}
\text{sidelength of }A &\le \min\!\left(\frac{b}{a+1},\,1\right)\\
\text{sidelength of }B &\le \min\!\left((a-b+c+d)u,\,1\right)\\
\text{sidelength of }C &\le \min\!\left(\frac{2-c}{d+1},\,1\right)
\end{align}
where
$$u=\max\left(
\frac{1}{a+d+1},
\frac{\sqrt{a^2+1}}{a^2+a+d+1},
\frac{\sqrt{d^2+1}}{d^2+d+a+1}
\right)$$
The factor $u$ satisfies $1/(a+1)>u$ and $1/(d+1)>u$ so long as $a<3.66$ and $d<3.66$ respectively. I will assume those inequalities for now to show that some functions are increasing or decreasing; I don't have a clean proof for those inequalities or without them yet.
In the corner case of $b=a+1$ and $c=1-d$, the side lengths are $1$, $0$ and $1$, and they sum to exactly $2$. We now use this in analyzing four cases.
Case I, $b\le a+1$ and $c\le 1-d$: The sum of the sidelengths is at most
$$\frac{b}{a+1}+(a-b+c+d)u+1$$ This is increasing in both $b$ and $c$, so its value is at most the corner value of $2$.
Case II, $b\le a+1$ and $c\ge 1-d$: The sum of the sidelengths is at most
$$\frac{b}{a+1}+(a-b+c+d)u+\frac{2-c}{d+1}$$ This is increasing in $b$ and decreasing in $c$, so its value is at most the corner value of $2$.
Case III, $b\ge a+1$ and $c\le 1-d$: The sum of the sidelengths is at most
$$1+(a-b+c+d)u+1$$
This is decreasing in $b$ and increasing in $c$, so its value is at most the corner value of $2$.
Case IV, $b\ge a+1$ and $c\ge 1-d$: The sum of the sidelengths is at most
$$1+(a-b+c+d)u+\frac{2-c}{d+1}$$
This is decreasing in both $b$ and $c$, so its value is at most the corner value of $2$.
So the sum of the sidelengths is at most $2$ in each case.
Proof of Lemma:
The sidelength of $A$ is clearly less than 1, and also clearly less than the maximum sidelength inscribed in the right triangle bounded by $x=0$, $y=1$, and the separator of $A$ and $B$. We use Polya's formula here to calculate the sidelength in the triangle as the maximum of $b/(a+1)$ and $b\sqrt{a^2+1}/(a^2+a+1)$; since $a>0$, the maximum is just $b/(a+1)$.
A similar use of Polya's result bounds the sidelength of $C$. Yet another use of that result, now for a triangle which may be acute or obtuse, bounds the sidelength of $B$ by $|a-b+c+d|u$. Since we assumed $a-b+c+d>0$, we write this bound as $(a-b+c+d)u$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/396776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 0
} |
Question on OEIS A000085 The OEIS sequence A000085 is defined by
$$ a_n \!=\! (n-1)a_{n-2} + a_{n-1} \;\text{with }\; a_0\!=\!1, a_1\!=\!1.$$
If $n$ is of the form $b^2-b+1, b \in \mathbb{N}, b > 2, \;\text{then: }\;$ $$ \left\lfloor \frac{a_n}{a_{n-1}} \right\rfloor > \left\lfloor \frac{a_{n-1}}{a_{n-2}} \right\rfloor$$
How to prove this?
| $\newcommand{\fl}[1]{\lfloor #1 \rfloor}\newcommand\N{\mathbb N}$ User LeechLattice gave a complete answer to the original post.
This post is to complement that answer by confirming the empirical observation, made in my previous comment, that the inequality
$$ \left\lfloor \frac{a_n}{a_{n-1}} \right\rfloor > \left\lfloor \frac{a_{n-1}}{a_{n-2}} \right\rfloor \tag{1}$$
does not hold if $n\ge3$ and $n\notin\{b^2−b+1\colon b\ge3,b\in\N\}$. (Everywhere here, $n\in\{0,1,\dots\}$.)
The proof of this observation is based on the inequality
$$n-1<r_n^2-r_n<n \tag{2}$$
for $n\ge4$, used in the LeechLattice's answer, where
$$r_n:=\frac{a_n}{a_{n-1}}.$$
It is enough to show that
$$\fl{r_n}=\fl{r_{n-1}} \tag{3}$$
if $n\ge3$ and $n\notin\{b^2−b+1\colon b\ge3,b\in\N\}$.
Note that $r_1=1$ and
$$r_n=1+\frac{n-1}{r_{n-1}} \tag{4}$$
for $n\ge2$.
It is straightforward to check that (3) holds for $n=3,4,5,6$, whereas $7=b^2−b+1$ for $b=3$. So, it remains to show that (3) holds if $b\ge3$, $b\in\N$, and $b^2−b+1<n<(b+1)^2−(b+1)+1$, that is, if $b\in\{3,4,\dots\}$ and
$$b^2−b+2\le n\le(b+1)^2−(b+1).$$
For such $n$ and $b$, by (2),
$$b^2−b<n-1<r_n^2-r_n<n\le(b+1)^2−(b+1) \tag{5}$$
and
$$b^2−b\le n-2<r_{n-1}^2-r_{n-1}<n-1<(b+1)^2−(b+1). \tag{6}$$
By (4), $r_n\ge1$ for all $n$. Also, $r^2-r$ is strictly increasing in $r\ge1$. So, (5) and (6) imply $b<r_n<b+1$ and $b<r_{n-1}<b+1$, whence $\fl{r_n}=b=\fl{r_{n-1}}$, so that (3) follows.
The proof of the crucial inequality (2) (given in the paper cited by LeechLattice) is very simple but perhaps not easy to find. Indeed, (2) can be rewritten as
$$c_{n-1}<r_n<c_n, \tag{7}$$
where $c_n:=(1+\sqrt{4n+1})/2$, the positive root of the equation $c^2-c-n=0$, and, in turn, the bracketing (7) of $r_n$ is easily verified by induction on $n$ using the recurrence (4).
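A direct computation (my own check, not part of the proof) confirms that the floor jumps occur exactly at $n=b^2-b+1$:

```python
# a_n = (n-1) a_{n-2} + a_{n-1}, a_0 = a_1 = 1 (OEIS A000085)
a = [1, 1]
for n in range(2, 400):
    a.append((n - 1) * a[n - 2] + a[n - 1])

# indices where floor(a_n/a_{n-1}) strictly increases
jumps = [n for n in range(3, 400)
         if a[n] // a[n - 1] > a[n - 1] // a[n - 2]]
expected = [b * b - b + 1 for b in range(3, 21) if b * b - b + 1 < 400]
assert jumps == expected
```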
| {
"language": "en",
"url": "https://mathoverflow.net/questions/412605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
An inequality about e I happen to encounter the following inequality which I need to prove:
$$\left(k+(1+k)\left(1-\frac{1}{e}\left(1+\frac{1}{k}\right)^k\right)\right)\log\left(1+\frac{1}{k}\right)>1,$$ for $k\in\mathbb{Z}^{+}$.
My current idea is to expand $$k+(1+k)\left(1-\frac{1}{e}\left(1+\frac{1}{k}\right)^k\right)=k+\frac{1}{2}+\frac{1}{24k}+O\left(\frac{1}{k^2}\right).$$
It seems like $$\left(1+\frac{1}{k}\right)^{k+\frac{1}{2}}>e$$ is always true? But not sure how can I proceed with the remaining $O\left(\frac{1}{k^2}\right)$ terms.
| We can actually prove this directly for $k \ge 7$ and it should be easy to check it for $k<7$
We note that for $x < 1$ we have $x/2-x^2/3+x^3/4-x^4/5... \ge x/2-x^2/3$ (series converges absolutely and grouping in pairs the remainder is positive).
For $y<1$ we also have $1-e^{-y} \ge y-y^2/2$ again by using the Taylor series and grouping
So with $x=1/k, k \ge 2$ we have, $$1-\frac{1}{e}\left(1+\frac{1}{k}\right)^k=1-e^{k\log(1+1/k)-1}=1-e^{-x/2+x^2/3-x^3/4+x^4/5 \ldots} \ge 1-e^{-x/2+x^2/3}$$
Also $$1-e^{-x/2+x^2/3} \ge x/2-x^2/3-(x/2-x^2/3)^2/2=x/2-11x^2/24+x^3/6-x^4/18$$
So $(1+k)\left(1-\frac{1}{e}\left(1+\frac{1}{k}\right)^k\right) \ge \frac{1}{2}+\frac{1}{24 k}-\frac{7}{24 k^2}+\frac{1}{9 k^3}-\frac{1}{18 k^4} > \frac{1}{2}$ for $k \ge 7$
Hence $\left(k+(1+k)\left(1-\frac{1}{e}\left(1+\frac{1}{k}\right)^k\right)\right)\log\left(1+\frac{1}{k}\right) > (k+\frac{1}{2})\log\left(1+\frac{1}{k}\right), k \ge 7$
But if $f(x)=(x+\frac{1}{2})\log\left(1+\frac{1}{x}\right), x>1$ we have $f'(x)=\log\left(1+\frac{1}{x}\right)-\frac{1}{2x}-\frac{1}{2(x+1)}$ and then $f''(x)=-\frac{1}{x(x+1)}+\frac{1}{2x^2}+\frac{1}{2(x+1)^2}> 0$ so $f'$ increasing hence $f'<0$ (it is $0$ at infinity), hence $f$ decreasing so $f(x) >\lim_{y\to \infty}f(y)=1$ so indeed $(k+\frac{1}{2})\log\left(1+\frac{1}{k}\right)>1$ and finally $$\left(k+(1+k)\left(1-\frac{1}{e}\left(1+\frac{1}{k}\right)^k\right)\right)\log\left(1+\frac{1}{k}\right)>1, k \ge 7$$
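For completeness, the remaining small cases $k<7$ (and the inequality over a wide range of $k$) can be checked numerically; a quick sketch of mine:

```python
from math import log, e

def lhs(k):
    return (k + (1 + k) * (1 - (1 + 1 / k)**k / e)) * log(1 + 1 / k)

# covers the hand-checked cases k < 7 as well as a range of larger k
assert all(lhs(k) > 1 for k in range(1, 500))
```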
| {
"language": "en",
"url": "https://mathoverflow.net/questions/441848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is $ \frac{\pi^2}{12}=\ln(2)$ not true? This question may sound ridiculous at first sight, but let me please show you all how I arrived at the aforementioned 'identity'.
Let us begin with (one of the many) equalities established by Euler:
$$ f(x) = \frac{\sin(x)}{x} = \prod_{n=1}^{\infty} \Big(1-\frac{x^2}{n^2\pi^2}\Big) $$
as $(a^2-b^2)=(a+b)(a-b)$, we can also write: (EDIT: We can not write this...)
$$ f(x) = \prod_{n=1}^{\infty} \Big(1+\frac{x}{n\pi}\Big) \cdot \prod_{n=1}^{\infty} \Big(1-\frac{x}{n\pi}\Big) $$
We now we arrange the terms with $ (n = 1 \land n=-2)$, $ (n = -1 \land n=2$), $( n=3 \land -4)$ , $ (n=-3 \land n=4)$ , ..., $ (n = 2n \land n=-2n-1) $ and $(n=-2n \land n=2n+1)$ together.
After doing so, we multiply the terms accordingly to the arrangement. If we write out the products, we get:
$$ f(x)=\big((1-x/2\pi + x/\pi -x^2/2\pi^2)(1+x/2\pi-x/\pi - x^2/2\pi^2)\big) \cdots $$
$$
\cdots \big((1-\frac{x}{2n\pi} + \frac{x}{(2n-1)\pi} -\frac{x^2}{2n(2n-1)\pi^2})(1+\frac{x}{2n\pi} -\frac{x}{(2n-1)\pi} -\frac{x^2}{2n(2n-1)\pi^2})\big) $$
Now we equate the $x^2$-term of this infinite product, using
Newton's identities (notice that the '$x$'-terms are eliminated) to the $x^2$-term of the Taylor-expansion series of $\frac{\sin(x)}{x}$. So,
$$ -\frac{2}{\pi^2}\Big(\frac{1}{1\cdot2} + \frac{1}{3\cdot4} + \frac{1}{5\cdot6} + \cdots + \frac{1}{2n(2n-1)}\Big) = -\frac{1}{6} $$
Multiplying both sides by $-\pi^2$ and dividing by 2 yields
$$\sum_{n=1}^{\infty} \frac{1}{2n(2n-1)} = \pi^2/12 $$
That (infinite) sum 'also' equates $\ln(2)$, however (According to the last section of this paper).
So we find $$ \frac{\pi^2}{12} = \ln(2) . $$
Of course we all know that this is not true (you can verify it by checking the first couple of digits). I'd like to know how much of this method, which I used to arrive at this absurd conclusion, is true, where it goes wrong and how it can be improved to make it work in this and perhaps other cases (series).
Thanks in advance,
Max Muller
(note I: 'ln' means 'natural logarithm)
(note II: with 'to make it work' means: 'to find the exact value of)
| Maybe I'm too late to be of much use to the original question-asker,
but I was surprised to see that all of the previous answers seem to
not quite address the real point in this question.
*
*While it is important to be aware of the dangers of rearranging conditionally convergent series, it not true that any rearrangement is invalid, in terms of changing the value of the sum.
Namely, any finite rearrangement of terms will obviously leave the sum unchanged.
So will any collection of disjoint finite rearrangements.
For example, by a standard Taylor series argument the following sum converges conditionally to $\ln (2)$:
$$ H_{\pm} = \sum_{m\geq 1}(-1)^{m+1}\frac1{m} = 1 - \frac12 + \frac13 - \frac14 + \cdots = \ln(2).$$
(Note that by grouping terms,
$ H_{\pm} = \sum_{n\geq 1} \left( \frac{1}{2n-1} - \frac{1}{2n}\right)= \sum_{n\geq1} \frac1{2n(2n-1)}$.)
The following ''rearranged'' sum also converges to the same value:
$$ H_{\pm}^* = \sum_{n\geq 1} \left( - \frac{1}{2n} + \frac{1}{2n-1}\right)
= -\frac12 + 1 - \frac14 + \frac13 - \cdots .$$
The partial sums of $H_{\pm}$ and $H_{\pm}^*$ share a subsequence in common,
the partial sums indexed by even numbers,
so the two series must both converge to the same value.
This is essentially the same type of rearrangement that Max is considering in the question.
The product of
$$ \textstyle\left(1 + \frac1{2n-1}\frac{x}{\pi}\right)
\left(1 -\frac1{2n-1} \frac{x}{\pi} \right)
\qquad\text{and}\qquad
\left(1 + \frac1{2n}\frac{x}{\pi}\right)
\left(1 - \frac1{2n} \frac{x}{\pi} \right) $$
can be rearranged as the product
$$ \textstyle\left(1 + \frac1{2n-1}\frac{x}{\pi}\right)
\left(1 -\frac1{2n} \frac{x}{\pi} \right)
\qquad\text{and}\qquad
\left(1 - \frac1{2n-1}\frac{x}{\pi}\right)
\left(1 +\frac1{2n} \frac{x}{\pi} \right) .$$
This does not change the value of the (conditionally convergent) infinite product
for $\frac{\sin x}{x}$.
So if the error in this ''proof'' of $\pi^2/6 = \ln(2)$ is not in rearranging terms,
where is the actual mistake?
*The mistake is in leaving out a term when "foiling" the product of two polynomials.
(Or, in a misapplication of Newton's identities.)
The valid infinite product expression
$$ \frac{\sin x}{x} = \prod_{n\geq 1}
\textstyle\left(1 + \frac1{2n-1} \frac x{\pi}\right)\left(1 -\frac1{2n} \frac{x}{\pi}\right) \left(1 - \frac1{2n-1}\frac{x}{\pi}\right)\left(1 +\frac1{2n}\frac{x}{\pi} \right) $$
$$\qquad\qquad \qquad\quad= \prod_{n\geq 1}
\textstyle\left(1 + \frac1{2n(2n-1)} \frac{x}{\pi} - \frac1{2n(2n-1)}\frac{x^2}{\pi^2} \right) \left(1 - \frac1{2n(2n-1)} \frac{x}{\pi} - \frac1{2n(2n-1)}\frac{x^2}{\pi^2} \right) $$
simplifies to
$$\frac{\sin x}{x} = \prod_{n\geq 1}
\textstyle\left(1 - \frac{2}{2n(2n-1)}\frac{x^2}{\pi^2} - \frac{1}{(2n)^2(2n-1)^2} \frac{x^2}{\pi^2} + O(x^4)\right) .$$
The coefficient of $x^2$ in this product is the series
$$ -\frac1{\pi^2} \sum_{n\geq 1}\left( \frac{2}{2n(2n-1)} + \frac{1}{(2n)^2(2n-1)^2} \right)$$
which does, in fact, converge to $ -\frac1{\pi^2}\zeta(2) = -\frac1 6$.
This can be checked through some algebra, or by asking WolframAlpha.
(This explains why
$ \sum_{n\geq 1} \frac{1}{(2n)^2(2n-1)^2} = \frac{\pi^2}{6}-2\ln(2),$
which I would not have known how to evaluate otherwise.)
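One can confirm this numerically (a quick check of my own): the corrected coefficient series sums to $\zeta(2)=\pi^2/6$, while the series missing the cross term gives only $2\ln 2$:

```python
from math import pi, log

N = 200000
s1 = sum(2 / (2 * n * (2 * n - 1)) for n in range(1, N))          # -> 2 ln 2
s2 = sum(1 / ((2 * n)**2 * (2 * n - 1)**2) for n in range(1, N))  # cross term
assert abs(s1 - 2 * log(2)) < 1e-4
assert abs(s1 + s2 - pi**2 / 6) < 1e-4
```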
| {
"language": "en",
"url": "https://mathoverflow.net/questions/27592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 4,
"answer_id": 2
} |
Any sum of 2 dice with equal probability The question is the following: Can one create two nonidentical loaded 6-sided dice such that when one throws with both dice and sums their values the probability of any sum (from 2 to 12) is the same. I said nonidentical because its easy to verify that with identical loaded dice its not possible.
Formally: Let's say that $q_{i}$ is the probability that we throw $i$ on the first die and $p_{i}$ is the same for the second die. $p_{i},q_{i} \in [0,1]$ for all $i \in 1\ldots 6$. The question is that with these constraints are there $q_{i}$s and $p_{i}$s that satisfy the following equations:
$ q_{1} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{2} + q_{2} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{3} + q_{2} \cdot p_{2} + q_{3} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{4} + q_{2} \cdot p_{3} + q_{3} \cdot p_{2} + q_{4} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{5} + q_{2} \cdot p_{4} + q_{3} \cdot p_{3} + q_{4} \cdot p_{2} + q_{5} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{6} + q_{2} \cdot p_{5} + q_{3} \cdot p_{4} + q_{4} \cdot p_{3} + q_{5} \cdot p_{2} + q_{6} \cdot p_{1} = \frac{1}{11}$
$ q_{2} \cdot p_{6} + q_{3} \cdot p_{5} + q_{4} \cdot p_{4} + q_{5} \cdot p_{3} + q_{6} \cdot p_{2} = \frac{1}{11}$
$ q_{3} \cdot p_{6} + q_{4} \cdot p_{5} + q_{5} \cdot p_{4} + q_{6} \cdot p_{3} = \frac{1}{11}$
$ q_{4} \cdot p_{6} + q_{5} \cdot p_{5} + q_{6} \cdot p_{4} = \frac{1}{11}$
$ q_{5} \cdot p_{6} + q_{6} \cdot p_{5} = \frac{1}{11}$
$ q_{6} \cdot p_{6} = \frac{1}{11}$
I don't really know how to start with this. Any suggestions are welcome.
| I've heard this brainteaser before, and usually it's phrased so that 2-12 must come up equally likely (with no condition on other sums). With this formulation (or interpretation) it becomes possible. Namely, {0,0,0,6,6,6} and {1,2,3,4,5,6}. In this case, you can also generate the sum 1, but 2-12 are equally likely (in fact 1-12 are equally likely). Without allowing for other sums I do suspect it's impossible (and it looks like proofs have been given). I arrived at this answer by noting we are asking for equal probability for 11 events that come from 36 (6*6) possible outcomes, which immediately seems unlikely. However equal probability for 12 events from 36 outcomes is far more manageable :)
Phil
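A quick enumeration (mine) confirms the construction: each of the sums $1$ through $12$ occurs exactly $3$ times among the $36$ outcomes:

```python
from collections import Counter
from itertools import product

die1 = [0, 0, 0, 6, 6, 6]
die2 = [1, 2, 3, 4, 5, 6]
counts = Counter(x + y for x, y in product(die1, die2))
assert sorted(counts) == list(range(1, 13))   # every sum 1..12 appears
assert set(counts.values()) == {3}            # each exactly 3 times
```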
| {
"language": "en",
"url": "https://mathoverflow.net/questions/41310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 3
} |
Length of Hirzebruch continued fractions Suppose $a,b$ are two natural numbers relatively prime to $n$ and to each other. Assume $n\geq ab+1$. Suppose further that $\frac{a}{b}\equiv k \pmod{n}$ for some $k\in \lbrace 1,2,\dots, n-1\rbrace$ and $\frac{a}{b}\equiv k'\pmod{n+ab}$ for some $k'\in \lbrace 1,2,\dots, n+ab-1\rbrace$.
Question: Is there an elementary proof that the length of the continued fraction of $\frac{n}{k}$ is equal to the length of the continued fraction of $\frac{n+ab}{k'}$?
This came out of a broader result, and for this particular case I can prove it using routine toric geometry, however I would like to know of some elementary tricks to deal with continued fractions.
Here by continued fraction I mean the Hirzebruch continued fraction
$$\frac{n}{k}=a_0-\frac{1}{a_1-\frac{1}{a_2-\cdots}}.$$
For example, when $a=2, b=3$ and $n=17$, we get $k=12$ and $k'=16$, so the fractions are
$$\frac{17}{12}=2-\frac{1}{2-\frac{1}{4-\frac{1}{2}}}\qquad and \qquad\frac{23}{16}=2-\frac{1}{2-\frac{1}{5-\frac{1}{2}}}.$$
| Forgive me, this should be in the comments, but I am still building my reputation up to comment. If a=5, b=4, n=7, then k=3, k'=8, and n+ab=27.
Here, $\frac{n}{k}=\frac{7}{3}=3-\frac{1}{2-\frac{1}{2}}$ but $\frac{n+ab}{k'}=\frac{27}{8}=4-\frac{1}{2-\frac{1}{3-\frac{1}{2}}}$. Am I missing something or is there a further assumption on these numbers?
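The lengths in both the question and this comment are easy to check with a short routine (my own sketch) computing Hirzebruch (negative-regular) continued fractions:

```python
def hirzebruch(n, k):
    """Negative-regular continued fraction n/k = a0 - 1/(a1 - 1/(...))."""
    terms = []
    while k:
        a = -(-n // k)          # ceil(n/k)
        terms.append(a)
        n, k = k, a * k - n
    return terms

# the example from the question: both fractions have length 4
assert hirzebruch(17, 12) == [2, 2, 4, 2]
assert hirzebruch(23, 16) == [2, 2, 5, 2]
# the a=5, b=4, n=7 case above: lengths 3 and 4 differ
assert hirzebruch(7, 3) == [3, 2, 2]
assert hirzebruch(27, 8) == [4, 2, 3, 2]
```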
| {
"language": "en",
"url": "https://mathoverflow.net/questions/111312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
What is known about a^2 + b^2 = c^2 + d^2 Could you state or direct me to results regarding the Diophantine equation $a^2+b^2=c^2+d^2$ over integers? Specifically, I am looking for a complete parametrization. In the case that a complete parametrization does not exist, I would be interested in seeing complete parametrizations for special cases.
For example, in the case that any one of the variables is zero, we are led to Pythagorean triples. Taking a variable to be some non-zero constant would be interesting. I'm looking for known results as I am not as familiar with the literature as many MO-ers out there.
| This equation is quite symmetrical, so many formulas can be written for it:
So for the equation:
$X^2+Y^2=Z^2+R^2$
solution:
$X=a(p^2+s^2)$
$Y=b(p^2+s^2)$
$Z=a(p^2-s^2)+2psb$
$R=2psa+(s^2-p^2)b$
solution:
$X=p^2-2(a-2b)ps+(2a^2-4ab+3b^2)s^2$
$Y=2p^2-4(a-b)ps+(4a^2-6ab+2b^2)s^2$
$Z=2p^2-2(a-2b)ps+2(b^2-a^2)s^2$
$R=p^2-2(3a-2b)ps+(4a^2-8ab+3b^2)s^2$
solution:
$X=p^2+2(a-2b)ps+(10a^2-4ab-5b^2)s^2$
$Y=2p^2+4(a+b)ps+(20a^2-14ab+2b^2)s^2$
$Z=-2p^2+2(a-2b)ps+(22a^2-16ab-2b^2)s^2$
$R=p^2+2(7a-2b)ps+(4a^2+8ab-5b^2)s^2$
solution:
$X=2(a+b)p^2+2(a+b)ps+(5a-4b)s^2$
$Y=2((2a-b)p^2+2(a+b)ps+(5a-b)s^2)$
$Z=2((a+b)p^2+(7a-2b)ps+(a+b)s^2)$
$R=2(b-2a)p^2+2(a+b)ps+(11a-4b)s^2$
solution:
$X=2(b-a)p^2+2(a-b)ps-as^2$
$Y=2((b-2a)p^2+2(a-b)ps+(b-a)s^2)$
$Z=2((b-a)p^2+(3a-2b)ps-(a-b)s^2)$
$R=2(b-2a)p^2+2(a-b)ps+as^2$
The numbers $a,b,p,s$ are arbitrary integers of our choosing, and they may be of any sign.
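At least the first of the parametrizations above can be verified mechanically; here is a quick randomized check of my own that $X^2+Y^2=Z^2+R^2$ holds identically for it:

```python
from random import seed, randint

def sol1(a, b, p, s):
    X = a * (p * p + s * s)
    Y = b * (p * p + s * s)
    Z = a * (p * p - s * s) + 2 * p * s * b
    R = 2 * p * s * a + (s * s - p * p) * b
    return X, Y, Z, R

seed(1)
for _ in range(1000):
    a, b, p, s = (randint(-20, 20) for _ in range(4))
    X, Y, Z, R = sol1(a, b, p, s)
    assert X * X + Y * Y == Z * Z + R * R
```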
| {
"language": "en",
"url": "https://mathoverflow.net/questions/130131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
can we say that $(p^2+1)/2\ne p_0^2$ where $p$ is a Mersenne prime Let $p=2^a-1>7$ be a Mersenne prime and so $a$ is an odd prime.
Can we say that $(p^2+1)/2$ is not equal to the square of a prime number?
Many thanks for your help
BHZ
| Suppose $p^2-2p_1^2=-1$. Substituting $p=2^a-1$, we arrive at
$$2^a(2^{a-1}-1)=(p_1-1)(p_1+1).$$ Observe that $(p_1-1,p_1+1)=2$, so we must have the following options: $p_1-1=2^{a-1}k$ and $p_1+1=2l$ and $kl=2^{a-1}-1.$ This is impossible unless $k,l$ and thus $a$ are small. Indeed, if $k\ge 2,$ then $p_1\ge 2^a+1$ and $l\ge 2^{a-1}$ so $kl\ge2^a.$ If $k=1,$ then $p_1=2^{a-1}+1$ and $l=2^{a-2}+1$ and $kl=2^{a-2}+1=2^{a-1}-1.$ This implies $a=3.$ Otherwise, $p_1-1=2k$ and $p_1+1=2^{a-1}l$ and $kl=2^{a-1}-1.$ Again it is possible only if $k,l$ and $a$ are small. These cases can be checked by hand.
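A brute-force check (my own, not part of the argument) over the small Mersenne primes confirms that $a=3$ (i.e. $p=7$, with $(p^2+1)/2=25=5^2$) is the only exception, consistent with the hypothesis $p>7$ in the question:

```python
from math import isqrt

def is_prime(m):
    if m < 2:
        return False
    return all(m % q for q in range(2, isqrt(m) + 1))

hits = []
for a in [2, 3, 5, 7, 13, 17, 19, 31]:     # exponents of small Mersenne primes
    p = 2**a - 1
    m = (p * p + 1) // 2
    r = isqrt(m)
    if r * r == m and is_prime(r):         # is m the square of a prime?
        hits.append((p, r))
assert hits == [(7, 5)]
```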
| {
"language": "en",
"url": "https://mathoverflow.net/questions/131570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Sequences with integral means Let $S(n)$ be the sequence whose first element is $n$, and from then onward,
the next element is the smallest natural number ${\ge}1$ that ensures that the
mean of all the numbers in the sequence is an integer.
For example, the second element of $S(4)$ cannot be $1$ (mean $\frac{5}{2}$),
but $2$ works: $S(4)=4,2,\ldots$. Then the third element cannot be
$1$ (mean $\frac{7}{3}$)
or $2$ (mean $\frac{8}{3}$), but $3$ works: $S(4)=4,2,3,\ldots$.
And from then on, the elements are all $3$'s, which I'll write as
$S(4)=4,2,\overline{3}$.
Here are a few more examples:
$$S(1)=1,\overline{1}$$
$$S(2)=2,2,\overline{2}$$
$$S(3)=3,1,\overline{2}$$
$$S(4)=4,2,\overline{3}$$
$$S(5)=5,1,\overline{3}$$
$$S(11)=11,1,3,1,\overline{4}$$
$$S(32)=32,2,2,4,5,3,1,\overline{7}$$
$$S(111)=111,1,2,2,4,6,7,3,8,6,4,2,\overline{13}$$
$$S(112)=112,2,3,3,5,1,7,3,8,6,4,2,\overline{13}$$
$$S(200)=200,2,2,4,2,6,1,7,1,5,1,9,7,5,3,1,\overline{16}$$
Has anyone studied these sequences?
Is there a simple proof that each sequence ends with a repeated number
$\overline{m}$?
Is there a way to predict the value of $m$ from the start $n$ without computing the
entire sequence up to $\overline{m}$?
Might it be that the repeat value $m=r(n)$ satisfies $r(n+1) \ge r(n)$?
This question occurred to me when thinking about streaming computation of means,
a not-infrequent calculation (e.g., computing mean temperatures).
Added, supporting the discoveries of several commenters: $r(n)$ red,
and now fit with $1.135 \sqrt{n}$ blue for $n\le 10000$:
As per Eckhard's request, with Will's function $\sqrt{4n/3} -1/2$:
| Let $k$ be sufficiently large. Denote the sequence $n,a_1,\ldots,a_{k-1}$ with $a_i\leq i+1$ for $i\leq k-1$.
The sum of the first $k$ elements including $n$ (denote it by $S_+(a_k)$) must be always a multiple of $k$ .
So let $d$ be a positive integer with $S_+(a_k)=d\cdot k=n+a_1+...+a_{k-1}\leq n+2+...+k=n-1+k\frac{k+1}{2}$ .
So $d=\frac{S_+(a_k)}{k}\leq\frac{n-1}{k}+\frac{k+1}{2}<k$ for $k$ large enough.
We will see that the period we want is the number $d$.
We will show that the next element $a_k$ is the number $d$, and inductively we will reach our goal.
Let $a_k$ be the next element ,so we must have $k+1|S_+(a_{k+1})$ meaning that $k+1|d\cdot k+a_k$
which gives $k+1|a_k-d$.
But $a_k\leq {k+1}$ and $d<k$ so we have only the case $a_k-d=0$ and so, $a_k=d$
(Repeat the argument again)
So we only wait until $\frac{n-1}{k}+\frac{k+1}{2}<k$ to find the period $d$.
This gives a bound close to $\sqrt n$, as Lucia commented.
I hope this helps.
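For a quick numerical check (a Python sketch, not part of the argument; the function name is mine), one can build $S(n)$ greedily and read off the repeat value:

```python
def S(n, length=60):
    # each new term is the least k >= 1 keeping the running mean an integer
    seq = [n]
    while len(seq) < length:
        total, m = sum(seq), len(seq) + 1
        k = (-total) % m or m          # least k >= 1 with m | total + k
        seq.append(k)
    return seq

assert S(4)[:3] == [4, 2, 3]           # matches the worked example
assert S(111)[-1] == 13                # repeat value r(111) = 13
assert S(200)[-1] == 16                # repeat value r(200) = 16
```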
| {
"language": "en",
"url": "https://mathoverflow.net/questions/146733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 5,
"answer_id": 4
} |
Taylor series coefficients This question arose in connection with A hard integral identity on MATH.SE.
Let
$$f(x)=\arctan{\left (\frac{S(x)}{\pi+S(x)}\right)}$$
with $S(x)=\operatorname{arctanh} x -\arctan x$, and let
$$f(x)=\sum_{n=0}^\infty a_nx^n=\frac{2}{3\pi}x^3-\frac{4}{9\pi^2}x^6+\frac{2}{7\pi}x^7+\frac{16}{81\pi^3}x^9-\frac{8}{21\pi^2}x^{10}+\ldots$$
be its Taylor series expansion at $x=0$. Some numerical evidence suggests the following interesting properties of the $a_n$ coefficients ($b_n$, $c_n$, $d_n$, $\tilde{c}_n$, $\tilde{d}_n$ are some positive rational numbers, $k>0$ is an integer):
1) $a_n=0$, for $n=4k$.
2) $a_n=\frac{2}{n\pi}-\frac{b_n}{\pi^5}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+3$.
3) $a_n=-\frac{c_n}{\pi^2}+\frac{d_n}{\pi^6}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+2$.
4) $a_n=\frac{\tilde{c}_n}{\pi^3}-\frac{\tilde{d}_n}{\pi^7}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+1$, $k>1$.
How can these properties (if correct) be proved?
P.S. We have
$$\arctan{\left(1+\frac{2S}{\pi}\right)}-\frac{\pi}{4}=\arctan{\left(\frac{1+2S/\pi-1}{1+(1+2S/\pi)}\right)}=\arctan{\left(\frac{S}{\pi+S}\right)} .$$
Using
$$\arctan(1+x)=\frac{\pi}{4}+\frac{1}{2}x-\frac{1}{4}x^2+\frac{1}{12}x^3-\frac{1}{40}x^5+\frac{1}{48}x^6-\frac{1}{112}x^7+\ldots$$
we get
$$\arctan{\left(\frac{S}{\pi+S}\right)}=\frac{S}{\pi}-\frac{S^2}{\pi^2}+\frac{2S^3}{3\pi^3}-\frac{4S^5}{5\pi^5}+\frac{4S^6}{3\pi^6}-\frac{8S^7}{7\pi^7}+\ldots$$
This proves 2), 3) and 4), because
$$S=2\left(\frac{x^3}{3}+\frac{x^7}{7}+\frac{x^{11}}{11}+\ldots\right)=2\sum_{k=0}^\infty \frac{x^{4k+3}}{4k+3} .$$
To prove 1), we need to prove the analogous property for $\arctan(1+x)$ and the proof can be based on the formula
$$\frac{d^n}{dx^n}(\arctan x)=\frac{(-1)^{n-1}(n-1)!}{(1+x^2)^{n/2}}\sin{\left (n\,\arcsin{\left(\frac{1}{\sqrt{1+x^2}}\right)}\right )}$$
proved in K. Adegoke and O. Layeni, The Higher Derivatives Of The Inverse Tangent Function and Rapidly Convergent BBP-Type Formulas For Pi, Applied Mathematics E-Notes, 10(2010), 70-75, available at http://www.math.nthu.edu.tw/~amen/2010/090408-2.pdf. This formula enables us to get a closed-form expression
$$\arctan{\left(\frac{S}{\pi+S}\right)}=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\,2^{n/2}\,\sin{\left(\frac{n\pi}{4}\right)}\,\frac{S^n}{\pi^n} .$$
So the initial questions are no longer open. However, I'm still interested to know whether one can calculate in closed form the integral $$\int\limits_0^1 S^n(x)\frac{dx}{x} .$$
| Here is a possible approach toward proving (1). Let $i=\sqrt{-1}$. Let $g(x)=f(x) +f(ix) +f(-x)+f(-ix)$. We need to
show that $g(x)=0$. Unfortunately, Maple is unable to do this directly. Thus write each
$\arctan u$ as $\frac i2\log\frac{1-iu}{1+iu}$ and $\mathrm{arctanh}\,u$ as $\frac 12\log\frac{1+u}{1-u}$
and simplify. Every time you see a $\log \frac ab$, replace with $\log a-\log b$. Perhaps the
16 terms will cancel out in pairs. I could not get Maple to do this. You also might need to replace
expressions like $\log(-i(1-x))$ with $\log(-i)+\log(1-x)$. Perhaps Mathematica will be smarter than Maple in
showing $g(x)=0$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/155263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
A possibly surprising appearance of Fibonacci numbers Let $H(n) = 1/1 + 1/2 + \dotsb + 1/n,$ and for $i \leq j,$ let $a_1$ be the least $k$ such that
$$H(k) > 2H(j) - H(i),$$
let $a_2$ be the least $k$ such that
$$H(k) > 2H(a_1) - H(j),$$
and for $n \geq 3,$ let $a_n$ be the least $k$ such that
$$H(k) > 2H(a_{n-1}) - H(a_{n-2}).$$
Prove (or disprove) that if $i = 5$ and $j = 8$, then $(a_n)$ is the sequence $(5,8,13,21,\dotsc)$ of Fibonacci numbers, and determine all $(i,j)$ for which $(a_n)$ is linearly recurrent.
| The statement is true. Write $F_n$ for the $n$-th Fibonacci (my indexing starts at $(F_0, F_1, F_2, F_3, \dots) = (0,1,1,2,\dots)$). We are being asked to show that
$$\frac{1}{F_{j+1}} > \sum_{m=F_{j}+1}^{F_{j+1}} \frac{1}{m} - \sum_{m=F_{j-1}+1}^{F_{j}} \frac{1}{m} > 0\ \mbox{for}\ j \geq 6.$$
Computer computations easily check this for $6 \leq j \leq 20$, so we only need to check large $j$.
We know that
$$H(n) = \log n + \gamma + \frac{1}{2n} + O(1/n^2)$$
where the constant in the $O( \ )$ can be made explicit -- something like $1/12$.
So
$$\sum_{m=B+1}^C \frac{1}{m} - \sum_{m=A+1}^B \frac{1}{m} = \log \frac{AC}{B^2} + \frac{1}{2} \left(\frac{1}{A}-\frac{2}{B}+\frac{1}{C} \right) + O(1/A^2)$$
where the constant in the $O( \ )$ is something like $1/3$.
Now, $F_{j-1} F_{j+1} = F_j^2 \pm 1$. So
$$\log \frac{F_{j-1} F_{j+1}}{F_j^2} = \log \left( 1 \pm \frac{1}{F_j^2} \right) = O(1/F_j^2).$$
So, up to terms with error $O(1/F_j^2)$ and a fairly small constant in the $O( \ )$, we are being asked to show that
$$\frac{1}{F_{j+1}} > \frac{1}{2} \left( \frac{1}{F_{j+1}} - \frac{2}{F_j} + \frac{1}{F_{j-1}} \right) > 0.$$
Set $\tau = \frac{1+\sqrt{5}}{2}$, and recall that $F_j = \frac{\tau^j}{\sqrt{5}} + O(\tau^{-j})$. Then, up to errors of $O(\tau^{-3j}) = O(1/F_j^3)$, this turns into the inequalities
$$\frac{1}{\tau^2} > \frac{1}{2} \left( 1 - \frac{2}{\tau} + \frac{1}{\tau^2} \right) > 0.$$
The latter is obviously true; the former can be checked by calculation.
It looks like the same analysis should apply sufficiently far out any linear recursion with solution of the form $G_j = c_1 \theta_1^j + \sum_{r=2}^s c_r \theta_r^j$ where $1 < \theta_1 < 1+\sqrt{2}$ and $|\theta_r|<1$ for $r>1$. (So $\theta_1$ should be a Pisot number.) The inequality $|\theta_r|<1$ makes $\log \frac{G_{j+1} G_{j-1}}{G_j^2} \approx \frac{\max_{r \geq 2} |\theta_r|^j \theta_1^j}{\theta_1^{2j}}$ be much less than $1/G_j \approx 1/\theta_1^j$. The inequality $\theta_1 < 1+\sqrt{2}$ makes $1/\theta^2 > (1/2)(1-2/\theta+1/\theta^2)$.
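For what it's worth, the key inequality can be checked in exact arithmetic for small $j$ (a Python sketch with `fractions`; indexing $F_0=0, F_1=1$ as above):

```python
from fractions import Fraction

def H(n):
    # exact harmonic number
    return sum(Fraction(1, k) for k in range(1, n + 1))

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# 1/F_{j+1} > H(F_{j+1}) - 2 H(F_j) + H(F_{j-1}) > 0 for j >= 6
for j in range(6, 13):
    gap = H(fib(j + 1)) - 2 * H(fib(j)) + H(fib(j - 1))
    assert 0 < gap < Fraction(1, fib(j + 1))
```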
| {
"language": "en",
"url": "https://mathoverflow.net/questions/196681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 2,
"answer_id": 1
} |
Prove $4\sum_{k=1}^{p-1}\frac{(-1)^k}{k^2}\equiv 3\sum_{k=1}^{p-1}\frac{1}{k^2}\pmod{p^2}$ Wolstenholme's theorem is stated as follows:
if $p>3$ is a prime, then
\begin{align*}
\sum_{k=1}^{p-1}\frac{1}{k}\equiv 0 \pmod{p^2},\\
\sum_{k=1}^{p-1}\frac{1}{k^2} \equiv 0 \pmod{p}.
\end{align*}
It is also not hard to prove that
$$
\sum_{k=1}^{p-1}\frac{(-1)^k}{k^2}\equiv 0 \pmod{p}.
$$
However, there are some relationships between $\sum_{k=1}^{p-1}\frac{(-1)^k}{k^2}$ and $\sum_{k=1}^{p-1}\frac{1}{k^2}$ mod $p^2$, which I can not prove.
Question:
If $p$ is an odd prime, then
$$
4\sum_{k=1}^{p-1}\frac{(-1)^k}{k^2}\equiv 3\sum_{k=1}^{p-1}\frac{1}{k^2}\pmod{p^2}.
$$
I have verified this congruence for $p$ up to $7919$.
Comments:
(1) Since
$$4\sum_{k=1}^{p-1}\frac{(-1)^k}{k^2}=2\sum_{k=1}^{\frac{p-1}{2}}\frac{1}{k^2}-4\sum_{k=1}^{p-1}\frac{1}{k^2},$$
we need to prove
$$2\sum_{k=1}^{\frac{p-1}{2}}\frac{1}{k^2}\equiv 7\sum_{k=1}^{p-1}\frac{1}{k^2} \pmod{p^2}.$$
This idea was given by Fedor Petrov.
(2) It is interesting that the congruence in the question is ture mod $p^3$ for $p\ge 7$, i.e.,
$$
4\sum_{k=1}^{p-1}\frac{(-1)^k}{k^2}\equiv 3\sum_{k=1}^{p-1}\frac{1}{k^2}\pmod{p^3}, \quad \text{for $p\ge 7$}.
$$
This was conjectured by tkr.
I appreciate any proofs, hints, or references!
| This goes way back to Emma Lehmer; see her elementary paper on Fermat quotients and Bernoulli numbers from 1938.
Assume $p\ge 7$. First, I reformulate your congruence. I replace
$$4\sum_{k=1}^{p-1}\frac{(-1)^k}{k^2}\equiv 3\sum_{k=1}^{p-1}\frac{1}{k^2}\pmod{p^3}$$
with the equivalent
$$\sum_{k=1}^{p-1} \frac{1}{k^2} \equiv 8\sum_{k=1}^{\frac{p-1}{2}}\frac{1}{(p-2k)^2}\pmod{p^3}$$
Lehmer has proved the following, see equation (19) in the linked paper:
$$\sum_{k=1}^{\frac{p-1}{2}} (p-2k)^{2r} \equiv p 2^{2r-1} B_{2r} \pmod {p^3}$$
(valid when $2r \not\equiv 2 \pmod {p-1}$). Plugging $2r=p^3-p^2-2$, we get
$$\sum_{k=1}^{\frac{p-1}{2}} (p-2k)^{p^3-p^2-2} \equiv p 2^{p^3-p^2-3} B_{p^3-p^2-2} \pmod {p^3}$$
Now use Euler's theorem to replace $x^{p^3-p^2-2}$ with $x^{-2}$:
$$\sum_{r=1}^{\frac{p-1}{2}} \left(\frac{1}{p-2k}\right)^2 \equiv p 2^{-3} B_{p^3-p^2-2} \pmod {p^3}$$
So your congruence becomes
$$(*)\qquad \sum_{k=1}^{p-1} \frac{1}{k^2} \equiv pB_{p^3-p^2-2} \pmod {p^3}$$
This was also proved in Lehmer's paper - in equation (15) she writes
$$\sum_{k=1}^{p-1} k^{2r} \equiv pB_{2r} \pmod {p^3}$$
(valid when $2r \not\equiv 2 \pmod {p-1}$). Plugging $2r=p^3-p^2-2$ and using Euler's theorem, we get the desired result.
It is now easy to generalize - when $2r \not\equiv 2 \pmod {p-1}$, we have
$$\sum_{k=1}^{p-1} (-1)^k k^{2r} \equiv (1-2^{2r}) \sum_{k=1}^{p-1} k^{2r} \pmod {p^3}$$
Plugging $2r=p^3-p^2-2c$ ($c \not\equiv -1 \pmod {\frac{p-1}{2}}$) we get
$$\sum_{k=1}^{p-1} \frac{(-1)^k}{k^{2c}} \equiv (1-2^{-2c}) \sum_{k=1}^{p-1} \frac{1}{k^{2c}} \pmod {p^3}$$
To get a taste of things, I include a proof of $(*)$. Via Faulhaber's formula for sum of powers, we get:
$$\sum_{k=1}^{p-1} \frac{1}{k^2} \equiv \sum_{k=1}^{p-1} k^{p^3-p^2-2} \equiv \sum_{k=1}^{p} k^{p^3-p^2-2} = $$
$$\frac{1}{p^3-p^2-1} \sum_{j=0}^{p^3-p^2-2}\binom{p^3-p^2-1}{j}B_j p^{p^3-p^2-1-j} \pmod {p^3}$$
Note that odd-indexed Bernoulli's vanish, and that by the Von Staudt–Clausen theorem, the even-indexed Bernoulli's have $p$ in the denominator with multiplicity $\le 1$. Additionally, $B_{p^3-p^2-4}$ is $p$-integral (since $p-1 \nmid p^3-p^2-4$).
Hence, for the purpose of evaluating the sum modulo $p^3$, we can drop almost all the terms and remain only with the last one:
$$\sum_{k=1}^{p-1} \frac{1}{k^2} \equiv \frac{1}{p^3-p^2-1} \binom{p^3-p^2-1}{p^3-p^2-2} B_{p^3-p^2-2}\, p = pB_{p^3-p^2-2} \pmod {p^3}$$
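A quick machine check of the congruence modulo $p^3$ for the first few primes $p\ge 7$ (a Python sketch; `pow(x, -1, m)` requires Python 3.8+, and the function name is mine):

```python
def holds_mod_p3(p):
    # check 4*sum((-1)^k/k^2) == 3*sum(1/k^2) mod p^3 via modular inverses
    m = p**3
    inv_sq = lambda k: pow(k * k, -1, m)
    s_alt = sum((-1)**k * inv_sq(k) for k in range(1, p)) % m
    s = sum(inv_sq(k) for k in range(1, p)) % m
    return (4 * s_alt - 3 * s) % m == 0

assert all(holds_mod_p3(p) for p in [7, 11, 13, 17, 19, 23, 29])
```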
| {
"language": "en",
"url": "https://mathoverflow.net/questions/226804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 1,
"answer_id": 0
} |
Mutation of valued quivers Mutations of valued quivers are defined in cluster algebras II, Proposition 8.1 on page 28. I have a question about the number $c'$. For example, let $a = 2, b=1, c=1$ and consider the quiver $Q$:
$1 \overset{a}{\to} 2 \overset{b}{\to} 3$ and there is an arrow $1 \overset{c}{\to} 3$ in this quiver $Q$.
Now we mutate at $2$. According to the formula, $ \pm \sqrt{c} \pm \sqrt{c'} = \sqrt{ab}$. In this example, it is $ -1 + \sqrt{c'} = \sqrt{2}$. But $c' = (\sqrt{2}+1)^2$ is not an integer. Where did I make a mistake? Thank you very much.
| I don't think your diagram is coming from a skew-symmetrizable matrix. Let $B$ be a skew-symmetrizable matrix and assume $Q = \Gamma(B)$ so your quiver is the diagram of $B$ as defined in Definition 7.3 of the linked paper. Then we must have
$$B = \begin{bmatrix}0 & x & y \\ \frac{-1}{x} &0 & z \\ \frac{-1}{y} & \frac{-2}{z} &0 \end{bmatrix}$$
with $x,y,z > 0$.
There also must exist a diagonal matrix $D = \mathrm{diag}(r,s,t)$ with $r,s,t > 0$ so that
$$DB = \begin{bmatrix}0 & rx & ry \\ \frac{-s}{x} &0 & sz \\ \frac{-t}{y} & \frac{-2t}{z} &0 \end{bmatrix}$$
is skew-symmetric.
So, $rx = \frac{s}{x}$, $ry = \frac{t}{y}$, and $sz = \frac{2t}{z}$ or equivalently $x^2 = \frac{s}{r}$, $y^2 = \frac{t}{r}$, and $z^2 = \frac{2t}{s}$.
But this leads to a contradiction since $\left(\frac{y}{x}\right)^2 = \frac{y^2}{x^2} = \frac{t}{s}$, but also $z^2 = 2\left(\frac{t}{s}\right)$.
All the variables above aren't completely necessary. Since all entries of $B$ are integers we see $x = y = 1$ and $z = 1$ or $z = 2$. It then follows that $r = s = t$, and the contradiction is $z^2 = 2$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/248083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all $m$ such $2^m+1\mid5^m-1$ The problem comes from one I encountered while writing an article.
Find all positive integer $m$ such
$$2^{m}+1\mid5^m-1$$
It seems there is no solution. I think it might be necessary to use quadratic reciprocity to solve this problem.
If $m$ is odd then $2^m+1$ is divisible by 3 but $5^m-1$ is not.
So $m$ must be even; take $m=2n$. Then
$$4^n+1\mid25^n-1.$$
If $n$ is odd, then $4^n+1$ is divisible by $5$, but $25^n-1$ is not.
So $n$ is even; take $n=2p$. We have
$$16^p+1\mid625^p-1.$$
| Here is a proof.
Theorem. $2^m+1$ never divides $5^m-1$.
Assume that there is some $m$ such that $2^m+1$ divides
$5^m-1$. We already know that $m$ must be divisible by $4$.
Let $m = 2^n a$ with an odd integer $a$ and $n \ge 2$.
The $n$th Fermat number $$F_n = 2^{2^n} + 1$$ is congruent
to $2$ mod $5$ (this uses $n \ge 2$), so it has a prime
divisor $p$ such that $$p \equiv \pm 2 \pmod 5.$$
We know that $p-1 = 2^{n+1}k$ for some integer $k$.
Since
$\left(\frac{5}{p}\right) = \left(\frac{p}{5}\right) = -1$,
we have that $$5^{2^n k} = 5^{(p-1)/2} \equiv -1 \pmod p,$$
so $$5^{mk} = (5^{2^n k})^a \equiv -1 \pmod p$$
as well. In particular,
$$5^m \not\equiv 1 \pmod p.$$
On the other hand,
$$2^m = (2^{2^n})^a \equiv (-1)^a = -1 \pmod p.$$
Thus $p$ divides $2^m+1$, but does not divide $5^m-1$,
a contradiction.
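Consistent with the theorem, a brute-force search finds no exponent (a Python sketch):

```python
# 2^m + 1 divides 5^m - 1 iff 5^m % (2^m + 1) == 1; no m up to 2000 works
for m in range(1, 2001):
    assert pow(5, m, 2**m + 1) != 1
```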
| {
"language": "en",
"url": "https://mathoverflow.net/questions/317643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
$\sum_{k =1, k \neq j}^{N-1} \csc^2\left(\pi \frac{k}{N} \right)\csc^2\left(\pi \frac{j-k}{N} \right)=?$ It is well-known that one can evaluate the sum
$$\sum_{k =1}^{N-1} \csc^2\left(\pi \frac{k}{N} \right)=\frac{N^2-1}{3}.$$
The answer to this problem can be found here
click here.
I am now interested in the more difficult problem of evaluating for some $j \in \{1,...,N-1\}$ the sum (we do not sum over $j$)
$$\sum_{k =1, k \neq j}^{N-1} \csc^2\left(\pi \frac{k}{N} \right)\csc^2\left(\pi \frac{j-k}{N} \right)=?$$
Does this one still allow for an explicit answer?
| Start from the well known formula
\begin{equation}
2^{N-1} \prod _{k=1}^N \left[\cos (x-y)-\cos \left(x+y+\frac{2\pi k}{N}\right)\right]=\cos N(x-y)-\cos N(x+y),
\end{equation}
take logarithmic derivative
\begin{equation}
\sum _{k=1}^N \frac{\sin(x-y)}{\cos (x-y)-\cos \left(x+y+\frac{2\pi k}{N}\right)}=\frac{N\sin N(x-y)}{\cos N (x-y)-\cos N(x+y)},
\end{equation}
rewrite it as
\begin{equation}
N+\sum _{k=1}^N \frac{\cos(x-y)+\cos \left(x+y+\frac{2\pi k}{N}\right)}{\cos (x-y)-\cos \left(x+y+\frac{2\pi k}{N}\right)}=\frac{2N\sin N(x-y)}{\cos N (x-y)-\cos N(x+y)}\cot(x-y),
\end{equation}
and simplify to get
\begin{equation}
\sum_{k=1}^N\cot\left(x-\frac{\pi k}{N}\right)\cot\left(y-\frac{\pi k}{N}\right)=\frac{2N\cot(x-y)\sin N(x-y)}{\cos N(x-y)-\cos N(x+y)}-N.
\end{equation}
Then take derivatives wrt $x$ and $y$
$$
\sum_{k=1}^N\csc^2\left(x-\frac{\pi k}{N}\right)\csc^2\left(y-\frac{\pi k}{N}\right)=\frac{N}{\sin ^2(x-y)} \left(\frac{N}{\sin ^2Nx}+\frac{N}{\sin ^2Ny}-\frac{2\sin N(x-y)}{\sin Nx\sin Ny}\,\cot (x-y) \right).
$$
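The final identity can be sanity-checked numerically at a generic point (a Python sketch; the sample point is arbitrary, chosen away from the singularities):

```python
import math

def lhs(x, y, N):
    return sum(1 / (math.sin(x - math.pi * k / N) ** 2
                    * math.sin(y - math.pi * k / N) ** 2)
               for k in range(1, N + 1))

def rhs(x, y, N):
    d = x - y
    return (N / math.sin(d) ** 2) * (
        N / math.sin(N * x) ** 2 + N / math.sin(N * y) ** 2
        - 2 * math.sin(N * d) / (math.sin(N * x) * math.sin(N * y) * math.tan(d)))

x, y, N = 0.37, 1.12, 5
assert abs(lhs(x, y, N) - rhs(x, y, N)) < 1e-8 * abs(rhs(x, y, N))
```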
| {
"language": "en",
"url": "https://mathoverflow.net/questions/349776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Diophantine representation of the set of prime numbers of the form $n²+1$ A polynomial formula for the primes (with 26 variables) was presented by Jones, J., Sato, D., Wada, H. and Wiens, D. (1976). Diophantine representation of the set of prime numbers. American Mathematical Monthly, 83, 449-464.
The set of prime numbers is identical with the set of positive values taken on by the polynomial
$(k+2)(1-(wz+h+j-q)^2-((gk+2g+k+1)\cdot(h+j)+h-z)^2-(2n+p+q+z-e)^2-(16(k+1)^3\cdot(k+2)\cdot(n+1)^2+1-f^2)^2-(e^3\cdot(e+2)(a+1)^2+1-o^2)^2-((a^2-1)y^2+1-x^2)^2-(16r^2y^4(a^2-1)+1-u^2)^2-(((a+u^2(u^2-a))^2-1)\cdot(n+4dy)^2+1-(x+cu)^2)^2-(n+l+v-y)^2-((a^2-1)l^2+1-m^2)^2-(ai+k+1-l-i)^2-(p+l(a-n-1)+b(2an+2a-n^2-2n-2)-m)^2-(q+y(a-p-1)+s(2ap+2a-p^2-2p-2)-x)^2-(z+pl(a-p)+t(2ap-p^2-1)-pm)^2)$
as the variables range over the nonnegative integers.
I am asking if there exists a similar result for the primes of the form $n²+1$, i.e., whether this set of prime numbers is identical with the set of positive values taken on by a certain polynomial.
| Call your polynomial $P$. I propose the following polynomial:
$$
P' = (\xi^2+1)(1 - (\xi^2+1-P)^2)
$$
Proof (that the positive values of $P'$ are exactly the primes of the form $N^2+1$):
Let $P_0$ be one of the values of $P$, and let $\xi_0$ be any integer.
Case (i). Suppose $P_0 = \xi_0^2+1$. Then the value of the above polynomial (with the appropriate substitutions made) is $P_0 = \xi_0^2+1$, since the second factor evaluates to unity.
Case (ii). Suppose $P_0 \neq \xi_0^2+1$. Now the second factor evaluates to some non-positive number, and hence the value of the polynomial itself is non-positive.
Now the first case gives us all primes of the form $N^2+1$ as values of $P'$, and no other values (since $P_0=\xi_0^2+1$ is positive), whereas the second case gives us zero or negative numbers exclusively. So the positive values of $P'$ are exactly the primes of the form $N^2+1$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/350089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A probability question from sociology We know that $\frac{1}{2} \leq a \leq p \leq 1$; $n \geq 3$ is a positive odd number, and $t$ is an integer. $a$ satisfies the equation below.
\begin{equation} \small
\begin{aligned}
&\sum_{t=0}^{n-1} \left( {n-1 \choose t} [p a^{t} (1-a)^{n-t-1}+(1-p) (1-a)^{t} a^{n-{t}-1}] \cdot [\frac{a^{t+1} (1-a)^{n-t-1}}{a^{t+1} (1-a)^{n-t-1}+(1-a)^{t+1} a^{n-t-1}}-\frac{a^{t} (1-a)^{n-t}}{a^{t} (1-a)^{n-t}+(1-a)^{t} a^{n-t}}] \right) \\[15pt]
&={n-1 \choose {\frac{n-1}{2}}} a^{\frac{n-1}{2}}(1-a)^{\frac{n-1}{2}} (2p-1)
\end{aligned}
\end{equation}
We want to prove that $a$ converges to $\frac{1}{2}$ as $n \to \infty$.
Any references or remarks are appreciated!
| If I expand the left-hand-side of your equation around $a=1/2$ I find
$$
\sum_{t=0}^{n-1} {n-1 \choose t} [p a^{t} (1-a)^{n-t-1}+(1-p) (1-a)^{t} a^{n-{t}-1}] $$
$$\times\left[\frac{a^{t+1} (1-a)^{n-t-1}}{a^{t+1} (1-a)^{n-t-1}+(1-a)^{t+1} a^{n-t-1}}-\frac{a^{t} (1-a)^{n-t}}{a^{t} (1-a)^{n-t}+(1-a)^{t} a^{n-t}}\right] $$
$$=\sum_{t=0}^{n-1}\left(a-\frac{1}{2}\right) 2^{2-n} \binom{n-1}{t}+{\cal O}(a-1/2)^3=2a-1+{\cal O}(a-1/2)^3.$$
I similarly expand the right-hand-side,
$${n-1 \choose {\frac{n-1}{2}}} a^{\frac{n-1}{2}}(1-a)^{\frac{n-1}{2}} (2p-1)=2^{1-n} (2 p-1) \binom{n-1}{\frac{n-1}{2}}-\left(a-\tfrac{1}{2}\right)^2 2^{2-n} (n-1) (2 p-1) \binom{n-1}{\frac{n-1}{2}}+{\cal O}(a-1/2)^4.$$
Equating left-hand-side and right-hand-side I solve for $a$,
$$a=\frac{1}{2}+\frac{2^n \left(\sqrt{2^{3-2 n} (n-1) (1-2 p)^2 q^2+1}-1\right)}{4(n-1) (2 p-1) q},\;\;q=\binom{n-1}{\frac{n-1}{2}}.$$
For $n\gg 1$ this solution tends to
$$a\rightarrow\frac{1}{2}+\frac{1}{2\sqrt{2n}}\frac{\sqrt{16 (p-1) p+\pi +4}-\sqrt{\pi }}{2 p-1}.$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/363761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove this high-degree inequality Let $x$,$y$,$z$ be positive real numbers which satisfy $xyz=1$. Prove that:
$(x^{10}+y^{10}+z^{10})^2 \geq 3(x^{13}+y^{13}+z^{13})$.
And there is a similar question: Let $x$,$y$,$z$ be positive real numbers which satisfy the inequality
$(2x^4+3y^4)(2y^4+3z^4)(2z^4+3x^4) \leq(3x+2y)(3y+2z)(3z+2x)$. Prove this inequality:
$xyz\leq 1$.
| This is to give an alternative proof of the inequality in Fedor Petrov's nice answer
$$(2 x^{10} + x^{-20})^2\ge3 (2 x^{13} + x^{-26})$$
for all $x > 0$.
We have
\begin{align*}
&2x^{13+1/3} + x^{-26-2/3} - (2x^{13} + x^{-26})\\
={}& 2(x^{13+1/3} - x^{13}) + (x^{-13-1/3} - x^{-13})
(x^{-13-1/3} + x^{-13})\\
={}& x^{13}(x^{1/3}-1)(2 - x^{-119/3} - x^{-118/3})\\
\ge{}& 0.
\end{align*}
Thus, it suffices to prove that
$$(2 x^{10} + x^{-20})^2\ge 3(2x^{13+1/3} + x^{-26-2/3}).$$
Letting $x = a^{3/10}$, it suffices to prove that, for all $a > 0$,
$$(2a^3 + a^{-6})^2 \ge 3(2a^4 + a^{-8}).$$
We have
\begin{align*}
&(2a^3 + a^{-6})^2 - 3(2a^4 + a^{-8})\\
={}& 4a^6 + 4a^{-3}+ a^{-12} - 6a^4 - 3a^{-8}\\
\ge\,& 4a^6 + 2(3a^{-2} - 1) + a^{-12} - 6a^4 - 3a^{-8}\\
={}& \frac{(2a^2+1)(2a^{12} - 2a^6 + 1)(a^2-1)^2}{a^{12}}\\
\ge{}& 0
\end{align*}
where we have used
$2a^{-3} - (3a^{-2} - 1) = \frac{(a+2)(a-1)^2}{a^3} \ge 0$.
We are done.
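The one-variable inequality at the heart of this reduction is easily checked on a numerical grid (a Python sketch; equality holds at $x=1$, hence the tiny slack for floating point):

```python
# (2x^10 + x^-20)^2 >= 3(2x^13 + x^-26) for x > 0, with equality at x = 1
for i in range(1, 400):
    x = i / 100
    assert (2 * x**10 + x**-20) ** 2 >= 3 * (2 * x**13 + x**-26) * (1 - 1e-12)
```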
| {
"language": "en",
"url": "https://mathoverflow.net/questions/385942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
} |
Calculation of a series It seems that we have:
$$\sum_{n\geq 1} \frac{2^n}{3^{2^{n-1}}+1}=1.$$
Please, how can one prove it?
| This is the special case $q=3$ of a formula
$$
\qquad\qquad
\sum_{n=1}^\infty \frac{2^n}{q^{2^{n-1}}+1} = \frac{2}{q-1}
\qquad\qquad(*)
$$
which holds for all $q$ such that the sum converges, i.e. such that $|q|>1$.
This follows from the identity
$$
\frac{1}{x-1} - \frac{2}{x^2-1} = \frac{1}{x+1}.
$$
Substitute $q^{2^{n-1}}$ for $x$, multiply by $2^n$, and sum from
$n=1$ to $n=N$ to obtain the telescoping series
$$
\frac{2}{q-1} - \frac{2^{N+1}}{q^{2^N}-1}
= \sum_{n=1}^N
\left( \frac{2^n}{q^{2^{n-1}}-1} - \frac{2^{n+1}}{q^{2^n}-1} \right)
= \sum_{n=1}^N \frac{2^{n-1}} {q^{2^n} + 1}.
$$
Taking the limit as $N \to \infty$ yields the claimed formula $(*)$.
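A quick numerical check of $(*)$ (a Python sketch; convergence is doubly exponential, so a few terms suffice):

```python
def partial_sum(q, N):
    return sum(2**n / (q ** (2 ** (n - 1)) + 1) for n in range(1, N + 1))

# the partial sums telescope to 2/(q-1) minus a doubly exponentially small tail
assert abs(partial_sum(3, 10) - 1.0) < 1e-12
assert abs(partial_sum(5, 10) - 0.5) < 1e-12
assert abs(partial_sum(-2.5, 10) - 2 / (-3.5)) < 1e-12
```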
| {
"language": "en",
"url": "https://mathoverflow.net/questions/389313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Computing global maximum For $\lambda\in\mathbb{R}$, I want to find the expression of $f(\lambda)$:
$$f(\lambda)=\max_{E\in\mathbb{C}}\operatorname{arccosh}{\frac{|E^2+i\lambda E|+|E^2+i\lambda E-4|}{4}}-\operatorname{arccosh}{\frac{|E^2-i\lambda E|+|E^2-i\lambda E-4|}{4}}$$
that is
$$A={\sqrt{a^2+b^2} \sqrt{a^2+(b+\lambda )^2}+\sqrt{\left(a^2-b^2-b \lambda -4\right)^2+(2 a b+a \lambda )^2}+\sqrt{\left(\sqrt{a^2+b^2} \sqrt{a^2+(b+\lambda )^2}+\sqrt{\left(a^2-b^2-b \lambda -4\right)^2+(2 a b+a \lambda )^2}\right)^2-16}}$$
$$B={\sqrt{a^2+b^2} \sqrt{a^2+(b-\lambda )^2}+\sqrt{\left(a^2-b^2+b \lambda -4\right)^2+(2 a b-a \lambda )^2}+\sqrt{\left(\sqrt{a^2+b^2} \sqrt{a^2+(b-\lambda )^2}+\sqrt{\left(a^2-b^2+b \lambda -4\right)^2+(2 a b-a \lambda )^2}\right)^2-16}}$$
$$f(\lambda)=\max_{a\in\mathbb{R},b\in\mathbb{R}}\log\frac{A}{B} $$
| As $g(t)=\log{t}$ is a strictly increasing function, what you really need is just the maximum of $\frac{A}{B}.$
Also, after some experiments, it seems that, for pure imaginary numbers only, the maximum occurs when the imaginary part is equal to $\lambda$. That is (in your notation), when $a=0$ the maximum is obtained with $b=\lambda$. So "divide and conquer", by restricting to one curve at a time, may be an efficient approach to the full solution.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/407264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two interpretations of a sequence: an opportunity for combinatorics The sequence that is addressed here is resourced from the most useful site OEIS, listed as A014153, with a generating function
$$\frac1{(1-x)^2}\prod_{k=1}^{\infty}\frac1{1-x^k}.$$
In particular, look at these two interpretations mentioned there.
$a(n)=$ Number of partitions of $n$ with three kinds of $1$.
Example. $a(2)=7$ because we have $2, 1+1, 1+1', 1+1'', 1'+1', 1'+1'', 1''+1''$.
$b(n)=$ Sum of parts, counted without multiplicity, in all partitions of $n$.
Example. $b(3)=7$ because the partitions are $3, 2+1, 1+1+1$ and removing repetitions leaves $3, 2+1, 1$. Adding these shows $b(3)=7$.
I could not resist asking:
QUESTION. Can you provide a combinatorial proof for $a(n)=b(n+1)$ for all $n$.
| For what it's worth, here is a non-combinatorial proof.
That $\frac{1}{(1-x)^2}\prod_{k=1}^{\infty} \frac{1}{1-x^k}$ is the generating function for partitions of $n$ with three flavors of $1$'s is clear.
So I will focus on the sum of all the irredundant parts of partitions of $n$. Let me use $||\lambda||$ to denote this sum of irredundant parts (in contrast to $|\lambda|$ which is the usual size).
By basic combinatorial reasoning we have
$$\begin{align}\sum_{\lambda} q^{||\lambda||} x^{|\lambda|} &= (1+qx+qx^2+\cdots)(1+q^2x^2+q^2x^4+\cdots)(1+q^3x^3+q^3x^6+\cdots)\cdots \\
&= \left(\frac{q}{1-x}+(1-q)\right)\left(\frac{q^2}{(1-x^2)}+(1-q^2)\right)\left(\frac{q^3}{(1-x^3)}+(1-q^3)\right)\cdots \\
&= \frac{1+x(q-1)}{(1-x)} \cdot \frac{1+x^2(q^2-1)}{(1-x^2)} \cdot \frac{1+x^3(q^3-1)}{(1-x^3)} \cdots \\
&= \frac{\prod_{k=1}(1+x^k(q^k-1))}{\prod_{k=1}^{\infty}(1-x^k)}.
\end{align} $$
Hence $\sum_{\lambda} ||\lambda|| \cdot x^{|\lambda|} $ is what we get from the above generating function by taking the derivative with respect to $q$ and setting $q:=1$. So let $f_k(q,x)=1+x^k(q^k-1)$. Being a little fast and loose with analytic issues, we can apply the "infinite product rule" to conclude
$$
\begin{align}
\sum_{\lambda} ||\lambda|| \cdot x^{|\lambda|} &= \frac{\sum_{k=1}^{\infty} \partial/\partial q f_k(q,x) \mid_{q=1} \cdot \prod_{i\neq k}f_i(q,x)\mid_{q=1}}{\prod_{k=1}^{\infty}(1-x^k)} \\
&=\frac{\sum_{k=1}^{\infty} (kx^kq^{k-1}) \mid_{q=1} \cdot \prod_{i\neq k}(1+x^i(q^i-1))\mid_{q=1}}{\prod_{k=1}^{\infty}(1-x^k)}\\
&= \frac{\sum_{k=1}^{\infty} kx^k}{\prod_{k=1}^{\infty}(1-x^k)} \\
&= \frac{x}{(1-x)^2} \cdot \prod_{k=1}^{\infty}\frac{1}{(1-x^k)},
\end{align}$$
exactly as claimed (with the extra power of $x$ reflecting the $n+1$ in $b(n+1)$).
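For a quick cross-check of $a(n)=b(n+1)$ by direct enumeration (a Python sketch; a partition carrying $m$ ones admits $\binom{m+2}{2}$ ways to color them with three kinds; `math.comb` needs Python 3.8+):

```python
from math import comb

def partitions(n, max_part=None):
    # partitions of n as non-increasing tuples
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def a(n):
    # partitions of n with three kinds of 1
    return sum(comb(p.count(1) + 2, 2) for p in partitions(n))

def b(n):
    # sum of parts counted without multiplicity over all partitions of n
    return sum(sum(set(p)) for p in partitions(n))

assert a(2) == 7 and b(3) == 7          # the examples in the question
assert all(a(n) == b(n + 1) for n in range(12))
```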
| {
"language": "en",
"url": "https://mathoverflow.net/questions/408194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
A question on the real root of a polynomial For $n\geq 1$, given a polynomial
\begin{equation*}
\begin{aligned}
f(x)=&\frac{2+(x+3)\sqrt{-x}}{2(x+4)}(\sqrt{-x})^n+\frac{2-(x+3)\sqrt{-x}}{2(x+4)}(-\sqrt{-x})^n \\
&+\frac{x+2+\sqrt{x(x+4)}}{2(x+4)}\left ( \frac{x+\sqrt{x(x+4)}}{2} \right )^n+\frac{x+2-\sqrt{x(x+4)}}{2(x+4)}\left ( \frac{x-\sqrt{x(x+4)}}{2} \right )^n.
\end{aligned}
\end{equation*}
Using Mathematica $12.3$, when $n$ is large enough, we give the distribution of the roots of $f(x)$ in the complex plane as follows
In this figure, we can see that the closure of the real roots of $f(x)$ may be $\left [ -4,0 \right ]$.
So we have the following question
Question: Are all roots of $f(x)$ real? It seems so! But we have no way of proving it.
| Assume that we have $x = - (t + 1/t)^2 = -t^2 - 1/t^2 - 2$ for some $t \in \mathbb{C}$. Then
$$
\sqrt{-x} = t + \frac{1}{t} = \frac{t^2 + 1}{t},\quad x + 4 = -\left(t - \frac{1}{t}\right)^2, \quad \sqrt{x(x + 4)} = t^2 - \frac{1}{t^2},
\\
x + \sqrt{x(x + 4)} = -2\left(1 + \frac{1}{t^2}\right) = -2\frac{1 + t^2}{t^2}, \quad x - \sqrt{x(x + 4)} = -2(1 + t^2)
$$
Consequently, we get
$$
f_n(x)= \tilde f_n(t) = \frac{(1 + t^2)^n(t^{n + 2} - 1)^2}{t^{2n}(t^2 - 1)^2},\quad\text{if } n\text{ is even},
\\
f_n(x) = \tilde f_n(t) = -\frac{(1 + t^2)^n(1 - t^{n -1})(1 - t^{n+5})}{t^{2n}(t^2 - 1)^2},\quad\text{if } n\text{ is odd}.
$$
Hence the only roots of $\tilde f_n$ are the suitable roots of unity and the roots of $f_n$ are all in the real segment $[-4, 0]$ and their closure is the whole segment for large $n$.
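The substitution can be verified numerically for real $t>1$, where $x=-(t+1/t)^2\le-4$ and both square roots are real and principal (a Python sketch of the even-$n$ case):

```python
import math

def f(n, x):
    s = math.sqrt(-x)                  # equals t + 1/t
    w = math.sqrt(x * (x + 4))         # equals t^2 - 1/t^2 for t > 1
    return ((2 + (x + 3) * s) / (2 * (x + 4)) * s**n
            + (2 - (x + 3) * s) / (2 * (x + 4)) * (-s)**n
            + (x + 2 + w) / (2 * (x + 4)) * ((x + w) / 2) ** n
            + (x + 2 - w) / (2 * (x + 4)) * ((x - w) / 2) ** n)

t, n = 1.5, 4                          # even n
x = -(t + 1 / t) ** 2
rhs = (1 + t**2) ** n * (t ** (n + 2) - 1) ** 2 / (t ** (2 * n) * (t**2 - 1) ** 2)
assert abs(f(n, x) - rhs) < 1e-9 * rhs
```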
| {
"language": "en",
"url": "https://mathoverflow.net/questions/440962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 0
} |
Any sum of 2 dice with equal probability The question is the following: Can one create two nonidentical loaded 6-sided dice such that when one throws both dice and sums their values, the probability of any sum (from 2 to 12) is the same? I said nonidentical because it's easy to verify that with identical loaded dice it's not possible.
Formally: Let's say that $q_{i}$ is the probability that we throw $i$ on the first die and $p_{i}$ is the same for the second die. $p_{i},q_{i} \in [0,1]$ for all $i \in 1\ldots 6$. The question is that with these constraints are there $q_{i}$s and $p_{i}$s that satisfy the following equations:
$ q_{1} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{2} + q_{2} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{3} + q_{2} \cdot p_{2} + q_{3} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{4} + q_{2} \cdot p_{3} + q_{3} \cdot p_{2} + q_{4} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{5} + q_{2} \cdot p_{4} + q_{3} \cdot p_{3} + q_{4} \cdot p_{2} + q_{5} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{6} + q_{2} \cdot p_{5} + q_{3} \cdot p_{4} + q_{4} \cdot p_{3} + q_{5} \cdot p_{2} + q_{6} \cdot p_{1} = \frac{1}{11}$
$ q_{2} \cdot p_{6} + q_{3} \cdot p_{5} + q_{4} \cdot p_{4} + q_{5} \cdot p_{3} + q_{6} \cdot p_{2} = \frac{1}{11}$
$ q_{3} \cdot p_{6} + q_{4} \cdot p_{5} + q_{5} \cdot p_{4} + q_{6} \cdot p_{3} = \frac{1}{11}$
$ q_{4} \cdot p_{6} + q_{5} \cdot p_{5} + q_{6} \cdot p_{4} = \frac{1}{11}$
$ q_{5} \cdot p_{6} + q_{6} \cdot p_{5} = \frac{1}{11}$
$ q_{6} \cdot p_{6} = \frac{1}{11}$
I don't really know how to start with this. Any suggestions are welcome.
| You can write this as a single polynomial equation
$$p(x)q(x)=\frac1{11}(x^2+x^3+\cdots+x^{12})$$
where $p(x)=p_1x+p_2x^2+\cdots+p_6x^6$ and similarly for $q(x)$.
So this reduces to the question of factorizing $(x^2+\cdots+x^{12})/11$
where the factors satisfy some extra conditions (coefficients positive,
$p(1)=1$ etc.).
This is a standard method (generating functions).
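To see why the constraints bite, one can look at the roots of the nontrivial factor numerically (a Python sketch using numpy): all ten roots of $1+x+\cdots+x^{10}$ are non-real $11$th roots of unity, so a real factorization $p(x)q(x)$ must keep conjugate pairs together.

```python
import numpy as np

# roots of 1 + x + ... + x^10, the nontrivial factor of (x^2 + ... + x^12)/11
roots = np.roots([1] * 11)
assert np.allclose(np.abs(roots), 1)           # all on the unit circle
assert np.all(np.abs(roots.imag) > 1e-8)       # none of them real
```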
| {
"language": "en",
"url": "https://mathoverflow.net/questions/41310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 4
} |
What is the independence number of hamming graph? The Hamming graph $H(d,q)$ is the Cartesian product of $d$ complete graphs $K_q$. We know the independence number of the direct product of $d$ complete graphs $K_q$. What is the independence number of the Hamming graph?
| Experiments suggest it might be $q^{d-1}$.
Here is data from sage:
d= 2 q= 2 alpha= 2 factor= 2 alpha - q^(d-1) 0
d= 2 q= 3 alpha= 3 factor= 3 alpha - q^(d-1) 0
d= 2 q= 4 alpha= 4 factor= 2^2 alpha - q^(d-1) 0
d= 2 q= 5 alpha= 5 factor= 5 alpha - q^(d-1) 0
d= 2 q= 6 alpha= 6 factor= 2 * 3 alpha - q^(d-1) 0
d= 3 q= 2 alpha= 4 factor= 2^2 alpha - q^(d-1) 0
d= 3 q= 3 alpha= 9 factor= 3^2 alpha - q^(d-1) 0
d= 3 q= 4 alpha= 16 factor= 2^4 alpha - q^(d-1) 0
d= 3 q= 5 alpha= 25 factor= 5^2 alpha - q^(d-1) 0
d= 3 q= 6 alpha= 36 factor= 2^2 * 3^2 alpha - q^(d-1) 0
d= 4 q= 2 alpha= 8 factor= 2^3 alpha - q^(d-1) 0
d= 4 q= 3 alpha= 27 factor= 3^3 alpha - q^(d-1) 0
d= 4 q= 4 alpha= 64 factor= 2^6 alpha - q^(d-1) 0
d= 4 q= 5 alpha= 125 factor= 5^3 alpha - q^(d-1) 0
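The small cases are easy to brute-force (a Python sketch; the function name is mine). For context, an independent set in $H(d,q)$ is a $q$-ary code of length $d$ with minimum distance at least $2$, for which the Singleton bound gives at most $q^{d-1}$, attained by a parity-check code.

```python
from itertools import combinations, product

def alpha_hamming(d, q):
    # brute-force independence number of the Hamming graph H(d, q)
    verts = list(product(range(q), repeat=d))
    def adjacent(u, v):
        return sum(a != b for a, b in zip(u, v)) == 1
    for size in range(len(verts), 0, -1):
        for s in combinations(verts, size):
            if all(not adjacent(u, v) for u, v in combinations(s, 2)):
                return size
    return 0

assert alpha_hamming(2, 2) == 2
assert alpha_hamming(2, 3) == 3
assert alpha_hamming(3, 2) == 4
```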
| {
"language": "en",
"url": "https://mathoverflow.net/questions/141842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
On $a^4+nb^4 = c^4+nd^4$ and Chebyshev polynomials In a 1995 paper, Choudhry gave a table of solutions to the quartic Diophantine equation,
$a^4+nb^4 = c^4+nd^4\tag{1}$
for $n\leq101$. Seiji Tomita recently extended this to $n<1000$ and solved all $n$ except $n=967$ (which was later found by Andrew Bremner).
It can be shown there is an infinite number of $n$ such that $(1)$ is solvable, such as $n=v^2-3$. In general, given the Chebyshev polynomials of the first kind $T_m(x)$
$$\begin{aligned}
T_1(x) &= x\\
T_2(x) &= 2x^2-1\\
T_3(x) &= 4x^3-3x\\
\end{aligned}$$
etc, and the second kind $U_m(x)$
$$\begin{aligned}
U_1(x) &= 2x\\
U_2(x) &= 4x^2-1\\
U_3(x) &= 8x^3-4x\\
\end{aligned}$$
etc, define,
$$n = \frac{T_m(x)}{x}$$
$$y = \frac{U_{m-2}(x)}{x}$$
then one can observe that,
$(n + n x + y)^4 + n(1 - n x - x y)^4 = (n - n x + y)^4 + n(1 + n x + x y)^4\tag{2}$
Questions:
*
*Is $(1)$ solvable in the integers for all positive integers $n$ where $a^4\neq c^4$? (This has also been asked in this MSE post.)
*How do we show that $(2)$ is indeed true for all positive integers $m$?
| Question 2:
The reason must be that Chebyshev polynomials solve Fermat-Pell equations.
The difference between the two sides of equation (2),
$$
(n+nx+y)^4 + n(1-nx-xy)^4 = (n-nx+y)^4 + n(1+nx+xy)^4,
$$
factors as $8nx(n+y)\,\left((x^2-1)y^2 + 2n(x^2-1)y + 1 - n^2\right)$.
The last factor is a quadratic in $n$ with leading coefficient $-1$,
so it has solutions iff the discriminant is a square.
This yields the equation $(x^2-1) u^2 + 1 = t^2$ where $u=xy$,
which is a Fermat-Pell equation in $u$ and $t$
with fundamental solution $(u,t)=(1,x)$, etc.
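For small $m$ the observation (2) can also be confirmed symbolically (a sketch using sympy's built-in Chebyshev polynomials):

```python
from sympy import symbols, chebyshevt, chebyshevu, simplify

x = symbols('x')
for m in range(2, 7):
    n = chebyshevt(m, x) / x        # n = T_m(x)/x
    y = chebyshevu(m - 2, x) / x    # y = U_{m-2}(x)/x
    lhs = (n + n*x + y)**4 + n*(1 - n*x - x*y)**4
    rhs = (n - n*x + y)**4 + n*(1 + n*x + x*y)**4
    assert simplify(lhs - rhs) == 0
print("identity (2) verified for m = 2..6")
```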
| {
"language": "en",
"url": "https://mathoverflow.net/questions/142192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Taylor series coefficients This question arose in connection with A hard integral identity on MATH.SE.
Let
$$f(x)=\arctan{\left (\frac{S(x)}{\pi+S(x)}\right)}$$
with $S(x)=\operatorname{arctanh} x -\arctan x$, and let
$$f(x)=\sum_{n=0}^\infty a_nx^n=\frac{2}{3\pi}x^3-\frac{4}{9\pi^2}x^6+\frac{2}{7\pi}x^7+\frac{16}{81\pi^3}x^9-\frac{8}{21\pi^2}x^{10}+\ldots$$
be its Taylor series expansion at $x=0$. Some numerical evidence suggests the following interesting properties of the $a_n$ coefficients ($b_n$, $c_n$, $d_n$, $\tilde{c}_n$, $\tilde{d}_n$ are some positive rational numbers, $k>0$ is an integer):
1) $a_n=0$, for $n=4k$.
2) $a_n=\frac{2/n}{\pi}-\frac{b_n}{\pi^5}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+3$.
3) $a_n=-\frac{c_n}{\pi^2}+\frac{d_n}{\pi^6}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+2$.
4) $a_n=\frac{\tilde{c}_n}{\pi^3}-\frac{\tilde{d}_n}{\pi^7}+(\text{maybe other terms of higher order in } 1/\pi)$, for $n=4k+1$, $k>1$.
How can these properties (if correct) be proved?
P.S. We have
$$\arctan{\left(1+\frac{2S}{\pi}\right)}-\frac{\pi}{4}=\arctan{\left(\frac{1+2S/\pi-1}{1+(1+2S/\pi)}\right)}=\arctan{\left(\frac{S}{\pi+S}\right)} .$$
Using
$$\arctan(1+x)=\frac{\pi}{4}+\frac{1}{2}x-\frac{1}{4}x^2+\frac{1}{12}x^3-\frac{1}{40}x^5+\frac{1}{48}x^6-\frac{1}{112}x^7+\ldots$$
we get
$$\arctan{\left(\frac{S}{\pi+S}\right)}=\frac{S}{\pi}-\frac{S^2}{\pi^2}+\frac{2S^3}{3\pi^3}-\frac{4S^5}{5\pi^5}+\frac{4S^6}{3\pi^6}-\frac{8S^7}{7\pi^7}+\ldots$$
This proves 2), 3) and 4), because
$$S=2\left(\frac{x^3}{3}+\frac{x^7}{7}+\frac{x^{11}}{11}+\ldots\right)=2\sum_{k=0}^\infty \frac{x^{4k+3}}{4k+3} .$$
To prove 1), we need to prove the analogous property for $\arctan(1+x)$ and the proof can be based on the formula
$$\frac{d^n}{dx^n}(\arctan x)=\frac{(-1)^{n-1}(n-1)!}{(1+x^2)^{n/2}}\sin{\left (n\,\arcsin{\left(\frac{1}{\sqrt{1+x^2}}\right)}\right )}$$
proved in K. Adegoke and O. Layeni, The Higher Derivatives Of The Inverse Tangent Function and Rapidly Convergent BBP-Type Formulas For Pi, Applied Mathematics E-Notes, 10(2010), 70-75, available at http://www.math.nthu.edu.tw/~amen/2010/090408-2.pdf. This formula enables us to get a closed-form expression
$$\arctan{\left(\frac{S}{\pi+S}\right)}=\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\,2^{n/2}\,\sin{\left(\frac{n\pi}{4}\right)}\,\frac{S^n}{\pi^n} .$$
So the initial questions are no longer relevant. However, I'm still interested to know whether one can calculate the integral $$\int\limits_0^1 S^n(x)\frac{dx}{x}$$ in closed form.
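As a numeric sanity check on the coefficients $a_n$ listed above, one can ask sympy for the expansion directly (a sketch; the series call may take a few seconds):

```python
from sympy import symbols, atan, atanh, pi, simplify

x = symbols('x')
S = atanh(x) - atan(x)
f = atan(S / (pi + S))
ser = f.series(x, 0, 8).removeO()

assert simplify(ser.coeff(x, 3) - 2 / (3 * pi)) == 0
assert ser.coeff(x, 4) == 0                       # property 1): a_4 = 0
assert simplify(ser.coeff(x, 6) + 4 / (9 * pi**2)) == 0
assert simplify(ser.coeff(x, 7) - 2 / (7 * pi)) == 0
print(ser)
```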
| Here is a simple derivation of the Taylor expansion of $~\arctan(\frac{x}{1+x})~:$
Since $$~\arctan(u)=\dfrac{i}{2}\ln\big(\dfrac{1-iu}{1+iu}\big),~~\arctan(\frac{x}{1+x})=\dfrac{i}{2}\ln\Big(\dfrac{1-i\frac{x}{1+x}}{1+i\frac{x}{1+x}}\Big)$$
$$=
\dfrac{i}{2}\ln\Big(\dfrac{1+(1-i)x}{1+(1+i)x}\Big)=\dfrac{i}{2}\ln\Big(\dfrac{1+\sqrt{2}\exp(-i\frac{\pi}{4}) x}{1+\sqrt{2}\exp(i\frac{\pi}{4})x}\Big).~$$
So, for $~|x|<\frac{1}{\sqrt{2}},$
$$\arctan(\frac{x}{1+x})=\dfrac{i}{2}\sum\limits_{n=1}^{\infty}\frac{(-1)^{n-1}2^{n/2}}{n}(e^{-in\frac{\pi}{4}}
-e^{in\frac{\pi}{4}})x^n$$
$$=\sum\limits_{n=1}^{\infty}\frac{(-1)^{n-1}2^{n/2}}{n}\sin(n\frac{\pi}{4})x^n,$$
and, for $~|w|<\frac{\pi}{\sqrt{2}},$
$$(1)~~~~~~\arctan(\frac{w}{\pi+w})=\sum\limits_{n=1}^{\infty}\frac{(-1)^{n-1}2^{n/2}}{n}\sin(n\frac{\pi}{4})(\frac{w}{\pi})^n.$$
But I'm a bit worried about the following detail : if $$~S(x)=2\sum\limits_{n=0}^{\infty}\dfrac{x^{4n+3}}{4n+3},
~~~~~~~~~~~~\lim\limits_{x\rightarrow1^{-}}S(x)=+\infty,~$$
so it seems a bit dangerous to substitute $~S(x)~$ for $~w~$ into $~(1)~$ (since this series diverges for $~|w|>\frac{\pi}{\sqrt{2}})$ (and even more dangerous to interchange summation and integral...)
Am I wrong ?
| {
"language": "en",
"url": "https://mathoverflow.net/questions/155263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Two questions about Whittaker functions I am watching the video: Modeling p-adic Whittaker functions, Part I. I have two questions about Whittaker functions in the video.
From 33:00 to 37:00, it is said that after a change of variables, we have
\begin{align}
W^0(t_{\lambda}) = \int_{U} f^0(w_0 u t_{\lambda}) \psi^{-1}(u) du = \int_{\bar{U}} f^0(\bar{u}) \bar{\psi}^{-1}(t_{\lambda} \bar{u} t_{\lambda}^{-1}) d \bar{u}. \quad (1)
\end{align}
I tried to prove this formula by letting $\bar{u} = w_0 u t_{\lambda}$. But I didn't get $\int_{\bar{U}} f^0(\bar{u}) \bar{\psi}^{-1}(t_{\lambda} \bar{u} t_{\lambda}^{-1}) d \bar{u}$. How to prove (1)?
I am trying to understand the formula expressing a Whittaker function as a sum over a crystal. In the case of $SL_2$, $\bar{u} = \left( \begin{matrix} 1 & 0 \\ x & 1 \end{matrix} \right)$. In the video, it is said that if $x \in \mathfrak{o}_F$ (the ring of integers of $F$), then it is done; if $x \not\in \mathfrak{o}_F$, we have a decomposition $\left( \begin{matrix} 1 & 0 \\ x & 1 \end{matrix} \right) = \left( \begin{matrix} x^{-1} & 1 \\ 0 & x \end{matrix} \right) \left( \begin{matrix} 0 & -1 \\ 1 & x^{-1} \end{matrix} \right) $ and the spherical vector is constant on "shells". What does "shells" mean? Why do we need the decomposition of the matrix $\left( \begin{matrix} 1 & 0 \\ x & 1 \end{matrix} \right)$? Why is the spherical vector constant on "shells"? Thank you very much.
| In the case of $SL_2$, we have
$$
U = \left\{ \left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right) : x \in F \right\}
$$
and
\begin{align}
& \int_U f^0\left( w_0 u t_{\lambda} \right) \psi^{-1}(u) du \\
& = \int_U f^0\left( \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) \left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right) \left(\begin{matrix} a & 0 \\ 0 & b \end{matrix}\right) \right) \psi^{-1}(\left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right)) dx \\
& = \int_U f^0\left( \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) \left(\begin{matrix} a & 0 \\ 0 & b \end{matrix}\right) \left(\begin{matrix} a^{-1} & 0 \\ 0 & b^{-1} \end{matrix}\right) \left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right) \left(\begin{matrix} a & 0 \\ 0 & b \end{matrix}\right) \right) \psi^{-1}(\left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right)) dx \\
& = \int_U f^0\left( \left(\begin{matrix} b & 0 \\ 0 & a \end{matrix}\right) \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) \left(\begin{matrix} 1 & a^{-1}bx \\ 0 & 1 \end{matrix}\right) \right) \psi^{-1}(\left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right)) dx \\
& = \chi_1(b) \chi_2(a) \int_U f^0\left( \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) \left(\begin{matrix} 1 & a^{-1}bx \\ 0 & 1 \end{matrix}\right) \right) \psi^{-1}(\left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right)) dx.
\end{align}
Let $\bar{u} = t_{\lambda}^{-1} u t_{\lambda}$. Then we have
\begin{align}
& \int_U f^0\left( w_0 u t_{\lambda} \right) \psi^{-1}(u) du \\
& = \chi_1(b) \chi_2(a) \int_U f^0\left( \left(\begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix}\right) \left(\begin{matrix} 1 & a^{-1}bx \\ 0 & 1 \end{matrix}\right) \right) \psi^{-1}(\left(\begin{matrix} 1 & x \\ 0 & 1 \end{matrix}\right)) dx \\
& = \chi_1(b) \chi_2(a) \int_U f^0(w_0 \bar{u}) \psi^{-1}(t_{\lambda} \bar{u} t_{\lambda}^{-1}) dx \\
& = \chi_1(b) \chi_2(a) \int_U f^0(w_0 \bar{u}) \psi^{-1}(t_{\lambda} \bar{u} t_{\lambda}^{-1}) \frac{1}{a^{-1}b} d(a^{-1}bx) \\
& = \chi_1(b) \chi_2(a) \int_{\bar{U}} f^0(w_0 \bar{u}) \psi^{-1}(t_{\lambda} \bar{u} t_{\lambda}^{-1}) \frac{1}{a^{-1}b} d(\bar{u}) \\
& = \frac{\chi_1(b) \chi_2(a)}{a^{-1}b} \int_{\bar{U}} f^0(w_0 \bar{u}) \psi^{-1}(t_{\lambda} \bar{u} t_{\lambda}^{-1}) d(\bar{u}).
\end{align}
For the second question, if $x \in \mathfrak{o}_F$, then $\left(\begin{matrix} 1 & 0 \\ x & 1 \end{matrix}\right) \in G(\mathfrak{o}_F)=K$. Since $f^{0}$ is invariant under right multiplication of $K$, $f^{0}(w_0 \bar{u}) = f^{0}(w_0)$ in this case.
If $x \not\in \mathfrak{o}_F$, then $|x| > 1$ and
$$\left( \begin{matrix} 1 & 0 \\ x & 1 \end{matrix} \right) = \left( \begin{matrix} x^{-1} & 1 \\ 0 & x \end{matrix} \right) \left( \begin{matrix} 0 & -1 \\ 1 & x^{-1} \end{matrix} \right). $$
Therefore $|x^{-1}|<1$ and $\left( \begin{matrix} 0 & -1 \\ 1 & x^{-1} \end{matrix} \right) \in G(\mathfrak{o}_F)$. Therefore
\begin{align}
& f^0(w_0 \left( \begin{matrix} 1 & 0 \\ x & 1 \end{matrix} \right) ) \\
& = f^0(w_0 \left( \begin{matrix} x^{-1} & 1 \\ 0 & x \end{matrix} \right) \left( \begin{matrix} 0 & -1 \\ 1 & x^{-1} \end{matrix} \right)) \\
& = f^0(w_0 \left( \begin{matrix} x^{-1} & 1 \\ 0 & x \end{matrix} \right)) \\
& = f^0(w_0 \left( \begin{matrix} x^{-1} & 0 \\ 0 & x \end{matrix} \right) \left( \begin{matrix} 1 & x \\ 0 & 1 \end{matrix} \right)) \\
& = f^0( \left( \begin{matrix} x & 0 \\ 0 & x^{-1} \end{matrix} \right) w_0 \left( \begin{matrix} 1 & x \\ 0 & 1 \end{matrix} \right)) \\
& = \chi_1(x) \chi_2(x^{-1}) f^0( w_0 \left( \begin{matrix} 1 & x \\ 0 & 1 \end{matrix} \right)).
\end{align}
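The matrix identity used in the $x \not\in \mathfrak{o}_F$ case above is a one-line symbolic check (a sympy sketch):

```python
from sympy import symbols, Matrix, simplify

x = symbols('x', nonzero=True)
lower = Matrix([[1, 0], [x, 1]])
prod = Matrix([[1 / x, 1], [0, x]]) * Matrix([[0, -1], [1, 1 / x]])
assert (lower - prod).applyfunc(simplify) == Matrix.zeros(2, 2)
print("decomposition verified")
```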
| {
"language": "en",
"url": "https://mathoverflow.net/questions/207825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Do there exist infinitely many $n$ such that $n^3+an+b$ is squarefree? Question: Assume that $a,b\in Z$, and $4a^3+27b^2\neq 0$. Prove that there exist infinitely many positive integers $n$ such $n^3+an+b$ is square-free.
I have following
There exist infinitely many postive integers $n$ such $n^2+1$ is squarefree.
Idea of proof
Let $S(n)$ be the number of non-squarefree numbers of the form $k^2+1$ less than or equal to $n^2+1$. We can bound it from above by counting every time a number is divisible by the square of a prime:
$S(n) \le \sum_{1 \le p \le n}{\sum_{1 \le k \le n, p^2 \vert k^2+1}{1}}$
Since $k^2+1$ is never divisible by $4$ or $9$, this becomes:
$S(n) \le \sum_{5 \le p \le n}{\sum_{1 \le k \le n, p^2 \vert k^2+1}{1}}$
$-1$ has at most two square roots mod $p^2$, so we can write this as:
$S(n) \le \sum_{5 \le p \le n}{(\frac{2 \cdot n}{p^2}+2)}$
And we can separate the constant term to get:
$S(n) \le 2 \cdot \pi(n) + \sum_{5 \le p \le n}{\frac{2 \cdot n}{p^2}}$
Assuming $n \gt 120$:
$S(n) \le \frac{2 \cdot \phi(120) \cdot n}{120} + 2 \cdot n \cdot \sum_{5 \le p \le n}{\frac{1}{p^2}}$
Simplifying and ignoring that the sum is limited to primes:
$S(n) \le \frac{8 \cdot n}{15} + 2 \cdot n \cdot (\frac{\pi^2}{6} - \frac{1}{16} - \frac{1}{9} - \frac{1}{4} - 1)$
Calculating and rounding up:
$S(n) \le 0.54 \cdot n + 0.45 \cdot n$
$S(n) \le 0.99 \cdot n$
So the density of squarefree numbers of the form $k^2+1$ is at least $0.01$, and therefore there are an infinite number of them.
But I can't prove my Question.
| Erdos was the first to show that cubic polynomials (meeting obvious necessary conditions) do take on square-free values. More generally he considered polynomials of degree $\ell$ taking on $\ell-1$-th power-free values infinitely often. Granville has shown that the ABC conjecture allows one to find square-free values of higher degree polynomials. See also interesting recent work by Reuss who considers $\ell-1$-th power-free values of degree $\ell$ polynomials evaluated at primes, and also gives many other references.
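The results cited above are qualitative, but a quick empirical count (a sketch using sympy's `factorint`; the coefficients $a=b=1$ are just a sample choice with $4a^3+27b^2\neq 0$) illustrates how common squarefree values of a fixed cubic are:

```python
from sympy import factorint

a, b = 1, 1  # sample coefficients with 4a^3 + 27b^2 != 0
vals = [n**3 + a * n + b for n in range(1, 301)]
squarefree = [v for v in vals
              if all(e == 1 for e in factorint(v).values())]
print(len(squarefree), "of", len(vals), "values are squarefree")
```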
| {
"language": "en",
"url": "https://mathoverflow.net/questions/235330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Norm of Moore-Penrose inverse of a product I had asked this question in math.stackexchange (link: https://math.stackexchange.com/questions/1902276/bounds-on-the-moore-penrose-inverse-of-a-product ) but I did not get any response so I am trying my luck here.
Let $A^{\dagger}$ denote the Moore-Penrose inverse of a real matrix and let $\|A\|$ denote the usual matrix norm given by the largest singular value of $A.$
Then is it true that $\|(AB)^{\dagger}\| \leq \|A^{\dagger}\| \|B^{\dagger}\|?$
This is trivially true whenever $(AB)^{\dagger} = B^{\dagger}A^{\dagger}$, which happens, for example, when $A$ is full column rank and $B$ is full row rank. But what about the general case?
| Here’s a small counterexample involving two rank-deficient matrices $A$ and $B$: $$A = \left( {\begin{array}{*{20}c}
0 & { - 1} & 0 \\
0 & 0 & 0 \\
0 & { - 1} & 0 \\
\end{array}} \right), B = \left( {\begin{array}{*{20}c}
4 & 0 & 2 \\
2 & 0 & 1 \\
4 & 0 & 2 \\
\end{array}} \right)$$
The corresponding Moore–Penrose pseudoinverses of interest are: $$
A^{\dagger} = \left( {\begin{array}{*{20}c}
0 & 0 & 0 \\
{ -\frac{1}{2}} & 0 & { -\frac{1}{2}} \\
0 & 0 & 0 \\
\end{array}} \right), B^{\dagger} = \left( {\begin{array}{*{20}c}
{\frac{4}{{45}}} & {\frac{2}{{45}}} & {\frac{4}{{45}}} \\
0 & 0 & 0 \\
{\frac{2}{{45}}} & {\frac{1}{{45}}} & {\frac{2}{{45}}} \\
\end{array}} \right), (AB)^{\dagger} = \left( {\begin{array}{*{20}c}
{-\frac{1}{{5}}} & 0 & {-\frac{1}{{5}}} \\
0 & 0 & 0 \\
{-\frac{1}{{10}}} & 0 & {-\frac{1}{{10}}} \\
\end{array}} \right)$$
From this, we have that $\sqrt {\frac{1}{{10}}} = \left\| {\left( {AB} \right)^{\dagger} } \right\|_2 > \left\| {B^{\dagger}} \right\|_2 \left\| {A^{\dagger}} \right\|_2 = \sqrt {\frac{1}{{45}}} \sqrt {\frac{1}{2}} = \frac{1}{3}\sqrt {\frac{1}{{10}}}$.
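The counterexample is easy to confirm numerically (a sketch with numpy; `pinv` computes the Moore–Penrose inverse and `norm(..., 2)` the largest singular value):

```python
import numpy as np

A = np.array([[0., -1., 0.], [0., 0., 0.], [0., -1., 0.]])
B = np.array([[4., 0., 2.], [2., 0., 1.], [4., 0., 2.]])

lhs = np.linalg.norm(np.linalg.pinv(A @ B), 2)           # ||(AB)^+||_2
rhs = (np.linalg.norm(np.linalg.pinv(A), 2) *
       np.linalg.norm(np.linalg.pinv(B), 2))             # ||A^+||_2 ||B^+||_2
print(lhs, rhs)   # sqrt(1/10) ~ 0.316 versus sqrt(1/90) ~ 0.105
assert lhs > rhs
```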
| {
"language": "en",
"url": "https://mathoverflow.net/questions/248367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
how to reduce the integral into hypergeometric function? The equation is
$\int_{0}^{1}dy\left( \sqrt{1+\Pi ^{2}+2\Pi \sqrt{1-y^{2}}}-\sqrt{1+\Pi ^{2}-2\Pi \sqrt{1-y^{2}}}\right) =\frac{\pi \Pi }{2}\ _{2}F_{1}(-\frac{1}{2},\frac{1}{2};2;\Pi ^{2})$,
where $\Pi \le 1$.
How can one reduce the integral on the left to the hypergeometric function on the right? Can anyone solve this problem? Thank you.
Thank you, @T. Amdeberhan. I learned the general binomial expansion from your answer.
Your result is more general. After some modifications, the result is
\begin{eqnarray*}
&&\int_{0}^{1}dy\left( \sqrt{1+\Pi ^{2}+2\Pi \sqrt{1-y^{2}}}-\sqrt{1+\Pi ^{2}-2\Pi \sqrt{1-y^{2}}}\right) \\
&=&\frac{\pi }{2}\frac{\Pi }{\sqrt{1+\Pi ^{2}}}\ _{2}F_{1}\left( \frac{1}{4},\frac{3}{4};2;\left( \frac{2\Pi }{1+\Pi ^{2}}\right) ^{2}\right) ,
\end{eqnarray*}
which is valid for any value of $\Pi $. But for $\Pi \leq 1$, how can one
transform the result into
$
\int_{0}^{1}dy\left( \sqrt{1+\Pi ^{2}+2\Pi \sqrt{1-y^{2}}}-\sqrt{1+\Pi ^{2}-2\Pi \sqrt{1-y^{2}}}\right) =\frac{\pi \Pi }{2}\ _{2}F_{1}(-\frac{1}{2},\frac{1}{2};2;\Pi ^{2})
$?
It seems that we should expand the integrand with respect to $\Pi $.
Should we use a multinomial expansion? But how do we deal with the double
summation? Could we prove the equality of the two hypergeometric functions
directly for $\Pi \leq 1$?
| Letting $\beta=\frac{2\Pi}{1+\Pi^2}$, write the integrand as
$$\sqrt{1+\Pi^2}\left(\sqrt{1+\beta \sqrt{1-y^{2}}}-\sqrt{1-\beta \sqrt{1-y^{2}}}\right).\tag1$$
Use the binomial expansion to express (1), after combining two infinite series, in the form
$$2\sqrt{1+\Pi^2}\sum_{n\geq0}\binom{1/2}{2n+1}\beta^{2n+1}\int_0^1(1-y^2)^n\sqrt{1-y^2}dy.\tag2$$
Standard integral evaluations enable to compute the integral, hence (2) becomes
$$\begin{align}
&2\sqrt{1+\Pi^2}\sum_{n\geq0}\binom{1/2}{2n+1}\beta^{2n+1}\binom{2n+2}{n+1}\frac{\pi}{2^{2n+3}} \\
=& \frac{\beta\pi}4\sqrt{1+\Pi^2}\sum_{n\geq0}\binom{1/2}{2n+1}\binom{2n+2}{n+1}\left(\frac{\beta}2\right)^{2n} \\
=& \frac{\beta\pi}4\sqrt{1+\Pi^2}\sum_{n\geq0}\binom{4n}{2n}\frac1{(2n+1)2^{4n+1}}\binom{2n+2}{n+1}\left(\frac{\beta}2\right)^{2n} \\
=& \frac{\beta\pi}8\sqrt{1+\Pi^2}\sum_{n\geq0}\binom{4n}{2n}\frac1{2n+1}\binom{2n+2}{n+1}\left(\frac{\beta}8\right)^{2n} \\
=& \frac{\pi\Pi}{4\sqrt{1+\Pi^2}}\sum_{n\geq0}\binom{4n}{2n}\frac1{2n+1}\binom{2n+2}{n+1}\left(\frac{\beta}8\right)^{2n} \\
=& \frac{\pi\Pi}{4\sqrt{1+\Pi^2}}\sum_{n\geq0}\frac{(4n)!}{(2n)!(n+1)!n!}\left(\frac{\Pi}{4+4\Pi^2}\right)^{2n}.
\end{align}$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/256947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Efficient method to write number as a sum of four squares? Wikipedia states that there are randomized polynomial-time algorithms for writing $n$ as a sum of four squares
$n=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}$
in expected running time $\mathrm {O} (\log^{2} n).$
My question is: can someone give an efficient algorithm ($\mathrm {O} (\log^{2} n)$) to represent $n$ as a sum of four squares?
| One of their methods for $n=4k+2$ is as follows:
Randomly select an even number $a$ and an odd number $b$ such that $a^2+b^2 < n$. Then, we hope $p=n-a^2-b^2$ is a prime (you can show there's about a $1/(A \log n \log \log n)$ chance of $p$ being prime ); $p$ is of the form $4r+1$, so if $p$ is prime there's a solution to $c^2+d^2=p$.
To find that, we try to solve $m^2+1 \equiv 0 \pmod{p}$; I'm actually going to describe a slightly different method. Select $x$ at random from $1$ to $p-1$; then, $x^{2r}=\pm 1$ depending on whether $x$ is a quadratic residue so calculate $x^r$ by repeated squaring, and with a $1/2$ chance if $p$ is prime (a smaller one if $p$ is composite) you'll find a valid $m$.
Given such an $m$, $m+i$ is a Gaussian integer with norm divisible by $p$ but smaller than $p^2$; use the Euclidean algorithm on the Gaussian integers with $p$ and get $c+di$ with norm $p$, and $a^2+b^2+c^2+d^2=n$.
For an odd number $n$, solve for $a^2+b^2+c^2+d^2=2n$; note that by mod $4$ considerations exactly two of $a,b,c,d$ must be odd, and without loss of generality assume $a,b$ are odd and $c,d$ are even. Then,$(\frac{1}{2}(a+b))^2+(\frac{1}{2}(a-b))^2+(\frac{1}{2}(c+d))^2+(\frac{1}{2}(c-d))^2=n.$
For $n$ a multiple of $4$, solve for $n/4$ recursively, and multiply all values by $2$.
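The whole recipe fits in a short randomized Python sketch (the function names are mine; sympy's `isprime` stands in for a Miller–Rabin test):

```python
import random
from math import isqrt
from sympy import isprime

def sqrt_minus_one(p):
    """Random m with m^2 = -1 (mod p), for a prime p = 1 (mod 4):
    x^((p-1)/4) works whenever x is a non-residue (probability 1/2)."""
    r = (p - 1) // 4
    while True:
        m = pow(random.randrange(2, p - 1), r, p)
        if m * m % p == p - 1:
            return m

def two_squares(p):
    """Write a prime p = 1 (mod 4) as c^2 + d^2: run Euclid on (p, m)
    and stop at the first remainder below sqrt(p)."""
    a, b = p, sqrt_minus_one(p)
    while b * b > p:
        a, b = b, a % b
    return b, isqrt(p - b * b)

def four_squares(n):
    if n % 4 == 0:
        a, b, c, d = four_squares(n // 4)
        return 2 * a, 2 * b, 2 * c, 2 * d
    if n % 2 == 1:                       # odd n: go through 2n
        a, b, c, d = four_squares(2 * n)
        odd = sorted(v for v in (a, b, c, d) if v % 2 == 1)
        ev = sorted(v for v in (a, b, c, d) if v % 2 == 0)
        return ((odd[1] + odd[0]) // 2, (odd[1] - odd[0]) // 2,
                (ev[1] + ev[0]) // 2, (ev[1] - ev[0]) // 2)
    while True:                          # n = 2 (mod 4): a even, b odd
        a = 2 * random.randrange(isqrt(n) // 2 + 1)
        b = 2 * random.randrange(isqrt(n) // 2 + 1) + 1
        p = n - a * a - b * b            # p = 1 (mod 4) when positive
        if p == 1:
            return a, b, 1, 0
        if p > 1 and isprime(p):
            return (a, b) + two_squares(p)

for n in [1, 2, 3, 30, 999, 2023]:
    quad = four_squares(n)
    assert sum(v * v for v in quad) == n
    print(n, quad)
```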
| {
"language": "en",
"url": "https://mathoverflow.net/questions/259152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
"Nearly" Fermat triples: case cubic Suppose $a^2+b^2-c^2=0$ are formed by a (integral) Pythagorean triple. Then, there are $3\times3$ integer matrices to generate infinitely many more triples. For example, take
$$\begin{bmatrix}-1&2&2 \\ -2&1&2 \\ -2&2&3\end{bmatrix}\cdot
\begin{bmatrix}a\\ b\\ c\end{bmatrix}=\begin{bmatrix}u\\ v\\ w\end{bmatrix}.$$
One may wish to do the same with $a^3+b^3-c^3=0$, but Fermat's Last Theorem forbids it!
Alas! one settles for less $a^3+b^3-c^3=\pm1$. Here, we're in good company: $9^3+10^3-12^3=1$, coming from Ramanujan's taxicab number $1729=9^3+10^3=12^3+1^3$. There are plenty more.
Question. Does there exist a concrete $3\times3$ integer matrix $M=[m_{ij}]$ such that whenever $a^3+b^3-c^3\in\{-1,1\}$ (integer tuple) then $u^3+v^3-w^3\in\{-1,1\}$ provided
$$\begin{bmatrix}m_{11}&m_{12}&m_{13} \\ m_{21}&m_{22}&m_{23} \\ m_{31}&m_{32}&m_{33}\end{bmatrix}\cdot
\begin{bmatrix}a\\ b\\ c\end{bmatrix}=\begin{bmatrix}u\\ v\\ w\end{bmatrix}.$$
| Daniel Loughran: I apologize for the confusion. You were right. I was only focusing on a single family of solutions to $x^3+y^3-z^3=\pm1$, for which a parametrization can be found by tedious experimentation, not on all solutions.
Let's see if we can prove that it works: start with $[9,10,12]^T$ with $a^3+b^3-c^3=\pm1$ then I claim that $[u,v,w]^T=M^n\cdot[9,10,12]^T$ also satisfies $u^3+v^3-w^3=\pm1$.
Take the matrix to be
$$M=\begin{bmatrix}63&104&-68\\64&104&-67 \\80&131&-85\end{bmatrix}.$$
For example, if $[a,b,c]^T=[9,10,12]^T$ we have $9^3+10^3-12^3=1$ and
$$[u,v,w]^T=M\cdot[9,10,12]^T=[791,812,1010]^T$$
satisfies $791^3+812^3-1010^3=-1$. If we continue,
$$M\cdot[791,812,1010]^T=[65601,67402,83802]^T$$
satisfies $65601^3+67402^3-83802^3=1$. This process generates an infinite family of solutions.
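The iteration is easy to replay in plain Python (asserting only the two steps computed above):

```python
M = [[63, 104, -68], [64, 104, -67], [80, 131, -85]]

def step(v):
    return [sum(row[i] * v[i] for i in range(3)) for row in M]

v1 = step([9, 10, 12])
v2 = step(v1)
print(v1, v1[0]**3 + v1[1]**3 - v1[2]**3)   # [791, 812, 1010] -1
print(v2, v2[0]**3 + v2[1]**3 - v2[2]**3)   # [65601, 67402, 83802] 1
```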
| {
"language": "en",
"url": "https://mathoverflow.net/questions/269799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Algebraic inequalities on different means If $a^2+b^2+c^2+d^2=1$ in which $a,b,c,d>0$, prove or disprove
\begin{equation*}
\begin{aligned}
(a+b+c+d)^8&\geq 2^{12}abcd;\\
a+b+c+d+\frac{1}{2(abcd)^{1/4}}&\geq 3.
\end{aligned}
\end{equation*}
Is there a general algorithm for this type of problem? Thanks.
| There is a general algorithm for this type of problem.
We can use the Vasc's EV Method: https://www.emis.de/journals/JIPAM/images/059_06_JIPAM/059_06.pdf
For example, your second inequality is wrong, but the following is true.
Let $a$, $b$, $c$ and $d$ be positive numbers such that $a^2+b^2+c^2+d^2=1$. Prove that:
$$a+b+c+d+\frac{2}{3\sqrt[4]{abcd}}\geq\frac{10}{3}.$$
Indeed, let $a+b+c+d=constant.$
Thus, it's enough to prove our inequality for the maximal value of $abcd$, which by Corollary 1.8(b)
occurs when three of the variables are equal.
After homogenization we need to prove that
$$a+b+c+d+\frac{2(a^2+b^2+c^2+d^2)}{3\sqrt[4]{abcd}}\geq\frac{10}{3}\sqrt{a^2+b^2+c^2+d^2}.$$
Now, let $a=x^4$, where $x>0$ and $b=c=d=1$.
Thus, we need to prove that
$$x^4+3+\frac{2(x^8+3)}{3x}\geq\frac{10}{3}\sqrt{x^8+3}$$ or, after squaring both sides,
$$(x-1)^2(4x^{14}+8x^{13}+12x^{12}+28x^{11}+44x^{10}+60x^9-15x^8-54x^7-69x^6-84x^5-45x^4+30x^3+105x^2+180x+36)\geq0,$$ which is obvious by AM-GM.
By this method you can create infinitely many inequalities.
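A crude random search (a numeric sanity check, not a proof) is consistent with $10/3$ being the minimum, attained at $a=b=c=d=1/2$:

```python
import random

def lhs(a, b, c, d):
    s = (a*a + b*b + c*c + d*d) ** 0.5
    a, b, c, d = a / s, b / s, c / s, d / s   # enforce a^2+b^2+c^2+d^2 = 1
    return a + b + c + d + 2.0 / (3.0 * (a * b * c * d) ** 0.25)

random.seed(0)
best = min(lhs(*(random.uniform(0.01, 1.0) for _ in range(4)))
           for _ in range(200_000))
print(best)                      # never observed below 10/3 ~ 3.3333
assert best >= 10 / 3 - 1e-9
```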
| {
"language": "en",
"url": "https://mathoverflow.net/questions/302192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Probability distribution optimization problem of distances between points in $[0,1]$ Let $\mathcal{D}$ be a probability distribution with support $[0,1]$. Let $X, Y, Z$ three i.i.d. random variables with distribution $\mathcal{D}$, and $T$ a random variable uniformly distributed in $[0,1]$ independent from $X$, $Y$ and $Z$. We define $$\Delta=\mathbb{E}\left(1-|x-y|~\big|~x,y<t<z\right)$$ and $$\Delta'=\mathbb{E}\left(1-\min\left(|x-y|,|z-y|\right)~\big|~x,y<t<z\right)~.$$
Question: What is the minimum value of the ratio $\rho=\frac{\Delta}{\Delta'}$ over all probability distributions $\mathcal{D}$?
(If $\mathcal{D}$ is uniform, then $\rho=\frac{16}{17}$. Is there a distribution $\mathcal{D}$ such that $\rho<\frac{16}{17}$?)
| Sorry, my computation in the comments was wrong. I think it leads to something with $\rho < \frac{16}{17}$.
Namely, let $\mathcal{D}$ be the distribution with $\mathrm{Pr}(\mathcal{D}=0)=\mathrm{Pr}(\mathcal{D}=3/4)=1/N$, and $\mathrm{Pr}(\mathcal{D}=1)=(N-2)/N$, where $N$ is large.
Then the possibilities for $(x,y,z)$ which fit your conditional probability are:
*
*$0 < t < \frac{3}{4}$: $(0,0,\frac{3}{4})$, $(0,0,1)$
*$\frac{3}{4} < t < 1$: $(0,0,1)$, $(0,\frac{3}{4},1)$, $(\frac{3}{4},0,1)$, $(\frac{3}{4},\frac{3}{4},1)$
Only one of these has $z\neq 1$; if $N$ is very large, then that case will occur much less frequently and we can "ignore" it (so we're really doing the limit $N\to \infty$ computation, for convenience).
Let $\delta=1-|x-y|$ and $\delta'=1-\min(|x-y|,|z-y|)$. Then the events to consider, and their probabilities and values, are
*
*$0 < t < \frac{3}{4}$: $(0,0,1)$ - relative prob. $\frac{3}{7}$, $\delta=\delta'=1$
*$\frac{3}{4} < t < 1$: $(0,0,1)$ - relative prob. $\frac{1}{7}$, $\delta=\delta'=1$
*$\frac{3}{4} < t < 1$: $(0,\frac{3}{4},1)$ - relative prob. $\frac{1}{7}$, $\delta=\frac{1}{4}$, $\delta'=\frac{3}{4}$
*$\frac{3}{4} < t < 1$: $(\frac{3}{4},0,1)$ - relative prob. $\frac{1}{7}$, $\delta=\delta'=\frac{1}{4}$
*$\frac{3}{4} < t < 1$: $(\frac{3}{4},\frac{3}{4},1)$ - relative prob. $\frac{1}{7}$, $\delta=\delta'=1$
So we can compute
$$\Delta=\frac{3}{7}+\frac{1}{7}+\frac{1}{7}(\frac{1}{4})+\frac{1}{7}(\frac{1}{4})+\frac{1}{7}=\frac{11}{14}$$
$$\Delta'=\frac{3}{7}+\frac{1}{7}+\frac{1}{7}(\frac{3}{4})+\frac{1}{7}(\frac{1}{4})+\frac{1}{7}=\frac{12}{14}$$
$$\rho=\frac{\Delta}{\Delta'}=\frac{11}{12}< \frac{16}{17}$$
As mentioned, really we took the limit $N\to \infty$; but since we got $\rho< \frac{16}{17}$, that means there should be some finite $N$ we can take with $\rho< \frac{16}{17}$, just the computation will be more annoying.
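Since $\mathcal{D}$ here is discrete and only $t$ is continuous, the finite-$N$ ratio can be computed exactly by enumerating the atoms and integrating out $t$ (the event $x,y<t<z$ contributes the interval length $\max(0,\,z-\max(x,y))$); a sketch with exact rational arithmetic:

```python
from fractions import Fraction as F

def rho(N):
    atoms = {F(0): F(1, N), F(3, 4): F(1, N), F(1): F(N - 2, N)}
    num = num_p = F(0)
    for x, px in atoms.items():
        for y, py in atoms.items():
            for z, pz in atoms.items():
                w = max(F(0), z - max(x, y)) * px * py * pz
                num += (1 - abs(x - y)) * w
                num_p += (1 - min(abs(x - y), abs(z - y))) * w
    return num / num_p       # normalizing constants cancel in the ratio

print(float(rho(100)))                       # already below 16/17 ~ 0.941
assert rho(100) < F(16, 17)
assert abs(float(rho(10**6)) - 11 / 12) < 1e-4
```

So some explicit finite $N$ (e.g. $N=100$) does give $\rho<\frac{16}{17}$, as anticipated at the end of the argument.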
| {
"language": "en",
"url": "https://mathoverflow.net/questions/372341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Irrationality of $e^{x/y}$ How can one prove the following continued fraction expansion of $e^{x/y}$:
$${\displaystyle e^{x/y}=1+{\cfrac {2x}{2y-x+{\cfrac {x^{2}}{6y+{\cfrac {x^{2}}{10y+{\cfrac {x^{2}}{14y+{\cfrac {x^{2}}{18y+\ddots }}}}}}}}}}}$$
Since $a_i \geq b_i$ for all $i \geq 1$, by the irrationality condition for generalized continued fractions, proving this expansion directly proves that $e^{x/y}$ is an irrational number!
| I think this might be a solution.
The continued fraction expansion of the hyperbolic tangent function discovered by Gauss is
$$\tanh z = \frac{z}{1 + \frac{z^2}{3 + \frac{z^2}{5 + \frac{z^2}{\ddots}}}}$$
We also know that the hyperbolic tangent function is related to the exponential function by the following formula
$$\tanh z=\frac{e^z-e^{-z}}{e^z+e^{-z}}$$
Putting $\frac{x}{y}$ in place of $z$ in the previous equation we get
$$\frac{e^{\frac{x}{y}}-e^{-{\frac{x}{y}}}}{e^{\frac{x}{y}}+e^{-{\frac{x}{y}}}}= \frac{{\frac{x}{y}}}{1 + \frac{{\frac{x}{y}}^2}{3 + \frac{{\frac{x}{y}}^2}{5 + \frac{{\frac{x}{y}}^2}{\ddots}}}}$$
This continued fraction can be simplified into
$$\frac{e^{\frac{x}{y}}-e^{-{\frac{x}{y}}}}{e^{\frac{x}{y}}+e^{-{\frac{x}{y}}}}= \frac{x}{y + \frac{x^2}{3y + \frac{x^2}{5y + \frac{x^2}{\ddots}}}}$$
This equation can be further simplified as
$$1+\frac{2}{e^{\frac{2x}{y}}-1}=\frac{y}{x} + \frac{x}{3y + \frac{x^2}{5y + \frac{x^2}{\ddots}}}$$
$$\frac{e^{\frac{2x}{y}}-1}{2}=\frac{1}{(\frac{y}{x}-1) + \frac{x}{3y + \frac{x^2}{5y + \frac{x^2}{...}}}}$$
From this equation after some algebraic manipulation, we finally get the continued fraction expansion of $e^{x/y}$ as
$${\displaystyle e^{x/y}=1+{\frac {2x}{2y-x+{\frac {x^{2}}{6y+{\frac {x^{2}}{10y+{\frac {x^{2}}{14y+{\frac {x^{2}}{18y+\ddots }}}}}}}}}}}$$
This is an infinite generalized continued fraction. We will now state a condition, proved by Lagrange and given as Corollary 3 on page 495 of chapter XXXIV on "General Continued Fractions" in Chrystal's Algebra Vol. II, for such a continued fraction to converge to an irrational number.
\begin{theorem}
If the values $a_{i}, b_{i}$ are all positive integers and $|a_i| > |b_i|$ for all $i$ greater than some $n$, then the continued fraction
$$\frac{b_1}{a_1 + \frac{b_2}{a_2 + \frac{b_3}{a_3 + \dots}}}$$
is irrational.
\end{theorem}
In the continued fraction of $e^{x/y}-1$ derived above, $a_{i}$ and $b_{i}$ are equal to $2(2i-1)y$ and $x^2$, except for $i=1$. Therefore we have $|a_i| > |b_i|$ for all $i>\frac{x^2+2}{4}$. Hence we have proved that $e^{x/y}-1$ is irrational, which in turn means $e^{x/y}$ is irrational.
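As a numeric cross-check, a truncation of the continued fraction above indeed converges to $e^{x/y}$ (evaluated from the bottom up):

```python
from math import exp, isclose

def cf_exp(x, y, depth=25):
    """Truncate 1 + 2x/(2y - x + x^2/(6y + x^2/(10y + ...))), whose
    k-th partial denominator is (4k + 2)y for k >= 1."""
    t = (4 * depth + 2) * y
    for k in range(depth - 1, 0, -1):
        t = (4 * k + 2) * y + x * x / t
    return 1 + 2 * x / (2 * y - x + x * x / t)

for x, y in [(1, 1), (1, 2), (2, 3), (3, 5)]:
    assert isclose(cf_exp(x, y), exp(x / y), rel_tol=1e-12)
print(cf_exp(1, 1))   # ~ 2.718281828459045
```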
| {
"language": "en",
"url": "https://mathoverflow.net/questions/383273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Given an integer $N$, find solutions to $X^3 + Y^3 + Z^3 - 3XYZ \equiv 1 \pmod{N}$ Given an integer $N > 0$ with unknown factorization, I would like to find nontrivial solutions $(X, Y, Z)$ to the congruence $X^3 + Y^3 + Z^3 - 3XYZ \equiv 1 \pmod{N}$. Is there any algorithmic way to rapidly find such tuples? One special case arises by taking $Z \equiv 0 \pmod{N}$, in which case we can consider the simpler congruence $X^3 + Y^3 \equiv 1 \pmod{N}$. Does anyone know how to find such $(X, Y)$?
| Since, $X^3+Y^3+Z^3-3XYZ=\frac{1}{2}(X+Y+Z)((X-Y)^2+(Y-Z)^2+(Z-X)^2)$, taking $X,Y,Z$ close to each other give some non-trivial and cheap solutions.
For instance $(k+1,k,k)$ for $N=3k$, $(k+1,k+1,k)$ for $N=3k+1$, $(k+1,k,k-1)$ for $N=9k-1$, etc.
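These families are quick to verify for small $k$ in plain Python:

```python
def f(x, y, z):
    return x**3 + y**3 + z**3 - 3 * x * y * z

for k in range(1, 200):
    assert f(k + 1, k, k) % (3 * k) == 1          # f = 3k + 1
    assert f(k + 1, k + 1, k) % (3 * k + 1) == 1  # f = 3k + 2
    assert f(k + 1, k, k - 1) % (9 * k - 1) == 1  # f = 9k
print("all three families verified for 1 <= k < 200")
```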
| {
"language": "en",
"url": "https://mathoverflow.net/questions/387018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Can all three numbers $\ n\ \ n^2-1\ \ n^2+1\ $ be fine (as opposed to coarse)? Let $\ n\ $ be an arbitrary natural number ($1,\ 2,\ \ldots$). Then
*
*$\ n\ $ is coarse $\ \Leftarrow:\Rightarrow\ $ there exists a prime divisor $\ p\ $ of $\ n\ $ such that $\ p^3>n$;
*$\ n\ $ is a p-cube $\ \Leftarrow:\Rightarrow\ $ the positive cubical root of $\ n\ $ is a prime number;
*$\ n\ $ is fine $\ \Leftarrow:\Rightarrow\ p^3<n\ $
for every prime divisor $\ p\ $ of $\ n$.
Example: Natural $\ 64\ $ and
$$ 4095=64^2-1\ = 3^2\cdot 5\cdot 7\cdot13 $$
are both fine. However,
$$ 4097=64^2+1=17\cdot241 $$
is coarse.
QUESTION Does there exist a fine natural number $\ n\ $ such that both $\ n^2-1\ $ and $\ n^2+1\ $ are fine too? (My guess: perhaps NOT).
Also, I don't expect that there is any p-cube $\ n\ $ such that both $\ n^2-1\ $ and $\ n^2+1\ $ are fine.
On the other hand, I believe that there are infinitely many coarse $\ n\ $ such that both $\ n^2-1\ $ and $\ n^2+1\ $ are fine (as rare as they may be).
| For any $k\in\mathbb{N}$, the number $n=2^3\cdot3^{72k}$ works:
Obviously, $n$ itself is fine.
For $n^2-1 = 2^6\cdot3^{144k}-1 = (2^2\cdot 3^{48 k} + 2\cdot 3^{24 k} + 1)(2^2\cdot 3^{48 k}-2\cdot 3^{24 k} + 1)(2\cdot3^{24 k} + 1) (2\cdot3^{24 k} - 1)$, the first factor is divisible by $7$, the others are small, hence $n^2-1$ is fine.
For $n^2+1 = 2^6\cdot3^{144k}+1 = (2^2\cdot3^{48k}+2^2\cdot3^{36k}+2\cdot3^{24k}+2\cdot3^{12k}+1)(2^2\cdot3^{48k}-2^2\cdot3^{36k}+2\cdot3^{24k}-2\cdot3^{12k}+1)(2\cdot3^{24k}+2\cdot3^{12k}+1)(2\cdot3^{24k}-2\cdot3^{12k}+1)$, the first factor is divisible by $13$, the others are small, hence $n^2+1$ is fine.
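Both factorizations used above can be verified symbolically by writing $t=3^{12k}$, so that $n=8t^6$ (a sympy sketch, plus a numeric check of the divisibility by $7$ and $13$ for a few $k$):

```python
from sympy import symbols, expand

t = symbols('t')      # t stands for 3^(12k), so n = 2^3 * 3^(72k) = 8 t^6
n = 8 * t**6

f1 = ((4*t**4 + 2*t**2 + 1) * (4*t**4 - 2*t**2 + 1)
      * (2*t**2 + 1) * (2*t**2 - 1))
assert expand(n**2 - 1 - f1) == 0

f2 = ((4*t**4 + 4*t**3 + 2*t**2 + 2*t + 1)
      * (4*t**4 - 4*t**3 + 2*t**2 - 2*t + 1)
      * (2*t**2 + 2*t + 1) * (2*t**2 - 2*t + 1))
assert expand(n**2 + 1 - f2) == 0

for k in range(1, 4):
    T = 3**(12 * k)
    assert (4*T**4 + 2*T**2 + 1) % 7 == 0
    assert (4*T**4 + 4*T**3 + 2*T**2 + 2*T + 1) % 13 == 0
print("factorizations and divisibilities verified")
```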
| {
"language": "en",
"url": "https://mathoverflow.net/questions/396310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Modular arithmetic problem Here's the problem: start with $2^n$, then take away $\frac{1}{2}a^2+\frac{3}{2}a$ starting with $a=1$, and running up to $a = 2^{n+1}-2$, evaluating modulo $2^n$. Does the resulting sequence contain representatives for all the congruence classes module $2^n$?
Some examples of these sequences:
\begin{align*}
n=2\quad & 2,3,3,2,0,1\\
n=3\quad & 6,3,7,2,4,5,5,4,2,7,3,6,0,1\\
n=4\quad & 14,11,7,2,12,5,13,4,10,15,3,6,8,9,9,8,6,3,15,10,4,13,5,12,2,7,11,14,0,1
\end{align*}
It seems that the answer is yes but I want to show this holds for all natural numbers $n$. How does one do this?
| This is correct. Denote $f(x)=x(x+3)/2 \pmod {2^n}$. Then $f(x)=f(y)$ if and only if $x(x+3)-y(y+3)=(x-y)(x+y+3)$ is divisible by $2^{n+1}$. Since $x-y$ and $x+y+3$ have different parity, this in turn happens if and only if one of them is divisible by $2^{n+1}$. So, if $x=1,2,\ldots,2^{n+1}$, then $f(x)$ takes each value (i.e., each residue modulo $2^n$) exactly twice, as these numbers are distinct modulo $2^{n+1}$ and are partitioned into pairs with sum $-3$ modulo $2^{n+1}$. Two of these pairs are $(2^{n+1}-3,2^{n+1})$ and $(2^{n+1}-2,2^{n+1}-1)$, thus if you remove a number from each pair (correspondingly, $2^{n+1}$ and $2^{n+1}-1$, so that the numbers $1,\ldots,2^{n+1}-2$ remain), the remaining numbers still cover all possible values of $f$.
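A brute-force check of the claim for small $n$, using nothing beyond the problem statement (this only confirms small cases; the argument above is the proof):

```python
def covers_all_residues(n):
    """Do 2**n - (a*a + 3*a)//2 mod 2**n, for a = 1..2**(n+1)-2, hit every class?"""
    mod = 2**n
    seen = {(mod - (a * a + 3 * a) // 2) % mod for a in range(1, 2**(n + 1) - 1)}
    return seen == set(range(mod))

print(all(covers_all_residues(n) for n in range(2, 11)))  # True
```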
| {
"language": "en",
"url": "https://mathoverflow.net/questions/416699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are the number of solutions to $ax^2+bxy+cy^2\equiv u\pmod{p}$, $(x,y)\in\{0,\dotsc,p-1\}$, the same for all units $u$? Let $p$ be an odd prime and $F={\mathbb Z}/p{\mathbb Z}$. With $a,b,c\in F$, let $Q(x,y)=ax^2+bxy+cy^2$ where $p\nmid b^2-4ac$. I wish to prove that the number of solutions $(x,y)\in F^2$ of
$$ax^2+bxy+cy^2=u$$ is the same for all units $u\in F$.
For example, when $p=5$, the equation $x^2-2xy+3y^2=u$ has one solution when $u=0$ and 6 solutions for each of $u=1,2,3,4$.
| Inspired by @JoeSilverman's answer, I've updated this answer to handle any odd-characteristic field $F$, and to remove a few ugly case distinctions from the original argument; but I do assume that the characteristic is not $2$, as @KConrad points out.
Put $\theta = \dfrac{b + \sqrt{b^2 - 4a c}}2$, where we have chosen a square root of $b^2 - 4a c$ that does not equal $-b$ (either one, if $4a c \ne 0$).
We have that $a x^2 + b x y + c y^2 = a x^2 + (\theta + a c\theta^{-1})x y + c y^2$ equals $(a x + \theta y)(x + c\theta^{-1}y)$. If $\theta$ belongs to $F$, then the change of variables $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & \theta \\ 1 & c\theta^{-1} \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$ is non-singular (with determinant $a c\theta^{-1} - \theta = (a c - \theta^2)\theta^{-1} = \sqrt{b^2 - 4a c}$), so we are considering solutions $(x', y')$ of $x'y' = u$, which form an $F^\times$-torsor. (In particular, if $F$ is finite, then there are $\lvert F^\times\rvert$ of them.)
If, on the other hand, $\theta$ does not belong to $F$, then $b^2 - 4a c$ is not a square in $F$, so $a$ is non-$0$. Then $a x^2 + bx y + cy^2$ equals $a^{-1}(a x + \theta y)(a x + a c\theta^{-1} y) = a^{-1}N_{F[\theta]/F}(a x + \theta y)$, so parameterising solutions of $a x^2 + b x y + c y^2 = u$ is the same as parameterising solutions of $N_{F[\theta]/F}(a x + \theta y) = u a$. Since $(x, y) \mapsto a x + \theta y$ is an isomorphism of $F$-vector spaces (not of $F$-algebras) $F \oplus F \to F[\theta]$, we are parameterising the fibre of the norm map over $u a$. In general, of course, fibres are either empty or torsors for $\ker N_{F[\theta]/F}$; and, if $F$ is finite, so that the norm map is surjective, then all fibres are torsors, hence have size $\lvert F[\theta]^\times\rvert/\lvert F^\times\rvert$.
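A direct count over small primes illustrates the conclusion (brute force, $O(p^2)$ per form; the sample forms are chosen arbitrarily, subject to $p \nmid b^2-4ac$):

```python
from collections import Counter

def solution_counts(a, b, c, p):
    """Count solutions (x, y) of a*x^2 + b*x*y + c*y^2 = u (mod p) for each u."""
    return Counter((a*x*x + b*x*y + c*y*y) % p
                   for x in range(p) for y in range(p))

# Example from the question: x^2 - 2xy + 3y^2 over F_5.
counts = solution_counts(1, -2, 3, 5)
print(counts[0], [counts[u] for u in range(1, 5)])  # 1 [6, 6, 6, 6]

# The unit counts agree for every nondegenerate form over several primes.
for p in (3, 5, 7, 11):
    for (a, b, c) in [(1, 0, 1), (1, 1, 1), (2, 3, 1)]:
        if (b*b - 4*a*c) % p == 0:
            continue  # degenerate mod p; skip
        counts = solution_counts(a, b, c, p)
        assert len({counts[u] for u in range(1, p)}) == 1

print("unit counts equal for all tested forms")
```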
| {
"language": "en",
"url": "https://mathoverflow.net/questions/424856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
On the equation $a^4+b^4+c^4=2d^4$ in positive integers $a\lt b\lt c$ such that $a+b\ne c$ Background: The equation
$$a^4+b^4+c^4=2d^4$$
has infinitely many positive integral solutions if we take $c=a+b$ and $a^2+ab+b^2=d^2$ further assuming that $GCD(a,b,c)=1$.
Main problem: Find some positive integral solutions to the equation
$$a^4+b^4+c^4=2d^4$$
with $a\lt b\lt c\ne a+b$ and $GCD(a,b,c)=1$.
| $(a, b, c, d) = (32, 1065, 2321, 1973), (2156, 5605, 8381, 7383)$
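Both quadruples can be checked with exact integer arithmetic, together with the side conditions $a<b<c\neq a+b$ and $\gcd(a,b,c)=1$ (a verification sketch, not part of the original answer):

```python
from math import gcd

for (a, b, c, d) in [(32, 1065, 2321, 1973), (2156, 5605, 8381, 7383)]:
    assert a**4 + b**4 + c**4 == 2 * d**4   # the quartic equation
    assert a < b < c and c != a + b          # c is not the "trivial" a + b
    assert gcd(gcd(a, b), c) == 1            # primitive solution

print("both solutions verified")
```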
| {
"language": "en",
"url": "https://mathoverflow.net/questions/438196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Does anyone know a polynomial whose lack of roots can't be proved? In Ebbinghaus-Flum-Thomas's Introduction to Mathematical Logic, the following assertion is made:
If ZFC is consistent, then one can obtain a polynomial $P(x_1, ..., x_n)$ which has no roots in the integers. However, this cannot be proved (within ZFC).
So if $P$ has no roots, then mathematics (=ZFC, for now) cannot prove it.
The justification is that Matiyasevich's solution to Hilbert's tenth problem allows one to turn statements about provable truths in a formal system to the existence of integer roots to polynomial equations. The statement is "ZFC is consistent," which cannot be proved within ZFC thanks to Gödel's theorem.
Question: Has such a polynomial ever been computed?
(This arose in a comment thread on the beta site math.SE.)
| For every consistent recursively axiomatizable theory $T$ there exists (and can be effectively computed from the axioms of $T$) an integer number $K$ such that the following Diophantine equation (where all letters except $K$ are variables) has no solutions over non-negative integers, but this fact cannot be proved in $T$:
\begin{align}&(elg^2 + \alpha - bq^2)^2 + (q - b^{5^{60}})^2 + (\lambda + q^4 - 1 - \lambda b^5)^2 + \\
&(\theta + 2z - b^5)^2 + (u + t \theta - l)^2 + (y + m \theta - e)^2 + (n - q^{16})^2 + \\
&[(g + eq^3 + lq^5 + (2(e - z \lambda)(1 + g)^4 + \lambda b^5 + \lambda b^5 q^4)q^4)(n^2 - n) + \\
&\quad\quad(q^3 - bl + l + \theta \lambda q^3 + (b^5-2)q^5)(n^2 - 1) - r]^2 + \\
&(p - 2w s^2 r^2 n^2)^2 + (p^2 k^2 - k^2 + 1 - \tau^2)^2 + \\
&(4(c - ksn^2)^2 + \eta - k^2)^2 + (r + 1 + hp - h - k)^2 + \\
&(a - (wn^2 + 1)rsn^2)^2 + (2r + 1 + \phi - c)^2 + \\
&(bw + ca - 2c + 4\alpha \gamma - 5\gamma - d)^2 + \\
&((a^2 - 1)c^2 + 1 - d^2)^2 + ((a^2 - 1)i^2c^4 + 1 - f^2)^2 + \\
&(((a + f^2(d^2 - a))^2 - 1) (2r + 1 + jc)^2 + 1 - (d + of)^2)^2 + \\
&(((z+u+y)^2+u)^2 + y-K)^2 = 0.
\end{align}
Moreover, for every such theory, the set of numbers with this property is infinite and not recursively enumerable.
The equation is derived from Undecidable Diophantine Equations, James P. Jones, Bull. Amer. Math. Soc. (N.S.), Vol. 3(2), Sep. 1980, pp. 859–862, DOI: 10.1090/S0273-0979-1980-14832-6.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/32892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "67",
"answer_count": 5,
"answer_id": 1
} |
Any sum of 2 dice with equal probability The question is the following: Can one create two nonidentical loaded 6-sided dice such that when one throws both dice and sums their values, the probability of any sum (from 2 to 12) is the same? I said nonidentical because it's easy to verify that with identical loaded dice it's not possible.
Formally: Let's say that $q_{i}$ is the probability that we throw $i$ on the first die and $p_{i}$ is the same for the second die. $p_{i},q_{i} \in [0,1]$ for all $i \in 1\ldots 6$. The question is that with these constraints are there $q_{i}$s and $p_{i}$s that satisfy the following equations:
$ q_{1} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{2} + q_{2} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{3} + q_{2} \cdot p_{2} + q_{3} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{4} + q_{2} \cdot p_{3} + q_{3} \cdot p_{2} + q_{4} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{5} + q_{2} \cdot p_{4} + q_{3} \cdot p_{3} + q_{4} \cdot p_{2} + q_{5} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{6} + q_{2} \cdot p_{5} + q_{3} \cdot p_{4} + q_{4} \cdot p_{3} + q_{5} \cdot p_{2} + q_{6} \cdot p_{1} = \frac{1}{11}$
$ q_{2} \cdot p_{6} + q_{3} \cdot p_{5} + q_{4} \cdot p_{4} + q_{5} \cdot p_{3} + q_{6} \cdot p_{2} = \frac{1}{11}$
$ q_{3} \cdot p_{6} + q_{4} \cdot p_{5} + q_{5} \cdot p_{4} + q_{6} \cdot p_{3} = \frac{1}{11}$
$ q_{4} \cdot p_{6} + q_{5} \cdot p_{5} + q_{6} \cdot p_{4} = \frac{1}{11}$
$ q_{5} \cdot p_{6} + q_{6} \cdot p_{5} = \frac{1}{11}$
$ q_{6} \cdot p_{6} = \frac{1}{11}$
I don't really know how to start with this. Any suggestions are welcome.
| You can't even solve this with two-sided dice. Consider two dice with probabilities p and q of rolling 1, and probabilities (1-p) and (1-q) of rolling 2. The probability of rolling a sum of 2 is pq, and the probability of rolling a sum of 4 is (1-p)(1-q). These are equal only if p=(1-q). Hence they are equal to one third only if p satisfies the quadratic equation p(1-p) = 1/3. Since this has no real roots, it cannot be done. This logic extends to multisided dice.
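The two-sided computation can be made explicit: $p(1-p)=1/3$ is $p^2 - p + 1/3 = 0$, whose discriminant $1 - 4/3$ is negative, so no real $p$ exists. A small sketch with exact rationals (illustrative only):

```python
from fractions import Fraction

# p(1-p) = 1/3  <=>  p**2 - p + 1/3 = 0; discriminant b^2 - 4ac:
disc = Fraction(1) - 4 * Fraction(1, 3)
print(disc)  # -1/3 < 0: no real p, hence no such two-sided dice

# Sanity check: p(1-p) <= 1/4 < 1/3 for every real p in [0, 1].
assert max(Fraction(k, 1000) * (1 - Fraction(k, 1000))
           for k in range(1001)) == Fraction(1, 4)
```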
| {
"language": "en",
"url": "https://mathoverflow.net/questions/41310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 5
} |