How to guess that $ \sum_{i=1}^{n}3^i = \frac{3}{2}(3^n - 1)$ As in title how do you guess that $ \sum_{i=1}^{n}3^i = \frac{3}{2}(3^n - 1)$?
I have homework about solving recurrence relations, and using the iteration method I can find that http://www.wolframalpha.com/input/?i=RSolve%5B%7Ba%5Bn%5D+%3D+a%5Bn-1%5D+%2B+3%5En%2C+a%5B1%5D+%3D+3%7D%2Ca%5Bn%5D%2Cn%5D the series shown in this example is equal to $ \sum_{i=1}^{n}3^i$, but I am curious how you change it into $ \frac{3}{2}(3^n - 1)$. I know how to prove it's true, but I don't know how to guess it on my own. Could anyone explain?
| Well... When I look deep inside of me and try to really see how I would guess it, it comes to this anecdote:
An infinite number of mathematicians walk into a bar.
The first asks the bartender for a beer.
The second asks for half a beer. The third one says he wants a fourth of a beer.
The bartender interrupts, puts two beers on the bar and says:
“You guys need to learn your limits.”
I think, although I cannot be sure, that this is ingrained somewhere deep in the childhood memories of most mathematicians (maybe most people on earth). You learn your fractions, you notice that $\frac{1}{2} + \frac{1}{4} = \frac{3}{4}$, and $\frac{1}{2} + \frac{1}{4} + \frac{1}{8} = \frac{7}{8}$, then you realize that $\sum_{i=1}^{n} \frac{1}{2^i} = 1 - \frac{1}{2^n}$.
Then, when you see a sum like $1 + 3 + \ldots + 3^n$, you already know what to do.
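The closed form can also be checked mechanically before trusting the pattern; here is a quick sketch in Python (exact integer arithmetic, since $3^n-1$ is always even):

```python
# Check sum_{i=1}^{n} 3^i == (3/2)(3^n - 1) exactly, for small n.
def geometric_sum(n):
    return sum(3**i for i in range(1, n + 1))

def closed_form(n):
    return 3 * (3**n - 1) // 2   # exact: 3^n - 1 is even

for n in range(1, 50):
    assert geometric_sum(n) == closed_form(n)
print(geometric_sum(5), closed_form(5))   # 363 363
```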
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Find the expressions for the common difference and common ratio
The first 2 terms of a geometric progression (first term $a$ and common ratio $r$) are the same as the first 2 terms of an arithmetic progression (first term $a$ and common difference $d$). The third term of the geometric progression is twice as big as the third term of the arithmetic progression. Find 2 different expressions for $d$ and hence or otherwise find the 2 possible values of $r$.
So I've tried to tackle this problem, to no avail:
$$ ar = a + d$$
$$ar^2 = 2(a + 2d)$$
So using this I found $d = ar -a $ and $d = \dfrac{1}{4}ar^2 - \dfrac{1}{2}a$, so $$ ar -a =\dfrac{1}{4}ar^2 - \dfrac{1}{2}a$$
$$a(r^2-4r+2)=0$$
Here I knew I was wrong, because I'm sure that you're not supposed to be using the quadratic for this exercise (judging from how much time you have).
So where did I go wrong?
| I'm afraid that you've done nothing wrong. (That's an odd thing to say.)
It's worth noting that if $a=0,$ then we get $d=0$ immediately, but $r$ can take literally any value. Since the problem expects two possible values for $r,$ then we clearly must assume $a\neq0.$
Since $a\ne 0,$ then $r^2-4r+2=0,$ and from there, the quadratic formula will give you the two possible values for $r$. Alternately, you could note the following: $$r^2-4r+2=0\\r(r-4)+2=0\\(r-2+2)(r-2-2)+2=0\\(r-2)^2-2^2+2=0\\(r-2)^2-2=0$$
This is an end-around method of completing the square by using the difference of squares formula, from which you can readily conclude that $r=2\pm\sqrt2,$ if you'd rather avoid the quadratic formula.
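If you want to convince yourself numerically that both roots work, here is a short check in Python (choosing $a=1$ arbitrarily for illustration):

```python
import math

# For r = 2 ± sqrt(2), set a = 1 and d = ar - a, then verify the
# original progression conditions ar = a + d and ar^2 = 2(a + 2d).
for r in (2 + math.sqrt(2), 2 - math.sqrt(2)):
    a = 1.0
    d = a * r - a
    assert abs(a * r - (a + d)) < 1e-12
    assert abs(a * r**2 - 2 * (a + 2 * d)) < 1e-12
    assert abs(r**2 - 4 * r + 2) < 1e-12
```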
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expressing a product in a Dihedral group Write the product $x^2yx^{-1}y^{-1}x^3y^3$ in the form $x^iy^j$ in the dihedral group $D_n$.
I used the fact that the dihedral group is generated by two elements $x$ and $y$ such that: $y^n=1$, $x^2=1$ and $xy=y^{-1}x$
and I found that $x^2yx^{-1}y^{-1}x^3y^3=y^5$
Is it correct ?
| Yes, I think, except that we can also write
$$ x^2yx^{-1}y^{-1}x^3y^3 = 1yxy^{-1}xy^3 = yxxyy^3 = y1y^4 = y^5 = y^{(5 \mod n)}.$$ Here we have used the fact that $x^2=1$ and so $x^{-1} = x$, and $5 \mod n$ denotes the remainder when $5$ is divided by $n$ in accordance with the Euclidean division algorithm.
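One way to sanity-check such word reductions is to realize $D_n$ concretely with $2\times 2$ matrices (a rotation $y$ and a reflection $x$); the value $n=7$ below is an arbitrary choice, large enough that $y^5\neq 1$. A sketch in Python:

```python
import math

n = 7                                   # arbitrary n > 5 so that y^5 != 1
t = 2 * math.pi / n

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def power(P, k):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        R = mul(R, P)
    return R

def close(P, Q, eps=1e-9):
    return all(abs(P[i][j] - Q[i][j]) < eps for i in range(2) for j in range(2))

y = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]   # rotation by 2*pi/n
x = [[1.0, 0.0], [0.0, -1.0]]                                   # reflection
y_inv = [[y[0][0], y[1][0]], [y[0][1], y[1][1]]]                # transpose = inverse
x_inv = x                                                       # since x^2 = 1

# defining relations: x^2 = 1 and x y = y^{-1} x
assert close(mul(x, x), [[1.0, 0.0], [0.0, 1.0]])
assert close(mul(x, y), mul(y_inv, x))

# x^2 y x^{-1} y^{-1} x^3 y^3 reduces to y^5
prod = power(x, 2)
for factor in (y, x_inv, y_inv, power(x, 3), power(y, 3)):
    prod = mul(prod, factor)
assert close(prod, power(y, 5))
```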
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Rearrangement of double infinite sums Let's say I have a double infinite sum $\sum\limits_{n=0}^\infty a_n\sum\limits_{k=n}^\infty b_k$. If I know that the sum is absolutely convergent and that $\sum\limits_{k=n}^\infty b_k$ is absolutely convergent, what kinds of rearrangements are valid?
For example I would like expand the series and collect certain terms into meaningful groups, is that a "legal" operation provided all the series I work with are absolutely convergent?
| If the series $\sum_{n=1}^\infty |a_n| \sum_{k=n}^\infty |b_k| $ converges, then you can do whatever you want with $a_n b_k$; any rearrangement will converge to the same sum. More generally: if $I$ is an index set and $\sum_{i\in I} |c_i|<\infty$, then any rearrangement of $c_i$ converges to the same sum. This is because for every $\epsilon>0$ there is a finite set $S\subset I$ such that $\sum_{i\in I\setminus S}|c_i|<\epsilon $; any method of summation will use up $S$ at some point, and after that the sum is guaranteed to be within $\epsilon$ of $\sum_{i\in S} c_i$.
But if you only know that $$\sum |b_k|\tag{1}$$ and $$\sum_{n=1}^\infty \left|a_n \sum_{k=n}^\infty b_k \right|\tag{2} $$ converge, then rearrangements can go wrong. For example, take the series $\sum b_k $ to be $\frac{1}{2} -\frac{1}{2}+\frac{1}{4}-\frac{1}{4}+\frac{1}{8}-\frac{1}{8}+\dots$ then every other sum $\sum_{k\ge n} b_k$ is zero, and the corresponding $a_n$ could be arbitrarily large without disturbing the convergence of (2). In this situation you could even have infinitely many terms $a_nb_k$ that are greater than $1$ in absolute value; clearly this series can't be rearranged at will.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to solve equations of this form: $x^x = n$? How would I go about solving equations of this form:
$$
x^x = n
$$
for values of n that do not have obvious solutions through factoring, such as $27$ ($3^3$) or $256$ ($4^4$).
For instance, how would I solve for x in this equation:
$$x^x = 7$$
I am a high school student, and I haven't exactly ventured into "higher mathematics." My first thought to approaching this equation was to convert it into a logarithmic form and go from there, but this didn't yield anything useful in the end.
My apologies if this question has been asked and answered already; I haven't been able to find a concrete answer on the matter.
| The solution involves a function called Lambert's W-function. There is a Wikipedia page on it.
As Oliver says, it is not a nice neat form. It is an entirely new function. Its definition is $$ye^y=z, W(z)=y$$
Can you use logs to turn $x^x=n$ into $ye^y=f(n)$, for $y$ equal to some function of $x$?
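To make the Lambert-W answer concrete numerically without any special-function library, one can apply Newton's method to $g(x)=x\ln x-\ln n$ (this is my own illustration, not part of the original answer); by the substitution hinted at above, the root is $x=e^{W(\ln n)}$:

```python
import math

# Solve x^x = n via Newton's method on g(x) = x ln x - ln n.
def solve_x_to_x(n, x0=2.0, tol=1e-12):
    target = math.log(n)
    x = x0
    for _ in range(100):
        g = x * math.log(x) - target
        dg = math.log(x) + 1.0          # g'(x) = ln x + 1
        x_new = x - g / dg
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x = solve_x_to_x(7)
print(x)                  # about 2.316
assert abs(x**x - 7) < 1e-9
```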
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
How to analyze the asymptotic behaviour of this integral function? Motivated by the asymptotic analysis of correlation functions at large distance in physics, I have a math question. Let the function $$f(x)=\int_{-1}^{1}\sqrt{1-k^2}e^{ikx}dk.$$
Without working out the explicit form of this function, how do we know the asymptotic behaviour of $f(x)$ at large distance $\left |x \right |\rightarrow+\infty$?
| I'm going to manipulate the integral into a form that I can analyze with the method of stationary phase. Let $k=\cos{\theta}$ and the integral becomes
$$\begin{align}I(x) &= \int_0^{\pi} d\theta \, \sin^2{\theta} \, e^{i x \cos{\theta}}\\ &= \int_0^{\pi} d\theta \, e^{i x \cos{\theta}} - \int_0^{\pi} d\theta \, \cos^2{\theta} \, e^{i x \cos{\theta}}\\ &= \int_0^{\pi} d\theta \, e^{i x \cos{\theta}} + \frac{d^2}{dx^2} \int_0^{\pi} d\theta \, e^{i x \cos{\theta}}\end{align}$$
Now, note that
$$\int_0^{\pi} d\theta \, e^{i x \cos{\theta}} = \int_0^{\pi/2} d\theta \, e^{i x \cos{\theta}} + \int_{\pi/2}^{\pi} d\theta \, e^{i x \cos{\theta}} = 2 \Re{\left [ \int_0^{\pi/2} d\theta \, e^{i x \cos{\theta}}\right]}$$
Now, we may apply stationary phase. The stationary point of the integrand is at $\theta=0$; there, we may approximate the argument of the exponential by its Taylor expansion. Further, because of the oscillatory cancellations, we may simply extend the upper limit of the integral to infinity to first order:
$$\int_0^{\pi/2} d\theta \, e^{i x \cos{\theta}} \sim e^{i x} \int_0^{\infty} d\theta \, e^{-(i x/2) \theta^2} = \frac12 e^{i(x-\pi/4)} \sqrt{\frac{2 \pi}{x}} \quad (x\to\infty)$$
Therefore
$$\int_0^{\pi} d\theta \, e^{i x \cos{\theta}} \sim \sqrt{\frac{2 \pi}{x}} \cos{\left ( x-\frac{\pi}{4}\right)}\quad (x\to\infty)$$
To get the asymptotic behavior of $I(x)$, it looks like we need to take the second derivative of the above result. It turns out that
$$\frac{d^2}{dx^2} \left[x^{-1/2} \cos{\left ( x-\frac{\pi}{4}\right)}\right ]= \left (\frac{3}{4} x^{-5/2} - x^{-1/2}\right ) \cos{\left ( x-\frac{\pi}{4}\right)} + x^{-3/2}\sin{\left ( x-\frac{\pi}{4}\right)} $$
Note that the $x^{-1/2}$ piece drops out when combined with the original integral, and the $x^{-5/2}$ piece is subdominant. Thus, the leading asymptotic behavior of the integral $I(x)$ is finally
$$\int_{-1}^1 dk \, \sqrt{1-k^2} \, e^{i k x} \sim \sqrt{2 \pi} x^{-3/2} \sin{\left ( x-\frac{\pi}{4}\right)}\quad (x\to\infty)$$
Here is a plot illustrating this behavior against the exact value of the integral (the lower plot is the asymptotic result derived here).
ADDENDUM
One objection to the above derivation might be that I neglected the next term in the asymptotic expansion of the first integral of $I(x)$, which we know goes as $A x^{-3/2} \sin{(x-\pi/4)}$. How could I ignore that? Recall that $I$ is the sum of this term and its second derivative; to leading order, a term of the form $x^{-3/2}\sin(x-\pi/4)$ cancels against its own second derivative, so its $O(x^{-3/2})$ contribution vanishes. Hence the asymptotic approximation derived above remains valid.
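As an additional numerical sanity check (my own, not in the original answer), one can compare a direct quadrature of the integral against the derived asymptotic formula at a moderately large $x$; a sketch in Python:

```python
import math

# f(x) = ∫_{-1}^{1} sqrt(1 - k^2) e^{ikx} dk; by symmetry the imaginary
# part vanishes, leaving 2 ∫_0^1 sqrt(1 - k^2) cos(kx) dk.
def f_exact(x, m=20000):
    h = 1.0 / m                         # composite Simpson, m even intervals
    s = 0.0
    for i in range(m + 1):
        k = i * h
        w = 1 if i in (0, m) else (4 if i % 2 else 2)
        s += w * math.sqrt(max(0.0, 1 - k * k)) * math.cos(k * x)
    return 2 * s * h / 3

def f_asym(x):
    return math.sqrt(2 * math.pi) * x**-1.5 * math.sin(x - math.pi / 4)

x = 50.0
num, asym = f_exact(x), f_asym(x)
print(num, asym)                        # agree to roughly O(1/x) relative error
assert abs(num - asym) < 0.05 * abs(num)
```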
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How do you factor this using complete the square? $6+12y-36y^2$ I'm so embarrassed that I'm stuck on this simple algebra problem that is embedded in an integral, but I honestly don't understand how this is factored into $a^2-u^2$
Here are my exact steps:
$6+12y-36y^2$ can be rearranged this way: $6+(12y-36y^2)$ and I know I can factor out a -1 and have it in this form: $6-(-12y+36y^2)$
This is the part where I get really lost. According to everything I read, I take the $b$ term, which is $-12y$ and divide it by $2$ and then square that term. I get: $6-(36-6y+36y^2)$
The form it should look like, however, is $7-(6y-1)^2$
Can you please help me to understand what I'm doing wrong?
| Here is the first step:
$$6+12y-36y^2 = -\left((6y)^2-2\cdot (6y)-6\right)$$
A complete square would be
$$\left(6y-1\right)^2 = (6y)^2-2\cdot (6y) + 1.$$
Ignoring the overall minus for a moment and adding and subtracting $1$ on the r.h.s. of the first equation above, we get (using the $a^2-b^2=(a-b)(a+b)$ formula):
$$(6y)^2-2\cdot (6y) - 6 = \left(6y-1\right)^2 - 7 = \left(6y-1 - \sqrt{7}\right) \left(6y-1+\sqrt{7}\right)$$
And finally, restoring the overall minus gets us the answer:
$$6+12y-36y^2 = - \left(6y-1 - \sqrt{7}\right) \left(6y-1+\sqrt{7}\right).$$
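A quick numerical spot-check of the factorization (my own addition) in Python:

```python
import math

# Check 6 + 12y - 36y^2 == -(6y - 1 - sqrt(7)) (6y - 1 + sqrt(7)) pointwise.
r7 = math.sqrt(7)
for y in (-2.0, -0.5, 0.0, 0.3, 1.7):
    lhs = 6 + 12 * y - 36 * y**2
    rhs = -(6 * y - 1 - r7) * (6 * y - 1 + r7)
    assert abs(lhs - rhs) < 1e-9
```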
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Are there any open mathematical puzzles? Are there any (mathematical) puzzles that are still unresolved? I only mean questions that are accessible to and understandable by the complete layman and which have not been solved, despite serious efforts, by mathematicians (or laymen for that matter)?
My question does not ask for puzzles that have been shown to have either no solution or multiple solutions (or have been shown to be ambiguously formulated).
| In his comment, user Vincent Pfenninger referred to a YouTube video that, amongst other fascinating, layman accessible puzzles, discusses packing squares problems proposed by Paul Erdős. I thought I'd include it among the answers (as a community wiki).
How big a square do you need to hold eleven little squares?
We don't even know if this is the best possible [solution.]
Which, to me, comes as a complete surprise. :)
Here is the link to Erich's Packing Center provided in grey below the picture. It contains lots of proposed solutions to packing problems like this one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "159",
"answer_count": 20,
"answer_id": 6
} |
Simple Polynomial Interpolation Problem Simple polynomial interpolation in two dimensions is not always possible. For example, suppose that the following data are to be represented by a polynomial of first degree in $x$ and $y$, $p(t)=a+bx+cy$, where $t=(x,y):$
Data: $f(1,1) = 3, f(3,2)=2, f(5,3)=6$
Show that it is not possible.
I've been trying to think of ways to use either Newton's Form or Divided Differences to prove this is not possible, but can't come up with how to work it out. Looking for any help :)
| Since you want a polynomial of degree $\leqslant 1$, you have the three equations
$$\begin{align}
a + b + c &= 3\tag{1}\\
a + 3b + 2c &= 2\tag{2}\\
a + 5b + 3c &= 6\tag{3}
\end{align}$$
Subtracting $(1)$ from $(2)$ yields $2b + c = -1$, and subtracting $(2)$ from $(3)$ yields $2b + c = 4$. These are incompatible.
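The same elimination can be scripted with exact rational arithmetic; a sketch in Python:

```python
from fractions import Fraction as F

# Rows are (coefficient of a, of b, of c, right-hand side).
eqs = [
    (F(1), F(1), F(1), F(3)),   # f(1,1) = 3
    (F(1), F(3), F(2), F(2)),   # f(3,2) = 2
    (F(1), F(5), F(3), F(6)),   # f(5,3) = 6
]

def sub(u, v):
    return tuple(p - q for p, q in zip(u, v))

r21 = sub(eqs[1], eqs[0])       # (2) - (1): eliminates a
r32 = sub(eqs[2], eqs[1])       # (3) - (2): eliminates a
print(r21, r32)
assert r21[:3] == r32[:3] == (0, 2, 1)   # both say 2b + c = ...
assert r21[3] != r32[3]                  # ... = -1 vs = 4: inconsistent
```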
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Use the relation of Laplace Transform and its derivative to figure out $L\left\{t\right\}$,$L\left\{t^2\right\}$,$L\left\{t^n\right\}$ If $F(s) = L\left\{f(t)\right\}$, then $F'(s) = -L\left\{tf(t)\right\}$
Use this relation to determine
$(a)$ $L\left\{t\right\}$
$(b)$ $L\left\{t^2\right\}$
$(c)$ $L\left\{t^n\right\}$ for any positive integer $n$.
I was able use the definition of Laplace Transform and integration to figure out $(a)$,$(b)$ and $(c)$.
Namely
$L\left\{t\right\} = \dfrac{1}{s^2}$
$L\left\{t^2\right\}=\dfrac{2}{s^3}$
$L\left\{t^n\right\} = \dfrac{n!}{s^{n+1}}$
But how do I use the relation between the Laplace transform and its derivative to figure out $(a)$, $(b)$ and $(c)$?
| Induction. You know how to write down $F'(s)$. Now, $\mathscr L\{t\cdot t^n\}=\cdots$?
For example, $\mathscr L\{t^3\}=\mathscr L\{t\cdot t^2\}=\mathscr L\{tf(t)\} $ with $f(t)=t^2$. If you knew that $\mathscr L\{t^2\}=\dfrac{2!}{s^3}$ then $\mathscr L\{t^3\}=-F'(s)=-\dfrac{d}{ds}\left(\dfrac{2!}{s^3}\right)=\dfrac{3!}{s^4}$.
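Both transforms can be confirmed by direct numerical quadrature at a sample point, say $s=2$ (my own check; the integral is truncated at $T=60$, where the integrand is negligible):

```python
import math

# L{f}(s) = ∫_0^∞ f(t) e^{-st} dt, approximated by a trapezoid rule on [0, T].
def laplace(f, s, T=60.0, m=200000):
    h = T / m
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, m):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
assert abs(laplace(lambda t: t**2, s) - 2 / s**3) < 1e-5   # L{t^2} = 2!/s^3
assert abs(laplace(lambda t: t**3, s) - 6 / s**4) < 1e-5   # L{t^3} = 3!/s^4
# consistent with the derivative rule: -d/ds (2/s^3) = 6/s^4
```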
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Differential Equation $y(x)'=(y(x)+x)/(y(x)-x)$ Can someone give me some tips on how to solve this differential equation?
I looked at the Wolfram solution, which substituted $y(x)=xv(x)$. I'd know how to solve it from there, but I have no idea why they did it in the first place (well, why the algorithm did it in the first place). When do you substitute $y(x)=xv(x)$?
What are other ways on solving it?
| Let's consider an equation scaling $x \to \alpha x$ and $y \to \beta y$. Under this scaling the equation becomes
$$
y'
=
{\alpha \over \beta}\,
{y + \left(\alpha/\beta\right)x \over y - \left(\alpha/\beta\right)x}
$$
Then we can see that the equation doesn't change its form whenever $\alpha = \beta$; in other words, the combination $y/x$ is invariant under the scaling. Hence a change of variables
$\left(~y \to {\rm f}~\right)$ like $y/x = {\rm f}\left(x\right)$ should simplify the original equation, as other people ( see @Amzoti ) already showed.
For this simple equation, this analysis can be too pretentious. However, it illustrates a general technique that can be quite useful in more complicated cases.
See, for example,
Applications of Lie Groups to Difference Equations ( Differential and Integral Equations and Their Applications )
by Vladimir Dorodnitsyn.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How do symbolic math software work? As the answer to a question of mine I was referred to a website (see here please)
How can WolframAlpha do it like humans?
| For the specific system (Mathematica) that you mentioned, there are descriptions of its internals on these web pages and these. But Mathematica is a commercial system, so its internal workings are proprietary, which is why the descriptions don't provide much detail.
While these systems might appear to work "the same way as a human", they really don't. Human beings often use clever creative tricks to solve problems. Computers in general (and computer algebra systems in particular) typically use brute-force systematic methods, as described in the referenced materials. See the description of how Mathematica finds indefinite integrals, for example.
The best results are obtained by a combination of powerful brute-force computing and intelligent guidance provided by a human being. Computer algebra systems are enormously useful, but they only do what you tell them to do :-)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Probability: A bag contains 12 pairs of socks. Four socks are picked at random. Find the probability that there is at least one pair.
My approach :
Number of ways of selecting 4 socks from 24 socks is n(s) = $^{24}C_4$
The number of ways of selecting 4 socks from different pairs is n(E) = $^{12}C_4$
$$\Rightarrow P(E) = \frac{^{12}C_4 }{^{24}C_4}$$
Therefore, the probability of getting at least one pair is
$$1- \frac{^{12}C_4 }{^{24}C_4}$$
But the answer is $$1- \frac{^{12}C_4 \times 2^4}{^{24}C_4}$$... Please guide the error .. thanks..
| The error is in the second line. The number of ways to select four different pairs must be multiplied by the number of ways to select just one sock from each of those four selected pairs.
This is: $\binom{12}{4} \times 2^4$.
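The corrected count is easy to confirm by brute-force enumeration; a sketch in Python:

```python
from itertools import combinations
from math import comb

# Label the 24 socks 0..23; socks 2i and 2i+1 form pair i.
total = at_least_one_pair = 0
for pick in combinations(range(24), 4):
    total += 1
    if len({s // 2 for s in pick}) < 4:          # some pair index repeats
        at_least_one_pair += 1

assert total == comb(24, 4)                      # 10626
assert total - at_least_one_pair == comb(12, 4) * 2**4
print(at_least_one_pair / total)                 # about 0.2547
```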
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/532982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
My conjecture on almost integers. Here, when I was studying almost integers, I made the following conjecture: let $x$ be a natural number; then for a sufficiently large natural number $n$, $$\Omega=(\sqrt x+\lfloor \sqrt x \rfloor)^n$$ is an almost integer. The value of $n$ required depends on the difference between the number $x$ and the nearest perfect square smaller than it. Can anyone prove this conjecture?
Moreover, I can provide examples like
$(\sqrt 5+2)^{25}=4721424167835364.0000000000000002$
$(\sqrt 27+5 )^{15}=1338273218579200.000000000024486$
| Well, we have that $$(\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n} + (-\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n} $$
is an integer (by the Binomial Theorem), but $(-\sqrt{x}+\lfloor\sqrt{x}\rfloor)^{n}\to 0$ if $\sqrt{x}$ was not already an integer.
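This conjugate argument can be checked with exact integer arithmetic for the first example; writing $(2+\sqrt5)^n = p_n + q_n\sqrt5$, multiplication by $2+\sqrt5$ gives the integer recurrence below (a sketch in Python):

```python
# (p, q) -> (2p + 5q, p + 2q) corresponds to multiplying p + q*sqrt(5) by 2 + sqrt(5).
p, q = 2, 1                        # (2 + sqrt(5))^1
for _ in range(24):
    p, q = 2 * p + 5 * q, p + 2 * q

# (2 + sqrt(5))^25 + (2 - sqrt(5))^25 = 2p is an integer, and
# (2 - sqrt(5))^25 = -(sqrt(5) - 2)^25 is tiny.
assert 2 * p == 4721424167835364   # the integer part quoted in the question
gap = (5**0.5 - 2)**25             # distance from (sqrt(5)+2)^25 to that integer
assert 0 < gap < 1e-15
print(2 * p, gap)
```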
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Proving an identity We define $\|x\|_A^2:= x^TAx$ and $(x,y)_M := y^TMx$ for a symmetric positive definite matrix $A$ and an invertible matrix $M$.
I want to show the following identity for the errors of Richardson's method for solving an equation system $Ax=b$:
$$\frac{\|e_k\|^2_A - \|e_{k+1}\|_A^2}{\|e_k\|_A^2} = \frac{(y_k,y_k)_M^2}{(M^{-1}Ay_k,y_k)_M(A^{-1}My_k,y_k)_M}$$
whereas $e_k=x-x_k$, $y_k=M^{-1}r_k$, $r_k=b-Ax_k$ and $e_{k+1} = (I-M^{-1}A)e_k$ with $I$ the unit matrix.
So far I established the identity:
$\|e_{k+1}\|^2_A= e_k^Tr_k - e_k^T Ay_k - r_k^T M^{-1}Ae_k + y_k^T AM^{-1}Ae_k$
and I'm tried to calculate that
$$\frac{\|e_k\|^2_A - \|e_{k+1}\|_A^2}{\|e_k\|_A^2}\cdot (M^{-1}Ay_k,y_k)_M(A^{-1}My_k,y_k)_M = (y_k,y_k)_M^2$$
but my calculations never really go anywhere.
Does anyone have an idea on how to show this? Or how to calculate that the identity holds without getting confused somewhere along the way?
| I guess this actually does hold. Not for the (preconditioned) Richardson method but for the (preconditioned) steepest descent method.
In the preconditioned SD method, you update your $x_k$ such that the $A$-norm of the error of $x_{k+1}$ is minimal along the direction of the preconditioned residual $y_k=M^{-1}r_k$, that is, $x_{k+1}=x_k+\alpha_k y_k$, and since $e_{k+1}=e_k-\alpha_k y_k$ and
$$
\|e_{k+1}\|_A^2 = \|e_k\|_A^2-2\alpha_ky_k^Tr_k+\alpha_k^2y_k^TAy_k,
$$
we have $\alpha_k=y_k^Tr_k/y_k^TAy_k$.
Then
$$
\begin{split}
\|e_{k+1}\|_A^2 &= \|e_k\|_A^2-2\left(\frac{y_k^Tr_k}{y_k^TAy_k}\right)y_k^Tr_k+\left(\frac{y_k^Tr_k}{y_k^TAy_k}\right)^2y_k^TAy_k\\
&=\|e_k\|_A^2-\frac{2(y_k^Tr_k)^2-(y_k^Tr_k)^2}{y_k^TAy_k}
=\|e_k\|_A^2-\frac{(y_k^Tr_k)^2}{y_k^TAy_k}
\end{split}
$$
and hence
$$
\|e_k\|_A^2-\|e_{k+1}\|_A^2=\frac{(y_k^Tr_k)^2}{y_k^TAy_k}
$$
and
$$
\frac{\|e_k\|_A^2-\|e_{k+1}\|_A^2}{\|e_k\|_A^2}=\frac{(y_k^Tr_k)^2}{(y_k^TAy_k)(e_k^TAe_k)}.
$$
Since $y_k^Tr_k=y_k^TMy_k$, $e_k^TAe_k=r_k^TA^{-1}r_k=y_k^TM^TA^{-1}My_k$, we have
$$
\frac{\|e_k\|_A^2-\|e_{k+1}\|_A^2}{\|e_k\|_A^2}=\frac{(y_k^TMy_k)^2}{(y_k^TAy_k)(y_k^TM^TA^{-1}My_k)}.
$$
So far without any further assumptions on $M$.
Assuming that $M$ is symmetric and positive definite (so that it induces an inner product $(x,y)_M=y^TMx$), this is equivalent to
$$
\frac{\|e_k\|_A^2-\|e_{k+1}\|_A^2}{\|e_k\|_A^2}=\frac{(y_k,y_k)_M^2}{(M^{-1}Ay_k,y_k)_M(A^{-1}My_k,y_k)_M},
$$
which looks like what you are looking for. However, it is true if:
* you use the steepest descent method instead of simple Richardson, and
* the preconditioner $M$ is symmetric and positive definite.
Hope this helps, feel free to ask for clarification on any point (I could make a typo somewhere too).
P.S.: Without the assumption that $M$ is SPD (or at least symmetric), the last identity is correct (if you set $(x,y)_M=y^TMx$ formally without it being an actual inner product), except the second term in the denominator on the right-hand side because $y_k^TM^TA^{-1}My_k$ would be actually $(A^{-1}My_k,y_k)_{M^T}$ instead of $(A^{-1}My_k,y_k)_{M}$.
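To see the identity in action, here is a tiny self-contained numerical illustration (my own, with an arbitrarily chosen $2\times2$ SPD pair $A$, $M$), running a few exact steepest-descent steps and checking that both sides agree:

```python
# Hypothetical 2x2 example: A, M symmetric positive definite.
A = [[3.0, 1.0], [1.0, 2.0]]
M = [[2.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.0]

def mv(P, v):
    return [P[0][0] * v[0] + P[0][1] * v[1], P[1][0] * v[0] + P[1][1] * v[1]]

def inv(P):
    d = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[P[1][1] / d, -P[0][1] / d], [-P[1][0] / d, P[0][0] / d]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

Ainv, Minv = inv(A), inv(M)
x_star = mv(Ainv, b)                               # exact solution
x = [0.0, 0.0]
for _ in range(4):
    e = [x_star[0] - x[0], x_star[1] - x[1]]
    r = [b[0] - mv(A, x)[0], b[1] - mv(A, x)[1]]
    y = mv(Minv, r)                                # preconditioned residual
    alpha = dot(y, r) / dot(y, mv(A, y))           # exact line search
    x_new = [x[0] + alpha * y[0], x[1] + alpha * y[1]]
    e_new = [x_star[0] - x_new[0], x_star[1] - x_new[1]]

    lhs = (dot(e, mv(A, e)) - dot(e_new, mv(A, e_new))) / dot(e, mv(A, e))
    rhs = dot(y, mv(M, y))**2 / (dot(y, mv(A, y)) * dot(y, mv(M, mv(Ainv, mv(M, y)))))
    assert abs(lhs - rhs) < 1e-10                  # the identity, each step
    x = x_new
```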
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ are roots of $x^4 +(2-\sqrt{3})x^2 +2+\sqrt{3}=0$ .... Problem :
If $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ are roots of $x^4 +(2-\sqrt{3})x^2 +2+\sqrt{3}=0$ then the value of $(1-\alpha_1)(1-\alpha_2)(1-\alpha_3)(1-\alpha_4)$ is
(a) 2$\sqrt{3}$
(b) 5
(c) 1
(d) 4
My approach :
Discriminant of this problem is :
$(2-\sqrt{3})^2- 4(2+\sqrt{3}) <0$
Therefore roots are imaginary.
Now how to consider the roots here... please suggest.. thanks
| Given a polynomial $p_4(x)=ax^4+bx^3+cx^2+dx+e$, use Vieta's formulas:
$$x_1+x_2+x_3+x_4=-\frac{b}{a}$$
$$x_1x_2+x_1x_3+x_2x_3+x_1x_4+x_2x_4+x_3x_4=\frac{c}{a}$$
$$x_1x_2x_3+x_1x_2x_4+x_1x_3x_4+x_2x_3x_4=-\frac{d}{a}$$
$$x_1x_2x_3x_4=\frac{e}{a}$$
where $x_1,x_2,x_3,x_4$ are the roots of the given polynomial.
For the given example, we have: $x^4+0\cdot x^3 +(2-\sqrt 3)x^2+0\cdot x + 2+\sqrt 3=0$ $\Rightarrow$ $a=1, b=0, c=2-\sqrt 3, d=0, e=2+\sqrt 3$
$$\alpha_1+\alpha_2+\alpha_3+\alpha_4=0$$
$$\alpha_1\alpha_2+\alpha_1\alpha_3+\alpha_2\alpha_3+\alpha_1\alpha_4+\alpha_2\alpha_4+\alpha_3\alpha_4=2-\sqrt 3$$
$$\alpha_1\alpha_2\alpha_3+\alpha_1\alpha_2\alpha_4+\alpha_1\alpha_3\alpha_4+\alpha_2\alpha_3\alpha_4=0$$
$$\alpha_1\alpha_2\alpha_3\alpha_4=2+\sqrt 3$$
$(1-\alpha_1)(1-\alpha_2)(1-\alpha_3)(1-\alpha_4)=1-(\alpha_1+\alpha_2+\alpha_3+\alpha_4)+(\alpha_1\alpha_2+\alpha_1\alpha_3+\alpha_2\alpha_3+\alpha_1\alpha_4+\alpha_2\alpha_4+\alpha_3\alpha_4)-(\alpha_1\alpha_2\alpha_3+\alpha_1\alpha_2\alpha_4+\alpha_1\alpha_3\alpha_4+\alpha_2\alpha_3\alpha_4)+\alpha_1\alpha_2\alpha_3\alpha_4=1+(2-\sqrt 3)+(2+\sqrt 3)=5$
$(b)$ $5$ is the answer.
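As a cross-check, one can compute the four complex roots explicitly (the quartic is biquadratic, so solving a quadratic in $t=x^2$ suffices) and multiply out; since the polynomial is monic, the product also equals $p_4(1)=5$ directly. A sketch in Python:

```python
import cmath, math

c2 = 2 - math.sqrt(3)               # coefficient of x^2
c0 = 2 + math.sqrt(3)               # constant term

# Roots of t^2 + c2*t + c0 = 0, then x = ± sqrt(t).
disc = cmath.sqrt(c2 * c2 - 4 * c0)
roots = []
for t in ((-c2 + disc) / 2, (-c2 - disc) / 2):
    s = cmath.sqrt(t)
    roots.extend([s, -s])

prod = 1
for a in roots:
    prod *= (1 - a)
print(prod)                          # about 5, with vanishing imaginary part
assert abs(prod - 5) < 1e-9
assert abs((1 + c2 + c0) - 5) < 1e-12   # monic, so the product is p(1)
```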
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is it true that $2^p\|f\|^p_{L_p}+2^p\|f'\|^p_{L_p} >2\|f\|^p_{L_\infty}$? For any $C^1$ function defined in $(0,1)$, is it true that
$$ 2^p\|f\|^p_{L_p}+2^p\|f'\|^p_{L_p} >2\|f\|^p_{L_\infty}
$$
If it is true, how to prove it?
| If $x,y \in (0,1)$ then $$|f(x)| \le |f(y)| + |f(x) - f(y)| \le |f(y)| + \int_x^y |f'(t)| \, dt$$ so that $$ |f(x)| \le |f(y)| + |x-y|^{1-1/p} \left( \int_x^y |f'(t)|^p \, dt\right)^{1/p}$$ by e.g. Holder's inequality. This leads to $$|f(x)|^p \le 2^p |f(y)|^p + 2^p |x-y|^{p-1} \int_x^y |f'(t)|^p \, dt \le 2^p |f(y)|^p + 2^p \int_0^1 |f'(t)|^p \, dt.$$
You could now integrate with respect to $dy$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Set theory relation: irreflexive and transitive Which of the following relations on $T = \{1, 2, 3\}$ is irreflexive and transitive.
1. $\{(2, 1), (2, 3)\}$
2. $\{(1, 1), (2, 1), (3, 2)\}$
3. $\{(2, 1), (1, 2), (3, 2), (2, 3)\}$
4. $\{(1, 1), (2, 2), (3, 3), (2, 1), (1, 2)\}$
From my understanding, 2 and 4 are excluded because irreflexivity requires that for every $x \in T$, $(x, x) \notin R$,
but I don't see how either 1 or 3 can be transitive, which I understand as: if $(x, y) \in R$ and $(y, z) \in R$ then $(x, z) \in R$.
am I missing something here?
| $(1)$ is transitive, because the condition of transitivity is vacuously satisfied. There are no elements related in such a way that transitivity could fail, so by default the relation is transitive.
$(3)$ is not transitive, because $(3, 2), (2, 3) \in R$ but $(3, 3)\notin R$.
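Such small relations are easy to audit exhaustively in code; a sketch in Python:

```python
T = {1, 2, 3}

def is_irreflexive(R):
    return all((x, x) not in R for x in T)

def is_transitive(R):
    return all((x, z) in R
               for x, y1 in R for y2, z in R if y1 == y2)

R1 = {(2, 1), (2, 3)}
R2 = {(1, 1), (2, 1), (3, 2)}
R3 = {(2, 1), (1, 2), (3, 2), (2, 3)}
R4 = {(1, 1), (2, 2), (3, 3), (2, 1), (1, 2)}

results = [(is_irreflexive(R), is_transitive(R)) for R in (R1, R2, R3, R4)]
print(results)
assert results[0] == (True, True)    # vacuously transitive
assert results[2] == (True, False)   # (3,2),(2,3) present but (3,3) missing
```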
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to solve the inequality $n! \le n^{n-2}$? The inequality is $n! \le n^{n-2}$. I used Stirling's approximation for factorials and my answer was $n \le (e(2\pi)^{-1/2})^{2/5}$ but this doesn't seem right. Any help would be much appreciated.
| Here's an elementary proof, with some details left out to be filled in by you :).
Taking logarithms, you want to show that
$$
S_n = \log 2 + \log 3 + \dots + \log n \le (n-2) \log n \, .
$$
Note that there are $n-1$ terms on the left, each at most $\log n$. So you only have to "squeeze out" an additional term $\log n$. This can be done as follows:
Let $k = \lfloor \sqrt{n} \rfloor$, that is, $k$ is the largest integer such that $k \le \sqrt{n} < k+1$. Split the sum on the left to obtain:
$$
\log 2 + \log 3 + \dots + \log k + \log (k+1) + \dots + \log n
$$
The terms up to $\log k$ are all $\le \log \sqrt{n} = \frac{1}{2} \log n$, and there are $k-1$ such terms. The remaining terms are all $\le \log n$, and there are $n-k$ such terms.
Add everything up. The result is
$$
S_n \le (k-1) \frac{\log n}{2} + (n-k) \log n = \left( n - \frac{k + 1}{2} \right) \log n .
$$
Now determine for which $k$ and therefore for which $n$ this implies $S_n \le (n-2) \log n$. You'll notice that you have proved the inequality for $n \ge 9$ or so. As Robert Israel already pointed out, the inequality actually holds for $n \ge 5$. These remaining cases can be checked by direct computation.
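A brute-force scan (my own addition) confirms exactly where the inequality starts to hold:

```python
from math import factorial

# n! <= n^(n-2) fails only for n = 2, 3, 4 in the tested range.
fails = [n for n in range(1, 201) if factorial(n) > n**(n - 2)]
print(fails)                         # [2, 3, 4]
assert fails == [2, 3, 4]            # holds for n = 1 and all 5 <= n <= 200
```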
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is $\mathbb{Q}[α]=\{a+bα+cα^2 :a,b,c ∈ \mathbb{Q}\}$ with $α=\sqrt[3]{2}$ a field? I'm making some exercises to prepare for my ring theory exam:
Is $\mathbb{Q}[α]=\{a+bα+cα^2 :a,b,c ∈ \mathbb{Q}\}$ with
$α=\sqrt[3]{2}$ a field ?
If $(a+bα+cα^2)(a'+b'α+c'α^2)=1$, then (after quite some calculation and noticing that $α^3=2$ and $α^4=2α$):
\begin{align*}
aa'+2bc'+2cb'&=1 \\
ab'+ ba'+2cc'&=0 \\
ca'+bb'+ac' &= 0
\end{align*}
I'm not sure how to proceed, and if I'm heading in the right direction. Any help would be appreciated.
Something else I was thinking about, this ring I have seems to be isomorphic to:
$$\mathbb{Q}[X]/(X^3-2)$$
But this is not a maximal ideal, as it is contained in the ideal $(X^3,2)$. Would this be correct reasoning ?
| There is also a low-brow approach.
It will suffice to show that the three equations you wrote out always have a solution $a',b',c'$ for any $a,b,c$ not all zero.
This is a linear system of equations, with the corresponding matrix $$A = \begin{bmatrix} a & 2c & 2b \\ b & a & 2c \\ c & b & a
\end{bmatrix}.$$
So, to check that there is a solution, you just need to check that $\det A \neq 0$. But $\det A = a^3 + 2b^3 + 4 c^3 - 6abc$, which is non-negative by the inequality between arithmetic and geometric means, and which would be $0$ only if $a^3 = 2b^3 = 4c^3$ - but this cannot happen for non-zero rationals, by the irrationality of $\sqrt[3]{2}$.
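The determinant identity itself is easy to spot-check with exact rational arithmetic; a sketch in Python:

```python
from fractions import Fraction as F
import random

def det3(A):
    # Cofactor expansion along the first row.
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

random.seed(0)
for _ in range(100):
    a, b, c = (F(random.randint(-9, 9), random.randint(1, 9)) for _ in range(3))
    A = [[a, 2 * c, 2 * b],
         [b, a, 2 * c],
         [c, b, a]]
    assert det3(A) == a**3 + 2 * b**3 + 4 * c**3 - 6 * a * b * c
    if (a, b, c) != (0, 0, 0):
        assert det3(A) != 0          # nonzero elements are invertible
```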
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Multiplication of projection matrices Let $A$ and $B$ be two projection matrices having same dimensions. Then does the following hold
\begin{equation}
AB\leq I,
\end{equation}
where $I$ is the identity matrix. In other words is it true that $I-AB$ is positive semi-definite.
| It's not clear to me what you mean by matrix inequality, but assuming you mean an inequality componentwise, it's clearly false, for instance let
$$A = B = \frac{1}{2} \left[\begin{array}{cc}1 & 1\\1 &1\end{array}\right]$$
be projection onto the vector $(1,1)$.
Then $AB = A$ has positive off-diagonal entries so violates your inequality.
EDIT: $(I-AB)$ need not be positive semi-definite, as projection matrices are not in general symmetric. For instance
$$A = \left[\begin{array}{cc}1 & 2\\0 & 0\end{array}\right]$$
is a projection matrix ($A^2 = A$) but $I - A^2 = I-A$ is clearly not symmetric.
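Both counterexamples can be verified mechanically; a sketch in Python:

```python
def mm(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Orthogonal projection onto span{(1,1)}: AB = A has positive off-diagonal entries.
A = [[0.5, 0.5], [0.5, 0.5]]
assert mm(A, A) == A                 # idempotent
assert A[0][1] > 0

# Oblique projection: B^2 = B, but I - B is not symmetric (so not PSD).
B = [[1.0, 2.0], [0.0, 0.0]]
assert mm(B, B) == B
ImB = [[(1.0 if i == j else 0.0) - B[i][j] for j in range(2)] for i in range(2)]
assert ImB[0][1] != ImB[1][0]
```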
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What finite fields are quadratically closed? A field is quadratically closed if each of its elements is a square.
The field $\mathbb{F}_2$ with two elements is obviously quadratically closed.
However, testing some more finite fields with this property, I didn't find any more. Hence my question is:
Which finite fields $\mathbb{F}_{p^n}$ are quadratically closed and
why?
| Consider the squaring map from the multiplicative group of a finite field $F$ to itself. The kernel is $\{\pm1 \}$, i.e., it is trivial if and only if the characteristic of $F$ is $2$. Since this map is surjective if and only if it is injective, every element of $F$ is a square if and only if the characteristic of $F$ is $2$.
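For prime fields this counting argument is easy to verify directly (extension fields like $\mathbb F_4$ would need explicit polynomial arithmetic, so this sketch sticks to $\mathbb F_p$):

```python
# The squaring map on F_p is onto exactly when p = 2.
for p in (2, 3, 5, 7, 11, 13):
    squares = {(x * x) % p for x in range(p)}
    if p == 2:
        assert squares == {0, 1}             # every element is a square
    else:
        assert len(squares) == (p + 1) // 2  # (p-1)/2 nonzero squares, plus 0
```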
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
A criterion for weak convergence of probability measures
Let $\mathbb P_n$ and $\mathbb P$ be probability measures. We have that $\mathbb P_n$ converges weakly to $\mathbb P$ if for each continuous bounded function, $\int f(x)\mathrm d\mathbb P_n\to\int f(x)\mathrm d\mathbb P$. Show that $\mathbb P_n$ weakly converges to $\mathbb P$ if and only if $\lim_{n\rightarrow \infty} \mathbb P_n(A) = \mathbb P(A)$ for all Borel set $A$ with $P(\partial A)=0.$
And here is a related question, which I couldn't solve as well.
Give an example of a family of probability measures $\mathbb P_n$ on ($\mathbb{R}, \mathcal{B}(\mathbb{R}$)) which weakly converges to $\mathbb P$, all of $\mathbb P_n$ and $\mathbb P$ absolutely continuous with respect to the Lebesgue measure, yet there is a Borel set $A$ such that $\mathbb P_n(A)\not \rightarrow \mathbb P(A)$.
| The first part is contained in the statement of the portmanteau theorem. One direction is not especially hard (approximate pointwise the characteristic function of a closed set by a continuous function in order to get $\limsup_n\mathbb P_n(F)\leqslant \mathbb P(F)$).
For the second one, the question reduces to the following: if $f_n,f$ are non-negative integrable function of integral $1$ and for each continuous $\phi$, we have $\int_{\mathbb R}f_n(x)\phi(x)\mathrm dx\to\int_{\mathbb R}f(x)\phi(x)\mathrm dx$ can we extend this to $\phi\in L^\infty$? (this follows from an approximation argument). The answer is no.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/533953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Let $f(x)$ be a 3rd degree polynomial such that $f(x^2)=0$ has exactly $4$ distinct real roots Problem :
Let $f(x)$ be a 3rd degree polynomial such that $f(x^2)=0$ has exactly four distinct real roots, then which of the following options are correct :
(a) $f(x) =0$ has all three real roots
(b) $f(x) =0$ has exactly two real roots
(c) $f(x) =0$ has only one real root
(d) none of these
My approach :
Since $f(x^2)=0 $ has exactly four distinct real roots. Therefore the remaining two roots left [ as $ f(x^2)=0$ is a degree six polynomial].
How can we say that the remaining two roots will be real . This will not have one real root ( as non real roots comes in conjugate pairs).
So, option (c) is incorrect. I think the answer lies in option (a) or (b) . But I am not confirm which one is correct. Please suggest.. thanks..
| Both $f(x)=(x-1)(x-4)^2$ (two real roots including one double root), and $f(x)=(x+1)(x-1)(x-4)$ (three real roots) fulfil the conditions of the problem, with $f(x^2)=0$ having roots at $x=\pm1$ and $x=\pm2$.
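Both examples can be verified numerically; a sketch, assuming NumPy is available:

```python
import numpy as np

def distinct_real_roots(coeffs, tol=1e-5):
    """Distinct real roots of the polynomial with the given coefficients."""
    roots = np.roots(coeffs)
    real = sorted(r.real for r in roots if abs(r.imag) < tol)
    out = []
    for r in real:
        if not out or abs(r - out[-1]) > tol:
            out.append(r)
    return out

def compose_x2(c):
    """Coefficients of p(x^2), given coefficients c of a cubic p."""
    out = [0.0] * 7
    for i, a in enumerate(c):        # c[i] multiplies x^(3-i)
        out[2 * i] = a               # so it multiplies x^(2(3-i)) in p(x^2)
    return out

f = np.poly([1, 4, 4])               # (x-1)(x-4)^2 : two distinct real roots
g = np.poly([-1, 1, 4])              # (x+1)(x-1)(x-4): three real roots

for p in (f, g):
    roots = distinct_real_roots(compose_x2(p))
    assert len(roots) == 4                             # four distinct real roots
    assert np.allclose(roots, [-2, -1, 1, 2], atol=1e-4)
```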
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 2
} |
Prove this limit as $x\to\infty$ Let $f:(a,\infty)\to\mathbb{R}$ be a function. Suppose for each $b>a$, $f$ is bounded on $(a,b)$ and $\lim_{x\to\infty}f(x+1)-f(x)=A$. Prove
$$\lim_{x\to\infty}\frac{f(x)}{x}=A.$$
Here we assume $f$ to be arbitrary and no further conditions. $f$ could be continuous,discontinuous,as long as it fits all assumptions. I have some trouble with the intermediate steps.
| We have that $$\lim_{x\to\infty}f(x+1)-f(x)=A$$ which means that for any $\varepsilon\in\mathbb R^+$ there is an $n\in\mathbb N$ such that for each $x>n$ we have $A-\varepsilon<f(x+1)-f(x)<A+\varepsilon$. (If this holds for some $n<a$, we can choose $n=\lceil a\rceil$ instead for the remainder of the proof.)
For simplicity I will define $g(x):=f(x)-Ax$, then we have that $g(x+1)-g(x)=f(x+1)-f(x)-A$ and for $x>n$: $|g(x+1)-g(x)|<\varepsilon$.
As $f$ is bounded in $(a,b)$ for any $b>a$, it is bounded in $(a,n+2)$, and since we arranged $n\ge a$, $f$ is bounded in $(n,n+1]$; call that bound $\beta$: $|f(x)|<\beta$ for any $x$ in $(n,n+1]$.
Also, let $\gamma=\beta+(n+1)|A|$, then $g$ is bounded and $|g(x)|<\gamma$ for $x\in(n,n+1]$.
Now, we can prove that
\begin{align}
|g(x)|&\le\gamma+(\lfloor x\rfloor-n)\varepsilon,&&\text{and}\\
\frac{|g(x)|}x &\le\frac{\gamma+(\lfloor x\rfloor-n)\varepsilon}x =\frac\gamma x+\frac{\lfloor x\rfloor-n}x\varepsilon
\end{align}
For any fixed positive $\varepsilon$, we have:
$$\lim_{x\to\infty}\frac\gamma x+\frac{\lfloor x\rfloor-n}x\varepsilon=\varepsilon$$
therefore
$$\limsup_{x\to\infty}\left|\frac{g(x)}x\right|\le\varepsilon$$
Since $\varepsilon>0$ was arbitrary, this proves that $\lim_{x\to\infty}\frac{g(x)}x=0$, and the limit for $\frac{f(x)}x$ then follows from $f(x)=Ax+g(x)$.
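The statement can also be illustrated numerically with a concrete $f$; a sketch (the particular choice $f(x)=2x+\sqrt{x}$, i.e. $A=2$, is just an assumption of the example):

```python
import math

def f(x):
    return 2 * x + math.sqrt(x)          # f(x+1) - f(x) -> 2, so A = 2 here

# increments converge to A ...
for x in [1e2, 1e4, 1e6]:
    assert abs((f(x + 1) - f(x)) - 2) < 1 / math.sqrt(x)

# ... and f(x)/x converges to A as well (error ~ 1/sqrt(x))
for x in [1e2, 1e4, 1e6]:
    assert abs(f(x) / x - 2) < 2 / math.sqrt(x)
```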
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
A $2 \times 2$ matrix $A$ such that $A^n$ is the identity matrix So basically determine a $2 \times 2$ matrix $A$ such that $A^n$ is an identity matrix, but none of $A^1, A^2,..., A^{n-1}$ are the identity matrix. (Hint: Think geometric mappings)
I don't understand this question at all, can someone help please?
| Hint: Rotate through $\frac{2\pi}{n}$.
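Concretely, the rotation matrix $R=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}$ with $\theta=2\pi/n$ works; a quick numerical check (here $n=7$ is an arbitrary sample value):

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

n = 7
A = rotation(2 * np.pi / n)
P = np.eye(2)
for k in range(1, n + 1):
    P = P @ A                                  # P = A^k
    if k < n:
        assert not np.allclose(P, np.eye(2))   # A^1, ..., A^(n-1) != I
assert np.allclose(P, np.eye(2))               # A^n = I
```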
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
} |
Map induced by $O(n)\hookrightarrow U(n)$ on homotopy groups There is an inclusion $O(n)\hookrightarrow U(n)$ which views an $n\times n$ orthogonal matrix as a unitary matrix. It is also a theorem, sometimes called Bott periodicity, that we have the following homotopy groups:
$\pi_{4i-1}(O(n)) = \mathbb{Z}$ and $\pi_{4i-1}(U(n))=\mathbb{Z}$
for $n$ large, say $n>2(4i +1)$. My question is: What is the induced map $\pi_{4i-1}(O(n))\rightarrow \pi_{4i-1}(U(n))$? It is a map from $\mathbb{Z}$ to $\mathbb{Z}$ so must be multiplication by some integer, but which integer is it?
The only thing I know that might help is that the map $O(n)\hookrightarrow U(n)$ induces a map on classifying spaces $BO(n)\rightarrow BU(n)$. One can think of this map as an inclusion of a real grassmannian into a complex grassmannian, but the map on homotopy groups seems just as mysterious.
Thanks for any help.
| I had this question myself a few days ago and found this question, but I now have an answer with a hint from my supervisor.
The key seems to be in a theorem (3.2) in 'Homotopy theory of Lie Groups' by Mimura, which can be found in the Handbook of Algebraic Topology. The theorem gives a weak homotopy equivalence
$$BO\rightarrow \Omega (SU/SO) .$$
This is enough for our requirement since there are isomorphisms
$$\pi_i(O)\cong\pi_i(SO) \mbox{ and } \pi_i(U)\cong\pi_i(SU)$$
for $i>2$. Notice that these homotopy groups are the same as $U(n)$ or $O(n)$ in the ranges you specified.
Now there is a fibration
$$ SO\overset{f}{\rightarrow} SU \rightarrow SU/SO$$
and $f$ will induce the same map on $\pi_i$ for $i>2$. Due to the resulting long exact sequence of homotopy groups, we are required to work out $\pi_i(SU/SO)$, but by the theorem, we have
$$\pi_i(SU/SO)\cong \pi_{i-2}(O).$$
I shall leave the details to you, but for all $k$, we get $\pi_{3+8k}(O) \rightarrow \pi_{3+8k}(U) $ is multiplication by 2 and $\pi_{7+8k}(O)\rightarrow \pi_{7+8k}(U)$ is an isomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Is this also a homotopy If $F$ is a path at a point $x$ then the following defines a homotopy from the path $FF^{-1}$ to the constant path $e$:
$$ \begin{array}{cc}
H(t,s) = F(2t) & s \ge 2t \\
H(t,s) = F(s) & s \le 2t \land s \le -2t + 2 \\
H(t,s) = F(2-2t) & s \ge -2t + 2
\end{array}$$
Is it possible that the following is also a homotopy from $e$ to $FF^{-1}$?
$$ \begin{array}{cc}
H(t,s) = F(0) & s =0 \\
H(t,s) = F({2t \over s}) & 0 \le t \le s/2 \\
H(t,s) = F^{-1}({2t - 1 \over s}) & s/2 \le t \le s
\end{array}$$
| As far as I can see, you haven't described what happens when $t\geq s$ for instance what is $H(\frac{3}{4},\frac{1}{2})$? And so no, this does not define a homotopy as it is not even a well defined function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to compute probability of winning The producer places at random a valuable prize behind one of the three doors, numbered 1 to 3, and nothing behind the remaining two and each door is equally likely to hide the prize. If you have selected a door that hides a prize, the host will always open the smaller-numbered one of the remaining doors and then will always offer you the possibility of switching doors. Otherwise, with probability 0.5 the host will open immediately the door you have selected i.e. revealing that you have lost, and with probability 0.5 he will open the other door that does not hide the prize. In the latter case he will always give you the opportunity of switching doors. In the end, the door you select will be opened and if there's a prize, you win it.
If initially you select door 3 and if the host opens door 2 you switch, otherwise you stick with door 3, what is your probability of winning the prize?
| If the prize was behind door $1$, you lose or win with probability $\frac12$ each.
If the prize was behind door $2$, you lose, since you never switch to door $2$.
If the prize was behind door $3$, you win, since the host opens door $1$ and you stick with door $3$.
Thus you win with probability $\frac13\left(\frac12+0+1\right)=\frac12$.
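The case analysis can be cross-checked by simulating the stated rules; a Monte Carlo sketch (door numbering as in the problem, fixed seed):

```python
import random

def play(rng):
    prize = rng.randint(1, 3)        # prize placed uniformly at random
    pick = 3                         # you always select door 3
    if prize == pick:
        # host opens the smaller-numbered remaining door (door 1); you stick
        return True
    if rng.random() < 0.5:
        return False                 # host opens your door: you lose outright
    # host opens the other door that does not hide the prize
    opened = next(d for d in (1, 2) if d != prize)
    final = 1 if opened == 2 else 3  # switch only if door 2 was opened
    return final == prize

rng = random.Random(0)
n = 200_000
wins = sum(play(rng) for _ in range(n))
assert abs(wins / n - 0.5) < 0.01    # agrees with the computed value 1/2
```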
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $x^2 = I_2$ where x is a 2 by 2 matrix I tried a basic approach and wrote x as a matrix of four unknown elements $\begin{pmatrix} a && b \\ c && d \end{pmatrix}$ and squared it when I obtained $\begin{pmatrix} a^2 + bc && ab + bd \\ ca + dc && cd + d^2\end{pmatrix}$ and by making it equal with $I_2$ I got the following system
$a^2 + bc = 1$
$ab +bd = 0$
$ca + dc = 0$
$cd + d^2 = 1$
I don't know how to proceed. (Also, if anyone knows of a better or simpler way of solving this matrix equation I'd be more than happy to know).
| The second and third equalities are
$$b(a+d)=0$$
$$c(a+d)=0$$
Split the problem in two cases:
Case 1: $a+d \neq 0$. Then $b=0, c=0$, and from the first and last equation you get $a,d$.
Case 2: $a+d=0$. Then $d=-a$ and $bc=1-a^2$. Show that any matrix satisfying this works.
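The full solution set, $X=\pm I$ together with all $\begin{pmatrix}a&b\\ c&-a\end{pmatrix}$ satisfying $bc=1-a^2$, can be spot-checked numerically; a sketch, assuming NumPy:

```python
import numpy as np

def is_sqrt_of_identity(X):
    return np.allclose(X @ X, np.eye(2))

# Case 1: b = c = 0 and a = d = +-1
assert is_sqrt_of_identity(np.eye(2))
assert is_sqrt_of_identity(-np.eye(2))

# Case 2: d = -a and bc = 1 - a^2, for a few sample values
for a in [0.0, 0.5, 2.0, -3.0]:
    for b in [1.0, -2.0, 0.25]:
        c = (1 - a * a) / b
        X = np.array([[a, b],
                      [c, -a]])
        assert is_sqrt_of_identity(X)
```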
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Laurent series in all possible regions of convergence Let $f=\frac{2}{z}-\frac{3}{z-2}+\frac{1}{z+4}$ . Find the Laurent series in all possible regions of convergence about z=0 and read the residuum.
I am not sure if I have to consider all 3 regions (inside the circle of radius 2,
inside the annulus $2<r<4$, and outside the circle of radius 4). Can someone guide me on how to continue? Thank you!
| Note that for $z\neq -4,0,$ we can write $$\frac1{z+4}=\cfrac1{4-(-z)}=\frac1{4}\cdot\cfrac1{1-\left(-\frac{z}{4}\right)}$$ and $$\frac1{z+4}=\cfrac1{z-(-4)}=\frac1{z}\cdot\cfrac1{1-\left(-\frac{4}{z}\right)}.$$
Now, one of these can be expanded as a multiple of a geometric series in the disk $|z|<4,$ and the other can be expanded as a multiple of a geometric series in the annulus $|z|>4$. That is, we will use the fact that $$\frac1{1-w}=\sum_{k=0}^\infty w^k$$ whenever $|w|<1$. You should figure out which region works for which rewritten version, and find the respective expansions in both cases.
Likewise, we can rewrite $\frac1{z-2}$ in two similar forms, one of which is expandable in $|z|<2$ and one of which is expandable in $|z|>2.$
Using these expansions will give you three different Laurent expansions of $f(z),$ one for each of the regions $0<|z|<2,$ $2<|z|<4,$ and $|z|>4$.
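The two rewritings can be sanity-checked by comparing truncated geometric series against the exact value of $\frac1{z+4}$; a sketch (the sample points $z=1.5$ and $z=6$ are arbitrary choices in the two regions):

```python
# 1/(z+4) = (1/4) * sum_k (-z/4)^k,  valid for |z| < 4
z = 1.5
approx = 0.25 * sum((-z / 4) ** k for k in range(60))
assert abs(approx - 1 / (z + 4)) < 1e-12

# 1/(z+4) = (1/z) * sum_k (-4/z)^k,  valid for |z| > 4
z = 6.0
approx = (1 / z) * sum((-4 / z) ** k for k in range(120))
assert abs(approx - 1 / (z + 4)) < 1e-12

# Outside its region of validity the first series diverges
partial = 0.25 * sum((-6.0 / 4) ** k for k in range(120))
assert abs(partial) > 1e6            # partial sums blow up for |z| > 4
```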
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Evaluate this power series Evaluate the sum
$$x+\frac{2}{3}x^3+\frac{2}{3}\cdot\frac{4}{5}x^5+\frac{2}{3}\cdot\frac{4}{5}\cdot\frac{6}{7}x^7+\dots$$
Totally no idea. I think this series may related to the $\sin x$ series because of those missing even powers. Another way of writing this series:
$$\sum_{k=0}^{\infty}\frac{(2k)!!}{(2k+1)!!}x^{2k+1}.$$
| In this answer, I mention this identity, which can be proven by repeated integration by parts:
$$
\int_0^{\pi/2}\sin^{2k+1}(x)\;\mathrm{d}x=\frac{2k}{2k+1}\frac{2k-2}{2k-1}\cdots\frac{2}{3}=\frac{1}{2k+1}\frac{4^k}{\binom{2k}{k}}\tag{1}
$$
Your sum can be rewritten as
$$
f(x)=\sum_{k=0}^\infty\frac1{(2k+1)}\frac{4^k}{\binom{2k}{k}}x^{2k+1}\tag{2}
$$
Combining $(1)$ and $(2)$, we get
$$
\begin{align}
f(x)
&=\int_0^{\pi/2}\sum_{k=0}^\infty\sin^{2k+1}(t)x^{2k+1}\,\mathrm{d}t\\
&=\int_0^{\pi/2}\frac{x\sin(t)\,\mathrm{d}t}{1-x^2\sin^2(t)}\\
&=\int_0^{\pi/2}\frac{-\,\mathrm{d}\left(x\cos(t)\right)}{1-x^2+x^2\cos^2(t)}\\
&=-\frac1{\sqrt{1-x^2}}\left.\tan^{-1}\left(\frac{x\cos(t)}{\sqrt{1-x^2}}\right)\right]_0^{\pi/2}\\
&=\frac1{\sqrt{1-x^2}}\tan^{-1}\left(\frac{x}{\sqrt{1-x^2}}\right)\\
&=\frac{\sin^{-1}(x)}{\sqrt{1-x^2}}\tag{3}
\end{align}
$$
Radius of Convergence
This doesn't appear to be part of the question, but since some other answers have touched on it, I might as well add something regarding it.
A corollary of Cauchy's Integral Formula is that the radius of convergence of a complex analytic function is the distance from the center of the power series expansion to the nearest singularity. The nearest singularity of $f(z)$ to $z=0$ is $z=1$. Thus, the radius of convergence is $1$.
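The closed form $(3)$ is easy to test against partial sums of $(2)$; a numerical sketch:

```python
import math

def series(x, terms=200):
    """Partial sum of sum_k (2k)!!/(2k+1)!! x^(2k+1), via the term ratio."""
    term = x                          # k = 0 term
    total = term
    for k in range(terms):
        term *= (2 * k + 2) / (2 * k + 3) * x * x
        total += term
    return total

def closed_form(x):
    return math.asin(x) / math.sqrt(1 - x * x)

for x in [0.1, 0.5, 0.9]:             # sample points inside |x| < 1
    assert abs(series(x) - closed_form(x)) < 1e-9
```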
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
How to I find the distribution of $\log p(X)$ when $X$ is distributed under $p$? I have a feeling there's no general solution to this problem, but I'll ask anyway.
I have a multivariate PDF $p$ and, given a random vector $X\sim p$, I'd like to find the the PDF of $\log p(X)$.
For example, if I have a simple 2-dimensional Gaussian,
$$
p(x,y)=\frac{1}{2\pi}\exp\left(-\frac{1}{2}\left(x^2+y^2\right)\right),
$$
I can write the distribution of the distance from the peak, $r$, as
$$
p_r(r)=r\exp\left(-\frac{1}{2}r^2\right)
$$
and then change variables with $\log p = -\frac{1}{2}r^2 - \log 2\pi$. I find that the PDF of $\log p$ on $(-\infty,-\log 2\pi]$ is then
$$
p_{\log p}(\log p) = 2\pi\exp(\log p).
$$
I can use a similar argument for higher-dimensional Gaussians, but is there a general way of getting this result for an arbitrary $p$, without relying on specific symmetries?
| I think this is achieved as follows:
1. augment $\log(p)$ to form a vector of the same dimension as $X$ (call this vector $Y$);
2. partition the support of $Y$ into regions where $Y = g(X)$ is a one-to-one function of $X$ (hence invertible);
3. use the standard results on transformations of multivariate probability distributions (e.g., http://www.statlect.com/subon2/dstfun2.htm), which require computing the Jacobian of $g^{-1}(y)$.
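For the Gaussian example in the question, the derived density implies that $-\log 2\pi-\log p(X)=\frac12(X_1^2+X_2^2)$ should be Exponential$(1)$; a Monte Carlo sketch (fixed seed):

```python
import math
import random

rng = random.Random(42)
n = 200_000
u = []
for _ in range(n):
    x, y = rng.gauss(0, 1), rng.gauss(0, 1)
    log_p = -math.log(2 * math.pi) - 0.5 * (x * x + y * y)
    u.append(-math.log(2 * math.pi) - log_p)   # = (x^2 + y^2)/2

mean = sum(u) / n
assert abs(mean - 1.0) < 0.02        # Exp(1) has mean 1
# CDF check at t = 1: P(U <= 1) = 1 - e^{-1}
frac = sum(1 for v in u if v <= 1) / n
assert abs(frac - (1 - math.exp(-1))) < 0.01
```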
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Computing the total curvature Let $C$ be the curve in $\Bbb{R}^2$ given by $(t-\sin t,1-\cos t)$ for $0 \le t \le 2 \pi$. I want to find the total curvature of $C$.
I found it brutally by finding the curvature $k(t)$, then reparametrizing by arc length $s$, and computing $\int_0^Lk(t(s))ds$, where $L=8$ is the length of $C$.
I found that the answer is $\pi$. But is there any way to compute it more easily? The above computation was somewhat hard, and I guess that there may be some easy method (maybe something like Gauss-Bonnet, or how much the angle of the tangent vector has changed).
| A simpler answer is as follows: given the natural equation for a curve in the form of $\kappa(s)$, it can be shown that the tangent angle is given by
$$\theta=\int \kappa (s) ds \ \ \ \text{or} \ \ \ \kappa (s)=\frac{d\theta}{ds}$$
Thus from the definition of curvature used here, we obtain
$$K=\int_{s_1}^{s_2}\kappa(s)ds=\int_{\theta_1}^{\theta_2}d\theta=\theta_2-\theta_1$$
In the present example, you will find $K=-\pi$, where the minus sign is due to the curve unwinding in the clockwise direction.
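Since $\kappa\,\mathrm ds=\frac{x'y''-y'x''}{x'^2+y'^2}\,\mathrm dt$, the signed total curvature of the cycloid arch can be checked by direct quadrature; a sketch (midpoint rule, which avoids the $0/0$ endpoints):

```python
import math

def integrand(t):
    # x(t) = t - sin t,  y(t) = 1 - cos t
    xp, yp = 1 - math.cos(t), math.sin(t)
    xpp, ypp = math.sin(t), math.cos(t)
    # kappa ds = (x'y'' - y'x'')/(x'^2 + y'^2) dt; simplifies to -1/2 here
    return (xp * ypp - yp * xpp) / (xp * xp + yp * yp)

n = 100_000
h = 2 * math.pi / n
total = h * sum(integrand((k + 0.5) * h) for k in range(n))

assert abs(total - (-math.pi)) < 1e-6   # total signed curvature is -pi
```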
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
What's the limit of this sequence? $\lim_{n \to \infty}\frac{1}{n}\bigg(\sqrt{\frac{1}{n}}+\sqrt{\frac{2}{n}}+\cdots + 1 \bigg)$
My attempt:
$\lim_{n \to \infty}\frac{1}{n}\bigg(\sqrt{\frac{1}{n}}+\sqrt{\frac{2}{n}}+\cdots + 1 \bigg)=\lim_{n \to \infty}\bigg(\frac{\sqrt{1}}{\sqrt{n^3}}+\frac{\sqrt{2}}{\sqrt{n^3}}+\cdots + \frac{\sqrt{n}}{\sqrt{n^3}} \bigg)=0+\cdots+0=0$
| $$
\int_{0}^{n}x^{1/2}\,\mathrm{d}x
<
\sum_{k = 1}^{n}\sqrt{k}
<
\int_{1}^{n + 1}x^{1/2}\,\mathrm{d}x
\quad\Longrightarrow\quad
\frac{2}{3}\,n^{3/2}
<
\sum_{k = 1}^{n}\sqrt{k}
<
\frac{2}{3}\left[(n + 1)^{3/2} - 1\right]
$$
Dividing through by $n^{3/2}$,
$$
\frac{2}{3}
<
\frac{1}{n}\sum_{k = 1}^{n}\sqrt{\frac{k}{n}}
<
\frac{2}{3}\left[\left(1 + \frac{1}{n}\right)^{3/2} - \frac{1}{n^{3/2}}\right]
$$
Both bounds tend to $\frac{2}{3}$, so by the squeeze theorem
$$
\lim_{n \to \infty}
\frac{1}{n}\sum_{k = 1}^{n}\sqrt{\frac{k}{n}}
=
\frac{2}{3}
$$
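The sandwich bounds, and the limit $\frac23$, are easy to confirm numerically; a quick sketch:

```python
import math

for n in [10, 1_000, 100_000]:
    s = sum(math.sqrt(k / n) for k in range(1, n + 1)) / n
    upper = 2 / 3 * ((1 + 1 / n) ** 1.5 - n ** -1.5)
    assert 2 / 3 < s < upper         # the sandwich bounds hold

assert abs(s - 2 / 3) < 1e-4         # s_n is already close to 2/3 at n = 1e5
```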
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/534958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Product / GM of numbers, with fixed mean, increase as numbers get closer to mean. I am trying to prove a statement which goes like this.
Let $a_i$ and $b_i$ be positive real numbers where $i = 1,2,3,\ldots,n$; where $n$ is a positive integer greater than or equal to $2$, such that,
$$0\lt a_1\le a_2 \le \ldots\le a_n \ \ and \ \ 0\lt b_1\le b_2 \le\ldots\le b_n \tag{i}$$
and,
$$\sum^n_{i=1} a_i = \sum^n_{j=1} b_j \tag{ii}$$
If $\exists$ $k \in \Bbb Z$ such that $1 \le k\le n-1$ and,
$$a_k\le b_1 \le b_2 \le \ldots \le b_n \le a_{k+1} \tag{iii}$$
then,
$$\prod^n_{i=1} a_i \le \prod^n_{j=1} b_j \tag{iv}$$
Is there anyway to prove this?
Comments: Basically we are trying to prove if the product of $n$ numbers whose mean is constant increases as all the numbers come closer (in the number line) to the mean than the closest number (out of the $n$ number in the previous step) to mean in the previous step.
It is a tweak of the famous $GM \le AM$ inequality.
| If we only 'move' two elements of the sequence $(a_k)_k$, letting $b_i:=a_i$ except for $i=j,k$ ($j<k$) when $b_j:=a_j+\varepsilon$ and $b_k:=a_k-\varepsilon$, then we have
$$b_jb_k=(a_j+\varepsilon)(a_k-\varepsilon) = a_ja_k+\varepsilon\,(a_k-a_j\, -\varepsilon) \ > \ a_ja_{k}$$
using that $a_k>a_j+\varepsilon$. (Actually, we can assume that $\varepsilon\le \displaystyle\frac{a_k-a_j}2 $, provided $a_j<a_k$.)
By induction on $n$ we might be able to prove that the given $(b_k)$ sequence can be obtained from the given $(a_k)$ sequence using repeatedly the 'move two elements' method.
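The key 'move two elements' step, which preserves the sum and strictly increases the product, is easy to check numerically; a sketch (random instances, fixed seed):

```python
import random

rng = random.Random(1)
for _ in range(1000):
    a_j = rng.uniform(0.1, 5.0)
    a_k = a_j + rng.uniform(0.1, 5.0)          # a_j < a_k
    eps = rng.uniform(1e-6, (a_k - a_j) / 2)   # 0 < eps <= (a_k - a_j)/2
    b_j, b_k = a_j + eps, a_k - eps
    assert abs((b_j + b_k) - (a_j + a_k)) < 1e-12   # sum preserved
    assert b_j * b_k > a_j * a_k                    # product strictly increases
```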
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Quotient $K[x,y]/(f)$, $f$ irreducible, which is not UFD Does anyone know an irreducible polynomial $f \in K[x,y]$ such that the quotient $K[x,y]/(f)$ is not a UFD? Is it known when this quotient is a UFD?
Thanks.
| Yes, I think that the standard example is to take $f(x,y)=x^3-y^2$. Then when you look at it, the quotient is the same as the set of polynomials in $K[t]$ with no degree-one term. That is, things that look like $c_0+\sum_{i>1}c_it^i$, the sum being finite, of course. You prove this by mapping $K[x,y]$ to $K[t]$ by sending $x$ to $t^2$ and $y$ to $t^3$. I’ll leave it to you to check that the kernel is the ideal generated by $x^3-y^2$, that the image is what I said, and that this is not a UFD.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Preparedness for a graduate course in Complex Analysis I am entering graduate school next year without any background in Complex Analysis. I have, however, taken 2 semesters of Real Analysis and a reader course in Measure Theory (using Bartle's Elements of Integration and Lebesgue Measure). I can, of course, brush up over the summer; however, I am more interested in reviewing material. That being said, would I perhaps be able to jump in to a course using the Alfhors text?
| When you start a course on complex analysis you want to be sure that you have a rock-solid grounding in complex numbers, de Moivre's formula and solving basic complex equations. You also may want to practice completing the square on quadratic equations with complex coefficients. I found that preparation very useful when I did a course on complex variables. Another good thing to practice is complex numbers and loci, describing a set of complex numbers fulfilling some constraint in the form of an equation. Lastly you may look into complex transformations, like conformal mappings and Möbius transformations. For the most part, the topics I mentioned do not involve complex calculus, but mastering these topics will be very useful when you take on a complex analysis course. It worked for me... Good luck
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
number theory proof Does this proof work?
Prove or disprove that if $\sigma(n)$ is a prime number, n must be a power of a prime.
Since $\sigma(n)$ is prime, $n$ can not be prime unless it is the only even prime, $2$, since $\sigma(n)$ for prime $n=p+1$ which will always be even and therefore not a power of a prime.
Now, assume that n is a power of a prime.
Then $\sigma(n)=\sigma(p^a)$ for prime number $p$
so since $\sigma$ is a multiplicative function,
$\sigma(n)=\sigma(p^a)=\sigma(p)\sigma(p)...\sigma(p)$
which is not prime since it has multiple factors.
$n$ must therefore not be a power of a prime
| Not quite. $\sigma$ is a multiplicative function, as you said, which means that if $\textrm{gcd}(a,b)=1$, then $\sigma(a)\sigma(b)=\sigma(ab)$. But with $a=b=p$, we clearly don't have $\textrm{gcd}(a,b)=1$. By looking at the contrapositive of that statement, you can see that you're being asked to show is that if $n$ is not a prime power - or in other words, that $n$ has multiple prime factors - then $\sigma(n)$ is not prime. Can you show this using your multiplicativity hypothesis?
(As a side note, if a function $\phi: \mathbb{Z}^+ \rightarrow \mathbb{Z}^+$ has $\phi(ab)=\phi(a)\phi(b)$ for all $a,b \in \mathbb{Z}^+$, $\phi$ is called "completely multiplicative".)
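The corrected statement (if $\sigma(n)$ is prime then $n$ is a prime power) can be confirmed by brute force for small $n$; a sketch:

```python
def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_prime_power(n):
    if n < 2:
        return False
    p = next(d for d in range(2, n + 1) if n % d == 0)   # smallest prime factor
    while n % p == 0:
        n //= p
    return n == 1

hits = [n for n in range(2, 2000) if is_prime(sigma(n))]
assert hits[:5] == [2, 4, 9, 16, 25]                 # the first few such n
assert all(is_prime_power(n) for n in hits)          # every hit is a prime power
```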
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Cyclic groups question Show that $\mathbb Z_{35}^\times$ is not cyclic.
I assume that I need to show that no element of $\mathbb Z_{35}$ has a particular order, indicating it is not cyclic, but I'm not sure how to do this.
| Your group $G=\mathbb Z_{35}^{\times}$ has order $24$, right? So all you have to do is show that none of the $24$ elements of $G$ has order $24$. Let's just do it.
Let's start by picking an element of $G$ at random and computing its order. I picked $2$, and computed: $2^1=2$, $2^2=4$, $2^3=8$, $2^4=16$, $2^5=32$, $2^6=29$, $2^7=23$, $2^8=11$, $2^9=22$, $2^{10}=9$, $2^{11}=18$, $2^{12}=1$, so $2$ has order $12$.
Now we are half done! Each of those $12$ powers of $2$ has order $12$ or less, seeing as $(2^n)^{12}=(2^{12})^n=1^n=1$. Let$$H=\{2^n:n\in\mathbb Z\}=\{2^n:1\le n\le12\}=\{1,\ 2,\ 4,\ 8,\ 9,\ 11,\ 16,\ 18,\ 22,\ 23,\ 29,\ 32\}.$$We know that $H$ is a subgroup of $G$ (right?) and $h^{12}=1$ for every $h\in H$.
Now let's pick a random element of $G\setminus H$. I picked $3$. Well, $3^2=9=2^{10}$, so $3^{12}=2^{60}=(2^{12})^5=1$. Now the coset $$3H=\{3h:h\in H\}=\{3,\ 6,\ 12,\ 24,\ 27,\ 33,\ 13,\ 19,\ 31,\ 34,\ 17,\ 26\}=\{3,\ 6,\ 12,\ 13,\ 17,\ 19,\ 24,\ 26,\ 27,\ 31,\ 33,\ 34\}$$contains the other $12$ elements of $G$. If $x=3h\in3H$, then $x^{12}=(3h)^{12}=3^{12}h^{12}=1\cdot1=1.$
We have shown that every element of $G$ satisfies the equation $x^{12}=1$. That means that every element has order $1$, $2$, $3$, $4$, $6$, or $12$. There are no elements of order $24$, so $G$ is not cyclic!
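A direct computation confirms the numbers used above, and that no element has order $24$; a sketch:

```python
from math import gcd

G = [a for a in range(1, 35) if gcd(a, 35) == 1]
assert len(G) == 24                    # |G| = phi(35) = 24

def order(a, m=35):
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

orders = {a: order(a) for a in G}
assert orders[2] == 12 and orders[3] == 12
assert max(orders.values()) == 12      # no element of order 24, so not cyclic
```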
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Orthonormal Sets and the Gram-Schmidt Procedure
What my problem in understanding in the above procedure is , how they
constructed the successive vectors by substracting? Can you elaborate
please?
| Let $|w_1\rangle=|u_1\rangle$. Let us find $|w_2\rangle$ as a linear combination of $|w_1\rangle$ and $|u_2\rangle$. Suppose
$$
|w_2\rangle = \lambda |w_1\rangle + |u_2\rangle.
$$
Now we wish that $\langle w_1 | w_2\rangle =0$ (since $|w_1\rangle$ and $|w_2\rangle$ must be orthogonal). Multiply the above equality by $\langle w_1|$:
$$
0=\langle w_1|w_2\rangle = \lambda \langle w_1|w_1\rangle + \langle w_1|u_2\rangle.
$$
Thus, $\lambda=-\frac{\langle w_1|u_2\rangle}{\langle w_1|w_1\rangle}$ ($\langle w_1|w_1\rangle \neq 0$ since $|w_1\rangle \neq 0$) and the formula follows.
Finally, suppose $|w_3\rangle=\lambda_1 |w_1\rangle + \lambda_2 |w_2\rangle + |u_3\rangle$. Multiply it by $\langle w_1|$ to find $\lambda_1$ and $\langle w_2|$ to find $\lambda_2$ (remember that we want $\langle w_1 | w_3\rangle =0$ and $\langle w_2 | w_3\rangle =0$).
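The same procedure in code (classical unnormalized Gram-Schmidt, with the inner product conjugate-linear in the first slot to match the bra-ket convention; the sample vectors are arbitrary):

```python
import numpy as np

def gram_schmidt(us):
    """Orthogonalize a list of linearly independent vectors |u_i>."""
    ws = []
    for u in us:
        w = u.astype(complex).copy()
        for prev in ws:
            # subtract the component along |prev>: (<prev|u>/<prev|prev>) |prev>
            w -= np.vdot(prev, u) / np.vdot(prev, prev) * prev
        ws.append(w)
    return ws

us = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
ws = gram_schmidt(us)
for i in range(3):
    for j in range(i):
        assert abs(np.vdot(ws[i], ws[j])) < 1e-12   # mutually orthogonal
```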
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The existence of inequalities between the sum of a sequence and the sum of its members Let $(a_n)_{n=m}^\infty$ be a sequence of positive real numbers. Let $I$ denote some finite subset of $M := \{m, m+1, \cdots \}$, i.e., $I$ is the index of some points of $(a_n)_{n=m}^\infty$.
Does there exist a real number $r$ such that for any valid $I$, $\sum_{n=m}^\infty a_n > r$ and $\sum_{i \in I}a_i \leq r$?
It seems that if $\sum_{n=m}^\infty a_n$ is finite (denote it by $S$), then such an $r$ would have to be of the form $S - \epsilon$ where $\epsilon$ is a positive real number. Suppose that for any $\epsilon > 0$ there is some finite $I$ with $\sum_{i \in I} a_i > S - \epsilon$; this would mean the family $(\sum_{i \in I}a_i)_{I \in 2^M}$ has supremum $S$. How to proceed from there?
Thanks a lot!
| No. Since each $a_n$ is positive you have $$\sum_{n=m}^\infty a_n = \sup_I \sum_{i \in I} a_i$$ where the supremum is taken over all finite subsets $I$ of $M$. This is a consequence of the monotone convergence theorem for series.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How find the integral $I=\int_{-R}^{R}\frac{\sqrt{R^2-x^2}}{(a-x)\sqrt{R^2+a^2-2ax}}dx$ Find the integral:
$$I=\int_{-R}^{R}\dfrac{\sqrt{R^2-x^2}}{(a-x)\sqrt{R^2+a^2-2ax}}\;\mathrm dx$$
My try:
Let $x=R\sin{t},\;t\in\left[-\dfrac{\pi}{2},\dfrac{\pi}{2}\right]$
then,
$$I=\int_{-\pi/2}^{\pi/2}\dfrac{R\cos{t}}{(a-R\sin{t})\sqrt{R^2+a^2-2aR\sin{t}}}\cdot R\cos{t}\;\mathrm dt$$
so,
$$I=R^2\int_{-\pi/2}^{\pi/2}\dfrac{\cos^2{t}}{(a-R\sin{t})\sqrt{R^2+a^2-2aR\sin{t}}}\;\mathrm dt$$
Maybe the following can use the Gamma function? But I can't find it. Thank you to anyone who can help me.
| The second line of the OP is
$$I=\int_{-\pi/2}^{\pi/2} \frac{\cos^2 t}
{(\alpha -\sin t)\sqrt{1+\alpha^2-2\alpha \sin t}}dt$$
$$=\int_{-\pi/2}^{\pi/2} \frac{1-\sin^2 t}
{(\alpha -\sin t)\sqrt{1+\alpha^2-2\alpha \sin t}}dt$$
where $\alpha\equiv a/R$.
Partial fraction decomposition yields
$$=\int_{-\pi/2}^{\pi/2} [\sin t +\alpha+\frac{1-\alpha^2}{\alpha -\sin t}]\frac{1}
{\sqrt{1+\alpha^2-2\alpha \sin t}}dt.$$
Then the three integrals
$$\int \frac{1}{\sqrt{a+b\sin x}}dx,$$
$$\int \frac{\sin x}{\sqrt{a+b\sin x}}dx,$$
and
$$\int \frac{1}{(2-p^2+p^2\sin x)\sqrt{a+b\sin x}}dx$$
are tabulated in terms of Elliptic Integrals as 2.571.1, 2.571.2 and 2.574.1
in the Gradshteyn-Ryzhik tables of integrals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Compare this topology with the usual topology I have to compare the following topology with the usual one. Which of them is finer?
$\tau= \{U\subseteq \mathbb{R}^2:$ for any $(a,b) \in U$ exists $\epsilon >0 $ where $[a,a+\epsilon] \times [b-\epsilon, b+\epsilon]\subseteq U\}$
By definition, $\tau\subseteq\tau_u $ if and only if for every $U\in \tau$ implies $U\in \tau_u$
However, how can I compare them using open basis?
THANK YOU!
| 1. If $U \in \tau_u$, then for any $(a,b) \in U$, there is a basic open set
$$
(a-\delta, a+\delta)\times (b-\delta', b+\delta') \subset U
$$
you can take
$$
\epsilon = \min\{\delta/2, \delta'/2\}
$$
then
$$
[a,a+\epsilon]\times [b-\epsilon,b+\epsilon] \subset U
$$
Hence $U \in \tau$. So
$$
\tau_u \subset \tau
$$
2. The set
$$
[0,1)\times (-1,1) \in \tau\setminus \tau_u
$$
Do you see why?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
prove that quadrangle is isosceles trapezoid How to prove that quadrangle $ABCD$ is a isosceles trapezoid?
where $AB$ is parallel to $CD$
| You are given two facts:
1. A quadrilateral ABCD is inscribed in a circle.
2. AB is parallel to CD
Because of the parallel lines, qualrilateral ABCD is, by definition, a trapezoid.
To prove that trapezoid ABCD is isosceles, you need to show that the non-parallel sides BC and AD have equal lengths. This can be accomplished as follows. Erase the lines that go to the circle's center. Draw a single line from point A to point C. Finish the proof by using these three theorems:
1. If parallel lines are cut by a transversal,
then the alternate interior angles measure the same.
2. In a circle, the arc subtended by an inscribed angle has
twice the measure of the inscribed angle.
3. In a circle, if two minor arcs are equal in measure,
then their corresponding chords are equal in length.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
how to find center/radius of a sphere Say you have an irregular tetrahedron, but you know the (x,y,z) coordinates of the four vertices; is there a simple formula for finding a sphere whose center exists within the tetrahedron formed by the four points and on whose surface the four points lie?
| Given four points, a, b, c, and d, you can find the center by setting the following determinant to zero and solving it:
$$
\begin{vmatrix}
(x^2 + y^2 + z^2) & x & y & z & 1 \\
(ax^2 + ay^2 + az^2) & ax & ay & az & 1 \\
(bx^2 + by^2 + bz^2) & bx & by & bz & 1 \\
(cx^2 + cy^2 + cz^2) & cx & cy & cz & 1 \\
(dx^2 + dy^2 + dz^2) & dx & dy & dz & 1 \\
\end{vmatrix}
= 0
$$
The math is gnarly, but the following C++ code implements the solution:
#include <cmath>   // for sqrt

class Point {
public:
double x;
double y;
double z;
Point() { x = 0; y = 0; z = 0; }
Point(double x_, double y_, double z_) { x = x_; y = y_; z = z_; }
};
class Sphere {
public:
Point center;
double radius;
Sphere(Point center_, double radius_) {
center = Point(center_.x, center_.y, center_.z);
radius = radius_;
}
};
Sphere
sphereFromFourPoints(Point a, Point b, Point c, Point d)
{
// Helper macros expand cofactors of the 5x5 determinant above.
#define U(a,b,c,d,e,f,g,h) (a.z - b.z)*(c.x*d.y - d.x*c.y) - (e.z - f.z)*(g.x*h.y - h.x*g.y)
#define D(x,y,a,b,c) (a.x*(b.y-c.y) + b.x*(c.y-a.y) + c.x*(a.y-b.y))
#define E(x,y) ((ra*D(x,y,b,c,d) - rb*D(x,y,c,d,a) + rc*D(x,y,d,a,b) - rd*D(x,y,a,b,c)) / uvw)
double u = U(a,b,c,d,b,c,d,a);
double v = U(c,d,a,b,d,a,b,c);
double w = U(a,c,d,b,b,d,a,c);
double uvw = 2 * (u + v + w);
if (uvw == 0.0) {
// Oops. The points are coplanar.
}
auto sq = [] (Point p) { return p.x*p.x + p.y*p.y + p.z*p.z; };
double ra = sq(a);
double rb = sq(b);
double rc = sq(c);
double rd = sq(d);
double x0 = E(y,z);
double y0 = E(z,x);
double z0 = E(x,y);
double radius = sqrt(sq(Point(a.x - x0, a.y - y0, a.z - z0)));
return Sphere(Point(x0, y0, z0), radius);
}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Dense subspace of linear functionals We know that any nonnegative real valued measurable function can be approximated by an increasing sequence of simple functions. So, the integral of a real valued measurable function can be written as a limit of integrals of simple functions. We can observe that the integral of a simple function is just a linear combination of projection maps. I was wondering if this procedure could be done for any linear functional. To be precise, my question is as follows:
Let $X$ be a set and let $L$ be the space of all real valued functions and equip it with uniform norm. Can any linear functional on $L$ be written as a limit of linear combination of projection maps?
| Note that you need to require your functions to be bounded for the norm to make sense. So $L=\ell^\infty(X)$.
Regarding your question, let us take $X=\mathbb N$. It is well-known that there are functionals that annihilate all finitely supported functions. Concretely, you take any free ultrafilter $\omega\in\beta\mathbb N\setminus \mathbb N$ and define a functional
$$
\varphi:f\mapsto \bar f(\omega),
$$
where $\bar f$ is the extension of $f$ to $\beta\mathbb N$. For any $g\in\ell^\infty(\mathbb N)$ with $g(n)=0$, $n\geq m$ for some $m$, we will have $\varphi(g)=\lim_{n\to\infty}g(n)=0$.
Now let $\varphi_0$ be a "linear combination of projection maps" as you say:
$$
\varphi_0(f)=\sum_{j=1}^hc_jf(n_j)
$$
for some fixed $n_1,\ldots,n_h\in\mathbb N$ and $c_1,\ldots,c_h\in\mathbb C$. Now define a function
$$
f(n)=\begin{cases}0,&\mbox{ if }n\in\{n_1,\ldots,n_h\}\\ 1,&\mbox{ otherwise} \end{cases}.
$$
Then $\|f\|=1$, and
$$
|\varphi(f)-\varphi_0(f)|=|1-0|=1.
$$
In other words, $\|\varphi-\varphi_0\|\geq1$, i.e. the distance from $\varphi$ to any linear combination of projection maps is at least $1$.
*See this answer for some more information on free-ultrafilters in this context.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/535993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
An interesting (unknown) property of prime numbers. I don't know if this is the right place to ask this question. Please excuse my ignorance if it is not.
I like to play with integers. I have been doing this since my childhood. I spend a lot of time looking up new integer sequences on OEIS. Last week I stumbled upon a unique property of prime numbers. I have been searching the internet since then to find if there are any papers talking about this property and haven't found any. I want to publish it. I am not a mathematician, but I am an engineer and I can write a decent paper to clearly express the property. I haven't attempted to prove this property. It is just an observation that I verified up to the largest known prime number (http://primes.utm.edu/). It holds!
The problem is I do not want to divulge this property for fear of being denied credit to its discovery. How should I disseminate this finding?
Again, I apologize if this is not the right place for this question.
| I believe that posting your observation here will clarify that the observation is yours.
Also as Bill Cook mentions http://arxiv.org/ would be the best place to post your discovery.
But if you don't tell us what the ''unknown property of prime numbers'' is, you will never know if you are a genius or someone who has illusions.
Share your thoughts! I really want to see something that might be new and (as you say) interesting!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Proof that $\int_0^{2\pi}\sin nx\,dx=\int_0^{2\pi}\cos nx\,dx=0$
Prove that $\int_0^{2\pi}\sin nx\,dx=\int_0^{2\pi}\cos nx\,dx=0$ for all integers $n \neq 0$.
I think I'm encouraged to prove this by induction (but a simpler method would probably work, too). Here's what I've attempted: $$\text{1.}\int_0^{2\pi}\sin x\,dx=\int_0^{2\pi}\cos x\,dx=0.\;\checkmark\\\text{2. Assume}\int_0^{2\pi}\sin nx\,dx=\int_0^{2\pi}\cos nx\,dx=0.\;\checkmark\\\text{3. Prove}\int_0^{2\pi}\sin (nx+x)\,dx=\int_0^{2\pi}\cos (nx+x)\,dx=0.\\\text{[From here, I'm lost. I've tried applying a trig identity, but I'm not sure how to proceed.]}\\\text{For the}\sin\text{integral},\int_0^{2\pi}\sin (nx+x)\,dx=\int_0^{2\pi}\sin nx\cos x\,dx+\int_0^{2\pi}\cos nx\sin x\,dx.$$
I hope I'm on the right track. In the last step, I have $\sin nx$ and $\cos nx$ in the integrals, but I'm not sure if that helps me. I would appreciate any help with this. Thanks :)
As I indicated above, it'd be great to find a way to complete this induction proof—probably by, as Arkamis said, "working it like the transcontinental railroad" with trig identities (if that's possible). I think my instructor discouraged a simple $u$-substitution, because we've recently been focused on manipulating trig identities.
| I do not know if you can use $e^{ix}=\cos x+i\sin x$ but here is one solution:
$$
\int_0^{2\pi}e^{inx}\,dx=\frac 1{in}e^{inx}\left|_0^{2\pi}\right.=0
$$
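Taking real and imaginary parts of this identity recovers both real integrals at once (a small elaboration of the answer, added here):

```latex
0=\int_0^{2\pi}e^{inx}\,dx
 =\int_0^{2\pi}\cos nx\,dx \;+\; i\int_0^{2\pi}\sin nx\,dx,
\qquad n\neq 0;
% both integrals on the right are real numbers, so each must vanish
% separately for their combination to equal zero.
```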
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 1
} |
Proof of Aristarchus' Inequality Does anyone know how to prove that if $0<\alpha<\beta<\frac{\pi}{2}$ then $\frac{\sin\alpha}{\alpha}>\frac{\sin\beta}{\beta}$. Any methods/techniques may be used.
| The function $f(x)=\frac{\sin(x)}{x}$ is decreasing on $[0,\pi/2]$. Since its derivative is
$$
f^{'}(x)=\frac{\cos(x)x-\sin(x)}{x^2},
$$
we've reduced the problem to seeing that $\cos(x)x-\sin(x)\leq 0$. For small values of $x$, we have $\sin(x)\approx x$, so $\cos(x)x-\sin(x)\approx x(\cos(x)-1)$, which is negative. To see that it remains negative for all $x$ in $[0,\pi/2]$, you can check, for example via the bisection method, that the first positive solution to $\cos(x)x-\sin(x)=0$ is around $4.5>\pi/2$.
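If one prefers to avoid the numerical root-finding step, the sign of the numerator can also be settled directly; a sketch (my own addition):

```latex
g(x)=x\cos x-\sin x,\qquad g(0)=0,\\
g'(x)=\cos x - x\sin x-\cos x=-x\sin x\le 0
\quad\text{on }[0,\pi/2],\\
\text{so } g(x)\le g(0)=0 \text{ there, hence } f'(x)=g(x)/x^2\le 0 .
```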
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
} |
Pullback of a Volume Form Under a Diffeomorphism. I have an exercise here, which I have no idea how to do.
Problem: Let $ U $ and $ V $ be open sets in $ \mathbb{R}^{n} $ and $ f: U \to V $ an orientation-preserving diffeomorphism. Then show that
$$
{f^{*}}(\text{vol}_{V})
= \sqrt{\det \! \left( \left[
\left\langle {\partial_{i} f}(\bullet),{\partial_{j} f}(\bullet) \right\rangle
\right]_{i,j = 1}^{n} \right)} \cdot
\text{vol}_{U}.
$$
Notation:
*
*$ \text{vol}_{U} $ and $ \text{vol}_{V} $ denote the volume forms on $ U $ and $ V $ respectively.
*$ f^{*}: {\Lambda^{n}}(T^{*} V) \to {\Lambda^{n}}(T^{*} U) $ denotes the pullback operation on differential $ n $-forms corresponding to $ f $.
| I don't have enough reputation for a comment (my original account was wiped out), but this is my impression: I think this is just the change-of-basis theorem/result; if $U,V$ are open balls, a diffeomorphism is basically a coordinate change map, and so it transforms according to the (determinant of the) Jacobian.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the relationship between Fourier transformation and Fourier series? Is there any connection between the Fourier transform of a function and the Fourier series of the function? I only know the formula for finding the Fourier transform and for finding the Fourier coefficients of the corresponding Fourier series.
| Given a locally compact abelian group $G$, one can define the character group of $G$ as the group of continuous homomorphisms $G \to S^1$. (It should actually land in $\mathbf C^\times$, but for the purpose at hand, this is good enough.)
The character group of the circle $S^1$ is isomorphic with $\mathbf Z$ (the characters are $\chi_n : \theta \mapsto e^{2\pi i n\theta}$).
On the other hand, the character group of $\mathbf R$ is isomorphic with $\mathbf R$ itself. The characters are $\chi_t : s \mapsto e^{2\pi i st}$.
It is a general principle that the characters of a locally compact group form a "basis" for the space of "nice-enough" functions on the group. Thus, periodic functions (i.e. functions on the circle) have a decomposition as sums of $e^{2\pi i n\theta}$ (Fourier series), whereas functions on $\mathbf R$ have a decomposition as Fourier integrals (inverse Fourier transform of their Fourier transform).
I'm sweeping hundreds of years of analysis under the rug (not that I know all of it), but this is the general idea.
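Concretely, the two decompositions the answer alludes to are the following standard formulas (stated here for reference, in the $2\pi$-in-the-exponent convention matching the characters above):

```latex
\text{Circle } (G=S^1):\quad
f(\theta)=\sum_{n\in\mathbf Z} c_n e^{2\pi i n\theta},\qquad
c_n=\int_0^1 f(\theta)\,e^{-2\pi i n\theta}\,d\theta,\\[4pt]
\text{Line } (G=\mathbf R):\quad
f(x)=\int_{\mathbf R}\hat f(t)\,e^{2\pi i t x}\,dt,\qquad
\hat f(t)=\int_{\mathbf R} f(x)\,e^{-2\pi i t x}\,dx .
% The discrete sum over \mathbf Z and the integral over \mathbf R reflect
% the two character groups described in the answer.
```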
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
Group covered by finitely many cosets This question appears in my textbook's exercises, who can help me prove it?
If a group $G$ is the set-theoretic union of finitely-many cosets, $$G=x_1S_1\cup\cdots\cup x_nS_n$$ prove that at least one of the subgroups $S_i$ has finite index in $G$.
I think that the intersection of these cosets is either empty or a coset of the intersection of all the $S_i$. I want to start from this point to prove it. So I suppose none of these $S_i$ has finite index, but I don't know how to continue?
| For completeness, the Neumann proof is roughly as follows.
Let $r$ be the number of distinct subgroups in $S_1,S_2,\dots,S_n$. We will prove by induction on $r$.
If $r=1$, then $G=\bigcup_{i=1}^n x_iS_1$ and $S_1$ thus has finite index in $G$.
Now assume true for $r-1$ distinct groups, and assume $S_1,\dots,S_n$ has $r>1$ distinct groups, with $S_{m+1}=\cdots=S_n$ the same, and $S_i\neq S_n$ for $i\leq m$.
If $G=\bigcup_{i=m+1}^n x_iS_i$ then $S_n$ has finite index, from the $r=1$ case.
So assume $h\not\in \bigcup_{i=m+1}^n x_iS_i$. Then $hS_n\cap \left(\bigcup_{i=m+1}^n x_iS_i\right) = \emptyset$, since $hS_n$ is a distinct coset of $S_n$.
So $$hS_n \subseteq \bigcup_{i=1}^m x_iS_i$$ So for any $x\in G$, $$xS_n=xh^{-1}hS_n\subseteq \bigcup_{i=1}^m xh^{-1}x_iS_i$$
So we can replace $x_iS_i=x_iS_n$ for $i>m$ with a finite number of cosets of $S_1,\dots,S_m$. And now we have $G$ as a finite union of cosets of the $r-1$ distinct subgroups in $S_1,\dots,S_m$. And, by induction, one of those must have finite index in $G$.
(The harder result, that some $S_i$ has finite index smaller than $n$, does not follow from this argument, since we might increase the number of terms when we do the reduction from $r$ to $r-1$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Do Hyperreal numbers include infinitesimals? According to definition of Hyperreal numbers
The hyperreals, or nonstandard reals, *R, are an extension of the real numbers R that contains numbers greater than anything of the form
1 + 1 + 1 + ...... + 1.[this definition has been extracted from wiki encyclopedia Hyperreal number]
According to the above statement, hyperreal numbers include only infinite numbers and don't include their reciprocals, the infinitesimals (correct me if I am wrong in saying this).
I have come across another statement mentioned in the same wiki encyclopedia which states that
The idea of the hyperreal system is to extend the real numbers R to form a system *R that includes infinitesimal and infinite numbers, but without changing any of the elementary axioms of algebra.[this statement has been extracted from wiki encyclopedia Hyperreal number/The transfer principle]
According to the above statement, hyperreal numbers include both infinitesimal and infinite numbers, which contradicts the definition of hyperreal numbers. So, what does it mean? Do hyperreal numbers include infinitesimals?
| Yes, they do: if $x\in{}^*\Bbb R$ is greater than any ordinary integer, then $\frac1x$ is necessarily a positive infinitesimal. There is no contradiction: the first statement doesn’t mention the infinitesimals explicitly, but the very next sentence does:
Such a number is infinite, and its reciprocal is infinitesimal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that $\frac{(m+n)!}{m!n!}$ is an integer whenever $m$ and $n$ are positive integers using Legendre's Theorem Show that $\frac{(m+n)!}{m!n!}$ is an integer whenever $m$ and $n$ are positive integers using Legendre's Theorem.
Hi everyone, I've seen similar questions on this forum and none of them really talked about how to apply Legendre's theorem to questions like the one above.
I get that $\frac{(m+n)!}{m!n!}$ = $\binom{m+n}{m}$, which is an integer. But could someone explain how the Legendre proof works in this case and why it proves the above is true?
| For a prime $p$, denote by $v_p(r)$ the exponent of $p$ in the prime factorization of $r$. So for example, $v_2(12) = 2$. Legendre's theorem states that for any prime $p$ and integer $n$, we have
$$v_p(n!) = \sum_{i = 1}^\infty \left\lfloor \frac{n}{p^i} \right\rfloor $$
Note that $v_p(rs) = v_p(r) + v_p(s)$. Also $r$ divides $s$ if and only if $v_p(r) \leq v_p(s)$ for all primes $p$.
The fraction $\frac{(m+n)!}{m!n!}$ is an integer if and only if $m!n!$ divides $(m+n)!$. So to prove your statement, it suffices to prove $v_p(n!) + v_p(m!) \leq v_p((n+m)!)$. Using Legendre's theorem, this follows from $\lfloor x \rfloor + \lfloor y \rfloor \leq \lfloor x + y \rfloor$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Fermat's Little Theorem and polynomials We know that in $F_p[y]$, $y^p-y=y(y-1)(y-2)\cdots (y-(p-1))$. Let $g(y)\in F_p[y]$. Why is it valid to set $y=g(y)$ in the above equation to obtain $g(y)^p-g(y)=g(y)(g(y)-1)\cdots (g(y)-(p-1))$. This is done in Theorem 1 of Chapter 22 of A Concrete Introduction to Higher algebra by Lindsay Childs.
| The identity
$$
z^p-z=z(z-1)(z-2)\cdots (z-p+1)
$$
holds for all elements $z$ of any commutative ring $R$ of characteristic $p$.
This follows from the corresponding identity in the polynomial ring by the universal property of univariate polynomial rings.
In this example the selections $R=F_p[y]$, $z=g(y)$ were made.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How often in years do calendars repeat with the same day-date combinations (Julian calendar)? E.g. I'm using these formulas for calculating the day of the week (Julian calendar):
\begin{align}
a & = \left\lfloor\frac{14 - \text{month}}{12}\right\rfloor\\
y & = \text{year} + 4800 - a \\
m & = \text{month} + 12a - 3
\end{align}
\begin{align}
J\!D\!N =
(\text{day} +
\left\lfloor\frac{153m+2}{5}\right\rfloor +
365y+
\left\lfloor\frac{y}{4}\right\rfloor -
\left\lfloor\frac{y}{100}\right\rfloor +
\left\lfloor\frac{y}{400}\right\rfloor -
32045)\text{mod}\text{7}
\end{align}
And I know that in Julian calendar, leap year is a year which:
\begin{align}
\text{year}\:mod\:4=0
\end{align}
Using simple python program, I can solve this problem very fast. And answer (if we starting for leap year) is 28 year cycle.
But how to correctly prove this hypothesis, using only math equations? Is it possible?
| I can directly say 28 years. It's because 7 is a prime number, and thus relatively prime with every natural number smaller than itself. How does this information help us solve the problem? This way:
You know that $365 \equiv 1 (mod\, 7)$ and $366 \equiv 2 (mod\, 7)$. As 1 year in each $4$ years contains $366$ days, in every $4$ years we have a $5$ day iteration in days. So when does the loop go back to the beginning? We have to find the Least Common Multiple of $5$ and $7$, to find the number of days iterated. $lcm[5,7]=35$. That is, when the loop was over, we were back to the beginning. In how many years does a $35$ day iteration happen? $35*4/5=28$ years.
Edit: I've noticed that I did not mention how to use the info at the beginning. As 7 is relatively prime with every number smaller than itself, the answer would always* be the product of 7 and the loop length of years for different day content (Sorry for my English, that was a complicated way to explain. $4$ for this situation, as in every 4 years we have $1$ extra day. That is, any 2 years that have $4k$ years difference have the same number of days).
Anyway, in any case we're gonna look at the length of the year loop (let's say $l$ years) and the days of iteration (let's say $i$ days). Then find $lcm[7,i]$, then multiply it by $l/i$. As $7$ is prime, if $i<7$, $lcm[7,i]=7*i$. Then what we're looking for is actually $7*i*l/i = 7*l$.
*Except $6$ years of loop. If it was $6$, then we would be looking for $lcm[7,7]$ which is $7$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
Countable basis of function spaces Show that the space of functions $f: \mathbb{N} \to \mathbb{R}$ does not have a countable basis.
I really don't know where to start with this one! Could anyone help me?
Thanks
| Try this. Since $\mathbb N$ and $\mathbb Q$ have the same cardinality, it is the same question to ask whether the set of functions $f : \mathbb Q \to \mathbb R$ has a countable basis. For $x \in \mathbb R$, define $g_x : \mathbb Q \to \mathbb R$, by
$$
g_x(s)=1\quad\text{if }s<x,\qquad g_x(s)=0\quad\text{if }s\ge x
$$
for all $s \in \mathbb Q$. Then the set $\{g_x : x \in \mathbb R\}$ is linearly independent and uncountable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/536956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
''min $c^tx$ subject to $x^tAx=1$'': is it possible to solve it with Lagrange multipliers or in the scope of KKT? I found a problem:
Minimize $c^tx$ subject to $x^tAx=1$, where $A$ is a positive semidefinite symmetric matrix.
But the question requires using KKT, while I am trying to apply a simple Lagrange multiplier solution and cannot cope with it.
Is this question solvable via Lagrange multipliers, or is this method only available for linear equality constraints?
| If $A$ is a positive definite matrix then you can use usual Lagrangian methods but if $A$ is only positive semi-definite matrix then you have some constraint on the $x_s$ that is missing from your original question. For example, if $A=\left(\begin{array}{cc}0 & 0 \\0 & 1\end{array}\right)$ and $c=(-1,2)$ then there is no bounded solution. The "solution" in this example is $x_1=-\infty$ and $x_2=1$.
In what follows I assume $A$ is a positive definite matrix.
The first-order condition for the optimization problem give us:
$$c - \lambda\,\left(A^t+A\right)x=0 \Rightarrow x=\dfrac{\left(A^t+A\right)^{-1}c}{\lambda}\Rightarrow x^tAx=\dfrac{c^t \left(\left(A^t+A\right)^{-1}\right)^tA\left(A^t+A\right)^{-1}c}{\lambda^2}=\dfrac{c^t \left(A^t+A\right)^{-1}A\left(A^t+A\right)^{-1}c}{\lambda^2}=1$$
where $\left(A^t+A\right)^{-1}$ is the inverse matrix of $\left(A^t+A\right)$ (notice that the existence of $\left(A^t+A\right)^{-1}$ is not guaranteed if $A$ were to be only semi definite). So $\lambda=\sqrt{c^t \left(A^t+A\right)^{-1}A\left(A^t+A\right)^{-1}c}$ and we finally get: $$ \boxed{x=\dfrac{\left(A^t+A\right)^{-1}c}{\sqrt{c^t \left(A^t+A\right)^{-1}A\left(A^t+A\right)^{-1}c}}}.$$
Suggestion: don't get intimidated with vector calculus. It always helps to do simpler versions of the problem in $\mathbb{R}^2$ to check you are on the right track.
Side work: $\nabla_x\left( c^t x\right) = c$ so $\nabla_x\left( y^tAx\right)= A^t y$ and $\nabla_y\left( y^tAx\right)= Ax$.
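As a sanity check of the boxed formula (my own check, using the simplest positive definite case $A=I$):

```latex
A=I:\quad (A^t+A)^{-1}=\tfrac12 I,\qquad
x=\frac{\tfrac12 c}{\sqrt{c^t\,\tfrac12 I\,\tfrac12\, c}}
 =\frac{\tfrac12 c}{\tfrac12\|c\|}=\frac{c}{\|c\|}.
% The first-order conditions also admit \lambda<0, giving x=-c/\|c\|;
% the minimizer is x=-c/\|c\| with value c^t x=-\|c\|, the maximizer +\|c\|,
% matching the geometric picture of minimizing a linear function on a sphere.
```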
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$A \oplus B = A \oplus C$ imply $B = C$? I don't quite yet understand how $\oplus$ (xor) works. I know that fundamentally, in terms of truth tables, it means only one value ($p$ or $q$) can be true, but not both.
But when it comes to solving problems with them or proving equalities I have no idea how to use $\oplus$.
For example: I'm trying to do a problem in which I have to prove or disprove with a counterexample whether or not $A \oplus B = A \oplus C$ implies $B = C$ is true.
I know that the venn diagram of $\oplus$ in this case includes the regions of A and B excluding the areas they overlap. And similarly it includes regions of A and C but not the areas they overlap. It would look something like this:
I feel the statement above would be true just by looking at the venn diagram since the area ABC is included in the $\oplus$, but I'm not sure if that's an adequate enough proof.
On the other hand, I could be completely wrong about my reasoning.
Also just for clarity's sake: would $A\cup B = A \cup C$ and $A \cap B = A \cap C$ be proven in a similar way to show whether or not the conditions imply $B = C$? A counterexample/proof of this would be appreciated as well.
| Hint: $A\oplus(A\oplus B)=(A\oplus A)\oplus B = B$.
And of course $A\cup B=A\cup C$ does not imply $B=C$ (consider the case $B=A\ne \emptyset = C$). And $A\cap B=A\cap C$ does not imply $B=C$ either (consider the case $A=\emptyset$)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
Why does knowing where two adjacent vertices of regular $n$-gon move under rigid motion determines the motion? I am reading the book Abstract Algebra by Dummit and Foote.
In the section about the group $D_{2n}$ (of order $2n$) the authors claim that knowing where two adjacent vertices move to completely determines the entire motion.
Based on this, the book gets the conclusion that $|D_{2n}|\leq2n$ and, by showing the existence of $2n$ different motions, it concludes $|D_{2n}|=2n$.
My question is about the claim made: why does knowing where two adjacent vertices of a regular $n$-gon move to completely determine the entire motion?
I am looking for a convincing argument, not necessarily a formal proof; I just want to get the idea.
| An Euclidean movement is determined by the effect on three (noncollinear) points. If $A\mapsto A'$, $B\mapsto B'$, $C\mapsto C'$, we can consider the movement composed of
*
*the translation along $\vec{A'A}$
*the rotation about $A$ that maps the translated image of $B'$ to $B$
*the reflection at $AB$ if necessary for the third point
Performing these movements after the given movement leaves $A,B,C$ fixed in the end and the only movement that leaves a (nondegenerate) triangle fixed is the identity.
In the case of a regular $n$-gon being mapped to itself, the center must remain fixed(!) Together with two vertices we thus know the effect of the movement on three noncollinear points, as required.
Alternatively, if you feel uncomfortable about the center being fixed: if we know where a vertex $A$ and a neighbour(!) vertex $B$ go, we also know that "the other" neighbour $C$ (assuming the polygon has more than two vertices) of $A$ must be mapped to "the other" neighbour of the image of $A$
"language": "en",
"url": "https://math.stackexchange.com/questions/537244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof of $10^{n+1} -9n -10 \equiv 0 \pmod {81}$ I am trying to prove that $10^{n+1} -9n -10 \equiv 0 \pmod {81}$. I think that decomposing into 9 and then 9 again is the way to go, but I just cannot get there. Any help is greatly appreciated.
Edit: I originally posted this as $9^n$, not $9n$. Apologies.
| Without anything more thatn the geometric sum formula:
It's certainly true mod $9$, since it is easy to reduce the left side by $9$:
$$1^{n+1}-0-1\equiv{0}\mod{9}$$
Now dividing the left side by $9$ gives $$10\frac{10^n-1}{9}-n$$ which is $$10\frac{(10^{n-1}+10^{n-2}+\cdots+10^1+1)(10-1)}{9}-n$$ which is $$10(10^{n-1}+10^{n-2}+\cdots+10^1+1)-n$$ Reducing mod $9$ gives $$1\cdot\left(\overbrace{1+1+\cdots+1}^{n\text{ copies}}\right)-n\equiv0\mod{9}$$
So the original left hand side is divisible by $9$, twice. Thus it's divisible by $81$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Reduction of Order: $t^{2}y''+3ty'+y=0$, $\quad t>0$; $\quad y_{1}(t)=t^{-1}$ I am working an exercise from Elementary Differential Equations and Boundary Value Problems, Ninth Edition, by Boyce and DiPrima, and I think there is a mistake/typo, on page 173, Section 3.4, exercise 25.
The book is correct: I dropped the minus sign on the first integral (as pointed out by user40615 below); fixing that leads to the correct answer. Also, as Artem pointed out, my Mathematica code had a typo!
The question is as follows: In each of the following problems 23 through 30, use the method of reduction of order to find a second solution to the given differential equation.
(25). $t^{2}y''+3ty'+y=0$, $\quad t>0$; $\quad y_{1}(t)=t^{-1}$ .
The solution in the back of the book is $y_2(t)=t^{-1}\ln t$. I used Mathematica to check the solution and it gives:
Which translates to $y(t)\to c_{1}e^{1/t}\left(1-\frac{1}{t}\right)+c_{2}$ in $\LaTeX$.
I am not quite sure how to interpret this answer since there is no function attached to $c_2$; does this mean that $c_{1}e^{1/t}\left(1-\frac{1}{t}\right)$ is the only solution, plus or minus a constant?
All of my written work is below, did I make a mistake?
Let $y_{2}=v/t$, and we calculate its derivatives (I was using Euler's notation):
$$D_{t}\left[\frac{v}{t}\right]=\frac{v't-v}{t^{2}}$$
and \begin{align*}D_{t}^{2}\left[\frac{v}{t}\right] &= D_{t}\left[\frac{v't-v}{t^{2}}\right]\\
&= \frac{t^{2}v''-2tv'+2v}{t^{3}}\end{align*}
Now, we plug $y_{2}$ and its derivatives back into the equation. We get
\begin{align*}
t^{2}y''+3ty'+y &= 0\\
t^{2}\left[\frac{t^{2}v''-2tv'+2v}{t^{3}}\right]+3t\left[\frac{v't-v}{t^{2}}\right]+v/t &= 0\\
\left[\frac{t^{4}v''-2t^{3}v'+2t^{2}v}{t^{3}}\right]+\left[\frac{3v't^{2}-3vt}{t^{2}}\right]+v/t &= 0\\
tv''-2v'+2\frac{v}{t}+3v'-3\frac{v}{t}+\frac{v}{t} &= 0\\
tv''-2v'+3v' &= 0\\
tv''+v' &= 0\\
\end{align*}
Again, $tv''+v'=0$ is a first order linear differential equation, which we can solve by separation of variables.
\begin{align*}
tv''+v' &= 0\\
v''+\frac{v'}{t} &= 0\\
\frac{dv'}{v'} &= -\frac{dt}{t}\\
\int\frac{dv'}{v'} &= \int\frac{dt}{t}\\
\ln v' &= \ln t+c\\
v' &= e^{c}t\\
\end{align*}
So, we have that $v'=ct$; if we integrate again we have that
\begin{align*}
\int v'dt &= c\int tdt\\
v &= ct^{2}+k\\
\end{align*}
Lastly, we multiply $v$ by $y_{1}$ to get
\begin{align*}
y_{2}(t) &= t^{-1}\left[ct^{2}+k\right]\\
&= ct+k/t\\
\end{align*}
In the end the last term will be combined with $c_{2}y_{1}$, and the constant $c$ will get combined with $c_{2}$, so we have that $y_{2}(t)=t$. But this cannot be true, because then $y_{2}$ wouldn't be twice differentiable.
| For the sake of simplicity it is better to write $y_2=vt^{-1}$ so that $y_2'= v't^{-1}-vt^{-2}$ and $y''_2= v''t^{-1}-2v't^{-2}+2vt^{-3}$. Then
$t^{2}y_2''+3ty_2'+y_2=tv''+v'=0$. This implies $(tv')'=0$. Integrating this we get $tv'=c_1$. Thus integrating once more we obtain $v=c_1\ln t+c_2$. Since we wish a particular solution which is linearly independent with $y_1=t^{-1}$, we can take $c_1=1$ and $c_2=0$ so that $v=\ln t$ and hence $y_2= \frac{\ln t}{t}$. In your solution you forgot the minus sign while integrating; with the correct sign, $v'$ will be $e^{c}t^{-1}$.
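One can double-check by substituting $y_2=t^{-1}\ln t$ back into the equation (my own verification, not part of the original answer):

```latex
y_2=\frac{\ln t}{t},\qquad
y_2'=\frac{1-\ln t}{t^2},\qquad
y_2''=\frac{2\ln t-3}{t^3},\\[4pt]
t^2y_2''+3ty_2'+y_2
=\frac{2\ln t-3}{t}+\frac{3(1-\ln t)}{t}+\frac{\ln t}{t}
=\frac{(2-3+1)\ln t+(-3+3)}{t}=0 .
```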
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Computing some differential forms using complex coordinates I was computing some things in the Poincaré disk $\mathbb{H}^2$ in complex coordinates and then I tried to show that $\sigma_r(z) = \frac{r^2}{z}$ is an isometry. However $d\sigma_r = \frac{-r^2}{z^2}dz$ and $g = \frac{4 dz\otimes d\overline{z}}{ (1 - z\overline{z})^2} $, then $d\sigma_r(\partial z) = \frac{-r^2}{z^2}$, therefore $g(d\sigma_r(\partial z), d\sigma_r(\partial z)) = ?$ I do not know how to compute it since there is no $\partial z$ or $\partial \overline{z}$ in $d\sigma_r(\partial z)$. In real coordinates it appears $\partial x$ and $\partial y$ however when I try using complex coordinates, $C^{\infty}(\mathbb{H}^2)$ changes.
Thanks in advance.
| $d\sigma_r = \frac{-r^2}{z^2}dz$ and $g = \frac{4 dz\otimes d\overline{z}}{ (1 - z\overline{z})^2} $, then $d\sigma_r(\partial z) = \frac{-r^2}{z^2} \partial z$, and not $d\sigma_r(\partial z) = \frac{-r^2}{z^2}$, therefore $g(d\sigma_r(\partial z), d\sigma_r(\partial z)) = g (\partial z, \partial z)$. The problem is that $\partial z$ is NOT dual to $dz$ in the sense that $dz (\partial z) \neq 1$: $dz$ is the differential of the identity, while $\partial z$ is just an abstract object living in the tangent bundle (as a section).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determining where the derivative of a function equals zero, will always produce absolute max/min values. T/F? Is this true of false and why?
I believe it is false since when the derivative equals zero, it produces local max/min.
Also endpoints will also give you absolute max/min thus you must check that.
thoughts?
| It is not true that where the derivative is zero the is a local extreme. What is true is the opposite: if there is a local extreme and the function is differentiable around the point, then the derivative is zero.
Think $f(x)=x^3$ at $x=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Efficient division using binary math I'm writing code for an FPGA and I need to divide a number by $1.024$. I could use floating and/or fixed point and instantiate a multiplier but I would like to see if I could do this multiplication more efficiently.
I noticed that $2^0$ + $2^-$$^6$ + $2^-$$^7$ = $1.0234375$ which is 99.95% of $1.024$; well within my tolerance requirement. It feels like there is some way I can take advantage of this fact to divide a number by $1.0234375$ (essentially $1.024$) without having to do costly multiplication but I'm stuck on where to go from here. I've seen similar types of things done by early game developers to speed up their calculations and this is essentially what I'm trying to accomplish here but instead of maximizing speed I want to minimize FPGA utilization.
| $$\frac{1}{1.024} = \frac{1024-24}{1024} = \left(\frac{1024 - 16 - 8}{1024}\right)$$
So to divide N,
$$ N*\left(\frac{1000}{1024}\right) = ((N << 10) - (N << 4) - (N << 3)) >> 10 $$
You need 2 adders. Shift operation will be free in FPGA as all are powers of 2. If you want to use fraction of the result, simply use lower 10 bits of the addition. For FPGA specific questions, you can always try electronics.SE
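A quick software model of this datapath (my addition; Python standing in for the HDL) confirms that the shifts and two subtractions compute $\lfloor 1000N/1024\rfloor\approx N/1.024$:

```python
def approx_div_1p024(n):
    # (N << 10) - (N << 4) - (N << 3) = 1024N - 16N - 8N = 1000N,
    # so the result is floor(1000 * N / 1024); the shifts are free wiring
    return ((n << 10) - (n << 4) - (n << 3)) >> 10
```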
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
The probability that a randomly chosen grain weighs less than the mean grain weight If Y has a log-normal distribution with parameters $\mu$ and $\sigma^2$, it can be shown that
$E(Y)=e^\frac{\mu + \sigma^2}{2}$ and
$V(Y)=e^{2\mu +\sigma^2}(e^{\sigma^2}-1)$.
The grains composing polycrystalline metals tend to have weights that follow a log-normal distribution. For a type of aluminum, grain weights have a log-normal distribution with $\mu=3$ and $\sigma^2=4$ (in units of $10^{-4}$g).
Part A: Find the mean and variance of the grain weights.
Answer: $E(Y)=e^\frac{19}{2}$ and $V(Y)=e^{22}(e^{16}-1)$
Part B: Find an interval in which at least 75% of the grain weights should lie.
Answer: (0, $e^\frac{19}{2}+2e^{11}(e^{16}-1)^{1/2}$)
Part C: Find the probability that a randomly chosen grain weighs less than the mean grain weight.
I am unsure what to do for this part. Can someone sketch the procedures for me?
As of 12/28/13 there still isn't an answer to this question.
| It can also be shown that for a log-normal distribution, the cumulative distribution function is
$$F(x)=\Phi \left( \frac{ \log x - \mu} {\sigma}\right).$$
Here, $\Phi$ is the cumulative distribution function of the standard normal. So the answer is
$$
\Phi \left(\frac{\frac{19}{2} -e^{19/2} }{e^{11}\sqrt{e^{16} -1}} \right),
$$
very close to 0.5, as you might expect.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Derivative: chain rule applied to $\cos(\pi x)$
What is the derivative of the function $f(x)= \cos(\pi x)$?
I found the derivative to be $f^{\prime}(x)= -\pi\sin(\pi x)$. Am I correct? Can you show me how to find the answer step by step?
This is a homework question:
What is $x$ equal to if $-\pi\sin(\pi x)=0$?
| To find the derivative of this function you need to use the chain rule.
Let $u = \pi x$, and we know $\frac{d}{du}(\cos(u)) = -\sin(u)$.
Then $$\frac{d}{dx}\cos(\pi x)=\frac{d\cos(u)}{du} \frac{du}{dx}$$
simplifies to your answer that you found.
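A numerical cross-check (my addition, assuming Python): a central difference of $\cos(\pi x)$ agrees with $-\pi\sin(\pi x)$, and for the follow-up question, $-\pi\sin(\pi x)=0$ exactly when $\pi x$ is a multiple of $\pi$, i.e. when $x$ is an integer.

```python
import math

def f(x):
    return math.cos(math.pi * x)

def fprime_exact(x):
    # chain rule: d/dx cos(pi*x) = -pi * sin(pi*x)
    return -math.pi * math.sin(math.pi * x)

def fprime_numeric(x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)
```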
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Inequality. $\frac{a}{\sqrt{b}}+\frac{b}{\sqrt{c}}+\frac{c}{\sqrt{a}}\geq3$ Let $a,b,c \in (0, \infty)$, with $a+b+c=3$. How can I prove that:
$$\frac{a}{\sqrt{b}}+\frac{b}{\sqrt{c}}+\frac{c}{\sqrt{a}}\geq3 ?$$.
I try to use Cauchy-Schwarz rewriting the inequality like :
$$\sum_{cyc}\frac{a\sqrt{b}}{b} \geq \frac{(\sum_{cyc}{\sqrt[4]{a^2b}})^2}{a+b+c}$$ but I don't obtain anything.
| With homogenization: We need to prove for all $a,b,c>0$: $$\frac{a}{\sqrt{b}}+\frac{b}{\sqrt{c}}+\frac{c}{\sqrt{a}}\geq\sqrt{3}\sqrt{a+b+c}.$$ By squaring this is equivalent to $$\frac{\sum_{cyc} a^3c+2\sum_{cyc} a^2 b^\frac32 c^\frac12}{abc}\geq \frac{3\sum_{cyc} a^2bc}{abc}$$ which follows immediately from AM-GM, for example for the first sum, use the cyclical versions of $$2ab^3+4a^3c+bc^3\geq 7a^2bc$$ by AM-GM.
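An empirical spot-check of the inequality (my addition, not part of the proof): random sampling on the constraint set $a+b+c=3$ finds no violations, with equality attained at $a=b=c=1$.

```python
import random

def lhs(a, b, c):
    return a / b ** 0.5 + b / c ** 0.5 + c / a ** 0.5

random.seed(0)
violations = 0
for _ in range(10000):
    a, b, c = (random.uniform(1e-3, 10.0) for _ in range(3))
    s = (a + b + c) / 3.0
    a, b, c = a / s, b / s, c / s        # rescale so that a + b + c = 3
    if lhs(a, b, c) < 3.0 - 1e-9:
        violations += 1
```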
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/537934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 6
} |
How to recognize removable singularity and how to remove it I don't understand the idea of a removable singularity yet. Can someone explain me how to recognize a removable singularity and how to remove it?
Example: $g(z)=f(z)/z$. Is $z=0$ then a removable singularity and if yes, how would I remove it?
| If both $f$ and $g$ are holomorphic at $z_0$, they have Taylor expansions $$f(z)=\sum_{k=0}^\infty a_k(z-z_0)^k,\quad g(z)=\sum_{k=0}^\infty b_k(z-z_0)^k$$
Let $m$ be the smallest index $k$ for which $a_k\ne 0$, and $n$ be the smallest index $k$ for which $b_k\ne 0$. Then
$$
\frac{f(z)}{g(z)} = (z-z_0)^{m-n}\frac{a_{m }+a_{m+1}(z-z_0)+\dots}{b_{n}+b_{n+1}(z-z_0)+\dots}
\tag{1}$$
where the fraction on the right is holomorphic in a neighborhood of $z_0$ (because the denominator does not vanish), and takes on the nonzero value $a_m/b_n$ at $z_0$. The behavior of $f/g$ at $z_0$ is determined by $m-n$. If $m\ge n$, you have a removable singularity, since the right-hand side of (1) is holomorphic. If $m<n$, you get a pole.
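Assuming SymPy is available, here is a small illustrative helper (`order_at_zero` and `is_removable` are my own names, not library API) that applies the $m$-versus-$n$ test at $z_0=0$:

```python
import sympy as sp

z = sp.Symbol('z')

def order_at_zero(expr, max_terms=12):
    # smallest k with a nonzero Taylor coefficient at z0 = 0
    for k in range(max_terms):
        if expr.diff(z, k).subs(z, 0) != 0:
            return k
    raise ValueError("no nonzero coefficient found")

def is_removable(f, g):
    # f/g has a removable singularity at z0 = 0 iff m >= n
    return order_at_zero(f) >= order_at_zero(g)
```

For instance, $\sin z/z$ is removable ($m=n=1$), while $\sin z/z^{2}$ has a pole ($m=1<n=2$).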
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What on earth is (B|A)? I'm stumped and getting nowhere with this question:
Description:
The Air Pollution and Mortality data of 60 cities were collected in a study. 11 can be considered to have high hydrocarbon pollution potential levels. Suppose that two cities are picked at random from the list. (That is, two cities are picked randomly, one after another without replacement.)
Let A be the event that the first city picked has a high hydrocarbon pollution potential level. Let B be the event that the second city picked has a high hydrocarbon pollution potential level.
Question:
"From the physical description of the problem it is possible to directly model P(A),
P(A') (complement), P(B|A) and P(B|A')."
So I believe P(A) = 11/60 and P(A') = 49/60.
But I'm unsure of what that last two are? 10/59? 11/59?
NOTE:
The next question is "what is P(A n B) (intersect)?"
Therefore, having this in conjunction with "directly model" means that you're not to use P(AnB) to find P(B|A)
Cheers
| If event A is true, then the first city picked is polluted, therefore the other 59 cities contain 10 polluted cities, therefore P(B|A)=10/59 .
If event A' is true, then the first city picked is clean, therefore the other 59 cities contain 11 polluted cities, therefore P(B|A')=11/59 .
So, you were correct in both your guesses.
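An exact enumeration over ordered draws of two distinct cities (my addition) reproduces all three values:

```python
from fractions import Fraction

cities = ['H'] * 11 + ['L'] * 49        # 11 high-pollution, 49 not
pairs = [(a, b) for i, a in enumerate(cities)
                for j, b in enumerate(cities) if i != j]

p_A = Fraction(sum(1 for a, b in pairs if a == 'H'), len(pairs))
p_B_given_A = Fraction(sum(1 for a, b in pairs if a == 'H' and b == 'H'),
                       sum(1 for a, b in pairs if a == 'H'))
p_B_given_Ac = Fraction(sum(1 for a, b in pairs if a == 'L' and b == 'H'),
                        sum(1 for a, b in pairs if a == 'L'))
```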
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $u:\mathbb{C}\rightarrow\mathbb{R}$ be a harmonic function such that $0\leq u(z)$ for all $ z \in \mathbb{C}$. Prove that $u$ is constant. Let $u:\mathbb{C}\rightarrow\mathbb{R}$ be a harmonic function such that $0\leq u(z)$ for all $ z \in \mathbb{C}$. Prove that $u$ is constant.
I think i should use Liouville's theorem, but how can i do that? Help!
| Since $u$ is entire, and $\mathbb{C}$ is simply connected, there is an entire holomorphic function $f \colon \mathbb{C}\to\mathbb{C}$ with $u = \operatorname{Re} f$. Then
$$g(z) = \frac{f(z)-1}{f(z)+1}$$
is an entire bounded function, hence $g$ is constant. Therefore $f$ is constant, and hence also $u$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Integration of $\int_{x(0)}^{x(t)}\frac{1}{\sqrt{|y|} }dy$ for $y<0$ I tried to solve $x'(t)=\sqrt{|x(t)|}$ by using separation of variables.
So I did $$\int_{0}^{t}\frac{x'(s)}{\sqrt{|x(s)|}}ds$$
and used the substitution $y=x(s)$, which gave me the integral
$$\int_{x(0)}^{x(t)}\frac{1}{\sqrt{|y|} }dy$$
If we have $y>0$, I don't have problems solving this: $2\sqrt{x(t)}-2\sqrt{x(0)}$
But how can I solve the integral if we have $y<0$.
| Do you have any more information about the initial condition? For instance, $x'$ is always non-negative so if $x(0)>0$, then $x(t)>0$ for all $t>0$, and so $y$ is always positive as well.
It's also perhaps worth noting that this differential equations is one of the typical 'pathological' examples from existence and uniqueness theory for ODE's. Observe that the function $\sqrt{|x|}$ is not Lipschitz on any interval including $x=0$, and if you try to solve this problem with the initial condition $x(0) = 0$ (for instance), you can find infinitely many $C^1$ solutions. (the 'basic' solution is $x = \dfrac{1}{4}t^2 : t \geq 0; x = 0 : t< 0$, and you can get other solutions by translating this to the right any number of units).
Further, if you try and solve this with an initial condition $x(0) < 0$, there's going to be some $t_0 > 0$ so that $x(t_0) = 0$ (this doesn't follow directly from $x' > 0$, but that's at least reasonable intuition), and then you have the uniqueness problem all over again relative to that point.
So, in some sense, it only makes sense to talk about 'solving' this problem with initial data $x(0) > 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $\cos(x+y)=\cos x + \cos y$? Find the value using a calculator: $\cos 75°$
At first I thought all I need is to separate the simpler known values like this:
$\cos 75^\circ = \cos 30°+\cos45° = {\sqrt3}/{2} + {\sqrt2}/{2} $
$= {(\sqrt3+\sqrt2)}/{2} $. This is my answer, which evaluates to $1.5731$ on a calculator.
However, when I used the calculator directly on $\cos 75°$, I get $0.2588$.
Where I am going wrong?
| You can simply plot $\cos(x+y)-(\cos(x)+\cos(y))$ to see the answer.
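Numerically (my addition): the attempted split overshoots badly — it even exceeds $1$, which no cosine can — while the angle-addition formula $\cos(45^\circ+30^\circ)=\cos 45^\circ\cos 30^\circ-\sin 45^\circ\sin 30^\circ$ matches the calculator value:

```python
import math

def deg(d):
    return math.radians(d)

exact = math.cos(deg(75))                               # 0.2588...
wrong_split = math.cos(deg(30)) + math.cos(deg(45))     # 1.5731...
angle_addition = (math.cos(deg(45)) * math.cos(deg(30))
                  - math.sin(deg(45)) * math.sin(deg(30)))
```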
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 1
} |
Prime Factors of Cyclotomic Polynomials I'm trying to show that if $q$ is a prime and $f_{q}(x)$ is the $q$-th cyclotomic polynomial, then all prime divisors of $f_{q}(a)$ for some fixed $a \neq 1$ either satisfy $p \equiv 1\, \text{mod}\; q$ or $p = q$.
Clearly $f_{q}(a) \equiv 1\, \text{mod}\; q$, because $1 + a + \cdots + a^{q-1} \equiv 2 + a + \cdots + a^{q-2} \equiv 1\, \text{mod}\; q$. However, I can't quite get from here to the conclusion. Any tips or suggestions would be appreciated.
| Suppose $1+x+x^2+ \ldots + x^{q-1} \equiv 0 \pmod p$ for some integer $x$.
If $x \equiv 1 \pmod p$, then $q \equiv 0 \pmod p$, and because $p$ and $q$ are prime, $p=q$.
If $x \neq 1 \pmod p$, then $p$ divides $(1+x+x^2+ \ldots + x^{q-1})(x-1) = x^q - 1$, which means that $x$ is a nontrivial $q$th root of unity modulo $p$. Fermat's little theorem says that $x^{p-1} \equiv x^q \equiv 1 \pmod p$. Since $x$ is not $1$ and $q$ is prime, this implies that $q$ is the order of $x$ in the group of units mod $p$, and that $p-1$ is a multiple of $q$. So $p \equiv 1 \pmod q$
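A brute-force check of the conclusion for small primes $q$ and bases $a\ge 2$ (my addition):

```python
def prime_factors(n):
    # trial division; fine for the small values checked here
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def divisors_ok(q, a):
    # every prime divisor of f_q(a) = 1 + a + ... + a^(q-1)
    # should equal q or be congruent to 1 mod q
    f = sum(a ** i for i in range(q))
    return all(p == q or p % q == 1 for p in prime_factors(f))
```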
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Holomorphic functional Calculus in Dunford and Schwartz I am currently studying the spectral theory for bounded operators as described in the book "Linear Operators" by Dunford and Schwartz because I would like to obtain a better understanding of the functional Calculus that uses the Cauchy formula
$$
f(T) = \frac{1}{2 \pi i} \int_C f(\lambda)(\lambda - T)^{-1} \,d\lambda
$$
where $T$ is a bounded operator, $f$ is a function of a complex variable $\lambda$ which is analytic in an open subset of $\mathbb{C}$ that contains the spectrum $\sigma(T)$ of $T$, and $C$ is an appropriately chosen contour.
Now the text starts with the case of operators on finite dimensional spaces, and it is there where I got stuck with the following statement:
For $f$ a function as described above, let $P$ be a polynomial such that
$$
f^{(m)}(\lambda) = P^{(m)}(\lambda) \,, \qquad m \le \nu(\lambda) - 1 \,,
$$
for each $\lambda \in \sigma(T)$.
(The number $\nu(\lambda)$, where $\lambda \in \sigma(T)$, denotes the least integer $\nu$ such that $(\lambda I - T)^\nu x = 0$ for every vector $x$ that satisfies $(\lambda I - T)^{\nu + 1}x = 0$.)
How can I find such a polynomial? I thought I should use the power series expansion of $f$, but how do I ensure that the equation above holds for each $\lambda \in \sigma(T)$?
Thanks a lot for your help!!
| It looks like what you need is Hermite Interpolation. It requires you to prescribe the same number of derivatives at all points; but you are dealing with a finite number of points, so you just take the bigger $m$ and make up the values for the missing derivatives.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $\displaystyle x= \frac{e^{3z}}{y^4}$ then $z(x,y)$? If $\displaystyle x= \frac{e^{3z}}{y^4}$ then $z(x,y)$ ?
I know that we subtract powers in fractions but how do we solve it when there is a variable $3z$? And what is $z(x,y)$?
This question is supposed to be easy and it irritate me that I can't solve it. Can anyone please help?
| It seems like they are asking for $z$ in terms of $x$ and $y$. That is what $z(x,y)$ means. (It's not entirely clear from just what you've posted, but that would be my best guess.)
So you need to solve for $z$. First multiply by $y^4$, then take the natural log, then divide by 3.
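Assuming SymPy is available, those steps can be reproduced symbolically; the result is $z(x,y)=\frac{1}{3}\ln(xy^{4})$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
z = sp.Symbol('z', real=True)

sol = sp.solve(sp.Eq(x, sp.exp(3 * z) / y ** 4), z)[0]
# multiply by y^4, take the natural log, divide by 3:
expected = sp.log(x * y ** 4) / 3
```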
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the remainder when $2^{47}$ is divided by $47$ So I am trying to work out how to find remainders in several different ways.
I have a few very similar questions,
1.) Find the remainder when $2^{47}$ is divided by 47
So i have a solution that says
$$2^{24} \equiv 2$$
$$2^{48} \equiv 4$$
$$2^{47} \equiv 2$$ Since $(2,47)=1$
and
2.)Find the remainder when $2^{32}$ is divided by $47$:
We have $$2^6 = 64 \equiv 17$$
$$2^{12} \equiv 17^2 = 289 \equiv 7$$
$$2^{24} \equiv 7^2=49 \equiv 2$$
$$2^8 \equiv 4*17=68 \equiv 21$$
$$2^{32}=2^{8}*2^{24} \equiv21*2 \equiv42$$
Okay, so although I have some solutions here, I'm not 100 percent sure how they derived them. For example, in the second question, why have they started with $2^{6}$ before finding the remainder? Are there some steps missing, or am I just missing the point?
Is this called something in particular? There are not many problems similar to this, and I do not know if it has a special name to search for.
P.S. Please do not use Fermat's little theorem, I want to understand this method, thanks :)
| Fermat's Little Theorem gives you the answer at once: since $\;47\; $ is prime, we get
$$2^{47}\equiv 2\pmod {47}$$
which means that the wanted residue is $\;2\;$ .
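Both residues are one-liners to confirm with Python's three-argument `pow` (my addition); the dictionary below reproduces exactly the repeated-squaring chain used in the second worked solution:

```python
r47 = pow(2, 47, 47)   # remainder of 2^47 mod 47
r32 = pow(2, 32, 47)   # remainder of 2^32 mod 47

# the repeated-squaring steps from the worked solution:
chain = {e: pow(2, e, 47) for e in (6, 12, 24, 8, 32)}
```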
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/538976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
2 regular graphs and permutations I have found this question on MSE before but I didn't find the answer satisfactory and it is so old I doubt anyone is still following it.
Let $f_{n}$ be the number of permutations on $[n]$ with no fixed points or two-cycles. Let $g_{n}$ be the number of simple, labeled two-regular graphs on $n$ vertices. Let $F(x)$ be the EGF for $f_{n}$ and $G(x)$ be the EGF for $g_{n}$. I've shown that $F(x)=G(x)^{2}$. So by the product rule for exponential generating functions
$$f_{n}=\sum_{k=0}^{n}\binom{n}{k}g_{k}g_{n-k}$$.
I am tasked to prove $F(x)=G(x)^{2}$ via a bijective proof. I know that given a graph on $k$ vertices we can decompose it into cycles. Then for each cycle we can orient the cycle two ways to make a permutation on $n$. But I think this count is too large.
Any hint would help thanks.
| HINT: Each permutation of $[n]$ with no fixed points and no $2$-cycles corresponds in an obvious way to a unique labelled two-regular directed graph in which each cyclic component is a directed cycle. Given such a graph $G$, we can split the components into two sets, $\mathscr{C}_I(G)$ and $\mathscr{C}_D(G)$, as follows:
Suppose that $C$ is a component whose lowest-numbered vertex is $v$; there are vertices $u$ and $w$ in $C$ such that $u\to v$ and $v\to w$ are directed edges of $C$. If $u<w$, put $C$ into $\mathscr{C}_I(G)$, and if $u>w$, put $C$ into $\mathscr{C}_D(G)$.
Let $G_I=\bigcup\mathscr{C}_I$ and $G_D=\bigcup\mathscr{C}_D$. Now show that for a fixed set of $k$ labels from $[n]$ there are $g_k$ possible graphs $G_I$ and $g_{n-k}$ possible graphs $G_D$.
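A brute-force verification of the resulting identity $f_{n}=\sum_{k}\binom{n}{k}g_{k}g_{n-k}$ for small $n$ (my addition; the enumeration is exponential, but $n\le 6$ is instant):

```python
from itertools import combinations, permutations
from math import comb

def f(n):
    # permutations of [n] with no 1-cycles and no 2-cycles:
    # both are excluded by requiring p[p[i]] != i for all i
    return sum(1 for p in permutations(range(n))
               if all(p[p[i]] != i for i in range(n)))

def g(n):
    # labeled simple 2-regular graphs: exactly n edges, every degree 2
    if n == 0:
        return 1
    edges = list(combinations(range(n), 2))
    count = 0
    for chosen in combinations(edges, n):
        deg = [0] * n
        for u, v in chosen:
            deg[u] += 1
            deg[v] += 1
        if all(d == 2 for d in deg):
            count += 1
    return count
```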
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
I want to prove that $m||v||_1\le ||v||$ Let $(V,||\cdot ||)$ a finite-dimensional real normed space, and $\{v_1,...,v_n\}$ a basis. We define $||v||_1=\sqrt{x_1^2+...+x_n^2}$, where $x_1,...,x_n$ are the coordinates of $v$.
I already proved that there exists $w\in S:=\{v\in V:||v||_1=1\}$ such that $$||w||\le ||v||$$ for every $v\in S$.
Now I want to prove that $$m||v||_1\le ||v||$$ for every $v\in V.$, but I can't see why we have that.
Any hint? Thanks.
Edit: I forgot... $m:=||w||$
| Hint: First note that $w \ne 0$; let $m := \|w\| >0$. Now let $v \in V$, $v \ne 0$. Then $v':= v/\|v\|_1 \in S$, hence $m = \|w\| \le \|v'\|$. Now use the definition of $v'$ and homogeneity of $\|\cdot \|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that the multi-period market satisfies Non-arbitrage given that the single-period market admits Non-arbitrage Here is the question:
Let $(Ω,\mathscr F,\mathbb P,\mathbb F= (\mathscr F_k)_{k=0,...,T})$ be a filtered probability space and $S=(S_k)_{k=0,...,T}$ a discounted price process. Show that the following are equivalent:
a) $S$ satisfies Non-arbitrage.
b) For each $k = 0, . . . , T − 1$, the one-period market $(S_k, S_{k+1})$ on $(Ω, \mathscr F_{k+1}, \mathbb P, (\mathscr F_k, \mathscr F_{k+1}))$
satisfies Non-arbitrage.
I know how to prove from a) to b). But proving from b) to a) seems quite difficult. So if anyone can help, please share your idea here. Thanks!
| No-arbitrage is equivalent to existence of state prices or stochastic discount factor. The proof usually is based on separating hyperplane theorem (see Duffie's Dynamic Asset Pricing Theory for example) but the conclusion is that prices do not admit arbitrage from date $t_1$ to date $t_2$ if and only if there exist state prices $\pi_{t_1,t_2}$ such that $S_{t_1}=E^{t_1}[\pi_{t_1,t_2}S_{t_2}]$.
Part b) implies existence of $\pi_{k,k+1}$ such that $S_{k}=E^{k}[\pi_{k,k+1}S_{k+1}]$ for $k=0,...,T-1$. For any $t_1<t_2$, using the law of iterated expectations, $S_{t_1}=E^{t_1}[\prod_{k=t_1}^{t_2-1}{\pi_{k,k+1}}S_{t_2}]$, so $\prod_{k=t_1}^{t_2-1}{\pi_{k,k+1}}$ represents state prices or a discount factor between dates $t_1$ and $t_2$. This means there is no-arbitrage between dates $t_1$ and $t_2$. You can choose $t_1=0$ and $t_2=T$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $f(n)$ is the nearest integer to $\frac12(1+\sqrt2)^{n+1}$? Let $f(n)$ denote the number of sequences $a_1, a_2, \ldots, a_n$ that can be constructed where each $a_i$ is $+1$, $-1$, or $0$.
Note that no two consecutive terms can be $+1$, and no two consecutive terms can be $-1$. Prove that $f(n)$ is the nearest integer to $\frac12(1+\sqrt2)^{n+1}$.
Can anyone help me with this problem. I really do not know what to do in this one.
| Let $f_0(n)$ be the number of such sequences ending in $0$ and $f_1(n)$ the number of such sequences ending in $1$ (the number ending in $-1$ is $f_1(n)$ too, by symmetry). Then $f_0(n) = f_0(n-1) + 2f_1(n-1)$, $f_1(n) = f_0(n-1) + f_1(n-1)$ and $f(n) = f_0(n) + 2f_1(n) = f_0(n+1)$. Some transformations: $f_0(n) - f_1(n) = f_1(n-1)$, $f_0(n) = f_1(n-1) + f_1(n)$, $f_1(n) = 2f_1(n-1) + f_1(n-2)$ and $f_1(0) = 0$, $f_1(1) = 1$. Solving this recurrence relation we get $\lambda^2 - 2\lambda - 1 = 0$, $\lambda_{1,2} = 1\pm \sqrt2$, $f_1(n) = c_1 \lambda_1^n + c_2 \lambda_2^n$.
$$\begin{cases}0 = c_1 + c_2,\\1 = c_1(1 + \sqrt 2) + c_2(1 - \sqrt 2);\end{cases} \begin{cases}c_1 = 1/(2\sqrt2),\\c_2 = -1/(2\sqrt2).\end{cases}$$
Then $f_1(n) = \frac1{2\sqrt2}\left((1+\sqrt2)^n - (1-\sqrt2)^n\right)$ and $$f(n) = f_0(n+1) = f_1(n+1) + f_1(n) =\\= \frac1{2\sqrt2}\left((1 + \sqrt2)^n(1 + 1 + \sqrt2) - (1 - \sqrt2)^n(1 + 1 - \sqrt2)\right) =\\= \frac12\left((1 + \sqrt2)^{n+1} + (1 - \sqrt2)^{n+1}\right).$$
The second summand is always less then $\frac12$.
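Brute-force enumeration (my addition) confirms the closed form and the nearest-integer claim for small $n$:

```python
from itertools import product

def f(n):
    # sequences over {-1, 0, +1} with no two consecutive +1's
    # and no two consecutive -1's
    return sum(1 for s in product((-1, 0, 1), repeat=n)
               if all(not (s[i] != 0 and s[i] == s[i + 1])
                      for i in range(n - 1)))

def nearest(n):
    # nearest integer to (1/2) * (1 + sqrt(2))^(n+1)
    return round(0.5 * (1 + 2 ** 0.5) ** (n + 1))
```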
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Proof of $(\forall x)(x^2+4x+5 \geqslant 0)$ $(\forall x)(x^2+4x+5\geqslant 0)$ universe is $\Re$
I went about it this way
$x^2+4x \geqslant -5$
$x(x+4) \geqslant -5$
And then I deduce that if $x$ is positive, then $x(x+4)$ is positive, so it's $\geqslant -5$
If $ 0 \geqslant x \geqslant -4$, then $x(x+4)$ is also $\geqslant -5$.
If $ x < -4$, then $x(x+4)$ will be negative times negative = positive, so obviously $\geqslant -5$
I'm wondering if there's an easier way to solve this problem? My way seems a little clunky.
| Just note that $(x+2)^2=x^2+4x+4$, so $x^2+4x+5=(x+2)^2+1\geqslant 1>0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 0
} |
Family with three children. Chances of at least one boy and girl? Here's my exercise:
A family has three children. What's the probability of event $A \cup C$ where:
$A$- Family has children of both sexes.
$C$- Family has at most one girl.
Well I see two ways to look at this conundrum:
The first one would be to differentiate possibilities by triples, $B$ meaning boy, $D$ meaning girl. So we have eight possibilities: $BBB,BBD,BDB,DBB,BDD,DDB,DBD,DDD$. So the answer would be $P(A \cup C)=\frac{7}{8}$.
And this is correct according to solutions in my textbook. Yet my first solution was this: since they ask whether the family has two boys, and not whether the first and third children are boys, $BDB$ and $BBD$ are indistinguishable, so we have only four possibilities: one girl and two boys, three boys, three girls, one boy and two girls. Then $P(A \cup C)=\frac{3}{4}$.
Why was my first intuition wrong?
| You distinguish between four possibilities: $BBB, BBD, BDD, DDD$. In three out of four cases the event $A \cup C$ occurs. However this only implies $P(A \cup C) = \frac{3}{4}$ when each of the four cases are equally likely, which is not the case here.
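Enumerating the eight equally likely birth orders (my addition) makes this concrete: the four "compositions" carry weights $1,3,3,1$, not $1,1,1,1$.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product('BD', repeat=3))   # 8 equally likely triples

def in_A(o):   # children of both sexes
    return 'B' in o and 'D' in o

def in_C(o):   # at most one girl
    return o.count('D') <= 1

p_union = Fraction(sum(1 for o in outcomes if in_A(o) or in_C(o)),
                   len(outcomes))

# weight of each composition (number of girls) -- not uniform:
weights = {k: sum(1 for o in outcomes if o.count('D') == k) for k in range(4)}
```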
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Mapping homotopic to the identity map has a fixed point Suppose $\phi:\mathbb{S}^2\to\mathbb{S}^2$ is a mapping, homotopic to the identity map. Show that there is a fixed point $\phi(p)=p$.
| Have you seen homology and degree of continuous mappings? If so, this is pretty short:
Since $f$ is homotopic to the identity, $\deg f = \deg \text{id}_{S^2} = +1$. On the other hand, if a continuous map $g:S^n\to S^n$ has no fixed points, then $\deg g = (-1)^{n+1}$. This shows that $f$ must have a fixed point, since otherwise $\deg f = (-1)^{2+1} = -1$.
Alternatively, you can prove that any map $S^n\to S^n$ with no fixed point is homotopic to the antipodal map. (This is used to prove the statement about $g$ in the previous proof.) Note that if $f$ has no fixed points, then $$h(x, t) = (1-t)f(x)-tx$$ is nonzero, so $h(x,t)/|h(x,t)|$ defines a homotopy from $f$ to the antipodal map. (This proof is in Hatcher, Algebraic Topology, section 2.2.) And the antipodal map is homotopic to the identity if and only if $n$ is odd, so this proves the claim by contraposition.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
area of a triangle using double integral I can find the area of a triangle with known vertices but the problem here is that the question is general: I have to use a double integral to prove that the area of the triangle is:
$$A_{\text{triangle}}=\frac {\text{base}\times\text{height}}{2}$$
I assume that the width is $b$ and the height is $h$ so I have to integrate from $0$ to $b$ and from $0$ to $h$ but this would lead me to the area of a rectangle, that is, $bh$.
Any help is appreciated.
| Here is an approach. Just assume it has the three vertices $(0,0),(0,b)$ and $(c,h)$ such that $b,c >0$. Now, you need to
1) find the equation of the line passing through points $(0,0),(c,h)$
2) find the line passing through the points $(0,b), (c,h) $
3) consider a horizontal strip; i.e. the double integral
$$ \iint_{D} dxdy .$$
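Assuming SymPy, carrying out steps 1–3 with the suggested vertices $(0,0),(0,b),(c,h)$ gives exactly half of base times height — here the base $b$ lies on the $y$-axis and the height is $c$. (I use vertical strips below, which is slightly simpler than the horizontal strip suggested; the computation is the same idea.)

```python
import sympy as sp

x, y = sp.symbols('x y')
b, c, h = sp.symbols('b c h', positive=True)

lower = (h / c) * x                  # line through (0, 0) and (c, h)
upper = b + ((h - b) / c) * x        # line through (0, b) and (c, h)

# area = double integral of 1 over the triangle, strip by strip
area = sp.integrate(upper - lower, (x, 0, c))
```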
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Volume of $n$-ball in terms of $n-2$ ball Let $V_n(R)$ be the volume of the ball with radius $R$ in $\mathbb{R}^n$. This page says
$$V_n(R)=\int_0^{2\pi}\int_0^RV_{n-2}(\sqrt{R^2-r^2})r\,dr\,d\theta$$
I don't really understand the explanation given in there. Could someone explain it in the case $n=3$ (so $n-2=1$) how this integral is derived?
| Let's do the $n=2$ case first. What is a zero-dimensional sphere, you ask? Well, it's nothing but a point, and $0$-dimensional volume is, I claim, just cardinality. So $V_0 (R) = \lvert\{\ast \}\rvert= 1$.
Now, how do we calculate $V_2 (R)$, the $2$-volume (i.e. area) of a $2$-ball (i.e. disc)? Well, we want to add up the volume of all those teensy $0$-dimensional balls which make up the disc, parametrized by the radius and the angle. First, imagine cutting the disc into very skinny rings. We will look at one ring with inner radius $r$ and outer radius $r+dr$. Next, we cut this ring into extremely thin sections. We will focus on one such section, that sweeps out a very small angle, $d\theta$.
Call the area of this little piece $dA$. Then the volume we want is exactly the sum of all those little pieces, which is exactly $\int dA$. There are very general principles that tell us how to compute $dA$, but here is how to work it out from scratch.
Our little chunk is very very close to being a rectangle when $d\theta$ is very small. It's so close, that we might as well pretend it is a rectangle (if this makes you uncomfortable, then try working out the error term and seeing how quickly it goes to zero, or else look at the rigorous proof of the multivariate change-of-variables formula).
And what are the side lengths of this rectangle? The radial one is obvious: it's just $dr$. The other one is a little more tricky. Whatever it is, if we add it up all around the circle, we should get the circumference. Since $\int_0^{2\pi} r d\theta = 2\pi r$, the length we want is $r d\theta$.
So we conclude that $dA = r \,dr\, d\theta$. Since we're interested in adding up these areas over all $r$ and $\theta$ with $0\leq r \leq R$ and $0\leq \theta \leq 2\pi$, we have $V_2 (R) = \int dA = \int r\,dr\,d\theta = \int_0^{2\pi} \int_0^R r\,dr\,d\theta$.
If we want, we can actually compute this out as a sanity check, and get $2\pi \left.\frac{1}{2} r^2 \right|_0^R = \pi R^2$.
Now that we've done $n=2$, we're more than ready for $n=3$. Imagine that you're looking down on a sphere of radius $R$, so that it looks like a disc. Now we want to chop up the disc exactly as before, except now, instead of tiny near-rectangular bits, these are very thin bars that go all the way through the sphere (picture an apple slicer). Their volume is just the area $dA$ that we had before, times... hmm...
We need to break out the Pythagorean theorem to find the length of our rectangular bar. Say that the length is $2s$. Then, if you think for a minute, you should find that $s^2 + r^2 = R^2$, because the top of our bar lies on the sphere, which has radius $R$, and the line from the origin is the hypoteneuse of a right triangle that goes radially outwards (from our top-down perspective) a distance $r$, then up (towards us) a distance that we defined to be $s$. So the length we want is $2s$—which just so happens to be $V_1 (s)$, the "volume" of the $1$-sphere of radius $s$.
So we have $V_3 (R) = \int 2s \cdot r\,dr\,d\theta = \int V_1 (s) \cdot r\,dr\,d\theta$ $= \int V_1 (\sqrt{R^2 - r^2})r\,dr\,d\theta$ $= \int_0^{2\pi} \int_0^R V_1 (\sqrt{R^2 - r^2}) r\,dr\,d\theta$.
One again, we can do a sanity check, and actually compute the thing to get $\frac{4}{3} \pi R^3$.
Now the general case should be clear. We will dividing up the $n$-ball into pieces, making rectangular cuts. The rectangles will have side lengths $dr$ and $rd\theta$, and they will cut out a ball of dimension $n-2$ and radius $s$, with $s^2 + r^2 = R^2$.
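Assuming SymPy, the recursion can be run directly from the base cases $V_0(R)=1$ (a point) and $V_1(R)=2R$ (an interval); since the integrand does not depend on $\theta$, the $\theta$-integral contributes the factor $2\pi$:

```python
import sympy as sp

r = sp.Symbol('r', positive=True)
R = sp.Symbol('R', positive=True)

def V(n):
    # volume of the n-ball of radius R via the two-step recursion
    if n == 0:
        return sp.Integer(1)
    if n == 1:
        return 2 * R
    inner = V(n - 2).subs(R, sp.sqrt(R ** 2 - r ** 2))
    return sp.simplify(2 * sp.pi * sp.integrate(inner * r, (r, 0, R)))
```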
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Use a change of contour to show that $\int_0^\infty \frac{\cos{(\alpha x)}}{x+\beta}dx = \int_0^\infty \frac{te^{-\alpha \beta t}}{t^2 + 1}dt$ A problem from an old qualifying exam:
Use a change of contour to show that
$$\int_0^\infty \frac{\cos{(\alpha x)}}{x+\beta}dx = \int_0^\infty \frac{te^{-\alpha \beta t}}{t^2 + 1}dt,$$
provided that $\alpha, \beta >0$. Define the LHS as the limit of proper integrals, and show it converges.
My attempt so far:
It seems fairly easy to tackle the last part of the question...$\cos{(\alpha x)}$ will keep picking up the same area in alternating signs, and $x$ will continue to grow, so we're basically summing up a constant times the alternating harmonic series.
I've never actually heard the phrase "change of contour". I assume what they mean is to choose a contour for one side, and then use a change of variable (which will then change the contour, e.g. as $z$ traverses a certain path $z^2$ will traverse a different path).
The right hand side looks ripe for subbing $z^2$ somehow...but then that would screw up the exponential. We need something divided by $\beta$ to get rid of the $\beta$ in the exponential on the RHS, leaving $\cos{(\alpha x)}$ as the real part of the exponential.
I also thought of trying to use a keyhole contour on the RHS and multiplying by $\log$, but it seems we'd have problems with boundedness in the left half-plane.
Any ideas or hints? I don't need you to feed me the answer. Thanks
| Yes, it really looks like you should go for $z^2$. Let's try $x = \beta t^2$. Then we have $ dx = 2\beta tdt$ and hence:
$$ \int_0^\infty\frac{\cos(\alpha x)}{x+\beta}dx = \int_0^\infty\frac{2t\cos(\alpha \beta t^2)}{t^2+1}dt $$
Maybe you should note that $\cos(x) = \frac{e^{-ix} + e^{ix}}{2}$. This would also explain the "complex analysis" tag.
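For what it's worth, the identity in the problem statement can be checked numerically before hunting for the right contour. This sketch assumes scipy is available and uses its Fourier-weighted quadrature for the oscillatory left-hand side; $\alpha = \beta = 1$ is an arbitrary test point:

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 1.0, 1.0  # arbitrary positive test values

# LHS: ∫_0^∞ cos(αx)/(x+β) dx — pass the non-oscillatory factor and
# declare the weight cos(αx); quad then uses QUADPACK's QAWF routine.
lhs, _ = quad(lambda x: 1.0/(x + beta), 0, np.inf, weight='cos', wvar=alpha)

# RHS: ∫_0^∞ t e^{-αβt}/(t²+1) dt, a nicely decaying integrand
rhs, _ = quad(lambda t: t*np.exp(-alpha*beta*t)/(t**2 + 1), 0, np.inf)

print(lhs, rhs)  # the two values agree to quadrature tolerance
assert abs(lhs - rhs) < 1e-6
```

Agreement at a few sample points of $(\alpha,\beta)$ is of course no proof, but it is reassuring before committing to a contour argument.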
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Find $\int \frac {\mathrm dx}{(x + 1)(x^2 + 2)}$ I'm supposed to find the antiderivative of
$$\frac {1}{(x + 1)(x^2 + 2)}$$
and I'm completely stumped. I've been trying substitutions with $u = x^2$ and that's led me nowhere. I don't think I can use partial fractions here since I have one linear factor and one quadratic factor below the division line, right?
| Consider $$\int {1\over (x+1)(x^2+2)}dx.$$
Using the method of partial fraction decomposition we have $${1\over (x+1)(x^2+2)} = {A\over x+1}+{Bx+C\over x^2+2}.$$ Solving for the constants $A$, $B$, and $C$ we obtain $${1\over 3(x+1)}+{1-x\over 3(x^2+2)}.$$ So we must integrate $$\int{1\over 3(x+1)}+{1-x\over 3(x^2+2)}dx.$$ Rewrite the integral as $$\int{1\over 3}\cdot {1\over x+1}+{1\over 3}\cdot {1\over x^2+2}-{1\over 3}\cdot {x\over x^2+2}dx.$$ The first term is a $u$-substitution, the second term involves a $\tan^{-1}(x)$ identity and the third term is another $u$-substitution. Thus we integrate and obtain $${1\over 3}\cdot \ln|x+1|+{1\over 3\sqrt2}\cdot \tan^{-1}({x\over \sqrt2})-{1\over 6}\cdot \ln(x^2+2)+C$$ where $C$ is a constant.
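Both the decomposition and the final antiderivative are easy to double-check with a CAS. A sketch using sympy: `apart` reproduces the partial fractions, and differentiating the claimed antiderivative recovers the integrand:

```python
import sympy as sp

x = sp.symbols('x')
integrand = 1/((x + 1)*(x**2 + 2))

# apart reproduces the decomposition 1/(3(x+1)) + (1-x)/(3(x^2+2))
print(sp.apart(integrand, x))

# The antiderivative obtained above (constant of integration dropped)
F = (sp.log(x + 1)/3
     + sp.atan(x/sp.sqrt(2))/(3*sp.sqrt(2))
     - sp.log(x**2 + 2)/6)

# Differentiating must recover the integrand
assert sp.simplify(sp.diff(F, x) - integrand) == 0
```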
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Reflexivity of a Banach space without the James map Reflexivity of a Banach space is usually defined via a particular isometric isomorphism: namely the map that takes each element to the corresponding evaluation functional, which is always an injective linear isometry from a Banach space into its double dual. It just isn't necessarily surjective.
Are there examples of when a space is isometrically isomorphic to its double dual, but is not reflexive? If $X$ is reflexive, is the same true of the $k^{th}$ dual of $X$? (Or maybe this property moves backwards instead? As in maybe if the dual is reflexive then so is the original. Separability does this too, but I'm not sure how to intuit when a property should pushforward and when it should pull back under the operation of taking a dual.) Does being reflexive, or at least isometrically isomorphic to its double dual capture any information about the "size" of the space or any other intuitive concepts that are easy to picture?
| Here are answers to your questions.
1) Robert C. James provided the first example of non-reflexive Banach space isomorphic to its double dual. Also, see this MO post. Thanks to David Mitra for the reference.
2) A Banach space $X$ is reflexive iff $X^*$ is reflexive. See this post for details.
3) Reflexivity does not bound the size of a Banach space. For example $\ell_p(S)$ with $1<p<+\infty$ is reflexive and $\dim \ell_p(S)>|S|$, though $|S|$ can be arbitrarily large.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/539974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find the coefficient of a power of x in the product of polynomials - Link with Combinations? I came across a new set of problems while studying combinatorics which involve restrictions on several variables and use of the multinomial theorem to evaluate the number of possible combinations of the variables subject to those restrictions. For example
If we were to pick 6 balls out of a bag which contained 3 red, 4 green and 5 white identical balls, then the method I've been taught is as follows:
Suppose
$a$ = red balls , $b$= green balls and $c$=white balls .
Now,
$a+b+c=6$ where $0\le a\le 3$, $0\le b\le 4$, $0\le c\le 5$.
Now i am supposed to find the coefficient of $x^6$ in the following product :
$(1+x^1+x^2+x^3)(1+x^1+x^2+x^3+x^4)(1+x^1+x^2+x^3+x^4+x^5)$ and that is the solution.
I want to understand what's actually happening here, since finding a coefficient seems to have no link with counting the possible combinations. Thanks.
| Here's one way to think of it. Let's rewrite as $$(r^0+r^1+r^2+r^3)(g^0+g^1+g^2+g^3+g^4)(w^0+w^1+w^2+w^3+w^4+w^5),$$ for a moment. When we expand this, every single one of the terms will be of the form $r^ag^bw^c.$ This gives us a one-to-one correspondence with the various ways in which the balls can be drawn--where $r^ag^bw^c$ represents the situation of drawing $a$ red balls, $b$ green balls, and $c$ white balls. Now, if we no longer wish to distinguish between the various ways we can draw a certain number of balls of any type, then we can set $r,g,w=x$ to get your original expression. The coefficient of $x^6,$ then, will be the number of terms $r^ag^bw^c$ such that $a+b+c=6$--that is, the number of ways we can draw $6$ balls of any type, as desired.
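To see the correspondence concretely, one can compare the coefficient of $x^6$ in the product against a brute-force count of the draws. A plain-Python sketch, with the polynomial multiplication done by hand as a convolution:

```python
from itertools import product

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = power)."""
    out = [0]*(len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a*b
    return out

# (1 + x + x^2 + x^3)(1 + ... + x^4)(1 + ... + x^5)
prod_poly = poly_mul(poly_mul([1]*4, [1]*5), [1]*6)
coeff = prod_poly[6]

# Brute-force count of draws (a, b, c): a+b+c = 6 with 0<=a<=3, 0<=b<=4, 0<=c<=5
count = sum(1 for a, b, c in product(range(4), range(5), range(6))
            if a + b + c == 6)

print(coeff, count)  # 18 18
```

Both routes give $18$, as the correspondence predicts.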
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is $\mbox{lcm}(a,b,c)=\mbox{lcm}(\mbox{lcm}(a,b),c)$? $\newcommand{\lcm}{\operatorname{lcm}}$Is $\lcm(a,b,c)=\lcm(\lcm(a,b),c)$?
I managed to show thus far, that $a,b,c\mid\lcm(\lcm(a,b),c)$, yet I'm unable to prove, that $\lcm(\lcm(a,b),c)$ is the lowest such number...
| The "universal property" of the $\newcommand{\lcm}{\operatorname{lcm}}\lcm$ is
if $\lcm(a,b) \mid x$, and $c \mid x$, then $\lcm(\lcm(a,b),c) \mid x$.
Make a good choice for $x$, for which you can prove the hypothesis and/or the conclusion is useful.
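Before (or after) carrying out the proof, the identity is cheap to sanity-check numerically. A Python sketch: `math.lcm` also accepts three arguments, which gives a third expression to compare against.

```python
from itertools import product
from math import lcm

# Exhaustive check over a small box of triples
for a, b, c in product(range(1, 31), repeat=3):
    assert lcm(lcm(a, b), c) == lcm(a, lcm(b, c)) == lcm(a, b, c)
print("no counterexample for 1 <= a, b, c <= 30")
```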
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Path connected spaces with same homotopy type have isomorphic fundamental groups I was try to understand the following theorem:-
Let $X,Y$ be two path connected spaces which are of the same homotopy type.Then their fundamental groups are isomorphic.
Proof: The fundamental groups of both the spaces $X$ and $Y$ are independent on the base points since they are path connected. Since $X$ and $Y$ are of the same homotopy type, there exist continuous maps $f:X\to Y $ and $g:Y\to X$ such that $g\circ f\sim I_X$ by a homotopy, say, $F$ and $f\circ g \sim I_Y$ by some homotopy, say $G$. Let $x_0\in X$ be a base point. Let
$$f_\#:\pi_1(X,x_0)\to \pi_1(Y,f(x_0))$$ and $$g_\#:\pi_1(Y,f(x_0))\to \pi_1(X,g(f(x_0)))$$ be the induced homomorphisms. Let $\sigma$ be the path joining $x_0$ to $gf(x_0)$ defined by the homotopy $F$.
After that the author says that $\sigma_\#$ is an isomorphism. Obviously $\sigma_\#$ is a homomorphism, but I could not understand how it becomes an isomorphism.
Can someone explain, please? Thanks for your kind help and time.
| I suppose $σ$ is a path from $x_0$ to a point $x_1$ and the map $\sigma_\#$ is defined as:
$$σ_\#:\pi_1(X,x_0)\to \pi_1(X,x_1)\\σ_\#([p])=[σ\cdot p\cdot\barσ]$$ where $σ⋅p$ is the path which first traverses the loop $p$ and then $σ$, and $\bar σ$ is the reversed path.
This is a bijection because it has the inverse $\barσ_♯$ which we see by calculating $$\barσ_\#σ_\#([p])=[\barσ⋅σ⋅p⋅\barσ⋅σ]=[\barσ⋅σ]⋅[p]⋅[\barσ⋅σ]=[p]$$ as the homotopy class of $[\barσ⋅σ]$ is trivial. Similarly, we get $σ_\#\barσ_\#([p])=[p]$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
probability question: find min{a,b,c} X, Y, Z: discrete random variables
The probability that $X$, $Y$, $Z$ take distinct values is $1$.
Let $a=P(X>Y)$, $b=P(Y>Z)$, $c=P(Z>X)$.
(i) Find min{a,b,c}.
(Note: in an election, it's possible for more than half of the voters to prefer candidate A to candidate B, more than half B to C, and more than half C to A)
(ii) Show that if $X,Y,Z$ are i.i.d., then $a=b=c=1/2$.
Could anyone give me a clue for how to start with this problem? Thank you!
| Since any equality has probability $0$, we only have to look at six possibilities
*
*$p_1 = P(X \gt Y \gt Z)$
*$p_2 = P(X \gt Z \gt Y)$
*$p_3 = P(Y \gt Z \gt X)$
*$p_4 = P(Y \gt X \gt Z)$
*$p_5 = P(Z \gt X \gt Y)$
*$p_6 = P(Z \gt Y \gt X)$
and we have $p_1+p_2+p_3+p_4+p_5+p_6=1$ with
*
*$a=p_1+p_2+p_5$;
*$b= p_3+p_4+p_1$;
*$c=p_5+p_6+p_3$.
$a,b,c$ are each sums of probabilities and so must be non-negative.
(i) If the six probabilities can take any values then
*
*for example it is possible $a=0$, one possibility being that $p_4=1$ and the others zero, in which case $\min \{a,b,c\}\ge 0$.
*$a+b+c =p_1+p_2 +p_5+ p_3+p_4+ p_1+p_5+p_6+p_3 = 1 + p_1+p_3+p_5$, which must lie in the range $[1,2]$, so $\min \{a,b,c\}\le \frac23$ which would for example happen if $p_1=p_3=p_5=\frac13$ and $p_2=p_4=p_6=0$.
(ii) If $X,Y,Z$ are i.i.d. then $p_1=p_2=p_3=p_4=p_5=p_6 = \frac16$, so $a=b=c=\frac12$.
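Both parts can be checked exactly with a tiny enumeration. A Python sketch using `fractions` for exact arithmetic; each ordering of $(X,Y,Z)$ is encoded as a tuple of distinct ranks, so for instance $(2,1,0)$ means $X>Y>Z$:

```python
from fractions import Fraction as F
from itertools import permutations

orders = list(permutations(range(3)))  # the six possible orderings

def a_b_c(probs):
    """probs maps an ordering tuple (x, y, z) to its probability."""
    a = sum(p for (x, y, z), p in probs.items() if x > y)  # P(X > Y)
    b = sum(p for (x, y, z), p in probs.items() if y > z)  # P(Y > Z)
    c = sum(p for (x, y, z), p in probs.items() if z > x)  # P(Z > X)
    return a, b, c

# (ii) i.i.d. case: all six orderings equally likely
iid = {o: F(1, 6) for o in orders}
assert a_b_c(iid) == (F(1, 2), F(1, 2), F(1, 2))

# (i) the "cyclic" case p_1 = p_3 = p_5 = 1/3: mass on X>Y>Z, Y>Z>X, Z>X>Y
cyclic = {o: F(0) for o in orders}
for o in [(2, 1, 0), (0, 2, 1), (1, 0, 2)]:
    cyclic[o] = F(1, 3)
assert a_b_c(cyclic) == (F(2, 3), F(2, 3), F(2, 3))
print("ok")
```

The cyclic construction confirms that $a=b=c=\frac23$ is attainable, matching the bound $\min\{a,b,c\}\le\frac23$ derived above.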
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\int_{-\infty}^{0}xe^{x}$ diverge or converge How would one find whether the following improper integral converge or diverge.
$\int_{-\infty}^{0}xe^{x}$
I did the following.
$t\rightarrow\infty$
$\int_{t}^{0}xe^x$
I did the integration by parts.
$u=x$
$dv=e^x$
$xe^x-\int 1e^x$
$xe^x-1xe^x$
$(0)(e^0)-e^0(0)-te^t-te^t$
$0-\infty-\infty$
would this mean there is a divergence.
| Step 1: set up the limit $\lim_{t \to -\infty}$.
Step 2: find the antiderivative (I got $xe^x - e^x$).
Step 3: plug in the limits and apply the FTC, so you get
$\lim_{t\to -\infty} \left[(0-e^0)-(te^t-e^t)\right]$
Key: $e^t \to 0$ and $te^t \to 0$ as $t \to -\infty$.
Step 4: evaluate.
Answer: convergent, with value $-1$.
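A CAS agrees; sympy evaluates the improper integral exactly (a one-line sketch):

```python
import sympy as sp

x = sp.symbols('x')
val = sp.integrate(x*sp.exp(x), (x, -sp.oo, 0))
print(val)  # -1
```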
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\sqrt{2n^2+2n+3} $ is irrational I have proven this by cases on $n$.
I would like to see a neater proof. One similar to the proof of the fact that $\sqrt{2}$ is irrational.
| Assume by contradiction that $\sqrt{2n^2+2n+3}$ is rational. Then
$$\sqrt{2n^2+2n+3} =\frac{p}{q}$$
with $\gcd(p,q)=1$. Squaring gives $p^2 = q^2(2n^2+2n+3)$, so $q^2 \mid p^2$; together with $\gcd(p,q)=1$ this implies $q=1$. Thus
$$2n^2+2n+3=p^2$$
It follows that $p$ is odd. Let $p=2k+1$
Then
$$2n^2+2n+3=4k^2+4k+1 \Rightarrow n^2+n=2k^2+2k-1$$
Hence $n(n+1)$ is odd, which implies that both $n$ and $n+1$ are odd. But $n$ and $n+1$ are consecutive integers, so one of them is even. Contradiction.
Thus $\sqrt{2n^2+2n+3}$ cannot be rational.
P.S. Here is the same argument, if you know modular arithmetic:
$$2n^2+2n+3 \equiv 3 \pmod{4}$$
thus it cannot be a perfect square.
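A quick empirical check of both formulations (a Python sketch; `math.isqrt` tests squareness exactly):

```python
from math import isqrt

for n in range(10_000):
    m = 2*n*n + 2*n + 3
    assert m % 4 == 3            # 2n(n+1) is divisible by 4, so m ≡ 3 (mod 4)
    assert isqrt(m)**2 != m      # hence m is never a perfect square
print("ok")
```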
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Limit of e with imaginary number Important part:
$\lim\limits_{x\to\infty} e^{-ix} - e^{-x} $
This is supposed to come out to "$1$" but the way I see it we have $0 - 0$ ...
| Expand the term with the imaginary exponent using Euler's formula:
$$\lim_{x\to\infty} \left[e^{-ix} - e^{-x}\right] = \lim_{x\to\infty} \left[\cos(-x) + i\sin(-x) - e^{-x}\right]=\lim_{x\to\infty} \cos(x) - i\lim_{x\to\infty} \sin(x)$$
which does not converge to either 0 or 1, but instead oscillates around the unit circle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Radical of a homogeneous ideal If $S=\bigoplus_{d\ge 0} S_d$ is a graded ring and $\mathfrak a\subset S$ a homogeneous ideal, I'm trying to prove this implication:
$\sqrt {\mathfrak a}=S_+=\bigoplus_{d\gt 0}S_d\implies S_d \subset \mathfrak a$ for some $d\gt 0$.
My attempt of solution
I know that $x_i^d\in \sqrt {\mathfrak a}$ for $i=0,\ldots,n$ and $d\gt 0$, but I couldn't go further.
I don't have much experience solving graded ring questions.
I really need help.
Thanks a lot.
| I think you are supposed to be assuming Noetherian here, which is what the other users are pointing out.
Let me assume that $\mathfrak{a}\subseteq S:=k[x_0,\ldots,x_n]$ and let you generalize. If $\sqrt{\mathfrak{a}}=S_+$, then for all $i$ there exists $m_i$ such that $x_i^{m_i}\in\mathfrak{a}$. Let $M=(n+1)\max\{m_i\}$. We claim that $S_M\subseteq\mathfrak{a}$. Indeed, it suffices to show that each monomial $x_0^{e_0}\cdots x_n^{e_n}\in \mathfrak{a}$ for $e_0+\cdots+e_n=M$, since these span $S_M$ as a $k$-space. But, note that since $e_0+\cdots+e_n=M$, at least one of the $e_i$'s must be at least as big as $\max\{m_i\}$, and thus at least one of $x_i^{e_i}\in\mathfrak{a}$ from where the conclusion follows.
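The pigeonhole step at the end is easy to test mechanically. A Python sketch with small hypothetical parameters ($n=2$, exponents $m_i\le 3$, so $M=9$):

```python
from itertools import product

# Hypothetical data: n + 1 = 3 variables, with x_i^{m_i} known to lie in the ideal
m = [3, 2, 3]
n_plus_1 = len(m)
M = n_plus_1 * max(m)

# Pigeonhole: every exponent vector (e_0, ..., e_n) with e_0 + ... + e_n = M
# has some coordinate e_i >= max(m) >= m_i, so the monomial lies in the ideal.
for e in product(range(M + 1), repeat=n_plus_1):
    if sum(e) == M:
        assert max(e) >= max(m)
        assert any(e[i] >= m[i] for i in range(n_plus_1))
print("ok")
```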
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Complex integral of an exponent divided by a linear ($\int \frac{e^u}{u-1}$) Here is the question I'm working on:
Evaluate the following integral:
$$ \oint_{|z+1|=1} \frac{\sin \frac{\pi z}{4}}{z^2-1}dz$$
I've tried along the following line. Substitute $sin(z) = \frac{e^z-e^{-z}}{2i}$:
$$ \frac{1}{2i} \int \frac{e^{\pi z/4}-e^{-\pi z/4}}{z^2-1}dz$$
The contour is a circle of radius 1 around $-1$, so substitute $z=-1+e^{it}$, with $dz = ie^{it}dt$:
$$ \frac{1}{2i} \int_0^{2\pi} \cfrac{e^{\pi/4(e^{it}-1)}-e^{-\pi/4(e^{it}-1)}}{(e^{it}-1)^2-1}ie^{it}dt = \frac{1}{2} \int \cfrac{e^{\pi/4(e^{it}-1)}-e^{-\pi/4(e^{it}-1)}}{e^{it}-2}dt$$
A repeat of the same substitution $u = e^{it} - 1$ with $du = ie^{it}dt$ gives:
$$\frac{1}{2} \int \cfrac{e^{\pi u/4}-e^{-\pi u/4}}{u-1}du$$
At this point I've run out of ideas. Any hint or pointer would be appreciated.
| Use the Residue Theorem:
$$\oint\limits_{|z+1|=1}\frac{\sin\frac{\pi z}4}{z^2-1}dz=\frac12\left(\overbrace{\oint\limits_{|z+1|=1}\frac{\sin\frac{\pi z}4}{z-1}dz}^{=0}-\oint\limits_{|z+1|=1}\frac{\sin\frac{\pi z}4}{z+1}dz\right)=$$
$$=\left.-\frac{2\pi i}2\sin\frac{\pi z}4\right|_{z=-1}=\frac{\pi i}{\sqrt2}$$
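One can corroborate the residue computation numerically by parametrizing the contour as $z=-1+e^{it}$ and applying the trapezoidal rule, which converges extremely fast for smooth periodic integrands. A numpy sketch:

```python
import numpy as np

# Parametrize the contour |z+1| = 1 as z(t) = -1 + e^{it}, t in [0, 2π)
N = 4000
t = np.linspace(0.0, 2*np.pi, N, endpoint=False)
z = -1 + np.exp(1j*t)
dz = 1j*np.exp(1j*t)  # dz/dt

f = np.sin(np.pi*z/4)/(z**2 - 1)

# Trapezoidal rule for the contour integral ∮ f(z) dz
integral = np.sum(f*dz)*(2*np.pi/N)

print(integral)  # ≈ pi*1j/sqrt(2) ≈ 2.2214j
assert abs(integral - 1j*np.pi/np.sqrt(2)) < 1e-8
```

The numerical value matches $\frac{\pi i}{\sqrt2}\approx 2.2214i$.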
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Describing co-ordinate systems in 3D for which Laplace's equation is separable Laplace's Equation in 3 dimensions is given by $$\nabla^2f=\frac{ \partial^2f}{\partial x^2}+\frac{ \partial^2f}{\partial y^2}+\frac{ \partial^2f}{\partial z^2}=0$$ and is a very important PDE in many areas of science. One of the usual ways to solve it is by separation of variables. We let $f(x,y,z)=X(x)Y(y)Z(z)$ and then the PDE reduces to 3 independent ODE's of the form $X''(x)+k^2 X(x)=0$ with $k$ a constant.
This method works for a surprising number of coordinate systems. I can use cylindrical coordinates, spherical coordinates, bi-spherical coordinates and more. In fact in 2 dimensions, I can take $\mathbb{R}^2$ to be $\mathbb{C}$, and then any analytic function $f(z)$ maps the Cartesian coordinates of $\mathbb{R}^2$ onto a set of coordinates that is separable in Laplace's equation. This follows from analytic functions also being harmonic functions. For example $f(z)=z^2$ maps the cartesian coordinates to the parabolic coordinates.
So how about in 3 dimensions? Is there a way to describe all coordinate systems such that the Laplacian separates in this way? I have no idea how to start. I think that Confocal Ellipsoidal Coordinates will work but I'm not sure how to verify it. I'm not familiar with differential geometry, so any help is appreciated.
| The Helmholtz equation is separable only in ellipsoidal coordinates (and degenerations like polar coordinates, and Cartesian of course). For Laplace, there are a couple more; see the MathWorld article about Laplace's equation.
A good book source on this subject is Chapter 5 of Morse & Feshbach, part I.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/540955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |