Purpose Of Adding A Constant After Integrating A Function I would like to know the whole purpose of adding a constant, termed the constant of integration, every time we integrate an indefinite integral $\int f(x)dx$. I am aware that this constant "goes away" when evaluating the definite integral $\int_{a}^{b}f(x)dx$. What does that constant have to do with anything? Why is it termed the constant of integration? Where does it come from?
The motivation for asking this question actually comes from solving a differential equation $$x \frac{dy}{dx} = 5x^3 + 4$$ By separation of $dy$ and $dx$ and integrating both sides, $$\int dy = \int\left(5x^2 + \frac{4}{x}\right)dx$$ yields $$y = \frac{5x^3}{3} + 4 \ln(x) + C .$$
I've understood that $\int dy$ represents adding infinitesimal quantities $dy$, yielding $y$, but I'm doubtful about the arbitrary constant $C$.
| There are many great answers here, but I just wanted to chime in with my favorite example of how things can go awry if one forgets about the constant of integration.
Consider
$$\int \sin(2x) dx.$$
We will find antiderivatives in two ways. First, a substitution $u=2x$ yields:
$$\int \frac{\sin(u)}{2}du = -\frac{\cos(u)}{2} = -\frac{\cos(2x)}{2}.$$
Second, we use the identity $\sin(2x)=2\sin(x)\cos(x)$ and a substitution $v=\sin(x)$:
$$\int \sin(2x)dx = \int 2\sin(x)\cos(x)dx = \int 2vdv =v^2 = \sin^2(x).$$
Thus, we have found two antiderivatives of $\sin(2x)$ that are completely different! Namely
$$F(x)=\sin^2(x) \quad \text{and} \quad G(x)=-\frac{\cos(2x)}{2}.$$
Notice that $F(x)=\sin^2(x)\neq -\cos(2x)/2=G(x)$. For instance, $F(0)=\sin^2(0) = 0$ but $G(0)=-\cos(2\cdot 0)/2=-1/2$. So, what happened?
We forgot about the constant of integration, that's what happened. The theory of integration tells us that all antiderivatives differ by a constant. So, if $F(x)$ is an antiderivative, then any other antiderivative $G(x)$ can be expressed as $G(x)=F(x)+C$ for some constant $C$. In particular, our antiderivatives above must differ by a constant. Indeed, the constant $C$ in this case is exactly $C=-\frac{1}{2}$:
$$F(x)+C = F(x) - \frac{1}{2} = \sin^2(x)-\frac{1}{2} = \frac{(1-\cos(2x))}{2}-\frac{1}{2} = -\frac{\cos(2x)}{2} = G(x),$$
where we have used the trigonometric identity $\sin^2(x) = (1-\cos(2x))/2.$
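To make this concrete, here is a quick sanity check (a sketch using sympy; not part of the original argument) that both expressions are antiderivatives of $\sin(2x)$ and differ by exactly the constant $1/2$:

```python
import sympy as sp

x = sp.symbols('x')
F = sp.sin(x)**2
G = -sp.cos(2*x)/2

# Both differentiate to sin(2x)...
assert sp.simplify(sp.diff(F, x) - sp.sin(2*x)) == 0
assert sp.simplify(sp.diff(G, x) - sp.sin(2*x)) == 0

# ...and they differ by the constant C = -1/2, i.e. F - G = 1/2.
assert sp.simplify(F - G) == sp.Rational(1, 2)
```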
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 9,
"answer_id": 7
} |
Simple trigonometry question (angles) I am starting again with trigonometry just for fun and to remember the old days. I was not bad at maths, but I remember nothing about trigonometry...
And I'm missing something in this simple question, and I hope you can tell me what.
One corner of a triangle has a 60º angle, and the length of the two
adjacent sides are in ratio 1:3. Calculate the angles of the other
triangle corners.
So what we have is the main angle, $60^\circ$, and the adjacent sides, which are $20$ meters (meters, for instance). We can calculate the hypotenuse just using $a^2 + b^2 = h^2$. But how to calculate the other angles?
Thank you very much and sorry for this very basic question...
| Since we are only interested in the angles, the actual lengths of the two sides do not matter, as long as we get their ratio right. So we can take the lengths of the adjacent sides to be $1$ and $3$, in whatever units you prefer. If you want the shorter of the two adjacent sides to be $20$ metres, then the other adjacent side will need to be $60$ metres. But we might as well work with the simpler numbers $1$ and $3$.
To compute the length of the third side, we use a generalization of the Pythagorean Theorem called the Cosine Law. Let the vertices of a triangle be $A$, $B$, and $C$, and let the sides opposite to these vertices be $a$, $b$, and $c$. For brevity, let the angle at $A$ be called $A$, the angle at $B$ be called $B$, and the angle at $C$ be called $C$. The Cosine Law says that
$$c^2=a^2+b^2-2ab\cos C.$$
Take $C=60^\circ$, and $a=1$, $b=3$.
Since $\cos(60^\circ)=1/2$, we get
$$c^2=1^2+3^2-2(1)(3)(1/2),$$
so $c^2=7$ and therefore $c=\sqrt{7}$. We now know all the sides.
To find angles $A$ and $B$, we could use the Cosine Law again. We illustrate the procedure by finding $\cos A$.
By the Cosine Law,
$$a^2=b^2+c^2-2bc\cos A.$$
But $a=1$, $b=3$, and by our previous work $c=\sqrt{7}$. It follows that
$$1=9+7-2(3)(\sqrt{7})\cos A,$$
and therefore
$$\cos A= \frac{5}{2\sqrt{7}}.$$
The angle in the interval from $0$ to $180^\circ$ whose cosine is $5/(2\sqrt{7})$ is not a "nice" angle. The calculator (we press the $\cos^{-1}$ button) says that this angle is about $19.1066$ degrees.
Another way to proceed, once we have found $c$, is to use the Sine Law
$$\frac{\sin A}{a}=\frac{\sin B}{b}=\frac{\sin C}{c}.$$
From this we obtain that
$$\frac{\sin A}{1}=\frac{\sqrt{3}/2}{\sqrt{7}}.$$
The calculator now says that $\sin A$ is approximately $0.3273268$, and then the calculator gives that $A$ is approximately $19.1066$ degrees. In the old days, the Cosine Law was not liked very much, and the Sine Law was preferred, because the Sine Law involves only multiplication and division, which can be done easily using tables or a slide rule. A Cosine Law calculation with ugly numbers is usually more tedious.
The third angle of the triangle (angle $B$) can be found in the same way. But it is easier to use the fact that the angles of a triangle add up to $180^\circ$. So angle $B$ is about $100.8934$ degrees.
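For readers who want to reproduce the numbers, here is a small sketch (mine, not the answerer's) that carries out the Cosine Law computation:

```python
import math

# sides adjacent to the 60-degree angle C, in ratio 1:3
a, b, C = 1.0, 3.0, math.radians(60)

c = math.sqrt(a**2 + b**2 - 2*a*b*math.cos(C))               # sqrt(7)
A = math.degrees(math.acos((b**2 + c**2 - a**2) / (2*b*c)))  # ~19.1066
B = 180.0 - 60.0 - A                                         # ~100.8934
print(c, A, B)
```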
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Need help deriving recurrence relation for even-valued Fibonacci numbers. That would be every third Fibonacci number, e.g. $0, 2, 8, 34, 144, 610, 2584, 10946,...$
Empirically one can check that:
$a(n) = 4a(n-1) + a(n-2)$ where $a(-1) = 2, a(0) = 0$.
If $f(n)$ is $\operatorname{Fibonacci}(n)$ (to make it short), then it must be true that $f(3n) = 4f(3n - 3) + f(3n - 6)$.
I have tried the obvious expansion:
$f(3n) = f(3n - 1) + f(3n - 2) = f(3n - 3) + 2f(3n - 2) = 3f(3n - 3) + 2f(3n - 4)$
$ = 3f(3n - 3) + 2f(3n - 5) + 2f(3n - 6) = 3f(3n - 3) + 4f(3n - 6) + 2f(3n - 7)$
... and now I am stuck with the term I did not want. If I add and subtract another $f(3n - 3)$, and expand the $-f(3n-3)$ part, then everything magically works out ... but how should I know to do that? I can prove the formula by induction, but how would one systematically derive it in the first place?
I suppose one could write a program that tries to find the coefficients x and y such that $a(n) = xa(n-1) + ya(n-2)$ is true for a bunch of consecutive values of the sequence (then prove the formula by induction), and this is not hard to do, but is there a way that does not involve some sort of "Reverse Engineering" or "Magic Trick"?
| The definition of $F_n$ is given:
* $F_0 = 0$
* $F_1 = 1$
* $F_{n+1} = F_{n-1} + F_{n}$ (for $n \ge 1$)
Now we define $G_n = F_{3n}$ and wish to find a recurrence relation for it.
Clearly
* $G_0 = F_0 = 0$
* $G_1 = F_3 = 2$
Now we can repeatedly use the definition of $F_{n+1}$ to try to find an expression for $G_{n+1}$ in terms of $G_n$ and $G_{n-1}$.
$$\begin{align*}
G_{n+1}&= F_{3n+3}\\
&= F_{3n+1} + F_{3n+2}\\
&= F_{3n-1} + F_{3n} + F_{3n} + F_{3n+1}\\
&= F_{3n-3} + F_{3n-2} + F_{3n} + F_{3n} + F_{3n-1} + F_{3n}\\
&= G_{n-1} + F_{3n-2} + F_{3n-1} + 3 G_{n}\\
&= G_{n-1} + 4 G_{n}
\end{align*}$$
so this proves that $G$ satisfies the recurrence relation $G_{n+1} = 4G_{n} + G_{n-1}$.
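A quick numerical check of the result (a sketch; the helper `fib` is mine):

```python
def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

G = [fib(3*n) for n in range(12)]   # 0, 2, 8, 34, 144, 610, ...
assert all(G[n+1] == 4*G[n] + G[n-1] for n in range(1, 11))
```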
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 8,
"answer_id": 4
} |
Solving quadratic equation $$\frac{1}{x^2} - 1 = \frac{1}{x} -1$$
Rearranging (multiplying both sides by $x^2$), I get $1-x^2=x-x^2$, and so $x=1$. But the question I'm doing says to find 2 solutions. How would I find the 2nd solution?
Thanks.
| I think it should be emphasised what the salient point is here:
Given the equation
$$
\Phi =\Psi
$$
you may multiply both sides by the same non-zero number $a$ to obtain the equivalent equation
$$
a\Phi =a\Psi.
$$
Multiplying both sides of an equation by 0 may give an equation that's not equivalent to the original equation.
With your equation, eventually you'll get to the point where you have
$$
\tag{1}{1\over x^2}= {1\over x}.
$$
At this point, if you want to "cancel the $x$'s", you could multiply both sides by
$x^2$ as long as $x^2\ne0$. You need to consider what happens when $x=0$ separately.
$x=0$ is not a solution of (1) in this case, so the solutions of (1) are the non-zero solutions of
$$
1=x.
$$
If you multiplied both sides of (1) by $x^3$, the solutions would be the non-zero solutions of
$$
x=x^2.
$$
Your text made an error, most probably, at this stage...
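For what it's worth, a computer algebra system agrees that the equation as posed has only the single solution $x=1$ (a quick sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.solve(sp.Eq(1/x**2 - 1, 1/x - 1), x))   # [1]
```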
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 3
} |
Conditional probability of a general Markov process given by its running maximum process I have a question as follows:
"Let $X$ be a general Markov process, let $M$ be the running maximum process of $X$, and let $T$ be an exponentially distributed random time, independent of $X$.
I learned that there is the following result:
Probability: $P_x(X_T\in dz \mid M_T=y)$ is independent of the starting point $x$ of the process $X$, where $y, z \in \mathbb{R}$."
Is there anyone who knows some references which mention the result above? I heard that this result was found around the seventies, but I haven't found any good reference yet.
Thanks a lot!
| For real-valued diffusion processes, this is essentially a local form of David Williams' path decomposition, and can be deduced from
Theorem A in a paper "On the joint distribution of the maximum and its location for a linear diffusion" by Csaki, Foldes and Salminen
[Ann. Inst. H. Poincare Probab. Statist., vol. 23 (1987) pp. 179--194].
For more general Markov processes, you will need to look into the theory of "last-exit times". Although these are not stopping times, many Markov processes possess a sort of strong Markov property at such times. This theory can be applied to the last time before $T$ that
the process is at level $y$. One place to start might be the paper of Meyer, Smythe and Walsh "Birth and death of Markov processes" in vol. III (pp. 295-305) of the Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability (1972).
See also the work of P.W. Millar from roughly the same time period.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proof that the set of odd positive integers greater than 3 is countable I found one problem which asks the following:
Show that the set of odd positive integers greater than 3 is countable.
At the beginning I was thinking that such numbers could be represented by $2k+1$, where $k>1$; but in the answer paper it was written as $2n+3$, or the general function is
$$f(n)=2n+3$$
and when I was thinking about how to prove that such a set is countable, the answer paper said this function is a one-to-one correspondence from the set of positive integers to the set of positive odd integers greater than 3.
My question is: to prove that a set is countable, is it enough to exhibit a one-to-one correspondence between it and a set known to be countable? If yes, then my lecturer once asked me to prove that the rational numbers are countable, so in this case could I represent the rational numbers by the following function from the set of positive integers:
$$f(n)=\frac{n+1}{n}$$
or maybe $f(n)=\frac{n}{n+1}$? They are both one-to-one correspondences from the set of positive integers to the set of (positive) rational numbers. Please help me: is my logic correct or not?
| If you know that a set $A$ is countable and you demonstrate a bijection $f:A\to B$ then you have also shown that the set $B$ is countable; when $A=\mathbb{Z}^+$ this is the very definition of countable. Both of the functions $2k+1$ and $2n+3$ can be used to show that the set of odds greater than $3$ are countable but the former uses the countable domain of $\{2,3,\dots\}$ instead of $\mathbb{Z}^+=\{1,2,3,\dots\}$.
However, neither of the functions $f(n)=(n+1)/n$ and $f(n)=n/(n+1)$ is a bijection. Try to express the positive rational number $1/3$ in either of these forms and you will find there is no positive integer $n$ that works. In order for a function to be a bijection it must be both injective and surjective; your functions here are not surjective (they do not attain every value in the codomain; some values, like $1/3$, are never reached).
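A small sketch (mine) illustrating both points: $f(n)=2n+3$ enumerates the odd integers greater than $3$, while $n/(n+1)$ never hits $1/3$:

```python
from fractions import Fraction

odds = [2*n + 3 for n in range(1, 11)]
assert odds == list(range(5, 25, 2))   # 5, 7, 9, ..., 23

# g(n) = n/(n+1) misses 1/3, so it is not a surjection onto the rationals
assert all(Fraction(n, n + 1) != Fraction(1, 3) for n in range(1, 10_000))
```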
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Matrix/Vector Derivative I am trying to compute the derivative:$$\frac{d}{d\boldsymbol{\mu}}\left( (\mathbf{x} - \boldsymbol{\mu})^\top\boldsymbol{\Sigma} (\mathbf{x} - \boldsymbol{\mu})\right)$$where the size of all vectors ($\mathbf{x},\boldsymbol{\mu}$) is $n\times 1$ and the size of the matrix ($\boldsymbol{\Sigma}$) is $n\times n$.
I tried to break this down as $$\frac{d}{d\boldsymbol{\mu}}\left( \mathbf{x}^\top\boldsymbol{\Sigma} \mathbf{x} - \mathbf{x}^\top\boldsymbol{\Sigma} \boldsymbol{\mu} - \boldsymbol{\mu}^\top\boldsymbol{\Sigma} \mathbf{x} + \boldsymbol{\mu}^\top\boldsymbol{\Sigma} \boldsymbol{\mu} \right) $$
yielding $$(\mathbf{x} + \boldsymbol{\mu})^\top\boldsymbol{\Sigma} + \boldsymbol{\Sigma}(\boldsymbol{\mu} - \mathbf{x})$$
but the dimensions don't work: $1\times n + n\times 1$. Any help would be greatly appreciated.
-C
| There is a very short and quick way to calculate it correctly. The object $(x-\mu)^T\Sigma(x-\mu)$ is called a quadratic form. It is well known that the derivative of such a form is (see e.g. here),
$$\frac{\partial x^TAx }{\partial x}=(A+A^T)x$$
This works even if $A$ is not symmetric. In your particular example, you use the chain rule as,
$$\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial \mu}=\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial (x-\mu)}\frac{\partial (x-\mu)}{\partial \mu}$$
Thus,
$$\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial (x-\mu)}=(\Sigma +\Sigma^T)(x-\mu)$$
and
$$\frac{\partial (x-\mu)}{\partial \mu}=-I,$$
where $I$ is the $n\times n$ identity matrix.
Combining equations you get the final answer,
$$\frac{\partial (x-\mu)^T\Sigma(x-\mu) }{\partial \mu}=(\Sigma +\Sigma^T)(\mu-x)$$
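One can check the final formula against finite differences (a numerical sketch with numpy, assuming a generic non-symmetric $\Sigma$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Sigma = rng.standard_normal((n, n))   # deliberately not symmetric
x = rng.standard_normal(n)
mu = rng.standard_normal(n)

f = lambda m: (x - m) @ Sigma @ (x - m)
analytic = (Sigma + Sigma.T) @ (mu - x)

# central finite differences along each coordinate direction
eps = 1e-6
numeric = np.array([(f(mu + eps*e) - f(mu - eps*e)) / (2*eps) for e in np.eye(n)])
assert np.allclose(analytic, numeric, atol=1e-5)
```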
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 0
} |
How many smooth functions are non-analytic? We know from example that not all smooth (infinitely differentiable) functions are analytic (equal to their Taylor expansion at all points). However, the examples on the linked page seem rather contrived, and most smooth functions that I've encountered in math and physics are analytic.
How many smooth functions are not analytic (in terms of measure or cardinality)? In what situations are such functions encountered? Are they ever encountered outside of real analysis (e.g. in physics)?
| In terms of cardinality, there are the same number of smooth and analytic functions, $2^{\aleph_0}$. The constant functions are enough to see that there are at least $2^{\aleph_0}$ analytic functions. The fact that a continuous function is determined by its values on a dense subspace, along with my presumption that you are referring to smooth functions on a separable space, imply that there are at most $(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0}$ smooth functions.
Added: In light of the question edit, I should mention that the cardinality of the set of smooth nonanalytic functions is also $2^{\aleph_0}$. This can be seen by taking the constant multiples of some bump function.
I don't know about measures, but analytic functions are a very special subclass of smooth functions (something which I'm sorry to leave vague at the moment, but hopefully someone will give a better answer here (Added: Now Dave L. Renfro has)). They are also important, useful, and relatively easy to work with, which is part of why they are so prevalent in the math and physics you have seen.
Where are they encountered? Bump functions are important in differential equations and manifolds, so I would guess they're important in physics. Bump functions are smooth and not analytic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 0
} |
Why is the pullback completely determined by $d f^\ast = f^\ast d$ in de Rham cohomology? Fix a smooth map $f : \mathbb{R}^m \rightarrow \mathbb{R}^n$. Clearly this induces a pullback $f^\ast : C^\infty(\mathbb{R}^n) \rightarrow C^\infty(\mathbb{R}^m)$. Since $C^\infty(\mathbb{R}^n) = \Omega^0(\mathbb{R}^n)$ (the space of zero-forms) by definition, we consider this as a map $f^\ast : \Omega^0(\mathbb{R}^n) \rightarrow \Omega^0(\mathbb{R}^m)$. We want to extend $f^\ast$ to the rest of the de Rham complex in such a way that $d f^\ast = f^\ast d$.
Bott and Tu claim (Section I.2, right before Prop 2.1), without elaboration, that this is enough to determine $f^\ast$ . I can see why this forces e.g.
$\displaystyle\sum_{i=1}^n f^\ast \left[ \frac{\partial g}{\partial y_i} d y_i \right] = \sum_{i=1}^n f^* \left[ \frac{\partial g}{\partial y_i}\right] d(y_i \circ f)$,
but I don't see why this forces each term of the LHS to agree with each term of the RHS -- it's not like you can just pick some $g$ where $\partial g/\partial y_i$ is some given function and the other partials are zero.
| $\newcommand\RR{\mathbb{R}}$I don't have the book here, but it seems you are asking why there is a unique extension of $f^*:\Omega^0(\RR^n)\to\Omega^0(\RR^m)$ to an appropriate $\overline f^*:\Omega^\bullet(\RR^n)\to\Omega^\bullet(\RR^m)$ such that $f^*d=df^*$. Here appropriate should probably mean that the map $\overline f^*$ be a morphism of graded algebras.
Now notice that since $f^*$ is fixed on $\Omega^0(\RR^n)$, the commutation relation with $d$ tells us that it is also fixed on the subspace $d(\Omega^0(\RR^n))\subseteq\Omega^1(\RR^n)$. The uniqueness follows from the fact that the subspace $\Omega^0(\RR^n)\oplus d(\Omega^0(\RR^n))$ of $\Omega^\bullet(\RR^n)$ generates the latter as an algebra.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
$p(x)$ divided by $x-c$ has remainder $p(c)$? [Polynomial Remainder Theorem] This is from Pinter, A Book of Abstract Algebra, p.265.
Given $p(x) \in F[x]$ where $F$ is a field, I would like to show that $p(x)$ divided by $x-c$ has remainder $p(c)$.
This is easy if $c$ is a root of $p$, but I don't see how to prove it if $c$ is not a root.
| By the division algorithm, if $a(x)$ and $b(x)$ are any polynomials, and $a(x)\neq 0$, then there exist unique $q(x)$ and $r(x)$ such that
$$b(x) = q(x)a(x) + r(x),\qquad r(x)=0\text{ or }\deg(r)\lt \deg(a).$$
Let $b(x) = p(x)$, and $a(x)=x-c$. Then $r(x)$ must be constant (since it is either zero or of degree strictly smaller than one), so
$$b(x) = q(x)(x-c) + r.$$
Now evaluate at $x=c$.
Note. I find it strange that you say that this is "easy if $c$ is a root of $p(x)$". The Factor Theorem (that $x-c$ divides $p(x)$ when $c$ is a root of $p(x)$) is a corollary of this result. How exactly do you prove it without this?
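A one-line check of the Remainder Theorem on a sample polynomial (a sketch using sympy; the polynomial is an arbitrary example):

```python
import sympy as sp

x, c = sp.symbols('x c')
p = 3*x**3 - 2*x + 5                       # arbitrary example polynomial

r = sp.rem(p, x - c, x)                    # remainder on division by x - c
assert sp.expand(r - p.subs(x, c)) == 0    # the remainder equals p(c)
```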
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Why do we reverse inequality sign when dividing by negative number? We all learned in our early years that when dividing both sides by a negative number, we reverse the inequality sign.
Take
$-3x < 9$
To solve for $x$, we divide both sides by $-3$ and get
$$x > -3.$$
Why is the inequality reversed? What is going on, in terms of the number line, that will help me understand the concept better?
| Multiplying or dividing an inequality by $-1$ is exactly the same thing as moving each term to the other side. But then, if you switch sides for all terms, each term faces the opposite "side" of the inequality sign...
For example:
$2x < -3$
Moving them on the other side yields:
$3 < -2x$ which is the same as $-2x > 3$...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 9,
"answer_id": 0
} |
What is the result of $\lim\limits_{x \to 0}(1/x - 1/\sin x)$? Find the limit:
$$\lim_{x \rightarrow 0}\left(\frac1x - \frac1{\sin x}\right)$$
I am not able to find it because I don't know how to prove or disprove that $0$ is the answer.
| Since everybody was 'clever', I thought I'd add a method that doesn't really require much thinking if you're used to asymptotics.
The power series for $\sin x$ begins
$$\sin x = x + O(x^3)$$
We can compute the inverse of this power series without trouble. In great detail:
$$\begin{align}\frac{1}{\sin x} &= \frac{1}{x + O(x^3)}
\\ &= \frac{1}{x} \left( \frac{1}{1 - O(x^2)} \right)
\\ &= \frac{1}{x} \left(1 + O(x^2) \right)
\\ &= \frac{1}{x} + O(x)
\end{align}$$
going from the second line to the third line is just the geometric series formula. Anyways, now we can finish up:
$$\frac{1}{x} - \frac{1}{\sin x} = O(x)$$
$$ \lim_{x \to 0} \frac{1}{x} - \frac{1}{\sin x} = 0$$
If we wanted, we could get more precision: it's not hard to use the same method to show
$$ \frac{1}{\sin x} = \frac{1}{x} + \frac{x}{6} + O(x^3) $$
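The asymptotic claims are easy to confirm with a computer algebra system (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x')
expr = 1/x - 1/sp.sin(x)

print(sp.limit(expr, x, 0))       # 0
print(sp.series(expr, x, 0, 4))   # -x/6 - 7*x**3/360 + O(x**4)
```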
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 8,
"answer_id": 0
} |
How to show if $ \lambda$ is an eigenvalue of $AB^{-1}$, then $ \lambda$ is an eigenvalue of $ B^{-1}A$? Statement: If $ \lambda$ is an eigenvalue of $AB^{-1}$, then $ \lambda$ is an eigenvalue of $ B^{-1}A$ and vice versa.
One way to prove it:
We have $B(B^{-1}A ) B^{-1} = AB^{-1}$. Assuming $ \lambda$ is an eigenvalue of $AB^{-1}$, we have
$$\begin{align*}
\det(\lambda I - AB^{-1}) &= \det( \lambda I - B( B^{-1}A ) B^{-1} )\\
&= \det( B(\lambda I - B^{-1}A ) B^{-1})\\
&= \det(B) \det\bigl( \lambda I - B^{-1}A \bigr) \det(B^{-1})\\
&= \det(B) \det\bigl( \lambda I - (B^{-1}A )\bigr) \frac{1}{ \det(B) }\\
&= \det( \lambda I - B^{-1}A ).
\end{align*}$$
It follows that $ \lambda$ is an eigenvalue of $ B^{-1}A.$ The other side of the lemma can also be proved similarly.
Is there another way how to prove the statement?
| A shorter way of seeing this would be to observe that if
$$
(AB^{-1})x=\lambda x
$$
for some non-zero vector $x$, then by multiplying that equation by $B^{-1}$ (from the left) we get that
$$
(B^{-1}A)(B^{-1}x)=\lambda (B^{-1}x).
$$
In other words $(B^{-1}A)y=\lambda y$ for the non-zero vector $y=B^{-1}x$. This process is clearly reversible.
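A quick numerical illustration of the statement (a sketch; the random matrices are generically invertible, and the spectra are sorted for comparison):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

e1 = np.sort_complex(np.linalg.eigvals(A @ np.linalg.inv(B)))
e2 = np.sort_complex(np.linalg.eigvals(np.linalg.inv(B) @ A))
assert np.allclose(e1, e2)   # the two spectra agree
```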
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Serving customers algorithm Well, I have a problem with a Christmas assignment and my teacher is not responding (maybe he is skiing somewhere now), so I will need some help.
The algorithm is about an office and the waiting time of the customers. We have one office that has to serve $n$ customers $a_1, a_2,\cdots ,a_n$. We assume that the serving time $t(a_j)$ for each customer $a_j$ is known. Let $a_j$ be served after the $k$ customers $a_{i_{1}},a_{i_{2}},\cdots ,a_{i_{k}}$. His waiting time $T(a_j)$ is equal to
$$T(a_j) = t(a_{i_{1}})+t(a_{i_{2}})+\cdots +t(a_{i_{k}})+t(a_{j})$$
I want an efficient algorithm that will compute the best serving order, i.e. the one that minimizes the total waiting time
$$\sum_{j=1}^nT(a_j)$$
My first thought is that the customers with the smallest serving time have to go first, and the obvious solution is to apply a sorting algorithm.
Am I wrong?
| Problems of this kind belong to the area of operations research known as scheduling problems (scheduling theory). Here is a short bibliography of books that deal with this topic: http://www.york.cuny.edu/~malk/biblio/scheduling2-biblio.html There is a lot of nice mathematics involved.
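Incidentally, the asker's intuition is correct: serving customers in increasing order of serving time (the "shortest processing time first" rule) minimizes the total waiting time, by a standard exchange argument. A minimal sketch:

```python
def total_waiting_time(times):
    total, elapsed = 0, 0
    for t in times:
        elapsed += t          # this customer finishes at time `elapsed`
        total += elapsed      # and has waited exactly that long
    return total

serve = [5, 1, 3, 2]
print(total_waiting_time(sorted(serve)))  # 21: shortest-first is optimal
print(total_waiting_time(serve))          # 31: the given order is worse
```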
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/94976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
When is $[0,1]^K$ submetrizable or even metrizable? Let $I=[0,1]$ and let $K$ be a compact space. Can the function space $I^K$ be submetrizable, or even metrizable? In other words, in general, if $I^A$ is to be submetrizable (metrizable) for some space $A$, what conditions should $A$ satisfy?
| If $A$ is compact, $I^A$ is metrizable with the metric being the uniform norm. That is, $d(f,g):=\sup_{a\in A} d(f(a),g(a))$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof that $\binom{2\phi(r)}{\phi(r)+1} \geq 2^{\phi(r)}$ I am trying to prove the following:
$$\binom{2\phi(r)}{\phi(r)+1} \geq 2^{\phi(r)}$$
with $r \geq 3$ and $r \in \mathbb{P}$. Do I have to do induction over $r$, or are there any better ideas?
Any help is appreciated.
| Combinatorial proof of ${2n \choose n+1} \geq 2^n$ where $n \geq 2$:
Let's take the set $\{x_1,y_1,\dots,x_{n-2},y_{n-2},a,b,c,d\}$, which has $2n$ elements; select three elements out of $\{a,b,c,d\}$ and, for each $i$, a single element of $\{x_i,y_i\}$; you'll select $n+1$ elements in total. So
${2n \choose n+1} \geq {4 \choose 3} 2^{n-2}=2^n$
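A brute-force confirmation of the inequality for small $n$ (a sketch):

```python
from math import comb

assert all(comb(2*n, n + 1) >= 2**n for n in range(2, 200))
```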
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
partial sum involving factorials Here is an interesting series I ran across.
It is a binomial-type identity.
$\displaystyle \sum_{k=0}^{n}\frac{(2n-k)!\cdot 2^{k}}{(n-k)!}=4^{n}\cdot n!$
I tried all sorts of playing around, but could not get it to work out.
This works out the same as $\displaystyle 2^{n}\prod_{k=1}^{n}2k=2^{n}\cdot 2^{n}\cdot n!=4^{n}\cdot n!$
I tried equating these somehow, but I could not get it. I even wrote out the series.
There were cancellations, but it did not look like the product of the even numbers.
$\displaystyle \frac{(2n)!}{n!}+\frac{(2n-1)!\cdot 2}{(n-1)!}+\frac{(2n-2)!2^{2}}{(n-2)!}+\cdot\cdot\cdot +n!\cdot 2^{n}=4^{n}\cdot n!$.
How can the closed form be derived from this? I bet I am just being thick. I see the last term is nearly the result except for being multiplied by $2^{n}$. I see that if the factorials are written out, $2n(2n-1)(2n-2)(2n-3)\dots$ for example, then $2$'s factor out of $2n,\ 2n-2,\ \ldots$ (the even terms) in the numerator.
There is even a general form I ran through Maple. It actually gave a closed from for it as well, but I would have no idea how to derive it.
$\displaystyle \sum_{k=0}^{n}\frac{(2n-k)!\cdot 2^{k}\cdot (k+m)!}{(n-k)!\cdot k!}$.
In the above case, m=0. But, apparently there is a closed form for $m\in \mathbb{N}$ as well.
Maple gave the solution in terms of Gamma: $\displaystyle \frac{\Gamma(1+m)4^{n}\Gamma(n+1+\frac{m}{2})}{\Gamma(1+\frac{m}{2})}$
Would anyone have an idea how to proceed with this? Perhaps writing it in terms of Gamma and using some identities? Thanks very much.
| This identity can be re-written as
$$\sum_{k=0}^n {2n-k \choose n-k} 2^k = 4^n.$$
Start from
$${2n-k \choose n-k} =
\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2n-k}}{z^{n-k+1}} \; dz.$$
This yields for the sum
$$\frac{1}{2\pi i} \int_{|z|=\epsilon}
\sum_{k=0}^n \frac{(1+z)^{2n-k}}{z^{n-k+1}} 2^k \; dz
\\ = \frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n}}{z^{n+1}}
\sum_{k=0}^n \frac{(2z)^{k}}{(1+z)^k} \; dz.$$
We can extend the sum to infinity because when $n-k+1 \le 0$ or $k \ge n+1$ the integrand of the defining integral of the binomial coefficient is an entire function and the integral is zero. This yields
$$\frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n}}{z^{n+1}}
\sum_{k=0}^\infty \frac{(2z)^{k}}{(1+z)^k} \; dz
\\ = \frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n}}{z^{n+1}} \frac{1}{1-2z/(1+z)} \; dz
\\ = \frac{1}{2\pi i} \int_{|z|=\epsilon}
\frac{(1+z)^{2n+1}}{z^{n+1}} \frac{1}{1-z} \; dz.$$
Thus the value of the integral is given by
$$[z^n] \frac{1}{1-z} (1+z)^{2n+1}
= \sum_{q=0}^n {2n+1\choose q} = \frac{1}{2} 2^{2n+1} = 4^n.$$
A trace as to when this method appeared on MSE and by whom starts at this
MSE link.
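Independently of the derivation, the identity (the $m=0$ case) is easy to verify numerically; a quick sketch:

```python
from math import factorial

for n in range(8):
    # each term is an exact integer, so floor division is safe
    lhs = sum(factorial(2*n - k) * 2**k // factorial(n - k) for k in range(n + 1))
    assert lhs == 4**n * factorial(n)
```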
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
Side-stepping contradiction in the proof of: if $ab = 0$ then $a$ or $b$ is $0$. Suppose we need to show a field has no zero divisors - that is, prove the title - then we head off exactly like the common argument in the reals (unsurprisingly, as they themselves are a field).
What I want to know is: how do we prove this not by contradiction?
I was talking to some philosophers about - again not so surprisingly - logic, and they seemed to have an issue with argument by contradiction. I admit I'm not a huge fan (of it) myself. The gist of it was that classical logic (where proof by contradiction / the law of excluded middle is valid) is a really, really, really strong form of logic; a much weaker type of logic is something called intuitionistic logic (I only caught the name), but they said the principle did not hold there.
Now, if we take something like the field axioms - or the reals (e.g. order in the bag...) - how can we prove in this new logic that there are no zero divisors? Or, more precisely, how can we avoid contradiction?
| Let $m,n\in\mathbb{N}$ such that $m,n>0$ (I subscribe to $0\in\mathbb{N}$ but it really doesn't matter here). It can be shown by induction that $mn\neq 0$. That is, that $mn>0$.
Now, let $a,b\in\mathbb{Z}$. If $ab=0$, then $|ab|=|0|=0$. Therefore $|ab|>0$ implies $ab\neq 0$.
Suppose $a,b\neq 0$. Then $|a|=m>0$ and $|b|=n>0$. By the above, $|a|\,|b|=|ab|=mn>0$. Hence $ab\neq 0$. Then if $ab=0$, we have $a=0$ or $b=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
} |
Clarifying the definition of "unstable" I would appreciate a definition clarification.
If a numerical method is "unstable", does it mean that if we introduce a small random error in one of the steps, the error will be magnified greatly after further steps? Is this true for all unstable algorithms, or are there some where the random error is never made significant, say with respect to the error of the method itself?
| We say a method is stable when it is capable of controlling errors introduced in each computation. Stability allows the method to converge to a certain solution. Here's a simple example:
\begin{equation}
u_t = -u_x
\end{equation}
Consider a set of equally spaced nodes on the $x$-axis in 1D. We let $U_i$ denote the approximate value of the function $u(x)$ at the $i^{th}$ node. We use Forward Euler in time and Centered Differences in space:
\begin{equation}
U_i^{n+1} = U_i^n - \frac{\Delta t}{2\Delta x} (U^n_{i+1} - U^n_{i-1})
\end{equation}
Now assume that at some iteration (maybe even in the initial condition) the numerical solution has an oscillatory form. For simplicity assume $U_i = (-1)^i x_i$. Now you can clearly see that no matter what values of $\Delta t, \Delta x$ you choose, negative values of the function will get smaller and positive values will grow larger, which eventually leads to unacceptable results.
This is only a simple explanation. For further reading I advise you to take a look at Computational Fluid Dynamics Vol. I, by Hoffmann and Chiang. As you can see, this doesn't only happen with random errors introduced intentionally; it can happen whenever there are errors or oscillations in the solution.
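Here is a minimal sketch of this instability (my example, not from the book): FTCS for $u_t = -u_x$ with periodic boundaries. A tiny amount of noise in otherwise smooth data explodes after a few hundred steps, even though the time step looks harmless:

```python
import numpy as np

nx = 100
dx, dt = 1.0/nx, 0.5/nx          # Courant number dt/dx = 0.5
x = np.arange(nx) * dx
rng = np.random.default_rng(0)
U = np.sin(2*np.pi*x) + 1e-6*rng.standard_normal(nx)  # smooth data + tiny noise

for _ in range(500):
    # forward Euler in time, centered differences in space
    U = U - dt/(2*dx) * (np.roll(U, -1) - np.roll(U, 1))

print(np.max(np.abs(U)))   # astronomically large: the noise has exploded
```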
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
An inequality about maximal function Consider the function on $\mathbb R$ defined by
$$f(x)=\begin{cases}\frac{1}{|x|\left(\log\frac{1}{|x|}\right)^2} & |x|\le \frac{1}{2}\\
0 & \text{otherwise}\end{cases}$$
Now suppose $f^*$ is the maximal function of $f$, then I want to show the inequality $f^*(x)\ge \frac{c}{|x|\left(\log\frac{1}{|x|}\right)}$ holds for some $c>0$ and all $|x|\le \frac{1}{2}$.
But I don't know how to prove it. Can anyone give me some hints?
Thanks very much.
| By definition
$$
f^*(x)=\sup_{B\in \text{Balls}(x)}\frac{1}{\mu(B)}\int\limits_B |f(y)|d\mu(y)\qquad(1)
$$
where $\text{Balls}(x)$ is the set of all closed balls containing $x$. We can express $f^*$ in another form:
$$
f^*(x)=\sup_{\alpha\leq x\leq\beta}\frac{1}{\beta-\alpha}\int\limits_{\alpha}^{\beta} |f(y)|d\mu(y)
$$
Consider $0< x\leq1/2$. Obviously
$$
f^*(x)=\sup_{\alpha\leq x\leq\beta}\frac{1}{\beta-\alpha}\int\limits_{\alpha}^{\beta} |f(y)|d\mu(y)\geq\frac{1}{x-0}\int\limits_{0}^{x}|f(y)|d\mu(y)=-\frac{1}{x\log x}
$$
Thus $f^*(x)\geq\frac{1}{x\log\left(\frac{1}{x}\right)}$ for $0<x\leq 1/2$. Since $f$ is even, so is $f^*$, hence the inequality
$$
f^*(x)\geq\frac{1}{|x|\log \left(\frac{1}{|x|}\right)}
$$
holds for all $0<|x|\leq 1/2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Meaning of $f:[1,5]\to\mathbb R$ I know $f:[1,5]\to\mathbb R$ means $f$ is a function from $[1,5]$ to $\mathbb R$. I am just a bit unclear now on the exact interpretation of "to $\mathbb R$". Is $1\le x\le 5$ the domain? And is $\mathbb R$ the codomain (or image?)?
Is my interpretation in words --- $f$ is a function which takes a number $1\le x\le 5$ and maps it into the real numbers $\mathbb R$ --- correct?
Suppose we take $f(x)=x^2$ and $x=2$; does $f:[1,5]\to\mathbb R$ hold? The function then gives us $4$, which is $\in \mathbb R$.
| The notation
$$
f: [1,5] \rightarrow \mathbb{R}
$$
means that $f$ is a function whose domain is taken to be the interval $[1,5]$ and whose codomain is $\mathbb{R}$ (i.e. all the outputs of $f$ fall into $\mathbb{R}$). It makes no claims about surjectivity or injectivity; you must analyze the function itself to decide that.
To address your example, you define
$$
f: [1,5] \rightarrow ?
$$
$$
f(x) = x^2,
$$
where the question mark means we aren't sure what to put there yet. Since the square of any number in the interval $[1,5]$ is a real number, $\mathbb{R}$ is indeed an acceptable codomain for $f$, so we could write
$$
f: [1,5] \rightarrow \mathbb{R}.
$$
Notice that, as we have defined it, $f$ is not surjective, since some numbers in $\mathbb{R}$ cannot be reached by $f$ (like 36).
Looking a little closer, we might notice that the square of any number in the interval $[1,5]$ falls in the interval $[1,25]$. Thus, we would also be correct in writing
$$
f: [1,5] \rightarrow [1,25].
$$
After this change in codomain, $f$ becomes surjective, since every number in $[1,25]$ can be reached by $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Proof of $f \in C_C(X)$ where $X$ is a metric space implies $f$ is uniformly continuous Can you tell me if the following proof is correct?
Claim:
If $f$ is a continuous and compactly supported function from a metric space $X$ into $\mathbb{R}$ then $f$ is uniformly continuous.
Proof:
The proof is in two parts.
First we want to show that $f$ is uniformly continuous on $K := \operatorname{supp}{f}$:
Let $\varepsilon > 0$.
Because $f$ is continuous we have that for each $x$ in $K$ there is a $\delta_x$ such that for all $y$ with $d(x,y) < 2 \delta_x$ we have $|f(x) - f(y)| < \varepsilon$ and because $\{ B(x, \frac{\delta_x}{2}) \}_{x \in K}$ is an open cover of $K$ there is a finite subcover which we denote $\{ B(x_i, \frac{\delta_i}{2}) \}_{i=1}^n$.
Define $\delta := \min_i \frac{\delta_i}{2}$ and let $x$ and $y$ be any two points in $K$ with $d(x,y) < \delta$. $\{ B(x_i, \frac{\delta_i}{2}) \}_{i=1}^n$ is a cover so there exists an $i$ such that $x$ is in $B(x_i, \frac{\delta_i}{2})$ which means that $d(x,x_i) < \frac{\delta_i}{2}$. Then $d(x_i ,y) \leq d(x_i ,x) + d(x,y) < \frac{\delta_i}{2} + \delta \leq \delta_i$ hence $y$ is also in $B(x_i, \delta_i)$.
Since $d(x_i,y) < \delta_i$ and $d(x, x_i) < \delta_i$ we have $|f(x) - f(y)| \leq |f(x) - f(x_i)| + |f(x_i) - f(y)| < 2 \varepsilon$.
Next we want to show that if $f$ is uniformly continuous on $K$ then it is uniformly continuous on all of $X$:
Let $\varepsilon > 0$. For any two points $x$ and $y$ we're done if either both are in $K$ or both are outside $K$ so let $x \in X \setminus K$ and $y \in K$ with $d(x,y) < \delta$. Then there is an $i$ such that $y$ is in $B(x_i, \frac{\delta_i}{2})$. Then $d(x,x_i) \leq d(x,y) + d(y,x_i) < \delta_i$ and hence $|f(x) - f(y)| \leq |f(x) - f(x_i)| + |f(x_i) - f(y)| < 2 \varepsilon$.
Is it necessary to prove this in two parts, or is the second part "obvious" and should it be left out?
Thanks for your help.
| That looks good except for the correction that t.b. pointed out. In the spirit of Henning Makholm's comment, here is a "canned theorem" approach.
A continuous function on a compact metric space is uniformly continuous, so $f|_K$ is uniformly continuous. Let $\varepsilon>0$ be given. Then $K_\varepsilon:=\{x:|f(x)|\geq \varepsilon\}$ is a closed subset of $K$, hence compact, and $\{x:f(x)=0\}$ is a closed set disjoint from $K_\varepsilon$, so there is a positive distance $\delta_1$ between $K_\varepsilon$ and $\{x:f(x)=0\}$. Let $\delta_2$ be such that if $x$ and $y$ are in $K$ and $d(x,y)<\delta_2$, then $|f(x)-f(y)|<\varepsilon$. Let $\delta=\min\{\delta_1,\delta_2\}$.
If $x$ and $y$ are in $X$ and $d(x,y)<\delta$, then:
* $x$ and $y$ are both in $K$, and since $d(x,y)<\delta_2$, we have $|f(x)-f(y)|<\varepsilon$; or
* one of $x$ or $y$ is not in $K$. WLOG suppose $x$ is not in $K$. Then $f(x)=0$, and since $d(x,y)<\delta_1$, $y$ is not in $K_\varepsilon$, meaning $|f(x)-f(y)|=|f(y)|<\varepsilon$. $\square$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A question on the Sylow subgroups of a normal subgroup Let $H$ be a normal subgroup of a finite group $G$, and let $p$ be a prime number dividing $|H|$. If $P$ is a $p$-Sylow subgroup of $H$, how can I prove that $G=HN_G(P)$, where $N_G(P)$ is the normalizer of $P$ in $G$?
| If $g\in G$, then $gHg^{-1}=H$, and so $gPg^{-1}\subseteq H$. Since $gPg^{-1}$ is a $p$-Sylow subgroup of $H$, by Sylow's Theorems we know that $gPg^{-1}$ is conjugate to $P$ in $H$. That is, there exists $h\in H$ such that $hPh^{-1} = gPg^{-1}$. Therefore, $g^{-1}hPh^{-1}g = P$, so $h^{-1}g\in N_G(P)$, and hence $g = h(h^{-1}g)\in HN_G(P)$. Since $g$ was arbitrary, $G=HN_G(P)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dedekind domain with a finite number of prime ideals is principal I am reading a proof of this result that uses the Chinese Remainder Theorem on (the finitely many) prime ideals $P_i$. In order to apply the CRT we should assume that the prime ideals are pairwise coprime, i.e. that the ring equals $P_h + P_k$ for $h \neq k$, but I can't see why. How does it follow?
| Hint $\ $ Nonzero prime ideals are maximal, hence comaximal $\, P + Q\ =\ 1\, $ if $\, P\ne Q.$
Another (perhaps more natural) way to deduce that semi-local Dedekind domains are PIDs is to exploit the local characterization of invertibility of ideals. This yields a simpler yet more general result, see the theorem below from Kaplansky, Commutative Rings. A couple theorems later is the fundamental result that a finitely generated ideal in a domain is invertible iff it is locally principal. Therefore, in Noetherian domains, invertible ideals are global generalizations of principal ideals. To best conceptually comprehend such results it is essential to understand the local-global perspective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 0
} |
software for simplifying algebraic expressions I have many huge algebraic expressions such as:
$$\frac{8Y}{1+x}-\frac{(1-Y)}{x}+\frac{Kx(1+5x)^{3/5}}{2}$$
where $\ Y=\dfrac{Kx(1+x)^{n+2}}{(n+4)(1+5x)^{2/5}}+\dfrac{7-10x-x^2}{7(1+x)^2}+\dfrac{Ax}{(1+5x)^{2/5}(1+x)^2}\ $ and $A,n$ are constants.
Simplifying these expressions by hand is taking me a lot of time, and there is also the danger of making a mistake. I am looking for free software with which I can simplify these expressions. Does anyone have any recommendations?
| Note that if you set $\rm\ z = (5x+1)^{1/5}\ $ then your computations reduce to rational function arithmetic combined with the rewrite rule $\rm\: z^5\ \to\ 5x+1\ $ with the following expressions
$$\frac{8Y}{1+x}-\frac{(1-Y)}{x}+\frac{Kxz^3}{2}$$
where $\ Y\ =\ \dfrac{Kx(1+x)^{n+2}}{(n+4)z^2}+\dfrac{7-10x-x^2}{7(1+x)^2}+\dfrac{Ax}{(z(1+x))^2}\ $ and $A,n$ are constants.
This is so simple that it can be done by hand. When using computer algebra systems you need to be sure that they can effectively compute with algebraic functions, or that they can effectively handle said rewrite rule implementing this simple special case. For example, in Macsyma (or Maxima, e.g. in Sage) one may use $\rm\:radcan\:$ (RADical CANonicalize) or, alternatively, set $\rm\:algebraic:true\:$ and do $\rm\:tellrat(\:z^5 =\: 5*x+1)\ $ and then employ the $\rm\:rat\:$ function to normalize such "rational" expressions.
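For those without Macsyma/Maxima, here is a rough analogue of the same workflow in sympy (a sketch; the approach and names are mine, and the rewrite rule is applied by a plain substitution):

```python
import sympy as sp

x, z, A, K, n = sp.symbols('x z A K n')

# z stands in for (5x+1)^(1/5)
Y = (K*x*(1 + x)**(n + 2)/((n + 4)*z**2)
     + (7 - 10*x - x**2)/(7*(1 + x)**2)
     + A*x/(z**2*(1 + x)**2))
expr = 8*Y/(1 + x) - (1 - Y)/x + K*x*z**3/2

combined = sp.together(expr)              # one rational expression in x and z
combined = combined.subs(z**5, 5*x + 1)   # the rewrite rule z^5 -> 5x+1
print(sp.simplify(combined))
```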
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Isomorphism of quotient modules implies isomorphism of submodules? Let $A$ be a commutative ring, $M$ an $A$-module and $N_1, N_2$ two submodules of $M$.
If we have $M/N_1 \cong M/N_2$, does this imply $N_1 \cong N_2$?
This seems so trivial, but I just don't see a proof... Thanks!
| The implication is false for all commutative non-zero rings $A$.
Indeed, just take $M=\oplus_{i=0}^{i=\infty} A$ , $N_1=A\oplus 0\oplus0...$ and $N_2=A\oplus A\oplus 0\oplus 0...$.
Since $N_1$ is isomorphic to $A$ and $N_2$ is isomorphic to $A^2$, they are not isomorphic.
However $M/N_1$ and $M/N_2$ are isomorphic because they are both isomorphic to $M$.
[To see that $A$ and $A^2$ are not isomorphic as $A$-modules the standard trick is to reduce to the case where $A$ is a field by tensoring with $A/\mathfrak m$, where $\mathfrak m$ is some maximal ideal in $A$]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Odds of guessing suit from a deck of cards, with perfect memory While teaching my daughter why drawing to an inside straight is almost always a bad idea, we stumbled upon what I think is a far more difficult problem:
You have a standard 52-card deck with 4 suits and I ask you to guess the suit of the top card. The odds of guessing the correct suit are obviously 1 in 4. You then guess again, but the first card is not returned to the deck. You guess a suit other than that of the first card drawn, and the odds are 13/51, somewhat better than 1 in 4.
Continuing through the deck, your odds continually change (never worse than 1 in 4, and definitely 100% for the last card). What are your overall odds for any given draw over the course of 52 picks?
Can this be calculated? Or do you need to devise a strategy and write a computer program to determine the answer? Does this type of problem have a name?
Dad, and to a much lesser extent daughter, await your thoughts!
| "overall odds for any given draw over the course of 52 picks"
If I rephrased your question as "how much should you be willing to pay to play the game where I will show you $n$ cards out of 52 and if you guess the next remaining card then I give you a dollar" would an answer to this question be suitable? Just to be clear, in this game you would have to pay up front before you see any cards (although you know the number $n$ beforehand), and you should be willing to pay up to a quarter when $n=0$ and up to a dollar when $n=52-1$. This is not really an answer, I just wanted to understand the question.
Alternatively, would you allow the player to see the $n$ cards (rather than just the number $n$) before deciding how much to pay? Or on the other hand would you not even allow $n$ to be known, but you would require the player to decide how much to pay and then $n$ is chosen uniformly at random between 0 and 51 at the beginning of the game?
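Since the question mentions writing a computer program: here is a Monte Carlo sketch (mine) of the natural perfect-memory strategy, always guessing a suit with the most cards remaining. It estimates the per-draw success probability:

```python
import random
from collections import Counter

def guess_probs(trials=20_000, seed=0):
    rng = random.Random(seed)
    deck = [s for s in range(4) for _ in range(13)]
    hits = [0]*52
    for _ in range(trials):
        rng.shuffle(deck)
        left = Counter({s: 13 for s in range(4)})
        for i, card in enumerate(deck):
            hits[i] += (max(left, key=left.get) == card)  # most-remaining suit
            left[card] -= 1
    return [h/trials for h in hits]

p = guess_probs()
print(p[0], p[51])   # ~0.25 for the first draw, exactly 1.0 for the last
```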
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/95968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 2
} |
Probability of an odd number in 10/20 lotto Say you have a lotto game 10/20, which means that 10 balls are drawn from 20.
How can I calculate the odds that the lowest drawn number is odd (and likewise, the odds that it's even)?
So a detailed explanation:
we have numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20
and the drawn numbers were for example 3, 5, 8, 11, 12, 13, 14, 15, 18 and 19
so we see now that the lowest number is 3, and it is an odd number.
So, as stated above, can you help me find out how to calculate this probability?
| The total number of outcomes is ${20 \choose 10}$. Now count the total number of favorable outcomes:
* outcomes with lowest element 1: ${19 \choose 9}$;
* outcomes with lowest element 3: ${17 \choose 9}$;
* outcomes with lowest element 5: ${15 \choose 9}$;
* outcomes with lowest element 7: ${13 \choose 9}$;
* outcomes with lowest element 9: ${11 \choose 9}$;
* outcomes with lowest element 11: ${9 \choose 9} = 1$.
So the probability is $$\sum_{k\in \{9, 11, 13, 15, 17, 19 \}} { {k \choose 9} \over {20 \choose 10}} = {30616 \over 46189} \simeq 0.662842.$$
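The count is small enough to verify by brute force (a sketch):

```python
from math import comb
from itertools import combinations

exact = sum(comb(k, 9) for k in (9, 11, 13, 15, 17, 19)) / comb(20, 10)
brute = sum(min(d) % 2 == 1 for d in combinations(range(1, 21), 10)) / comb(20, 10)
print(exact, brute)   # both 0.662842...
```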
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Is every Mersenne prime of the form $x^2+3 \cdot y^2$? How can one prove or disprove the following statement:
Conjecture :
Every Mersenne prime number can be uniquely written in the form : $x^2+3 \cdot y^2$ ,
where $\gcd(x,y)=1$ and $x,y \geq 0$
Since $M_p$ is an odd number it follows that : $M_p \equiv 1 \pmod 2$
According to Fermat little theorem we can write :
$2^p \equiv 2 \pmod p \Rightarrow 2^p-1 \equiv 1\pmod p \Rightarrow M_p \equiv 1 \pmod p$
We also know that :
$2 \equiv -1 \pmod 3 \Rightarrow 2^p \equiv (-1)^p \pmod 3 \Rightarrow 2^p-1 \equiv -1-1 \pmod 3 \Rightarrow$
$\Rightarrow M_p \equiv -2 \pmod 3 \Rightarrow M_p \equiv 1 \pmod 3$
So , we have following equivalences :
$M_p \equiv 1 \pmod 2$ , $M_p \equiv 1 \pmod 3$ and $M_p \equiv 1 \pmod p$ , therefore for $p>3$
we can conclude that : $ M_p \equiv 1 \pmod {6 \cdot p}$
On the other hand : If $x^2+3\cdot y^2$ is a prime number greater than $5$ then :
$x^2+3\cdot y^2 \equiv 1 \pmod 6$
Proof :
Since $x^2+3\cdot y^2$ is a prime number greater than $3$ it must be of the form $6k+1$ or $6k-1$ .
Let's suppose that $x^2+3\cdot y^2$ is of the form $6k-1$:
$x^2+3\cdot y^2=6k-1 \Rightarrow x^2+3 \cdot y^2+1 =6k \Rightarrow 6 | x^2+3 \cdot y^2+1 \Rightarrow$
$\Rightarrow 6 | x^2+1$ , and $ 6 | 3 \cdot y^2$
If $6 | x^2+1 $ then : $2 | x^2+1$ , and $3 | x^2+1$ , but :
$x^2 \not\equiv -1 \pmod 3 \Rightarrow 3 \nmid x^2+1 \Rightarrow 6 \nmid x^2+1 \Rightarrow 6 \nmid x^2+3 \cdot y^2+1$ , therefore :
$x^2+3\cdot y^2$ is of the form $6k+1$ , so : $x^2+3\cdot y^2 \equiv 1 \pmod 6$
We have shown that : $M_p \equiv 1 \pmod {6 \cdot p}$, for $p>3$ and $x^2+3\cdot y^2 \equiv 1 \pmod 6$ if $x^2+3\cdot y^2$ is a prime number greater than $5$ .
This result is a necessary condition, but it seems that I am not much closer to the solution of the conjecture than at the beginning of my reasoning...
| (Outline of proof that, for prime $p\equiv 1\pmod 6$, there is one positive solution to $x^2+3y^2=p$.)
It helps to recall the Gaussian integer proof that, for a prime $p\equiv 1\pmod 4$, $x^2+y^2=p$ has an integer solution. It starts with the fact that there is an $a$ such that $a^2+1$ is divisible by $p$, then uses unique factorization in the Gaussian integers to show that there must a common (Gaussian) prime factor of $p$ and $a+i$, and then that $p$ must be the Gaussian norm of that prime factor.
By quadratic reciprocity, we know that if $p\equiv 1\pmod 6$ is prime, then $a^2\equiv-3\pmod p$ has a solution.
This means that $a^2+3$ is divisible by $p$. If we had unique factorization on $\mathbb Z[\sqrt{-3}]$ we'd have our result, since there must be a common prime factor of $p$ and $a+\sqrt{-3}$, and it would have to have norm $p$, and we'd be done.
But we don't have unique factorization in $\mathbb Z[\sqrt{-3}]$, only in the ring $R$ of algebraic integers in $\mathbb Q[\sqrt{-3}]$, which are all of the form $\frac{a+b\sqrt{-3}}2$ where $a\equiv b\pmod 2$.
However, this isn't really a big problem, because for any element $r\in R$, there is a unit $u\in R$ and an element $z\in\mathbb Z[\sqrt{-3}]$ such that $r=uz$.
In particular, then, for any $r\in R$, the norm $N(r)=z_1^2+3z_2^2$ for some integers $z_1,z_2\in \mathbb Z$.
As with the Gaussian proof for $x^2+y^2$, we can then use this to show that there is a solution to $x^2+3y^2=p$. To show uniqueness, you need to use properties of the units in $R$.
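A quick numerical check of the conjecture for the first few Mersenne primes (a sketch; it searches exhaustively for representations $x^2+3y^2=M_p$ with $\gcd(x,y)=1$):

```python
from math import isqrt, gcd

for p in (2, 3, 5, 7, 13, 17, 19, 31):
    M = 2**p - 1
    reps = [(x, y) for y in range(isqrt(M // 3) + 1)
            for x in (isqrt(M - 3*y*y),)
            if x*x + 3*y*y == M and gcd(x, y) == 1]
    print(p, M, reps)   # the conjecture predicts exactly one pair each
```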
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Do "imaginary" and "complex" angles exist? During some experimentation with sines and cosines, its inverses, and complex numbers, I came across these results that I found quite interesting:
$ \sin ^ {-1} ( 2 ) \approx 1.57 - 1.32 i $
$ \sin ( 1 + i ) \approx 1.30 + 0.63 i $
Does this mean that there is such a thing as "imaginary" or "complex" angles, and if so, what practical use do they serve?
| A fundamental equation of trigonometry is $x^2+y^2 = 1$, where $x$ is the "adjacent side" and $y$ the "opposite side".
If you plot $y$ outside the real domain - for example, at $x=1.5$ you obtain an imaginary $y$ - you will get an imaginary shape situated in a plane perpendicular to the $x,y$-plane and containing the $x$-axis. This shape is a hyperbola.
So you have two planes, one for the circle, and one for the hyperbola.
The "$z$-axis" (imaginary) where the hyperbola is plotted correspond to the "$\sinh$" and $x$ is the "$\cosh$" once the $R = 1$. Note that the $\sinh$ is situated in a plane $90$ degrees of the $x,y$-plane.
Observe that $$\sin iy = i \sinh y$$ is in accord with was explained above.
The geometric interpretation is easy.
It's valuable to remember that the angle in a circle can be measured by twice the area of the sector. The hyperbolic angle can be measured by twice the area bounded by the radius and the arc of the hyperbola.
See Wikipedia.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 7,
"answer_id": 5
} |
question on fourier transform. I ask myself what
$$
{\mathscr F}^{-1}( e^{it\xi} ({\mathscr F} \phi)'(\xi) )(s)
$$
is. If it was just about
$$
{\mathscr F}^{-1}( e^{it\xi} ({\mathscr F} \phi)(\xi) )(s)
$$
it would be clear (a shift by $t$); the same goes for
$$
{\mathscr F}^{-1}( ({\mathscr F} \phi)'(\xi) )(s),
$$
which gives a multiplication by $-is$.
But what is about the combination of exponential and derivative? Any hints? Thanks, Eric
| Why don't you just compute it?
$$
{\mathscr F}^{-1}( e^{it\xi} ({\mathscr F} \phi)'(\xi) )(s)={\mathscr F}^{-1}( ({\mathscr F} \phi)'(\xi) )(s+t)=-i(s+t)\phi(s+t)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question about implication and probability Let $A, B$ be two events. My question is as follows:
Will the following relation holds:
$$A \to B \Rightarrow \Pr(A) \le\Pr(B) $$
And why?
| In terms of intuition, the fact that some event $A$ implies some event $B$ means that whenever $A$ happens, $B$ happens. But if the event $B$ happens, we might not have event $A$. So in other words, we have that $\mathbb P(A) \le \mathbb P(B)$ because the probability that $A$ happens is also "the probability that $B$ happens because of $A$", which is less than "the probability that $B$ happens" with no constrains on it. (Note that there are cases with equality, for instance when $A \Rightarrow B$ and $B \Rightarrow A$. )
From a theoretical point of view, though (this is a little bit more advanced than a beginner course in probability), the expression "$A \Rightarrow B$" doesn't make much sense, since the way probabilities are defined is that "$\mathbb P$" is actually a function from something we call a space of possible events to the interval of real numbers $[0,1]$. The possible events are sets, and the right way to say $A \Rightarrow B$ in this system would be that the set $A$ is included in the set $B$, i.e. $A \subseteq B$. (You need to think about those sets as "a regrouping of possibilities"; for instance, if the event space is all subsets of $\{1,2,3,4,5,6\}$ in the context where we roll a fair die, an example of an event would be "the roll is even" or "the roll is a $1$ or a $4$".) In the construction of probabilities, one common axiom is that a probability function is countably additive, or in other words
$$
\mathbb P \left( \bigcup_{i=0}^{\infty} A_i \right) = \sum_{i=0}^{\infty} \,\mathbb P (A_i)
$$
when the sets $A_i$ are pairwise disjoint, and another axiom would be that $P(\varnothing) = 0$, we can deduce from that that
\begin{align}
\mathbb P(B) = \mathbb P( (A \cap B) \cup (A^c \cap B))
& = \mathbb P( (A \cap B) \cup (A^C \cap B) \cup \varnothing \cup \dots) \\
& = \mathbb P(A \cap B) + \mathbb P(A^c \cap B) + \mathbb P (\varnothing) + \mathbb P (\varnothing) + \dots \\
& = \mathbb P(A) + \mathbb P(A^c \cap B) + 0 + 0 + \dots \\
& \ge \mathbb P(A).
\end{align}
Note that the reason I added this is because I used the axiom "countably additive" and not "finitely additive". The way to show that countably additive implies finitely additive is by adding plenty of $\varnothing$'s after the finitely many sets.
There are many possible axioms you can add/choose that are equivalent though. I just took my favorite.
Hope that helps,
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Motivation for a particular integration substitution In an old Italian calculus problem book, there is an example presented:
$$\int\frac{dx}{x\sqrt{2x-1}}$$
The solution given uses the strange substitution $$x=\frac{1}{1-u}$$
Some preliminary work in trying to determine the motivation as to why one would come up with such an odd substitution yielded a right triangle with hypotenuse $x$ and leg $x-1;$ determining the other leg gives $\sqrt{2x-1}.$ Conveniently, this triangle contains all of the "important" parts of our integrand, except in a non-convenient manner.
So, my question is two-fold:
(1) Does anyone see why one would be motivated to make such a substitution?
(2) Does anyone see how to extend the work involving the right triangle to get at the solution?
| This won't answer the question, but it takes the geometry a bit beyond where the question left it. Consider the circle of unit radius in the Cartesian plane centered at $O=(0,1)$. Let $A=(1,0)$ and $B=(x,0)$. Let $C=(1,\sqrt{2x-1})$. Your right triangle is $ABC$, with angle $\alpha$ at vertex $B$. Another right triangle is $OAC$, with angle $\beta$ at $O$. Then
$$
u=\frac{x-1}{x}= \cos\alpha=\sin(\pi/2-\alpha)=\sin\angle BCA$$
and
$$
\sqrt{2x-1}=\tan\beta=\cot(\pi/2-\beta)=\cot\angle OCA.
$$
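For what it's worth, either substitution leads to the closed form $2\arctan\sqrt{2x-1}+C$; here is a small sympy sketch (my own addition, not from the original answer) confirming it by differentiation:

```python
from sympy import symbols, sqrt, atan, diff, simplify

x = symbols('x', positive=True)
F = 2*atan(sqrt(2*x - 1))                           # claimed antiderivative
print(simplify(diff(F, x) - 1/(x*sqrt(2*x - 1))))   # 0
```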
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
quadratic reciprocity happy new year
I have this statement:
"By quadratic reciprocity there are the integers $a$ and $b$ such that $(a,b)=1$, $(a-1,b)=2$, and all prime $p$ with $p\equiv a$ (mod $b$) splits in $K$ (where $K$ is a real quadratic field)".
I have tried with many properties of quadratic reciprocity but couldn't even get to the first conclusion.
Thank you very much in advance, for any idea or advice for approach the problem
| Edited to address some bizarrely horrible errors in the first version.
Here's a simple case from which it should not be too hard to generalize. Suppose that $K$ is a quadratic field of prime discriminant $q$. Since $q\equiv 1\pmod{4}$, note that a prime $p$ splits in $K$ if and only if $\left(\frac{p}{q}\right)=1$.
Let $b=2q$ and choose an integer $a\not\equiv 1\pmod{q}$ which is an odd quadratic residue mod $q$. Such a thing exists since there are $\frac{q-1}{2}\geq 2$ (since $q\geq 5$) residues mod $q$, and if $a'$ is any not-congruent-to-1 residue mod $q$, then one of $a=a'$ and $a=a'+q$ is odd. For example, when $q=5$, take $a'=4$ and then $a=9$. (Actually, $q=5$ is the only example where you have to add $q$: for all other $q$ there is an odd quadratic residue $a$ in the range $2\leq a\leq q-1$.)
Now $a$ is odd and not divisible by $q$, so $(a,b)=(a,2q)=1$, and since $a\not\equiv 1\pmod{q}$, we also have $(a-1,b)=2$. Finally, if $p\equiv a\pmod{b}$, then $p\equiv a\pmod{q}$, so $\left(\frac{p}{q}\right)=\left(\frac{1}{q}\right)=1$, and $p$ splits in $K$.
Just some minor commentary on where this came from: Your $\gcd$ conditions force $b$ to be even, and for a congruence-mod-$b$ condition to determine splitting in $K$, your $b$ needs to be a multiple of the discriminant (in this case, $q$), and preferably as small a multiple as possible to prevent extra congruence classes from slipping in. The value of $b=2q$ satisfies all of these requirements, and since clearly one must choose $a$ to be an odd quadratic residue mod $q$, you're left with essentially the above construction.
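For concreteness, here is a small Python check (my own sketch, sympy assumed available) of the $q=5$ construction above, i.e. $a=9$, $b=10$: every prime $p\equiv 9 \pmod{10}$ is a square mod $5$, hence splits in $\mathbb Q(\sqrt 5)$.

```python
from sympy import isprime, legendre_symbol

q, a, b = 5, 9, 10
for p in range(3, 1000):
    if isprime(p) and p % b == a:
        assert legendre_symbol(p, q) == 1   # p = 4 = 3^2 (mod 5), a square
print("all primes p = 9 (mod 10) below 1000 are squares mod 5")
```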
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Equivalence class on real numbers Call two real numbers equivalent if their binary expansions differ in only finitely many places. If $S$ is a set which contains an element of every equivalence class, must $S$ contain an interval?
How to show that every interval contains (uncountably many?) elements of every equivalence class?
| Added part: We produce a bounded set $S$ that contains a member of every equivalence class but does not contain an interval. Every equivalence class meets $[0,1]$, since for any $x$, we can, by making a finite number of changes to the bits of $x$, produce an $x'\in [0,1]$.
Use the Axiom of Choice to select $S\subset [0,1]$ such that $S$ contains precisely one member of each equivalence class. Any two dyadic rationals (expressed in ultimately $0$'s form) belong to the same equivalence class, so $S$ contains exactly one dyadic rational. Since the dyadic rationals are dense in the reals, this means that $S$ cannot contain an interval of positive length.
(End of added part)
Every non-empty interval $I$ contains a member of every equivalence class.
For let $I$ be a (finite) interval, and let $a$ be its midpoint. Suppose that $I$ has length $\ge 2\times 2^{-n}$. Let $x$ be any real number. By changing the initial bits of $x$ so that they match the initial bits of $a$, up to $n$ places after the "decimal" point, we can produce an $x'$ equivalent to $x$ which is at distance less than $2^{-n}$ from $a$.
Edit: The question has changed to ask whether every interval contains an uncountable number of members from every equivalence class. Minor modification of the first paragraph shows that every interval contains a countably infinite number of members from every equivalence class. As Asaf Karagila points out, one cannot get more, since every equivalence class is itself countable. (The set of places where there is "change" can be identified with a finite subset of the integers, and $\mathbb{N}$ has only countably many finite subsets.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
In a Boolean algebra $B$ with $Y\subseteq B$, $p$ is an upper bound for $Y$ but not the supremum. Is there an upper bound $q<p$? I don't think that this is the case. I am reading over one of my professor's proofs, and he seems to use this fact. Here is the proof:
Let $B$ be a Boolean algebra, and suppose that $X$ is a dense subset of $B$ in the sense that every nonzero element of $B$ is above a nonzero element of $X$. Let $p$ be an element in $B$. The proof is to show that $p$ is the supremum of the set of all elements in $X$ that are below $p$. Let $Y$ be the set of elements in $X$ that are below $p$. It is to be shown that $p$ is the supremum of $Y$. Clearly, $p$ is an upper bound of $Y$. If $p$ is not the least upper bound of $Y$, then there must be an element $q\in B$ such that $q<p$ and $q$ is an upper bound for $Y$ ...etc.
I do not see how this last sentence follows. I do see that if $p$ is not the least upper bound of $Y$, then there is some upper bound $q$ of $Y$ such that $p$ is NOT less than or equal to $q$. But, since we have only a partial order, and our algebra is not necessarily complete, I do not see how we can show anything else.
So, is my professor's proof wrong, or am I just missing something fundamental?
| The set of upper bounds of $Y$ is closed under meets, so $p \cap q$ is again an upper bound for $Y$; and since $p \not\leq q$, it satisfies $p \cap q < p$, which is exactly the element the proof needs.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Jordan decomposition/Levi decomposition in GL(n) in positive characteristic Let $k$ be a non archimedean field of positive characteristic. Lets consider a parabolic subgroup $P \subset GL(n, k)$.
I am a little bit confused by the following statement in "Laumon - Cohomology of Drinfeld Modular ... ":
I have an issue with the following two assertions
$P = MN$ has a Levi decomposition over $k$ (pg.123)
and
$\gamma \in P$ can be written as $\gamma = \gamma_m \gamma_n$ with $\gamma_m \in M$ and $\gamma_n \in N$ (pg.124)
Now, I have read that the Jordan decomposition and Levi decomposition need not hold in positive characteristic (e.g., in Humphreys, Waterhouse). Do they mean that the decompositions are not functorial with respect to field extensions, and are available for the group, but not the group scheme?
Why is this not a contradiction?
Remark: I understand that elliptic elements can become unipotent in an algebraic extension, since the minimal polynomial might not be separable in general.
| Both those assertions describe a Levi decomposition. In general, groups need not have Levi decompositions, but parabolic subgroups of reductive groups do. This is proven for connected reductive groups (e.g. $GL(n, k)$) in Borel, "Linear Algebraic Groups": see 20.5 for the decomposition over $k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Subgroup(s) of a group of order 25 I am working on a problem (self-study) from Artin - 2.8.8 which goes:
"Let G be a group of order 25. Prove that G has at least one subgroup of order 5, and that if it contains only one subgroup of order 5, then it is a cyclic group."
I can see that there is an element of order 5, and this can generate a cyclic subgroup of order 5.
-- So my first question is: what would make one think that there might be more than one subgroup of order 5, and what would they look like. And what would be the criteria for there being only one rather than more than one?
For the second part, I am familiar with a proof that a group of order $p^2$ is abelian, by showing the center is all of G.
-- My second question is how to show G is cyclic - and how does this use the stipulation that there is only one subgroup of order 5 (in the text there is a statement that if there is only one subgroup of a particular order then it is normal). And how does there being more than one subgroup of order 5 prevent G from being cyclic.
Thanks.
| It is an easy exercise to show that if $c_d(G)$ denotes the number of cyclic subgroups of $G$ of order $d$ then $\displaystyle \sum_{d\mid |G|}c_d(G)\varphi(d)=|G|$ (just partition your group according to the elements' orders). Now, if $G$ had only one subgroup of order $5$, then via the fact that all groups of order $5$ are cyclic we can conclude that $c_5(G)=1$. If moreover $c_{25}(G)=0$, then we'd see that
$$25=|G|=c_1(G)\varphi(1)+c_5(G)\varphi(5)=1+4=5$$
Of course, this is ridiculous. So, if $G$ contains only one subgroup of order $5$ then $c_{25}(G)\ne0$, and so there exists a cyclic subgroup of $G$ of order $25$ which, obviously, must be $G$ itself.
EDIT: Of course, there is nothing special about $5$. The above argument tells you that for a group of order $p^2$, $p$ prime, that being cyclic is equivalent to having one subgroup of order $p$. You will eventually learn that, up to isomorphism, the only groups of order $p^2$ are the cyclic one and the $2$-dimensional $\mathbb{F}_p$-space.
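As a sanity check (my addition, not part of the original answer), the counting formula is easy to verify on both groups of order $25$. For the cyclic group $C_{25}$ one has $c_1=c_5=c_{25}=1$, and
$$1\cdot\varphi(1)+1\cdot\varphi(5)+1\cdot\varphi(25)=1+4+20=25,$$
while $C_5\times C_5$ has $c_{25}(G)=0$ and $(25-1)/(5-1)=6$ subgroups of order $5$, giving
$$1\cdot\varphi(1)+6\cdot\varphi(5)=1+24=25.$$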
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/96985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Chance for picking a series of numbers (with repetition, order doesn't matter) I want to calculate the chance for the following situation:
You throw a die 5 times. How big is the chance to get the numbers "1,2,3,3,5" if the order does not matter (i.e. 12335 = 21335 =31235 etc.)?
I have 4 different solutions here, so I won't include them to make it less confusing.
I'm thankful for suggestions!
| There are $5$ options for placing the $1$, then $4$ for placing the $2$, and then $3$ for placing the $5$, for a total of $5\cdot4\cdot3=60$. Alternatively, there are $5!=120$ permutations in all, and pairs of these are identical because you can exchange the $3$s, which also gives $120/2=60$. The total number of combinations you can roll is $6^5=7776$, so the chance is $60/7776=5/648\approx0.77\%$.
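This is easy to confirm by brute force; a short Python sketch (my own check, not part of the original answer):

```python
from itertools import product

target = sorted((1, 2, 3, 3, 5))
hits = sum(sorted(roll) == target
           for roll in product(range(1, 7), repeat=5))
print(hits, 6**5, hits / 6**5)   # 60 7776 = 0.0077...
```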
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Lie Algebra of $SL_n(\mathbb H)$ The Lie algebra of $SL_n(\mathbb C)$ consists of the matrices whose trace is $0$. But what is the Lie algebra of $SL_n(\mathbb H)$, where $\mathbb H$ is the quaternions?
| The obvious candidate for $\mathfrak{sl}_2(\mathbb H)$ is the space of $2\times 2$ matrices $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ with quaternion entries such that $a+d=0$, with bracket the commutator of matrices, but... that is not a Lie algebra.
For example, the trace of the commutator of $\begin{pmatrix}i&0\\0&-i\end{pmatrix}$ and $\begin{pmatrix}j&0\\0&-j\end{pmatrix}$ is not zero.
The big problem, really, is that you have to decide what you mean by $SL_2(\mathbb H)$. There is no determinant... (There is the Dieudonné determinant, though.)
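For what it's worth, the trace computation is quick to verify mechanically; a sketch using sympy's quaternions (my own addition, assuming `sympy.algebras.quaternion` is available):

```python
from sympy.algebras.quaternion import Quaternion

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)

# The commutator of diag(i, -i) and diag(j, -j) is diagonal with entries
# i*j - j*i and (-i)*(-j) - (-j)*(-i); sum them to get the trace.
trace = (i*j - j*i) + ((-i)*(-j) - (-j)*(-i))
print(trace)   # 4*k, which is nonzero
```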
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Cauchy Sequence in $X$ on $[0,1]$ with norm $\int_{0}^{1} |x(t)|dt$ In Luenberger's Optimization book pg. 34 an example says "Let $X$ be the space of continuous functions on $[0,1]$ with norm defined as $\|x\| = \int_{0}^{1} |x(t)|dt$". In order to prove $X$ is incomplete, he defines a sequence of elements in $X$ by
$$ x_n(t) =
\left\{ \begin{array}{ll}
0 & 0 \le t \le \frac{1}{2} - \frac{1}{n} \\ \\
nt-\frac{n}{2} + 1 & \frac{1}{2} - \frac{1}{n} \le t \le \frac{1}{2} \\ \\
1 & t \ge \frac{1}{2}
\end{array} \right.
$$
Each member of the sequence is a continuous function and thus member of space $X$. Then he says:
the sequence is Cauchy since, as it is easily verified, $\|x_n - x_m\| = \frac{1}{2}\left|\dfrac1n - \dfrac1m\right| \to 0$.
as $n,m \to \infty$. I tried to verify the norm $\|x_n - x_m\|$ by computing the integral for the norm. The piecewise function is not dependent on $n,m$ on the last piece (for $t \ge 1/2$), so that piece contributes $0$ to $\|x_n - x_m\|$. For the middle piece I calculated the integral and it also comes up zero. That leaves the first piece, and I did not get the result Luenberger has. Is there something wrong in my approach?
| It's relatively easy to see that for $m<n$ we have $x_n(t)\le x_m(t)$ for each $t$. Hence
$$\|x_m-x_n\|=\int_0^1 x_m(t) \mathrm{d}t-\int_0^1 x_n(t) \mathrm{d}t.$$
We can disregard intervals $\langle 0,1/2-1/m\rangle$, since both functions are zero there. We can also disregard $\langle 1/2,1\rangle$, since $x_m(t)=x_n(t)$ on that interval. Therefore
$$\|x_m-x_n\|=\int_{\frac12-\frac1m}^1 x_m(t) \mathrm{d}t-\int_{\frac12-\frac1n}^1 x_n(t) \mathrm{d}t=\frac1{2m}-\frac1{2n}.$$
The last equality can be shown by direct computation. You can also see this geometrically: if you draw the picture, the first integral is the area of a triangle with base $\frac1m$ and height $1$, i.e. $\frac1{2m}$, plus the rectangle of area $\frac12$ over $\langle \frac12,1\rangle$; the second is a triangle of area $\frac1{2n}$ plus the same rectangle, which cancels in the difference.
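A quick numerical check of the claimed value (my own sketch, scipy assumed available):

```python
from scipy.integrate import quad

def x(n, t):
    # the piecewise-linear x_n from the question
    if t <= 0.5 - 1.0/n:
        return 0.0
    if t <= 0.5:
        return n*t - n/2.0 + 1.0
    return 1.0

m, n = 10, 40
norm, _ = quad(lambda t: abs(x(m, t) - x(n, t)), 0, 1,
               points=[0.5 - 1.0/m, 0.5 - 1.0/n, 0.5])
print(norm, 1/(2*m) - 1/(2*n))   # both 0.0375
```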
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Is this true about integrating composite functions? Let's say that I'm integrating a composite function, say $f(g(x))$, that is in a form to which I can apply the substitution rule. Is it true to say that both $f$ and $g$ must be differentiable?
I understand that the substitution rule requires $g$ to be differentiable and that the substitution rule relies on the chain rule, and the chain rule requires both f and g to be differentiable.
| If you are talking about the Riemann integral: if $f : [a,b] \to [c,d]$ is R-integrable and $g : [c,d] \to \mathbb{R}$ is continuous, then $g \circ f : [a,b] \to \mathbb R$ is R-integrable.
Differentiability implies continuity, so if $f$ and $g$ are differentiable they are in particular continuous, and the hypotheses above are satisfied.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Solving $\int\frac{\ln(1+e^x)}{e^x} \space dx$ I'm trying to solve this integral.
$$\int\frac{\ln(1+e^x)}{e^x} \space dx$$
I tried to solve it using integration by parts twice, but then I get to this point (where $t = e^x$ and $dx = \frac{1}{t} dt$):
$$\int\frac{\ln(1+t)}{t^2} \space dt = \frac{1}{t} \cdot \ln(1+t) - \frac{1}{t} \cdot \ln(1+t) + \int\frac{\ln(1+t)}{t^2} \space dt$$
$$\cdots$$
$$0 = 0$$
What am I doing wrong?
| Edit:
Your problem comes from integrating by parts twice: the second integration by parts undoes the first. Doing it just once looks like this:
$$\int \frac{\ln(1+t)}{t^2} \ \mathrm{d}t = -\frac1{t}\ln(1+t) - \int -\frac1{t}\cdot\frac1{1+t} \ \mathrm{d}t = -\frac1{t}\ln(1+t) + \int \frac1{t}\cdot\frac1{1+t} \ \mathrm{d}t$$
That new integral should be evaluated with partial fractions:
$$
\int \frac1{t(t+1)} \ \mathrm{d}t = \int \frac1{t}-\frac1{t+1} \ \mathrm{d}t = \ln t - \ln (t+1) + C
$$
If you want to check, the result is the following:
$$ \int \frac{\ln(1+e^x)}{e^x} \ \mathrm{d}x= x - \ln(1+e^x) - \frac{\ln(1+e^x)}{e^x} + C$$
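A quick sympy verification of this closed form (my own sketch, not part of the original answer):

```python
from sympy import symbols, exp, log, diff, simplify

x = symbols('x')
F = x - log(1 + exp(x)) - log(1 + exp(x)) / exp(x)   # claimed antiderivative
print(simplify(diff(F, x) - log(1 + exp(x)) / exp(x)))   # 0
```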
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Partitioning of geometric net into equivalence classes Fellow Puny Humans,
A geometric net is a system of points and lines that obeys three axioms:
*
*Each line is a set of points.
*Distinct lines have at most one point in common.
*If $p$ is a point and $L$ is a line with $p \notin L$, then there is exactly one line $M$ such that $p \in M$ and $L \cap M = \varnothing$.
And whenever $L \cap M = \varnothing$ we say that $L$ is parallel to $M$, i.e. $L \parallel M$.
So far so good.
I want to partition the lines of a geometric net into equivalence classes, with two lines in the same class if they are equal or parallel. One can easily show that the relation "equal or parallel" is an equivalence relation.
Let's say there are $m$ such classes; then how many points does a line have in each class? For a given line $l$ in any class, if a point $p \in l$, then how many lines pass through $p$?
For example, if I partition them into two classes $CL_1$ and $CL_2$ of parallel or equal lines, then the number of points on any line in $CL_1$ is equal to the number of lines in $CL_2$. This implies that each point belongs to two lines. Can this be extended to the case when the number of classes is $m$, i.e. each point belongs to $m$ lines? I am confused because I cannot show it for the case when more than two lines pass through the same point.
This problem is from TAOCP 4(a) : combinatorial searching Problem 21. (Edision Wesly).
| It is exactly the number of lines you have in a class, simply because equal or parallel is an equivalence relation. Let me clarify:
Say $C_1, C_2,\dots, C_m$ are your equivalence classes. Say $L\in C_i$ is a line in $C_i$, for some $i\in\{1,\dots,m\}$. Say $1\leq j \leq m$, $j\neq i$, and $M\in C_j$. If $L\cap M = \emptyset$, then $L$ is parallel to $M$, and so $L$ and $M$ must belong to the same class. In other words, $C_i\cap C_j\neq \emptyset$, or $C_i = C_j$ by equivalence. But by assumption, $i\neq j$, so $C_i\neq C_j$, and so $L$ must intersect $M$. By one of your axioms, it must intersect $M$ in only one point. By arbitrariness of $M$, $L$ must intersect every line in $C_j$ in only one point.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Secret Number Problem Ten students are seated around a (circular) table. Each student selects his or her own secret number and tells the person on his or her right side the number and the person his or her left side the number (without disclosing it to anyone else). Each student, upon hearing two numbers, then calculates the average and announces it aloud. In order, going around the table, the announced averages are 1,2,3,4,5,6,7,8,9 and 10. What was the secret number chosen by the person who announced a 6?
| Let us denote the secret numbers as $x_i$, where $i$ is the number announced by that student. The two neighbours of the student who announced $i$ hold numbers averaging $i$, i.e. summing to $2i$, so we have the following system of equations:
$\begin{cases}
x_1+x_3=4 \\
x_2+x_4=6 \\
x_3+x_5=8 \\
x_4+x_6=10 \\
x_5+x_7=12 \\
x_6+x_8=14 \\
x_7+x_9=16 \\
x_8+x_{10}=18 \\
x_9+x_1=20 \\
x_{10}+x_2=2
\end{cases}$
According to Maple, $x_6=1$, so the requested secret number is $1$.
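For readers without Maple, the same computation takes a few lines of Python/sympy (my own sketch); here `x[i-1]` stands for the secret number $x_i$:

```python
from sympy import symbols, Eq, solve

x = symbols('x1:11')    # x1, ..., x10
# the neighbours of the student who announced i hold numbers summing to 2i
eqs = [Eq(x[(i - 2) % 10] + x[i % 10], 2 * i) for i in range(1, 11)]
sol = solve(eqs)
print(sol[x[5]])        # x6 = 1
```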
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Number of subgroups of prime order I've been doing some exercises from my introductory algebra text and came across a problem which I reduced to proving that:
The number of distinct subgroups of prime order $p$ of a finite group $G$ is either $0$ or congruent to $1\pmod{p} $.
With my little experience I was unable to overcome this (all I was able to conclude is that these groups are disjoint short of the identity), and also did not find any solution with a search on google (except for stronger theorems which I am not interested in because of my novice level).
I remember that a similar result is widely known as one of the Sylow theorems. This result was proven by the use of group actions. But can my problem be proved without using the concept of group actions? Can this be proven WITH the use of that concept?
EDIT: With help from comments I came up with this:
The action Derek proposed is well-defined largely because in a group if $ab = e$ (the identity), then certainly $ba = e$. By Orbit-Stabilizer Theorem we can see that all orbits are either of size 1 or $p$ (here I had most problems, and found out cyclic group of order $p$ acts on the set of solutions in the same way).
The orbits of size 1 contain precisely the elements $(x,x,x....,x)$ for some element x in G. In addition, orders of all orbits add up to $|G|^{(p-1)}$ because the orbits are equivalence classes of an equivalence relation.
But certainly $(e,e,\dots,e)$ is in an orbit of size 1, and that means there have to be more orbits of exactly one element, actually $p-1 + np$ more for some integer $n$. These elements form the disjoint groups I am looking for. If $p-1$ divides $p-1 + np$, it's easy to check the result is 1 mod $p$.
Could someone check if I understood this correctly?
| Here's another approach. Consider the solutions to the equation $x_1x_2\cdots x_p=1$ in the group $G$ of order divisible by $p$. Since there is a unique solution for any $x_1,\ldots,x_{p-1}$, the total number of solutions is $|G|^{p-1}$, which is divisible by $p$. If $x_1,x_2,\ldots,x_p$ is a solution, then so is $x_2,x_3,\ldots,x_p,x_1$, and so we have an action of the $p$-cycle $(1,2,3,\ldots,p)$ on the solution set.
Since $p$ is prime, the orbits of this action have size $p$ if $x_1,x_2,\ldots,x_p$ are not all equal, and size 1 if they are all equal. So the number of solutions of $x^p=1$ is a multiple of $p$. Now use Steve D's hint to complete the proof.
Incidentally there is a theorem of Frobenius that says that for any $n>0$ and any finite group of order divisible by $n$, the number of solutions of $x^n=1$ is a multiple of $n$.
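The theorem of Frobenius mentioned above is easy to test on a small example; here is a brute-force Python sketch (my own addition) counting solutions of $x^n=1$ in $S_4$, whose order $24$ is divisible by $n=2,3,4$:

```python
from itertools import permutations

def compose(p, q):                       # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def is_nth_root_of_identity(p, n):
    e = tuple(range(len(p)))
    x = e
    for _ in range(n):
        x = compose(x, p)
    return x == e

S4 = list(permutations(range(4)))
for n in (2, 3, 4):
    count = sum(is_nth_root_of_identity(p, n) for p in S4)
    print(n, count, count % n)           # 2 10 0 / 3 9 0 / 4 16 0
```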
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 4,
"answer_id": 2
} |
Probability that ace of spades is at bottom of deck IF ace of hearts is NOT at top
What is the probability that the ace of spades is at the bottom of a standard deck of 52 cards given that the ace of hearts is not at the top?
I asked my older brother, and he said it should be $\frac{50}{51} \cdot \frac{1}{51}$ because that's $$\mathbb{P}(A\heartsuit \text{ not at top}) \times \mathbb{P}(A\spadesuit \text{ at bottom}),$$ but I'm not sure if I agree. Shouldn't the $\frac{50}{51}$ be $\frac{50}{52}$?
Thank you!
| The ace of hearts has 51 positions available (since it's not at the top), and 50 of those 51 positions are not the bottom.
Given that the ace of hearts sits somewhere other than the bottom, the ace of spades is equally likely to be in any of the remaining 51 positions, so
Pr = P(ace of hearts not at bottom)*P(ace of spades at bottom)
= 50/51 * 1/51 = 50/51²
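A quick Monte Carlo sanity check (my own sketch, not part of the original answer):

```python
import random

deck = list(range(52))      # say card 0 is the ace of hearts, card 1 the ace of spades
trials = hits = 0
for _ in range(200_000):
    random.shuffle(deck)
    if deck[0] != 0:        # condition on the ace of hearts not being on top
        trials += 1
        hits += (deck[-1] == 1)
print(hits / trials, 50 / 51**2)   # both approximately 0.0192
```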
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Solution of a polynomial of degree n with soluble galois group. Background: Given the fundamental theorem of algebra, every polynomial of degree $n$ has $n$ roots (counted with multiplicity). From Galois theory we know that we can only find exact solutions of polynomials in radicals if their corresponding Galois group is soluble. I am studying Galois Theory (Ian Stewart) and I am not getting the result out of it that I expected. I expected to learn to determine, for a polynomial of degree $n$, its corresponding Galois group, and if that group is soluble, a recipe to find the exact roots of that polynomial. My experience thus far with Galois theory is that it proves that there is no general solution for a polynomial of degree 5 and higher.
Question: I want to learn to solve polynomials of degree 5 and higher if they have a corresponding soluble Galois group. From which book or article can I learn this?
| By exact roots you probably mean radical expressions. Even for equations whose Galois group is unsolvable there might be exact trigonometric expressions for the roots.
If you know German, the diploma thesis "Ein Algorithmus zum Lösen einer
Polynomgleichung durch Radikale" (An algorithm for the solution of a polynomial equation by radicals) by Andreas Distler is exactly what you're looking for. It is available online. It also contains several program codes.
On the other hand, today there are many computer algebra systems which can compute the Galois group of a given polynomial or number field (GAP, Sage, ...).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Is $\sin^3 x=\frac{3}{4}\sin x - \frac{1}{4}\sin 3x$? $$\sin^3 x=\frac{3}{4}\sin x - \frac{1}{4}\sin 3x$$
Is there a formula that gives this, and why is it true?
| You can use De Moivre's identity. Let
$$z=\cos x+i \sin x, \qquad \frac{1}{z}=\cos x-i \sin x.$$
Subtracting the two equations, we get
$$2i\sin x=z-\frac{1}{z},$$
and De Moivre's formula tells us that
$$z^n=(\cos x+i\sin x)^n=\cos nx+i\sin nx.$$
So, cubing the identity for $2i\sin x$,
$$(2i\sin x)^3=-8i\sin^3 x=\left(z-\frac{1}{z}\right)^{3}.$$
Expanding the right-hand side and using $z^3-\frac{1}{z^3}=2i\sin 3x$ together with $z-\frac{1}{z}=2i\sin x$,
$$-8i\sin^3 x=z^3-\frac{1}{z^3}-3\left(z-\frac{1}{z}\right)=2i\sin 3x-6i\sin x.$$
Dividing through by $-8i$:
$$\boxed{\sin^{3}x=\frac{3}{4}\sin x-\frac{1}{4}\sin 3x}$$
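Equivalently, one can start from the standard triple-angle formula $\sin 3x = 3\sin x - 4\sin^3 x$ (itself a consequence of the computation above) and simply solve for $\sin^3 x$.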
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 8,
"answer_id": 4
} |
Showing $f(x)/x \to 0$ when $\lvert f(x) - f(\lambda x)\rvert/x \to 0$ I would like to solve this problem, but I do not know how ...
Let $f:(0;1) \rightarrow \mathbb{R}$ be a function such that:
$$\lim_{x \to0^+}f(x)=0$$
and such that there exists $0<\lambda<1$ with:
$$\lim_{x \to0^+} \frac{ \left [ f(x)-f(\lambda x) \right ]}{x}=0$$
prove that
$$\lim_{x \to0^+} \frac{f(x)}{x}=0$$
| Since
$$ \frac{f(x) - f(\lambda x)}{x} \to 0,$$
for any $\epsilon > 0$, we can restrict $x$ near enough $0$ so that we have $\lvert f(x) - f(\lambda x)\rvert \leq \epsilon \lvert x \rvert$. Since $0 < \lambda < 1$, this means that we also have $\lvert f(\lambda^n x) - f(\lambda^{n+1} x) \rvert \leq \epsilon \lvert x \rvert\lambda^n$ for each $n \geq 0$. By using the triangle inequality, we get that
$$ \begin{align}
\lvert f(x) - f(\lambda^n x) \rvert &= \lvert f(x) - f(\lambda x) + f(\lambda x) + \cdots - f(\lambda^n x)\rvert \\
&\leq \epsilon \lvert x \rvert ( 1 + \lambda + \lambda^2 + \cdots + \lambda^{n-1}) \\
&\leq \epsilon \lvert x \rvert \frac{1 - \lambda^n}{1 - \lambda} \\
&\leq \epsilon \lvert x \rvert \frac{1}{1 - \lambda}.
\end{align}$$
Notice the final expression on the right is independent of $n$. By letting $n \to \infty$, the right hand side does not change, while the term $f(\lambda^n x) \to 0$ on the left hand side. This leads to an expression of the form
$$\lvert f(x) \rvert \leq \epsilon \lvert x \rvert \frac{1}{1 - \lambda},$$
or equivalently
$$ \frac{\lvert f(x) \rvert}{\lvert x \rvert} \leq \epsilon \frac{1}{1 - \lambda}$$
for all $\epsilon > 0$. Since $\epsilon$ was arbitrary, letting $\epsilon \to 0$ completes the proof. $\diamondsuit$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Limit of the sequence of regular n-gons. Let $A_n$ be the regular $n$-gon inscribed in the unit circle.
It appears intuitively obvious that as $n$ grows, the resulting polygon approximates a circle ever closer.
Can it be shown that the limit as $n \rightarrow \infty $ of $A_n$ is a circle?
| Given a sequence of sets $(A_n)_{n\geq3}$ there is a natural $\lim\inf_{n\to\infty} A_n=:\underline{A}$ and a natural $\lim\sup_{n\to\infty}A_n=:\overline{A}$ of this sequence.
In the problem at hand the $A_n$ are closed regular $n$-gons inscribed in the unit circle, all sharing the point $P:=(1,0)$.
The set $\underline{A}$ consists of all points that are in all but finitely many of the $A_n$. It is easy to see that all points $z\in D:=\{(x,y)\ |\ x^2+y^2 < 1\}$ satisfy this condition and that in fact $\underline{A}=D\cup\{P\}$.
The set $\overline{A}$ consists of all points that are in infinitely many $A_n$. Obviously $\overline{A}\supset\underline{A}\ $, and $\overline{A}$ is contained in $\overline{D}=\{(x,y)\ |\ x^2+y^2 \leq 1\}$. In fact $\overline{A}\cap\partial D$ consists of all points on the unit circle whose argument is a rational multiple of $\pi$.
This is how much you can say on the pure set-theoretical level; an actual limit set $A_*$ does not exist.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Changing the argument for a higher order derivative I start with the following:
$$\frac{d^n}{dx^n} \left[(1-x^2)^{n+\alpha-1/2}\right]$$
Which is part of the Rodrigues definition of a Gegenbauer polynomial. Gegenbauer polynomials are also useful in terms of trigonometric functions so I want to use the substitution $x = \cos\theta$, which is the usual way of doing it. However, I'm stuck as to how this works for the Rodrigues definition, because it gives me a derivative with respect to $\cos\theta$ instead of a derivative with respect to $\theta$:
$$\frac{d^n}{d(\cos\theta)^n} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]$$
QUESTION: Is there a way to write this as $\dfrac{d^n}{d\theta^n}[\text{something}]$?
I have read some about Faa di Bruno's formula for the $n$-th order derivative of a composition of functions but it doesn't seem to do what I want to do.
Also, for n=1 there is the identity, from the chain rule, $\dfrac{d}{d(\cos\theta)} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]=\frac{\frac{d}{d\theta} \left[(\sin^2\theta)^{n+\alpha-1/2}\right]}{\frac{d}{d\theta} \left[\cos\theta\right]}$, but this doesn't hold for higher order derivatives. Any ideas?
| Instead of Faa di Bruno's formula, you can try generalizing the formula for the $n$th derivative of an inverse function.
Let $f,g$ be functions of $x$ and inverses of each other. We know that $\displaystyle f'=\frac{1}{g'}$, i.e. $f'g'=1$. Using Leibniz's rule, we get
$\displaystyle (f'g')^{(n)}(\theta)=\sum_{k=0}^n \binom{n}{k} f^{(n-k+1)}(\theta)\, g^{(k+1)}(\theta)=0 \quad\cdots (1)$
Using this equation recursively with $x=\cos^{2}\theta$, so that $f(\theta)=\cos^{2}\theta$ and $f'(\theta)=-2\sin\theta\cos\theta$, one can find the required values. Thus, we have: $\displaystyle\frac{d}{d(\cos^2\theta)}\left[(\sin^2\theta)^{n+\alpha-1/2}\right]=\frac{1}{\frac{d}{d\theta}\left[(\sin^2\theta)^{n+\alpha-1/2}\right]}=\frac{1}{2\sin^{2n+2\alpha-1}\theta\,\cos\theta}$. Then, using $(1)$, we can determine $\displaystyle{\frac{d^n}{d\theta^n}\left[(\sin^2\theta)^{n+\alpha-1/2}\right]}$ for $n>1$ as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/97929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Derivative of a function is odd prove the function is even. $f:\mathbb{R} \rightarrow \mathbb{R}$ is such that $f'(x)$ exists $\forall x.$
And $f'(-x)=-f'(x)$
I would like to show $f(-x)=f(x)$
In other words a function with odd derivative is even.
If I could apply the fundamental theorem of calculus
$\int_{-x}^{x}f'(t)dt = f(x)-f(-x)$ but since the integrand is odd we have $f(x)-f(-x)=0 \Rightarrow f(x)=f(-x)$
but unfortunately I don't know that f' is integrable.
| *
*Define functions $f_0(x)=(f(x)+f(-x))/2$ and $f_1(x)=(f(x)-f(-x))/2$. Then $f_0$ and $f_1$ are also differentiable, and $f_0$ is even and $f_1$ is odd.
*Show that the derivative of an odd function is even, and that of an even function is odd.
*From the equality $f'=f_0'+f_1'$ and step 2, note that $f_1'=f'-f_0'$ is both even and odd, hence identically zero; conclude that $f_1$ is constant and, since $f_1(0)=0$, therefore zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Inequality for modulus Let $a$ and $b$ be complex numbers with modulus $< 1$.
How can I prove that
$\left | \frac{a-b}{1-\bar{a}b} \right |<1$ ?
Thank you
| Here are some hints: Calculate $|a-b|^2$ and $|1-\overline{a}b|^2$ using the formula $|z|^2=z\overline{z}$. To show that $\displaystyle\left | \frac{a-b}{1-\bar{a}b} \right |<1$, it's equivalent to show that
$$\tag{1}|1-\overline{a}b|^2-|a-b|^2>0.$$
To show $(1)$, you need to use the fact that $|a|<1$ and $|b|<1$.
If you need more help, I can give your more details.
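For completeness, here is the computation the answer has in mind, written out (my addition):
$$|1-\bar{a}b|^2-|a-b|^2 = (1-\bar ab)(1-a\bar b)-(a-b)(\bar a-\bar b) = 1+|a|^2|b|^2-|a|^2-|b|^2 = (1-|a|^2)(1-|b|^2)>0,$$
since $|a|<1$ and $|b|<1$.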
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the purpose of Stirling's approximation to a factorial? Stirling approximation to a factorial is
$$
n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n.
$$
I wonder what benefit can be got from it?
From computational perspective (I admit I don't know too much about how each arithmetic operation is implemented and which is cheaper than which), a factorial $n!$ contains $n-1$ multiplications.
In Stirling's approximation, one also has to compute one division and $n$ multiplications for $\left(\frac{n}{e}\right)^n$, no? Plus two multiplication and one square root for $\sqrt{2 \pi n}$, how does the approximation reduce computation?
There may be considerations from other perspectives. I also would like to know. Please point out your perspective if you can.
Added: For purpose of simplifying analysis by Stirling's approximation, for example, the reply by user1729, my concern is that it is an approximation after all, and even if the approximating expression converges, don't we need to show that the original expression also converges and converges to the same thing as its approximation converges to?
Thanks and regards!
| A Stirling inequality
$$\left(\frac{1}{n!}\right)^{\frac{1}{n}} \le \frac{e}{n+1}, \qquad\text{equivalently}\qquad (n!)^{\frac{1}{n}} \ge \frac{n+1}{e},$$
can be used to derive Carleman's inequality from the AM-GM inequality.
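A quick numeric check of this inequality (my own sketch, not part of the original answer):

```python
from math import factorial, e

for n in range(1, 30):
    assert factorial(n) ** (1.0 / n) >= (n + 1) / e
print("(n!)^(1/n) >= (n+1)/e holds for n = 1..29")
```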
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 10,
"answer_id": 6
} |
Is there a simple formula for this simple question about a circle? What is the average distance of the points within a circle of radius $r$ from a point a distance $d$ from the centre of the circle (with $d>r$, though a general solution without this constraint would be nice)?
The question arose as an operational research simplification of a real problem in telecoms networks and is easy to approximate for any particular case. But several of my colleagues thought that such a simple problem should have a simple formula as the solution, but our combined brains never found one despite titanic effort. It looks like it might involve calculus.
I'm interested in both how to approach the problem, but also a final algebraic solution that could be used in a spreadsheet (that is, I don't want to have to integrate anything).
| I guess it involves calculus. Let $(x,y)$ be a point within the circle of radius $R$ and $(d,0)$ the coordinates of the point a distance $d$ away from the origin (because of the symmetry we can choose it to lie on the $x$-axis). Then the distance between the two points is given by $$\ell = \sqrt{(x-d)^2 + y^2}.$$
Averaging over the circle is best done in polar coordinates with $x=r \cos \phi$ and $y=r \sin\phi$. We have
$$\begin{align}
\langle \ell \rangle &= \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, r \sqrt{(r\cos \phi -d)^2 + r^2\sin^2 \phi}\\
&= \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, r \sqrt{r^2 + d^2 -2 d r\cos\phi}.
\end{align}$$
I am not sure if the integral has a simple analytic solution. Thus, I calculate it in three simple limits.
(a) $d\gg R$:
we can expand the $\sqrt{}$ and have
$$\langle \ell \rangle = \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, [r d- r^2 \cos \phi + \frac{r^3}{2d} \sin^2 \phi] = d + \frac{R^2}{8d}$$
(b) for $d = R$ (the point on the rim), switching to polar coordinates centered at the point itself, where the distance $s$ ranges over $0\le s\le 2R\cos\theta$, gives the exact value
$$\langle \ell \rangle = \frac{1}{\pi R^2} \int_{-\pi/2}^{\pi/2} d\theta \int_0^{2R\cos\theta} ds\, s^2 = \frac{8 R}{3\pi}\int_{-\pi/2}^{\pi/2} \cos^3\theta\, d\theta = \frac{32 R}{9\pi}$$
(c) for $d\ll R$ [joriki's comment]
$$\langle \ell \rangle = \frac{1}{\pi R^2} \int_0^R dr \int_0^{2\pi} d\phi\, \left[r^2 -rd\cos\phi + \frac{d^2}{2} \sin^2\phi\right] = \frac{2 R}{3} + \frac{d^2}{2R}$$
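The three limits are easy to confirm numerically; a scipy sketch (my own addition, not part of the original answer):

```python
from math import pi, cos, sqrt
from scipy.integrate import dblquad

R = 1.0
def mean_dist(d):
    # outer variable r in [0, R], inner variable phi in [0, 2*pi]
    val, _ = dblquad(lambda phi, r: r * sqrt(r*r + d*d - 2*d*r*cos(phi)),
                     0, R, 0, 2*pi)
    return val / (pi * R**2)

print(mean_dist(10.0), 10.0 + R**2/(8*10.0))  # (a) far away
print(mean_dist(R),    32*R/(9*pi))           # (b) on the rim
print(mean_dist(0.0),  2*R/3)                 # (c) at the centre
```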
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Easy way to determine the primes for which $3$ is a cube in $\mathbb{Q}_p$? This is a qual problem from Princeton's website and I'm wondering if there's an easy way to solve it:
For which $p$ is $3$ a cube in $\mathbb{Q}_p$?
The case $p=3$ for which $X^3-3$ is not separable modulo $p$ can easily be ruled out by checking that $3$ is not a cube modulo $9$. Is there an approach to this that does not use cubic reciprocity? If not, then I'd appreciate it if someone would show how it's done using cubic reciprocity. I haven't seen good concrete examples of it anywhere.
EDIT: I should have been more explicit here. What I really meant to ask was how would one find all the primes $p\neq 3$ s.t. $x^3\equiv 3\,(\textrm{mod }p)$ has a solution? I know how to work with the quadratic case using quadratic reciprocity, but I'm not sure what should be done in the cubic case.
| For odd primes $q \equiv 2 \pmod 3$, the cubing map on $\mathbb F_q^\times$ is a bijection, so 3 is always a cube $\pmod q$.
For odd primes $p \equiv 1 \pmod 3,$ by cubic reciprocity, 3 is a cube $\pmod p$ if and only if there is an integer representation
$$ p = x^2 + x y + 61 y^2, $$ or $4p=u^2 + 243 v^2.$ In this form this is Exercise 4.15(d) on page 91 of Cox. Also Exercise 23 on page 135 of Ireland and Rosen. The result is due to Jacobi (1827).
For more information when cubic reciprocity is not quite good enough, see Representation of primes by the principal form of discriminant $-D$ when the classnumber $h(-D)$ is 3 by Richard H. Hudson and Kenneth S. Williams, Acta Arithmetica (1991) volume 57 pages 131-153.
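Here is a small brute-force Python check of Jacobi's criterion (my own sketch) for primes $p\equiv 1\pmod 3$ up to $127$:

```python
def three_is_cube_mod(p):
    return any(pow(x, 3, p) == 3 % p for x in range(p))

def four_p_is_u2_plus_243v2(p):
    v = 1
    while 243 * v * v <= 4 * p:
        u2 = 4 * p - 243 * v * v
        if round(u2 ** 0.5) ** 2 == u2:
            return True
        v += 1
    return False

for p in (7, 13, 19, 31, 37, 43, 61, 67, 73, 79, 97, 103, 109, 127):
    assert three_is_cube_mod(p) == four_p_is_u2_plus_243v2(p)
print("Jacobi's criterion checks out for p = 1 (mod 3) up to 127")
```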
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Show that $D_{12}$ is isomorphic to $D_6\times C_2$ Show that $D_{12}$ is isomorphic to $D_6 \times C_2$, where $D_{2n}$ is the dihedral group of order $2n$ and $C_2$ is the cyclic group of order $2$.
I'm somewhat in over my head with my first year groups course. This is a question from an example sheet which I think if someone answered for me could illuminate a few things about isomorphisms to me.
In this situation, does one use some lemma (that the direct product of two subgroups being isomorphic to their supergroup(?) if certain conditions are satisfied)? Does $D_{12}$ have to be abelian for this? Do we just go right ahead and search for a fitting bijection? Can we show the isomorphism is there without doing any of the above?
If someone could please answer the problem in the title and talk their way through, they would be being very helpful.
Thank You.
| Assuming $D_{n}$ is the dihedral group of order $n$, I would proceed as follows. Note that $D_{6} \cong S_{3}$, and $S_{3}$ is generated by $(12)$ and $(123)$. Therefore $D_{6} \times C_{2} = \langle ((12),[0]),((123),[1]) \rangle$. Next note that $D_{12} = \langle r,s \mid r^{6}=s^{2}=e,\ s^{-1}rs=r^{-1} \rangle$; map the generators of $D_{6} \times C_{2}$ to the generators of $D_{12}$ and check that this assignment extends to an isomorphism.
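To make this concrete, here is a short Python sketch (my own addition) checking the dihedral relations for $a=((123),[1])$ and $b=((12),[0])$, and that they generate all $12$ elements of $S_3\times C_2$:

```python
def comp(p, q):                 # (p o q)(i) = p[q[i]], one-line perms on {0,1,2}
    return tuple(p[q[i]] for i in range(3))

def mul(g, h):                  # multiplication in S3 x C2
    return (comp(g[0], h[0]), (g[1] + h[1]) % 2)

def inv(g):
    p = [0, 0, 0]
    for i, pi in enumerate(g[0]):
        p[pi] = i
    return (tuple(p), g[1])

def power(g, n):
    r = e
    for _ in range(n):
        r = mul(r, g)
    return r

e = ((0, 1, 2), 0)
a = ((1, 2, 0), 1)              # ((123), [1]), should have order 6
b = ((1, 0, 2), 0)              # ((12),  [0]), should have order 2

assert power(a, 6) == e and power(a, 3) != e and power(b, 2) == e
assert mul(mul(b, a), inv(b)) == inv(a)      # b a b^{-1} = a^{-1}

G, frontier = {e}, {e}
while frontier:                 # close up the subgroup generated by a and b
    frontier = {mul(g, x) for g in frontier for x in (a, b)} - G
    G |= frontier
print(len(G))                   # 12
```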
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
} |
Exercise about semidirect product This is exercise 7.12 from Algebra, Isaacs.
$G = N \rtimes H$ is a semidirect product; no nonidentity element of $H$ fixes any nonidentity element of $N$; identify $N$ and $H$ with the corresponding subgroups of $G$. Show that:
a) $H \cap H^{g} = 1$ for all $g \in G - H$
b) If $G$ is finite, $G = N \cup \bigcup_{g \in G} H^{g}$
| (a) Let $x\in H\cap H^g$, where $g=hn$, $h\in H$, $n\in N$, $n\neq 1$. Then there exists $y\in H$ such that $x=g^{-1}yg = n^{-1}(h^{-1}yh)n$. Since $h^{-1}yh\in H$, it suffices to consider the case of $g\in N$. So we set $g=n$.
(The intuition is that we want to go to some expression like $x^{-1}nx = n$, because this will force $x=1$ given our assumption. The rest of the computations are done to try to get that.)
Thus, we want to show that if $n\in N$, then $H\cap H^n=\{1\}$. If $ x= n^{-1}yn$, then $nx = yn$, so $x^{-1}nx = (x^{-1}y)n$. However, since $N$ is normal, $x^{-1}nx\in N$, so $x^{-1}y\in N$. Since $x,y\in H$, then $x^{-1}y=1$, hence $x=y$. Thus, $x=n^{-1}xn$.
But this in turn means $x^{-1}nx = n$. Since no nonidentity element of $H$ fixes any nonidentity element of $N$, and $n\neq 1$, then $x=1$. Thus, $H\cap H^n=\{1\}$, and by the argument above we conclude that $H\cap H^g=\{1\}$ for any $g\in G-H$, as claimed.
(b) Added: The fact that this is restricted to finite groups should suggest that you are looking for a counting argument: that is, that you want to show that the number of elements on the right hand side is equal to the number of elements on the left hand side, rather than trying to prove set equality by double inclusion.
Note that $H^g=H^{g'}$ if and only if $gg'^{-1}\in N_G(H)$; by (a), $N_G(H)=H$, so we can take the union over a set of coset representatives of $H$ in $G$; $N$ works nicely as such a set. Again using (a) we have then that:
$$\begin{align*}
\left| N \cup \bigcup_{g\in G}H^g\right| &= \left| N \cup \bigcup_{n\in N}H^n\right|\\
&= |N| + |N|(|H|-1)\\
&= |N| + |N||H|-|N|\\
&= |N||H|\\
&=|G|,
\end{align*}$$
and we are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Estimate probabilities from its moments I want to estimate probability $Pr(X \leq a)$, where $X$ is a continuous random variable and $a$ is given, only based on some moments of $X$ (e.g., the first four moments, but without knowing its distribution type).
| As I had pointed out in my comments, it's hard to answer this question in full generality. So, I'll just point you to a resource online.
But, that said, the magic words are generating functions: probability generating functions and moment generating functions.
The probability generating functions $\Phi_X$ exists only for non-negative integer valued random variables. The Moment generating function $M_X$ is related to the former [whenever and wherever both exist] by the following: $$M_X(t)=\Phi_X(e^t)$$
There are other inputs required, sometimes and sometimes not. So, please go through the material I have pointed you to.
EDITED TO ADD:
I'll get a little specific now:
If the random variable at hand has finite range and you have all the moments, then the distribution of $X$ can be recovered (Theorem 10.2, p. 369, in the text referenced above). If you just have the first two moments, you'll get only the mean and variance.
I'd love to hear from you incase you have specific queries. [Just add a comment below, I'll be notified!]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 2
} |
Continuous extension of a real function defined on an open interval Let $I\subset\mathbb{R}$ be a compact interval and let $J$ denote its interior.
Consider $f:J\to\mathbb{R}$ being continuous.
*
*Under which conditions does the following statement hold?
$$
\text{There exists a continuous extension $g:I\to\mathbb{R}$ of $f$.}\tag{A}
$$
*Is boundedness of $f$ sufficient for (A)?
| Call $I = [a, b]$ with $-\infty < a < b < \infty$. Such an extension exists if and only if both $\lim_{x\to a^+} f(x)$ and $\lim_{x\to b^-} f(x)$ exist, and in fact these values become the values of the extension. (The proof is left as a simple exercise.) With this in mind, boundedness is not sufficient due to previously mentioned functions, such as $\sin\left(\frac{1}{x}\right)$ on $(0, 1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
How to prove that every infinite cardinal is equal to $\omega_\alpha$ for some $\alpha$? How to prove that every infinite cardinal is equal to $\omega_\alpha$ for some $\alpha$ in Kunen's book, I 10.19?
I will appreciate any help on this question. Thanks ahead.
| I took the trouble to read through Kunen in order to understand the problem, as well the definitions which you can use for this.
*
*Cardinal is defined to be an ordinal $\kappa$ that there is no $\beta<\kappa$ and a bijection between $\kappa$ and $\beta$.
*The successor cardinal $\kappa^+$ is the least cardinal which is strictly larger than $\kappa$.
*$\aleph_\alpha=\omega_\alpha$ defined recursively, as the usual definitions go: $\aleph_0=\omega$; $\aleph_{\alpha+1}=\omega_{\alpha+1}=\omega_\alpha^+$; at limit points $\aleph_\beta=\omega_\beta=\sup\{\omega_\alpha\mid\alpha<\beta\}$.
Now we want to show that:
Every cardinal is an $\omega_\alpha$ for some $\alpha$.
Your question concentrates on the second part of the lemma.
Suppose $\kappa$ is an infinite cardinal. If $\kappa=\omega$ we are done. Otherwise let $\beta=\sup\{\alpha+1\mid\omega_\alpha<\kappa\}$. I claim that $\kappa=\omega_\beta$.
Now suppose that $\omega_\beta<\kappa$; then we reach a contradiction, since this means that $\beta+1$ belongs to the set $\{\alpha+1\mid\omega_\alpha<\kappa\}$, and therefore $\beta<\beta+1\le\sup\{\cdots\}=\beta$.
If so, $\kappa\le\omega_\beta$. If $\beta=\alpha+1$ then $\omega_\alpha<\kappa\le\omega_\beta$ and by the definition of a successor cardinal we have equality. Otherwise $\beta$ is a limit ordinal and we have that $\omega_\alpha<\kappa$ for every $\alpha<\beta$; then by the definition of a supremum we have that $\omega_\beta\le\kappa$ and again we have equality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Computing Taylor series for trigonometric exponential function How do I compute the Taylor series for $\cos(x)^{\sin(x)}$? I tried using the $e^x$ rule but I am still not getting to the result:
$$\cos(x)^{\sin(x)}=1-\frac{x^3}{2}+\frac{x^6}{8}+o(x^6).$$
| Your formula ($\cos(x)^{\sin(x)}=1-\frac{x^3}{2}+\frac{x^6}{8}+o(x^6)$) comes straight from the definition of the Taylor series:
$$f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n$$
where $f^{(n)}(x)$ is the $n$th derivative of $f(x)$ with respect to $x$
(notice that $f\in C^{\infty}$).
Put $x_0=0$ and calculate the coefficients of $x^0$, $x^1$, ..., $x^6$.
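In practice one can let a CAS grind out the coefficients; e.g. with sympy (my own sketch, not part of the original answer):

```python
from sympy import symbols, sin, cos, series

x = symbols('x')
print(series(cos(x)**sin(x), x, 0, 7))
# expected: 1 - x**3/2 + x**6/8 + O(x**7)
```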
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Examples of Galois connections? On TWF week 201, J. Baez explains the basics of Galois theory, and say at the end :
But here's the big secret: this has NOTHING TO DO WITH FIELDS! It works for ANY sort of mathematical gadget! If you've got a little gadget k sitting in a big gadget K, you get a "Galois group" Gal(K/k) consisting of symmetries of the big gadget that fix everything in the little one.
But now here's the cool part, which is also very general. Any subgroup of Gal(K/k) gives a gadget containing k and contained in K: namely, the gadget consisting of all the elements of K that are fixed by everything in this subgroup.
And conversely, any gadget containing k and contained in K gives a subgroup of Gal(K/k): namely, the group consisting of all the symmetries of K that fix every element of this gadget.
Apart from fields, what other "big gadgets" can be described in this way ? And what are the corresponding "little gadgets" ?
| The Wikipedia page on Galois connections gives a good list of examples of both monotone and antitone Galois connections.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 5,
"answer_id": 4
} |
Differentiability of Moreau-Yosida approximation. I want to show that if $X$ is a reflexive Banach space with norm of class $\mathcal{C}^1$ and $f\colon X\to\mathbb{R}\cup \{+\infty\}$ is convex and lower semicontinuous, then $f_{\lambda}$ is differentiable of class $\mathcal{C}^1$.
(where $f_{\lambda}:X\to\mathbb{R}\cup \{+\infty\}$ is the Moreau-Yosida approximation: $$f_\lambda(x)=\inf_{y\in X} \left\{ f(y)+\frac{1}{2\lambda}|x-y|^2\right\})$$
Maybe, this result could be useful:
If $g\colon X\to\mathbb{R}$ is convex and differentiable in every point then $g\in\mathcal{C}^1(X)$.
Many thanks in advance.
| Observe that the subdifferential of the function $y\to \frac{\|x-y\|^2}{2\lambda}+f(y)$ is the operator
$$y\to F(y-x)+\partial f(y),$$
where $F:X\to X^*$ is a duality mapping ($Fx=\{f^*\in X^*\,|\,\langle f^*,x\rangle=\|x\|^2=\|f^*\|^2\}$). Now, recall that a point $y$ is a minimizer of a convex function $g$ iff $0\in \partial g(y)$. Since $F$ is a duality mapping, $A=\partial f$ is maximal monotone and $X$ is reflexive, invoking a theorem of Rockafellar (or Minty, I don't remember which), we have that the equation
$$F(y-x)+Ay\ni0$$
has a unique solution, which gives that the infimum is attained. Contrary to user53153's answer, the argument which realizes the infimum is not only a weak limit of a minimizing sequence, but also a solution to a certain equation. This has a direct impact on Gâteaux differentiability of $f_\lambda$ if we don't assume that the norm is $\mathcal{C}^1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/98907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Probability of choosing the correct stick out of a hundred. Challenge from reality show. So I was watching the amazing race last night and they had a mission in which the
contestants had to eat from a bin with 100 popsicles, where only one of those popsicles had the clue written on its stick.
Immediately I thought well of course choosing the correct stick is 1 in a 100.
So taking the correct stick on the first try probability is $\frac{1}{100}$.
Then on the second attempt it should be $\frac{1}{99}$ and so on.
Multiplying these fractions gives an ever smaller number, and so it seems that the more times you try, the lower the probability of getting the correct stick,
while intuitively it seems that the more times you try, the more probable it is for you to get the correct stick.
So how do you calculate the probability of getting the correct one on the first try?
The second? What about the last, I mean the probability of needing all 100 tries to get the correct stick?
Thanks.
| The probability of getting the first wrong is $\dfrac{99}{100}$. The probability of getting the second right given the first is wrong is wrong $\dfrac{1}{99}$; the probability of getting the second wrong given that the first is wrong $\dfrac{98}{99}$. And this pattern continues.
Let's work out the probability of getting the correct one on the fourth try: it is the probability of getting the first three wrong $\dfrac{99}{100}\times\dfrac{98}{99}\times\dfrac{97}{98}$ times the probability of getting the fourth correct given the first three were wrong $\dfrac{1}{97}$. It should be obvious that the answer is $\dfrac{1}{100}$.
It will still be $\dfrac{1}{100}$, no matter which position you are considering. This should not be a surprise as each position of the stick is equally likely.
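The pattern is easy to verify exactly with fractions; a tiny Python sketch (my own addition, with an illustrative helper name):

```python
from fractions import Fraction

def p_correct_on_kth_try(k, N=100):
    pr = Fraction(1)
    for i in range(k - 1):                 # first k-1 picks are wrong
        pr *= Fraction(N - 1 - i, N - i)
    return pr * Fraction(1, N - (k - 1))   # k-th pick is right

print({k: p_correct_on_kth_try(k) for k in (1, 2, 50, 100)})
# every value is 1/100
```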
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why does the following set have no subspaces but {0} and itself? Here's the statement:
The following set, $V$, only has subspaces $\{0\}$ and $V$.
$$V=\{f \colon \mathbb R \to \mathbb R \mid f'(t) = k\cdot f(t) \text{ for all } t, \text{ where } k \text{ is a fixed constant}\}$$
I'm having trouble understanding why there are no other subspaces. Why is this the case here? Examples are welcome. I provided a definition of "subspace" in the comments.
| HINT $\ $ If $f' = kf$ and $g' = kg$, then $\dfrac{f'}{f} = \dfrac{g'}{g} \iff \left(\dfrac{g}{f}\right)' = 0 \iff g = cf$, where $c' = 0$, i.e. $c$ is "constant".
This is a special case of the Wronskian test for linear dependence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Solutions to the matrix equation $\mathbf{AB-BA=I}$ over general fields Some days ago, I was thinking on a problem, which states that $$AB-BA=I$$ does not have a solution in $M_{n\times n}(\mathbb R)$ and $M_{n\times n}(\mathbb C)$. (Here $M_{n\times n}(\mathbb F)$ denotes the set of all $n\times n$ matrices with entries from the field $\mathbb F$ and $I$ is the identity matrix.)
Although I couldn't solve the problem, I came up with this problem:
Does there exist a field $\mathbb F$ for which that equation $AB-BA=I$ has a solution in $M_{n\times n}(\mathbb F)$?
I'd really appreciate your help.
| HINT $\ $ Extending a 1936 result of Shoda for characteristic $0$, Benjamin Muckenhoupt, a 2nd-year graduate student of A. Adrian Albert, proved in the mid-fifties that in the matrix algebra $\mathbb M_n(F)$ over a field $F$, a matrix $M$ is a commutator $M = AB - BA$ iff $M$ is traceless, i.e. $\operatorname{tr}(M) = 0$.
From this we infer that $1$ is a commutator in $\mathbb M_n(\mathbb F_p) \iff n = \operatorname{tr}(1) = 0\in \mathbb F_p \iff p \mid n$.
Muckenhoupt and Albert's proof is short and simple and is freely accessible at the link below.
A. A. Albert, B. Muckenhoupt. On matrices of trace zero, Michigan Math. J. 4, #1 (1957), 1-3.
Tracing citations to this paper reveals much literature on representation by (sums of) commutators and tracefree matrices. For example, Rosset proved that in a matrix ring $\rm\:\mathbb M_n(R)\:$ over a commutative ring $\rm\:R\:,\:$ every matrix of trace zero is a sum of two commutators.
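As a quick sanity check of the case $p \mid n$ (my addition), take $n = p = 2$: over $\mathbb F_2$,
$$A=\begin{pmatrix}0&1\\0&0\end{pmatrix},\quad B=\begin{pmatrix}0&0\\1&0\end{pmatrix}\quad\Longrightarrow\quad AB-BA=\begin{pmatrix}1&0\\0&-1\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}=I,$$
since $-1=1$ in $\mathbb F_2$.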
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "55",
"answer_count": 4,
"answer_id": 3
} |
Integrate $\log(x)$ with Riemann sum In a homework problem I am asked to calculate $\int_1^a \log(x) \mathrm dx$ using a Riemann sum. It also says to use $x_k := a^{k/n}$ as steps for the stair functions.
So far I have this:
My step size is $x_k - x_{k-1}$ which can be reduced to $a^{\frac{k-1}{n}} (a^{\frac{1}{n}} -1)$.
The total sum then is:
$$ S_n = \sum_{k=0}^n \frac{k}{n} \log(a) a^{\frac{k-1}{n}} (a^{\frac{1}{n}} -1) $$
$$ S_n = \log(a) \frac{a^{\frac{1}{n}}}{n} (a^{\frac{1}{n}} -1) \sum_{k=0}^n k a^{\frac{k}{n}} $$
When I punch this into Mathematica to derive the Limit $n \rightarrow \infty$, it gives me $1-a+a \log(a)$ which seems fine to me.
The problem gives a hint, that I should show the Limit of $n(a^{\frac{1}{n}} - 1)$ by setting it equal to a difference quotient. Mathematica says that the limit is $\log(a)$, but that does not really help me out either.
How do I tackle this problem?
Thanks!
| First, notice that $1-a+a \ln (a)$ can't be the (final) answer. It is an antiderivative of $\ln (a)$, but it is not the antiderivative you are looking for: it does not vanish at $0$. The subtlety is that the Riemann sums approximate the integral of the logarithm between $1$ and $a$, and not between $0$ and $a$.
1) The limit of $n (a^{1/n}-1)$
We can assume that $a$ is positive. I will put $f(x) = a^x = e^{x \ln (a)}$ for non-negative $x$. We can see that :
$$\lim_{n \to + \infty} n (a^{1/n}-1) = \lim_{n \to + \infty}\frac{f(1/n)-f(0)}{1/n} = \lim_{h \to 0} \frac{f(h)-f(0)}{h} = f' (0).$$
Interpreting a limit as the derivative of some well-chosen function is a useful trick (that is, before you learn more powerful and general methods). Now, find by yourself the result of Mathematica :)
2) Back to your problem
As a preliminary remark, I advise you to be careful about the bounds in your sums. A nice Riemann sum is a sum going from $0$ to $n-1$, or from $1$ to $n$, so that it has exactly $n$ terms and does not overflow from the domain of integration. here, we are looking at :
$$ S_n = \sum_{k=1}^n (x_k^{(n)} - x_{k-1}^{(n)}) \ln(x_k^{(n)}) = \sum_{k=1}^n a^{\frac{k-1}{n}} (a^{\frac{1}{n}}-1) \ln(a^{\frac{k}{n}}) = \ln (a) a^{-\frac{1}{n}} n (a^{\frac{1}{n}}-1) \left[ \frac{1}{n} \sum_{k=1}^n \frac{k}{n} a^{\frac{k}{n}} \right]$$
(I prefer sums going from $0$ to $n-1$, but since $\ln (0) = - \infty$ it is a tad easier to use a sum from $1$ to $n$)
As $n$ goes to $+ \infty$, we know that $a^{-1/n}$ converges to $1$ and that $n (a^{1/n}-1)$ converges to $\ln (a)$, so that :
$$ \int_1^a \ln (x) dx = \lim_{n \to +\infty} S_n = \ln (a)^2 \lim_{n \to + \infty} \left[ \frac{1}{n} \sum_{k=1}^n \frac{k}{n} a^{\frac{k}{n}} \right].$$
To compute the expression in brackets, look at Joriki's post. As a side note, we can remark that it is a Riemann sum. Hence, with a change of variable ($u = x \ln (a)$):
$$ \int_1^a \ln (x) dx = \ln (a)^2 \int_0^1 x a^x dx = \int_0^{\ln (a)} u e^u du,$$
or equivalently:
$$ \int_0^a \ln (x) dx = \int_{- \infty}^{\ln (a)} u e^u du.$$
Alas, this integral is usually computed with an integration by parts, in other words by the same trick one usually uses to compute an antiderivative of the logarithm, so that we are back at the beginning (one could have obtained this equality with a mere change of variable).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Spaces with equal homotopy groups but different homology groups? Since it's fairly easy to come up with a two spaces that have different homotopy groups but the same homology groups ($S^2\times S^4$ and $\mathbb{C}\textrm{P}^3$). Are there any nice examples of spaces going the other way around? Are there any obvious ways to approach a problem like this?
| Standard example is $\mathbb RP^2\times S^3$ and $\mathbb RP^3\times S^2$ (they have same homotopy groups since they both have $\pi_1=\mathbb Z/2$ and the universal cover is in both cases $S^2\times S^3$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 1
} |
Absolute value of Brownian motion I need to show that $$R_t=\frac{1}{|B_t|}$$ is bounded in $\mathcal{L^2}$ for $(t \ge 1)$, where $B_t$ is a 3-dimensional standard Brownian motion.
I am trying to find a bound for $\mathbb{E}\left[\int_{t=1}^{\infty}R^2_t \,\mathrm dt\right]$.
Asymptotically $B_t^i$ is between $\sqrt{t}$ and $t$. I also know that $|B_t| \to \infty$, but the rate is not clear.
Hints would be helpful.
| Since $B_t$ and $\sqrt{t}B_1$ are identically distributed, $\mathrm E(R_t^2)=t^{-1}\mathrm E(R_1^2)$, hence $\mathrm E(R_t^2)\leqslant\mathrm E(R_1^2)$ for every $t\geqslant1$ and it remains to show that $\mathrm E(R_1^2)$ is finite. Now, the density of the distribution of $B_1$ is proportional to $\mathrm e^{-\|x\|^2/2}$ and $B_1$ has dimension $3$ hence the density of the distribution of $Y=\|B_1\|$ is proportional to $\varphi(y)=y^{3-1}\mathrm e^{-y^2/2}=y^2\mathrm e^{-y^2/2}$ on $y\gt0$. Since the function $y\mapsto y^{-2}\varphi(y)=\mathrm e^{-y^2/2}$ is Lebesgue integrable, the random variable $Y^{-2}=R_1^2$ is integrable.
On the other hand, $\mathrm E\left(\int\limits_1^{+\infty}R_t^2\mathrm dt\right)$ is infinite.
Edit The distribution of $B_1$ yields the distribution of $Y=\|B_1\|$ by the usual change of variables technique. To see this, note that in dimension $n$ and for every test function $u$,
$$
\mathrm E(u(Y))\propto\int_{\mathbb R^n} u(\|x\|)\mathrm e^{-\|x\|^2/2}\mathrm dx\propto\int_0^{+\infty}\int_{S^{n-1}}u(y)\mathrm e^{-y^2/2}y^{n-1}\mathrm d\sigma_{n-1}(\theta)\mathrm dy,
$$
where $\sigma_{n-1}$ denotes the uniform distribution on the unit sphere $S^{n-1}$ and $(y,\theta)\mapsto y^{n-1}$ is proportional to the Jacobian of the transformation $x\mapsto(y,\theta)$ from $\mathbb R^n\setminus\{0\}$ to $\mathbb R_+^*\times S^{n-1}$. Hence,
$$
\mathrm E(u(Y))\propto\int_0^{+\infty}u(y)y^{n-1}\mathrm e^{-y^2/2}\mathrm dy,
$$
which proves by identification that the distribution of $Y$ has a density proportional to $y^{n-1}\mathrm e^{-y^2/2}$ on $y\gt0$.
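For completeness, carrying the normalization constant along in the case $n=3$ even gives the exact value: the density of $Y$ is $\sqrt{2/\pi}\;y^2\mathrm e^{-y^2/2}$ on $y\gt0$, hence
$$
\mathrm E(R_1^2)=\mathrm E(Y^{-2})=\sqrt{\tfrac2\pi}\int_0^{+\infty}\mathrm e^{-y^2/2}\,\mathrm dy=\sqrt{\tfrac2\pi}\cdot\sqrt{\tfrac\pi2}=1,
$$
so in fact $\mathrm E(R_t^2)=1/t\leqslant1$ for every $t\geqslant1$.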
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Mathematics understood through poems? Along with Diophantus mathematics has been represented in form of poems often times. Bhaskara II composes in Lilavati:
Whilst making love a necklace broke.
A row of pearls mislaid.
One sixth fell to the floor.
One fifth upon the bed.
The young woman saved one third of them.
One tenth were caught by her lover.
If six pearls remained upon the string
How many pearls were there altogether?
Or to cite from modern examples: Poetry inspired by mathematics which includes Tom Apostol's Where are the zeros of Zeta of s? to be sung to to the tune of "Sweet Betsy from Pike". Or Tom Lehrer's derivative poem here.
Thus my motivation is to compile here a collection of poems that explain relatively obscure concepts. Rap culture welcome but only if it includes homological algebra or similar theory. (Please let us not degenerate it to memes...). Let us restrict it to only one poem by answer so as to others can vote on the richness of the concept.
| Prof. Geoffrey K. Pullum's "Scooping the Loop Snooper: A proof that the Halting Problem is undecidable", in the style of Dr. Seuss.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 19,
"answer_id": 13
} |
truth table equivalency I am stuck on this question and attempting to answer it makes me feel that its equivalent to searching for a needle in a large pond...
I need help with this, can someone explain how I even attempt to find the solution to this?
Question: Find a logical statement equivalent to $(A \to B) \& \sim C$; the statement must use only the operators $\sim, |$.
I know that I can do $(A \& \sim B) \, | \, C$ which is logically equivalent but it says not to use anything other than $\sim, |$. The statement I have uses "$\&$".
| I will assume that "|" is the NAND operator, defined as:
$A | B \Leftrightarrow \lnot(A \land B)$
If it is so then we can write :
$(A \rightarrow B) \land \lnot C \Leftrightarrow (\lnot A \lor B) \land \lnot C \Leftrightarrow (\lnot A \land \lnot C) \lor (B \land \lnot C) \Leftrightarrow$
$\Leftrightarrow \lnot(\lnot A \mid \lnot C) \lor \lnot(B \mid \lnot C) \Leftrightarrow \lnot ((\lnot A \mid \lnot C) \land (B \mid \lnot C)) \Leftrightarrow$
$\Leftrightarrow (\lnot A \mid \lnot C) \mid (B \mid \lnot C)$
On the other hand, if " | " is the OR operator, then we have:
$(A \rightarrow B) \land \lnot C \Leftrightarrow (\lnot A \lor B) \land \lnot C \Leftrightarrow \lnot(\lnot(\lnot A \lor B) \lor C)$
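As a sanity check, here is a brute-force truth-table enumeration in Python (a verification script only, not part of the derivation) confirming the NAND version:

```python
def nand(a, b):
    return not (a and b)

# check (A -> B) & ~C  ==  (~A | ~C) | (B | ~C), with "|" read as NAND
for A in (False, True):
    for B in (False, True):
        for C in (False, True):
            lhs = ((not A) or B) and (not C)
            rhs = nand(nand(not A, not C), nand(B, not C))
            assert lhs == rhs
print("all 8 rows agree")
```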
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Convergence of the next series I'm trying to determine the convergence of this series:
$$\sum_{n=1}^\infty\left(\frac12\cdot\frac34\cdot\frac56\cdots\frac{2n-3}{2n-2}\cdot\frac{2n-1}{2n}\right)^a$$
I've tried using d'Alembert's criterion (the ratio test):
$$\lim_{n\to\infty}\frac{\left(\frac12\cdot\frac34\cdots\frac{2n-1}{2n}\cdot\frac{2n+1}{2n+2}\right)^a}{\left(\frac12\cdot\frac34\cdots\frac{2n-1}{2n}\right)^a} =
\lim_{n\to\infty}\left(\frac{2n+1}{2n+2}\right)^a$$
But that limit is $1$, so the test is inconclusive.
Any idea?
| Rewrite the summand in terms of factorials like this: $$ \left( \frac{(2n)!}{ 2^{2n} (n!)^2} \right)^a .$$ Applying Stirling's approximation gives $$ \frac{(2n)!}{ 2^{2n} (n!)^2} \sim \frac{1}{\sqrt{\pi n} } $$ so to finish off, apply what you know about the convergence of $ \displaystyle \sum \frac{1}{n^p} $ for various $p.$
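Spelling out the Stirling step, in case it helps: from $n!\sim\sqrt{2\pi n}\,(n/e)^n$,
$$\frac{(2n)!}{2^{2n}(n!)^2}\sim\frac{\sqrt{4\pi n}\,(2n/e)^{2n}}{2^{2n}\cdot 2\pi n\,(n/e)^{2n}}=\frac{\sqrt{4\pi n}}{2\pi n}=\frac1{\sqrt{\pi n}},$$
so the summand behaves like $(\pi n)^{-a/2}$, and the series converges exactly when $a/2>1$, i.e. for $a>2$.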
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Calculating $\prod (\omega^j - \omega^k)$ where $\omega^n=1$. Let $1, \omega, \dots, \omega^{n-1}$ be the roots of the equation $z^n-1=0$, so that the roots form a regular $n$-gon in the complex plane. I would like to calculate
$$ \prod_{j \ne k} (\omega^j - \omega^k)$$
where the product runs over all $j \ne k$ with $0 \le j,k < n$.
My attempt so far
Noting that if $k-j = d$ then $\omega^j - \omega^k = \omega^j(1-\omega^d)$, I can re-write the product as
$$ \prod_{d=1}^{\lfloor n/2 \rfloor} \omega^{n(n-1)/2}(1-\omega^d)^n$$
I thought this would be useful but it hasn't led me anywhere.
Alternatively I could exploit the symmetry $\overline{1-\omega^d} = 1-\omega^{n-d}$ somehow, so that the terms in the product are of the form $|1-\omega^d|^2$. I tried this and ended up with a product which looked like
$$\prod_{j=0}^{n-1} |1 - \omega^j|^n $$
(with awkward multiplicative powers of $-1$ left out). This appears to be useful, but calculating it explicitly is proving harder than I'd have thought.
The answer I'm expecting to find is something like $n^n$.
My motivation for this comes from Galois theory. I'm trying to calculate the discriminant of the polynomial $X^n+pX+q$. I know that it must be of the form $ap^n+bq^{n-1}$ for some $a,b \in \mathbb{Z}$, and putting $p=0,q=-1$, the polynomial becomes $X^n-1$. This has roots $1, \omega, \dots, \omega^{n-1}$, so that $(-1)^{n-1}b$ is (a multiple of) the product you see above. An expression for $a$ can be found similarly by setting $p=-1,q=0$.
| First, note that
$$\prod_{k=1}^{n-1} (1-w^k) = n$$
The proof is that $\prod_{k=1}^{n-1}(x-w^k) = 1+x+x^2+...+x^{n-1}$, then substitute $x=1$.
Now, you can rewrite:
$$\prod_{j\neq k} (w^j-w^k) = \prod_{j=0}^{n-1} \prod_{i=1}^{n-1} (w^j-w^{i+j})$$
$$= \prod_{j=0}^{n-1} w^{j(n-1)} n = n^n w^{\frac{n(n-1)^2}2}$$
If $n$ is odd, then $w^{\frac{n(n-1)^2}2}= 1$, otherwise $w^{\frac{n(n-1)^2}2}=-1$. So we can write our formula as $(-1)^{n-1}n^n$ or as $-(-n)^n$.
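A quick numerical spot-check of the closed form $(-1)^{n-1}n^n$ (a verification script only; floating-point complex arithmetic leaves a tiny rounding residue):

```python
import cmath

for n in (2, 3, 4, 5):
    w = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    p = 1
    for j in range(n):
        for k in range(n):
            if j != k:
                p *= w[j] - w[k]
    print(n, p, (-1) ** (n - 1) * n ** n)  # e.g. n=3 gives 27, n=4 gives -256
```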
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Essays on the real line? Are there any essays on real numbers (in general?).
Specifically I want to learn more about:
*
*The history of (the system of) numbers;
*their philosophical significance through history;
*any good essays on their use in physics and the problems of modeling a 'physical' line.
Cheers.
I left this vague as google only supplied Dedekind theory of numbers which was quite interesting but not really what I was hoping for.
| You might try consulting The World of Mathematics, edited by James R. Newman. This is a four-volume compendium of articles on various topics in mathematics. It was published in 1956 so is not exactly cutting-edge, but then again, neither is our understanding of the construction of the real numbers. It contains an essay by Dedekind himself, which is just part of a section of articles on the number concept.
There is also a book simply called Number by Tobias Dantzig that is a classic history of the number system.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
The square of an integer is congruent to 0 or 1 mod 4 This is a question from the free Harvard online abstract algebra lectures. I'm posting my solutions here to get some feedback on them. For a fuller explanation, see this post.
This problem is from assignment 6. The notes from this lecture can be found here.
a) Prove that the square $a^2$ of an integer $a$ is congruent to 0 or 1 modulo 4.
b) What are the possible values of $a^2$ modulo 8?
a) Let $a$ be an integer. Then $a=4q+r,\ 0\leq r<4$, with $\bar{a}=\bar{r}$. Then we have $a^2=a\cdot a=(4q+r)^2=16q^2+8qr+r^2=4(4q^2+2qr)+r^2$, so $\bar{a^2}=\bar{r^2}$. Checking the four cases $r=0,1,2,3$ gives $r^2=0,1,4,9$, which reduce modulo 4 to $0,1,0,1$. Then $\bar{a^2}=\bar{0}$ or $\bar{1}$.
b) Let $a$ be an integer. Then $a=8q+r,\ 0\leq r<8$, with $\bar{a}=\bar{r}$. Then we have $a^2=a\cdot a=(8q+r)^2=64q^2+16qr+r^2=8(8q^2+2qr)+r^2$, so $\bar{a^2}=\bar{r^2}$. Checking $r=0,1,\dots,7$ gives $r^2=0,1,4,9,16,25,36,49$, which reduce modulo 8 to $0,1,4,1,0,1,4,1$. Then $\bar{a^2}=\bar{0}$, $\bar{1}$ or $\bar{4}$.
Again, I welcome any critique of my reasoning and/or my style as well as alternative solutions to the problem.
Thanks.
| $$\begin{align}
x^2 \mod 4 &\equiv (x \mod 4)(x \mod 4) \pmod 4 \\
&\equiv \begin{cases}0^2 \mod 4 \\
1^2 \mod 4 \\
2^2 \mod 4 \\
3^2 \mod 4 \end{cases} \\
&\equiv \begin{cases}0 \mod 4 \\
1 \mod 4 \\
4 \mod 4 \\
9 \mod 4 \end{cases} \\
&\equiv \begin{cases}0 \mod 4 \\
1 \mod 4 \\
0 \mod 4 \\
1 \mod 4 \end{cases}
\end{align}
$$
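Since $a^2 \bmod m$ depends only on $a \bmod m$, a machine check over one period confirms both parts of the question (a quick verification snippet, not a proof):

```python
print(sorted({a * a % 4 for a in range(4)}))  # [0, 1]
print(sorted({a * a % 8 for a in range(8)}))  # [0, 1, 4]
```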
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 0
} |
Morita equivalence of acyclic categories (Crossposted to MathOverflow.)
Call a category acyclic if only the identity morphisms are invertible and the endomorphism monoid of every object is trivial. Let $C, D$ be two finite acyclic categories. Suppose that they are Morita equivalent in the sense that the abelian categories $\text{Fun}(C, \text{Vect})$ and $\text{Fun}(D, \text{Vect})$ are equivalent (where $\text{Vect}$ is the category of vector spaces over a field $k$, say algebraically closed of characteristic zero). Are $C, D$ equivalent? (If so, can we drop the finiteness condition?)
Without the acyclic condition this is false; for example, if $G$ is a finite group regarded as a one-object category, $\text{Fun}(G, \text{Vect})$ is completely determined by the number of conjugacy classes of $G$, and it is easy to write down pairs of nonisomorphic finite groups with the same number of conjugacy classes (take, for example, any nonabelian group $G$ with $n < |G|$ conjugacy classes and $\mathbb{Z}/n\mathbb{Z}$).
On the other hand, I believe this result is known to be true if $C, D$ are free categories on finite graphs by basic results in the representation theory of quivers, and I believe it's also known to be true if $C, D$ are finite posets.
| On MO, Benjamin Steinberg links to a paper of Leroux with a counterexample.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
How can I solve the differential equation $y'+y^{2}=f(x)$? $$y'+y^{2}=f(x)$$
I know how to find endless series solution via endless integral or endless derivatives and power series solution if we know $f(x)$. I also know how to find general solution if we know one particular solution ($y_0$).
I am looking for an exact analytic solution $y= L({f(x)})$ without knowing a particular solution, if it exists. (Here $L$ denotes an operator such as an integral, derivative, radical, or any defined function.) If it does not exist, could you please prove why we cannot find it?
Note: This equation is related to a second-order linear differential equation. If we put $y=u'/u$, this equation turns into $u''(x)-f(x)\,u(x)=0$. If we find the general solution of $y'+y^{2}=f(x)$, it means that $u''(x)-f(x)\,u(x)=0$ will be solved as well. As we know, many functions such as Bessel functions or Hermite polynomials, and many other special functions, are related to second-order linear differential equations.
Thank you for answers.
EDIT:
I asked the question in mathoverflow too. You can also find the link below for details. (1-Endless transform, 2-Endless Integral,3-Endless Derivatives,4-Power series) and answers about the subject.
https://mathoverflow.net/questions/87041/looking-for-the-solution-of-first-order-non-linear-differential-equation-y-y
| Interesting. In Maple I tried $y'+y^2 = \sin(x)$, and the solution involves Mathieu functions $S, C, S', C'$.
I tried $y'+y^2=x$, and the solution involves Airy functions Ai, Bi.
I tried $y'+y^2=1/x$, and the solution involves Bessel functions $I_0, I_1, K_0, K_1$.
This is a Riccati equation. For more info, in particular how its solutions are related to solutions of a second-order linear equation, look that up.
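To see where, say, the Airy functions come from in the case $f(x)=x$: the substitution $y=u'/u$ from the question gives $y'+y^2=u''/u$, so the Riccati equation becomes the Airy equation $u''-xu=0$, with general solution $u=c_1\operatorname{Ai}(x)+c_2\operatorname{Bi}(x)$, and hence
$$y(x)=\frac{c_1\operatorname{Ai}'(x)+c_2\operatorname{Bi}'(x)}{c_1\operatorname{Ai}(x)+c_2\operatorname{Bi}(x)}.$$
The Mathieu and Bessel examples above arise the same way, from $u''=\sin(x)\,u$ and $u''=u/x$ respectively.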
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
If $xy$ is a unit, are $x$ and $y$ units? I know if $x$ and $y$ are units, in say a commutative ring, then $xy$ is a unit with $(xy)^{-1}=y^{-1}x^{-1}$.
But if $xy$ is a unit, does it necessarily follow that $x$ and $y$ are units?
| Yes. Let $z=xy$. If $z$ is a unit with inverse $z^{-1}$, then $x$ is a unit with inverse $yz^{-1}$, and $y$ is a unit with inverse $xz^{-1}$, because
$$x(yz^{-1})=(xy)z^{-1}=zz^{-1}=1$$
$$y(xz^{-1})=(yx)z^{-1}=(xy)z^{-1}=zz^{-1}=1$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/99949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 1
} |
If $B \ (\supseteq A)$ is a finitely-generated $A$-module, then $B$ is integral over $A$. I'm going through a proof of the statement:
Let $A$ and $B$ be commutative rings.
If $A \subseteq B$ and $B$ is a finitely generated $A$-module, then all $b \in B$ are integral over $A$.
Proof:
Let $\{c_1, ... , c_n\} \subseteq B$ be a set of generators for $B$ as an $A$-module, i.e $B = \sum_{i=1}^n Ac_i$. Let $b \in B$ and write $bc_i = \sum_{j=1}^n a_{ij}c_j $ with $a_{ij} \in A$, which says that $(bI_n - (a_{ij}))c_j = 0 $ for $ 1 \leq j \leq n$. Then we must have that $\mathrm{det}(bI_n - (a_{ij})) = 0 $. This is a monic polynomial in $b$ of degree $n$.
Why are we not done here? The proof goes on to say:
Write $1 = \alpha_1 c_1 + ... + \alpha_n c_n$, with the $\alpha_i \in A$. Then $\mathrm{det}(bI_n - (a_{ij})) = \alpha_1 (\mathrm{det}...) c_1 + \alpha_2 (\mathrm{det}...) c_2 + ... + \alpha_n (\mathrm{det}...) c_n = 0$. Hence every $b \in B$ is integral over $A$.
I understand what is being done here on a technical level, but I don't understand why it's being done. I'd appreciate a hint/explanation. Thanks
| Another way to phrase it, slightly different to Georges's answer and comments,
is as follows:
In the first paragraph of the proof, $B$ could be replaced by any f.g. $A$-module $M$, and $b$ could any endomorphism of that $A$-module. What we conclude is that every $\varphi \in End_A(M)$ is integral over $A$.
In particular, if $M$ is in fact a $B$-module, then we conclude that the image of $B$ in $End_A(M)$ is integral over $A$.
The point of the second paragraph is to observe that (since $B$ is a ring with $1$), the natural map $B \to End_A(B)$ (given by $B$ acting on itself through
multiplication) is injective, so that $B$ coincides with its image in $End_A(B)$. Only after making this additional observation can we conclude that
$B$ is integral over $A$.
Just as something to think about, what you'll see is that the argument proves
that if $B$ is an $A$-algebra which admits a faithful module which is f.g. over $A$, then $B$ is integral over $A$. On the other hand, if $B$ just admits a module that is f.g. over $A$, but not necessarily faithful, then we can't conclude that $B$ is integral over $A$. (See if you can find a counterexample.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Restricted Integer Partitions Two Integer Partition Problems
Let $P(n,k,m)$ be the number of partitions of $n$ into $k$ parts with all parts $\leq m$.
So $P(10,3,4) = 2$, i.e., (4,4,2); (4,3,3).
I need help proving the following:
$P(2n,3,n-1) = P(2n-3,3, n-2)$
$P(4n+3, 3, 2n+1) = P(4n,3,2n-1) + n + 1$.
| For the first one: For any partition $2n=a+b+c$ where $a,b,c \leq n-1$, we have a partition $2n-3=(a-1)+(b-1)+(c-1)$ where $a-1,b-1,c-1 \leq n-2$ and vice versa.
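To illustrate the bijection with $n=5$: $P(10,3,4)=2$ counts $(4,4,2)$ and $(4,3,3)$, and subtracting $1$ from each part sends these to $(3,3,1)$ and $(3,2,2)$, which are exactly the partitions counted by $P(7,3,3)$.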
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
problem with continuous functions $f,g:\mathbb{R}\longrightarrow\mathbb{R}$
f,g are continuous functions. $\forall q\in\mathbb{Q}$ $f\left(q\right)\leq g\left(q\right)$
I need to prove that $\forall x\in\mathbb{R}$ $f\left(x\right)\leq g\left(x\right)$
| Hint: Note that if $(x_n)_{n\in\mathbb{N}}$ is a convergent sequence of real numbers and $f,g$ are continuous functions, $\lim\limits_{n\to\infty} f(x_n)=f(\lim\limits_{n\to\infty}x_n)$ and $\lim\limits_{n\to\infty} g(x_n)=g(\lim\limits_{n\to\infty}x_n)$, and that for any real number $x$ we have some sequence $(x_n)_{n\in\mathbb{N}}$ of rational numbers that converges to it. How can we manipulate limits to show that $f(x)=f(\lim\limits_{n\to\infty}x_n)\leq g(\lim\limits_{n\to\infty}x_n)=g(x)$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
The meaning of Implication in Logic How do I remember Implication Logic $(P \to Q)$ in simple English?
I read some sentence like
*
*If $P$ then $Q$.
*$P$ only if $Q$.
*$Q$ if $P$.
But I am unable to correlate these sentences with the following logic.
Even though the truth table is very simple, I don't want to remember it without knowing its actual meaning.
$$\begin{array}{ |c | c || c | }
\hline
P & Q & P\Rightarrow Q \\ \hline
\text T & \text T & \text T \\
\text T & \text F & \text F \\
\text F & \text T & \text T \\
\text F & \text F & \text T \\ \hline
\end{array}$$
| I would like to share my own understanding of this.
I like to think of Implication as a Promise rather than Causality which is the natural tendency when you come across it the first time.
Example:
You have a nice kid and you make him the following promise to him:
If you get an A in your exam, then I will buy you a car.
In this case P is kid gets A in exam and Q is You buy him a car.
Now let's see how this promise holds with various values for P and Q
If P is true (Kid gets A in exam) and Q is true (You bought him car) then your promise has held and $P \Rightarrow Q$ is true.
If P is true (Kid gets A in exam) and Q is false (You didn't buy him a car) then your promise didn't hold so $P \Rightarrow Q$ is false.
If P is false (Kid didn't get A in exam) and Q is true (You bought him car) then your promise still holds and $P \Rightarrow Q$ is true, and that's because you only said what will happen if he gets an A; you basically didn't say what will happen if he doesn't, which could mean anything. Basically you didn't break your promise, and this is the weak property which most people find confusing in implication.
If P is false (Kid didn't get A in exam) and Q is false (you didn't buy him a car) then your promise has also held and $P \Rightarrow Q$ is true because you only promised and guaranteed a car if he gets an A.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 9,
"answer_id": 0
} |
Is there a formula for solving integrals of this form? I was wondering if there was a substitution formula to solve integrals of this form:
$\int f(g(x))g''(x)dx$
| No, not a nice one, anyway. It is worthwhile, I think, to point out that integration rules, such as the usual substitution rule, do not always "solve" ("evaluate" is the proper term) the given integral. The usual substitution rule, for instance, only transforms the integral into another integral which may or may not be easily handled.
Of course, if the antiderivative of $f$ is known, then the usual substitution rule will allow you to evaluate integrals of the form $\int f\bigl(g(x)\bigr)g'(x)\,dx$.
I don't think a formula of the type you seek would be very useful, as it can't handle all cases when an antiderivative of the "outer function" is known: consider $\int \sin(x^2)\cdot 2\,dx$ (here $f=\sin$, $g(x)=x^2$, $g''(x)=2$). This can't be expressed in an elementary way, since $\int\sin(x^2)\,dx$ is a Fresnel integral.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
If $n\ge 3$, $4^n \nmid 9^n-1$ Could anyone give me a hint to prove the following?
If $n\ge 3$, $4^n \nmid 9^n-1$
| Hint :
Try to prove using induction :
$1.$ $9^3 \not \equiv 1 \pmod {4^3}$
$2.$ suppose : $9^k \not \equiv 1 \pmod {4^k}$
$3.$ $9^k \not \equiv 1 \pmod {4^k} \Rightarrow 9^{k+1} \not \equiv 9 \pmod {4^k}$
So you have to prove :
$ 9^{k+1} \not \equiv 9 \pmod {4^k} \Rightarrow 9^{k+1} \not \equiv 1 \pmod {4^{k+1}}$
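A quick numerical sanity check of the statement itself (evidence only, not a substitute for the induction):

```python
# verify that 4**n does not divide 9**n - 1 for small n >= 3
for n in range(3, 15):
    assert pow(9, n, 4 ** n) != 1, n
print("holds for n = 3..14")
```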
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Every non zero commutative ring with identity has a minimal prime.
Let $A$ be a non zero commutative ring with identity. Show that the set of prime ideals of $A$ has minimal elements with respect to inclusion.
I don't know how to prove that. I can suppose that the ring is not an integral domain, since otherwise the ideal $(0)$ is a prime ideal (and is clearly the minimal one), but I don't know how to proceed. Probably it's a Zorn application.
| A hint, with further remarks on the structure of the set of prime ideals, can be found in Kaplansky's excellent textbook Commutative Rings. For a recent survey on the poset structure of prime ideals in commutative rings see R & S Wiegand, Prime ideals in Noetherian rings: a survey, in T. Albu, Ring and Module Theory, 2010.
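For reference, here is the standard Zorn-style sketch that the question anticipates (a sketch in my own words, not necessarily Kaplansky's exact argument): order the set of prime ideals of $A$ by reverse inclusion. It is nonempty, since the nonzero ring $A$ has a maximal ideal, which is prime. Given a chain $\{P_i\}$ of primes, the intersection $P=\bigcap_i P_i$ is again prime: if $ab\in P$ while $a\notin P_i$ and $b\notin P_j$ for some $i,j$, then taking the smaller of $P_i,P_j$ (say $P_j\subseteq P_i$) gives $a,b\notin P_j$ but $ab\in P_j$, contradicting primality. So every chain has an upper bound, and Zorn's lemma yields a prime minimal with respect to inclusion.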
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |
Showing that $ \int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2 $ I would like to show that
$$ \int_{0}^{1} \frac{x-1}{\ln(x)} \mathrm dx=\ln2 $$
What annoys me is that $ x-1 $ is the numerator so the geometric power series is useless.
Any idea?
| This is a classic example of differentiating inside the integral sign.
In particular, let $$J(\alpha)=\int_0^1\frac{x^\alpha-1}{\log(x)}\;dx.$$ Then one has that $$\frac{\partial}{\partial\alpha}J(\alpha)=\int_0^1\frac{\partial}{\partial\alpha}\frac{x^\alpha-1}{\log(x)}\;dx=\int_0^1x^\alpha\;dx=\frac{1}{\alpha+1}$$ and so we know that $\displaystyle J(\alpha)=\log(\alpha+1)+C$. Noting that $J(0)=0$ tells us that $C=0$ and so $J(\alpha)=\log(\alpha+1)$. In particular, the integral in question is $J(1)=\log(2)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 3,
"answer_id": 0
} |
One vs multiple servers - problem Consider the following problem:
We have a simple queueing system with $\lambda%$ - probabilistic intensity of queries per some predefined time interval.
Now, we can arrange the system as a single high-end server ($M/M/1$, which can handle the queries with the intensity of $2\mu$) or as two low-end servers ($M/M/2$, each server working with intensity of $\mu$).
So, the question is - which variant is better in terms of overall performance?
I suspect that it's the first one, but, unfortunately, my knowledge of queuing / probability theory isn't enough.
Thank you.
| You need to specify what you mean by "overall performance", but for most measures the two-server system will have better performance. Intuitively, a "complicated" customer, one with a long service time, will shut down the M/M/1 queue but only cripple the M/M/2 queue.
If we let the utilization be $$\rho=\frac{\lambda}{2\mu}$$ then some of the usual performance measures are
$L_q$ the average length of the queue, $W_q$ the average waiting time, and $\pi_0$ the probability that the queue is empty. For the M/M/1 queue these measures are
$$L_q=\frac{\rho^2}{1-\rho}$$
$$W_q=\frac{\rho^2}{\lambda(1-\rho)}$$
$$\pi_0=1-\rho$$
and for the M/M/2 queue
$$L_q=\frac{2\rho^3}{1-\rho^2}$$
$$W_q=\frac{2\rho^3}{\lambda(1-\rho^2)}$$
$$\pi_0=\frac{1-\rho}{1+\rho}$$
So, the system is empty more often in the M/M/1 queue, but the expected wait time and the expected queue length are less for the M/M/2 (as $\frac{2\rho}{1+\rho}<1$).
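A small numeric comparison of the formulas above (the values of $\lambda$ and $\mu$ are assumptions for illustration, not from the question; $W_q = L_q/\lambda$ by Little's law):

```python
lam, mu = 1.5, 1.0            # assumed arrival rate and per-server service rate
rho = lam / (2 * mu)          # utilization, the same for both systems

Lq_mm1 = rho**2 / (1 - rho)           # M/M/1 with one server of rate 2*mu
Lq_mm2 = 2 * rho**3 / (1 - rho**2)    # M/M/2 with two servers of rate mu

print(f"rho = {rho}")
print(f"M/M/1: Lq = {Lq_mm1:.3f}, Wq = {Lq_mm1 / lam:.3f}")
print(f"M/M/2: Lq = {Lq_mm2:.3f}, Wq = {Lq_mm2 / lam:.3f}")
# M/M/2 wins on queue length and waiting time; M/M/1 is idle more often
```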
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Linear operator's linearity $$f:R^n \to R^3 \ \ \ \ \ \ \ \ \ f(x,y,z)=(x-z,y,az^2)$$
I have to find $n$ and $a$ such that $f$ is a linear operator.
$$x-z=0$$
$$y=0$$
$$az^2=0$$
I found $n$ to be 3.
For $az^2$ to be equal to $0$, either $z$ is $0$ or $a$ is $0$, right? The $z^2$ term is confusing me; I don't see how it can come from a linear map $R^n \to R^3$. Any idea please? After finding $a$ and $n$, I have to write the matrix of $f$ and find $\dim(\operatorname{Ker}F)$. Thank you.
| The fact that $n=3$ comes from inspection.
In order for $f:\mathbb{R}^3\to\mathbb{R}^3:(x,y,z)\mapsto(x-z,y,az^2)$ to be a linear operator you need
$$f(\vec{x}+\vec{u})=f(\vec{x})+f(\vec{u}), \quad\text{or}$$
$$\forall \vec{x},\vec{u}\in\mathbb{R}^3:\quad\begin{cases}(x+u)-(z+w)=(x-z)+(u-w) \\ (y+v)=(y)+(v) \\ a(z+w)^2=az^2+aw^2.\end{cases}$$
(Note: $\vec{x}=(x,y,z),\vec{u}=(u,v,w)$ here.) The first two check out but the last one implies $2azw=0$ for all $z,w\in\mathbb{R}$, which is obviously false unless $a=0$.
Now we have that the matrix associated to $f$ as a linear map is given by
$$f:\begin{pmatrix}x\\y\\z\end{pmatrix}\mapsto x\begin{pmatrix}1\\0\\0\end{pmatrix}+y\begin{pmatrix}0\\1\\0\end{pmatrix}+z\begin{pmatrix}-1\\0\\0\end{pmatrix}\quad\text{hence}\quad f(\vec{x})=\begin{pmatrix}1&0&-1\\0&1&0\\0&0&0\end{pmatrix}\vec{x}.$$
Finally, to find $\mathrm{Ker} f$, solve $x-z=0$ and $y=0$; the kernel is parametrized by $(t,0,t)$, hence $\dim(\mathrm{Ker} f)=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Multiplying Infinite Cardinals (by Zero Specifically) On the Wikipedia page on Cardinal Numbers, Cardinal Arithmetic including multiplication is defined. For finite cardinals there is multiplication by zero, but for infinite cardinals only defines multiplication for nonzero cardinals. Is multiplication of an infinite cardinal by zero undefined? If so, why is it?
Also, does $\kappa\cdot\mu= \max\{\kappa,\mu\}$ simply mean that the product of the two is the cardinality of the larger cardinal? Why is this?
| For any cardinal $\kappa$ whatsoever, $0\cdot\kappa=\kappa\cdot 0=0$. This is an immediate consequence of the definition and the fact that for any set $X$, $\varnothing\times X=\varnothing$.
Yes, if one assumes the axiom of choice, the product of two infinite cardinals is simply the larger of them; so is their sum. The product of a non-zero finite cardinal and an infinite cardinal is that infinite cardinal, so it’s also simply the larger of the two. This fails when the finite cardinal is $0$, because then the product is $0$.
Even without the axiom of choice it’s true that if $\kappa$ and $\mu$ are well-orderable cardinals, $\kappa\cdot\mu=\max\{\kappa,\mu\}$. This is proved by constructing a bijection between $\kappa\times\mu$ and $\max\{\kappa,\mu\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Local diffeomorphism from $\mathbb R^2$ onto $S^2$ Is there any local diffeomorphism from $\mathbb R^2$ onto $S^2$?
| First note that there actually are no covering maps $\mathbb{R}^2 \to S^2$. This is because $S^2$ is simply connected and hence is its own universal cover; if there were a covering map $\mathbb{R}^2 \to S^2$, then by the universal property of the universal cover there would be a covering map $S^2 \to \mathbb{R}^2$. But there can't even be a continuous surjection $S^2 \to \mathbb{R}^2$ because $S^2$ is compact and $\mathbb{R}^2$ is not.
Thus any local diffeomorphism $\mathbb{R}^2 \to S^2$ answers your question. For example, the inverse of the stereographic projection map does the job.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Hindu calendar (lunar) to Gregorian calendar We have to convert the Hindu calendar (the lunar one) to the Gregorian calendar. According to the (Dutch) Wikipedia (sorry for that, but it has more information than other websites), it is based on he angle between the sun and moon. Now I have many questions about that, such as what angle do they mean? If I draw a line between the sun and moon, I am missing a line to calculate the angle. But the more concrete question is: how can we convert from the Hindu calendar to the Gregorian calendar? Also, we have noticed that the conversion is not injective; it can happen that one Hindu date relates to two Gregorian dates. We decide in that case to pick the first Gregorian date, but we are still not able to convert dates.
One of the days we know the conversion of is:
Basant Pachami (d-m-Y) : 5-11-2068 (Hindu) : 1-28-2012 (Gregorian)
| I don't have the entire answer but I hope this will at least help a bit you if not more.
Have you looked at http://www.webexhibits.org/calendars/calendar-indian.html
Quoting:
"All astronomical calculations are performed with respect to a Central Station at longitude 82°30’ East, latitude 23°11’ North."
Why do you wish to convert these dates?
If this is a one-time thing, you might be able to find something that does it for you.
I found http://www.rajan.com/calendar/ but this is Gregorian <> Nepali (I was under the assumption that it is the same, maybe it isn't)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/100961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
On the set of integer solutions of $x^2+y^2-z^2=-1$. Let
$$
\mathcal R=\{x=(x_1,x_2,x_3)\in\mathbb Z^3:x_1^2+x_2^2-x_3^2=-1\}.
$$
The group $\Gamma= M_3(\mathbb Z)\cap O(2,1)$ acts on $\mathcal R$ by left multiplication.
It's known that there is only one $\Gamma$-orbit in $\mathcal R$, i.e. $\Gamma \cdot e_3=\mathcal R$ where $e_3=(0,0,1)$.
Could anybody give me a proof of this fact?
Thanks.-.
[Comments:
(i) $O(2,1)$ is the subgroup in $GL_3(\mathbb R)$ which preserves the form $x_1^2+x_2^2-x_3^2$, that is
$$
O(2,1)=\{g\in GL_3(\mathbb R): g^t I_{2,1} g=I_{2,1}\}\qquad\textrm{where}\quad
I_{2,1}=
\begin{pmatrix}1&&\\&1&\\&&-1\end{pmatrix}.
$$
(ii) $g(\mathcal R)\subset \mathcal R$ for any $g\in \Gamma$ because $g$ has integer coefficients and we can write
$$
x_1^2+x_2^2-x_3^2=x^tI_{2,1}x,
$$
then
$$
(gx)^t I_{2,1} (gx)= x^t (g^tI_{2,1}g)x=x^tI_{2,1}x=-1.
$$]
| Do you know about Frink's paper?
http://www.maa.org/sites/default/files/Orrin_Frink01279.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/101001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Compute: $\int_{0}^{1}\frac{x^4+1}{x^6+1} dx$ I'm trying to compute: $$\int_{0}^{1}\frac{x^4+1}{x^6+1}dx.$$
I tried to change $x^4$ into $t^2$ or $t$, but it didn't work for me.
Any suggestions?
Thanks!
| First substitute $x=\tan\theta$. Simplify the integrand, noticing that $\sec^2\theta$ is a factor of the original denominator. Use the identity connecting $\tan^2\theta$ and $\cos2\theta$ to write the integrand in terms of $\cos^22\theta$. Now the substitution $t=\tan2\theta$ reduces the integral to a standard form, which proves $\pi/3$ to be the correct answer. This method seems rather roundabout in retrospect, but it requires only natural substitutions, standard trigonometric identities, and straightforward algebraic simplification.
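For comparison, an alternative route to the same value $\pi/3$ with less trigonometry: since $x^6+1=(x^2+1)(x^4-x^2+1)$ and $x^4+1=(x^4-x^2+1)+x^2$,
$$\int_0^1\frac{x^4+1}{x^6+1}\,dx=\int_0^1\frac{dx}{1+x^2}+\int_0^1\frac{x^2\,dx}{1+x^6}=\frac\pi4+\frac13\int_0^1\frac{du}{1+u^2}=\frac\pi4+\frac\pi{12}=\frac\pi3,$$
using the substitution $u=x^3$ in the second integral.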
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/101049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 7,
"answer_id": 2
} |
Showing $C=\bigg\{ \sum_{n=1}^\infty a_n 3^{-n}: a_n=0,2 \bigg\}$ is uncountable Let us define the set
$$C=\bigg\{ \sum_{n=1}^\infty a_n 3^{-n}: a_n=0,2 \bigg\}$$
This is the Cantor set, could anyone help me prove it is uncountable? I've been trying a couple of approaches, for instance assume it is countable, list the elements of $C$ as decimal expansions, then create a number not in the list, I am having trouble justifying this though.
Secondly i've been trying to create a function $f$ such that $f(C)=[0,1]$.
Many thanks
| Note the "obvious" bijection between $C$ and $P(\mathbb N)$ defined as:
$$f(A)= 2\sum_{n=1}^\infty\frac{\chi_A(n)}{3^n}$$
Where $\chi_A$ is the characteristics function of $A$ ($1$ for $n\in A$, $0$ otherwise).
Suppose that $A\neq B$ and $x=\min (A\Delta B)$, wlog $x\in A$; then $f(A)-f(B)\ge\dfrac2{3^x}-\sum_{n>x}\dfrac2{3^n}=\dfrac1{3^x}>0$.
In the other direction, if $x\in C$ then we can write $A=\{n\in\mathbb N\mid a_n=2\}$ and it is quite clear why $f(A)=x$.
Now we use the fact that $P(\mathbb N)$ is uncountable (direct result of Cantor's theorem) and we are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/101075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Show that $u+v$ bisects $u$ and $v$ only if $|u|=|v|$ I want to show that if I have two Euclidian vectors in $\mathbb{R}^n$ than the sum of these two vectors bisects the angle between the two vectors. Said more mathematically.
Let $u, v \in \mathbb{R}^n$.
Then $\angle(u,v+u) = \angle(u+v,v)$ if and only if $|u|=|v|$
I tried using the fact that
$$ \angle(u,v) = \arccos \left( \frac{u \cdot v}{|u| |v|} \right) $$
Alas this attempt was futile
Now for $\mathbb{R}^2$ this is obviously true, as can be seen from my illustration.
How do I prove this with some rigor? Thanks for all tips and advices, this is not a homework question.
| The first proof:
The two vectors span a parallelogram ABCD, and the diagonals of a parallelogram bisect each other. Hence the segment from A to the intersection point of the diagonals lies along $u+v$ and is a median of triangle ABD. A median from a vertex coincides with the angle bisector at that vertex if and only if the triangle is isosceles, i.e. $|u|=|v|$. That proves your problem.
The second proof:
Note that $|u|=|v|$ is equivalent to $u^2=v^2$. If $u^2=v^2$, then $\cos(\phi_1)=\frac{(u+v,u)}{|u+v||u|}=\frac{u^2+(u,v)}{|u+v||u|}=\frac{v^2+(u,v)}{|u+v||v|}=\cos(\phi_2)$.
Conversely, if $\cos(\phi_1)=\cos(\phi_2)$, then $\frac{u^2+(u,v)}{|u|}=\frac{v^2+(u,v)}{|v|}$, so $|u|+\frac{(u,v)}{|u|}=|v|+\frac{(u,v)}{|v|}$, which rearranges to $(|u|-|v|)\left(1-\frac{(u,v)}{|u||v|}\right)=0$. If $u$ and $v$ aren't collinear, the second factor is nonzero, so $|u|=|v|$. But if they are collinear (pointing the same way), the angles $\phi_1$ and $\phi_2$ are both zero anyway.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/101159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is a linear equation with three variables a plane? In linear algebra, why is the graph of a three variable equation of the form $ax+by+cz+d=0$ a plane? With two variables, it is easy to convince oneself that the graph is a line (using similar triangles, for example). However with three variables, this same technique does not seem to work: setting one of the variables to be a constant yields a line in 3-D space (I think), and one could repeat this process for each constant value of that variable, but in the end there seems not to be an easy way to check that all these lines are coplanar.
I don't remember seeing why this is in a book, and Khan Academy's video, for example, simply states that this is the case.
| Look at the equation as a dot product or inner product:
$$
\left[ \begin{array}{ccc}
a & b & c \end{array} \right]
\left[ \begin{array}{c}
x \\
y \\
z \end{array} \right] = -d.
$$
Then it is clear to see that the point $(x, y, z)$ that satisfies the equation is any point in the plane that is perpendicular to the vector $\left[ \begin{array}{ccc}
a & b & c \end{array} \right]$ (this fixes its orientation) and is the right distance from the origin to yield a dot product of $-d$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/101242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
terminology: euler form and trigonometric form Am I right, that the following is the so-called trigonometric form of the complex number $c \in \mathbb{C}$?
$|c| \cdot (\cos \alpha + \mathbf{i} \sin \alpha)$
And the following is the Euler form of the very same number, right?
$|c|\cdot \mathbf{e}^{\mathbf{i}\alpha}$
I think there must be a mistake in one of my tutor's notes...
| They are the same, and can also be called "polar coordinates" for the complex number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/101311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding how many terms of the harmonic series must be summed to exceed x? The harmonic series is the sum
1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + ... + 1/n + ...
It is known that this sum diverges, meaning (informally) that the sum is infinite and (more formally) that for any real number x, there is some number n such that the sum of the first n terms of the harmonic series is greater than x. For example, given x = 3, we have that
1 + 1/2 + 1/3 + ... + 1/11 = 83711/27720 ≈ 3.02
So eleven terms must be summed together to exceed 3.
Consider the following question
Given an integer x, find the smallest value of n such that the sum of the first n terms of the harmonic series exceeds x.
Clearly we can compute this by just adding in more and more terms of the harmonic series, but this seems like it could be painfully slow. The best bound I'm aware of on the number of terms necessary is $2^{O(x)}$, which uses the fact that
1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ...
is greater than
1 + (1/2) + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ...
which is in turn
1 + 1/2 + 1/2 + 1/2 + ...
where each new 1/2 takes twice as many terms as the previous to accrue. This means that the brute-force solution is likely to be completely infeasible for any reasonably large choice of x.
Is there a way to calculate this which requires fewer operations than the brute-force solution?
| The DiGamma function (the derivative of the logarithm of the Gamma function) is directly related to the harmonic numbers: $\psi(n) = H_{n-1} - \gamma$, where $\gamma$ is Euler's constant ($0.577\ldots$).
You can use one of the approximations in the Wikipedia article to compute an approximate value for $H_n$, and then use that in a standard root-finding algorithm like bisection to locate the solution.
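A minimal sketch of that approach in Python (the helper names are hypothetical; it uses the standard asymptotic $H_n \approx \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^2}$, whose error is $O(n^{-4})$, so for borderline values of $x$ one should verify locally around the reported $n$):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def H_approx(n):
    # asymptotic expansion of the n-th harmonic number
    return math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n * n)

def first_n_exceeding(x):
    """Smallest n with H_n > x, via bisection on the monotone approximation."""
    lo, hi = 1, 2
    while H_approx(hi) <= x:   # exponential search for an upper bound
        hi *= 2
    while lo < hi:             # O(log) bisection instead of O(e^x) summation
        mid = (lo + hi) // 2
        if H_approx(mid) > x:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(first_n_exceeding(3))   # 11, matching the example in the question
```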
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/101371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 0
} |