Solve the following quadratic inequalities by graphing the corresponding function. Looking at these questions, I am not confident in my ability to solve them.
Solve the following quadratic inequalities by graphing the corresponding function. Note: a) and b) are separate questions.
$$ a) y \le -2x^2+16x-24\\
b) y > \frac 13 (x-1)^2-3
$$
Help would be appreciated!
| Step one:
Learn to draw graphs of the equalities.
Suppose we are given
$$
y \le -2x^2+16x-24.
$$
The matching equality is
$$
y = -2x^2+16x-24.
$$
We can factorise this, then graph it.
$$
\begin{align}
y &= -2(x^2-8x+12)\\
&= -2(x-6)(x-2)
\end{align}
$$
This means that the roots of the polynomial are at $x=6$ and $x=2$ respectively. The negative sign of the coefficient of the highest power tells us that the parabola is "upside down", compared to the simplest form of the parabola ($y=x^2$).
Now, we put two points on the Cartesian plane at $(6,0)$ and $(2,0)$ respectively. For a parabola, we know that the maximum (or minimum) lies exactly halfway between the roots, i.e. at $x=4$. We can find the $y$-value of this maximum by substituting $x=4$ into the equation:
$$
\begin{align}
y &= -2x^2+16x-24\\
&= -2(4)^2+16(4)-24\\
&= 8
\end{align}
$$
So the turning point (a maximum in this case) lies at $(4,8)$.
We now have three points of the parabola, $(2,0)$, $(4,8)$, and $(6,0)$, which we can plot as follows:
Now we can join the dots to make a parabola.
Step two:
Solve inequalities using graphs.
To solve the original inequality, we simply check the two regions on either side of the parabola to see whether they make the inequality true or false.
We can choose any point in the region, provided it is not actually on the parabola. Suppose we check the point $(0,0)$, which lies above and to the left of the parabola. Substituting $x=0$, $y=0$ into the inequality, we get
$$
\begin{align}
y &\le -2x^2+16x-24\\
0 &\le -2(0)^2 +16(0) - 24\\
0 &\le -24\\
\end{align}
$$
Of course, this is FALSE, which means that the region above the parabola does not satisfy the inequality.
Testing the region below the parabola, we substitute any point in that region. For instance, $(4,1)$.
$$
\begin{align}
y &\le -2x^2+16x-24\\
1 &\le -2(4)^2 + 16(4) -24\\
1 &\le 8\\
\end{align}
$$
This is evidently TRUE.
This means that the inequality is true for any point in that region below the parabola. (i.e. The shaded region.)
(I'm stealing @Kaster's image, because Wolfram|Alpha didn't want to play nicely)
We can describe this region using set theory as follows:
$$
\begin{align}
R = \{(x,y) \in \mathbb{R}^2:-\infty < x < \infty,\,\, y \le -2x^2 +16x -24\}
\end{align}
$$
However, the simplest way to describe the region without using the graph is to use the inequality given in the question:
$$
y \le -2x^2+16x-24.
$$
Because this inequality defines a region, we can't write it any more concisely than that.
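The whole procedure above (roots, turning point, and region test) can be sketched in a few lines of Python, using the same function and test points as the worked example:

```python
# Quadratic from the worked example: y <= -2x^2 + 16x - 24
def p(x):
    return -2 * x**2 + 16 * x - 24

# Roots and turning point found above
assert p(2) == 0 and p(6) == 0   # roots at x = 2 and x = 6
assert p(4) == 8                 # turning point (4, 8)

def satisfies(x, y):
    """Does (x, y) satisfy y <= -2x^2 + 16x - 24?"""
    return y <= p(x)

print(satisfies(0, 0))  # region outside the parabola: False
print(satisfies(4, 1))  # region under the parabola:   True
```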
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
prime notation clarification When I first learned calculus, I was taught that $'$ for derivatives was only a valid notation when used with function notation: $f'(x)$ or $g'(x)$, or when used with the coordinate variable $y$, as in $y'$.
But I have seen on a number of occasions, both here and in the classroom, where it will be used with an expression. E.g. $(x+\frac{1}{x})'$ to mean $\frac{d}{dx}(x+\frac{1}{x})$. It has always been my understanding that this notation is not considered valid because it doesn't indicate what the independent variable that the expression is being differentiated with respect to is. E.g. in $(ax+bx^2)'$, the variable could be $a$, $b$, or $x$. This problem also exists with $y'$, but I figured this was an exception because $y$ and $x$ usually represent the coordinate axes, so it can be assumed that the independent variable is $x$ when taking $y'$.
So is this notation valid, just putting a $'$ at the end of an expression?
| What you're seeing is a "shorthand" an instructor or such may use in the process of computing the derivative of a function with respect to $x$. Usually when you see something like $(ax + bx^2)'$, it's assumed from the context that we are taking the derivative of the expression with respect to $x$. That is, "$(ax + bx^2)'$" is taken to mean "evaluate $\,\frac d{dx}(ax + bx^2)$", just as one assumes from context that $y'$ refers to the derivative, $f'(x)$, of $y = f(x)$.
I prefer to stick with $f'(x)...$ or $y'$, using $\frac d{dx}(\text{some function of x})$ when evaluating the derivative of a function with respect to $x$, particularly when trying to convey information to another person. (On scratch paper, or in my own work, I might get a little informal and slip into using a "prime" to abbreviate what I'm doing.) But I would prefer the more formal or "official" conventions/notations were used in "instructive contexts", to avoid confusion or possible ambiguity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is the cone locally compact Let $X$ denote the cone on the real line $\mathbb{R}$. Decide whether $X$ is locally
compact. [The cone on a space $Y$ is the quotient of $Y \times I$ obtained by
identifying $Y \times \{0\}$ to a point.]
I am having a hard time showing that there exists a compact neighborhood of the cone point (the image of $Y \times \{0\}$). Some help would be nice.
| Here is a way of showing that no neighborhood of $r=\Bbb R\times\{0\}\in X$ is compact. The idea is to find in any neighborhood $V$ of $r$ a closed subspace homeomorphic to $\Bbb R$. Since the subspace is not compact, $V$ cannot be compact.
So let $V$ be a neighborhood of $r$ in $X$. Then $V$ contains the image of an open set $U$ around $\Bbb R\times\{0\}$. Since the interval $[n,n+1]$ for any $n\in\Bbb Z$ is compact, there is an $\epsilon_n>0$ such that $[n,n+1]\times[0,\epsilon_n]$ is contained in $U$. Let $b_n=\min\{\epsilon_n,\epsilon_{n-1}\}$. Define
$$
f(x) = (x-n)b_{n+1}+(n+1-x)b_n,\quad n\in\Bbb Z,\quad x\in[n,n+1]
$$
This map has a graph $\Gamma$ homeomorphic to $\Bbb R$ and contained in $U$. The quotient map $q:\Bbb R\times I\to X$ embeds $\Gamma$ as a closed subspace of $V$, so if $V$ were compact, $q(\Gamma)$ would have to be compact, which it is not.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
If $G$ is a group, $H$ is a subgroup of $G$ and $g\in G$, is it possible that $gHg^{-1} \subset H$?
If $G$ is a group, $H$ is a subgroup of $G$ and $g\in G$, is it possible that $gHg^{-1} \subset H$ ?
This means $gHg^{-1}$ is a proper subgroup of $H$. We know that $H \cong gHg^{-1}$, so if $H$ is finite then we have a contradiction, since the isomorphism between the two subgroups implies that they have the same order, so $gHg^{-1}$ can't be a proper subgroup of $H$.
So, what if $H$ is infinite is there an example for such $G , H , g$ ?
Edit: I suppose that $H$ has a subgroup $N$ such that $N$ is a normal subgroup of $G$.
| Let $\mathbb{F}_2 = \langle a,b \mid \ \rangle$ be the free group of rank two. It is known that the subgroup $F_{\infty}$ generated by $S= \{b^nab^{-n} \mid n \geq 0 \}$ is free over $S$. Then $bF_{\infty}b^{-1}$ is freely generated by $bSb^{-1}= \{b^n a b^{-n} \mid n \geq 1\}$, hence $bF_{\infty}b^{-1} \subsetneq F_{\infty}$.
(Otherwise, $a$ can be written over $bSb^{-1}$, which is impossible since $a \in F_{\infty}$ and $F_{\infty}$ is free over $S$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
a codeword over $\operatorname{GF}(4)$ -> two codewords over $\operatorname{GF}(2)$ using MAGMA A codeword $X$ over $\operatorname{GF}(4)$ is given. How can I write it as $X= A+wB$ using MAGMA, where $A$ and $B$ are over $\operatorname{GF}(2)$ and $w^2 + w = 1$?
Is there an easy way, or do I have to write some for loops and if statements?
| Probably there is an easier way, but the following function should do the job:
function f4tof2(c)
  // c is a vector over GF(4); return [A, B] over GF(2) with c = A + w*B
  n := NumberOfColumns(c);
  V := VectorSpace(GF(2),n);
  // coordinates of each entry of c with respect to the GF(2)-basis {1, w}
  ets := [ElementToSequence(c[i]) : i in [1..n]];
  return [V![ets[i][1] : i in [1..n]],V![ets[i][2] : i in [1..n]]];
end function;
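For readers without access to MAGMA, the same decomposition can be sketched in plain Python. This is an illustrative model, not MAGMA code: a GF(4) element $a+wb$ is represented as the bit pair `(a, b)` with $a,b\in\{0,1\}$, so splitting a codeword is simply unzipping the coordinates:

```python
# Model a GF(4) codeword as a list of bit pairs (a, b), each meaning a + w*b.
# Then X = A + w*B splits into two GF(2) words by unzipping.
def split_f4_codeword(codeword):
    A = [a for (a, b) in codeword]   # GF(2) word of the "1" components
    B = [b for (a, b) in codeword]   # GF(2) word of the "w" components
    return A, B

def combine(A, B):
    """Inverse operation: rebuild the GF(4) word A + w*B."""
    return list(zip(A, B))

X = [(1, 0), (0, 1), (1, 1), (0, 0)]   # the elements 1, w, 1+w, 0
A, B = split_f4_codeword(X)
print(A)  # [1, 0, 1, 0]
print(B)  # [0, 1, 1, 0]
assert combine(A, B) == X
```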
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating $\int_0^\infty \frac{e^{-kx}\sin x}x\,\mathrm dx$ How to evaluate the following integral?
$$\int_0^\infty \frac{e^{-kx}\sin x}x\,\mathrm dx$$
| One more option:$$\begin{align}\int_0^\infty\int_0^\infty e^{-(k+y)x}\sin x\mathrm{d}x\mathrm{d}y&=\Im\int_0^\infty\int_0^\infty e^{-(k+y-i)x}\mathrm{d}x\mathrm{d}y\\&=\int_0^\infty\tfrac{1}{(k+y)^2+1}\mathrm{d}y\\&=[\arctan(k+y)]_0^\infty\\&=\tfrac{\pi}{2}-\arctan k.\end{align}$$
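As a numerical sanity check of the closed form $\frac\pi2-\arctan k$ (a verification sketch, not part of the derivation), one can integrate the original integrand with composite Simpson's rule; the singularity at $x=0$ is removable since $\frac{\sin x}{x}\to 1$:

```python
import math

def f(x, k):
    # e^{-kx} sin(x)/x, with the removable singularity at x = 0 filled in
    return 1.0 if x == 0 else math.exp(-k * x) * math.sin(x) / x

def integral(k, upper=40.0, n=4000):
    """Composite Simpson's rule on [0, upper]; n must be even.
    The truncated tail beyond `upper` is exponentially small for k > 0."""
    h = upper / n
    s = f(0, k) + f(upper, k)
    for i in range(1, n):
        s += f(i * h, k) * (4 if i % 2 else 2)
    return s * h / 3

for k in (0.5, 1.0, 2.0):
    closed_form = math.pi / 2 - math.atan(k)
    assert abs(integral(k) - closed_form) < 1e-6
print("closed form verified")
```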
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
} |
Let $f \colon \Bbb C \to \Bbb C$ be a complex valued function given by $f(z)=u(x,y)+iv(x,y).$ I am stuck on the following question:
MY ATTEMPT:
By the Cauchy-Riemann equations, we have $u_x=v_y$, $u_y=-v_x$. Now $v(x,y)=3xy^2 \implies v_x=3y^2 \implies -u_y=3y^2 \implies u=-y^3+ \phi(x)$. Now, I am not sure which way to go. Can someone give some explanation about which way to go in order to pick the correct option?
| Hint: you used the second C.R. equation, arriving at $u(x,y)=-y^3+\phi(x)$. What happens if you apply the other C.R. equation, i.e. $u_x=v_y$, to your result?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Are there two $\pi$s? The mathematical constant $\pi$ occurs in the formula for the area of a circle, $A=\pi r^2$,
and in the formula for the circumference of a circle, $C= 2\pi r$. How does one prove that these constants are the same?
| One way to see it is if you consider a circle with radius $r$ and another circle with radius $r+\Delta r$ (where $\Delta r\ll r$) around the same point, and consider the area between the two circles.
As with any shape, the area is proportional to the square of a typical length; the radius is such a typical length. That is, a circle of radius $r$ has the area $Cr^2$ with some constant $C$. Now the area in between the two circles has the area $\Delta A = C(r+\Delta r)^2-Cr^2\approx 2Cr\,\Delta r$. That relation gets exact as $\Delta r\to 0$.
On the other hand, the distance between the two circles is constant, and therefore for sufficiently small $\Delta r$ you can "unroll" this shape into a rectangle (again, the error you make when doing this vanishes in the limit $\Delta r\to 0$). That rectangle has as one side the circumference, $2\pi r$, and as the other side $\Delta r$. Since the area of a rectangle is the product of its side lengths, we get as area $\Delta A = 2\pi r\,\Delta r$.
Comparing the two equations, we get $2Cr\,\Delta r=2\pi r\,\Delta r$, that is, $C=\pi$.
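The limiting argument can also be illustrated numerically: as $\Delta r\to 0$, the ratio $\frac{\pi(r+\Delta r)^2-\pi r^2}{\Delta r}$ approaches the circumference $2\pi r$. A small Python sketch (the radius $1.7$ is an arbitrary choice):

```python
import math

def disk_area(r):
    return math.pi * r ** 2

r = 1.7  # arbitrary radius
for dr in (0.1, 0.01, 0.001):
    ring_over_dr = (disk_area(r + dr) - disk_area(r)) / dr
    # approaches the circumference 2*pi*r as dr shrinks
    print(dr, ring_over_dr, 2 * math.pi * r)

assert abs((disk_area(r + 1e-6) - disk_area(r)) / 1e-6 - 2 * math.pi * r) < 1e-4
```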
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/423836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
} |
Nitpicky Sylow Subgroup Question Would we call the trivial subgroup of a finite group $G$ a Sylow-$p$ subgroup if $p \nmid |G|$? Or do we just only look at Sylow-$p$ subgroups as being at least the size $p$ (knowing that a Sylow-$p$ subgroup is a subgroup of $G$ with order $p^k$ where $k$ is the largest power of $p$ that has $p^k \mid |G|$)?
| For what it is worth, I consider all primes $p$, not just those that divide the group order.
This makes many statements smoother. For instance, the defect group of the principal block is the Sylow $p$-subgroup, and a block is semisimple if and only if the defect group is trivial. Thus the principal block is semisimple iff $p$ does not divide the order of the group. It would be awkward to state the theorem only for non-principal blocks to avoid mentioning size $p^0$ Sylow $p$-subgroups.
Another reason is induction. For instance, a group is called $p$-closed if it has a normal Sylow $p$-subgroup. Subgroups and quotient groups of $p$-closed groups are $p$-closed. Except that if we only allow Sylow subgroups for primes $p$ that divide $|G|$, we have to redefine $p$-closed to be “normal Sylow $p$-subgroup or $p$ does not divide the order of the group”, and then every time we consider a subgroup or quotient group we have to consider two cases: normal Sylow $p$-subgroup, or $p$ does not divide the order of the group.
For this reason, most finite group theorists allow the trivial primes as well. For instance: Alperin, Aschbacher, Gorenstein, Huppert, Kurzweil and Stellmacher, Suzuki, etc. all explicitly allow primes that do not divide the order of the group.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Maximum cycle in a graph with a path of length $k$ I don't understand why this stands:
Let $G$ be a graph containing a cycle $C$, and assume that $G$ contains a path of length at least $k$ between two vertices of $C$.
Then $G$ contains a cycle of length at least $\sqrt{k}$.
Since we can extend the cycle $C$ with the vertices of the path, why don't we get a cycle of length $k+2$? ($2$ being the minimum number of vertices belonging to $C$ between the vertices where $C$ connect to it).
I really don't see where that square root is coming from.
For reference this is exercise $3$ from Chapter $1$ of the Diestel book.
| Here is my solution. Let $s$ and $t$ be two vertices of $C$ such that there is an $st$-path $P$ of length at least $k$. If $|V(P) \cap V(C)|\geq \sqrt{k}$ then the proof follows: $C$ itself has at least $\sqrt{k}$ vertices, so it is the cycle we want. Otherwise $|V(P) \cap V(C)| < \sqrt{k}$. Then, as $|V(P)| \geq k$, by the pigeonhole principle some subpath of $P$ has length at least $\sqrt{k}$ and is internally disjoint from $C$. Joining this subpath with an arc of $C$ between its endpoints gives the desired cycle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Proving a lemma - show the span of a union of subsets is still in the span This is part of proving a larger theorem but I suspect my prof has a typo in here (I emailed him about it to be sure)
The lemma is written as follows:
Let $V$ be a vector space. Let $\{z, x_1, x_2, \ldots, x_n\}$ be a subset of $V$. Show that if $z \in \operatorname{span}(\{x_1, x_2,\ldots, x_n\})$, then $\operatorname{span}(\{z, x_1, x_2,\ldots, x_r\})=\operatorname{span}(\{x_1, x_2,\ldots, x_r\})$
I feel like this should be really simple and I saw a proof that you can take out a vector from a subset and not change the span, but I am unsure of the reverse -- assuming that is what this lemma is about. (To me, the "if" implies something besides just unifying the two sets should follow).
Anyhow, the proof should, I think, start with that we can modify a subset (let's call it S) without affecting the span if we start like this:
$\exists x \in S$ such that $x \in \operatorname{span}(S-\{x\})$
then you build up a linearly independent subset somehow. The proof that you can take vectors out of the subset says that since $x\in \operatorname{span}(S-\{x\})$, $\exists\ \lambda_1, \lambda_2,\ldots, \lambda_n \in K$ such that $x=\sum_{i=1}^n\lambda_i x_i$
since we know $\operatorname{span}(S) \supseteq \operatorname{span}(S-\{x\})$, we just need to show $\operatorname{span}(S) \subseteq \operatorname{span}(S-\{x\})$
But honestly I am not sure I understand what's happening here well enough to prove the above lemma. (This class moves fast enough that we're essentially memorizing proofs rather than re-deriving them I guess).
I am really starting to hate linear algebra. :-(
(Edited to fix U symbol and make it a "is a member of" symbol)
| We want to show $\operatorname{span}\{z, x_1, \dots, x_n\} = \operatorname{span}\{x_1, \dots, x_n\}$. In general, to show $X = Y$ where $X, Y$ are sets, we want to show that $X \subseteq Y$ and $Y \subseteq X$.
So suppose $v \in \operatorname{span}\{x_1, \dots, x_n\}$. Then, we can find scalars $c_1, \dots, c_n$ such that $$v = c_1x_1 + \dots + c_nx_n$$ so clearly, $v \in \operatorname{span}\{z, x_1, \dots, x_n\}$. This proves $$\operatorname{span}\{x_1, \dots, x_n\} \subseteq \operatorname{span}\{z, x_1, \dots, x_n\}$$
Now let $v \in \operatorname{span}\{z, x_1, \dots, x_n\}$. Again, by definition, there are scalars $c_1, \dots, c_{n+1}$ such that $v = c_1x_1 + \dots + c_nx_n + c_{n+1}z$. But hold on, $z \in \operatorname{span}\{x_1, \dots, x_n\}$, right? This means there are scalars $a_1, \dots, a_n$ such that $z = a_1x_1 + \dots + a_nx_n$. Hence,
$$v = c_1x_1 + \dots + c_nx_n + c_{n+1}(a_1x_1 + \dots + a_nx_n)$$
$$= (c_1 + c_{n+1}a_1)x_1 + \dots + (c_n+c_{n+1}a_n)x_n$$
and so we conclude that $v \in \operatorname{span}\{x_1, \dots, x_n\}$. Therefore,
$$\operatorname{span}\{z, x_1, \dots, x_n\} = \operatorname{span}\{x_1, \dots x_n\}$$
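The coefficient bookkeeping in the second inclusion can be checked on a concrete made-up example; here $z = a_1x_1 + a_2x_2$ in $\mathbb{R}^3$, with arbitrary numeric choices:

```python
# Check the substitution step: if z = a1*x1 + a2*x2, then any
# v = c1*x1 + c2*x2 + c3*z equals (c1 + c3*a1)*x1 + (c2 + c3*a2)*x2.
def add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

def scale(c, u):
    return [c * ui for ui in u]

x1, x2 = [1.0, 0.0, 2.0], [0.0, 1.0, -1.0]   # arbitrary vectors in R^3
a1, a2 = 2.0, -3.0
z = add(scale(a1, x1), scale(a2, x2))        # z lies in span{x1, x2}

c1, c2, c3 = 0.5, 4.0, -1.5
v = add(add(scale(c1, x1), scale(c2, x2)), scale(c3, z))
w = add(scale(c1 + c3 * a1, x1), scale(c2 + c3 * a2, x2))
assert all(abs(vi - wi) < 1e-12 for vi, wi in zip(v, w))
print("substitution step checks out")
```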
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
To what extent the statement "Data is normally distributed when mode, mean and median scores are all equal" is correct? I read that normally distributed data have equal mode, mean and median. However in the following data set, Median and Mean are equal but there is no Mode and the data is "Normally Distributed":
$ 1, 2, 3, 4, 5 $
I am wondering to what extent the statement is correct. Is there a more accurate definition of "normal distribution"?
| It is not correct at all. Any unimodal probability distribution symmetric about the mode (for which the mean exists) will have mode, mean and median all equal.
For the definition of normal distribution, see e.g. Wikipedia.
Strictly speaking, data can't be normally distributed, but it can be a
sample from a normal distribution. In a sample of $3$ or more points from a continuous distribution such as the normal distribution, with probability $1$ the data points will all be distinct (so there is no mode), and the mean will not be exactly the same as the median. It is only the probability distribution the data is taken from that can have mode, mean and median equal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Looking for reference to a couple of proofs regarding the Stereographic Projection. I'm looking for a reference to rigorous proofs of the following two claims (if someone is willing to write down a proof that would also be excellent):
*
*The Stereographic Projection is a Homeomorphism between $S^{n}\backslash\left\{ N\right\}$ (the sphere without its north pole) and $\mathbb{R}^{n}$ for $n\geq2$.
*The Stereographic Projection is a Homeomorphism between $S^{n}$ and the one point compactification of $\mathbb{R}^{n}$
Help would be appreciated.
| For the first request, just try to write down explicitly the function that defines such a projection, by considering a hyperplane which cuts the sphere along the equator.
Consider $S^n$ in $R^{n+1}$, with $R^n$ as the subset with $x_{n+1}=0$. The North pole is $(0,0,\ldots,0,1)$ and the image of each point is the intersection of the line through that point and the north pole with the above-mentioned hyperplane. Thus you need to find (solving with respect to $t$) $\{(0,...,1) + t((x_1,..,x_{n+1})-(0,...,1)): t \in \mathbb{R} \}\bigcap\{x_{n+1}=0\}$, which yields the desired $t$ and so the image of the point.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\frac{x}{1-x}\cdot\frac{y}{1-y}\cdot\frac{z}{1-z} \ge 8$.
If $x,y,z$ are positive proper fractions satisfying $x+y+z=2$, prove that $$\dfrac{x}{1-x}\cdot\dfrac{y}{1-y}\cdot\dfrac{z}{1-z}\ge 8$$
Applying $GM \ge HM$, I get $$\left[\dfrac{x}{1-x}\cdot\dfrac{y}{1-y}\cdot\dfrac{z}{1-z}\right]^{1/3}\ge \dfrac{3}{\frac 1x-1+\frac 1y-1+\frac 1z-1}\\=\dfrac{3}{\frac 1x+\frac 1y+\frac 1z-3}$$
Then how to proceed. Please help.
| Write $1-x=a$, $1-y=b$ and $1-z=c$.
$x=2-(y+z)=b+c$
$y=2-(z+x)=a+c$
$z=2-(x+y)=a+b$
Thus we have the same expression in simpler form:
$\dfrac{b+c}{a} \cdot \dfrac{a+c}{b} \cdot \dfrac{a+b}{c}$
Now we have AM-GM:
$b+c \ge 2 \sqrt{bc}$
$a+c \ge 2 \sqrt{ac}$
$b+a \ge 2 \sqrt{ba}$
$\dfrac{b+c}{a} \cdot \dfrac{a+c}{b} \cdot \dfrac{a+b}{c} \ge \dfrac{2^3 abc}{abc} =8$, Done.
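The inequality (and the equality case $x=y=z=\frac23$) can be probed numerically with random proper fractions satisfying the constraint; a small Python sketch:

```python
import random

random.seed(0)

def product(x, y, z):
    return (x / (1 - x)) * (y / (1 - y)) * (z / (1 - z))

# Sample proper fractions x, y, z in (0, 1) with x + y + z = 2
count = 0
while count < 10000:
    x, y = random.uniform(0, 1), random.uniform(0, 1)
    z = 2 - x - y
    if 0 < z < 1:
        assert product(x, y, z) >= 8 - 1e-9
        count += 1

# Equality at x = y = z = 2/3
assert abs(product(2 / 3, 2 / 3, 2 / 3) - 8) < 1e-9
print("inequality holds on all samples")
```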
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 0
} |
Number of distinct points in $A$ is uncountable How can one show:
Let $X$ be a metric space and let $A \subseteq X$ be a connected set with at least two distinct points. Then the number of distinct points in $A$ is uncountable.
| We will show that if $A$ is countable then $A$ is not connected.
Let $a,b$ be two distinct points in $A$ and let $d$ be the metric on $X$. Then, since $d$ is real valued, there are uncountably many $r\in \mathbb R$ such that $0<r<d(a,b)$. Let $r_0$ be such that $0<r_0<d(a,b)$ and $d(a,x)\ne r_0$ for all $x\in A$. This is possible because we are assuming $A$ is countable. Then the two open sets $U$, $V$ defined by $$U=\{x\in X: d(a,x)<r_0\}\cap A$$ and $$V=\{x\in X: d(a,x)>r_0\}\cap A$$ are disjoint, their union equals $A$ and $U\cap \bar V=\bar U\cap V=\emptyset$.
Therefore, $A$ is not connected.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Contour integration with branch cut This is an exercise in a course on complex analysis I am taking:
Determine the function $f$ using complex contour integration:
$$\lim_{R\to\infty}\frac{1}{2\pi i}\int_{c-iR}^{c+iR}\frac{\exp(tz)}{(z-i)^{\frac{1}{2}}(z+i)^{\frac{1}{2}}} dz$$
Where $c>0$ and the branch cut for $z^\frac{1}{2}$ is to be chosen on $\{z;\Re z=0, \Im z \leq0\}$.
Make a distinction between:
$$t>0, \quad t=0, \quad t<0$$
I think I showed that for $t<0$, $f(t)=0$ by using Jordan's Lemma. For $t=0$ I think the answer must be $f(0)=\frac{1}{2}$. For $t>0$ however, I have no idea what contour I have to define, nor how I have to calculate the residues in $i$ and $-i$.
| For $t\equiv-\tau<0$, consider a half-circle of radius $M$ centred at $c$ and lying on the right of its diameter that goes from $c-iM$ to $c+iM$. By Cauchy's theorem
$$
\int_{c-iM}^{c+iM}\frac{e^{tz}}{\sqrt{1+z^2}}dz
=\int_{-\pi/2}^{+\pi/2}\frac{e^{-\tau(c+Me^{i\varphi})}}{\sqrt{1+(c+Me^{i\varphi})^2}}iMe^{i\varphi}d\varphi;
$$
the right-hand side is bounded in absolute value by an argument in the style of Jordan's lemma, and hence goes to $0$ as $M\to\infty$.
So, indeed:
$$\boxed{
\int_{c-i\infty}^{c+i\infty}\frac{e^{tz}}{\sqrt{1+z^2}}dz=0,\text{ for }t<0.}
$$
For $t=0$ the integral diverges logarithmically, but we can compute its Cauchy principal value:
$$
\int_{c-iM}^{c+iM}\frac{1}{\sqrt{1+z^2}}dz=\left[ \sinh^{-1}z \right]_{c-iM}^{c+iM}=\log\frac{cM^{-1}+i+\sqrt{(cM^{-1}+i)^2+M^{-2}}}{cM^{-1}-i+\sqrt{(cM^{-1}-i)^2+M^{-2}}}
$$
and for $M\to\infty$
$$
\boxed{
PV\int_{c-i\infty}^{c+i\infty}\frac{1}{\sqrt{1+z^2}}dz=i\pi.
}
$$
Finally, for $t>0$, consider the contour below:
It is easy to see that the integrals along the horizontal segments vanish in the $M\to\infty$ limit, as well as the integral along the arc on the left (the latter, again by Jordan's lemma). Even the integrals on the small arcs give no contribution as the contour approaches the branch cuts.
The only contribution comes from the branch discontinuities:
$$
\int_{c-i\infty}^{c+i\infty}\frac{e^{tz}}{\sqrt{1+z^2}}dz=
4i \int_{1}^{+\infty}\frac{{\sin (ty)}}{\sqrt{y^2-1}}dy.
$$
Now, letting $y=\cosh \psi$, we have
$$
4i\Im \int_1^{+\infty}\frac{e^{ity}}{\sqrt{y^2-1}}dy=
2i\Im \int_{-\infty}^{+\infty}e^{it\cosh\psi}d\psi=
2i\Im \left(i\pi H_0^{(1)}(t)\right)=i2\pi J_0(t),
$$
thanks to the integral representation of cylindrical Bessel functions.
In this step, the analytic continuation $t\mapsto t+i\delta$, for a small $\delta>0$ which is then sent to $0$, has been employed.
So, finally
$$
\boxed{
\int_{c-i\infty}^{c+i\infty}\frac{e^{tz}}{\sqrt{1+z^2}}dz
=i2\pi J_0(t), \text{ for }t>0.
}
$$
To sum up
$$
f(t)=
\begin{cases}
0 &\text{if }t<0\\
1/2 &\text{if }t=0\text{ (in the }PV\text{ sense)}\\
J_0(t)&\text{if }t>0.
\end{cases}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Integral of polylogarithms and logs in closed form: $\int_0^1 \frac{du}{u}\text{Li}_2(u)^2(\log u)^2$ Is it possible to evaluate this integral in closed form?
$$ \int_0^1 \frac{du}{u}\text{Li}_2(u)^2\log u \stackrel{?}{=} -\frac{\zeta(6)}{3}.$$
I found the possible closed form using an integer relation algorithm.
I found several other possible forms for similar integrals, including
$$ \int_0^1 \frac{du}{u}\text{Li}_2(u)^2(\log u)^2 \stackrel{?}{=} -20\zeta(7)+12\zeta(2)\zeta(5).$$
There doesn't seem to be an equivalent form when the integrand contains $(\log u)^3$, at least not just in terms of $\zeta$.
Does anybody know a trick for evaluating these integrals?
Update. The derivation of the closed form for the second integral follows easily along the ideas O.L. used in the answer for the first integral.
Introduce the functions
$$ I(a,b,c) = \int_0^1 \frac{du}{u}(\log u)^c \text{Li}_a(u)\text{Li}_b(u) $$
and
$$ S(a,b,c) = \sum_{n,m\geq1} \frac{1}{n^am^b(n+m)^c}. $$
Using integration by parts, the expansion of polylogarithms from their power series definition and also that
$$ \int_0^1 (\log u)^s u^{t-1}\,du = \frac{(-1)^s s!}{t^{s+1}},$$
check that
$$ I(2,2,2) = -\frac23 I(1,2,3) = 4S(1,2,4). $$
Now use binomial theorem and the fact that $S(a,b,c)=S(b,a,c)$ to write
$$ 6S(1,2,4) + 2S(3,0,4) = 3S(1,2,4) + 3S(2,1,4)+S(0,3,4)+S(3,0,4) = S(3,3,1). $$
Now, using Mathematica,
$$ S(3,3,1) = \sum_{n,m\geq1}\frac{1}{n^3m^3(n+m)} = \sum_{m\geq1}\left(\frac{H_m}{m^6} - \frac{\zeta(2)}{m^5} + \frac{\zeta(3)}{m^4}\right), $$
and
$$ \sum_{m\geq1}\frac{H_m}{m^6} = -\zeta(4)\zeta(3)-\zeta(2)\zeta(5)+4\zeta(7), $$
so
$$ S(3,3,1) = 4\zeta(7)-2\zeta(2)\zeta(5). $$
Also,
$$ S(0,3,4) = \zeta(3)\zeta(4) - \sum_{m\geq1} \frac{H_{m,4}}{m^3} = 17\zeta(7)-10\zeta(2)\zeta(5), $$
from which it follows that
$$ I(2,2,2) = \frac23\left(S(3,3,1)-2S(0,3,4)\right) = -20\zeta(7)+12\zeta(2)\zeta(5). $$
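A numerical sanity check of this closed form (a verification sketch: $\zeta(2)$ is taken exactly as $\pi^2/6$, the remaining zeta values and the double sum $S(1,2,4)$ by truncated sums, using $I(2,2,2)=4S(1,2,4)$ from above):

```python
import math

# Truncated double sum S(1,2,4) = sum over n,m >= 1 of 1/(n * m^2 * (n+m)^4)
N = 400
S124 = sum(1.0 / (n * m * m * (n + m) ** 4)
           for n in range(1, N + 1) for m in range(1, N + 1))

zeta2 = math.pi ** 2 / 6
zeta5 = sum(1.0 / k ** 5 for k in range(1, 100000))
zeta7 = sum(1.0 / k ** 7 for k in range(1, 100000))

closed_form = -20 * zeta7 + 12 * zeta2 * zeta5
# I(2,2,2) = 4*S(1,2,4) should agree with the closed form
assert abs(4 * S124 - closed_form) < 1e-5
print(4 * S124, closed_form)
```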
| I've decided to publish my work so far - I do not promise a solution, but I've made some progress that others may find interesting and/or helpful.
$$\text{Let } I_{n,k}=\int_{0}^{1}\frac{\text{Li}_{k}(u)}{u}\log(u)^{n}du$$
Integrating by parts gives $$I_{n,k}=\left[\text{Li}_{k+1}(u)\log(u)^{n}\right]_{u=0}^{u=1}-\int_{0}^{1}\frac{\text{Li}_{k+1}(u)}{u}n\log(u)^{n-1}du$$
$$\text{Hence, }I_{n,k}=-nI_{n-1,k+1} \implies I_{n,k}=(-1)^{r}\frac{n!}{(n-r)!}I_{n-r,k+r}$$
Taking $r=n$ gives $I_{n,k}=(-1)^{n}n!I_{0,n+k}$.
$$\text{But obviously } I_{0,n+k}=\int_{0}^{1}\frac{\text{Li}_{n+k}(u)}{u}du=\text{Li}_{n+k+1}(1)-\text{Li}_{n+k+1}(0)=\zeta(n+k+1)$$
$$\text{Now consider }J_{n,k,l}=\int_{0}^{1}\frac{\text{Li}_{k}(u)}{u}\text{Li}_{l}(u)\log(u)^{n}du$$
Integrating by parts again,
$$J_{n,k,l}=\left[\text{Li}_{k+1}(u)\text{Li}_{l}(u)\log(u)^{n}\right]_{0}^{1}-\int_{0}^{1}\frac{\text{Li}_{l-1}(u)}{u}\text{Li}_{k+1}(u)\log(u)^{n}-\int_{0}^{1}\frac{n\log(u)^{n-1}}{u}\text{Li}_{k+1}(u)\text{Li}_{l}(u) du$$
So $J_{n,k,l}=-J_{n,k+1,l-1}-nJ_{n-1,k+1,l}$; continuing in the spirit of the first part suggests that we ought to try to increase the first and second indices, while decreasing the third. If we can succeed in this, we have found a closed form.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 1
} |
Looking for a good counterargument against vector space decomposition. How do I see that I cannot write $\mathbb{R}^n = \bigcup_{\text{all possible }M} \operatorname{span}(M)$, where $M$ runs over the subsets with $n-1$ elements of the set of vectors $N=\{a_1,\ldots,a_n,\ldots,a_m\} \subset \mathbb{R}^n$, and the span of all of them has dimension $n$.
| Let $V$ be a vector space over an infinite field $F$, and $V_1, \ldots, V_n$ proper subspaces. Then I claim $\bigcup _j V_j$ is not a vector space.
For each $k$ let $u_k$ be a vector not in $V_k$. We then inductively find vectors $w_k$ not in $\bigcup_{j \le k} V_j$, starting with $w_1 = u_1$. Namely, suppose $w_k \notin \bigcup_{j \le k} V_j$ and $u_{k+1} \notin V_{k+1}$, and consider $f(t) = t u_{k+1} + (1-t) w_k$ for scalars $t$. If this were in $V_j$ for two different values of $t$, say $t_1 \ne t_2$, then it would be in $V_j$ for all $t$, because
$$f(t) = \dfrac{t - t_1}{t_2 - t_1} f(t_2) + \dfrac{t - t_2}{t_1 - t_2} f(t_1)$$
This is not the case for any $j \in \{1,\ldots,k+1\}$, because $f(1) = u_{k+1} \notin V_{k+1}$ and $f(0) = w_k \notin V_j$ for $j \le k$. So for each $j$ there is at most one value of $t$ with $f(t) \in V_j$; hence there are at most $k+1$ values of the scalar $t$ for which $f(t) \in \bigcup_{j \le k+1} V_j$, and infinitely many for which it is not. Any such $f(t)$ can be taken as $w_{k+1}$.
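A concrete low-dimensional instance of the claim can be played with in Python: the union of the two coordinate axes in $\mathbb{R}^2$ (each a proper subspace) is not closed under addition, so it is not a subspace:

```python
def on_x_axis(v):
    return v[1] == 0

def on_y_axis(v):
    return v[0] == 0

def in_union(v):
    # membership in the union of the two proper subspaces
    return on_x_axis(v) or on_y_axis(v)

u, w = (1, 0), (0, 1)
assert in_union(u) and in_union(w)
s = (u[0] + w[0], u[1] + w[1])       # u + w = (1, 1)
assert not in_union(s)               # the union is not closed under +
print("union of the two axes is not a subspace")
```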
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Continuity and Metric Spaces How do I show that the function $f:X \to \mathbb R$ given by $$f(x)=\frac{d(a,x)}{d(a,b)}$$ is continuous.
Given that $(X,d)$ is a metric space, and $a,b$ are distinct points in $X$.
| If $d(x,y)<d(a,b)\cdot\varepsilon$ then
$$
|f(x)-f(y)| = \left|\frac{d(a,x)}{d(a,b)} - \frac{d(a,y)}{d(a,b)}\right| \le \frac{d(x,y)}{d(a,b)}<\varepsilon.
$$
The first inequality follows from two instances of the triangle inequality: $d(a,x)+d(x,y)\ge d(a,y)$ and $d(a,y)+d(y,x)\ge d(a,x)$.
So given $\varepsilon>0$, let $\delta =d(a,b)\cdot\varepsilon$.
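The displayed inequality says $f$ is in fact Lipschitz with constant $1/d(a,b)$. A quick random check in the Euclidean plane (an arbitrary choice of metric space for illustration):

```python
import math
import random

random.seed(1)

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = (0.0, 0.0), (3.0, 4.0)        # distinct points, d(a, b) = 5

def f(x):
    return d(a, x) / d(a, b)

for _ in range(10000):
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    y = (random.uniform(-10, 10), random.uniform(-10, 10))
    # |f(x) - f(y)| <= d(x, y) / d(a, b), the Lipschitz bound
    assert abs(f(x) - f(y)) <= d(x, y) / d(a, b) + 1e-12
print("Lipschitz bound holds on all samples")
```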
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the average weight of a minimal spanning tree of $n$ randomly selected points in the unit cube? Suppose we pick $n$ random points in the unit cube in $\mathbb{R}^3$, $p_1=\left(x_1,y_1,z_1\right),$ $p_2=\left(x_2,y_2,z_2\right),$ etc. (So, $x_i,y_i,z_i$ are $3n$ uniformly distributed random variables between $0$ and $1$.) Let $\Gamma$ be a complete graph on these $n$ points, and weight each edge $\{p_i,p_j\}$ by $$w_{ij}=\sqrt{\left(x_i-x_j\right)^2+\left(y_i-y_j\right)^2+\left(z_i-z_j\right)^2}.$$
Question: What is the expected value of the total weight of a minimal spanning tree of $\Gamma$?
(Note: Here total weight means the sum of all edges in the minimal spanning tree.)
A peripheral request: The answer is probably a function of $n$, but I don't have the computing power or a good implementation of Kruskal's algorithm to suggest what this should look like. If someone could run a simulation to generate this average over many $n$, it might help towards a solution to see this data.
| If $n = 0$ or $n = 1$ the answer obviously is 0. If $n = 2$ we have
$$E\left((x_1 - x_2)^2\right) = E(x_1^2 - 2x_1x_2 + x_2^2) = E(x_1^2) - 2E(x_1)\cdot E(x_2) + E(x_2^2) \\= \frac13 - 2\frac12\cdot\frac12 + \frac13 = \frac16.$$
The same holds for the $y$- and $z$-coordinates, so $E(w_{12}^2) = \frac16 + \frac16 + \frac16 = \frac12$. Note that this gives $\sqrt{E(w_{12}^2)} = \frac1{\sqrt2}$, which by Jensen's inequality is only an upper bound on $E(w_{12})$ (the exact mean distance between two uniform points in the unit cube is the Robbins constant, about $0.6617$). The spanning tree contains the edge $\{\,1, 2\,\}$ only.
I see it is possible to consider several cases for $n = 3$, however for arbitrary $n$ I don't expect to get close form of the answer.
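Since the question explicitly asks for simulation data, here is a small sketch (assuming SciPy is available; `minimum_spanning_tree` and `pdist` are SciPy routines) that estimates the average MST weight for a few values of $n$:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)

def mst_weight(n):
    # n uniform points in the unit cube; pairwise distances form the complete graph
    pts = rng.random((n, 3))
    dists = squareform(pdist(pts))
    # total weight of a minimal spanning tree of that complete graph
    return minimum_spanning_tree(dists).sum()

for n in (10, 100, 1000):
    avg = np.mean([mst_weight(n) for _ in range(20)])
    print(n, avg)
```

For random points in $[0,1]^d$ the total MST weight is known to grow on the order of $n^{(d-1)/d}$, i.e. $n^{2/3}$ here, which the printed averages are consistent with.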
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/424995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Characterizing continuous exponential functions for a topological field Given a topological field $K$ that admits a non-trivial continuous exponential function $E$, must every non-trivial continuous exponential function $E'$ on $K$ be of the form $E'(x)=E(r\sigma (x))$ for some $r \in K$* and $\sigma \in Aut(K/\mathbb{Q})$?
If not, for which fields other than $\mathbb{R}$ is this condition met?
Thanks to Zev
| It seems that as-stated, the answer is false. I'm not satisfied with the following counterexample, however, and I'll explain afterwards.
Take $K = \mathbb{C}$ and let $E(z) = e^z$ be the standard complex exponential. Take $E'(z) = \overline{e^z} = e^{\overline{z}}$, where $\overline{z}$ is the complex conjugate of $z$. Then $E'(z)$ is not of the form $E(r z)$, and yet is a perfectly fine homomorphism from the additive to the multiplicative groups of $\mathbb{C}$.
Here's why I'm not satisfied: you can take any automorphism of a field and cook up new exponentials by post-composition or pre-composition. In the case I mentioned, these two coincide.
This won't work in $\mathbb{R}$ because there are no nontrivial continuous automorphisms there. I would be interested in seeing an answer to a reformulation to this problem that reflected this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Show that $(1+x)(1+y)(1+z)\ge 8(1-x)(1-y)(1-z)$.
If $x>0,y>0,z>0$ and $x+y+z=1$, prove that $$(1+x)(1+y)(1+z)\ge 8(1-x)(1-y)(1-z).$$
Trial: Here $$(1+x)(1+y)(1+z)\ge 8(1-x)(1-y)(1-z) \\ \implies (1+x)(1+y)(1+z)\ge 8(y+z)(x+z)(x+y)$$ I am unable to solve the problem. Please help.
| $$(1+x)(1+y)(1+z) \ge 8(1-x)(1-y)(1-z) \Leftrightarrow $$
$$(2x+y+z)(x+2y+z)(x+y+2z) \ge 8(y+z)(x+z)(x+y)$$
Let $a=x+y, b=x+z, c=y+z$. Then the inequality to prove is
$$(a+b)(a+c)(b+c) \ge 8abc \,,$$
which follows immediately from AM-GM:
$$a+b \ge 2 \sqrt{ab}$$
$$a+c \ge 2 \sqrt{ac}$$
$$b+c \ge 2 \sqrt{bc}$$
Simplification The solution above can be simplified the following way:
By AM-GM
$$2\sqrt{(1-y)(1-z)}\le 1-y+1-z=1+x \,.$$
Similarly
$$2\sqrt{(1-x)(1-z)}\le 1+y \,.$$
$$2\sqrt{(1-x)(1-y)}\le 1+z \,.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Calculate the probability that two teams draw If we know that team A has a $39\%$ chance of winning and team B a $43\%$ chance of winning, how can we calculate the probability that the teams draw?
My textbook mentions the answer but I cannot understand the logic behind it. The answer is $18\%$. As the working is not shown, I guess that this is how they find the $18\%$ probability of a draw:
$$ (100\% - 39\%) - 43\% = 18\%$$
But I cannot understand the logic behind it. I appreciate if someone can explain it to me.
| The sum of all events' probabilities is equal to 1. In this case, there are three disjoint events: team A winning, team B winning or a draw. Since we know the sum of these probabilities is 1, we can get the probability of a draw as follows:
$$
Pr(\text{Draw})=1-Pr(\text{Team A wins})-Pr(\text{Team B wins})=1-0.39-0.43=0.18
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Where to start Machine Learning? I've recently stumbled upon machine learning: generating user recommendations based on user data, generating text teasers based on an article. Although there are tons of frameworks that do this (Apache Mahout, the Duine framework, and more), I wanted to know the core principles and core algorithms of machine learning, and in the future write my own implementation.
Where do I actually start learning machine learning: with its basics, or with concepts first and then implementation? Also, I have weak-to-average math skills (I hope this will not hurt? If so, what branches of mathematics should I study before jumping into machine learning?)
Note that this is not related to my academics rather I want to learn this as an independent researcher and I am quite fascinated how machine learning works
| I would also recommend the course Learning from data by Yaser Abu-Mostafa from Caltech. An excellent course!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 8,
"answer_id": 6
} |
Question about linear systems of equations Let $X=\{x_1,\cdots,x_n\}$ be a set of variables in $\mathbb{R}$.
Let $S_1$ be a set of linear equations of the form $a_1 x_1+\cdots+a_n x_n=b$ that are independent.
Let $k_1=|S_1|<n$ where $|S_1|$ denotes the rank of $S_1$ (i.e., the number of independent equations).
That is, $S_1$ does not contain enough equations to uniquely specify the values of the variables in $X$.
How many other equations are needed to solve the system uniquely? The answer is $n - k_1$.
Let $M$ be a set of $n-k_1$ equations such that $S_1 \cup M$ is full rank (i.e., the system $S_1 \cup M$ can be solved uniquely). My first question is that how can one find a set $M$?
Let $S_2$ be a set of independent equations such that $|S_2|<n$ too. Now I want to find an $M$ such that both $S_1 \cup M$ and $S_2 \cup M$ are uniquely solvable. How can I find such an $M$? Note that $|M|$ must be $\ge \max(n-k_1,n-k_2)$, where $k_2=|S_2|$.
What if we extend the question to $S_1,\cdots,S_m$ such that $S_1\cup M,\cdots,S_m\cup M$ are all uniquely solvable?
A partial answer is also appreciated.
| Here is a reasonably easy algorithm if you already know some basic stuff about matrices:
Look at the coefficients matrix's rows as vectors in $\;\Bbb R^n\;$ :
$$A:=\{ v_1=(a_{11},\ldots,a_{1n})\;,\;v_2=(a_{21},\ldots,a_{2n})\,\ldots,v_k=(a_{k1},\ldots,a_{kn})\}\;\;(\text{where $\,k=k_1\;$ for simplicity of notation)}$$
Since you're given the equations are independent that means $\,A\,$ is a linearly independent set of vectors in $\,\Bbb R^n\;$ .
Well, now just "simply" complete the set $\,A\,$ to a basis of $\,\Bbb R^n\,$ and that's all...you can do this by taking the matrix
$$\begin{pmatrix}a_{11}&a_{12}&\ldots&a_{1n}\\a_{21}&a_{22}&\ldots&a_{2n}\\\ldots&\ldots&\ldots&\ldots\\a_{k1}&a_{k2}&\ldots&a_{kn}\end{pmatrix}$$
and adding each time a new row $\,(b_1,\ldots,b_n)\,$. Check whether this matrix is singular or not (for example, by reducing this matrix and checking that the row you added doesn't become all zeros!), and continue on until you get an $\,n\times n\,$ regular matrix and voila!
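As a concrete sketch of this completion procedure (a minimal illustration using NumPy's rank computation; the helper name is made up for this example), one can try appending standard basis rows and keeping each one only if it increases the rank:

```python
import numpy as np

def complete_to_full_rank(A):
    """Append standard basis rows to the k x n matrix A until it is n x n and regular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[1]
    rows = list(A)
    for e in np.eye(n):                     # candidate rows e_1, ..., e_n
        if len(rows) == n:
            break
        trial = np.vstack(rows + [e])
        if np.linalg.matrix_rank(trial) == len(rows) + 1:
            rows.append(e)                  # keep e only if it is independent of the rest
    return np.vstack(rows)

A = np.array([[1.0, 2.0, 3.0]])             # k = 1 equation in n = 3 unknowns
M = complete_to_full_rank(A)
print(np.linalg.matrix_rank(M))             # 3, so the completed system is uniquely solvable
```

The standard basis always suffices here, since any linearly independent set can be extended to a basis using vectors from any spanning set.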
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2+2}$ converge? I'm trying to find out whether
$$\sum_{n=1}^{\infty} \frac{\ln(n)}{n^2+2}$$
is convergent or divergent?
| Clearly
$$
\frac{\ln(n)}{n^2+2}\leq \frac{\ln(n)}{n^2}
$$
You can apply the integral test to show that $\sum\frac{\ln(n)}{n^2}$ converges. You only need to check that $\frac{\ln(n)}{n^2}$ is decreasing. But, the derivative is clearly negative for $n>e$.
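As a quick numerical sanity check (a sketch, not part of the proof), the partial sums level off, consistent with convergence by the comparison above:

```python
from math import log

def partial_sum(n):
    # sum of ln(k)/(k^2 + 2) for k = 1..n  (the k = 1 term is 0)
    return sum(log(k) / (k * k + 2) for k in range(1, n + 1))

for n in (10**3, 10**4, 10**5):
    print(n, partial_sum(n))
```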
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
} |
What is the range of values of the random variable $Y = \frac{X - \min(X)}{\max(X)-\min(X)}$? Suppose $X$ is an arbitrary numeric random variable. Define the variable $Y$ as
$$Y=\frac{X-\min(X)}{\max(X)-\min(X)}.$$
Then what is the range of values of $Y$?
| If $X$ takes values over any finite (closed) interval, then the range of $Y$ is $[0,1]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Vorticity equation in index notation (curl of Navier-Stokes equation) I am trying to derive the vorticity equation and I got stuck when trying to prove the following relation using index notation:
$$
{\rm curl}((\textbf{u}\cdot\nabla)\mathbf{u}) = (\mathbf{u}\cdot\nabla)\pmb\omega - ( \pmb\omega \cdot\nabla)\mathbf{u}
$$
considering that the fluid is incompressible $\nabla\cdot\mathbf{u} = 0 $, $\pmb \omega = {\rm curl}(\mathbf{u})$ and that $\nabla \cdot \pmb \omega = 0.$
Here follows what I've done so far:
$$
(\textbf{u}\cdot\nabla) \mathbf{u} = u_m\frac{\partial u_i}{\partial x_m} \mathbf{e}_i = a_i \mathbf{e}_i \\
{\rm curl}(\mathbf{a}) = \epsilon_{ijk} \frac{\partial a_k}{\partial x_j} \mathbf{e}_i = \epsilon_{ijk} \frac{\partial}{\partial x_j}\left( u_m\frac{\partial u_k}{\partial x_m} \right) \mathbf{e}_i = \\
= \epsilon_{ijk}\frac{\partial u_m}{\partial x_j}\frac{\partial u_k}{\partial x_m} \mathbf{e}_i + \epsilon_{ijk}u_m \frac{\partial^2u_k}{\partial x_j \partial x_m} \mathbf{e}_i \\
$$
the second term $\epsilon_{ijk}u_m \frac{\partial^2u_k}{\partial x_j \partial x_m} \mathbf{e}_i$ seems to be the first term "$(\mathbf{u}\cdot\nabla)\pmb\omega$" from the aforementioned identity. Does anyone have an idea how to get the second term?
| The trick is the following:
$$ \epsilon_{ijk} \frac{\partial u_m}{\partial x_j} \frac{\partial u_m}{\partial x_k} = 0 $$
by antisymmetry.
So you can rewrite
$$ \epsilon_{ijk} \frac{\partial u_m}{\partial x_j} \frac{\partial u_k}{\partial x_m} = \epsilon_{ijk} \frac{\partial u_m}{\partial x_j}\left( \frac{\partial u_k}{\partial x_m} - \frac{\partial u_m}{\partial x_k} \right) $$
Note that the term in the parentheses is something like $\pm\epsilon_{kml} \omega_l$
Lastly use the product property for Levi-Civita symbols
$$ \epsilon_{ijk}\epsilon_{lmk} = \delta_{il}\delta_{jm} - \delta_{im}\delta_{jl} $$
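The product identity for the Levi-Civita symbol can be verified exhaustively; here is a small sketch over indices $\{0,1,2\}$ (the closed-form expression used for $\epsilon$ is a standard one):

```python
import itertools

def eps(i, j, k):
    # Levi-Civita symbol: +1/-1 for even/odd permutations of (0,1,2), else 0
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    return 1 if i == j else 0

# verify eps_ijk eps_lmk = delta_il delta_jm - delta_im delta_jl for all indices
for i, j, l, m in itertools.product(range(3), repeat=4):
    lhs = sum(eps(i, j, k) * eps(l, m, k) for k in range(3))
    rhs = delta(i, l) * delta(j, m) - delta(i, m) * delta(j, l)
    assert lhs == rhs
print("identity verified")
```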
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
Function problem Show that function $f(x) =\frac{x^2+2x+c}{x^2+4x+3c}$ attains any real value if $0 < c \leq 1$ Problem :
Show that function $f(x)=\dfrac{x^2+2x+c}{x^2+4x+3c}$ attains any real value if $0 < c \leq 1$
My approach :
Let the given function $f(x) =\dfrac{x^2+2x+c}{x^2+4x+3c} = t $ where $t$ is any arbitrary constant.
$\Rightarrow (t-1)x^2+2(2t-1)x+c(3t-1)=0$
The argument $x$ must be real, therefore $(2t-1)^2-(t-1)(3tc-c) \geq 0$.
Now how to proceed further? Please guide. Thanks.
| $(2t-1)^2-(t-1)(3tc-c) \geq 0\implies 4t^2+1-4t-(3t^2c-4tc+c)\geq 0\implies t^2(4-3c)+4(c-1)t+(1-c)\geq 0$
Now a quadratic polynomial is $\geq 0$ for all $t\in \Bbb R$ iff the coefficient of the second power of the variable is positive and the discriminant is $\leq 0$.
This gives $4-3c>0\implies c<\frac{4}{3}$, and $D=16(c-1)^2+4(4-3c)(c-1)\leq 0\implies 4(c-1)(4c-4+4-3c)\leq 0\implies 4(c-1)c\leq 0\implies 0\leq c\leq 1$.
So $c<\frac{4}{3}$ and $0\leq c\leq 1\implies 0\leq c\leq 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Quadratic residues, mod 5, non-residues mod p 1) If $p\equiv 1\pmod 5$, how can I prove/show that 5 is a quadratic residue mod p?
2) If $p\equiv 2\pmod 5$, how can I prove/show that $5$ is a quadratic nonresidue mod $p$?
| 1) $(5/p) = (p/5)$ by quadratic reciprocity, since $5 \equiv 1 \pmod 4$. As $p \equiv 1 \pmod 5$, we get $(p/5) = (1/5) = 1$. So $5$ is a quadratic residue mod $p$.
2) Again $(5/p) = (p/5)$. Since $p \equiv 2 \pmod 5$, we get $(p/5) = (2/5) = -1$, because $5 \equiv 5 \pmod 8$. So $5$ is not a quadratic residue mod $p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why does Monte-Carlo integration work better than naive numerical integration in high dimensions? Can anyone explain simply why Monte-Carlo works better than naive Riemann integration in high dimensions? I do not understand how choosing randomly the points on which you evaluate the function can yield a more precise result than distributing these points evenly on the domain.
More precisely:
Let $f:[0,1]^d \to \mathbb{R}$ be a continuous bounded integrable function, with $d\geq3$. I want to compute $A=\int_{[0,1]^d} f(x)dx$ using $n$ points. Compare 2 simple methods.
The first method is the Riemann approach. Let $x_1, \dots, x_n$ be $n$ regularly spaced points in $[0,1]^d$ and $A_r=\frac{1}{n}\sum_{i=1}^n f(x_i)$. I have that $A_r \to A$ as $n\to\infty$. The error will be of order $O(\frac{1}{n^{1/d}})$.
The second method is the Monte-Carlo approach. Let $u_1, \dots, u_n$ be $n$ points chosen randomly but uniformly over $[0,1]^d$. Let $A_{mc}=\frac{1}{n}\sum_{i=1}^n f(u_i)$. The central limit theorem tells me that $A_{mc} \to A$ as $n\to \infty$ and that $A_{mc}-A$ will be in the limit a gaussian random variable centered on $0$ with variance $O(\frac{1}{n})$. So with a high probability the error will be smaller than $\frac{C}{\sqrt{n}}$ where $C$ does not depend (much?) on $d$.
An obvious problem with the Riemann approach is that if I want to increase the number of points while keeping a regular grid I have to go from $n=k^d$ to $n=(k+1)^d$, which adds a lot of points. I do not have this problem with Monte-Carlo.
But if the number of points is fixed at $n$, does Monte-Carlo really yield better results than Riemann? It seems true in most cases. But I do not understand how choosing the points randomly can be better. Does anybody have an intuitive explanation for this?
| I think it is not the case that random points perform better than selecting the points manually as done in the Quasi-Monte Carlo methods and the sparse grid method:
http://www.mathematik.hu-berlin.de/~romisch/papers/Rutg13.pdf
Also in Monte Carlo methods one usually uses random numbers to generate an adaptive integration method.
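To make the dimension effect concrete, here is a small sketch comparing a midpoint grid and Monte Carlo at the same budget of $n = k^d$ points (the integrand and its exact integral are chosen purely for this example):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 3
n = k**d                                    # same point budget for both methods

# f(x) = prod_i 3 x_i^2 has integral exactly 1 over [0,1]^d
f = lambda pts: np.prod(3.0 * pts**2, axis=-1)
exact = 1.0

# midpoint rule on a regular grid: only k = 3 points per axis at this budget
axes = (np.arange(k) + 0.5) / k
grid = np.stack(np.meshgrid(*([axes] * d), indexing="ij"), axis=-1).reshape(-1, d)
riemann = f(grid).mean()

# Monte Carlo with the same total number of points
mc = f(rng.random((n, d))).mean()

print(abs(riemann - exact), abs(mc - exact))
```

The Monte Carlo error fluctuates with the seed, but the point is structural: at a fixed budget the grid can afford only $k = n^{1/d}$ points per axis, so its per-axis resolution collapses as $d$ grows, while the Monte Carlo error stays of order $n^{-1/2}$.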
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 4,
"answer_id": 3
} |
Help with a conditional probability problem There are 6 balls in a bag and they are numbered 1 to 6.
We draw two balls without replacement.
Is the probability of drawing a "6" followed by drawing an "even" ball the same as the probability of drawing an "even" ball followed by drawing a "6".
According to Bayes Theorem these two possibilities should be the same:
Pr(A and B) = Pr(A) x Pr(B∣A)
Pr(A and B) = Pr(B) x Pr(A∣B)
However, when I try to work this out I am getting two different probabilities, 2/30 and 3/30 for the two different scenarios listed above. The first scenario is fairly straight-forward to determine,
Pr(6) x Pr(even∣6 has already been drawn)
1/6 x 2/5 = 2/30
however, I think I am doing something wrong with the second scenario,
Pr(even) x Pr(6∣even has already been drawn)
3/6 x ?????
Any help would be greatly appreciated as this is really bugging me.
Thank you in advance....
| There is no need to compute anything: by symmetry, all orders of drawing the balls are equally likely, so the two probabilities must coincide. (For your second scenario, note that $\Pr(6 \mid \text{an even ball has already been drawn})$ is not $\frac15$: with probability $\frac23$ the even ball drawn first was not the $6$ itself, giving $\frac23\cdot\frac15 = \frac2{15}$, and then $\frac36\cdot\frac2{15} = \frac2{30}$ as expected.)
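One can also confirm this by brute force over the $30$ equally likely ordered draws (a quick sketch):

```python
from itertools import permutations

draws = list(permutations(range(1, 7), 2))         # 30 equally likely ordered pairs
p_six_then_even = sum(a == 6 and b % 2 == 0 for a, b in draws) / len(draws)
p_even_then_six = sum(a % 2 == 0 and b == 6 for a, b in draws) / len(draws)
print(p_six_then_even, p_even_then_six)            # both equal 2/30
```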
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Sum of greatest common divisors As usually, let $\gcd(a,b)$ be the greatest common divisor of integer numbers $a$ and $b$.
What is the asymptotics of
$$\frac{1}{n^2} \sum_{i=1}^{i=n} \sum_{j=1}^{j=n} \gcd(i,j)$$
as $n \to \infty?$
| Of the lattice points in $[1,n] \times [1,n]$, a proportion $1-\frac 1{p^2}$ have no factor $p$ in the $\gcd$, $\frac 1{p^2}-\frac 1{p^4}$ have a factor $p$ in the $\gcd$, $\frac 1{p^4}-\frac 1{p^6}$ have a factor $p^2$ in the $\gcd$, $\frac 1{p^6}-\frac 1{p^8}$ have a factor $p^3$ in the $\gcd$, and so on. That means that a prime $p$ contributes a factor $(1-\frac 1{p^2})+p(\frac 1{p^2}-\frac 1{p^4})+p^2(\frac 1{p^4}-\frac 1{p^6})+\dots$, or $\sum_{i=0}^\infty(p^{-i}-p^{-i-2})=\sum_{i=0}^\infty p^{-i}(1-p^{-2})=\frac {1-p^{-2}}{1-p^{-1}}=1+\frac 1p$. I don't know how to justify the use of the fact that $\gcd$ is multiplicative to turn this into $$\lim_{m \to \infty}\prod_{p \text{ prime}}^m\left(1+\frac 1p\right)$$ to get the asymptotics, but it seems like it should work by taking, say, $m=\sqrt n$ and letting $n \to \infty$
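A brute-force sketch of the quantity in question (slow, but enough to see the trend) suggests the average grows roughly like a constant times $\log n$ rather than tending to a finite limit, consistent with the divergence of $\prod_p(1+\frac1p)$:

```python
from math import gcd, log

def avg_gcd(n):
    # (1/n^2) * sum of gcd(i, j) over 1 <= i, j <= n
    return sum(gcd(i, j) for i in range(1, n + 1) for j in range(1, n + 1)) / n**2

for n in (50, 100, 200, 400):
    print(n, avg_gcd(n), avg_gcd(n) / log(n))   # the ratio settles down slowly
```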
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/425954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
$u,v$ are harmonic conjugate with each other in some domain $u,v$ are harmonic conjugate with each other in some domain , then we need to show
$u,v$ must be constant.
as $v$ is harmonic conjugate of $u$ so $f=u+iv$ is analytic.
as $u$ is harmonic conjugate of $v$ so $g=v+iu$ is analytic.
$f-ig=2u$ and $f+ig=2iv$ are analytic, but from here how to conclude that $u,v$ are constant? well I know they are real valued function, so by open mapping theorem they are constant?
| Your proof is correct. I add some remarks:
1. $v$ is a conjugate of $u$ if and only if $-u$ is a conjugate of $v$ (since $u+iv$ and $v-iu$ are constant multiples of each other).
2. Since the harmonic conjugate is unique up to an additive constant, the assumption that $u$ is a conjugate of $v$ implies (because of 1) that $u=-u+\text{const}$, and the conclusion follows.
3. Related to 1: the Hilbert transform $H$ satisfies $H\circ H=-\text{id}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Cantor's Diagonal Argument Why can't Cantor's Diagonal Argument, which proves that the set of real numbers is not countable, be applied to the natural numbers? Indeed, if we drop the "0." in the proof, the list contains all natural numbers, and the argument can seemingly be applied to this set.
| How about this slightly different (but equivalent) form of the proof? I assume that you already agree that the natural numbers $\mathbb{N}$ are countable, and your question is with the real numbers $\mathbb{R}$.
Theorem: Let $S$ be any countable set of real numbers. Then there exists a real number $x$ that is not in $S$.
Proof: Cantor's Diagonal argument. Note that in this version, the proof is no longer by contradiction, you just construct an $x$ not in $S$.
Corollary: The real numbers $\mathbb{R}$ are uncountable.
Proof: The set $\mathbb{R}$ contains every real number as a member by definition. By the contrapositive of our Theorem, $\mathbb{R}$ cannot be countable.
Note that this formulation will not work for $\mathbb{N}$ because $\mathbb{N}$ is countable and contains all natural numbers, and thus would be an instant counterexample for the hypothesis of the natural number version of our Theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Evaluating $\int_0^\infty \frac{dx}{1+x^4}$. Can anyone give me a hint to evaluate this integral?
$$\int_0^\infty \frac{dx}{1+x^4}$$
I know it will involve the gamma function, but how?
| Following is a computation that uses Gamma function:
For any real number $k > 1$, let $I_k$ be the integral:
$$I_k = \int_0^\infty \frac{dx}{1+x^k}$$
Consider two steps in changing the variable. First by $y = x^k$ and then by $z = \frac{y}{1+y}$. Notice:
$$\frac{1}{1+y} = 1 - z,\quad y = \frac{z}{1-z}\quad\text{ and }\quad dy = \frac{dz}{(1-z)^2}$$
We get:
$$\begin{align}
I_k = & \int_0^{\infty}\frac{1}{1 + y} d y^{\frac{1}{k}} = \frac{1}{k}\int_0^\infty \frac{1}{1+y}y^{\frac{1}{k}-1} dy\\
= & \frac{1}{k}\int_0^1 (1-z) \left(\frac{z}{1-z}\right)^{\frac{1}{k}-1} \frac{dz}{(1-z)^2}
= \frac{1}{k}\int_0^1 z^{\frac{1}{k}-1} (1-z)^{-\frac{1}{k}} dz\\
= & \frac{1}{k} \frac{\Gamma(\frac{1}{k})\Gamma(1 - \frac{1}{k})}{\Gamma(1)}
= \frac{\pi}{k \sin\frac{\pi}{k}}
\end{align}$$
For $k = 4$, we get:
$$I_4 = \int_0^\infty \frac{dx}{1+x^4} = \frac{\pi}{4\sin \frac{\pi}{4}} = \frac{\pi}{2\sqrt{2}}$$
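A quick numerical check of the final value (a sketch using a plain trapezoid rule after mapping $[0,\infty)$ to $[0,1)$ via $x = t/(1-t)$):

```python
import numpy as np
from math import pi, sqrt

t = np.linspace(1e-6, 1 - 1e-6, 200001)
x = t / (1 - t)
integrand = 1.0 / (1.0 + x**4) / (1 - t)**2      # includes dx = dt/(1-t)^2
approx = float(np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(t)))
print(approx, pi / (2 * sqrt(2)))                # both about 1.1107
```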
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 5
} |
My text says$ \left\{\begin{pmatrix}a&a\\a&a\end{pmatrix}:a\ne0,a\in\mathbb R\right\}$ forms a group under matrix multiplication. My text says$$\left\{\begin{pmatrix}a&a\\a&a\end{pmatrix}:a\ne0,a\in\mathbb R\right\}$$ forms a group under matrix multiplication.
But I can see $I\notin$ the set and so not a group.
Am I right?
| It's important to note that this set of matrices forms a group but it does NOT form a subgroup of the matrix group $GL_2(\mathbb{R})$ (the group we are most familiar with as a matrix group: the group of invertible $2\times 2$ matrices), since every matrix in this set has determinant zero. In particular, we are looking at a subset of $Mat(\mathbb{R},2)$ which is disjoint from $GL_2(\mathbb{R})$.
The identity of the group will then be the matrix $\pmatrix{\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}}$ and the inverse of the element $\pmatrix{a&a\\ a&a}$ will be $\dfrac{1}{4}\pmatrix{a^{-1}&a^{-1}\\ a^{-1}&a^{-1}}$ (you should check this).
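You can check the claimed identity and inverse numerically; a minimal sketch:

```python
import numpy as np

def M(a):
    # the matrix with all four entries equal to a
    return a * np.ones((2, 2))

e = M(0.5)                                   # claimed identity element
a, b = 3.0, 7.0
assert np.allclose(M(a) @ M(b), M(2 * a * b))        # closure: entries multiply to 2ab
assert np.allclose(M(a) @ e, M(a)) and np.allclose(e @ M(a), M(a))
assert np.allclose(M(a) @ M(1 / (4 * a)), e)         # inverse of M(a) is M(1/(4a))
print("group axioms check out for these samples")
```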
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 4,
"answer_id": 0
} |
Domain and range of a multiple non-connected lines from a function? How do you find the domain and range of a function that has multiple non-connected lines?
Such as, $ f(x)=\sqrt{x^2-1}$. Its graph looks like this:
I'm wanting how you would write this with a set eg: $(-\infty, \infty)$.
P.S. help me out with the title. Not sure how to describe this.
| You can find the domain of the function by simply analyzing its behavior. For
$$
f(x) = \sqrt{x^2-1}
$$
you can conclude that the expression under the square root must be non-negative. So
$$
x^2-1 \ge 0 \\
(x-1)(x+1) \ge 0 \\
x \in (-\infty, -1] \cup [1, +\infty)
$$
Latter is your domain. $D[f] = (-\infty, -1] \cup [1, +\infty)$
Finding the range of a function is sometimes trickier, but not for this particular function. Again, observe its behavior: it's a square root, and a square root in real analysis can take any non-negative value, so the range is $E[f] = [0, +\infty)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Yet another $\sum = \pi$. Need to prove. How could one prove that
$$\sum_{k=0}^\infty \frac{2^{1-k} (3-25 k)(2 k)!\,k!}{(3 k)!} = -\pi$$
I've seen similar series, but none like this one...
It seems irreducible in current form, and I have no idea as to what kind of transformation might aid in finding proof of this.
| Use Beta function, I guess... for $k \ge 1$,
$$
\int_0^1 t^{2k}(1-t)^{k-1}dt = B(k,2k+1) = \frac{(k-1)!(2k)!}{(3k)!}
$$
So write
$$
f(x) = \sum_{k=0}^\infty \frac{2(3-25k)k!(2k)!}{(3k)!}x^k
$$
and compute $f(1/2)$ like this:
$$\begin{align}
f(x) &= 6+\sum_{k=1}^\infty \frac{(6-50k)k(k-1)!(2k)!}{(3k)!} x^k
\\ &= 6+\sum_{k=1}^\infty (6-50k)k x^k\int_0^1 t^{2k}(1-t)^{k-1}dt
\\ &= 6+\int_0^1\sum_{k=1}^\infty (6-50k)k x^k t^{2k}(1-t)^{k-1}\;dt
\\ &= 6+\int_0^1 \frac{4t^2x(14t^3x-14t^2x-11)}{(t^3x-t^2x+1)^3}\;dt
\\ f\left(\frac{1}{2}\right) &=
6+\int_0^1\frac{16t^2(7t^3-7t^2-11)}{(t^3-t^2+2)^3}\;dt
= -\pi
\end{align}$$
......
of course any calculus course teaches you how to integrate a rational function...
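A direct numerical check of the original series (the terms decay roughly like $(2/27)^k$, so a few dozen terms suffice; the range is capped at $k<50$ to avoid float overflow in $(3k)!$):

```python
from math import factorial, pi

s = sum(2.0**(1 - k) * (3 - 25 * k) * factorial(2 * k) * factorial(k) / factorial(3 * k)
        for k in range(50))
print(s)          # approximately -pi
```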
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 1,
"answer_id": 0
} |
Why $x^2 + 7$ is the minimal polynomial for $1 + 2(\zeta + \zeta^2 + \zeta^4)$?
Why $f(x) = x^2 + 7$ is the minimal polynomial for $1 + 2(\zeta + \zeta^2 + \zeta^4)$ (where $\zeta = \zeta_7$ is a primitive root of the unit) over $\mathbb{Q}$?
Of course it's irreducible by the Eisenstein criterion; however it apparently does not have $1 + 2(\zeta + \zeta^2 + \zeta^4)$ as a root. I tried to calculate several times, but I couldn't get $f(1 + 2(\zeta + \zeta^2 + \zeta^4)) = 0$.
Thanks in advance.
| If you don't already know the minimal polynomial, you can find it with Galois theory. The element given is an element of the cyclotomic field, and so its conjugates are all the roots of the minimal polynomial. In fact, there is only one other conjugate, obtained for example by cubing each primitive root in the original expression. So $1+2(\zeta^{3}+\zeta^{5}+\zeta^{6})$ is also a root, and there are no others. Call these $r_1$ and $r_2$. The minimal polynomial must be $(x-r_1)(x-r_2)$. The sum of the roots is zero, so we only need to compute the product, which is easily found to equal $7$.
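Numerically, $1+2(\zeta+\zeta^2+\zeta^4)$ equals $i\sqrt7$, so it does satisfy $x^2+7=0$; a quick check:

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 7)
alpha = 1 + 2 * (zeta + zeta**2 + zeta**4)
beta = 1 + 2 * (zeta**3 + zeta**5 + zeta**6)     # the other conjugate
print(alpha, alpha**2 + 7)                        # alpha is about 2.6458j, alpha^2 + 7 is about 0
print(alpha + beta, alpha * beta)                 # sum about 0, product about 7
```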
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Making exponent of $a^x$ object of the function Is it possible to make a variable the subject of a formula when it is an exponent in the equation? For example:
$$y=a^x\quad a\;\text{is constant}$$
For example, let the constant $a = 5.$
$$
\begin{array}{c|l}
\text{x} & \text{y} \\
\hline
1 & 5 \\
2 & 25 \\
3 & 125 \\
4 & 625 \\
5 & 3125 \\
\end{array}
$$
I cannot find the relation between x and y. The constant is making the equation a bit complicated. Appreciate if someone can help me here.
| Try taking the natural log "ln" of each side of your equation:
$$y = a^x \implies \ln y = \ln\left(a^x\right) = x \ln a \iff x = \dfrac{\ln y}{\ln a}$$
If $a = 5$, then we have $$x = \dfrac{\ln y}{\ln 5}$$
This gives us an equation with $x$ expressed in terms of $y$. $\;\ln a = \ln 5$ is simply a constant so $\dfrac 1{\ln 5}$ would be the coefficient of $\ln y$.
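A quick numerical check of this formula (note $5^4 = 625$):

```python
from math import log

a = 5
for y in (5, 25, 125, 625, 3125):
    print(y, log(y) / log(a))      # recovers x = 1, 2, 3, 4, 5 (up to rounding)
```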
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximal ideals in rings of polynomials Let $k$ be a field and $D = k[X_1, . . . , X_n]$ the polynomial ring in $n$ variables over $k$.
Show that:
a) Every maximal ideal of $D$ is generated by $n$ elements.
b) If $R$ is a ring and $\mathfrak m\subset D=R[X_1,\dots,X_n]$ is a maximal ideal such that $\mathfrak m \cap R$ is maximal and generated by $s$ elements, then $\mathfrak m$ is generated by $s + n$ elements.
I have been trying to solve this for days. Please help me.
| The answer to question a) can be found as Corollary 12.17 in these (Commutative Algebra) notes. The proof is left as an exercise, but the proof of it is just collecting together the previous results in the section.
(As Patrick DaSilva has mentioned, as written your question b) follows trivially from part a). I'm guessing it's not what you meant to ask.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Substituting an equation into itself, why such erratic behavior? Until now, I thought that substituting an equation into itself would $always$ yield $0=0$. What I mean by this is, for example, if I have $3x+4y=5$ and I substitute $y=\dfrac {5-3x}{4}$, I will eventually end up with $0=0$. However, consider the equation $\large{\sqrt {x+1}+\sqrt{x+2}=1}$. If we multiply by the conjugate, we get $\dfrac {-1}{\sqrt{x+1}-\sqrt{x+2}}=1$, or $\large{\sqrt{x+2}-\sqrt{x+1}=1}$. Now we can set this equation equal to the original, so $\sqrt{x+2}-\sqrt{x+1}=\sqrt {x+1}+\sqrt{x+2}$, and you get $0=2 \sqrt{x+1}$, which simplifies to $x=-1$, which is actually a valid solution to the original! So how come I am not getting $0=0$, but am actually getting useful information out of this? Is there something inherently wrong with this? Thanks.
| You didn't actually substitute anything (namely a solution for $x$) into the original equation; if you would do that the $x$ would disappear. Instead you combined the equation with a modified form of itself to obtain a new equation that is implied by the original one; the new equation may or may not have retained all information from the original one. As it happens the new equation has a unique solution and it also solves the original equation; this shows the new equation implies the original one, and you did not in fact lose any information.
If you consider the operation of just adding a multiple of an equation to itself, you can see what can happen: in most cases you get something equivalent to the original equation, but if the multiple happened to be by a factor $-1$ then you are left with $0=0$ and you have lost (in this case all) information contained in the equation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
} |
how to prove a parametric relation to be a function For example lets suppose that I have given the functions $f:\mathbb{R}\longrightarrow \mathbb{R}$ and $g:\mathbb{R}\longrightarrow \mathbb{R}$. If my relation is $R=\{(x,(y,z))\in \mathbb{R}\times \mathbb{R}^{2}: y=f(x) \wedge z=g(x)\}$ How to prove formally (from a set theoretic stand point) that $R$ is a function. I have a try but I'm not convince:
Suppose we have $(x,(y,z))\in R$ and also $(x,(y',z'))\in R$. Then $y=f(x), z=g(x)$ and also $y'=f(x), z'=g(x)$ by definition. Then $y=y'$ and also $z=z'$. Therefore $(y,z)=(y',z')$.
In a more general case if I have the functions $f_1, f_2,...,f_n:\mathbb{R}^m\longrightarrow \mathbb{R}$ and I define the function $f:\mathbb{R}^{m}\longrightarrow\mathbb{R}^{n}$ such that $f(x_{1},x_{2},...,x_{m})=(f_{1}(y),f_{2}(y),...,f_{n}(y))$ with $y=(x_1,x_2,...,x_m)$, how to justify that it is indeed a function? If my try is fine I suppose this can be done by induction. Any comment will be appreciated.
| Let's prove a vastly more general statement.
Let $I$ be an index set, and for every $i\in I$ let $f_i$ be a function. Then the relation $F$ defined on $X=\bigcap_{i\in I}\operatorname{dom}(f_i)$ by $F=\{\langle x,\langle f_i(x)\mid i\in I\rangle\rangle\mid x\in X\}$ is a function.
Proof. Let $x\in X$, and suppose that $\langle x,\langle y_i\mid i\in I\rangle\rangle,\langle x,\langle z_i\mid i\in I\rangle\rangle\in F$, then for every $i\in I$ we have $y_i=f_i(x)=z_i$, therefore the sequences are equal and $F$ is a function. $\square$
Now you care about the case where $\operatorname{dom}(f_i)$ are all equal, so the intersection creating $X$ is trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Which of the following groups is not cyclic?
Which of the following groups is not cyclic?
(a) $G_1 = \{2, 4,6,8 \}$ w.r.t. $\odot$
(b) $G_2 = \{0,1, 2,3 \}$ w.r.t. $\oplus$ (binary XOR)
(c) $G_3 =$ Group of symmetries of a rectangle w.r.t. $\circ$ (composition)
(d) $G_4 =$ $4$th roots of unity w.r.t. $\cdot$ (multiplication)
Can anyone explain this question to me?
| Hint: For a group to be cyclic, there must be an element $a$ so that all the elements can be expressed as $a^n$, each for a different $n$. The terminology comes because this is the structure of $\Bbb Z/n\Bbb Z$, where $a=1$ works (and often others). I can't see what the operator is in your first example; it is some sort of unicode. For b, try each element $\oplus$ itself. What do you get? For c, there are two different types of symmetry: those that turn the rectangle upside down and those that do not.
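A brute-force check in Python makes the hint concrete (a sketch with helper names of my own; each group is modeled as a set of elements with an explicit binary operation):

```python
def is_cyclic(elements, op):
    """A finite group (elements, op) is cyclic iff some element
    generates the whole group by repeated application of op."""
    n = len(elements)
    for a in elements:
        generated = set()
        x = a
        for _ in range(n):      # collect the powers a, a*a, a*a*a, ...
            generated.add(x)
            x = op(x, a)
        if len(generated) == n:
            return True
    return False

# G2 = {0,1,2,3} under XOR: every element is its own inverse,
# so no element has order 4 (this is the Klein four-group).
print(is_cyclic([0, 1, 2, 3], lambda a, b: a ^ b))        # False

# For contrast: the 4th roots of unity, modeled as exponents of i,
# i.e. {0,1,2,3} under addition mod 4, are generated by 1.
print(is_cyclic([0, 1, 2, 3], lambda a, b: (a + b) % 4))  # True
```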
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Need help in proving that $\frac{\sin\theta - \cos\theta + 1}{\sin\theta + \cos\theta - 1} = \frac 1{\sec\theta - \tan\theta}$ We need to prove that $$\dfrac{\sin\theta - \cos\theta + 1}{\sin\theta + \cos\theta - 1} = \frac 1{\sec\theta - \tan\theta}$$
I have tried and it gets confusing.
| $$\frac{\sin\theta-\cos\theta+1}{\sin\theta+\cos\theta-1}$$
$$=\frac{\tan\theta-1+\sec\theta}{\tan\theta+1-\sec\theta}\ (\text{dividing the numerator and the denominator by }\cos\theta)$$
$$=\frac{\tan\theta-1+\sec\theta}{\tan\theta-\sec\theta+(\sec^2\theta-\tan^2\theta)} (\text{ putting } 1=\sec^2\theta-\tan^2\theta) $$
$$=\frac{\tan\theta+\sec\theta-1}{\tan\theta-\sec\theta-(\tan\theta-\sec\theta)(\tan\theta+\sec\theta)}$$
$$=\frac{\tan\theta+\sec\theta-1}{-(\tan\theta-\sec\theta)(\tan\theta+\sec\theta-1)}$$
$$=\frac1{\sec\theta-\tan\theta}$$
Alternatively, using the tangent half-angle (Weierstrass) substitution $\tan\frac\theta2=t,$
$$\text{ LHS= }\frac{\sin\theta-\cos\theta+1}{\sin\theta+\cos\theta-1}=\frac{\frac{2t}{1+t^2}-\frac{1-t^2}{1+t^2}+1}{\frac{2t}{1+t^2}+\frac{1-t^2}{1+t^2}-1}$$
$$=\frac{2t-(1-t^2)+1+t^2}{2t+(1-t^2)-(1+t^2)} =\frac{2t+2t^2}{2t-2t^2}=\frac{1+t}{1-t},\ \text{ assuming }t\ne0$$
$$\text{ RHS= }\frac1{\sec\theta-\tan\theta}=\frac1{\frac{1+t^2}{1-t^2}-\frac{2t}{1-t^2}}=\frac{1-t^2}{(1-t)^2}=\frac{1+t}{1-t},\ \text{ assuming }t\ne1$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/426981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 3
} |
Drawing points on Argand plane
The points $5 + 5i$, $1− 3i$, $− 4 + 2i$ and $−2 + 6i$ in the
Argand plane are:
(a) Collinear
(b) Concyclic
(c) The vertices of a parallelogram
(d) The vertices of a square
So when I drew the diagram, I got a rectangle in the 1st and 2nd quadrants. So, are they the vertices of a parallelogram? I am not sure!
| It's not collinear, not a square, and not a parallelogram. Therefore, the points must be concyclic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Question on Wolstenholme's theorem In one of T. Apostol's student textbooks on analytic number theory (i.e., Introduction to Analytic Number Theory, T. Apostol, Springer, 1976) Wolstenholme's theorem is stated (Apostol, Chapt. 5, page 116) as follows (more or less):
For any prime ($p \geq 5$),
\begin{equation}
((p - 1)!)(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p - 1}) \equiv 0 \pmod {p^2}.
\end{equation}
Suppose one was to multiply through both sides of the congruence with the inverse of $(p - 1)!$ modulo $p^{2}$. One gets
\begin{equation}
1\cdot(1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{p - 1}) \equiv 0 \pmod {p^2}.
\end{equation}
Does this congruence "make sense," since one has a finite sum of fractions on the left hand side, not a finite sum of integers (Cf. Apostol, Exercise 11, page 127)?
By the expression "inverse of $(p - 1)!$ modulo $p^{2}$", I mean multiplying through by $t$, so that
$$((p - 1)!)t \equiv 1 \pmod{p^2}.$$
Just asking. After all I do not know everything.... :-)
| In modular arithmetic, you should interpret Egyptian fractions (of the form $\frac 1a$) as the modular inverse of $a \bmod p^2$, in which case this makes perfect sense.
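Both readings of the congruence are easy to verify numerically; here is a minimal Python sketch (the helper name is mine), checking the integer form and the modular-inverse form for small primes:

```python
from math import factorial

def harmonic_times_factorial(p):
    """(p-1)! * (1 + 1/2 + ... + 1/(p-1)) as an exact integer."""
    fact = factorial(p - 1)
    return sum(fact // k for k in range(1, p))

# Integer form of Wolstenholme: divisible by p^2 for primes p >= 5.
for p in [5, 7, 11, 13, 17]:
    assert harmonic_times_factorial(p) % p**2 == 0

# Inverse form: interpret 1/k as the modular inverse of k mod p^2;
# the sum of inverses is then 0 mod p^2.
p = 7
print(sum(pow(k, -1, p**2) for k in range(1, p)) % p**2)  # 0
```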
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
prime factors of numbers formed by primorials Let $p,q$ be primes with $p \leq q$. The product $2\cdot3\cdot\dots\cdot p$ is denoted with $p\#$, the
product $2\cdot3\cdot\dots\cdot q$ is denoted with $q\#$ (primorials).
Now $z(p,q)$ is defined by $z(p,q) = p\# + \dfrac{q\#}{p\#}$
For example $z(11,17) = 2\cdot3\cdot5\cdot7\cdot11 + 13\cdot17$
What can be said about the prime factors of $z(p,q)$ besides the simple fact that
they must be greater than $q$?
| The number $z(p,q)$ is coprime to any of these primes.
It is more likely to be prime, especially for small values, but not necessarily so. For example, $z(7,11) = 13\cdot17$ is the smallest composite example, but one fairly easily finds composites (like $z(11,13)$, $z(5,19)$, $z(3,11)$, $z(13,19)$, $z(13,23)$, and $z(13,37)$ to $z(13,59)$ inclusive). For $z(17,p)$, it is composite for all $p$ between $23$ and $113$ inclusive; only $19$ and $127$ yielded primes.
There is nothing exciting, like proper powers, for values of $q<120$.
There does not seem to be any particular pattern to the primes. Some are small, and some are fairly large.
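These observations are easy to reproduce by brute force; a minimal Python sketch (helper names are mine):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, ok in enumerate(sieve) if ok]

def z(p, q):
    """z(p,q) = p# + q#/p#: the primorial of p plus the product of
    the primes in (p, q]."""
    p_sharp, tail = 1, 1
    for r in primes_up_to(q):
        if r <= p:
            p_sharp *= r
        else:
            tail *= r
    return p_sharp + tail

def factorize(n):
    """Trial-division factorization, returning prime factors in order."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(z(7, 11), factorize(z(7, 11)))    # 221 = 13 * 17, composite
print(z(11, 17), factorize(z(11, 17)))  # 2531, which is prime
```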
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does $\sum_{n=1}^{\infty} \frac{\sin(n)}{n}$ converge conditionally? I think that the series $$\sum_{n=1}^{\infty} \dfrac{\sin(n)}{n}$$ converges conditionally, but I'm not able to prove it. Any suggestions?
| Using Fourier series calculations, it follows that
$$
\sum_{n=1}^{\infty}\frac{\sin(n x)}{n}=\frac{\pi-x}{2}
$$
for every $x\in(0,2\pi)$. Your sum is $\frac{\pi-1}{2}$.
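As a quick numerical check (the partial sums converge slowly, roughly like $1/N$, so a large cutoff is needed):

```python
from math import sin, pi

def partial_sum(N):
    """Partial sum of sum_{n=1}^N sin(n)/n."""
    return sum(sin(n) / n for n in range(1, N + 1))

target = (pi - 1) / 2            # about 1.07079...
approx = partial_sum(200_000)
print(approx, target, abs(approx - target))
```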
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Looking for an easy lightning introduction to Hilbert spaces and Banach spaces I'm co-organizing a reading seminar on Higson and Roe's Analytic K-homology. Most participants are graduate students and faculty, but there are a number of undergraduates who might like to participate, and who have never taken a course in functional analysis. They are strong students though, and they do have decent analysis, linear algebra, point-set topology, algebraic topology...
Question: Could anyone here recommend a very soft, easy, hand-wavy reference I could recommend to these undergraduates, which covers and motivates basic definitions and results of Hilbert spaces, Banach spaces, Banach algebras, Gelfand transform, and functional calculus?
It doesn't need to be rigorous at all; it just needs to introduce and to motivate the main definitions and results so that they can "black box" the prerequisites and get something out of the reading seminar. They can go back and do things properly when they take a functional analysis course next year or so.
| I don't know how useful this will be, but I have some lecture notes that motivate the last three things on your list by first reinterpreting the finite dimensional spectral theorem in terms of the functional calculus. (There is also a section on the spectral theorem for compact operators, but this is just pulled from Zimmer's Essential Results of Functional Analysis.) I gave these lectures at the end of an undergraduate course on functional analysis, though, so they assume familiarity with Banach and Hilbert spaces.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Why is $ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} = \frac{1}{e^4}$ According to WolframAlpha, the limit of
$$ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} = \frac{1}{e^4}$$
and I wonder how this result is obtained.
My approach would be to divide both numerator and denominator by $n$, yielding
$$ \lim_{n \to \infty} \left(\frac{1-\frac{1}{n}}{1 + \frac{1}{n}} \right)^{2n+4} $$
As $ \frac{1}{n} \to 0 $ as $ n \to \infty$, what remains is
$$ \lim_{n \to \infty} \left(\frac{1-0}{1 + 0} \right)^{2n+4} = 1 $$
What's wrong with my approach?
| $$ \lim_{n \to \infty} \left(\frac{n-1}{n+1} \right)^{2n+4} $$
$$ =\lim_{n \to \infty} \left(\left(1+\frac{(-2)}{n+1} \right)^{\frac{n+1}{-2}}\right)^{\frac{-2(2n+4)}{n+1}}$$
$$ = \left(\lim_{n \to \infty}\left(1+\frac{(-2)}{n+1} \right)^{\frac{n+1}{-2}}\right)^{\lim_{n \to \infty}\left(\frac{-4-\frac8n}{1+\frac1n}\right)}$$
$$=(e)^{-4}\text{ as } n \to \infty, \frac{n+1}{-2}\to-\infty\text{ and } \lim_{m\to\infty}\left(1+\frac1m\right)^m=e$$
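A quick numerical check of the answer (a minimal sketch):

```python
from math import exp

def term(n):
    """The n-th term ((n-1)/(n+1))^(2n+4)."""
    return ((n - 1) / (n + 1)) ** (2 * n + 4)

for n in [10, 100, 10_000, 1_000_000]:
    print(n, term(n))
print(exp(-4))   # the sequence settles near e^{-4} ~ 0.0183156
```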
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Approximating hypergeometric distribution with Poisson I am currently trying to show that the hypergeometric distribution converges to the Poisson distribution.
$$
\lim_{n,r,s \to \infty, \frac{n \cdot r}{r+s} \to \lambda} \frac{\binom{r}{k} \binom{s}{n-k}}{\binom{r+s}{n}} = \frac{\lambda^k}{k!}e^{-\lambda}
$$
I know how to show, for specific values, that the hypergeometric distribution converges to the binomial distribution, and from there we proved in our script that the binomial distribution converges to the Poisson distribution for specific values.
Now the question is: can I show the approximation directly via the limit above? I came from the limit above to
$$
\lim_{n,r,s \to \infty, \frac{n \cdot r}{r+s} \to \lambda} \frac{\binom{r}{k} \binom{s}{n-k}}{\binom{r+s}{n}} = \cdots = \frac{\lambda^k}{k!}\frac{\frac{(r-1)!}{(r-k)!}\binom{s}{n-k}}{\binom{r+s-1}{n-1}}\left(\frac{1}{\lambda}\right)^{k-1}
$$
But how can I show the following?
$$
\frac{\frac{(r-1)!}{(r-k)!}\binom{s}{n-k}}{\binom{r+s-1}{n-1}}\left(\frac{1}{\lambda}\right)^{k-1}
= e^{-\lambda}
$$
| This is the simplest proof I've been able to find.
Just by rearranging factorials, we can rewrite the hypergeometric probability function as
$$ \mathrm{Prob}(X=x) = \frac{\binom{M}{x}\binom{N-M}{K-x}}{\binom{N}{K}} = \frac{1}{x!} \cdot \dfrac{M^{(x)} \, K^{(x)}}{N^{(x)}} \cdot \dfrac{(N-K)^{(M-x)}}{(N-x)^{(M-x)}}, $$
where $a^{(b)}$ is the falling power $a(a-1)\cdots(a-b+1)$.
Since $x$ is fixed,
\begin{align*}
\dfrac{M^{(x)} \, K^{(x)}}{N^{(x)}}
&= \prod_{j=0}^{x-1} \dfrac{(M-j) \cdot (K-j)}{(N-j)} \\
&= \prod_{j=0}^{x-1} \left( \dfrac{MK}{N} \right) \cdot \dfrac{(1-j/M) \cdot (1-j/K)}{(1-j/N)} \\
&= \left( \dfrac{MK}{N} \right) ^x \; \prod_{j=0}^{x-1} \dfrac{(1-j/M) \cdot (1-j/K)}{(1-j/N)},
\end{align*}
which $\to \lambda^x$ as $N$, $K$ and $M$ $\to \infty$ with $\frac{MK}{N} = \lambda$.
Let's replace $N-x$, $K-x$ and $M-x$ by new variables $n$, $k$ and $m$ for simplicity. Since $x$ is fixed, as $N,K,M \to \infty$ with $KM/N \to \lambda$, so too $n,k,m \to \infty$ with $nk/m \to \lambda$. Next we write
$$ A = \dfrac{(N-K)^{(M-x)}}{(N-x)^{(M-x)}} = \dfrac{(n-k)^{(m)} }{(n)^{(m)}} = \prod_{j=0}^{m-1} \left( \dfrac{n-j-k}{n-j} \right)= \prod_{j=0}^{m-1} \left( 1 - \dfrac{k}{n-j} \right)$$
and take logs:
$$ \ln \, A = \sum_{j=0}^{m-1} \ln \left( 1 - \dfrac{k}{n-j} \right). $$
Since the bracketed quantity is an increasing function of $j$ we have
$$ \sum_{j=0}^{m-1} \ln \left( 1 - \dfrac{k}{n} \right) \le \ln \, A \le \sum_{j=0}^{m-1} \ln \left( 1 - \dfrac{k}{n-m+1} \right), $$
or
$$ m \, \ln \left( 1 - \dfrac{k}{n} \right) \le \ln \, A \le m \, \ln \left( 1 - \dfrac{k}{n-m+1} \right). $$
But $\ln (1-x) < -x$ for $0 < x < 1$, so
$$ m \, \ln \left( 1 - \dfrac{k}{n} \right) \le \ln \, A < -m \, \left( \dfrac{k}{n-m+1} \right), $$
and dividing through by $km/n$ gives
$$ \frac{n}{k} \, \ln \left( 1 - \dfrac{k}{n} \right) \le \dfrac{\ln \, A}{km/n} < - \, \left( \dfrac{n}{n-m+1} \right) = - \, \left( \dfrac{1}{1-m/n+1/n} \right). $$
Finally, we let $k$, $m$ and $n$ tend to infinity in such a way that $km/n \to \lambda$. Since both $k/n \to 0$ and $m/n \to 0$, both the left and right bounds $\to -1$. (The left bound follows from $\lim_{n \to \infty} (1-1/n)^n = e^{-1}$, which is a famous limit in calculus.) So by the Squeeze Theorem we have $\ln \, A \to -\lambda$, and thus $A \to e^{-\lambda}$. Putting all this together gives the result.
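The convergence can also be seen numerically; a rough Python sketch (helper names are mine), growing $N$, $K$, $M$ with $K=M\approx\sqrt{\lambda N}$ so that $MK/N\to\lambda=2$:

```python
from math import comb, exp, factorial

def hypergeom_pmf(x, N, K, M):
    """P(X = x) when drawing K items from N of which M are marked."""
    return comb(M, x) * comb(N - M, K - x) / comb(N, K)

def poisson_pmf(x, lam):
    return lam ** x * exp(-lam) / factorial(x)

lam = 2
for N in [10**3, 10**5, 10**6]:
    K = M = round((lam * N) ** 0.5)   # keeps M*K/N near lam
    diff = max(abs(hypergeom_pmf(x, N, K, M) - poisson_pmf(x, lam))
               for x in range(5))
    print(N, diff)   # the maximal pointwise gap shrinks as N grows
```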
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Cheating in multiple choice tests. What is the probability that in a multiple choice test exam session, where $k$ people took the test (that contains $n$ questions with 2 possible answers each and where exactly one answer to each question is the correct one), cheating has occurred, i.e. there exist at least two tests that are identical?
My solution is $$1-\frac{\binom{2^{n}}{k}}{\binom{2^{n}+k-1}{k}},$$ since there are $2^n$ possible tests and the number of multisets with $k$ elements chosen from the $2^n$ tests counts all possible "exam sessions". For the numerator I counted only those exam sessions where cheating didn't occur, i.e. all tests are different from each other. Since what I'm looking for is the complement of this set, I get the above.
| It is not reasonable to consider two identical tests as evidence of cheating. And people do not choose answers at random. So let us reword the problem as follows.
We have $k$ people who each toss a fair coin $n$ times. What is the probability that at least two of them will get identical sequences of heads and tails?
The required probability is $1$ minus the probability that the sequences are all different. We go after that probability.
There are $2^n$ possible sequences of length $n$ made up of H and/or T. In order to make typing easier, and also not to get confused, let's call this number $N$. So each of our $k$ people independently produces one of these $N$ sequences.
Write down the sequences chosen by the various people, listed in order of student ID. There are $N^k$ equally likely possibilities.
Now we count the number of ways to obtain a list of $k$ distinct sequences. This is $N(N-1)(N-2)\cdots (N-k+1)$. So the probability the sequences are all different is
$$\frac{N(N-1)(N-2) \cdots (N-k+1)}{N^k}.\tag{A}$$
So the answer to the original problem is $1$ minus the expression in (A).
The numerator can be written in various other ways, for example as $k!\dbinom{N}{k}$.
Remark: I prefer to think of this in elementary Birthday Problem terms. The probability that the sequence obtained by Student $2$ is different from the one obtained by Student $1$ is $\frac{N-1}{N}$. Given that fact, the probability that the sequence obtained by Student $3$ is different from both of the first two sequences is $\frac{N-2}{N}$. And so on. So the probability in (A) can be thought of as
$$\frac{N-1}{N}\cdot\frac{N-2}{N}\cdots \frac{N-k+1}{N}.$$
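Expression (A) is cheap to evaluate; a minimal Python sketch (the function name is mine) shows the birthday-style growth of the collision probability:

```python
def prob_repeat(k, n):
    """P(at least two of k independent uniform answer sheets over
    2**n equally likely answer patterns coincide)."""
    N = 2 ** n
    p_all_distinct = 1.0
    for j in range(k):
        p_all_distinct *= (N - j) / N
    return 1 - p_all_distinct

# With n = 10 questions (1024 patterns), 40 students already give
# a better-than-even chance of some coincidence.
print(prob_repeat(40, 10))
```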
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show that the interior of a convex set is convex Question: Let $P\subseteq \mathbb{R}^n$ be a convex set. Show that $\text{int}(P)$ is a convex set.
I know that a point $x$ is said to be an interior point of the set $P$ if there is an open ball centered at $x$ that is contained entirely in $P$. The set of all interior points of $P$ is denoted by $\text{int}(P)$.
Also, to say that a set $P$ is convex means that if $x,y \in P$ then $tx+(1-t)y \in P$ for all $t \in (0,1)$.
How to go about the above proof?
| I'll give a proof based on the following picture:
Suppose that $x$ and $y$ are interior points of a convex set $\Omega \subset \mathbb R^n$. Let $0 < \theta < 1$. We wish to show that the point $z = \theta x + (1 - \theta) y$ is in the interior of $\Omega$.
There exists an open ball $A$ centered at $x$ such that $A \subset \Omega$. Let $r$ be the radius of $A$, and let $B$ be the open ball of radius $\theta r$ centered at $z$.
Claim: $B \subset \Omega$.
Proof: Let $\hat z \in B$. There exists a point $\hat x \in \mathbb R^n$ such that
$$
\hat z = \theta \hat x + (1 - \theta) y.
$$
I will show that $\hat x \in A$. It will follow, by the convexity of $\Omega$, that $\hat z \in \Omega$. Since $\hat z$ is an arbitrary point in $B$, this will show that $B \subset \Omega$.
To complete the proof, note that
\begin{align}
\| \theta(\hat x - x)\| &= \| \theta \hat x - \theta x \| \\
&= \| \hat z - (1 - \theta) y - (z - (1 - \theta) y) \| \\
&= \| \hat z - z \| \\
&< \theta r,
\end{align}
where the final inequality holds because $\hat z$ lies in the open ball $B$ of radius $\theta r$ centered at $z$. It follows that $\| \hat x - x \| < r$, which shows that $\hat x \in A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
Clues for $\lim_{x\to\infty}\sum_{k=1}^{\infty} \frac{(-1)^{k+1} (2^k-1)x^k}{k k!}$ Some clues for this question?
$$\lim_{x\to\infty}\sum_{k=1}^{\infty} \frac{(-1)^{k+1} (2^k-1)x^k}{k k!}$$
| Take the derivative and use the exponential series. Thus if the sum is $f(x)$, then
$$x f'(x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{2^n-1}{n!} x^n = e^{-x}-e^{-2 x}$$
Then
$$f(x) = \int_0^x dt \frac{e^{-t}-e^{-2 t}}{t}$$
(because you know $f(0)=0$). Thus, using Fubini's theorem, one can show that
$$\lim_{x \to\infty} f(x) = \int_0^{\infty} dt \frac{e^{-t}-e^{-2 t}}{t} = \log{2}$$
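Checking this limit in floating point is delicate: for large $x$ the alternating terms grow enormous before the factorials win, so the partial sums cancel catastrophically. A sketch using exact rational arithmetic instead (the helper name is mine):

```python
from fractions import Fraction
from math import factorial, log

def f(x, terms=400):
    """Partial sum of sum_{k>=1} (-1)^{k+1} (2^k - 1) x^k / (k * k!),
    computed exactly with Fractions to avoid cancellation error."""
    s = Fraction(0)
    for k in range(1, terms + 1):
        s += Fraction((-1) ** (k + 1) * (2 ** k - 1) * x ** k,
                      k * factorial(k))
    return float(s)

for x in [5, 10, 30]:
    print(x, f(x))
print(log(2))   # the limit, log 2 ~ 0.693147
```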
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Finite union of compact sets is compact Let $(X,d)$ be a metric space and $Y_1,\ldots,Y_n \subseteq X$ compact subsets. Then I want to show that $Y:=\bigcup_i Y_i$ is compact only using the definition of a compact set.
My attempt: Let $(y_n)$ be a sequence in $Y$. If $\exists 1 \leq i \leq n\; \exists N \in \mathbb N \; \forall j \geq N\; y_j \in Y_i$ then $(y_n)$ has a convergent subsequence because $Y_i$ is compact. Otherwise,
$$
\forall 1 \leq i \leq n \; \forall N \in \mathbb N\; \exists j \geq N\; y_j \notin Y_i
$$ Assuming for the moment that $n = 2$ and using induction later we have that
$$
\forall N \in \mathbb N \; \exists j \geq N \; y_j \in Y_1 \backslash Y_2
$$ With this we can make a subsequence $\bigl(y_{n_j}\bigr)_{j=0}^\infty$ in $Y_1 \backslash Y_2$. This sequence lies in $Y_1$ and thus has a convergent subsequence. This convergent subsequence of the subsequence will then also be a convergent subsequence of the original sequence. Now we may use induction on $n$.
| Let $\mathcal{O}$ be an open cover of $Y$. Since $\mathcal{O}$ is an open cover of each $Y_i$, there exists a finite subcover $\mathcal{O}_i \subset \mathcal{O}$ that covers each $Y_i$. Then $\bigcup_{i=1}^n \mathcal{O}_i \subset \mathcal{O}$ is a finite subcover. That's it; no need to deal with sequences.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Determine whether $F(x)= 5x+10$ is $O(x^2)$ Please, can someone here help me to understand the Big-O notation in discrete mathematics?
Determine whether $F(x)= 5x+10$ is $O(x^2)$
| It is as $x \to \infty$.
Actually, $5x+10 = o(x^2)$ as $x \to \infty$
(little-oh)
since $\lim_{x \to \infty} \frac{5x+10}{x^2} = 0$.
However,
$5x+10$ is not $O(x^2)$ as $x \to 0$,
and $5x$ is not $O(x^2)$ as $x \to 0$,
because
there is no real $c$ such that
$|5x| \le c x^2$ for all $x$ sufficiently close to $0$.
Since $x \to 0$ and $x \to \infty$ are the two common
limits for big-oh notation,
it is important to state which one is meant.
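A quick numerical look at the ratio $(5x+10)/x^2$ in both regimes:

```python
# As x -> infinity the ratio tends to 0, so 5x+10 = O(x^2) there.
for x in [10, 100, 1000, 10000]:
    print(x, (5 * x + 10) / x ** 2)

# As x -> 0 the ratio blows up, so no constant c can work there.
for x in [0.1, 0.01, 0.001]:
    print(x, (5 * x + 10) / x ** 2)
```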
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/427971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
The positive root of the transcendental equation $\ln x-\sqrt{x-1}+1=0$ I numerically solved the transcendental equation
$$\ln x-\sqrt{x-1}+1=0$$
and obtained an approximate value of its positive real root $$x \approx 14.498719188878466465738532142574796767250306535...$$
I wonder if it is possible to express the exact solution in terms of known mathematical constants and elementary or special functions (I am especially interested in those implemented in Mathematica)?
| Yes, it is possible to express this root in terms of special functions implemented in Mathematica.
Start with your equation
$$\ln x-\sqrt{x-1}+1=0,\tag1$$
then take exponents of both sides
$$x\ e^{1-\sqrt{x-1}}=1.\tag2$$
Change the variable
$$z=\sqrt{x-1}-1,\tag3$$
then plug this into $(2)$ and divide both sides by $2$
$$\left(\frac{z^2}2+z+1\right)e^{-z}=\frac12.\tag4$$
Now the left-hand side looks very familiar. Indeed, as it can be seen from DLMF 8.4.8 or the formulae $(2),(3)$ on this MathWorld page, it is a special case (for $a=3$) of the regularized gamma function
$$Q(a,z)=\frac{\Gamma(a,z)}{\Gamma(a)},\tag5$$
implemented in Mathematica as GammaRegularized[a, z].
Its inverse with respect to $z$ is denoted as $Q^{-1}(a,s)$ and implemented in Mathematica as InverseGammaRegularized[a, s]. We can use this function to express the positive real root of the equation $(4)$ in a closed form
$$z=Q^{-1}\left(3,\ \frac12\right).\tag6$$
Finally, using $(3)$ we can express the positive real root of your equation $(1)$ as follows:
$$x=\left(Q^{-1}\left(3,\ \frac12\right)+1\right)^2+1.\tag7$$
The corresponding Mathematica expression is
(InverseGammaRegularized[3, 1/2] + 1)^2 + 1
We can numerically check that substitution of this expression into the left-hand side of the equation $(1)$ indeed yields $0$.
I was not able to express the result in terms of simpler functions (like Lambert W-function).
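Outside Mathematica, the same root can be recovered from formula $(7)$ with a few lines of code, since for $a=3$ the regularized function has the elementary form $Q(3,z)=e^{-z}(1+z+z^2/2)$ and is decreasing on $[0,\infty)$, so its inverse can be found by bisection. A minimal Python sketch:

```python
from math import exp, log, sqrt

def Q3(z):
    """Regularized upper incomplete gamma Q(3, z) = e^{-z}(1 + z + z^2/2)."""
    return exp(-z) * (1 + z + z * z / 2)

# Solve Q(3, z) = 1/2 by bisection; Q3 decreases from 1 to 0.
lo, hi = 0.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    if Q3(mid) > 0.5:
        lo = mid
    else:
        hi = mid
z = (lo + hi) / 2

x = (z + 1) ** 2 + 1
print(x)                          # ~ 14.4987191888...
print(log(x) - sqrt(x - 1) + 1)   # ~ 0, so x solves the equation
```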
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 1,
"answer_id": 0
} |
Boston Celtics vs. LA Lakers: expectation of a series of games?
Boston Celtics & LA Lakers play a series of games. The first team to win 4 games wins the whole series.
*
*The probability of winning or losing each game is equal ($1/2$)
a. What is the expectation of the number of games in the series?
So I defined an indicator: $x_i=1$ if game $i$ was played.
It is clear that $E[x_1]=E[x_2]=E[x_3]=E[x_4]=1$.
for the 5th game, for each team we have 5 different scenarios (W=win, L=lose) and the probability to win is:
*
*W W W L $((\frac12)^3=\frac18)$
*W W L L $((\frac12)^2=\frac14)$
*W L L L $((\frac12)^1=\frac12)$
$\frac12+\frac14+\frac18=\frac78$
We can use the complementary event: if only one game is L in a series of 4 games (so the 5th game will be played): $1-\frac18=\frac78$
The same calculation is for the 6th and 7th game:
6th: W W W L L or W W L L L => $(\frac18+\frac14=\frac38)$
7th: W W W L L L => $(\frac18)$
And the expectation is $E[x]=1+1+1+1+\frac78+\frac38+\frac18=\frac{43}8$
What am I missing here, and how can I fix it?
| If we have the family of all length-7 sequences composed of W and L, we see that each of these sequences represent one of $2^7$ outcomes to our task at hand with equal probability. Then, we see that the number of games played is pre-decided for each such given sequence (e.x.: WWWLWLL and WWWLWWW both result in five games played, while WLWLWLW result in seven games). So, we can find the probability of each event (number of games played) by counting how many sequences fall into each category.
Note: O indicates that this can be either W or L; this occurs when the outcome of the series is already decided and additional games become irrelevant to the total number of games played. Also, let us assume that the W team wins the seven-game series without loss of generality.
Four games: WWWWOOO
$2^3=8$ sequences
Five games: (combination of WWWL)WOO
$2^2{{4}\choose{3}} = 16$ sequences
Six games: (combination of WWWLL)WO
$2{{5}\choose{3}} = 20$ sequences
Seven games: (combination of WWWLLL)W
${{6}\choose{3}} = 20$ sequences
So, the expected value for number of games played is:
$4(\frac{8}{64})+5(\frac{16}{64})+6(\frac{20}{64})+7(\frac{20}{64})=\frac{93}{16}=5.8125$ games
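The same expectation can be confirmed by enumerating all $2^7$ equally likely length-7 outcome strings and truncating each series at the first team to reach 4 wins (a minimal Python sketch):

```python
from fractions import Fraction
from itertools import product

def expected_games():
    """Expected length of a best-of-7 with fair coin-flip games."""
    total = Fraction(0)
    for seq in product("WL", repeat=7):
        wins = losses = games = 0
        for g in seq:
            games += 1
            if g == "W":
                wins += 1
            else:
                losses += 1
            if wins == 4 or losses == 4:   # series decided; stop counting
                break
        total += Fraction(games, 2 ** 7)   # each full string has prob 1/128
    return total

print(expected_games())  # 93/16 = 5.8125
```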
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The notations change as we grow up In school life we were taught that $<$ and $>$ are strict inequalities while $\ge$ and $\le$ aren't. We were also taught that $\subset$ was strict containment but $\subseteq$ wasn't.
My question: Later on, (from my M.Sc. onwards) I noticed that $\subset$ is used for general containment and $\subsetneq$ for strict. The symbol $\subseteq$ wasn't used any longer! We could have simply carried on with the old notations which were analogous to the symbols for inequalities. Why didn't the earlier notations stick on? There has to be a history behind this, I feel. (I could be wrong)
Notations are notations I agree and I am used to the current ones. But I can't reconcile the fact that the earlier notations for subsets (which were more straightforward) were scrapped while $\le$ and $\ge$ continue to be used with the same meaning. So I ask.
| This is very field dependent (and probably depends on the university as well). In my M.Sc. thesis, and in fact anything I write today as a Ph.D. student, I still use $\subseteq$ for inclusion and $\subsetneq$ for proper inclusion. If anything, when teaching freshman intro courses I'll opt for $\subsetneqq$ when talking about proper inclusion.
On the other hand, when I took a basic course in algebraic topology the professor said that we will write $X\setminus x$ when we mean $X\setminus\{x\}$, and promptly apologized to me (the set theory student in the crowd).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
A vector field is a section of $T\mathcal{M}$. By definition, a vector field is a section of $T\mathcal{M}$. I am familiar with the concept of vector field, as well as tangent plane of a manifold.
But such a definition is not intuitive to me at all. Could someone give me some intuition? Thank you very much!
| Remember that a point of the tangent bundle consists of pair $(p,v)$, where $p \in M$ and $v \in T_pM$. We have the projection map $\pi: TM \to M$ which acts by $(p,v) \to p$. A section of $\pi$ is a map $f$ so that $\pi \circ f$ is the identity. So for each $p \in M$, we have to choose an element of $TM$ that projects back down to $p$. So for each $p \in M$ we're forced to choose a pair $(p,v)$ with $v \in T_pM$. This is the same information as choosing a tangent vector in each tangent space, which is the same information as a vector field. If we insist that $f$ is smooth (as we usually do), then we get a smooth vector field.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Hölder- continuous function $f:I \rightarrow \mathbb R$ is said to be Hölder continuous if $\exists \alpha>0$ such that $|f(x)-f(y)| \leq M|x-y|^\alpha$, $ \forall x,y \in I$, $0<\alpha\leq1$. Prove that $f$ Hölder continuous $\Rightarrow$ $f$ uniformly continuous and if $\alpha>1$, then f is constant.
In order to prove that $f$ Hölder continuous $\Rightarrow$ $f$ uniformly continuous, given $\varepsilon>0$ it is enough to take $\delta=(\varepsilon/M)^{1/\alpha}$: if $|x-y|<\delta$ then $|f(x)-f(y)| \leq M|x-y|^\alpha < M\delta^\alpha = \varepsilon$. (Note that $|x-y|^\alpha \leq M^{-1}M|x-y|$ fails in general: the estimate $|x-y|^\alpha\leq|x-y|$ only holds when $|x-y|\geq1$, so Hölder continuity does not imply Lipschitz continuity.)
But how can I prove that if $\alpha >1$, then $f$ is constant?
| Hint:
For $\epsilon=\alpha-1>0$ and all $x\ne y$, you have $\Bigl|{f(x)-f(y)\over x-y}\Bigr|\le M|x-y|^\epsilon$.
Why must $f'(x)$ exist? What is the value of $f'(x)$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
How can I measure the distance between two cities in the map? Well i know that the distance between Moscow and London by km it's about 2,519 km and the distance between Moscow and London in my map by cm it's about 30.81 cm and the Scale for my map is 1 cm = 81.865 km but when i tried to measure the distance between other two cities for example between London and Berlin with my map scale the result was wrong so i think that's because the spherical of earth ???!!
Now i want to know how can i measure the distance between tow cities in the map also how can i know the scale of a map ?
| The calculation is somewhat complex. A simplification is to assume that the Earth is a sphere and finding the great-circle distance. A more complex calculation instead uses an oblate spheroid as a closer approximation.
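For the spherical approximation, the standard tool is the great-circle (haversine) distance; a minimal Python sketch (the city coordinates below are approximate):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance on a sphere of radius R km, via the
    haversine formula; inputs are latitudes/longitudes in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# London (~51.51 N, 0.13 W) to Moscow (~55.76 N, 37.62 E): ~2500 km
print(haversine_km(51.51, -0.13, 55.76, 37.62))
```

A flat map cannot preserve all such distances at once, which is why a single scale factor works poorly over long distances; the error depends on the map projection.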
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Convergence in $L_1$ and Convergence of the Integrals Am I right with the following argument? (I am a bit confused by all those types of convergence.)
Let $f, f_n \in L_1(a,b)$ with $f_n$ converging to $f$ in $L_1$, meaning
$$\lVert f_n-f \rVert_1 = \int_a^b |f_n(x)-f(x)|dx \rightarrow 0 \ , $$
Then the integral $\int_a^b f_n dx$ converges to $\int_a^b f dx$. To show this we look at$$\left| \int_a^b f_n(x) dx - \int_a^b f(x) dx \right | \leq \int_a^b | f_n(x) - f(x)| dx \rightarrow 0 \ .$$
If this is indeed true, is there something similar for the other $L_p(a,b)$ spaces, or is this something special to $L_1(a,b)$?
| Let $f_n \to f$ in $L^p(\Omega)$. Then, we also have that $\|f_n\|_p \to \|f\|_p$. So "something similar" holds.
As for the convergence of $\int f_n$ to $\int f$, this is generally not guaranteed by $L^p$ convergence, unless the measure of the underlying space is finite (like it is in your example).
In that case we have
$$
\left| \int_{\Omega} f_n(x)\, dx - \int_{\Omega} f(x)\,dx \right| = \left| \int_{\Omega} (f_n(x) - f(x))\,dx \right| \leq \int_{\Omega} | f_n(x) - f(x)|\, dx \leq |\Omega|^{\frac1q} \|f_n-f\|_p \rightarrow 0
$$
by Hölder inequality. This is the same inequality that gives you that $L^p(\Omega) \subset L^1(\Omega)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Rees algebra of a monomial ideal User fbakhshi deleted the following question:
Let $R=K[x_1,\ldots,x_n]$ be a polynomial ring over a field $K$ and $I=(f_1,\ldots,f_q)$ a monomial ideal of $R$. If $f_i$ is homogeneous of degree $d\geq 1$ for all $i$, then prove that
$$
R[It]/\mathfrak m R[It]\simeq K[f_1t,\ldots, f_q t]\simeq K[f_1,\ldots,f_q] \text{ (as $K$-algebras).}
$$
$R[It]$ denotes the Rees algebra of $I$ and $\mathfrak m=(x_1,\ldots,x_n)$.
| $R[It]/\mathfrak mR[It]$ is $\oplus_{n\geq 0}{I^n/\mathfrak mI^n}.$ Let $\phi$ be any homogeneous polynomial of degree $l$. Consider $I_l$ to be the $k$-vector space generated by all $\phi(f_1,\ldots,f_q).$ Then $k[f_1,\ldots,f_q]=\oplus_{l\geq 0}{I_l}.$ Now $\dim_{k}{I_l}=\dim_{k}{I^l/\mathfrak mI^l}.$ Hence $k[f_1,\ldots,f_q]\simeq {R[It]/\mathfrak mR[It]}.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Proving monotonically decreasing and finding the limit. Let $a,b$ be positive real numbers. Set $x_0 =a$ and $x_{n+1}= \frac{1}{x_n^{-1}+b}$ for $n\geq 0$.
(a) Prove that $x_n$ is monotone decreasing.
(b) Prove that the limit exists and find it.
Any help? I don't know where to start.
| To prove the limit exists, use the fact that every decreasing sequence that is bounded below is convergent. To find the limit, just assume $ \lim_{n\to \infty} x_n = x = \lim_{n\to \infty} x_{n+1} $ and solve the following equation for $x$:
$$ x=\frac{1}{1/x+b} .$$
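As a quick numeric illustration (not a proof; the values of $a$ and $b$ below are arbitrary), one can iterate the recurrence and watch the sequence decrease toward $0$, the unique nonnegative solution of $x = x/(1+bx)$ when $b>0$:

```python
def iterate(a, b, steps):
    """Iterate x_{n+1} = 1/(1/x_n + b), starting from x_0 = a."""
    x = a
    seq = [x]
    for _ in range(steps):
        x = 1.0 / (1.0 / x + b)   # equivalently x / (1 + b*x)
        seq.append(x)
    return seq

seq = iterate(a=2.0, b=0.5, steps=200)
assert all(s > t for s, t in zip(seq, seq[1:]))  # strictly decreasing
assert seq[-1] < 1e-2                            # heading toward the limit 0
```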
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
If there is a continuous function between linear continua, then this function has a fixed point? Let $f:X\to X$ be a continuous map and $X$ be a linear continuum. Is it true that $f$ has a fixed point?
I think the answer is "yes" and here is my proof:
Assume to the contrary that for any $x\in X$, either $f(x)<x$ or $f(x)>x$. Then, $A=\{x: f(x)<x\}$ and $B=\{x: f(x)>x\}$ are disjoint and their union gives $X$. Now if we can show that both $A$ and $B$ are open we obtain a contradiction because $X$ is connected.
How can we show that $A$ and $B$ are open in $X$?
| The function $f(x)=x+1$ is a counterexample. Here both sets $A$ and $B$ are open, but one of them is empty :-)
The Brouwer fixed point theorem asserts that the closed ball has the property you are looking for: every continuous self-map will have a fixed point. But the proof requires tools well beyond the general topological arguments you outlined. The most straightforward proof passes via relative homology or homotopy, and exploits the nontriviality of certain homology (resp. homotopy) classes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/428932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $d^n(x^n)/dx^n = n!$ by induction I need to prove that $d^n(x^n)/dx^n = n!$ by induction.
Any help?
| Hint: Are you familiar with proofs by induction? Well, the induction step could be written as $$d^{n+1}(x^{n+1}) / dx^{n+1} = d^n \left(\frac{d(x^{n+1})} {dx}\right) /dx^n $$
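Not a substitute for the induction proof, but the claim is easy to sanity-check by machine: represent a polynomial by its coefficient list and differentiate $n$ times.

```python
import math

def diff(coeffs):
    """Derivative of sum(c_k * x^k), given as a coefficient list [c_0, c_1, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def nth_derivative_of_xn(n):
    """Differentiate the polynomial x^n exactly n times."""
    coeffs = [0] * n + [1]        # coefficient list of x^n
    for _ in range(n):
        coeffs = diff(coeffs)
    return coeffs

# d^n(x^n)/dx^n should be the constant polynomial n!
for n in range(8):
    assert nth_derivative_of_xn(n) == [math.factorial(n)]
```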
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How To Calculate a Weighted Payout Table I am looking to see if there is some sort of formula I can use to calculate weighted payout tables. I am looking for something similar to the PGA payout distribution, but the problem is I want my payout table to be flexible to accommodate a variable or known number of participants.
As in golf, the payout distribution goes to 70 players. So that payout distribution, while weighted, is pretty much constant from tourney to tourney.
With my calculation, I want the weighting to be flexible by having a variable as the denominator for the payout pool.
In other words, I would like the formula to handle 10 participants, or 18 participants, or 31 or 92, etc.
Let me know if there is some sort of mathematical payout weighed formula I could use.
Thanks.
| There are lots of them. You haven't given enough information to select just one. A simple one would be to pick $n$ as the number of players that will be paid and $p$ the fraction that the prize will reduce from one to the next. The winner gets $1$ (times the top prize), second place gets $p$, third $p^2$ and so on. The sum of all this is $\frac {1-p^n}{1-p}$, so if the winner gets $f$ the total purse is $f\frac {1-p^n}{1-p}$. Pick $p$, and your total purse, and you can determine each prize as $f, fp, fp^2 \ldots fp^{n-1}$
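A sketch of this geometric scheme in code (the purse, $n$, and ratio $p$ below are made-up examples; names follow the answer):

```python
def payouts(purse, n, p):
    """Geometric payout table: prizes f, f*p, ..., f*p^(n-1) summing to the purse."""
    # winner's share f, from the geometric sum f*(1 - p^n)/(1 - p) = purse
    f = purse * (1 - p) / (1 - p ** n)
    return [f * p ** i for i in range(n)]

table = payouts(purse=1000.0, n=10, p=0.8)
assert abs(sum(table) - 1000.0) < 1e-9               # whole purse is distributed
assert all(a > b for a, b in zip(table, table[1:]))  # strictly decreasing prizes
```

The same function works for any number of participants: changing `n` rescales every prize so the total still matches the purse.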
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $x\not\leq y$, then is $x>y$, or $x\geq y$? I'm currently reading about surreal numbers from here.
At multiple points in this paper, the author has stated that if $x\not\leq y$, then $x\geq y$.
Shouldn't the relation be "if $x\not\leq y$, then $x>y$"?
Hasn't the possibility of $x=y$ already been negated when we said $x\not\leq y$?
Thanks in advance.
| You are correct, if we were speaking of the $\leq/\geq$ relations we know and love, as standard ordering relations on the reals: the negation of $x \leq y$ is exactly $x > y$, and that would be the correct assertion if we were talking about a "trichotomous" ordering, where for any two real numbers one and only one of the following holds: $x\lt y$, $x = y$, or $x>y$.
But your text is not wrong that $x \nleq y \implies x\geq y$ (that is, the right-hand side is implied by the left-hand side, and this would be a valid implication even in the standard real numbers). And it seems your text is using strictly $\leq$ and $\geq$, so that for any two numbers $x, y$, we have one and only one of the following relations to consider: $x \geq y$ or $x \leq y$; these relations do not necessarily have the same properties we know and love with respect to their standard meanings on the reals.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Hartshorne III 9.3 why do we need irreducibility and equidimensionality? We are trying to prove:
Corollary 9.6: Let $f\colon X \to Y$ be a flat morphism of schemes of finite type over a field $k$, and assume that $Y$ is irreducible. Then the following are equivalent:
(i) every irreducible component of $X$ has dimension equal to $\dim Y + n$;
(ii) for any point $y \in Y$ (closed or not), every irreducible component of the fibre $X_y$ has dimension $n$.
(i) $\Rightarrow$ (ii)
Hartshorne argues that since $Y$ is irreducible and $X$ is equidimensional and both are of finite type over $k$, we have
$$\dim_x X = \dim X - \dim \operatorname{cl}(\{x\})$$
$$\dim_y X = \dim Y - \dim \operatorname{cl}(\{y\})$$
Hartshorne makes a reference to II Ex 3.20, where one should prove several equalities for an integral scheme of finite type over a field $k$. We have that $Y$ is irreducible, so we only need it to be reduced, and then its corresponding equality will be justified. But how do we get reducedness then? And what about $X$?
| I was confused by this as well. The desired equalities follow from the general statement
Let $X$ be a scheme of finite type over a field $k$ and let $x\in X$. Then $$\dim \mathcal{O}_x+\dim \{x\}^-=\sup_{x\in V \text{ irreducible component}} \dim V$$
where the sup on the right is taken over all irreducible components of $X$ containing $x$.
I posted a proof of this as an answer to my own question here: Dimension of local rings on scheme of finite type over a field.
The proof uses the special case from exercise II 3.20 in Hartshorne.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Polynomials Question: Proving $a=b=c$. Question:
Let $P_1(x)=ax^2-bx-c, P_2(x)=bx^2-cx-a \text{ and } P_3(x)=cx^2-ax-b$
, where $a,b,c$ are nonzero reals. There exists a real $\alpha$ such
that $P_1(\alpha)=P_2(\alpha)=P_3(\alpha)$. Prove that $a=b=c$.
The question seems pretty easy for people who know some calculus. Since the question is from a contest, no calculus can be used. I have a solution which is a bit long (no calculus involved); I'm looking for the simplest way to solve this, using more direct facts like $a-b \mid P(a)-P(b)$.
| Hint: if $a=b=c$ then all three polynomials are equal. A useful trick to show that polynomials are equal is the following: if a polynomial $Q$ of degree $n$ (like $P_1-P_2$) has $n+1$ distinct roots (points $\beta$ such that $Q(\beta)=0$) then $Q$ is the zero polynomial. In particular, if a quadratic has three zeroes, then it must be identically zero. It follows that any two quadratics which agree at three distinct points must be identical. (So you should try to construct a quadratic from $P_1,P_2,P_3$ that has three distinct zeroes and somehow conclude from that that $a=b=c$.)
This result can be proved using the factor theorem, and requires no calculus (indeed, like the result of your question, it holds in polynomial rings where analysis can't be developed in a particularly meaningful sense, so any proof using calculus is rather unsatisfactory).
Disclaimer: I haven't actually checked to see whether this approach works, but it's more or less the only fully general trick to show that two polynomials are the same.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
} |
A basic question on Type and Cotype theory I'm studying basic theory of type and cotype of banach spaces, and I have a simple question. I'm using the definition using averages. All Banach spaces have type 1, that was easy to prove, using the triangle inequality. But I'm having a hard time trying to show that all Banach spaces have cotype $\infty$.
What I'm trying to show is that there exists $C>0$ such that, for every $x_1, \dotsc, x_n$ in a Banach space $X$,
$$\left( \frac {{\displaystyle \sum\limits_{\varepsilon_i = \pm 1}} \lVert \sum^n_{i=1} \varepsilon_i x_i\rVert} {2^n} \right) \ge
C \max_{1\le i \le n} \lVert x_i \rVert $$
How is it done ? This is supposed to be trivial, as the literature keeps telling me "it's easy to see".
Thanks !
| The argument is by induction:
It is trivial for $n=1$. For the case $n=2$ note that we have, by the triangle inequality and the fact that $\|z\|=\|-z\|$,
$$
\| x-y \| + \|x+y\| \geq 2\max\{ \| x\|, \| y\|\},
$$
so that the inequality in this case follows with $C=1$. For the general case consider a vector $\bar{\varepsilon}=(\varepsilon_2,\ldots,\varepsilon_n) \in \{-1,1\}^{n-1}$ and $\bar{x}=(x_2,\ldots,x_n)$ and the natural dot product
$$
\bar{\varepsilon}\cdot \bar{x}= \sum_{j=2}^n \varepsilon_jx_j\in X.
$$
Then the left hand side of the desired inquality (which we call $A$) can be rewritten as
$$
A=\frac{\sum_{\bar{\varepsilon}} \sum_{\varepsilon_1=\pm1} \| \varepsilon_1x_1 + \bar{\varepsilon}\cdot \bar{x}\|}{2^n}.
$$
Notice that, if $y=\bar{\varepsilon}\cdot \bar{x}$ then, by the argument for $n=2$,
$$
\sum_{\varepsilon_1=\pm1} \| \varepsilon_1x_1+y\| \geq 2\max\{ \| x_1\|,\|y\|\},
$$
so that, plugging into the previous inequality, and recalling the obvious inequality $\max\{ \sum_j a_j, \sum_jb_j\} \leq \sum_j \max\{ a_j,b_j\}$, we obtain
$$
A\geq \max\left\{ \| x_1\|, \frac{\sum_{\bar{\varepsilon}} \| \bar{\varepsilon}\cdot \bar{x}\|}{2^{n-1}} \right\} \geq \max_{1\leq i\leq n} \| x_i\|.
$$
This is what you want, with $C=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Show that the $\max{ \{ x,y \} }= \frac{x+y+|x-y|}{2}$. Show that the $\max{ \{ x,y \} }= \dfrac{x+y+|x-y|}{2}$.
I do not understand how to go about completing this problem or even where to start.
| Without loss of generality, let $y=x+k$ for some nonnegative number $k$. Then,
$$
\frac{x+(x+k)+|x-(x+k)|}{2} = \frac{2x+2k}{2} = x+k = y
$$
which is equal to $\max(x,y)$ by the assumption.
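The identity is also easy to spot-check numerically for both orderings of $x$ and $y$, including negatives and ties:

```python
def max_formula(x, y):
    """The claimed identity: max(x, y) = (x + y + |x - y|)/2."""
    return (x + y + abs(x - y)) / 2

values = [-3.5, -1, 0, 0.25, 2, 7]
for x in values:
    for y in values:
        assert max_formula(x, y) == max(x, y)
```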
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 7,
"answer_id": 5
} |
Probability of getting 'k' heads with 'n' coins This is an interview question.( http://www.geeksforgeeks.org/directi-interview-set-1/)
Given $n$ biased coins, with each coin giving heads with probability $P_i$, find the probability that on tossing the $n$ coins you will obtain exactly $k$ heads. You have to write the formula for this (i.e. the expression that would give $P (n, k)$).
I can write a recurrence program for this, but how do I write the general expression?
| Consider the function
$[ (1-P_1) + P_1x] \times [(1-P_2) + P_2 x ] \ldots [(1-P_n) + P_n x ]$
Then, the coefficient of $x^k$ corresponds to the probability that there are exactly $k$ heads.
The coefficient of $x^k$ in this polynomial is $\sum_{|S|=k} \left[\prod_{i\in S} P_i \prod_{j \not\in S} (1-P_j)\right]$, where the sum runs over all $k$-subsets $S$ of $\{1,\ldots,n\}$.
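The generating-function recipe translates directly into a short dynamic program (an illustration with made-up biases): multiply the factors $[(1-P_i)+P_i x]$ one at a time and read off the coefficients, checked here against a brute-force sum over subsets.

```python
from itertools import combinations

def heads_distribution(probs):
    """Entry k of the result is P(exactly k heads) for the given biases."""
    coeffs = [1.0]
    for p in probs:
        new = [0.0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k] += c * (1 - p)   # this coin is tails
            new[k + 1] += c * p     # this coin is heads
        coeffs = new
    return coeffs

def brute_force(probs, k):
    """Direct sum over all k-subsets S of heads."""
    n, total = len(probs), 0.0
    for S in combinations(range(n), k):
        term = 1.0
        for i in range(n):
            term *= probs[i] if i in S else 1 - probs[i]
        total += term
    return total

probs = [0.1, 0.5, 0.7, 0.9]            # hypothetical biases
dist = heads_distribution(probs)
assert abs(sum(dist) - 1.0) < 1e-12     # probabilities sum to 1
for k in range(len(probs) + 1):
    assert abs(dist[k] - brute_force(probs, k)) < 1e-12
```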
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 1
} |
Help to compute the following coefficient in Fourier series $\int_{(2n-1)\pi}^{(2n+1)\pi}\left|x-2n\pi\right|\cos(k x)\mathrm dx$
$$\int_{(2n-1)\pi}^{(2n+1)\pi}\left|x-2n\pi\right|\cos(k x)\mathrm dx$$
where $k\geq 0$, $k\in\mathbb{N}$ and $n\in\mathbb{R}$.
It is an $a_k$ coefficient in a Fourier series.
| Here is the final answer, computed by Maple:
$$ 2\,\frac {2\left( -1 \right)^{k} \cos^{2} \left( \pi k n \right) - 2\cos^{2} \left( \pi k n \right) + \left( -1 \right)^{k+1} + 1}{k^{2}} \ . $$
Added: More simplification leads to the more compact form
$$ 2\,\frac {\cos \left( 2\pi k n \right) \left( \left( -1 \right)^{k}-1 \right) }{k^{2}}.$$
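The compact form can be spot-checked numerically (illustrative only): compare a midpoint-rule quadrature of the integral against the closed form for a few integer $k$ and real $n$.

```python
import math

def quadrature(k, n, steps=20000):
    """Midpoint-rule approximation of the integral over [(2n-1)pi, (2n+1)pi]."""
    a, b = (2 * n - 1) * math.pi, (2 * n + 1) * math.pi
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        total += abs(x - 2 * n * math.pi) * math.cos(k * x)
    return total * h

def closed_form(k, n):
    """The compact Maple result: 2*cos(2*pi*k*n)*((-1)^k - 1)/k^2."""
    return 2 * math.cos(2 * math.pi * k * n) * ((-1) ** k - 1) / k ** 2

for k in (1, 2, 3):
    for n in (0.0, 0.5, 1.0):
        assert abs(quadrature(k, n) - closed_form(k, n)) < 1e-4
```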
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
A proposed proof by induction of $1+2+\ldots+n=\frac{n(n+1)}{2}$
Prove: $\displaystyle 1+2+\ldots+n=\frac{n(n+1)}{2}$.
Proof
When $n=1$, $1=\displaystyle \frac{1(1+1)}{2}$, so equality holds.
Suppose when $n=k$, we have $1+2+\dots+k=\frac{k(k+1)}{2}$
When $n = k + 1$:
\begin{align}
1+2+\ldots+k+(k+1) &=\frac{k(k+1)}{2}+k+1 =\frac{k(k+1)+2k+2}{2}\\
&=\frac{k^2+3k+2}{2}\\
\text{[step]}&=\displaystyle\frac{(k+1)(k+2)}{2}=\displaystyle\frac{(k+1)((k+1)+1)}{2}
\end{align}
equality holds.
So by induction, the original equality holds $\forall n\in \mathbb{N}$.
Question 1: any problems in writing?
Question 2: Why does [step] hold? i.e., why does $k^2+3k+2=(k+1)(k+2)$ hold?
| Q1: No problems, that's the way induction works.
Q2: go back one step:
$$k(k+1)+2k+2=k(k+1)+2(k+1)=(k+1)(k+2)$$
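For completeness, the identity itself (though not the induction argument) can be confirmed by brute force:

```python
def gauss_sum(n):
    """The closed form n(n+1)/2, computed with exact integer arithmetic."""
    return n * (n + 1) // 2

# check 1 + 2 + ... + n against the closed form for many n
for n in range(1, 100):
    assert sum(range(1, n + 1)) == gauss_sum(n)
```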
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why does «Massey cube» of an odd element lie in 3-torsion? The cup product is supercommutative, i.e the supercommutator $[-,-]$ is trivial at the cohomology level — but not at the cochain level, which allows one to produce various cohomology operations.
The simplest (in some sense) of such (integral) operations is the following «Massey cube». Suppose $a$ is an integral $k$-cocycle, $k$ is odd; $[a,a]=0\in H^{2k}$, so $[a,a]=db$ (where $b$ is some cochain); define $\langle a\rangle^3:=[a,b]\in H^{3k-1}$ (clearly this is a cocycle; it doesn’t depend on choice of $b$, since by supercommutativity $[a,b’-b]=0\in H^{3k-1}$ whenever $d(b-b’)=0$).
The question is,
why $\langle a\rangle^3$ lies in 3-torsion?
For $k=3$, for example, this is true since $H^8(K(\mathbb Z,3);\mathbb Z)=\mathbb Z/3$, but surely there should be a more direct proof? (Something like Jacobi identity, maybe?)
| Recall that $d(x\cup_1y)=[x,y]\pm dx\cup_1 y\pm x\cup_1dy$.
In particular, in the definition from the question one can take $b=a\cup_1a$. So $\langle a\rangle^3=[a,a\cup_1a]$.
Now $d((a\cup_1a)\cup_1a)=[a,a\cup_1a]+(d(a\cup_1 a))a=\langle a\rangle^3+[a,a]\cup_1 a$. By the Hirsch formula, $a^2\cup_1a=a(a\cup_1a)+(a\cup_1a)a=\langle a\rangle^3$.
So $$3\langle a\rangle^3=d(a\cup_1a\cup_1a)=0\in H^{3k-1}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/429976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Why does the inverse of the Hilbert matrix have integer entries?
Let $A$ be the $n\times n$ matrix given by
$$A_{ij}=\frac{1}{i + j - 1}$$
Show that $A$ is invertible and that the inverse has integer entries.
I was able to show that $A$ is invertible. How do I show that $A^{-1}$ has integer entries?
This matrix is called the Hilbert matrix. The problem appears as exercise 12 in section 1.6 of Hoffman and Kunze's Linear Algebra (2nd edition).
| Be wise, generalize (c)
I think the nicest way to answer this question is the direct computation of the inverse - however, for a more general matrix including the Hilbert matrix as a special case. The corresponding formulas have very transparent structure and nontrivial further generalizations.
The matrix $A$ is a particular case of the so-called Cauchy matrix with elements
$$A_{ij}=\frac{1}{x_i-y_j},\qquad i,j=1,\ldots, N.$$
Namely, in the Hilbert case we can take
$$x_i=i-\frac{1}{2},\qquad y_i=-i+\frac12.$$
The determinant of $A$ is given in the general case by
$$\mathrm{det}\,A=\frac{\prod_{1\leq i<j\leq N}(x_i-x_j)(y_j-y_i)}{\prod_{1\leq i,j\leq N}(x_i-y_j)}.\tag{1}$$
Up to an easily computable constant prefactor, the structure of (1) follows from the observation that $\mathrm{det}\,A$ vanishes whenever there is a pair of coinciding $x$'s or $y$'s. (In the latter case $A$ contains a pair of coinciding rows/columns). For our $x$'s and $y$'s the determinant is clearly non-zero, hence $A$ is invertible.
One can also easily find the inverse $A^{-1}$, since the matrix obtained from a Cauchy matrix by deleting one row and one column is also of Cauchy type, with one $x$ and one $y$ less. Taking the ratio of the corresponding two determinants and using (1), most of the factors cancel out and one obtains
\begin{align}
A_{mn}^{-1}=\frac{1}{y_m-x_n}\frac{\prod_{1\leq i\leq N}(x_n-y_i)\cdot\prod_{1\leq i\leq N}(y_m-x_i)}{\prod_{i\neq n}(x_n-x_i)\cdot\prod_{i\neq m}(y_m-y_i)}.\tag{2}
\end{align}
For our particular $x$'s and $y$'s, the formula (2) reduces to
\begin{align}
A_{mn}^{-1}&=\frac{(-1)^{m+n}}{m+n-1}\frac{\frac{(n+N-1)!}{(n-1)!}\cdot
\frac{(m+N-1)!}{(m-1)!}}{(n-1)!(N-n)!\cdot(m-1)!(N-m)!}=\\
&=(-1)^{m+n}(m+n-1){n+N-1 \choose N-m}{m+N-1 \choose N-n}{m+n-2\choose m-1}^2.
\end{align}
The last expression is clearly integer. $\blacksquare$
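The final formula can be verified with exact rational arithmetic (an illustrative check, not part of the proof): build the Hilbert matrix over $\mathbb{Q}$, build the candidate inverse from the binomial expression, and confirm that the product is the identity and that every inverse entry is an integer.

```python
from fractions import Fraction
from math import comb

def hilbert(N):
    """Exact Hilbert matrix: A[i][j] = 1/(i + j - 1), 1-indexed."""
    return [[Fraction(1, i + j - 1) for j in range(1, N + 1)]
            for i in range(1, N + 1)]

def hilbert_inverse(N):
    """Candidate inverse, entry (m, n) taken from the closed form above."""
    return [[(-1) ** (m + n) * (m + n - 1)
             * comb(n + N - 1, N - m) * comb(m + N - 1, N - n)
             * comb(m + n - 2, m - 1) ** 2
             for n in range(1, N + 1)]
            for m in range(1, N + 1)]

N = 5
A, B = hilbert(N), hilbert_inverse(N)
assert all(isinstance(x, int) for row in B for x in row)   # integer entries
for i in range(N):                                         # A @ B == identity
    for j in range(N):
        assert sum(A[i][k] * B[k][j] for k in range(N)) == (1 if i == j else 0)
```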
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 1,
"answer_id": 0
} |
Is there an inverse to Stirling's approximation? The factorial function cannot have an inverse, $0!$ and $1!$ having the same value. However, Stirling's approximation of the factorial $x! \sim x^xe^{-x}\sqrt{2\pi x}$ does not have this problem, and could provide a ballpark inverse to the factorial function. But can this actually be derived, and if so how? Here is my work:
$$
\begin{align}
y &= x^xe^{-x}\sqrt{2\pi x}\\
y^2 &= 2\pi x^{2x + 1}e^{-2x}\\
\frac{y^2}{2\pi} &= x^{2x + 1}e^{-2x}\\
\ln \frac{y^2}{2\pi} &= (2x + 1)\ln x - 2x\\
\ln \frac{y^2}{2\pi} &= 2x\ln x + \ln x - 2x\\
\ln \frac{y^2}{2\pi} &= 2x(\ln x - 1) + \ln x
\end{align}
$$
That is as far as I can go. I suspect the solution may require the Lambert W function.
Edit: I have just realized that after step 3 above, one can divide both sides by $e$ to get
$$\left(\frac{x}{e}\right)^{2x + 1} = \frac{y^2}{2e\pi}$$
Can this be solved?
| As $n$ increases to infinity we want to know roughly the size of the $x$ that satisfies the equation $x! = n$. By Stirling
$$
x^x e^{-x} \sqrt{2\pi x} \sim n
$$
Just focusing on $x^x$ a first approximation is $\log n / \log\log n$. Now writing $x = \log n / \log\log n + x_1$ and solving approximately for $x_1$, this time using $x^x e^{-x}$ we get
$$
x = \frac{\log n}{\log\log n} + \frac{\log n \cdot ( \log\log\log n + 1)}{(\log\log n)^2} + x_2
$$
with a yet smaller $x_2$, which can also be determined by plugging this into $x^x e^{-x}$. You'll notice eventually that the $\sqrt{2\pi x}$ is too small to contribute. You can continue in this way, and this will give you an asymptotic (non-convergent) series (in powers of $\log\log n$). For more I recommend looking at de Bruijn's book "Asymptotic Methods in Analysis". He specifically focuses on the $n!$ case in one of the chapters (don't have the book with me to check).
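A rough numeric illustration of the first two terms (assumptions: we take $x=100$, set $n=x!$, and see how close the truncated expansions come to recovering $x$):

```python
import math

x = 100.0
log_n = math.lgamma(x + 1)          # log(100!)
ll = math.log(log_n)                # log log n

one_term = log_n / ll
two_term = one_term + log_n * (math.log(ll) + 1) / ll ** 2

# each extra term should move us closer to the true solution x = 100
assert abs(two_term - x) < abs(one_term - x)
assert one_term < two_term < x
```

The convergence is slow (as expected for an expansion in powers of $\log\log n$): with two terms we recover roughly $90$ out of $100$.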
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 0
} |
Action of the state I have the following question:
let $A$ be a C*-algebra and let $a$ be a self adjoint element of $A$. Is it true that
for any state $f$ acting on $A$ $$f(a) \in \mathbb{R}.$$
Let me remind that a state is a positive linear functional of norm $1$.
I think it is due to the fact that every state has to satisfy,
$f(x^*)=\overline{f(x)}$, for all $x \in A$.
Then we easily obtain
$f(a)=f(a^*) = \overline{f(a)}$, thus $f(a) \in \mathbb{R}$, but I don't know how to show that it has this *-property.
| Suppose that $a$ is a self-adjoint element in the C$^*$-algebra $A$. Then, by applying the continuous functional calculus, we can write $a$ as the difference of two positive elements $a=a_+ - a_-$ such that $a_+a_-=a_-a_+=0$. See, for example, Proposition VIII.3.4 in Conway's A Course in Functional Analysis, or (*) below.
Once we have this fact in hand it is easy to show the desired property. As $f$ is positive, $f(a_+)$ and $f(a_-)$ are positive (and thus real) and so $f(a)=f(a_+)-f(a_-)$ is real.
It is also now easy to show the self-adjointness property that you mentioned: each $a \in A$ (now not necessarily self-adjoint) can be written as $a=x+iy$, where $x$ and $y$ are self-adjoint. We can take $x=\frac{1}{2}(a+a^*)$ and $y=\frac{-i}{2}(a-a^*)$. Then $$f(a)=f(x+iy)=f(x)+if(y)$$ and $$f(a^*)=f(x-iy)=f(x)-if(y).$$
As $f(x)$ and $f(y)$ are real, we have $f(a^*)=\overline{f(a)}$.
(*) How do we show that $a=a_+-a_-$ where $a_+a_-=a_-a_+=0$ and $a_+$ and $a_-$ are positive? As $a$ is self-adjoint, its spectrum is a subset of the real line and we also know that we can apply the continuous functional calculus. If $g(t)=\max (t, 0)$ and $h(t)=\min(t,0)$, then $a_+=g(a)$ and $a_-=h(a)$ are the elements we need. Why does this work? Think about splitting a continuous function on $\mathbb R$ into its positive and negative parts.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove these integral inequalities? a) $f(x)>0$ and $f(x)\in C[a,b]$
Prove $$\left(\int_a^bf(x)\sin x\,dx\right)^2 +\left(\int_a^bf(x)\cos x\,dx\right)^2 \le \left(\int_a^bf(x)\,dx\right)^2$$
I have tried Cauchy-Schwarz inequality but failed to prove.
b) $f(x)$ is differentiable in $[0,1]$
Prove
$$|f(0)|\le \int_0^1|f(x)|\,dx+\int_0^1|f'(x)|dx$$
Any Helps or Tips,Thanks
| Hint: For part a), use Jensen's inequality with weighted measure $f(x)\,\mathrm{d}x$. Since $f(x)>0$, Jensen says that for a convex function $\phi$
$$
\phi\left(\frac1{\int_Xf(x)\mathrm{d}x}\int_Xg(x)\,f(x)\mathrm{d}x\right)
\le\frac1{\int_Xf(x)\mathrm{d}x}\int_X\phi(g(x))\,f(x)\mathrm{d}x
$$
Hint: For part b), note that for $x\in[0,1]$,
$$
f(0)-f(x)\le\int_0^1|f'(t)|\,\mathrm{d}t
$$
and integrate over $[0,1]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
The continuity of measure Let $m$ be the Lebesgue Measure. If $\{A_k\}_{k=1}^{\infty}$ is an ascending collection of measurable sets, then
$$m\left(\cup_{k=1}^\infty A_k\right)=\lim_{k\to\infty}m(A_k).$$
Can someone share a story as to why this is called one of the "continuity" properties of measure?
| Since $\{A_k\}_{k=1}^\infty$ is an ascending family of sets we can vaguely write that
$$
\lim\limits_{k\to\infty} A_k=\bigcup\limits_{k=1}^\infty A_k \qquad(\color{red}{\text{note: this is not rigorous!}})
$$
then this property can be written as
$$
m\left(\lim\limits_{k\to\infty} A_k\right)=\lim\limits_{k\to\infty}m(A_k)
$$
Which looks very similar to the Heine definition of continuity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Check solutions of vector Differential Equations I have solved the vector ODE: $x' = \begin{pmatrix}1& 1 \\ -1 &1 \end{pmatrix}x$
I found an eigenvalue $\lambda=1+i$ and deduced the corresponding eigenvector:
\begin{align}
(A-\lambda I)x =& 0 \\
\begin{pmatrix}1-1-i & 1 \\-1& 1-1-i \end{pmatrix}x =& 0 \\
\begin{pmatrix} -i&1\\-1&-i\end{pmatrix}x =&0
\end{align}
Which is similar to:
$\begin{pmatrix}i&-1\\0&0 \end{pmatrix}x = 0$ By Row Reduction.
Take $x_2=1$ as $x_2$ is free. We then have the following equation:
\begin{align}
&ix_1 - x_2 = 0 \\
\iff& ix_1 = 1 \\
\iff& x_1 = \frac{1}{i}
\end{align}
Thus the corresponding eigenvector for $\lambda=1+i$ is: $\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix}$.
My solution should then be:
\begin{align}
x(t) =& e^{(1+i)t}\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix} \\
=& e^t e^{it}\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix} \\
=& e^t\left(\cos(t) + i\sin(t)\right)\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix} \\
\end{align}
By taking only the real parts we have the general solution:
$\left(c_1e^t\cos(t) + c_2e^t\sin(t)\right)\begin{pmatrix} \dfrac{1}{i} \\ 1\end{pmatrix}$
How can I quickly check this is correct? Idealy I would like to use Sage to verify. I think this would be faster than differentiating my solution and checking whether I get the original equation.
| Let me work through the other eigenvalue, and see if you can follow the approach.
For $\lambda_2 = 1-i$, we have:
$[A - \lambda_2 I]v_2 = \begin{bmatrix}1 -(1-i) & 1\\-1 & 1-(1-i)\end{bmatrix}v_2 = 0$
The RREF of this is:
$\begin{bmatrix}1 & -i\\0 &0\end{bmatrix}v_2 = 0 \rightarrow v_2 = (i, 1)$
To write the solution, we have:
$\displaystyle x[t] = \begin{bmatrix}x_1[t]\\ x_2[t]\end{bmatrix} = e^{\lambda_2 t}v_2 = e^{(1-i)t}\begin{bmatrix}i\\1\end{bmatrix} = e^te^{-it}\begin{bmatrix}i\\1\end{bmatrix} = e^t(\cos t - i \sin t) \begin{bmatrix}i\\1\end{bmatrix} = e^t\begin{bmatrix} \sin t + i \cos t\\ \cos t -i \sin t \end{bmatrix} = e^t\begin{bmatrix}c_1 \cos t + c_2 \sin t\\ -c_1 \sin t + c_2 \cos t\end{bmatrix}$
Note, I put $c_1$ with the imaginary terms, and $c_2$ with the other terms, but this is totally arbitrary since these are just some constants.
For the validation:
*
*take $x'[t]$ of that solution we just derived.
*then, take the product $Ax$ and verify that it matches the $x'[t]$ expressions from the previous calculation.
I would recommend emulating this with the other eigenvalue/eigenvector and see if you can get a similar result. Lastly, note that $1/i = -i$ (just multiply by $i/i$).
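The validation recipe above can be automated (a sketch; the constants $c_1, c_2$ are arbitrary): compare a central finite difference of the candidate solution with $Ax(t)$ at several sample times.

```python
import math

A = [[1, 1], [-1, 1]]
c1, c2 = 0.7, -1.3   # arbitrary choice of constants

def x(t):
    """Candidate solution e^t (c1 cos t + c2 sin t, -c1 sin t + c2 cos t)."""
    e = math.exp(t)
    return [e * (c1 * math.cos(t) + c2 * math.sin(t)),
            e * (-c1 * math.sin(t) + c2 * math.cos(t))]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

h = 1e-6
for t in (0.0, 0.5, 1.0, 2.0):
    xp, xm = x(t + h), x(t - h)
    deriv = [(p - m) / (2 * h) for p, m in zip(xp, xm)]  # x'(t), numerically
    rhs = matvec(A, x(t))                                 # A x(t)
    assert all(abs(d - r) < 1e-5 for d, r in zip(deriv, rhs))
```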
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $7 \mid( 1^{47} +2^{47}+3^{47}+4^{47}+5^{47}+6^{47})$ I am solving this one using Fermat's little theorem, but I got stuck with some manipulations, and I could not tell whether the sum of the residues of the terms is still divisible by $7$. What would be a better approach, or am I on the right track? Thanks
| $6^{47} \equiv (-1)^{47} = -1^{47}\mod 7$
$5^{47} \equiv (-2)^{47} = -2^{47}\mod 7$
$4^{47} \equiv (-3)^{47} = -3^{47}\mod 7$
Hence $ 1^{47} +2^{47}+3^{47}+4^{47}+5^{47}+6^{47} \equiv 0 \mod 7$.
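Both the claim and the pairing argument are trivial to confirm with exact integer arithmetic:

```python
# direct verification of the divisibility claim
total = sum(i ** 47 for i in range(1, 7))
assert total % 7 == 0

# the pairing argument above: each pair i, 7 - i cancels mod 7
for i in (1, 2, 3):
    assert (i ** 47 + (7 - i) ** 47) % 7 == 0
```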
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
} |
Linear algebra - Coordinate Systems I'm preparing for an upcoming Linear Algebra exam, and I have come across a question that goes as follows: Let U = {(s, s − t, 2s + 3t)}, where s and t are any real numbers. Find the coordinates of x = (3, 4, 3) relative to the basis B if x is in U . Sketch the set U in the xyz-coordinate system.
It seems that in order to solve this problem, I'll have to find the basis B first! How do I find that as well?
The teacher has barely covered coordinate systems and said she is unlikely to include anything from that section on the exam, but I still want to be safe. The book isn't much help: it explains the topic but doesn't give any examples.
Another part of the question ask about proving that U is a subspace of R3, but I was able to figure that one out on my own. I'd appreciate if someone could show me how to go about solving the question above.
| This might be an answer, depending on how one interprets the phrase "basis $B$", which is undefined in the question as stated:
Note that $(s, s - t, 2s + 3t) = s(1, 1, 2) + t(0, -1, 3)$. Taking $s = 1$, $t = 0$ shows that $(1, 1, 2) \in U$. Likewise, taking $s = 0$, $t = 1$ shows $(0, -1, 3) \in U$ as well. Incidentally, the vectors $(1, 1, 2)$ and $(0, -1, 3)$ are clearly linearly independent; to see this in detail, note that if $(s, s - t, 2s + 3t) = s(1, 1, 2) + t(0, -1, 3) = (0,0,0)$, then we must obviously have $s = 0$, whence $s - t = -t = 0$ as well. Assuming $B$ refers to the basis $(1, 1, 2)$, $(0, -1, 3)$ of $U$, it is easy to work out the values of $s$ and $t$ corresponding to $x$: setting $(s, s - t, 2s + 3t) = (3, 4, 3)$, we see that we must have
$s = 3$ whence $t = -1$ follows from $s - t = 4$. These check against $2s + 3t = 3$, as the reader may easily verify. The desired coordinates for $x$ in the basis $B$ are thus $(3, -1)$.
Think that about covers it, if my assumption about $B$ is correct.
Can't provide a graphic, but one is easily constructed noting that the vectors $(1, 1, 2)$, $(0, -1, 3)$ span $U$ in $R^3$ (the "$xyz$" coordinate system).
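The coordinates found above can be checked in one line: $3(1,1,2) - 1(0,-1,3)$ should reproduce $x = (3,4,3)$.

```python
b1, b2 = (1, 1, 2), (0, -1, 3)   # the basis B of U
s, t = 3, -1                      # the coordinates found above
x = tuple(s * u + t * v for u, v in zip(b1, b2))
assert x == (3, 4, 3)
```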
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Calculating 7^7^7^7^7^7^7 mod 100 What is
$$\large 7^{7^{7^{7^{7^{7^7}}}}} \pmod{100}$$
I'm not much of a number theorist and I saw this mentioned on the internet somewhere. Should be doable by hand.
| Reading the other answers, I realize this is a longer way than necessary, but it gives a more general approach for when things are not as convenient as $7^4\equiv 1\bmod 100$.
Note that, for any integer $a$ that is relatively prime to $100$, we have
$$a^{40}\equiv 1\bmod 100$$
because $\varphi(100)=40$, and consequently
$$a^m\equiv a^n\bmod 100$$
whenever $m\equiv n\bmod 40$. Thus, we need to find $7^{7^{7^{7^{7^{7}}}}}$ modulo $40$. By the Chinese remainder theorem, it is equivalent to know what it is modulo $8$ and modulo $5$.
Modulo $8$, we have $7\equiv -1\bmod 8$, and $-1$ to an odd power is going to be $-1$, so we see that $$7^{7^{7^{7^{7^{7}}}}}\equiv (-1)^{7^{7^{7^{7^{7}}}}} \equiv -1\equiv 7\bmod 8.$$
Modulo $5$, we have $7^4\equiv 1\bmod 5$ (again by Euler's theorem), so we need to know $7^{7^{7^{7^{7}}}}\bmod 4$. But $7\equiv -1\bmod 4$, and $7^{7^{7^{7}}}$ is odd, so that $7^{7^{7^{7^{7}}}}\equiv -1\equiv 3\bmod 4$, so that
$$7^{7^{7^{7^{7^{7}}}}}\equiv 7^3\equiv 343\equiv 3\bmod 5.$$
Applying the Chinese remainder theorem, we conclude that
$$7^{7^{7^{7^{7^{7}}}}}\equiv 23\bmod 40,$$
and hence
$$7^{7^{7^{7^{7^{7^{7}}}}}}\equiv 7^{23}\bmod 100.$$
This is tractable by again using the Chinese remainder theorem to find
$7^{23}\bmod 4$ and $7^{23}\bmod 25$.
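The whole chain of reductions can be checked mechanically. The sketch below (function names `phi` and `tower_mod` are mine) reduces the exponent tower level by level modulo Euler's totient; plain reduction mod $\varphi(m)$ is valid here because $7$ is coprime to every modulus in the chain $100 \to 40 \to 16 \to 8 \to 4 \to 2$:

```python
def phi(n):
    """Euler's totient by trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(base, height, mod):
    """base^base^...^base (height copies) mod `mod`, assuming
    gcd(base, m) = 1 for every modulus m in the totient chain."""
    if mod == 1:
        return 0
    if height == 1:
        return base % mod
    return pow(base, tower_mod(base, height - 1, phi(mod)), mod)

answer = tower_mod(7, 7, 100)  # the tower of seven 7s, mod 100
```

This confirms the intermediate value $23 \bmod 40$ from the answer, and evaluates the final step $7^{23} \bmod 100$ as well.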
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Size of the new box after rotating and rescaling I have a box of height h and width w. I rotate it by r degrees. Now I resize the rotated box so that the original box fits inside it. What will be the size of the new box?
Original Box:
Box after rotating some degrees.
New box after rescaling.
So my question is: what is the formula for the new size (width, height) and the position relative to the old one?
What I have: width "w", height "h", position (x,y), and angle (t).
| Assuming the old rectangle inscribed in the new one, we have the following picture:
Let $\theta$ ($0 \leq \theta \leq \frac{\pi}{2}$) be the rotation angle, $w'$ the new width, and $h'$ the new height; then we have the following equations:
$$w' = w \cos \theta + h \sin \theta$$
$$h' = w \sin \theta + h \cos \theta$$
The new rectangle is not similar to the old one, except for $h = w$ when both rectangles are in fact squares.
Edit:
Considering $O$ (the center of both rectangles) as the origin of the coordinate system, the points $E$, $F$, $G$, and $H$ can be calculated by the following equations:
$$E=\left(\frac{w}{2}-w \cos^2 \theta,-\frac{h}{2}-w \sin \theta \cos \theta \right)$$
$$F=\left(\frac{w}{2}+h \sin \theta \cos \theta,-\frac{h}{2}+h \sin^2 \theta \right)$$
$$G=\left(-\frac{w}{2}+w \cos^2 \theta,\frac{h}{2}+w \sin \theta \cos \theta \right)$$
$$H=\left(-\frac{w}{2}-h \sin \theta \cos \theta,-\frac{h}{2}+h \cos^2 \theta \right)$$
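A small numeric sketch of the size formulas (function name mine; the angle is taken in radians, so the question's degrees would need converting):

```python
from math import sin, cos, radians, isclose, sqrt

def bounding_size(w, h, theta):
    """Width and height of the rectangle that contains the w-by-h
    rectangle rotated by theta radians (0 <= theta <= pi/2),
    per w' = w cos(t) + h sin(t), h' = w sin(t) + h cos(t)."""
    return (w * cos(theta) + h * sin(theta),
            w * sin(theta) + h * cos(theta))

# No rotation: the box is unchanged.
w0, h0 = bounding_size(4.0, 2.0, 0.0)
# A square rotated 45 degrees grows by a factor of sqrt(2) in both directions.
w45, h45 = bounding_size(3.0, 3.0, radians(45))
```

The 45-degree square case also illustrates the answer's remark that the two rectangles are similar only when $h = w$.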
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Notation for intervals I have frequently encountered both $\langle a,b \rangle$ and $[a,b]$ as notation for closed intervals. I have mostly encountered $(a,b)$ for open intervals, but I have also seen $]a,b[$. I recall someone calling the notation with $[a,b]$ and $]a,b[$ as French notation.
* What are the origins of the two notations?
* Is the name French notation correct? Are they used frequently in France? Or were they perhaps prevalent in the French mathematical community at some point? (In this MO answer Bourbaki is mentioned in connection with the notation $]a,b[$.)
Since several answerers have already mentioned that they have never seen $\langle a,b \rangle$ to be used for closed intervals, I have tried to look for some occurrences for this.
The best I can come up with is the article on Czech Wikipedia, where these notations are called Czech notation and French notation. Using $(a,b)$ and $[a,b]$ is called English notation in that article. (I am from Central Europe, too, so it is perhaps not that surprising that I have seen this notation in lectures.)
I also tried to google for interval langle or "closed interval" langle. Surprisingly, this led me to a question on MSE where this notation is used for an open interval.
| As a French student, all my math teachers (as well as the physics/biology/etc. ones) always used the $[a,b]$ and $]a,b[$ (and the "hybrid" $[a,b[$ and $]a,b]$) notations.
We also use, for integer intervals $\{a,a+1,...,b\}$, the \llbracket\rrbracket notation (in LaTeX, package {stmaryrd}): $[[ a,b ]]$.
I have never seen the $\langle a,b \rangle$ notation used for intervals, though (only for inner products or more exotic binary operations).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 1
} |
antiderivative of $\sum _{ n=0 }^{ \infty }{ (n+1){ x }^{ 2n+2 } } $ I've proven that the radius of convergence of $\sum _{ n=0 }^{ \infty }{ (n+1){ x }^{ 2n+2 } } $ is $R=1$, and that it doesn't converge at the edges.
Now, I was told that this is the derivative of a function $f(x)$, which holds $f(0)=0$.
My next step is to find this function in simple terms, and here I get lost. My attempt:
$f(x)=\sum _{ n=0 }^{ \infty }{ \frac { n+1 }{ 2n+3 } { x }^{ 2n+3 } } $
and this doesn't seem to help.
I'd like to use the fact that $|x|<1$ so I'll get a nice sum based on the sum of a geometric series but I have those irritating coefficients.
Any tips?
| First, consider
$$
g(w)=\sum_{n=0}^{\infty}(n+1)w^n.
$$
Integrating term-by-term, we find that the antiderivative $G(w)$ for $g(w)$ is
$$
G(w):=\int g(w)\,dw=C+\sum_{n=0}^{\infty}w^{n+1}
$$
where $C$ is an arbitrary constant. To make $G(0)=0$, we take $C=0$; then
$$
G(w)=\sum_{n=0}^{\infty}w^{n+1}=\sum_{n=1}^{\infty}w^n=\frac{w}{1-w}\qquad\text{if}\qquad\lvert w\rvert<1.
$$
(Here, we've used that this last is a geometric series with first term $w$ and common ratio $w$.) So, we find
$$
g(w)=G'(w)=\frac{1}{(1-w)^2},\qquad \lvert w\rvert<1.
$$
Now, how does this relate to the problem at hand? Note that
$$
\sum_{n=0}^{\infty}(n+1)x^{2n+2}=x^2\sum_{n=0}^{\infty}(n+1)(x^2)^n=x^2g(x^2)=\frac{x^2}{(1-x^2)^2}
$$
as long as $\lvert x^2\vert<1$, or equivalently $\lvert x\rvert <1$.
From here, you can finish your exercise by integrating this last function with respect to $x$, and choosing the constant of integration that makes its graph pass through $(0,0)$.
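Before integrating, it is easy to sanity-check the closed form against a truncated series (a numeric sketch; function names are mine):

```python
def partial_sum(x, terms=200):
    """Truncation of sum_{n>=0} (n+1) * x^(2n+2)."""
    return sum((n + 1) * x ** (2 * n + 2) for n in range(terms))

def closed_form(x):
    """x^2 / (1 - x^2)^2, valid for |x| < 1."""
    return x * x / (1 - x * x) ** 2

err = abs(partial_sum(0.5) - closed_form(0.5))
```

For $|x|$ well inside the radius of convergence, 200 terms already agree with the closed form to machine precision.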
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the value of the series $\sum\limits_{n=1}^\infty \frac{(-1)^n}{n^2}$? I am given the following series:
$$\sum_{n=1}^\infty \frac{(-1)^n}{n^2}$$
I have used the alternating series test to show that the series converges.
However, how do I go about showing what it converges to?
| Consider the Fourier series of $g(x)=x^2$ for $-\pi<x\le\pi$:
$$g(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos(nx)+b_n\sin(nx)$$
note $b_n=0$ for an even function $g(t)=g(-t)$ and that:
$$a_n=\frac {1}{\pi} \int _{-\pi }^{\pi }\!{x}^{2}\cos \left( nx \right) {dx}
=4\,{\frac { \left( -1 \right) ^{n}}{{n}^{2}}},$$
$$\frac{a_0}{2}=\frac {1}{2\pi} \int _{-\pi }^{\pi }\!{x}^{2} {dx}
=\frac{1}{3}\pi^2,$$
$$x^2=\frac{1}{3}\,{\pi }^{2}+\sum _{n=1}^{\infty }4\,{\frac { \left( -1 \right) ^{n
}\cos \left( nx \right) }{{n}^{2}}},$$
$$x=0\rightarrow \frac{1}{3}\,{\pi }^{2}+\sum _{n=1}^{\infty }4\,{\frac { \left( -1 \right) ^{n
}}{{n}^{2}}}=0,$$
$$\sum _{n=1}^{\infty }{\frac { \left( -1 \right) ^{n}
}{{n}^{2}}}=-\frac{1}{12}\,{\pi }^{2}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/430973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Vector-by-Vector derivative Could someone please help me out with this derivative?
$$
\frac{d}{dx}(xx^T)
$$
where $x$ is a vector.
Thanks
EDIT:
I should clarify that the actual expression I am differentiating is
$$
\frac{d}{dx}(xx^TPb)
$$
where $Pb$ has the dimension of $x$ but is independent of $x$. So the whole expression $xx^TPb$ is a vector.
EDIT2:
Would it become by any chance the following?
$$
\frac{d}{dx}(xx^TPb) = (Pbx^T)^T + x(Pb)^T = 2x(Pb)^T
$$
| You can always go back to the basics. Let $v$ be any vector and $h$ a real number. Substitute $x \leftarrow x + h v$ to get
$$
(x + hv)(x+hv)^t = (x+hv)(x^t+hv^t) = x x^t + h(xv^t + vx^t) + h^2 vv^t.
$$
The linear term in $h$ is your derivative at $x$ in the direction of $v$, so $xv^t + vx^t$ (which is linear in $v$ as expected).
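This directional derivative can be checked with a finite difference on small concrete vectors (a pure-Python sketch; helper names are mine):

```python
def outer(u, v):
    """Outer product u v^t as a list of rows."""
    return [[ui * vj for vj in v] for ui in u]

x = [1.0, 2.0]
v = [0.5, -1.0]
h = 1e-6

xh = [xi + h * vi for xi, vi in zip(x, v)]
# ((x + hv)(x + hv)^t - x x^t) / h, entry by entry:
finite_diff = [[(a - b) / h for a, b in zip(ra, rb)]
               for ra, rb in zip(outer(xh, xh), outer(x, x))]
# The claimed derivative in direction v: x v^t + v x^t.
derivative = [[a + b for a, b in zip(ra, rb)]
              for ra, rb in zip(outer(x, v), outer(v, x))]

max_err = max(abs(fd - d)
              for rf, rd in zip(finite_diff, derivative)
              for fd, d in zip(rf, rd))
```

The residual is exactly the $h\,vv^t$ term from the expansion, so `max_err` is of order $h$.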
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A Covering Map $\mathbb{R}P^2\longrightarrow X$ is a homeomorphism I came across the following problem: Any covering map $\mathbb{R}P^2\longrightarrow X$ is a homeomorphism. To solve the problem you can look at the composition of covering maps
$$
S^2\longrightarrow \mathbb{R}P^2\longrightarrow X
$$
and examine the deck transformations to show that the covering $S^2\longrightarrow X$ only has the identity and antipodal maps as deck transformations.
I've seen these types of problems solved by showing that the covering is one-sheeted. Is there a solution to the problem along those lines?
EDIT: Even if there isn't a way to do it by showing it is one-sheeted, are there other ways?
| 1-Prove that $X$ has to be a compact topological surface;
2-Prove that such a covering has to be finite-sheeted;
3-Deduce from 2 and from $\pi_1(\mathbb{R}P^2)$ that $\pi_1(X)$ is finite;
4- Since the map induced by the covering projection on $\pi_1$ is injective you get $\mathbb{Z}/2\mathbb{Z}< \pi_1(X)$;
5-Conclude using the classification of compact topological surfaces.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Induction and convergence of an inequality: $\frac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots(2n)}\leq \frac{1}{\sqrt{2n+1}}$ Problem statement:
Prove that $\frac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots(2n)}\leq \frac{1}{\sqrt{2n+1}}$ for $n\in \mathbb{N}$, and that there exists a limit as $n \to \infty$.
My progress
LHS is equivalent to $\frac{(2n-1)!}{(2n)!}=\frac{(2n-1)(2n-2)(2n-3)\cdot ....}{(2n)(2n-1)(2n-2)\cdot ....}=\frac{1}{2n}$ So we can rewrite our inequality as:
$\frac{1}{2n}\leq \frac{1}{\sqrt{2n+1}}$ Let's use induction:
For $n=1$ it is obviously true. Assume $n=k$ is correct and show that $n=k+1$ holds.
$\frac{1}{2k+2}\leq \frac{1}{\sqrt{2k+3}}\Leftrightarrow 2k+2\geq\sqrt{2k+3}\Leftrightarrow 4(k+\frac{3}{4})^2-\frac{5}{4}$ after squaring and completing the square. And this does not hold for all $n$
About convergence: Is it not enough to check that $\lim_{n \to \infty}\frac{1}{2n}=\infty$ and conclude that it does not converge?
| There's a direct proof to the inequality of $\frac{1}{\sqrt{2n+1}}$, though vadim has improved on the bound.
Consider $A = \frac{1}{2} \times \frac{3}{4} \times \ldots \times \frac{2n-1} {2n}$
and $B = \frac{2}{3} \times \frac{4}{5} \times \ldots \times \frac{2n}{2n+1}$.
Then $AB = \frac{1}{2n+1}$. Since each term of $A$ is smaller than the corresponding term in $B$, hence $A < B$. Thus $A^2 < AB = \frac{1}{2n+1}$, so
$$A < \frac{1}{\sqrt{2n+1}}$$
Of course, the second part that a limit exists follows easily, and is clearly 0.
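The strict bound is easy to test numerically for small $n$ (a sketch; the function name `A` matches the product defined in the answer):

```python
from math import sqrt

def A(n):
    """The product (1/2)(3/4)...((2n-1)/(2n))."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (2 * k - 1) / (2 * k)
    return p

bound_holds = all(A(n) < 1 / sqrt(2 * n + 1) for n in range(1, 200))
```

The values of `A(n)` also visibly tend to 0, consistent with the limit claim.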
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 0
} |
find the power series representation of $x^2 \arctan(x^3)$ Wondering what I'm doing wrong in this problem. I'm asked to find the power series representation of
$x^2 \arctan(x^3)$
Now, I know that arctan's power series representation is
$$\sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1} $$
I could have sworn that I could just use that formula and then distribute the $x^2$, but I'm getting the wrong answer. Here are the steps I'm taking.
1- plug in $x^3$ for $x$
$$x^2\sum_{n=0}^\infty (-1)^n \frac{(x^3)^{2n+1}}{2n+1} $$
2- simplify the exponent: $(x^3)^{2n+1}=x^{6n+3}$
$$x^2\sum_{n=0}^\infty (-1)^n \frac{x^{6n+3}}{2n+1} $$
3- distribute the $x^2$
$$\sum_{n=0}^\infty (-1)^n \frac{x^{6n+5}}{2n+1} $$
However, this is wrong; according to my book the answer should be
$$\sum_{n=0}^\infty (-1)^n \frac{x^{3n+2}}{n} $$
Can someone please point out my mistake? Please forgive me in advance for any mathematical blunders in my post.
Thanks
Miguel
| The book's answer would be right if it said
$$
\sum_{\text{odd }n\ge 1} (-1)^{(n-1)/2} \frac{x^{3n+2}}{n}.
$$
That would be the same as your answer, i.e.
$$
\sum_{\text{odd }n\ge 1} (-1)^{(n-1)/2} \frac{x^{3n+2}}{n} = \sum_{n\ge0} (-1)^n\frac{x^{6n+5}}{2n+1}.
$$
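The corrected series can be compared directly against $x^2\arctan(x^3)$ (a numeric sketch; the function name is mine):

```python
from math import atan

def series(x, terms=60):
    """Truncation of sum_{n>=0} (-1)^n * x^(6n+5) / (2n+1)."""
    return sum((-1) ** n * x ** (6 * n + 5) / (2 * n + 1)
               for n in range(terms))

err = abs(series(0.5) - 0.5 ** 2 * atan(0.5 ** 3))
```

For $|x| < 1$ the truncation agrees with $x^2\arctan(x^3)$ to machine precision, confirming the $x^{6n+5}$ exponent.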
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
compute an integral with residue I have to find the value of
$$\int_{-\infty}^{\infty}e^{-x^2}\cos({\lambda x})\,dx$$
using residue theorem. What is a suitable contour? Any help would be appreciate! Thanks...
| Hint: By symmetry, we can let $\gamma$ be the path running along the real axis and get that our integral is just $$\int_\gamma e^{-z^2}e^{i\lambda z} dz.$$ Now what happens when you combine these terms and complete the square? Your answer should become a significantly simpler problem.
But be careful with the resulting path. There's still a good bit to do with this approach.
Edit, more steps: You obtain from this the integral $$\int_{\gamma'} e^{-z^2} dz,$$ where $\gamma'$ runs along the real axis, shifted by $-\frac{\lambda i}{2}$. Now, you can show that if you integrate along the real axis, your answer is $$\int_{-\infty}^\infty e^{-x^2} dx = \sqrt{\pi}.$$ This can be done by contour integration. I claim that the integral we are interested in can be obtained from this using contour integration. Relate the path we want to this path and do some estimates on the ends.
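Carrying the contour argument through yields the standard closed form $\sqrt{\pi}\,e^{-\lambda^2/4}$; a crude midpoint quadrature confirms it numerically (a sketch, not part of the contour proof):

```python
from math import exp, cos, sqrt, pi

def integral(lam, R=10.0, steps=200_000):
    """Midpoint rule for the integral of e^(-x^2) cos(lam*x) over [-R, R];
    the tail beyond |x| = 10 is below machine precision."""
    h = 2 * R / steps
    total = 0.0
    for i in range(steps):
        x = -R + (i + 0.5) * h
        total += exp(-x * x) * cos(lam * x)
    return h * total

lam = 1.0
err = abs(integral(lam) - sqrt(pi) * exp(-lam * lam / 4))
```

At $\lambda = 0$ this reduces to the Gaussian integral $\sqrt{\pi}$ quoted in the answer.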
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Number of Spanning Trees in a certain graph Let $k,n\in\mathbb N$ and define the simple graph $G_{k,n}=([n],E)$, where $ij\in E\Leftrightarrow 0 <|i-j|\leq k$ for $i\neq j\in [n]$.
I need to calculate the number of different spanning trees.
I am applying Kirchhoff's Matrix Tree theorem to solve this, but I am not getting the answer. For example, with $k=3$ and $n=5$, my matrix is
\begin{pmatrix}
3 & -1 &-1& -1& 0\\
-1 & 4 &-1 &-1 &-1\\
-1 &-1 & 4 &-1 &-1\\
-1 &-1 &-1 & 4 &-1\\
0 & -1 &-1 &-1 &3
\end{pmatrix}
and the final answer, as per Kirchhoff's theorem, is the determinant of any cofactor of this matrix. Proceeding in this way I am getting something else, but the answer is 75.
Is there another approach to solve this problem, or is my process wrong? Please help.
Thank you
| The answer seems correct. You can check with a different method in this case, because the graph you are considering is the complete graph minus one specific edge E.
By Cayley's formula, there are $5^3=125$ spanning trees of the complete graph on 5 vertices. Each such tree has four edges, and there are 10 possible edges in the complete graph. By taking a sum over all edges in all spanning trees, you can show that $\frac{2}{5}$ of the spanning trees will contain the specific edge $E$. So the remaining number of spanning trees is $\frac{3}{5} \times 125 = 75$, which agrees with your answer.
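The cofactor determinant for the Laplacian in the question does come out to 75; an exact check over rationals (a sketch; the `det` helper is mine):

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over exact rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, result = len(A), Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            result = -result            # row swap flips the sign
        result *= A[col][col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    return result

# Laplacian of G_{3,5} from the question, with row/column 1 deleted:
minor = [[ 4, -1, -1, -1],
         [-1,  4, -1, -1],
         [-1, -1,  4, -1],
         [-1, -1, -1,  3]]
trees = det(minor)
```

So the matrix in the question is right, and the discrepancy was presumably an arithmetic slip in evaluating the determinant.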
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
convergence of a series $a_1 + a_1 a_2 + a_1 a_2 a_3 +\cdots$ Suppose all $a_n$ are real numbers and $\lim_{n\to\infty} a_n$ exists.
What is the condition for the convergence( or divergence ) of the series
$$ a_1 + a_1 a_2 + a_1 a_2 a_3 +\cdots $$
I can prove that $\lim_{n\to\infty} |a_n| < 1$ (or $> 1$) guarantees absolute convergence (or divergence).
What if $ \lim_{n\to\infty} a_n = 1 \text{ and } a_n < 1 \text{ for all } n $ ?
|
What if $\lim_{n\to\infty}a_n=1$ and $a_n<1$ for all $n$?
Then the series may or may not converge. A necessary criterion for the convergence of the series is that the sequence of products
$$p_n = \prod_{k = 1}^n a_k$$
converges to $0$.
If the $a_n$ converge to $1$ fast enough, say $a_n = 1 - \frac{1}{2^n}$ ($\sum \lvert 1 - a_n\rvert < \infty$ is sufficient, if no $a_n = 0$), the product converges to a nonzero value, and hence the series diverges.
If the convergence of $a_n \to 1$ is slow enough ($a_n = 1 - \frac{1}{\sqrt{n+1}}$ is slow enough), the product converges to $0$ fast enough for the series to converge.
Let $a_n = 1 - u_n$, with $0 < u_n < 1$ and $u_n \to 0$. Without loss of generality, assume $u_n < \frac14$.
Then $\log p_n = \sum\limits_{k = 1}^n \log (1 - u_k)$. Since for $0 < x < \frac14$, we have $-\frac32 x < \log (1-x) < -x$, we find
$$ -\frac32 \sum_{k=1}^n u_k < \log p_n < -\sum_{k=1}^n u_k,$$
and thus can deduce that if $\sum u_k < \infty$, then $\lim\limits_{n\to\infty}p_n > 0$, so the series does not converge.
On the other hand, if $\exists C > 1$ with $\sum\limits_{k = 1}^n u_k \geqslant C\cdot \log n$, then $p_n < \exp (-C \cdot\log n) = \frac{1}{n^C}$, and the series converges.
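The two regimes are easy to see numerically (a sketch; names mine): with $u_n = 2^{-n}$ the partial products stabilize at a positive value, so the series terms do not tend to 0 and the series diverges, while with $u_n = 1/\sqrt{n+1}$ the products decay to 0 rapidly:

```python
def partial_products(u, N):
    """Return p_1, ..., p_N with p_n = prod_{k<=n} (1 - u(k))."""
    ps, p = [], 1.0
    for k in range(1, N + 1):
        p *= 1.0 - u(k)
        ps.append(p)
    return ps

fast = partial_products(lambda k: 2.0 ** (-k), 200)       # sum u_k finite
slow = partial_products(lambda k: 1.0 / (k + 1) ** 0.5, 200)
```

After 200 steps the `fast` products have essentially converged to a positive limit, while the `slow` products are already negligible.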
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Show that $(x_n)$ converge to $l$.
Let $(x_n)$ be a sequence of reals. Show that if every subsequence $(x_{n_k})$ of $(x_n)$ has a further subsequence $(x_{n_{k_r}})$ that converges to $l$, then $(x_n)$ converges to $l$.
I know that every subsequence of a convergent sequence converges to the same limit as the sequence itself, but I'm not sure if I can apply this here.
Thank you.
| You are correct with your doubts as that argument applies only if you know that the sequence converges in the first place.
Now for a proof, assume the contrary, that is: there exists $\epsilon>0$ such that for all $N\in\mathbb N$ there exists $n>N$ with $|x_n-l|\ge\epsilon$.
For $N\in\mathbb N$ let $f(N)$ denote one such $n$.
Now consider the following subsequence $(x_{n_k})$ of $(x_n)$: let $n_1=f(1)$ and then recursively $n_{k+1}=f(n_k)$. Since $(n_k)$ is a strictly increasing infinite subsequence of the naturals, this gives us a subsequence $(x_{n_k})$. By construction, $|x_{n_k}-l|\ge \epsilon$ for all $k$.
On the other hand, by the given condition, there exists a subsubsequence converging to $l$, so especially some (in fact almost all) terms of the subsubsequence fulfill $|x_{n_{k_r}}-l|<\epsilon$ - which is absurd. Therefore, the assumption in the beginning was wrong. Instead, $x_n\to l$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/431681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |