Q (stringlengths 18 to 13.7k) | A (stringlengths 1 to 16.1k) | meta (dict)
---|---|---
Does the support of a Borel measure have the same measure as the whole space? Wikipedia says
Let (X, T) be a topological space. Let μ be a measure on the Borel σ-algebra on X. Then the support
(or spectrum) of μ is defined to be the set of all points x in X for
which every open neighbourhood Nx of x has positive measure.
The support of μ is a Borel-measurable subset, because
The support of a measure is closed in X.
I wonder whether the support of μ has the same measure as the whole space.
This is equivalent to saying that the complement of the support of μ has measure $0$. But the following property seems to say that this is not always true:
Under certain conditions on X and µ, for instance X being a
topological Hausdorff space and µ being a Radon measure, a measurable
set A outside the support has measure zero
So when does the support of a measure on a Borel sigma algebra have different measure from the whole space?
Thanks!
| For an example with a probability measure, consider the following standard counterexample: let $X = [0, \omega_1]$ be the uncountable ordinal space (with endpoint), with its order topology. This is a compact Hausdorff space which is not metrizable. Define a probability measure on the Borel sets of $X$ by taking $\mu(B) = 1$ if $B$ contains a closed uncountable set, $\mu(B)=0$ otherwise. It is known that this defines a countably additive probability measure; see Example 7.1.3 of Bogachev's Measure Theory for the details.
If $x \in X$ and $x < \omega_1$, then $[0, x+1)$ is an open neighborhood of $x$ which is countable, hence has measure zero. So $x$ is not in the support of $\mu$. In fact the support of $\mu$ is simply $\{\omega_1\}$. But $\mu(\{\omega_1\}) = 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Arc Length Problem I am currently in the middle of the following problem.
Reparametrize the curve $\vec{\gamma } :\Bbb{R} \to \Bbb{R}^{2}$ defined by $\vec{\gamma}(t)=(t^{3}+1,t^{2}-1)$ with respect to arc length measured from $(1,-1)$ in the direction of increasing $t$.
By reparametrizing the curve, does this mean I should write the equation in Cartesian form? If so, I carry on as follows.
$x=t^{3}+1$ and $y=t^{2}-1$
Solving for $t$
$$t=\sqrt[3]{x-1}$$
Thus,
$$y=(x-1)^{2/3}-1$$
Letting $y=f(x)$, the arclength can be found using the formula
$$s=\int_{a}^{b}\sqrt{1+[f'(x)]^{2}}\cdot dx$$
Finding the derivative yields
$$f'(x)=\frac{2}{3\sqrt[3]{x-1}}$$
and
$$[f'(x)]^{2}=\frac{4}{9(x-1)^{2/3}}.$$
Putting this into the arclength formula, and using the proper limits of integration (found by using $t=1,-1$ with $x=t^{3}+1$) yields
$$s=\int_{0}^{2}\sqrt{1+\frac{4}{9(x-1)^{2/3}}}\cdot dx$$
I am now unable to continue with the integration as it has me stumped. I cannot factor anything etc. Is there some general way to approach problems of this kind?
| Hint:
Substitute $(x-1)^{1/3}=t$. Since $\sqrt{t^2}=|t|$, your integral will boil down to $$\int_{-1}^1|t|\sqrt{4+9t^2}\,\rm dt=2\int_{0}^{1}t\sqrt{4+9t^2}\,\rm dt$$
Now set $4+9t^2=u$ and note that $\rm du=18t~~\rm dt$, which will complete the computation. (Note that you need to change the limits of integration while integrating over $u$.)
A Longer way:
Now integrate by parts with $u=t$ and $\rm d v=\sqrt{4+9t^2}\rm dt$ and to get $v$, you'd like to use the substitution $t=\dfrac{2\tan \theta}{3}$
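As a quick numerical sanity check in Python (note the $|t|=\sqrt{t^2}$ that arises in the substitution, so by symmetry the integral equals $2\int_0^1 t\sqrt{4+9t^2}\,dt$):

```python
import math

# closed form from u = 4 + 9t^2:  2 * [(4 + 9t^2)^{3/2} / 27] evaluated from 0 to 1
closed = (2.0 / 27.0) * (13.0 * math.sqrt(13.0) - 8.0)

# midpoint rule for the substituted integral over [-1, 1]; note the |t|,
# since sqrt(t^2) = |t| when x - 1 = t^3 runs through negative values too
N = 100_000
h = 2.0 / N
numeric = sum(abs(-1.0 + (i + 0.5) * h) * math.sqrt(4.0 + 9.0 * (-1.0 + (i + 0.5) * h) ** 2)
              for i in range(N)) * h
```

The midpoint rule agrees with the closed form $\frac{2}{27}(13\sqrt{13}-8)\approx 2.879$ obtained from the $u$-substitution.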
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
The derivative of a complex function.
Question:
Find all points at which the complex-valued function $f$ defined by $$f(z)=(2+i)z^3-iz^2+4z-(1+7i)$$ has a derivative.
I know that $z^3$,$z^2$, and $z$ are differentiable everywhere in the domain of $f$, but how can I write my answer formally? Please can somebody help?
Note:I want to solve the problem without using Cauchy-Riemann equations.
| I'm not sure where the question is coming from (what you know/can know/etc.).
But some things that you might use: you might just give the derivative, if you know how to take it. Perhaps you might verify it with the Cauchy-Riemann equations. Alternatively, differentiation is linear (which you might prove, if you haven't), and finite linear combination of differentiable functions is also differentiable. Or you know the series expansion, it's finite, and converges in infinite radius - thus it's holomorphic.
Any one of these would lead to a complete solution. Does that make sense?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Strengthened finite Ramsey theorem I'm reading wikipedia article about Paris-Harrington theorem, which uses strengthened finite Ramsey theorem, which is stated as "For any positive integers $n, k, m$ we can find $N$ with the following property: if we color each of the $n$-element subsets of $S = \{1, 2, 3,..., N\}$ with one of $k$ colors, then we can find a subset $Y$ of $S$ with at least $m$ elements, such that all $n$ element subsets of $Y$ have the same color, and the number of elements of $Y$ is at least the smallest element of $Y$".
After poking around for a while I interpreted this as follows.
Let $n,k$ and $m$ be positive integers. Let $S(N)=\{1,...,N\}$ and let $S^{(n)}(N)$ be the set of $n$-element subsets of $S(N)$. Let $f:S^{(n)}(N)\to \{1,...,k\}$ be some $k$-colouring of $S^{(n)}(N)$. The theorem states that for any $n, k, m$ there is a number $N$ for which we can find $Y\subseteq S(N)$ such that $|Y|\geq m$, the induced colouring $f':Y^{(n)}\to \{1,...,k\}$ boils down to a constant function (every $n$-subset of $Y$ has the same colour), and $|Y|\geq\min{Y}$.
Is this correct?
I also faced another formulation where it is stated that "for any $m,n,c$ there exists a number $N$ such that for every colouring $f$ of $m$-subsets of $\{0,...,N-1\}$ into $c$ colours there is an $f$-homogeneous $H\subset\{0,..,N-1\}$...".
How $f$-homogeneous is defined?
| Yes, your interpretation of the first formulation is correct.
In the second formulation the statement that $H$ is $f$-homogeneous simply means that every $m$-subset of $H$ is given the same color by $f$: in your notation, the induced coloring $$f\,':H^{(m)}\to\{0,\dots,c-1\}$$ is constant. However, the formulation is missing the important requirement that $|H|\ge\min H$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What on earth does "$r$ is not a root" even mean? Method of Undetermined Coeff Learning ODE now, and using method of Undetermined Coeff
$$y'' +3y' - 7y = t^4 e^t$$
The book said that $r = 1$ is not a root of the characteristic equation. The characteristic equation is $r^2 + 3r - 7 = 0$ and the roots are $r = \frac{-3 \pm \sqrt{37}}{2}$
Where on earth are they getting $r = 1$ from?
| $1$ comes from the $e^t$ on the right side. If it was $e^{kt}$ they would take $r=k$.
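A quick Python check that $r=1$ (the exponent in $e^{1\cdot t}$) is not a root, and that the actual roots are $\frac{-3\pm\sqrt{37}}{2}$:

```python
import math

p = lambda r: r ** 2 + 3 * r - 7        # characteristic polynomial of y'' + 3y' - 7y = 0
root_plus = (-3 + math.sqrt(37)) / 2
root_minus = (-3 - math.sqrt(37)) / 2
resonance = (p(1) == 0)                 # r = 1 is the exponent of e^t on the right side
```

Since `resonance` is false, the usual undetermined-coefficients guess $(At^4+Bt^3+Ct^2+Dt+E)e^t$ needs no extra factor of $t$.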
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Complex Roots of Unity and the GCD I'm looking for a proof of this statement. I just don't know how to approach it. I recognize that $z$ has $a$ and $b$ roots of unity, but I can't seem to figure out what that tells me.
If $z \in \mathbb{C}$ satisfies $z^a = 1$ and $z^b = 1$ then
$z^{gcd(a,b)} = 1$.
| Hint $\:$ The set of $\rm\:n\in \mathbb Z$ such that $\rm\:z^n = 1\:$ is closed under subtraction so it is closed under $\rm\:gcd$.
Recall gcds may be computed by repeated subtraction (anthyphairesis, Euclidean algorithm)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Determining variance from sum of two random correlated variables I understand that the variance of the sum of two independent normally distributed random variables is the sum of the variances, but how does this change when the two random variables are correlated?
| Let's work this out from the definitions. Let's say we have 2 random variables $x$ and $y$ with means $\mu_x$ and $\mu_y$. Then variances of $x$ and $y$ would be:
$${\sigma_x}^2 = \frac{\sum_i(\mu_x-x_i)(\mu_x-x_i)}{N}$$
$${\sigma_y}^2 = \frac{\sum_i(\mu_y-y_i)(\mu_y-y_i)}{N}$$
Covariance of $x$ and $y$ is:
$${\sigma_{xy}} = \frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N}$$
Now, let us consider the weighted sum $p$ of $x$ and $y$:
$$\mu_p = w_x\mu_x + w_y\mu_y$$
$${\sigma_p}^2 = \frac{\sum_i(\mu_p-p_i)^2}{N} = \frac{\sum_i(w_x\mu_x + w_y\mu_y - w_xx_i - w_yy_i)^2}{N} = \frac{\sum_i(w_x(\mu_x - x_i) + w_y(\mu_y - y_i))^2}{N} = \frac{\sum_i(w^2_x(\mu_x - x_i)^2 + w^2_y(\mu_y - y_i)^2 + 2w_xw_y(\mu_x - x_i)(\mu_y - y_i))}{N} \\ = w^2_x\frac{\sum_i(\mu_x-x_i)^2}{N} + w^2_y\frac{\sum_i(\mu_y-y_i)^2}{N} + 2w_xw_y\frac{\sum_i(\mu_x-x_i)(\mu_y-y_i)}{N} \\ = w^2_x\sigma^2_x + w^2_y\sigma^2_y + 2w_xw_y\sigma_{xy}$$
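The identity above is exact for population-style sample moments, which is easy to confirm numerically; here is a short Python check with simulated correlated data (the weights and sample size are arbitrary choices):

```python
import random

random.seed(1)
N = 10_000
xs = [random.gauss(0, 1) for _ in range(N)]
ys = [0.5 * x + random.gauss(0, 1) for x in xs]   # ys is correlated with xs
wx, wy = 2.0, 3.0                                 # arbitrary weights

mx, my = sum(xs) / N, sum(ys) / N
var_x = sum((x - mx) ** 2 for x in xs) / N
var_y = sum((y - my) ** 2 for y in ys) / N
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / N

ps = [wx * x + wy * y for x, y in zip(xs, ys)]
mp = sum(ps) / N
var_p_direct = sum((p - mp) ** 2 for p in ps) / N
var_p_formula = wx ** 2 * var_x + wy ** 2 * var_y + 2 * wx * wy * cov_xy
```

The two computations of $\sigma_p^2$ match to floating-point precision, as they must: the derivation is a pure algebraic identity on the data.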
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 4,
"answer_id": 1
} |
Is every invertible rational function of order 0 on a codim 1 subvariety in the local ring of the subvariety? I have been trying to read Fulton's Intersection Theory, and the following puzzles me.
All schemes below are algebraic over some field $k$ in the sense that they come together with a morphism of finite type to $Spec k$.
Let $X$ be a variety (reduced irreducible scheme), and let $Y$ be a codimension $1$ subvariety, and let $A$ be its local ring (in particular a $1$-dimensional Noetherian local domain). Let $K(X)$ be the ring of rational functions on $X$ (the local ring at the generic point of X). Let $K(X)^*$ be the abelian group (under multiplication) of units of $K(X)$, and $A^*$ -- the group of units of $A$.
On the one hand, for any $r\in K(X)^*$ define the order of vanishing of $r$ at $Y$ to be $ord_Y(r)=l_A(A/(a))-l_A(A/(b))$ where $r=a/b$ with $a,b\in A$, and $l_A(M)$ is the length of an $A$-module $M$. On the other hand, it turns out that $Y$ is in the support of the principal Cartier divisor $div(r)$ if and only if $r\not\in A^*\subset K(X)^*$.
It is obvious that $Y$ not in the support of $div(r)$ implies that $ord_Y(r)=0$: the former says that $r\in A^*$, from which it follows that $ord_Y(b)=ord_Y(rb)=ord_Y(a)$, since obviously $ord_Y(u)=0$ for any unit $u$. The contrapositive states that $ord_Y(r)\neq0$ implies that $Y$ is in the support of $div(r)$. Because the latter can be shown to be a proper closed set, it contains only finitely many codimension $1$ subvarieties, which shows that the associated Weil divisor $[D]=\sum_Y ord_Y(r)[Y]$ is well-defined.
What is not clear to me is whether or not the converse is true, i.e. whether $Y$ in the support of $div(r)$ implies that $ord_Y(r)\neq0$. I find myself doubting, since if I am not mistaken, this is equivalent to the statement $l_A(A/(a))=l_A(A/(b))$ if and only if $(a)=(b)$, where $A$ is any $1$-dimensional local Noetherian domain (maybe even a $k$-algebra), which seems much too strong. Any (geometric) examples to give me an idea of what is going on would be much appreciated.
The support of a Cartier divisor $D$ on $X$ is the union of all closed subvarieties $Z\subset X$ such that the local equation of $D$ at the generic point $z$ of $Z$ is not a unit of the local ring $O_{X,z}$. Note that $Z$ can be of codimension $>1$. However, let $f_Z$ be the local equation of $D$ and let $p\in\mathrm{Spec}(O_{X,z})$ such that $f_ZO_{X,z}\subseteq p$ and $p$ is minimal with that property. Then by the Principal Ideal Theorem $p$ is of height $1$ and $f_Z$ is not a unit in the localization $(O_{X,z})_p$. The latter is the local ring of the codimension $1$ subvariety $Y$ having $p$ as its generic point. This shows that if $Z$ is in the support of $D$, then some codimension $1$ subvariety $Y\subset X$ with $Z\subseteq Y$ is in the support of $D$. Does this solve your problem?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
confusion on legendre symbol I know that $\left(\frac{1}{2}\right)=1$ since $1^2\equiv 1 \pmod2$. Now since
$3\equiv 1\pmod2$, we should have $\left(\frac{3}{2}\right)=\left(\frac{1}{2}\right)=1$, but in Maple I get that $\left(\frac{3}{2}\right)=-1$. Why?
| The Legendre symbol, the Jacobi symbol and the Kronecker symbol are successive generalizations that all share the same notation. The first two are usually only defined for odd lower arguments (primes in the first case), whereas the Kronecker symbol is also defined for even lower arguments.
Since the distinction is merely historic, I guess it makes sense for math software to treat them all the same; Wolfram|Alpha returns $-1$ for JacobiSymbol(3,2). See the Wikipedia article for the definition for even lower arguments; the interpretation that a value of $-1$ indicates a quadratic non-residue is no longer valid in this case.
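The even-denominator rule is easy to implement directly. The sketch below is a hand-rolled Kronecker symbol (not Maple's implementation); the key case is $(\frac{a}{2}) = 0$ for even $a$, $+1$ for $a\equiv\pm1\pmod 8$, and $-1$ for $a\equiv\pm3\pmod 8$, which is exactly why $(\frac{3}{2})=-1$:

```python
def kronecker(a, n):
    """Kronecker symbol (a/n): extends the Jacobi symbol to even
    (and non-positive) lower arguments."""
    if n == 0:
        return 1 if a in (1, -1) else 0
    k = 1
    if n < 0:
        n = -n
        if a < 0:
            k = -1              # the (a/-1) factor is -1 for a < 0
    while n % 2 == 0:           # strip factors of 2 from the bottom
        n //= 2
        if a % 2 == 0:
            return 0            # (even/2) = 0
        if a % 8 in (3, 5):     # a = +-3 (mod 8)
            k = -k
    a %= n                      # the Jacobi symbol is periodic mod n for odd n > 0
    while a != 0:               # standard Jacobi computation via reciprocity
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                k = -k
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            k = -k
        a %= n
    return k if n == 1 else 0
```

For odd prime lower arguments this reduces to the Legendre symbol, so the same routine reproduces both behaviours.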
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What's the sum of $\sum_{k=1}^\infty \frac{2^{kx}}{e^{k^2}}$? I already asked a similar question on another post:
What's the sum of $\sum \limits_{k=1}^{\infty}\frac{t^{k}}{k^{k}}$?
There are no problems with establishing a convergence for this power series:
$$\sum_{k=1}^\infty \frac{2^{kx}}{e^{k^2}}$$
but I have problems in determining its sum.
| $$\sum_{k=1}^{\infty}\frac{2^{kx}}{e^{k^{2}}} = -\frac{1}{2} + \frac{1}{2} \prod_{m=1}^{\infty} \left( 1 - \frac{1}{e^{2m}} \right) \left( 1+ \frac{ 2^x }{e^{2m-1} } \right) \left( 1 + \frac{1}{2^x e^{2m-1} }\right ). $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Similarity Transformation Let $G$ be a subgroup of $\mathrm{GL}(n,\mathbb{F})$. Denote by $G^T$ the set of transposes of all elements in $G$. Can we always find an $M\in \mathrm{GL}(n,\mathbb{F})$ such that $A\mapsto M^{-1}AM$ is a well-defined map from $G$ to $G^T$?
For example if $G=G^T$ then any $M\in G$ will do the job.
Another example: let $U$ be the set of all invertible upper triangular matrices. Take $M=(e_n\,\cdots\,e_2\,e_1)$ where $e_i$ are the column vectors that make $I=(e_1\,e_2\,\cdots\,e_n)$ the identity matrix. Then $M$ does the job. For $n=3$, here is what $M$ does:
$$\begin{pmatrix}a&b&c\\ 0&d&e\\ 0&0&f\end{pmatrix}\mapsto M^{-1}\begin{pmatrix}a&b&c\\ 0&d&e\\ 0&0&f\end{pmatrix} M=\begin{pmatrix}f&e&c\\0&d&b\\0&0&a\end{pmatrix}^T$$
Thank you.
| The answer is no, in general. For example, when ${\mathbb F}$ is the field of two elements, let $G$ be the stabilizer of the one-dimensional subspace of ${\mathbb F}^3,$ (viewed as column vectors, with ${\rm GL}(3,\mathbb{F})$ acting by left multiplication) consisting of vectors with $0$ in positions $2$ and $3$. Then $G \cong S_{4},$ but $G$ does not stabilize any $2$-dimensional subspace. However $G^{T}$ is the stabilizer of the $2$-dimensional subspace consisting of vectors with $0$ in position $1$. Hence the subgroups $G$ and $G^{T}$ are not conjugate within ${\rm GL}(3,\mathbb{F}).$
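Since $\mathrm{GL}(3,\mathbb{F}_2)$ has only $168$ elements, this counterexample can be verified by brute force; here is a Python sketch (the helper names are mine):

```python
from itertools import product

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % 2
                       for j in range(3)) for i in range(3))

def det2(A):
    # determinant mod 2 (signs are irrelevant in characteristic 2)
    return (A[0][0] * (A[1][1] * A[2][2] + A[1][2] * A[2][1])
          + A[0][1] * (A[1][0] * A[2][2] + A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] + A[1][1] * A[2][0])) % 2

mats = [(bits[0:3], bits[3:6], bits[6:9]) for bits in product((0, 1), repeat=9)]
GL3 = [A for A in mats if det2(A) == 1]
inverse = {A: next(B for B in GL3 if mat_mul(A, B) == I) for A in GL3}

# G = stabilizer of the line spanned by e1: first column equal to (1,0,0)^T
G = frozenset(A for A in GL3 if (A[0][0], A[1][0], A[2][0]) == (1, 0, 0))
GT = frozenset(tuple(zip(*A)) for A in G)   # the set of transposes

# check that no M in GL(3, F_2) conjugates G onto G^T
conjugate_exists = any(
    frozenset(mat_mul(mat_mul(inverse[M], A), M) for A in G) == GT
    for M in GL3)
```

The search confirms $|G| = 24$ (so $G\cong S_4$) and that no conjugating $M$ exists.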
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Should I ignore $0$ when do inverse transform sampling? Generic method
*
*Generate $U \sim \mathrm{Uniform}(0,1)$.
*Return $F^{-1}(U)$.
So, in step 1, $U$ has domain/support $[0,1]$, so it is possible that $U=0$ or $U=1$,
but $F^{-1}(0)=-\infty$. Should I reject the values $U=0$ and $U=1$ before applying step 2?
For example, discrete distribution sampling: $X$ takes on values $x_1, x_2, x_3$ with probability $p_1,p_2,p_3$
*
*Generate $U \sim \mathrm{Uniform}(0,1)$.
*Find the smallest $k$ such that $F(x_k)\geq U$ ($F$ is the CDF).
However, if $U=0$ and $p_1=0$, $k$ would be $1$. It could generate $x_1$ even though its probability is $p_1=0$. Is that acceptable?
| In theory, it doesn't matter: the event $U=0$ occurs with probability $0$, and can thus be ignored. (In probabilistic jargon, it almost never happens.)
In practice, it's possible that your PRNG may return a value that is exactly $0$. For a reasonably good PRNG, this is unlikely, but it may not be quite unlikely enough for you to bet that it never happens. Thus, if your program would crash on $U=0$, a prudent programmer should check for that event and generate a new random number if it occurs.
(Note that many PRNG routines are defined to return values in the range $0 \le U < 1$: for example, the Java default PRNG is defined to return values of the form $m\cdot2^{-53}$ for $m \in \{0,1,\dotsc,2^{53}-1\}$. If you're using such a PRNG, a cheap but effective way to avoid the case $U=0$ is to use the number $1-U$ instead of $U$.)
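A short Python sketch of the discrete sampler using the $1-U$ trick (the function name is mine):

```python
import bisect
import random
from itertools import accumulate

def inverse_transform_sample(xs, ps, u=None):
    """Inverse-transform sampling for a discrete distribution.

    Python's random.random() returns u in [0, 1); using 1 - u gives a
    value in (0, 1], which sidesteps the U = 0 edge case discussed above,
    so outcomes with probability 0 can never be returned."""
    if u is None:
        u = random.random()            # u in [0, 1)
    u = 1.0 - u                        # now u in (0, 1]
    cdf = list(accumulate(ps))         # p1, p1+p2, ..., 1
    # smallest k with cdf[k] >= u; clamp guards against float round-off
    k = min(bisect.bisect_left(cdf, u), len(xs) - 1)
    return xs[k]
```

With $p_1 = 0$ the first entry of the CDF is $0$, and since the transformed $u$ is strictly positive, index $0$ is never selected.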
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/115805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If we know the GCD and LCM of two integers, can we determine the possible values of these two integers? I know that $\gcd(a,b) \cdot \operatorname{lcm}(a,b) = ab$, so if we know $\gcd(a,b)$ and $\operatorname{lcm}(a,b)$ and we want to find out $a$ and $b$, besides factoring $ab$ and find possible values, can we find these two integers faster?
| If you scale the problem by dividing through by $\rm\:gcd(a,b)\:$ then you are asking how to determine coprime factors of a product. This is equivalent to factoring integers.
Your original question, in the special case $\rm\:gcd(a,b) = lcm(a,b),\:$ is much easier:
Hint $\:$ In any domain $\rm\:gcd(a,b)\ |\ a,b\ |\ lcm(a,b)\ $ so $\rm\:lcm(a,b)\ |\ gcd(a,b)\ \Rightarrow $ all four are associate. Hence all four are equal if they are positive integers. If they are polynomials $\ne 0$ over a field then they are equal up to nonzero constant multiples, i.e. up to unit factors.
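For small numbers, the reduction to coprime factorizations of $\operatorname{lcm}/\gcd$ can be carried out by brute force; a Python sketch (the helper name is mine):

```python
from math import gcd

def pairs_with_gcd_lcm(g, l):
    """All pairs (a, b), a <= b, with gcd(a, b) = g and lcm(a, b) = l.

    Writing a = g*u, b = g*v reduces the problem to finding coprime
    factorizations u*v = l // g, which is why in general this is as
    hard as factoring l // g."""
    if l % g:
        return []                # no solutions unless g divides l
    m = l // g
    pairs = []
    for u in range(1, int(m ** 0.5) + 1):
        if m % u == 0 and gcd(u, m // u) == 1:
            pairs.append((g * u, g * (m // u)))
    return pairs
```

For example, $\gcd = 2$ and $\operatorname{lcm} = 12$ admit exactly the pairs $(2,12)$ and $(4,6)$, while the special case $\gcd=\operatorname{lcm}$ forces $a=b$.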
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
Multiple function values for a single x-value I'm curious if it's possible to define a function that would have more than two function values for one single x-value.
I know that it's possible to get two y-values by using the root (one positive, one negative: $\sqrt{4} = -2 ; +2$).
Is it possible to get three or more function values for one single x-value?
Let's consider some multivalued functions (not 'functions', since a function is single-valued by definition):
The equation $y=x^n$ has $n$ different solutions $x=\sqrt[n]{y}\cdot e^{2\pi i \frac kn}$ (no more than two of which will be real)
The inverse of periodic functions will be multivalued (arcsine, arccosine and so on...) with an infinity of branches (the principal branch will usually be considered and the principal value returned).
The logarithm is interesting too (every branch gives an additional $2\pi i$).
$i^i$ has an infinity of real values since $i^i=(e^{\pi i/2+2k \pi i})^i=e^{-\pi/2-2k \pi}$ (replace one of the $i$ by $ix$ to get a multivalued function).
The Lambert W function has two rather different branches $W_0$ and $W_{-1}$
and so on...
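For example, the $n$ branches of the $n$-th root are easy to enumerate numerically in Python:

```python
import cmath
import math

def nth_roots(y, n):
    """All n complex solutions x of x**n == y (for y != 0), i.e. the
    n branches of the multivalued n-th root."""
    r = abs(y) ** (1.0 / n)                 # common modulus of all roots
    theta = cmath.phase(complex(y))         # principal argument of y
    return [cmath.rect(r, (theta + 2 * math.pi * k) / n) for k in range(n)]
```

`nth_roots(4, 2)` reproduces the $\pm2$ from the question, and `nth_roots(8, 3)` gives three values for a single input, only one of them real.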
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Find Double of Distance Between 2 Quaternions I want to find the geometric equivalent of vector addition and subtraction in 3d for quaternions. In 3d, the difference between 2 points ($a$ and $b$) gives the vector from one point to the other: $(b-a)$ gives the vector from $a$ to $b$, and when I add this to $b$ I find the point at double the distance from $a$ in the direction of $(b-a)$. I want to do the same thing for unit quaternions, but they lie on the 4d sphere, so direct addition won't work. I want to find the equivalent equation for $a-b$ and $a+b$ where $a$ and $b$ are unit quaternions. It should be something similar to slerp, but it is not intuitive to me how to use it here because addition produces a quaternion outside the arc between the 2 quaternions.
| Slerp is exactly what you want, except with the interpolation parameter $t$ set to $2$ instead of lying between $0$ and $1$. Slerp is nothing but a constant-speed parametrization of the great circle between two points $a$ and $b$ on a hypersphere, such that $t = 0$ maps to $a$ and $t = 1$ maps to $b$. Setting $t = 2$ will get you the point on the great circle as far from $b$ as $b$ is from $a$. See my other answer to a related question on scaling figures lying in a hypersphere.
Update: Actually, it just occurred to me that this is overkill, though it gives the right answer. The simpler solution is that the quaternion that maps $a$ to $b$ is simply $ba^{-1}$ (this plays the role of "$b-a$"), and applying that quaternion to $b$ gives you $ba^{-1}b$ (analogous to "$(b - a) + b$") which is what you want.
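A minimal Python sketch of the simpler solution, representing quaternions as $(w,x,y,z)$ tuples (the helper names are mine):

```python
import math

def q_mul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)   # for unit quaternions, the conjugate is the inverse

def double_step(a, b):
    """The quaternion analogue of (b - a) + b: apply the relative
    rotation b * a^{-1} to b, giving b * a^{-1} * b."""
    return q_mul(q_mul(b, q_conj(a)), b)
```

For instance, with $a$ the identity and $b$ a rotation by $\theta$ about the $z$-axis, `double_step(a, b)` is the rotation by $2\theta$, exactly "as far past $b$ as $b$ is past $a$".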
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that this limit is equal to $\liminf a_{n}^{1/n}$ for positive terms.
Show that if $a_{n}$ is a sequence of positive terms such that $\lim\limits_{n\to\infty} (a_{n+1}/a_n) $ exists, then this limit is equal to $\liminf\limits_{n\to\infty} a_n^{1/n}$.
I am not even sure where to start from; any help would be much appreciated.
| I saw this proof today and thought it's a nice one:
Let $a_n\ge 0$, $\lim\limits_{n \to \infty}a_n=L$. So there are 2 options:
(1) $L>0$:
$$
\lim\limits_{n \to \infty}a_n=L
\iff \lim\limits_{n \to \infty}\frac{1}{a_n}=\frac{1}{L}$$
Using Cauchy's Inequality Of Arithmetic And Geometric Means we get:
$$\frac{n}{a_1^{-1}+\dots+a_n^{-1}}\le\sqrt[n]{a_1\cdots a_n}\le \frac{a_1+ \cdots + a_n}{n}$$
Applying the Cesàro theorem to $a_n$, notice that the RHS $\mathop{\to_{n \to \infty}} L$, and by applying the Cesàro theorem to $1/a_n$, the LHS $\mathop{\to_{n \to \infty}} \frac{1}{1/L}=L$. And so by the squeeze theorem:
$$\lim\limits_{n \to \infty}\sqrt[n]{a_1\cdots a_n}=L$$
(2) $L=0$:
$$0\le\sqrt[n]{a_1\cdots a_n}\le \frac{a_1+ \cdots + a_n}{n} $$
$$\Longrightarrow\lim\limits_{n \to \infty}\sqrt[n]{a_1\cdots a_n}=0=L$$
Now, define:
$$b_n = \begin{cases}{a_1} &{n=1}\\\\ {\frac{a_n}{a_{n-1}}} &{n>1}\end{cases}$$
and assume $\lim\limits_{n \to \infty}b_n=B $.
Applying the above result on $b_n$ we get:
$$\frac{n}{b_1^{-1}+\dots+b_n^{-1}}\le\sqrt[n]{b_1\cdots b_n}\le \frac{b_1+ \cdots + b_n}{n}$$
$$\frac{n}{b_1^{-1}+\dots+b_n^{-1}}\le\sqrt[n]{a_1\cdot (a_2/a_1)\cdots (a_n/a_{n-1})}\le \frac{b_1+ \cdots + b_n}{n}$$
$$\frac{n}{b_1^{-1}+\dots+b_n^{-1}}\le\sqrt[n]{a_n}\le \frac{b_1+ \cdots + b_n}{n}$$
$$\Longrightarrow\lim\limits_{n \to \infty}\sqrt[n]{a_n}=B$$
So we can conclude that if $\lim\limits_{n\to\infty} \frac{a_{n+1}}{a_n}$ exists and equals $B$, then $\lim\limits_{n \to \infty}\sqrt[n]{a_n}=B$.
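A quick numerical illustration in Python with $a_n = n\,2^n$, where both the ratio and the $n$-th root tend to $2$ (logarithms are used to avoid overflow):

```python
import math

# take a_n = n * 2**n; then a_{n+1}/a_n = 2(n+1)/n -> 2, and by the result
# above a_n^{1/n} must converge to the same limit 2
n = 2000
ratio = 2.0 * (n + 1) / n                              # a_{n+1} / a_n, simplified
root = math.exp((math.log(n) + n * math.log(2)) / n)   # a_n^{1/n}, computed via logs
```

At $n=2000$ both quantities already agree with the limit to within $10^{-2}$.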
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
proving by $\epsilon$-$\delta$ approach that $\lim_{(x,y)\rightarrow (0,0)}\frac {x^n-y^n}{|x|+|y|}$ exists for $n\in \mathbb{N}$ and $n>1$ As the title says: how does one prove by the $\epsilon$-$\delta$ approach that $\lim_{(x,y)\rightarrow (0,0)}\frac {x^n-y^n}{|x|+|y|}$ exists for $n\in \mathbb{N}$ and $n>1$?
| You may use that
$$\left|\frac{x^n-y^n}{|x|+|y|}\right|\leq \frac{|x|^n+|y|^n}{|x|+|y|}= \frac{|x|}{|x|+|y|}|x|^{n-1}+\frac{|y|}{|x|+|y|}|y|^{n-1}\leq|x|^{n-1}+|y|^{n-1}.$$
Since you impose $x^2+y^2< \delta \leq 1$ you have $|x|, |y|<1\Rightarrow |x|^{n-1}\leq|x|,\ |y|^{n-1}\leq|y|.$
Then you have
$|x|^{n-1}+|y|^{n-1}\leq|x|+|y|\leq 2\sqrt{x^2+y^2}$.
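The first bound can be spot-checked numerically in Python by random sampling (here with $n=3$):

```python
import random

random.seed(0)
n = 3
worst_gap = float("-inf")   # largest observed violation of lhs <= bound
for _ in range(10_000):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    if x == 0.0 and y == 0.0:
        continue            # the function is not defined at the origin
    lhs = abs((x ** n - y ** n) / (abs(x) + abs(y)))
    bound = abs(x) ** (n - 1) + abs(y) ** (n - 1)
    worst_gap = max(worst_gap, lhs - bound)
```

A non-positive `worst_gap` means no sampled point violated the inequality.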
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
RSA cryptography Algebra
This is a homework problem I am trying to do.
I have done part 2i) as well as 2ii) and know how to do the rest. I am stuck on 2iii) and 2vii).
I truly don't know 2vii) because it could be some special case I am not aware of. As for 2iii), I tried to approach it the same way as I did 2ii): there I said you could take the two equations, use substitution to isolate each variable, and plug it in to find your variable values, but for 2iii) that doesn't work, so I don't know how to do it.
For $s$ sufficiently small, we can go from $b^2=n+s^2$ to $b^2\approx n$. Take the square root and you approximately have the average of $p$ and $q$. Since $s$ is small, so is their difference (relatively), so we can search around $\sqrt{n}$ for $p$ or $q$. Part (iv) means the absolute difference, and should have said so. Take the square root of your number and you will find $p$ and $q$ very close nearby.
For (vii), say cipher and plaintexts are equal, so $m\equiv m^e$ modulo $n$. There are only a certain number of $m$ that allow this: those with $m\equiv0$ or $1$ mod $n$, or $p|m$ and $q|(m-1)$ or vice versa, by elementary number theory. If the scheme uses padding to avoid these messages, then no collision is possible between plain and cipher, but otherwise, if you allow arbitrary numbers as messages, it clearly is.
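The "search near $\sqrt n$" idea in the first paragraph is Fermat's factorization method; a Python sketch (the test modulus below is an arbitrary small example):

```python
import math

def fermat_factor(n):
    """Fermat's method: try b = ceil(sqrt(n)), ceil(sqrt(n)) + 1, ... until
    b*b - n is a perfect square s*s; then n = (b - s)(b + s).
    Fast exactly when p and q are close together, i.e. when s is small."""
    b = math.isqrt(n)
    if b * b < n:
        b += 1
    while True:
        s2 = b * b - n
        s = math.isqrt(s2)
        if s * s == s2:
            return b - s, b + s
        b += 1
```

For $n = 101 \cdot 103 = 10403$ the very first candidate $b = 102$ already works, since $102^2 - 10403 = 1$.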
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the value of $\sin(x)$ if $x$ tends to infinity?
What is the value of $\sin(x)$ if $x$ tends to infinity?
As in wikipedia entry for "Sine", the domain of $\sin$ can be from $-\infty$ to $+\infty$. What is the value of $\sin(\infty)$?
| Suppose $\lim_{x \to \infty} \sin(x) = L$. $\frac{1}{2} > 0$, so we may take $\epsilon = \frac{1}{2}$.
Let $N$ be any positive natural number. Then $2\pi (N + \frac{1}{4}) > N$, as is $2\pi (N+\frac{3}{4})$.
But $\sin(2\pi (N + \frac{1}{4})) = \sin(\frac{\pi}{2}) = 1$.
so if $L < 0$, we have a $y > N$ (namely $2\pi (N + \frac{1}{4})$) with:
$|\sin(y) - L| = |1 - L| = |1 + (-L)| = 1 + |L| > 1 > \epsilon = \frac{1}{2}$.
similarly, if $L \geq 0$, we have for $ y = 2\pi (N+\frac{3}{4}) > N$:
$|\sin(y) - L| = |-1 - L| = |(-1)(1 + L)| = |-1||1 + L| = |1 + L| = 1 + L \geq 1 > \epsilon = \frac{1}{2}$.
thus there is NO positive natural number N such that:
$|\sin(y) - L| < \frac{1}{2}$ when $y > N$, no matter how we choose L.
since every real number L fails this test for this particular choice of $\epsilon$, $\lim_{x \to \infty} \sin(x)$ does not exist.
(edit: recall that $\lim_{x \to \infty} f(x) = L$ means that for every $\epsilon > 0$, there is a positive real number M such that $|f(y) - L| < \epsilon$ whenever $y > M$. note that there is no loss of generality by taking M to be a natural number N, since we can simply choose N to be the next integer greater than M.)
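The two subsequences used in the argument exceed any bound $N$ while pinning $\sin$ at $\pm1$; a quick numerical check in Python:

```python
import math

Ns = range(1, 6)
ys1 = [2 * math.pi * (N + 0.25) for N in Ns]   # here sin equals  1
ys2 = [2 * math.pi * (N + 0.75) for N in Ns]   # here sin equals -1
vals1 = [math.sin(y) for y in ys1]
vals2 = [math.sin(y) for y in ys2]
```

Since the two subsequences stay a distance $2$ apart, no single limit $L$ can be within $\frac{1}{2}$ of both.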
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
The position of a particle moving along a line is given by $ 2t^3 -24t^2+90t + 7$ for $t >0$ For what values of $t$ is the speed of the particle increasing?
I tried to find the first derivative and I get
$$6t^2-48t+90 = 0$$
$$ t^2-8t+15 = 0$$
Which is giving me $ t>5$ and $0 < t < 3$, but the book gives a different answer
| Let's be careful. The velocity is $6(t^2-8t+15)$. This is $\ge 0$ when $t \ge 5$ and when $t\le 3$. So on $(5,\infty)$, and also on $(0,3)$, this is the speed. It is not the speed on $(3,5)$. There the speed is $-6(t^2-8t+15)$.
When $t > 5$ and also when $t< 3$, the derivative of speed is $6(2t-8)$, and is positive when $t>4$. So the speed is certainly increasing over the interval $(5,\infty)$.
In the interval $(3,5)$, the derivative of speed is $-6(2t-8)$. This is positive in the interval $(3,4)$.
So the speed is increasing on $(5,\infty)$ and on $(3,4)$.
Note that the derivative of speed does not exist at $3$ and at $5$.
Remark: Occasionally, I have asked questions of this nature, though not quite as sadistic. Here there is a double twist. Even if the student notices that the question doesn't ask where $s(t)$ is increasing, there is the velocity/speed trap as backup. Not a good idea, it only proves one can fool most of the people most of the time.
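A quick numerical check of these intervals in Python, using speed $=|v(t)|=|6(t-3)(t-5)|$:

```python
def speed(t):
    # velocity v(t) = s'(t) = 6t^2 - 48t + 90 = 6(t - 3)(t - 5);
    # speed is its absolute value, which is not differentiable at t = 3 and t = 5
    return abs(6 * t ** 2 - 48 * t + 90)
```

Sampling nearby points confirms the answer: speed increases on $(3,4)$ and $(5,\infty)$, and decreases on $(4,5)$.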
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
} |
Divide inside a Radical It has been so long since I have done division inside of radicals that I totally forget the "special rule" for doing it. -_-
For example, say I wanted to divide the 4 out of this expression:
$\sqrt{1 - 4x^2}$
Is this the right way to go about it?
$\frac{16}{16} \cdot \sqrt{1 - 4x^2}$
$16 \cdot \frac{\sqrt{1 - 4x^2}}{16}$
$16 \cdot \sqrt{\frac{1 - 4x^2}{4}} \Longleftarrow \text{Took the square root of 16 to get it in the radicand as the divisor}$
I know that this really a simple, question. Can't believe that I forgot how to do it. :(
| The correct way to do this, after fixing the mistake pointed out by Donkey_2009, is:
$\dfrac{2}{2} \cdot \sqrt{1-4x^2}$
$= 2 \cdot \dfrac{\sqrt{1-4x^2}}{2}$
$= 2 \cdot \dfrac{\sqrt{1-4x^2}}{\sqrt{4}} \qquad \Leftarrow$ applied $x = \sqrt{x^2}$
$= 2 \cdot \sqrt{\dfrac{1-4x^2}{4}} \qquad \Leftarrow$ applied $\frac{\sqrt a}{\sqrt b} = \sqrt{\frac a b}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
If $f(x)=f'(x)+f''(x)$ then show that $f(x)=0$
A real-valued function $f$ which is infinitely differentiable on $[a,b]$ has the following properties:
*
*$f(a)=f(b)=0$
*$f(x)=f'(x)+f''(x)$ $\forall x \in [a,b]$
Show that $f(x)=0$ $\forall x\in [a,b]$
I tried using Rolle's Theorem, but it only tells me that there exists a $c \in [a,b]$ for which $f'(c)=0$.
All I get is:
*
*$f'(a)=-f''(a)$
*$f'(b)=-f''(b)$
*$f(c)=f''(c)$
Somehow none of these direct me to the solution.
| Hint $f$ can't have a positive maximum at $c$ since then $f(c)>0, f'(c)=0, f''(c) \le 0$ implies that $f''(c)+f'(c)-f(c) < 0$. Similarly $f$ can't have a negative minimum. Hence $f = 0$.
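Alternatively (using constant-coefficient ODE theory rather than the maximum-principle hint above): every solution of $f''+f'-f=0$ has the form $C_1e^{r_1x}+C_2e^{r_2x}$ with $r_{1,2}=\frac{-1\pm\sqrt5}{2}$, and the boundary conditions force $C_1=C_2=0$ because the associated $2\times2$ determinant is nonzero whenever $a\ne b$. A quick Python check:

```python
import math

# characteristic roots of r**2 + r - 1 = 0, from f'' + f' - f = 0
r1 = (-1 + math.sqrt(5)) / 2
r2 = (-1 - math.sqrt(5)) / 2

a, b = 0.0, 1.0          # any a < b; this pair is an arbitrary choice for the check
# f(a) = f(b) = 0 is a homogeneous 2x2 linear system in (C1, C2);
# its determinant vanishes only when a = b:
det = math.exp(r1 * a + r2 * b) - math.exp(r1 * b + r2 * a)
```

Since $\det = e^{r_1a+r_2b}-e^{r_1b+r_2a}$ and $r_1\ne r_2$, it is zero exactly when $a=b$, so the only solution on $[a,b]$ is $f=0$.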
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 1
} |
LTL is a star-free language but it can describe $a^*b^\omega$. Contradiction? Does the statement "LTL is a star-free language" (from wiki) mean that the expressive power of LTL is equivalent to that of star-free languages? Then why can you describe in LTL the following language with a star: $a^*b^\omega$?
$$\mathbf{G}(a \implies a\mathbf{U}b) \land \mathbf{G}(b \implies \mathbf{X}b)$$
So, what does the sentence "LTL is star-free language" mean? Can you give an example of regular language with star which cannot be expressed in LTL? (not an example of LTL < NBA, but an example of LTL < regular language with star)
| Short answer: $a^*b^{\omega}$ describes a star-free language.
Longer answer:
In order to show that let's consider two definitions of a regular star-free language :
*
*Language has a maximum star height of 0.
*Language is in the class of star-free languages, which is defined as follows:
it's the smallest class of languages over $\Sigma$ which contains $\Sigma^{*}$, all singletons $\{x\}$, $x \in \Sigma$, and which is closed under finite union, concatenation and complementation.
It's possible to see that those two definitions are equivalent. We can also note that all finite languages are star-free.
An $\omega$-regular language is called $\omega$-star-free if it's a finite union of languages of type $XY^{\omega}$, where $X$ and $Y^+$ are star-free.
Now, $L = a^*$ can be described as $\Sigma^* \setminus (\Sigma^* (\Sigma \setminus \{a\}) \Sigma^*)$, so $L$ is a star-free language. Since $L' = a^* b^{\omega}$ can be written as a concatenation of the form $XY^{\omega}$ where $X = L$ and $Y = \{b\}$ (it's easy to show that $Y^+$ is star-free) we can conclude that $L'$ is star-free.
For more information (and equivalent definitions) I can refer you to the following papers: First-order definable languages, On the Expressive Power of Temporal Logic, On the expressive power of temporal logic for infinite words
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can a basis for a vector space be made up of matrices instead of vectors? I'm sorry if this is a silly question. I'm new to the notion of bases and all the examples I've dealt with before have involved sets of vectors containing real numbers. This has led me to assume that bases, by definition, are made up of a number of $n$-tuples.
However, now I've been thinking about a basis for all $n\times n$ matrices and I keep coming back to the idea that the simplest basis would be $n^2$ matrices, each with a single $1$ in a unique position.
Is this a valid basis? Or should I be trying to get column vectors on their own somehow?
| Yes, you are right. The vector space of $n\times n$ matrices is a vector space of dimension $n^2$. In fact, just to spice things up: The vector space of all
*
*diagonal,
*symmetric and
*triangular matrices of dimension $n\times n$
is actually a subspace of the space of matrices of that size.
As with all subspaces, you can take any linear combination and stay within the space. (Also, null matrix is in all the above three).
Try to calculate the basis for the above 3 special cases: For the diagonal matrix, the basis is a set of $n$ matrices such that the $i^{th}$ basis matrix has $1$ in the $(i,i)$ and $0$ everywhere else.
Try to figure out the basis vectors/matrices for symmetric and triangular matrices.
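A small sketch in Python (matrices represented as nested tuples; $n=3$ is an arbitrary choice) exhibiting these bases and their dimensions $n^2$, $n$ and $n(n+1)/2$:

```python
n = 3   # any n works; 3 keeps things small

def E(i, j):
    """n x n matrix (as nested tuples) with a 1 in entry (i, j), 0 elsewhere."""
    return tuple(tuple(1 if (r, c) == (i, j) else 0 for c in range(n))
                 for r in range(n))

def add(M, P):
    return tuple(tuple(a + b for a, b in zip(r1, r2)) for r1, r2 in zip(M, P))

full_basis = [E(i, j) for i in range(n) for j in range(n)]
diag_basis = [E(i, i) for i in range(n)]
# symmetric matrices: the E(i,i) together with E(i,j) + E(j,i) for i < j
sym_basis = diag_basis + [add(E(i, j), E(j, i))
                          for i in range(n) for j in range(i + 1, n)]

assert len(full_basis) == n * n            # dim n^2
assert len(diag_basis) == n                # dim n
assert len(sym_basis) == n * (n + 1) // 2  # dim n(n+1)/2
```

The triangular case works the same way, using the $E(i,j)$ with $i \le j$.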
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Efficiently solving a special integer linear programming with simple structure and known feasible solution Consider an ILP of the following form:
Minimize $\sum_{i=1}^N s_i$ where
$\sum_{k=i}^j s_k \ge c_1 (j-i) + c_2 - \sum_{k=i}^j a_k$ for given constants $c_1, c_2 > 0$ and a given sequence of non-zero natural numbers $a_k$, for all $1 \le i \le j \le N$, and the $s_i$ are non-zero natural numbers.
Using glpk, it was no problem to solve this system in little time for $N=100$, with various parameters values. Sadly, due to the huge numbers of constraints, this does not scale well to larger values of $N$ - glpk takes forever in trying to find a feasible solution for the relaxed problem.
I know that every instance of this problem has a (non-optimal) feasible solution, e.g., $s_i = \max \{ 1, 2r - a_i \}$ for a certain constant $r$, and the matrix belonging to the system is totally unimodular. How can I make use of this information to speed up calculations? Would using a different tool help me?
Edit: I tried using CPlex instead. The program runs much faster now, but the scalability issues remain. Nevertheless, I can now handle the problem I want to address. It may be interesting to note that while it is possible to provide a feasible but non-optimal solution to CPlex (see the setVectors function in the Concert interface), this makes CPlex assume that the given solution is optimal (which is not neccesarily the case) and hence give wrong results.
It would still be interesting to know if there is a better solution that does not involve throwing more hardware at the problem.
| I did not find a satisfying solution for this, so I will just re-iterate what I found: Using CPlex, the problem scales somewhat better. Sadly, it does not seem possible to tell CPlex that you have a feasible solution, only that you have a (claimed to be) optimal solution, which wastes effort.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Probability that, given a set of uniform random variables, the difference between the two smallest values is greater than a certain value Let $\{X_i\}$ be $n$ iid uniform(0, 1) random variables. How do I compute the probability that the difference between the second smallest value and the smallest value is at least $c$?
I've messed around with this numerically and have arrived at the conjecture that the answer is $(1-c)^n$, but I haven't been able to derive this.
I see that $(1-c)^n$ is the probability that all the values would be at least $c$, so perhaps this is related?
| There's probably an elegant conceptual way to see this, but here is a brute-force approach.
Let our variables be $X_1$ through $X_n$, and consider the probability $P_1$ that $X_1$ is smallest and all the other variables are at least $c$ above it. The first part of this follows automatically from the last, so we must have
$$P_1 = \int_0^{1-c}(1-c-t)^{n-1} dt$$
where the integration variable $t$ represents the value of $X_1$ and $(1-c-t)$ is the probability that $X_2$ etc satisfies the condition.
Since the situation is symmetric in the various variables, and two variables cannot be the least one at the same time, the total probability is simply $nP_1$, and we can calculate
$$ n\int_0^{1-c}(1-c-t)^{n-1} dt = n\int_0^{1-c} u^{n-1} du = n\left[\frac1n u^n \right]_0^{1-c} = (1-c)^n $$
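The closed form is easy to sanity-check by simulation (Python; the particular $n$, $c$, trial count and seed below are arbitrary choices):

```python
import random

random.seed(0)                      # reproducible
n, c, trials = 6, 0.2, 200_000      # arbitrary parameters
hits = 0
for _ in range(trials):
    xs = sorted(random.random() for _ in range(n))
    if xs[1] - xs[0] >= c:          # gap between two smallest values
        hits += 1
est = hits / trials
print(est, (1 - c) ** n)            # both close to 0.262144
assert abs(est - (1 - c) ** n) < 0.005
```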
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
The set of functions which map convergent series to convergent series Suppose $f$ is some real function with the above property, i.e.
if $\sum\limits_{n = 0}^\infty {x_n}$ converges, then $\sum\limits_{n = 0}^\infty {f(x_n)}$ also converges.
My question is: can anything interesting be said regarding the behavior of such a function close to $0$, other than the fact that $f(0)=0$?
| Answer to the next question: no.
Let $f\colon\mathbb{R}\to\mathbb{R}$ be defined by
$$
f(x)=\begin{cases}
n\,x & \text{if } x=2^{-n}, n\in\mathbb{N},\\
x & \text{otherwise.}
\end{cases}
$$
Then $\lim_{x\to0}f(x)=f(0)=0$, $f$ transforms convergent series in convergent series, but $f(x)/x$ is not bounded in any open set containing $0$. In particular $f$ is not differentiable at $x=0$. This example can be modified to make $f$ continuous.
Proof.
Let $\sum_{k=1}^\infty x_k$ be a convergent series. Let $I=\{k\in\mathbb{N}:x_k=2^{-n}\text{ for some }n\in\mathbb{N}\}$. For each $k\in I$ let $n_k\in\mathbb{N}$ be such that $x_k=2^{-n_k}$. Then
$$
\sum_{k=1}^\infty f(x_k)=\sum_{k\in I} n_k\,2^{-n_k}+\sum_{n\not\in I} x_n.
$$
The series $\sum_{k\in I} n_k\,2^{-n_k}$ is convergent. It is enough to show that also $\sum_{n\not\in I} x_n$ is convergent. This follows from the equality
$$
\sum_{n=1}^\infty x_n=\sum_{n\in I} x_n+\sum_{n\not\in I} x_n
$$
and the fact that $\sum_{n=1}^\infty x_n$ is convergent and $\sum_{k\in I} x_k$ absolutely convergent.
The proof is wrong. $\sum_{k\in I} x_k$ may be divergent. Consider the series
$$
\frac12-\frac12+\frac14-\frac14+\frac14-\frac14+\frac18-\frac18+\frac18-\frac18+\frac18-\frac18+\frac18-\frac18+\dots
$$
It is convergent, since its partial sums are
$$
\frac12,0,\frac14,0,\frac14,0,\frac18,0,\frac18,0,\frac18,0,\frac18,0,\dots
$$
The transformed series is
$$
\frac12-\frac12+\frac24-\frac14+\frac24-\frac14+\frac38-\frac18+\frac38-\frac18+\frac38-\frac18+\frac38-\frac18+\dots
$$
whose partial sums are
$$
\frac12,0,\frac12,\frac14,\frac34,\frac12,\frac78,\frac34,\frac98,1,\frac{11}8,\frac54,\dots
$$
which grow without bound.
On the other hand, $f(x)=O(x)$, the condition in Antonio Vargas' comment, is not enough when one considers series of arbitrary sign. Let
$$
f(x)=\begin{cases}
x\cos\dfrac{\pi}{x} & \text{if } x\ne0,\\
0 & \text{if } x=0,
\end{cases}
\quad\text{so that }|f(x)|\le|x|.
$$
Let $x_n=\dfrac{(-1)^n}{n}$. Then $\sum_{n=1}^\infty x_n$ converges, but
$$
\sum_{n=1}^\infty f(x_n)=\sum_{n=1}^\infty\frac1n
$$
diverges.
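The last counterexample is easy to watch numerically: for $x_n=(-1)^n/n$ one has $f(x_n)=1/n$ exactly, so the transformed partial sums track the harmonic series (Python; $N$ is an arbitrary cutoff):

```python
import math

# f(x) = x*cos(pi/x) satisfies |f(x)| <= |x|, yet for x_n = (-1)^n / n
# it maps a convergent series onto the harmonic series
N = 10_000
x = [(-1) ** n / n for n in range(1, N + 1)]
s_orig = sum(x)
s_mapped = sum(t * math.cos(math.pi / t) for t in x)

print(s_orig, s_mapped)
assert abs(s_orig + math.log(2)) < 1e-3   # original sum converges to -ln 2
assert s_mapped > 9                        # harmonic partial sum, unbounded
```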
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/116964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 3,
"answer_id": 2
} |
Probability that sum of rolling a 6-sided die 10 times is divisible by 10? Here's a question I've been considering: Suppose you roll a usual 6-sided die 10 times and sum up the results of your rolls. What's the probability that it's divisible by 10?
I've managed to solve it in a somewhat ugly fashion using the following generating series:
$(x+x^2+x^3+x^4+x^5+x^6)^{10} = x^{10}(1-x^6)^{10}(1+x+x^2+\cdots)^{10}$ which makes finding the probability somewhat doable if I have a calculator or lots of free time to evaluate binomials.
What's interesting though is that the probability ends up being just short of $\frac{1}{10}$ (in fact, it's about 0.099748). If instead, I roll the die $n$ times and find whether the sum is divisible by $n$, the probability is well approximated by $\frac{1}{n} - \epsilon$.
Does anyone know how I can find the "error" term $\epsilon$ in terms of $n$?
| The distribution of the sum converges to normal distribution with speed (if I remember correctly) of $n^{-1/2}$ and that error term could be dominating other terms (since you have got just constant numbers (=six) of samples out of resulting distribution). However, there is a small problem -- probability that your sum will be divisible by $n$ tends to $0$ with speed $n^{-1}$. Still, I typed something into Mathematica and this is what I've got:
In[1]:=
X := Sum[
Erfc[(6*Sqrt[2]*(7n/2 - k*n+1))/(35n)]/2
-Erfc[(6*Sqrt[2]*(7n/2 - k*n))/(35n)]/2,
{k, {1,2,3,4,5,6}}
]
In[2]:= Limit[X, n -> Infinity]
Out[2]= 0
In[3]:= N[Limit[X*n, n -> Infinity]]
Out[3]= -0.698703
In[4]:= N[Limit[(X+1/n)*n, n -> Infinity]]
Out[4]= 0.301297
The Erfc is the cumulative probability function of normal distribution,
the formula inside is to adjust for mean and variation. $X$ should be approximation of what you are looking for. What it shows (In[3] and In[4]) is that there is a typo in my formula or this does not converge to what you think it should (in fact that may be true (I am not sure), i.e. in each sixth part you are always off from center (or wherever the mean of that part value is) by constant margin). Hope that helps ;-)
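An exact computation avoids the normal-approximation issues entirely (Python, rational arithmetic):

```python
from fractions import Fraction

# exact distribution of the sum of 10 fair dice via convolution
counts = {0: 1}
for _ in range(10):
    step = {}
    for s, c in counts.items():
        for face in range(1, 7):
            step[s + face] = step.get(s + face, 0) + c
    counts = step

p = Fraction(sum(c for s, c in counts.items() if s % 10 == 0), 6 ** 10)
print(float(p))   # just under 1/10, matching the 0.099748 quoted above
assert 0.099 < float(p) < 0.1
```

Incidentally, a roots-of-unity computation suggests the deficit for $n=10$ is dominated by the term $\tfrac{1}{5}\cot^{10}(\pi/10)\,6^{-10} \approx 2.5\times 10^{-4}$, which matches $1/10 - 0.099748$.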
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Coding theory (existence of codes with given parameters) Explain why each of the following codes can't exist:
*
*A self complementary code with parameters $(35, 130, 15)$. (I tried using Grey Rankin bound but 130 falls within the bound)
*A binary $(15, 2^8, 5)$ code. (I tried Singleton Bound but no help)
*A $10$-ary code $(11, 100, 10)$ subscript 10. (I tried using Singleton Bound but again, it falls within the bound)
| Let me elaborate on problem #2. As I said in my comment that claim is wrong, because there does exist a binary code of length 15, 256 words and minimum Hamming distance 5.
I shall first give you a binary $(16,256,6)$ code aka the Nordstrom-Robinson code.
Consider the $\mathbf{Z}_4$-submodule $N$ of $\mathbf{Z}_4^8$ generated by the rows of the matrix
$$
G=\left(
\begin{array}{cccccccc}
1&3&1&2&1&0&0&0\\
1&0&3&1&2&1&0&0\\
1&0&0&3&1&2&1&0\\
1&0&0&0&3&1&2&1
\end{array}\right).
$$
Looking at the last four columns tells you immediately that $N$ is a free
$\mathbf{Z}_4$-module with the rows of $G$ as a basis, and therefore it has $256$ elements. It is easy to generate all $256$ of them, e.g. by a fourfold loop. Let us define a function called the Lee weight $w_L$. It is a modification of the Hamming weight. We define it first on elements of $\mathbf{Z}_4$ by declaring $w_L:0\mapsto 0$, $1\mapsto 1$,
$2\mapsto 2$, $3\mapsto 1$, and then extend the definition to vectors $\vec{w}=(w_1,w_2,\ldots,w_8)$ by
$$
w_L(\vec{w})=\sum_{i=1}^8w_L(w_i).
$$
It is now relatively easy to check (e.g. by listing all the 256 elements of $N$, but there are also cleaner ways of doing this) that for any non-zero $\vec{w}\in N$ we have $w_L(\vec{w})\ge 6$.
Then we turn the $\mathbf{Z}_4$-module $N$ into a binary code. We turn each element of $\mathbf{Z}_4$ to a pair of bits with the aid of the Gray mapping
$\varphi:\mathbf{Z}_4\rightarrow \mathbf{Z}_2^2$ defined as follows: $\varphi(0)=00$, $\varphi(1)=01$, $\varphi(2)=11$, $\varphi(3)=10$. We then extend this componentwise to a mapping from $\mathbf{Z}_4^8$ to $\mathbf{Z}_2^{16}$. For example, the first generating vector becomes
$$
\varphi: 13121000 \mapsto 01\ 10\ 01\ 11\ 01\ 00\ 00\ 00.
$$
The mapping $\varphi$ is not a homomorphism of groups, so the image $\varphi(N)$ is not
a subgroup of $\mathbf{Z}_2^{16}$, i.e. $\varphi(N)$ is not a linear code. However, we make the key observation that $\varphi$ is an isometry. Basically it turns the Lee weight into Hamming weight. So if $\varphi(\vec{w})$ and $\varphi(\vec{w}')$ are two distinct elements of
$\varphi(N)$, then
$$
d_{Hamming}(\varphi(\vec{w}),\varphi(\vec{w}'))=w_L(\vec{w}-\vec{w}')\ge6.
$$
It is easy to show this by first checking that this relation holds for all pairs of
elements of $\mathbf{Z}_4$. As the corresponding function on vectors is the componentwise sum, the relation holds for vectors as well.
Therefore $\varphi(N)$ is a (non-linear) binary $(16,256,6)$ code.
Finally, we get at a non-linear binary $(15,256,5)$-code by dropping, say, the last bit
from all the vectors of $\varphi(N)$.
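The construction is small enough to verify by brute force (Python; the minimum-Lee-weight claim and the punctured code are checked exhaustively, the Gray-map isometry on a sample of codeword pairs):

```python
from itertools import product

# generator matrix of the Z4-code N from above
G = [(1, 3, 1, 2, 1, 0, 0, 0),
     (1, 0, 3, 1, 2, 1, 0, 0),
     (1, 0, 0, 3, 1, 2, 1, 0),
     (1, 0, 0, 0, 3, 1, 2, 1)]

def lee(w):                       # Lee weight: 0,1,2,3 -> 0,1,2,1
    return sum(min(c, 4 - c) for c in w)

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray(w):                      # componentwise Gray map Z4^8 -> Z2^16
    return tuple(bit for c in w for bit in GRAY[c])

N = {tuple(sum(k * g[i] for k, g in zip(coeffs, G)) % 4 for i in range(8))
     for coeffs in product(range(4), repeat=4)}
assert len(N) == 256              # free module of rank 4

min_lee = min(lee(w) for w in N if any(w))
assert min_lee >= 6               # nonzero words have Lee weight >= 6

# Gray map is an isometry: Hamming distance of images equals Lee distance
sample = sorted(N)[:30]
for u in sample:
    for v in sample:
        d_h = sum(a != b for a, b in zip(gray(u), gray(v)))
        assert d_h == lee(tuple((a - b) % 4 for a, b in zip(u, v)))

# puncture the last bit to get a binary (15, 256, >=5) code
pw = sorted({gray(w)[:15] for w in N})
assert len(pw) == 256
min_ham = min(sum(a != b for a, b in zip(u, v))
              for i, u in enumerate(pw) for v in pw[i + 1:])
assert min_ham >= 5
```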
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Probabilistic paradox: Making a scratch in a die changes the probability? For dice that we cannot distinguish we have learned in class that the correct sample space is $\Omega _1 = \{ \{a,b\}|a,b\in \{1,\ldots,6\} \}$, whereas for dice that we can distinguish we have $\Omega _2 = \{ (a,b)|a,b\in \{1,\ldots,6\} \}$.
Now here's the apparent paradox: Suppose we initially have two identical dice. We want to evaluate the event that the sum of the faces of the two dice is $4$. Since $ 4=1+3=2+2$, we have $P_1(\mbox{Faces}=4)=\frac{2}{|\Omega_1|}=\frac{2}{21}$. So far so good. But if we now make a scratch in one die, we can distinguish them, so suddenly the probability changes and we get $P_2(\mbox{Faces}=4)=\frac{3}{|\Omega_2|}=\frac{3}{36}=\frac{1}{12}$ (we get $3$ in the numerator since $(3,1) \neq (1,3)$).
Why does a single scratch change the probability of the sum of the faces being $4$?
(My guess would be that either these mathematical models, $\Omega _1,\Omega _2$, don't describe reality - meaning rolling two dice - or they do, but in the first case, although the dice are identical, we can still distinguish them if we, say, always distinguish between the left die and the right one. But then what about closing our eyes during the experiment?)
| The correct probability distribution for dice treats them as distinguishable. If you insist on using the sample space for indistinguishable dice, the outcomes are not equally likely.
However, if you are doing quantum mechanics and the "numbers" become individual quantum states, indistinguishable dice must be treated using either Fermi or Bose statistics, depending on whether they have half-integer or integer spin.
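A quick check (Python): exact enumeration of the ordered model and a seeded simulation of two physical dice agree on $1/12$:

```python
import random
from itertools import product

# exact count in the ordered (distinguishable) model
ordered = list(product(range(1, 7), repeat=2))
p_exact = sum(1 for a, b in ordered if a + b == 4) / len(ordered)
assert p_exact == 1 / 12

# two physical dice, scratched or not, behave like the ordered model
random.seed(1)
trials = 200_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) == 4)
assert abs(hits / trials - 1 / 12) < 0.005
print(p_exact, hits / trials)
```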
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Applying Euler's Theorem to Prove a Simple Congruence I have been stuck on this exercise for far too long:
Show that if $a$ and $m$ are positive integers with $(a,m)=(a-1,m)=1$, then
$$1+a+a^2+\cdots+a^{\phi(m)-1}\equiv0\pmod m.$$
First of all, I know that
$$1+a+a^2+\cdots+a^{\phi(m)-1}=\frac{a^{\phi(m)-2}-1}{a-1},$$
and by Euler's theorem,
$$a^{\phi(m)}\equiv1\pmod m.$$
Now, because $(a,m)=1$, we have
$$a^{\phi(m)-2}\equiv a^{-2}\pmod m,$$
$$a^{\phi(m)-2}-1\equiv a^{-2}-1\pmod m,$$
and because $(a-1,m)=1$,
$$\frac{a^{\phi(m)-2}-1}{a-1}\equiv\frac{a^{-2}-1}{a-1}\pmod m,$$
$$1+a+a^2+\cdots+a^{\phi(m)-1}\equiv\frac{a^{-2}-1}{a-1}\pmod m.$$
However, I get stuck here. Is there a way to show that the RHS of that last expression is congruent to zero modulus $m$? Thanks in advance!
Note: I really do not know if I am tackling this problem correctly to begin with.
| Hint: $a^{\phi(m)}\equiv 1\pmod m$, so $a^{\phi(m)}-1\equiv 0\pmod m$. Moreover $(a-1)(1+a+\cdots+a^{\phi(m)-1})=a^{\phi(m)}-1$, and since $(a-1,m)=1$, the factor $a-1$ is invertible modulo $m$; hence $1+a+\cdots+a^{\phi(m)-1}\equiv 0\pmod m$.
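For what it's worth, the congruence is easy to confirm numerically for small moduli (Python; `phi` here is a naive totient, fine for this range):

```python
from math import gcd

def phi(m):
    """Euler's totient, by direct count (fine for small m)."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

checked = 0
for m in range(2, 60):
    for a in range(2, 60):
        if gcd(a, m) == 1 and gcd(a - 1, m) == 1:
            assert sum(pow(a, k, m) for k in range(phi(m))) % m == 0
            checked += 1
print(checked, "cases verified")
```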
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Given 5 children and 8 adults, how many ways can they be seated so that there are no two children sitting next to each other.
Possible Duplicate:
How many ways are there for 8 men and 5 women to stand in a line so that no two women stand next to each other?
Given 5 children and 8 adults, how many different ways can they be seated so that no two children are sitting next to each other.
My solution:
Writing out all possible seating arrangements:
tried using $\displaystyle \frac{34*5!*8!}{13!}$ To get the solution, because $13!$ is the sample space. and $5!$ (arrangements of children) * $34$ (no two children next to each other) * $8!$ (# of arrangements for adults).
| The solution below assumes the seats are in a row:
This is a stars and bars problem. First, order the children ($5!$ ways). Now, suppose the adults are identical. They can go in any of the places on either side of the children or between them. Set aside 4 adults to space out the children, and place the other 4 in any arrangement with the 5 children; there are $\binom{9}{4}$ ways to do this. Finally, re-order the adults ($8!$ ways). So we get $$8!\,5!\binom{9}{4}$$
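A brute-force check of the same counting argument on a smaller instance ($4$ adults, $3$ children), where enumerating all $7!$ rows is feasible (Python):

```python
from itertools import permutations
from math import comb, factorial

# brute force: 4 adults, 3 children in a row, no two children adjacent
people = ["A1", "A2", "A3", "A4", "C1", "C2", "C3"]
ok = sum(1 for arr in permutations(people)
         if not any(a.startswith("C") and b.startswith("C")
                    for a, b in zip(arr, arr[1:])))
assert ok == factorial(4) * factorial(3) * comb(5, 3)   # 1440

# the same formula for 8 adults and 5 children
print(factorial(8) * factorial(5) * comb(9, 4))  # 609638400
```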
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Dummit Foote 10.5.1(d) commutative diagram of exact sequences.
I solved other problems, except (d): if $\beta$ is injective, and $\alpha$ and the bottom-row map $\varphi'$ are surjective, then $\gamma$ is injective.
Unlike others, I don't know where to start.
| As the comments mention, this exercise is false as stated. Here's a counterexample: let $A$ and $B$ be groups, with usual inclusion and projection homomorphisms $$\iota_A(a) = (a,1),$$ $$\iota_B(b) = (1,b)$$ and $$\pi_B(a,b) = b.$$
Then the following diagram meets the stated requirements, except $\pi_B$ is not injective.
$$\require{AMScd}
\begin{CD}
A @>{\iota_A}>> A\times B @>{\iota_B \circ \pi_B}>> A\times B\\
@V{\operatorname{Id}}VV @V{\operatorname{Id}}VV @V{\pi_B}VV\\
A @>{\iota_A}>> A \times B @>{\pi_B}>> B
\end{CD}$$
This indeed exploits the fact that $\varphi$ is not required to be surjective.
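One can check this counterexample mechanically for, say, $A=B=\mathbb{Z}/2$ (written additively, so the identity is $0$); the squares commute and the rows are exact at the middle term, yet $\gamma=\pi_B$ is not injective:

```python
from itertools import product

A = [0, 1]                         # Z/2, written additively
B = [0, 1]
AxB = list(product(A, B))

iota_A = lambda a: (a, 0)
pi_B = lambda p: p[1]
top = lambda p: (0, p[1])          # iota_B o pi_B : A x B -> A x B

alpha = lambda a: a                # Id, surjective
beta = lambda p: p                 # Id, injective
gamma = pi_B                       # the map that fails to be injective

# both squares commute
assert all(beta(iota_A(a)) == iota_A(alpha(a)) for a in A)
assert all(gamma(top(p)) == pi_B(beta(p)) for p in AxB)

# rows are exact at the middle term: image of 1st map = kernel of 2nd
im = {iota_A(a) for a in A}
assert im == {p for p in AxB if top(p) == (0, 0)}
assert im == {p for p in AxB if pi_B(p) == 0}

# the bottom-row map pi_B is surjective, yet gamma is not injective
assert {pi_B(p) for p in AxB} == set(B)
assert gamma((0, 1)) == gamma((1, 1))
```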
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Coin sequence paradox from Martin Gardner's book "An event less frequent in the long run is likely to happen before a more frequent event!"
How can I show that THTH is more likely to turn up before HTHH with a probability of
9/14, even though waiting time of THTH is 20 and HTHH, 18!
I would be very thankful if you could show me the way of
calculating the probability of turning up earlier,
and the waiting time. Thank you!
| Here is a both nontrivial and advanced solution.
Consider an ideal gambling where a dealer Alice tosses a fair coin repeatedly. After she made her $(n-1)$-th toss (or just at the beginning of the game if $n = 1$), the $n$-th player Bob joins the game. He bets $2^0\$$ that the $n$-th coin is $T$. If he loses, he leaves the game. Otherwise he wins $2^1\$$, which he bets that the $(n+1)$-th coin is $H$. If he loses, he leaves the game. Otherwise he wins $2^2\$$, which he bets that the $(n+2)$-th coin is $T$. This goes on until he wins $2^4\$$ for $THTH$ and leaves the game, or the game stops.
Thus let $X^{(n)}_k$ be the r.v. of the winning the $n$-th player made when $k$-th coin toss is made. If we let
$$X_n = X^{(1)}_n + \cdots + X^{(n)}_n,$$
Then it is easy to see that $X_n - n$ is a martingale null at 0. Thus if $S$ denotes the stopping time of the first occurrence of $THTH$, and if we assume that $\mathbb{E}[S] < \infty$, then
$$0 = \mathbb{E}[X_{S} - S],$$
thus we have
$$\mathbb{E}[S] = \mathbb{E}[X_{S}] = 2^4 + 2^2 = 20.$$
Let $Y_n$ be the total winning corresponding to $HTHH$, and $T$ be the stopping time of the first occurrence of $HTHH$. Then we have
$$\mathbb{E}[T] = \mathbb{E}[Y_{T}] = 2^4 + 2^1 = 18.$$
Finally, let $U = S \wedge T$ be the minimum of $S$ and $T$. Then it is also a stopping time. Now let $p = \mathbb{P}(S < T)$ be the probability that $THTH$ precedes $HTHH$. Then
$$ \mathbb{E}[U] = \mathbb{E}[X_{U}] = \mathbb{E}[X_{S}\mathbf{1}_{\{S < T\}}] + \mathbb{E}[X_{T}\mathbf{1}_{\{S > T\}}] = 20p + 0(1-p),$$
and likewise
$$ \mathbb{E}[U] = \mathbb{E}[Y_{U}] = \mathbb{E}[Y_{S}\mathbf{1}_{\{S < T\}}] + \mathbb{E}[Y_{T}\mathbf{1}_{\{S > T\}}] = (2^3 + 2^1)p + 18(1-p).$$
Therefore we have $p = 9/14 \approx 64.2857 \%$.
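The value $9/14$ can also be double-checked numerically by value iteration on the chain whose state is the last three flips (Python; the iteration count is simply chosen large enough for convergence):

```python
from itertools import product

WIN, LOSE = "THTH", "HTHH"
states = ["".join(s) for s in product("HT", repeat=3)]

# p[s] = P(THTH appears before HTHH | last three flips are s);
# value iteration converges since absorption is certain
p = {s: 0.5 for s in states}
for _ in range(10_000):
    p = {s: sum(0.5 * (1.0 if s + c == WIN else
                       0.0 if s + c == LOSE else
                       p[(s + c)[1:]])
                for c in "HT")
         for s in states}

prob = sum(p.values()) / 8   # first three flips are uniform over 8 states
print(prob, 9 / 14)
```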
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 1
} |
Ordered partitions of an integer Let $k>0$ and $(l_1,\ldots,l_n)$ be given with $l_i>0$ (and the $l_i's$ need not be distinct). How do I count the number of distinct tuples
$$(a_1,\ldots,a_r)$$
where $a_1+\ldots+a_r=k$ and each $a_i$ is some $l_j$. There will typically be a different length $r$ for each such tuple.
If there is not a reasonably simple expression, is there some known asymptotic behavior as a function of $k$?
| The generating function for $P_d(n)$, the number of partitions of $n$ in which no part appears more than $d$ times, is given by
$$
\prod_{k=1}^\infty \frac{1-x^{(d+1)k}}{1-x^k} = \sum_{n=0}^\infty P_d(n)x^n.
$$
In your case $d=1$ and the product simplifies to
$$
\prod_{k=1}^\infty (1+x^k)= 1+x+x^2+2 x^3+2 x^4+3 x^5+4 x^6+5 x^7+6 x^8+8 x^9+10 x^{10}+12 x^{11}+\cdots
$$
See here (eqn. 47) or at OEIS. Interestingly, the number of partitions of $n$ into distinct parts matches the number of partitions of $n$ into odd parts, something that was asked/answered here and is known as Euler's identity:
$$
\prod_{k=1}^\infty (1+x^k) = \prod_{k=1}^\infty (1-x^{2k-1})^{-1}
$$
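Both the series coefficients and Euler's identity are easy to check numerically with a knapsack-style DP (Python):

```python
def partitions_distinct(n):
    # 0/1 knapsack count: each part 1..n used at most once
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for s in range(n, part - 1, -1):
            ways[s] += ways[s - part]
    return ways

def partitions_odd(n):
    # unbounded knapsack count over odd parts only
    ways = [1] + [0] * n
    for part in range(1, n + 1, 2):
        for s in range(part, n + 1):
            ways[s] += ways[s - part]
    return ways

assert partitions_distinct(30) == partitions_odd(30)
print(partitions_distinct(11))
# [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10, 12] - the coefficients above
```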
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Quadratic Diophantine equation in three variables How would one determine solutions to the following quadratic Diophantine equation in three variables:
$$x^2 + n^2y^2 \pm n^2y = z^2$$
where n is a known integer and $x$, $y$, and $z$ are unknown positive integers to be solved.
Ideally there would be a parametric solution for $x$, $y$, and $z$.
[Note that the expression $y^2 + y$ must be an integer from the series {2, 6, 12, 20, 30, 42 ...} and so can be written as either $y^2 + y$ or $y^2 - y$ (e.g., 12 = $3^2 + 3$ and 12 = $4^2 - 4$). So I have written this as +/- in the equation above.]
Thanks,
| We will consider the more general equation:
$X^2+qY^2+qY=Z^2$
Then, if we use the solutions of Pell's equation: $p^2-(q+1)s^2=1$
Solutions can be written in this ideal:
$X=(-p^2+2ps+(q-1)s^2)L+qs^2$
$Y=2s(p-s)L+qs^2$
$Z=(p^2-2ps+(q+1)s^2)L+qps$
And more:
$X=(p^2+2ps-(q-1)s^2)L-p^2-2ps-s^2$
$Y=2s(p+s)L-p^2-2ps-s^2$
$Z=(p^2+2ps+(q+1)s^2)L-p^2-(q+2)ps-(q+1)s^2$
$L$ - integer and given us.
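The two families can be verified mechanically; the check below (Python) uses the Pell solutions $(p,s)=(2,1)$ for $q=2$ and $(9,4)$ for $q=4$, with several values of $L$:

```python
def check(q, p, s, L):
    assert p * p - (q + 1) * s * s == 1          # Pell: p^2 - (q+1) s^2 = 1
    # first family
    X = (-p * p + 2 * p * s + (q - 1) * s * s) * L + q * s * s
    Y = 2 * s * (p - s) * L + q * s * s
    Z = (p * p - 2 * p * s + (q + 1) * s * s) * L + q * p * s
    assert X * X + q * Y * Y + q * Y == Z * Z
    # second family
    X = (p * p + 2 * p * s - (q - 1) * s * s) * L - p * p - 2 * p * s - s * s
    Y = 2 * s * (p + s) * L - p * p - 2 * p * s - s * s
    Z = ((p * p + 2 * p * s + (q + 1) * s * s) * L
         - p * p - (q + 2) * p * s - (q + 1) * s * s)
    assert X * X + q * Y * Y + q * Y == Z * Z

for L in range(-5, 6):
    check(2, 2, 1, L)   # q = 2, Pell solution (p, s) = (2, 1)
    check(4, 9, 4, L)   # q = 4, Pell solution (p, s) = (9, 4)
print("all checks passed")
```

For instance, $q=2$, $(p,s)=(2,1)$, $L=1$ yields the solution $(X,Y,Z)=(3,4,7)$: indeed $3^2+2\cdot4^2+2\cdot4=49=7^2$.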
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
} |
Another residue theory integral I need to evaluate the following real convergent improper integral using residue theory (vital that i use residue theory so other methods are not needed here)
I also need to use the following contour (specifically a keyhole contour to exclude the branch cut):
$$\int_0^\infty \frac{\sqrt{x}}{x^3+1}\ \mathrm dx$$
| Closed form for this type of integral:
$$ \int_0^{\infty} x^{\alpha-1}Q(x)dx =\frac{\pi}{\sin(\alpha \pi)} \sum_{i=1}^{n} \,\text{Res}_i\big((-z)^{\alpha-1}Q(z)\big) $$
$$ I=\int_0^\infty \frac{\sqrt{x}}{x^3+1} dx \rightarrow \alpha-1=\frac{1}{2} \rightarrow \alpha=\frac{3}{2}$$
$$ g(z) =(-z)^{\alpha-1}Q(z) =\frac{(-z)^{\frac{1}{2}}}{z^3+1} =\frac{i \sqrt{z}}{z^3+1}$$
$$ z^3+1=0 \rightarrow \hspace{8mm }z^3=-1=e^{i \pi} \rightarrow \hspace{8mm }z_k=e^{i\frac{(2k+1) \pi}{3}}
$$
$$z_k= \begin{cases} k=0 & z_1=e^{i \frac{\pi}{3}}=\frac{1}{2}+i\frac{\sqrt{3}}{2} \\ k=1 &
z_2=e^{i \pi}=-1 \\k=2 & z_3=e^{i \frac{5 \pi}{3}}=\frac{1}{2}-i\frac{\sqrt{3}}{2} \end{cases}$$
$$R_1=\text{Residue}\big(g(z),z_1\big)=\frac{i \sqrt{z_1}}{(z_1-z_2)(z_1-z_3)}$$
$$R_2=\text{Residue}\big(g(z),z_2\big)=\frac{i \sqrt{z_2}}{(z_2-z_1)(z_2-z_3)}$$
$$R_3=\text{Residue}\big(g(z),z_3\big)=\frac{i \sqrt{z_3}}{(z_3-z_2)(z_3-z_1)}$$
$$ I=\frac{\pi}{\sin\left( \frac{3}{2} \pi\right)} (R_1+R_2+R_3) = \frac{\pi}{-1} \left(\frac{-1}{3}\right)=\frac{\pi}{3}$$
Matlab Program
syms x
f=sqrt(x)/(x^3+1);
int(f,0,inf)
ans =
pi/3
Compute R1,R2,R3 with Malab
z1=exp(i*pi/3);
z2=exp(i*pi);
z3=exp(5*i*pi/3);
R1=i*sqrt(z1)/((z1-z2)*(z1-z3));
R2=i*sqrt(z2)/((z2-z1)*(z2-z3));
R3=i*sqrt(z3)/((z3-z2)*(z3-z1));
I=(-pi)*(R1+R2+R3);
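A quick numeric cross-check in Python (substituting $x=t^2$ smooths the integrand at $0$; $T$ and the step count are arbitrary accuracy choices):

```python
import math

# after x = t^2 the integrand becomes 2 t^2 / (1 + t^6), smooth at 0;
# the tail beyond T is of order (2/3) T^(-3)
def g(t):
    return 2 * t * t / (1 + t ** 6)

T, steps = 200.0, 400_000          # accuracy knobs, chosen generously
h = T / steps
total = g(0.0) + g(T)              # composite Simpson's rule
for k in range(1, steps):
    total += (4 if k % 2 else 2) * g(k * h)
approx = total * h / 3

print(approx, math.pi / 3)
assert abs(approx - math.pi / 3) < 1e-5
```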
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Find $y$ to minimize $\sum (x_i - y)^2$ I have a finite set of numbers $X$. I want to minimize the following expression by finding the appropriate value for y:
$$\sum\limits_{i=1}^n (x_i - y)^2$$
| This is one of those problems where you just turn the crank and out pops the answer. The basic optimization technique of "set the derivative equal to zero and solve" to find critical points works in its simplest form without issue here.
And as the others have mentioned, the special form of being quadratic allows you to apply the specialized techniques you've learned for dealing with quadratic equations.
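Concretely, setting the derivative to zero, $\frac{d}{dy}\sum_i (x_i-y)^2 = -2\sum_i (x_i-y)=0$, gives $y=\bar x$, the mean of the $x_i$ - a small check (Python, with an arbitrary sample):

```python
xs = [3.0, 7.0, 1.0, 4.0, 10.0]   # arbitrary sample

def loss(y):
    return sum((x - y) ** 2 for x in xs)

# d/dy sum (x_i - y)^2 = -2 sum (x_i - y) = 0  =>  y = mean of the x_i
y_star = sum(xs) / len(xs)
for eps in (0.5, 0.1, 0.01):
    assert loss(y_star) < loss(y_star + eps)
    assert loss(y_star) < loss(y_star - eps)
print(y_star)   # 5.0
```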
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proof of greatest integer theorem: floor function is well-defined I have to prove that
$$\forall x \in \mathbb{R},\exists !\,n \in \mathbb{Z} \text{ s.t. }n \leq x < n+1\;.$$
where $\exists !\,n $ means there exists a unique (exactly one) $n$.
I'm done with proving that there are at least one integers for the solution.
I couldn't prove the "uniqueness" of the solution, and so I looked up the internet, and here's what I found:
Let $\hspace{2mm}n,m \in \mathbb{Z} \text{ s.t. }n \leq x < n+1$ and $m \leq x < m+1$.
Since $n \leq x \text{ and } -(m+1) < -x$, adding the two gives $n + (-m-1) < x-x = 0$. And (swapping the roles of $n$ and $m$) likewise, $0 = x-x < n-m+1$.
Now, can I add up inequalities like that, even when the book is about real analysis (and in which assumptions are supposed to be really minimal)?
Or should I also prove those addition of inequalities?
Thank you :D
| The usual proof in the context of real analysis goes like this:
Let $A= \{ n \in \mathbb Z : n \le x \}$. Then $A$ is not empty. Indeed, there is $n\in \mathbb N$ such that $n>-x$, because $\mathbb N$ is unbounded. But then $-n\in A$.
Let $\alpha=\sup A$. Then there is $n\in A$ such that $\alpha-1<n\le\alpha$. But then $\alpha<n+1\le\alpha+1\le x+1$ and so $n\le x$ and $n+1\notin A$, that is, $n\le x < n+1$.
If $m\in A$ then $m\le n$ because $m>n$ implies $m\ge n+1>x$. If $m<n$ then $m+1\le n\le x$ and $m$ cannot be a solution. Hence the solution is unique.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Name this paradox about most common first digits in numbers I remember hearing about a paradox (not a real paradox, more of a surprising oddity) about frequency of the first digit in a random number being most likely 1, second most likely 2, etc. This was for measurements of seemingly random things, and it didn't work for uniformly generated pseudorandom numbers. I also seem to recall there was a case in history of some sort of banking fraud being detected because the data, which was fudged, was not adhering to this law.
It was also generalisable so that it didn't matter what base number system you used, the measurements would be distributed in an analagous way.
I've googled for various things trying to identify it but I just can't find it because I don't know the right name to search for.
I would like to read some more about this subject, so if anyone can tell me the magic terms to search for I'd be grateful, thanks!
| It is Benford's Law
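A quick illustration (Python): leading digits of the powers $2^n$ follow Benford's distribution $\log_{10}(1+1/d)$; the exponent range here is an arbitrary choice:

```python
import math
from collections import Counter

# leading digit of 2^n read off the fractional part of n*log10(2)
N = 20_000
log2 = math.log10(2)
counts = Counter(int(10 ** ((n * log2) % 1)) for n in range(1, N + 1))

for d in range(1, 10):
    assert abs(counts[d] / N - math.log10(1 + 1 / d)) < 0.01
print(counts[1] / N)   # close to log10(2) ~ 0.301
```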
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Asymptotics of a solution Let $x(n)$ be the solution to the following equation
$$
x=-\frac{\log(x)}{n} \quad \quad \quad \quad (1)
$$
as a function of $n,$ where $n \in \mathbb N.$
How would you find the asymptotic behaviour of the solution, i.e. a function $f$ of $n$ such that there exist constants $A,B$ and $n_0\in\mathbb N$ so that it holds
$$Af(n) \leq x(n) \leq Bf(n)$$
for all $n > n_0$
?
| Call $u_n:t\mapsto t\mathrm e^{nt}$, then $x(n)$ solves $u_n(x(n))=1$. For every $a$, introduce
$$
x_a(n)=\frac{\log n}n-a\frac{\log\log n}n.
$$
Simple computations show that, for every fixed $a$, $u_n(x_a(n))\cdot(\log n)^{a-1}\to1$ when $n\to\infty$. Thus, for every $a\gt1$, there exists some finite index $n(a)$ such that $x(n)\geqslant x_a(n)$ for every $n\geqslant n(a)$, and, for every $a\lt1$, there exists some finite index $n'(a)$ such that $x(n)\leqslant x_a(n)$ for every $n\geqslant n'(a)$. Finally, when $n\to\infty$,
$$
nx(n)=\log n-\log\log n+o(\log\log n).
$$
The assertion in your post holds with $f(n)=(\log n)/n$, $n_0=\max\{n(A),n'(B)\}$, $B=1$ and every $A\lt1$.
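Numerically one can watch $n\,x(n)/\log n$ creep toward $1$ (Python; incidentally $x(n)=W(n)/n$ with $W$ the Lambert function, since $nx\,\mathrm e^{nx}=n$):

```python
import math

def x_of(n):
    # bisection for x + log(x)/n = 0 on (0, 1); the function is increasing
    lo, hi = 1e-12, 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid + math.log(mid) / n > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

ratios = [n * x_of(n) / math.log(n) for n in (10**2, 10**4, 10**6, 10**8)]
print(ratios)                                  # creeping up toward 1
assert all(a < b for a, b in zip(ratios, ratios[1:]))
assert all(0 < r < 1 for r in ratios)
```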
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Are $3$ and $11$ the only common prime factors in $\sum\limits_{k=1}^N k!$ for $N\geq 10$? The question was stimulated by this one. Here it comes:
When you look at the sum $\sum\limits_{k=1}^N k!$ for $N\geq 10$, you'll always find $3$ and $11$ among the prime factors, due to the fact that
$$
\sum\limits_{k=1}^{10}k!=3^2\times 11\times 40787.
$$
Increasing $N$ preserves the factors $3$ and $11$, since every added $k!$ with $k\geq 11$ is divisible by both.
Are $3$ and $11$ the only common prime factors in $\sum\limits_{k=1}^N k!$ for $N\geq 10$?
I think one has to show that $\sum\limits_{k=1}^{N}k!$ has a factor of $N+1$, because all subsequent sums will then share the factor $N+1$ as well. This happens for
$$
\underbrace{1!+2!}_{\color{blue}{3}}+\color{blue}{3}! \text{ and } \underbrace{1!+2!+\cdots+10!}_{3^2\times \color{red}{11}\times 40787}+\color{red}{11}!
$$
| As pointed out in the comments, the case is trivial (at least if you know some theorems) if you fix $n>10$, instead of $n>p-1$ for prime $p$.
It follows from Wilson's Theorem that if the sum is a multiple of $p$ at index $n=p-2$, it won't be at index $p-1$, because adding $(p-1)!\equiv -1 \pmod p$ destroys divisibility at that index. So $p>12$ gives at least one index $\geq 11$ where the sum is NOT a multiple of $p$. That leaves us with $p<12$, which would have to divide the sum up to $p-1$: $2$ is out as the sum is odd; $5$ needs $1+2+6+24=33$ to be a multiple of $5$, which it isn't. Lastly, $7$ needs $1+2+6+24+120+720=873$ to be a multiple, which would force $33$ to be a multiple of $7$ (since $120+720=840=7\cdot 120$), which it isn't.
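A quick brute-force check of both claims (my addition, not part of the original answer):

```python
def factorial_sum(N):
    # 1! + 2! + ... + N!
    s, f = 0, 1
    for k in range(1, N + 1):
        f *= k
        s += f
    return s

# 3 and 11 divide the sum for every N >= 10 (spot-checked up to 200)...
assert all(factorial_sum(N) % 33 == 0 for N in range(10, 201))

# ...and among primes below 100, no other prime divides all the sums
primes = [p for p in range(2, 100) if all(p % q for q in range(2, p))]
common = [p for p in primes
          if all(factorial_sum(N) % p == 0 for N in range(10, 31))]
print(common)  # [3, 11]
```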
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 1,
"answer_id": 0
} |
Set of harmonic functions is locally equicontinuous (question reading in Trudinger / Gilbarg) I'm working through the book Elliptic Parial Differential Equations of Second Order by D. Gilbarg and N. S. Trudinger. Unfortunately I get stuck at some point. On page 23 they prove the following Theorem:
Let $u$ be harmonic in $\Omega$ and let $\Omega'$ be any compact subset of $\Omega$. Then for any multi-index $\alpha$ we have
$$\sup_{\Omega'}|D^\alpha u|\le \left(\frac{n|\alpha|}{d}\right)^{|\alpha|} \sup_{\Omega}|u|$$
where $d=\operatorname{dist}(\Omega',\partial\Omega)$.
Now they conclude:
An immediate consequence of the bound above is the equicontinuity on compact subdomains of the derivatives of any bounded set of harmonic functions.
How could they conclude that?
Let $\{u_i\}$ be a family of bounded harmonic functions: why are the $u_i$ equicontinuous on compact subdomains?
Thanks for your help,
hulik
| If $\{u_i\}_{i\in \mathcal{I}}$ is a bounded family of harmonic functions defined in $\Omega$ (i.e., there exists $M\geq 0$ s.t. $|u_i(x)|\leq M$ for $x\in \Omega$) then inequality:
$$\sup_{\Omega'}|D^\alpha u|\le \left(\frac{n|\alpha|}{d}\right)^{|\alpha|} \sup_{\Omega}|u|$$
with $|\alpha|=1$ implies:
$$\sup_{\Omega'}|\nabla u_i|\le C(\Omega^\prime)\ \sup_{\Omega}|u_i| \leq C(\Omega^\prime)\ M$$
for each $i\in \mathcal{I}$ (here $C(\Omega^\prime)\geq 0$ is a suitable constant depending on $\Omega^\prime$). Therefore the family $\{u_i\}_{i\in \mathcal{I}}$ is equi-Lipschitz on each compact subdomain $\Omega^\prime \subseteq \Omega$, for:
$$\forall i \in \mathcal{I},\quad |u_i(x)-u_i(y)|\leq C(\Omega^\prime)\ M\ |x-y|$$
for all $x,y\in \Omega^\prime$, and equi-continuity follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/117969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving an algebraic identity using the axioms of field I am trying to prove (based on the axioms of field) that
$$a^3-b^3=(a-b)(a^2+ab+b^2)$$
So, my first thought was to use the distributive law to show that
$$(a-b)(a^2+ab+b^2)=(a-b)\cdot a^2+(a-b)\cdot ab+(a-b)\cdot b^2$$
And then continuing from this point.
My problem is that I'm not sure if the distributive law is enough to prove this identity. Any ideas? Thanks!
| Indeed you need distributive, associative and commutative laws to prove your statement.
In fact:
$$\begin{split}
(a-b)(a^2+ab+b^2) &= (a+(-b))a^2 +(a+(-b))ab+(a+(-b))b^2\\
&= a^3 +(- b)a^2+a^2b+(-b)(ab)+ab^2+(-b)b^2\\
&= a^3 - ba^2+a^2b - b(ab) +ab^2-b^3\\
&= a^3 - a^2b+a^2b - (ba)b +ab^2-b^3\\
&= a^3 -(ab)b +ab^2-b^3\\
&= a^3 -ab^2 +ab^2-b^3\\
&= a^3-b^3\; .
\end{split}$$
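As a sanity check (my addition, not part of the original answer), the identity can be spot-checked in the field $\mathbb{Q}$ with exact rational arithmetic; of course this is evidence, not a proof — the proof is the chain of axiom applications above.

```python
from fractions import Fraction
import random

random.seed(0)
for _ in range(1000):
    a = Fraction(random.randint(-20, 20), random.randint(1, 20))
    b = Fraction(random.randint(-20, 20), random.randint(1, 20))
    # exact check of a^3 - b^3 = (a - b)(a^2 + ab + b^2) in Q
    assert a**3 - b**3 == (a - b) * (a**2 + a*b + b**2)
print("identity holds on all sampled rationals")
```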
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
closed form of a Cauchy (series) product I hope this hasn't been asked already, though I have looked around the site and found many similar answers.
Given:
Form the Cauchy product of two series: $\sum a_k\;x^k$ and $\tfrac{1}{1-x}=1+x+x^2+\cdots$.
So I come up with,
$\sum_{n=0}^{\infty}\;c_n = \sum_{n=0}^{\infty}\;\sum_{k+l=n}\;a_l\;b_k = \cdots = \sum_{n=0}^{\infty}\;x^n\;\sum_{k=0}^n\;a_k = x^0\;(a_0)+x^1\;(a_0+a_1)+x^2\;(a_0+a_1+a_2)+\cdots$.
It asks for what values of $x$ this would be valid:
This is a funny question to me because it depends upon the coefficients in the power series, right? If I take the ratio test, I get $\lim_{n\to\infty}\;\bigg| \frac{a_{k+1}\;x^{k+1}}{a_k\;x^k}\bigg| = |x|\cdot \lim_{k\to\infty}\;\big| \frac{a_{k+1}}{a_k} \big|$. For this series to be convergent, doesn't this have to come to a real number, $L$ (not in $\mathbb{\bar{R}}$)? Therefore, $|x|<1/L$?
I know that the other series, $\sum_{n=0}^{\infty}\;x^n$ converges for $r \in (-1,1)$.
So for the product to be convergent, doesn't the requirement of $x$ have to be, $|x| < \min\{1,1/L\}$?
The reason I include this, other than the questions above, is that the question suggests using "this approach" to attain a closed form $\sum_{k=0}^{\infty}\;k\;x^k$, for $x \in (-1,1)$. By using the ratio test (my favorite), I'm pretty sure that for this to converge, $|x|<1$ - which is given. I tried writing out some of the terms but they do not seem to reach a point whereby future terms cancel (as they do in a series like $\sum_{k=1}^{\infty} \frac{1}{k\;(k+2)\;(k+4)}$). I've tried bounding (Squeeze) them but didn't get very far.
Thanks for any suggestions!
| Note that $\sum_{n\ge 0}nx^n$ is almost the Cauchy product of $\sum_{n\ge 0}x^n$ with itself: that Cauchy product is
$$\left(\sum_{n\ge 0}x^n\right)^2=\sum_{n\ge 0}x^n\sum_{k=0}^n 1^2=\sum_{n\ge 0}(n+1)x^n\;.\tag{1}$$
If you multiply the Cauchy product in $(1)$ by $x$, you get
$$x\sum_{n\ge 0}(n+1)x^n=\sum_{n\ge 0}(n+1)x^{n+1}=\sum_{n\ge 1}nx^n=\sum_{n\ge 0}nx^n\;,\tag{2}$$
since the $n=0$ term is $0$ anyway. Combining $(1)$ and $(2)$, we have
$$\sum_{n\ge 0}nx^n=x\left(\sum_{n\ge 0}x^n\right)^2=x\left(\frac1{1-x}\right)^2=\frac{x}{(1-x)^2}\;,$$
with convergence for $|x|<1$ by the reasoning that you gave.
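The closed form is easy to check numerically (my addition, not part of the original answer):

```python
def partial_sum(x, terms=5000):
    # partial sums of sum_{n >= 0} n * x^n
    return sum(n * x**n for n in range(terms))

for x in (0.5, -0.3, 0.9):
    closed = x / (1 - x) ** 2
    print(x, partial_sum(x), closed)
```

For $x=\tfrac12$ the series sums to exactly $2$, matching $\frac{1/2}{(1/2)^2}$.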
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the identity under a given binary operation I have two problems quite similar. The first:
In $\mathbb{Z}_8$ find the identity of the following commutative operation:
$$\overline{a}\cdot\overline{c}=\overline{a}+\overline{c}+2\overline{a}\overline{c}$$
I say:
$$\overline{a}\cdot\overline{i}=\overline{a}+\overline{i}+2\overline{a}\overline{i} = \overline{a}$$
Since $\overline{a}$ is always cancellable in $\mathbb{Z}_8$, I can write:
$$\overline{i}+2\overline{a}\overline{i} = \overline{0}$$
$$\overline{i}(\overline{1}+2\overline{a}) = \overline{0}$$
so $\overline{i}=\overline{0}$ whatever $\overline{a}$ is, since $\overline{1}+2\overline{a}$ is odd and hence invertible in $\mathbb{Z}_8$.
Second question:
In $\mathbb{Z}_9\times\mathbb{Z}_9$ find the identity of the following
commutative operation: $$(\overline{a}, \overline{b})\cdot
(\overline{c}, \overline{d})= (\overline{a}+ \overline{c},
\overline{8}\overline{b}\overline{d})$$
So starting from:
$$(\overline{a}, \overline{b})\cdot
(\overline{e_1}, \overline{e_2})= (\overline{a}+ \overline{e_1},
\overline{8}\overline{b}\overline{e_2})=(\overline{a}, \overline{b})$$
that is:
$$\overline{a}+\overline{e_1}=\overline{a}\qquad (1)$$
$$\overline{8}\overline{b}\overline{e_2}=\overline{b}\qquad (2)$$
In (1), $\overline{a}$ is always cancellable, since $\overline{-a}$ is always present in $\mathbb{Z}_9$; hence $\overline{e_1}=\overline{0}$.
In (2) I should multiply both members by $\overline{8}^{-1}$ and $\overline{b}^{-1}$ to determine $\overline{e_2}$ exactly.
This works only if both numbers are invertible. $\overline{8}$ is easily shown to be invertible, since $\gcd(8,9)=1$.
But what about $\overline{b}$?
| Hint $\ $ Identity elements $\rm\:e\:$ are idempotent $\rm\:e^2 = e\:$. Therefore
$\rm(1)\ \ mod\ 8\!:\ \ e = e\cdot e = 2e+2e^2\ \Rightarrow\ e\:(1+2e) = 0\ \Rightarrow\ e = \ldots\:$ by $\rm n^2 \equiv 1\:$ for $\rm\:n\:$ odd
$\rm(2)\ \ mod\ (9,9)\!:\ \ (a,b) = (a,b)^2 = (2a,-bb)\ \Rightarrow\ (-a, b\:(b+1)) = (0,0)\ \Rightarrow\ \ldots$
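As a cross-check on the hint (my addition, not part of the original answer), a brute-force search finds the identities directly; since both operations are commutative, a one-sided check suffices:

```python
# a * c = a + c + 2ac in Z_8
def op8(a, c):
    return (a + c + 2 * a * c) % 8

ids8 = [e for e in range(8) if all(op8(a, e) == a for a in range(8))]
print(ids8)  # [0]

# (a, b) * (c, d) = (a + c, 8bd) in Z_9 x Z_9
def op9(p, q):
    return ((p[0] + q[0]) % 9, (8 * p[1] * q[1]) % 9)

ids9 = [(e1, e2) for e1 in range(9) for e2 in range(9)
        if all(op9((a, b), (e1, e2)) == (a, b)
               for a in range(9) for b in range(9))]
print(ids9)  # [(0, 8)]
```

So the identity for the second operation is $(\overline{0},\overline{8})$, consistent with $\overline{8}\equiv-\overline{1}\pmod 9$.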
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
De Rham cohomology of $S^n$ Can you find the mistake in my computation of $H^{k}(S^{n})$?
The sphere is a disjoint union of two spaces:
$$S^{n} = \mathbb{R}^{n}\sqcup\mathbb{R^{0}},$$
so
$$H^{k}(S^n) = H^{k}(\mathbb{R}^{n})\oplus H^{k}(\mathbb{R^{0}}).$$
In particular
$$H^{0}(S^{n}) = \mathbb{R}\oplus\mathbb{R}=\mathbb{R}^{2}$$
and
$$H^{k}(S^{n}) = 0,~~~k>0.$$
Where is the mistake? Thanks a lot!
| You are wrong: $S^n$ is not the disjoint union $\mathbb R^n \sqcup \mathbb R^0$ - topologically.
Although $S^n$ is $\mathbb R^n$ with one point at infinity, the topology of this point at infinity is very different from that of $\mathbb R^0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Existence theorems for problems with free endpoints? It is well known that the problem of minimizing
$$ J[y] = \int_{0}^{1} \sqrt{y(x)^2 + \dot{y}(x)^2} dx $$
with $y \in C^2[0,1]$ and $y(0) = 1$ and $y(1) = 0$ has no solutions. However, if we remove the condition $y(1) = 0$ and instead let the value of $y$ at $x = 1$ be free, then an optimal solution does exist.
An easy way to see this is to observe that $J[y]$ is really just the arc length of the plane curve with polar equation $r(\theta) = y(\theta)$. Clearly then, the function $y$ which traces out in that way the shortest line segment joining the point $(0,1)$ (given in polar coordinates) and the ray $\theta = 1$ is the (unique) solution to this new problem.
Inspired by this little example, I wonder: are there results regarding the existence of solutions to variational problems with freedom at one or both endpoints and similar integrands?
| Sure. You can take any smooth $f(x)$ with $f(0) = 0,$ then minimize
$$ \int_0^1 \sqrt{1 + \left( f(\dot{y}(x)) \right)^2} \; dx $$
with $y(0) = 73.$ The minimizer is constant $y.$
More interesting is the free boundary problem for surface area. Given a wire frame that describes a nice curve $\gamma$ in $\mathbb R^3,$ once $\gamma$ is close enough to the $xy$ plane, there is a surface (topologically an annulus) with one boundary component being $\gamma$ and the other being a curve in the $xy$ plane, that minimizes the surface area among all such surfaces. The optimal surface meets the $xy$ plane orthogonally.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove the map has a fixed point Assume $K$ is a compact metric space with metric $\rho$ and $A$ is a map from $K$ to $K$ such that $\rho (Ax,Ay) < \rho(x,y)$ for $x\neq y$. Prove A have a unique fixed point in $K$.
The uniqueness is easy. My problem is to show that there exists a fixed point. $K$ is compact, so every sequence has a convergent subsequence. Construct a sequence $\{x_n\}$ by $x_{n+1}=Ax_{n}$; $\{x_n\}$ has a convergent subsequence $\{x_{n_k}\}$, but how do I show there is a fixed point using $\rho (Ax,Ay) < \rho(x,y)$?
| I don't have enough reputation to post a comment to reply to @андрэ 's question regarding where in the proof it is used that $f$ is a continuous function, so I'll post my answer here:
We are told that $K$ is a compact set, and $f:K\rightarrow K$ being continuous implies that $\mathrm{im}(f) = f(K)$ is also a compact set. We also know that compact sets are closed and bounded, which implies the existence of $\inf_{x\in K} f(x)$.
If it is possible to show that $f(K) \subseteq K$ is a closed set, then it is necessarily compact as well:
A subset of a compact set is compact?
However, I am not aware of how you would do this in this case without relying on continuity of $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 3,
"answer_id": 1
} |
Modules with $m \otimes n = n \otimes m$ Let $R$ be a commutative ring. Which $R$-modules $M$ have the property that the symmetry map
$$M \otimes_R M \to M \otimes_R M, ~m \otimes n \mapsto n \otimes m$$
equals the identity? In other words, when do we have $m \otimes n = n \otimes m$ for all $m,n \in M$?
Some basic observations:
1) When $M$ is locally free of rank $d$, then this holds iff $d \leq 1$.
2) When $A$ is a commutative $R$-algebra, considered as an $R$-module, then it satisfies this condition iff $R \to A$ is an epimorphism in the category of commutative rings (see the Seminaire Samuel for the theory of these epis).
3) These modules are closed under the formation of quotients, localizations (over the same ring) and base change: If $M$ over $R$ satisfies the condition, then the same is true for $M \otimes_R S$ over $S$ for every $R$-algebra $S$.
4) An $R$-module $M$ satisfies this condition iff every localization $M_{\mathfrak{p}}$ satisfies this condition as an $R_{\mathfrak{p}}$-module, where $\mathfrak{p} \subseteq R$ is prime. This reduces the whole study to local rings.
5) If $R$ is a local ring with maximal ideal $\mathfrak{m}$ and $M$ satisfies the condition, then $M/\mathfrak{m}M$ satisfies the condition over $R/\mathfrak{m}$ (by 3). Now observation 1 implies that $M/\mathfrak{m}M$ has dimension $\leq 1$ over $R/\mathfrak{m}$, i.e. it is cyclic as an $R$-module. If $M$ was finitely generated, this would mean (by Nakayama) that $M$ is also cyclic. Thus, if $R$ is an arbitrary ring, then a finitely generated $R$-module $M$ satisfies this condition iff every localization of $M$ is cyclic. But there are interesting non-finitely generated examples, too (see 2).
I don't expect a complete classification (this is already indicated by 2)), but I wonder if there is any nice characterization or perhaps even existing literature. It is a quite special property. Also note the following reformulation: Every bilinear map $M \times M \to N$ is symmetric.
| The question has an accepted answer at MathOverflow, and perhaps it is time to leave the Unanswered list.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 1,
"answer_id": 0
} |
De Rham cohomology of $S^2\setminus \{k~\text{points}\}$ Am I right that the de Rham cohomology groups $H^{k}(S^2\setminus \{k~\text{points}\})$ of the $2$-dimensional sphere without $k$ points are
$$H^0 = \mathbb{R}$$
$$H^2 = \mathbb{R}^{N}$$
$$H^1 = \mathbb{R}^{N+k-1}?$$
I received this using Mayer–Vietoris sequence. And I want only to verify my result.
If you know some elementery methods to compute cohomology of this manifold, I am grateful to you.
Calculation:
Let $M = S^2$, let $U_1$ be a set consisting of $k$ $2$-dimensional disks without boundary, and let $U_2 = S^2\setminus \{k~\text{points}\}$.
$$M = U_1 \cup U_2$$
where each punctured point of $U_2$ is covered by a disk (contained in $U_1$).
And
$$U_1\cap U_2$$
is a set consisting of $k$ punctured disks (each homotopy equivalent to $S^1$). Then
the collection of dimensions in the Mayer–Vietoris sequence
$$0\to H^0(M)\to\ldots\to H^2(U_1 \cap U_2)\to 0$$
is
$$0~~~~~1~~~~~k+\alpha~~~~~k~~~~~0~~~~~\beta~~~~~k~~~~~1~~~~~\gamma~~~~~0~~~~~0$$
where $\alpha, \beta, \gamma$ are the dimensions of the $0$th, $1$st and $2$nd cohomology respectively.
$$1 - (k+\alpha) + k = 0,$$
so
$$\alpha = 1.$$
$$\beta - k + 1 - \gamma = 0,$$
so
$$\beta = \gamma + (k-1).$$
So
$$H^0 = \mathbb{R}$$
$$H^2 = \mathbb{R}^{N}$$
$$H^1 = \mathbb{R}^{N+k-1}$$
Thanks a lot!
| It helps to use the fact that DeRahm cohomology is a homotopy invariant, meaning we can reduce the problem to a simpler space with the same homotopy type. I think the method you are trying will work if you can straighten out the details, but if you're still having trouble then try this:
$S^2$ with $1$ point removed is homeomorphic to the disk $D^2$. If we let $S_k$ denote $S^2$ with $k$ points removed, then $S_k$ is homeomorphic to $D_{k-1}$ (where $D_{k-1}$ denotes $D^2$ with $k-1$ interior points removed).
Hint: Find a nicer space which is homotopy equivalent to $D_{k-1}$. Can you, for instance, make it $1$-dimensional? If you could, that would immediately tell you something about $H^2(S_k)$. If you get that far and know Mayer-Vietoris, you should be able to work out the calculation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Finding probability $P(X+Y < 1)$ with CDF Suppose I have a Cumulative Distribution Function like this:
$$F(x,y)=\frac { (x\cdot y)^{ 2 } }{ 4 } $$
where $0<x<2$ and $0<y<1$.
And I want to find the probability of $P(X+Y<1)$.
Since $x<1-y$ and $y<1-x$, I plug these back into the CDF to get this:
$$F(1-y,1-x)=\frac { ((1-y)\cdot (1-x))^{ 2 } }{ 4 } $$
Because of the constraint where $0<x<2$ and $0<y<1$, I integrate according to the range of values:
$$\int _{ 0 }^{ 1 }{ \int _{ 0 }^{ 2 }{ \frac { ((1-y)\cdot (1-x))^{ 2 } }{ 4 } dxdy } } =\frac { 1 }{ 18 } $$
This answer, however, is incorrect. My intuition for doing this is that because the two variables are somewhat dependent on each other to maintain the inequality of less than $1$, I want to "sum"(or integrate) all the probabilities within the possible range of values of $x$ and $y$ that satisfy the inequality. Somehow, the answer, which is $\frac{1}{24}$, doesn't seem to agree with my intuition.
What have I done wrong?
| We have the cumulative distribution function (CDF)
$$F_{X,Y}(x,y)=\int_0^y\int_0^x f_{X,Y}(u,v)dudv=\frac{(xy)^2}{4}.$$
Differentiate with respect to both $x$ and $y$ to obtain the probability density function (PDF)
$$f_{X,Y}(x,y)=\frac{d^2}{dxdy}\frac{(xy)^2}{4}=xy.$$
Finally, how do we parametrize the region given by $x+y<1$ inside the rectangle? Well, $x$ must be nonnegative so $y$ can be anything from $0$ to $1$, and simultaneously $x$ must be between $0$ and $1-y$;
$$P(X+Y<1)=\int_0^1\int_0^{1-y} xy \; dx dy =\int_0^1 \frac{(1-y)^2}{2}ydy=\frac{1}{24}.$$
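A numeric double integral (my addition, not part of the original answer) confirms the value $\tfrac1{24}\approx 0.04167$:

```python
# Midpoint rule for P(X+Y < 1) with density f(x,y) = x*y on (0,2) x (0,1)
N = 1000
total = 0.0
for i in range(N):
    x = (i + 0.5) * (2.0 / N)
    for j in range(N):
        y = (j + 0.5) * (1.0 / N)
        if x + y < 1:
            total += x * y * (2.0 / N) * (1.0 / N)
print(total)  # approximately 1/24 = 0.041666...
```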
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Number of ways to pair 2n elements from two different sets Say I have a group of 20 people, and I want to split them to pairs, I know that the
number of different ways to do it is $\frac{(2n)!}{2^n \cdot n!}$
But let's say that I have to pair a boy with a girl?
I got confused because unlike the first option the number of total elements (pairs) isn't $(2n)!$, and I failed to count it. I think that the size of each equivalence class is the same - $2^n \cdot n!$, I am just missing the number of total elements.
Any ideas?
Thanks!
| If you have $n$ boys and $n$ girls, give them each a number and sort the pairs by wlog the boy's number. Then there are $n!$ possible orderings for the girls, so $n!$ ways of forming the pairs.
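A small enumeration (my addition, not part of the original answer) confirms both counts for $n=3$: there are $15=\frac{6!}{2^3\,3!}$ unrestricted pairings, of which $6=3!$ are boy–girl:

```python
from math import factorial

def pairings(people):
    # all ways to split an even-sized list into unordered pairs
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for i, partner in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

n = 3
people = [('B', i) for i in range(n)] + [('G', i) for i in range(n)]
all_pairings = list(pairings(people))
mixed = [p for p in all_pairings if all(a[0] != b[0] for a, b in p)]
print(len(all_pairings), factorial(2 * n) // (2 ** n * factorial(n)))  # 15 15
print(len(mixed), factorial(n))                                        # 6 6
```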
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $ x^2+4=y^d$ in integers with $d\ge 3$
Find all triples of integers $(x,y,d)$ with $d\ge 3$ such that $x^2+4=y^d$.
I made some advances on the problem with Gaussian integers but still can't finish it. The problem is similar to Catalan's conjecture.
NOTE: You can suppose that $d$ is a prime.
Source: My head
| See a similar question that I asked recently: Nontrivial Rational solutions to $y^2=4 x^n + 1$
This question might also be related to Fermat's Last Theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/118941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 5
} |
How to come up with the gamma function? It always puzzles me, how the Gamma function's inventor came up with its definition
$$\Gamma(x+1)=\int_0^1(-\ln t)^x\;\mathrm dt=\int_0^\infty t^xe^{-t}\;\mathrm dt$$
Is there a nice derivation of this generalization of the factorial?
| Here is a nice paper of Detlef Gronau Why is the gamma function
so as it is?.
Concerning alternative possible definitions see Is the Gamma function mis-defined? providing another resume of the story Interpolating the natural factorial n! .
Concerning Euler's work Ed Sandifer's articles 'How Euler did it' are of value too, in this case 'Gamma the function'.
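To see the definition doing its job (my addition, not part of the original references): a crude midpoint quadrature of the truncated integral $\int_0^T t^x e^{-t}\,dt$ reproduces $x!$ for small integer $x$.

```python
import math

def gamma_integral(x, N=100000, T=50.0):
    # midpoint rule for the truncated integral of t^x * exp(-t) over (0, T)
    h = T / N
    return sum(((i + 0.5) * h) ** x * math.exp(-(i + 0.5) * h) * h
               for i in range(N))

for n in range(6):
    print(n, round(gamma_integral(n), 6), math.factorial(n))
```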
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 4,
"answer_id": 2
} |
interchanging integrals Why does $$\int_0^{y/2} \int_0^\infty e^{x-y} \ dy \ dx \neq \int_0^\infty \int_0^{y/2} e^{x-y} \ dx \ dy$$
The RHS is $1$ and the LHS is not. Would this still be a legitimate joint pdf even if Fubini's Theorem does not hold?
| The right side,
$$\int_0^\infty \int_0^{y/2} e^{x-y} \ dx \ dy,$$
refers to something that exists. The left side, as you've written it, does not. Look at the outer integral:
$$
\int_0^\infty \cdots\cdots\; dy.
$$
The variable $y$ goes from $0$ to $\infty$. For any particular value of $y$ between $0$ and $\infty$, the integral $\displaystyle \int_0^{y/2} e^{x-y}\;dx$ is something that depends on the value of $y$.
The integral $\displaystyle \int_0^\infty \cdots\cdots dy$ does not depend on anything called $y$.
But when you write $\displaystyle \int _0^{y/2} \int_\text{?}^\text{?} \cdots \cdots$ then that has to depend on something called $y$. What is this $y$? On the inside you've got $\displaystyle\int_0^\infty e^{x-y}\;dy$. Something like that does not depend on anything called $y$, but does depend on $x$. It's like what happens when you write
$$
\sum_{k=1}^4 k^2.
$$
What that means is
$$
1^2 + 2^2 + 3^2 + 4^2
$$
and there's nothing called "$k$" that it could depend on.
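To back up the claim that the well-defined iterated integral equals $1$ (my addition, not part of the original answer): the inner integral is $\int_0^{y/2}e^{x-y}\,dx=e^{-y/2}-e^{-y}$, and $\int_0^\infty\left(e^{-y/2}-e^{-y}\right)dy=2-1=1$. A numeric check:

```python
import math

# midpoint rule for the integral of (e^{-y/2} - e^{-y}) over (0, inf),
# truncated at T, where the tail is negligible
N, T = 200000, 80.0
h = T / N
total = sum((math.exp(-(j + 0.5) * h / 2) - math.exp(-(j + 0.5) * h)) * h
            for j in range(N))
print(total)  # approximately 1
```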
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is my proof correct: if $n$ is odd then $n^2$ is odd?
Prove that for every integer $n,$ if $n$ is odd then $n^2$ is odd.
I wonder whether my answer to the question above is correct. Hope that someone can help me with this.
Using contrapositive, suppose $n^2$ is not odd, hence even. Then $n^2 = 2a$ for some integer $a$, and
$$n = 2(\frac{a}{n})$$ where $\frac{a}{n}$ is an integer. Hence $n$ is even.
| You need to show that $a/n$ is an integer. Try thinking about the prime factorizations of $a$ and $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 1
} |
Is $BC([0,1))$ ( space of bounded real valued continuous functions) separable? Is $BC([0,1))$ a subset of $BC([0,\infty))$? It is easy to prove the non-separability of BC([0,$\infty$)) and the separability of C([0,1]). It seems to me we can argue from the fact that any bounded continuous function of BC([0,$\infty$)) must also be in BC([0,1)) to somehow show BC([0,1)) is not separable, but BC([0,1)
$BC([0,1))$ is not a subset of $BC([0,\infty))$; in fact, these two sets of functions are disjoint. No function whose domain is $[0,1)$ has $[0,\infty)$ as its domain, and no function whose domain is $[0,\infty)$ has $[0,1)$ as its domain. What is true is that $$\{f\upharpoonright[0,1):f\in BC([0,\infty))\}\subseteq BC([0,1))\;.$$
There is, however, a very close relationship between $BC([0,\infty))$ and $BC([0,1))$, owing to the fact that $[0,\infty)$ and $[0,1)$ are homeomorphic. An explicit homeomorphism is $$h:[0,\infty)\to[0,1):x\mapsto \frac2\pi\arctan x\;.$$ This implies that $BC([0,1))$ and $BC([0,\infty))$ are actually homeomorphic, via the map $$H:BC([0,1))\to BC([0,\infty)):f\mapsto f\circ h\;,$$ as is quite easily checked. Thus, one of $BC([0,\infty))$ and $BC([0,1))$ is separable iff the other is.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
Help Understanding Why Function is Continuous I have read that because a function $f$ satisfies
$$
|f(x) - f(y)| \leq |f(y)|\cdot|x - y|
$$
then it is continuous. I don't really see why this is so. I know that if a function is "lipschitz" there is some constant $k$ such that
$$
|f(x) - f(y)| \leq k|x - y|.
$$
But the first inequality doesn't really prove this because the |f(y)| depends on one of the arguments so isn't necessarily lipschitz. So, why does this first inequality imply $f$ is continuous?
| You're right that the dependence on $y$ means this inequality isn't like the Lipschitz condition. But the same proof will show continuity in both cases. (In the Lipschitz case you get uniform continuity for free.) Here's how:
Let $y\in\operatorname{dom} f$; we want to show $f$ is continuous at $y$. So let $\epsilon > 0$; we want to find $\delta$ such that, if $|x-y| < \delta$, then $|f(x) - f(y)| < \epsilon$. Let's choose $\delta$ later, when we figure out what it ought to be, and just write the proof for now: if $x$ is such that $|x-y| < \delta$ then
$$
|f(x) - f(y)|
< |f(y)|\cdot|x-y|
< |f(y)|\delta
= \epsilon
$$
The first step is the hypothesis you've given; the second step is the assumption on $|x-y|$; the last step is just wishful thinking, because we want to end up with $\epsilon$ at the end of this chain of inequalities. But this bit of wishful thinking tells us what $\delta$ has to be to make the argument work: $\delta = |f(y)|^{-1}\epsilon$.
(If $f$ were Lipschitz, the same thing would work with $|f(y)|$ replaced with $k$, and it would yield uniform continuity because the choice of $\delta$ wouldn't depend on $y$.)
(Oh, and a technical matter: the condition you've stated only makes sense for $x\ne y$; otherwise the LHS is at least $0$ but the RHS is $0$, so the strict inequality cannot hold. But this doesn't affect the argument for continuity; you just assume at the right moment that $x\ne y$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Cauchy Sequence that Does Not Converge What are some good examples of sequences which are Cauchy, but do not converge?
I want an example of such a sequence in the metric space $X = \mathbb{Q}$, with $d(x, y) = |x - y|$. And preferably, no use of series.
| A fairly easy example that does not arise directly from the decimal expansion of an irrational number is given by $$a_n=\frac{F_{n+1}}{F_n}$$ for $n\ge 1$, where $F_n$ is the $n$-th Fibonacci number, defined as usual by $F_0=0$, $F_1=1$, and the recurrence $F_{n+1}=F_n+F_{n-1}$ for $n\ge 1$. It’s well known and not especially hard to prove that $\langle a_n:n\in\Bbb Z^+\rangle\to\varphi$, where $\varphi$ is the so-called golden ratio, $\frac12(1+\sqrt5)$.
Another is given by the following construction. Let $m_0=n_0=1$, and for $k\in\Bbb N$ let $m_{k+1}=m_k+2n_k$ and $n_{k+1}=m_k+n_k$. Then for $k\in\Bbb N$ let $$b_k=\frac{m_k}{n_k}$$ to get the sequence $$\left\langle 1,\frac32,\frac75,\frac{17}{12},\frac{41}{29},\dots\right\rangle\;;$$ it’s a nice exercise to show that this sequence converges to $\sqrt2$.
These are actually instances of a more general source of examples, the sequences of convergents of the continued fraction expansions of irrationals are another nice source of examples; the periodic ones, like this one, are probably easiest.
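Both example sequences can be generated exactly with integer arithmetic (my addition, not part of the original answer); every term is rational, yet the limits $\varphi$ and $\sqrt2$ are irrational, so these are Cauchy sequences in $\mathbb Q$ with no limit in $\mathbb Q$.

```python
from fractions import Fraction
import math

# Fibonacci ratios F_{n+1}/F_n -> golden ratio
a, b = 1, 1
for _ in range(15):
    a, b = b, a + b
print(Fraction(b, a), float(Fraction(b, a)))  # -> 1.618...

# m_k/n_k with m' = m + 2n, n' = m + n -> sqrt(2)
m, n = 1, 1
for _ in range(8):
    m, n = m + 2 * n, m + n
print(Fraction(m, n), float(Fraction(m, n)))  # -> 1.41421...

# the invariant behind the sqrt(2) convergents: m^2 - 2n^2 alternates +-1
print(m * m - 2 * n * n)
```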
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 11,
"answer_id": 8
} |
How to check convexity? How can I tell whether the function $$f(x,y)=\frac{y^2}{xy+1}$$ with $x>0$, $y>0$ is convex or not?
| The book "Convex Optimization" by Boyd, available free online here, describes methods to check.
The standard definition: $f$ is convex if $f(\theta x + (1-\theta)y) \leq \theta f(x) + (1-\theta)f(y)$ for $0\leq\theta\leq1$, and the domain of $x,y$ is also convex.
So if you could prove that for your function, you would know it's convex.
The Hessian being positive semi-definite, as mentioned in comments, would also show that the function is convex.
See page 67 of the book for more.
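Interestingly, for this particular $f$ a direct check of that definition at specific points settles the question (my addition, not part of the original answer): midpoint convexity already fails, so $f$ is not convex on $x,y>0$. With $p=(0.5,0.5)$, $q=(1.5,1.5)$ and $\theta=\tfrac12$:

```python
def f(x, y):
    return y ** 2 / (x * y + 1)

p, q = (0.5, 0.5), (1.5, 1.5)
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)   # theta = 1/2
lhs = f(*mid)                   # f(midpoint) = 0.5
rhs = (f(*p) + f(*q)) / 2       # average of endpoint values, about 0.446
print(lhs, rhs, lhs <= rhs)     # the convexity inequality fails here
```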
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 3,
"answer_id": 0
} |
How to prove $ \phi(n) = n/2$ iff $n = 2^k$? How can I prove this statement? $\phi(n) = n/2$ iff $n = 2^k$
I'm thinking $n$ can be decomposed into its prime factors, then I can use the multiplicative property of the Euler phi function to get $\phi(n) = \phi(p_1)\cdots\phi(p_n)$. Then use the property $\phi(p) = p - 1$. But I'm not sure if that's the proper approach for this question.
| Edit: removed my full answer to be more pedagogical.
You know that $\varphi(p) = p-1$, but you need to remember that $\varphi(p^k) = p^{k-1}(p-1).$ Can you take it from here?
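A brute-force confirmation with a naive totient (my addition, not part of the original answer):

```python
from math import gcd

def phi(n):
    # naive Euler totient: count of 1 <= k <= n coprime to n
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

hits = [n for n in range(2, 300) if 2 * phi(n) == n]
print(hits)  # [2, 4, 8, 16, 32, 64, 128, 256] -- exactly the powers of 2
```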
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Represent every Natural number as a summation/subtraction of distinct power of 3 I have seen this in a riddle where you have to chose 4 weights to calculate any weight from 1 to 40kgs.
Some examples,
$$8 = {3}^{2} - {3}^{0}$$
$$12 = {3}^{2} + {3}^{1}$$
$$13 = {3}^{2} + {3}^{1}+ {3}^{0}$$
Later I found it's also possible to use only 5 weights to calculate any weight between 1-121.
$$100 = {3}^{4} + {3}^{3} - {3}^{2} + {3}^{0}$$
$$121 = {3}^{4} + {3}^{3} + {3}^{2} + {3}^{1} + {3}^{0}$$
Note: it allows negative terms (subtraction) too, as in how I represent $8$ and $100$.
I want to know if any natural number can be represented as a summation/subtraction of distinct powers of $3$. I know this is true for powers of $2$. But is it really true for $3$? What about the other numbers? Say $4, 5, 6, \ldots$
| You can represent any number $n$ as $a_k 3^k + a_{k-1} 3^{k-1} + \dots + a_1 3 + a_0$, where $a_i \in \{-1,0,1\}$. This is called balanced ternary system, and as Wikipedia says, one way to get balanced ternary from normal ternary is to add ..1111 to the number (formally) with carry, and then subtract ..1111 without carry. For a generalization, see here.
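The balanced-ternary digits can be computed with a short carry routine (my addition, not part of the original answer); it reproduces the representations of $8$, $100$ and $121$ from the question:

```python
def balanced_ternary(n):
    # digits in {-1, 0, 1}, least significant first
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            r = -1          # replace digit 2 by -1 and carry 1
        digits.append(r)
        n = (n - r) // 3
    return digits

for n in (8, 100, 121):
    d = balanced_ternary(n)
    terms = [f"{'+' if c > 0 else '-'}3^{i}" for i, c in enumerate(d) if c]
    print(n, "=", " ".join(reversed(terms)))

# every natural number below 10^4 round-trips correctly
assert all(sum(c * 3 ** i for i, c in enumerate(balanced_ternary(n))) == n
           for n in range(1, 10 ** 4))
```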
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
A smooth function $f$ satisfies $\left|\operatorname{grad} f \right|=1$; then the integral curves of $\operatorname{grad} f$ are geodesics Let $M$ be a Riemannian manifold. Prove that if a smooth function $f$ satisfies $\left| \operatorname{grad} f \right|=1,$ then the integral curves of $\operatorname{grad} f$ are geodesics.
| Well $\text{grad}(f)$ is a vector such that $g(\text{grad}(f),-)=df$, therefore integral curves satisfy
$$
\gamma'=\text{grad}(f)\Rightarrow
g(\gamma',X)=df(X)=X(f)
$$
Now let $X,Y$ be vector fields
$$
XYf=Xg(\text{grad}(f),Y)=
g(\nabla_X\text{grad}(f),Y)+g(\text{grad}(f),\nabla_XY)=
g(\nabla_X\text{grad}(f),Y)+\nabla_XY(f)
$$
and
$$
YXf=Yg(\text{grad}(f),X)=
g(\nabla_Y\text{grad}(f),X)+g(\text{grad}(f),\nabla_YX)=
g(\nabla_Y\text{grad}(f),X)+\nabla_YX(f)
$$
which, after subtraction and using that the torsion vanishes, gives
$$
[X,Y]f-\nabla_XY(f)+\nabla_YX(f)=0=g(\nabla_X\text{grad}(f),Y)-g(\nabla_Y\text{grad}(f),X)
$$
It follows that
$$
g(\nabla_X\text{grad}(f),Y)=g(\nabla_Y\text{grad}(f),X)
$$
Now the easy part, substitute $X=\text{grad}(f)$ and conclude that for every $Y$
$$
g(\nabla_{\text{grad}(f)}\text{grad}(f),Y)=g(\nabla_Y\text{grad}(f),\text{grad}(f))=0
$$
The last one because $g(\text{grad}(f),\text{grad}(f))=1$ is constant, so
$$
0=Yg(\text{grad}(f),\text{grad}(f))=2g(\nabla_Y\text{grad}(f),\text{grad}(f))
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
Martingales, finite expectation I have some uncertainties about one of the requirements for a martingale, i.e. showing that $\mathbb{E}|X_n|<\infty,\ n=0,1,\dots$ when $(X_n,n\geq 0)$ is some stochastic process. In particular, in some solutions I find that, let's say, $\mathbb{E}|X_n|<n$ is accepted, for example here (2nd slide, example 1.2). So my question is: what is the way of thinking if $n$ goes to infinity? Why are we accepting $n$ as a boundary, or maybe I misunderstood something?
| The condition $\mathbb E|X_n|\lt n$ is odd. What is required for $(X_n)$ to be a martingale is, in particular, that each $X_n$ is integrable (if only to be able to consider its conditional expectation), but nothing is required about the growth of $\mathbb E|X_n|$.
Consider for example a sequence $(Z_n)$ of i.i.d. centered $\pm1$ Bernoulli random variables, and a real valued sequence $(a_n)$. Then $X_n=\sum\limits_{k=1}^na_kZ_k$ defines a martingale $(X_n)$ and $\mathbb E|X_n|\gt |a_n|$ by convexity. One sees that the growth of $n\mapsto\mathbb E|X_n|$ cannot be limited a priori.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Numbers are too large to show $65^{64}+64^{65}$ is not a prime I tried to find cycles of powers, but they are too big. Also $65^{n} \equiv 1(\text{mod}64)$, so I dont know how to use that.
| Hint $\rm\ \ x^4 +\: 64\: y^4\ =\ (x^2+ 8\:y^2)^2 - (4xy)^2\ =\ (x^2-4xy + 8y^2)\:(x^2+4xy+8y^2)$
Thus $\rm\ x^{64} + 64\: y^{64} =\ (x^{32} - 4 x^{16} y^{16} + 8 y^{32})\:(x^{32} + 4 x^{16} y^{16} + 8 y^{32})$
Below are some other factorizations which frequently prove useful for integer factorization. Aurifeuille, Le Lasseur and Lucas discovered so-called Aurifeuillian factorizations of cyclotomic polynomials $\rm\;\Phi_n(x) = C_n(x)^2 - n\ x\ D_n(x)^2\,$ (aka Aurifeuillean). These play a role in factoring numbers of the form $\rm\; b^n \pm 1\:$, cf. the Cunningham Project. Below are some simple examples of such factorizations:
$$\begin{array}{rl}
x^4 + 2^2 \quad=& (x^2 + 2x + 2)\;(x^2 - 2x + 2) \\\\
\frac{x^6 + 3^2}{x^2 + 3} \quad=& (x^2 + 3x + 3)\;(x^2 - 3x + 3) \\\\
\frac{x^{10} - 5^5}{x^2 - 5} \quad=& (x^4 + 5x^3 + 15x^2 + 25x + 25)\;(x^4 - 5x^3 + 15x^2 - 25x + 25) \\\\
\frac{x^{12} + 6^6}{x^4 + 36} \quad=& (x^4 + 6x^3 + 18x^2 + 36x + 36)\;(x^4 - 6x^3 + 18x^2 - 36x + 36) \\\\
\end{array}$$
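As a sanity check on the original number, Python's arbitrary-precision integers make it easy to verify that the identity really produces a nontrivial factor of $65^{64}+64^{65}$:

```python
x, y = 65, 64
n = x ** 64 + 64 * y ** 64          # equals 65**64 + 64**65, since 64*64**64 = 64**65
assert n == 65 ** 64 + 64 ** 65

# one of the two algebraic factors from the identity above
factor = x ** 32 - 4 * x ** 16 * y ** 16 + 8 * y ** 32
assert 1 < factor < n and n % factor == 0   # hence n is composite
```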
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 0
} |
Is $ f(x) = \left\{ \begin{array}{lr} 0 & : x = 0 \\ e^{-1/x^{2}} & : x \neq 0 \end{array} \right. $ infinitely differentiable on all of $\mathbb{R}$? Can anyone explicitly verify that the function $
f(x) = \left\{
\begin{array}{lr}
0 & : x = 0 \\
e^{-1/x^{2}} & : x \neq 0
\end{array}
\right. $
is infinitely differentiable on all of $\mathbb{R}$ and that $f^{(k)}(0) = 0$ for every $k$?
| For $x\neq 0$ you get:
$$\begin{split}
f^\prime (x) &= \frac{2}{x^3}\ f(x)\\
f^{\prime \prime} (x) &= 2\left( \frac{2}{x^6} - \frac{3}{x^4}\right)\ f(x)\\
f^{\prime \prime \prime} (x) &= 4\left( \frac{2}{x^9} - \frac{9}{x^7} +\frac{6}{x^5} \right)\ f(x)
\end{split}$$
In the above equalities you can see a path, i.e.:
$$\tag{1} f^{(n)} (x) = P_{3n}\left( \frac{1}{x}\right)\ f(x)$$
where $P_{3n}(t)$ is a polynomial of degree $3n$ in $t$.
Formula (1) can be proved by induction. You have three base cases, hence you only have to prove the inductive step. So, assume (1) holds for $n$ and evaluate:
$$\begin{split}
f^{(n+1)} (x) &= \left( P_{3n}\left( \frac{1}{x}\right)\ f(x) \right)^\prime\\
&= -\frac{1}{x^2}\ \dot{P}_{3n} \left( \frac{1}{x}\right)\ f(x) + P_{3n} \left( \frac{1}{x}\right)\ f^\prime (x)\\
&= \left[ -\frac{1}{x^2}\ \dot{P}_{3n} \left( \frac{1}{x}\right) +\frac{2}{x^3}\ P_{3n} \left( \frac{1}{x}\right)\right]\ f(x)\\
&= \left[ -t^2\ \dot{P}_{3n}( t) +2t^3\ P_{3n}( t)\right]_{t=1/x}\ f(x)
\end{split}$$
where the dot means derivative w.r.t. the dummy variable $t$; now the function $-t^2\ \dot{P}_{3n}( t) +2t^3\ P_{3n}( t)$ is a polynomial in $t$ of degree $3n+3=3(n+1)$, therefore:
$$f^{(n+1)}(x) = P_{3(n+1)} \left( \frac{1}{x}\right)\ f(x)$$
as you wanted.
Formula (1) proves that $f\in C^\infty (\mathbb{R}\setminus \{0\})$.
Now, for each fixed $n$, you have:
$$\lim_{x\to 0} f^{(n)}(x) = \lim_{x\to 0} P_{3n}\left( \frac{1}{x}\right)\ f(x) =0$$
since $f(x) = \text{o}(x^{3n})$ as $x\to 0$ (the exponential decays faster than any power of $x$). Therefore, by an elementary consequence of Lagrange's Mean Value Theorem, every derivative of $f$ is differentiable also at $0$, with $f^{(n)}(0)=0$. Thus $f\in C^\infty (\mathbb{R})$.
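To see numerically how the exponential decay wins against the polynomial blow-up in, say, $f'(x) = \frac{2}{x^3}\,f(x)$, evaluate it at a few points approaching $0$:

```python
import math

f_prime = lambda x: 2 / x ** 3 * math.exp(-1 / x ** 2)

# the factor exp(-1/x^2) dominates the growth of 2/x^3 as x -> 0
assert f_prime(0.5) > f_prime(0.2) > f_prime(0.1) > f_prime(0.05)
assert f_prime(0.1) < 1e-40
```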
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Units and Nilpotents
If $ua = au$, where $u$ is a unit and $a$ is a nilpotent, show that $u+a$ is a unit.
I've been working on this problem for an hour; I tried to construct an element $x \in R$ such that $x(u+a) = 1 = (u+a)x$. After trying several elements and manipulating $ua = au$, I still couldn't find any clue. Can anybody give me a hint?
| If $u=1$, then you could do it via the identity
$$(1+a)(1-a+a^2-a^3+\cdots + (-1)^{n}a^n) = 1 + (-1)^{n}a^{n+1}$$
by selecting $n$ large enough.
If $uv=vu=1$, does $a$ commute with $v$? Is $va$ nilpotent?
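In the special case $u=1$, the geometric-series identity above gives the inverse explicitly. A small matrix illustration (the strictly upper triangular, hence nilpotent, matrix $a$ and the helper function are just for this demo):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
a = [[0, 2, 5], [0, 0, 3], [0, 0, 0]]         # strictly upper triangular => a^3 = 0
a2 = mat_mul(a, a)
assert mat_mul(a2, a) == [[0, 0, 0]] * 3       # nilpotency index <= 3

# inverse of 1 + a from the identity: (1 + a)(1 - a + a^2) = 1 + a^3 = 1
inv = [[I[i][j] - a[i][j] + a2[i][j] for j in range(3)] for i in range(3)]
one_plus_a = [[I[i][j] + a[i][j] for j in range(3)] for i in range(3)]
assert mat_mul(one_plus_a, inv) == I
```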
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/119904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "31",
"answer_count": 6,
"answer_id": 0
} |
Compute $\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$ Compute
$$\lim \limits_{x\to\infty} (\frac{x-2}{x+2})^x$$
I did
$$\lim_{x\to\infty} (\frac{x-2}{x+2})^x = \lim_{x\to\infty} \exp(x\cdot \ln(\frac{x-2}{x+2})) = \exp( \lim_{x\to\infty} x\cdot \ln(\frac{x-2}{x+2}))$$
But how do I continue? The hint is to use L Hopital's Rule. I tried changing to
$$\exp(\lim_{x\to\infty} \frac{\ln(x-2)-\ln(x+2)}{1/x})$$
This is
$$(\infty - \infty )/0 = 0/0$$
But I find that I can keep differentiating?
| you can use
$$\left( \frac{x-2}{x+2}\right)^x = \left(1 - \frac{4}{x+2}\right)^x$$
and $(1 + \frac ax)^x \to \exp(a)$,
HTH, AB
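Numerically, the expression indeed approaches $e^{-4} \approx 0.0183$, and the agreement improves as $x$ grows:

```python
import math

f = lambda x: ((x - 2) / (x + 2)) ** x

assert abs(f(1e2) - math.exp(-4)) < 1e-2
assert abs(f(1e6) - math.exp(-4)) < 1e-8
```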
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
Prove that the following integral is divergent $$\int_0^\infty \frac{7x^7}{1+x^7}\,dx$$
Im really not sure how to even start this. Does anyone care to explain how this can be done?
| The only problem is at $+\infty$. We have for $x\geq 1$ that $1+x^7\leq 2x^7$, so $\frac{7x^7}{1+x^7}\geq \frac 72\geq 0$, and $\int_1^{+\infty}\frac 72\,dx$ is divergent, so $\int_1^{+\infty}\frac{7x^7}{1+x^7}\,dx$ is divergent. Finally, $\int_0^{+\infty}\frac{7x^7}{1+x^7}\,dx$ is divergent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Why do we look at morphisms? I am reading some lecture notes and in one paragraph there is the following motivation: "The best way to study spaces with a structure is usually to look at the maps between them preserving structure (linear maps, continuous maps differentiable maps). An important special case is usually the functions to the ground field."
Why is it a good idea to study a space with structure by looking at maps that preserve this structure? It seems to me as if one achieves not much by going from one "copy" of a structured space to another copy.
| There is no short and simple answer, as has already been mentioned in the comments. It is a general change of perspective that has happened during the 20th century. I think if you had asked a mathematician around 1900 what math is all about, he/she would have said: "There are equations that we have to solve" (linear or polynomial equations, differential and integral equations etc.).
Then around 1950 you would have met more and more people saying "there are spaces with a certain structure and maps betweeen them". And today more and more people would add "...which together are called categories".
It's essentially a shift towards a higher abstraction, towards studying Banach spaces instead of bunches of concrete spaces that happen to have an isomorphic Banach space structure, or studying an abstract group instead of a bunch of isomorphic representations etc.
I'm certain all of this will become clearer after a few years of study.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "39",
"answer_count": 4,
"answer_id": 2
} |
What exactly do we mean when say "linear" combination? I've noticed that the term gets abused alot. For instance, suppose I have
$c_1 x_1 + c_2 x_2 = f(x)$...(1)
Eqtn (1) is such what we say "a linear combination of $x_1$ and $x_2$"
In ODEs, sometimes when we want to solve a homogeneous 2nd order ODE like $y'' + y' + y = 0$, we find the characteristic eqtn and solve for the roots and put it into whatever form necessary. But in all cases, the solution takes the form $c_1y_1 + c_2y_2 = y(t)$.
The thing is that $y_1$ and $y_2$ themselves don't even have linear terms, so does it make sense to say $c_1y_1^2 +c_2y_2^2 = f(t)$ is a "quadratic" combination of $y_1$ and $y_2$?
| It's a linear combination in the vector space of continuous (or differentiable or whatever) functions. $y_1$ and $y_2$ are vectors (that is, elements of the vector space in question) and $c_1$ and $c_2$ are scalars (elements of the field for the vector space, in this case $\mathbb{R}$). In linear algebra it does not matter what kind of elements vector spaces consists of (so these might be tuples in the case of $\mathbb{R}^n$, or linear operators, or just continuous functions, or something entirely different), but that vector spaces satisfy the axioms which are algebraic laws.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Sum of three consecutive cubes When I noticed that $3^3+4^3+5^3=6^3$, I wondered if there are any other times where $(a-1)^3+a^3+(a+1)^3$ equals another cube. That expression simplifies to $3a(a^2+2)$ and I'm still trying to find another value of $a$ that satisfies the condition (the only one found is $a=4$)
Is this impossible? (It doesn't happen for $3 \leq a \leq 10000$) Is it possible to prove?
| How about
$$\left(-\frac{1}{2}\right)^3 + \left(\frac{1}{2}\right)^3 + \left(\frac{3}{2}\right)^3 = \left(\frac{3}{2}\right)^3 ?$$
After all, the OP didn't specify where $a$ lives... (by the way, there are infinitely many distinct rational solutions of this form!).
Now for a more enlightened answer: no, there are no other integral solutions with $a\in\mathbb{Z}$, other than $a=0$ and $a=4$. Here is why (what follows is a sketch of the argument, several details would be too lengthy to write fully).
Suppose $(a-1)^3+a^3+(a+1)^3=b^3$. Then $3a^3+6a=b^3$. Hence $(b:1:a)$ is a point on the elliptic curve $E:3Z^3+6ZY^2=X^3$ with origin at $(0:1:0)$. In particular, a theorem of Siegel tells us that there are at most finitely many integral solutions of $3a^3+6a=b^3$ with $a,b\in\mathbb{Z}$. Now the hard part is to prove that there are exactly $2$ integral solutions.
With a change of variables $U=X/Z$ and $V=Y/Z$ followed by a change $U=x/6$ and $V=y/36$, we can look instead at the curve $E':y^2=x^3-648$. This curve has a trivial torsion subgroup and rank $2$, with generators $(18,72)$ and $(9,9)$. Moreover each point $(x,y)$ in $E'$ corresponds to a (projective) point $(x/6:y/36:1)$ on $E$, and a point $(X:Y:Z)$ on $E$ corresponds to a solution $a=Z/Y$ and $b=X/Y$. This means that $E$ is generated by $P_1=(3:2:1)$ and $P_2=(18:3:12)$ which correspond respectively to $(a,b)=(1/2,3/2)$ and $(4,6)$. The origin $(0:1:0)$ corresponds to $(a,b)=(0,0)$.
Now it is a matter of looking through all $\mathbb{Z}$-linear combinations of $P_1$ and $P_2$ to see if any gives another $(a,b)$ integral. However, this is a finite search, because of the way heights of points work, and one can calculate a bound on the height for a point $(a,b)$ to have both integral coordinates. Once this bound is found, a search among a few small linear combinations of $P_1$ and $P_2$ shows that $(0,0)$ and $(4,6)$ are actually the only two possible integral solutions.
Here is another rational solution, not as trivial as the first one I offered, that appears from $P_1-P_2$:
$$\left(-\frac{10}{11}\right)^3 + \left(\frac{1}{11}\right)^3 + \left(\frac{12}{11}\right)^3 = \left(\frac{9}{11}\right)^3 $$
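A brute-force search corroborates the integral result: extending the asker's range, among $0 \le a \le 20000$ only $a=0$ and $a=4$ make $3a^3+6a$ a perfect cube:

```python
def is_cube(n):
    """Integer perfect-cube test for n >= 0, via a rounded float cube root."""
    r = round(n ** (1 / 3)) if n > 0 else 0
    return any(c ** 3 == n for c in range(max(r - 2, 0), r + 3))

solutions = [a for a in range(20_001) if is_cube(3 * a ** 3 + 6 * a)]
assert solutions == [0, 4]                      # (a-1)^3 + a^3 + (a+1)^3 = b^3 only here
assert 3 ** 3 + 4 ** 3 + 5 ** 3 == 6 ** 3       # the a = 4 instance
```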
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Proof that $\pi$ is rational I stumbled upon this proof of $\pi$ being rational (coincidentally, it's Pi Day). Of course I know that $\pi$ is irrational and there have been multiple proofs of this, but I can't seem to see a flaw in the following proof that I found here. I'm assuming it will be blatantly obvious to people here, so I was hoping someone could point it out. Thanks.
Proof:
We will prove that pi is, in fact, a rational number, by induction on
the number of decimal places, N, to which it is approximated. For
small values of N, say 0, 1, 2, 3, and 4, this is the case as 3, 3.1,
3.14, 3.142, and 3.1416 are, in fact, rational numbers. To prove the rationality of pi by induction, assume that an N-digit approximation
of pi is rational. This number can be expressed as the fraction
M/(10^N). Multiplying our approximation to pi, with N digits to the
right of the decimal place, by (10^N) yields the integer M. Adding the
next significant digit to pi can be said to involve multiplying both
numerator and denominator by 10 and adding a number between between -5
and +5 (approximation) to the numerator. Since both (10^(N+1)) and
(M*10+A) for A between -5 and 5 are integers, the (N+1)-digit
approximation of pi is also rational. One can also see that adding one
digit to the decimal representation of a rational number, without loss
of generality, does not make an irrational number. Therefore, by
induction on the number of decimal places, pi is rational. Q.E.D.
| This "proof" shows that any real number is rational...
The mistake here is that you are doing induction on the sequence $\pi_n$ of approximations. And with induction you can get information on each element of the sequence, but not on their limit.
Or, put in another way, the proof's b.s. is on "therefore, by induction on the number of decimal places..."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 1
} |
Dynamic Optimization - Infinite dimensional spaces - Reference request Respected community members,
I am currently reading the book "recursive macroeconomic theory" by Sargent and Ljungqvist. While reading this book I have realized that I do not always fully understand what is going on behind "the scenes".
In particular, in chapter 8, the authors uses the Lagrange method. This method is pretty clear to me in the finite dimensional case, i.e. optimization over $R^n$. However here we are dealing with infinite dimensional problems. Why does the Lagrange method work here? Can someone point me to any good references?
To clarify: I do understand how to apply the "recipe" given in the book, however I do not understand why it works. What are the specific assumptions needed in order to assure that a solution exists? That it is unique? This is the kind of questions that I would like to be able to answer rigorously.
I hope I am clear enough about this, if not please let me know. Furthermore I really appreciate any help from you. Thank you for your time.
Btw: If anyone have the time to look into the matters, the book is available for free here: elogica.br.inter.net/bere/boo/sargent.pdf
| A fairly rigorous treatment with many economics applications is Stokey, Lucas and Prescott's (SLP) Recursive Methods in Economic Dynamics.
This MIT OCW course gives good additional readings. I find the ones on transversality conditions very important.
Standard mathematical treatments are Bertsekas's Dynamic Programming and Optimal Control and Puterman's Markov Decision Processes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
numerically solving differential equations $\frac{d^2 \theta}{dx^2} (1 + \beta \theta) + \beta \left(\frac{d \theta}{d x}\right)^2 - m^2 \theta = 0$
Boundary Conditions
$\theta=100$ at $x = 0$, $\frac{d\theta}{dx} = 0$ at $x = 2$
$\beta$ and $m$ are constants.
Please help me solve this numerically (using finite difference).
The squared term is really complicating things!
Thank You!
| Choose an integer $N$, let $h=2/N$ and let $\theta_k$ be the approximation given by the finite difference method to the exact value $\theta(k\,h)$, $0\le k\le N$. We get the system of $N-1$ equations
$$
\frac{\theta_{k+1}-2\,\theta_k+\theta_{k-1}}{h^2}(1+\beta\,\theta_k)+\beta\,\Bigl(\frac{\theta_k-\theta_{k-1}}{h}\Bigr)^2-m^2\,\theta_k=0,\quad 1\le k\le N-1\tag1
$$
complemented with two more coming from the boundary conditions:
$$
\theta_0=100,\quad \theta_N-\theta_{N-1}=0.
$$
I doubt that this nonlinear system can be solved explicitly.
I suggest two ways of proceeding. The first is to solve the system numerically. The other is to apply a shooting method to the equation.
Choose a starting value $\theta_N=a$. The system (1) can be solved recursively, obtaining at the end a value $\theta_0=\theta_0(a)$. If $\theta_0(a)=100$, you are done. If not, change the value of $a$ and repeat the process. Your first goal is to find two values $a_1$ and $a_2$ such that $\theta_0(a_1)<100<\theta_0(a_2)$. Then use the bisection method to approximate a value of $a$ such that $\theta_0(a)=100$.
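For concreteness, here is a sketch of the shooting idea in Python. Rather than the discrete recursion (1) (whose backward step requires solving a quadratic in $\theta_{k-1}$ when $\beta\neq0$, because of the squared difference), this version shoots on the ODE itself: rewrite it as $\theta'' = (m^2\theta - \beta(\theta')^2)/(1+\beta\theta)$, integrate backwards from $x=2$ with $\theta(2)=a,\ \theta'(2)=0$ using RK4, and bisect on $a$ until $\theta(0)=100$. The values $\beta=0.1$, $m=1$ are made up for the demo:

```python
BETA, M = 0.1, 1.0        # assumed constants, chosen only for this demo

def rhs(u, v):
    # theta'' isolated from (1 + beta*theta) theta'' + beta (theta')^2 - m^2 theta = 0
    return (M ** 2 * u - BETA * v ** 2) / (1 + BETA * u)

def shoot(a, steps=400):
    """Integrate backwards from x = 2 to x = 0 with theta(2) = a, theta'(2) = 0; return theta(0)."""
    h = -2.0 / steps
    u, v = a, 0.0
    for _ in range(steps):                       # classical RK4 for (u, v)' = (v, rhs(u, v))
        k1u, k1v = v, rhs(u, v)
        k2u, k2v = v + h / 2 * k1v, rhs(u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = v + h / 2 * k2v, rhs(u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = v + h * k3v, rhs(u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return u

lo, hi = 0.0, 150.0                  # bracket: shoot(0) = 0 < 100 < shoot(150)
for _ in range(60):
    mid = (lo + hi) / 2
    if shoot(mid) < 100.0:
        lo = mid
    else:
        hi = mid
a_star = (lo + hi) / 2
assert abs(shoot(a_star) - 100.0) < 1e-6
```

This is a sketch under the stated assumptions, not the finite-difference solve itself; for the discrete system one would instead solve (1) with, e.g., Newton's method.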
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number of distinct limits of subsequences of a sequence is finite? "The number of distinct limits of subsequences of a sequence is finite?"
I've been mulling over this question for a while, and I think it is true, but I can't see how I might prove this formally. Any ideas?
Thanks
| No, the following is a counter-example: Let $E: \mathbb N \to \mathbb N^2$ be an enumeration of $\mathbb N^2$, and set $a_n = (E(n))_1$. Then $a_n$ contains a constant sub-sequence $a_{n_i} = k$ for every $k \in \mathbb N$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Show that $\displaystyle{\frac{1}{9}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$ Show that $\displaystyle{\frac{1}{9}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$
Use proof by induction. I tried for $n=1$ and got $\frac{27}{9}=3$, but if I assume for $n$ and show it for $n+1$, I don't know what method to use.
| ${\displaystyle{\frac{1}{9}}(10^n+3 \cdot 4^n + 5)}$ is an integer for all $n \geq 1$
Proof by induction:
For $n=1, {\displaystyle{\frac{1}{9}}(10^1+3 \cdot 4^1 + 5) = \frac{27}{9} = 3}$, so the result holds for $n=1$
Assume the result to be true for $n=m$, i.e. $\displaystyle{\frac{1}{9}(10^m+3 \cdot 4^m + 5)}$ is an integer
To show ${\displaystyle{\frac{1}{9}}(10^{m+1}+3 \cdot 4^{m+1} + 5)}$ is an integer.
$$
\begin{align*}
\frac{1}{9}\left[(10^{m+1}+3 \cdot 4^{m+1} + 5) -(10^m+3 \cdot 4^m +5 )\right] &= \frac{1}{9}\left((10^{m+1}-10^m) +3\cdot (4^{m+1}-4^m) \right) \\
&=\frac{1}{9}\left(10^m(10-1)+3 \cdot 4^m (4-1) \right)\\
&= 10^m+4^m
\end{align*}
$$
which is an integer, and therefore ${\displaystyle{\frac{1}{9}}(10^{m+1}+3 \cdot 4^{m+1} + 5)}$ is an integer.
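A one-liner confirms the claim for many values of $n$ (Python's integers are exact, so there is no overflow concern):

```python
assert all((10 ** n + 3 * 4 ** n + 5) % 9 == 0 for n in range(1, 500))
# the first few quotients: n = 1, 2, 3 give 3, 17, 133
assert [(10 ** n + 3 * 4 ** n + 5) // 9 for n in (1, 2, 3)] == [3, 17, 133]
```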
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 9,
"answer_id": 1
} |
How to prove a trigonometric identity $\tan(A)=\frac{\sin2A}{1+\cos 2A}$ Show that
$$
\tan(A)=\frac{\sin2A}{1+\cos 2A}
$$
I've tried a few methods, and it stumped my teacher.
| First, let's develop a couple of identities.
Given that $\sin 2A = 2\sin A\cos A$, and $\cos 2A = \cos^2A - \sin^2 A$ we have
$$\begin{array}{lll}
\tan 2A &=& \frac{\sin 2A}{\cos 2A}\\
&=&\frac{2\sin A\cos A}{\cos^2 A-\sin^2A}\\
&=&\frac{2\sin A\cos A}{\cos^2 A-\sin^2A}\cdot\frac{\frac{1}{\cos^2 A}}{\frac{1}{\cos^2 A}}\\
&=&\frac{2\tan A}{1-\tan^2A}
\end{array}$$
Similarly, we have
$$\begin{array}{lll}
\sec 2A &=& \frac{1}{\cos 2A}\\
&=&\frac{1}{\cos^2 A-\sin^2A}\\
&=&\frac{1}{\cos^2 A-\sin^2A}\cdot\frac{\frac{1}{\cos^2 A}}{\frac{1}{\cos^2 A}}\\
&=&\frac{\sec^2 A}{1-\tan^2A}
\end{array}$$
But sometimes it is just as easy to represent these identities as
$$\begin{array}{lll}
(1-\tan^2 A)\sec 2A &=& \sec^2 A\\
(1-\tan^2 A)\tan 2A &=& 2\tan A
\end{array}$$
Applying these identities to the problem at hand we have
$$\begin{array}{lll}
\frac{\sin 2A}{1+\cos 2A}&=& \frac{\sin 2A}{1+\cos 2A}\cdot\frac{\frac{1}{\cos 2A}}{\frac{1}{\cos 2A}}\\
&=& \frac{\tan 2A}{\sec 2A +1}\\
&=& \frac{(1-\tan^2 A)\tan 2A}{(1-\tan^2 A)(\sec 2A +1)}\\
&=& \frac{(1-\tan^2 A)\tan 2A}{(1-\tan^2 A)\sec 2A +(1-\tan^2 A)}\\
&=& \frac{2\tan A}{\sec^2 A +(1-\tan^2 A)}\\
&=& \frac{2\tan A}{(\tan^2 A+1) +(1-\tan^2 A)}\\
&=& \frac{2\tan A}{2}\\
&=& \tan A\\
\end{array}$$
Lessons learned: As just a quick scan of some of the other answers will indicate, a clever substitution can shorten your workload considerably.
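A quick numerical spot-check of the identity (avoiding values of $A$ where $\cos 2A = -1$, i.e. where the right-hand side is undefined):

```python
import math

for A in (0.1, 0.7, 1.2, -0.4, 2.5):
    lhs = math.tan(A)
    rhs = math.sin(2 * A) / (1 + math.cos(2 * A))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```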
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
} |
Proof by contrapositive Prove that if the product $ab$ is irrational, then either $a$ or $b$ (or both) must be irrational.
How do I prove this by contrapositive? What is a contrapositive?
| The statement you want to prove is:
If $ab$ is irrational, then $a$ is irrational or $b$ is irrational.
The contrapositive is:
If not($a$ is irrational or $b$ is irrational), then not($ab$ is irrational).
A more natural way to state this (using DeMorgan's Law) is:
If both $a$ and $b$ are rational, then $ab$ is rational.
This last statement is indeed true. Since the truth of a statement and the truth of its contrapositive always agree, one can conclude the original statement is true, as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
An example of an endomorphism Could someone suggest a simple $\phi\in \operatorname{End}_R(A)$, where $A$ is a finitely generated module over a ring $R$, such that $\phi$ is injective but not surjective? I have a hunch that it exists, but I can't construct an explicit example. Thanks.
| Let $R=K$ be a field, and let $A=K[x]$ be the polynomial ring in one variable over $K$ (with the module structure coming from multiplication). Then let $\phi(f)=xf$. It is injective, but has image $xK[x]\ne K[x]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Blowing up a singular point on a curve reduces its singular multiplicity by at least one Let $X$ be the affine plane curve given by $y^2=x^3$, and $O=(0,0)$. Then $X$ has a double singularity at $O$, since its tangent space at $O$ is the doubled $x$-axis. How do we see that, if $\widetilde{X}$ is the blow-up of $X$ at $O$, then $O$ is a nodal point of $\widetilde{X}$, i.e. the tangent space of $\widetilde{X}$ at $O$ consists of two distinct tangent lines?
| Blowing up a cuspidal plane curve actually yields a nonsingular curve. So the multiplicity is actually reduced by more than one. This is e.g. Exercise 19.4.C in Vakil's notes "Foundations of Algebraic Geometry".
One can compute this quite easily in local charts following e.g. Lecture 20 in Harris' Book "Algebraic Geometry". Recall that $\tilde X$ can be computed by blowing up the affine plane first and then taking the proper transform of the curve $X$. So: The blow-up of the affine plane is given by the points $z_1 W_2=z_2 W_1$ in $\mathbb A^2\times \mathbb P^1$ with coordinates $z_1, z_2$ on $\mathbb A^2$ and $W_1,W_2$ on $\mathbb P^1$. Taking Euclidean coordinates $w_1=W_1/W_2$ on $U_2=\{W_2\neq0\}$ yields an isomorphism from $U_2$ to $\mathbb A^2$ with coordinates $(z_2,w_1)$. We have $z_1^2-z_2^3=z_2^2w_1^2-z_2^3$ on $U_2$ and thus the proper transform $\tilde X$ is defined by the polynomial $w_1^2-z_2$. Hence it is smooth!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Graph Theory - How can I calculate the number of vertices and edges, if given this example An algorithm book Algorithm Design Manual has given an description:
Consider a graph that represents the street map of Manhattan in New York City. Every junction of two streets will be a vertex of the graph. Neighboring junctions are connected by edges. How big is this graph? Manhattan is basically a grid of 15 avenues each crossing roughly 200 streets. This gives us about 3,000 vertices and 6,000 edges, since each vertex neighbors four other vertices and each edge is shared between two vertices.
If it says "The graph is a grid of 15 avenues each crossing roughly 200 streets", how can I calculate the number of vertices and edges? Although the description above has given the answers, but I just can't understand.
Can anyone explain the calculation more easily?
Thanks
| Every junction between an avenue and a street is a vertex. As there are $15$ avenues and (about) $200$ streets, there are (about) $15*200=3000$ vertices. Furthermore, every vertex has an edge along an avenue and an edge along a street that connect it to two other vertices. Hence, there are (about) $2*3000 = 6000$ edges1. Does that answer your question?
1 With regards to edges, a visual way to imagine it would be to imagine that the avenues are going north-south and the streets are going east-west. Start with the junction/vertex in the northwesternmost corner. It is adjacent to two other vertices: one south along the avenue and one east along the street. Similarly, every vertex has a vertex to the south and a vertex to the east (as well as north and west for most of them, but those are irrelevant for this). Hence, there are two edges for every vertex.
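The exact counts for such a grid are easy to compute; the book's $6{,}000$ comes from the doubling heuristic, which slightly overcounts because boundary vertices have fewer than four neighbors:

```python
avenues, streets = 15, 200

vertices = avenues * streets                           # one vertex per junction
edges = avenues * (streets - 1) + streets * (avenues - 1)

assert vertices == 3000
assert edges == 5785        # close to the book's heuristic estimate of 6000
```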
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/120992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
} |
Is it mathematically correct to write $a \bmod n \equiv b$? This is not a technical question, but a question on whether we can use a particular notation while doing modular arithmetic.
We write $a \equiv b \bmod n$, but is it right to write $a \bmod n \equiv b$?
| It is often correct. $\TeX$ distinguishes the two usages: the \pmod control sequence is for "parenthesized" $\pmod n$ used to contextualize an equivalence, as in your first example, and the \bmod control sequence is for "binary operator" $\bmod$ when used like a binary operator (in your second example).
But in the latter case, you should use $=$, not $\equiv$. $7\bmod4 = 3$, and the relation here is a numeric equality, indicated by $=$, not a modular equivalence, which would be indicated by $\equiv$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Left or right edge in cubic planar graph Given a cubic planar graph, if I "walk" on one edge to get to a vertex, it it possible to know which of the other two edges is the left edge and which one is the right edge? Am I forced to draw the graph on paper, without edge crossing, and visually identify left and right edges?
| My comment as an answer so it can be accepted:
The answer is no: This can't be possible, since you could draw the mirror image instead, and then left and right edges would be swapped, so they can't be determined by the abstract graph alone.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Traveling between integers- powers of 2
Moderator Note: At the time that this question was posted, it was from an ongoing contest. The relevant deadline has now passed.
Consider the integers. We can only travel directly between two integers with a difference whose absolute value is a power of 2 and every time we do this it is called a step. The distance $d$ between two integers is the minimum number of steps required to get from one to the other. Note however that we can travel backwards. For instance $d(2,17)$ is 2: $2+16=18 \rightarrow 18-1=17$.
How can we prove that for any integer n, we will always have some $d(a,b)=n$ where$b>a$?
If we are only able to take forward steps I know that the number of 1s in the binary representation of $b-a$ would be $d(a,b)$. However, we are able to take steps leftward on the number line...
| It is easy to see that the function $s(n):=d(0,n)$ $\ (n\geq1)$ satisfies the following recursion:
$$s(1)=1,\qquad s(2n)\ =\ s(n), \qquad s(2n+1)=\min\{s(n),s(n+1)\}+1 \ .$$
In particular $s(2)=s(4)=1$, $s(3)=2$.
Consider now the numbers $$a_r:={1\over6}(4^r+2)\qquad (r\geq2)$$ satisfying the recursion
$$a_2=3,\qquad a_{r+1}=4 a_r-1\quad (r\geq2).$$
The first few of these are $3$, $11$, $43$, $171$. I claim that
$$s(a_r-1)=s(a_r+1)=r-1,\quad s(a_r)=r\qquad (r\geq2)\ .$$
The claim is true for $r=2$. Assume that it is true for $r$.
Then
$$s(2a_r-1)=\min\{s(a_r),s(a_r-1)\}+1=r,\qquad s(2a_r)=r$$
and therefore
$$s(4a_r-2)=s(4a_r)=r,\quad s(4a_r-1)=\min\{s(2a_r),s(2a_r-1)\}+1=r+1\ .$$
The last line can be read as
$$s(a_{r+1}-1)=s(a_{r+1}+1)=r, \qquad s(a_{r+1})=r+1\ .$$
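The recursion for $s(n)$ and the claimed values at $a_r=(4^r+2)/6$ are easy to check by machine; a small sketch (the function name `s` mirrors the notation above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s(n):
    # s(1) = 1, s(2n) = s(n), s(2n+1) = min(s(n), s(n+1)) + 1
    if n == 1:
        return 1
    if n % 2 == 0:
        return s(n // 2)
    return min(s(n // 2), s(n // 2 + 1)) + 1

# a_r = (4^r + 2)/6 for r >= 2: the first few are 3, 11, 43, 171
for r in range(2, 10):
    a = (4**r + 2) // 6
    assert s(a) == r and s(a - 1) == r - 1 and s(a + 1) == r - 1
```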
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 0
} |
If $A$ is a subset of $B$, then the closure of $A$ is contained in the closure of $B$. I'm trying to prove something here which isn't necessarily hard, but I believe it to be somewhat tricky. I've looked online for the proofs, but some of them don't seem 'strong' enough for me or that convincing. For example, they use the argument that since $A\subset \overline{B} $, then $ \overline{A} \subset \overline{B} $. That, or they use slightly altered definitions. These are the definitions that I'm using:
Definition #1: The closure of $A$ is defined as the intersection of all closed sets containing A.
Definition #2: We say that a point x is a limit point of $A$ if every neighborhood of $x$ intersects $A$ in some point other than $x$ itself.
Theorem 1: $ \overline{A} = A \cup A' $, where $A'$ = the set of all limit points of $A$.
Theorem 2: A point $x \in \overline{A} $ iff every neighborhood of $x$ intersects $A$.
Prove: If $ A \subset B,$ then $ \overline{A} \subset \overline{B} $
Proof: Let $ \overline{B} = \bigcap F $ where each $F$ is a closed set containing $B$. By hypothesis, $ A \subset B $; hence, it follows that for each $F \in \overline{B} $, $ A \subset F \subset \overline{B} $. Now that we have proven that $ A \subset \overline{B} $, we show $A'$ is also contained in $\overline{B} $.
Let $ x \in A' $. By definition, every neighborhood of x intersects A at some point other than $x$ itself. Since $ A \subset B $, every neighborhood of $x$ also intersects $B$ at some other point other than $x$ itself. Then, $ x \in B \subset \overline{B} $.
Hence, $ A \cup A' \subset \overline{B}$. But, $ A \cup A' = \overline{A}$. Hence, $ \overline{A} \subset \overline{B}.$
Is this proof correct?
Be brutally honest, please. Critique as much as possible.
| I think it's much simpler than that. By definition #1, the closure of A is a subset of any closed set containing A; and the closure of B is certainly a closed set containing A (because it contains B, which contains A). QED.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 7,
"answer_id": 1
} |
Find the value of $(-1)^{1/3}$. Evaluate $(-1)^{\frac{1}{3}}$.
I've tried to answer it by letting it be $x$ so that $x^3+1=0$.
But by this way, I'll get $3$ roots, how do I get the actual answer of $(-1)^{\frac{1}{3}}$??
| Just treat it as a complex number in polar form. Write $-1 = 1_{\pi}$ (modulus $1$, argument $\pi$); the $n$-th roots of $m_\theta$ have arguments $\alpha_k=\dfrac{\theta+2k\pi}{n}$ for $k=0,1,\dots,n-1$. Here $n=3$ and $\theta=\pi$, so $\alpha_0=60^\circ$, $\alpha_1=180^\circ$, $\alpha_2=300^\circ$.
So the answers are:
$z_1=1_{\pi/3}$
$z_2=1_{\pi}$
$z_3=1_{5\pi/3}$
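A quick numerical check of the three roots with Python's `cmath` (illustrative):

```python
import cmath

# the three cube roots of -1: arguments pi/3, pi, 5*pi/3
roots = [cmath.exp(1j * (cmath.pi + 2 * k * cmath.pi) / 3) for k in range(3)]
for r in roots:
    assert abs(r**3 - (-1)) < 1e-12
```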
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Suppose two $n\times n$ matrices, $A$ and $B$; how many possible solutions are there? Suppose I construct an $n\times n$ matrix $C$ by multiplying two $n\times n$ matrices $A$ and $B$, i.e. $AB = C$. Given $B$ and $C$, how many other $A$'s can also yield $C$? That is, is it just the exact $A$, infinitely many other $A$'s, or no other $A$'s? There are no assumptions made about the invertibility of $A$ and $B$. In the case that $A$ and $B$ are invertible there exists only one such $A$.
| In general, there could be infinitely many $A$.
Given two solutions $A_1B=C$ and $A_2B=C$, we see that $(A_1-A_2)B=0$
So, if there is at least one solution to $AB=C$, you can see that there are as many solutions to $AB=C$ as there are to $A_0B=0$
Now if $B$ is invertible, the only $A_0$ is $A_0=0$.
If $A_0B=0$ then $(kA_0)B=0$ for any real $k$, so if there is a non-zero root to $A_0B=0$ there are infinitely many.
So you have to show that if $B$ is not invertible, then there is at least one matrix $A_0\neq 0$ such that $A_0B=0$.
Indeed, the set of such $A_0$ forms a vector space which depends on the "rank" of $B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Eigenvalue or Eigenvector for a bidiagonal $n\times n$ matrix Let $$J =
\begin{bmatrix} a & b & 0 & 0 & \cdots & \cdots\\\\ 0 & a & b & 0 & \cdots & \cdots\\\\ \vdots & \vdots & \ddots & \cdots & \cdots & \cdots \\\\ \vdots & \vdots & \vdots & \ddots & \cdots & \cdots \\\\ \vdots & \vdots & \vdots &\ddots & a & b \\\\ \vdots & \vdots & \vdots & \vdots & 0 & a \\ \end{bmatrix}$$
I have to find eigenvalues and eigenvectors for $J$.
My thoughts on this...
a =
2 3
0 2
octave-3.2.4.exe:2> b=[2,3,0;0,2,3;0,0,2]
b =
2 3 0
0 2 3
0 0 2
octave-3.2.4.exe:3> eig(a)
ans =
2
2
octave-3.2.4.exe:4> eig(b)
ans =
2
2
2
octave-3.2.4.exe:5>
I can see that the eigenvalue is $a$ for an $n \times n$ matrix.
Any idea how I can prove that the eigenvalue is the diagonal entry $a$ for any $n \times n$ matrix?
Thanks!!!
I figured out how to find the eigenvalues. But my eigenvector corresponding to the eigenvalue $a$ comes out to be a zero vector... if I try using MATLAB, the eigenvector matrix has column vectors with $1$ in the first row and zeros in the rest of each column vector...
what am I missing? can someone help me figure out that eigenvector matrix?
| (homework) so some hints:
*
*The eigenvalues are the roots of ${\rm det}(A-xI) = 0.$
*The determinant of a triangular matrix is the product of all diagonal entries.
*How many diagonal entries does an $n\times n$ matrix have?
*How many roots does $(a - x)^n = 0$ have?
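The hints can be checked numerically; a sketch with NumPy (the sizes and values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.triu(rng.standard_normal((6, 6)))     # random upper triangular

# hint: determinant of a triangular matrix = product of its diagonal entries
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))

# hence for the bidiagonal J, det(J - x I) = (a - x)^n
n, a, b = 5, 2.0, 3.0
J = a * np.eye(n) + b * np.eye(n, k=1)
for x in (-1.0, 0.5, 3.0):
    assert np.isclose(np.linalg.det(J - x * np.eye(n)), (a - x) ** n)
```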
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Stability of trivial solution for DE system with non-constant coefficient matrix Given the arbitrary linear system of DE's
$$x'=A(t)x,$$
with the condition that the spectral bound of $A(t) $ is uniformly bounded by a negative constant, is the trivial solution always stable? All the $(2\times 2)$ matrices I've tried which satisfy the above property yield stable trivial solutions, which seems to suggest this might be the case in general. I can't think of a simple counterexample, so I'm asking if one exists. If there isn't what would be some steps toward proving the statement?
This is indeed homework.
| You can elongate a vector a bit over a short time using a constant matrix with negative eigenvalues, right? Now just do it and at the very moment it starts to shrink, change the matrix. It is not so easy to come up with an explicit formula (though some periodic systems will do it) but this idea of a counterexample is not hard at all ;).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why does $PSL(2,\mathbb C)\cong PGL(2,\mathbb C)$ but $PSL(2,\mathbb R) \not\cong PGL(2,\mathbb R)$? Why does $PSL(2,\mathbb C)\cong PGL(2,\mathbb C)$ but $PSL(2,\mathbb R) \not\cong PGL(2,\mathbb R)$?
| You have surjective morphisms $xL(n,K)\to PxL(n,K)$ (whose kernel consists of the multiples of the identity) for $x\in\{G,S\}$, $n\in\mathbb N$ and $K\in\{\mathbb C,\mathbb R\}$. You also have embeddings $SL(n,K)\to GL(n,K)$. Since the kernel of the composed morphism $SL(n,K)\to GL(n,K)\to PGL(n,K)$ contains (and in fact coincides with) the kernel of the morphism $SL(n,K)\to PSL(n,K)$ (it is the (finite) set of multiples of the identity in $SL(n,K)$), one may pass to the quotient to obtain a morphism $PSL(n,K)\to PGL(n,K)$, which is injective (because of "coincides with" above). The question is whether this morphism is surjective.
The question amounts to the following: given $g\in GL(n,K)$, does its image in $PGL(n,K)$ coincide with the image of some $g'\in SL(n,K)\subset GL(n,K)$? For that to happen, there should be a $\lambda\in K^\times$ such that $\lambda g\in SL(n,K)$, and since $\det(\lambda g)=\lambda^n\det g$, one is led to search for solutions $\lambda$ of $\lambda^n=(\det g)^{-1}$, where $(\det g)^{-1}$ could be any element of $K^\times$. It is easy to see how the solvability of this polynomial equation depends on $n$ and $K$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 1
} |
first order differential equation I needed help with this Differential Equation, below:
$$dy/dt = t + y, \text{ with } y(0) = -1$$
I tried $dy/(t+y) = dt$ and integrated both sides, but it looks like the $u$-substitution does not work out.
| This equation is not separable. In other words, you can't write it as $f(y)\;dy=g(t)\;dt$. A differential equation like this can be solved by integrating factors. First, rewrite the equation as:
$$\frac{dy}{dt}-y=t$$
Now we multiply the equation by an integrating factor so we can use the product rule, $d(uv)=udv+vdu.$ For this problem, that integrating factor would be $e^{-t}$.
$$e^{-t}\frac{dy}{dt}-e^{-t}y=\frac d{dt}(e^{-t}y)=te^{-t}$$
$$e^{-t}y=\int te^{-t}dt=-te^{-t}+\int e^{-t}dt=-te^{-t}-e^{-t}+C$$
$$y=Ce^t-t-1$$
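With the initial condition $y(0)=-1$ we get $C-1=-1$, so $C=0$ and $y=-t-1$. A quick numerical sanity check of the solution (a sketch):

```python
import math

def y(t, C=0.0):
    # general solution y = C e^t - t - 1; y(0) = -1 forces C = 0
    return C * math.exp(t) - t - 1

h = 1e-6
for t in (0.0, 0.5, 1.0):
    dydt = (y(t + h) - y(t - h)) / (2 * h)   # central difference
    assert abs(dydt - (t + y(t))) < 1e-6     # checks y' = t + y
```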
For this specific problem, we could also follow Iasafro's suggestion.
$$z=y+t,\frac{dz}{dt}=\frac{dy}{dt}+1,\frac{dy}{dt}=\frac{dz}{dt}-1$$
$$\frac{dz}{dt}-1=z,\frac{dz}{dt}=z+1,\frac{dz}{z+1}=dt$$
As you can see, this substitution resulted in a separable equation, allowing you to integrate both sides.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Limit of $\arctan(x)/x$ as $x$ approaches $0$? Quick question:
I came across the following limit: $$\lim_{x\rightarrow 0^{+}}\frac{\arctan(x)}{x}=1.$$
It seems like the well-known limit:
$$\lim_{x\rightarrow 0}\frac{\sin x}{x}=1.$$
Can anyone show me how to prove it?
| We can make use of L'Hopital's rule. Since $\frac{d}{dx}\arctan x=\frac{1}{x^2+1}$ and $\frac{d}{dx}x=1$, we have
$$\lim\limits_{x\to0^+}\frac{\arctan x}{x}=\lim\limits_{x\to0^+}\frac{1}{x^2+1}=1.$$
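A numerical illustration: since $\arctan x = x - x^3/3 + \cdots$, the ratio differs from $1$ by roughly $x^2/3$.

```python
import math

for x in (1e-1, 1e-3, 1e-5):
    assert abs(math.atan(x) / x - 1) < x**2   # error ~ x^2 / 3
```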
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 7,
"answer_id": 1
} |
Deriving even odd function expressions What is the logic/thinking process behind deriving an expression for even and odd functions in terms of $f(x)$ and $f(-x)$?
I've been pondering about it for a few hours now, and I'm still not sure how one proceeds from the properties of even and odd functions to derive:
$$\begin{align*}
E(x) &= \frac{f(x) + f(-x)}{2}\\
O(x) &= \frac{f(x) - f(-x)}{2}
\end{align*}$$
What is the logic and thought process from using the respective even and odd properties,
$$\begin{align*}
f(-x) &= f(x)\\
f(-x) &= -f(x)
\end{align*}$$
to derive $E(x)$ and $O(x)$?
The best I get to is:
For even: $f(x)-f(-x)=0$ and for odd: $f(x)+f(-x)=0$
Given the definition of $E(x)$ and $O(x)$, it makes a lot of sense (hindsight usually is) but starting from just the properties. Wow, I feel I'm missing something crucial.
| This is more intuitive if one views it in the special case of polynomials or power series expansions, where the even and odd parts correspond to the terms with even and odd exponents, e.g. bisecting into even and odd parts the power series for $\:\rm e^{{\it i}\:x} \:,\;$
$$\begin{align}
\rm f(x) \ &= \ \rm\frac{f(x)+f(-x)}{2} \;+\; \frac{f(x)-f(-x)}{2} \\[.4em]
\Rightarrow\quad \rm e^{\,{\large \it i}\,x} \ &= \ \rm\cos(x) \ +\ {\it i} \ \sin(x) \end{align}\qquad$$
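The bisection is easy to verify numerically; `even_part` and `odd_part` below are illustrative helper names for $E$ and $O$:

```python
import cmath

def even_part(f, x):
    return (f(x) + f(-x)) / 2

def odd_part(f, x):
    return (f(x) - f(-x)) / 2

f = lambda x: cmath.exp(1j * x)
x = 0.7
assert abs(even_part(f, x) - cmath.cos(x)) < 1e-12       # cos(x)
assert abs(odd_part(f, x) - 1j * cmath.sin(x)) < 1e-12   # i sin(x)
```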
Similarly one can perform multisections into $\rm\:n\:$ parts using $\rm\:n\:$'th roots of unity - see my post here for some examples and see Riordan's classic
textbook Combinatorial Identities for many applications. Briefly,
with $\rm\:\zeta\ $ a primitive $\rm\:n$'th root of unity, the $\rm\:m$'th $\rm\:n$-section selects the linear progression of $\rm\: m+k\:n\:$ indexed terms from a series $\rm\ f(x)\ =\ a_0 + a_1\ x + a_2\ x^2 +\:\cdots\ $ as follows
$\rm\quad\quad\quad\quad a_m\ x^m\ +\ a_{m+n}\ x^{m+n} +\ a_{m+2\:n}\ x^{m+2n}\ +\:\cdots $
$\rm\quad\quad\, =\,\ \frac{1}{n} \big(f(x)\ +\ f(x\zeta)\ \zeta^{-m}\ +\ f(x\zeta^{\:2})\ \zeta^{-2m}\ +\:\cdots\: +\ f(x\zeta^{\ n-1})\ \zeta^{\ (1-n)m}\big)$
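A numerical check of this multisection formula (the helper name `msection` is ours, and the test polynomial is an arbitrary example):

```python
import cmath

def msection(f, x, n, m):
    # (1/n) * sum_j f(x * zeta^j) * zeta^(-j*m), zeta = e^{2 pi i / n}
    zeta = cmath.exp(2j * cmath.pi / n)
    return sum(f(x * zeta**j) * zeta**(-j * m) for j in range(n)) / n

f = lambda x: sum(x**k for k in range(8))    # 1 + x + ... + x^7
x = 0.9
expected = x + x**4 + x**7                   # exponents == 1 (mod 3)
assert abs(msection(f, x, 3, 1) - expected) < 1e-12
```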
Exercise $\;$ Use multisections to give elegant proofs of the following
$\quad\quad\rm\displaystyle sin(x)/e^{x} \ \ $ has every $\rm 4k\,$'th term zero in its power series
$\quad\quad\rm\displaystyle cos(x)/e^{x} \ \, $ has every $\rm 4k\!+\!2\,$'th term zero in its power series
See the posts in this thread for various solutions and more on multisections. When you later study representation theory of groups you will learn that this is a special case of much more general results, with relations to Fourier and other transforms. It's also closely related to various Galois-theoretic results on modules, e.g. see my remark about Hilbert's Theorem 90 in the linked thread.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Find the angle in a triangle if the distance between one vertex and orthocenter equals the length of the opposite side Let $O$ be the orthocenter (intersection of heights) of the triangle $ABC$. If $\overline{OC}$ equals $\overline{AB}$, find the angle $\angle$ACB.
| Position the circumcenter $P$ of the triangle at the origin, and let the vectors from $P$ to $A$, $B$, and $C$ be $\vec{A}$, $\vec{B}$, and $\vec{C}$. Then the orthocenter is at $\vec{A}+\vec{B}+\vec{C}$. (Proof: the vector from $A$ to this point is $(\vec{A}+\vec{B}+\vec{C})-\vec{A} = \vec{B}+\vec{C}$. The vector coinciding with the side opposite vertex $A$ is $\vec{B}-\vec{C}$. Now $(\vec{B}+\vec{C})\cdot(\vec{B}-\vec{C}) = |\vec{B}|^2 - |\vec{C}|^2 = R^2-R^2 = 0$, where $R$ is the circumradius. So the line through $A$ and the head of $\vec{A}+\vec{B}+\vec{C}$ is the altitude to $BC$. Similarly for the other two altitudes.)
Now the vector coinciding with $OC$ is $\vec{O}-\vec{C}=\vec{A}+\vec{B}$. Thus $|OC|=|AB|$ if and only if
$$|\vec{A}+\vec{B}|^2 = |\vec{A}-\vec{B}|^2$$
if and only if
$$\vec{A}\cdot\vec{A} + \vec{B}\cdot\vec{B} + 2\vec{A}\cdot\vec{B} = \vec{A}\cdot\vec{A} + \vec{B}\cdot\vec{B} - 2\vec{A}\cdot\vec{B}$$
if and only if
$$4\vec{A}\cdot\vec{B} = 0$$
if and only if
$$\angle APB = \pi /2$$
if and only if
$$\boxed{\angle ACB = \pi/4 = 45^\circ}.$$
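The configuration is easy to confirm numerically: with the circumcenter at the origin and $R=1$, choosing $A$, $B$ so that $\angle APB = \pi/2$ (e.g. $A=(1,0)$, $B=(0,1)$) gives $|OC|=|AB|$ for every third vertex $C$ on the circle (an illustrative sketch):

```python
import math

A = (1.0, 0.0)
B = (0.0, 1.0)                     # PA . PB = 0, i.e. angle APB = 90 degrees
for t in (2.5, 3.5, 5.0):          # arbitrary third vertices on the unit circle
    C = (math.cos(t), math.sin(t))
    O = (A[0] + B[0] + C[0], A[1] + B[1] + C[1])   # orthocenter = A + B + C
    OC = math.hypot(O[0] - C[0], O[1] - C[1])
    AB = math.hypot(A[0] - B[0], A[1] - B[1])
    assert abs(OC - AB) < 1e-12
```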
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Hom of the direct product of $\mathbb{Z}_{n}$ to the rationals is nonzero. Why is $\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right)$ nonzero?
Context: This is problem $2.25 (iii)$ of page $69$ Rotman's Introduction to Homological Algebra:
Prove that
$$\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right) \ncong \prod_{n \geq 2}\mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z}_{n},\mathbb{Q}).$$
The right hand side is $0$ because $\mathbb{Z}_{n}$ is torsion and $\mathbb{Q}$ is not.
| Let $G=\prod_{n\geq2}\mathbb Z_n$ and let $t(G)$ be the torsion subgroup, which is properly contained in $G$ (the element $(1,1,1,\dots)$ is not in $t(G)$, for example). Then $G/t(G)$ is a torsion-free abelian group, which therefore embeds into its localization $(G/t(G))\otimes_{\mathbb Z}\mathbb Q$, which is a non-zero rational vector space, and in fact generates it as a vector space. There is a non-zero $\mathbb Q$-linear map $(G/t(G))\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q$ (here the Choice Police will observe that we are using the axiom of choice, of course...). Composing, we get a non-zero morphism $$G\to G/t(G)\to (G/t(G))\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q.$$
Remark. If $H$ is a torsion-free abelian group, its finitely generated subgroups are free, so flat. Since $H$ is the colimit of its finitely generated subgroups, it is itself flat, and tensoring the exact sequence $0\to\mathbb Z\to\mathbb Q$ with $H$ gives an exact sequence $0\to H\otimes_{\mathbb Z}\mathbb Z=H\to H\otimes_{\mathbb Z}\mathbb Q$. Doing this for $H=G/t(G)$ shows $G/t(G)$ embeds in $(G/t(G))\otimes_{\mathbb Z}\mathbb Q$, as claimed above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Residue of a pole of order 6 I am in the process of computing an integral using the Cauchy residue theorem, and I am having a hard time computing the residue of a pole of high order.
Concretely, how would one compute the residue of the function $$f(z)=\frac{(z^6+1)^2}{az^6(z-a)(z-\frac{1}{a})}$$ at $z=0$?
Although it is not needed here, $a$ is a complex number with $|a|<1$.
Thanks in advance for any insight.
| $$g(z)=\frac{1}{(z-a)(z-\frac{1}{a})}=\frac{\frac{1}{a-\frac{1}{a}}}{z-a}+\frac{\frac{-1}{a-\frac{1}{a}}}{z-\frac{1}{a}}$$
we know:
$$(a+b)^n =a^n+\frac{n}{1!}a^{n-1}b+\frac{n(n-1)}{2!}a^{n-2}b^2+...+b^n$$
$$ \text{Since } |a|<1: $$
The Taylor series of $g(z)$ is:
$$g(z)=\frac{\frac{1}{a-\frac{1}{a}}}{z-a}+\frac{\frac{1}{a-\frac{1}{a}}}{\frac{1}{a}-z}=(\frac{1}{a-\frac{1}{a}}) \left[ \frac{-\frac{1}{a}}{1-\frac{z}{a}}+\frac{a}{1-az} \right]$$
$$g(z)=(\frac{1}{a-\frac{1}{a}}) \left[ \frac{-1}{a} \sum_{n=0}^{\infty}(\frac{z}{a})^n+a \sum_{n=0}^{\infty} (az)^n \right]$$
$$f(z)=\frac{(z^6+1)^2}{az^6}g(z)=\frac{z^{12}+2z^6+1}{az^6}g(z)=\left(
\frac{z^6}{a} + \frac{2}{a} + \frac{1}{az^6} \right)g(z)$$
$$ f(z)= \left(
\frac{z^6}{a} + \frac{2}{a} + \frac{1}{az^6} \right) \left(\frac{1}{a-\frac{1}{a}}\right) \left[ \frac{-1}{a} \sum_{n=0}^{\infty}\left(\frac{z}{a}\right)^n+a \sum_{n=0}^{\infty} (az)^n \right]$$
$$ \text{so the residue is the coefficient of the } z^{-1} \text{ term} $$
$$ f(z)=\frac{1}{a(a-\frac{1}{a})} \left[ \frac{-1}{a}\left( \sum_{n=0}^{\infty}\frac{z^{n+6}}{a^n} +2\sum_{n=0}^{\infty}\frac{z^{n}}{a^n} +\sum_{n=0}^{\infty} \frac{z^{n-6}}{a^n}\right)
+a \left( \sum_{n=0}^{\infty} a^nz^{n+6}+2\sum_{n=0}^{\infty} a^nz^{n} +\sum_{n=0}^{\infty} a^nz^{n-6} \right) \right]$$
$$ \text{Only the } z^{n-6} \text{ sums produce a } z^{-1} \text{ term (at } n=5\text{), so the residue at } z=0 \text{ is:} $$
$$ \frac{1}{a\left(a-\frac{1}{a}\right)} \left[ \frac{-1}{a}\cdot\frac{1}{a^5}
+a \cdot a^5 \right] = \frac{a^{12}-1}{a^6\,(a^2-1)} $$
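As an independent sanity check: the Taylor coefficients $c_n$ of $g(z)=1/\big((z-a)(z-\tfrac1a)\big)=1/\big(z^2-(a+\tfrac1a)z+1\big)$ satisfy $c_0=1$, $c_1=a+\tfrac1a$, $c_n=(a+\tfrac1a)c_{n-1}-c_{n-2}$, and the residue is $c_5/a$. A sketch with the illustrative value $a=\tfrac12$, using exact rational arithmetic:

```python
from fractions import Fraction

a = Fraction(1, 2)                 # illustrative value with |a| < 1
s = a + 1 / a                      # (z - a)(z - 1/a) = z^2 - s z + 1

# Taylor coefficients of 1/((z-a)(z-1/a)) at z = 0:
# c_0 = 1, c_1 = s, c_n = s*c_{n-1} - c_{n-2}
c = [Fraction(1), s]
for n in range(2, 6):
    c.append(s * c[-1] - c[-2])

# only the 1/(a z^6) part of (z^6+1)^2/(a z^6) reaches z^{-1},
# via the z^5 coefficient of the series above
residue = c[5] / a
assert residue == (a**12 - 1) / (a**6 * (a**2 - 1))   # 1365/16 here
```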
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/121977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Dimension of subspace of all upper triangular matrices If $S$ is the subspace of $M_7(R)$ consisting of all upper triangular matrices, then $dim(S)$ = ?
So if I have an upper triangular matrix
$$
\begin{bmatrix}
a_{11} & a_{12} & . & . & a_{17}\\
. & a_{22} & . & . & a_{27}\\
. & . & . & . & .\\
0 & . & . & . & a_{77}\\
\end{bmatrix}
$$
It looks to me that this matrix can potentially have 7 pivots, therefore it is linearly independent and so it will take all 7 column vectors to span it. But that answer is marked as incorrect when I enter it so what am I missing here?
| I guess the answer is $1+2+3+\cdots+7=28$, because each entry on or above the diagonal can be chosen independently, and the matrices with a single $1$ in one of those positions form a basis of $S$.
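Counting the free positions directly (a one-liner sketch):

```python
n = 7
# entries on or above the diagonal: positions (i, j) with j >= i
free = sum(1 for i in range(n) for j in range(i, n))
assert free == n * (n + 1) // 2 == 28
```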
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Can One use mathematica or maple online? Is it possible to use some of these algebra packages online?
I have some matrices that I would like to know the characteristic polynomial of.
Where could I send them to get a nicely factorised answer?
My PC is very slow and it would be nice to use someone else's super-powerful computer!
Any suggestions?
| For Mathematica, you can try Wolfram Alpha:
factor the characteristic polynomial of [[0, 3], [1, 4]]
For Sage, you can try Sagenb.org. There, you can do
import numpy
n=numpy.array([[0, 3],[1, 4]],'complex64')
m = matrix(n)
m.characteristic_polynomial().factor()
I'm not an expert on this in Sage, but the numpy appears to be necessary to get the polynomial to factor over the complex numbers.
Wolfram Alpha is going to be a lot more user friendly. Notice I just typed in the description of what I wanted and I got it. But, it's not as powerful because it's not a full fledged computer algebra system. If you want to do several steps of calculations where you use a previous step in the next one, it's going to be very difficult or impossible. But, with Sage, which can be used online for free, or you can download it for free, it will be more complicated to use but more powerful as well, overall. So, it depends on what your needs are.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 0
} |