Solve for $x$; $\tan x+\sec x=2\cos x;-\infty\lt x\lt\infty$
$$\tan x+\sec x=2\cos x$$
$$\left(\dfrac{\sin x}{\cos x}\right)+\left(\dfrac{1}{\cos x}\right)=2\cos x$$
$$\left(\dfrac{\sin x+1}{\cos x}\right)=2\cos x$$
$$\sin x+1=2\cos^2x$$
$$2\cos^2x-\sin x-1=0$$
Edit:
$$2\cos^2x=\sin x+1$$
$$2(1-\sin^2x)=\sin x+1$$
$$2\sin^2x+\sin x-1=0$$
$\sin x=a$
$$2a^2+a-1=0$$
$$(a+1)(2a-1)=0$$
$$a=-1,\dfrac{1}{2}$$
$$\arcsin(-1)=-90^\circ=-\dfrac{\pi}{2}$$
$$\arcsin\left(\dfrac{1}{2}\right)=30^\circ=\dfrac{\pi}{6}$$
$$180^\circ-30^\circ=150^\circ=\dfrac{5 \pi}{6}$$
$$x=\dfrac{\pi}{6},-\dfrac{\pi}{2},\dfrac{5 \pi}{6}$$
I actually do not know if those are the only answers, considering the given interval is infinite: $-\infty\lt x\lt\infty$
| \begin{eqnarray}
\left(\dfrac{\sin x}{\cos x}\right)+\left(\dfrac{1}{\cos x}\right)&=&2\cos x\\
\sin x + 1 &=& 2 \cos^2 x \\
\sin x + 1 &=& 2(1-\sin^2 x)\\
\end{eqnarray}
Then $\sin x = -1$ or $\sin x = 1/2$. The case $\sin x = -1$ gives $x = -\pi/2 + 2k\pi$, where $\cos x = 0$ and the original equation is undefined, so the solutions are $\pi/6 + 2 k\pi$ and $5\pi/6 + 2k \pi$.
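The solution families can be sanity-checked numerically. A short Python sketch (the sample values of $k$ and the tolerance are arbitrary choices):

```python
import math

def residual(x):
    # tan x + sec x - 2 cos x; undefined where cos x = 0,
    # which is why the sin x = -1 family does not solve the original equation
    return math.tan(x) + 1 / math.cos(x) - 2 * math.cos(x)

for k in range(-3, 4):
    for x in (math.pi / 6 + 2 * k * math.pi, 5 * math.pi / 6 + 2 * k * math.pi):
        assert abs(residual(x)) < 1e-9
```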
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Is there a proof of Gödel's Incompleteness Theorem without self-referential statements? For the proof of Gödel's Incompleteness Theorem, most versions of proof use basically self-referential statements.
My question is, what if one argues that Gödel's Incompleteness Theorem only matters when a formula makes self-reference possible?
Is there any proof of Incompleteness Theorem that does not rely on self-referential statements?
| Roughly speaking, the real theorem is that the ability to express the theory of integer arithmetic implies the ability to express formal logic.
Gödel's incompleteness theorem is really just a corollary of this: once you've proven the technical result, it's a simple matter to use it to construct variations of the Liar's paradox and see what they imply.
Strictly speaking, you still cannot create self-referential statements: the (internal) self-referential statement can only be interpreted as such by invoking the correspondence between external logic (the language in which you are expressing your theory) and internal logic (the language which your theory expresses).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 6,
"answer_id": 1
} |
limit of an integral with a Lorentzian function We want to calculate $\lim_{\epsilon \to 0} \int_{-\infty}^{\infty} \frac{f(x)}{x^2 + \epsilon^2} dx $ for a function $f(x)$ such that $f(0)=0$. We are physicists, so the function $f(x)$ is smooth enough!
After several attempts, we have not been able to calculate it except numerically.
It looks like the usual Lorentzian, which tends to the Dirac delta function, but a factor of $\epsilon$ is missing.
We wonder if this integral can be written in a simple form as a function of $f(0)$ or its derivatives $f^{(n)}(0)$ at $0$.
Thank you very much.
| I'll assume that $f$ has compact support (though it's enough to suppose that $f$ decreases very fast). As $f(0)=0$ we have $f(x)=xg(x)$ for some smooth $g$. Let $g=h+k$, where $h$ is even and $k$ is odd. As $k(0)=0$, again $k(x)=xm(x)$ for some smooth $m$.
We have $$\int_{-\infty}^{\infty} \frac{f(x)}{x^2 + \epsilon^2} dx =\int_{-\infty}^{\infty} \frac{xg(x)}{x^2 + \epsilon^2} dx =\int_{-\infty}^{\infty} \frac{x(h(x)+xm(x))}{x^2 + \epsilon^2} dx = \int_{-\infty}^{\infty} \frac{x^2m(x)}{x^2 + \epsilon^2} dx $$
(the integral involving $h$ is $0$ for parity reasons)
and
$$\int_{-\infty}^{\infty} \frac{x^2m(x)}{x^2 + \epsilon^2} dx=\int_{-\infty}^{\infty} m(x)dx-\int_{-\infty}^{\infty} \frac{m(x)}{(x/\epsilon)^2 + 1} dx. $$
The last integral converges to $0$, so the limit is
$\int_{-\infty}^{\infty} m(x)dx$
where (I recall)
$$m(x)=\frac{f(x)+f(-x)}{2x^2}.$$
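This can be checked numerically for a concrete choice. Taking $f(x)=x^2e^{-x^2}$ (so $f(0)=0$ and $m(x)=e^{-x^2}$), the limit should be $\int e^{-x^2}\,dx=\sqrt{\pi}$. The Python sketch below uses a crude midpoint rule; the cutoff and grid size are ad hoc choices:

```python
import math

def lorentzian_integral(f, eps, L=10.0, n=200001):
    # crude midpoint rule for \int_{-L}^{L} f(x) / (x^2 + eps^2) dx
    h = 2.0 * L / n
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        total += f(x) / (x * x + eps * eps)
    return total * h

f = lambda x: x * x * math.exp(-x * x)      # f(0) = 0, so m(x) = exp(-x^2)
approx = lorentzian_integral(f, eps=1e-3)
# predicted limit: \int m(x) dx = \int exp(-x^2) dx = sqrt(pi)
assert abs(approx - math.sqrt(math.pi)) < 1e-2
```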
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Evaluating $\int(2x^2+1)e^{x^2}dx$ $$\int(2x^2+1)e^{x^2}dx$$
The answer of course:
$$\int(2x^2+1)e^{x^2}\,dx=xe^{x^2}+C$$
But what kind of techniques should we use with a problem like this?
| You can expand the integrand, and get
$$2x^2e^{x^2}+e^{x^2}=x\cdot 2x e^{x^2}+1\cdot e^{x^2}$$
Note that $x'=1$ and that $(e^{x^2})'=2xe^{x^2}$ so you get
$$=x\cdot (e^{x^2})'+(x)'\cdot e^{x^2}=(xe^{x^2})'$$
Thus your integral is $xe^{x^2}+C$. Of course, the above is integration by parts in disguise, but it is good to develop some observational skills with problems of this kind.
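One can confirm $\frac{d}{dx}\,xe^{x^2}=(2x^2+1)e^{x^2}$ numerically, e.g. with central differences (the step size and the sample points below are arbitrary choices):

```python
import math

F = lambda x: x * math.exp(x * x)                   # candidate antiderivative
integrand = lambda x: (2 * x * x + 1) * math.exp(x * x)

h = 1e-6
for x in (-1.5, -0.3, 0.0, 0.7, 2.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)         # central difference ~ F'(x)
    assert abs(deriv - integrand(x)) < 1e-4 * (1 + abs(integrand(x)))
```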
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
An example of bounded linear operator Define $\ell^p = \{ x (= \{ x_n \}_{-\infty}^\infty) \;\; | \;\; \| x \|_{\ell^p} < \infty \} $ with $\| x \|_{\ell^p} = ( \sum_{n=-\infty}^\infty |x_n |^p )^{1/p} $ if $ 1 \leqslant p <\infty $, and $ \| x \|_{\ell^p} = \sup _{n} | x_n | $ if $ p = \infty $. Let $k = \{ k_n \}_{-\infty}^\infty \in \ell^1 $.
Now define the operator $T$ , for $x \in \ell^p$ , $$ (Tx)_n = \sum_{j=-\infty}^\infty k_{n-j} x_j \;\;(n \in \mathbb Z).$$ Then prove that $T\colon\ell^p \to\ell^p$ is a bounded, linear operator with $$ \| Tx \|_{\ell^p} \leqslant \| k \|_{\ell^1} \| x \|_{\ell^p}. $$
Would you give me a proof for this problem?
| In the first comment I suggested the following strategy: write $T=\sum_j T_j$, where $T_j$ is a linear operator defined by $T_jx=\{k_jx_{n-j}\}$. You should check that this is indeed correct, i.e., summing $T_j$ over $j$ indeed gives $T$. Next, show that $\|T_j\|=|k_j|$ using the definition of the operator norm. Finally, use the triangle inequality $\|Tx\|_{\ell^p}\le \sum_j \|T_jx\|_{\ell^p}$.
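The inequality itself (a discrete Young inequality) is easy to test on finitely supported sequences; in the Python sketch below the sequence lengths and the random seed are arbitrary:

```python
import random

def norm_p(seq, p):
    return sum(abs(v) ** p for v in seq) ** (1.0 / p)

def convolve(k, x):
    # (Tx)_n = sum_j k_{n-j} x_j for finitely supported k and x
    return [sum(k[n - j] * x[j] for j in range(len(x)) if 0 <= n - j < len(k))
            for n in range(len(k) + len(x) - 1)]

random.seed(0)
k = [random.uniform(-1, 1) for _ in range(7)]
x = [random.uniform(-1, 1) for _ in range(11)]
Tx = convolve(k, x)
for p in (1, 1.5, 2, 3):
    assert norm_p(Tx, p) <= norm_p(k, 1) * norm_p(x, p) + 1e-12
```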
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Exam question: Normals While solving exam questions, I came across this problem:
Let $M = \{(x, y, z) \in \mathbb{R}^3 : (x - 5)^5 + y^6 + z^{2010} = 1\}$. Show that for every unit vector $v \in \mathbb{R}^3$ there exists a unique point $p \in M$ such that $N(p) = v$, where $N(p)$ is the outer surface normal to $M$ at $p$.
I don't know where to start. Ideas?
| It is straightforward, but tedious, to analytically show that the assertion is false. A picture illustrates the problem more clearly.
First, note that if $z=0$, then the normal will also have a zero $z$ component, so we can take $z=0$ and look for issues there. Plotting the contour of $(x-5)^5+y^6 = 1$ (figure omitted) makes the problem visible.
Pick a normal $v$ to the surface around $(2,2.5)$. Then note that in the vicinity of $(6,1)$, it is possible to find a normal that matches $v$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving:$\tan(20^{\circ})\cdot \tan(30^{\circ}) \cdot \tan(40^{\circ})=\tan(10^{\circ})$ How to prove that $\tan20^{\circ}\cdot\tan30^{\circ}\cdot\tan40^{\circ}=\tan10^{\circ}$?
I know how to prove
$ \frac{\tan 20^{\circ}\cdot\tan 30^{\circ}}{\tan 10^{\circ}}=\tan 50^{\circ}, $
in this way:
$ \tan{20^\circ} = \sqrt{3}\cdot\tan{50^\circ}\cdot\tan{10^\circ}$
$\Longleftrightarrow \sin{20^\circ}\cdot\cos{50^\circ}\cdot\cos{10^\circ} = \sqrt{3}\cdot\sin{50^\circ}\cdot\sin{10^\circ}\cdot\cos{20^\circ}$
$\Longleftrightarrow \frac{1}{2}\sin{20^\circ}(\cos{60^\circ}+\cos{40^\circ}) = \frac{\sqrt{3}}{2}(\cos{40^\circ}-\cos{60^\circ})\cdot\cos{20^\circ}$
$\Longleftrightarrow \frac{1}{4}\sin{20^\circ}+\frac{1}{2}\sin{20^\circ}\cdot\cos{40^\circ} = \frac{\sqrt{3}}{2}\cos{40^\circ}\cdot\cos{20^\circ}-\frac{\sqrt{3}}{4}\cos{20^\circ}$
$\Longleftrightarrow \frac{1}{4}\sin{20^\circ}-\frac{1}{4}\sin{20^\circ}+\frac{1}{4}\sin{60^\circ} = \frac{\sqrt{3}}{4}\cos{60^\circ}+\frac{\sqrt{3}}{4}\cos{20^\circ}-\frac{\sqrt{3}}{4}\cos{20^\circ}$
$\Longleftrightarrow \frac{\sqrt{3}}{8} = \frac{\sqrt{3}}{8}$
Could this help to prove the first one, and how? Do I just need to know that $ \frac{1}{\tan\theta}=\tan(90^{\circ}-\theta) $?
| Another approach:
Let's start by rearranging the expression:
$$\tan(20°) \tan(30°) \tan(40°) = \tan(30°) \tan(40°) \tan(20°)$$$$=\tan(30°) \tan(30°+10°) \tan(30° - 10°)$$
Now, we will express $\tan(30° + 10°) $ and $\tan(30° - 10°)$ using the tangent addition and subtraction formulas, giving us:
$$\tan(30°) \left( \frac{\tan(30°) + \tan(10°)}{1 - \tan(30°) \tan(10°)}\right) \left( \frac{\tan(30°) - \tan(10°)}{1 + \tan(30°) \tan(10°)}\right) $$
$$= \tan(30°) \left( \frac{\tan^2(30°) - \tan^2(10°)}{1 - \tan^2(30°) \tan^2(10°)}\right) $$
Substituting the value of $\color{blue}{\tan(30°)}$,
$$ = \tan(30°) \left(\frac{1 - 3\tan^2(10°)}{3 - \tan^2(10°)}\right) $$
Multiplying and dividing by $\color{blue}{\tan(10°)}$,
$$=\tan(30°) \tan(10°) \left(\frac{1 - 3\tan^2(10°)}{3 \tan(10°) - \tan^3(10°)}\right) $$
It can be easily shown that: $\color{blue}{\tan 3A =\large \frac{3 \tan A - \tan^3A}{1-3\tan^2A}}$,
Thus, our problem reduces to: $$=\tan(30°) \tan(10°) \frac{1}{\tan(3\times 10°)}= \tan(10°)$$
QED!
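A quick floating-point check of the identity, together with the triple-angle reduction used in the last step:

```python
import math

deg = math.pi / 180
lhs = math.tan(20 * deg) * math.tan(30 * deg) * math.tan(40 * deg)
assert abs(lhs - math.tan(10 * deg)) < 1e-12

# the triple-angle identity tan 3A = (3 tan A - tan^3 A) / (1 - 3 tan^2 A)
t = math.tan(10 * deg)
assert abs(math.tan(30 * deg) - (3 * t - t ** 3) / (1 - 3 * t * t)) < 1e-12
```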
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
} |
Is there a "natural" topology on powersets? Let's say a topology T on a set X is natural if the definition of T refers to properties of (or relationships on) the members of X, and artificial otherwise. For example, the order topology on the set of real numbers is natural, while the discrete topology is artificial.
Suppose X is the powerset of some set Y. We know some things about X, such as that some members of X are subsets of other members of X. This defines an order on the members of X in terms of the subset relationship. But the order is not linear so it does not define an order topology on X.
I haven't found a topology on powersets that I would call natural. Is there one?
| Natural is far from well-defined. For example, I don’t see why the discrete topology on $\Bbb R$ is any less natural than the order topology; one just happens to make use of less of the structure of $\Bbb R$ than the other.
That said, the Alexandrov topology on $\wp(X)$ seems pretty natural:
$$\tau=\left\{\mathscr{U}\subseteq\wp(X):\forall A,B\subseteq X\big(B\supseteq A\in\mathscr{U}\to B\in\mathscr{U}\big)\right\}\;.$$
In other words, the open sets are the upper sets in the partial order $\langle\wp(X),\subseteq\rangle$.
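For a small $X$ one can enumerate the upper sets and confirm that they really do form a topology (closed under unions and intersections). A brute-force Python sketch, with $X=\{0,1,2\}$ as an arbitrary example:

```python
from itertools import combinations

X = (0, 1, 2)
points = [frozenset(c) for r in range(len(X) + 1)
          for c in combinations(X, r)]          # the 8 elements of P(X)

def is_upper(U):
    # U is open iff A in U and A ⊆ B imply B in U
    return all(B in U for A in U for B in points if A <= B)

opens = [frozenset(u) for r in range(len(points) + 1)
         for u in combinations(points, r) if is_upper(frozenset(u))]
opens_set = set(opens)

assert frozenset() in opens_set and frozenset(points) in opens_set
for U in opens_set:
    for V in opens_set:
        assert U | V in opens_set and U & V in opens_set
```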
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
Proof of $(A - B) - C = A - (B \cup C)$ I have several of these types of problems, and it would be great if I can get some help on one so I have a guide on how I can solve these. I tried asking another problem, but it turned out to be a special case problem, so hopefully this one works out normally.
The question is:
Prove $(A - B) - C = A - (B \cup C)$
I know I must prove each side is a subset of the other to complete this proof. Here's my shot: we start with the left side.
*
*if $x \in C$, then $x \notin A$ and $x \in B$.
*So $x \in (B \cup C)$
*So $A - (B \cup C)$
Is this the right idea? Should I then reverse the proof to prove it the other way around, or is that unnecessary? Should it be more formal?
Thanks!
| Working with the Characteristic function of a set makes these problems easy:
$$1_{(A - B) - C}= 1_{A-B} - 1_{A-B}1_C=(1_A- 1_A1_B)-1_A1_C+ 1_A1_B1_C \,$$
$$1_{A-(B \cup C)}= 1_{A}- 1_{A}1_{B \cup C}=1_A- 1_A ( 1_B+ 1_C -1_B1_C)\,$$
It is easy now to see that the RHS are equal.
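The characteristic-function identity can also be verified exhaustively: a point's memberships in $A,B,C$ take only $8$ patterns, so checking all of them proves the set equality. In Python:

```python
from itertools import product

# 1_S(x) is 0 or 1; enumerate all membership patterns (a, b, c) of a point
for a, b, c in product((0, 1), repeat=3):
    lhs = (a - a * b) - (a - a * b) * c      # 1_{(A-B)-C}
    rhs = a - a * (b + c - b * c)            # 1_{A-(B ∪ C)}
    assert lhs == rhs
```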
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172292",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 3
} |
Trigonometric equation inversion I am trying to invert the following equation to have it with $\theta$ as the subject:
$y = \cos \theta \sin \theta - \cos \theta -\sin \theta$
I tried both standard trig as well as trying to reformulate it as a differential equation (albeit I might have chosen an awkward substitution). Nothing seems to stick and I keep on ending up with pretty nasty expressions. Any ideas?
| Rewrite the equation as $(1-\cos\theta)(1-\sin\theta)=1+y.$
Now make the Weierstrass substitution $t=\tan(\theta/2)$. It is standard that
$\cos\theta=\frac{1-t^2}{1+t^2}$ and $\sin\theta=\frac{2t}{1+t^2}$. So our equation becomes
$$\frac{2t^2}{1+t^2}\cdot \frac{(1-t)^2}{1+t^2}=1+y.$$
Take the square root, and clear denominators. We get
$$\sqrt{2}t(1-t)=\sqrt{1+y}(1+t^2).$$
This is a quadratic in $t$.
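A numeric spot check of the two key steps, at an arbitrarily chosen angle $\theta=1.1$ (note that taking the square root only determines the equation up to sign):

```python
import math

theta = 1.1                            # arbitrary test angle
c, s = math.cos(theta), math.sin(theta)
y = c * s - c - s

# the initial rewrite (1 - cos θ)(1 - sin θ) = 1 + y
assert abs((1 - c) * (1 - s) - (1 + y)) < 1e-12

# after t = tan(θ/2), the equation holds up to the sign lost in the square root
t = math.tan(theta / 2)
assert abs(abs(math.sqrt(2) * t * (1 - t)) - math.sqrt(1 + y) * (1 + t * t)) < 1e-9
```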
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Coordinate-free method to determine local maxima/minima? If there is a function $f : M \to \mathbb R$ then the critical point is given as a point where
$$d f = 0$$
$df$ being a $1$-form (am I right here?). Is there a coordinate-independent formulation of a criterion to determine whether this point is a local maximum or minimum (or a saddle point)?
| Let $p$ be a critical point for a smooth function $f:M\to \mathbb{R}.$
Let $(x_1,\ldots,x_n)$ be an arbitrary smooth coordinate chart around $p$ on $M.$
From multivariate calculus we know that a sufficient condition for $p$ to be a local maximum (resp. minimum) of $f$ is the negative (resp. positive) definiteness of the Hessian $H(f,p)$ of $f$ at $p$, which is the bilinear map on $T_pM$ defined locally by $$H(f,p)=\left.\frac{\partial^2f}{\partial x_i\partial x_j}dx^i\otimes dx^j\right|_p,$$ where the Einstein summation convention is in force.
However, as Thomas commented, the Hessian of a function at a critical point has a coordinate-free expression.
In fact, $H(f,p): T_pM\times T_pM\to\mathbb{R}$ is characterized by
$$H(f,p)(X(p),Y(p))=\left.(\mathcal{L}_X(\mathcal{L}_Y f))\right|_p$$
for any smooth vector fields $X$ and $Y$ on $M$ around $p.$
Note that without a Riemannian metric on $M$ you cannot invariantly define the Hessian of a function at a non-critical point.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Use the Division Algorithm to show the square of any integer is in the form $3k$ or $3k+1$
Use Division Algorithm to show the square of any int is in the form 3k or 3k+1
What confuses me about this is that I think I am able to show that the square of any integer is in the form $xk$ where $x$ is any integer. For example:
$$x = 3q + 0 \\
x = 3q + 1 \\
x = 3q + 2$$
I show $3k$ first
$$(3q)^2 = 3(3q^2)$$
where $k=3q^2$. Is this a valid use of the division algorithm?
If it is, then can I also say that the square of any integer is of the form, for example, $10k$?
For example,
$(3q)^2 = 10\cdot\left(\frac{9}{10}q^2\right)$
where $k=\frac{9}{10}q^2$.
Why isn't this valid? Am I using the division algorithm incorrectly to show that the square of any integer is of the form $3k$ or $3k+1$, and if so, how do I use it? Keep in mind I am teaching myself Number Theory and the only help I can get is from you guys on Stack Exchange.
| Hint $\ $ Below I give an analogous proof for divisor $5$ (vs. $3),$ exploiting reflection symmetry.
Lemma $\ $ Every integer $\rm\:n\:$ has form $\rm\: n = 5\,k \pm r,\:$ for $\rm\:r\in\{0,1,2\},\ k\in\Bbb Z.$
Proof $\ $ By the Division algorithm
$$\rm\begin{eqnarray} n &=&\:\rm 5\,q + \color{#c00}r\ \ \ for\ \ some\ \ q,r\in\Bbb Z,\:\ r\in [0,4] \\
&=&\:\rm 5\,(q\!+\!1)-(\color{#0a0}{5\!-\!r})
\end{eqnarray}$$
Since $\rm\:\color{#0a0}{5\!-\!r}+\color{#c00}r = 5,\,$ one summand is $\le 2,\,$ so lies in $\{0,1,2\},\,$ yielding the result.
Theorem $\ $ The square of an integer $\rm\,n\,$ has form $\rm\, n^2 = \,5\,k + r\,$ for $\rm\:r\in \{0,1,4\}.$
Proof $\ $ By Lemma $\rm\ n^2 = (5k\pm r)^2 = 5\,(5k^2\!\pm 2kr)+r^2\,$ for $\rm\:r\in \{0,1,2\},\,$ so $\rm\: r^2\in\{0,1,4\}.$
Remark $\ $ Your divisor of $\,3\,$ is analogous, with $\rm\:r\in \{0,1\}\,$ so $\rm\:r^2\in \{0,1\}.\,$ The same method generalizes for any divisor $\rm\:m,\,$ yielding that $\rm\:n^2 = m\,k + r^2,\,$ for $\rm\:r\in\{0,1,\ldots,\lfloor m/2\rfloor\}.$
The reason we need only square half the remainders is because we have exploited reflection symmetry (negation) to note that remainders $\rm > n$ can be transformed to negatives of remainders $\rm < n,\,$ e.g. $\rm\: 13 = 5\cdot 2 +\color{#0A0} 3 = 5\cdot 3 \color{#C00}{- 2},\,$ i.e. remainder $\rm\:\color{#0A0}3\leadsto\,\color{#C00}{-2},\,$ i.e. $\rm\:3 \equiv -2\pmod 5.\:$ This amounts to using a system of balanced (or signed) remainders $\rm\, 0,\pm1,\pm2,\ldots,\pm n\ $ vs. $\rm\ 0,1,2,\ldots,2n[-1].\:$ Often this optimization halves work for problems independent of the sign of the remainder.
All of this is much clearer when expressed in terms of congruences (modular arithmetic), e.g. the key inference above $\rm\:n\equiv r\:\Rightarrow\:n^2\equiv r^2\pmod m\:$ is a special case of the ubiquitous
Congruence Product Rule $\rm\ \ A\equiv a,\ B\equiv b\ \Rightarrow\ AB\equiv ab\ \ (mod\ m)$
Proof $\rm\:\ \ m\: |\: A\!-\!a,\ B\!-\!b\:\ \Rightarrow\:\ m\ |\ (A\!-\!a)\ B + a\ (B\!-\!b)\ =\ AB - ab $
For an introduction to congruences see any decent textbook on elementary number theory.
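The theorem and the generalization in the remark are easy to confirm by brute force; a short Python check (the ranges are arbitrary):

```python
# squares leave only remainders 0 or 1 modulo 3
assert {(n * n) % 3 for n in range(-100, 101)} == {0, 1}

# the pattern from the remark: n^2 mod m ranges exactly over
# {r^2 mod m : 0 <= r <= floor(m/2)}
for m in range(2, 20):
    reachable = {(n * n) % m for n in range(m)}
    assert reachable == {(r * r) % m for r in range(m // 2 + 1)}
```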
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Prove $ (r\sin A \cos A)^2+(r\sin A \sin A)^2+(r\cos A)^2=r^2$ How would I verify the following trig identity?
$$(r\sin A \cos A)^2+(r\sin A \sin A)^2+(r\cos A)^2=r^2$$
My work thus far is
$$(r^2\cos^2A\sin^2A)+(r^2\sin^2A\sin^2A)+(r^2\cos^2A)$$
But how would I continue? My math skills fail me.
| Just use the distributive property and $\sin^2(x)+\cos^2(x)=1$:
$$
\begin{align}
&(r\sin(A)\cos(A))^2+(r\sin(A)\sin(A))^2+(r\cos(A))^2\\
&=r^2\sin^2(A)(\cos^2(A)+\sin^2(A))+r^2\cos^2(A)\\
&=r^2\sin^2(A)+r^2\cos^2(A)\\
&=r^2(\sin^2(A)+\cos^2(A))\\
&=r^2\tag{1}
\end{align}
$$
This can be generalized to
$$
\begin{align}
&(r\sin(A)\cos(B))^2+(r\sin(A)\sin(B))^2+(r\cos(A))^2\\
&=r^2\sin^2(A)(\cos^2(B)+\sin^2(B))+r^2\cos^2(A)\\
&=r^2\sin^2(A)+r^2\cos^2(A)\\
&=r^2(\sin^2(A)+\cos^2(A))\\
&=r^2\tag{2}
\end{align}
$$
$(2)$ verifies that spherical coordinates have the specified distance from the origin.
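The generalized identity $(2)$, and hence the special case $B=A$, survives a numeric check at a few arbitrarily chosen parameter values:

```python
import math

for r in (1.0, 2.5):
    for A in (0.3, 1.2, 2.0):
        for B in (0.0, 0.8, A):          # B = A recovers the original identity
            s = ((r * math.sin(A) * math.cos(B)) ** 2
                 + (r * math.sin(A) * math.sin(B)) ** 2
                 + (r * math.cos(A)) ** 2)
            assert abs(s - r * r) < 1e-9
```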
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How do I know if a simple pole exists (and how do I find it) for $f(z)$ without expanding the Laurent series first? In general, how do I recognize that a simple pole exists and find it, given some $f(z)$? I want to do this without finding the Laurent series first.
And specifically, for the following function:
$f(z) = \dfrac{z^2}{z^4+16}$
| Here is how you find the roots of $z^4+16=0$,
$$ z^4 = -16 \Rightarrow z^4 = 16\, e^{i\pi} \Rightarrow z^4 = 16\, e^{i\pi+i2k\pi} \Rightarrow z = 2\, e^{\frac{i\pi + i2k\pi}{4}}, \quad k = 0, 1, 2, 3. $$
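These four roots can be confirmed, and seen to be simple (so $f$ has a simple pole at each), with a quick complex-arithmetic check:

```python
import cmath

roots = [2 * cmath.exp(1j * (cmath.pi + 2 * k * cmath.pi) / 4) for k in range(4)]
for z in roots:
    assert abs(z ** 4 + 16) < 1e-9     # z is a root of z^4 + 16
    assert abs(4 * z ** 3) > 1         # derivative nonzero => the root is simple
```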
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
what should be the range of $u$ satisfying the following equation Let us consider $u\in C^2(\Omega)\cap C^0(\Omega)$ satisfying the following problem:
$\Delta u=u^3-u, \quad x\in\Omega$, and
$u=0$ on $\partial \Omega$,
where $\Omega \subset \mathbb R^n$ is bounded.
I need hints to find out what possible values $u$ can take.
Thank you for your hints in advance.
| We must of course assume $u$ is continuous on the closure of $\Omega$.
Since $\Omega$ is bounded and $u = 0$ on $\partial \Omega$, if $u > 0$ somewhere it must achieve a maximum in $\Omega$. Now $u$ is subharmonic on any part of the domain where $u > 1$, so the maximum principle says ...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
composition of convex function with harmonic function. Consider $u:\Omega \to \mathbb R$ harmonic and let $f$ be a convex function. How do I prove that $f\circ u$ is subharmonic?
It seems straightforward: $\Delta (f\circ u (x)) \ge f(\Delta u(x))=0 $.
Is this all there is to this problem? Is there a better way?
Also $|u|^p$ is subharmonic for $p\ge1$; this also seems very obvious because the map $t\mapsto |t|^p$ is convex.
Any comments, improvements or answers are welcome.
| Basically you do what Davide Giraudo suggested. We use the definition as given on the EOM
An upper semicontinuous function $v:\Omega \to\mathbb{R}\cup \{-\infty\}$ is called subharmonic if for every $x_0 \in \Omega$ and every $r > 0$ such that $\overline{B_r(x_0)} \subset \Omega$,
$$ v(x_0) \leq \frac{1}{|\partial B_r(x_0)|} \int_{\partial B_r(x_0)} v(y) \mathrm{d}y $$
Now, using that $f$ is convex, Jensen's inequality takes the following form:
$$ f\left( \frac{1}{|\partial B_r(x_0)|} \int_{\partial B_r(x_0)} v(y)\mathrm{d}y \right) \leq \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} f\circ v(y) \mathrm{d}y $$
If $u$ is harmonic, we have that
$$ \frac{1}{|\partial B_r(x_0)|} \int_{\partial B_r(x_0)} u(y)\mathrm{d}y = u(x_0) $$
so combining the two facts you have that
$$ f\circ u(x_0) \leq \frac{1}{|\partial B_r(x_0)|}\int_{\partial B_r(x_0)} f\circ u(y) \mathrm{d}y $$
which is precisely the definition for $f\circ u$ to be subharmonic.
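Both mean-value facts can be probed numerically. Below, $u(x,y)=x^2-y^2$ (harmonic) and $f=|\cdot|$ (convex) are arbitrary choices, and circle averages are approximated by equally spaced samples:

```python
import math

u = lambda x, y: x * x - y * y         # harmonic on R^2 (real part of (x+iy)^2)

def circle_mean(g, x0, y0, r, n=2000):
    # average of g over n equally spaced points of the circle of radius r
    return sum(g(x0 + r * math.cos(2 * math.pi * i / n),
                 y0 + r * math.sin(2 * math.pi * i / n)) for i in range(n)) / n

for (x0, y0, r) in [(0.0, 0.0, 1.0), (0.5, -0.3, 0.7)]:
    # mean value property of the harmonic function u
    assert abs(circle_mean(u, x0, y0, r) - u(x0, y0)) < 1e-9
    # sub-mean value inequality for the subharmonic function |u|
    assert abs(u(x0, y0)) <= circle_mean(lambda a, b: abs(u(a, b)), x0, y0, r) + 1e-9
```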
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Can continuity of inverse be omitted from the definition of topological group? According to Wikipedia, a topological group $G$ is a group and a topological space such that
$$ (x,y) \mapsto xy$$ and
$$ x \mapsto x^{-1}$$
are continuous. The second requirement follows from the first one, no? (by taking $y=0, x=-x$ in the first requirement)
So we can drop it in the definition, right?
| There is even a term for a group endowed with a topology such that multiplication is continuous (but inversion need not be): a paratopological group.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/172945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 0
} |
Universal Cover of projective plane glued to Möbius band This is the second part of exercise 1.3.21 in page 81 of Hatcher's book Algebraic topology, and the first part is answered here.
Consider the usual cell structure on $\mathbb R P^2$, with one 1-cell and one 2-cell attached via a map of degree 2. Consider the space $X$ obtained by gluing a Möbius band along the 1-cell via a homeomorphism with its boundary circle.
Compute $\pi_1(X)$, describe the universal cover of $X$ and the action of $\pi_1(X)$ on the universal cover.
Using van Kampen's theorem, $\pi_1(X)=\langle x,y \mid x^2=1, x=y^2\rangle=\langle y\mid y^4=1\rangle=\mathbb Z_4$.
I have tried gluing spheres and Möbius strips in various configurations, but have so far not been successful. Any suggestions?
| Let $M$ be the Möbius band and let $D$ be the $2$-cell of $RP^2$. Then $X$ is the result of gluing $M$ to $D$ along a map $\partial D\rightarrow \partial M$ of degree $2$. Hence $\pi_1(X)$ has a single generator $\gamma$, which comes from the inclusion $M\rightarrow X$, and the attached disc $D$ kills $\gamma^4$, hence $\pi_1(X)\cong {\mathbb Z}/4$. Alternatively, take the deformation retraction $M\times I\rightarrow M$ that shrinks the Möbius band to its central circle $S\subset M$; this gives a homotopy equivalence from $X = M\cup D$ to the space ${\mathbb S}^1\cup_f D$, where $f\colon \partial D\rightarrow {\mathbb S}^1$ is a map of degree $4$.
The universal cover of the space ${\mathbb S}^1\cup_f D$ is described in Hatcher's Example 1.47, and it is homeomorphic to the quotient of $D\times\{1,2,3,4\}$ by the relation $(x,i)\sim (y,j)$ iff $x=y$ and $x\in \partial D$.
Now let $D$ denote the closed unit disc in ${\mathbb R}^2$. The universal cover of the space $X$ is homeomorphic to the quotient of $D\times \{a,b,c,d\}\cup S^1\times [-1,1]$ by the relations
*
*$(x,a)\sim (x,b)\sim (x,1)$ for all $x\in S^1 = \partial D$
*$(x,c)\sim(x,d)\sim (x,-1)$ for all $x\in S^1=\partial D$
and $\pi_1(X)\cong {\mathbb Z}/4$ acts as follows:
*
*$(re^{2\pi i\theta},a)\to (re^{2\pi i(\theta + 1/4)}, c)\to (re^{2\pi i(\theta+1/2)},b)\to (re^{2\pi i (\theta + 3/4)},d)\rightarrow (re^{2\pi i\theta},a)$ for the points in the discs $D\times \{a,b,c,d\}$
*$(e^{2\pi i \theta},t)\to (e^{2\pi i (\theta+1/4)},-t)\to (e^{2\pi i (\theta + 1/2)},t)\to (e^{2\pi i(\theta + 3/4)},-t)\rightarrow (e^{2\pi i \theta},t)$ for the points in $S^1\times [-1,1]$.
and the map $\tilde{X}\rightarrow X$ sends the four discs to the disc and the cylinder to the Möbius band.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Showing that the closure of the closure of a set is its closure I have the feeling I'm missing something obvious, but here it goes...
I'm trying to prove that for a subset $A$ of a topological space $X$, $\overline{\overline{A}}=\overline{A}$. The inclusion $\overline{\overline{A}} \subseteq \overline{A}$ I can do, but I'm not seeing the other direction.
Say we let $x \in \overline{A}$. Then every open set $O$ containing $x$ contains a point of $A$. Now if $x \in \overline{\overline{A}}$, then every open set containing $x$ contains a point of $\overline{A}$ distinct from $x$. My problem is: couldn't $\{x,a\}$ potentially be an open set containing $x$ and containing a point of $A$, but containing no other point in $\overline{A}$?
(Also, does anyone know a trick to make \bar{\bar{A}} look right? The second bar is skewed to the left and doesn't look very good.)
| The condition you want to check is
\[
x \in \bar A \quad \Leftrightarrow \quad \text{for each open set $U$ containing $x$, $U \cap A \neq \emptyset$}
\]
This definition implies, among other things, that $A \subset \bar A$. Indeed, with the notation above we always have $x \in U \cap A$. Is it clear why this implies the remaining inclusion in your problem?
If you instead require that $U \cap (A - \{x\}) \neq \emptyset$ then you have defined the set $A'$ of limit points of $A$. We have $\bar A = A \cup A'$. Simple examples such as $A = \{0\}$ inside of $\mathbb R$, for which $\bar A = A$ but $A' = \emptyset$, can be helpful in keeping this straight.
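For intuition, the "every open set containing $x$ meets $A$" criterion can be played with on a tiny finite space. A throwaway Python sketch with an arbitrary four-open-set topology on $X=\{0,1,2\}$:

```python
X = {0, 1, 2}
opens = [set(), {0}, {0, 1}, {0, 1, 2}]   # a (non-discrete) topology on X

def closure(A):
    # x is in cl(A) iff every open set containing x meets A
    return {x for x in X if all(x not in U or (U & A) for U in opens)}

A = {1}
assert A <= closure(A)                      # A ⊆ cl(A)
assert closure(closure(A)) == closure(A)    # cl(cl(A)) = cl(A)
assert closure(A) == {1, 2}                 # 0 has the neighbourhood {0} missing A
```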
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 1
} |
No pandiagonal latin squares with order divisible by 3? I would like to prove the claim that pandiagonal latin squares, which are of form
0 a 2a 3a ... (n-1)a
b b+a b+2a b+3a ... b+ (n-1)a
. . .
. . .
. . .
(n-1)b (n-1)b +a (n-1)b +2a (n-1)b +3a ... (n-1)(b+a)
for some $a,b\in \{0,1,\ldots,n-1\}$, cannot exist when the order is divisible by 3.
I think we should be able to show this if we can show that the pandiagonal latin square of order $n$ can only exist iff it is possible to break the cells of a $n \times n$ grid into $n$ disjoint superdiagonals. Then we would show that an $n\times n$ array cannot have a superdiagonal if $n$ is a multiple of $2$ or $3$. I am, however, not able to coherently figure out either part of this proof. Could someone perhaps show me the steps of both parts?
A superdiagonal on an $n \times n$ grid is a collection of $n$ cells within the grid that contains exactly one representative from each row and column as well as exactly one representative from each broken left diagonal and broken right diagonal.
EDIT: Jyrki, could you please explain how the claim follows from #1?
| [Edit: replacing the old set of hints with a detailed answer]
Combining things from the question, its title, and another recent question by the same user tells me that the question is the following. When $n$ is divisible by three, show that no combination of parameters $a,b\in R=\mathbb{Z}/n\mathbb{Z}$ in the general recipe $L(i,j)=ai+bj$, $i,j\in R$, for constructing latin squares gives rise to a latin square with the extra property (= pandiagonality) that the entries on each broken diagonal parallel to either the main or the back diagonal are all distinct.
The first thing we observe is that $a$ cannot be divisible by three or, more precisely, it
cannot belong to the ideal $3R$. For if it were, then all the entries on the first row
would also be in the ideal $3R$. Consequently some numbers, for example $1$, would not appear at all on the first row, and the construction would not yield a latin square at all.
Similarly, the parameter $b$ cannot be divisible by three, as then the first column would only contain numbers divisible by three.
So both $a$ and $b$ are congruent to either $1$ or $2$ modulo $3$. We split this into two
cases. If $a\equiv b \pmod 3$ (i.e. both congruent to one or both congruent to two), then
$a-b$ is divisible by three. When we move one position to the right and one up in the shown diagram, we always add $a-b$ (modulo $n$) to the entry. This means that the entries
along an ascending broken diagonal are all congruent to each other modulo three. Again some
entries will be missing as the entries are limited to a coset of the ideal $3R$.
The remaining case is that one of $a,b$ is congruent to $1$ modulo $3$ and the other is congruent to $2$ modulo $3$. In that case their sum $a+b$ will be divisible by three.
When we move one position to the right and one down on the table, we add $b+a$ (modulo $n$) to the entry. This means that the entries
along a descending broken diagonal are all congruent to each other modulo three, and we
run into a similar contradiction as earlier.
To summarize: The solution relies on the special property of the number three that for
all integers $a,b$ the product $ab(a-b)(a+b)$ is divisible by three, so in one of the four critical directions (horizontal, vertical, descending or ascending diagonal) the entries will always be congruent to each other modulo $3$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root? For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root?
Approach: Case I : Only $1$ positive root, this implies $0$ lies between the roots, so $$f(0)<0$$ and $$D > 0$$
Case II: Both roots positive. It implies $0$ lies behind both the roots. So, $$f(0)>0$$
$$D\geq 0$$
Also, abscissa of vertex $> 0 $
I did the calculation and found the intersection, but it's not correct. Please help. Thanks.
| You only care about the larger of the two roots - the sign of the smaller root is irrelevant. So apply the quadratic formula to get the larger root only, which is
$\frac{-2(k-1)+\sqrt{4(k-1)^2-4(k+5)}}{2} = -k+1+\sqrt{k^2-3k-4}$. You need the part inside the square root to be $\geq 0$, so $k$ must be $\geq 4$ or $\leq -1$. Now, if $k\geq 4$, then to have $-k+1+\sqrt{k^2-3k-4}>0$, you require $k^2-3k-4> (k-1)^2$, which is a contradiction. Alternately, if $k\leq -1$, then $-k+1+\sqrt{k^2-3k-4}$ must be positive, as required.
So you get the required result whenever $k\leq -1$.
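As a sanity check (an addition for illustration, not part of the original answer), one can confirm numerically that the quadratic has a positive root exactly when $k \leq -1$:

```python
import math

def larger_root_positive(k):
    """True iff x^2 + 2(k-1)x + k + 5 has a real root that is positive."""
    disc = k * k - 3 * k - 4        # one quarter of the discriminant
    if disc < 0:
        return False                 # no real roots at all
    return -k + 1 + math.sqrt(disc) > 0

results = {k: larger_root_positive(k) for k in range(-10, 11)}
assert all(results[k] == (k <= -1) for k in results)
```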
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Evaluating the product $\prod\limits_{k=1}^{n}\cos\left(\frac{k\pi}{n}\right)$ Recently, I ran across a product that seems interesting.
Does anyone know how to get to the closed form:
$$\prod_{k=1}^{n}\cos\left(\frac{k\pi}{n}\right)=-\frac{\sin(\frac{n\pi}{2})}{2^{n-1}}$$
I tried using the identity $\cos(x)=\frac{\sin(2x)}{2\sin(x)}$ in order to make it "telescope" in some fashion, but to no avail. But, then again, I may very well have overlooked something.
This gives the correct solution if $n$ is odd, but of course evaluates to $0$ if $n$ is even.
So, I tried taking that into account, but must have approached it wrong.
How can this be shown? Thanks everyone.
| The roots of the polynomial $X^{2n}-1$ are $\omega_j:=\exp\left(\mathrm i\frac{2j\pi}{2n}\right)$, $0\leq j\leq 2n-1$. We can write
\begin{align}
X^{2n}-1&=(X^2-1)\prod_{j=1}^{n-1}\left(X-\exp\left(\mathrm i\frac{2j\pi}{2n}\right)\right)\left(X-\exp\left(-\mathrm i\frac{2j\pi}{2n}\right)\right)\\
&=(X^2-1)\prod_{j=1}^{n-1}\left(X^2-2\cos\left(\frac{j\pi}n\right)X+1\right).
\end{align}
Evaluating this at $X=i$, we get
$$(-1)^n-1=(-2)(-2\mathrm i)^{n-1}\prod_{j=1}^{n-1}\cos\left(\frac{j\pi}n\right),$$
hence
\begin{align}
\prod_{j=1}^n\cos\left(\frac{j\pi}n\right)&=-\prod_{j=1}^{n-1}\cos\left(\frac{j\pi}n\right)\\
&=\frac{(-1)^n-1}{2(-2\mathrm i)^{n-1}}\\
&=\frac{(-1)^n-1}2\cdot \frac{\mathrm i^{n-1}}{2^{n-1}}.
\end{align}
The RHS is $0$ if $n$ is even, and $-\dfrac{(-1)^m}{2^{2m}}=-\dfrac{\sin(n\pi/2)}{2^{n-1}}$ if $n$ is odd with $n=2m+1$.
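A quick numerical check of the closed form (added here, not in the original answer):

```python
import math

def lhs(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= math.cos(k * math.pi / n)
    return p

def rhs(n):
    return -math.sin(n * math.pi / 2) / 2 ** (n - 1)

max_err = max(abs(lhs(n) - rhs(n)) for n in range(1, 16))
print(max_err < 1e-9)  # True
```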
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 4,
"answer_id": 0
} |
Moment generating function for the uniform distribution Attempting to calculate the moment generating function for the uniform distribution, I run into a non-convergent integral.
Building off the definition of the moment generating function
$
M(t) = E[ e^{tx}] = \left\{ \begin{array}{l l}
\sum\limits_x e^{tx} p(x) &\text{if $X$ is discrete with mass function $p( x)$}\\
\int\limits_{-\infty}^\infty e^{tx} f( x) dx &\text{if $X$ is continuous with density $f( x)$}
\end{array}\right.
$
and the definition of the Uniform Distribution
$
f( x) = \left\{ \begin{array}{l l}
\frac{ 1}{ b - a} & a < x < b\\
0 & otherwise
\end{array} \right.
$
I end up with a non-converging integral
$\begin{array}{l l}
M( t) &= \int\limits_{-\infty}^\infty e^{tx} f(x) dx\\
&= \int\limits_{-\infty}^\infty e^{tx} \frac{ 1}{ b - a} dx\\
&= \left. e^{tx} \frac{ 1}{ t(b - a)} \right|_{-\infty}^{\infty}\\
&= \infty
\end{array}$
I should find $M(t) = \frac{ e^{tb} - e^{ta}}{ t(b - a)}$, what am I missing here?
| The limits of integration are not correct. You should integrate from $a$ to $b$ not from $-\infty$ to $+\infty$.
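With the limits corrected to $a$ and $b$, the expected closed form checks out numerically; here is a small verification sketch (added here, not in the original answer; the values of $a$, $b$, $t$ are arbitrary):

```python
import math

def mgf_numeric(a, b, t, steps=100_000):
    """Midpoint-rule approximation of E[e^{tX}] for X ~ Uniform(a, b)."""
    h = (b - a) / steps
    return sum(math.exp(t * (a + (i + 0.5) * h)) for i in range(steps)) * h / (b - a)

def mgf_closed(a, b, t):
    return (math.exp(t * b) - math.exp(t * a)) / (t * (b - a))

err = abs(mgf_numeric(1.0, 3.0, 0.7) - mgf_closed(1.0, 3.0, 0.7))
print(err < 1e-6)  # True
```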
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Calculating maximum of function I want to determine the value of a constant $a > 0$ which causes the highest possible value of $f(x) = ax(1-x-a)$.
I have tried differentiating the function to find a relation between $x$ and $a$ when $f'(x) = 0$, and found $x = \frac{1-a}{2}$.
I then insert it into the original function: $f(a) = \frac{3a - 6a^2 - 5a^3}{8}$
I derived it to $f'(a) = \frac{-15a^2 + 12a - 3}{8}$
I thought differentiating the function and setting the result equal to $0$ would lead to finding a maximum, but I can't find it. I can't go beyond $-15a^2 + 12a = 3$.
Where am I going wrong?
| We can resort to some algebra along with the calculus you are using, to see what happens with this function:
$$f(x)=ax(1−x−a)=-ax^2+(a-a^2)x$$
Note that this is a parabola. Since the coefficient of the $x^2$ term is negative, it opens downward so that the maximum value is at the vertex. As you have already solved, the vertex has x-coordinate $x=\frac{1−a}{2}$. Additionally, Théophile showed that the vertex's y-coordinate is $f(\frac{1−a}{2})=a(\frac{1−a}{2})^2$ which is unbounded as $a$ increases.
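A small numerical illustration of the answer's point (added here, not in the original answer): the vertex value $a\left(\frac{1-a}{2}\right)^2$ really is the maximum, and it grows without bound as $a$ increases:

```python
def f(x, a):
    return a * x * (1 - x - a)

def peak(a):
    return a * ((1 - a) / 2) ** 2    # value at the vertex x = (1-a)/2

for a in [0.5, 1.0, 2.0, 10.0]:
    xv = (1 - a) / 2
    assert abs(f(xv, a) - peak(a)) < 1e-12
    assert f(xv - 0.3, a) < peak(a) and f(xv + 0.3, a) < peak(a)

print(peak(10.0), peak(100.0))  # 202.5 245025.0
```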
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Bounded ratio of functions If $f(x)$ is a differentiable function on $\mathbb R$, and $f$ doesn't vanish on $\mathbb R$, does this imply that the function $\frac{f\,'(x)}{f(x)}$ is bounded on $\mathbb R$?
The function $1/f(x)$ will be bounded because $f$ doesn't vanish, and I guess that the derivative will reduce the growth of the function $f$, so that the ratio will be bounded. Is this explanation correct?
| In fact for any positive continuous function $g(x)$ on $\mathbb R$ and any $y_0 \ne 0$, the differential equation $\dfrac{dy}{dx} = g(x) y$ with initial condition $y(0) = y_0$ has a
unique solution $y = f(x) = y_0 \exp\left(\int_0^x g(t)\ dt\right)$, and this is a nonvanishing continuous function with $f'(x)/f(x) = g(x)$. So $f'/f$ can do anything
a continuous function can.
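To make the construction concrete (a sketch added here, not part of the original answer): take $g(x) = x^2 + 1$ and $y_0 = 2$, so $f(x) = 2\exp(x^3/3 + x)$; a finite-difference check confirms $f'/f = g$, which is unbounded:

```python
import math

g = lambda x: x * x + 1                        # a positive continuous g
f = lambda x: 2.0 * math.exp(x ** 3 / 3 + x)   # y0 * exp(integral of g from 0 to x), y0 = 2

h = 1e-6
errs = []
for x in [-2.0, 0.0, 0.5, 1.7]:
    fprime = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    errs.append(abs(fprime / f(x) - g(x)))
print(max(errs) < 1e-4)  # True
```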
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
simple integration question I've tried but I cannot tell whether the following is true or not. Let $f:[0,1]\rightarrow \mathbb{R}$ be a nondecreasing and continuous function. Is it true that I can find a Lebesgue integrable function $h$ such that
$$
f(x)=f(0)+\int_{0}^{x}h(t)\,dt
$$
with $f'=h$ almost everywhere?
Any hint on how to proceed is really appreciated!
| The Cantor function (call it $f$) is a counterexample. It is monotone, continuous, non-constant and has a zero derivative a.e.
If the above integral formula holds, then you have $f'(x) = h(x)$ at every Lebesgue point of $f$, which is a.e. $x \in [0,1]$. Since $f'(x) = 0$ a.e., we have that $h$ is essentially zero. Since $f$ is not constant, we have a contradiction.
See Rudin's "Real & Complex Analysis" Theorem 7.11 and Section 7.16 for details.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Question about direct sum of Noetherian modules is Noetherian Here is a corollary from Atiyah-Macdonald:
Question 1: The corollary states that finite direct sums of Noetherian modules are Noetherian. But they prove that countably infinite sums are Noetherian, right? (so they prove something stronger)
Question 2: I have come up with the following proof of the statement in the corollary, can you tell me if it's correct? Thank you:
Assume $M_i$ are Noetherian and let $(\bigoplus_i L_i)_n$ be an increasing sequence of submodules in $\bigoplus_i M_i$. Then in particular, $L_{in}$ is an increasing sequence in $M_i$ and hence stabilises, that is, for $n$ greater some $N_i$, $L_{in} = L_{in+1} = \dots $. Now set $N = \max_i N_i$. Then $(\bigoplus_i L_i)_n$ stabilises for $n> N$ and is equal to $\bigoplus_i L_i$, where $L_i = L_{iN_i}$.
This proves that finite direct sums of Noetherian modules are Noetherian so it's a bit weaker. But if it's correct it proves the corollary.
| As pointed out by others, the submodules of $M=M_1\oplus M_2$ are not necessarily direct sums of submodules of $M_1$ and $M_2$. Nevertheless, you always have the exact sequence
$$0\rightarrow M_1\rightarrow M\rightarrow M_2\rightarrow 0$$
Then $M$ is Noetherian if (and only if) $M_1$ and $M_2$ are Noetherian. One direction is trivial (if M is Noetherian then $M_1$ and $M_2$ are Noetherian). I prove the other direction here:
Assume $M_1$ and $M_2$ are Noetherian. Given nested submodules $N_1\subset N_2\subset \cdots$ in $M$, we can see their images stabilize in $M_1$ and $M_2$. More precisely, the chain
$$(N_1\cap M_1)\subset (N_2\cap M_1)\subset\cdots$$
terminates in $M_1$, say at length $j_1$ and so does
$$(N_1+M_1)/M_1\subset (N_2+M_1)/M_1\subset\cdots$$
in $M_2$, say at length $j_2$. Set $j=\max\{j_1,j_2\}$ to get
$$(N_{j}\cap M_1)=(N_{j+1}\cap M_1)=\cdots$$
in $M_1$ and
$$(N_{j}+M_1)/M_1=(N_{j+1}+M_1)/M_1=\cdots$$
in $M_2$. But $N_j$'s are nested modules. Hence the above equalities can occur if and only if $N_j=N_{j+1}=\cdots$. To check this claim, pick $n\in N_{j+1}$. Then $m:=n'-n\in M_1$ for some $n'\in N_{j}$. But $m\in N_{j+1}$ as well. Hence $m\in N_{j+1}\cap M_1$, that is $m\in N_{j}\cap M_1$ by the first equality above. So $m\in N_j$, that is $n\in N_j$, giving us $N_{j+1}\subset N_j$.
So the chain $N_1\subset N_2\subset ...$ terminates in $M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Check if two 3D vectors are linearly dependent I would like to determine with code (c++ for example) if two 3D vectors are linearly dependent.
I know that if I could determine that the expression
$v_1 = k \cdot v_2$ is true, then they are linearly dependent; they are linearly independent otherwise.
I've tried to construct an equation system to determine that, but since there could be zeros anywhere it gets very tricky and could end with divisions by zero and similar.
I've also though about using some matrices/determinants, but since the matrix would look like:
$$
\begin{matrix}
x_1 & y_1 & z_1\\
x_2 & y_2 & z_2\\
\end{matrix}
$$
I don't see an easy way to check for the linear dependency... any ideas?
Thanks!
| Here is the portion of the code you need:
// These three expressions are the components of the cross product v1 x v2;
// the vectors are linearly dependent exactly when all three vanish.
// (For floating-point coordinates, compare against a small tolerance
// instead of testing exact equality with zero.)
if((x1*y2 - x2*y1) != 0 || (x1*z2 - x2*z1) != 0 || (y1*z2 - y2*z1) != 0)
{
//Here you have independent vectors
}
else
{
//Here the vectors are linearly dependent
}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Computing a sum of binomial coefficients: $\sum_{i=0}^m \binom{N-i}{m-i}$ Does anyone know a better expression than the current one for this sum?
$$
\sum_{i=0}^m \binom{N-i}{m-i}, \quad 0 \le m \le N.
$$
It would help me compute a lot of things and make equations a lot cleaner in the context where it appears (as some asymptotic expression of the coefficients of a polynomial defined over cyclic graphs). Perhaps the context doesn't help much though.
For instance, if $N = m$, the sum is $N+1$, and for $N = m+1$, this sum is $N + (N-1) + \dots + 1 = N(N+1)/2$. But otherwise I don't know how to compute it.
| Here's my version of the proof (and where it arose in the context). You have a cyclic graph of size $n$, which is essentially $n$ vertices and a loop passing through them, so that you can think of them as the roots of unity on a circle (I don't want to draw because it would take a lot of time, but it really helps).
You want to count the number of subsets of size $k$ ($0 \le k \le n$) of this graph which contain 3 consecutive vertices (for some random reason that actually makes sense to me, but the rest is irrelevant).
You can fix 3 consecutive vertices, assume that the two adjacent vertices to those 3 are not in the subset, and then place the $k-3$ remaining anywhere else. Or you can fix $4$ consecutive vertices, assume again that the two adjacent vertices to those $4$ are not in the subset, and then place the $k-4$ remaining anywhere else. Or you can fix $5$, blablabla, etc. If you assume $n$ is prime, every rotation of the positions you've just counted gives you a distinct subset of the graph, so that the number of such subsets would be
$$
n\sum_{i=0}^{k-3} \binom{n-5-i}{k-3-i}.
$$
On the other hand, if you look at it in another way: suppose there are $\ell$ consecutive vertices in your subset. Look at the $3$ vertices in the $\ell$ consecutive ones that are adjacent to a vertex not in your subset, and place the remaining ones everywhere else. Again, the number of such subsets is
$$
n\binom{n-4}{k-3}.
$$
If $n$ is not prime, my computations are not accurate (they don't count well what I want to count in my context, but that's another question I'm trying to answer for myself), but I make precisely the same mistakes in both computations! So the two are actually equal. Removing the $n$'s on both lines and replacing $n-5$ and $k-3$ by $N$ and $m$, we get the identity.
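Stripped of the combinatorial context, the identity obtained is $\sum_{i=0}^m \binom{N-i}{m-i} = \binom{N+1}{m}$; a quick exhaustive check (added here, not in the original answer):

```python
from math import comb

checks = [
    sum(comb(N - i, m - i) for i in range(m + 1)) == comb(N + 1, m)
    for N in range(15) for m in range(N + 1)
]
print(all(checks))  # True
```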
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
checking whether certain numbers have an integral square root I was wondering if it is possible to find out all $n \geq 1$, such that $3n^2+2n$ has an integral square root, that is, there exists $a \in \mathbb{N}$ such that $3n^2+2n = a^2$
Also, similarly for $(n+1)(3n+1)$.
Thanks for any help!
| $3n^2+2n=a^2$, $9n^2+6n=3a^2$, $9n^2+6n+1=3a^2+1$, $(3n+1)^2=3a^2+1$, $u^2=3a^2+1$ (where $u=3n+1$), $u^2-3a^2=1$, and that's an instance of Pell's equation, and you'll find tons of information on solving those in intro Number Theory textbooks, and on the web, probably even here on m.se.
Try similar manipulations for your other question.
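A brute-force search (added here for illustration, not part of the original answer) confirms the link to the Pell equation and finds the first few valid $n$:

```python
import math

solutions = []
for n in range(1, 1000):
    v = 3 * n * n + 2 * n
    a = math.isqrt(v)
    if a * a == v:                      # 3n^2 + 2n is a perfect square
        u = 3 * n + 1
        assert u * u - 3 * a * a == 1   # (u, a) solves x^2 - 3y^2 = 1
        solutions.append((n, a))
print(solutions)  # [(2, 4), (32, 56), (450, 780)]
```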
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Functional and Linear Functional May I know what the distinction is between functional analysis and linear functional analysis? I did a search online and came to the conclusion that linear functional analysis is not functional analysis, and I am getting confused by the two terms. When I look at the book Linear Functional Analysis by Joan Cerda, published by the AMS, there is not much difference in its content compared to other titles with just the heading Functional Analysis. I hope someone can clarify this for me. Thank you.
| It is mostly a matter of taste. Traditionally functional analysis has dealt mostly with linear operators, whereas authors would typically title their books nonlinear functional analysis if they consider the theory of differentiable (or continuous or more general) mappings between Banach (or more general function) spaces.
But it seems more recently the subject of "nonlinear functional analysis" has been gaining traction and many more people now adhere to the philosophy "classifying mathematical problems as linear and nonlinear is like classifying the universe as bananas and non-bananas." So instead of labeling all the non-bananas, some people now agree that it makes more sense to instead label the bananas...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What is the formula to find the count of (sets of) maps between sets that equal the identity map? Given two finite sets $A$ and $B$, what is the formula to find the number of pairs of mappings $f, g$ such that $g \circ f = 1_A$?
| A function $f\colon A\to B$ has a left inverse, $g\colon B\to A$ such that $g\circ f = 1_A$, if and only if $f$ is one-to-one.
Given a one-to-one function $f\colon A\to B$, then the values of a left inverse $g\colon B\to A$ of $f$ on $f(A)$ are uniquely determined; the values on $B-f(A)$ are free.
I'm going to assume first that the sets are finite.
Therefore: the number of ordered pairs $(f,g)\in \{a\colon A\to B\}\times\{b\colon B\to A\}$ such that $g\circ f = 1_A$ is:
*
*$0$ if $|A|\gt |B|$ (no injections from $A$ to $B$);
*if $|A|\leq |B|$, say $|A|=n\leq m=|B|$, then there are $\frac{m!}{(m-n)!}$ ways of mapping $A$ to $B$ in a one-to-one manner; and there are $n^{m-n}$ ways of mapping $B-f(A)$ to $A$ (choose which of the $n$ things in $A$ each of the $m-n$ things in $B-f(A)$ is mapped to). So the number of pairs is
$$\frac{n^{m-n}m!}{(m-n)!}.$$
Though not necessary, I had already written much of what is below, so here goes:
For infinite sets the situation is a bit more complicated, but the analysis is the same. The number of sets of size $|A|$ contained in $|B|$ (necessary for embeddings) is either $|B|$ if $|A|\lt |B|$, or is $2^{|A|}$ when $|A|=|B|$; then you need to count the number of bijections between two sets of cardinality $|A|$. This is $2^{|A|}$ (I believe). Thus, there are $|B|2^{|A|}$ one-to-one functions from $A$ to $B$ when $|A|\leq|B|$. Then you should have $2^{|B|}$ ways to complete "most" of those functions to full left inverses $B\to A$. So I'm going to hazard a guess that the number of pairs would be $0$ if $|A|\gt |B|$, and $|B|2^{|A|}2^{|B|} = 2^{|B|}$ if $|A|\leq |B|$ and $|A|$ is infinite.
I'll leave it for someone who knows more cardinal arithmetic than I (Asaf? Andres?) to correct that if I'm wrong.
Finally, if $A$ is finite and $B$ is infinite, there are $|B|$ possible 1-to-1 functions $A\to B$, and then $2^{|B|}$ ways of mapping the complement of the image back to $A$, so that would also give $2^{|B|}$ pairs in this case.
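The finite-case count $\frac{n^{m-n}\,m!}{(m-n)!}$ can be verified by exhaustive enumeration over small sets (a check added here, not part of the original answer):

```python
from itertools import product
from math import factorial

def count_pairs(n, m):
    """Pairs (f: A -> B, g: B -> A) with g(f(x)) = x for all x, |A|=n, |B|=m."""
    A, B = range(n), range(m)
    return sum(1
               for f in product(B, repeat=n)
               for g in product(A, repeat=m)
               if all(g[f[x]] == x for x in A))

for n in range(4):
    for m in range(4):
        expected = 0 if n > m else n ** (m - n) * factorial(m) // factorial(m - n)
        assert count_pairs(n, m) == expected
print(count_pairs(2, 3))  # 12
```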
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/173970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are there "one way" integrals? If we suppose that we can start with any function we like, can we work "backwards" and differentiate the function to create an integral that is hard to solve?
To define the question better, let's say we start with a function of our choosing, $f(x)$. We can then differentiate the function with respect to $x$ do get $g(x)$:
$$g(x) = f'(x)$$
This, in turn, implies, under appropriate conditions, that the integral of $g(x)$ is $f(x)$:
$$\int_a^b { g(x) dx } = [f(x)]_a^b$$
I'm wondering what conditions are appropriate to allow one to easily get a $g(x)$ and $f(x)$ that assure that $f(x)$ can't be easily found from $g(x)$.
SUMMARY OF THE QUESTION
Can we get a function, $g(x)$, that is hard to integrate, yet we know the solution to? It's important that no one else should be able to find the solution, $f(x)$, given only $g(x)$. Please help!
POSSIBLE EXAMPLE
This question/integral seems like it has some potential.
DEFINITION OF HARDNESS
The solution to the definite integral can be returned with the first $n$ significant digits correct. The problem is hard if the time this takes grows exponentially in $n$.
In other words, if we get the first $n$ digits correct, it would take roughly $O(e^n)$ seconds to do it.
| Here is an example (by Harold Davenport) of a function that is hard to integrate:
$$\int{2t^6+4t^5+7t^4-3t^3-t^2-8t-8\over
(2t^2-1)^2\sqrt{t^4+4t^3+2t^2+1}}\,dt\ .$$
The primitive is given by (check it!)
$$\eqalign{&-2\log(t^2+4t+2)-\log(\sqrt2+2t)\cr&+ \log\left(\sqrt2
t+2\sqrt2-2\sqrt{t^4+4t^3+2t^2+1}\,\right)\cr
&-5\log(2t^2-1)-5\log\left(2\sqrt2 t+4\sqrt2 -t^2-4t-6\right) \cr&+
5\log\left(\bigl(4\sqrt2+19\bigr)\sqrt{t^4+4t^3+2t^2+1}\right.
\cr &\qquad\qquad\left. - 16\sqrt2 t^2 -8\sqrt2 t +6\sqrt2
-29t^2-38t+5\right)\cr}$$
$$\eqalign{ &+
2\log\left(\bigl(10\sqrt2+17\bigr)\sqrt{t^4+4t^3+2t^2+1}\right.\cr
&\qquad\qquad\left.+4\sqrt2 t^2+16\sqrt2 t -2\sqrt2-11t^2-44t-39\right)
\cr &+ {1\over2}\log\left(\bigl(731\sqrt2
t+71492\sqrt2-70030t-141522\bigr) \sqrt{t^4+4t^3+2t^2+1}
\right.\cr &\qquad\qquad-40597\sqrt2t^3-174520\sqrt2t^2-122871\sqrt2
t+50648\sqrt2\cr&\qquad\qquad\left.+90874t^3+
403722t^2+ 272622t-61070 \vphantom{\sqrt{t^4}}\right)\cr & +
{(2t+1)\sqrt{t^4+4t^3+2t^2+1}\over 4t^2-2}\ .\cr}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
What is the difference between normal and perpendicular? What is the difference when a line is said to be normal to another and when a line is said to be perpendicular to another?
| If "normal" means at 90 degrees to a surface, then "normal" is used in 3D. "Perpendicular" in that case is more a 2D notion, referring to two entities of the same kind (line-line) with a 90-degree angle between them. "Normal" can refer to two entities of different kinds (line-surface), which makes the term natural for the 3D case (I definitely remember hearing the phrase "line normal to a surface" rather than "line perpendicular to the surface"): a line and a surface make a 3D configuration, while a line and a line lie in one plane, which is the 2D case. This is just what I figured out from one of the answers here, which made the most sense to me.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 4
} |
Need to find the recurrence equation for coloring a 1 by n chessboard So the question asks me to find the number of ways H[n] to color a 1 by n chessboard with 3 colors - red, blue and white such that the number of red squares is even and number of blue squares is at least one. I am doing it in this way -
1. If the first square is white, then the remaining n-1 squares can be colored in H[n-1] ways.
2. If the first square is red, then another red will be needed among the n-1 remaining squares, and the rest n-2 can be colored in H[n-2] ways (i.e. (n-1)*H[n-2]).
3. And now comes the problem with blue. If I put blue in the first square and say that the rest of the n-1 squares can be colored in H[n-1] ways, that will be wrong, as I already have a blue and may not need any more (while H[n-1] requires at least one blue).
I thought of adding H'[n-1] to H[n] = H[n-1] + (n-1)*H[n-2] which gives
H[n] = H[n-1] + (n-1)*H[n-2] + H'[n-1] where H'[n] is the number of ways to fill n squares with no blue squares(so H'[n] = (n-1)*H'[n-2] + H'[n-1]).
So now I'm kind of really confused how to solve such an equation ->
H[n] = H[n-1] + (n-1)*H[n-2] + H'[n-1]. (I am specifically asked not to use exponential generating functions to solve the problem.)
| Letting $H[n]$ be the number of ways to color the chessboard with an even number of red squares and at least one blue square, and letting $G[n]$ be the very closely related number of ways to color the chessboard with an odd number of red squares with at least one blue square, we notice that $H[n]+G[n]$ is the number of ways to color with at least one blue square, regardless of the condition on red squares.
We get then $H[n] + G[n] = 3^n - 2^n$
We can similarly define $H'[n]$ and $G'[n]$ as the answers to the related counting problems where the condition on blue squares is removed.
Let us solve the problem with no condition on the blues first. The board can either begin with a white or a blue square followed by a board with an even number of red squares, or begin with a red square followed by a board with an odd number of red squares. We further notice that $H'[n]+G'[n]=3^n$. We have then that $H'[n] = 2H'[n-1] + G'[n-1] = 2H'[n-1]+3^{n-1}-H'[n-1] = H'[n-1]+3^{n-1}$
It follows that when solving along with an initial condition that $H'[1]=2$ that $H'[n]=\frac{1}{2}(3^n+1)$
Back to the original problem: if the first square is white, the remaining $n-1$ squares must have an even number of red squares and at least one blue; if the first square is red, the remaining $n-1$ squares must have an odd number of red squares and at least one blue; and finally, if the first square is blue, the remaining squares must have an even number of red squares with no further condition on blue squares.
We have then $H[n] = H[n-1] + G[n-1] + H'[n-1]$ and simplifying, $H[n] = H[n-1] + 3^{n-1}-2^{n-1}-H[n-1]+\frac{1}{2}(3^{n-1}+1) = \frac{1}{2}(3^n-2^n+1)$, ironically all of the lower terms having been cancelled and we are already at our closed form solution.
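A brute-force check of the closed form (added here, not in the original answer):

```python
from itertools import product

def brute_H(n):
    """Colorings of a 1-by-n board with an even number of reds and at least one blue."""
    return sum(1 for c in product("RBW", repeat=n)
               if c.count("R") % 2 == 0 and "B" in c)

assert all(brute_H(n) == (3 ** n - 2 ** n + 1) // 2 for n in range(1, 11))
print([brute_H(n) for n in range(1, 6)])  # [1, 3, 10, 33, 106]
```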
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Proof of the divisibility rule of 17.
Rule: Subtract 5 times the last digit from the rest of the number, if the
result is divisible by 17 then the number is also divisible by 17.
How does this rule work? Please give the proof.
| Let
$$n=\sum_{k=0}^N 10^k a_k$$
be the number we want to test for divisibility, where the $a_k$s are the digits in the decimal expansion of $n$. We form the second number $m$ by the process you describe, $$m = \sum_{k=1}^N 10^{k-1} a_k - 5 a_0 = \frac{n-a_0}{10}- 5 a_0 $$
Now suppose $17|m$. Then there exists a natural number $b$ such that $17b = m$. We then have
$$
17 b = \frac{n-a_0}{10}- 5 a_0
$$
$$
10 \cdot 17\, b = n - a_0 - 50 a_0 \implies n = 17(10b + 3a_0)
$$
and so $n$ is divisible by 17.
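The argument shows $17 \mid m \Rightarrow 17 \mid n$; in fact the converse holds too, since $10m = n - 51a_0$ and $51 = 3\cdot 17$, so the rule is an equivalence. A quick check (added here, not part of the original answer):

```python
def step(n):
    """One application of the rule: drop the last digit, subtract 5 times it."""
    return n // 10 - 5 * (n % 10)

assert all((n % 17 == 0) == (step(n) % 17 == 0) for n in range(1, 100_000))
print(step(2023), 2023 % 17)  # 187 0
```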
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 4
} |
Expectation and median (Jensen’s inequality) of spatial functions Let $f:S^n \to \mathbb{R}$ be a 1-Lipschitz function, where $S^n$ is equipped with the geodesic distance $d$ and with the uniform measure $\mu$.
How can I show that such an $f$ satisfies Jensen’s inequality:
$(\int_{S^n} f d\mu)^2 \leq {\int_{S^n} f^2 d\mu}$?
In addition, is it true that in such a case we have $\sqrt{\int_{S^n} f^2 d\mu} \leq m$ where $m$ is the unique number satisfying $\mu(f \geq m) \geq 0.5, \space \mu(f \leq m) \geq 0.5$?
| The first inequality is called the Cauchy-Schwarz inequality, rather than Jensen's inequality. Its proof is simple and very general: one considers $g=(f-a)^2$ with $a=\int\limits_{S^n}f\,\mathrm d\mu$. Then $g\geqslant0$ everywhere, hence $\int\limits_{S^n}g\,\mathrm d\mu\geqslant0$. Expanding this by linearity and using the fact that the mass of $\mu$ is $1$ yields the result.
The second inequality you suggest is odd. If $f(x)=x_1-1$, then $m=-1$, which ruins every chance to get a nonnegative quantity $\leqslant m$. More generally, $ \sqrt{ \int\limits_{S^n} f^2\,\mathrm d\mu } \geqslant \int\limits_{S^n} f\,\mathrm d\mu$ and, as soon as $f$ is symmetric around one of its medians $m$, the RHS is $m$. To sum up, no comparison can exist, and if one existed, it would be the opposite of the one you suggest.
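A discrete illustration of the first inequality (added here, not part of the original answer; the circle $S^1$ stands in for $S^n$, and the test function is arbitrary):

```python
import math

N = 10_000
values = [math.sin(t) + 0.3 * math.cos(3 * t)
          for t in (2 * math.pi * k / N for k in range(N))]
mean = sum(values) / N
mean_sq = sum(v * v for v in values) / N
print(mean ** 2 <= mean_sq)  # True
```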
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
orientability of Riemann surfaces Could anyone tell me about the orientability of Riemann surfaces?
Holomorphic maps between two open sets of the complex plane preserve the orientation of the plane: the conformal property of a holomorphic map implies that local angles are preserved, and in particular right angles are preserved, so the local notions of "clockwise" and "anticlockwise" for small circles are preserved. Can we transfer this idea to abstract Riemann surfaces?
Thank you for your comments.
| Holomorphic maps not only preserve nonoriented angles but oriented angles as well. Note that the map $z\mapsto\bar z$ preserves nonoriented angles, in particular nonoriented right angles, but it does not preserve the clockwise or anticlockwise orientation of small circles.
Now a Riemann surface $S$ is covered by local coordinate patches $(U_\alpha, z_\alpha)_{\alpha\in I}\ $, and the local coordinate functions $z_\alpha$ are related in the intersections $U_\alpha\cap U_\beta$ by means of holomorphic transition functions $\phi_{\alpha\beta}$: One has $z_\alpha=\phi_{\alpha\beta}\circ z_\beta$ where the Jacobian $|\phi'_{\alpha\beta}(z)|^2$ is everywhere $>0$. It follows that the positive ("anticlockwise") orientation of infinitesimal circles on $S$ is well defined througout $S$. In all, a Riemann surface $S$ is oriented to begin with.
For better understandng consider a Moebius band $M:=\{z\in {\mathbb C}\ | \ -1<{\rm Im}\, z <1\}/\sim\ ,$ where points $x+iy$ and $x+1-iy$ $\ (x\in{\mathbb R})$ are identified. In this case it is not possible to choose coordinate patches $(U_\alpha, z_\alpha)$ covering all of $M$ such that the transition functions are properly holomorphic. As a consequence this $M$ is not a Riemann surface, even though nonoriented angles make sense on $M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Decomposition of a prime number in a cyclotomic field Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$.
Let $A$ be the ring of algebraic integers in $\mathbb{Q}(\zeta)$.
Let $p$ be a prime number such that $p \neq l$.
Let $f$ be the smallest positive integer such that $p^f \equiv 1$ (mod $l$).
Then $pA = P_1\cdots P_r$, where the $P_i$ are distinct prime ideals of $A$, each $P_i$ has degree $f$, and $r = (l - 1)/f$.
My question: How would you prove this?
This is a related question.
| Once you know what the ring of integers in $\mathbf{Q}(\zeta)$ is, you know that the factorization of a rational prime $p$ therein is determined by the factorization of the minimal polynomial of $\zeta$ over $\mathbf{Q}$, which is the cyclotomic polynomial $\Phi_\ell$, mod $p$. So you basically just need to determine the degree over $\mathbf{F}_p$ of a splitting field of the image of $\Phi_\ell$ in $\mathbf{F}_p[X]$. The degree is the $f$ in your question. This can be determined using the Galois theory of finite fields, mainly the fact that the Galois group is cyclic with a canonical generator. The details are carried out in many books, e.g., Neukirch's book on algebraic number theory.
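Since $f$ is just the multiplicative order of $p$ modulo $\ell$, the splitting data is easy to compute; a small illustration for $\ell = 7$ (added here, not part of the original answer):

```python
def splitting_data(p, l):
    """f = multiplicative order of p mod l; r = (l - 1) / f prime factors."""
    f, x = 1, p % l
    while x != 1:
        x = x * p % l
        f += 1
    return f, (l - 1) // f

for p in (2, 3, 5, 11, 13):
    f, r = splitting_data(p, 7)
    assert f * r == 6 and pow(p, f, 7) == 1
print([splitting_data(p, 7) for p in (2, 3, 5, 11, 13)])
# [(3, 2), (6, 1), (6, 1), (3, 2), (2, 3)]
```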
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
The number $\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]$ is always an integer For each $n$ consider the expression $$\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]$$
I am trying to prove by induction that this is an integer for all $n$.
In the base case $n=1$, it ends up being $1$.
I am trying to prove the induction step:
*
*if $\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^n\right]$ is an integer, then so is
$\frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{n+1}-\left(\frac{1-\sqrt{5}}{2}\right)^{n+1}\right]$.
I have tried expanding it, but didn't get anywhere.
| Hint $\rm\quad \phi^{\:n+1}\!-\:\bar\phi^{\:n+1} =\ (\phi+\bar\phi)\ (\phi^n-\:\bar\phi^n)\ -\ \phi\:\bar\phi\,\ (\phi^{\:n-1}\!-\:\bar\phi^{\:n-1})$
Therefore, upon substituting $\rm\ \phi+\bar\phi\ =\ 1\ =\, -\phi\bar\phi\ $ and dividing by $\:\phi-\bar\phi = \sqrt 5\:$ we deduce that $\rm\:f_{n+1} = f_n + f_{n-1}.\:$ Since $\rm\:f_0,f_1\:$ are integers, all $\rm\,f_n\:$ are integers by induction, using the recurrence.
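A numerical check (an illustration added here, not from the original answer) that the closed-form expression reproduces the integer sequence defined by $f_{n+1} = f_n + f_{n-1}$, $f_0 = 0$, $f_1 = 1$:

```python
import math

sqrt5 = math.sqrt(5)
phi, phibar = (1 + sqrt5) / 2, (1 - sqrt5) / 2

def binet(n):
    return (phi ** n - phibar ** n) / sqrt5

fib = [0, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])   # f_{n+1} = f_n + f_{n-1}

assert all(abs(binet(n) - fib[n]) < 1e-6 for n in range(30))
print(round(binet(10)))  # 55
```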
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Area of Triangle inside another Triangle I am stumped on the following question:
In the figure below $AD=4$ , $AB=3$ , and $CD=9$. What is the area of Triangle AEC?
I need to solve this using trigonometric ratios; however, if trig ratios make this problem a lot easier to solve, I would be interested in seeing how that would work.
Currently I am attempting to solve this in the following way
$\bigtriangleup ACD_{Area}=\bigtriangleup ECD_{Area} + \bigtriangleup ACE_{Area}$
The only problem with this approach is finding the length of AE or ED. How would I do that? Any suggestions, or is there a better method?
| $AE+ED=AD=4$, and $ECD$ and $EAB$ are similar (right?), so $AE$ is to $AB$ as $ED$ is to $CD$. Can you take it from there?
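The figure is not shown here, so the following computation assumes the usual configuration for this problem: $AB \perp AD$ at $A$, $CD \perp AD$ at $D$, with $B$ and $C$ on opposite sides of line $AD$ and $E$ the point where segment $BC$ crosses $AD$. Under that assumption the similarity above pins down $AE$ and $ED$, and the area follows:

```python
AD, AB, CD = 4.0, 3.0, 9.0

# EAB ~ EDC gives AE/AB = ED/CD, together with AE + ED = AD
AE = AD * AB / (AB + CD)        # = 1
ED = AD - AE                    # = 3

# with CD perpendicular to AD, the height of triangle AEC from C is CD
area_AEC = 0.5 * AE * CD        # = 4.5
area_ECD = 0.5 * ED * CD        # = 13.5
area_ACD = 0.5 * AD * CD        # = 18.0
# consistent with the asker's decomposition: ACD = ECD + ACE
assert abs(area_ACD - (area_ECD + area_AEC)) < 1e-9
print(AE, ED, area_AEC)         # 1.0 3.0 4.5
```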
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
For Maths Major, advice for perfect book to learn Algorithms and Date Structures Purpose: Self-Learning, NOT for course or exam
Prerequisite: Done a course in basic data structures and algorithms, but too basic, not many things.
Major: Bachelor, Mathematics
My Opinion: Prefer more compact, mathematical, rigorous book on Algorithms and Data Structures. Since it's for long-term self-learning, not for exam or course, then related factors should not be considered (e.g. learning curve, time-usage), only based on the book to perfectly train algorithms and data structures.
After searching at Amazon, following three are somehow highly reputed.
(1)Knuth, The Art of Computer Programming, Volume 1-4A
This book set contains most of the important material on algorithms and is very mathematically rigorous. I prefer to use this to learn algorithms all in one step.
(2)Cormen, Introduction to Algorithms
It's more like in-between (1) and (3).
(3)Skiena, The Algorithm Design Manual
More introductory and practical compared with (1). Is it OK to use this as a warm-up, then read (1) and skip (2)?
Desirable answer: Advice or more recommended books
| Knuth is an interesting case, it's certainly fairly rigorous, but I would never recommend it as a text to learn from. It's one of those ones that is great once you already know the answer, but fairly terrible if you want to learn something. On top of this, the content is now rather archaically presented and the pseudocode is not the most transparent.
Of course this is my experience of it (background CS & maths), but I have encountered this rough assessment a few times.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Methods to solve differential equations We are given the equation
$$\frac{1}{f(x)} \cdot \frac{d\left(f(x)\right)}{dx} = x^3.$$
To solve it, "multiply by $dx$" and integrate:
$\frac{x^4}{4} + C = \ln \left( f(x) \right)$
But $dx$ is not a number, what does it mean when I multiply by $dx$, what am I doing, why does it work, and how can I solve it without multiplying by $dx$?
Second question:
Suppose we have the equation $$\frac{d^2f(x)}{dx^2}=(x^2-1)f(x)$$
Then for large $x$, we have $\frac{d^2f(x)}{dx^2}\approx x^2f(x)$, with the approximate solution $k \exp \left( \frac{x^2}{2} \right)$
Why is it then reasonable to suspect, or assume, that the solution to the original equation, will be of the form $f(x)=e^{x^2/2} \cdot g(x)$, where $g(x)$ has a simpler form then $f(x)$? When does it not work?
Third question:
The method of replacing all occurences of $f(x)$, and its derivatives by a power series, $\sum a_nx^n$, for which equations does this work or lead to a simpler equation?
Do we lose any solutions this way?
| FIRST QUESTION
This is answered by studying the fundamental theorem of calculus, which basically (in this context) says that if on an interval $(a,b)$
\begin{equation}
\frac{dF}{dx} = f
\end{equation}
then,
\begin{equation}
\int^{b}_{a} f\left(x\right) dx = F(b) - F(a)
\end{equation}
where the integral is the limit of the Riemann sum. And then, you can specify anti-derivatives (indefinite integrals) as $F\left(x\right) + c$ by fixing an arbitrary $F\left(a\right) = c$ and considering the function
\begin{equation}
F\left(x\right) = \int^{x}_{a} f\left(x\right) dx
\end{equation}
where we do not specify the limits as they are understood to exist and are arbitrary. It is from here, actually, that we get the multiplying-by-$dx$ rule.
For,
\begin{equation}
F = \int dF
\end{equation}
and hence
\begin{equation}
dF = f(x)dx \implies F = \int f(x) dx
\end{equation}
This is a little hand-waving, and for actual proofs, you would have to study Riemann sums.
Now, after this, for specific examples such as yours, I guess Andre Nicolas's answer is better but still I will try to offer something similar.
We can say let $g\left(x\right) = \int x^{3} dx$ and $h\left(x\right) = log\left|f\left(x\right)\right|$. Then,
\begin{equation}
\frac{dh\left(x\right)}{dx} = \frac{dg\left(x\right)}{dx}
\end{equation}
for all $x \in \left(a,b\right)$
and hence, we can say that
\begin{equation}
h\left(x\right)= g\left(x\right) + C
\end{equation}
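As a numerical sanity check of the separated solution $\ln f = \frac{x^4}{4} + C$, i.e. $f(x) = k\,e^{x^4/4}$: a central difference approximates $f'$, and we verify $\frac{1}{f}\frac{df}{dx} = x^3$ (the constant $k = 2$ is an arbitrary choice):

```python
import math

def f(x, k=2.0):
    """Candidate solution f(x) = k * exp(x^4 / 4)."""
    return k * math.exp(x ** 4 / 4)

h = 1e-6
for x in [0.3, 0.7, 1.1]:
    fprime = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert abs(fprime / f(x) - x ** 3) < 1e-5  # (1/f) df/dx = x^3
```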
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
Decomposition of a prime number $p \neq l$ in the quadratic subfield of a cyclotomic number field of an odd prime order $l$ Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$.
Let $K = \mathbb{Q}(\zeta)$.
Let $A$ be the ring of algebraic integers in $K$.
Let $G$ be the Galois group of $\mathbb{Q}(\zeta)/\mathbb{Q}$.
$G$ is isomorphic to $(\mathbb{Z}/l\mathbb{Z})^*$.
Hence $G$ is a cyclic group of order $l - 1$.
Let $f = (l - 1)/2$.
There exists a unique subgroup $G_f$ of $G$ whose order is $f$.
Let $K_f$ be the fixed subfield of $K$ by $G_f$.
$K_f$ is a unique quadratic subfield of $K$.
Let $A_f$ be the ring of algebraic integers in $K_f$.
Let $p$ be a prime number such that $p \neq l$.
Let $pA_f = P_1\cdots P_r$, where $P_1, \dots, P_r$ are distinct prime ideals of $A_f$.
Since $p^{l - 1} \equiv 1$ (mod $l$), $p^f = p^{(l - 1)/2} \equiv \pm$1 (mod $l$).
My question: Is the following proposition true? If yes, how would you prove this?
Proposition
(1) If $p^{(l - 1)/2} \equiv 1$ (mod $l$), then $r = 2$.
(2) If $p^{(l - 1)/2} \equiv -1$ (mod $l$), then $r = 1$.
This is a related question.
| This is most easily established by decomposition group calculations; it is a special case of the more general result proved here (which is an answer to OP's linked question).
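For $l \equiv 1 \pmod 4$ the quadratic subfield is $\mathbb{Q}(\sqrt{l})$, and an odd prime $p \neq l$ splits there ($r=2$) exactly when $l$ is a square mod $p$; by quadratic reciprocity that matches Euler's criterion $p^{(l-1)/2} \equiv 1 \pmod l$. A brute-force check of this equivalence for a few small cases (this only illustrates the proposition; it is not the decomposition-group proof):

```python
def is_square_mod(a, p):
    """Euler's criterion: is a a quadratic residue modulo the odd prime p?"""
    return pow(a % p, (p - 1) // 2, p) == 1

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]

for l in [5, 13, 17]:                # l = 1 (mod 4): quadratic subfield Q(sqrt(l))
    for p in odd_primes:
        if p == l:
            continue
        splits = is_square_mod(l, p)              # r = 2 iff l is a square mod p
        criterion = pow(p, (l - 1) // 2, l) == 1  # p^((l-1)/2) = 1 (mod l)
        assert splits == criterion
```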
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does $z ^k+ z^{-k}$ belong to $\Bbb Z[z + z^{-1}]$? Let $z$ be a non-zero element of $\mathbb{C}$.
Does $z^k + z^{-k}$ belong to $\mathbb{Z}[z + z^{-1}]$ for every positive integer $k$?
Motivation:
I came up with this problem from the following question.
Maximal real subfield of $\mathbb{Q}(\zeta )$
| Yes. It holds for $k=1$; it also holds for $k=2$, since
$$z^2+z^{-2} = (z+z^{-1})^2 - 2\in\mathbb{Z}[z+z^{-1}].$$
Assume that $z^k+z^{-k}$ lie in $\mathbb{Z}[z+z^{-1}]$ for $1\leq k\lt n$. Then, if $n$ is odd, we have:
$$\begin{align*}
z^n+z^{-n} &= (z+z^{-1})^n - \sum_{i=1}^{\lfloor n/2\rfloor}\binom{n}{i}(z^{n-i}z^{-i} + z^{i-n}z^{i})\\
&= (z+z^{-1})^n - \sum_{i=1}^{\lfloor n/2\rfloor}\binom{n}{i}(z^{n-2i}+z^{2i-n}).
\end{align*}$$
and if $n$ is even, we have:
$$\begin{align*}
z^n+z^{-n} &= (z+z^{-1})^n - \binom{n}{n/2} - \sum_{i=1}^{(n/2)-1}\binom{n}{i}(z^{n-i}z^{-i} + z^{i-n}z^{i})\\
&= (z+z^{-1})^n - \binom{n}{n/2} - \sum_{i=1}^{(n/2)-1}\binom{n}{i}(z^{n-2i}+z^{2i-n}).
\end{align*}$$
If $1\leq i\leq \lfloor \frac{n}{2}\rfloor$, then $0\leq n-2i \lt n$, so $z^{n-2i}+z^{2i-n}$ lies in $\mathbb{Z}[z+z^{-1}]$ by the induction hypothesis. Thus, $z^n+z^{-n}$ is a sum of terms in $\mathbb{Z}[z+z^{-1}]$.
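The induction can also be packaged as the recurrence $p_k(w) = w\,p_{k-1}(w) - p_{k-2}(w)$ with $p_0 = 2$, $p_1 = w$, where $p_k(z+z^{-1}) = z^k + z^{-k}$; the coefficients stay integers by construction. A sketch that builds the coefficient lists and checks them numerically:

```python
def pk_coeffs(k):
    """Integer coefficients (lowest degree first) of p_k with
    p_k(z + 1/z) = z^k + z^(-k), via p_k = w*p_{k-1} - p_{k-2}."""
    p_prev, p_cur = [2], [0, 1]          # p_0 = 2, p_1 = w
    if k == 0:
        return p_prev
    for _ in range(k - 1):
        shifted = [0] + p_cur            # multiply p_cur by w
        padded = p_prev + [0] * (len(shifted) - len(p_prev))
        p_prev, p_cur = p_cur, [a - b for a, b in zip(shifted, padded)]
    return p_cur

def eval_poly(coeffs, w):
    return sum(c * w ** i for i, c in enumerate(coeffs))

z = 0.7 + 0.4j
for k in range(8):
    c = pk_coeffs(k)
    assert all(isinstance(a, int) for a in c)          # integer coefficients
    assert abs((z ** k + z ** (-k)) - eval_poly(c, z + 1 / z)) < 1e-9
```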
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
$\mathbb{CP}^1$ is compact? $\mathbb{CP}^1$ is the set of all one dimensional subspaces of $\mathbb{C}^2$, if $(z,w)\in \mathbb{C}^2$ be non zero , then its span is a point in $\mathbb{CP}^1$.let $U_0=\{[z:w]:z\neq 0\}$ and $U_1=\{[z:w]:w\neq 0\}$, $(z,w)\in \mathbb{C}^2$,and $[z:w]=[\lambda z:\lambda w],\lambda\in\mathbb{C}^{*}$ is a point in $\mathbb{CP}^1$, the map is $\phi_0:U_0\rightarrow\mathbb{C}$ defined by $$\phi_0([z:w])=w/z$$
the map $\phi_1:U_1\rightarrow\mathbb{C}$ defined by $$\phi_1([z:w])=z/w$$
Now, could anyone tell me why $\mathbb{CP}^1$ is the union of the two closed sets $\phi_0^{-1}(D)$ and $\phi_1^{-1}(D)$, where $D$ is the closed unit disk in $\mathbb{C}$, and why $\mathbb{CP}^1$ is compact?
| The complex projective line is the union of the two open subsets
$$
D_i=\{[z_0:z_1]\text{ such that }z_i\neq0\},\quad i=0, 1.
$$
Each of them is homeomorphic to $\Bbb C$, hence to the open unit disk.
Compactness follows from the fact that it is homeomorphic to the sphere $S^2$. You can see this as a simple elaboration of the observation that
$$
\Bbb P^1(\Bbb C)=D_0\cup\{[0:1]\}.
$$
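The covering claim is just that for any $[z:w]$, one of $|w/z| \le 1$ or $|z/w| \le 1$ holds (divide by whichever coordinate has the larger modulus), so $\mathbb{CP}^1 = \phi_0^{-1}(D) \cup \phi_1^{-1}(D)$; each piece is closed as the preimage of the closed disk $D$, and each is compact since it is homeomorphic to $D$. A randomized check of the covering statement:

```python
import random

def covered_by_closed_charts(z, w):
    """True iff [z:w] lies in phi_0^{-1}(D) or phi_1^{-1}(D)."""
    return (z != 0 and abs(w / z) <= 1) or (w != 0 and abs(z / w) <= 1)

random.seed(1)
for _ in range(1000):
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    w = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    if z == 0 and w == 0:
        continue                 # [0:0] is not a point of CP^1
    assert covered_by_closed_charts(z, w)
```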
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/174987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Question about $p$-adic numbers and $p$-adic integers I've been trying to understand what $p$-adic numbers and $p$-adic integers are today. Can you tell me if I have it right? Thanks.
Let $p$ be a prime. Then we define the ring of $p$-adic integers to be
$$ \mathbb Z_p = \{ \sum_{k=m}^\infty a_k p^k \mid m \in \mathbb Z, a_k \in \{0, \dots, p-1\} \} $$
That is, the $p$-adic integers are a bit like formal power series with the indeterminate $x$ replaced with $p$ and coefficients in $\mathbb Z / p \mathbb Z$. So for example, a $3$-adic integer could look like this: $1\cdot 1 + 2 \cdot 3 + 1 \cdot 9 = 16$ or $\frac{1}{9} + 1 $ and so on. Basically, we get all natural numbers, fractions of powers of $p$, and sums of those two.
This is a ring (just like formal power series). Now we want to turn it into a field. To this end we take the field of fractions with elements of the form
$$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$
for $\sum_{k=r}^\infty b_k p^k \neq 0$. We denote this field by $\mathbb Q_p$.
Now as it turns out, $\mathbb Q_p$ is the same as what we get if we take the ring of fractions of $\mathbb Z_p$ for the set $S=\{p^k \mid k \in \mathbb Z \}$. This I don't see. Because then this would mean that every number $$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$ can also be written as $$ \frac{\sum_{k=m}^\infty a_k p^k}{p^r}$$
and I somehow don't believe that. So where's my mistake? Thanks for your help.
| I want to emphasize that $\mathbb Z_p$ is not just $\mathbb F_p[[X]]$ in disguise, though the two rings share many properties. For example, in the $3$-adics one has
\[
(2 \cdot 1) + (2 \cdot 1) = 1 \cdot 1 + 1 \cdot 3 \neq 1 \cdot 1.
\]
I know three ways of constructing $\mathbb Z_p$ and they're all pretty useful. It sounds like you might enjoy the following description:
\[
\mathbb Z_p = \mathbb Z[[X]]/(X - p).
\]
This makes it clear that you can add and multiply elements of $\mathbb Z_p$ just like power series with coefficients in $\mathbb Z$. The twist is that you can always exchange $pX^n$ for $X^{n + 1}$. This is the “carrying over” that Thomas mentions in his helpful series of comments.
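The carrying can be made concrete: adding digit lists base $p$ with carries implements addition in $\mathbb{Z}_p$ (on truncations), and shows why $\mathbb{Z}_3$ is not $\mathbb{F}_3[[X]]$ as a ring:

```python
def padic_add(a, b, p):
    """Add two p-adic integers given as digit lists (lowest power first),
    carrying just like ordinary base-p arithmetic."""
    n = max(len(a), len(b)) + 1
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    out, carry = [], 0
    for i in range(n):
        s = a[i] + b[i] + carry
        out.append(s % p)
        carry = s // p
    while out and out[-1] == 0:
        out.pop()
    return out

# In Z_3:  2 + 2 = 4 = 1*1 + 1*3  -> digits [1, 1]
print(padic_add([2], [2], 3))       # [1, 1]
# In F_3[[X]] there is no carry: (2) + (2) = (1), a single digit
print([(2 + 2) % 3])                # [1]
```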
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
A Trivial Question - Increasing by doubling a number when its negative
The question is: if $x=y-\frac{50}{y}$, where $x$ and $y$ are both $>0$, and the value of $y$ is doubled in the equation above, then the value of $x$ will (a) decrease, (b) remain the same, (c) increase four-fold, (d) double, (e) increase to more than double. (Ans: e)
Here is how I am solving it: $x = \frac{y^2-50}{y}$. By putting $y=1$ I get $x=-49$, and by putting $y=2$ (doubling) I get $x=-23$. Now how is $-23$ more than double of $-49$? Correct me if I am wrong: a double increment of $-49$ would be $-49+49 = 0$, which would be greater than $-23$.
| Let $x_1=y-\frac{50}{y}$ and $x_2=2y-\frac{50}{2y}$. Then $x_2-x_1=(2y-y)-(\frac{50}{2y}-\frac{50}{y})=y+\frac{50}{2y}=y-\frac{50}{y}+(\frac{150}{2y})=x_1+(\frac{150}{2y})$
$\implies x_2-2x_1=(\frac{150}{2y})\gt 0$ (as $y\gt 0$)$\implies x_2\gt 2x_1\implies x_2/x_1\gt 2$ which is option (e).
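A quick check of the identity $x_2 - 2x_1 = \frac{150}{2y}$ used above; note that the final step $x_2/x_1 > 2$ also uses $x_1 > 0$, which the problem's condition $x > 0$ supplies:

```python
def x_of(y):
    return y - 50 / y

for y in [0.5, 1, 2, 5, 10, 100]:
    x1, x2 = x_of(y), x_of(2 * y)
    assert abs((x2 - 2 * x1) - 150 / (2 * y)) < 1e-9   # x2 - 2*x1 = 150/(2y)
    assert x2 > 2 * x1
    if x1 > 0:                    # the problem's x > 0 forces y > sqrt(50)
        assert x2 / x1 > 2        # option (e): increases to more than double
```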
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
optimality of 2 as a coefficient in a continued fraction theorem I'm giving some lectures on continued fractions to high school and college students, and I discussed the standard theorem that, for a real number $\alpha$ and integers $p$ and $q$ with $q \not= 0$, if $|\alpha-p/q| < 1/(2q^2)$ then $p/q$ is a convergent in the continued fraction expansion of $\alpha$. Someone in the audience asked if 2 is optimal: is there a positive number $c < 2$ such that, for every $\alpha$ (well, of course the case of real interest is irrational $\alpha$), when $|\alpha - p/q| < 1/(cq^2)$ it is guaranteed that $p/q$ is a convergent to the continued fraction expansion of $\alpha$?
Please note this is not answered by the theorem of Hurwitz, which says that an irrational $\alpha$ has $|\alpha - p_k/q_k| < 1/(\sqrt{5}q_k^2)$ for infinitely many convergents $p_k/q_k$, and that $\sqrt{5}$ is optimal: all $\alpha$ whose cont. frac. expansion ends with an infinite string of repeating 1's fail to satisfy such a property if $\sqrt{5}$ is replaced by any larger number. For the question the student at my lecture is asking, an optimal parameter is at most 2, not at least 2.
| 2 is optimal. Let $\alpha=[a,2,\beta]$, where $\beta$ is a (large) irrational, and let $p/q=[a,1]=(a+1)/1$. Then $p/q$ is not a convergent to $\alpha$, and $${p\over q}-\alpha={1\over2-{1\over \beta+1}}$$ which is ${1\over(2-\epsilon)q^2}$ since $q=1$.
More complicated examples can be constructed. I think this works, though I haven't done all the calculations. Let $\alpha=[a_0,a_1,\dots,a_n,m,2,\beta]$ with $m$ and $\beta$ large, let $p/q=[a_0,a_1,\dots,a_n,m,1]$. Then again $p/q$ is not a convergent to $\alpha$, while $$\left|{p\over q}-\alpha\right|={1\over(2-\epsilon)q^2}$$ for $m$ and $\beta$ sufficiently large.
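The first example can be verified exactly with rational arithmetic: with $\alpha = [a; 2, \beta]$ and $p/q = a+1$, the error is $\frac{\beta+1}{2\beta+1}$, which exceeds $\frac{1}{2q^2} = \frac12$ but beats $\frac{1}{cq^2}$ for any fixed $c < 2$ once $\beta$ is large:

```python
from fractions import Fraction

def cf_value(terms):
    """Value of the finite continued fraction [a0; a1, ..., an]."""
    v = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        v = a + 1 / v
    return v

a, beta = 3, 10 ** 6
alpha = cf_value([a, 2, beta])
p, q = a + 1, 1                  # p/q = [a; 1] = a + 1 is not a convergent of alpha
err = Fraction(p, q) - alpha
assert err == Fraction(beta + 1, 2 * beta + 1)
assert err > Fraction(1, 2 * q * q)     # worse than 1/(2 q^2) ...
assert err < Fraction(100, 199)         # ... yet within 1/(1.99 q^2)
```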
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
If $p$ is an element of $\overline E$ but not a limit point of $E$, then why is there a $p' \in E$ such that $d(p, p') < \varepsilon$? I don't understand one of the steps of the proof of Theorem 3.10(a) in Baby Rudin. Here's the theorem and the proof up to where I'm stuck:
Relevant Definitions
The closure of the subset $E$ of some metric space is the union of $E$ with the set of all its limit points.
The diameter of the subset $E$ of some metric space is the supremum of the set of all pairwise distances between its elements.
For the points $x$ and $y$ in some metric space, $d(x, y)$ denotes the distance between $x$ and $y$.
Theorem 3.10(a) If $\overline{E}$ is the closure of a set $E$ in a metric space $X$, then
$$
\text{diam} \ \overline{E} = \text{diam} \ E.
$$
Proof. Since $E \subseteq \overline{E}$, we have
$$\begin{equation*}
\text{diam} \ E \leq \text{diam} \ \overline{E}.
\end{equation*}$$
Let $\epsilon > 0$, and pick $p, q \in \overline{E}$. By the definition of $\overline{E}$, there are points $p', q' \in E$ such that $d(p,p') < \epsilon$ and $d(q, q') < \epsilon$...
I see that this works if $p$ and $q$ are limit points of $E$. But how does this work if, say, $p$ isn't a limit point of $E$? What if $E$ is some region in $\mathbb{R}^2$ along with the point $p$ by itself way off somewhere?
| Basically Rudin needs to write a triangle inequality with two additional arbitrary points $p',q'$.
$d(p,q) \leq d(p,p') + d(p',q) $
$ d(p,p') + d(p',q) \leq d(p,p') + d(p',q') + d(q',q)$
He assumes $E$ is non-empty, so we might pick points $p'$ and $q'$ for this inequality, even if $p = p'$ and $q = q'$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
The Star Trek Problem in Williams's Book This problem is from the book Probability with martingales by Williams. It's numbered as Exercise 12.3 on page 236. It can be stated as follows:
The control system on the starship has gone wonky. All that one can do is to set a distance to be traveled. The spaceship will then move that distance in a randomly chosen direction and then stop. The object is to get into the Solar System, a ball of radius $r$. Initially, the starship is at a distance $R>r$ from the sun.
If the next hop-length is automatically set to be the current distance to the Sun.("next" and "current" being updated in the obvious way). Let $R_n$ be the distance from Sun to the starship after $n$ space-hops. Prove $$\sum_{n=1}^\infty \frac{1}{R^2_n}<\infty$$
holds almost everywhere.
It has puzzled me a long time. I tried to prove the series has a finite expectation, but in fact it's expectation is infinite. Does anyone has a solution?
| In the book, it seems to me that the sum terminates when they get home, so if they get home the sum is finite. Williams also hints: it should be clear what theorem to use here. I am sure he is referring to Lévy's extension of the Borel–Cantelli lemma: $\frac{r^2}{4R_n^2}$ is the conditional probability of getting home at time $n+1$ given that your position at time $n$ is $R_n$, since it is the proportion of the area of the sphere of radius $R_n$ about the ship that lies inside the target ball.
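A seeded Monte Carlo sketch of the hop process: if the current distance is $R$, the next distance is $R\sqrt{2+2\cos\theta}$, where $\cos\theta$ is uniform on $[-1,1]$ for a uniformly random 3-D direction. Since $\mathbb{E}\left[\log\frac{R_{n+1}}{R_n}\right] = \log 2 - \frac12 > 0$, the distance drifts to infinity and the partial sums of $1/R_n^2$ settle down in a typical run (this illustrates, but of course does not prove, the almost-sure convergence):

```python
import math
import random

random.seed(42)

def simulate(R0, steps):
    """Hop the current distance in a uniform random 3-D direction:
    R_{n+1} = R_n * sqrt(2 + 2c) with c = cos(angle) uniform on [-1, 1]."""
    R, dists = R0, [R0]
    for _ in range(steps):
        c = random.uniform(-1.0, 1.0)
        R *= math.sqrt(2 + 2 * c)
        dists.append(R)
    return dists

dists = simulate(R0=2.0, steps=1000)
terms = [1 / R ** 2 for R in dists]
s = sum(terms)
assert math.isfinite(s) and s > 0
# each hop at most doubles the distance (|u + d| <= 2 for unit vectors u, d)
assert all(dists[i + 1] <= 2 * dists[i] * (1 + 1e-12) for i in range(1000))
```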
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
Determine invertible and inverses in $(\mathbb Z_8, \ast)$ Let $\ast$ be defined in $\mathbb Z_8$ as follows:
$$\begin{aligned} a \ast b = a +b+2ab\end{aligned}$$
Determine all the invertible elements in $(\mathbb Z_8, \ast)$ and determine, if possibile, the inverse of the class $[4]$ in $(\mathbb Z_8, \ast)$.
Identity element
We shall say that $(\mathbb Z_8, \ast)$ has an identity element if:
$$\begin{aligned} (\exists \varepsilon \in \mathbb Z_8) \text { } (\forall a \in \mathbb Z_8) : \text{ } a \ast \varepsilon = \varepsilon \ast a = a\end{aligned}$$
$$\begin{aligned} a+\varepsilon+2a\varepsilon = a \Rightarrow \varepsilon +2a\varepsilon = 0 \Rightarrow \varepsilon(1+2a) = 0 \Rightarrow \varepsilon = 0 \end{aligned}$$
As $\ast$ is commutative, similarly we can prove for $\varepsilon \ast a$.
$$\begin{aligned} a \ast 0 = a+0+2a0 = a \end{aligned}$$
$$\begin{aligned} 0\ast a = 0+a+20a = a\end{aligned}$$
Invertible elements and $[4]$ inverse
We shall state that in $(\mathbb Z_8, \ast)$ there is an inverse element for a fixed $a$ if and only if there exists $\alpha \in \mathbb Z_8$ such that:
$$\begin{aligned} a\ast \alpha = \alpha \ast a = \varepsilon \end{aligned}$$
$$\begin{aligned} a+\alpha +2a\alpha = 0 \end{aligned}$$
$$\begin{aligned} \alpha(2a+1) \equiv_8 -a \Rightarrow \alpha \equiv_8 -\frac{a}{(2a+1)} \end{aligned}$$
In particular looking at $[4]$ class, it follows:
$$\begin{aligned} \alpha \equiv_8 -\frac{4}{(2\cdot 4+1)}=-\frac{4}{9} \end{aligned}$$
therefore:
$$\begin{aligned} 9x \equiv_8 -4 \Leftrightarrow 1x \equiv_8 4 \Rightarrow x=4 \end{aligned}$$
which seems to be the right value as
$$\begin{aligned} 4 \ast \alpha = 4 \ast 4 = 4+4+2\cdot 4\cdot 4 = 8 + 8\cdot 4 = 0+0\cdot 4 = 0 \end{aligned}$$
Does everything hold? Have I done anything wrong, anything I failed to prove?
| From your post:
Determine all the invertible elements in $(\mathbb Z_8, \ast)$
You have shown that the inverse of $a$ is $\alpha \equiv_8 -\frac{a}{(2a+1)}$. I would go a step further and list out each element which has an inverse since you only have eight elements to check.
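Brute force makes the listing easy. Every class turns out to be invertible, which matches the formula $\alpha \equiv_8 -a(2a+1)^{-1}$: the factor $2a+1$ is always odd, hence a unit mod $8$:

```python
def star(a, b, n=8):
    return (a + b + 2 * a * b) % n

inverses = {}
for a in range(8):
    for b in range(8):
        if star(a, b) == 0:              # 0 is the identity element found above
            inverses[a] = b
            break

assert inverses == {0: 0, 1: 5, 2: 6, 3: 3, 4: 4, 5: 1, 6: 2, 7: 7}
assert star(4, 4) == 0                   # [4] is its own inverse
print(sorted(inverses))                  # [0, 1, 2, 3, 4, 5, 6, 7]: all invertible
```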
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
The longest sum of consecutive primes that add to a prime less than 1,000,000
In Project Euler problem $50,$ the goal is to find the longest sum of consecutive primes that add to a prime less than $1,000,000. $
I have an efficient algorithm to generate a set of primes between $0$ and $N.$
My first algorithm to try this was to brute force it, but that was impossible.
I tried creating a sliding window, but it took much too long to even begin to cover the problem space.
I got to some primes that were summed by $100$+ consecutive primes, but had only run about $5$-$10$% of the problem space.
I'm self-taught, with very little post-secondary education.
Where can I read about or find about an algorithm for efficiently calculating these consecutive primes?
I'm not looking for an answer, but indeed, more for pointers as to what I should be looking for and learning about in order to solve this myself.
| Small speed up hint: the prime is clearly going to be odd, so you can rule out about half of the potential sums that way.
If you're looking for the largest such sum, checking things like $2+3,5+7+11,\cdots$ is a waste of time. Structure it differently so that you spend all your time dealing with larger sums. You don't need most of the problem space.
Hopefully this isn't too much of an answer for you.
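One standard way to structure it, following the hints: sieve the primes, take prefix sums so any window sum is a difference of two prefix values, and only ever test windows longer than the current best (the sums grow with the window, so each inner loop breaks almost immediately):

```python
def solve(limit=1_000_000):
    # Sieve of Eratosthenes
    is_prime = bytearray([1]) * limit
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = bytearray(len(range(i * i, limit, i)))
    primes = [i for i in range(limit) if is_prime[i]]

    # prefix[j] - prefix[i] = sum of primes[i..j-1]
    prefix = [0]
    for p in primes:
        prefix.append(prefix[-1] + p)

    best_len, best_prime = 0, 0
    for i in range(len(primes)):
        # only windows strictly longer than the current best can improve
        for j in range(i + best_len + 1, len(prefix)):
            s = prefix[j] - prefix[i]
            if s >= limit:
                break
            if is_prime[s]:
                best_len, best_prime = j - i, s
    return best_len, best_prime

result = solve()
print(result)   # (543, 997651)
```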
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
} |
Show matrix $A+5B$ has an inverse with integer entries given the following conditions Let $A$ and $B$ be 2×2 matrices with integer entries such that
each of $A$, $A + B$, $A + 2B$, $A + 3B$, $A + 4B$ has an inverse with integer
entries. Show that the same is true for $A + 5B$.
| First note that a matrix $X$ with integer entries is invertible and its inverse has integer entries if and only if $\det(X)=\pm1$.
Let $P(x)=\det(A+xB)$. Then $P(x)$ is a polynomial of degree at most $4$, with integer coefficients, and $P(0),P(1),P(2),P(3),P(4) \in \{ \pm 1 \}$.
Claim: $P(0)=P(1)=P(3)=P(4)$.
Proof: It is known that $b-a|P(b)-P(a)$ for $a,b$ integers.
Then $3|P(4)-P(1), 3|P(3)-P(0)$ and $4|P(4)-P(0)$. Since the RHS of each division is $0$ or $\pm 2$, it follows it is zero. This proves the claim.
Now, $P(x)-P(0)$ is a polynomial of degree at most four which has the roots $0,1,3,4$. Thus
$$P(x)-P(0)=ax(x-1)(x-3)(x-4) \,.$$
hence $P(2)=P(0)+4a$. Since $P(2), P(0) \in \{ \pm 1 \}$, it follows that $a=0$, and hence
$P(x)$ is the constant polynomial $1$ or $-1$.
Extra: One can actually deduce further from here that $\det(A)=\pm 1$ and $A^{-1}B$ is nilpotent.
Indeed $A$ is invertible and since $\det(A+xB)=\det(A)\det(I+xA^{-1}B)$ you get from here that $\det(I+xA^{-1}B)=1$. It is trivial to deduce next that the characteristic polynomial of $A^{-1}B$ is $x^2$.
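A concrete family realizing the hypotheses, checked numerically: $A = I$ and $B$ strictly upper triangular gives $\det(A+kB)=1$ for every $k$, and it also illustrates the "Extra" remark that $A^{-1}B$ is nilpotent:

```python
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def add(A, B, k):
    return [[A[i][j] + k * B[i][j] for j in range(2)] for i in range(2)]

A = [[1, 0], [0, 1]]     # A = I
B = [[0, 1], [0, 0]]     # strictly upper triangular, hence nilpotent

for k in range(6):
    assert det2(add(A, B, k)) == 1     # A + kB is in GL_2(Z) for k = 0, ..., 5
# A^{-1}B = B here, and B*B = 0
BB = [[sum(B[i][t] * B[t][j] for t in range(2)) for j in range(2)] for i in range(2)]
assert BB == [[0, 0], [0, 0]]
```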
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
$G$ be a group acting holomorphically on $X$
Could anyone tell me why $g(z)$ has such an expression? In particular, I don't get what $a_n(g)$ is and why it appears; the notation is confusing me and I am not understanding it properly. Also, why is $g$ an automorphism? Where do the $a_i$ belong? What is the role of $G$ in that power-series expression?
Thank you. I am self-studying the subject, and many thanks to all the users who respond while I learn and post my trivial doubts. Thank you again.
| The element $g$ gives rise to a function $g: X \to X$. Since $g$ is a function, it has a Taylor expansion at the point $p$ in the coordinate $z$. We could write
$$
g(z) = \sum_{n=0}^\infty a_nz^n.
$$
for some complex numbers $a_n$. However, we are going to want to do this for all the elements of the group. If there's another element, $k$, we could write
$$
k(z) = \sum_{n=0}^\infty b_nz^n.
$$
It is terrible notation to pick different letters for each element of the group, and how do we even do it for an arbitrary group? What the author is saying is that, instead, we can pick one efficient notation by labeling the Taylor-series coefficient with the appropriate element of $G$. So what I called $a_n$ is what he calls $a_n(g)$; what I called $b_n$ is what he called $a_n(k)$. Do you now understand his arguments that $a_0(g) = 0$ (for any $g$) and $a_1(g)$ can't be?
But it's not just that his notation is better than mine - it's also that it makes it more obvious that we get a function $G \to \mathbb{C}^\times$ by taking the first Taylor coefficient (in his notation that's $g \mapsto a_1(g)$). It's this function which he is showing is a homomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A Couple of Normal Bundle Questions We are working through old qualifying exams to study. There were two questions concerning normal bundles that have stumped us:
$1$. Let $f:\mathbb{R}^{n+1}\longrightarrow \mathbb{R}$ be smooth and have $0$ as a regular value. Let $M=f^{-1}(0)$.
(a) Show that $M$ has a non-vanishing normal field.
(b) Show that $M\times S^1$ is parallelizable.
$2$. Let $M$ be a submanifold of $N$, both without boundary. If the normal bundle of $M$ in $N$ is orientable and $M$ is nullhomotopic in $N$, show that $M$ is orientable.
More elementary answers are sought. But, any kind of help would be appreciated. Thanks.
| Some hints:
*
*(a) consider $\nabla f$. $\quad$(b)Show that $TM\oplus \epsilon^1\cong T\mathbb{R}^{n+1}|_M$ is a trivial bundle, then analyze $T(M\times S^1)$.
*Try to use homotopy to construct an orientation of $TN|_M$. Let $F:M\times[0,1]\rightarrow N$ be a smooth homotopy map s.t. $F_0$ is embedding, $F_1$ is mapping to a point $p$, then pull back (my method is parallel transportation) the orientation of $T_p N$ to $TN|_M$, and then use $TN|_M\cong TM\oplus T^\perp M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is this statement stronger than the Collatz conjecture?
$n$,$k$, $m$, $u$ $\in$ $\Bbb N$;
Let's see the following sequence:
$x_0=n$; $x_m=3x_{m-1}+1$.
I am afraid I am a complete noob, but I cannot (dis)prove that the following implies the Collatz conjecture:
$\forall n\exists k,u:x_k=2^u$
Could you help me in this problem? Also, please do not (dis)prove the statement, just (dis)prove it is stronger than the Collatz conjecture.
If it implies and it is true, then LOL.
UPDATE
Okay, let me reconfigure the question: let's consider my statement true. In this case, does it imply the Collatz conjecture?
| Call your statement S. Then: (1.) S does not hold. (2.) S implies the Collatz conjecture (and S also implies any conjecture you like). (3.) I fail to see how Collatz conjecture should imply S. (4.) If indeed Collatz conjecture implies S, then Collatz conjecture does not hold (and this will make the headlines...).
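Concerning (1): from $x_m = 3x_{m-1}+1$ one gets the closed form $x_m = 3^m n + \frac{3^m-1}{2}$; for $n=3$ this is $\frac{7\cdot 3^m - 1}{2}$, and $x_m = 2^u$ would force $7\cdot 3^m = 2^{u+1}+1$. But $7\cdot 3^m \equiv 5$ or $7 \pmod 8$, while $2^{u+1}+1 \equiv 1 \pmod 8$ for $u \ge 2$ and equals $3$ or $5$ only when $2^u \le 2$, so the sequence starting at $n=3$ never hits a power of two. Some $n$ do reach one (e.g. $3\cdot 5+1=16$), but S fails as a statement about all $n$:

```python
def is_power_of_two(x):
    return x > 0 and x & (x - 1) == 0

def reaches_power_of_two(n, max_steps=200):
    """Return the first index k with x_k = 2^u, or None if none within max_steps."""
    x = n
    for k in range(max_steps + 1):
        if is_power_of_two(x):
            return k
        x = 3 * x + 1
    return None

assert reaches_power_of_two(1) == 0          # 1 = 2^0
assert reaches_power_of_two(5) == 1          # 3*5 + 1 = 16
assert reaches_power_of_two(3) is None       # 3, 10, 31, 94, ... never 2^u
```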
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Question about proof of $A[X] \otimes_A A[Y] \cong A[X, Y] $ As far as I understand universal properties, one can prove $A[X] \otimes_A A[Y] \cong A[X, Y] $ where $A$ is a commutative unital ring in two ways:
(i) by showing that $A[X,Y]$ satisfies the universal property of $A[X] \otimes_A A[Y] $
(ii) by using the universal property of $A[X] \otimes_A A[Y] $ to obtain an isomorphism $\ell: A[X] \otimes_A A[Y] \to A[X,Y]$
Now surely these two must be interchangeable, meaning I can use either of the two to prove it. So I tried to do (i) as follows:
Define $b: A[X] \times A[Y] \to A[X,Y]$ as $(p(X), q(Y)) \mapsto p(X)q(Y)$. Then $b$ is bilinear. Now let $N$ be any $R$-module and $b^\prime: A[X] \times A[Y] \to N$ any bilinear map.
I can't seem to define $\ell: A[X,Y] \to N$ suitably. The "usual" way to define it would've been $\ell: p(x,y) \mapsto b^\prime(1,p(x,y)) $ but that's not allowed in this case.
Question: is it really not possible to prove the claim using (i) in this case?
| Define $l(X^i Y^j) := b'(X^i, Y^j)$ and extend this to an $A$-linear map $A[X,Y] \to N$ by
$$l\left(\sum_{i,j} a_{ij} X^i Y^j\right) = \sum_{i,j} a_{ij} b'(X^i,Y^j).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 1
} |
How to prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m? m, t, k are Natural numbers.
How can I prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m ?
| We have
$$10t+k=10t -200k+201k=10(t-20k)+(3)(67)k.$$
The result now follows from the fact that $67\mid(t-20k)$.
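An exhaustive small-range check, plus the identity underlying the answer:

```python
# if 67 | (t - 20k), then 67 | (10t + k)
ok = all(
    (10 * t + k) % 67 == 0
    for t in range(1, 300)
    for k in range(1, 300)
    if (t - 20 * k) % 67 == 0
)
assert ok

# the key identity: 10*(t - 20*k) + 201*k = 10*t + k, and 201 = 3*67
assert 201 == 3 * 67
for t, k in [(21, 1), (40, 2), (155, 11)]:
    assert 10 * (t - 20 * k) + 201 * k == 10 * t + k
```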
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Distance in the Poincare Disk model of hyperbolic geometry I am trying to understand the Poincare Disk model of a hyperbolic geometry and how to measure distances. I found the equation for the distance between two points on the disk as:
$d^2 = (dx^2 + dy^2) / (1-x^2-y^2)^2$
Given two points on the disk, I am assuming that $dx$ is the difference in the euclidean $x$ coordinate of the points, and similar for $dy$. So, what are $x$ and $y$ in the formula? Also, I see some formulas online as:
$d^2 = 4(dx^2 + dy^2) / (1-x^2-y^2)^2$
I am not sure which one is correct.
Assuming that $X$ and $Y$ are coordinates of one of the points, I have tried a few examples, but can't get the math to work out. For instance, if $A = (0,0)$, $B = (0,.5)$, $C = (0,.75)$, then what are $d(A,B)$, $d(B,C)$, and $d(A,C)$? The value $d(A,B) + d(B,C)$ should equal $d(A,C)$, since they are on the same line, but I can't get this to work out. I get distances of $.666$, $.571$, and $1.714$ respectively.
MathWorld - Hyperbolic Metric
| The point is that $dx$ and $dy$ in the formula (the one with the $4$ in it is the right one, by the way) don't represent the difference in the $x$ and $y$ co-ordinates, but rather the 'infinitesimal' distance at the point $(x,y)$, so to actually find the distance between two points we have to do some integration.
So the idea is that at the point $(x,y)$ in Euclidean co-ordinates, the length squared, $ds^2$, of an infinitesimally small line is the sum of the infinitesimally small projections of that line onto the $x$ and $y$ axes ($dx^2$ and $dy^2$) multiplied by a scaling factor which depends on $x$ and $y$ ($\frac{4}{(1-x^2-y^2)^2}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 3
} |
Formal notation for function equality Is there a short-hand notation for
$$
f(x) = 1 \quad \forall x
$$
?
I've seen $f\equiv 1$ being used before, but found some some might (mis)interpret that as $f:=1$ (in the sense of definition), i.e., $f$ being the number $1$.
| Go ahead and use $\equiv$. To prevent any possibility of misunderstanding, you can use the word "constant" the first time this notation appears. As in "let $f$ be the constant function $f\equiv1$..."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/175989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prime-base products: Plot For each $n \in \mathbb{N}$, let $f(n)$ map $n$ to the product of the primes that divide $n$.
So for $n=112$, $n=2^4 \cdot 7^1$, $f(n)= 2 \cdot 7 = 14$.
For $n=1000 = 2^3 \cdot 5^3$, $f(1000)=2 \cdot 5 = 10$.
Continuing in this manner, I arrive at the following plot:
Essentially: I would appreciate an explanation of this plot, "in the large":
I can see why it contains near-lines at slopes $1,2,3,4,\ldots,$ etc., but, still, somehow
the observational regularity was a surprise to me, due, no doubt, to my numerical naivety—especially the way it degenerates near the $x$-axis to such a regular stipulated pattern.
I'd appreciate being educated—Thanks!
| I cannot tell how much you know. If a number $n$ is squarefree then $f(n) = n.$ This is the most frequent case, as $6/\pi^2$ of natural numbers are squarefree. Next most frequent are 4 times an odd squarefree number, in which case $f(n) = n/2,$ a line of slope $1/2,$ as I think you had figured out. These numbers are not as numerous, a count of $f(n) = n/2$ should show frequency below $6/\pi^2.$ All your lines will be slope $1/k$ for some natural $k,$ but it is not just a printer effect that larger $k$ shows a less dense line. Anyway, worth calculating the actual density of the set $f(n) = n/k.$
Meanwhile, note that a computer printer does not actually join up dots in a line into a printed line, that would be nice but is not realistic. There are optical effects in your graph that suggest we are seeing step functions. If so, there are artificial patterns not supported mathematically.
Alright, I am seeing an interesting variation on frequency that I did not expect. Let us define
$$ g(k) = \frac{\pi^2}{6} \cdot \mbox{frequency of} \; \left\{ f(n) = n/k \right\}. $$
Therefore $$ g(1) = 1. $$
What I am finding is, for a prime $p,$
$$ g(p) = \frac{1}{ p \,(p+1)}, $$
$$ g(p^2) = \frac{1}{ p^2 \,(p+1)}, $$
$$ g(p^m) = \frac{1}{ p^m \,(p+1)}. $$
Furthermore $g$ is multiplicative, so when $\gcd(m,n) = 1,$ then $$g(mn) = g(m) g(n). $$
Note that it is necessary that the frequency of all possible events be $1,$ so
$$ \sum_{k=1}^\infty \; g(k) = \frac{\pi^2}{6}.$$ I will need to think about the sum, it ought not to be difficult to recover this from known material on $\zeta(2).$
EDIT: got it. see EULER. For any specific prime, we get
$$ G(p) = g(1) + g(p) + g(p^2) + g(p^3) + \cdots = \left( 1 + \frac{1}{p(p+1)} + \frac{1}{p^2(p+1)} + \frac{1}{p^3(p+1)} + \cdots \right) $$ or
$$ G(p) = 1 + \left( \frac{1}{p+1} \right) \left( \frac{1}{p} + \frac{1}{p^2} + \frac{1}{p^3} + \cdots \right) $$ or
$$ G(p) = \frac{p^2}{p^2 - 1} = \frac{1}{1 - \frac{1}{p^2}}. $$
Euler's Product Formula tells us that
$$ \prod_p \; G(p) = \zeta(2) = \frac{\pi^2 }{6}. $$ The usual bit about unique factorization and multiplicative functions is
$$ \sum_{k=1}^\infty \; g(k) = \prod_p \; G(p) = \zeta(2) = \frac{\pi^2}{6}.$$
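The densities can be checked empirically with a sieve for the radical $f(n)$ (a Python sketch; the dictionary `gs` estimates $g(k)$ as defined above):

```python
import math

N = 200_000
rad = [1] * (N + 1)
for p in range(2, N + 1):
    if rad[p] == 1:                      # no smaller prime divides p, so p is prime
        for m in range(p, N + 1, p):
            rad[m] *= p                  # multiply in each distinct prime factor

counts = {}
for n in range(1, N + 1):
    k = n // rad[n]                      # n / f(n) is always an integer
    counts[k] = counts.get(k, 0) + 1

gs = {k: (math.pi**2 / 6) * counts.get(k, 0) / N for k in (1, 2, 3, 4, 6)}
# predicted: g(1)=1, g(2)=1/6, g(3)=1/12, g(4)=1/12, g(6)=g(2)g(3)=1/72
assert abs(gs[1] - 1) < 0.02
assert abs(gs[2] - 1/6) < 0.01 and abs(gs[3] - 1/12) < 0.01
assert abs(gs[4] - 1/12) < 0.01 and abs(gs[6] - 1/72) < 0.01
```

The agreement at $N = 200{,}000$ is already within a fraction of a percent.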
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Identity related to binomial distribution? While writing a (non-math) paper I came across the following apparent identity:
$$N \sum_{i = 1}^N \frac{1}{i}\binom{N-1}{i-1} p^{i-1} (1-p)^{N-i} = \frac{1-(1-p)^N}{p}$$
where $N$ is a positive integer and $p$ is a nonzero probability. Based on intuition and some manual checks, this looks like it should be true for all such $N$ and $p$. I can't prove this, and being mostly ignorant about math, I don't know how to learn what I need to prove this. I'd really appreciate anything helpful, whether a quick pointer in the right direction or the whole proof (or a proof or example that the two aren't identical).
Note also that $1-(1-p)^N = \sum\limits_{i=1}^N \binom{N}{i} p^i (1-p)^{N-i}$
and that $p = 1-(1-p)^1$
For background, see the current draft with relevant highlightings here.
| $$
\begin{align}
N\sum_{i=1}^N\dfrac1i\binom{N-1}{i-1}p^{i-1}(1-p)^{N-i}
&=\frac{(1-p)^N}{p}\sum_{i=1}^N\binom{N}{i}\left(\frac{p}{1-p}\right)^i\tag{1}\\
&=\frac{(1-p)^N}{p}\left[\left(1+\frac{p}{1-p}\right)^N-1\right]\tag{2}\\
&=\frac{(1-p)^N}{p}\left[\frac1{(1-p)^N}-1\right]\tag{3}\\
&=\frac{1-(1-p)^N}{p}\tag{4}
\end{align}
$$
Explanation of steps:
*
*$\displaystyle\frac{N}{i}\binom{N-1}{i-1}=\binom{N}{i}$
*$\displaystyle\sum_{i=0}^N\binom{N}{i}x^i=(1+x)^N\quad\quad$($i=0$ is missing, so we subtract $1$)
*$1+\dfrac{p}{1-p}=\dfrac1{1-p}$
*distribute multiplication over subtraction
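Before proving such an identity it is easy to sanity-check it numerically (a Python sketch using `math.comb`):

```python
from math import comb

def lhs(N, p):
    # N * sum_{i=1}^{N} (1/i) C(N-1, i-1) p^(i-1) (1-p)^(N-i)
    return N * sum(comb(N - 1, i - 1) * p**(i - 1) * (1 - p)**(N - i) / i
                   for i in range(1, N + 1))

def rhs(N, p):
    return (1 - (1 - p)**N) / p

for N in (1, 5, 20, 50):
    for p in (0.05, 0.5, 0.95):
        assert abs(lhs(N, p) - rhs(N, p)) < 1e-9
```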
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Convergence of the sequence $f_{1}\left(x\right)=\sqrt{x} $ and $f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $
Let $\left\{ f_{n}\right\} $ denote the set of functions on
$[0,\infty) $ given by $f_{1}\left(x\right)=\sqrt{x} $ and
$f_{n+1}\left(x\right)=\sqrt{x+f_{n}\left(x\right)} $ for $n\ge1 $.
Prove that this sequence is convergent and find the limit function.
We can easily show that this sequence is nondecreasing. Originally, I was trying to apply the fact that “every bounded monotonic sequence must converge” but then it hit me this is true for $\mathbb{R}^{n} $. Does this fact still apply on $C[0,\infty) $, the set of continuous functions on $[0,\infty) $? If yes, what bound would we use?
| You know that the sequence is increasing. Now notice that if $f(x)$ is the positive solution of $y^2=x+y$, we have $f_n(x)<f(x) \ \forall n \in \mathbb{N}$. In fact, notice that this holds for $n=1$ and, assuming it for $n$, we have
$$f^{2}_{n+1}(x) = x +f_n(x) < x+f(x) = f^{2}(x).$$ Then $\lim_n f_n(x)$ exists and, taking the limit in $f^{2}_{n+1}(x) = x + f_n(x)$, we find that $\lim_n f_n(x)= f(x)$.
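A quick numerical check of the limit (a Python sketch; $f(x) = \frac{1+\sqrt{1+4x}}{2}$ is the positive root of $y^2 = x + y$):

```python
import math

def f_limit(x):
    # positive root of y^2 = x + y
    return (1 + math.sqrt(1 + 4 * x)) / 2

def f_n(x, n):
    y = math.sqrt(x)            # f_1(x) = sqrt(x)
    for _ in range(n - 1):      # f_{k+1}(x) = sqrt(x + f_k(x))
        y = math.sqrt(x + y)
    return y

for x in (0.5, 1.0, 10.0):
    assert f_n(x, 60) <= f_limit(x)               # f_n <= f, as in the proof
    assert abs(f_n(x, 60) - f_limit(x)) < 1e-12   # convergence to the root
```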
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Proof by contradiction that $n!$ is not $O(2^n)$ I am having issues with this proof: Prove by contradiction that $n! \ne O(2^n)$. From what I understand, we are supposed to use a previous proof (which successfully proved that $2^n = O(n!)$) to find the contradiction.
Here is my working so far:
Assume $n! = O(2^n)$. There must exist $c$, $n_{0}$ such that $n! \le c \cdot 2^n$ for all $n \ge n_0$. From the previous proof, we know that $2^n \le n!$ for $n \ge 4$.
We pick a value, $m$, which is guaranteed to be $\ge n_{0}$ and $\ne 0$. I have chosen $m = n_{0} + 10 + c$.
Since $m > n_0$:
$$m! \le c \cdot 2^m\qquad (m > n \ge n_0)$$
$$\dfrac{m!}{c} \le 2^m$$
$$\dfrac{1}{c} m! \le 2^m$$
$$\dfrac{1}{m} m! \le 2^m\qquad (\text{as }m > c)$$
$$(m - 1)! \le 2^m$$
That's where I get up to.. not sure which direction to head in to draw the contradiction.
| In
Factorial Inequality problem $\left(\frac n2\right)^n > n! > \left(\frac n3\right)^n$,
they have obtained
$$
n!\geq \left(\frac{n}{e}\right)^n.
$$
Hence
$$
\lim_{n\rightarrow\infty}\frac{n!}{2^n}\geq\lim_{n\rightarrow\infty}\left(\frac{n}{2e}\right)^n=\infty.
$$
Suppose that $n!=O(2^n)$. Then there exist $C>0$ and $N_0\in \mathbb{N}$ such that
$$\frac{
n!}{2^n}\leq C
$$
for all $n\geq N_0$. Letting $n\rightarrow\infty$ in the above inequality we obtain
$$
\lim_{n\rightarrow\infty}\frac{n!}{2^n}\leq C,
$$
which is absurd, since the limit is infinite.
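A numerical illustration of how fast $n!/2^n$ outruns any constant, using the bound $n! \ge (n/e)^n$ from the quoted question (a Python sketch):

```python
import math

# n!/2^n >= (n/(2e))^n, which blows up -- so n! cannot be O(2^n)
prev = 0.0
for n in (10, 20, 40, 80):
    ratio = math.factorial(n) / 2**n
    assert ratio >= (n / (2 * math.e))**n   # follows from n! >= (n/e)^n
    assert ratio > prev                     # the quotient keeps growing
    prev = ratio
```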
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 1
} |
Continuous maps between compact manifolds are homotopic to smooth ones
If $M_1$ and $M_2$ are compact connected manifolds of dimension $n$, and $f$ is a continuous map from $M_1$ to $M_2$, f is homotopic to a smooth map from $M_1$ to $M_2$.
Seems to be fairly basic, but I can't find a proof. It might be necessary to assume that the manifolds are Riemannian.
It should be possible to locally solve the problem in Euclidean space by possibly using polynomial approximations and then patching them up, where compactness would tell us that approximating the function in finitely many open sets is enough. I don't see how to use the compactness of the target space though.
| It is proved as Proposition 17.8 on page 213 in Bott, Tu, Differential Forms in Algebraic Topology. For the necessary Whitney embedding theorem, they refer to deRham, Differential Manifolds.
The Whitney approximation theorem on manifolds is proved as Proposition 10.21 on page 257 in Lee, Introduction to Smooth Manifolds.
There you can also find a proof of the Whitney embedding theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
What is a lift? What exactly is a lift? I wanted to prove that for appropriately chosen topological groups $G$ we can show that the completion of $\widehat{G}$ is isomorphic to the inverse limit $$\lim_{\longleftarrow} G/G_n$$
I wasn't sure how to construct the map so I asked this question to which I got an answer but I don't understand it. Searching for "lift" or "inverse limit lift" has not helped and I was quite sure that if $[x] \in \widehat{G}$ I could just "project" $x_n$ to $$\lim_{\longleftarrow} G/G_n$$ using the map $\prod_{n=1}^\infty \pi_n (x) = \prod_{n=1}^\infty x_n$ in $G/G_n$. Would someone help me finish this construction and explain to me what a lift is? Thank you.
Edit
I don't know any category theory. An explanation understandable by someone not knowing any category theory would be very kind.
| The term lift is merely meant to mean the following. Given a surjective map $G\to G'$ a lift of an element $x\in G'$ is a choice of $y\in G$ such that $y\mapsto x$ under this map.
In the linked answer you have $(x_n)\in \lim G/G_n$. Thus the notation has $x_n\in G/G_n$. The elements $y_n$ are just chosen preimages of $x_n$ under the natural surjection $G\to G/G_n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find endomorphism of $\mathbb{R}^3$ such that $\operatorname{Im}(f) \subset \operatorname{Ker}(f)$ I have a problem:
Let's find an endomorphism of $\mathbb{R}^3$ such that $\operatorname{Im}(f) \subset \operatorname{Ker}(f)$.
How would you do it?
The endomorphism must be not null.
| Observe that such operator has the property that $f(f(\vec x))=0$, since $f(\vec x)\in\ker f$. Any matrix $M$ in $M_3(\mathbb R)$ such that $M^2=0$ will give you such operator.
For example, then, $f(x_1,x_2,x_3)=(x_3,0,0)$; with respect to the canonical basis the matrix looks like this:$$\begin{pmatrix} 0&0&1\\0&0&0\\0&0&0\end{pmatrix}$$
See now that $f^2=0$, but $f\neq 0$.
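A one-line check that the example works, writing the map as a plain function (a Python sketch):

```python
def f(v):
    """The endomorphism of R^3 given by f(x1, x2, x3) = (x3, 0, 0)."""
    x1, x2, x3 = v
    return (x3, 0, 0)

v = (2.0, -1.0, 3.0)
assert f(v) != (0, 0, 0)         # f is not the zero map
assert f(f(v)) == (0, 0, 0)      # f∘f = 0, i.e. Im(f) ⊆ Ker(f)
```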
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Alternating sum of binomial coefficients
Calculate the sum:
$$ \sum_{k=0}^n (-1)^k {n+1\choose k+1} $$
I don't know if I'm so tired or what, but I can't calculate this sum. The result is supposed to be $1$ but I always get something else...
| Another way to see it: prove that
$$\binom{n}{k}+\binom{n}{k+1}=\binom{n+1}{k+1}\,\,\,,\,\,\text{so}$$
$$\sum_{k=0}^n(-1)^k\binom{n+1}{k+1}=\sum_{k=0}^n(-1)^k\cdot 1^{n-k}\binom{n}{k}+\sum_{k=0}^n(-1)^k\cdot 1^{n-k}\binom{n}{k+1}=$$
$$=(1+(-1))^n-\sum_{k=0}^n(-1)^{k+1}\cdot 1^{n-k-1}\binom{n}{k+1}=0-(1-1)^n+1=1$$
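A quick check of the sum for small $n$ (a Python sketch using `math.comb`):

```python
from math import comb

# sum_{k=0}^{n} (-1)^k C(n+1, k+1) should equal 1 for every n >= 0
for n in range(20):
    assert sum((-1)**k * comb(n + 1, k + 1) for k in range(n + 1)) == 1
```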
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
A simple question of finite number of basis Let $V$ be a vector space. Define
" A set $\beta$ is a basis of $V $" as "(1) $\beta$ is linearly independent set, and (2) $\beta$ spans $V$ "
On this definition, I want to show that "if $V$ has a basis (call it $\beta$) then $\beta$ is a finite set."
In my definition, I have no assumption of the finiteness of the set $\beta$. But Can I show this statement by using some properties of a vector space?
| I take it $V$ is finite-dimensional? For instance, the (real) vector space of polynomials over $\mathbb{R}$ is infinite-dimensional $-$ in this space, no basis is finite. And for finite-dimensional spaces, it's worth noting that the dimension of a vector space $V$ is defined as being the size of a basis of $V$, so if $V$ is finite-dimensional then any basis for $V$ is automatically finite, by definition!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Show $\iint_{D} f(x,y)(1 - x^2 - y^2) ~dx ~dy = \pi/2$ Suppose $f(x,y)$ is a bounded harmonic function in the unit disk
$D = \{z = x + iy : |z| < 1 \} $
and $f(0,0) = 1$. Show that
$$\iint_{D} f(x,y)(1 - x^2 - y^2) ~dx ~dy = \frac{\pi}{2}.$$
I'm studying for a prelim this August and I haven't taken Complex in a long time (two years ago). I don't know how to solve this problem or even where to look unless it's just a game with Green's theorem-any help? I don't need a complete solution, just a helpful hint and I can work the rest out on my own.
| Harmonic functions have the mean value property, that is
$$
\frac1{2\pi}\int_0^{2\pi}f(z+re^{i\phi})\,\mathrm{d}\phi=f(z)
$$
If we write the integral in polar coordinates
$$
\begin{align}
\iint_Df(x,y)(1-x^2-y^2)\,\mathrm{d}x\,\mathrm{d}y
&=\int_0^1\int_0^{2\pi}f(re^{i\phi})(1-r^2)\,r\,\mathrm{d}\phi\,\mathrm{d}r\\
&=2\pi\int_0^1f(0)(1-r^2)\,r\,\mathrm{d}r\\
&=2\pi f(0)\cdot\frac14\\
&=\frac\pi2
\end{align}
$$
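A numerical sanity check (a Python sketch, using the hypothetical test function $f(x,y)=1+x^2-y^2$, which is harmonic, bounded on the disk, and satisfies $f(0,0)=1$):

```python
import math

def f(x, y):
    # harmonic: f_xx + f_yy = 2 - 2 = 0, and f(0,0) = 1
    return 1 + x * x - y * y

# midpoint rule in polar coordinates for ∬_D f(x,y)(1-x^2-y^2) dx dy
nr, nphi = 400, 400
total = 0.0
for i in range(nr):
    r = (i + 0.5) / nr
    for j in range(nphi):
        phi = 2 * math.pi * (j + 0.5) / nphi
        total += f(r * math.cos(phi), r * math.sin(phi)) * (1 - r * r) * r
total *= (1 / nr) * (2 * math.pi / nphi)

assert abs(total - math.pi / 2) < 1e-4
```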
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why does the $2$'s and $1$'s complement subtraction works? The algorithm for $2$'s complement and $1$'s complement subtraction is tad simple:
$1.$ Find the $1$'s or $2$'s complement of the subtrahend.
$2.$ Add it with minuend.
$3.$ If there is no carry, then the answer is in its complement form; we have to take the $1$'s or $2$'s
complement of the result and assign a negative sign to the final
answer.
$4.$ If there is a carry, then add it to the result in the case of $1$'s complement, or discard it in the case of $2$'s complement.
This is what I was taught in undergraduate, but I never understood why this method work. More generally why does the radix complement or the diminished radix complement subtraction works? I am more interested in understanding the mathematical reasoning behind this algorithm.
| We consider $k$-bit integers with an extra $k+1$ bit, that is associated with the sign. If the "sign-bit" is 0 then we treat this number as usual as a positive number in binary representation. Otherwise this number is negative (details follow).
Let us start with the 2-complement. To compute the 2-complement of a number $x$ we flip all the bits to the left of the rightmost 1. Here is an example:
0010100 -> 1101100
We define for every positive number $a$, the inverse element $-a$ as the 2-complement of $a$. When adding two numbers, we will forget the overflow bit (after the sign bit), if an overflow occurs. This addition with the set of defined positive and negative numbers forms a finite Abelian group. Notice that we have only one zero, which is 00000000. The representation 10000000 does not encode a number. To check the group properties, observe that
*
*$a + (-a) = 0$,
*$a+0=a$,
*$-(-a)=a$,
*$(a+b)=(b+a)$,
*$(a+b)+c= a+ (b+c)$.
As a consequence, we can compute with the 2-complement as with integers. To subtract $a$ from $b$ we compute $a+(-b)$. If we want to restore the absolute value of a negative number we have to take its 2-complement (this explains step 3 in the question's algorithm).
Now the 1-complement. This one is more tricky, because we have two distinct zeros (000000 and 111111). I think it is easiest to consider the 1-complement of $a$ as its 2-complement minus 1. Then we can first reduce the 1-complement to the 2-complement and then argue over the 2-complements.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Fourier transform of the derivative - insufficient hypotheses? An exercise in Carlos ISNARD's Introdução à medida e integração:
Show that if $f$ and $f'$ $\in\mathscr{L}^1(\mathbb{R},\lambda,\mathbb{C})$ and $\lim_{x\to\pm\infty}f(x)=0$ then $\hat{(f')}(\zeta)=i\zeta\hat{f}(\zeta)$.
($\lambda$ is the Lebesgue measure on $\mathbb{R}$.)
I'm tempted to apply integration by parts on the integral from $-N$ to $N$ and then take limit as $N\to\infty$. But to obtain the result I seemingly need $f'e^{-i\zeta x}$ to be Riemann-integrable so as to use the fundamental theorem of Calculus.
What am I missing here?
Thank you.
| We can find a sequence of $C^1$ functions with compact support $\{f_j\}$ such that $$\lVert f-f_j\rVert_{L^1}+\lVert f'-f_j'\rVert_{L^1}\leq \frac 1j.$$
This is a standard result about Sobolev spaces. Actually more is true: the smooth functions with compact support are dense in $W^{1,1}(\Bbb R)$, but that's not needed here. It's theorem 13 in Willie Wong's notes about Sobolev spaces.
Convergence in $L^1$ shows that for all $\xi$ we have $\widehat{f_j}(\xi)\to \widehat{f}(\xi)$ and $\widehat{f_j'}(\xi)\to \widehat{f'}(\xi)$. And the result for $C^1$ functions follows as stated in the OP by integration by parts.
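The identity is easy to confirm numerically for a concrete function. Here is a Python sketch for the Gaussian $f(x)=e^{-x^2}$, using the convention $\hat f(\zeta)=\int f(x)e^{-i\zeta x}\,dx$ implicit in the statement (truncation length and grid size are arbitrary choices):

```python
import cmath, math

def f(x):       return math.exp(-x * x)
def fprime(x):  return -2 * x * math.exp(-x * x)

def fourier(g, zeta, L=10.0, n=20_000):
    """hat g(zeta) = ∫ g(x) e^{-i zeta x} dx, midpoint rule on [-L, L]."""
    h = 2 * L / n
    return h * sum(g(x) * cmath.exp(-1j * zeta * x)
                   for x in (-L + (k + 0.5) * h for k in range(n)))

for zeta in (0.5, 1.0, 2.0):
    assert abs(fourier(fprime, zeta) - 1j * zeta * fourier(f, zeta)) < 1e-6
```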
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
What is smallest possible integer k such that $1575 \times k$ is perfect square? I wanted to know how to solve this question:
What is smallest possible integer $k$ such that $1575 \times k$ is a perfect square?
a) 7, b) 9, c) 15, d) 25, e) 63. The answer is 7.
Since this was a multiple choice question, I guess I could just put it in and test the values from the given options, but I wanted to know how to do it without hinting and testing. Any suggestions?
| If $1575\times k$ is a perfect square, then every prime in its factorization must occur to an even power. Since $1$ is not among the choices given, let's assume from the start that $1575$ is not itself a square.
Therefore $9=3^2$ and $25=5^2$ are ruled out, since multiplying a non-square by a square doesn't make it a square. $63=9\cdot 7$ is ruled out, since $7$ would be the smaller possible choice.
We are left with $7$ and $15$, and it is easy to see that $1575$ is divisible by $15$ twice:
$$
1575=1500+75=100\cdot 15+5\cdot 15=105\cdot 15=(75+30)\cdot 15=(5+2)\cdot 15^2.
$$
So what are you left with...? Good luck ;-)
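Without hint-and-test: factor $1575$ and make every exponent even (a Python sketch with a hypothetical `factorize` helper):

```python
import math

def factorize(n):
    """Trial-division prime factorization: returns {prime: exponent}."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

assert factorize(1575) == {3: 2, 5: 2, 7: 1}   # 1575 = 3^2 * 5^2 * 7
k = 7                                          # fix the single odd exponent
assert math.isqrt(1575 * k) ** 2 == 1575 * k   # 11025 = 105^2
```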
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/176954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
I want to prove $ \int_0^\infty \frac{e^{-x}}{x} dx = \infty $ How can I prove this integral diverges?
$$ \int_0^\infty \frac{e^{-x}}{x} dx = \infty $$
| $$
\int_{0}^{\infty}\frac{e^{-x}}{x}= \int_{0}^{1}\frac{e^{-x}}{x}+\int_{1}^{\infty}\frac{e^{-x}}{x} \\
> \int_{0}^{1}\frac{e^{-x}}{x} \\
> e^{-1}\int_{0}^{1}\frac{1}{x}
$$
which diverges.
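The bound can be watched numerically: $\int_\varepsilon^1 \frac{e^{-x}}{x}\,dx$ already exceeds $e^{-1}\ln(1/\varepsilon)$, which is unbounded as $\varepsilon\to 0$ (a Python sketch):

```python
import math

def tail(eps, n=100_000):
    """Midpoint rule for ∫_eps^1 e^{-x}/x dx."""
    h = (1.0 - eps) / n
    return h * sum(math.exp(-(eps + (k + 0.5) * h)) / (eps + (k + 0.5) * h)
                   for k in range(n))

for eps in (1e-1, 1e-2, 1e-3):
    # the answer's bound: e^{-x}/x >= e^{-1}/x on [eps, 1]
    assert tail(eps) >= (1 / math.e) * math.log(1 / eps)
```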
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
What's the meaning of the unit bivector i? I'm reading the Oersted Medal Lecture by David Hestenes to improve my understanding of Geometric Algebra and its applications in Physics. I understand he does not start from a mathematical "clean slate", but I don't care for that. I want to understand what he's saying and what I can do with this geometric algebra.
On page 10 he introduces the unit bivector i. I understand (I think) what unit vectors are: multiply by a scalar and get a scaled directional line. But a bivector is a(n oriented) parallellogram (plane). So if I multiply the unit bivector i with a scalar, I get a scaled parallellogram?
| The unit bivector represents the 2D subspaces. In 2D Euclidean GA, there are 4 coordinates:
*
*1 scalar coordinate
*2 vector coordinates
*1 bivector coordinate
A "vector" is then (s, e1, e2, e1^e2).
The unit bivector is frequently drawn as a parallelogram, but that's just a visualization aid. It conceptually more closely resembles a directed area where the sign indicates direction.
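To see that the unit bivector is more than a picture, here is a tiny Python sketch of the geometric product in 2D Euclidean GA, with multivectors stored in the four coordinates listed above; the unit bivector squares to $-1$, which is why it is written "i":

```python
def gp(a, b):
    """Geometric product in Cl(2,0): e1*e1 = e2*e2 = 1, e12 = e1*e2."""
    s1, x1, y1, b1 = a
    s2, x2, y2, b2 = b
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2)

i  = (0, 0, 0, 1)                      # the unit bivector e1 ^ e2
e1 = (0, 1, 0, 0)

assert gp(i, i) == (-1, 0, 0, 0)       # i*i = -1, like the imaginary unit
assert gp(e1, i) == (0, 0, 1, 0)       # e1*i = e2: multiplying by i rotates
# scaling: (0,0,0,3) is the same oriented plane with 3 times the area
```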
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
Does "nullity" have a potentially conflicting or confusing usage? In Linear Algebra and Its Applications, David Lay writes, "the dimension of the null space is sometimes called the nullity of A, though we will not use the term." He then goes on to specify "The Rank Theorem" as "rank A + dim Nul A = n" instead of calling it the the rank-nullity theorem and just writing "rank A + nullity A = n".
Naturally, I wonder why he goes out of his way to avoid using the term "nullity." Maybe someone here can shed light....
| While the choice of terminology is often a matter of taste (I would not know why the author should prefer to say $\operatorname{Nul}A$ instead of $\ker A$), there is at least a mathematical reason why "rank" is more important than "nullity": it is connected to the matrix/linear map in a way where source and destination spaces are treated on equal footing, while the nullity is uniquely attached to the source space. This is why the rank can be defined in an equivalent manner as the row rank or the column rank, or in a neutral way as the size of the largest non-vanishing minor or the smallest dimension of an intermediate space through which the linear map can be factored (decomposition rank). No such versatility exists for the nullity: it is just the dimension of the kernel inside the source space, and cannot be related in any way to the destination space. A notion analogous to the nullity at the destination side is the codimension of the image in the destination space (that is, the dimension of the cokernel); it measures the failure to be surjective, and it is different from the nullity (which measures the failure to be injective) for rectangular matrices. There is a (rather obvious) analogue to the rank-nullity theorem that says that for linear $f:V\to W$ one has
$$
\operatorname{rk} f+ \dim\operatorname{coker} f = \dim W.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Finding $E(N)$ in this question suppose $X_1,X_2,\ldots$ is sequence of independent random variables of $U(0,1)$ if
$N=\min\{n>0 :X_{(n:n)}-X_{(1:n)}>\alpha , 0<\alpha<1\}$ that $X_{(1:n)}$ is smallest order statistic and
$X_{(n:n)}$ is largest order statistic. how can find $E(N)$
| Let $m_n=\min\{X_k\,;\,1\leqslant k\leqslant n\}=X_{(1:n)}$ and $M_n=\max\{X_k\,;\,1\leqslant k\leqslant n\}=X_{(n:n)}$. As explained in comments, $(m_n,M_n)$ has density $n(n-1)(y-x)^{n-2}\cdot[0\lt x\lt y\lt1]$ hence $M_n-m_n$ has density $n(n-1)z^{n-2}(1-z)\cdot[0\lt z\lt1]$.
For every $n\geqslant2$, $[N\gt n]=[M_n-m_n\lt\alpha]$ hence
$$
\mathrm P(N\gt n)=\int_0^\alpha n(n-1)z^{n-2}(1-z)\mathrm dz=\alpha^{n}+n(1-\alpha)\alpha^{n-1}.
$$
The same formula holds for $n=0$ and $n=1$ hence
$$
\mathrm E(N)=\sum_{n=0}^{+\infty}\mathrm P(N\gt n)=\sum_{n=0}^{+\infty}\alpha^n+(1-\alpha)\sum_{n=0}^{+\infty}n\alpha^{n-1}=\frac2{1-\alpha}.
$$
Edit: To compute the density of $(m_n,M_n)$, start from the fact that
$$
\mathrm P(x\lt m_n,M_n\lt y)=\mathrm P(x\lt X_1\lt y)^n=(y-x)^n,
$$
for every $0\lt x\lt y\lt 1$. Differentiating this identity twice, once with respect to $x$ and once with respect to $y$, yields the negative of the density of $(m_n,M_n)$.
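A quick Monte Carlo check of $\mathrm E(N)=\frac{2}{1-\alpha}$ (a Python sketch; sample size and seed are arbitrary):

```python
import random

def sample_N(alpha):
    """Draw U(0,1) variates until max - min exceeds alpha; return the count."""
    lo = hi = random.random()
    n = 1
    while hi - lo <= alpha:
        x = random.random()
        lo, hi = min(lo, x), max(hi, x)
        n += 1
    return n

random.seed(0)
for alpha in (0.3, 0.5, 0.8):
    est = sum(sample_N(alpha) for _ in range(50_000)) / 50_000
    assert abs(est - 2 / (1 - alpha)) < 0.2     # E(N) = 2/(1 - alpha)
```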
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What digit appears in unit place when $2^{320}$ is multiplied out Is there a way to answer the following preferably without a calculator
What digit appears in unit place when $2^{320}$ is multiplied out ? a)$0$ b)$2$ c)$4$ d)$6$ e)$8$
---- Ans(d)
| $\rm mod\ 5\!:\, \color{#0A0}{2^4}\equiv \color{#C00}1\Rightarrow2^{320}\equiv (\color{#0A0}{2^4})^{80}\equiv \color{#C00}1^{80}\!\equiv \color{#C00}1.\,$ The only choice $\:\equiv \color{#C00}1\!\pmod 5\:$ is $6,\: $ in d).
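The full power never needs to be multiplied out: modular exponentiation confirms the answer directly, and shows the period-4 cycle of last digits (a Python sketch):

```python
# last digit of 2^320 via three-argument pow (modular exponentiation)
assert pow(2, 320, 10) == 6
# last digits of 2^n cycle with period 4: 2, 4, 8, 6, 2, 4, 8, 6, ...
assert [pow(2, n, 10) for n in range(1, 9)] == [2, 4, 8, 6, 2, 4, 8, 6]
```

Since $320$ is a multiple of $4$, the cycle lands on $6$, agreeing with the mod-5 argument.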
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Prove $\frac{1}{1 \cdot 3} + \frac{1}{3 \cdot 5} + \frac{1}{5 \cdot 7} + \cdots$ converges to $\frac 1 2 $ Show that
$$\frac{1}{1 \cdot 3} + \frac{1}{3 \cdot 5} + \frac{1}{5 \cdot 7} + \cdots = \frac{1}{2}.$$
I'm not exactly sure what to do here, it seems awfully similar to Zeno's paradox.
If the series continues infinitely then each term is just going to get smaller and smaller.
Is this an example where I should be making a Riemann sum and then taking the limit which would end up being $1/2$?
| Solution as per David Mitra's hint in a comment.
Write the given series as a telescoping series and evaluate its sum:
$$\begin{eqnarray*}
S &=&\frac{1}{1\cdot 3}+\frac{1}{3\cdot 5}+\frac{1}{5\cdot 7}+\cdots \\
&=&\sum_{n=1}^{\infty }\frac{1}{\left( 2n-1\right) \left( 2n+1\right) } \\
&=&\sum_{n=1}^{\infty }\left( \frac{1}{2\left( 2n-1\right) }-\frac{1}{
2\left( 2n+1\right) }\right)\quad\text{Partial fractions decomposition} \\
&=&\frac{1}{2}\sum_{n=1}^{\infty }\left( \frac{1}{2n-1}-\frac{1}{2n+1}
\right) \qquad \text{Telescoping series} \\
&=&\frac{1}{2}\sum_{n=1}^{\infty }\left( a_{n}-a_{n+1}\right), \qquad a_{n}=
\frac{1}{2n-1},a_{n+1}=\frac{1}{2\left( n+1\right) -1}=\frac{1}{2n+1} \\
&=&\frac{1}{2}\left( a_{1}-\lim_{n\rightarrow \infty }a_{n}\right) \qquad\text{see below} \\
&=&\frac{1}{2}\left( \frac{1}{2\cdot 1-1}-\lim_{n\rightarrow \infty }\frac{1
}{2n-1}\right) \\
&=&\frac{1}{2}\left( 1-0\right) \\
&=&\frac{1}{2}.
\end{eqnarray*}$$
Added: The sum of the telescoping series $\sum_{n=1}^{\infty }\left( a_{n}-a_{n+1}\right)$ is the limit of the telescoping sum $\sum_{n=1}^{N}\left( a_{n}-a_{n+1}\right) $ as $N$ tends to $\infty$. Since
$$\begin{eqnarray*}
\sum_{n=1}^{N}\left( a_{n}-a_{n+1}\right) &=&\left( a_{1}-a_{2}\right)
+\left( a_{2}-a_{3}\right) +\ldots +\left( a_{N-1}-a_{N}\right) +\left(
a_{N}-a_{N+1}\right) \\
&=&a_{1}-a_{2}+a_{2}-a_{3}+\ldots +a_{N-1}-a_{N}+a_{N}-a_{N+1} \\
&=&a_{1}-a_{N+1},
\end{eqnarray*}$$
we have
$$\begin{eqnarray*}
\sum_{n=1}^{\infty }\left( a_{n}-a_{n+1}\right) &=&\lim_{N\rightarrow
\infty }\sum_{n=1}^{N}\left( a_{n}-a_{n+1}\right) \\
&=&\lim_{N\rightarrow \infty }\left( a_{1}-a_{N+1}\right) \\
&=&a_{1}-\lim_{N\rightarrow \infty }a_{N+1} \\
&=&a_{1}-\lim_{N\rightarrow \infty }a_{N} \\
&=&a_{1}-\lim_{n\rightarrow \infty }a_{n}.\end{eqnarray*}$$
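The telescoping gives the closed form $S_N = \frac12 - \frac{1}{2(2N+1)}$ for the partial sums, which is easy to verify numerically (a Python sketch):

```python
def partial_sum(N):
    return sum(1 / ((2 * n - 1) * (2 * n + 1)) for n in range(1, N + 1))

# closed form from telescoping: S_N = 1/2 - 1/(2(2N+1))
for N in (1, 10, 1000):
    assert abs(partial_sum(N) - (0.5 - 1 / (2 * (2 * N + 1)))) < 1e-12

assert abs(partial_sum(10**6) - 0.5) < 1e-6   # partial sums approach 1/2
```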
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 1
} |
Combinatorics question: Prove 2 people at a party know the same amount of people I recently had an assignment and got this question wrong, was wondering what I left out.
Prove that at a party where there are at least two people, there are two people who know the same number of other people there. Be sure to use the variable "n" when writing your answer.
My answer:
n >= 2
Case1: another person comes to the party, but doesn't know either of the first two. So the original two still only know the same number of people.
Case2: another person comes, and knows one out of the original 2, so therefore the >newcommer, and the one that doesnt know the newcommer both know the same number of people.
Case 3: another person comes and knows both of them, implying that they all know each >other, and therefore they all know the same number of people.
So therefore if n>=2, where n and n-1 know each other, in either case whether n+1 joins, >there will be at least two people who know the same amount of people.
Many thanks in advance. Have a test coming up.
| First of all, you left out saying that you were proceding by induction, and you left out establishing a base case for the induction.
But let's look at case 2. Suppose that of the original two people, one knows 17 of the 43 people at the party, the other knows 29. Now the newcomer knows the first of the two, but not the other. What makes you think the newcomer knows exactly 29 of the people at the party?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What is the $\tau$ symbol in the Bourbaki text? I'm reading the first book by Bourbaki (set theory) and he introduces this logical symbol $\tau$ to later define the quantifiers with it. It is such that if $A$ is an assembly possibly contianing $x$ (term, variable?), then $\tau_xA$ does not contain it.
Is there a reference to this symbol $\tau$? Is it not useful/necessary? Did it die out of fashion?
How to read it?
At the end of the chapter on logic, they make use of it by... I don't know treating it like $\tau_X(R)$ would represent the solution of an equation $R$, this also confuses me.
| Adrian Mathias offers the following explanation here:
Bourbaki use the Hilbert operator but write it as $\tau$ rather than $\varepsilon$, which latter is visually too close to the sign $\in$ for the membership relation. Bourbaki use the word assemblage, or in their English translation, assembly, to mean a finite sequence of signs or letters, the signs being $\tau$, $\square$, $\lor$, $\lnot$, $=$, $\in$ and $\bullet$.
The substitution of the assembly $A$ for each occurrence of the letter $x$ in the assembly $B$ is denoted by $(A|x) B$.
Bourbaki use the word relation to mean what in English-speaking countries is usually called a well-formed formula.
The rules of formation for $\tau$-terms are these:
Let $R$ be an assembly and $x$ a letter; then the assembly $\tau_x(R)$ is obtained in three steps:
*
*form $\tau R$, of length one more than that of $R$;
*link that first occurrence of $\tau$ to all occurrences of $x$ in $R$
*replace all those occurrences of $x$ by an occurrence of $\square$.
In the result $x$ does not occur. The point of that is that there are no bound variables; as variables become bound (by an occurrence of $\tau$), they are replaced by $\square$, and those occurrences of $\square$ are linked to the occurrence of $\tau$ that binds them.
The intended meaning is that $\tau_x(R)$ is some $x$ of which $R$ is true.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
When was $\pi$ first suggested to be irrational? When was $\pi$ first suggested to be irrational?
According to Wikipedia, this was proved in the 18th century.
Who first claimed / suggested (but not necessarily proved) that $\pi$ is irrational?
I found a passage in Maimonides's Mishna commentary (written circa 1168, Eiruvin 1:5) in which he seems to claim that $\pi$ is irrational. Is this the first mention?
| From a non-wiki source:
Archimedes [1], in the third century B.C., used regular polygons inscribed and circumscribed to a circle to approximate $\pi$: the more sides a polygon has, the closer to the circle it becomes, and therefore the ratio between the polygon's area and the square of the radius yields approximations to $\pi$. Using this method he showed that $223/71<\pi<22/7$.
Found on PlanetMath: Pi, with a reference to Archimedes' work...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 4,
"answer_id": 3
} |
analytically calculate value of convolution at certain point I'm a computer science student and I'm trying to analytically find the value of the convolution between an ideal step-edge and either a Gaussian function or a first-order derivative of a Gaussian function.
In other words, given an ideal step edge with amplitude $A$ and offset $B$:
$$
i(t)=\left\{
\begin{array}{l l}
A+B & \quad \text{if $t \ge t_{0}$}\\
B & \quad \text{if $t \lt t_{0}$}\\
\end{array} \right.
$$
and the Gaussian function and its first-order derivative
$$
g(t) = \frac{1}{\sigma \sqrt{2\pi}}e^{- \frac{(t - \mu)^2}{2 \sigma^2}}\\
g'(t) = -\frac{t-\mu}{\sigma^3 \sqrt{2\pi}}e^{- \frac{(t - \mu)^2}{2 \sigma^2}}
$$
i'd like to calculate the value of both
$$
o(t) = i(t) \star g(t)\\
o'(t) = i(t) \star g'(t)
$$
at time $t_{0}$ ( i.e. $o(t_{0})$ and $o'(t_{0}) )$.
I tried to solve the convolution integral but unfortunately I'm not mathematically skilled enough to do it. Can you help me?
Thank you in advance very much.
| We can write $i(t):=B+A\chi_{[t_0,+\infty)}$. Using the properties of convolution, we can see that
\begin{align}
o(t_0)&=B+A\int_{-\infty}^{+\infty}g(t)\chi_{[t_0,+\infty)}(t_0-t)dt\\
&=B+A\int_{-\infty}^0\frac 1{\sigma\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)dx.
\end{align}
The integral can be expressed with erf-function.
For the second one, things are easier since we can compute the integrals:
\begin{align}
o'(t_0)&=A\int_{-\infty}^0g'(t)dt=Ag(0).
\end{align}
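As a quick numerical sanity check of both closed forms (the parameter values $A$, $B$, $\mu$, $\sigma$ below are arbitrary test choices):

```python
import math

# Arbitrary test parameters (any A, B, mu, sigma should work)
A, B = 2.0, 0.5
mu, sigma = 0.3, 1.2

def g(t):
    return math.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def gprime(t):
    return -(t - mu) / sigma ** 2 * g(t)

# Closed forms: o(t0) = B + A * Phi((0 - mu)/sigma) via erf, and o'(t0) = A * g(0)
o_closed = B + A * 0.5 * (1 + math.erf(-mu / (sigma * math.sqrt(2))))
op_closed = A * g(0)

# Numerical check: midpoint rule for the integrals over (-infinity, 0]
lo, n = mu - 10 * sigma, 200_000
h = -lo / n
o_numeric = B + A * h * sum(g(lo + (k + 0.5) * h) for k in range(n))
op_numeric = A * h * sum(gprime(lo + (k + 0.5) * h) for k in range(n))

print(o_closed - o_numeric, op_closed - op_numeric)  # both differences ~ 0
```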
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If a player is 50% as good as I am at a game, how many games will it be before she finally wins one game? This is a real life problem. I play Foosball with my colleague who hasn't beaten me so far. I have won 18 in a row. She is about 50% as good as I am (the average margin of victory is 10-5 for me). Mathematically speaking, how many games should it take before she finally wins a game for the first time?
| A reasonable model for such a game is that each goal goes to you with probability $p$ and to her with probability $1-p$. We can calculate $p$ from the average number of goals scored against $10$, and then calculate the fraction of games won by each player from $p$.
The probability that she scores $k$ goals before you score your $10$th is $\binom{9+k}9p^{10}(1-p)^k$, so her average number of goals is
$$\sum_{k=0}^\infty\binom{9+k}9p^{10}(1-p)^kk=10\frac{1-p}p\;.$$
Since you say that this is $5$, we get $10(1-p)=5p$ and thus $p=\frac23$. The probability that you get $10$ goals before she does is
$$\sum_{k=0}^9\binom{9+k}9\left(\frac23\right)^{10}\left(\frac13\right)^k=\frac{1086986240}{1162261467}\approx0.9352\;,$$
so her chance of winning a game should be about $7\%$, and she should win about one game out of $1/(1-0.9352)\approx15$ games – so you winning $18$ in a row isn't out of the ordinary.
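The numbers above are easy to reproduce exactly with a short script (this is the negative-binomial model described in the answer):

```python
from fractions import Fraction
from math import comb

p = Fraction(2, 3)  # probability that the stronger player scores any given goal

# Mean goals conceded before the 10th own goal: 10*(1-p)/p = 5, matching the 10-5 scores
mean_conceded = 10 * (1 - p) / p

# P(win) = P(reach 10 goals while the opponent has scored at most 9)
p_win = sum(comb(9 + k, 9) * p**10 * (1 - p)**k for k in range(10))

games_per_loss = 1 / (1 - p_win)
print(p_win, float(games_per_loss))  # ~0.9352 and ~15.4
```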
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
showing almost equal function are actually equal I am trying to show that if $f$ and $g$ are continuous functions on $[a, b]$ and if $f=g$ a.e. on $[a, b]$, then, in fact, $f=g$ on $[a, b]$. Also would a similar assertion be true if $[a, b]$ was replaced by a general measurable set $E$ ?
Some thoughts towards the proof
*
*Since $f$ and $g$ are continuous functions, so for all open sets $O$ and $P$ in $f$ and $g$'s ranges respectfully the sets $f^{-1}\left(O\right) $ and $g^{-1}\left(P\right) $ are open.
*Also, since $f=g$ a.e. on $[a, b]$, I am guessing this implies their domains and ranges are equal almost everywhere (i.e. except on a set of measure zero).
$$m(f^{-1}\left(O\right) - g^{-1}\left(P\right)) = 0$$
I am not so sure I can think of a clear strategy to pursue here. Any help would be much appreciated.
Also, I would be grateful if you could point out any other general assertions which, if established, would prove two functions are the same under any domain or range specification conditions.
Cheers.
| The set $\{x\mid f(x)\neq g(x)\}$ is open (it's $(f-g)^{-1}(\Bbb R\setminus\{0\})$), and of measure $0$. It's necessarily empty, otherwise it would contain an open interval.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Prove whether a relation is an equivalence relation Define a relation $R$ on $\mathbb{Z}$ by $R = \{(a,b)|a≤b+2\}$.
(a) Prove or disprove: $R$ is reflexive.
(b) Prove or disprove: $R$ is symmetric.
(c) Prove or disprove: $R$ is transitive.
For (a), I know that $R$ is reflexive because if you substitute $\alpha$ into the $a$ and $b$ of the problem, it is very clear that $\alpha \leq \alpha + 2$ for all integers.
For (b), I used a specific counterexample; for $\alpha,\beta$ in the integers, if you select $\alpha = 1$, and $\beta = 50$, it is clear that although $\alpha ≤ \beta + 2$, $\beta$ is certainly not less than $ \alpha + 2$.
However, for (c), I am not sure whether the following proof is fallacious or not:
Proof: Assume $a R b$ and $b R g$;
Hence $a ≤ b + 2$ and $b ≤ g + 2$
Therefore $a - 2 ≤ b$ and $b ≤ g + 2$
So $a-2 ≤ b ≤ g + 2$
and clearly $a-2 ≤ g+2$
So then $a ≤ g+4$
We can see that although $a$ might be less than $ g+2$,
it is not always true.
Therefore we know that the relation $R$ is not transitive.
QED.
It feels wrong
| You are right, the attempt to prove transitivity doesn't work. But your calculation should point towards a counterexample.
Make $a \le b+2$ in an extreme way, by letting $b=a-2$. Also, make $b\le g+2$ in the same extreme way. Then $a \le g+2$ will fail. Perhaps work with an explicit $a$, like $47$.
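The counterexample is easy to check mechanically (here with the explicit $a = 47$ suggested above):

```python
def R(a, b):
    """a R b  iff  a <= b + 2"""
    return a <= b + 2

# Reflexive: a <= a + 2 always holds
assert all(R(a, a) for a in range(-50, 50))

# Not symmetric: (1, 50) is in R but (50, 1) is not
assert R(1, 50) and not R(50, 1)

# Not transitive: the "extreme" chain a, a-2, a-4 with a = 47
a, b, g = 47, 45, 43
assert R(a, b) and R(b, g) and not R(a, g)
print("counterexample to transitivity:", (a, b, g))
```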
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Difference between "space" and "mathematical structure"? I am trying to understand the difference between a "space" and a "mathematical structure".
I have found the following definition for mathematical structure:
A mathematical structure is a set (or sometimes several sets) with various associated mathematical objects such as subsets, sets of subsets, operations and relations, all of which must satisfy various requirements (axioms). The collection of associated mathematical objects is called the structure and the set is called the underlying set. http://www.abstractmath.org/MM/MMMathStructure.htm
Wikipedia says the following:
In mathematics, a structure on a set, or more generally a type, consists of additional mathematical objects that in some manner attach (or relate) to the set, making it easier to visualize or work with, or endowing the collection with meaning or significance.
http://en.wikipedia.org/wiki/Mathematical_structure
Regarding a space, Wikipedia says:
In mathematics, a space is a set with some added structure. http://en.wikipedia.org/wiki/Space_(mathematics)
I have also found some related questions, but I do not understand from them what the difference between a space and a mathematical structure is:
difference-between-space-and-algebraic-structure
what-does-a-space-mean
| I'm not sure I'm right. Different people have different ideas; here I just give my own view of your question. In my opinion, they are the same: a set with some relations between the elements of the set. Calling it a space or calling it just a mathematical structure is just a kind of people's habit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/177937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 5,
"answer_id": 1
} |
Applications of Bounds of the Sum of Inverse Prime Divisors $\sum_{p \mid n} \frac{1}{p}$ Question: What interesting or notable problems involve the additive function $\sum_{p \mid n} \frac{1}{p}$ and depend on sharp bounds of this function?
I'm aware of at least one: A certain upper bound of the sum implies the ABC conjecture.
By the Arithmetic-Geometric Inequality, one can bound the radical function (squarefree kernel) of an integer,
\begin{align}
\left( \frac{\omega(n)}{\sum_{p \mid n} \frac{1}{p}} \right)^{\omega(n)} \leqslant \text{rad}(n).
\end{align}
If for any $\epsilon > 0$ there exists a finite constant $K_{\epsilon}$ such that for any triple $(a,b,c)$ of coprime positive integers, where $c = a + b$, one has
\begin{align}
\sum_{p \mid abc} \frac{1}{p} < \omega(abc) \left( \frac{K_\epsilon}{c} \right)^{1/((1+\epsilon) \omega(abc))},
\end{align}
then
\begin{align}
c < K_{\epsilon} \left( \frac{\omega(abc)}{\sum_{p \mid abc} \frac{1}{p}} \right)^{(1+ \epsilon)\omega(abc)} \leqslant \text{rad}(abc)^{1+\epsilon},
\end{align}
and the ABC-conjecture is true.
Edit: Now, whether or not any triples satisfy the bound on inverse primes is a separate issue. Greg Martin points out that there are infinitely many triples which indeed violate it. This begs the question of whether there are any further refinements of the arithmetic-geometric inequality which remove such anomalies, but this question is secondary.
| As it turns out, your proposed inequality is false - in fact false for any $\epsilon>0$, even huge $\epsilon$.
The following approximation to the twin primes conjecture was proved by Chen's method: there exist infinitely many primes $b$ such that $c=b+2$ is either prime or the product of two primes. With $a=2$, this gives $\omega(abc)\le4$, and so
$$
\omega(abc) \left( \frac{K_\epsilon}{c} \right)^{1/((1+\epsilon) \omega(abc))} \le 4 \bigg( \frac{K_\epsilon}c \bigg)^{1/(4+4\epsilon)},
$$
which (for any fixed $\epsilon>0$) can be arbitrarily small as $c$ increases. However, the other side $\sum_{p\mid abc} \frac1p$ is at least $\frac12$, and so the inequality has infinitely many counterexamples.
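To see the failure concretely, take the twin-prime pair $b = 10^9+7$, $c = 10^9+9$ with $a = 2$ (so $\omega(abc)=3$; the values $K_\epsilon = 1$ and $\epsilon = 1$ below are illustrative choices):

```python
from math import isqrt

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# A twin-prime instance of the counterexample family: a = 2, c = b + 2
a, b = 2, 1000000007
c = b + 2
assert is_prime(b) and is_prime(c)

primes = {2, b, c}                   # the primes dividing abc, so omega(abc) = 3
lhs = sum(1.0 / p for p in primes)   # always >= 1/2, because 2 | abc

# Proposed bound with the illustrative choices K_eps = 1, eps = 1
K_eps, eps, omega = 1, 1, len(primes)
rhs = omega * (K_eps / c) ** (1 / ((1 + eps) * omega))

print(lhs, rhs)   # lhs stays above 1/2 while rhs shrinks as c grows
```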
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are the two statements concerning number theory correct? Statement 1: any integer no less than four can be written as a sum of twos and threes.
Statement 2: any integer no less than six can be written as a sum of threes, fours and fives.
I tried for many numbers, it seems the above two statement are correct. For example,
4=2+2; 5=2+3; 6=3+3; ...
6=3+3; 7=3+4; 8=4+4; 9=3+3+3; 10=5+5; ...
Can they be proved?
| The question could be generalized, but there is a trivial solution in the given cases.
*
*Any even number can be written as $2n$. For every odd number $x \ge 5$, there exists a positive even number $y$ such that $y+3 = x$.
*Likewise, numbers divisible by $4$ can be written as $4n$. All other numbers over $2$ are $4n + 5$, $4n + 3$ or $4n + 3 + 3$.
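Both statements are also easy to confirm by brute force for small values (a check, not a proof):

```python
def representable(n, parts):
    # Can n be written as a sum of elements of `parts` (repetition allowed)?
    can = [False] * (n + 1)
    can[0] = True
    for total in range(1, n + 1):
        can[total] = any(total >= p and can[total - p] for p in parts)
    return can[n]

# Statement 1: every integer >= 4 is a sum of 2s and 3s
assert all(representable(n, [2, 3]) for n in range(4, 500))

# Statement 2: every integer >= 6 is a sum of 3s, 4s and 5s
assert all(representable(n, [3, 4, 5]) for n in range(6, 500))
print("both statements hold up to 499")
```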
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Prove that for any $x \in \mathbb N$ such that $x<n!$, $x$ is the sum of at most $n$ distinct divisors of $n!$ Prove that every positive integer $x$ with $x<n!$ is the sum of at most $n$ distinct divisors of $n!$.
| Hint: Note that $x = m (n-1)! + r$ where $0 \le m < n$ and $0 \le r < (n-1)!$. Use induction.
EDIT: Oops, this is wrong: as Steven Stadnicki noted, $m (n-1)!$ doesn't necessarily divide $n!$.
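Empirically, a greedy strategy (repeatedly subtract the largest divisor of $n!$ not exceeding the remainder) does yield at most $n$ distinct divisors for small $n$. This is only an experiment, not a proof:

```python
from math import factorial

def greedy_representation(x, n):
    """Greedily take the largest divisor of n! not exceeding the remainder."""
    fact = factorial(n)
    divisors = [d for d in range(1, fact + 1) if fact % d == 0]
    terms = []
    while x > 0:
        d = max(d for d in divisors if d <= x)
        terms.append(d)
        x -= d
    return terms  # strictly decreasing, hence distinct

# Exhaustive check of the statement for small n
for n in range(2, 7):
    for x in range(1, factorial(n)):
        terms = greedy_representation(x, n)
        assert len(terms) <= n and len(set(terms)) == len(terms)
print("verified for n = 2..6")
```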
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
Accidents of small $n$ In studying mathematics, I sometimes come across examples of general facts that hold for all $n$ greater than some small number. One that comes to mind is the Abel–Ruffini theorem, which states that there is no general algebraic solution for polynomials of degree $n$ except when $n \leq 4$.
It seems that there are many interesting examples of these special accidents of structure that occur when the objects in question are "small enough", and I'd be interested in seeing more of them.
| $\mathbb{R}^n$ has a unique smooth structure except when $n=4$. Furthermore, the cardinality of [diffeomorphism classes of] smooth manifolds that are homeomorphic but not diffeomorphic to $\mathbb{R}^4$ is $\mathfrak{c}=2^{\aleph_0}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 20,
"answer_id": 4
} |
Euler's theorem for powers of 2 According to Euler's theorem,
$$x^{\varphi({2^k})} \equiv 1 \mod 2^k$$
for each $k>0$ and each odd $x$. Obviously, number of positive integers less than or equal to $2^k$ that are relatively prime to $2^k$ is
$$\varphi({2^k}) = 2^{k-1}$$
so it follows that
$$x^{{2^{k-1}}} \equiv 1 \mod 2^k$$
This is fine, but it seems like even
$$x^{{2^{k-2}}} \equiv 1 \mod 2^k$$
holds, at least my computer didn't find any counterexample.
Can you prove or disprove it?
| An $x$ such that $\phi(m)$ is the least positive integer $k$ for which $x^k \equiv 1 \mod m$ is called a primitive root mod $m$. The positive integers that have primitive roots are
$2, 4, p^n$ and $2 p^n$ for odd primes $p$ and positive integers $n$. In particular you are correct that there is no primitive root for $2^n$ if $n \ge 3$, and thus $x^{2^{n-2}} \equiv 1 \mod 2^n$ for all odd $x$ and all $n \ge 3$.
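Both claims are easy to verify computationally for small $k$:

```python
# Exhaustive check: x^(2^(k-2)) = 1 (mod 2^k) for every odd x and k >= 3
for k in range(3, 13):
    m = 2**k
    assert all(pow(x, 2**(k - 2), m) == 1 for x in range(1, m, 2))

# And no odd x has order 2^(k-1) mod 2^k, i.e. 2^k has no primitive root (k >= 3):
k, m = 5, 32
orders = {x: min(e for e in range(1, m) if pow(x, e, m) == 1) for x in range(1, m, 2)}
assert max(orders.values()) == 2**(k - 2)
print("maximal order mod 32 is", max(orders.values()))
```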
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
$\sigma$-algebra of $\theta$-invariant sets in ergodicity theorem for stationary processes Applying Birkhoff's ergodic theorem to a stationary process (a stochastic process with invariant transformation $\theta$, the shift-operator - $\theta(x_1, x_2, x_3,\dotsc) = (x_2, x_3, \dotsc)$) one has a result of the form
$ \frac{X_0 + \dotsb + X_{n-1}}{n} \to \mathbb{E}[ X_0 \mid J_{\theta}]$ a.s.
where the right hand side is the conditional expectation of $X_0$ with respect to the sub-$\sigma$-algebra of $\theta$-invariant sets... What do these sets in $J_{\theta}$ look like? (I know that $\mathbb{P}(A) \in \{0,1\}$ in the ergodic case, but I don't want to demand ergodicity for now.)
| The $T$-invariant $\sigma$-field is generated by the (generally nonmeasurable) partition into the orbits of $T$. See https://mathoverflow.net/questions/88268/partition-into-the-orbits-of-a-dynamical-system
This gives a somewhat geometric view of the invariant $\sigma$-field. You should also study the related notion of ergodic components of $T$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
'Linux' math program with interactive terminal? Are there any open source math programs out there that have an interactive terminal and that work on linux?
So for example you could enter two matrices and specify an operation such as multiply and it would then return the answer or a error message specifying why an answer can't be computed? I am just looking for something that can perform basic matrix operations and modular arithmetic.
| Sage is basically a Python program/interpreter that aims to be an open-source mathematical suite (à la Mathematica, Magma, etc.). There are many algorithms implemented as a direct part of Sage, as well as wrapping many other open-source mathematics packages, all into a single interface (the user never has to tell which package or algorithm to use for a given computation: it makes the decisions itself). It includes GP/Pari and maxima, and so does symbolic manipulations and number theory at least as well as them.
It has a command-line mode, as well as a web notebook interface (as an example, a public server run by the main developers).
And, although this might not be relevant since your use-case sounds very simple, the syntax is just Python with a small preprocessing step to facilitate some technical details and allow some extra notation (like [1..4] which expands to [1,2,3,4]), so many people already know it and, if not, learning it is very easy.
As a slight tangent, Sage is actually the origin of the increasingly popular Cython language for writing fast "Python", and even offers an easy and transparent method of using Cython for sections of code in the notebook.
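For the simple use case in the question (basic matrix operations and modular arithmetic), even plain Python, which Sage's language extends, already goes a long way. A minimal sketch (the helper `mat_mul` below is ad hoc, not a library function):

```python
def mat_mul(A, B):
    # Multiply two matrices given as lists of rows, with a dimension check
    if len(A[0]) != len(B):
        raise ValueError(f"inner dimensions must match: {len(A[0])} vs {len(B)}")
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))           # [[19, 22], [43, 50]]

# Modular arithmetic is built in: three-argument pow and the % operator
print(pow(7, 128, 13), (-5) % 13)   # 3 8
```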
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43",
"answer_count": 14,
"answer_id": 2
} |
Randomly selecting a natural number In the answer to these questions:
*
*Probability of picking a random natural number,
*Given two randomly chosen natural numbers, what is the probability that the second is greater than the first?
it is stated that one cannot pick a natural number randomly.
However, in this question:
*
*What is the probability of randomly selecting $ n $ natural numbers, all pairwise coprime?
it is assumed that we can pick $n$ natural numbers randomly.
A description is given in the last question as to how these numbers are randomly selected, to which there seems to be no objection (although the accepted answer is given by one of the people explaining that one cannot pick a random number in the first question).
I know one can't pick a natural number randomly, so how come there doesn't seem to be a problem with randomly picking a number in the last question?
NB: I am happy with some sort of measure-theoretic answer, hence the probability-theory tag, but I think for accessibility to other people a more basic description would be preferable.
| It really depends on what you mean by the "probability of randomly selecting n natural numbers with property $P$". While you cannot pick random natural number, you can speak of uniform distribution.
For the last problem, the probability is calculated, and is to be understood as the limit as $N \to \infty$ of the "probability of randomly selecting n natural numbers from $1$ to $N$, all pairwise coprime".
Note that in this sense, the second problem also has an answer. And some of this type of probabilities can be connected via dynamical systems to an ergodic measure and an ergodic theorem.
Added The example provided by James Fennell is good to understand the last paragraph above.
Consider ${\mathbb Z}_2 = {\mathbb Z}/2{\mathbb Z}$, and the action of ${\mathbb Z}$ on ${\mathbb Z}_2$ defined by
$$m+ ( n \mod 2)=(n+m) \mod 2$$
Then, there exists an unique ergodic measure on ${\mathbb Z}_2$, namely $P(0 \mod 2)= P(1 \mod 2)= \frac{1}{2}$.
This is really what we intuitively understand by "half of the integers are even".
Now, the ergodic theory yields (and is something which can be easily proven directly in this case)
$$\lim_{N} \frac{\text{amount of even natural numbers} \leq N}{N} = P( 0 \mod 2) =\frac{1}{2} \,.$$
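The limit in the last display can also be observed directly:

```python
# "Half of the natural numbers are even", made precise as a limit of finite densities
def density_of_evens(N):
    return sum(1 for n in range(1, N + 1) if n % 2 == 0) / N

for N in (10, 1000, 100000):
    print(N, density_of_evens(N))  # converges to 1/2
```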
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
} |
Finding all solutions to $y^3 = x^2 + x + 1$ with $x,y$ integers larger than $1$
I am trying to find all solutions to
(1) $y^3 = x^2 + x + 1$, where $x,y$ are integers $> 1$
I have attempted to do this using...I think they are called 'quadratic integers'. It would be great if someone could verify the steps and suggest simplifications to this approach. I am also wondering whether my use of Mathematica invalidates this approach.
My exploration is based on a proof I read that $x^3 + y^3 = z^3$ has no non-trivial integer solutions. This proof uses the ring Z[W] where $W = \frac{(-1 + \sqrt{-3})}{2}$. I don't understand most of this proof, or what a ring is, but I get the general idea. The questions I have about my attempted approach are
*
*Is it valid?
*How could it be simplified?
Solution:
Let $w = (-1 + \sqrt{-3})/2$. (Somehow, this can be considered an "integer" even though it doesn't look anything like one!)
Now $x^3 - 1 = (x-1)(x-w)(x-w^2)$ so that, $(x^3 - 1)/(x-1) = x^2 + x + 1 = (x-w)(x-w^2)$. Hence
$y^3 = x^2 + x + 1 = (x-w)(x-w^2).$
Since $x-w, x-w^2$ are coprime up to units (so I have read) both are "cubes". Letting $u$ be one of the 6 units in Z[w], we can say
$x-w = u(a+bw)^3 = u(c + dw)$ where
$c = a^3 + b^3 - 3ab^2, d = 3ab(a-b)$
Unfortunately, the wretched units complicate matters. There are 6 units hence 6 cases, as follows:
1) $1(c+dw) = c + dw$
2) $-1(c+dw) = -c + -dw$
3) $w(c+dw) = -d + (c-d)w$
4) $-w(c+dw) = d + (d-c)w$
5) $-w^2(c+dw) = c-d + cw$
6) $w^2(c+dw) = d-c + -cw$
Fortunately, the first two cases can be eliminated. For example, if $u = 1$ then $x-w = c+dw$ so that $d = -1 = 3ab(a-b).$ But this is not possible for integers $a,b$. The same reasoning applies to $u = -1$.
For the rest I rely on a program called Mathematica, which perhaps invalidates my reasoning, as you will see.
We attack case 5. Here
$x = c-d = a^3 + b^3 - 3a^2b$, and $c = a^3 + b^3 - 3ab^2 = -1.$
According to Mathematica the only integer solutions to $c = -1$ are
$(a,b) = (3,2), (1,1), (0,-1), (-1,0), (-1,-3), (-2,1).$
Plugging these into $x = c-d$ we find no value of $x$ greater than $1$. So case 5 is eliminated, as is 6 by similar reasoning.
Examining case 4 we see that $d-c = -(a^3 + b^3 - 3a^2b) = -1$ with solutions
$(-2,-3), (-1,-1), (-1,2), (0,1), (1,0), (3,1).$
Plugging these values into $x = d = 3ab(a-b)$ yields only one significant value, namely $x = 18$ (e.g. $(a,b)=(3,1)$). The same result is given by case 3. Hence the only solution to (1) is $7^3 = 18^2 + 18 + 1$
However, I'm unsure this approach is valid because I don't know how Mathematica found solutions to expressions such as $a^3 + b^3 - 3ab^2=-1$. These seem more difficult than the original question of $y^3 = x^2 + x + 1$, although I note that Mathematica could not solve the latter.
| In this answer to a related thread, I outline a very elementary solution to this problem. There are probably even simpler ones, but I thought you might appreciate seeing that.
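For what it's worth, a brute-force search over a finite range (a check, not a proof) turns up only the solution found above:

```python
from math import isqrt

# Search all y with 2 <= y < 10**4 for an integer x > 1 with x^2 + x + 1 = y^3.
solutions = []
for y in range(2, 10**4):
    c = y**3 - 1                    # x^2 + x = c, so x = (-1 + sqrt(1 + 4c)) / 2
    s = isqrt(1 + 4 * c)
    if s * s == 1 + 4 * c and (s - 1) % 2 == 0:
        x = (s - 1) // 2
        if x > 1:
            solutions.append((x, y))

print(solutions)   # [(18, 7)]
```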
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
Is every monomial over the UNIT OPEN BALL bounded by its L^{2} norm? Let $m\geq 2$ and $B^{m}\subset \mathbb{R}^{m}$ be the unit OPEN ball . For any fixed multi-index $\alpha\in\mathbb{N}^{m}$ with $|\alpha|=n$ large and $x\in B^{m}$
$$|x^{\alpha}|^{2}\leq \int_{B^{m}}|y^{\alpha}|^{2}dy\,??$$
| No. For a counterexample, take $\alpha=(n,0,\ldots,0)$. Obviously, $\max_{S^m}|x^\alpha|=1$, but an easy calculation shows
$$
\int_{S^m}|y^\alpha|^2{\mathrm{d}}\sigma(y) \to 0,
$$
as $n\to\infty$.
For the updated question, that involves the open unit ball, the answer is the same. With the same counterexample, we have
$$
\int_{B^m}|y^\alpha|^2{\mathrm{d}}y \to 0,
$$
as $n\to\infty$.
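The $m=2$ instance of this counterexample can be made concrete: in polar coordinates, $\int_{B^2} y_1^{2n}\,dy = \frac{1}{2n+2}\cdot 2\pi\binom{2n}{n}4^{-n}$, which a short script compares against $|x^\alpha|^2$ at an interior point:

```python
from math import comb, pi

# m = 2 and alpha = (n, 0): exact value of the integral over the unit disk,
#   int_{B^2} y1^(2n) dy = (1/(2n+2)) * 2*pi * C(2n, n) / 4^n  ->  0 as n -> oo
def integral(n):
    return 2 * pi * comb(2 * n, n) / 4**n / (2 * n + 2)

n = 100
lhs = 0.999 ** (2 * n)   # |x^alpha|^2 at the interior point x = (0.999, 0)
rhs = integral(n)
print(lhs, rhs)          # ~0.819 vs ~0.00175: the proposed inequality fails
```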
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Invertible elements in the ring $K[x,y,z,t]/(xy+zt-1)$ I would like to know how big is the set of invertible elements in the ring $$R=K[x,y,z,t]/(xy+zt-1),$$ where $K$ is any field. In particular whether any invertible element is a (edit: scalar) multiple of $1$, or there is something else. Any help is greatly appreciated.
| Let $R=K[X,Y,Z,T]/(XY+ZT-1)$. In the following we denote by $x,y,z,t$ the residue classes of $X,Y,Z,T$ modulo the ideal $(XY+ZT-1)$. Let $f\in R$ invertible. Then its image in $R[x^{-1}]$ is also invertible. But $R[x^{-1}]=K[x,z,t][x^{-1}]$ and $x$, $z$, $t$ are algebraically independent over $K$. Thus $f=cx^n$ with $c\in K$, $c\ne0$, and $n\in\mathbb Z$. Since $R/xR\simeq K[Z,Z^{-1}][Y]$ we get that $x$ is a prime element, and therefore $n=0$. Conclusion: if $f$ is invertible, then $f\in K-\{0\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Assuming that $(a, b) = 2$, prove that $(a + b, a − b) = 1$ or $2$ Statement to be proved: Assuming that $(a, b) = 2$, prove that
$(a + b, a - b) = 1$ or $2$.
I was thinking that $(a,b)=\gcd(a,b)$ and tried to prove the statement above, only to realise that it is not true.
$(6,2)=2$
but $(8,4)=4$, seemingly contradicting the statement to be proved?
Is there any other meaning for $(a,b)$, or is there a typo in the question?
Sincere thanks for any help.
| If $d\mid(a+b)$ and $d\mid(a-b)$, then
$d\mid(a+b) \pm (a-b) \Rightarrow d\mid 2a$ and $d\mid 2b \Rightarrow d\mid(2a,2b) \Rightarrow d\mid 2(a,b)$.
This is true for any common divisor $d$ of $(a+b)$ and $(a-b)$.
Now take $d = (a+b, a-b)$. Since $(a,b)\mid(a\pm b)$
(for any integers $P$, $Q$ we have $(a,b)\mid(Pa+Qb)$), $d$ cannot be less than $(a,b)$;
in fact, $(a,b)$ must divide $d$,
so $d = (a,b)$ or $d = 2(a,b)$.
Here (a,b)=2.
So $(a+b, a-b)$ must divide $4$, i.e., it is $2$ or $4$ (it cannot be $1$ since $a$, $b$ are even).
Observation:
$a$,$b$ can be of the form
(i) $4n+2$, $4m$ where $(2n+1,m)=1$, then $(a+b, a-b)=2$, ex.$(6,4) = 2 \Rightarrow (6+4, 6-4)=2$
or (ii) $4n+2$, $4m+2$ where $(2n+1,2m+1)=1$, then $(a+b, a-b)=4$, ex.$(6, 10)=2 \Rightarrow (6+10, 6 - 10)=4$
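A quick exhaustive check of both the bound and the observation (for $a$, $b$ up to $200$):

```python
from math import gcd

# Check: gcd(a, b) = 2 forces gcd(a+b, a-b) in {2, 4}, and both values occur
seen = set()
for a in range(2, 200, 2):
    for b in range(2, 200, 2):
        if gcd(a, b) == 2:
            d = gcd(a + b, a - b)
            assert d in (2, 4)
            seen.add(d)
assert seen == {2, 4}   # e.g. (6, 4) gives 2 and (6, 10) gives 4
print("gcd(a+b, a-b) took the values", sorted(seen))
```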
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |