Simplify fraction - Where did the rest go? While studying maths I encountered the following fraction: $\frac{5ab}{10b}$, which I then had to simplify. The answer I came up with is: $\frac{5ab}{10b} = \frac{ab}{2b}$ But the correct answer seemed to be: $\frac{5ab}{10b} = \frac{a}{2} = \frac{1}{2}a$ Why is the above answer correct and mine wrong? I can't wrap my head around $b$ just disappearing like that.
To get from $\dfrac{5ab}{10b}$ to $\dfrac{ab}{2b}$ you probably divided the numerator and denominator each by $5$. Now divide them each by $b$ (if $b \not = 0$).
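As a quick sanity check, the cancellation can be verified with exact rational arithmetic (a small sketch using Python's `fractions` module; the sample values of $a$ and $b$ are arbitrary, and $b$ must be nonzero or the fraction is undefined):

```python
from fractions import Fraction

def simplified(a, b):
    # 5ab / 10b, computed exactly with rationals (b must be nonzero)
    return Fraction(5 * a * b, 10 * b)

# For any nonzero b, the b cancels and only a/2 remains.
for a in [1, 2, 7, -3]:
    for b in [1, 4, -5]:
        assert simplified(a, b) == Fraction(a, 2)

print(simplified(7, 4))  # 7/2, independent of b
```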
{ "language": "en", "url": "https://math.stackexchange.com/questions/74275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
There exists a real number $c$ such that $A+cI$ is positive when $A$ is symmetric Without using the fact that symmetric matrices can be diagonalized: Let $A$ be a real symmetric matrix. Show that there exists a real number $c$ such that $A+cI$ is positive. That is, if $A=(a_{ij})$, one has to show that there exists real $c$ that makes $\sum_i a_{ii}x_i^2 + 2\sum_{i<j}a_{ij}x_ix_j + c\sum_i x_i^2 > 0$ for any vector $X=(x_1,...,x_n)^T$. This is an exercise in Lang's Linear Algebra. Thank you for your suggestions and comments.
Whether $x^TAx$ is positive doesn't depend on the normalization of $x$, so you only have to consider unit vectors. The unit sphere is compact, so the sum of the first two sums is bounded. The third sum is $1$, so you just have to choose $c$ greater than minus the lower bound of the first two sums.
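The same idea can be illustrated numerically with a fully explicit (if crude) bound: on unit vectors $|x^TAx|\le\sum_{i,j}|a_{ij}|$, so any $c$ exceeding that sum works. A sketch (the random matrix, the sample size, and this particular choice of $c$ are my own; the answer only needs *some* bound):

```python
import random, math

def quad_form(A, x):
    # x^T A x for a matrix given as a list of rows
    n = len(A)
    return sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

random.seed(1)
n = 4
# random symmetric matrix A (may well be indefinite)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        A[i][j] = A[j][i] = random.uniform(-3, 3)

# crude explicit bound: |x^T A x| <= sum |a_ij| on unit vectors,
# so c = 1 + sum |a_ij| certainly works
c = 1 + sum(abs(A[i][j]) for i in range(n) for j in range(n))

worst = float("inf")
for _ in range(10_000):
    v = [random.gauss(0, 1) for _ in range(n)]
    norm = math.sqrt(sum(t * t for t in v))
    x = [t / norm for t in v]
    worst = min(worst, quad_form(A, x) + c)  # x^T (A + cI) x on a unit vector

print(worst)  # stays >= 1 by the construction of c
```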
{ "language": "en", "url": "https://math.stackexchange.com/questions/74351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is there a reason why curvature is defined as the change in $\mathbf{T}$ with respect to arc length $s$ And not with respect to time $t$? (or whatever parameter one is using) $\displaystyle \left|\frac{d\mathbf{T}(t)}{dt}\right|$ seems more intuitive to me. I can also see that $\displaystyle \left|\frac{d\mathbf{T}(t)}{ds}\right| = \left|\frac{d\mathbf{r}'(t)}{dt}\right|$ (because $\displaystyle |\mathbf{r}'(t)| = \frac{ds}{dt}$), which does make sense, but I don't quite understand the implications of $\displaystyle \left|\frac{d\mathbf{T}(t)}{dt}\right|$ vs. $\displaystyle \left|\frac{d\mathbf{T}(t)}{ds}\right|$ and why the one was chosen over the other.
The motivation is that we want curvature to be a purely geometric quantity, depending on the set of points making up the line alone and not the parametric formula that happened to generate those points. $\left|\frac{dT}{dt}\right|$ does not satisfy this property: if I reparameterize by $t\to 2t$ for instance I get a curve that looks exactly the same as my original curve, but has twice the curvature. This isn't desirable. $\left|\frac{dT}{ds}\right|$ on the other hand has the advantage of being completely invariant, by definition, to parameterization (assuming some regularity conditions on the curve).
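The reparameterization example is easy to check numerically (a sketch with finite differences; the unit circle and the step size are my own choices): parameterizing the circle by $t$ versus $2t$ doubles $\lvert dT/dt\rvert$ but leaves $\lvert dT/ds\rvert$ at the true curvature $1$.

```python
import math

def curvature_data(speed, t, h=1e-6):
    """For r(t) = (cos(speed*t), sin(speed*t)): estimate |dT/dt| and |dT/ds|."""
    def T(t):
        # unit tangent r'(t)/|r'(t)|; here |r'(t)| = speed exactly
        dx, dy = -speed * math.sin(speed * t), speed * math.cos(speed * t)
        m = math.hypot(dx, dy)
        return (dx / m, dy / m)

    t1, t2 = T(t - h), T(t + h)
    dT_dt = math.hypot(t2[0] - t1[0], t2[1] - t1[1]) / (2 * h)  # central difference
    ds_dt = speed  # |r'(t)| for this curve
    return dT_dt, dT_dt / ds_dt

for speed in (1, 2):
    dT_dt, dT_ds = curvature_data(speed, t=0.3)
    print(speed, round(dT_dt, 4), round(dT_ds, 4))
# |dT/dt| scales with the parameterization; |dT/ds| is 1 (the circle's curvature) in both cases
```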
{ "language": "en", "url": "https://math.stackexchange.com/questions/74403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
A question about hyperbolic functions Suppose $(x,y,z),(a,b,c)$ satisfy $$x^2+y^2-z^2=-1, z\ge 1,$$ $$ax+by-cz=0,$$ $$a^2+b^2-c^2=1.$$ Does it follow that $$z\cosh(t)+c\sinh(t)\ge 1$$ for all real number $t$?
The curve $(X_1,X_2,X_3)=\cosh(t)(x,y,z)+\sinh(t)(a,b,c)$, $-\infty<t<\infty$, is continuous and satisfies $X_1^2+X_2^2-X_3^2=-\cosh^2(t)+\sinh^2(t)=-1$. One of its points, $(x,y,z)$ (when $t=0$), lies on the upper sheet $X_1^2+X_2^2-X_3^2=-1$, $X_3\ge 1$. By connectedness of the curve, the whole curve must lie in this connected component. Hence $z\cosh(t)+c\sinh(t)\ge 1$ for all $t$.
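The conclusion can be sanity-checked on a concrete family of points satisfying all three hypotheses (my own test family, chosen so the constraints hold identically): take $(x,y,z)=(\sinh u,0,\cosh u)$ and $(a,b,c)=(\cosh u,0,\sinh u)$, for which $z\cosh t+c\sinh t=\cosh(u+t)\ge 1$.

```python
import math

def min_value(u, ts):
    """min of z*cosh(t) + c*sinh(t) over t in ts, for the test family at parameter u."""
    x, y, z = math.sinh(u), 0.0, math.cosh(u)
    a, b, c = math.cosh(u), 0.0, math.sinh(u)
    # the three hypotheses from the question hold identically:
    assert abs(x*x + y*y - z*z + 1) < 1e-9 and z >= 1
    assert abs(a*x + b*y - c*z) < 1e-9
    assert abs(a*a + b*b - c*c - 1) < 1e-9
    return min(z * math.cosh(t) + c * math.sinh(t) for t in ts)

ts = [k / 4 for k in range(-40, 41)]
print(min(min_value(u, ts) for u in [-2.0, -0.5, 0.0, 1.3, 3.0]))  # always >= 1, as claimed
```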
{ "language": "en", "url": "https://math.stackexchange.com/questions/74468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
On a finite nilpotent group with a cyclic normal subgroup I'm reading Dummit & Foote, Sec. 6.1. My question is the following. If $G$ is a finite nilpotent group with a cyclic normal subgroup $N$ such that $G/N$ is also cyclic, when is $G$ abelian? I know that dihedral groups are not abelian, and I think the question is equivalent to every Sylow subgroup being abelian. EDIT: So, the real question is about finding all metacyclic finite p-groups that are not abelian. Thanks.
Rod, you are right when you say this can be brought back to every Sylow subgroup being abelian. Since $G$ is nilpotent you can reduce to $G$ being a $p$-group. However, a counterexample is easily found: take the quaternion group $G$ of order $8$, generated by $i$ and $j$ as usual. Let $N$ be the subgroup $\langle i\rangle$ of index $2$. $N$ satisfies your conditions, but $G$ is not abelian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is the math notation for this type of function? A function that turns a real number into another real number can be represented like $f : \mathbb{R}\to \mathbb{R}$ What is the analogous way to represent a function that turns an unordered pair of elements of positive integers each in $\{1,...,n\}$ into a real number? I guess it would almost be something like $$f : \{1,...,n\} \times \{1,...,n\} \to \mathbb{R}$$ but is there a better notation that is more concise and that has the unorderedness?
I would say that it might be best to preface your notation with a sentence explaining it, which will allow the notation itself to be more compact, and generally increase the understanding of the reader. For example, we could write: Let $X=\{x\in\mathbb{N}\mid x\leq N\}$, and let $\sim$ be an equivalence relation on $X^2$ defined by $(a,b)\sim(c,d)$ iff either $a=c$ and $b=d$, or $a=d$ and $b=c$. Let $Y=X^2/\sim$, and let $f:Y\to\mathbb{R}$. So, $Y$ can be thought of as the set of unordered pairs of positive integers up to $N$, and you can then proceed to use this notation every time you want to talk about such a function.
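If this construction were carried out in code rather than notation, unordered pairs are naturally modelled by two-element (frozen) sets, which makes the quotient by $\sim$ automatic. A small Python sketch (the particular function $f$ is an arbitrary example of mine; pairs $\{a,a\}$ are included, matching the quotient $X^2/\sim$):

```python
# model f : { unordered pairs from {1,...,n} } -> R as a dict keyed by frozensets
n = 4
domain = [frozenset({a, b}) for a in range(1, n + 1) for b in range(a, n + 1)]

# an example f; frozenset({a,b}) == frozenset({b,a}), so unorderedness is automatic
# (a pair {a,a} collapses to a singleton set, hence len(p) may be 1)
f = {p: sum(p) / len(p) for p in domain}

print(f[frozenset({2, 3})] == f[frozenset({3, 2})])  # True: order cannot matter
print(len(domain))  # C(4,2) + 4 = 10 unordered pairs (repeats {a,a} allowed)
```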
{ "language": "en", "url": "https://math.stackexchange.com/questions/74590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Proof of inequality $\prod_{k=1}^n(1+a_k) \geq 1 + \sum_{k=1}^n a_k$ with induction I have to show that $\prod_{k=1}^n(1+a_k) \geq 1 + \sum_{k=1}^n a_k$ is valid for all $1 \leq k \leq n$ using the fact that $a_k \geq 0$. Showing that it works for $n=0$ was easy enough. Then I tried $n+1$ and get to: $$\begin{align*} \prod_{k=1}^{n+1}(1+a_k) &= \prod_{k=1}^{n}(1+a_k)(1+a_{n+1}) \\ &\geq (1+\sum_{k=1}^n a_k)(1+a_{n+1}) \\ &= 1+\sum_{k=1}^{n+1} a_k + a_{n+1}\sum_{k=1}^n a_k \end{align*}$$ In order to finish it, I need to get rid of the $+ a_{n+1}\sum_{k=1}^n a_k$ term. How do I accomplish that? It seems that this superfluous sum is always positive, making this not really trivial, i. e. saying that it is even less if one omits that term and therefore still (or even more so) satisfies the $\geq$ …
We want to show : $$\left(\frac{1}{a_{n+1}}+1\right)\prod_{i=1}^{n}\left(1+a_{i}\right)>1+\frac{1}{a_{n+1}}+\sum_{i=1}^{n}\frac{a_{i}}{a_{n+1}}$$ We introduce the function : $$f(a_{n+1})=\left(\frac{1}{a_{n+1}}+1\right)\prod_{i=1}^{n}\left(1+a_{i}\right)-\left(1+\frac{1}{a_{n+1}}+\sum_{i=1}^{n}\frac{a_{i}}{a_{n+1}}\right)$$ Differentiating and multiplying by $-a_{n+1}^2$ yields $\prod_{i=1}^{n}(1+a_i)-1-\sum_{i=1}^{n}a_i$, which is nonnegative by the induction hypothesis; hence $f$ is decreasing. We therefore have : $$f(a_{n+1})\geq \lim_{a_{n+1}\to \infty}f(a_{n+1})=\prod_{i=1}^{n}\left(1+a_{i}\right)-1\geq 0,$$ where the last step again uses the induction hypothesis. It remains to multiply by $a_{n+1}$ and conclude. The advantage is : we get $n$ refinements and we know much more about the behavior of the difference.
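Whatever proof one prefers, the inequality itself is easy to stress-test numerically (a quick sketch with random nonnegative $a_k$; the ranges and sample counts are arbitrary):

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.uniform(0, 5) for _ in range(n)]  # a_k >= 0 is essential here
    prod = 1.0
    for ak in a:
        prod *= 1 + ak
    assert prod >= 1 + sum(a) - 1e-9  # the Weierstrass product inequality

print("inequality holds on all random samples")
```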
{ "language": "en", "url": "https://math.stackexchange.com/questions/74636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Injective functions also surjective? Is it true that for each set $M$ a given injective function $f: M \rightarrow M$ is surjective, too? Can someone explain why it is true or not and give an example?
This statement is true if $M$ is a finite set, and false if $M$ is infinite. In fact, one definition of an infinite set is that a set $M$ is infinite iff there exists a bijection $g : M \to N$ where $N$ is a proper subset of $M$. Given such a function $g$, the function $f : M \to M$ defined by $f(x) = g(x)$ for all $x \in M$ is injective, but not surjective. Henning's answer illustrates this with an example when $M = \mathbb N$. To put that example in the context of my answer, let $E \subseteq \mathbb N$ be the set of positive even numbers, and consider the bijection $g: \mathbb N \to E$ given by $g(x) = 2x$ for all $x \in \mathbb N$. On the other hand, if $M$ is finite and $f: M \to M$, then it is true that $f$ is injective iff it is surjective. Let $m = |M| < \infty$. Suppose $f$ is not surjective. Then $f(M)$ is a strict subset of $M$, and hence $|f(M)| < m$. Now, think of $x \in M$ as pigeons, and throw the pigeon $x$ in the hole $f(x)$ (also a member of $M$). Since the number of pigeons strictly exceeds the number of holes (both these numbers are finite), it follows from the pigeonhole principle that some two pigeons go into the same hole. That is, there exist distinct $x_1, x_2 \in M$ such that $f(x_1) = f(x_2)$, which shows that $f$ is not injective. (See if you can prove the other direction: if $f$ is surjective, then it is injective.) Note that the pigeonhole principle itself needs a proof and that proof is a little elaborate (relying on the definition of a finite set, for instance). I ignore such complications in this answer.
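For a small finite $M$ the equivalence can be verified exhaustively (a sketch of mine; it enumerates all $|M|^{|M|}$ self-maps, so keep $M$ small):

```python
from itertools import product

M = range(4)  # a small finite set
for images in product(M, repeat=len(M)):   # every function f : M -> M
    f = dict(zip(M, images))
    injective = len(set(f.values())) == len(M)
    surjective = set(f.values()) == set(M)
    assert injective == surjective  # holds precisely because M is finite

print("on a finite set, injective <=> surjective")
```

For infinite $M$ the check is impossible, and indeed the equivalence fails, e.g. for $g(x)=2x$ on $\mathbb N$ as in the answer.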
{ "language": "en", "url": "https://math.stackexchange.com/questions/74682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Is $[0,1]^\omega$ a continuous image of $[0,1]$? Is $[0,1]^\omega$, i.e. $\prod_{n=0}^\infty [0,1]$ with the product topology, a continuous image of $[0,1]$? What if $[0,1]$ is replaced by $\mathbb{R}$? Edit: It appears that the answer is yes, and follows from the Hahn-Mazurkiewicz Theorem ( http://en.wikipedia.org/wiki/Space-filling_curve#The_Hahn.E2.80.93Mazurkiewicz_theorem ). However, I am still interested in the related question: is $\mathbb{R}^\omega$ a continuous image of $\mathbb{R}$?
So if I'm reading correctly, you want to find out if there is a continuous (with respect to the product topology) surjective map $f: \mathbb{R} \rightarrow \mathbb{R}^{\omega}$? No, there is not. Note that $\mathbb{R}$ is $\sigma$-compact, so write: $$\mathbb{R} = \bigcup_{n \in \mathbb{N}} [-n,n]$$ Then using the fact that $f$ is surjective we get: $$\mathbb{R}^{\omega} = \bigcup_{n \in \mathbb{N}} f([-n,n])$$ By continuity of $f$, each $D_n=f([-n,n])$ is a compact subset of $\mathbb{R}^{\omega}$. So the question boils down to whether it is possible that $\mathbb{R}^{\omega}$ is $\sigma$-compact in the product topology. It is not: let $\pi_{n}$ be the standard projection from $\mathbb{R}^{\omega}$ onto $\mathbb{R}$; then $\pi_{n}f([-n,n])$ is a compact subset of $\mathbb{R}$, so bounded. Thus for each $n \in \mathbb{N}$ choose $x_{n} \in \mathbb{R} \setminus \pi_{n}f([-n,n])$; then $x=(x_{n})$ lies in $\mathbb{R}^{\omega}$ but not in $\bigcup_{n \in \mathbb{N}} f([-n,n])$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Extending to a holomorphic function Let $Z\subseteq \mathbb{C}\setminus \overline{\mathbb{D}}$ be countable and discrete (here $\mathbb{D}$ stands for the unit disc). Consider a function $f\colon \mathbb{D}\cup Z\to \mathbb{C}$ such that 1) $f\upharpoonright \overline{\mathbb{D}}$ is continuous 2) $f\upharpoonright \mathbb{D}$ is holomorphic 3) if $|z_0|=1$ and $z_n\to z_0$, $z_n\in Z$ then $(f(z_n)-f(z_0))/(z_n-z_0)\to f^\prime(z_0)$ Can $f$ be extended to a holomorphic function on some domain containing $Z$?
No. The function $g(z) = 1+ 2z + \sum_{n=1}^{\infty} 2^{-n^2} z^{2^n}$ is holomorphic on the open disk $\mathbb{D}$ and infinitely often real differentiable at every point of the closed disk $\overline{\mathbb{D}}$, but cannot be analytically extended beyond $\overline{\mathbb{D}}$: The radius of convergence is $1$. For $n \gt k$ we have $2^{nk} 2^{-n^2} \leq 2^{-n}$, hence the series and all of its derivatives converge uniformly on $\overline{\mathbb{D}}$, thus $g$ is indeed smooth in the real sense and holomorphic on $\mathbb{D}$. By Hadamard's theorem on lacunary series the function $g$ cannot be analytically continued beyond $\overline{\mathbb D}$. [In fact, it is not difficult to show that $g$ is injective on $\overline{\mathbb{D}}$, so $g$ is even a diffeomorphism onto its image $g(\overline{\mathbb D})$ — but that's not needed here.] I learned about this nice example from Remmert, Classical topics in complex function theory, Springer GTM 172, Chapter 11, §2.3 (note: I'm quoting from the German edition). The entire chapter is devoted to the behavior of power series on the boundary of convergence and gives theorems that provide positive and negative answers on whether a given power series can be extended or not. Now, to get a counterexample, apply Whitney's theorem to extend $g$ to a smooth function $f$ on all of $\mathbb{C}$. Every restriction of $f$ to any set of the form $\overline{\mathbb{D}} \cup Z$ (I think that's what was intended) as in your question will provide a counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/74901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How do you parameterize a sphere so that there are "6 faces"? I'm trying to parameterize a sphere so it has 6 faces of equal area, like this: But this is the closest I can get (simply jumping $\frac{\pi}{2}$ in $\phi$ azimuth angle for each "slice"). I can't seem to get the $\theta$ elevation parameter correct. Help!
The following doesn't have much to do with spherical coordinates, but it might be worth noting that these 6 regions can be seen as the projections of the 6 faces of an enclosing cube. In other words, each of the 6 regions can be parametrized as the part of the sphere $$S=\{(x,y,z)\in\mathbb R^3\mid x^2+y^2+z^2=1\}$$ for which, respectively:

* $x > \max(\lvert y\rvert,\lvert z\rvert)$
* $x < -\max(\lvert y\rvert,\lvert z\rvert)$
* $y > \max(\lvert z\rvert,\lvert x\rvert)$
* $y < -\max(\lvert z\rvert,\lvert x\rvert)$
* $z > \max(\lvert x\rvert,\lvert y\rvert)$
* $z < -\max(\lvert x\rvert,\lvert y\rvert)$

For example, the top region can be parametrized by $z =\sqrt{1-x^2-y^2}$, with region \begin{align*} \lvert x\rvert&<\frac{1}{\sqrt{2}}\\ \lvert y\rvert&<\min\left(\sqrt{\frac{1-x^2}{2}},\sqrt{1-2x^2}\right) \end{align*}
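That the six cube-face regions have equal area can be checked by Monte Carlo (my own sanity check; it uses the standard trick of normalizing a Gaussian vector to sample the sphere uniformly):

```python
import random, math

random.seed(0)
counts = [0] * 6
N = 60_000
for _ in range(N):
    # uniform point on the sphere: normalize a standard Gaussian vector
    v = [random.gauss(0, 1) for _ in range(3)]
    r = math.sqrt(sum(t * t for t in v))
    x, y, z = (t / r for t in v)
    # classify by the coordinate of largest absolute value, with its sign
    vals = [x, y, z]
    i = max(range(3), key=lambda k: abs(vals[k]))
    counts[2 * i + (0 if vals[i] > 0 else 1)] += 1

print([c / N for c in counts])  # each fraction close to 1/6
```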
{ "language": "en", "url": "https://math.stackexchange.com/questions/74941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
How do we check Randomness? Let's imagine a guy who claims to possess a machine that can each time produce a completely random series of 0/1 digits (e.g. $1,0,0,1,1,0,1,1,1,...$). And each time after he generates one, you can keep asking him for the $n$-th digit and he will tell you accordingly. Then how do you check if his series is really completely random? If we only check whether the $n$-th digit is evenly distributed, then he can cheat using: $0,0,0,0,...$ $1,1,1,1,...$ $0,0,0,0,...$ $1,1,1,1,...$ $...$ If we check whether any given sequence is distributed evenly, then he can cheat using: $(0,)(1,)(0,0,)(0,1,)(1,0,)(1,1,)(0,0,0,)(0,0,1,)...$ $(1,)(0,)(1,1,)(1,0,)(0,1,)(0,0,)(1,1,1,)(1,1,0,)...$ $...$ I may give other possible checking processes but as far as I can list, each of them has flaws that can be cheated with a prepared regular series. How do we check if a series is really random? Or is randomness a philosophical concept that can not be easily defined in Mathematics?
All the sequences you mentioned have a really low Kolmogorov complexity, because you can easily describe them in really short space. A random sequence (as per the usual definition) has high Kolmogorov complexity, which means there are no instructions shorter than the string itself that can describe or reproduce the string. Of course the length of the description depends on the formal system (language) you use to describe it, but if the length of the string is much longer than the axioms of your formal system, then the Kolmogorov complexity of a random string becomes essentially independent of your choice of system. Luckily, under the Church-Turing thesis, there is only one model of computation (unless your machine uses yet-undiscovered physical laws), so there is only one language your machine can speak that we have to check. So to test whether a string is random, we only have to brute-force check the length of the shortest Turing program that outputs the first $n$ bits correctly. If the length eventually becomes proportional to $n$, then we can be fairly certain we have a random sequence, but to be $100\%$ sure we would have to check the whole (infinite) string (as per the definition of random).
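Kolmogorov complexity itself is uncomputable, but compressed length gives a crude computable proxy (my own illustration, not part of the formal definition): the "cheating" sequences from the question compress extremely well, while a typical coin-flip sequence cannot be compressed below its entropy.

```python
import random, zlib

def compressed_len(bits):
    # length in bytes of the zlib-compressed string, maximum compression level
    return len(zlib.compress(bits.encode(), 9))

n = 4096
constant  = "0" * n
periodic  = "01" * (n // 2)
random.seed(42)
coinflips = "".join(random.choice("01") for _ in range(n))

for name, s in [("constant", constant), ("periodic", periodic), ("random", coinflips)]:
    print(name, compressed_len(s))
# the regular strings shrink to a few dozen bytes; the coin-flip string cannot
# drop below roughly n/8 bytes, the information it actually contains
```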
{ "language": "en", "url": "https://math.stackexchange.com/questions/75005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
How many ways can 8 people be seated in a row? I am stuck with the following question: How many ways can 8 people be seated in a row if there are 4 men and 4 women and no 2 men or women may sit next to each other? I did it as follows: As 4 men and 4 women must sit next to each other, we consider each of them as a single unit. Now we have 4 people (1 men group, 1 women group, 2 men or women); they can be seated in 4! ways. Now each of the groups of men and women can swap places within themselves, so we should multiply the answer with 4!*4!. This makes the total 4!*4!*4! = 13824. Please help me out with the answer. Are the steps clear and is the answer and the method right? Thanks
If there is a man on the first seat, there has to be a woman on the second, a man on the third so forth. Alternatively, we could start with a woman, then put a man, then a woman and so forth. In any case, if we decide which gender to put on the first seat, the genders for the others seats are forced upon us. So there are only two ways in which we can divide the eight aligned seats into "man-seats" and "woman-seats" without violating the rule that no two men and no two women may sit next to each other. Once you have chosen where to seat the men and where the women you can permute the two groups arbitrarily, giving $4!$ possibilities each. So in total the number of possible constellations is $$ 2\cdot 4!\cdot 4!=1152. $$
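The count can be confirmed by brute force over all $8!$ orderings (a small sketch; the labels M1..M4, W1..W4 are my own):

```python
from itertools import permutations

people = ["M1", "M2", "M3", "M4", "W1", "W2", "W3", "W4"]
count = sum(
    1
    for row in permutations(people)
    if all(row[i][0] != row[i + 1][0] for i in range(7))  # genders must alternate
)
print(count)  # 1152 = 2 * 4! * 4!
```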
{ "language": "en", "url": "https://math.stackexchange.com/questions/75071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Probability that no two consecutive throws of some (A,B,C,D,E,F)-die show up consonants I have a question on probability. I am looking for people to present different approaches to solving this. I already have one solution but I was not satisfied, like a true mathematician ;).....so go ahead and take a dig.....if no one answers....I will post my solution....thanks! There is an unbiased cubical die with its faces labeled as A, B, C, D, E and F. If the die is thrown $n$ times, what is the probability that no two consecutive throws show up consonants? If someone has already asked a problem of this type then I will be grateful to be redirected :)
Here is a solution different from the one given on the page @joriki links to. Call $c_n$ the probability that no two consecutive consonants appeared during the $n$ first throws and that the last throw produces a consonant. Call $b_n$ the probability that no two consecutive consonants appeared during the $n$ first throws and that the last throw did not produce a consonant. Thus one is looking for $p_n=c_n+b_n$. For every $n\geqslant1$, $c_{n+1}=\frac23b_n$ (if the previous throw was a consonant, one cannot get a consonant now and if it was not, $\frac23$ is the probability to get a consonant now) and $b_{n+1}=\frac13b_n+\frac13c_n$ (if the present throw is not a consonant, one asks that two successive consonants were not produced before now). Furthermore $c_1=\frac23$ and $b_1=\frac13$. One asks for $p_n=3b_{n+1}$ and one knows that $9b_{n+2}=3b_{n+1}+3c_{n+1}=3b_{n+1}+2b_n$ for every $n\geqslant1$. The roots of the characteristic equation $9r^2-3r-2=0$ are $r_2=\frac23$ and $r_1=-\frac13$ hence $b_n=B_2r_2^n+B_1r_1^n$ for some $B_1$ and $B_2$. One can use $b_2=\frac13$ as second initial condition, this yields $B_2=\frac23$ and $B_1=\frac13$ hence $b_n=r_2^{n+1}-r_1^{n+1}$. Finally, $p_n=3^{-n-1}\left(2^{n+2}-(-1)^{n}\right)$. (Sanity check: one can check that $p_0=p_1=1$, $p_2=\frac59$, and even with some courage that $p_3=\frac{11}{27}$, are the correct values.)
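The closed form can be checked against exhaustive enumeration for small $n$ (a sketch of mine; B, C, D, F are the four consonants among the six labels):

```python
from itertools import product
from fractions import Fraction

consonants = set("BCDF")  # 4 of the 6 faces A..F

def p_exact(n):
    """P(no two consecutive consonants) by enumerating all 6^n outcomes."""
    good = sum(
        1
        for seq in product("ABCDEF", repeat=n)
        if not any(seq[i] in consonants and seq[i + 1] in consonants
                   for i in range(n - 1))
    )
    return Fraction(good, 6 ** n)

def p_formula(n):
    # the closed form p_n = 3^(-n-1) * (2^(n+2) - (-1)^n) from the answer
    return Fraction(2 ** (n + 2) - (-1) ** n, 3 ** (n + 1))

for n in range(1, 7):
    assert p_exact(n) == p_formula(n)
print(p_formula(2), p_formula(3))  # 5/9 11/27, matching the sanity check
```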
{ "language": "en", "url": "https://math.stackexchange.com/questions/75098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
How to prove that $\lim\limits_{x\to0}\frac{\sin x}x=1$? How can one prove the statement $$\lim_{x\to 0}\frac{\sin x}x=1$$ without using the Taylor series of $\sin$, $\cos$ and $\tan$? Best would be a geometrical solution. This is homework. In my math class, we are about to prove that $\sin$ is continuous. We found out, that proving the above statement is enough for proving the continuity of $\sin$, but I can't find out how. Any help is appreciated.
Usual proofs can be circular, but there is a simple way for proving such inequality. Let $\theta$ be an acute angle and let $O,A,B,C,D,C'$ as in the following diagram: We may show that: $$ CD \stackrel{(1)}{ \geq }\;\stackrel{\large\frown}{CB}\; \stackrel{(2)}{\geq } CB\,\stackrel{(3)}{\geq} AB $$ $(1)$: The quadrilateral $OCDC'$ and the circle sector delimited by $O,C,C'$ are two convex sets. Since the circle sector is a subset of the quadrilateral, the perimeter of the circle sector is less than the perimeter of the quadrilateral. $(2)$: the $CB$ segment is the shortest path between $B$ and $C$. $(3)$ $CAB$ is a right triangle, hence $CB\geq AB$ by the Pythagorean theorem. In terms of $\theta$ we get: $$ \tan\theta \geq \theta \geq 2\sin\frac{\theta}{2} \geq \sin\theta $$ for any $\theta\in\left[0,\frac{\pi}{2}\right)$. Since the involved functions are odd functions the reverse inequality holds over $\left(-\frac{\pi}{2},0\right]$, and $\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1$ follows by squeezing. A slightly different approach might be the following one: let us assume $\theta\in\left(0,\frac{\pi}{2}\right)$. By $(2)$ and $(3)$ we have $$ \theta \geq 2\sin\frac{\theta}{2}\geq \sin\theta $$ hence the sequence $\{a_n\}_{n\geq 0}$ defined by $a_n = 2^n \sin\frac{\theta}{2^n}$ is increasing and bounded by $\theta$. Any increasing and bounded sequence is convergent, and we actually have $\lim_{n\to +\infty}a_n=\theta$ since $\stackrel{\large\frown}{BC}$ is a rectifiable curve and for every $n\geq 1$ the $a_n$ term is the length of a polygonal approximation of $\stackrel{\large\frown}{BC}$ through $2^{n-1}$ equal segments. In particular $$ \forall \theta\in\left(0,\frac{\pi}{2}\right), \qquad \lim_{n\to +\infty}\frac{\sin\left(\frac{\theta}{2^n}\right)}{\frac{\theta}{2^n}} = 1 $$ and this grants that if the limit $\lim_{x\to 0}\frac{\sin x}{x}$ exists, it is $1$. 
By $\sin x\leq x$ we get $\limsup_{x\to 0}\frac{\sin x}{x}\leq 1$, hence it is enough to show that $\liminf_{x\to 0}\frac{\sin x}{x}\geq 1$. We already know that for any $x$ close enough to the origin the sequence $\frac{\sin x}{x},\frac{\sin(x/2)}{x/2},\frac{\sin(x/4)}{x/4},\ldots$ is convergent to $1$, hence we are done. Long story short: $\lim_{x\to 0}\frac{\sin x}{x}=1$ follows from the fact that a circle is a rectifiable curve, and a circle is a rectifiable curve because it is the boundary of a convex, bounded subset of $\mathbb{R}^2$. The convexity of the disk follows from the triangle inequality: a disk is a closed ball for the euclidean distance. $(1)$ relies on this powerful Lemma: Lemma. If $A,B$ are convex bounded sets in $\mathbb{R}^2$ and $A\subsetneq B$, the perimeter of $A$ is less than the perimeter of $B$. Proof: by boundedness and convexity, $\partial A$ and $\partial B$ are rectifiable, with lengths $L(A)=\mu(\partial A),\,L(B)=\mu(\partial B)$. Always by convexity, there is some chord in $B$ that does not meet the interior of $A$ (a tangent to $\partial A$ at a smooth point does the job, for instance). Assume that such chord has endpoints $B_1, B_2 \in \partial B$ and perform a cut along $B_1 B_2$: both the area and the perimeter of $B$ decrease, but $B$ remains a bounded convex set enclosing $A$. Since $A$ can be approximated through a sequence of consecutive cuts, $L(A)<L(B)$ follows.
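The sequence $a_n = 2^n\sin(\theta/2^n)$ from the second approach is easy to watch converge numerically (a small sketch; the choice of $\theta$ is arbitrary in $(0,\pi/2)$):

```python
import math

theta = 1.2  # any angle in (0, pi/2)
prev = -float("inf")
for n in range(0, 12):
    # length of the polygonal approximation of the arc through 2^(n-1) equal segments
    a_n = 2 ** n * math.sin(theta / 2 ** n)
    assert a_n >= prev              # increasing ...
    assert a_n <= theta + 1e-12     # ... and bounded above by the arc length
    prev = a_n
print(theta - prev)  # tiny: a_n -> theta
```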
{ "language": "en", "url": "https://math.stackexchange.com/questions/75130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "536", "answer_count": 28, "answer_id": 18 }
Euclidean distance vs Squared So I understand that Euclidean distance is valid for all of properties for a metric. But why doesn't the square hold the same way?
You lose the triangle inequality if you don’t take the square root: the ‘distance’ from the origin to $(2,0)$ would be $4$, which is greater than $2$, the sum of the ‘distances’ from the origin to $(1,0)$ and from $(1,0)$ to $(2,0)$.
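In code, the same counterexample (a trivial sketch on points of the real line):

```python
def d2(p, q):
    """squared Euclidean 'distance' (no square root taken)"""
    return (p - q) ** 2

# triangle inequality fails: going 0 -> 1 -> 2 looks 'shorter' than 0 -> 2 directly
assert d2(0, 2) == 4
assert d2(0, 1) + d2(1, 2) == 2
assert d2(0, 2) > d2(0, 1) + d2(1, 2)  # violates d(a,c) <= d(a,b) + d(b,c)
print("squared distance is not a metric")
```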
{ "language": "en", "url": "https://math.stackexchange.com/questions/75193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
mean and std deviation of a population equal? Hypothetically, if we have a population of size $n$ whose mean and std deviation are equal, I think with some work we have a constraint that the ratio, (Sum of squared points)/(Sum of points$)^2$ $= \frac{(2n-1)}{n^2}$, which gets small quickly as $n$ gets large. Are there heuristic considerations that might render such a population plausible as an extension of, say, the binomial distribution (as with the Poisson distribution, although that distribution the mean is equal to the variance)? Does this property (mean = Sqrt[variance] ) suggest anything about the population generally, if that question is not too vague? I have not encountered a population with this property in any texts, but am fairly sure it has been considered...?
The distributions of exponential type whose variance and mean are related by $\operatorname{Var}(X) \sim (\mathbb{E}(X))^p$ for a fixed $p$, called the index parameter, are known as the Tweedie family. The case you are interested in corresponds to index $p =2$. The $\Gamma$-distribution possesses this property (the exponential distribution is a special case of the $\Gamma$-distribution). The $\Gamma$-distribution has probability density $f(x) = \frac{1}{\Gamma(\alpha)} x^{\alpha - 1} \beta^{-\alpha} \exp\left(- \frac{x}{\beta}\right) \mathbf{1}_{x > 0}$. Its mean is $\mu = \alpha \beta$ and variance $\mu_2 = \alpha \beta^2$, hence $\mu_2 = \frac{\mu^2}{\alpha}$. The equality requires $\alpha = 1$, so we recover the exponential distribution, as already noted by Michael Hardy. But we do not have to stay within the exponential family. You can certainly achieve the equality with the normal distribution, with the beta distribution, and with many discrete distributions, for instance the generalized Poisson distribution with $\lambda = \frac{1}{1-\mu}$, for which both the mean and variance equal $(1 - \mu)^{-2}$.
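The $\alpha = 1$ case (mean equal to standard deviation) is easy to see in simulation (a sketch using the stdlib exponential sampler; the rate, sample size, and tolerance are my own choices):

```python
import random, math

random.seed(7)
N = 200_000
sample = [random.expovariate(1.5) for _ in range(N)]  # Exp(rate=1.5): mean = std = 2/3

mean = sum(sample) / N
std = math.sqrt(sum((s - mean) ** 2 for s in sample) / N)
print(round(mean, 3), round(std, 3))  # both near 1/1.5, i.e. about 0.667
```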
{ "language": "en", "url": "https://math.stackexchange.com/questions/75244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How many everywhere defined functions are not $ 1$ to $1$ I am stuck with the following question: How many everywhere defined functions from $S$ to $T$ are not one to one, where $S=\{a,b,c,d,e\}$ and $T=\{1,2,3,4,5,6\}$? Now the teacher showed that there could be $6^5$ ways to make an everywhere defined function and $6!$ ways for it to be $1$ to $1$, but when I drew them on paper I could draw no more than 30, here are those, $(a,1),(a,2).........(a,6)$$(b,1),(b,2).........(b,6)$$(c,1),(c,2).........(c,6)$$(d,1)(d,2).........(d,6)$$(e,1)(e,2).........(e,6)$ Can anyone please help me out with how there are $6^5$ functions? Thanks
You have produced a complete and correct list of all ordered pairs $(x,y)$, where $x$ ranges over $S$ and $y$ ranges over $T$. However, this is not the set of all functions from $S$ to $T$. Your list, however, gives a nice way of visualizing all the functions. We can produce all the functions from $S$ to $T$ by picking any ordered pair from your first row, followed by any ordered pair from your second row, followed by any ordered pair from your third row, and so on. We have $6$ choices from the first row. For any of these choices, we have $6$ choices from the second row, for a total of $6\times 6$ choices from the first two rows. For every way of choosing from the first two rows, we have $6$ ways to choose from the third row, and so on for a total of $6^5$. A function from $S$ to $T$ is a set of ordered pairs, with one ordered pair taken from each row. This view is quite close to the formal definition of function that you may have seen in your course. For example, we might choose $(a,1)$, $(b,5)$, $(c,1)$, $(d,5)$, $(e,2)$. This gives us the function that maps $a$ and $c$ to $1$, $b$ and $d$ to $5$, and $e$ to $2$. We can produce the one-to-one functions in similar way from your "matrix." We start by taking any ordered pair from your first row. But then, from the second row, we must pick an ordered pair which is not in the same column as the first ordered pair you picked. So even though there are $6$ choices from the first row, for every such choice, there are only $5$ choices from the second row. Similarly, once we have picked a pair from each of the first two rows, in the third row we must avoid the columns these pairs are in. So we end up with only $6\times 5\times 4\times 3\times 2$ one-to-one functions.
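Both counts can be verified by listing every function explicitly, exactly as the row-by-row picture suggests (a sketch using `itertools.product`; each tuple records one image choice per element of $S$):

```python
from itertools import product

S = "abcde"        # |S| = 5
T = range(1, 7)    # |T| = 6

all_functions = list(product(T, repeat=len(S)))  # one choice per row of the "matrix"
one_to_one = [f for f in all_functions if len(set(f)) == len(S)]

print(len(all_functions))                    # 6^5 = 7776
print(len(one_to_one))                       # 6*5*4*3*2 = 720
print(len(all_functions) - len(one_to_one))  # the functions that are not one-to-one
```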
{ "language": "en", "url": "https://math.stackexchange.com/questions/75301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
The Pigeon Hole Principle and the Finite Subgroup Test I am currently reading this document and am stuck on Theorem 3.3 on page 11: Let $H$ be a nonempty finite subset of a group $G$. Then $H$ is a subgroup of $G$ if $H$ is closed under the operation of $G$. I have the following questions: 1. It suffices to show that $H$ contains inverses. I don't understand why that alone is sufficient. 2. Choose any $a$ in $G$...then consider the sequence $a,a^2,..$ This sequence is contained in $H$ by the closure property. I know that if $G$ is a group, then $ab$ is in $G$ for all $a$ and $b$ in $G$.But, I don't understand why the sequence has to be contained in $H$ by the closure property. 3. By the Pigeonhole Principle, since $H$ is finite, there are distinct $i,j$ such that $a^i=a^j$. I understand the Pigeonhole Principle (as explained on page 2) and why $H$ is finite, but I don't understand how the Pigeonhole Principle was applied to arrive at $a^i=a^j$. 4. Reading the proof, it appears to me that $H$ = $\left \langle a \right \rangle$ where $a\in G$. Is this true?
To show $H$ is a subgroup you must show it's closed, contains the identity, and contains inverses. But if it's closed, non-empty, and contains inverses, then it's guaranteed to contain the identity, because it's guaranteed to contain something, say, $x$, then $x^{-1}$, then $xx^{-1}$, which is the identity. $H$ is assumed closed, so if it contains $a$ and $b$, it contains $ab$. But $a$ and $b$ don't have to be different: if it contains $a$, it contains $a$ and $a$, so it contains $aa$, which is $a^2$. But then it contains $a$ and $a^2$ so it contains $aa^2$ which is $a^3$. Etc. So it contains $a,a^2,a^3,a^4,\dots$. $H$ is finite, so these can't all be different, so some pair is equal, that is, $a^i=a^j$ for some $i\ne j$. As for your last question, do you know any example of a group with a non-cyclic subgroup?
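A concrete instance of the argument (my own example, in the multiplicative group $(\mathbb Z/7\mathbb Z)^\times$ with $a=3$): the powers of $a$ must eventually repeat, and the repetition hands us the inverse, since $a^i=a^j$ with $i<j$ gives $a^{j-i}=e$ and hence $a^{-1}=a^{j-i-1}$.

```python
# powers of a = 3 in the group (Z/7Z)* under multiplication mod 7
a, p = 3, 7
powers = []
x = a
while x not in powers:      # finite group => the powers must eventually repeat
    powers.append(x)
    x = (x * a) % p

print(powers)               # the closure of {3}: note it already contains the identity 1
inverse = powers[-2]        # a^(order-1) = a^(-1)
print(inverse, (a * inverse) % p)  # the inverse of 3, and a check that a * a^(-1) = 1
```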
{ "language": "en", "url": "https://math.stackexchange.com/questions/75371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Prove that $\lim \limits_{n\to\infty}\frac{n}{n^2+1} = 0$ from the definition This is a homework question: Prove, using the definition of a limit, that $$\lim_{n\to\infty}\frac{n}{n^2+1} = 0.$$ Now this is what I have so far but I'm not sure if it is correct: Let $\epsilon$ be any number, so we need to find an $M$ such that: $$\left|\frac{n}{n^2 + 1}\right| < \epsilon \text{ whenever }x \gt M.$$ $$ n \lt \epsilon(n^2 + 1) $$ $$n \lt \epsilon n^2 + \epsilon$$ Now what? I am completely clueless on how to do this!
First, $\epsilon$ should not be "any number", it should be "any positive number." Now, you are on the right track. What do you need in order for $\frac{n}{n^2+1}$ to be smaller than $\epsilon$? You need $n\lt \epsilon n^2 + \epsilon$. This is equivalent to requiring $$\epsilon n^2 - n + \epsilon \gt 0.$$ You want to find out for what values of $n$ this is true. This is a quadratic inequality: you first solve $$\epsilon n^2 - n + \epsilon = 0,$$ and then you use the solution to determine where the quadratic is positive, and where it is negative. The answer will, of course, depend on $\epsilon$. Using the quadratic formula, we have that $$\epsilon n^2 - n + \epsilon = 0$$ has solutions $$n = \frac{1 + \sqrt{1-4\epsilon^2}}{2\epsilon}, \quad n= \frac{1-\sqrt{1-4\epsilon^2}}{2\epsilon}.$$ That is, $$\epsilon n^2 - n + \epsilon = \epsilon\left( n - \frac{1-\sqrt{1-4\epsilon^2}}{2\epsilon}\right)\left(n - \frac{1+\sqrt{1-4\epsilon^2}}{2\epsilon}\right).$$ Now, we can assume that $\epsilon\lt \frac{1}{2}$, so that $4\epsilon^2\lt 1$ (if it works for all small enough $\epsilon$, then it works for all $\epsilon$). Since $\epsilon\gt 0$, the quadratic is positive if $n$ is smaller than the smaller of the two roots, or if $n$ is larger than the larger of the two roots. The larger root is $\displaystyle \frac{1 + \sqrt{1-4\epsilon^2}}{2\epsilon}$. So if $$n \gt \frac{1+\sqrt{1-4\epsilon^2}}{2\epsilon},$$ then $$\epsilon n^2 -n + \epsilon \gt 0.$$ Can you finish it up from here?
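To see the bound in action numerically, here is a quick sketch (the function name is mine): past the larger root of $\epsilon n^2 - n + \epsilon = 0$ the inequality holds, and just below it, it fails.

```python
import math

def threshold(eps):
    # larger root of eps*n^2 - n + eps = 0; valid for 0 < eps < 1/2
    return (1 + math.sqrt(1 - 4 * eps ** 2)) / (2 * eps)

eps = 0.01
N = threshold(eps)           # about 99.99 for this epsilon
start = math.ceil(N)
# every n past the threshold satisfies n/(n^2 + 1) < eps
tail_ok = all(n / (n ** 2 + 1) < eps for n in range(start, start + 1000))
```

Note that at $n = 99$, just below the threshold, $n/(n^2+1) \approx 0.0101 \ge \epsilon$, so the bound is essentially tight.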
{ "language": "en", "url": "https://math.stackexchange.com/questions/75429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 1 }
Number of point subsets that can be covered by a disk Given $n$ distinct points in the (real) plane, how many distinct non-empty subsets of these points can be covered by some (closed) disk? I conjecture that if no three points are collinear and no four points are concyclic then there are $\frac{n}{6}(n^2+5)$ distinct non-empty subsets that can be covered by a disk. (I have the outline of an argument, but it needs more work. See my answer below.) Is this conjecture correct? Is there a good BOOK proof? This question is rather simpler than the related unit disk question. The answer to this question provides an upper bound to the unit disk question (for $k=1$).
When $n=6$, consider four points at the corner of a square, and two more points very close together near the center of the square. To be precise, let's take points at $(\pm1,0)$ and $(0,\pm1)$ and at $(\epsilon,\epsilon)$ and $(-2\epsilon,\epsilon)$ for some small $\epsilon>0$. Then if I'm not mistaken, the number of nonempty subsets that can be covered by a disk is 34 (6 of size 1, 11 of size 2, 8 of size 3, 4 of size 4, 4 of size 5, and 1 of size 6), while your conjectured formula gives 41 [I originally had the incorrect 31]. Now that I think about, when $n=4$, taking two points very near the midpoint of the segment joining the other two points (say $(\pm1,0)$ and $(0,\pm\epsilon)$) gives 12 such nonempty subsets (4 of size 1, 5 of size 2, 2 of size 3, and 1 of size 4) while your conjectured formula gives 14. Edited to add: it's been commented correctly that the above $n=4$ example does indeed give 14, since every size-3 subset can be covered by a disk. But what about if the four points are $(\pm1,0)$, $(0,\epsilon)$, and $(0,0)$ instead? Now I believe the first three points cannot be covered by a disk without covering the fourth also. Perhaps you want to add the condition that no three points are collinear?
{ "language": "en", "url": "https://math.stackexchange.com/questions/75487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Filter to obtain MMSE of data from Gaussian vector Data sampled at two time instances giving bivariate Gaussian vector $X=(X_1,X_2)^T$ with $f(x_1,x_2)=\exp(-(x_1^2+1.8x_1x_2+x_2^2)/0.38)/2\pi \sqrt{0.19}$ Data measured in noisy environment with vector: $(Y_1,Y_2)^T=(X_1,X_2)^T+(W_1,W_2)^T$ where $W_1,W_2$ are both $i.i.d.$ with $\sim N (0,0.2)$. I have found correlation coefficient of $X_1,X_2$, $\rho=-0.9$ and $X_1,X_2 \sim N(0,1)$ Question: How to design filter to obtain MMSE estimator of $X_1$ from $Y$ vector and calculate MSE of this estimator?
What you need is $\mathbb{E}(X_1 \mid Y_1, Y_2)$. We have $$ \operatorname{var}\begin{bmatrix} X_1 \\ Y_1 \\ Y_2 \end{bmatrix} = \left[\begin{array}{r|rr} 1 & 1 & -0.9 \\ \hline1 & 1.02 & -0.9 \\ -0.9 & -0.9 & 1.02 \end{array}\right]= \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{12}^\top & \Sigma_{22} \end{bmatrix}. $$ So the conditional expected value is $$ \mathbb{E}(X_1) + \Sigma_{12} \Sigma_{22}^{-1} \left( \begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} - \mathbb{E}\begin{bmatrix} Y_1 \\ Y_2 \end{bmatrix} \right). $$ See: http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Conditional_distributions
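For concreteness, the filter weights $\Sigma_{12}\Sigma_{22}^{-1}$ and the MSE $\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{12}^\top$ can be computed with plain $2\times 2$ arithmetic; a sketch using the covariance entries exactly as written above:

```python
S11 = 1.0
S12 = [1.0, -0.9]
S22 = [[1.02, -0.9], [-0.9, 1.02]]

# 2x2 inverse by the adjugate formula
det = S22[0][0] * S22[1][1] - S22[0][1] * S22[1][0]
inv = [[S22[1][1] / det, -S22[0][1] / det],
       [-S22[1][0] / det, S22[0][0] / det]]

# row vector S12 times inv: the Wiener filter weights
w = [S12[0] * inv[0][0] + S12[1] * inv[1][0],
     S12[0] * inv[0][1] + S12[1] * inv[1][1]]
mse = S11 - (w[0] * S12[0] + w[1] * S12[1])
```

Since all means are zero, the estimator is $\hat X_1 = w_1 Y_1 + w_2 Y_2 \approx 0.911\,Y_1 - 0.078\,Y_2$ with MSE $\approx 0.018$, given those covariance entries.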
{ "language": "en", "url": "https://math.stackexchange.com/questions/75615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove that $\sum\limits_{n=1}^\infty\frac{(n-1)!}{n\prod\limits_{i=1}^n(a+i)}=\sum\limits_{k=1}^\infty \frac{1}{(a+k)^2}$ for $a>-1$? A problem on my (last week's) real analysis homework boiled down to proving that, for $a>-1$, $$\sum_{n=1}^\infty\frac{(n-1)!}{n\prod\limits_{i=1}^n(a+i)}=\sum_{k=1}^\infty \frac{1}{(a+k)^2}.$$ Mathematica confirms this is true, but I couldn't even prove the convergence of the original series (the one on the left), much less demonstrate that it equaled this other sum; the ratio test is inconclusive, and the root test and others seem hopeless. It was (and is) quite a frustrating problem. Can someone explain how to go about tackling this?
This uses a reliable trick with the Beta function. I say reliable because you can use the beta function and switching of the integral and sum to solve many series very quickly. First notice that $$\prod_{i=1}^{n}(a+i)=\frac{\Gamma(n+a+1)}{\Gamma(a+1)}.$$ Then $$\frac{(n-1)!}{\prod_{i=1}^{n}(a+i)}=\frac{\Gamma(n)\Gamma(a+1)}{\Gamma(n+a+1)}=\text{B}(n,a+1)=\int_{0}^{1}(1-x)^{n-1}x{}^{a}dx.$$ Hence, upon switching the order we have that $$\sum_{n=1}^{\infty}\frac{(n-1)!}{n\prod_{i=1}^{n}(a+i)}=\int_{0}^{1}x^{a}\left(\sum_{n=1}^{\infty}\frac{(1-x)^{n-1}}{n}\right)dx.$$ Recognizing the power series, this is $$\int_{0}^{1}x^{a}\frac{-\log x}{1-x}dx.$$ Now, expand the power series for $\frac{1}{1-x}$ to get $$\sum_{m=0}^{\infty}-\int_{0}^{1}x^{a+m}\log xdx.$$ It is not difficult to see that $$-\int_{0}^{1}x^{a+m}\log xdx=\frac{1}{(a+m+1)^{2}},$$ so we conclude that $$\sum_{n=1}^{\infty}\frac{(n-1)!}{n\prod_{i=1}^{n}(a+i)}=\sum_{m=1}^{\infty}\frac{1}{(a+m)^{2}}.$$ Hope that helps, Remark: To evaluate the earlier integral, notice that $$-\int_{0}^{1}x^{r}\log xdx=\int_{1}^{\infty}x^{-(r+2)}\log xdx=\int_{0}^{\infty}e^{-u(r+1)}udu=\frac{1}{(r+1)^{2}}\int_{0}^{\infty}e^{-u}udu. $$ Alternatively, as Joriki pointed out, you can just use integration by parts.
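As a numerical sanity check of the identity (a sketch only): the left-hand terms $t_n=(n-1)!/(n\prod_{i=1}^n(a+i))$ satisfy $t_{n+1}=t_n\cdot n^2/((n+1)(a+n+1))$, which avoids overflowing factorials, and the right-hand tail can be estimated by an integral.

```python
def lhs(a, terms=4000):
    # partial sum of sum_{n>=1} (n-1)! / (n * prod_{i=1}^n (a+i)),
    # generated via the term ratio t_{n+1}/t_n = n^2 / ((n+1)(a+n+1))
    total, term = 0.0, 1.0 / (a + 1.0)   # term for n = 1
    for n in range(1, terms + 1):
        total += term
        term *= n * n / ((n + 1) * (a + n + 1))
    return total

def rhs(a, terms=200000):
    # partial sum of sum_{k>=1} 1/(a+k)^2 plus an integral tail estimate
    return sum(1.0 / (a + k) ** 2 for k in range(1, terms + 1)) + 1.0 / (a + terms + 0.5)
```

For instance `lhs(1.0)` and `rhs(1.0)` both approximate $\pi^2/6 - 1$, consistent with the closed form above.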
{ "language": "en", "url": "https://math.stackexchange.com/questions/75681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 2, "answer_id": 0 }
Conditional expectation of $\max(X,Y)$ and $\min(X,Y)$ when $X,Y$ are iid and exponentially distributed I am trying to compute the conditional expectation $$E[\max(X,Y) | \min(X,Y)]$$ where $X$ and $Y$ are two iid random variables with $X,Y \sim \exp(1)$. I already calculated the densities of $\min(X,Y)$ and $\max(X,Y)$, but I failed in calculating the joint density. Is this the right way? How can I compute the joint density then? Or do I have to take another ansatz?
For two independent exponential distributed variables $(X,Y)$, the joint distribution is $$ \mathbb{P}(x,y) = \mathrm{e}^{-x-y} \mathbf{1}_{x >0 } \mathbf{1}_{y >0 } \, \mathrm{d} x \mathrm{d} y $$ Since $x+y = \min(x,y) + \max(x,y)$, and $\min(x,y) \le \max(x,y)$ the joint distribution of $(U,V) = (\min(X,Y), \max(X,Y))$ is $$ \mathbb{P}(u,v) = \mathcal{N} \mathrm{e}^{-u-v} \mathbf{1}_{v \ge u >0 } \, \mathrm{d} u \mathrm{d} v $$ The normalization constant is easy to find as $$ \int_0^\infty \mathrm{d} v \int_0^v \mathrm{d} u \,\, \mathrm{e}^{-u-v} = \int_0^\infty \mathrm{d} v \,\, \mathrm{e}^{-v} ( 1 - \mathrm{e}^{-v} ) = 1 - \frac{1}{2} = \frac{1}{2} = \frac{1}{\mathcal{N}} $$ Thus the conditional expectation we seek to find is found as follows (assuming $u>0$): $$ \mathbb{E}(\max(X,Y) \vert \min(X,Y) = u) = \frac{\int_0^\infty v \mathrm{d} P(u,v)}{\int_u^\infty \mathrm{d} P(u,v)} = \frac{\int_u^\infty \mathcal{N} v \mathrm{e}^{-u-v} \mathrm{d} v}{\int_u^\infty \mathcal{N} \mathrm{e}^{-u-v} \mathrm{d} v} = 1 + u $$
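By memorylessness one can also see the answer directly: given $\min(X,Y)=u$, the excess of the larger variable over $u$ is again $\exp(1)$, so the conditional mean is $u+1$. A quick Monte Carlo check, conditioning on a small band around $u$ (band width and sample size are arbitrary choices of mine):

```python
import random

random.seed(0)
u, band, hits = 0.7, 0.05, []
for _ in range(200000):
    x, y = random.expovariate(1.0), random.expovariate(1.0)
    if abs(min(x, y) - u) < band:
        hits.append(max(x, y))
estimate = sum(hits) / len(hits)   # should be close to 1 + u = 1.7
```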
{ "language": "en", "url": "https://math.stackexchange.com/questions/75732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 1 }
Showing $f^{-1}$ exists where $f(x) = \frac{x+2}{x-3}$ Let $f(x) = \dfrac{x + 2 }{x - 3}$. There's three parts to this question: * *Find the domain and range of the function $f$. *Show $f^{-1}$ exists and find its domain and range. *Find $f^{-1}(x)$. I'm at a loss for #2, showing that the inverse function exists. I can find the inverse by solving the equation for $x$, showing that it exists without just solving for the inverse. Can someone point me in the right direction?
It is a valid way to find the inverse by solving for x first and then verify that $f^{-1}(f(x))=x$ for all $x$ in your domain. It is quite preferable to do it here because you need it for 3. anyways.
{ "language": "en", "url": "https://math.stackexchange.com/questions/75839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Second countability and products of Borel $\sigma$-algebras We know that the Borel $\sigma$-algebra of the Cartesian product space (with the product topology) of two topological spaces is equal to the product of the Borel $\sigma$-algebras of the factor spaces. (The product $\sigma$-algebra can be defined via pullbacks of projection maps...) When one upgrades the above statement to a product of a countable family of topological spaces, the analagous result, namely that the Borel $\sigma$-algebra is the the product Borel $\sigma$-algebra, is conditioned by the topological spaces being second countable. Why? My question is this: how and why does second countability make its appearance when we upgrade the finite product to countable product? (Second countability means that the base is countable, not just locally...) My difficulty is that I do not see how suddenly second countability is important when we pass from finite to countable products.
This is not even true for a product of two spaces: see this Math Overflow question. To rephrase, second countability can be important even for products of two topological spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/75890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Expected value of the stochastic integral $\int_0^t e^{as} dW_s$ I am trying to calculate a stochastic integral $\mathbb{E}[\int_0^t e^{as} dW_s]$. I tried breaking it up into a Riemann sum $\mathbb{E}[\sum e^{as_{t_i}}(W_{t_i}-W_{t_{i-1}})]$, but I get expected value of $0$, since $\mathbb{E}(W_{t_i}-W_{t_{i-1}}) =0$. But I think it's wrong. Thanks! And I want to calculate $\mathbb{E}[W_t \int_0^t e^{as} dW_s]$ as well, I write $W_t=\int_0^t dW_s$ and get $\mathbb{E}[W_t \int_0^t e^{as} dW_s]=\mathbb{E}[\int_0^t e^{as} dW_s]$. Is that ok? ($W_t$ is brownian motion.)
The expectation of the Ito integral $\mathbb{E}( \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s )$ is zero as George already said. To compute $\mathbb{E}( W_t \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s )$, write $W_t = \int_0^t \mathrm{d} W_s$. Then use Ito isometry: $$ \mathbb{E}( W_t \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s ) = \mathbb{E}\left( \int_0^t \mathrm{d} W_s \cdot \int_0^t \mathrm{e}^{a s} \mathrm{d} W_s \right) = \int_0^t (1 \cdot \mathrm{e}^{a s}) \mathrm{d} s = \frac{\mathrm{e}^{a t} - 1}{a}. $$
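A Monte Carlo sanity check of the value $(\mathrm{e}^{at}-1)/a$, discretizing the Itô integral with left-endpoint sums (the parameter choices are mine, picked for speed):

```python
import math
import random

random.seed(1)
a, t, steps, paths = 0.5, 1.0, 100, 10000
dt = t / steps
acc = 0.0
for _ in range(paths):
    w = integral = 0.0
    for i in range(steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        integral += math.exp(a * i * dt) * dw   # Ito sum: integrand at left endpoint
        w += dw
    acc += w * integral
mc = acc / paths
exact = (math.exp(a * t) - 1) / a               # about 1.2974 for these parameters
```

Left-endpoint evaluation matters here: it is what makes the discrete sum an approximation of the Itô (rather than Stratonovich) integral.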
{ "language": "en", "url": "https://math.stackexchange.com/questions/75955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
The tricky time complexity of the permutation generator I ran into tricky issues in computing time complexity of the permutation generator algorithm, and had great difficulty convincing a friend (experienced in Theoretical CS) of the validity of my reasoning. I'd like to clarify this here. Tricky complexity question Given a positive integer $n$, what is the time complexity of generating all permutations on the set $[n]=\{1,2,..,n\}$? Friend's reasoning Any algorithm to generate all permutations of $[n]$ takes $\Omega(n!)$ time. This is a provable , super-exponential lower bound, [edited ]hence the problem is in EXPTIME. My reasoning The above reasoning is correct, except that one should compute the complexity with respect to the number of expected output bits. Here, we expect $n!$ numbers in the output, and each can be encoded in $\log n$ bits; hence we expect $b=O(n!\log n)$ output bits. A standard algorithm to traverse all $n!$ permutations will take a polynomial time overhead i.e. it will execute in $s(n)=O(n!n^k)$ time, hence we will need $t(n)=b(n)+s(n) = O(n!(\log n + n^k)) $ time in all. Since $b(n)$ is the number of output bits, we will express $t(n)$ as a function of $b(n)$. To do so, note that $n^k \approx (n!)^{k/n}$ using $n! \approx n^n$; so $s(n)=O( b(n) (b(n))^{k/n}) = O(b^2(n) )$ . Hence we have a polynomial time algorithm in the number of output bits, and the problem should be in $P$, not in say EXPTIME. Main Question : Whose reasoning is correct, if at all? Note I raised this problem here because I had a bad experience at StackOverflow with a different tricky time complexity problem; and this is certainly not suited for Cstheory.SE as it isn't research level.
Your friend's bound is rather weak. Let the input number be $x$. Then the output is $x!$ permutations, but the length of each permutation isn't $x$ bits, as you claim, but $\Theta(\lg(x!)) = \Theta(x \lg x)$ bits. Therefore the total output length is $\Theta(x! \;x \lg x)$ bits. But, as @KeithIrwin has already pointed out, complexity classes work in terms of the size of the input. An input value of $x$ has size $n=\lg x$, so the generation is $\Omega(2^n! \;2^n n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/76008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Homeomorphism between two spaces I am asked to show that $(X_{1}\times X_{2}\times \cdots\times X_{n-1})\times X_{n}$ is homeomorphic to $X_{1}\times X_{2}\times \cdots \times X_{n}$. My guess is that the Identity map would work but I am not quite sure. I am also wondering if I could treat the the set $(X_{1}\times X_{2}\times \cdots\times X_{n-1})\times X_{n}$ as the product of two sets $X_{1}\times X_{2}\times \cdots\times X_{n-1}$ and $X_{n}$ so that I could use the projection maps but again I am not sure exactly how to go about this. Can anyone help me?
Let us denote $A = X_1\times \cdots \times X_{n-1}$ and $X = X_{1}\times \cdots\times X_{n-1}\times X_n$. The box topology $\tau_A$ on $A$ is defined by the basis of open product sets: $$ \mathcal B(A) = \{B_1\times\cdots \times B_{n-1}:B_i \text{ is open in } X_i,1\leq i\leq n-1\}. $$ The box topology $\tau_X$ on $X$ is defined by the basis: $$ \mathcal B(X) = \{B_1\times\cdots\times B_{n}:B_i \text{ is open in } X_i,1\leq i\leq n\}. $$ Finally, let $\tau'$ denote the product topology on $A\times X_n$, with basis $\{C\times B_n : C\in\tau_A,\ B_n\text{ open in }X_n\}$. Let us follow Henning and put $f:A\times X_n\to X$ as $$f((x_1,\ldots,x_{n-1}),x_n) = (x_1,\ldots,x_n)$$ so $$ f^{-1}(x_1,\ldots,x_n) = ((x_1,\ldots,x_{n-1}),x_n). $$ Clearly, it is a bijection. Then we should check that $f$ carries $\tau'$-open sets to $\tau_X$-open sets and that $f^{-1}$ does the same in the other direction. Let us check it: * *if $B\in\tau_X$ then $$ f^{-1}(B) = \bigcup\limits_{\alpha}(B_{1,\alpha}\times\cdots\times B_{n-1,\alpha})\times B_{n,\alpha}\in \tau'$$ since $B_{1,\alpha}\times\cdots\times B_{n-1,\alpha}\in \tau_A$. *if $B\in \tau'$ then $$ B = \bigcup\limits_\alpha C_\alpha \times B_{n,\alpha} $$ where $C_\alpha \in \tau_A$. But we know the basis for the latter topology, so $$ C_\alpha = \bigcup\limits_\beta C_{1,\alpha,\beta}\times\cdots\times C_{n-1,\alpha,\beta} $$ where $C_{i,\alpha,\beta}$ are open in $X_i$, here $1\leq i\leq n-1$. Finally we substitute these expressions and get $$ f(B) = \bigcup\limits_{\alpha,\beta}C_{1,\alpha,\beta}\times\cdots\times C_{n-1,\alpha,\beta}\times B_{n,\alpha}\in \tau_X, $$ a union of basic open sets of $\tau_X$ (there is no need to merge the unions over $\alpha$ and $\beta$ into a single product; a union of basic open sets is already open).
{ "language": "en", "url": "https://math.stackexchange.com/questions/76087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If G is a group of order n=35, then it is cyclic I've been asked to prove this. In class we proved this when $n=15$, but our approached seemed unnecessarily complicated to me. We invoked Sylow's theorems, normalizers, etc. I've looked online and found other examples of this approach. I wonder if it is actually unnecessary, or if there is something wrong with the following proof: If $|G|=35=5\cdot7$ , then by Cauchy's theorem, there exist $x,y \in G$ such that $o(x)=5$, $o(y)=7$. The order of the product $xy$ is then $\text{lcm}(5,7)=35$. Since we've found an element of $G$ of order 35, we conclude that $G$ is cyclic. Thanks.
Another explicit example: Consider $$ A = \left( \begin{array}{cc} 1 & -1 \\ 0 & -1 \end{array} \right), \quad \text{and} \quad B = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right). $$ Then, $A^2 = B^2 = I$, but $$ AB = \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right) $$ has infinite order. It should also be mentioned that if $x$ has order $n$ and $y$ has order $m$, and $x$ and $y$ commute: $xy = yx$, then the order of $xy$ divides $\text{lcm}(m,n)$, though the order of $xy$ is not $\text{lcm}(m,n)$ in general. For example, if an element $g \in G$ has order $n$, then $g^{-1}$ also has order $n$, but $g g^{-1}$ has order $1$. Joriki's example also provides a scenario where the order of $xy$ is not $\text{lcm}(m,n)$ in general.
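These matrix claims are easy to verify mechanically; a small sketch with a hand-rolled $2\times2$ multiply (helper names mine): $A^2=B^2=I$, while $(AB)^n$ is the shear $\begin{pmatrix}1&n\\0&1\end{pmatrix}$, which never returns to the identity.

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [0, -1]]
B = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

AB = matmul(A, B)                       # [[1, 1], [0, 1]], a shear
p, returns_to_identity = [row[:] for row in AB], False
for _ in range(100):                    # check the first 100 powers of AB
    if p == I2:
        returns_to_identity = True
        break
    p = matmul(p, AB)
```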
{ "language": "en", "url": "https://math.stackexchange.com/questions/76112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 7, "answer_id": 4 }
What would be the radius of convergence of $\sum\limits_{n=0}^{\infty} z^{3^n}$? I know how to find the radius of convergence of a power series $\sum\limits_{n=0}^{\infty} a_nz^n$, but how does this apply to the power series $\sum\limits_{n=0}^{\infty} z^{3^n}$? Would the coefficients $a_n=1$, so that one may apply D'Alembert's ratio test to determine the radius of convergence? I would appreciate any input that would be helpful.
What do you do for the cosine and sine series? There, you cannot use the Ratio Test directly because every other coefficient is equal to $0$. Instead, we do the Ratio Test on the subsequence of even (resp. odd) terms. You can do the same here. We have $a_{3^n}=1$ for all $n$, and $a_j=0$ for $j$ not a power of $3$. Define $b_k = z^{3^k}$. Then we are trying to determine the convergence of the series $\sum b_k$, so using the Ratio Test we have: $$\lim_{n\to\infty}\frac{|b_{n+1}|}{|b_{n}|} = \lim_{n\to\infty}\frac{|z|^{3^{n+1}}}{|z|^{3^{n}}} = \lim_{n\to\infty}|z|^{3^{n+1}-3^n} = \lim_{n\to\infty}|z|^{2\times3^{n}} = \left\{\begin{array}{cc} 0 & \text{if }|z|\lt 1\\ 1 & \text{if }|z|=1\\ \infty &\text{if }|z|\gt 1 \end{array}\right.$$ So the radius of convergence is $1$.
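A quick numerical illustration of that ratio computation (a sketch, not part of the argument): at $z=0.9$, consecutive terms of $\sum z^{3^n}$ shrink by exactly the factor $|z|^{2\cdot 3^n}$ derived above.

```python
z = 0.9
exponents = [3 ** n for n in range(1, 6)]              # 3, 9, 27, 81, 243
terms = [z ** e for e in exponents]
ratios = [b / a for a, b in zip(terms, terms[1:])]
predicted = [z ** (2 * 3 ** n) for n in range(1, 5)]   # |z|^{2 * 3^n}
max_err = max(abs(r - p) for r, p in zip(ratios, predicted))
```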
{ "language": "en", "url": "https://math.stackexchange.com/questions/76173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Infinitely many $n$ such that $p(n)$ is odd/even? We denote by $p(n)$ the number of partitions of $n$. There are infinitely many integers $m$ such that $p(m)$ is even, and infinitely many integers $n$ such that $p(n)$ is odd. It might be proved by the Euler's Pentagonal Number Theorem. Could you give me some hints?
Hint: Look at the pentagonal number theorem and the recurrence relation it yields for $p(n)$, and consider what would happen if $p(n)$ had the same parity for all $n\gt n_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/76226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Lie bracket and covariant derivatives I came across the following equality $[\text{grad} f, X] = \nabla_{\text{grad} f} X + \nabla_X \text{grad} f$ Is this true, and how can I prove this (without coordinates)?
No. Replace all three occurrences of the gradient by any vector field, call it $W,$ but then replace the plus sign on the right hand side by a minus sign, and you have the definition of a torsion-free connection, $$ \nabla_W X - \nabla_X W = [W,X].$$ If, in addition, there is a positive definite metric, the Levi-Civita connection is defined to be torsion-free and satisfy the usual product rule for derivatives, in the guise of $$ X \, \langle V,W \rangle = \langle \nabla_X V, W \,\rangle + \langle V, \, \nabla_X W \, \rangle. $$ Here $\langle V,W \rangle$ is a smooth function, writing $X$ in front of it means taking the derivative in the $X$ direction. Once you have such a connection, it is possible to define the gradient of a function, for any smooth vector field $W$ demand $$ W(f) = df(W) = \langle \, W, \, \mbox{grad} \, f \, \rangle $$ Note that physicists routinely find use for connections with torsion. Also, $df$ (the gradient) comes from the smooth structure, the connection needs more.
{ "language": "en", "url": "https://math.stackexchange.com/questions/76282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove sequence $a_n=n^{1/n}$ is convergent How to prove that the sequence $a_n=n^{1/n}$ is convergent using definition of convergence?
Well, the easiest proof is that the sequence is decreasing from $n=3$ on and bounded below (by 1); thus it converges by the Monotone Convergence Theorem... The proof from definition of convergence goes like this: A sequence $a_{n}$ converges to a limit L in $\mathbb{R}$ if and only if $\forall \epsilon > 0 $, $\exists N\in\mathbb{N}$ such that $\left | L - a_{n} \right | < \epsilon$ for all $n \geq N $. The proposition: $\lim_{n\to\infty} n^{1/n} = 1 $ Proof: Let $\epsilon > 0$ be given. By the binomial theorem, $(1+\epsilon)^n \geq \binom{n}{2}\epsilon^2 = \frac{n(n-1)}{2}\epsilon^2$ for $n \geq 2$, and by the Archimedean property of the real numbers there exists $P \in \mathbb{N}$ with $P \geq 3$ and $P > 1 + 2/\epsilon^2$; for such $P$ we get $(1+\epsilon)^P > P$, that is, $P^{1/P} < 1 + \epsilon$. Then, since $f(x)=x^{1/x}$ is decreasing for $x>e$ (trivial and left to the reader :D), take any $n\in\mathbb{N}$ such that $n\geq P$ and observe that $1 \leq n^{1/n} \leq P^{1/P} < 1 + \epsilon$, and so $\left | 1 - a_{n} \right | < \epsilon$ whenever $n\geq P$. Thus $a_{n}$ converges (to 1).
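A numerical illustration of the two facts in play, monotone decrease for $n\ge 3$ (the function $x^{1/x}$ decreases for $x>e$) and convergence to $1$ (a sketch only; it proves nothing):

```python
seq = [n ** (1.0 / n) for n in range(3, 2001)]
decreasing = all(a >= b for a, b in zip(seq, seq[1:]))
final_gap = abs(seq[-1] - 1)   # distance from the limit at n = 2000
```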
{ "language": "en", "url": "https://math.stackexchange.com/questions/76330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 1 }
Simplify an expression to show equivalence I am trying to simplify the following expression I have encountered in a book $\sum_{k=0}^{K-1}\left(\begin{array}{c} K\\ k+1 \end{array}\right)x^{k+1}(1-x)^{K-1-k}$ and according to the book, it can be simplified to this: $1-(1-x)^{K}$ I wonder how is it done? I've tried to use Mathematica (to which I am new) to verify, by using $\text{Simplify}\left[\sum _{k=0}^{K-1} \left(\left( \begin{array}{c} K \\ k+1 \end{array} \right)*x{}^{\wedge}(k+1)*(1-x){}^{\wedge}(K-1-k)\right)\right]$ and Mathematica returns $\left\{\left\{-\frac{K q \left((1-q)^K-q^K\right)}{-1+2 q}\right\},\left\{-\frac{q \left(-(1-q)^K+(1-q)^K q+(1+K) q^K-(1+2 K) q^{1+K}\right)}{(1-2 q)^2}\right\}\right\}$ which I cannot quite make sense of it. To sum up, my question is two-part: * *how is the first expression equivalent to the second? *how should I interpret the result returned by Mathematica, presuming I'm doing the right thing to simplify the original formula? Thanks a lot!
Simplify[PowerExpand[Simplify[Sum[Binomial[K, k + 1]*x^(k + 1)*(1 - x)^(K - k - 1), {k, 0, K - 1}], K > 0]]] works nicely. The key is in the use of the second argument of Simplify[] to add assumptions about a variable. and using PowerExpand[] to distribute powers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/76378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
What row am I on in a matrix if I only have the column index of an element, and the total number of columns and rows? Assume a 3 x 3 grid, but the elements are actually contained in a zero-based index array. So I know if I'm on element 5, I'm in row 2, and if I'm on element 7 then I'm in row 3. How would I actually calculate this? Please feel free to modify the title's terminology if I'm not using it correctly. Also, I would very much appreciate the logic used to determine the formula for the answer, as I'm not a mathematician.
The logic used is fairly simple. If you have a 3 by 3 grid, starting at 0, the elements would look as: 0 1 2 3 4 5 6 7 8 You count from left to right until you have filled as many entries as the grid has columns, then start again with the next row. Mathematically, the row is floor(elementNumber / numberOfColumns): you divide by the number of columns, since each full row holds that many elements (in a square grid the row and column counts happen to coincide, which hides the distinction). I.E.: element 4 would be floor(4 / 3) = floor(1.33) = 1, which is actually row 2 (0 indexed, remember?) The column would be elementNumber mod numberOfColumns. I.E., element 4: 4 mod 3 = 1, which is actually column 2 (0 indexed)
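In code this is one integer division and one remainder (zero-indexed throughout, matching the array; the question's "row 2" and "row 3" are these values plus one):

```python
def row_col(index, num_cols):
    # zero-based (row, column) of a zero-based element index in a row-major grid
    return index // num_cols, index % num_cols
```

For the 3-column grid: `row_col(5, 3)` gives `(1, 2)` and `row_col(7, 3)` gives `(2, 1)`, i.e. rows 2 and 3 when counting from one.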
{ "language": "en", "url": "https://math.stackexchange.com/questions/76449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Cauchy's Theorem (Groups) Question? I'm afraid at this ungodly hour I've found myself a bit stumped. I'm attempting to answer the following homework question: If $p_1,\dots,p_s$ are distinct primes, show that an abelian group of order $p_1p_2\cdots p_s$ must be cyclic. Cauchy's theorem is the relevant theorem to the chapter that precedes this question... So far (and quite trivially), I know the element in question has to be the product of the elements with orders $p_1,\dots, p_s$ respectively. I've also successfully shown that the order of this element must divide the product of the $p$'s. However, showing that the order is exactly this product (namely that the product also divides the orders) has proven a bit elusive. Any helpful clues/hints are more than welcome and very much appreciated!
Start by proving that, in an abelian group, if $g$ has order $a$, and $h$ has order $b$, and $\gcd(a,b)=1$, then $gh$ has order $ab$. Clearly, $(gh)^{ab}=1$, so $gh$ has order dividing $ab$. Now show that if $(gh)^s=1$ for some $s\lt ab$ then you can find some $r$ such that $(gh)^{rs}$ is either a power of $g$ or of $h$ and not the identity. Details left to the reader.
{ "language": "en", "url": "https://math.stackexchange.com/questions/76629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that: set $\{1, 2, 3, ..., n - 1\}$ is group under multiplication modulo $n$? Prove that: The set $\{1, 2, 3, ..., n - 1\}$ is a group under multiplication modulo $n$ if and only if $n$ is a prime number without using Euler's phi function.
Assume that $H=\{1,2,3,\ldots,n-1\}$ is a group under multiplication modulo $n$. Suppose that $n$ is not prime. Then $n$ is composite, i.e. $n=pq$ for some $1<p,q<n$. This implies that $pq \equiv 0 \pmod n$, so the product of $p$ and $q$ in $H$ would have to be $0$; but $0$ is not in $H$, contradicting closure. Hence $n$ must be prime. Conversely, suppose $n$ is prime. Then $\gcd(a,n)=1$ for every $a$ in $H$, so by Bézout's identity there exist integers $x,y$ with $ax+ny=1$, i.e. $ax\equiv 1\pmod n$; replacing $x$ by its residue modulo $n$ (which lies in $H$, since $ax\equiv 1$ forces $x\not\equiv 0$), we see that every element of $H$ has an inverse in $H$. Closure also holds: if $a,b\in H$ then $n\nmid ab$ because $n$ is prime, so $ab \bmod n \in H$. We conclude that $H$ is a group, since the identity $1$ is in $H$ and multiplication modulo $n$ is associative.
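For small $n$ the equivalence can be checked by brute force; a sketch (function names mine) that tests closure and inverses directly against primality:

```python
def is_group_mod(n):
    # is {1, ..., n-1} closed under multiplication mod n, with all inverses?
    elems = set(range(1, n))
    closed = all((a * b) % n in elems for a in elems for b in elems)
    inverses = all(any((a * b) % n == 1 for b in elems) for a in elems)
    return closed and inverses

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

agree = all(is_group_mod(n) == is_prime(n) for n in range(2, 40))
```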
{ "language": "en", "url": "https://math.stackexchange.com/questions/76671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 0 }
How many correct answers does it take to fill the Trivial Pursuit receptacle? My friends and I likes to play Trivial Pursuit without using the board. We play it like this: * *Throw a die to determine what color you get to answer. *Ask a question, if the answer is correct you get a point. *If enough points are awarded you win We would like to modify the game as to include the colors. There are 6 colors. The game could then be won by completing all colors or answering enough questions. We would like the the effort to complete it by numbers to be similar to that of completing it by colors. So the required number of correct answers should be the same as where it is likely that all the colors has been collected. What is the number of correct answers one needs to acquire to make it probable, P>=0.5, that all colors are collected? We dabbled in a few sums before realizing this was over our heads.
This is the coupon collector's problem. For six, on average you will need $6/6+6/5+6/4+6/3+6/2+6/1=14.7$ correct answers, but the variability is high. This is the expectation, not the number to have 50% chance of success.
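The asker's actual target, the smallest number $m$ of correct answers with $P\ge 0.5$, can be computed exactly by inclusion-exclusion rather than from the mean; a short sketch (the function name is mine):

```python
from math import comb

def p_all_colors(m, colors=6):
    # inclusion-exclusion: probability that m uniform draws hit every color
    return sum((-1) ** k * comb(colors, k) * ((colors - k) / colors) ** m
               for k in range(colors + 1))

m = 1
while p_all_colors(m) < 0.5:
    m += 1
# m is now the smallest number of correct answers with P >= 0.5
```

This comes out to $m=13$ (with $P(13)\approx 0.514$ and $P(12)\approx 0.438$), a bit below the mean of 14.7, as expected for a right-skewed distribution.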
{ "language": "en", "url": "https://math.stackexchange.com/questions/76720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Measure-theoretical isomorphism between interval and square What is an explicit isomorphism between the unit interval $I = [0,1]$ with Lebesgue measure, and its square $I \times I$ with the product measure? Here isomorphism means a measure-theoretic isomorphism, which is one-one outside some set of zero measure.
For $ x \in [0,1]$, let $x = .b_1 b_2 b_3 \ldots$ be its base-2 expansion (the choice in the ambiguous cases doesn't matter, because that's a set of measure 0). Map this to $(.b_1 b_3 b_5 \ldots,\ .b_2 b_4 b_6 \ldots) \in [0,1]^2$
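The digit-splitting map is easy to demonstrate on finite bit strings; a minimal sketch (helper names mine), ignoring the measure-zero ambiguity mentioned above:

```python
def split_digits(bits):
    # digits in positions 1, 3, 5, ... form the first coordinate,
    # positions 2, 4, 6, ... the second (zero-indexed slices below)
    return bits[0::2], bits[1::2]

def interleave(xs, ys):
    merged = []
    for a, b in zip(xs, ys):
        merged += [a, b]
    return merged

bits = [1, 0, 0, 1, 1, 1]        # a base-2 expansion truncated to 6 digits
xs, ys = split_digits(bits)      # ([1, 0, 1], [0, 1, 1])
round_trip = interleave(xs, ys)  # recovers the original digits
```

The exact round trip on digit strings is what makes the map a bijection off the measure-zero set of ambiguous expansions.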
{ "language": "en", "url": "https://math.stackexchange.com/questions/76782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Solution to an ODE, can't follow a step of a Stability Example In my course notes, we are working on the stability of solutions, and in one example we start out with: Consider the IVP on $(-1,\infty)$: $x' = \frac{-x}{1 + t}$ with $x(t_{0}) = x_{0}$. Integrating, we get $x(t) = x(t_{0})\frac{1 + t_{0}}{1 + t}$. I can't produce this integration but the purpose of the example is to show that $x(t)$ is uniformly stable, and asymptotically stable, but not uniformly asymptotically stable. But I can't verify the initial part and don't want to just skip over it. Can someone help me with the details here? Update: the solution has been pointed out to me and is in the answer below by Bill Cook (Thanks!).
Separate variables to get $\int 1/x \,dx = \int -1/(1+t)\,dt$. Then $\ln|x|=-\ln|1+t|+C$. Exponentiate both sides to get $|x| = e^{-\ln|1+t|+C}$, and so $|x|=e^{\ln|(1+t)^{-1}|}e^C$. Relabel the constant, drop the absolute values, and recover the lost zero solution (lost due to the division by $x$) to get $x=Ce^{\ln|(1+t)^{-1}|}=C(1+t)^{-1}$. Finally, plug in the IC: $x_0 = x(t_0)=C(1+t_0)^{-1}$, so that $C=x_0(1+t_0)$, and the solution is $$ x(t) = x_0 \frac{1+t_0}{1+t} $$
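As a numerical sanity check on the final formula (my addition, with made-up values $t_0=1$, $x_0=2$), one can verify $x'=-x/(1+t)$ with a central difference:

```python
def x(t, t0=1.0, x0=2.0):
    # the solution found above: x(t) = x0 * (1 + t0) / (1 + t)
    return x0 * (1 + t0) / (1 + t)

def residual(t, h=1e-6):
    # |x'(t) + x(t)/(1+t)|, with x'(t) estimated by a central difference
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    return abs(deriv + x(t) / (1 + t))
```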
{ "language": "en", "url": "https://math.stackexchange.com/questions/76844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Adjunction of a root to a UFD Let $R$ be a unique factorization domain which is a finitely generated $\Bbbk$-algebra for an algebraically closed field $\Bbbk$. For $x\in R\setminus\{0\}$, let $y$ be an $n$-th root of $x$. My question is, is the ring $$ A := R[y] := R[T]/(T^n - x) $$ a unique factorization domain as well? Edit: I know the classic counterexample $\mathbb{Z}[\sqrt{5}]$, but $\mathbb{Z}$ it does not contain an algebraically closed field. I am wondering if that changes anything. Edit: As Gerry's Answer shows, this is not true in general. What if $x$ is prime? What if it is a unit?
I think you can get a counterexample to the unit question, even in characteristic zero, and even in an integral domain (in contrast to Georges' example), although there are a few things that need checking. Let $R={\bf C}[x,1/(x^2-1)]$, so $1-x^2$ is a unit in $R$. Then $$(1+\sqrt{1-x^2})(1-\sqrt{1-x^2})=x\cdot x$$ It remains to check that

* $R$ is a UFD,
* $1\pm\sqrt{1-x^2}$ and $x$ are irreducible in $R[\sqrt{1-x^2}]$,
* $1\pm\sqrt{1-x^2}$ and $x$ are not associates in $R[\sqrt{1-x^2}]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/76895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Let $f:\mathbb{C} \rightarrow \mathbb{C}$ be entire and $\exists M \in\mathbb{R}: $Re$(f(z))\geq M$ $\forall z\in\mathbb{C}$. Prove $f(z)=$constant Possible Duplicate: Liouville's theorem problem Let $f:\mathbb{C} \rightarrow \mathbb{C}$ be entire and suppose $\exists M \in\mathbb{R}: $Re$(f(z))\geq M$ $\forall z\in\mathbb{C}$. How would you prove the function is constant? I am approaching it by attempting to show it is bounded then by applying Liouville's Theorem. But have not made any notable results yet, any help would be greatly appreciated!
Consider the function $\displaystyle g(z)=e^{-f(z)}$. Note then that $\displaystyle |g(z)|=e^{-\text{Re}(f(z))}\leqslant \frac{1}{e^M}$. Since $g(z)$ is entire we may conclude that it is constant (by Liouville's theorem). Thus, $f$ must be constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/76958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculate the expansion of $(x+y+z)^n$ The question that I have to solve is to answer the question "How many terms are in the expansion?". Depending on how you define "term" you can obtain two different formulas for the number of terms in the expansion of $(x+y+z)^n$. Working with binomial coefficients I found that the general relation is $\binom{n+2}{n}$. However I'm having some difficulty providing a proof of my statement. The other way of seeing "term" is simply as the number of combinations you can take out of $(x+y+z)^n$, which would result in $3^n$. Depending on which is the right interpretation, how can I provide a proof of it?
For the non-trivial interpretation, you're looking for non-negative solutions of $a + b + c = n$ (each of these corresponds to a term $x^a y^b z^c$). Code each of these solutions as $1^a 0 1^b 0 1^c$, for example $(2,3,5)$ would be coded as $$110111011111.$$ Now it should be easy to see why the answer is $\binom{n+2}{n}$.
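The $\binom{n+2}{n}=\binom{n+2}{2}$ count is easy to confirm by brute force for small $n$; a throwaway Python check (mine, not the answerer's):

```python
from math import comb

def n_terms(n):
    # number of monomials x^a y^b z^c with a + b + c = n:
    # choose a and b freely, then c = n - a - b is forced
    return sum(1 for a in range(n + 1) for b in range(n + 1 - a))

counts = [n_terms(n) for n in range(8)]
formula = [comb(n + 2, 2) for n in range(8)]
```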
{ "language": "en", "url": "https://math.stackexchange.com/questions/77009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
what is the tensor product $\mathbb{H}\otimes_{\mathbb{R}}\mathbb{H}$ I'm looking for a simpler way of thinking about the tensor product $\mathbb{H}\otimes_{\mathbb{R}}\mathbb{H}$, i.e. a better-known algebra which is isomorphic to it. I have built the algebra and played with it for a bit, but still can't seem to see any resemblance to anything I already know. Thanks!
Hint:

(1) Show that the map $$H \otimes_R H \rightarrow End_R(H), x \otimes y \mapsto (a \mapsto xay)$$ is an isomorphism of $R$-vector spaces (I don't know the simplest way to do this, but try for example to look at a basis; the dimension is 16...).

(2) Denote by $H^{op}$ the $R$-algebra $H$ where the multiplication is reversed (i.e. $x \times_{H^{op}} y = y \times_{H} x$). Denote by $(1,i,j,k)$ the usual basis. Show that the map $$H \rightarrow H^{op}, 1 \mapsto 1, i \mapsto i, j \mapsto k, k \mapsto j$$ is an isomorphism of $R$-algebras.

(3) Show that the map in (1) $$H \otimes_R H^{op} \rightarrow End_R(H), x \otimes y \mapsto (a \mapsto xay)$$ is an isomorphism of $R$-algebras.

(4) Find an isomorphism $H \otimes_R H \rightarrow M_4(R)$ of $R$-algebras.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
The norm of $x\in X$, where $X$ is a normed linear space Question: Let $x\in X$, $X$ is a normed linear space and let $X^{*}$ denote the dual space of $X$. Prove that$$\|x\|=\sup_{\|f\|=1}|f(x)|$$ where $f\in X^{*}$. My proof: Let $0\ne x\in X$, using HBT take $f\in X^{*}$ such that $\|f\|=1$ and $f(x)=\|x\|$. Now, $\|x\|=f(x)\le|f(x)|\le\sup_{\|x\|=1}|f(x)|=\sup_{\|f\|=1}|f(x)|$, this implies $$\|x\|\le\sup_{\|f\|=1}|f(x)|\quad (1)$$ Since $f$ is a bounded linear functional $|f(x)|\le\|f\|\|x\|$ for all $x\in X$. Since$\|f\|=1$, $|f(x)|\le\|x\|$ for all $x\in X$. This implies $$\|x\|\ge\sup_{\|f\|=1}|f(x)|\quad(2)$$ Therefore $(1)$ and $(2)$ gives $\|x\|=\sup_{\|f\|=1}|f(x)|$. If $x=0$, the result seems to be trivial, but I am still trying to convince myself. Still I have doubts about my proof, is it correct? Please help. Edit: Please note that, I use the result of the one of the consequences of Hahn-Banach theorem. That is, given a normed linear space $X$ and $x_{0}\in X$ $x_{0}\ne 0$, there exist $f\in X^{*}$ such that $f(x_{0})=\|f\|\|x_{0}\|$
Thanks for the comments. Let's see.... Let $0\ne x\in X$; using the consequence of HBT (analytic form), take $g\in X^{*}$ such that $\|g\|=1$ and $g(x)=\|x\|$. Now, $\|x\|=g(x)\le|g(x)|\le\sup_{\|f\|=1}|f(x)|$, and this implies $$\|x\|\le\sup_{\|f\|=1}|f(x)|\quad (1)$$ Since $f$ is a bounded linear functional (given): $|f(x)|\le\|f\|\|x\|$ for all $x\in X$. For a linear functional $f$ with $\|f\|=1$ we have by definition $|f(x)|\le\|x\|$ for all $x\in X$. This implies $$\|x\|\ge\sup_{\|f\|=1}|f(x)|\quad(2)$$ Therefore $(1)$ and $(2)$ give $\|x\|=\sup_{\|f\|=1}|f(x)|$. If $x=0$, the result is trivial. Any more comments?
{ "language": "en", "url": "https://math.stackexchange.com/questions/77239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Right angles in the clock during a day Can someone provide a solution for this question? Given the hour, minute and second hands, calculate the number of right angles the three hands make pairwise with respect to each other during a day. So it asks for the second and hour, minute and hour, and second and minute pairs. Thanks a lot.
Take two hands: a fast hand that completes $x$ revolutions per day, and a slow hand that completes $y$ revolutions per day. Now rotate the clock backwards, at a rate of $y$ revolutions per day: the slow hand comes to a standstill, and the fast hand slows down to $x-y$ revolutions per day. So the number of times that the hands are at right angles is $2(x-y)$. The three hands make 2, 24, and 1440 revolutions per day, so the total is: $$2\times(24-2) + 2\times(1440-2) + 2\times(1440-24) = 5752$$
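The same arithmetic in code (the layout is mine; a trivial check, not part of the original answer):

```python
revs_per_day = {"hour": 2, "minute": 24, "second": 1440}

# a fast hand at x revolutions/day and a slow hand at y revolutions/day
# are at right angles 2 * (x - y) times per day
pairs = [("minute", "hour"), ("second", "hour"), ("second", "minute")]
right_angles = {(f, s): 2 * (revs_per_day[f] - revs_per_day[s])
                for f, s in pairs}
total = sum(right_angles.values())
```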
{ "language": "en", "url": "https://math.stackexchange.com/questions/77299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How to prove that $\lim\limits_{h \to 0} \frac{a^h - 1}{h} = \ln a$ In order to find the derivative of a exponential function, on its general form $a^x$ by the definition, I used limits. $\begin{align*} \frac{d}{dx} a^x & = \lim_{h \to 0} \left [ \frac{a^{x+h}-a^x}{h} \right ]\\ \\ & =\lim_{h \to 0} \left [ \frac{a^x \cdot a^h-a^x}{h} \right ] \\ \\ &=\lim_{h \to 0} \left [ \frac{a^x \cdot (a^h-1)}{h} \right ] \\ \\ &=a^x \cdot \lim_{h \to 0} \left [\frac {a^h-1}{h} \right ] \end{align*}$ I know that this last limit is equal to $\ln(a)$ but how can I prove it by using basic Algebra and Exponential and Logarithms properties? Thanks
It depends a bit on what you're prepared to accept as "basic algebra and exponential and logarithms properties". Look first at the case where $a$ is $e$. You need to know that $\lim_{h\to0}(e^h-1)/h)=1$. Are you willing to accept that as a "basic property"? If so, then $a^h=e^{h\log a}$ so $$(a^h-1)/h=(e^{h\log a}-1)/h={e^{h\log a}-1\over h\log a}\log a$$ so $$\lim_{h\to0}(a^h-1)/h=(\log a)\lim_{h\to0}{e^{h\log a}-1\over h\log a}=\log a$$
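Not a proof, but a quick numerical check that the difference quotient $(a^h-1)/h$ really settles near $\ln a$ (Python; my addition):

```python
from math import log

def diff_quotient(a, h):
    return (a ** h - 1) / h

# for small h the quotient should be close to ln(a)
errors = [abs(diff_quotient(a, 1e-8) - log(a)) for a in (2.0, 10.0, 0.5)]
```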
{ "language": "en", "url": "https://math.stackexchange.com/questions/77348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Question about two simple problems on covering spaces Here are two problems that look trivial, but that I could not prove. i) If $p:E \to B$ and $j:B \to Z$ are covering maps, and $j$ is such that the preimages of points are finite sets, then the composite is a covering map. I suppose that for this, the neighborhood $U$ that will be evenly covered by the composite will be the same one that is evenly covered by $j$, but I can't prove that the preimage can be written as a disjoint union of open sets homeomorphic to $U$. ii) For this I have no idea what to do, but if I prove that it's injective, I'm done. Let $p:E \to B$ be a covering map, with $E$ path connected and $B$ simply connected; prove that $p$ is a homeomorphism.
Let's call an open neighborhood $U$ of a point $y$ principal (with respect to a covering projection $p: X \to Y$) if its preimage $p^{-1}(U)$ is a disjoint union of open sets, each of which is mapped homeomorphically onto $U$ by $p$. By definition a covering projection is a surjection $p: X \to Y$ such that every point has a principal neighborhood. It is easy to see that if $U$ is a principal neighborhood of a point $y$, then any open neighborhood $U'$ of $y$ with $U' \subset U$ is again principal. i) Let $p: X \to Y$ and $q: Y \to Z$ be covering projections, where $q^{-1}(\{z\})$ is finite for every $z \in Z$. Let $z \in Z$ and let $U$ be a principal neighborhood of $z$. For every point $y \in q^{-1}(\{z\})$ choose a principal neighborhood $V_y$. We can assume that $V_y$ is a subset of the component of $q^{-1}(U)$ corresponding to $y$, possibly replacing $V_y$ with its intersection with that component. Now let $$U' = \bigcap_{y \in q^{-1}(\{z\})}q(V_y),$$ then $U'$ is an open (being the intersection of finitely many open subsets) neighborhood of $z$. It should be easy to verify that $U'$ is principal. ii) Let $e,e' \in E$ with $p(e) = p(e')$ and let $\gamma: I \to E$ be a path from $e$ to $e'$. Now $p \circ \gamma$ is a closed path, and therefore nullhomotopic. Lifting such a homotopy shows that $e=e'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A property of Hilbert sphere Let $X$ be (Edit: a closed convex subset of ) the unit sphere $Y=\{x\in \ell^2: \|x\|=1\}$ in $\ell^2$ with the great circle (geodesic) metric. (Edit: Suppose the diameter of $X$ is less than $\pi/2$.) Is it true that every decreasing sequence of nonempty closed convex sets in $X$ has a nonempty intersection? (A set $S$ is convex in $X$ if for every $x,y\in S$ the geodesic path between $x,y$ is contained in $S$.) (I edited my original question.)
No. For example, let $A_n$ be the subset of $X$ consisting of vectors that are zero in the first $n$ co-ordinates. EDIT: this assumes that when $x$ and $y$ are antipodal, convexity of $S$ containing $x$, $y$ only requires that at least one of the great-circle paths is contained in $S$. If it requires all of them, then the $A_n$ are not convex. t.b. points out in the comments that in this case we can set $A_n$ to consist of all vectors in $X$ that are zero in the first $n$ co-ordinates and non-negative in the remainder.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $\lim \limits_{n \to \infty} \frac{x^n}{n!} = 0$, $x \in \Bbb R$. Why is $$\lim_{n \to \infty} \frac{2^n}{n!}=0\text{ ?}$$ Can we generalize it to any exponent $x \in \Bbb R$? This is to say, is $$\lim_{n \to \infty} \frac{x^n}{n!}=0\text{ ?}$$ This is being repurposed in an effort to cut down on duplicates, see here: Coping with abstract duplicate questions. and here: List of abstract duplicates.
Stirling's formula says that $$ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n, $$ in the sense that $$ \lim_{n \to \infty} \frac{n!}{\sqrt{2 \pi n} \left(\displaystyle\frac{n}{e}\right)^n} = 1, $$ therefore $$ \begin{aligned} \lim_{n \to \infty} \frac{2^n}{n!} & = \lim_{n \to \infty} \frac{2^n}{\sqrt{2 \pi n} \left(\displaystyle\frac{n}{e}\right)^n} = \lim_{n \to \infty} \Bigg[\frac{1}{\sqrt{2 \pi n}} \cdot \frac{2^n}{\left(\displaystyle\frac{n}{e}\right)^n} \Bigg]\\ &= \lim_{n \to \infty} \frac{1}{\sqrt{2 \pi n}} \cdot \lim_{n \to \infty} \left(\frac{2e}{n}\right)^n = 0 \cdot 0 = 0 \end{aligned} $$ Note: You can generalize by replacing $2$ with $x$. Visit: Stirling's approximation.
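Numerically, the Stirling approximation already tracks $2^n/n!$ well for modest $n$; a small Python check (my addition):

```python
from math import e, factorial, pi, sqrt

ns = (5, 10, 20, 40)
exact = [2 ** n / factorial(n) for n in ns]
approx = [2 ** n / (sqrt(2 * pi * n) * (n / e) ** n) for n in ns]
# exact values shrink rapidly toward 0, and the ratio exact/approx stays near 1
```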
{ "language": "en", "url": "https://math.stackexchange.com/questions/77550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86", "answer_count": 15, "answer_id": 12 }
How to prove that $ f(x) = \sum_{k=1}^\infty \frac{\sin((k + 1)!\;x )}{k!}$ is nowhere differentiable This function is continuous; that follows from the Weierstrass M-test. But proving non-differentiability, I think, is too hard. Does someone know how I can prove this? Or at least have a paper with the proof? The function is $$ f(x) = \sum_{k=1}^\infty \frac{\sin((k + 1)!\;x )}{k!}$$ Thanks!
(Edited: handwaving replaced by rigor) For conciseness, define the helper functions $\gamma_k(x)=\sin((k+1)!x)$. Then $f(x)=\sum_k \frac{\gamma_k(x)}{k!}$. Fix an arbitrary $x\in\mathbb R$. We will construct a sequence $(x_n)_n$ such that $$\lim_{n\to\infty} x_n = x \quad\land\quad \lim_{n\to\infty} \left|\frac{f(x_n)-f(x)}{x_n-x}\right| = \infty$$ Such a sequence will directly imply that $f$ is not differentiable at $x$. Let $x'_n$ be the largest number less than $x$ such that $|\gamma_n(x'_n)-\gamma_n(x)|=1$. Let $x''_n$ be the smallest number larger than $x$ such that $\gamma_n(x''_n)=\gamma_n(x'_n)$. One of these, to be determined later, will become our $x_n$. No matter which of the two choices of $x_n$ we make, we have $|x_n-x|<\frac{2\pi}{(n+1)!}$, so $\lim x_n=x$. To estimate the difference quotient, write $$f(x) = \underbrace{\sum_{k=1}^{n-1}\frac{\gamma_k(x)}{k!}}_{p(x)}+ \underbrace{\frac{\gamma_n(x)}{n!}}_{q(x)}+ \underbrace{\sum_{k=n+1}^{\infty} \frac{\gamma_k(x)}{k!}}_{r(x)}$$ and so, $$\underbrace{f(x_n)-f(x)}_{\Delta f} = \underbrace{p(x_n)-p(x)}_{\Delta p} + \underbrace{q(x_n)-q(x)}_{\Delta q} + \underbrace{r(x_n)-r(x)}_{\Delta r}$$ Of these, by construction of $x_n$ we have $|\Delta q| = \frac{1}{n!}$. Also, $r(x)$ is globally bounded by the remainder term in the series $\sum 1/n! = e$, which by Taylor's theorem is at most $\frac{e}{(n+1)!}$. So $|\Delta r| \le \frac{2e}{(n+1)!}$. $\Delta p$ is not dealt with as easily. In some cases it may be numerically larger than $\Delta q$, ruining a simple triangle-inequality based estimate. But it can be tamed by a case analysis:

* If $p$ is strictly monotonic on $[x'_n, x''_n]$, then $p(x'_n)-p(x)$ and $p(x''_n)-p(x)$ will have opposite signs. Since $q(x'_n)=q(x''_n)$, we can choose $x_n$ such that $\Delta p$ and $\Delta q$ have the same sign. Therefore $|\Delta p+\Delta q|\ge|\Delta q|=\frac{1}{n!}$.
* Otherwise, $p$ has an extremum between $x'_n$ and $x''_n$; select $x_n$ such that the extremum is between $x$ and $x_n$. Because $p$ is a finite sum of $C^\infty$ functions, we can bound its second derivative separately for each of its terms: $$\forall t: |p''(t)| \le \sum_{k=1}^{n-1}\left|\frac{\gamma''_k(t)}{k!}\right| \le \sum_{k=1}^{n-1}\frac{(k+1)!^2}{k!} \le \sum_{k=1}^{n-1} (k+1)!(k+1) \le 2n!n $$ Therefore the maximal variation of $p$ in an interval of length $\le\frac{2\pi}{(n+1)!}$ that contains a stationary point is at most $\left(\frac{2\pi}{(n+1)!}\right)^2 2n!n = \frac{8\pi^2n}{(n+1)^2}\frac{1}{n!}$. The $\frac{8\pi^2n}{(n+1)^2}$ factor is less than $1/2$ for $n>16\pi^2$, so for large enough $n$ we have $|\Delta p+\Delta q|\ge \frac{1}{2n!}$. Thus, for large $n$ we always have $$|\Delta f| \ge \frac{1}{2n!} - \frac{2e}{(n+1)!} = \frac{1}{n!}\left(\frac{1}{2}-\frac{2e}{n+1}\right)$$ and therefore $$\left|\frac{f(x_n)-f(x)}{x_n-x}\right| \ge \frac{(n+1)!}{2\pi}|\Delta f| \ge \frac{n+1}{2\pi}\left(\frac{1}{2}-\frac{2e}{n+1}\right) = \frac{n+1}{4\pi}-\frac{e}{\pi} \to \infty$$ as promised.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is the power set of the natural numbers countable? Some explanations: A set S is countable if there exists an injective function $f$ from $S$ to the natural numbers ($f:S \rightarrow \mathbb{N}$). $\{1,2,3,4\}, \mathbb{N},\mathbb{Z}, \mathbb{Q}$ are all countable. $\mathbb{R}$ is not countable. The power set $\mathcal P(A) $ is defined as a set of all possible subsets of A, including the empty set and the whole set. $\mathcal P (\{\})=\{\{\}\}, \mathcal P (\mathcal P(\{\}))=\{\{\}, \{\{\}\}\} $ $\mathcal P(\{1,2\})=\{\{\}, \{1\},\{2\},\{1,2\}\}$ My question is: Is $\mathcal P(\mathbb{N})$ countable? How would an injective function $f:S \rightarrow \mathbb{N}$ look like?
The power set of the natural numbers has the same cardinality as the real numbers. So it is uncountable. In order to be rigorous, here's a proof of this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 3, "answer_id": 2 }
An inequality for graphs In the middle of a proof in a graph theory book I am looking at appears the inequality $$\sum_i {d_i \choose r} \ge n { m /n \choose r},$$ and I'm not sure how to justify it. Here $d_i$ is the degree of vertex $i$ and the sum is over all $n$ vertices. There are $m$ edges. If it is helpful I think we can assume that $r$ is fixed and $n$ is large and $m \approx n^{2 - \epsilon}$ for some $\epsilon > 0$. My only thought is that I can do it if I replace every ${d_i \choose r}$ by $(d_i)^r / r!$, but this seems a little sloppy. Also, for my purposes, I am happy with even a quick-and-dirty proof that $$\sum_i {d_i \choose r} \ge C \, n { m /n \choose r}$$ holds for some constant $C>0$. Motivation: an apparently simple counting argument gives a lower bound on the Turán number of the complete bipartite graph $K_{r,s}$. Source: Erdős on Graphs : His Legacy of Unsolved Problems. In the edition I have, this appears on p.36. Relevant part of the text of the book: 3.3. Turán Numbers for Bipartite Graphs Problem What is the largest integer $m$ such that there is a graph $G$ on $n$ vertices and $m$ edges which does not contain $K_{r,r}$ as a graph? In other words, determine $t(n, K_{r,r})$. .... for $2\le r \le s$ $$t(n,K_{r,s}) < cs^{1/r}n^{2-1/r}+O(n) \qquad (3.2)$$ The proof of (3.2) is by the following counting argument: Suppose a graph $G$ has $m$ edges and has degree sequence $d_1,\ldots,d_n$. The number of copies of stars $S_r$ in $G$ is exactly $$\sum_i \binom{d_i}r.$$ Therefore, there is at least one copy of $K_{r,t}$ contained in $G$ if $$\frac{\sum_i \binom{d_i}r}{\binom nr} \ge \frac{n\binom{m/n}r}{\binom nr}>s$$ which holds if $m\ge cs^{1/r}n^{2-1/r}$ for some appropriate constant $c$ (which depends of $r$ but is independent on $n$). This proves (3.2)
Fix an integer $r \geq 1$. Then the function $f: \mathbb R^{\geq 0} \to \mathbb R^{\geq 0}$ given by $$ f(x) := \binom{\max \{ x, r-1 \}}{r} $$ is both monotonically increasing and convex.* So applying Jensen's inequality to $f$ for the $n$ numbers $d_1, d_2, \ldots, d_n$, we get $$ \sum_{i=1}^n f(d_i) \geq n \ f\left(\frac{1}{n} \sum_{i=1}^n d_i \right). \tag{1} $$ The claim in the question follows by simplifying the left and right hand sides:

* Notice that for any integer $d$, we have $f(d) = \binom{d}{r}$. (If $d < r$, then both $f(d)$ and $\binom{d}{r}$ are zero.) So the left hand side simplifies to $\sum \limits_{i=1}^n \binom{d_i}{r}$.
* On the other hand, $\sum\limits_{i=1}^n d_i = 2m$ for any graph. Therefore, assuming that $2m/n \geq r-1$ (which seems reasonable given the range of parameters, see below), the right hand side of $(1)$ simplifies to $n \cdot \binom{2m/n}{r}$. (In turn, this is at least the $n \cdot \binom{m/n}{r}$ claimed in the question.)

EDIT: (More context has been added to the question.) In the context of the problem, we have $m \geq c s^{1/r} n^{2 - 1/r}$. If $r > 1$, then $m \geq c s^{1/r} n^{3/2} = \Omega(n^{3/2})$ (where we treat $r$ and $s$ as fixed constants, and let $n \to \infty$). Hence, for sufficiently large $n$, we have $m \geq n (r-1)$, and hence our proof holds.

*Monotonicity and convexity of $f$. We write $f(x)$ as the composition $f(x) = h(g(x))$, where $$ \begin{eqnarray*} g(x) &:=& \max \{ x - r + 1, 0 \} & \quad (x \geq 0); \\ h(y) &:=& \frac{1}{r!} y (y+1) (y+2) \cdots (y+r-1) & \quad (y \geq 0). \end{eqnarray*} $$ Notice that both $g(x)$ and $h(y)$ are monotonically increasing and convex everywhere in $[0, \infty)$. (It might help to observe that $h$ is a polynomial with nonnegative coefficients; so all its derivatives are nonnegative everywhere in $[0, \infty)$.) Under these conditions, it follows that $f(x) = h(g(x))$ is also monotonically increasing and convex for all $x \geq 0$.
(See the wikipedia page for the relevant properties of convex functions.) EDIT: Corrected a typo. We need $2m/n \geq r-1$, rather than $2m \geq r-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Equal simple field extensions? I have a question about simple field extensions. For a field $F$, if $[F(a):F]$ is odd, then why is $F(a)=F(a^2)$?
Since $[F(a):F]$ is odd, the minimal polynomial of $a$ is a polynomial of odd degree, say $p(x)=b_{0}x^{2k+1}+b_1x^{2k}+\cdots+b_{2k+1}$. Now since $a$ satisfies $p(x)$ we have: $b_{0}a^{2k+1}+b_1a^{2k}+\cdots+b_{2k+1}=0$ $\implies$ $a(b_0a^{2k}+b_2a^{2k-2}+\cdots+b_{2k})+b_1a^{2k}+b_3a^{2k-2}+\cdots+b_{2k+1}=0$ $\implies$ $a= -(b_1a^{2k}+b_3a^{2k-2}+\cdots+b_{2k+1})/(b_0a^{2k}+b_2a^{2k-2}+\cdots+b_{2k})$, where the denominator is nonzero because $a$ cannot satisfy the nonzero polynomial $b_0x^{2k}+b_2x^{2k-2}+\cdots+b_{2k}$ of degree $2k<2k+1$. $\implies$ $a$ is in $F(a^2)$, so obviously $F(a^2)=F(a)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Is it possible that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$? Is it possible that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$ when $\mathbb{Q}(\alpha)$ is a $p$th degree Galois extension of $\mathbb{Q}$? ($p$ is prime) I got stuck with this problem while trying to construct polynomials whose Galois group is cyclic group of order $p$. Edit: Okay, I got two nice answers for this question but to fulfill my original purpose(constructing polynomials with cyclic Galois group) I realized that I should ask for all primes $p$ if there is any such $\alpha$(depending on $p$) such that the above condition is satisfied. If the answer is no (i.e. there are primes $p$ for which there is no $\alpha$ such that $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^{n})$ for all $n>1$) then I succeed up to certain stage.
If you mean for some $\alpha$ and $p$, then yes: if $\alpha=1+\sqrt{2}$, then $\mathbb{Q}(\alpha)$ is of degree 2, which is prime, and $\alpha^n$ is never a rational number, so $\mathbb{Q}(\alpha)=\mathbb{Q}(\alpha^n)$ for all $n>1$. If you mean for all $\alpha$ such that the degree of $\mathbb{Q}(\alpha)$ is a prime number $p$, then no: if $\alpha=\sqrt{2}$, then $\mathbb{Q}(\alpha)$ is of degree 2, which is prime, but $\mathbb{Q}(\alpha^n)=\mathbb{Q}$ when $n$ is even.
{ "language": "en", "url": "https://math.stackexchange.com/questions/77928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Prove the identity $ \sum\limits_{s=0}^{\infty}{p+s \choose s}{2p+m \choose 2p+2s} = 2^{m-1} \frac{2p+m}{m}{m+p-1 \choose p}$ $$ \sum\limits_{s=0}^{\infty}{p+s \choose s}{2p+m \choose 2p+2s} = 2^{m-1} \frac{2p+m}{m}{m+p-1 \choose p}$$ Class themes are: Generating functions and formal power series.
I will try to give an answer using basic complex variables here. This calculation is very simple in spite of some more complicated intermediate expressions that appear. Suppose we are trying to show that $$\sum_{q=0}^\infty {p+q\choose q} {2p+m\choose m-2q} = 2^{m-1} \frac{2p+m}{m} {m+p-1\choose p}.$$ Introduce the integral representation $${2p+m\choose m-2q} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2p+m}}{z^{m-2q+1}} \; dz.$$ This gives for the sum the integral (the second binomial coefficient enforces the range) $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2p+m}}{z^{m+1}} \sum_{q=0}^\infty {p+q\choose q} z^{2q} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{2p+m}}{z^{m+1}} \frac{1}{(1-z^2)^{p+1}} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{p+m-1}}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \; dz.$$ This is $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(2+z-1)^{p+m-1}}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \; dz \\ = 2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+(z-1)/2)^{p+m-1}}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \; dz \\ = 2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \sum_{q=0}^{p+m-1} {p+m-1\choose q} \frac{(z-1)^q}{2^q} \; dz \\ = 2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+1}} \frac{1}{(1-z)^{p+1}} \sum_{q=0}^{p+m-1} {p+m-1\choose q} (-1)^q \frac{(1-z)^q}{2^q} \; dz \\ = 2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+1}} \sum_{q=0}^{p+m-1} {p+m-1\choose q} (-1)^q \frac{(1-z)^{q-p-1}}{2^q} \; dz.$$ The only non-zero contribution is with $q$ ranging from $0$ to $p.$ This gives $$ 2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+1}} \sum_{q=0}^p {p+m-1\choose q} (-1)^q \frac{1}{2^q} \frac{1}{(1-z)^{p+1-q}} \; dz$$ which on extracting coefficients yields $$2^{p+m-1} \sum_{q=0}^p {p+m-1\choose q} (-1)^q \frac{1}{2^q} {m+p-q\choose p-q}.$$ Introduce the integral representation $${m+p-q\choose p-q} = \frac{1}{2\pi i} \int_{|z|=\epsilon} 
\frac{(1+z)^{m+p-q}}{z^{p-q+1}} \; dz.$$ This gives for the sum the integral (the second binomial coefficient enforces the range) $$2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{m+p}}{z^{p+1}} \sum_{q=0}^\infty {p+m-1\choose q}\frac{(-1)^q}{2^q} \left(\frac{z}{1+z}\right)^q \; dz \\ = 2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{(1+z)^{m+p}}{z^{p+1}} \left(1-\frac{1}{2}\frac{z}{1+z}\right)^{p+m-1} \; dz \\ = 2^{p+m-1} \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{p+1}} \left(1+z-1/2\times z\right)^{p+m-1} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z}{z^{p+1}} \left(2+z\right)^{p+m-1} \; dz.$$ Extracting coefficients now yields $${p+m-1\choose p} \times 2^{m-1} + {p+m-1\choose p-1} \times 2^m.$$ This symmetric form may be re-written in an asymmetric form as follows, $${p+m-1\choose p} \times 2^{m-1} + \frac{p}{m} {p+m-1\choose p} \times 2^m \\ = 2^{m-1} \times \left(1 + \frac{2p}{m}\right) {p+m-1\choose p}$$ as claimed. The bonus feature of this calculation is that we evaluated two binomial sums instead of one. We have not made use of the properties of complex integrals here so this computation can also be presented using just algebra of generating functions. Apparently this method is due to Egorychev although some of it is probably folklore.
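Given how long the manipulations are, a brute-force verification of the identity for small parameters is reassuring. A Python sketch (my addition; the integer division in `rhs` is exact, since $(2p+m)\binom{m+p-1}{p} = m\left(\binom{m+p-1}{p}+2\binom{m+p-1}{p-1}\right)$):

```python
from math import comb

def lhs(p, m):
    # terms with 2p + 2s > 2p + m vanish, so s only runs up to m // 2
    return sum(comb(p + s, s) * comb(2 * p + m, 2 * p + 2 * s)
               for s in range(m // 2 + 1))

def rhs(p, m):
    # 2^(m-1) * (2p + m) / m * C(m + p - 1, p); the division is exact
    return 2 ** (m - 1) * (2 * p + m) * comb(m + p - 1, p) // m

all_match = all(lhs(p, m) == rhs(p, m)
                for p in range(5) for m in range(1, 8))
```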
{ "language": "en", "url": "https://math.stackexchange.com/questions/77949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 2 }
Countable or uncountable set 8 signs Let S be a set of pairwise disjoint 8-like symbols in the plane. (The 8s may be inside each other as well.) Prove that S is at most countable. Now I know you can "map" a set of disjoint intervals in R to a countable set (e.g. Q, the rational numbers) and solve similar problems like this, but the fact that the 8s can go inside each other is hindering my progress with my conventional approach...
Let $\mathcal{E}$ denote the set of all your figure eights. Then, define a map $f:\mathcal{E}\to\mathbb{Q}^2\times\mathbb{Q}^2$ by taking $E\in\mathcal{E}$ to a chosen pair of rational ordered pairs, one sitting inside each loop. Show that if two such figure eights were to have the same chosen pair, they must intersect, which is impossible. Thus, $f$ is an injection and so $\mathcal{E}$ is countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/78018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
"Best practice" innovative teaching in mathematics Our department is currently revamping our first-year courses in mathematics, which are huge classes (about 500+ students) that are mostly students who will continue on to Engineering. The existing teaching methods (largely, "lemma, proof, corollary, application, lemma, proof, corollary, application, rinse and repeat") do not properly accommodate the widely varying students. I am interested in finding out about alternative, innovative or just interesting ideas for teaching mathematics - no matter how way out they may seem. Preferably something backed up by educational research, but any ideas are welcome. Edit: a colleague pointed out that I should mention that currently we lecture to the 500 students in groups of around 200-240, not all 500 at once.
“...no matter how way out they may seem.” In that case you might want to consider the public-domain student exercises for mathematics that I have created. The address is: http://www.public-domain-materials.com/folder-student-exercise-tasks-for-mathematics-language-arts-etc---autocorrected.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/78088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 4, "answer_id": 3 }
The constant distribution If $u$ is a distribution on an open set $\Omega\subset \mathbb R^n$ such that ${\partial ^i}u = 0$ for all $i=1,2,\ldots,n$, then is it necessarily true that $u$ is a constant function?
It's true if we assume that $\Omega$ is connected. We will show that $u$ is locally constant, and the connectedness will allow us to conclude that $u$ is indeed constant. Let $a\in\Omega$ and $\delta>0$ such that $\overline{B(a,2\delta)}\subset \Omega$. Consider a test function $\varphi\in\mathcal D(\Omega)$ such that $\varphi=1$ on $B(a,2\delta)$, and put $S=\varphi u$, which is a distribution with compact support. Let $\{\rho_k\}$ be a mollifier, i.e. non-negative functions of integral $1$ with support contained in the ball $\overline{B\left(0,\frac 1k\right)}$, and consider $S_k=\rho_k* S$. It is a distribution given by a smooth function. We have $\partial^i S=\partial^i\varphi u$, hence $\partial^iS=0$ on $B(a,2\delta)$. $\partial^iS_k=\rho_k*(\partial^iS)$ vanishes on $B(a,\delta)$ if $\frac 1k<\delta$, hence $S_k$ is constant there. Since $S_k$ converges to $S$ in $\mathcal D'$, $S$ is constant on $B(a,\delta)$ and so is $u$. This topic answers the case $\Omega=\mathbb R^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/78142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Optimal number of answers for a test with wrong-answer penalty Suppose you have to take a test with ten questions, each with four different options (no multiple answers), and a wrong-answer penalty of half a correct answer. Blank questions score neither positively nor negatively. Supposing you have not studied especially hard this time, what is the optimal number of questions to answer so as to maximize the probability of passing the exam (scoring at least five points)?
Let's work this all the way through. Suppose you answer $n$ questions. Let $X$ be the number you get correct. Assuming a $\frac{1}{4}$ chance of getting each answer correct, $X$ is binomial$(n,1/4)$. Let $Y$ be the actual score on the exam, including the penalty. Then $Y = X - \frac{1}{2}(n-X) = \frac{3}{2}X-\frac{1}{2}n$. To maximize the probability of passing the exam, you want to choose the value of $n$ that maximizes $P(Y \geq 5)$. This probability is $$P\left(\frac{3}{2}X - \frac{n}{2} \geq 5\right) = P\left(X \geq \frac{10+n}{3}\right) = \sum_{k = \lceil(10+n)/3\rceil}^n \binom{n}{k} \left(\frac{1}{4}\right)^k \left(\frac{3}{4}\right)^{n-k}$$ $$ = \left(\frac{3}{4}\right)^n \sum_{k =\lceil(10+n)/3\rceil}^n \binom{n}{k} \frac{1}{3^k}.$$ This can be calculated quickly for the possible values of $n$. I get, via Mathematica, 5 0.000976563 6 0.000244141 7 0.00134277 8 0.00422668 9 0.00134277 10 0.00350571 Thus you maximize your probability of passing by answering eight questions.
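The table and the conclusion are easy to reproduce without Mathematica; here is a small verification sketch (not part of the original answer), computing the same binomial tail for each $n$:

```python
from math import comb

def pass_probability(n: int) -> float:
    """P(score >= 5) when answering n questions: X ~ Binomial(n, 1/4),
    score Y = (3/2) X - n/2, so we need X >= (10 + n) / 3."""
    k_min = -(-(10 + n) // 3)  # integer ceiling of (10 + n) / 3
    return sum(comb(n, k) * 0.25**k * 0.75**(n - k) for k in range(k_min, n + 1))

probs = {n: pass_probability(n) for n in range(11)}
best = max(probs, key=probs.get)
for n in range(5, 11):
    print(n, probs[n])   # matches the table above (up to rounding)
print("optimal:", best)  # 8
```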
{ "language": "en", "url": "https://math.stackexchange.com/questions/78215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
What kinds of non-zero characteristic fields exist? There are these finite fields of characteristic $p$, namely $\mathbb{F}_{p^n}$ for any $n>1$, and there is the algebraic closure $\bar{\mathbb{F}_p}$. The only other fields of non-zero characteristic I can think of are transcendental extensions, namely $\mathbb{F}_{q}(x_1,x_2,\ldots,x_k)$ where $q=p^{n}$. That's all! I cannot think of any other fields of non-zero characteristic. I may be asking too much if I ask for a characterization of all non-zero characteristic fields. But I would like to know what other kinds of such fields are possible. Thanks.
The basic structure theory of fields tells us that a field extension $L/K$ can be split into the following steps: * *an algebraic extension $K^\prime /K$, *a purely transcendental extension $K^\prime (T)/K^\prime$, *an algebraic extension $L/K^\prime (T)$. The field $K^\prime$ is the algebraic closure of $K$ in $L$ and thus uniquely determined by $L/K$. The set $T$ is a transcendence basis of $L/K$; its cardinality is uniquely determined by $L/K$. A field $L$ has characteristic $p\neq 0$ iff it contains the finite field $\mathbb{F}_p$. Hence you get all fields of characteristic $p$ by letting $K=\mathbb{F}_p$ in the description of field extensions, and by choosing $T$ and $K^\prime$ and $L/K^\prime (T)$ as you like. Of course in general it is then hard to judge whether two such fields are isomorphic - essentially because of step 3.
{ "language": "en", "url": "https://math.stackexchange.com/questions/78266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Are polynomials dense in Gaussian Sobolev space? Let $\mu$ be standard Gaussian measure on $\mathbb{R}^n$, i.e. $d\mu = (2\pi)^{-n/2} e^{-|x|^2/2} dx$, and define the Gaussian Sobolev space $H^1(\mu)$ to be the completion of $C_c^\infty(\mathbb{R}^n)$ under the inner product $$\langle f,g \rangle_{H^1(\mu)} := \int f g\, d\mu + \int \nabla f \cdot \nabla g\, d\mu.$$ It is easy to see that polynomials are in $H^1(\mu)$. Do they form a dense set? I am quite sure the answer must be yes, but can't find or construct a proof in general. I do have a proof for $n=1$, which I can post if anyone wants. It may be useful to know that the polynomials are dense in $L^2(\mu)$. Edit: Here is a proof for $n=1$. It is sufficient to show that any $f \in C^\infty_c(\mathbb{R})$ can be approximated by polynomials. We know polynomials are dense in $L^2(\mu)$, so choose a sequence of polynomials $q_n \to f'$ in $L^2(\mu)$. Set $p_n(x) = \int_0^x q_n(y)\,dy + f(0)$; $p_n$ is also a polynomial. By construction we have $p_n' \to f'$ in $L^2(\mu)$; it remains to show $p_n \to f$ in $L^2(\mu)$. Now we have $$ \begin{align*} \int_0^\infty |p_n(x) - f(x)|^2 e^{-x^2/2} dx &= \int_0^\infty \left(\int_0^x (q_n(y) - f'(y)) dy \right)^2 e^{-x^2/2} dx \\ &\le \int_0^\infty \int_0^x (q_n(y) - f'(y))^2\,dy \,x e^{-x^2/2} dx \\ &= \int_0^\infty (q_n(x) - f'(x))^2 e^{-x^2/2} dx \to 0 \end{align*}$$ where we used Cauchy-Schwarz in the second line and integration by parts in the third. The $\int_{-\infty}^0$ term can be handled the same with appropriate minus signs. The problem with $n > 1$ is I don't see how to use the fundamental theorem of calculus in the same way.
Nate, I once needed this result, so I proved it in Dirichlet forms with polynomial domain (Math. Japonica 37 (1992) 1015-1024). There may be better proofs out there, but you could start with this paper.
{ "language": "en", "url": "https://math.stackexchange.com/questions/78311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 1 }
Confused about modular notations I am a little confused about the notations used in two articles at wikipedia. According to the page on Fermat Primality test $ a^{p-1}\equiv 1 \pmod{m}$ means that when $a^{p-1}$ is divided by $m$, the remainder is 1. And according to the page on Modular Exponential $c \equiv b^e \pmod{m}$ means that when $b^e$ is divided by $m$, the remainder is $c$. Am I interpreting them wrong? or are both of them right? Can someone please clarify them for me? Update: So given an equation, say $x \equiv y \pmod{m}$, how do we interpret it? $y$ divided by $m$ gives $x$ as remainder, or $x$ divided by $m$ gives $y$ as remainder?
Congruences are similar to equations in that they can be read left to right or right to left, and both readings are correct. Another way to picture this is: $a \equiv b \pmod m$ means there exists an integer $k$ such that $k\cdot m+a=b$. There are various ways to visualize a (mod $m$) number system; the integers mod $3$ could be viewed as $\{0,1,2\}$ or $\{-1,0,1\}$, since $2\equiv -1 \pmod 3$, to give an example here. Another point is that the (mod $m$) is applied to both sides of the congruence. For example, in dealing with polar coordinates, the angle coordinate is a good example of the use of a modular operator, as $0$ degrees is the same as $360$ degrees is the same as $720$ degrees; the only difference is the number of rotations to get that angle. But the problem is that the same equation $a \equiv b \pmod m$ is interpreted as $k\cdot m + a = b$ and $k\cdot m + b = a$ in the two articles. If there exists a $k$ such that $k\cdot m+a=b$, then there also exists a $-k$ such that $a=b-k\cdot m$, if you remember that in an equation one can add or subtract a term so long as it is done on both sides. Thus, while the constant is different, such a term does exist. Casting out nines would be a useful exercise if you want some examples where $m=9$.
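Computationally the two readings coincide: a value $c$ satisfies $c \equiv b^e \pmod m$ exactly when $c$ and $b^e$ leave the same remainder mod $m$, and Python's built-in three-argument `pow` computes that remainder directly. A small sketch (the particular numbers are arbitrary examples, not from the articles):

```python
b, e, m = 5, 117, 19

# "Modular exponentiation" reading: c is the remainder of b**e divided by m.
c = pow(b, e, m)
assert c == (b ** e) % m

# The congruence c = b**e (mod m) is symmetric: both sides leave the same
# remainder mod m, so the relation can be read in either direction.
assert (c - b ** e) % m == 0

# "Fermat primality test" reading: for a prime p and gcd(a, p) = 1,
# a**(p-1) leaves remainder 1 when divided by p.
p, a = 19, 5
print(pow(a, p - 1, p))  # 1, by Fermat's little theorem
```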
{ "language": "en", "url": "https://math.stackexchange.com/questions/78367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Is there any intuition behind why the derivative of $\cos(x)$ is equal to $-\sin(x)$ or just something to memorize? Why is $$\frac{d}{dx}\cos(x)=-\sin(x)?$$ I am studying for a differential equations test and I seem to always forget this, and I am just wondering if there is some intuition I'm missing, or is it just one of those things to memorize? I know this is not very differential-equations related, just one of those things that I've never really understood. And alternatively, why is $$ \int\sin(x)\,dx=-\cos(x)?$$ Any good explanations would be greatly appreciated.
I am with Henning Makholm on this and produce a sketch. The slope of the blue line is the red line, and the slope of the red line is the green line. I know the blue line is sine and the red line is cosine; the green line can be seen to be the negative of sine. Similarly, the partial area under the red line is the blue line, though this sort of approach leads to $\int_0^x \sin(t)\,dt=1-\cos(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/78414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
No. of solutions of equation? Given an equation $a_1 X_1 + a_2 X_2 + \cdots + a_n X_n = N$, where $a_1,a_2,\ldots,a_n$ are positive constants, each $X_i$ can take only the two values $\{0,1\}$, and $N$ is a given constant. How can we calculate the number of solutions of the given equation?
This is counting the number of solutions to the knapsack problem. You can adapt the Dynamic Programming algorithm to do this in pseudopolynomial time. (I'm assuming the input data are integers.) Let $s[i,w]$ = # of ways to achieve the sum $w$ using only the first $i$ variables. Then $$s[0,w]=\begin{cases}1 & w=0 \\0 & w \ne 0\end{cases}$$ For $i=1\ldots n$, For $w=0\ldots N$ $$s[i,w] = s[i-1,w] + s[i-1,w-a_i]$$
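A sketch of the same DP in Python, using a rolling 1-D array over the recurrence above (iterating $w$ downward so each variable is used at most once; the example data is my own):

```python
def count_solutions(a, N):
    """Number of 0/1 assignments x with sum(a[i] * x[i]) == N."""
    s = [0] * (N + 1)
    s[0] = 1                              # s[0, 0] = 1: the empty sum
    for ai in a:                          # bring in one variable at a time
        for w in range(N, ai - 1, -1):    # descending: s[i, w] = s[i-1, w] + s[i-1, w-ai]
            s[w] += s[w - ai]
    return s[N]

print(count_solutions([1, 2, 3], 3))   # {3} and {1, 2} -> 2
```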
{ "language": "en", "url": "https://math.stackexchange.com/questions/78475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Conditional expectation for a sum of iid random variables: $E(\xi\mid\xi+\eta)=E(\eta\mid\xi+\eta)=\frac{\xi+\eta}{2}$ I don't really know how to start with this problem. Let $\xi$ and $\eta$ be independent, identically distributed random variables with $E(|\xi|)$ finite. Show that $E(\xi\mid\xi+\eta)=E(\eta\mid\xi+\eta)=\frac{\xi+\eta}{2}$ Does anyone here have an idea for getting started?
$E(\xi\mid \xi+\eta)=E(\eta\mid \xi+\eta)$ since $\xi$ and $\eta$ are exchangeable, i.e. $(\xi,\eta)$ and $(\eta,\xi)$ are identically distributed. (Independence does not matter here; exchangeability is enough.) So $2E(\xi\mid \xi+\eta)=2E(\eta\mid \xi+\eta) = E(\xi\mid \xi+\eta)+E(\eta\mid \xi+\eta) =E(\xi+\eta\mid \xi+\eta) = \xi+\eta$, since $\xi+\eta$ is measurable with respect to the conditioning $\sigma$-algebra. Now divide by two.
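A concrete finite check (my example, not from the answer): take $\xi$ and $\eta$ to be two independent fair dice and condition on the sum $s$; the average of $\xi$ over the outcomes with $\xi+\eta=s$ is exactly $s/2$.

```python
from itertools import product

# all 36 equally likely outcomes of two independent fair dice
outcomes = list(product(range(1, 7), repeat=2))

cond_exp = {}
for s in range(2, 13):
    matching = [x for x, y in outcomes if x + y == s]   # outcomes with xi + eta = s
    cond_exp[s] = sum(matching) / len(matching)          # E[xi | xi + eta = s]

print(cond_exp)  # every value equals s / 2
```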
{ "language": "en", "url": "https://math.stackexchange.com/questions/78546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
How do I prove equality of x and y? If $0\leq x,y\leq\frac{\pi}{2}$ and $\cos x +\cos y -\cos(x+y)=\frac{3}{2}$, then how can I prove that $x=y=\frac{\pi}{3}$? Your help is appreciated. I tried various formulas but nothing is working.
You could also attempt a geometric proof. First, without loss of generality you can assume $0 <x,y < \frac{\pi}{2}$. Construct a triangle with angles $x,y, \pi-x-y$. Let $a,b,c$ be the edges. Then by the law of cosines, you know that $$\frac{a^2+b^2-c^2}{2ab}+ \frac{a^2+c^2-b^2}{2ac}+ \frac{b^2+c^2-a^2}{2bc}=\frac{3}{2}$$ and you need to show that $a=b=c$. Clearing denominators (multiplying through by $2abc$), the equation becomes $$c(a^2+b^2-c^2)+b(a^2+c^2-b^2)+a(b^2+c^2-a^2)=3abc$$ or $$a^2b+ab^2+ac^2+a^2c+bc^2+b^2c=a^3+b^3+c^3+3abc \,.$$ This should be easy to factor, but I fail to see it....
{ "language": "en", "url": "https://math.stackexchange.com/questions/78629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Proving that if $G/Z(G)$ is cyclic, then $G$ is abelian Possible Duplicate: “Proof that if group $G/Z(G)$ is cyclic, then $G$ is commutative”; “If $G/Z(G)$ is cyclic, then $G$ is abelian”. If $G$ is a group and $Z(G)$ the center of $G$, show that if $G/Z(G)$ is cyclic, then $G$ is abelian. This is what I have so far: We know that all cyclic groups are abelian. This means $G/Z(G)$ is abelian. $Z(G)= \{z \in G \mid zx=xz \text{ for all } x \in G \}$. So $Z(G)$ is abelian. Is it sufficient to say that since $G/Z(G)$ and $Z(G)$ are both abelian, $G$ must be abelian?
Here's part of the proof that $G$ is abelian. Hopefully this will get you started... Let $Z(G)=Z$. If $G/Z$ is cyclic, then it has a generator, say $G/Z = \langle gZ \rangle$. This means that for each coset $xZ$ there exists some $i \in \mathbb{Z}$ such that $xZ=(gZ)^i=g^iZ$. Suppose that $x,y \in G$. Consider $x \in xZ=g^iZ$ so that $x=g^iz$ for some $z\in Z$. Represent $y$ in a similar manner and consider $xy$ and $yx$. Why are they equal? Edit: !!!Spoiler alert!!! :) Here's the rest of the story. $yZ \in G/Z = \langle gZ \rangle$ so that $yZ=(gZ)^j=g^jZ$ for some $j \in \mathbb{Z}$. Therefore, $y \in yZ=g^jZ$ so that $y=g^jz_0$ for some $z_0 \in Z$. Finally, $xy=g^izg^jz_0=g^ig^jzz_0=g^{i+j}zz_0=g^{j+i}zz_0=g^jg^izz_0=g^jz_0g^iz=yx$ The second equality follows because $z$ is in the center and thus commutes with everything. Then we're just messing with powers of $g$ (which commute with themselves). The next to last equality follows because $z_0$ is in the center and thus commutes with everything.
{ "language": "en", "url": "https://math.stackexchange.com/questions/78690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Breaking a variable out of a trigonometry equation $A = 2 \pi r^2 - r^2 (2 \arccos(d/2r) - \sin(2 \arccos(d/2r)))$ Given $A$ and $r$ I would like to solve for $d$. However, I get stuck breaking the $d/2r$ out of the trig functions. For context, this is the area of two overlapping circles minus the overlapping region. Given a radius and desired area I'd like to be able to determine how far apart they should be. I know $A$ should be bounded below by $0 (d = 0)$ and above by $2 \pi r^2 (d \le 2r)$.
This is a transcendental equation for $d$ which can't be solved for $d$ in closed form. You can get rid of the trigonometric functions in the last term using $$\sin2x=2\sin x\cos x$$ and $$\sin x=\sqrt{1-\cos^2 x}$$ and thus $$\sin\left(2\arccos\frac d{2r}\right)=\frac dr\sqrt{1-\left(\frac d{2r}\right)^2}\;,$$ but that still leaves $d$ both in the argument of the arccosine and outside.
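Numerically, though, $d$ is easy to recover: $A$ is monotone increasing in $d$ on $[0, 2r]$, so plain bisection works. A sketch of that approach (my addition, not from the answer):

```python
from math import pi, acos, sin

def area(d, r):
    """The quantity A from the question: union area of two radius-r circles
    whose centers are d apart (0 <= d <= 2r)."""
    theta = acos(d / (2 * r))
    return 2 * pi * r**2 - r**2 * (2 * theta - sin(2 * theta))

def solve_d(A, r, tol=1e-12):
    """Invert area() for d by bisection; area() is increasing in d."""
    lo, hi = 0.0, 2.0 * r
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if area(mid, r) < A:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = 1.0
A = area(1.0, r)          # forward computation with d = 1
print(solve_d(A, r))      # recovers approximately 1.0
```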
{ "language": "en", "url": "https://math.stackexchange.com/questions/78766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Example of a function that is not twice differentiable Give an example of a function f that is defined in a neighborhood of a s.t. $\lim_{h\to 0}(f(a+h)+f(a-h)-2f(a))/h^2$ exists, but is not twice differentiable. Note: this follows a problem where I prove that the limit above $= f''(a)$ if $f$ is twice differentiable at $a$.
Consider the function $f(x) = \sum_{i=0}^{n}|x-i|$. This function is continuous everywhere but not differentiable at exactly the $n+1$ points $x=0,1,\dots,n$. Consider the function $G(x) = \int_{0}^{x} f(t)\, dt$. Since $f$ is continuous, $G$ is differentiable by FTC II, and $G'(x) = f(x)$, which is not differentiable at those $n+1$ points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/78825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 2 }
Counting words with parity restrictions on the letters Let $a_n$ be the number of words of length $n$ from the alphabet $\{A,B,C,D,E,F\}$ in which $A$ appears an even number of times and $B$ appears an odd number of times. Using generating functions I was able to prove that $$a_n=\frac{6^n-2^n}{4}\;.$$ I was wondering if the above answer is correct and in that case what could be a combinatorial proof of that formula?
I don’t know whether you’d call them combinatorial, but here are two completely elementary arguments of a kind that I’ve presented in a sophomore-level discrete math course. Added: Neither, of course, is as nice as Didier’s, which I’d not seen when I posted this. Let $b_n$ be the number of words of length $n$ with an odd number of $A$’s and an odd number of $B$’s, $c_n$ the number of words of length $n$ with an even number of $A$’s and an even number of $B$’s, and $d_n$ the number of words of length $n$ with an odd number of $A$’s and an even number of $B$’s. Then $$a_{n+1}=4a_n+b_n+c_n\;.\tag{1}$$ Clearly $a_n=d_n$: interchanging $A$’s and $B$’s gives a bijection between the two types of word. But $6^n=a_n+b_n+c_n+d_n=b_n+c_n+2a_n$, so $b_n+c_n=6^n-2a_n$, and $(1)$ becomes $$a_{n+1}=2a_n+6^n,\qquad a_0=0\;.\tag{2}$$ $(2)$ can be solved by brute force: $$\begin{align*} a_n&=2a_{n-1}+6^{n-1}\\ &=2(2a_{n-2}+6^{n-2})+6^{n-1}\\ &=2^2a_{n-2}+2\cdot 6^{n-2}+6^{n-1}\\ &=2^2(2a_{n-3}+6^{n-3})+2\cdot 6^{n-2}+6^{n-1}\\ &=2^3a_{n-3}+2^2\cdot 6^{n-3}+2\cdot 6^{n-2}+6^{n-1}\\ &\;\vdots\\ &=2^na_0+\sum_{k=0}^{n-1}2^k6^{n-1-k}\\ &=6^{n-1}\sum_{k=0}^{n-1}\left(\frac13\right)^k\\ &=6^{n-1}\frac{1-(1/3)^n}{2/3}\\ &=\frac{2^{n-1}3^n-2^{n-1}}{2}\\ &=\frac{6^n-2^n}4\;. \end{align*}$$ Alternatively, with perhaps just a little more cleverness $(1)$ can be expanded to $$\begin{align*} a_{n+1}&=4a_n+b_n+c_n\\ b_{n+1}&=4b_n+2a_n\\ c_{n+1}&=4c_n+2a_n\;, \end{align*}\tag{3}$$ whence $$\begin{align*}a_{n+1}&=4a_n+4b_{n-1}+2a_{n-1}+4c_{n-1}+2a_{n-1}\\ &=4a_n+4a_{n-1}+4(b_{n-1}+c_{n-1})\\ &=4a_n+4a_{n-1}+4(a_n-4a_{n-1})\\ &=8a_n-12a_{n-1}\;. \end{align*}$$ This straightforward homogeneous linear recurrence (with initial conditions $a_0=0,a_1=1$) immediately yields the desired result.
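The closed form is easy to sanity-check by brute force over all $6^n$ words for small $n$ (a verification sketch, not part of either argument):

```python
from itertools import product

def a(n):
    """Count words over {A,...,F} of length n with an even number of A's
    and an odd number of B's."""
    return sum(
        1
        for w in product("ABCDEF", repeat=n)
        if w.count("A") % 2 == 0 and w.count("B") % 2 == 1
    )

for n in range(1, 7):
    assert a(n) == (6**n - 2**n) // 4
print("formula verified for n = 1..6")
```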
{ "language": "en", "url": "https://math.stackexchange.com/questions/78882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
State-space to transfer function I’m looking into MATLAB’s state-space functionality, and I found a peculiar relation that I don’t believe I’ve seen before, and I’m curious how one might obtain it. According to this documentation page, when converting a state-space system representation to its transfer function, the following well-known equality is used $$H(s) = C(sI-A)^{-1}B$$ However, they go one step further and state that $$H(s) = C(sI-A)^{-1}B = \frac{{\det}(sI-A+BC) - {\det}(sI-A)}{\det(sI-A)}$$ How did they manage to convert $C{\text {Adj}}(sI-A)B$ into ${\det}(sI-A+BC) - {\det}(sI-A)$? As far as I understand, we can assume here that $B$ is a column vector and $C$ is a row vector, i.e. it’s a single-input / single-output relationship.
They are using Sylvester's determinant identity (a close relative of the Sherman-Morrison formula), which I remember best in the form $$ \det(I+MN) = \det(I+NM); $$ this holds provided that both products $MN$ and $NM$ are defined. Note that if $M$ is a column vector and $N$ a row vector, then $I+NM$ is a scalar. Now $$\begin{align} \det(sI-A+BC) &= \det\left((sI-A)(I+(sI-A)^{-1}BC)\right)\\ &= \det(sI-A)\det(I+(sI-A)^{-1}BC) \tag{1}. \end{align} $$ Applying the formula with $M=(sI-A)^{-1}B$ and $N=C$ yields that $$ \det(I+(sI-A)^{-1}BC) = \det(I+C(sI-A)^{-1}B). $$ If $B$ is a column vector and $C$ a row vector then the final determinant is equal to $$ 1 + C(sI-A)^{-1}B \tag{2} $$ Plugging (2) into (1) gives $$ \det(sI-A+BC) = \det(sI-A)\left( 1 + C(sI-A)^{-1}B \right) $$ Shuffling terms gives the result.
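The key identity $\det(I+MN)=\det(I+NM)$ is easy to spot-check numerically; below is a stdlib-only sketch with random $3\times 2$ and $2\times 3$ matrices (determinants are hand-rolled via cofactor expansion, since no numpy is assumed):

```python
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def plus_identity(X):
    return [[X[i][j] + (1 if i == j else 0) for j in range(len(X))]
            for i in range(len(X))]

def det(X):
    """Determinant by cofactor expansion along the first row."""
    n = len(X)
    if n == 1:
        return X[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in X[1:]]
        total += (-1) ** j * X[0][j] * det(minor)
    return total

random.seed(0)
M = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]  # 3 x 2
N = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # 2 x 3
lhs = det(plus_identity(matmul(M, N)))   # det(I_3 + M N)
rhs = det(plus_identity(matmul(N, M)))   # det(I_2 + N M)
print(abs(lhs - rhs))                    # ~0 up to rounding
```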
{ "language": "en", "url": "https://math.stackexchange.com/questions/78941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is this a valid function? I am stuck with the question below, Say whether the given function is one to one. $A=\mathbb{Z}$, $B=\mathbb{Z}\times\mathbb{Z}$, $f(a)=(a,a+1)$ I am a bit confused about $f(a)=(a,a+1)$: there appear to be two outputs $(a,a+1)$ for a single input $a$, which seems to go against the definition of a function. Please help me out by sharing your views on the question. Thanks
If $f\colon A\to B$, then the inputs of $f$ are elements of $A$, and the outputs of $f$ are elements of $B$, whatever the elements of $B$ may be. If the elements of $B$ are sets with 17 elements each, then the outputs of the function will be sets with 17 elements each. If the elements of $B$ are books, then the output will be books. Here, the set $B$ is the set whose elements are ordered pairs. So every output of $f$ must be an ordered pair. This is a single item, the pair (just like your address may consist of many words and numbers, but it's still a single address). By the way: whether the function is one-to-one or not is immaterial here (and it seems to me, immaterial to your confusion...)
{ "language": "en", "url": "https://math.stackexchange.com/questions/79055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Distance between $N$ sets of reals of length $N$ Let's say, for the sake of the question, I have 3 sets of real numbers of variate length: $$\{7,5,\tfrac{8}{5},\tfrac{1}{9}\},\qquad\{\tfrac{2}{7},4,\tfrac{1}{3}\},\qquad\{1,2,7,\tfrac{4}{10},\tfrac{5}{16},\tfrac{7}{8},\tfrac{9}{11}\}$$ Is there a way to calculate the overall distance these sets have from one another? Again, there are only 3 sets for the sake of the example, but in practice there could be $N$ sets as such: $$\{A_1,B_1,C_1,\ldots,N_1\},\qquad \{A_2,B_2,C_2\ldots,N_2\},\;\ldots \quad\{A_n,B_n,C_n,\ldots,N_n\}$$ These sets of reals can be considered to be sets in a metric space. The distance needed is the shortest overall distance between these sets, similar to the Hausdorff distance, except that rather than finding the longest distance between 2 sets of points, I am trying to find the shortest distance between N sets of points.
Let $E(r)$ be the set of all points at distance at most $r$ from the set $E$. This is called the closed $r$-neighborhood of $E$. The Hausdorff distance $d_H(E_1,E_2)$ is defined as the infimum of numbers $r$ such that $E_1\subseteq E_2(r)$ and $E_2\subseteq E_1(r)$. There are at least two reasonable ways to generalize $d_H(E_1,E_2)$ to $d_H(E_1,\dots,E_N)$: * *$d_H(E_1,\dots,E_N)$ is the infimum of numbers $r$ such that $E_i\subseteq E_j(r)$ for all $i,j\in\{1,\dots,N\}$. *$d_H(E_1,\dots,E_N)$ is the infimum of numbers $r$ such that $E_i\subseteq \bigcup_{j\ne i}E_j(r)$ and $\bigcup_{j\ne i}E_j(r)\subseteq E_i$ for all $i$. Both 1 and 2 recover the standard $d_H$ when $N=2$. Distance 1 is larger, and is somewhat less interesting because it's simply $\max_{i,j} d_H(E_i,E_j)$. Both distances turn into $0$ if and only if $E_1=\dots = E_N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/79122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The field of fractions of a field $F$ is isomorphic to $F$ Let $F$ be a field and let $\newcommand{\Fract}{\operatorname{Fract}}$ $\Fract(F)$ be the field of fractions of $F$; that is, $\Fract(F)= \{ {a \over b } \mid a \in F , b \in F \setminus \{ 0 \} \}$. I want to show that these two fields are isomorphic. I suggest this map $$ F \to \Fract(F) \ ; \ a \mapsto {a\over a^{-1}} ,$$ for $a \neq 0$ and $0 \mapsto 0$, but this is not injective as $a$ and $-a$ map to the same image. I was thinking about the map $ \Fract(F) \rightarrow F ;\; a/b\mapsto ab^{-1}$ and this is clearly injective. It is also surjective as $a/1 \mapsto a$. Is this the desired isomorphism?
Let $F$ be a field and $Fract(F)=\{\frac{a}{b} \;|\; a\in F, b\in F, b\not = 0 \} $ modulo the equivalence relation $\frac{a}{b}\sim \frac{c}{d}\Longleftrightarrow ad=bc$. We exhibit a map that is a field isomorphism between $F$ and $Fract(F)$. Every fraction field of an integral domain $D$ comes with a canonical ring homomorphism $$\phi: D\rightarrow Fract(D);\; d\mapsto \frac{d}{1}$$ This map is clearly injective. In the case $D$ is a field $F$, this canonical map is an isomorphism with inverse $$Fract(F)\rightarrow F;\; {a\over b} \mapsto ab^{-1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/79188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 1, "answer_id": 0 }
A deceiving Taylor series When we try to expand $$ \begin{align} f:&\mathbb R \to \mathbb R\\ &x \mapsto \begin{cases} \mathrm e^{-\large\frac 1{x^2}} &\Leftarrow x\neq 0\\ 0 &\Leftarrow x=0 \end{cases} \end{align}$$ in the Taylor series about $x = 0$, we wrongly conclude that $f(x) \equiv 0$, because the derivative of $f(x)$ of any order at this point is $0$. Therefore, $f(x)$ is not well-behaved for this procedure. What conditions determine the good behavior of a function for the Taylor series expansion?
To add to the other answers, in many cases one uses some simple sufficient (but not necessary) condition for analyticity: for example, any elementary function (polynomials, trigonometric functions, exponentials, logarithms) is analytic in any open subset of its domain; the same holds for compositions, sums, products, reciprocals (except at the points where the original function is zero... which leaves out your example), etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/79241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Inner product space computation If $x = (x_1,x_2)$ and $y = (y_1,y_2)$ show that $\langle x,y\rangle = \begin{bmatrix}x_1 & x_2\end{bmatrix}\begin{bmatrix}2 & -1 \\ 1 & 1\end{bmatrix}\begin{bmatrix}y_1 \\ y_2\end{bmatrix}$ defines an inner product on $\mathbb{R}^2$. Are there any hints on this one? All I'm thinking of is computing a determinant, but what good is that?
Use the definition of an inner product and check whether your function satisfies all the properties. Note that in general, as kahen pointed out in the comment, $\langle \mathbf{x}, \mathbf{y}\rangle = \mathbf{y}^*A\mathbf{x}$ defines an inner product on $\mathbb{C}^n$ iff $A$ is a positive-definite Hermitian $n \times n$ matrix.
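Two of the axioms can be spot-checked numerically, namely positivity (here $\langle x,x\rangle = 2x_1^2 + x_2^2$, positive for $x\neq 0$) and linearity in the first argument; the remaining axioms still have to be examined by hand. A sketch of such a check (my addition, not a substitute for the proof the exercise asks for):

```python
import random

def ip(x, y):
    # <x, y> = [x1 x2] [[2, -1], [1, 1]] [y1, y2]^T, expanded by hand
    return 2*x[0]*y[0] - x[0]*y[1] + x[1]*y[0] + x[1]*y[1]

random.seed(1)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    z = (random.uniform(-5, 5), random.uniform(-5, 5))
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    # positivity: <x, x> = 2 x1^2 + x2^2 > 0 for x != 0
    assert ip(x, x) > 0
    # linearity in the first slot: <a x + b y, z> = a <x, z> + b <y, z>
    ax_by = (a*x[0] + b*y[0], a*x[1] + b*y[1])
    assert abs(ip(ax_by, z) - (a*ip(x, z) + b*ip(y, z))) < 1e-9
print("positivity and first-slot linearity hold on all random samples")
```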
{ "language": "en", "url": "https://math.stackexchange.com/questions/79300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prime reciprocals sum Let $a_i$ be a sequence of $1$'s and $2$'s and $p_i$ the prime numbers. And let $r=\displaystyle\sum_{i=1}^\infty p_i^{-a_i}$ Can $r$ be rational, and can $r$ be any rational $> 1/2$, or any real? ver.2: Let $k$ be a positive real number and let $a_i$ be $1 +$ (the $i$'th digit in the binary expansion of $k$). And let $r(k)=\displaystyle\sum_{n=1}^\infty n^{-a_n}$ Does $r(k)=x$ have a solution for every $x>\pi^2/6$, and how many?
The question with primes in the denominator: The minimum that $r$ could possibly be is $C=\sum\limits_{i=1}^\infty\frac{1}{p_i^2}$. However, a sequence of $1$s and $2$s can be chosen so that $r$ can be any real number not less than $C$. Since $\sum\limits_{i=1}^\infty\left(\frac{1}{p_i}-\frac{1}{p_i^2}\right)$ diverges, consider the sum $$ S_n=\sum_{i=1}^n b_i\left(\frac{1}{p_i}-\frac{1}{p_i^2}\right) $$ where $b_i$ is $0$ or $1$. Choose $b_n=1$ while $S_{n-1}+\frac{1}{p_n}-\frac{1}{p_n^2}\le L-C$ and $b_n=0$ while $S_{n-1}+\frac{1}{p_n}-\frac{1}{p_n^2}>L-C$. If we let $a_i=1$ when $b_i=1$ and $a_i=2$ when $b_i=0$, then $$ \sum\limits_{i=1}^\infty\frac{1}{p_i^{a_i}}=\sum\limits_{i=1}^\infty\frac{1}{p_i^2}+\sum_{i=1}^\infty b_i\left(\frac{1}{p_i}-\frac{1}{p_i^2}\right)=C+(L-C)=L $$ The question with non-negative integers in the denominator: Changing $p_n$ from the $n^{th}$ prime to $n$ simply allows us to specify $C=\frac{\pi^2}{6}$. The rest of the procedure follows through unchanged. That is, choose any $L\ge C$ and let $$ S_n=\sum_{i=1}^n b_i\left(\frac{1}{i}-\frac{1}{i^2}\right) $$ where $b_i$ is $0$ or $1$. Choose $b_n=1$ while $S_{n-1}+\frac{1}{n}-\frac{1}{n^2}\le L-C$ and $b_n=0$ while $S_{n-1}+\frac{1}{n}-\frac{1}{n^2}>L-C$. If we let $a_n=1$ when $b_n=1$ and $a_n=2$ when $b_n=0$, then $$ \sum\limits_{n=1}^\infty\frac{1}{n^{a_n}}=\sum\limits_{n=1}^\infty\frac{1}{n^2}+\sum_{n=1}^\infty b_n\left(\frac{1}{n}-\frac{1}{n^2}\right)=C+(L-C)=L $$ We don't need to worry about an infinite final sequence of $1$s in the binary number since that would map to a divergent series.
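The greedy selection in the integer case is easy to run in practice; here is a sketch targeting an arbitrary $L > \pi^2/6$ (the target value and the cutoff are my own choices):

```python
from math import pi

L = 2.0                        # any target >= C = pi^2 / 6
C = pi**2 / 6
S = 0.0                        # running sum of the selected (1/n - 1/n^2) steps
total = 0.0                    # running sum of 1/n**a_n
for n in range(1, 200_001):
    step = 1.0 / n - 1.0 / n**2
    if S + step <= L - C:      # greedy rule from the answer: take a_n = 1 if it fits
        S += step
        total += 1.0 / n       # a_n = 1
    else:
        total += 1.0 / n**2    # a_n = 2
print(total)                   # approaches L = 2.0 from below
```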
{ "language": "en", "url": "https://math.stackexchange.com/questions/79376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Difference between maximum and minimum? If I have a problem such as this: We need to enclose a field with a fence. We have 500m of fencing material and a building is on one side of the field and so won’t need any fencing. Determine the dimensions of the field that will enclose the largest area. This is obviously a maximizing question, however what would I do differently if I needed to minimize the area? Whenever I do these problems I just take the derivative, make it equal to zero and find x, and the answer always seems to appear - but what does one do differently to find the minimum or maximum?
The Extreme Value Theorem guarantees that a continuous function on a finite closed interval has both a maximum and a minimum, and that the maximum and the minimum are each at either a critical point, or at one of the endpoints of the interval. When trying to find the maximum or minimum of a continuous function on a finite closed interval, you take the derivative and set it to zero to find the stationary points. These are one kind of critical points. The other kind of critical points are the points in the domain at which the derivative is not defined. The usual method to find the extreme you are looking for (whether it is a maximum or a minimum) is to determine whether you have a continuous function on a finite closed interval; if this is the case, then you take the derivative. Then you determine the points in the domain where the derivative is not defined; then the stationary points (points where the derivative is $0$). And then you plug in all of these points, and the endpoints of the interval, into the function, and you look at the values. The largest value you get is the maximum, the smallest value you get is the minimum. The procedure is the same whether you are looking for the maximum or for the minimum. But if you are not regularly checking the endpoints, you will not always get the right answer, because the maximum (or the minimum) could be at one of the endpoints. (In the case you are looking for, evaluating at the endpoints gives an area of $0$, so that's the minimum). (If the domain is not finite and closed, things get more complicated. Often, considering the limit as you approach the endpoints (or the variable goes to $\infty$ or $-\infty$, whichever is appropriate) gives you information which, when combined with information about local extremes of the function (found also by using critical points and the first or second derivative tests), will let you determine whether you have local extremes or not. )
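Applying this recipe to the fence problem from the question (my instantiation): with $x$ the side perpendicular to the building, the area is $A(x)=x(500-2x)$ on the closed interval $[0,250]$; $A'(x)=500-4x$ vanishes at $x=125$, and comparing that stationary point with the endpoints gives the maximum.

```python
def A(x):
    return x * (500 - 2 * x)       # area enclosed with 500 m of fencing

# candidates: the stationary point of A'(x) = 500 - 4x, plus the endpoints
candidates = [0.0, 125.0, 250.0]   # A'(125) = 0
values = {x: A(x) for x in candidates}

best = max(values, key=values.get)
print(best, values[best])          # 125.0 31250.0 -> field is 125 m by 250 m
```

The endpoints both give area $0$, which is exactly the minimum mentioned in the answer.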
{ "language": "en", "url": "https://math.stackexchange.com/questions/79437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Proof: If $f'=0$ then $f$ is constant I'm trying to prove that if $f'=0$ then $f$ is constant WITHOUT using the Mean Value Theorem. My attempt [sketch of proof]: Assume that $f$ is not constant. Identify an interval $I_1$ such that $f$ is not constant on it. Identify $I_2$ within $I_1$ such that $f$ is not constant. Repeat this and by the Nested Intervals Principle, there is a point $c$ within $I_n$ for any $n$ such that $f(c)$ is not constant... This is where I realized that my approach might be wrong. Even if it isn't I don't know how to proceed. Thanks for reading and any help/suggestions/corrections would be appreciated.
So we have to prove that $f'(x)\equiv0$ $\ (a\leq x\leq b)$ implies $f(b)=f(a)$, without using the MVT or the fundamental theorem of calculus. Assume that an $\epsilon>0$ is given once and for all. As $f'(x)\equiv0$, for each fixed $x\in I:=[a,b]$ there is a neighborhood $U_\delta(x)$ such that $$\Biggl|{f(y)-f(x)\over y-x}\Biggr|\leq\epsilon\qquad\bigl(y\in\dot U_\delta(x)\bigr)$$ ($\delta$ depends on $x$). For each $x\in I\ $ put $U'(x):=U_{\delta/3}(x)$. Then the collection $\bigl(U'(x)\bigr)_{x\in I}$ is an open covering of $I$. Since $I$ is compact there exists a finite subcovering, and we may assume there is a finite sequence $(x_n)_{0\leq n\leq N}$ with $$a=x_0<x_1<\ldots< x_{N-1}<x_N=b$$ such that $I\subset\bigcup_{n=0}^N\ U'(x_n)$. The $\delta/3$-trick guarantees that $$|f(x_n)-f(x_{n-1})|\leq \epsilon(x_n-x_{n-1}).$$ By summing up we therefore obtain the estimate $|f(b)-f(a)|\leq \epsilon(b-a)$, and as $\epsilon>0$ was arbitrary it follows that $f(b)=f(a)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/79566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
What was the notation for functions before Euler? According to the Wikipedia article, [Euler] introduced much of the modern mathematical terminology and notation, particularly for mathematical analysis, such as the notion of a mathematical function. — Leonhard Euler, Wikipedia What was the notation for functions before him?
Let's observe an example: $a)$ formal description of a function (two-part notation) $f : \mathbf{N} \rightarrow \mathbf{R}$ $n \mapsto \sqrt{n}$ $b)$ Euler's notation: $f(n)=\sqrt{n}$ I don't know who introduced the two-part notation, but I think that it must be older than Euler's notation, since it gives more information about the function and is therefore closer to the correct definition of a function than Euler's notation. There is also a good Wikipedia article about notation for differentiation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/79613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Upper bound for $-t \log t$ While reading Csiszár & Körner's "Information Theory: Coding Theorems for Discrete Memoryless Systems", I came across the following argument: Since $f(t) \triangleq -t\log t$ is concave and $f(0) = 0$ and $f(1) = 0$, we have for every $0 \leq t \leq 1-\tau$, $0 \leq \tau \leq 1/2$, \begin{equation}|f(t) - f(t+\tau)| \leq \max (f(\tau), f(1-\tau)) = -\tau \log\tau\end{equation} I can't make any progress in seeing how this bound follows from the properties of $f(t)$. Any insights would be greatly appreciated.
The function $g$ defined on the interval $I=[0,1-\tau]$ by $g(t)=f(t)-f(t+\tau)$ has derivative $g'(t)=-\log(t)+\log(t+\tau)$. This derivative is positive hence $g$ is increasing on $I$ from $g(0)=-f(\tau)<0$ to $g(1-\tau)=f(1-\tau)>0$. For every $t$ in $I$, $g(t)$ belongs to the interval $[-f(\tau),f(1-\tau)]$, in particular $|g(t)|\leqslant\max\{f(\tau),f(1-\tau)\}$.
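A quick numerical check of the bound (a sanity check, not a proof): for a sample $\tau$, the difference $|f(t)-f(t+\tau)|$ indeed stays below $-\tau\log\tau$ on a grid over $[0,1-\tau]$.

```python
import math

def f(t):
    """f(t) = -t log t, extended by continuity with f(0) = 0."""
    return 0.0 if t == 0.0 else -t * math.log(t)

tau = 0.3
bound = -tau * math.log(tau)   # equals max(f(tau), f(1 - tau)) since tau <= 1/2
assert bound >= f(1 - tau)

steps = 1000
for k in range(steps + 1):
    t = (1 - tau) * k / steps   # grid over [0, 1 - tau]
    assert abs(f(t) - f(t + tau)) <= bound + 1e-12
```

The bound is attained at the endpoint $t=0$, where $|f(0)-f(\tau)| = f(\tau) = -\tau\log\tau$.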
{ "language": "en", "url": "https://math.stackexchange.com/questions/79683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
limit of $f$ and $f''$ exists implies limit of $f'$ is 0 Prove that if $\lim\limits_{x\to\infty}f(x)$ and $\lim\limits_{x\to\infty}f''(x)$ exist, then $\lim\limits_{x\to\infty}f'(x)=0$. I can prove that $\lim\limits_{x\to\infty}f''(x)=0$. Otherwise $f'(x)$ goes to infinity and $f(x)$ goes to infinity, contradicting the fact that $\lim\limits_{x\to\infty}f(x)$ exists. I can also prove that if $\lim\limits_{x\to\infty}f'(x)$ exists, it must be 0. So it remains to prove that $\lim\limits_{x\to\infty}f'(x)$ exists. I'm stuck at this point.
This is similar to a recent Putnam problem, actually. By Taylor's theorem with error term, we know that for any $x$, $$ f(x+1) = f(x) + f'(x) + \tfrac12f''(t) $$ for some $x\le t\le x+1$. Solve for $f'(x)$ and take limits....
{ "language": "en", "url": "https://math.stackexchange.com/questions/79755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
$f'(x)-xf(x)=0$ has more roots than $f(x)=0$ Let $f(x)$ be a polynomial with real coefficients. Show that the equation $f'(x)-xf(x)=0$ has more roots than $f(x)=0$. I saw the hint, nevertheless I can't prove it clearly. The hint is that $f(x)e^{-x^2/2}$ has a derivative $(f'(x)-xf(x))e^{-x^2/2}$, and use the Rolle's theorem. My outline: I think that $f'-xf$ have zeros between distinct zeros of $f$, and if $f$ has a zero of multiplicity $k$, then $f'-xf$ has the same zero with multiplicity $k-1$. But how can I show that $f'-xf$ have zeros outside of zeros of $f$, i.e. $(-\infty,\alpha_1)$ and $(\alpha_n,\infty)$ where $\alpha_1$, $\alpha_n$ are the first, last zero of $f$ respectively?
$f(x)e^{-x^2/2}$ is zero at $\alpha_1$, and tends to zero at $-\infty$. So it must have a zero derivative somewhere in $(-\infty,\alpha_1)$. Edited to reply to Gobi's comment You can use Rolle's theorem after a little work. Let us write $g(x)$ for $f(x)e^{-x^2/2}$. Take any point $t \in (-\infty,\alpha_1)$; we may assume $g(t) > 0$ (if $g(t) < 0$ the argument is symmetric, and if $g(t) = 0$ we can apply Rolle's theorem on $[t,\alpha_1]$ directly). Since $g(x)$ tends to zero at $-\infty$, there is a point $c < t$ such that $g(c) < g(t)/2$. Then by the Intermediate Value Theorem, there exist points $a \in (c,t)$ and $b \in (t,\alpha_1)$ such that $g(a) = g(b) = g(t)/2$. Now you can use Rolle's theorem on $(a,b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/79821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
ArcTan(2) a rational multiple of $\pi$? Consider a $2 \times 1$ rectangle split by a diagonal. Then the two angles at a corner are ArcTan(2) and ArcTan(1/2), which are about $63.4^\circ$ and $26.6^\circ$. Of course the sum of these angles is $90^\circ = \pi/2$. I would like to know if these angles are rational multiples of $\pi$. It doesn't appear that they are, e.g., $(\tan^{-1} 2 )/\pi$ is computed as 0.35241638234956672582459892377525947404886547611308210540007768713728\ 85232139736632682857010522101960 to 100 decimal places by Mathematica. But is there a theorem that could be applied here to prove that these angles are irrational multiples of $\pi$? Thanks for ideas and/or pointers! (This question arose thinking about Dehn invariants.)
Lemma: If $x$ is a rational multiple of $\pi$ then $2 \cos(x)$ is an algebraic integer. Proof $$\cos\bigl((n+1)x\bigr)+ \cos\bigl((n-1)x\bigr)= 2\cos(nx)\cos(x) \,.$$ Thus $$2\cos\bigl((n+1)x\bigr)+ 2\cos\bigl((n-1)x\bigr)= 2\cos(nx)\cdot 2\cos(x) \,.$$ It follows from here that $2 \cos(nx)= P_n (2\cos(x))$, where $P_n$ is a monic polynomial of degree $n$ with integer coefficients. Indeed $P_{n+1}=XP_n-P_{n-1}$ with $P_1(X)=X$ and $P_0(X)=2$. Then, if $x$ is a rational multiple of $\pi$, we have $nx =2k \pi$ for some positive integer $n$ and some integer $k$, and thus $P_n(2 \cos(x))=2\cos(2k\pi)=2$. Hence $2\cos(x)$ is a root of the monic integer polynomial $P_n(X)-2$, so it is an algebraic integer. Now, coming back to the problem. If $\tan(x)=2$ then $\cos(x) =\frac{1}{\sqrt{5}}$. Suppose now by contradiction that $x$ is a rational multiple of $\pi$. Then $2\cos(x) =\frac{2}{\sqrt{5}}$ is an algebraic integer, and so is its square $\frac{4}{5}$. But this number is an algebraic integer and rational, thus an integer, contradiction. P.S. If $\tan(x)$ is rational and $x$ is a rational multiple of $\pi$, it follows in exactly the same way that $\cos^2(x)$ is rational, and thus $4 \cos^2(x)$ is an algebraic integer and rational, hence an integer in $\{0,1,2,3,4\}$. Combined with the rationality of $\tan(x)$, this forces $\tan(x) \in \{0, \pm 1\}$.
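The identity $2\cos(nx) = P_n(2\cos x)$ behind the lemma can be sanity-checked numerically. Note that the correct seed for the recurrence is $P_0 = 2$, since $2\cos(0\cdot x) = 2$; this checks the identity itself, not the lemma:

```python
import math

def P(n, X):
    """Evaluate P_n(X) using P_{n+1} = X*P_n - P_{n-1}, with P_0 = 2, P_1 = X."""
    prev, curr = 2.0, X
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, curr = curr, X * curr - prev
    return curr

x = 0.7  # an arbitrary test angle
for n in range(12):
    assert abs(P(n, 2 * math.cos(x)) - 2 * math.cos(n * x)) < 1e-9
```

For instance $P_2(X) = X^2 - 2$, matching the double-angle formula $2\cos(2x) = (2\cos x)^2 - 2$.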
{ "language": "en", "url": "https://math.stackexchange.com/questions/79861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 3, "answer_id": 1 }
If a Fourier Transform is continuous in frequency, then what are the "harmonics"? The basic idea of a Fourier series is that you use integer multiples of some fundamental frequency to represent any time domain signal. Ok, so if the Fourier Transform (Non periodic, continuous in time, or non periodic, discrete in time) results in a continuum of frequencies, then uh, is there no fundamental frequency / concept of using integer multiples of some fundamental frequency?
In order to talk about a fundamental frequency, you need a fundamental period. But the Fourier transform deals with integrable functions ($L^1$, or $L^2$ if you go further in the theory) defined on the whole real line, and they are not periodic (except the zero function).
{ "language": "en", "url": "https://math.stackexchange.com/questions/79893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show $X_n {\buildrel p \over \rightarrow} X$ and $X_n \le Z$ a.s., implies $X \le Z$ a.s. Suppose $X_n  {\buildrel p \over \rightarrow} X$ and $X_n \le Z,\forall n \in \mathbb{N}$. Show $X \le Z$ almost surely. I've try the following, but I didn't succeed. By the triangle inequality, $X=X-X_n+X_n \le |X_n-X|+|X_n|$. Hence, $P(X \le Z) \le P(|X_n-X| \le Z) + P(|X_n| \le Z)$. I know that, since $X_n  {\buildrel p \over \rightarrow} X$ then $P(|X_n-X| \le Z) \to 1$, and we have $P( |X_n| \le Z)=1$. I can't go further.
$X_n {\buildrel p \over \rightarrow} X$ implies that there is a subsequence $X_{n(k)}$ with $X_{n(k)}\to X$ almost surely. Since $X_{n(k)}\le Z$ almost surely for every $k$, passing to the limit along this subsequence gives $X\le Z$ almost surely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/79946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Sequential continuity for quotient spaces Sequential continuity is equivalent to continuity in a first countable space $X$. Look at the quotient projection $g:X\to Y$ to the space of equivalence classes of an equivalence relation with the quotient topology and a map $f:Y\to Z$. I want to test if $f$ is continuous. Can I do this by showing $$\lim_n\;f(g(x_n))=f(g(\lim_n\;x_n))\;?$$ Can I do this at least if $X$ is a metric space? Can I do this by showing $$\lim_n\;f(g(x_n))=f(\lim_n\;g(x_n))\;?$$
You certainly can’t do it in general if $X$ isn’t a sequential space, i.e., one whose structure is completely determined by its convergent sequences. $X$ is sequential iff it’s the quotient of a metric space, and the composition of two quotient maps is a quotient map, so if $X$ is sequential, $Y$ is also sequential, therefore $f$ is continuous iff $\lim\limits_n\;f(y_n) = f(\lim\limits_n\;y_n)$ for every convergent sequence $\langle y_n:n\in\omega\rangle$ in $Y$. However, you can’t always pull this back to $X$. Edit (24 April 2016): Take $X=[0,1]$, and let $Y$ be the quotient obtained by identifying $0$ and $1$ to a point $p$, $g$ being the quotient map. Suppose that $f:Y\to Z$, and you want to test the continuity of $f$. For $n\in\mathbb{Z}^+$ let $$x_n=\begin{cases} \frac2{n+4},&\text{if }n\text{ is even}\\ \frac{n+1}{n+3},&\text{if }n\text{ is odd}\;, \end{cases}$$ so that $$\langle x_n:n\in\Bbb Z^+\rangle=\left\langle\frac12,\frac13,\frac23,\frac14,\frac34,\frac15,\frac45,\ldots\right\rangle\;.$$ Then $\langle g(x_n):n\in\mathbb{Z}^+\rangle$ converges to $p$ in $Y$, so you need to test whether $\langle f(g(x_n)):n\in\mathbb{Z}^+\rangle$ converges to $f(p)$ in $Z$, but you can’t do this by asking whether $$\lim_n\;f(g(x_n))=f(g(\lim_n\;x_n))\;,$$ because $\langle x_n:n\in\mathbb{Z}^+\rangle$ isn’t convergent in $X$. Thus, the answer to your first question is no even if $X$ is metric. (This replaces a flawed example with one that actually works, borrowed from this answer by Eric Wofsey.) End edit. The answer to your second question, however, is yes. Let $\langle y_n:n\in\omega\rangle$ be a convergent sequence in $Y$ with limit $y$. Then there are $x_n\in X$ such that $y_n = g(x_n)$ for $n\in\omega$, so checking that $$\lim_n\;f(y_n)=f(y)$$ is checking that $$\lim_n\;f(g(x_n))=f(\lim_n\;g(x_n))\;.$$ Note, though, that you have to check all sequences in $X$, not just convergent ones.
{ "language": "en", "url": "https://math.stackexchange.com/questions/80017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Taking the derivative of $\frac1{x} - \frac1{e^x-1}$ using the definition Given $f$: $$ f(x) = \begin{cases} \frac1{x} - \frac1{e^x-1} & \text{if } x \neq 0 \\ \frac1{2} & \text{if } x = 0 \end{cases} $$ I have to find $f'(0)$ using the definition of derivative (i.e., limits). I already know how to differentiate and stuff, but I still can't figure out how to solve this. I know that I need to begin like this: $$ f'(0) = \lim_{h \to 0} \frac{f(h)-f(0)}{h} = \lim_{h \to 0} \frac{\frac1{h} - \frac1{e^h-1}-\frac1{2}}{h} $$ But I don't know how to do this. I feel like I should, but I can't figure it out. I tried distributing the denominator, I tried l'Hôpital's but I get $0$ as the answer, while according to what my prof gave me (this is homework) it should be $-\frac1{12}$. I really don't know how to deal with these limits; could someone give me a few tips?
Hmm, another approach, which seems simpler to me. However I'm not sure whether it is formally correct, so possibly someone else can also comment on this. The key here is that the expression $\small {1 \over e^x-1 } $ is a very well known generating function for the Bernoulli numbers $$\small {1 \over e^x-1 } = x^{-1} - 1/2 + {1 \over 12} x - {1 \over 720} x^3 + {1 \over 30240 }x^5 + O(x^7) $$ from where we can rewrite $$\small \frac1x - {1 \over e^x-1 } = 1/2 - {1 \over 12} x + {1 \over 720} x^3 - {1 \over 30240 }x^5 + O(x^7) \qquad \text{ for } x \ne 0 $$ and because by the second definition $\small f(x)=\frac12 \text{ for }x=0$ that power series is the analytic expression for both cases at and near that point. Then the derivative can be taken termwise: $$\small (\frac1x - {1 \over e^x-1 })' = - {1 \over 12} + {3 \over 720} x^2 - {5 \over 30240 }x^4 + O(x^6) $$ and is $\small -\frac1{12} $ at $x=0$.
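The value $f'(0)=-\frac1{12}$ can also be sanity-checked numerically, without any series machinery, by evaluating the difference quotient at $0$ (using `math.expm1` to avoid cancellation in $e^h-1$ for small $h$):

```python
import math

def f(x):
    """f(x) = 1/x - 1/(e^x - 1) for x != 0; f(0) = 1/2 by continuity."""
    if x == 0.0:
        return 0.5
    return 1.0 / x - 1.0 / math.expm1(x)

# one-sided difference quotients approach f'(0) = -1/12;
# the error is of order h^2 since the next series term is x^3/720
for h in (1e-2, 1e-3, 1e-4):
    dq = (f(h) - f(0.0)) / h
    assert abs(dq - (-1.0 / 12.0)) < 1e-5
```

This agrees with the termwise differentiation of the power series above.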
{ "language": "en", "url": "https://math.stackexchange.com/questions/80078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
A question about composition of trigonometric functions A little something I'm trying to understand: $\sin(\arcsin{x})$ is always $x$, but $\arcsin(\sin{x})$ is not always $x$ So my question is simple - why? Since each cancels the other, it would make sense that $\arcsin(\sin{x})$ would always result in $x$. I'd appreciate any explanation. Thanks!
This comes from trying to invert a function that is not a bijection. Let $f:X\to Y$ be some function, i.e. for each $x\in X$ we have $f(x)\in Y$. If $f$ is not a bijection then we cannot find a function $g:Y\to X$ such that $g(f(x)) = x$ for all $x\in X$ and $f(g(y)) =y$ for all $y\in Y$. Consider your example, $f = \sin:\mathbb R\to\mathbb R$. It is neither a surjection (since $\sin (\mathbb R) = [-1,1]$) nor an injection (since $\sin x = \sin(x+2\pi k)$ for all $k\in \mathbb Z$). As a result you cannot say that $\sin$ has an inverse. On the other hand, if you consider the restriction $f^* = \sin|_{X}$ with $X = [-\pi/2,\pi/2]$ and the codomain $Y = [-1,1]$ then $f^*$ has an inverse since $$ \sin|_{[-\pi/2,\pi/2]}:[-\pi/2,\pi/2]\to[-1,1] $$ is a bijection. As a result you obtain a function $\arcsin:[-1,1]\to [-\pi/2,\pi/2]$ which is the inverse for $f^* = \sin|_{[-\pi/2,\pi/2]}$. In particular it means that $\sin (\arcsin{y})=f^*(\arcsin{y}) = y$ for all $y\in[-1,1]$. On the other hand, if you take $x = \frac{\pi}2+2\pi$ then $\sin x = 1$ and hence $\arcsin(\sin{x}) = \frac{\pi}2\neq x$. More precisely, $\arcsin$ is a partial inverse for $\sin$: it undoes $\sin$ only on the interval $[-\pi/2,\pi/2]$.
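This is easy to see numerically; a small sketch (`math.asin` is Python's $\arcsin$):

```python
import math

# sin(arcsin(y)) recovers y for every y in [-1, 1]
for y in (-1.0, -0.3, 0.0, 0.5, 1.0):
    assert abs(math.sin(math.asin(y)) - y) < 1e-12

# but arcsin(sin(x)) always lands back in [-pi/2, pi/2],
# so it need not return x itself
x = math.pi / 2 + 2 * math.pi
assert abs(math.asin(math.sin(x)) - math.pi / 2) < 1e-6
assert abs(math.asin(math.sin(x)) - x) > 1.0
```

The second block shows exactly the example from the text: $x=\frac{\pi}{2}+2\pi$ collapses to $\frac{\pi}{2}$.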
{ "language": "en", "url": "https://math.stackexchange.com/questions/80146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Variance over two periods with known variances? If period 1 has variance v1 and period 2 has variance v2, what is the variance over period 1 and period 2? (period 1 and period 2 are the same length) I've done some manual calculations with random numbers, and I can't seem to figure out how to calculate the variance over period 1 and period 2 from v1 and v2.
If you only know the variances of your two sets, you can't compute the variance of the union of the two. However, if you know both the variances and the means of two sets, then there is a quick way to calculate the variance of their union. Concretely, say you have two sets $A$ and $B$ for which you know the means $\mu_A$ and $\mu_B$ and (population) variances $\sigma^2_A$ and $\sigma^2_B$, as well as the sizes of each set $n_A$ and $n_B$. You want to know the mean $\mu_X$ and variance $\sigma^2_X$ of the union $X=A\cup B$ of the two sets (assuming that the union is disjoint, i.e. that $A$ and $B$ don't have any elements in common). With a little bit of scribbling and algebra, you can reveal that $$\mu_X = \frac{n_A\mu_A + n_B\mu_B}{n_A+n_B}$$ and $$\sigma^2_X = \frac{n_A\sigma^2_A + n_B\sigma^2_B}{n_A + n_B} + \frac{n_An_B}{(n_A+n_B)^2} (\mu_A - \mu_B)^2 $$ (note that the cross term involving the means enters with a positive sign). As pointed out in the answer by tards, the formula for the variance of the combined set depends explicitly on the means of the sets $A$ and $B$, not just on their variances. Moreover, you can see that adding a constant to one of the sets doesn't change the first term (the variances remain the same) but it does change the second term, because one of the means changes. The fact that the dependence on the means enters through a term of the form $\mu_A-\mu_B$ shows you that if you added the same constant to both sets, then the overall variance would not change (just as you'd expect) because although both the means change, the effect of this change cancels out. Magic!
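A quick numerical check of these formulas on made-up data (using population variances, i.e. dividing by $n$; the $(\mu_A-\mu_B)^2$ cross term enters with a positive sign):

```python
def mean(xs):
    return sum(xs) / len(xs)

def pvar(xs):
    """Population variance: average squared deviation from the mean."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

A = [1.0, 2.0, 4.0, 7.0]
B = [3.0, 3.0, 8.0]
nA, nB = len(A), len(B)
muA, muB = mean(A), mean(B)

mu_X = (nA * muA + nB * muB) / (nA + nB)
var_X = ((nA * pvar(A) + nB * pvar(B)) / (nA + nB)
         + nA * nB / (nA + nB) ** 2 * (muA - muB) ** 2)

# compare against computing directly on the union
assert abs(mu_X - mean(A + B)) < 1e-9
assert abs(var_X - pvar(A + B)) < 1e-9
```

For these particular sets the combined mean is $4$ and the combined variance is $40/7$.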
{ "language": "en", "url": "https://math.stackexchange.com/questions/80195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
The Mathematics of Tetris I am a big fan of the old-school games and I once noticed that there is a sort of parity associated to one and only one Tetris piece, the $\color{purple}{\text{T}}$ piece. This parity is found with no other piece in the game. Background: The Tetris playing field has width $10$. Rotation is allowed, so there are then exactly $7$ unique pieces, each of which is composed of $4$ blocks. For convenience, we can name each piece by a letter. See this Wikipedia page for the Image ($\color{cyan}{\text{I}}$ is for the stick piece, $\color{goldenrod}{\text{O}}$ for the square, and $\color{green}{\text{S}},\color{purple}{\text{T}},\color{red}{\text{Z}},\color{orange}{\text{L}},\color{blue}{\text{J}}$ are the others) There are $2$ sets of $2$ pieces which are mirrors of each other, namely $\color{orange}{\text{L}}, \color{blue}{\text{J}}$ and $\color{green}{\text{S}},\color{red}{\text{Z}}$ whereas the other three are symmetric $\color{cyan}{\text{I}},\color{goldenrod}{\text{O}}, \color{purple}{\text{T}}$ Language: If a row is completely full, that row disappears. We call it a perfect clear if no blocks remain in the playing field. Since the blocks are size 4, and the playing field has width $10$, the number of blocks for a perfect clear must always be a multiple of $5$. My Question: I noticed while playing that the $\color{purple}{\text{T}}$ piece is particularly special. It seems that it has some sort of parity which no other piece has. Specifically: Conjecture: If we have played some number of pieces, and we have a perfect clear, then the number of $\color{purple}{\text{T}}$ pieces used must be even. Moreover, the $\color{purple}{\text{T}}$ piece is the only piece with this property. I have verified the second part; all of the other pieces can give a perfect clear with either an odd or an even number used. However, I am not sure how to prove the first part. 
I think that assigning some kind of invariant to the pieces must be the right way to go, but I am not sure. Thank you,
My colleague, Ido Segev, pointed out that there is a problem with most of the elegant proofs here - Tetris is not just a problem of tiling a rectangle. Below is his proof that the conjecture is, in fact, false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/80246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "277", "answer_count": 4, "answer_id": 3 }
A book asks me to prove a statement but I think it is false The problem below is from Cupillari's Nuts and Bolts of Proofs. Prove the following statement: Let $a$ and $b$ be two relatively prime numbers. If there exists an $m$ such that $(a/b)^m$ is an integer, then $b=1$. My question is: Is the statement true? I believe the statement is false because there exists an $m$ such that $(a/b)^m$ is an integer, and yet $b$ does not have to be $1$. For example, let $m=0$. In this case, $(a/b)^0=1$ is an integer as long as $b \neq 0$. So I think the statement is false, but I am confused because the solution at the back of the book provides a proof that the statement is true.
Your counterexample is valid. But the statement is true if $m$ is required to be a positive integer. Alternatively, note that the statement is also false if $m$ is allowed to be negative: for example, with $a=1$, $b=2$, $m=-1$ we get $(1/2)^{-1}=2$, an integer, yet $b=2\neq 1$. In my opinion, it seems like you were supposed to assume $m>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/80303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Crafty solutions to the following limit The following problem came up at dinner, I know some ways to solve it but they are quite ugly and as some wise man said: There is no place in the world for ugly mathematics. These methods are using l'Hôpital, but that becomes quite hideous very quickly or by using series expansions. So I'm looking for slick solutions to the following problem: Compute $\displaystyle \lim_{x \to 0} \frac{\sin(\tan x) - \tan(\sin x)}{\arcsin(\arctan x) - \arctan(\arcsin x)}$. I'm curious what you guys will make of this.
By Taylor series we obtain that

* $\sin (\tan x)= \sin\left(x + \frac13 x^3 + \frac2{15}x^5+ \frac{17}{315}x^7+O(x^9) \right)=x + \frac16 x^3 -\frac1{40}x^5 - \frac{55}{1008}x^7+O(x^9)$

and similarly

* $\tan (\sin x)= x + \frac16 x^3 -\frac1{40}x^5 - \frac{107}{5040}x^7+O(x^9)$
* $\arcsin (\arctan x)= x - \frac16 x^3 +\frac{13}{120}x^5 - \frac{341}{5040}x^7+O(x^9)$
* $\arctan (\arcsin x)= x - \frac16 x^3 +\frac{13}{120}x^5 - \frac{173}{5040}x^7+O(x^9)$

which leads to the result: both the numerator and the denominator equal $-\frac{1}{30}x^7+O(x^9)$, so the limit is $1$.
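As a numerical (not symbolic) sanity check, the ratio can be evaluated at small $x$; since numerator and denominator both behave like $-x^7/30$, the ratio should already be close to $1$:

```python
import math

def num(x):
    return math.sin(math.tan(x)) - math.tan(math.sin(x))

def den(x):
    return math.asin(math.atan(x)) - math.atan(math.asin(x))

x = 0.05
assert num(x) < 0 and den(x) < 0              # both are roughly -x^7/30 < 0
assert abs(num(x) / (-x**7 / 30) - 1.0) < 0.2  # leading term dominates
assert abs(num(x) / den(x) - 1.0) < 0.1        # ratio is 1 + O(x^2)
```

Double precision is enough here: the differences are about $2.6\cdot 10^{-11}$, far above the rounding error of the individual terms.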
{ "language": "en", "url": "https://math.stackexchange.com/questions/80364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 2, "answer_id": 1 }
How strongly does a matrix distort angles? How strongly does it distort lengths anisotropically? Let there be given a square matrix $M \in \mathbb R^{N\times N}$. I would like to have some kind of measure of how far it

* distorts angles between vectors,
* stretches and squeezes along distinguished directions.

While I am fine with $M = S \cdot Q$, with $S$ being a positive multiple of the identity and $Q$ being an orthogonal matrix, I would like to measure how far $M$ deviates from this form. In more visual terms, I would like a numerical expression measuring to what extent a set fails to be similar to its image under $M$. What I have in mind is a numerical measure, just like, e.g., the determinant of $M$ measures the volume of the image of the unit cube under $M$. Can you help me?
Consider the singular value decomposition of the matrix and look at the singular values. These tell you how the unit sphere is stretched or squeezed by the matrix along the directions corresponding to the singular vectors.
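For a $2\times 2$ matrix the singular values can be computed by hand from the eigenvalues of $M^TM$, and the ratio $\sigma_{\max}/\sigma_{\min}$ is then one natural anisotropy measure: it equals $1$ exactly when $M$ is a positive multiple of an orthogonal matrix. This is a sketch; for larger matrices one would use a library SVD routine.

```python
import math

def singular_values_2x2(a, b, c, d):
    """Singular values of M = [[a, b], [c, d]] from the eigenvalues of M^T M."""
    p = a * a + c * c          # (M^T M)[0][0]
    q = a * b + c * d          # (M^T M)[0][1]
    r = b * b + d * d          # (M^T M)[1][1]
    half_tr = (p + r) / 2
    disc = math.sqrt(max(half_tr ** 2 - (p * r - q * q), 0.0))
    return math.sqrt(half_tr + disc), math.sqrt(max(half_tr - disc, 0.0))

# scaled rotation (M = S * Q): no angle distortion, sigma_max / sigma_min == 1
s1, s2 = singular_values_2x2(0.0, -2.0, 2.0, 0.0)   # 2 * (90-degree rotation)
assert abs(s1 / s2 - 1.0) < 1e-12

# shear: distorts angles, sigma_max / sigma_min > 1
s1, s2 = singular_values_2x2(1.0, 1.0, 0.0, 1.0)
assert s1 / s2 > 2.0
```

The shear example has condition number $\sigma_{\max}/\sigma_{\min}\approx 2.618$, quantifying how far it is from a similarity transform.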
{ "language": "en", "url": "https://math.stackexchange.com/questions/80420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Decoding and correcting $(1,0,0,1,0,0,1)$ Hamming$ (7,4)$ code Correct any error and decode $(1,0,0,1,0,0,1)$ encoded using Hamming $(7,4)$ assuming at most one error. The message $(a,b,c,d)$ is encoded $(x,y,a,z,b,c,d)$ The solution states $H_m = (0,1,0)^T$ which corresponds to the second column in the Standard Hamming (7,4) matrix which means the second digit, 0, is wrong and the corrected code is $(1,1,0,1,0,0,1)$. The resulting code becomes $(0,0,0,1)$ My question is: How do I get $H_m$?
The short answer is that you get the syndrome $H_m$ by multiplying the received vector $r$ with the parity check matrix: $H_m=H r^T$. There are several equivalent parity check matrices for this Hamming code, and you haven't shown us which is the one your source uses. The bits that you did give hint at the possibility that the check matrix could be $$ H=\pmatrix{1&0&1&0&1&0&1\cr0&1&1&0&0&1&1\cr0&0&0&1&1&1&1\cr}. $$ To be fair, that is one of the more popular choices, and also fits your received word / syndrome pair :-). The reason I'm nitpicking about this is that any one of the $7!=5040$ permutations of the columns would work equally well for encoding/decoding. Being an algebraist at heart, I often prefer an ordering that exhibits the cyclic nature of the code better. Somebody else might insist on an ordering that matches with systematic encoding. Doesn't matter much! Here your received vector $r=(1,0,0,1,0,0,1)$ has bits on at positions 1, 4 and 7, so the syndrome you get is $$ H_m=H r^T=\pmatrix{1\cr 0\cr0\cr}+\pmatrix{0\cr 0\cr1\cr}+\pmatrix{1\cr 1\cr1\cr}=\pmatrix{0\cr 1\cr0\cr} $$ the modulo two sum of the first, fourth and seventh columns of $H$. If $r$ were a valid encoded message, then the syndrome $H_m$ would be the zero vector. As that was not the case here, an error (one or more) has occurred. The task of the decoder is to find the most likely error, and the usual assumptions lead us to the goal of toggling as few bits of $r$ as possible. The nice thing about the Hamming code is that we can always do this by toggling at most one bit. We identify $H_m$ as the second column of $H$, so to make the syndrome zero by correcting a single bit we need to toggle the second bit of $r$. What makes the Hamming code tick is that all possible non-zero syndrome vectors occur as columns of $H$. Therefore we always meet our goal of making the syndrome zero by toggling (at most) a single bit of any received word.
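The whole decoding step is short enough to script, using the parity-check matrix $H$ above (the column order matches the encoding $(x,y,a,z,b,c,d)$ from the question):

```python
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(r):
    """Compute H * r^T over GF(2)."""
    return [sum(h * x for h, x in zip(row, r)) % 2 for row in H]

def correct(r):
    """Flip the single bit whose column of H matches the (nonzero) syndrome."""
    s = syndrome(r)
    out = list(r)
    if s != [0, 0, 0]:
        pos = next(j for j in range(7) if [H[i][j] for i in range(3)] == s)
        out[pos] ^= 1
    return out

r = [1, 0, 0, 1, 0, 0, 1]
assert syndrome(r) == [0, 1, 0]           # second column of H -> bit 2 is wrong
c = correct(r)
assert c == [1, 1, 0, 1, 0, 0, 1]
# message bits (a, b, c, d) sit at positions 3, 5, 6, 7 (1-based)
assert [c[2], c[4], c[5], c[6]] == [0, 0, 0, 1]
```

This reproduces the worked example: syndrome $(0,1,0)^T$, corrected word $(1,1,0,1,0,0,1)$, decoded message $(0,0,0,1)$.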
{ "language": "en", "url": "https://math.stackexchange.com/questions/80472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can one prove that $\sqrt[3]{\left ( \frac{a^4+b^4}{a+b} \right )^{a+b}} \geq a^ab^b$, $a,b\in\mathbb{N^{*}}$? How can one prove that $\sqrt[3]{\left ( \frac{a^4+b^4}{a+b} \right )^{a+b}} \geq a^ab^b$, $a,b\in\mathbb{N^{*}}$?
Since $\log(x)$ is concave, $$ \log\left(\frac{ax+by}{a+b}\right)\ge\frac{a\log(x)+b\log(y)}{a+b}\tag{1} $$ Rearranging $(1)$ and exponentiating yields $$ \left(\frac{ax+by}{a+b}\right)^{a+b}\ge x^ay^b\tag{2} $$ Plugging $x=a^3$ and $y=b^3$ into $(2)$ gives $$ \left(\frac{a^4+b^4}{a+b}\right)^{a+b}\ge a^{3a}b^{3b}\tag{3} $$ and $(3)$ is the cube of the posited inequality. From my comment (not using concavity): For $0<t<1$, the minimum of $t+(1-t)u-u^{1-t}$ occurs when $(1-t)-(1-t)u^{-t}=0$; that is, when $u=1$. Therefore, $t+(1-t)u-u^{1-t}\ge0$. If we set $u=\frac{y}{x}$ and $t=\frac{a}{a+b}$, we get $$ \frac{ax+by}{a+b}\ge x^{a/(a+b)}y^{b/(a+b)}\tag{4} $$ Inequality $(2)$ is simply $(4)$ raised to the $a+b$ power.
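For integer $a,b$ the cubed form of the inequality in $(3)$, namely $(a^4+b^4)^{a+b}\ge (a+b)^{a+b}\,a^{3a}b^{3b}$, can be spot-checked in exact integer arithmetic (a check over a small range, not a proof):

```python
for a in range(1, 8):
    for b in range(1, 8):
        lhs = (a ** 4 + b ** 4) ** (a + b)
        rhs = (a + b) ** (a + b) * a ** (3 * a) * b ** (3 * b)
        assert lhs >= rhs   # equality holds exactly when a == b
```

Using Python's arbitrary-precision integers avoids floating-point rounding, which matters because equality is attained whenever $a=b$.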
{ "language": "en", "url": "https://math.stackexchange.com/questions/80550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Convergence of $b_n=|a_n| + 1 - \sqrt {a_n^2+1}$ and $b_n = \frac{|a_n|}{1+|a_{n+2}|}$ here's my daily problem: 1) $b_n=|a_n| + 1 - \sqrt {a_n^2+1}$. I have to prove that, if $b_n$ converges to 0, then $a_n$ converges to 0 too. Here's how I have done, could someone please check if this is correct? I'm always afraid to square both sides. $ \begin{align*} 0&=|a_n| + 1 - \sqrt {a_n^2+1}\\ & -|a_n| = 1 - \sqrt {a_n^2+1}\\ & a_n^2 = 1 - 2 *\sqrt {a_n^2+1} + a_n^2 + 1\\ & 2 = 2 *\sqrt {a_n^2+1}\\ & 1 = \sqrt {a_n^2+1}\\ & 1 = {a_n^2+1}\\ & a_n^2 = 0 \Rightarrow a_n=0\\ \end{align*}$ 2) $b_n = \frac{|a_n|}{1+|a_{n+2}|}$ I have to prove the following statement is false with an example: "If $b_n$ converges to 0, then $a_n$ too." I'm pretty lost here, any directions are welcome! I thought that would only converge to 0, if $a_n=0$. Maybe if $a_n >>> a_{n+2}$? Thanks in advance! :)
You seem to have a fundamental misconception regarding the difference between the limit of a sequence and an element of a sequence. When we say $b_n$ converges to $0$, it does not mean $b_n = 0$ for all $n$. For instance $b_n = \frac{1}{n}$ is convergent to $0$, but there is no natural number $n$ for which $b_n = 0$. For i): what you tried is OK (though the misconception above shows in the way you have written it), but you are making some assumptions which need to be proved. If we were to rewrite what you wrote, it would be something like: if $a_n$ were convergent to $L$, then we would have $ 0 = |L| + 1 - \sqrt{L^2 + 1}$, and the algebra then shows that $L = 0$. Using $a_n$ instead of $L$ makes what you wrote nonsensical. Also, can you tell what assumption is being made here and needs justification? For ii): try constructing a sequence such that $\frac{a_{n+2}}{a_n} \to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/80607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to prove such a simple inequality? Let $f\in C[0,\infty)\cap C^1(0,\infty)$ be an increasing convex function with $f(0)=0$ $\lim_{t->\infty}\frac{f(t)}{t}=+\infty$ and $\frac{df}{dt} \ge 1$. Then there exists constants $C$ and $T$ such that for any $t\in [T,\infty)$, $\frac{df}{dt}\le Ce^{f(t)}.$ Is it correct? If the conditions are not enough, please add some condition and prove it. Thank you
It is false as can be seen by the following proof by contradiction. Suppose it is true, and such $C$ and $T$ exist. Consider the sequence of functions $f_n(t)=n(t-T) + 1+2T$ for $T\leq t\leq T+1$. Extend $f_n$ smoothly for $t < T$ so that $f_n(0)=0$, $f_n$ is increasing and convex, and $f_n'\geq 1$; and extend it superlinearly (say, quadratically) for $t > T+1$, keeping it convex, so that $\lim_{t\to\infty}f_n(t)/t=+\infty$. (For $n$ large enough such a convex $C^1$ extension on $[0,T]$ exists, since the average slope $(1+2T)/T$ there lies between $1$ and $n$.) Then we would have $f_n'(T) \leq Ce^{f_n(T)}$, and $f_n'(T)=n$ and $e^{f_n(T)}=e^{1+2T}$, hence the inequality would say that $n \leq Ce^{1+2T}$, i.e. $C \geq ne^{-(1+2T)}$, which is a contradiction as $n\to\infty$. NOTE: I assumed that you do not want the constants to depend on $f$ (as this would change the problem).
{ "language": "en", "url": "https://math.stackexchange.com/questions/80729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }