Trigonometric Coincidence I know that, using the Taylor series, the formula for $\sin x$ is $$x-x^3/3!+x^5/5!-x^7/7!\cdots,$$ where the unit of $x$ is the radian (so that $\pi/2$ is a right angle). However, the ratio of the circumference to the diameter of a circle is also $\pi$. Is this a coincidence, or is there a proof?
The facts you mention are definitely not a coincidence; they are a consequence of how things are defined, and partly a reason why they are defined so. (This is an incredibly vague statement, but the question itself is a little vague.) Some textbooks will define $\sin x$ by the series expansion you just mentioned, and then define $\pi$ as the least positive number such that $\sin \pi = 0$. Also, put $\cos x = \sin' x$. You can then check that $(\cos x, \sin x)$ parametrize points on the circle, and then calculate the circumference of a unit circle using the appropriate integral. This is a legitimate way of doing mathematics, but not a very illuminating one (in this particular situation), I'm afraid. If you know a little about differential equations (and are prepared not to be absolutely rigorous), you can proceed as follows. Let's agree that you know that there are functions $\sin x$ and $\cos x$ (as functions of the angle, so that the precise units are yet to be determined). Denote the measure of the full angle by $\tau$ (where $\tau$ is just a name for now, nothing more), and the circumference of the unit circle by $C$. Note that we could set $\tau$ to be pretty much any number (like the well-known constant $2\pi$, or $360$, or whatever), but $C$ is fixed once and for all, and in fact $C = 2\pi$. Now, $\sin x$ and $\cos x$ will have some expansions as Taylor series, which will depend on what choice of $\tau$ we take (e.g. you have $\sin \tau/4 = 1$, but what this means depends on what $\tau$ is). We would like to choose $\tau$ so that the Taylor series for $\sin$ is something "nice" - which is of course a very vague requirement. To figure out the Taylor series, it's easiest to start with some differential equations. So let's try to find out what $\sin'x$ is. After some thought (it's probably best to draw a picture...) you can convince yourself that $\sin'x = \frac{C}{\tau} \cos x$. Likewise, $\cos'x = - \frac{C}{\tau} \sin x$. Hopefully, it is clear that things will become nicest when $C = \tau$, so that $\sin' x = \cos x$ and $\cos'x = - \sin x$ - and this is the choice that the mathematicians made! With a little work, you can derive from the conditions $\sin' x = \cos x$ and $\cos'x = - \sin x$ the Taylor expansions for $\sin x$ and $\cos x$. You could set other units (i.e. choose a different $\tau$) - only then the Taylor expansion would not look so nice.
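As a quick numerical sketch (not part of the original answer; it only assumes Python's standard `math` module), one can check that the series from the question reproduces the geometric sine precisely when the input angle is measured in radians, i.e. with the choice $\tau = C = 2\pi$ described above:

```python
import math

def sin_series(x, terms=25):
    # Partial sum of x - x^3/3! + x^5/5! - ...  (the series from the question)
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

for deg in (30, 45, 90, 180):
    rad = math.radians(deg)                      # deg * pi / 180
    print(deg, round(sin_series(rad), 12), round(math.sin(rad), 12))
# The series matches math.sin when the angle is fed in radians,
# i.e. when a right angle is written as pi/2 -- the same pi as circumference/diameter.
```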
{ "language": "en", "url": "https://math.stackexchange.com/questions/480468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Want to clarify whether I am correct or not, $\Phi(G) \subseteq \Phi(H)$? I want you to clarify whether I am correct or not regarding the following question; I will be thankful to you for telling me if I am wrong. Let $G$ be a finite group and let $\Phi(G)$ denote its Frattini subgroup. Let $H$ be a subgroup of $G$ such that $\Phi(G) \subseteq H$. Since $\Phi(G)$ is the group of all non-generators of $G$, its elements are also non-generators of $H$. This implies that $\Phi(G) \subseteq \Phi(H)$. Regards, Steve
No. Take $G$ to be the dihedral group of order $8$. The Frattini subgroup is the unique normal subgroup of order $2$. But $G$ has two subgroups of order $4$ that are elementary abelian (and as such, they have trivial Frattini subgroup) and yet they contain $\Phi(G)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/480540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Three diagonals of a regular 11-sided polygon are chosen; probability of parallelism Could someone help me with this? Suppose $P$ is an 11-sided regular polygon and $S$ is the set of all lines that contain two distinct vertices of $P$. If three lines are randomly chosen from $S$, what is the probability that the chosen lines contain a pair of parallel lines?
But you are choosing three lines. The probability that you have calculated is for choosing a pair of parallel lines. The question, as I read it, is: you randomly choose $3$ lines; what is the probability that they contain exactly one parallel pair? I would agree with Steven that the size of the probability space is $\binom{55}{3}$. Each edge is parallel to exactly $4$ other lines (all of them diagonals), and no line is parallel to a line outside its own class, so the $55$ lines split into $11$ classes of $5$ mutually parallel lines. Thus, the answer I get is $\binom{11}{1}\binom{5}{2}\binom{50}{1}/\binom{55}{3} = 100/477$. The rationale: $\binom{11}{1}$ chooses one of the $11$ sets of $5$ parallel lines (including the edge), $\binom{5}{2}$ chooses $2$ of these $5$, and the remaining line is chosen among the $50$ lines not parallel to the chosen pair. Let me know if my reasoning is correct, Mark. Thanks, Satish
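A brute-force check of these counts (not from the original thread; it assumes the standard fact that in a regular $11$-gon the chords $\{i,j\}$ and $\{k,l\}$ are parallel exactly when $i+j\equiv k+l\pmod{11}$, so the $55$ lines split into $11$ classes of $5$):

```python
from fractions import Fraction
from itertools import combinations

n = 11
lines = list(combinations(range(n), 2))          # the 55 lines through two vertices
parallel_class = lambda e: (e[0] + e[1]) % n     # chords are parallel iff i+j agrees mod 11

total = exactly_one_pair = at_least_one_pair = 0
for triple in combinations(lines, 3):
    total += 1
    distinct = len({parallel_class(e) for e in triple})
    if distinct == 2:                            # exactly one parallel pair
        exactly_one_pair += 1
    if distinct < 3:                             # at least one parallel pair
        at_least_one_pair += 1

print(total, Fraction(exactly_one_pair, total), Fraction(at_least_one_pair, total))
# 26235, 100/477, 34/159
```

So $100/477$ is the probability of exactly one parallel pair, as computed above; reading the question as "at least one pair of parallel lines" gives $34/159$ instead.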
{ "language": "en", "url": "https://math.stackexchange.com/questions/480612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Arithmetic progressions with coprime differences Suppose we have a finite number $n \geqslant 2$ of arithmetic progressions $\{x \equiv r_1 \pmod {d_1}\}, \ldots ,\{ x \equiv r_n \pmod {d_n}\}$ such that $\gcd(d_1, \ldots, d_n) = 1.$ Is it true that some pair of them has a nonempty intersection? (I think it's true.)
Counterexample: the three congruences $x\equiv 0\pmod{6}$, $x\equiv 1\pmod{10}$, $x\equiv 2\pmod{15}$. One can produce similar counterexamples using any three distinct primes $p,q,r$ instead of $2,3,5$. Remark: I notice that fedja gave a very similar counterexample in a comment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/480683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A property of power series and the q-th roots of unity I'm trying to understand why if $ \displaystyle \sum_{n=0}^{\infty} a_{n}x^{n} = f(x) $, then $$ \sum_{n=0}^{\infty} a_{p+nq} x^{p+nq} = \frac{1}{q} \sum_{j=0}^{q-1} \omega^{-jp} f(\omega^{j} x)$$ where $\omega$ is a primitive $q$-th root of unity. I'm assuming it has something to do with the $q$-th roots of unity summing to zero. It is used in a proof of Gauss' digamma theorem. http://planetmath.org/ProofOfGaussDigammaTheorem
In the right-hand side of your expression, replace $f(\omega^j x)$ by $\sum_{k=0}^{\infty} a_k \omega^{jk}x^k$. You will get: $$\frac{1}{q}\sum_{j=0}^{q-1} \omega^{-jp}\sum_{k} a_k x^k \omega^{jk}$$ By interchanging the sums (formally at least; this has to be justified: see Jyrki Lahtonen's comment) you get: $$\sum_{k} a_k x^k \frac{1}{q}\sum_{j=0}^{q-1} \omega^{j(k-p)}$$ And then you can use the result you are referring to in order to get the left-hand side: If $k-p=nq$: $\sum_{j=0}^{q-1} \omega^{jnq} =\sum_{j=0}^{q-1} (\omega^q)^{jn}= q$ ($\omega$ is a $q$-th root of unity, I guess?) Otherwise $k-p=nq+r$ ($0< r<q$) and $$\sum_{j=0}^{q-1} \omega^{j(k-p)}=\sum_{j=0}^{q-1} \omega^{jnq+jr}=\sum_{j=0}^{q-1} \omega^{jr}=0$$
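A small numerical check of this identity (not part of the original answer; it just takes a finite list of coefficients, $q=3$, $p=1$, and a primitive $q$-th root of unity):

```python
import cmath

a = [3, 1, 4, 1, 5, 9, 2, 6]                 # arbitrary coefficients a_0, ..., a_7
f = lambda x: sum(c * x**n for n, c in enumerate(a))

q, p, x = 3, 1, 0.7
w = cmath.exp(2j * cmath.pi / q)             # primitive q-th root of unity

lhs = sum(a[n] * x**n for n in range(len(a)) if n % q == p)      # terms with n = p + kq
rhs = sum(w**(-j * p) * f(w**j * x) for j in range(q)) / q
print(lhs, rhs)                              # rhs equals lhs up to a tiny imaginary rounding error
```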
{ "language": "en", "url": "https://math.stackexchange.com/questions/480736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Calculate $ \lim_{x \to 4} \frac{3 -\sqrt{5 -x}}{1 -\sqrt{5 -x}} $ How do I evaluate the following limit? $$ \lim_{x \to 4} \frac{3 -\sqrt{5 -x}}{1 -\sqrt{5 -x}} $$ I cannot apply L'Hopital because $ f(x) = 3 -\sqrt{5 -x} \neq 0 $ at $x = 4$
Let $x=5\cos4y$ where $0\le4y\le\pi$ $x\to4\implies\cos4y\to\dfrac45$ But as $\cos4y=2\cos^22y-1, 2y\to\arccos\dfrac3{\sqrt{10}}=\arcsin\dfrac1{\sqrt{10}}$ $$F=\lim_{x\to4}\frac{3-\sqrt{5+x}}{1-\sqrt{5-x}}=\lim_{2y\to\arcsin\frac1{\sqrt{10}}}\dfrac{\dfrac3{\sqrt{10}}-\cos2y}{\dfrac1{\sqrt{10}}-\sin2y}$$ If we set $\arccos\dfrac3{\sqrt{10}}=\arcsin\dfrac1{\sqrt{10}}=2A\implies\cos2A=?,\sin2A=?,\tan2A=\dfrac13$ $$F=\lim_{2y\to2A}\dfrac{\cos2A-\cos2y}{\sin2A-\sin2y}=\lim_{y\to A}\dfrac{-2\sin(A-y)\sin(A+y)}{2\sin(A-y)\cos(A+y)}=-\tan(A+A)=?$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/480816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Applying measure zero definition to Cantor sets I just learned about the concept of measure zero in real analysis, i.e. the definition that a set in $\mathbb{R}^n$ has measure zero if for any $\epsilon>0$ it can be covered by countably many rectangles whose volumes sum to $<\epsilon$. I'm wondering if, knowing only this, I can prove that the middle-thirds Cantor set has measure zero, while the fat Cantor set doesn't have measure zero. If the answer is that I should wait until I learn more about measures in general, I will happily do that. I'm just curious whether this definition alone is enough.
Yes, it is enough. If you think about the elimination process used to create the Cantor set, after the first stage you see that the Cantor set can be covered by two intervals of length $1/3$ and a bit. After the second stage of middle-third removal, you see that the Cantor set can be covered by four intervals of length $1/9$ plus a bit, etc. So after the $n^{\rm th}$ stage you see that the Cantor set can be covered by $2^n$ intervals of length $3^{-n}$ (and a bit, but it is easier not to write it!). In other words, the outer measure of the Cantor set is less than $(2/3)^n$ for all $n$, and so it has to be zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/480877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\iint_Ex\ dx\ dy$ over $E=\lbrace(x,y)\mid 0\le x, 0\le y\le 1, 1\le x^2+y^2\le 4\rbrace$ (Answer check and curious about alternative methods) $$\iint_Ex\ dx\ dy$$ $$E=\lbrace(x,y)\mid 0\le x, 0\le y\le 1, 1\le x^2+y^2\le 4\rbrace$$ Shape of region: Entirely in the first quadrant of the xy plane, between two circles of $r=1$ and $r=2$ respectively (about the origin), and below the line $y=1$. I split this up into two regions: 1) A nice segment, $\lbrace(r,\theta)\mid r\in[1,2],\theta\in[0,\frac{\pi}{6}]\rbrace$ 2) A part I just called "Other part", which is: $\lbrace(r,\theta)\mid\theta\in[\frac{\pi}{6},\frac{\pi}{2}],1\le r\le\frac{1}{\sin(\theta)}\rbrace$ I made a small note about using two closed regions (rather than one open and one closed at $\theta=\frac{\pi}{6}$) being fine because of continuity. Method: So now $\iint_Ex\ dA=\iint_Ex\ dx\ dy=\iint_Er\cos(\theta)r\ d\theta\ dr$ = $\iint_Er^2\cos(\theta)d\theta\ dr$, which I just did over the two regions; it was nice and straightforward, and I'll provide more details if anyone thinks I'm trying to scavenge answers. The result: $$\iint_Ex \ dx\ dy=\frac{3}{2}$$ Twist: Hints: It may be helpful to use polar coordinates and to do the angular integration first, noting that the polar description of the line $y=1$ is $r\sin(\theta)=1$. When I did it I had two integrals (one for each of the regions, the segment and the other part); the segment, with its nice easy $r$ from 1 to 2, made it clear I wanted to do the $\int^\frac{\pi}{6}_0\cos(\theta)d\theta$ first, so I did, while the other integral, having bounds from 1 to $\frac{1}{\sin(\theta)}$, basically required I do the radial integration first. I worked out the $\frac{\pi}{6}$ and whatnot just by looking at the triangle created by the x-axis and the line dividing my two regions. It must have a point with $r=2$ and $y=1$, that is $\sin(\theta)=\frac{1}{2}$, thus the $\frac{\pi}{6}$. I also noticed that after $\theta=\frac{\pi}{6}$ we have $r\sin(\theta)=1$; this is in the hints though. Anyway I didn't adhere to the hints; I found this way, which (if my answer was right, which I believe it to be) seems nicer. Can anyone see a faster way, or what the hint was expecting, given that one integral required a radial evaluation first? (Unless the change of order would have been easier than I thought; I decided not to bother changing it for the sake of a hint when I had a clear path.)
Let's use Stokes' theorem: \begin{align} \iint_{E}x\,{\rm d}x\,{\rm d}y &= \iint_{E}\,{\partial\left(x^{2}/2\right) \over \partial x}\,{\rm d}x\,{\rm d}y = \iint_{E}\nabla\times\left({1 \over 2}\,x^{2}\,\hat{y}\right)\,\cdot\hat{z} \,{\rm d}x\,{\rm d}y = {1 \over 2}\oint x^{2}\,\hat{y}\cdot{\rm d}\vec{r} \\[3mm]&= {1 \over 2}\int_{\rm GA \bigcup RA} x^{2}\,{\rm d}y \end{align} ${\rm GA}$ and ${\rm RA}$ stand for ${\rm G}$reen ${\rm A}$rc and ${\rm R}$ed ${\rm A}$rc, respectively: the outer arc of radius $2$ and the inner arc of radius $1$ on the boundary of $E$. (The two straight pieces of the boundary lie on $y = 0$ and $y = 1$, where ${\rm d}y = 0$, so they contribute nothing.) Also, for an arc of radius $a$ which spans angles in $\left(0, \beta\right)$: \begin{align} \int_{0}^{\beta}{\rm x}^{2}\left(\theta\right)\, {{\rm d}{\rm y}\left(\theta\right) \over {\rm d}\theta}\,{\rm d}\theta &= \int_{0}^{\beta}a^{2}\cos^{2}\left(\theta\right) \left\lbrack a\cos\left(\theta\right)\right\rbrack\,{\rm d}\theta = a^{3}\int_{0}^{\beta}\cos^{3}\left(\theta\right)\,{\rm d}\theta \\[3mm]&= {1 \over 12}\,a^{3}\left\lbrack 9\sin\left(\beta\right) + \sin\left(3\beta\right) \right\rbrack \end{align} \begin{align} {1 \over 2}\int_{\rm GA} x^{2}\,{\rm d}y &= {1 \over 2}\,2^{3}\int_{0}^{\pi/6}\cos^{3}\left(\theta\right)\,{\rm d}\theta = 4\,{1 \over 12}\,\left\lbrack 9\sin\left({\pi \over 6}\right) + \sin\left(3\,{\pi \over 6}\right) \right\rbrack = {11 \over 6} \\[3mm] {1 \over 2}\int_{\rm RA} x^{2}\,{\rm d}y &= {1 \over 2}\,1^{3}\int^{0}_{\pi/2}\cos^{3}\left(\theta\right)\,{\rm d}\theta = {1 \over 2}\,{-1 \over 12}\,\left\lbrack 9\sin\left({\pi \over 2}\right) + \sin\left(3\,{\pi \over 2}\right) \right\rbrack = -\,{1 \over 3} \end{align} $$ \begin{array}{|c|}\hline\\ \quad\iint_{E}x\,{\rm d}x\,{\rm d}y = {11 \over 6} + \left(-\,{1 \over 3}\right) = {\large{3 \over 2}}\quad \\ \\ \hline \end{array} $$
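As an independent check (not part of the original answer), the value $3/2$ can also be confirmed with a computer algebra system using the polar splitting described in the question; the only assumption is the availability of `sympy`:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
integrand = r**2 * sp.cos(th)                      # x * r = r^2 cos(theta) in polar coordinates

# the circular sector and the piece below the line r*sin(theta) = 1
I1 = sp.integrate(integrand, (r, 1, 2), (th, 0, sp.pi/6))
I2 = sp.integrate(integrand, (r, 1, 1/sp.sin(th)), (th, sp.pi/6, sp.pi/2))
print(sp.simplify(I1), sp.simplify(I2), sp.simplify(I1 + I2))   # 7/6, 1/3, 3/2
```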
{ "language": "en", "url": "https://math.stackexchange.com/questions/480934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proof that $3 \mid \left( a^2+b^2 \right)$ iff $3 \mid \gcd \left( a,b\right)$ After a lot of messing around today I curiously observed that $a^2+b^2$ is only divisible by 3 when both $a$ and $b$ contain factors of 3. I am trying to prove it without using modular arithmetic (because that would be way too easy), but finding it very difficult to do so. Is there an easy way to prove this without using modular arithmetic? I am also interested in a more general statement. Namely, I want to find the values of $Z$ for which $Z \mid \left( a^2+b^2 \right)$ necessarily implies that $Z \mid \gcd \left( a,b\right)$. We know that $Z$ cannot be 5, because $3^2+4^2=5^2$. More generally, if $Z$ is the largest element of a pythagorean triple then the above implication does not hold.
Given that 3 is a factor of A and of B: if 3 is a factor of A then A is a multiple of 3, say A = 3k, k an integer. Then A² = 9k². Analogously for B: B² = 9m². So A² + B² = 9k² + 9m² = 9(k² + m²), and 3 is a factor of 9, while k² + m² is another integer. I think I am seriously overlooking something here? It can't be that easy...
{ "language": "en", "url": "https://math.stackexchange.com/questions/481025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Power correct notation Ok, I know this may sound dumb, but I am trying to understand which is the correct (most beautiful) notation for the power function ${\rm pow}(f(x),n)$. This is the correct one: $[f(x)]^n$ From trigonometry, where I was used to writing $\cos^2x$, we get: $f^n(x)$ And from Bishop's Pattern Recognition and Machine Learning I get $\mathbb{E}\,[f(x)^2]$, so: $f(x)^n$ So, is there any other 'me' out there who has already found the correct, beautiful and elegant way?
The most clear notation is certainly to write $$\left(f(x)\right)^n$$ This is because the notation $f^n(x)$ will frequently refer to the composition $f \circ f \circ ... \circ f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/481095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\limsup$ of indicator function of converging sets Let $\{A_n\}_{n=1}^{\infty}$ be a sequence of bounded sets of $\mathbb{R}^n$, such that $A_n \rightarrow A \subset \mathbb{R}^n$. Let $\mathbb{1}: \mathbb{R}^n \rightarrow \{0,1\}$ be the indicator function. 1) I am wondering if the following statement holds. Assume $A_n$'s compact. For all $x \in \mathbb{R}^n$, it holds that $$\limsup_{n \rightarrow \infty} \mathbb{1}_{A_n}(x) \leq \mathbb{1}_{A}(x). $$ 2) What if the $A_n$'s are open? Comments. I tried to follow this post. I am not clear whether the compactness is actually needed.
The post to which you linked and the answer to it contain the answers to your questions: if $\langle A_n:n\in\Bbb N\rangle\to A$, then $\langle 1_{A_n}(x):n\in\Bbb N\rangle\to 1_A(x)$ for each $x\in\Bbb R^n$. It follows that $$\limsup_{n\to\infty}1_{A_n}(x)=\lim_{n\to\infty}1_{A_n}(x)=1_A(x)$$ for all $x\in\Bbb R^n$. No special properties of the sets $A_n$ (like boundedness or compactness) are needed: all that’s necessary is that the sequence $\langle A_n:n\in\Bbb N\rangle$ converge to some $A\subseteq\Bbb R^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/481166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proof that $\max(x_1,x_2)$ is continuous I haven't done a proof in years and I've become a little stuck on this, I'd appreciate it if somebody could tell me if I've approached the problem correctly... Question: Prove that the following function $f(x_1; x_2) = \max[x_1; x_2]$, $x_1, x_2 \in \Bbb R$ is continuous over $\Bbb R^2$ ($\max [x_1; x_2] = x_1$ if $x_1 \geq x_2$). Use definition of continuity. Function $f\colon \mathbb{R}^2 \to \mathbb{R}$ is continuous at $x_0 \in \mathbb{R}$ if $\forall \epsilon > 0 \exists \delta >0 :\mbox{ if }|x-x_0| <\delta \implies |f(x_0) - f(x)|<\epsilon$ Proof: Consider $(x_1,x_2) \in \mathbb{R}^2$ and $x_1>x_2$. Then $\forall \epsilon > 0$ choose $\delta > 0 : ||(x_1,x_2) - (x_0,x_0)|| < \delta = \epsilon $, then; $|\max(x_1,x_2) - \max(x_0,x_0)| = |x_1 - x_0| \le ||(x_1,x_2) - (x_0,x_0)|| < \delta = \epsilon $
If $x_2>x_1$ then $\max(x_1,x_2)=x_2$ and then it is continuous in that region. Similar analysis in the region $x_1>x_2$. If $x_2=x_1$ then note that $|\max(y_1,y_2)-\max(x_1,x_2)|\le \max(|y_1-x_1|,|y_2-x_2|)$. Given $\varepsilon>0$, choose $\delta=\varepsilon$. If $||(y_1,y_2)-(x_1,x_2) ||<\delta$ then we have $$ |\max(y_1,y_2)-\max(x_1,x_2)|\le \max(|y_1-x_1|,|y_2-x_2|)\le \sqrt{|y_1-x_1|^2+|y_2-x_2|^2}<\delta =\varepsilon. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/481226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to solve a system of trigonometric equations This question appeared today in my maths olympiad paper: If $\cos x + \cos y + \cos z = \sin x + \sin y + \sin z = 0$, then prove that $\cos 2x + \cos 2y + \cos 2z = \sin 2x + \sin 2y + \sin 2z = 0$. Can anyone please help me in finding the solution to this problem? I have not gotten far with it.
Putting $a=\cos x+i\sin x$ etc, we have $a+b+c=0$ and $a^{-1}=\frac1{\cos x+i\sin x}=\cos x-i\sin x$ $\implies a^{-1}+b^{-1}+c^{-1}=0\implies ab+bc+ca=0$ $a^2+b^2+c^2=(a+b+c)^2-2(ab+bc+ca)=0$ Now, $a^2=(\cos x+i\sin x)^2=\cos^2x-\sin^2x+i2\sin x\cos x=\cos2x+i\sin2x$ which is a special case of de Moivre's formula
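A quick numerical sanity check (not part of the original answer): one family of solutions to the hypothesis consists of three angles differing by $2\pi/3$ (in fact, up to reordering, every solution of the hypothesis has this form, since three unit vectors summing to zero must be a rotated copy of the cube roots of unity).

```python
import math

x = 0.3
y = x + 2*math.pi/3
z = x + 4*math.pi/3          # cos x + cos y + cos z = sin x + sin y + sin z = 0

for f in (math.cos, math.sin):
    print(round(sum(f(t) for t in (x, y, z)), 12),     # the hypothesis
          round(sum(f(2*t) for t in (x, y, z)), 12))   # the claimed conclusion
# all printed sums are 0 up to rounding
```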
{ "language": "en", "url": "https://math.stackexchange.com/questions/481285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What can we conclude about the natural projection maps? In an arbitrary category, we have that even if $X$ and $Y$ have a product $X \times Y$, the natural projections needn't be epimorphisms. Two questions: (1) Are there (preferably simple!) conditions we can place on the category such that all the natural projections are, in fact, epimorphisms? (2) Without assuming anything about the category, is there a (preferably interesting!) weaker property that the natural projections always possess?
You might like this theorem: If $\pi_i:P \to A_i$ for $i\in I$ is a product and if $i_0 \in I$ is such that, for each $i \in I$, $\hom(A_{i_0},A_i)$ is not empty, then $\pi_{i_0}$ is a retraction. In general the $\pi_i$ form an extremal mono-source. See: Abstract and concrete categories: the joy of cats, Propositions 10.21 and 10.28.
{ "language": "en", "url": "https://math.stackexchange.com/questions/481397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is L'Hôpital's rule giving the wrong answer? Given the following limit: $$ \lim_{x \rightarrow -1} \frac{x^5+1}{x+1} $$ The solution using L'Hôpital's rule: $$ \lim_{x \rightarrow -1} \frac{x^5+1}{x+1} = \begin{pmatrix} \frac{0}{0} \end{pmatrix} \rightarrow \lim_{x \rightarrow -1} \frac{5x^4}{1} = 5 \cdot (-1)^4 = -5 $$ This is wrong. How come? EDIT: Well, a perfect example of a huge blunder.
Your application of L'Hôpital's is fine and correct. The problem is your evaluation/final conclusion... Recall: $\quad$For $k \in \mathbb N,\;$ $(-1)^n = 1\;$ for (even) $\;n = 2k\,;\;$ $(-1)^n = -1\;$ for (odd) $\;n = 2k+1.\;$ That said, we have: $$5\cdot (-1)^4 = 5\cdot 1 = 5.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/481472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Minimal polynomial Let $V$ be the vector space of square matrices of order $n$ over the field $F$. Let $A$ be a fixed square matrix of order $n$ and let $T$ be a linear operator on $V$ such that $T(B) = AB$. Show that the minimal polynomial for $T$ is the minimal polynomial for $A$. Thank you for your time.
You need to show that every polynomial that kills $A$ kills $T$. But then to show minimality, you need to show that every polynomial that fails to kill $A$ fails to kill $T$. So suppose $$f(A)=0.$$ Then $$f(T)B = \left(\sum_{k=0}^n c_k T^k \right)B = \sum_{k=0}^n c_k (T^k B).\tag1$$ Now $$ T^k(B) = T^{k-1}(T(B)) = T^{k-1}(AB) = T^{k-2} (A(AB)) = T^{k-2} (AA(B)) = \cdots. $$ In other words, show by induction on $k$ that $T^k (B) = A^k B$ and then apply $(1)$. That should suggest how to do the other part.
{ "language": "en", "url": "https://math.stackexchange.com/questions/481545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Show that a retract of a contractible space is contractible. - Is this proof legit? I am wondering if my proof is correct? Thank you very much. Show that a retract of a contractible space is contractible. Given $X$ contracts to $x \in X$, we know there is a family of maps $f_t: X \to X, t \in I$, such that $f_0 = \mathbb{I}$ (the identity map), $f_1(X) = x$, and $f_t|_x= \mathbb{I}$ for all $t$. Consider a retract of $X$ to $A$: we know there is a map $r: X \to A$ such that $r(X) = A$, $r|_A = \mathbb{I}|_A$. And now we set out to show that $A$ contracts to any $a \in A$, that is, there exists $\hat{f}$ such that $\hat{f}_t: A \to A, t \in I$, such that $\hat{f}_0 = \mathbb{I}$, $\hat{f}_1(A) = a$, and $\hat{f}_t|_a= \mathbb{I}$ for all $t$. But since $X$ retracts to $A$, that means $r$ brings any point $x \in X$ to some $a^\prime \in A$ homotopically. Therefore, we have a map from $X$ to $a^\prime$, which is the $\hat{f}$ we want when restricted to $A$. That is, $$\hat{f}_t = r \circ f_t,$$ because it satisfies all the criteria we want: $\hat{f}_0|_A = r \circ f_0|_A = r \circ \mathbb{I}|_A = \mathbb{I}|_A$, $\hat{f}_1(A) = r \circ f_1(A) = r \circ x = a^\prime$, which satisfies the condition that $\hat{f}_1(A) = a$ for any $a \in A$, and $\hat{f}_t|_{a^\prime}= \mathbb{I}$ for all $t$.
There are still some inconsistencies in your text: Consider a retract on $X$ to $A$, we know there is a map $r:X→A,$ such that $r(X)=A,$ $r(A)=A.$ This still has to be corrected. It should better say: "$r:X\to A$ such that $r|_A=\Bbb I|_A$" $\hat f_1(A)=a$ for any $a∈A$ That can be deleted. You already know that $\hat f_1(A)=\{a'\}$ and that is all you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/481610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Meaning of the Rank of a Map of Free Modules? I am reading the section on differentials in Eisenbud's book (Commutative Algebra), and I'm just wondering what he means in sentences like this one: "Suppose that $J:R^t \rightarrow R^r$ is a map of free modules over a ring $R$ whose rank is less than or equal to $c$, as for the Jacobian matrix of an ideal of codimension $c$..." (Chapter 16.7, Page 407) I'm not sure what "rank" stands for in this generality (where the image need not be free). Vanishing of minors?
Presumably it means the largest $k$ such that the induced map $\Lambda^k(J) : \Lambda^k(R^t) \to \Lambda^k(R^r)$ on exterior powers doesn't vanish. (This is a coordinate-free restatement of a condition on vanishing of minors.) At least, that would be my guess. Does the rest of the statement make sense with this interpretation?
{ "language": "en", "url": "https://math.stackexchange.com/questions/481733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fibonacci using proof by induction: $\sum_{i=1}^{n-2}F_i=F_n-2$ Hi everyone. I have been assigned an induction problem which requires me to use induction with the Fibonacci sequence. The summation states: $$\sum_{i=1}^{n-2}F_i=F_n-2\;,$$ with $F_0=F_1=1$. I have tried going through the case $n=2$ to see if it was right, but I came up with something weird: $F_2$ should be $2$, but according to this statement, it equals zero. What am I doing wrong? It got my attention. Thank you for your time.
(First an aside: the Fibonacci sequence is usually indexed so that $F_0=0$ and $F_1=1$, and your $F_0$ and $F_1$ are therefore usually $F_1$ and $F_2$.) The recurrence might be more easily understood if you substituted $m=n-2$, so that $n=m+2$, and wrote it $$\sum_{i=1}^mF_i=F_{m+2}-2\;.\tag{1}$$ Now see what happens if you substitute $m=1$: you get $F_1=F_{1+2}-2$, which is correct: $F_1=1=3-2=F_3-2$. You can try other positive values of $m$, and you’ll get equally good results. Now recall that $m=n-2$: when I set $m=1,2,3,\dots$, I’m in effect setting $n=3,4,5,\dots$ in the original formula. That’s why one commenter suggested that you might want to start $n$ at $3$. What if you try $m=0$? Then $(1)$ becomes $$\sum_{i=1}^0F_i=F_2-2=2-2=0\;,$$ but as Cameron and others have pointed out, this is exactly what should happen: $\sum_{i=a}^bx_i$ can be understood as the sum of all $x_i$ such that $a\le i\le b$, so here we have the sum of all $F_i$ such that $1\le i\le 0$. There are no such $F_i$, so the sum is by convention $0$. The induction argument itself is pretty straightforward, but by all means leave a comment if you’d like me to say something about it.
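A quick numerical check of the identity with the question's convention $F_0=F_1=1$ (not part of the original answer):

```python
def fib_list(m):                 # F_0 = F_1 = 1, the convention from the question
    F = [1, 1]
    while len(F) <= m:
        F.append(F[-1] + F[-2])
    return F

F = fib_list(12)
for n in range(2, 12):
    lhs = sum(F[1:n-1])          # sum_{i=1}^{n-2} F_i  (empty when n = 2)
    print(n, lhs, F[n] - 2, lhs == F[n] - 2)
# every row prints True, including n = 2, where the empty sum is 0 = F_2 - 2
```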
{ "language": "en", "url": "https://math.stackexchange.com/questions/481802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
For $N\unlhd G$, with $C_G(N)\subset N$ we have $G/N$ is abelian The question is: let $N\unlhd G$ be such that every subgroup of $N$ is normal in $G$ and $C_G(N)\subset N$. Prove that $G/N$ is abelian. A possible first thought (though for me it took some time :)) is to use that $C_G(N)$ is a normal subgroup (in general the centralizer is only a subgroup). One reason to see this is that $C_G(N)$ is not normal in general and $C_G(N)$ is not a subset of $N$ in general. As $C_G(N)\subset N$, we have $G/N\leq G/C_G(N)$. I somehow want to say that $G/C_G(N)$ is abelian and by that conclude that $G/N$ is abelian. I would like someone to see if my way of approach is correct/simple. I have not yet proved that $G/C_G(N)$ is abelian. I would be thankful if someone can give an idea. Thank you.
For any $n\in N$, $g\in G$, there is an integer $k_g$ (depending also on $n$) s.t. $gng^{-1}=n^{k_g}$ (as the subgroup generated by $n$ is normal). That implies that $ghn(gh)^{-1}=hgn(hg)^{-1}$ for all $g,h\in G$, $n\in N$ (indeed, $ghn(gh)^{-1}=gn^{k_h}g^{-1}=n^{k_gk_h}$ while $hgn(hg)^{-1}=n^{k_hk_g}$, and the integer exponents commute), i.e. that $G/$(the kernel of the conjugation action of $G$ on $N$) is Abelian. The kernel is $C_G(N)$, i.e. $G/C_G(N)$ is Abelian, and thus (as you observed), $G/N$ is Abelian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/481869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
definition of rectangle I am wondering whether it is fine to define a rectangle with right angles as on Wikipedia's page on rectangles: In Euclidean plane geometry, a rectangle is any quadrilateral with four right angles. It can also be defined as an equiangular quadrilateral. Because the fact that the sum of the angles of a quadrilateral is 360 degrees is a theorem, it is a little ambiguous for me to define a rectangle with a theorem and some calculation (360/4=90). And if that is the case, is it fine to say a regular polygon (except the equilateral triangle) is a polygon which has equal sides and equal inner angles?
The definition is correct. It is normal to define objects after a theorem, even if at first it seems counterintuitive. In advanced mathematics it happens all the time that a result (like a theorem) permits you to write a definition that otherwise wouldn't make sense. There is no ambiguity because: (1) quadrilaterals are defined before rectangles; (2) you can prove that for any quadrilateral, the sum of the internal angles is $360°$; (3) therefore, there exists only one type of equiangular quadrilateral: the one with four right angles. No other one! You can call this unique case "rectangle", with no ambiguity. (The only ambiguity, strictly speaking, is that there are many rectangles, of different sizes and ratios. But that is not a problem.) The definition you gave for regular polygons is also correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/481941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
To partition the unity by translating a single function I am trying to show that there exists a (real or complex-valued) function $\psi \in C^\infty(\mathbb{R}^n)$ having the following properties: (1) The support of $\psi$ is contained in the unit ball $B(0, 1)$. EDIT As Daniel Fischer points out, this condition cannot be satisfied, at least for $n>4$. The "right" condition is that $\psi$ be supported in the unit cube. (2) The sum of all integer translates of $\psi$ is $1$, that is $$\sum_{k \in \mathbb{Z}^n} \psi(x+k)=1,\qquad \forall x \in \mathbb{R}^n.$$ (3) The sum of the squares of the integer translates of $\psi$ is between two constants: $$0< c \le \sum_{k\in\mathbb{Z}^n} \lvert \psi(x+k)\rvert^2\le C,\qquad \forall x \in \mathbb{R}^n.$$ The existence of a function with those properties is assumed implicitly in the paper that I am reading ${}^{[1]}$, but this does not seem that obvious to me. Can you give me some hint, at least? ${}^{[1]}$ S. Lee, "On the pointwise convergence of solutions to the Schrödinger equation in $\mathbb{R}^2$", IMRN (2006), pp. 1-21. The function $\psi$ appears in the Appendix.
Start with a continuous partition of unity on $\mathbb{R}$ obtained by translation of a single function, say $$\psi_0(x) = \begin{cases} 1 &, \lvert x\rvert \leqslant \frac14\\ \frac32 - 2\lvert x\rvert &, \frac14 < \lvert x\rvert \leqslant \frac34\\ 0 &, \lvert x\rvert > \frac34\end{cases}$$ It is easy to check that $\sum_{k\in\mathbb{Z}} \psi_0(x+k) \equiv 1$. Now take an approximation of the identity $\varphi$ with compact support in $\left[-\frac18, \frac18\right]$ and convolve, $$\psi_1 := \psi_0 \ast \varphi.$$ The support of $\psi_1$ is contained in $\left[-\frac78,\frac78\right]$, $\psi_1$ is smooth, and $$1 \equiv \sum_{k \in \mathbb{Z}} \psi_1(x+k) = \sum_{k \in \mathbb{Z}} (\psi_0 \ast \varphi)(x+k) = \sum_{k \in \mathbb{Z}} \tau_k(\psi_0 \ast \varphi) = \sum_{k \in \mathbb{Z}} (\tau_k \psi_0)\ast\varphi = \left( \sum_{k \in \mathbb{Z}} \tau_k\psi_0\right)\ast \varphi,$$ since the supports of the $\tau_k\psi_0$ are a locally finite family. Then the function $$\psi(x_1,\, \dotsc,\, x_n) := \prod_{k=1}^n \psi_1(x_k)$$ has almost the required properties (its support is not contained in the unit ball, but in the cube). However, since, as the edit to the question notes, the cube is the "right" support condition anyway, let's take that.
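A small numerical check (not part of the original answer) that the piecewise-linear $\psi_0$ above really is a partition of unity under integer translation; it only assumes `numpy`:

```python
import numpy as np

def psi0(x):
    x = np.abs(x)
    return np.where(x <= 0.25, 1.0, np.where(x <= 0.75, 1.5 - 2*x, 0.0))

xs = np.linspace(-2, 2, 10001)
total = sum(psi0(xs + k) for k in range(-3, 4))   # supp(psi0) in [-3/4, 3/4], so |k| <= 3 suffices
print(float(np.max(np.abs(total - 1))))           # 0.0 (up to floating-point error)
```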
{ "language": "en", "url": "https://math.stackexchange.com/questions/481995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cumulative distribution function, integration problem Given the continuous probability density function $f(x)=\begin{cases} 2x-4, & 2\le x\le3 \\ 0 ,& \text{else}\end{cases}$ find the cumulative distribution function $F(x)$. The formula is $F(x)=\int_{-\infty}^{x} f(u)\,du$. My Solution: The first case is when $2\le x\le3$; then $$\int_2^x {(2u-4)} \, du=[u^2-4u]_2^x=x^2-4x-4+8=x^2-4x+4$$ so $F(x)=x^2-4x+4, \text{ for } 2\le x\le 3$. The Problem: Now I have the cases where $x<2 \text{ and } x>3$ ($\int_{-\infty}^{x}0 \, du$), but I am not sure how to do them. I would appreciate it if someone could show me the solution for these two cases.
When $x<2$, it is $\int_{-\infty}^{x}0\,du=0$. When $x>3$, it is $F(x)=\int_2^3(2u-4)\,du=1$, since all of the probability mass lies in $[2,3]$. You should look into the properties of $F(x)$ (it is non-decreasing, with limits $0$ at $-\infty$ and $1$ at $+\infty$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/482061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A problem on a triangle's inradius and circumradius. I'm trying to solve the following problem: In $△ABC$, $AB = AC$, $BC = 48$ and the inradius $r = 12$. Find the circumradius $R$. Here is a figure that I drew (note: it was not given in the question, so there may be some mistakes). I don't know how to solve it; am I missing a relation between the inradius, circumradius and sides of an isosceles triangle? EDIT: Is there a simple solution without using trigonometry?
Let $M$ be the midpoint of $BC$, let $O$ be the incenter, let $P$ be the point where the perpendicular from $O$ meets the side $AB$, and let $|PA|=:x$. Since the two tangent segments from $B$ to the incircle have equal length, it follows that $|PB|=24$; therefore $|AB|=24+x$, and $|AO|^2= 12^2+x^2$. It follows that $$(24+x)^2=24^2+\bigl(12+\sqrt{12^2+x^2}\bigr)^2\ .$$ Solving for $x$ gives $x=16$, whence $|AB|=40$, $|AO|=20$, $|AM|=32$. Now let $K$ be the circumcenter (which lies on $AM$ by symmetry), and let $|MK|=:y$. Then $\sqrt{24^2+y^2}=32-y$, which enforces $y=7$. It follows that $R=32-7=25$.
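A quick arithmetic check (not part of the original answer) that the triangle found above, with $AB=AC=40$ and $BC=48$, indeed has inradius $12$ and circumradius $25$, using Heron's formula and the standard identities $r=K/s$ and $R=abc/(4K)$:

```python
import math

a, b, c = 48, 40, 40                             # BC, CA, AB
s = (a + b + c) / 2                              # semiperimeter
K = math.sqrt(s * (s - a) * (s - b) * (s - c))   # area by Heron's formula
print(K / s, a * b * c / (4 * K))                # 12.0 and 25.0
```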
{ "language": "en", "url": "https://math.stackexchange.com/questions/482120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Getting the $x$-intercept of $f(x) = -16x^2 + 80x + 5$ $$f(x) = -16x^2 + 80x + 5$$ I need to find the bigger value of $x$ that makes $f(x) = 0$. Naturally, I thought to do: $$0=-16x^2+80x+5$$ and I applied the quadratic formula $$0=\frac{-80\pm\sqrt{6080}}{-32}$$ but the answer doesn't seem like it would be correct. Did I do something wrong?
First of all, please remember that the $x$-intercept is where the graph $y = \operatorname{f}(x)$ meets the $x$-axis. If you're not plotting a graph then it doesn't make sense to talk about $x$- and $y$-intercepts. You're looking for the solutions to the equation $\operatorname{f}(x)=0$. If $\operatorname{f}(x)=-16x^2+80x+5$ then you need to solve $-16x^2+80x+5=0$. The quadratic formula can be used where $a=-16$, $b=80$ and $c=5$. We have: $$\begin{array}{rcl} x &=& \frac{-b\pm\sqrt{b^2-4ac}}{2a} \\ \\ &=& \frac{-80 \pm \sqrt{(80)^2-4(-16)(5)}}{2(-16)} \\ \\ &=& \frac{-80\pm\sqrt{6720}}{-32} \\ \\ &=& \tfrac{5}{2} \pm \tfrac{1}{4}\sqrt{105} \end{array}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/482291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Number of divisors of $9!$ which are of the form $3m+2$ Total number of divisors of $9!$ which are of the form $3m+2$, where $m\in \mathbb{N}$ My Try: Let $ N = 9! = 1\times 2 \times 3 \times 2^2 \times 5 \times 2 \times 3 \times 7 \times 2^3 \times 3^2 = 2^7 \times 3^4 \times 5 \times 7$ Now a divisor of the form $3m+2$ leaves a remainder of $2$ when divided by $3$, but I do not understand how to proceed further; thanks in advance.
Hint: If the factor is of the form $3m+2$, then the prime factorization must be of the form $$ 2^a \times 3^0 \times 5^b \times 7^c, $$ where $a+b \equiv 1 \pmod{2} $ and $ c= 0$ or $1$. Count the number of possibilities.
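A brute-force confirmation of the count this hint leads to (not part of the original answer); note that the total is $16$ if $m=0$ (i.e. the divisor $2$) is allowed, and $15$ if $m$ must be at least $1$:

```python
import math

N = math.factorial(9)
divisors = [d for d in range(1, N + 1) if N % d == 0]
of_form = [d for d in divisors if d % 3 == 2]      # d = 3m + 2
print(len(of_form), of_form[:8])
# 16 divisors in total; the smallest are 2, 5, 8, 14, 20, 32, 35, 56
```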
{ "language": "en", "url": "https://math.stackexchange.com/questions/482373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Easy way to find the streamlines In a textbook, this problem appears: Find the streamlines of the vector field $\mathbf{F}=(x^2+y^2)^{-1}(-y\hat{x}+x\hat{y})$. The system we need to solve, I suppose, is: $\dfrac{dx}{d\tau}=\dfrac{-y}{x^2+y^2}$ $\dfrac{dy}{d\tau}=\dfrac{x}{x^2+y^2}$ This is a text which just introduced the concept streamlines. It's not about differential equations. But I cannot find a simple way to tell what the streamlines are. The answer is "horizontal circles with the center on the $z$ axis." I visualized this using Mathematica to confirm the answer, I also solved the system using Mathematica but the answer was very complex. I don't see how I could find that solution by hand. Is there some trick I can use to solve this easily?
Hint: divide the two equations side by side (for example the second by the first), obtaining$$\frac {dy}{dx}=-\frac x{y}$$The solutions are$$x^2+y^2=c \quad (c>0)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/482450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Given mean and standard deviation, find the probability Let's say that you know the mean and the standard deviation of a regularly distributed dataset. How do you find the probability that a random sample of n datapoints results in a sample mean less than some x? Example: Let's say the population mean is 12, and the standard deviation is 4; what is the probability that a random sample of 40 datapoints results in a sample mean less than ten? Yes, this is a homework problem, but I changed the numbers. Go ahead and change them again if you like - I just want to know how to do these kinds of problems. The professor is... less than helpful.
If you mean "normally distributed", then the distribution of the sample mean is normal with the same expected value as the population mean, namely $12$, and with standard deviation equal to the standard deviation of the population divided by $\sqrt{40}$. Thus it is $4/\sqrt{40}\approx0.6324555\ldots$. The number $10$ deviates from the expected value by $10-12=-2$. If you divide that by the standard deviation of the sample mean, you get $-2/0.6324555\ldots\approx-3.1622\ldots$. That means you're looking at a number about $3.1622$ standard deviations below the mean. You should have a table giving the probabilty of being below number that's a specified number of standard deviations above or below the mean. If you don't mean normally distributed, then the sample size of $40$ tells us that if the distribution is not too skewed, the distribution of the sample mean will be nearly normally distributed even if the population is not. The expected value and standard deviation of the sample mean stated above do not depend on whether the population is normally distributed nor even on whether it's highly skewed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/482530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
New & interesting uses of Differential equations for undergraduates? I'm teaching an elementary DE's module to some engineering students. Now, every book out there, and every set of online notes, trots out two things: (1) DE's are super-important, vital, can't live without 'em, applications in every possible branch of applied mathematics & the sciences etc etc; (2) Applications: population growth (exponential & logistic), cooling, mixing problems, occasionally a circuit problem or a springs problem. Oh - and orthogonal trajectories, so that you can justify teaching non-linear exact equations. I can't believe that these same applications are still all that educators use for examples. Surely there must be some interesting, new applications, which can be explained at (or simplified to) an elementary level? Interestingly, most of these "applications" are separable. Where are the linear non-separable equations; the linear systems? I've been searching online for some time now, and remarkably enough there's very little out there. So either educators are completely stuck for good examples, or all the modern uses are simply too difficult and abstruse to be simplified down to beginners' level. However - if there are any interesting new & modern uses of DE's, explainable at an elementary level, I'd love to know about them.
You may find it interesting that ODE theory has become quite involved in the study of avalanches. See here, here and here for example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/482659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 4 }
"figure 8" space embedded in $S^2$ Let $M^3$ be the 3-manifold defined as the quotient space of $I \times S^2$ by the identification $\{0\} \times \{x\} \sim \{1\} \times \{Tx\}$, where $T : S^2 \to S^2$ is a reflection through a plane in $\mathbb{R}^3$. Find $\pi_1(M)$ and $\pi_2(M)$.
The universal cover is $\tilde M=\mathbb R\times S^2$: your manifold is $(\mathbb R\times S^2)/\mathbb Z$, with $n\in\mathbb Z$ acting by $n\cdot (t,x)=(t+n,T^nx)$. As the result, $\pi_1(M)=\mathbb Z$, $\pi_2(M)=\pi_2(\tilde M)=\mathbb Z$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/482795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to calc $\log1+\log2+\log3+\log4+...+\log N$? How to calculate $\log1+\log2+\log3+\log4+...+\log N= \log(N!)$? Someone told me that it's equal to $N\log N,$ but I have no idea why.
A small $caveat$: "someone" is wrong: try, for example, with $N=2$; then $$2\log 2=\log 2^2,$$ while $$\log1+\log 2=\log 2!=\log 2.$$ For more details on the relationship between the 2 logarithms, I refer to the comments under the OP.
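For what it's worth (not part of the original answer), a quick numerical comparison makes the distinction concrete: the sum of logs equals $\log(N!)$, which is noticeably smaller than $N\log N$ (Stirling's formula gives $\log N!\approx N\log N-N$):

```python
import math

N = 100
s = sum(math.log(k) for k in range(1, N + 1))   # log 1 + log 2 + ... + log N
print(s)                                        # 363.739...
print(math.lgamma(N + 1))                       # log(N!) computed directly: the same number
print(N * math.log(N))                          # 460.517...  (N log N; Stirling: log N! ~ N log N - N)
```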
{ "language": "en", "url": "https://math.stackexchange.com/questions/482860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Prove $m^*(A) = m^*(A + t)$ Define $m^*(A) = \inf Z_A$ as the outer measure of $A \subseteq \mathbb{R}$ where $$Z_A = \left\{\sum_{n=1}^{\infty}|I_n| : I_n \text{ are intervals}, A \subseteq \bigcup_{n=1}^{\infty}I_n\right\} $$ We want to show $m^*(A) = m^*(A + t)$. It will suffice to show $Z_A = Z_{A + t}$. So, pick an element $z \in Z_A$. Therefore, we have a covering $\{I_n\}$ of $A$ such that $z = \sum |I_n |$. Let $I_n = [a_n, b_n]$. then $|I_n| = b_n - a_n = (b_n + t) - (a_n + t) = |I_n + t|$, and also since $\{I_n + t\}$ cover $A +t$, then $z \in Z_{A + t}$. Now, pick $z \in Z_{A + t}$. So, we have a covering $\{J_n\}$ of $A + t$ such that $z = \sum |I_n + t |$. But, notice $|I_n + t| = |I_n|$ by the above computation. Also, we have that $\{J_n - t \}$ cover $A$. Therefore, $z \in Z_A$ Is this proof correct? is this rigorous enough to conclude that since $(I_n)$ cover $A$, then $(I_n + t)$ must cover $A + t$ and reverse? Thanks for the feedback.
You may want to explain why it suffices to show $Z_A = Z_{A+t}$. The argument for $Z_A \subseteq Z_{A+t}$ looks good (you may want to use the subset notation to make what you're doing clearer, but I don't think it's necessary). The argument for $Z_{A+t} \subseteq Z_A$ uses both $J_n$ and $I_n$; I think you should just be referring to $J_n$, or if you prefer, explicitly stating how the collections $\{J_n\}$ and $\{I_n\}$ are related. After a little bit of tidying up, it looks like it will be a good proof. Note, there is an easier way to show $Z_{A + t} \subseteq Z_A$. You've shown $Z_A \subseteq Z_{A + t}$ for any real number $t$. Replace $t$ by $s$ so that $Z_A \subseteq Z_{A+s}$. Now replacing $A$ by $A + t$ we have $Z_{A+t} \subseteq Z_{A+t+s}$. Choosing $s = -t$ we have the desired result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/482932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Numbers divisible by the square of their largest prime factor Let $p(n)$ be the greatest prime factor of $n$, and denote $A=\{n\mid p^2(n)\mid n,n\in \mathbb N\}.$ $A=\{4,8,9,16,18,25,27,32,36,49,50,\cdots\},$ see also A070003. Define $f(x)=\sum_{\substack{n\leq x\\n\in A}}1.$ Erdős proved that $$f(x)=x \cdot e^{-(1 + o(1))\sqrt{\log x \log \log x}}.\tag1$$ Hence $f(x)=o(x),$ i.e. $$\lim_{x\to \infty}\frac{f(x)}x=0.\tag2$$ Since $(2)$ is easier than $(1)$, can you prove $(2)$ without using $(1)$?
What follows is a self-contained proof that $$f(x)=\sum_{\begin{array}{c} n\leq x\\ n\in A \end{array}}1 \ll x e^{-c\sqrt{\log x}}.$$ If you work more carefully with the friable integers, you can recover Erdős' result. Proof. We may write $$\sum_{\begin{array}{c} n\leq x\\ n\in A \end{array}}1=\sum_{\begin{array}{c} np^{2}\leq x\\ n\in S(p) \end{array}}1 = \sum_{p\leq\sqrt{x}}\sum_{\begin{array}{c} n\leq x/p^{2}\\ n\in S(p) \end{array}}1$$ where $S(y)=\left\{ n:\ P(n)\leq y\right\}$ is the set of $y$-friable integers. Notice that $$ \sum_{p\leq\sqrt{x}}\sum_{\begin{array}{c} n\leq x/p^{2}\\ n\in S(p) \end{array}}1 \leq x\sum_{B\leq p}\frac{1}{p^{2}}+\sum_{\begin{array}{c} n\leq x\\ P(n)\leq B \end{array}}1.\ \ \ \ \ \ \ \ \ \ (1)$$ The first term is $\leq\frac{x}{B}$, and the second term may be bounded using Rankin's Trick, which we will now use. Let $\sigma>0$, and notice that $$\sum_{\begin{array}{c} n\leq x\\ P(n)\leq B \end{array}}1\leq\sum_{\begin{array}{c} n=1\\ P(n)\leq B \end{array}}^{\infty}\left(\frac{x}{n}\right)^{\sigma}=x^{\sigma}\prod_{p\leq B}\left(1-\frac{1}{p^{\sigma}}\right)^{-1}.$$ For $\sigma=1-\frac{1}{\log B},$ the above is $$\ll xe^{-\frac{\log x}{\log B}}\exp\left(e\sum_{p\leq B}\frac{1}{p}\right)\ll xe^{-\frac{\log x}{\log B}}\left(\log B\right)^{e}.$$ Setting $B=e^{\sqrt{\log x}},$ it follows that $$\sum_{\begin{array}{c} n\leq x\\ n\in A \end{array}}1\ll xe^{-\sqrt{\log x}},$$ and the result is proven. Remark: Erdős' result may be recovered in the following manner: First, to obtain the extra $\sqrt{\log \log x}$ in the upper bound, you need a bound of the form $$\Psi(x,y)=\sum_{\begin{array}{c} n\leq x\\ P(n)\leq y \end{array}}1 \ll x u^{-(1+o(1))u}$$ where $u=\frac{\log x}{\log y},$ rather than Rankin's trick. Now, the asymptotic can be obtained by bounding the second term in equation $(1)$, and noticing that the first term will provide us with our main term.
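An empirical illustration (not part of the question or of the proof above): enumerating $A$ with a largest-prime-factor sieve reproduces the initial terms listed in the question and shows the density $f(x)/x$ from statement $(2)$ drifting towards $0$:

```python
def largest_prime_factor_sieve(N):
    lpf = list(range(N + 1))           # lpf[n] will end up equal to the largest prime factor of n
    for p in range(2, N + 1):
        if lpf[p] == p:                # p is prime
            for m in range(p, N + 1, p):
                lpf[m] = p             # later (larger) primes dividing m overwrite earlier ones
    return lpf

N = 100000
P = largest_prime_factor_sieve(N)
in_A = [n for n in range(2, N + 1) if n % (P[n] * P[n]) == 0]
print(in_A[:11])                       # [4, 8, 9, 16, 18, 25, 27, 32, 36, 49, 50]
for x in (10**3, 10**4, 10**5):
    print(x, sum(1 for n in in_A if n <= x) / x)   # f(x)/x, the density in statement (2)
```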
{ "language": "en", "url": "https://math.stackexchange.com/questions/483027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Differential equations: solving separable equation Solve the separable equation $y' = (x-8)e^{-2y}$ satisfying the initial condition $y(8)=\ln(8)$. I cannot figure this out; I am not sure what I am doing wrong.
We have $$\frac{dy}{dx}=(x-8)e^{-2y}\implies e^{2y}dy=(x-8)dx$$ Integrating both sides, $$\frac{e^{2y}}2=\frac{x^2}2-8x+C\implies e^{2y}=x^2-16x+2C$$ where $C$ is an arbitrary constant. Putting $x=8, y=\ln8$: $e^{2\ln 8}=8^2-16\cdot8+2C$, so $2C-64=(e^{\ln 8})^2=8^2\implies 2C=64+64=128$. Hence $e^{2y}=x^2-16x+128$, i.e. $y=\frac12\ln\left(x^2-16x+128\right)$.
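A quick numerical check (not part of the original answer) that the resulting function satisfies both the ODE and the initial condition:

```python
import math

def y(x):
    return 0.5 * math.log(x**2 - 16*x + 128)   # from e^{2y} = x^2 - 16x + 128

def rhs(x):
    return (x - 8) * math.exp(-2 * y(x))       # right-hand side of y' = (x - 8) e^{-2y}

h = 1e-6
for x in (3.0, 8.0, 15.0):
    dydx = (y(x + h) - y(x - h)) / (2 * h)     # central-difference approximation of y'
    print(x, round(dydx, 6), round(rhs(x), 6)) # the two columns agree
print(y(8.0), math.log(8))                     # both 2.0794..., so y(8) = ln 8
```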
{ "language": "en", "url": "https://math.stackexchange.com/questions/483171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that roots are real I am stuck with this equation; I need to prove that the roots are real when $a, b, c \in \mathbb R$. The equation is $$(a+b-c)x^2+ 2(a+b)x + (a+b+c) = 0$$ If someone could tell me the right way to go about this, I could then attempt it. Thank you. EDIT: I have made an error in the question. I have now corrected it.
We look at the discriminant of the polynomial, which for a quadratic $ax^2 +bx +c$ is $b^2 -4ac$; plugging in the values for our polynomial gives $$\Delta = 4(a+b)^2-4(a+b-c)(a+b+c)\\ = 4[(a+b)^2 - (a+b)^2+c^2]\\ = 4c^2$$ Since the square of a real number is nonnegative, we know that the roots must be real, by looking at the quadratic formula and seeing that the solutions are $$\frac{-b\pm\sqrt\Delta}{2a}$$ and the square root of a nonnegative real number is real. We used the discriminant because it makes the computation much easier than carrying everything from the first step underneath the radical, which would be rather ugly. Inspection shows that if $\Delta > 0$, there are two distinct real roots; if $\Delta < 0$, there are two complex roots, which are conjugate; and if $\Delta = 0$ then you have a real double root.
{ "language": "en", "url": "https://math.stackexchange.com/questions/483238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
How to prove $n ≡ n_0 + n_1 + \dots +n_k \pmod{b-1}$ I am trying to prove this statement, where $n$ has a base-$b$ representation, which can be understood easily using this example: in base $10$, the mod $9$ value of any number can be found by adding up its digits and taking the mod $9$ of that sum. It's been a while since I've done proofs, and I'm just not sure where to start here. I know that, for example, in base $10$: $$10 \bmod 9 = 1\quad \text{or}\quad b \bmod (b-1) = 1.$$ I believe I can substitute that in somehow, but I'm not sure how to start. Thanks for your help.
Note that $n = \sum_{i=0}^kn_ib^i$ and $b \equiv 1 \operatorname{mod} (b - 1)$. Now use the fact that if $a \equiv b \operatorname{mod} m$ and $c \equiv d \operatorname{mod} m$, then $a + c \equiv b + d \operatorname{mod} m$ and $ac \equiv bd \operatorname{mod} m$; in particular, $a^n \equiv b^n \operatorname{mod} m$.
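A short sanity check of this congruence for a few bases (not part of the original answer):

```python
def digits(n, b):
    ds = []
    while n:
        n, r = divmod(n, b)
        ds.append(r)
    return ds                      # base-b digits n_0, n_1, ..., n_k

for b in (2, 8, 10, 16):
    for n in (9, 81, 1234, 987654321):
        assert n % (b - 1) == sum(digits(n, b)) % (b - 1)
print("all cases check out")
```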
{ "language": "en", "url": "https://math.stackexchange.com/questions/483386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Derivative $\Delta x$ and $dx$ difference This may seem like a dummy question but I need to ask it. Consider the definition of the derivative: $$\frac{d}{dx}F(x) = \lim_{\Delta x\to 0}\frac{F(x+\Delta x) - F(x)}{\Delta x} = f(x)$$ Also: $$f(x)\Delta x = F(x+\Delta x) - F(x) \tag{When $\Delta x$ gets closer to $0$}$$ I can also say that: $$\frac{d}{dx}F(x) = f(x)$$ So: $$dF(x) = f(x)dx$$ but $dF(x)$ can also be seen as $F(x+\Delta x) - F(x) \tag{When $\Delta x$ gets closer to $0$}$ So should $dx$ be considered $\Delta x \tag{When $\Delta x$ gets closer to $0$}$? I think this is wrong because it's the same as saying $\lim_{\Delta x \to 0} \Delta x =dx$ when in truth $\lim_{\Delta x \to 0} \Delta x =0$. Or maybe $\Delta x$ already means a change in $x$, so the limit of this change, approaching infinity, is going to be $dx$. In this case, no problem, but what about cases where people use $h$ instead of $\Delta x$? I think I'm confusing it a lot. Sorry...
Your question is very good. There's something called the "non-standard" numbers. Trying to define them, we would have the set $$\{ \alpha \text{ such that } 0 < \alpha < x \text{ for every real } x > 0\}$$ What happens is that $\mathrm{d}x$ is in that set, while $\Delta x$ isn't. For instance, let's differentiate $y = f(x) = x^2$; think of $\mathrm{d}x$ as an infinitesimal disturbance in $x$ that causes another infinitesimal disturbance $\mathrm{d}y$ in $y$, that is: $$y + \mathrm{d}y = (x+ \mathrm{d}x)^2 = x^2 + 2x \mathrm{d}x + {\mathrm{d}x}^2 \\ \mathrm{d}y = 2x \mathrm{d}x + {\mathrm{d}x}^2 \\ \frac{\mathrm{d}y}{\mathrm{d}x} = 2x + \mathrm{d}x $$ Then, you would ask: but isn't the derivative of $x^2$, $2x$? The fact that the differentials are smaller than any real number would justify neglecting the remaining $\mathrm{d}x$; we would take the standard part of the derivative we just calculated. $$\frac{\mathrm{d}y}{\mathrm{d}x} = \operatorname{std}(2x + \mathrm{d}x) = 2x$$ In the same way we neglected the $\mathrm{d}x$ here, we would do the same to higher order differentials, like $\mathrm{d}x \mathrm{d}y$ (products), or ${\mathrm{d}x}^2$ (powers). I suggest you try to differentiate $x^3$ to feel this, and I hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/483451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If $x$ is real and $x + \frac1x$ is rational, show by strong induction that $x^n + \frac{1}{x^n}$ is rational for all $n$. Suppose that $x$ is a real number such that $x + \frac{1}{x}$ is rational. Using strong induction, show that for each natural number $n$, $A_n = x^n + \frac{1}{x^n}$ is rational. How do I start? I was given the hint of considering the product of $A_1$ and $A_n$ but have no idea how to apply it. Thanks in advance!
If you need to use strong induction: using the hint, $$ A_1 A_n = x^{n+1}+\frac{1}{x^{n+1}} +x^{n-1}+\frac{1}{x^{n-1}} = A_{n+1} + A_{n-1}. $$ You want to rearrange this into an expression for $A_{n+1}$; the answer should then follow from strong induction.
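The recurrence hidden in this hint, $A_{n+1}=A_1A_n-A_{n-1}$, can be checked numerically (not part of the original answer); for a concrete irrational $x$ with $x+\frac1x=\frac72$, exact rational arithmetic via the recurrence matches the floating-point values of $x^n+x^{-n}$:

```python
from fractions import Fraction
import math

A1 = Fraction(7, 2)                        # x + 1/x = 7/2, so x = (7 + sqrt(33))/4 is irrational
x = (7 + math.sqrt(33)) / 4

A = [Fraction(2), A1]                      # A_0 = 2, A_1 = x + 1/x
for n in range(1, 9):
    A.append(A1 * A[n] - A[n - 1])         # A_{n+1} = A_1 * A_n - A_{n-1}

for n, An in enumerate(A):
    print(n, An, round(x**n + x**(-n), 8)) # rational value vs. direct numerical evaluation
```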
{ "language": "en", "url": "https://math.stackexchange.com/questions/483532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $\theta$ is an angle with $\cos(2\theta)$ irrational then $\cos \theta$ is also irrational Prove that if $\theta$ is an angle with $\cos(2\theta)$ irrational then $\cos \theta$ is also irrational. (hint: recall that $\cos(2\theta)=2\cos^2(\theta)-1$ )
If $x$ is some rational number, what can you say about $2x^2 - 1$? Using this reasoning, suppose $\cos \theta$ were rational (even though it isn't). Then what would you know about $\cos 2\theta$? Why would this be a problem? Conclude that since $\cos \theta$ being rational results in a problem, $\cos \theta$ must be irrational.
{ "language": "en", "url": "https://math.stackexchange.com/questions/483581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If $A \subseteq B$, is it true that $C \cap A \subseteq C \cap B$? If $A \subseteq B \implies C \cap A \subseteq C \cap B$ ? Let $x \in C \cap A \implies x \in C$ and $x \in A$ $\implies$ $x \in C$ and $x \in B$ $\implies x \in C \cap B$ is this valid?
Not quite, since your definition of intersection isn't correct. If $x \in C \cap A$, then $x \in C$ and $x \in A$. If $x \in A$, then $x \in B$, so $x \in C$ and $x \in B$, so $x \in C \cap B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/483644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is $D^{n+1}/S^{n} = S^{n+1}$ true? I went to my first lecture in Algebraic Topology and managed to get really confused. It seems like they assumed that the following statement was "obvious": $D^{n+1}/S^{n} = S^{n+1}$ Where $D^{n}$ is the unit disk/ball in $\mathbb{R}^{n}$ and $S^{n}$ is the unit sphere in $\mathbb{R}^{n+1}$. The notation $X/A$ is, as far as I could understand, a way of denoting that $A\subseteq X$ collapses to one point. So we divide $X$ into equivalence classes where the points in $A$ are in the same equivalence class and the points of $X-A$ are each in their "own" equivalence class. My questions: (1) Why is this statement true? (2) Does $=$ indicate that there exists a homeomorphism between the two spaces?
Here's one way to set up a concrete homeomorphism. The sphere $S^{n+1}$ is homeomorphic to the $1$-point compactification of $\mathbb{R}^{n+1}$. This is witnessed by the stereographic projection maps. Take the disc $D^{n+1} \subset \mathbb{R}^{n+1}$, map the inner open disc to $\mathbb{R}^{n+1}$ (you have probably seen this done in a point-set topology class), and map the boundary of $D^{n+1}$ (which is $S^{n}$ by definition) to the point at infinity. This is not injective, but if you define the same map on $D^{n+1} / S^{n}$, this map is bijective and continuous (not too hard to see, recall which sets are open in the compactified $\mathbb{R}^{n+1}$), and goes from a compact space to a Hausdorff space, hence is a homeomorphism (another standard point-set result). Another idea: It is relatively easy to show that the sphere is two discs glued at their boundaries. Let $X_1$ be an inner disc of half the radius of $D^{n+1}$ inside $D^{n+1}/S^{n}$. Let $X_2 = D^{n+1}/S^{n} - X_{1}$. We know that $X_1$ is a disc, and $X_2$ looks like an annular region with collapsed boundary. Define a map from $X_2$ to a disc in the following way. Map rings on the inside of $X_2$ to rings on the outside of a disc, where by rings I mean copies of $S^n$ defined as level sets of the Euclidean norm. Do this inside-out, so that rings on the outside of $X_2$ get mapped to smaller rings in the interior of a disc. Finally, send the outer, collapsed boundary of $X_2$ to the center of the disc. This is clearly a bijection. It is not too hard to show it is continuous. It is a map from a compact space to a Hausdorff space, hence a homeomorphism, so $X_2$ is actually a disc. Thus, $D^{n+1}/S^n$ was just two discs glued together at their boundaries, hence is homeomorphic to $S^{n+1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/483763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
positive characteristic and multiple roots I can't understand a proof in Milne, Proposition 2.12 on page 29. In particular, I can't prove the implication $c)\Rightarrow d)$ where: c) $F$ has characteristic $p\neq 0$ and $f$ is a polynomial in $X^p$; d) all the roots of $f$ are multiple. Suppose $f(X)=g(X^p)$ and $g(X)=\displaystyle\prod_i(X-a_i)^{m_i}$ in some extension $K$ of $F$. Then $$f(X)=g(X^p)=\displaystyle\prod_i(X^p-a_i)^{m_i}=\displaystyle\prod_i(X-\alpha_i)^{pm_i}$$ where $\alpha_i^{p}=a_i$. Well, my question is: who assures me that such an $\alpha_i$ exists? This is the case when $F$ is finite, so that $F=F^p$, but I don't have this hypothesis!
If they aren't already in $K$ (as when $F$, and hence $K$, are finite), the $\alpha_i$ lie in some extension $L$ of $K$. For example, pick $F = \Bbb F_p(X^p)$. The polynomial $Y^p - X^p$ in $F[Y]$ has $p$ repeated roots in the extension $L = \Bbb F_p(X)$ of $F$, since $Y^p - X^p = (Y-X)^p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/483828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Assuming there exist infinite prime twins does $\prod_i (1+\frac{1}{p_i})$ diverge? Assume there are an infinite amount of prime twins. Let $p_i$ be the smallest of the $i$ th prime twin. Does that imply that $\prod_i (1+\frac{1}{p_i})$ diverges ?
Based on André Nicolas' hint I realized: $$\prod_{i=1}^k \left(1+\dfrac{1}{p_i}\right) \le \exp\left(\sum_{i=1}^k \dfrac{1}{p_i}\right),$$ since $1+t \le e^t$ for all $t \ge 0$. By Brun's theorem the sum $\sum_i \dfrac{1}{p_i}$ over the twin primes converges, so the partial products are increasing and bounded, and therefore the product converges. Q.E.D. mick
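To see the two quantities side by side, here is a small illustration (my own script, not part of the original answer; it assumes sympy is available for the primality tests). It tabulates the partial sum of $1/p_i$ and the partial product of $1+1/p_i$ over the twin primes below $10^5$; both stay small, as Brun's theorem predicts.

```python
# Partial sum of 1/p and partial product of (1 + 1/p) over the smaller member
# p of each twin-prime pair with p < 10**5.
from sympy import isprime, primerange

s, prod = 0.0, 1.0
for p in primerange(3, 100_000):
    if isprime(p + 2):          # p is the smaller member of a twin-prime pair
        s += 1.0 / p
        prod *= 1.0 + 1.0 / p

print("partial sum of 1/p:      ", s)
print("partial product of 1+1/p:", prod)
```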
{ "language": "en", "url": "https://math.stackexchange.com/questions/483880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
understanding IVT vs MVT I know that if the MVT was applied to physics then it would roughly translate to saying that the average velocity = instantaneous velocity. But suppose that my average velocity on $[0,T]$ was $10$. Then $\frac{f(T)-f(0)}{T-0}=10.$ Assuming that $\int f' \,dt = f$, then $\frac{f(T)-f(0)}{T-0} = \frac{\int^{T}_{0} f' \,dt}{T}=10.$ In other words, the average value of $f'(t)$ is $10$ on $[0,T]$. Intuitively, since $f'(t)$ is continuous, if we interpret it as a continuous velocity function, then $f'(t)$ has to take values greater than 10 and less than 10. But then I can apply the IVT to say that $\exists \, c$ s.t. $f'(c)=10$, i.e. instantaneous velocity at $c$ is 10 or the average velocity... This confuses me because it appears that the MVT and IVT can be both applied to solve this. any thoughts?
Hopefully this isn't awkward, but following anorton's suggestion I've copied my comment to an answer: That is correct. If $f′(t)$ is continuous, then you can apply the IVT as you do. Lots of books don't require this hypothesis for the MVT, though. All you need for the MVT is $f$ continuous on $[0,T]$ and differentiable on $(0,T)$. It is possible to apply the MVT even if $f′(t)$ is not continuous on $(0,T)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/483969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving that the limit of a sequence is $> 0$ Let $u$ be the complex sequence defined as follows : $u_0=i$ and $ \forall n \in \mathbb N, u_{n+1}=u_n + \frac {n+1-u_n}{|n+1-u_n|} $ . Consider $w_n$ defined by $\forall n \in \mathbb N,w_n=|u_n-n|$ . I have to prove that $w_n$ has a limit $> 0$. Here is what I've proved so far: * *The sequence $Im(u_n)$ is decreasing and bounded by $0$ and $1$ hence convergent with a limit $\in [0;1]$ *$w_n$ is decreasing and bounded by $0$, hence convergent with limit $l \geq 0$ So all I need to prove now is that $l \neq 0$ I tried a proof with contradiction, but could not complete it... Thanks for you help.
Well the other proofs are quite long... I guess this is shorter. A simple computation proves that the sequence $n-Re(u_n)$ is increasing (and its first term is $0$). Now argue by contradiction: if $l=0$, then $n-Re(u_n)=|Re(u_n-n)|\le |u_n-n|=w_n$ converges to $0$. But $n-Re(u_n)$ is increasing, hence $\forall n \in \mathbb N,\ n-Re(u_n) = 0$. That's a contradiction. I'd like to thank the users who gave a valid solution with a lower bound for $w_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/484056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
What is the simplest $\Bbb{R}\to\Bbb{R}$ function with two peaks and a valley? What is the simplest $\Bbb{R}\to\Bbb{R}$ function with two peaks and a valley? I have a set of points in $\Bbb{R^2}$ and I would like to fit a curve to the points, the points approximately lie on a curve like the one depicted in the following figure: My points are such that $a$ (the width of the valley) is almost constant while $b$ (the height of the peaks with respect to the valley) can change. My informal definition of "simplest" is based on the following requirements: * *the function should be sufficient smooth *the fit should be easy to do with some off-the-shelf algorithm *I have just the points lying on the curve in figure, so I think that the function should smoothly go to zero to the left of the left peak and to the right of the right peak. My goal is to estimate $b$.
When you say that you "think that the function should smoothly go to zero to the left of the left peak and to the right of the right peak", notice that it does not have to. Moreover, both $a$ and $b$ can be chosen arbitrarily. The easiest way to do this would be to take a polynomial. If you want the simplest polynomial with two maxima at $(\pm\!\tfrac{1}{2}a,b)$ and one minimum at $(0,0)$, then $$\operatorname{f}(x) = \frac{8b}{a^4}(a^2-2x^2)x^2$$ If you insist that the function tend to zero, then you need to edit your question.
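If the goal is just to estimate $b$ from the measured points, this polynomial can be fit directly with an off-the-shelf least-squares routine. Below is a rough sketch (my own, not part of the original answer) using scipy's curve_fit; xs and ys are placeholders for your data (synthetic noisy data is generated here purely for illustration), and the fitted parameter b_hat is the peak-height estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(x, a, b):
    # two maxima of height b at x = +/- a/2 and a minimum of 0 at x = 0
    return (8.0 * b / a**4) * (a**2 - 2.0 * x**2) * x**2

# Replace xs, ys with your measured points; synthetic data for illustration:
xs = np.linspace(-1.2, 1.2, 200)
ys = two_peaks(xs, 2.0, 3.0) + 0.05 * np.random.randn(xs.size)

popt, pcov = curve_fit(two_peaks, xs, ys, p0=[1.5, 2.0])  # p0: rough initial guess
a_hat, b_hat = popt
print("estimated valley width a ~", abs(a_hat), " estimated peak height b ~", b_hat)
```

If the data really does flatten out to zero away from the peaks, a different model (for instance a difference of Gaussians) would be a better fit than a polynomial.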
{ "language": "en", "url": "https://math.stackexchange.com/questions/484117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many $5$-element subgroups does $S_7$ have? How many $5$-element subgroups are there in $S_7$, the group of permutations on $7$ elements? Let $H$ be a $5$-element subgroup of $S_7$. We have $\mbox{ord}(H) = 5$ and $5\mid 7!$. But I don't have any idea how can I find 5-element subgroups.
Hint: A subgroup of $S_7$ of order $5$ must be cyclic (all groups of prime order are cyclic), and therefore is generated by an element of order $5$. The only elements of order $5$ in $S_7$ are $5$-cycles (why?), but each subgroup contains $4$ such cycles. This reduces the problem to one of a combinatorial flavor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/484199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
A clarification regarding partial derivatives Let us suppose the $i^{th}$ partial derivative of $f:\Bbb{R}^n\to \Bbb{R}$ exists at $P$; i.e. if $P=(x_1,x_2,\dots,x_n)$, $$\lim_{\Delta x_n\to 0}\frac{f(x_1,x_2,\dots,x_n+\Delta x_n)-f(x_1,x_2,\dots,x_n)}{\Delta x_n}=f'_n (P)$$ My book says this implies that $$f(x_1,x_2,\dots,x_n+\Delta x_n)-f(x_1,x_2,\dots,x_n)=f'_n(P)\Delta x_n + \epsilon_n \Delta x_n$$ such that $\lim\limits_{\Delta x_n\to 0} \epsilon_n=0$. I don't understand where $\epsilon_n$ comes into the picture. Why can't we just have $f(x_1,x_2,\dots,x_n+\Delta x_n)-f(x_1,x_2,\dots,x_n)=f'_n(P)\Delta x_n$, considering we're anyway using $\Delta x_n$ as a real number rather than an operator? Justification for asking on overflow- I'm doing research on multi-variable calculus..? Thanks!
Compare the definition of the derivative to your ratio when $f:\mathbb R\to\mathbb R$, $x\mapsto x^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/484279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Matrix generating $\operatorname{SL}_n(\mathbb{R})$ How do I show that the following matrices generate $\operatorname{SL}_2(\mathbb{R})$ $\begin{pmatrix} 1 & a \\ 0 & 1 \\ \end{pmatrix}$ or $\begin{pmatrix} 1 & 0 \\ a & 1 \\ \end{pmatrix}$
Let $G$ be the span of the matrices $$ \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix} $$ with $a\in \mathbb R$. We have $$ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \in G $$ and for all $a\in \mathbb{R}^\times$ $$ \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 1 & a^{-1} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & a^{-1} \end{pmatrix} \in G $$ Now by a slightly modified Gauss elimination, any matrix $A\in \operatorname{SL}_2(\mathbb R)$ can be transformed into the unit matrix using the above matrices. So $G = \operatorname{SL}_2(\mathbb R)$
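For readers who want to double-check the two matrix identities used above, here is a short symbolic verification (my own sketch, not part of the original answer; it assumes sympy, and U, L are just my shorthand for the elementary unipotent matrices).

```python
from sympy import Matrix, symbols, simplify, zeros

a = symbols('a', nonzero=True)

def U(t):  # upper unipotent matrix [[1, t], [0, 1]]
    return Matrix([[1, t], [0, 1]])

def L(t):  # lower unipotent matrix [[1, 0], [t, 1]]
    return Matrix([[1, 0], [t, 1]])

w = U(1) * L(-1) * U(1)
assert w == Matrix([[0, 1], [-1, 0]])                       # first identity

d = U(a) * w * U(1 / a) * w * U(a) * w                      # second identity
assert (d - Matrix([[a, 0], [0, 1 / a]])).applyfunc(simplify) == zeros(2, 2)
print("both identities verified")
```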
{ "language": "en", "url": "https://math.stackexchange.com/questions/484346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do you add +1 in counting test questions? Here's an example question from the SAT question of the day: On the last day of a one-week sale, customers numbered 149 through 201 were waited on. How many customers were waited on that day? Possible answers: 51, 52, 53, 152, 153. The correct answer here is 53, which is the result of (201-149) + 1 = 53. What's the reasoning in adding the +1?
Let $C_n$ be the customer numbered $n$. List the customers in question: $$C_{149},C_{150},C_{151},\ldots,C_{200},C_{201}\;.$$ Now write their numbers in the form $148+\text{something}$: $$C_{148+\underline1},C_{148+\underline2},C_{148+\underline3},\ldots,C_{148+\underline{52}},C_{148+\underline{53}}\;.$$ In this form it’s clear that if you ignore the $148$, you’re just counting from $1$ through $53$, so there are $53$ customers. Now think back to see where the $53$ came from: it was what had to be added to the base-point $148$ to get $201$, the last customer number, so it was $201-148$. A little thought will show you that the same idea works in general, and that the base-point number will always be that of the last customer not being counted, so it will be one less than the number of the first customer that you want to count. If you’re counting the customers from $C_{\text{first}}$ through $C_{\text{last}}$, your base-point number will be $\text{base}=\text{first}-1$, and your counted customers will be $$C_{\text{base}+1},C_{\text{base}+2},C_{\text{base}+3},\ldots,C_{\text{base}+?}\;,$$ where $\text{base}+?=\text{last}$. Thus, the question mark must be $$?=\text{last}-\text{base}=\text{last}-(\text{first}-1)=\text{last}-\text{first}+1\;.$$
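As a trivial machine check (my own one-liner, in Python), the inclusive count $\text{last}-\text{first}+1$ is exactly the length of the corresponding range:

```python
first, last = 149, 201
assert len(range(first, last + 1)) == last - first + 1 == 53
```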
{ "language": "en", "url": "https://math.stackexchange.com/questions/484393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 15, "answer_id": 9 }
Help verify $\lim_{x\to 7} \frac {x^2+7x+49}{x^2+7x-98}$ . So my question is "Evaluate the limit" $\displaystyle \lim_{x\to 7} \frac {x^2+7x+49}{x^2+7x-98}$ I know you can't factor the numerator but you can for denominator. But either way you can't divide by $0$. So I say my answer is D.N.E. If anyone can verify that I got the right answer, I would be most grateful.
Observe $x^2+7x+49=x^2+7x-98+147$ hence:$$\lim_{x\to7}\frac{x^2+7x+49}{x^2+7x-98}=\lim_{x\to7}\frac{x^2+7x-98+147}{x^2+7x-98}=\lim_{x\to7}\left(1+\frac{147}{x^2+7x-98}\right)$$Now note $x^2+7x-98=x^2-7x+14x-98=(x+14)(x-7)$ hence as $x\to7$ our limit tends to $\pm\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/484444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $mn < 0$ if and only if exactly one of $m,n$ is positive I need to prove that $mn < 0$ if and only if $m > 0$ and $n < 0$ or $m < 0$ and $n > 0$. So I need to prove two cases: 1. If $m < 0$ and $n > 0$ or, in the alternative, if $m > 0$ and $n < 0$, then $mn < 0$. 2. If $mn < 0$, then $m < 0$ and $n > 0$ or else $m > 0$ and $n < 0$. So far I have, By axiom O.1 (from the study guide), if $m$ and $n$ are positive , then $mn > 0$, and by corollary 1.14 (study guide), if $m$ and $n$ are negative, then $(-m)(-n) > 0$. Hence, if $m$ is negative and $n$ is positive or $m$ is positive and $n$ is negative, then $(-m)n < 0$ and $m(-n) < 0$. I am not sure if this I am on the right track or not, but at this point I am just completely stumped. Any help is welcome. Thanks, Tony
It looks OK up until the last sentence. You say that if $m$ is negative and $n$ is positive or $m$ is positive and $n$ is negative $(-m)n < 0$ and $m(-n) < 0$. This is false; these quantities are positive, not negative. I think I know what you meant, but you need to rewrite this part. Note, if $m$ is negative, don't write $-m$ to refer to that same number. That's like saying $-1$ is negative and then using $-(-1) = 1$ in place of $-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/484529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On the Divergence of $s_n=\cos{\frac{\pi}{3}n}$: A Proof Question: Show that $s_n=\cos{\frac{\pi}{3}n}$ is divergent. Attempt: Suppose that $\lim_{n\rightarrow \infty}(\cos{\frac{\pi}{3}n})=s$, then given an $\epsilon$, say $\epsilon=1$, we can find an $N\in\mathbb{N}$ so that $$\begin{vmatrix} (\cos{\frac{\pi}{3}n})-s\end{vmatrix}<1.$$ If $n=6k+1$---for some sufficient $k\in\mathbb{N}$, then we obtain $\lvert \frac{1}{2}-s\rvert<1$, and so $\frac{1}{2}<s<\frac{3}{2}$; however, if $n=6k+3$---likewise for some sufficient $k\in\mathbb{N}$, then we obtain $\lvert -1-s\rvert<1$, and so $-2<s<0$. Therefore, since $s$ cannot satisfy both inequalities, $\lim_{n\rightarrow\infty}(\cos{\frac{\pi}{3}n})$ does not exist
You are correct in your work. However, as suggested above, an easier way is to just show that there are two subsequences converging to different limits. In your case, $s_{6n+1}$ converges to $0.5$ and $s_{6n+3}$ converges to $-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/484613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Understanding 'root' in its context For which of the following primes p, does the polynomial $x^4+x+6$ have a root of multiplicity$> 1$ over a field of characteristic $p$? $p=2/3/5/7$. My book solves it using the concepts of modern algebra, which I am not very comfortable with. I wonder if there is an intuition based method to solve this question. Like, $x^2-2x+1$ would have $1$ as root with multiplicity$=2$. But in the given equation, everything is positive, so what is meant by root here? Is it not the value of $x$ when the graph crosses the $x$-axis?
The question is slightly ambiguous, because a polynomial may only have its roots in an extension of the field where it's defined, for instance $x^2+1$ has no roots in $\mathbb{R}$, but it has roots in $\mathbb{C}$. However, having a multiple root is equivalent to be divisible by $(x-a)^2$, where $a$ is in the field where the polynomial has its coefficients (or maybe, depending on conventions) in an extension field. Polynomial division is carried out the same in every field: $$ x^4+x+6 = (x-a)^2 (x^2+2ax+3a^2) + ((4a^3+1)x+(6-3a^4)) $$ where $(4a^3+1)x+(6-3a^4)$ is the remainder. For divisibility we need the remainder is zero, so $$\begin{cases} 4a^3 + 1 = 0 \\ 6 - 3a^4 = 0 \end{cases}$$ We can immediately exclude the case the characteristic is $2$, because in this case the remainder is $x+c$ ($c$ some constant term). If the characteristic is $3$, then the constant term in the remainder is zero and the first equation becomes $$ a^3+1=0 $$ So, when $a=-1$, there is divisibility. Note also that $0$ can never be a multiple root of the polynomial, so we can say $a\ne0$. Assume the characteristic is neither $2$ nor $3$. We can multiply the first equation by $3a$ and the second equation by $4$; summing them up we get $$ 3a+24=0 $$ which can be simplified in $a=-8$. Plugging it in the first equation, we get $$ 4(-8)^3+1=-2047=-23\cdot89 $$ which is zero if and only if the characteristic is either $23$ or $89$. Thus the only prime in your list that gives multiple roots is $p=3$: indeed $$ x^4+x+6=x(x^3+1)=x(x+1)^3 $$ when the characteristic is $3$.
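For anyone who wants to confirm the conclusion by machine, here is a quick sketch (my own, not part of the answer; it assumes sympy). Over a field of characteristic $p$, the polynomial has a repeated root in the algebraic closure exactly when $\gcd(f,f')$ is nonconstant mod $p$, and among the four options only $p=3$ passes that test.

```python
from sympy import Poly, symbols

x = symbols('x')
for p in (2, 3, 5, 7):
    f = Poly(x**4 + x + 6, x, modulus=p)
    g = f.gcd(f.diff(x))   # nonconstant gcd <=> repeated root over the closure
    print(p, g)            # a nonconstant gcd appears only for p = 3
```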
{ "language": "en", "url": "https://math.stackexchange.com/questions/484699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $ \frac{a^2}{a+b}+ \frac{b^2}{b+c} \geq \frac{3a+2b-c}{4} $ Prove that: $$ \frac{a^2}{a+b} + \frac{b^2}{b+c} \geq \frac{3a+2b-c}{4} : (a, b, c)\in \mathbb{R}^+$$ This is just one of these questions where you just have no idea how to start. First impressions, I don't see how any known inequality can be used, and I also don't want to go just make everything as a sum then solve it. I always was bad at inequalities and I don't know why. I did the other exercise just fine, but inequalities are hard for me. This is from a high-school olympiad.
$$\begin{align} \frac{a^2}{a+b} + \frac{b^2}{b+c} &\geqslant \frac{3a + 2b - c}{4}\\ \iff \frac{a^2}{a+b} - a + \frac{b^2}{b+c} - b &\geqslant - \frac{a + 2b + c}{4}\\ \iff -\frac{ab}{a+b} - \frac{bc}{b+c} &\geqslant - \frac{a+b}{4} - \frac{b+c}{4}\\ \iff \frac{(a+b)^2 - 4ab}{4(a+b)} + \frac{(b+c)^2 - 4bc}{4(b+c)} &\geqslant 0. \end{align}$$ Since all numbers are positive, the denominators are positive, and $(x+y)^2 - 4xy = (x-y)^2 \geqslant 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/484761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding the conditional extrema | Correct Method Find the conditional extrema of $ f(x,y) = x^2 + y $ subject to $x^2 + y^2 = 4$. Do I use Lagrange multipliers for this? Or do I use the second-partial-derivative test and figure out which side of zero $$ f_{xx}f_{yy} - {f_{xy}}^2 $$ falls on?
We have: $$\tag 1 f(x,y) = x^2 + y ~~ \mbox{subject to} ~~\phi(x,y) = x^2 + y^2 = 4$$ A 3D plot of the surface and a contour plot of the two functions (figures not reproduced here) show four points of interest, so we will use Lagrange multipliers to find those, and then we just need to classify them as local or global min or max, or not classifiable. We can write: $$\tag 2 F(x,y) = f + \lambda \phi = x^2 + y + \lambda(x^2 + y^2)$$ So, $\tag 3 \dfrac{\partial F}{\partial x} = 2x (1 + \lambda) = 0 \rightarrow x = 0~ \mbox{or}~ \lambda = -1, ~\mbox{and}~$ $\tag 4 \dfrac{\partial F}{\partial y} = 1 + 2 \lambda y = 0 $ From $(3)$, we get: $$x = 0 \rightarrow y^2 = 4 \rightarrow y = \pm 2 \rightarrow \lambda = \mp \dfrac{1}{4}$$ From $(3)$ and $(4)$, we get: $$\lambda = -1, y = \dfrac{1}{2} \rightarrow x^2 + \dfrac{1}{4} = 4 \rightarrow x = \pm \dfrac{\sqrt{15}}{2}$$ Summarizing these results, we have the four potential critical points to investigate: $$(x,y) = (0,2), (0,-2), \left(\dfrac{\sqrt{15}}{2},\dfrac{1}{2}\right),\left(-\dfrac{\sqrt{15}}{2},\dfrac{1}{2}\right)$$ Now, classify these critical points using the typical method. You should end up with (these can also be read off the 3D plot mentioned above): * *Local min at $(0,2)$ *Global min at $(0,-2)$ *Global max at $\left(\pm \dfrac{\sqrt{15}}{2}, \dfrac{1}{2}\right)$
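If you want to double-check the four critical points by machine, here is a small sketch (my own, not part of the original answer; sympy assumed) that solves the same Lagrange system together with the constraint.

```python
from sympy import symbols, solve

x, y, lam = symbols('x y lam', real=True)
F = x**2 + y + lam * (x**2 + y**2 - 4)     # constraint folded into the multiplier term

sols = solve([F.diff(x), F.diff(y), x**2 + y**2 - 4], [x, y, lam], dict=True)
for s in sols:
    print(s, "  f =", (x**2 + y).subs(s))  # f-values: 2, -2, 17/4, 17/4
```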
{ "language": "en", "url": "https://math.stackexchange.com/questions/484835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve a complex polynomial? * *Solve: $$ z^3 - 3z^2 + 6z - 4 = 0$$ How do I solve this? Can I do it by basically letting $ z = x + iy$ such that $ i = \sqrt{-1}$ and $ x, y \in \mathbf R $ and then substitute that into the equation and get a crazy long equation? If I did that I suspect I wouldn't be able to decipher the imaginary part of the equation. Or should I change it to one of the forms below: $$ z^n = r^n \mathbf{cis} n \theta $$ $$ z^n = r^n e^{n\theta i} $$ And then plug that into the equation? I did that. But it looked unsolvable. I'm so confused.
The easiest thing is just to try to guess a root of the polynomial first. In this case, for $$p(z) = z^3 - 3z^2 + 6z - 4,$$ we have that $p(1) = 0$. Therefore, you can factorize it further and get $$z^3 - 3z^2 + 6z - 4 = (z-1)(z^2 - 2z + 4)$$ $$= (z-1)((z-1)^2 + 3).$$ The roots are then $$z_{1} = 1, \hspace{10pt}z_{2} = 1 + i\sqrt{3}, \hspace{10pt}z_{3} = 1 - i\sqrt{3}.$$
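As a quick numerical cross-check (my own snippet, not part of the answer; numpy assumed), the same three roots come out of np.roots:

```python
import numpy as np

roots = np.roots([1, -3, 6, -4])             # coefficients of z^3 - 3z^2 + 6z - 4
expected = [1.0, 1 + 1j * np.sqrt(3), 1 - 1j * np.sqrt(3)]
for r in expected:
    assert np.min(np.abs(roots - r)) < 1e-9  # each expected root is found
print(roots)
```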
{ "language": "en", "url": "https://math.stackexchange.com/questions/484906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
How to find the inverse of 70 (mod 27) The question pertains to decrypting a Hill Cipher, but I am stuck on the part where I find the inverse of $70 \pmod{ 27}$. Does the problem lie in $70$ being larger than $27$? I've tried Gauss's Method: $\frac{1}{70} = \frac{1}{16} ^{\times2}_{\times2} = \frac{2}{32} = \frac{2}{5} = \frac{12}{30} = \frac{12}{3} = \frac{132}{33} = \frac{24}{6} = \frac{120}{30} $ And the denominators start repeating so I can never get 1 in the denominator. And the Euclidean Algorithm $\ 70 = 2(27) + 16 $ $\ 27 = 1(16) + 11 $ $\ 16 = 1(11) + 5 $ $\ 11 = 1(5) + 6 $ $\ 5 = 1(6) -1 $ Which is also not helpful. I think I'm trying to get + 1 on the last equation for $1 \pmod{ 27}$, but maybe I'm misunderstanding the method. Am I approaching this incorrectly? I'm new to modular arithmetic.
A couple of ideas (working all the time modulo $\,27\,$): $$\begin{align*}\bullet&\;\;70=14\cdot 5\\ \bullet&\;\;14\cdot2=28=1\implies& 14^{-1}=2\\ \bullet&\;\;5\cdot 11=55=2\cdot 27+1=1\implies&5^{-1}=11\end{align*}$$ Thus, finally, we get $$70^{-1}=14^{-1}\cdot 5^{-1}=2\cdot 11=22$$
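For a quick machine check (my own addition, not part of the answer): Python's built-in pow accepts a negative exponent for modular inverses since version 3.8, and the extended Euclidean algorithm the question was attempting gives the same value.

```python
# Built-in modular inverse (Python 3.8+):
assert pow(70, -1, 27) == 22

# Extended Euclidean algorithm: returns (g, s, t) with a*s + b*t = g.
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = ext_gcd(70, 27)
assert g == 1 and s % 27 == 22   # s is the inverse of 70 modulo 27
assert (70 * 22) % 27 == 1
```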
{ "language": "en", "url": "https://math.stackexchange.com/questions/484990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 6, "answer_id": 1 }
Evaluate $\lim_{x\to49} \frac{x-49}{\sqrt{x}-7}$ Evaluate $\lim_{x\to 49} \frac{x-49}{\sqrt{x}-7}$ I'm guessing the answer is 7 but again that is only a guess. I don't know how to solve this type of problem. Please help.
$$ \lim_{x \to 49} \frac {x - 49}{\sqrt x - 7} = \lim_{x \to 49} \frac {(\sqrt x + 7)(\sqrt x - 7)}{\sqrt x - 7} = \lim_{x \to 49} (\sqrt x + 7) = 14 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/485158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Undergraduate math competition problem: find $\lim \limits_{n \to \infty} \int \limits^{2006}_{1385}f(nx)\, \mathrm dx$ Suppose $f\colon [0, +\infty) \to \mathbb{R}$ is a continuous function and $\displaystyle \lim \limits_{x \to +\infty} f(x) = 1$. Find the following limit: $$\large\displaystyle \lim \limits_{n \to \infty} \int \limits^{2006}_{1385}f(nx)\, \mathrm dx$$
Using the substitution $t=nx$, we get $I_n = \int^{2006}_{1385}f(nx)dx = \frac{1}{n}\int_{1385n}^{2006n} f(t) dt$. Let $I=2006-1385$. Now let $\epsilon>0$, and choose $L>0$ such that if $t\ge L$, then $-\frac{\epsilon}{I} < f(t)-1 < \frac{\epsilon}{I}$. Now choose $N\ge \frac{L}{1385}$. Then if $n \ge N$ and $t \in [1385n,2006n]$, we have $-\frac{\epsilon}{I} < f(t)-1 < \frac{\epsilon}{I}$. Integrating over $[1385n,2006n]$ and dividing by $n$ gives $$ -\epsilon < I_n -I < \epsilon $$ It follows that $\lim_n I_n = I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Period of the function $\sin (8\pi \{x\})$ My question is to find the period of $$\sin(8\pi\{x\}),$$ where $\{\cdot\}$ is the fractional part function. I know that the period of $\{\cdot\}$ is 1 and the period of $\sin(8\pi x)$ is $1/4$. But how do I find the overall period of the given function?
$\sin(8 \pi \{x\}) = \sin(8 \pi ( x - \lfloor x \rfloor)) = \sin(8 \pi x - 8 \pi \lfloor x \rfloor) = \sin(8 \pi x)$, since $8 \pi \lfloor x \rfloor$ is an integer multiple of $2\pi$; hence $\sin(8 \pi \{x\})$ has period $\frac{1}{4}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Counter-Example (or Proof) to $\int_{0}^{1}f_{n}\;dx\to0$ Implies $f_{n}\to0$ a.e. $x$ Whenever $f_{n}\geq0$. I am dealing with a problem at the moment where the hypothesis can be restated as $\int_{0}^{1}f_{n}\;dx\to0$ and $f_{n}\geq0$. Under these conditions, I want to conclude that $\lim f_{n}$ exists and is $0$ for almost every $x$ in $[0,1]$. Without boundedness or monotonicity, the usual convergence theorems are not immedaitely applicable. Also, if the limit of integrals converged to anything other than $0$, or if the $f_{n}$ were allowed to be signed, or if the set of integration had infinite measure, well known counter-example(s) would be available. It seems that the finite measure of $[0,1]$, the nonnegativity of $f_{n}$, and the assumption $\int_{0}^{1}f_{n}\;dx\to0$ should force $f_{n}\to0$ by appealing (in some manner) the well known fact that for $g\geq0$ measurable, $\int_{0}^{1}g\;dx=0$ if and only if $g=0$ a.e. $x\in[0,1]$. The only way out of this is if the $f_{n}$ spiked on sets of small measure; but as $n\to\infty$, these sets where the $f_{n}$ spike must become correspondingly smaller (in measure) since we have $\int_{0}^{1}f_{n}\;dx<\epsilon$ for $n$ large. In the limit, these "spike sets" should yield to a null set, thus proving the claim. The question I am referring to is here Limit of Integral of Difference Quotients of Measurable/Bounded $f$ Being $0$ Implies $f$ is Constant
The canonical counterexample is to take the indicator functions of $[0,1]$; $[0,1/2]$,$[1/2,1]$, $[0,1/4]$, $[1/4,1/2]$, $[1/2,3/4]$, $[3/4,1]$ &c. (If the pattern is not evident: break $[0,1]$ into $2^k$ intervals let $f_n$ be the sequence of indicator functions of each $2^k$ intervals obtained at each step from left to right, let $k=0,1,2,\ldots$) In this case $\liminf f_n=0$ while $\limsup f_n=1$ so the sequence of functions converges nowhere, yet $$\int_0^1 f_n\to 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/485404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why can't all subsets of sample space be considered as events? My textbook is Probability and random processes by Grimmett & Stirzaker and the first chapter does not explain this," for reasons beyond the scope of the book". The authors introduce the reader to sample spaces and to events and then go on to say that events are subsets of sample space. Then they ask, "Need all subsets of sample space be events ?" and then they say no. But I don't see why not. Can anyone give mean an intuitive explanation for this?
Look up Vitali sets: assuming the axiom of choice, one cannot assign a countably additive, translation-invariant measure extending length to every subset of $[0,1]$, which is why one cannot in general take every subset of the sample space to be an event. On the other hand, without the axiom of choice it is consistent (Solovay's model, which relies on a large-cardinal assumption) that every set of reals is Lebesgue measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Proving a function $f(m,n)$ which satisfies two conditions is a constant I found the following question in a book only with one sentence. "This question can be solved by an elementary way. Note that the following two are false: (1) If a function is bounded from below, then it has minimum value. (2) A monotone decreasing sequence reaches a negative value." Question: Let $m,n$ be integers. Supposing that a function $f(m,n)$ defined by $m,n$ satisfies the following two conditions, then prove that $f(m,n)$ is a constant. 1. $f(m,n)\ge0$. 2. $4f(m,n)=f(m-1,n)+f(m+1,n)+f(m,n-1)+f(m,n+1)$. I suspect this question can be solved by a geometric aspect. I've tried to prove this, but I'm facing difficulty. Could you show me how to prove this?
A probabilistic proof is given at the bottom of this page.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
On verifying Proj S is a scheme In Hartshorne II Prop 2.5, it says $D_+{(f)}$ is homeomorphic to $\text{Spec}(S_{(f)})$, but I cannot prove it. Since $D_+{(f)}$ is homeomorphic to $S_f$, I have to show $\text{Spec}(S_{(f)})$ is homeomorphic to $\text{Spec}(S_f)$. Take $p \in \text{Spec}\, S$. I cannot show $S_f(pS_f \cap pS_{(f)})=pS_f$.
You already defined the (homeo)morphism $\phi:D_+(f)\to\textrm{Spec}\,S_{(f)}$ by $\mathfrak p\mapsto \mathfrak pS_f\cap S_{(f)}$. Its inverse $\psi$ sends $\mathfrak q\mapsto \ell^{-1}(\mathfrak qS_f)$, where $\ell:S\to S_f$ is localization at $f$. (I write $\ell^{-1}(-)$ instead of $S\cap -$ as I do not have Hartshorne's book here and cannot check if $S$ is integral.) Hint: It should be clear that $\mathfrak q\subset \phi(\psi(\mathfrak q))$ for every $\mathfrak q\in \textrm{Spec}\,S_{(f)}$. For the reverse inclusion, show that given an element $x\in \phi(\psi(\mathfrak q))$, a sufficiently high power of $x$ lies in $\mathfrak q$. To show that $Y=\textrm{Proj}\,S$ is a scheme, you may set $\mathscr O_Y(D_+(f))=S_{(f)}$ and notice that for every open covering $D_+(f)=\bigcup_iD_+(f_i)$ the sequence $$0\to S_{(f)}\to\prod_iS_{(f_i)}\rightrightarrows\prod_{i,j}S_{(f_if_j)}$$ is an equalizer diagram. As $f$ varies in $S_+$, the $D_+(f)$ generate the topology of $Y$, so we get a uniquely determined sheaf $\mathscr O_Y$ on the whole $Y$. And finally, by construction, $(D_+(f),\mathscr O_Y|_{D_+(f)})\cong (\textrm{Spec}\,S_{(f)},\mathscr O_{\textrm{Spec}\,S_{(f)}})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\sum_{n=0}^{\infty} \prod_{k=0}^n \dfrac{1}{x+k} = e \sum_{n=0}^{\infty} \dfrac{(-1)^n}{(x+n)n!}$ Let $$f_n(x) = \prod_{k=0}^n \dfrac{1}{x+k}.$$ Show that $$\sum_{n=0}^{\infty} f_n(x) = e \sum_{n=0}^{\infty} \dfrac{(-1)^n}{(x+n)n!}.$$
Using the partial fraction identity that was proved by a straightforward induction technique at this MSE post, we have that $$\prod_{k=0}^n \frac{1}{x+k} = \frac{1}{n!} \sum_{k=0}^n (-1)^k \binom{n}{k} \frac{1}{x+k}.$$ Now to compute $$\sum_{n=0}^\infty \prod_{k=0}^n \frac{1}{x+k}$$ we ask about the coefficient of $$\frac{1}{x+k}$$ taking into account all terms of the series. We see immediately that all products with $n\ge k$ include this term, so the coefficient is $$(-1)^k \sum_{n\ge k} \frac{1}{n!} \binom{n}{k} = \frac{(-1)^k}{k!} \sum_{n\ge k} \frac{1}{(n-k)!} = e\frac{(-1)^k}{k!}.$$ Now summing for $k\ge 0$ we get the desired result $$e\sum_{k=0}^\infty \frac{(-1)^k}{k!} \frac{1}{x+k}.$$ Interestingly enough the term $$g_n(x) = \prod_{k=1}^n \frac{1}{x+k}$$ can also be evaluated using Mellin transforms. (Drop the factor $1/x$ for the moment to keep the Mellin integral simple -- no pole at zero.) We get $$g_n^*(s) = \mathfrak{M}(g_n(x); s) = \int_0^\infty \prod_{k=1}^n \frac{1}{x+k} x^{s-1} dx$$ which gives (use a keyhole contour with the slot on the real axis, which is also the branch cut of the logarithm for $x^{s-1}$) $$g_n^*(s) (1-e^{2\pi i(s-1)}) = 2\pi i \sum_{q=1}^n \operatorname{Res}(g_n(x); x=-q).$$ The sum of the residues is $$ \sum_{q=1}^n \operatorname{Res}(g_n(x); x=-q) \\= \sum_{q=1}^n (-q)^{s-1} \prod_{k=1}^{q-1} \frac{1}{-q+k} \prod_{k=q+1}^n \frac{1}{-q+k} = \sum_{q=1}^n e^{i\pi(s-1)} q^{s-1} \frac{(-1)^{q-1}}{(q-1)!} \frac{1}{(n-q)!} \\= - e^{i\pi s} \sum_{q=1}^n q^s \frac{(-1)^{q-1}}{q!} \frac{1}{(n-q)!} = -\frac{e^{i\pi s}}{n!}\sum_{q=1}^n q^s (-1)^{q-1} \binom{n}{q}.$$ This gives $$g_n^*(s) = - \frac{1}{n!} \frac{2\pi i \times e^{i\pi s}}{1-e^{2\pi i(s-1)}} \sum_{q=1}^n q^s (-1)^{q-1} \binom{n}{q} \\ = - \frac{1}{n!} \frac{2\pi i }{e^{-\pi i s} - e^{\pi i s}} \sum_{q=1}^n q^s (-1)^{q-1} \binom{n}{q} = \frac{1}{n!} \frac{\pi}{\sin(\pi s)} \sum_{q=1}^n q^s (-1)^{q-1} \binom{n}{q}.$$ We apply Mellin inversion to recover $g_n(x)$ with the Mellin inversion integral being $$\frac{1}{2\pi i}\int_{1/2-i\infty}^{1/2+i\infty} g_n^*(s)/x^s ds,$$ shifting to the left to recover an expansion about zero. The residue at $s=0$ is special, it has the value $$\frac{1}{n!} \sum_{q=1}^n (-1)^{q-1} \binom{n}{q} = \frac{1}{n!} .$$ The remaining residues at the negative integers $-p$ contribute $$\frac{1}{n!} \sum_{p=1}^\infty (-1)^p x^p \sum_{q=1}^n \frac{1}{q^p} (-1)^{q-1} \binom{n}{q} = \frac{1}{n!} \sum_{q=1}^n (-1)^{q-1} \binom{n}{q} \sum_{p=1}^\infty \frac{(-1)^p\times x^p}{q^p} \\= \frac{1}{n!} \sum_{q=1}^n (-1)^{q-1} \binom{n}{q} \frac{-x/q}{1+x/q} = \frac{1}{n!} \sum_{q=1}^n (-1)^q \binom{n}{q} \frac{x}{x+q}.$$ Including the residue at zero we thus obtain $$ g_n(x) = \frac{1}{n!} + \frac{1}{n!} \sum_{q=1}^n (-1)^q \binom{n}{q} \frac{x}{x+q} = \frac{1}{n!} \sum_{q=0}^n (-1)^q \binom{n}{q} \frac{x}{x+q}.$$ Since $f_n(x) = 1/x \times g_n(x)$ we get that $$f_n(x) = \frac{1}{n!} \sum_{q=0}^n (-1)^q \binom{n}{q} \frac{1}{x+q}.$$ Observation Aug 25 2014. The Mellin transform calculation is little more than a reworked computation of the partial fraction decomposition of the product term by residues and is in fact not strictly necessary here. An example of this very simple technique (no transforms) is at this MSE link.
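If one wants to sanity-check the partial-fraction identity that the whole computation rests on, the first few cases can be verified symbolically. This is my own sketch (sympy assumed), not part of the original answer.

```python
from sympy import symbols, binomial, factorial, cancel

x = symbols('x')
for n in range(1, 6):
    lhs = 1
    for k in range(n + 1):
        lhs *= 1 / (x + k)                   # prod_{k=0}^n 1/(x+k)
    rhs = sum((-1)**k * binomial(n, k) / (x + k) for k in range(n + 1)) / factorial(n)
    assert cancel(lhs - rhs) == 0
print("partial-fraction identity verified for n = 1..5")
```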
{ "language": "en", "url": "https://math.stackexchange.com/questions/485715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find all functions $f(x+y)=f(x^{2}+y^{2})$ for positive $x,y$ Find all functions $f:\mathbb{R}^{+}\to \mathbb{R}$ such that for any $x,y\in \mathbb{R}^{+}$ the following holds: $$f(x+y)=f(x^{2}+y^{2}).$$
If $$f(A(x,y))=f(B(x,y))$$ on a connected open set of $(x,y)$ where $A$ and $B$ are functionally independent (have nonzero Jacobian determinant), the function $f$ is constant on that set . Locally one has both $xy$-coordinates and $AB$-coordinates. From $(A,B)$ you can get to all close enough $(A + \epsilon_1, B + \epsilon_2)$ by moving $A$-only and then $B$-only along the $AB$ coordinates and this motion does not change the value of $f$. The motion can be tracked in $xy$ coordinates (uniquely determined by $A,B$ in a small neighborhood), and such paths can reach an open set of $(x,y)$ near any given point. This shows $f$ is locally constant. On a connected set that means constant. The translation to this problem is: $A(x,y) = x+y$, $B(x,y)=x^2+y^2$, Jacobian is $\pm(x-y)$, so that the connected open set of $(x,y)$ with $0 < x < y$ can be used. This is a subset of the allowed pairs but it covers all possible values of $A$ and $B$, and therefore of $f(A)$ and $f(B)$. So it solves the problem with slightly less than the full set of assumptions, which is still a solution. The need for an independence hypothesis can be seen from cases like $B=A^3$ where there are continuous nonconstant solutions on some intervals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 1 }
Why is compactness so important? I've read many times that 'compactness' is such an extremely important and useful concept, though it's still not very apparent why. The only theorems I've seen concerning it are the Heine-Borel theorem, and a proof continuous functions on R from closed subintervals of R are bounded. It seems like such a strange thing to define; why would the fact every open cover admits a finite refinement be so useful? Especially as stating "for every" open cover makes compactness a concept that must be very difficult thing to prove in general - what makes it worth the effort? If it helps answering, I am about to enter my third year of my undergraduate degree, and came to wonder this upon preliminary reading of introductory topology, where I first found the definition of compactness.
Compactness does for continuous functions what finiteness does for functions in general. If a set $A$ is finite, then every function $f:A\to \mathbb R$ has a max and a min, and every function $f:A\to\mathbb R^n$ is bounded. If $A$ is compact, then every continuous function from $A$ to $\mathbb R$ has a max and a min and every continuous function from $A$ to $\mathbb R^n$ is bounded. If $A$ is finite, then every sequence of members of $A$ has a sub-sequence that is eventually constant, and "eventually constant" is the only sort of convergence you can talk about without talking about a topology on the set. If $A$ is compact, then every sequence of members of $A$ has a convergent subsequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "209", "answer_count": 13, "answer_id": 11 }
coordinate of shorter line If I have a line segment with endpoints AB,CD. The length of the line is 5 units. If I make the line shorter (eg. 3 units), and one of the endpoints is still AB, how do I figure out what the new CD is? Thanks for the help
We assume that your A, B, C, D are coordinates. So we will more conventionally call them $(a,b)$ and $(c,d)$. For your particular case, the coordinates of the new endpoint are $(x,y)$, where $$x=a+\frac{3}{5}(c-a)\qquad\text{and} \qquad y=b+\frac{3}{5}(d-b).$$
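If you want to compute the new endpoint programmatically, here is a tiny helper (my own naming and code, not from the answer) implementing the same formula for arbitrary old and new lengths, keeping the endpoint $(a,b)$ fixed.

```python
def shorten_segment(P, Q, new_len, old_len):
    """Return the new far endpoint when the segment P->Q of length old_len
    is rescaled to new_len with P held fixed."""
    (a, b), (c, d) = P, Q
    t = new_len / old_len
    return (a + t * (c - a), b + t * (d - b))

# Example: a 5-unit segment from (0, 0) to (3, 4) shortened to 3 units
print(shorten_segment((0, 0), (3, 4), 3, 5))  # -> (1.8, 2.4), 3 units from (0, 0)
```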
{ "language": "en", "url": "https://math.stackexchange.com/questions/485888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Greatest possible integer value of x+y? I found a interesting question in one exam. If 5 < x < 10 and y = x + 5, what is the greatest possible integer value of x + y ? (A) 18 (B) 20 (C) 23 (D) 24 (E) 25 MySol: For max value of x+y , x should be 9. So x+y = 9+14 = 23 But this is not correct. Can someone explain.
Note that $x+y=2x+5$. Since $5<x<10$, we have $2x<20$, and the greatest integer value that $2x$ can take is $19$ (attained at $x=9.5$), so the greatest possible integer value of $x+y$ is $19+5=24$. Remark: Unfortunately, a bit of a trick question. Not nice! One of my many objections to multiple choice questions is that they are too often designed to fool people into giving the "wrong" answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/485965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Integration of $x^2 \sin(x)$ by parts How would I integrate the following? $$\int_0^{\pi/2} x^2\sin(x)\,dx$$ I did $u=x^2$ and $dv=\sin(x)$ I got $x^2-\cos(x)+2\int x\cos(x)\,dx.\quad$ I then used $u=x$ and $dv=\cos(x).$ I got $$x^2-\cos(x)+2[x-\sin(x)-\int\sin(x)]$$ then $x^2-\cos(x)+-2 \sin(x)(x)-\cos(x)\Big|_0^{\pi/2} =\dfrac{\pi^2}{4}-0-2$
You need to multiply $u$ and $v$, then subtract the subsequent integral: So you should have $$\begin{align} \int_0^{\pi/2} x^2\sin(x)\,dx & = -x^2\cos(x)+2\int x\cos(x)\,dx \\ \\ & = -x^2 \cos x + 2\Big[x \sin x - \int \sin x\,dx\Big]\\ \\ & = -x^2\cos x + 2x \sin x - (-2\cos x)\\ \\ & = -x^2 \cos x + 2x \sin x + 2\cos x \Big|_0^{\pi/2}\end{align}$$ And proceed from there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proving Cantor's theorem $\DeclareMathOperator{\card}{card}$From Problem-Solvers Topology Prove the following: CANTOR'S THEOREM If $A$ is a set, then $$\card A < \card \mathcal{P}(A)$$ where $\card A$ stands for the cardinality of set $A$. My Answer If $A = \emptyset$, then $\card A =0$ and $\card \mathcal{P}(A)=1$. If $A = \{a\}$, then $\card A=1$ and $\card \mathcal{P}(A)=2$. So suppose that $A$ has at least two elements. Define a function $f: A \rightarrow \mathcal{P}(A)$ such that $f(x) = \{x\}$ for all $x \in A$. Then $f$ is injective. But it cannot be surjective, because for any two distinct elements $a,b \in A$, there is no element in $A$ that is sent to the set $\{a,b\}$ in $\mathcal{P}(A)$. Therefore, there is no bijection, and $\card A < \card\mathcal{P}(A)$. Do you think my answer is correct? Thanks in advance
$\DeclareMathOperator{\card}{card}$No. Your answer is not correct. Between any two sets (both having two elements or more) there is a non-surjective map. Your task is to show that every set $A$ and every function from $A$ to $\mathcal P(A)$ is not surjective. To clarify the point you're missing, the argument in your proof amounts to the following "proof": "Theorem": There is no bijection between $\{0,1\}$ and $\{2,3\}$. "Proof". Consider the function $f(0)=f(1)=2$. It is clearly not surjective, therefore it is not a bijection. And therefore $\card(\{0,1\})\neq\card(\{2,3\})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Trigonometric problem using basic trigonometry If $x$ is a solution of the equation: $$\tan^3 x = \cos^2 x - \sin^2 x$$ Then what is the value of $\tan^2 x$? This is the problem you are supposed to do it just with highschool trigonometry , but i can't manage to do it please help Here are the possible answers: $$a) \sqrt{2}-1, b) \sqrt{2}+1, c) \sqrt{3}-1, d) \sqrt{3}+1, e)\sqrt{2}+3$$
$\tan^2 x=\cos^2 x-\sin^2 x$ $\sin^2 x=\cos^4 x -\sin^2 x \cos^2 x$ $0=\cos^4 x -\sin^2 x \cos^2 x -\sin^2 x$ $0=\cos^4 x - \sin^2 x(1+\cos^2 x)$ $0=\cos^4 x - (1-\cos^2 x)(1+\cos^2 x)$ $0=\cos^4 x -(1- \cos^4 x)$ $0=2 \cos^4 x -1$ $\cos^2 x=\frac {\sqrt 2}{2}$ $\large \frac {1}{\cos^2 x}=\sqrt 2$ $\tan^2 x=\sqrt 2 \sin^2 x$ Looks like we're stuck, but from above we have $\cos^2 x=\frac {\sqrt 2}{2}$ , so $\sin^2 x=1-\frac {\sqrt 2}{2}$ By substitution, $\tan^2 x=\sqrt 2 (1-\frac {\sqrt 2}{2})$ $\tan^2 x=\sqrt 2 -1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/486194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Book search in Stability Theory. Can anyone recommend a book on Stability Theory with an intuitive approach? I have some course notes on the subject, but they are really abstract and theoretical. I really want to understand it, e.g.: stability in the sense of Lyapunov / asymptotic stability / global asymptotic stability / Lyapunov's stability theorem / the Hurwitz criterion... A book with many exercises (or worked examples) with solutions would be wonderful for me. Any suggestions of titles and authors (ideally freely downloadable, or a pdf/djvu file) will be appreciated. Thanks!
I like the book written by Jorge Sotomayor, Teoria Qualitativa das Equações Diferenciais. That's a good one! Or: L. Perko, Differential Equations and Dynamical Systems (Springer, 1991).
{ "language": "en", "url": "https://math.stackexchange.com/questions/486265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
How can all 3 of these be true? * *Most numbers are composite. *If you choose a random whole number there is a 50/50 chance that it's even or odd. *If you take 2 random whole numbers and multiply them there is a 75% chance the result is even and a 25% chance it is odd. (That is even*even=even, odd*even=even, even*odd=even, and only odd*odd=odd) How can all 3 of these be true?
Your question #3 almost answers itself: if there are 4 possible options, and 3 out of the 4 possible options give us an even product, then there is a 75% chance the product is even and a 25% chance that the product is odd. The first 2 statements are general properties, which can be illustrated by taking any 2 consecutive numbers as examples.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
positive series convergence properties So I have a test next week and I found myself struggling with an "easy" question. Given a sequence $(a_n)_{n=1}^{\infty}$ such that $\forall n \in \mathbb{N},\ a_n > 0$, show that $ \sum _{n=1}^{ \infty} a_n$ converges iff $ \sum _{n=1}^{ \infty} \frac{a_n}{a_n +1}$ converges. I know that $ \forall n \in \mathbb{N},\ \frac{a_n/(a_n+1)}{a_n} = \frac{1}{a_n +1} < 1$. Assuming $ \sum _{n=1}^{ \infty} a_n$ converges, I know that $a_n \rightarrow 0$ and therefore $\frac{1}{a_n +1} \rightarrow 1$, and because we know that $ \sum _{n=1}^{ \infty} a_n$ converges, we conclude from the comparison test that $ \sum _{n=1}^{ \infty} \frac{a_n}{a_n +1}$ converges. But what about the converse? What can I conclude from the fact that $\frac{a_n}{a_n +1} \rightarrow 0$ that could help me prove that $\frac{1}{a_n +1} \rightarrow L_1 > 0$, so that $ \sum _{n=1}^{ \infty} a_n$ converges? If I have to prove the other direction in a different way, please tell me how. Thanks in advance!!
Hint: $$\frac{1}{a_n+1} = 1 - \frac{a_n}{a_n+1}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/486383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving a radical equation $\sqrt{x+1} - \sqrt{x-1} = \sqrt{4x-1}$ $$ \sqrt{x+1} - \sqrt{x-1} = \sqrt{4x-1} $$ How many solutions does it have for $x \in \mathbb{R}$? I squared it once, then rearranges terms to isolate the radical, then squared again. I got a linear equation, which yielded $x = \frac54$, but when I put that back in the equation, it did not satisfy. So I think there is no solution, but my book says there is 1. Can anyone confirm if there is a solution or not?
$$x+1+x-1-2\sqrt{x+1}\sqrt{x-1}=4x-1\implies(2x-1)^2=4(x^2-1)\implies$$ $$4x^2-4x+1=4x^2-4\implies 4x=5\implies x=\frac54$$ But, indeed $$\sqrt{\frac54+1}-\sqrt{\frac54-1}\stackrel ?=\sqrt{5-1}\iff\frac32-\frac12=2$$ which is false, thus no solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Every quotient of a number ring is finite Let $K$ be a number field, i.e. a subfield of $\mathbb{C}$ of finite degree over $\mathbb{Q}$. Let $\mathscr{O}_K$ be the ring of integers of $K$, i.e. algebraic integers which are in $K$. Let $I$ be an ideal of $\mathscr{O}_K$. I read many times that the quotient $\mathscr{O}_K/I$ is obviously/clearly a finite ring, but i've never seen a proof. Could someone suggest me how to see this?
I guess you know that, as an Abelian group, $\mathscr O_K\cong\mathbb Z^k$, where $k=[K:\mathbb Q]$. Now if $0\neq a\in I$ then $(a)\subset I$, and as $a$ divides its norm $Na\in\mathbb Z$, also $(Na)\subset(a)$. And $\mathbb Z^k/(Na)\mathbb Z^k$ is certainly finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
If $E$ is a measurable set, how to prove that there are Borel sets $A$ and $B$ such that $A\subset E$, $E\subset B$ and $m(A)=m(E)=m(B)$? If $E$ is a measurable set, then how to prove that there are Borel sets $A$ and $B$ such that $A$ is a subset of $E$, $E$ is a subset of $B$ and $m(A)=m(E)=m(B)$?
I assume by $m$ you mean Lebesgue measure on $\mathbb R^n$. Use that this measure is regular. This gives us that, if $m(E)<\infty$, then for any $n$ there are a compact set $K_n$ and and open set $U_n$ with $K_n\subset E\subset U_n$, and $m(E)-1/n<m(K_n)$ and $m(U_n)<m(E)+1/n$. This implies that $A=\bigcup_n K_n$ and $B=\bigcap U_n$ have the same measure as $E$, and they are clearly a Borel subset and a Borel superset of $E$, respectively. If $E$ has infinite measure, it is even easier: Take as $B$ the set $\mathbb R^n$. As before, regularity gives us for each $n$ a compact set $K_n$ with $K_n\subset E$ and $m(K_n)\ge n$. Then we can again take as $A$ the set $\bigcup_n K_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
using induction to prove $(n+1)^2 < 2n^2$ (Im not English and just started doing maths in English so my termiology is still way off) So the title for $n\ge 3$ * *First I use calculate both sides with $3$, which is true *I make my induction. $(k+1)^2 < 2k^2$ then I replace $N$ with $k+1$: $(k+2)^2 < 2(k+1)^2$ Now what? I cant seem to find how to use my induction in this one. I've also tried working out the brackets, but that also didn't seem to help me.
HINT: $(k+2)^2=\big((k+1)+1\big)^2=(k+1)^2+2(k+1)+1$; now apply the induction hypothesis that $(k+1)^2<2k^2$. (There will still be a bit of work to do; in particular, you’ll have to use the fact that $k\ge 1$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/486705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
How to relate the valuation of x/y (For a minimal Weierstrass equation) I'm reading an article about elliptic curves, but since I'm not very experienced on this subject, I ended up getting stuck. The problem starts as: "Let $K/\mathbb{Q}$ be a number field and $E/K$ an elliptic curve defined over $K$. Let $v\in M_K$ be a finite place of good reduction for $E$, and fix a minimal Weierstrass equation for $E$ at $v$, $$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$$ Then..." After this there are some equations concerning the local canonical height which the article wants to prove, but the problem I have is not really about this equations. Its all about the following statement made during the proof: $``$ The integrality of the Weierstrass equation implies EASILY that \begin{equation}\tag{1} v(x^{-1})<0 \iff v(x/y)<0 \quad" \end{equation} After playing with the Weierstrass equation and it's integrality ($v(a_i)\ge 0$ for all the $i$'s), indeed I was able to conclude $$v(x)<0 \iff v(y)<0 \quad\text{and in this case }\quad 2v(y)=3v(x)$$ Thus $$v(x)<0 \Rightarrow v(y/x)<0$$ Hence $$v(x/y)\le0 \Rightarrow v(x^{-1})\le0$$ But this was the closest I could get to the statement $(1)$. I have already tried (without success) to work on the Weierstrass equation in a lots of different ways to explicit the $(x/y)$ and somehow manage to relate $v(x/y)$ and $v(x^{-1})$ as expected in $(1)$. At this point, due to the "easily" on the text and since I'm not very experienced, I'm starting to think it is some kind of standard trick or I'm missing something very obvious. I would appreciate so much any kind of help. I'm sorry for my English, it is not my native language. Thanks a lot!!
If $y$ is a unit, then (1) says $v(x^{-1})<0$ iff $v(x) < 0$ which is a contradiction unless $v(x)=0$. So we can look for counterexample by finding a point where $v(y)=0$ but $v(x) \ne 0$. Consider the curve $y^2=x^3-4$ over $\mathbb{Q}$. Its discriminant is $-2^8 3^3$, hence it is a global minimal Weierstrass equation (no prime in the discriminant occurs with multiplicity $\ge 12$ - see Silverman Arithmetic of Elliptic Curves, Chapter VII, section 1). Take the point $x=5$, $y=11$, and valuation $v$ at the prime 5, which is a prime of good reduction for this curve. Then $v(y)=0$, $v(x)=1$, and we have our counterexample. In fact, $v(x^{-1}) = v(1/5) = -1$ but $v(x/y) = v(5/11) = 1$.
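The arithmetic behind this counterexample is easy to check by machine; here is a short script of my own (not part of the answer), with a hand-rolled 5-adic valuation for rationals.

```python
from fractions import Fraction

def v5(q):
    """5-adic valuation of a nonzero rational q."""
    q = Fraction(q)
    num, den, n = q.numerator, q.denominator, 0
    while num % 5 == 0:
        num //= 5; n += 1
    while den % 5 == 0:
        den //= 5; n -= 1
    return n

x, y = 5, 11
assert y**2 == x**3 - 4                          # (5, 11) lies on the curve
print(v5(Fraction(1, x)), v5(Fraction(x, y)))    # -1 and 1: v(1/x) < 0 but v(x/y) >= 0
```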
{ "language": "en", "url": "https://math.stackexchange.com/questions/486772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Problem with differentiation as a concept. I don't understand something here quite well. For example, if we want to find the derivative of a function, $\displaystyle f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} $, and if we compute it for the function $ f(x) = 12 + 7x $, we get that the derivative of $f(x)$ is equal to $$\lim_{h \to 0} \frac{7h}{h}$$ But I thought that we can't divide by zero (here we cancel 0 over 0). Am I wrong, or does $\displaystyle \frac{0}{0}$ equal 1?
The whole point of the limit operation is that it avoids any bad behaviour of a function around the given point. We don't care what the function value is, nor whether it's even defined at a given point. In your case, so long as $h \ne 0$, we can cancel to find that $\frac {7h}{h} = 7$; it doesn't matter that $\frac{7h}{h}$ isn't defined at $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
What does it mean for a number to be in a set? Frustratingly my book gives me several examples of a number in a set but offers no explanation at all. Anyways what is going on here? According to the book $2$ is not an element of these sets: $$\{\{2\},\{\{2\}\}\}$$ $$\{\{2\},\{2,\{2\}\}\}$$ $$\{\{\{2\}\}\}$$ What is going on? Clearly $2$ is in all of those sets. Or are they saying that $2$ isn't in any of these sets but a set is in all these sets and in that set is $2$? Which really seems like a logical fallacy because $2$ is in those sets contained in a set means the set has $2$ even if it is behind a layer of sets. For example you wouldn't say that there are no cars in a neighborhood if all the cars in in a garage, so why does math take this approach?
You can think of 'is an element of' as stripping off a single layer of set braces. $2$ is not an element of $\{\{2\}\}$, because removing one layer of braces, you get $\{2\}$, the set containing $2$, which is different from $2$. Also, if this wasn't the case, then there would be no way to distinguish between, for example, $$\{\{2\}, \{2,3\}\} \text{ and } \{\{2,3\}\},$$ which are clearly different sets, even though each of their elements only contain $2$s and $3$s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/486865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 11, "answer_id": 0 }
Simple approximation to a sum involving Stirling numbers? I have also posted this question at https://mathoverflow.net/questions/141552/simple-approximation-to-a-sum-involving-stirling-numbers#141552. I have an exact answer to a problem, which is the function: $f(x,y)=\frac{1}{y^x}\sum_{i=1}^{x-1} [i\binom{y}{x-i}(x-i)!S(x,x-i)]$ where $S(x,x-i)$ is Stirling number of the second kind. Equivalently, $f(x,y)=\frac{1}{y^x}\sum_{i=1}^{x-1}{\{i\binom{y}{x-i}\sum_{j=0}^{x-i} [(-1)^{x-i-j}\binom{x-i}{j}j^x]}\}$. Equivalently, $f(x,y)=\frac{y!}{y^x}\sum_{i=1}^{x-1}{\{\frac{i}{(y-(x-i))!}\sum_{j=0}^{x-i} [\frac{(-1)^{x-i-j}j^x}{j!(x-i-j)!}]}\}$. I have noticed that the percent difference between $f(x,y)$ and $g(x,y)$ goes to $0$ for larger values of $x$ and $y$, where $g(x,y)$ is the far more elegant $x-y(1-e^{-\frac{x}{y}})$. How can $f(x,y)$ be approximated by $g(x,y)$? What approximations should be used to make this connection? I have tried approximations for $S(n,m)$ listed at http://dlmf.nist.gov/26.8#vii, to no avail.
Note that $\binom{y}{x-i}(x-i)!=(y)_{x-i}$ is the Pochhammer symbol or "falling factorial" and the Stirling numbers of the second kind relate falling factorials to monomials by this formula, $$ \sum_{i=0}^x(y)_i\,\begin{Bmatrix}x\\i\end{Bmatrix}=y^x\tag{1} $$ The recurrence relation for Stirling numbers of the second kind is $$ \begin{Bmatrix}n+1\\k\end{Bmatrix}=k\,\begin{Bmatrix}n\\k\end{Bmatrix}+\begin{Bmatrix}n\\k-1\end{Bmatrix}\tag{2} $$ Therefore, $$ \begin{align} \sum_{i=0}^xi(y)_i\,\begin{Bmatrix}x\\i\end{Bmatrix} &=\sum_{i=0}^{x+1}(y)_i\left(\begin{Bmatrix}x+1\\i\end{Bmatrix}-\begin{Bmatrix}x\\i-1\end{Bmatrix}\right)\\ &=\sum_{i=0}^{x+1}(y)_i\begin{Bmatrix}x+1\\i\end{Bmatrix} -\sum_{i=0}^{x+1}y(y-1)_{i-1}\begin{Bmatrix}x\\i-1\end{Bmatrix}\\[8pt] &=y^{x+1}-y(y-1)^x\\[8pt] \sum_{i=0}^xi(y)_{x-i}\,\begin{Bmatrix}x\\x-i\end{Bmatrix} &=\sum_{i=0}^x(x-i)(y)_i\,\begin{Bmatrix}x\\i\end{Bmatrix}\\[8pt] &=(x-y)y^x+y(y-1)^x\tag{3} \end{align} $$ Thus, since the $i=0$ and $i=x$ terms are $0$, that is $\begin{Bmatrix}x\\0\end{Bmatrix}=0$, $$\begin{align} f(x,y) &=\frac1{y^x}\left((x-y)y^x+y(y-1)^x\right)\\ &=x-y+y\left(1-\frac1y\right)^x\\ &\sim x-y\left(1-e^{-x/y}\right)\tag{4} \end{align} $$
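For anyone who wants to see the agreement numerically, here is a small sketch of my own (not part of the answer; it assumes sympy for the Stirling numbers, and f_sum is just my name for the original sum). It compares the sum, the closed form $x-y+y(1-1/y)^x$, and the approximation $x-y(1-e^{-x/y})$.

```python
from math import comb, factorial, exp
from sympy.functions.combinatorial.numbers import stirling

def f_sum(x, y):
    s = sum(i * comb(y, x - i) * factorial(x - i) * stirling(x, x - i, kind=2)
            for i in range(1, x))
    return s / y**x

for x, y in [(5, 7), (20, 30), (60, 80)]:
    closed = x - y + y * (1 - 1 / y) ** x
    approx = x - y * (1 - exp(-x / y))
    print((x, y), float(f_sum(x, y)), closed, approx)
```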
{ "language": "en", "url": "https://math.stackexchange.com/questions/486917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is the set of powers of rationals dense in $\mathbb R$? Consider $\mathbb {\tilde {Q}} = \{ x^n : x \in \mathbb Q \}$, where $n$ is a fixed odd integer. I have two questions here: (1) Is this set dense in $\mathbb R$? (2) Does there exist a bijection between $\mathbb Q$ and $\mathbb {\tilde Q}$? For the first question, I think the set is dense. Consider $a,b \in \mathbb R$; WLOG assume $a,b \ge 0$ and $b \ge a$. We can find $c \in \mathbb Q$ such that $a^{1/n} \le c \le b^{1/n}$. And now $c^n \in \mathbb {\tilde Q}$ and $a \le c^n \le b$. So the set is dense. Is there a way to define a bijection between $\mathbb Q$ and $\mathbb {\tilde Q}$?
Here is a simpler proof of density. Consider the map $f(x)=x^n$, $n>0$ is odd, $f: {\mathbb R}\to {\mathbb R}$. This map is clearly continuous. The intermediate value theorem implies that this map is surjective. The set of rational numbers is dense in ${\mathbb R}$. Therefore, its image under the continuous map $f$ is also dense in $f({\mathbb R})={\mathbb R}$.
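To make the argument concrete, here is a small numerical illustration of the same idea for $n = 3$ (the choice of target, the precision, and the `Fraction` construction are just for the demo):

```python
from fractions import Fraction
import math

target = math.pi                                            # any real number we want to approach
c = Fraction(round(target ** (1 / 3) * 10 ** 8), 10 ** 8)   # a rational close to target^(1/3)

print(float(c ** 3))   # c^3 lies in the set of rational cubes and is very close to the target
print(target)
```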
{ "language": "en", "url": "https://math.stackexchange.com/questions/486978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $\lim_{n\to\infty}\int_0^1 f_n(x)dx=0$, are there points $x_0\in[0,1]$ such that $\lim_{n\to\infty}f_n(x_0)=0$? This is part of an old qual problem at my school. Assume $\{f_n\}$ is a sequence of nonnegative continuous functions on $[0,1]$ such that $\lim_{n\to\infty}\int_0^1 f_n(x)dx=0$. Is it necessarily true that there are points $x_0\in[0,1]$ such that $\lim_{n\to\infty}f_n(x_0)=0$? I think that there should be some $x_0$. My intuition is that if the integrals converge to $0$, then the $f_n$ should start to be close to zero in most places in $[0,1]$. If $\lim_{n\to\infty}f_n(x_0)\neq 0$ for any $x_0$, then the sequences $\{f_n(x_0)\}$ for each fixed $x_0$ have to have positive terms of arbitrarily large index. Since there are only countably many functions, I don't think it's possible to do this without making $\lim_{n\to\infty}\int_0^1 f_n(x)dx=0$. Is there a proof or counterexample to the question?
No. The standard counterexample would be indicator functions of $[0, 1]$, $[0, 1/2]$, $[1/2, 1]$, $[0, 1/3]$, $[1/3, 2/3]$, $[2/3, 1]$, and so on. The interval lengths shrink, so the integrals tend to $0$; but every point of $[0,1]$ lies in at least one interval of every block, so $f_n(x_0) = 1$ for arbitrarily large $n$ and the sequence does not converge to $0$ at any point. In order to make these functions continuous, add in line segments on either end with very large slope.
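Here is a rough sketch of the interval scheme behind this counterexample (ignoring the short steep ramps needed for continuity; the block cut-off of 50 is arbitrary):

```python
# the "typewriter" intervals: for each k, the k intervals [j/k, (j+1)/k]
intervals = [(j / k, (j + 1) / k) for k in range(1, 50) for j in range(k)]

print(intervals[-1][1] - intervals[-1][0])   # lengths -> 0, so the integrals -> 0

x0 = 0.3
hits = sum(1 for a, b in intervals if a <= x0 <= b)
print(hits)   # x0 lands in at least one interval of every block, so f_n(x0) = 1 infinitely often
```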
{ "language": "en", "url": "https://math.stackexchange.com/questions/487049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proving a point inside a triangle is no further away than the longest side divided by $\sqrt{3}$ Problem: In a triangle $T$ , all the angles are less than 90 degrees, and the longest side has length $s$. Show that for every point $p$ in $T$ we can pick a corner $h$ in $T$ such that the distance from $p$ to $h$ is less than or equal to $s/\sqrt{3}$. Source: Problem 2 in http://abelkonkurransen.no/problems/abel_1213_f_prob_en.pdf Here's my try: If $p$ is on any of the edges of $T$, it can't be further away than $s/2$. Then I thought that the point furthest away from any $h$ would be a point equidistant to all the vertices. I also know that the equidistant point has to be inside the triangle because no angle is obtuse. I am assuming an equilateral triangle (I'm not sure if I can). It has no longest side, therefore all must be of length $s$. Let $h_1,h_2,h_3$ be the vertices, and $z=h_1e=h_2e=h_3e$. To find the equidistant point $e$, I could half all the angles. By trigonometry I will get that, $\cos 30^\circ=\frac{\sqrt{3}}{2}=\frac{s/2}{z} \implies z=\frac{s}{\sqrt{3}}$. Since this is the length from the closest corner to the equidistant point it cannot be further away. I am not sure if I can rightly assume the triangle is equilateral without loss of generality, probably not. However this is the closest I've gotten to proving this. Could you explain a better approach? PS. Calculators would not be allowed.
Rephrasing Christian Blatter: for any point interior to an acute triangle, measure the distance to each of the three vertices, and take the smallest value. Now find the point where this minimum value is largest. Almost obviously, this is the circumcenter. Connect the circumcenter to each of the vertices and drop perpendiculars from the circumcenter to each of the three sides. This divides the original triangle into six right triangles. Any point lies in one of these, and within it the point is at least as close to the corresponding vertex as the circumcenter is, i.e. at distance at most the circumradius $R$. Finally, $R \le s/\sqrt{3}$: the longest side $s$ subtends the largest angle $A \ge 60^\circ$, so $s = 2R\sin A \ge 2R\sin 60^\circ = R\sqrt{3}$.
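For what it's worth, here is a quick Monte Carlo sanity check of the bound (random triangles filtered for acuteness, random interior points via barycentric coordinates; the sample sizes and seed are arbitrary):

```python
import math
import random

def is_acute(A, B, C):
    # a triangle is acute iff each squared side is less than the sum of the other two
    def d2(P, Q): return (P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2
    a2, b2, c2 = d2(B, C), d2(A, C), d2(A, B)
    return a2 < b2 + c2 and b2 < a2 + c2 and c2 < a2 + b2

random.seed(0)
worst = 0.0
for _ in range(5000):
    A, B, C = [(random.random(), random.random()) for _ in range(3)]
    if not is_acute(A, B, C):
        continue
    s = max(math.dist(A, B), math.dist(B, C), math.dist(A, C))
    u, v = sorted((random.random(), random.random()))
    w = (u, v - u, 1 - v)                      # uniform barycentric weights
    P = tuple(w[0] * A[i] + w[1] * B[i] + w[2] * C[i] for i in range(2))
    m = min(math.dist(P, A), math.dist(P, B), math.dist(P, C))
    worst = max(worst, m / s)

print(worst, 1 / math.sqrt(3))   # the worst observed ratio stays below 1/sqrt(3) ~ 0.577
```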
{ "language": "en", "url": "https://math.stackexchange.com/questions/487121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How many ways to choose $k$ out of $n$ numbers with exactly/at least $m$ consecutive numbers? Counting the number of ways to choose $k$ out of $n$ numbers is a standard problem in undergraduate probability theory that has the binomial coefficient as its solution. An example would be lottery games, where you have $13983816$ ways to choose $6$ numbers out of $49$. My question is: How many ways are there to choose $k$ out of $n$ numbers with exactly/at least $m$ consecutive numbers? An example: how many ways are there to choose $6$ out of $49$ numbers with exactly/at least $5$ consecutive numbers, e.g. $\{2,3,4,5,6,26\}$? I read that the answer here is $1936$ ways for the "at least"-case. I would like to have a general formula and if possible a derivation of it. Good references are also welcome. Thank you.
After some further googling I found the following reference which gives a general formula and a derivation: Lottery combinatorics by McPherson & Hodson
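For the specific $6$-of-$49$, at-least-$5$-consecutive case mentioned in the question, the count $1936$ is easy to confirm by direct enumeration (this check is my addition, not taken from the reference):

```python
# enumerate every 5-term run of consecutive numbers plus a sixth number,
# then deduplicate (a full 6-term run gets generated twice)
tickets = set()
for start in range(1, 46):                    # runs start at 1, ..., 45
    run = frozenset(range(start, start + 5))
    for extra in range(1, 50):
        if extra not in run:
            tickets.add(run | {extra})

print(len(tickets))   # 1936
```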
{ "language": "en", "url": "https://math.stackexchange.com/questions/487207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Neighborhood of the Origin in the Weak Topology I am having difficulty with the following question. Let $H$ be a real infinite-dimensional Hilbert space and $u\in H\setminus \{0\}$. Let $V$ be any neighborhood of $0$ in the weak topology on $H$. Is there a vector $v\ne 0$ such that $tv\in V$ for all $t\in \mathbb{R}$ and $\langle u, v\rangle\ne 0$? I would be grateful for any help and comments.
Based on the solution of Daniel Fischer: choose $V=\{x\in H: |\langle u, x\rangle|<1\}$. Then $V$ is a neighborhood of $0$ in the weak topology. Let $v\ne 0$ be such that $tv\in V$ for all $t\in \mathbb{R}$. Then $|\langle u, tv\rangle|=|t|\,|\langle u, v\rangle|<1$ for all $t\in \mathbb{R}$, which forces $\langle u, v\rangle=0$. So for this particular $V$ the answer is no: there is no $v\ne 0$ with $tv\in V$ for all $t$ and $\langle u, v\rangle\ne 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/487311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the kernel of an action on conjugate subgroups I'm trying to solve the following problem: Let $G$ be a group of order 12. Assume the 3-Sylow subgroups of $G$ are not normal. Prove that $G\cong A_4$. Here's my attempt: let $\mathscr S$ be the set of 3-Sylow subgroups of $G$. Since the elements of $\mathscr S$ are not normal, by Sylow's theorem, $\# \mathscr S > 1$. Again by Sylow's theorem, $\#\mathscr S = 4$ and the elements of $\mathscr S$ are conjugate to each other. Hence, one can define a group action of $G$ on $\mathscr S$ by conjugation, and this defines a homomorphism $\phi : G\rightarrow \mathrm{Sym}(\mathscr S)\cong S_4$. Thus, it suffices to show that $\phi$ is injective and its image is $A_4$. But I'm stuck at this last step. I tried to find the kernel of $\phi$ and found that $a\in\mathrm{Ker}\phi\Leftrightarrow \forall H\in\mathscr S\ aHa^{-1} = H$, but I do not understand what this leads to. I would be most grateful if you could provide a clue (not necessarily a complete solution).
Let $S_1$ and $S_2$ be two of the Sylow $3$-subgroups. If $a \in \ker \phi$, then in particular $$aS_1a^{-1} = S_1 \Rightarrow a \in N_G(S_1).$$ The same holds for $S_2$, so $$\ker \phi \subset N_G(S_1) \cap N_G(S_2).$$ Now, since the Sylow $3$-subgroups aren't normal, what is $N_G(S_i)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/487380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Exponential algebra problem We need to solve for x: $$54\cdot 2^{2x}=72^x\cdot\sqrt{0.5}$$ My proposed solution is below.
I think this is a more systematic way: $$54\cdot 2^{2x}=72^x\cdot\sqrt{0.5}$$ Apply the logarithm to both sides of the equation. For now the base of the logarithm does not really matter: $$\log{(54\cdot 2^{2x})}=\log{(72^x\cdot\sqrt{0.5})}$$ and simplify by applying the laws of logarithms for products and powers $$\log{(54)} + 2x \log{(2)}=x\log{(72)} + \log{(\sqrt{0.5})}$$ to get a linear equation in $x$. Now solve this equation: $$x=\frac{\log{(\sqrt{0.5})}-\log{(54)} }{2\log{(2)} - \log{(72)}}$$ Now you can try to simplify this expression for $x$.
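As a quick check (my addition: the expression in fact simplifies all the way to $x = 3/2$, since $54\cdot 2^{3} = 432 = 72^{3/2}\cdot\sqrt{0.5}$):

```python
import math

x = (math.log(math.sqrt(0.5)) - math.log(54)) / (2 * math.log(2) - math.log(72))
print(x)                                            # 1.5
print(54 * 2 ** (2 * x), 72 ** x * math.sqrt(0.5))  # both sides equal 432.0
```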
{ "language": "en", "url": "https://math.stackexchange.com/questions/487458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
General misconception about $\sqrt x$ I noticed that a large portion of the general public (those who know what a square root is) has a different concept regarding the surd of a positive number, $\sqrt\cdot$, or the principal square root function. It seems to me a lot of people would say, for example, $\sqrt 4 = \pm 2$, instead of $\sqrt 4 = 2$. People would even correct a statement of the latter form to one with a $\pm$ sign. Some also claim that, since $2^2 = 4$ and $(-2)^2 = 4$, $\sqrt 4 = \pm 2$. Some people continue to quote other "evidence" like the $y=x^2$ graph. While most people understand there are two square roots of a positive number, some seem to have confused this with the surd notation. From an educational viewpoint, what might be lacking when teaching students about surd forms? Is a lack of understanding of functions a reason for this misconception? Now I have noticed another recent question that hinted that its poster was confused. Following @AndréNicolas's comment below, might this confusion really come from two different communities using the same symbol?
The square root of $x$ is a number which when squared gives $x$. For $16$ there are two such numbers, so there are two square roots of $16$. For $0$, there is one and for any negative number there is none. Now, simply call the non-negative square root of a number, the principal square root. There is only one such number for all non-negative numbers and thus, the principal square root of a number is unambiguous (unless the number is negative, in which case it is undefined). We denote the principal square root of $x\geq0$ as $\sqrt{x}$. The other square root is then $-\sqrt x$. So, we can say that the square roots of $2$ are $\sqrt{2}$ and $-\sqrt{2}$ and of $16$ are $\sqrt{16}(=4)$ and $-\sqrt{16}=(-4)$. I think the reason some people may have confusion with this is that they don't understand/know that $\sqrt{x}$ is used to denote the non-negative square root and nothing else.
{ "language": "en", "url": "https://math.stackexchange.com/questions/487509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Seeking a proof of the relation between the Euler totient and the Möbius function Can someone help me prove the relation $\varphi\left(n\right)={\displaystyle \sum_{d|n}}d\mu\left(n/d\right)$, where $\mu$ is the Möbius function defined by $$ \mu\left(n\right)=\begin{cases} 1 & \mbox{if }n=1\\ \left(-1\right)^{t} & \mbox{if }n\mbox{ is a product of }t\mbox{ distinct primes}\\ 0 & \mbox{if }p^{2}\mbox{ divides }n\mbox{ for some prime }p. \end{cases} $$
Let $n = p_1^{e_1}p_2^{e_2}\cdots p_m^{e_m}$ for distinct primes $p_1, \ldots, p_m$. By definition $\phi(n)$ equals the number of elements of the set $\{0,1,\ldots,n-1\}$ that have no common divisor with $n$. We will count the excluded elements instead, i.e. the size of the set $S$ of elements of $\{0,1,\ldots,n-1\}$ that do have a common divisor with $n$. For this to happen, such a number $a$ has to be a multiple of one of the primes $p_1,\ldots,p_m$. If we denote by $S_d$ the set of multiples of $d$ in $\{0,1,\ldots,n-1\}$, then we can write $S = \bigcup_{p \in \{p_1, \ldots, p_m\}}S_p$. We will use the inclusion-exclusion principle to calculate the number of elements of $S$. Note that $\#S_d = \frac{n}{d}$ (which we will denote by $d'$) for any divisor $d$ of $n$, and that for $p_i \neq p_j$ we have $S_{p_ip_j} = S_{p_i} \cap S_{p_j}$. The inclusion-exclusion principle gives $\#S = \#S_{p_1} + \#S_{p_2} + \ldots + \#S_{p_m} - \#S_{p_1p_2} - \ldots - \#S_{p_ip_j} - \ldots + \#S_{p_ip_jp_k} + \ldots$, where the summation runs over the divisors of $n$ that are square-free and $>1$. We can also write this as $\#S = -\sum_{d \mid n,\, d >1} \mu(d)\,d'$, since $\mu$ kills the non-square-free divisors. Finally: $$\phi(n) = n - \#S = n + \sum_{d \mid n,\, d >1} \mu(d)\,d' = \sum_{d \mid n} \mu(d)\,d' = \sum_{d \mid n} \mu(d')\,d,$$ where the last two sums agree because $d$ and $d'$ are interchangeable as $d$ runs over the divisors of $n$, and the $d = 1$ term equals $\mu(1)\cdot 1' = n$. Example: $n = 225 = 3^2 5^2$. Then $S_3 = \{0, 3, 6, \ldots, 222\}$, $S_5 = \{0, 5, 10, \ldots, 220\}$ and $S_{15} = S_3 \cap S_5 = \{0, 15, 30, \ldots, 210\}$. Now $\#S_3 = 3' = 3\cdot 5^2$, $\#S_5 = 5' = 3^2\cdot 5$ and $\#S_{15} = 15' = 3\cdot 5$. The inclusion-exclusion principle gives: $$\#S = \#S_3 + \#S_5 - \#S_{15} = -(\mu(3)3'+\mu(5)5'+\mu(15)15') = -(\mu(3)3'+\mu(5)5'+ \mu(9)9' + \mu(15)15' + \mu(25)25' + \mu(45)45' + \mu(75)75' + \mu(225)225') = -\sum_{d \in \{3, 5, 9, 15, 25, 45, 75, 225\}}\mu(d)d'$$ Now $$\phi(n) = n - \#S = n + \sum_{d \in \{3, 5, 9, 15, 25, 45, 75, 225\}}\mu(d)d' = \sum_{d \in \{1, 3, 5, 9, 15, 25, 45, 75, 225\}}\mu(d)d'$$
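If a numerical sanity check helps, the identity can be verified directly for small $n$ with a few lines of Python (the brute-force `phi`, `mu`, and `divisors` helpers below are my own and are written for clarity, not speed):

```python
from math import gcd

def phi(n):
    # Euler totient by direct counting
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mu(n):
    # Moebius function by trial factorization
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # p^2 divides the original n
                return 0
            result = -result
        p += 1
    if n > 1:                   # one leftover prime factor
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for n in range(1, 101):
    assert phi(n) == sum(d * mu(n // d) for d in divisors(n))
print("phi(n) == sum_{d|n} d*mu(n/d) verified for n = 1..100")
```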
{ "language": "en", "url": "https://math.stackexchange.com/questions/487599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Let $M_1,M_2,M_3,M_4$ be the suprema of $|f|$ on the edges of a square. Show that $|f(0)|\le \sqrt[4]{M_1M_2M_3M_4}$ Let $G$ denote the interior of the square with vertices $1,i,-1,-i$. Suppose $f$ is holomorphic on $G$ and extends continuously to $\overline{G}$, and $M_1,M_2,M_3,M_4$ are the suprema of $|f|$ on the edges of $G$. Show that $$|f(0)|\le \sqrt[4]{M_1M_2M_3M_4}$$ I have an idea if we assume $f$ is never zero. By the Schwarz-Christoffel formula, we can map the unit disc onto the square, fixing the origin and the vertices. Then define $h=f(g)$ and $\log |h|$; we have $\log|h(0)|={1\over{2\pi}}\int_0^{2\pi}\log|h(e^{i\theta})|\,d\theta$, and then we can get the inequality. However, if $f$ is zero somewhere, we can't define $\log h$, and then I'm stuck. Any help would be appreciated!
Consider the function $$g(z) = f(z)\cdot f(iz)\cdot f(-z) \cdot f(-iz).$$ $g$ is holomorphic on $G$ and extends continuously to $\overline{G}$, and the maximum of $g$ on each of the edges is at most $M_1\cdot M_2\cdot M_3\cdot M_4$. $g(0) = f(0)^4$. The maximum modulus principle does the rest.
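A quick numerical spot check of the inequality, with an arbitrary test function in place of $f$ (the choice $z^2+z+1$ and the sampling resolution are mine):

```python
import numpy as np

f = lambda z: z ** 2 + z + 1                  # any function holomorphic on the closed square
V = [1, 1j, -1, -1j]                          # vertices; edge k runs from V[k] to V[k+1]
t = np.linspace(0, 1, 2001)
M = [np.abs(f((1 - t) * V[k] + t * V[(k + 1) % 4])).max() for k in range(4)]

print(abs(f(0)) ** 4)   # |f(0)|^4
print(np.prod(M))       # M1*M2*M3*M4 -- the first number never exceeds the second
```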
{ "language": "en", "url": "https://math.stackexchange.com/questions/487675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Calculus: The tangent line intersects a curve at two points. Find the other point. The line tangent to $y = -x^3 + 2x + 1$ when $x = 1$ intersects the curve in another point. Find the coordinates of the other point. This was never taught in class, and I have a test on this tomorrow. This question came off of my test review worksheet, and I don't understand how to solve it. The answers are on the back, and for this one it says the answer is (-2,5), but I don't understand how to get that. I did the derivative and substituted 1 for x to get the slope of the line: $y = -x^3 + 2x + 1$ $y' = -3x^2 + 2$ $y' = -3 + 2$ $y' = -1$ I don't know where to go from here.
Your statement that "This was never taught in class" might astonish your instructor. But even if not, it is very unreasonable to expect to be required to do only that which someone has shown you how to do. And this is so close to the beaten path that it's not a good example of something you might not have been shown how to do. When $x=1$, then $y=2$ so you've got a line passing through $(1,2)$ with slope $-1$. In earlier courses you learned how to write an equation for that line. Now you need the point of intersection of the graphs of two equations, $y=-x^3+2x+1$ and one other. You've probably seen problems like that before, and if not, apply some common sense, and if that doesn't work, then tell us with specificity at what point you ran into difficulty.
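For completeness, here is how those remaining steps play out symbolically (a sketch using sympy; the method is exactly what the answer describes: write the tangent line, then intersect it with the curve):

```python
import sympy as sp

x = sp.symbols('x')
curve = -x ** 3 + 2 * x + 1
tangent = curve.subs(x, 1) + sp.diff(curve, x).subs(x, 1) * (x - 1)   # line through (1, 2) with slope -1

print(sp.expand(tangent))                  # the tangent line: y = 3 - x
print(sp.solve(sp.Eq(curve, tangent), x))  # solutions x = -2 and x = 1 (the tangency point)
print(tangent.subs(x, -2))                 # 5, so the other intersection is (-2, 5)
```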
{ "language": "en", "url": "https://math.stackexchange.com/questions/487740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 2 }
Additive group of rationals has no minimal generating set In a comment to Arturo Magidin's answer to this question, Jack Schmidt says that the additive group of the rationals has no minimal generating set. Why does $(\mathbb{Q},+)$ have no minimal generating set?
Let $S\subseteq\mathbb Q$ be such that $\langle S\rangle=\mathbb Q$. Fix $a\in S$, and put $T=S\setminus\{a\}$; let us see that also $\langle T\rangle=\mathbb Q$. We have $$\frac{a}{2}=a\cdot k_0+\sum_{i=1}^na_i\cdot k_i,$$ for some $k_i\in\mathbb Z$ and $a_i\in T$. Then $$a=a\cdot (2k_0)+\sum_{i=1}^na_i\cdot (2k_i),$$ that is, $$a\cdot m=\sum_{i=1}^na_i\cdot (2k_i),$$ where $m=1-2k_0$ is nonzero, since $k_0$ is an integer. Now $\frac{a}{m}$ can be expressed as a combination of elements of $S$, say $\frac{a}{m}=a\cdot r_0+\sum_{i=1}^lb_i\cdot r_i,$ with $b_i\in T$, $r_i\in\mathbb Z$; thus $$a=a\cdot mr_0+\sum_{i=1}^lb_i\cdot mr_i=\sum_{i=1}^na_i\cdot r_0(2k_i) +\sum_{i=1}^lb_i\cdot mr_i.$$ Hence $a\in\langle T\rangle$, so $\langle T\rangle\supseteq\langle S\rangle=\mathbb Q$: no element of a generating set is ever needed, which is exactly the failure of minimality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/487820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 0 }
Closed form solution to a simple recurrence I have this recurrence: $$f(i) = \begin{cases} 0 &i=0\\ 1 &i=M\\ \frac{f(i-1) + f(i+1)} 2& 0 < i < M \end{cases}$$ I have guessed that $$f(i) = \frac i M$$ and proved it via induction. What is the right way of solving it without guessing? Later Edit: Thank you very much for your answers. I found them all very helpful. Thank you very much for your time!
If you look at the third case, you should recognize it as an arithmetic mean: $f(i)$ is the mean of $f(i-1)$ and $f(i+1)$, which tells you immediately that $f$ must be a linear function of $i$ (the second difference vanishes). The endpoint conditions then give you the slope and intercept.
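If you prefer to see it without the linearity observation, the boundary-value recurrence is just a tridiagonal linear system that can be solved directly; here is a small check for one value of $M$ (chosen arbitrarily):

```python
import numpy as np

M = 10
A = np.zeros((M + 1, M + 1))
b = np.zeros(M + 1)
A[0, 0], A[M, M], b[M] = 1.0, 1.0, 1.0                    # boundary conditions f(0) = 0, f(M) = 1
for i in range(1, M):
    A[i, i - 1], A[i, i], A[i, i + 1] = -0.5, 1.0, -0.5   # f(i) - (f(i-1) + f(i+1)) / 2 = 0

f = np.linalg.solve(A, b)
print(np.allclose(f, np.arange(M + 1) / M))               # True: the solution is f(i) = i / M
```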
{ "language": "en", "url": "https://math.stackexchange.com/questions/487884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Unexpected Practical Applications of Calculus Calculus shows up in a lot of places in the world. Specifically, here are three areas where I see it used the most: * *Optimization problems. *Anything involving rates of change (e.g. velocity $\rightarrow$ acceleration). *Anything involving "averages" (e.g. surface area). I am more interested in the non-intuitive and unexpected applications of Calculus, however. For instance, the Fourier Transform is an alright example. But in some ways I still feel like Calculus isn't totally unexpected here, as it becomes really intuitive once you understand that the integral is just computing the average power at each signal frequency. So, in what fields/areas of science does Calculus pop up unexpectedly? Preferably those applications which are practical in the real world. (i.e. not number theory)
Ryan, a perhaps unexpected application of calculus is in the human heart: more precisely, cardiac output. Cardiac output is defined as the volume of blood pumped by the heart per unit time, and the formula for it turns out to be a Riemann sum, which in turn becomes an integral. I find that unexpected in the sense that most people will look for calculus applications in physics, engineering, or perhaps economics. But who generally thinks about calculus at work in our own hearts?
{ "language": "en", "url": "https://math.stackexchange.com/questions/487985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Converting between $T_1$ and $T_2$ Given that every $T_2$ space is a $T_1$ space, is it possible to start with a $T_1$ space and to specify in terms of its sets a family of additional sets sufficient to make that $T_1$ space into a $T_2$ space? If so, can this be done for any $T_1$ space or only particular ones?
I suspect that every way of doing this is somehow either arbitrary or trivial. Suppose we have an operation $\mathcal{H}$, such that if $(X, T)$ is a T1 space then $T \subset \mathcal{H}(T)$ and $(X, \mathcal{H}(T))$ is a T2 space. Suppose furthermore that this operation preserves symmetry, in the sense that if $f: X \to X$ is a homeomorphism of $(X, T)$ to itself, then it is also a homeomorphism of $(X, \mathcal{H}(T))$ to itself. Finally, suppose that the operation is monotone, in the sense that if $T \subset T'$, then $\mathcal{H}(T) \subset \mathcal{H}(T')$. With these, apparently reasonable, assumptions it is already unavoidable that $(X, \mathcal{H}(T))$ is discrete, whatever $T$ is. To see this, consider the cofinite topology. It is T1 and every bijection is a homeomorphism. The only T2 topology with the property that every bijection is a homeomorphism is the discrete one. Since the cofinite topology is the coarsest T1 topology and the discrete topology is the finest of all, monotonicity then demands that $\mathcal{H}$ makes every T1 topology discrete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/488075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Meaning of signed volume I want to understand the definition of the determinant of an $n\times n$ real matrix $A$ as the signed volume of the image of the unit cube under the linear transformation given by $A$, i.e. $x\to Ax$; call this image $C'$. However, I am failing to make sense of the words "signed volume". What would be the precise definition of this? Would it be $\int\int\cdots\int dx_1dx_2\cdots dx_n$ over the image $C'$? I think this is an overcomplicated way to define the signed volume. Can someone suggest another way? Thanks
In this context, signed volume is simply a term that carries slightly more information than volume alone; it's analogous to the distinction between speed and velocity. The magnitude of the determinant of a linear transformation is the factor by which it scales volumes in the space. We only need to consider the unit ball, because by linearity, if the transformation scales that volume by a given factor, it scales every volume by the same factor. However, we don't need to take the magnitude of the determinant; we can work with the determinant itself. In fact, taking the magnitude loses some information, namely the sign. The sign tells us something interesting: whether the linear transformation inverts the space. A linear operator has inverted the space if we can't rotate and scale our way back to the original configuration. Think about the plane and the two operators $$\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \qquad \begin{bmatrix} -2 & 0 \\ 0 & 2 \end{bmatrix}.$$ The first one scales any area by $4$. The second one has determinant $-4$, but if we take magnitudes we see that it also scales areas by $4$. The difference is that the second operator has inverted the plane: even if we rescale by $1/2$, we cannot rotate our way back to the original coordinate system (we cannot rotate the frame $(-\hat{x},\hat{y})$ onto $(\hat{x},\hat{y})$). To put it simply, we have the following complete statement: for a linear operator $T : V \to V$ and $A \subset V$ with $B=T(A)$, $$\operatorname{Vol}(B) = \operatorname{Vol}(A)\cdot|\det(T)|,$$ and $T$ inverts the space if and only if $\det(T) < 0$. The fact that the determinant carries this extra data of invertedness is the reason why calling it just volume would be a little unfair, so we call it signed volume.
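A tiny numerical illustration of the two matrices from the example (using numpy only for its determinant routine):

```python
import numpy as np

A = np.array([[2.0, 0.0], [0.0, 2.0]])
B = np.array([[-2.0, 0.0], [0.0, 2.0]])

print(np.linalg.det(A), np.linalg.det(B))             # 4.0 and -4.0
print(abs(np.linalg.det(A)), abs(np.linalg.det(B)))   # both scale areas by 4; only B flips orientation
```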
{ "language": "en", "url": "https://math.stackexchange.com/questions/488163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 3 }