Factorial number of digits Is there any neat way to work out how many digits the number $20!$ has? I'm looking for a solution which does not use computers, calculators or log tables, just pen and paper.
I will write $\log$ for the base-$b$ logarithm and $\ln$ for the natural log. The number of digits of $x$ in base $b$ is one more than the floor of $\log x$. Now $$\log(n!)=\sum_{k=1}^{n}\log k,$$ which we can interpret as a Riemann sum for $\int \log x\,dx$. In fact $\int_1^n\log x\,dx$ is a lower bound; the upper bound is the same integral taken from $2$ to $n+1$ rather than from $1$ to $n$. The lower bound evaluates to $$n\log n-\frac{n-1}{\ln b},$$ and the upper bound to $$(n+1)\log(n+1)-\frac{n}{\ln b}-2\log 2+\frac{1}{\ln b}.$$ For $n=20$, working in base $10$, we get about $17.8$ as the lower bound and $18.9$ as the upper bound. One more than the floor gives $18$ or $19$ digits. Not surprisingly, the answer is $19$, as the lower bound is nearly $19$ digits and the upper nearly $20$. The Riemann-sum approximations get better as $n$ increases, but the answer is already quite good by $n=20$.
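A quick sanity check of these bounds (a Python sketch, obviously not the pen-and-paper route the question asks for):

```python
import math

n, b = 20, 10
ln_b = math.log(b)

# lower bound: integral of log_b(x) on [1, n]  ->  n*log_b(n) - (n-1)/ln(b)
lower = n * math.log(n, b) - (n - 1) / ln_b

# upper bound: the same integral taken on [2, n+1]
upper = (n + 1) * math.log(n + 1, b) - n / ln_b - 2 * math.log(2, b) + 1 / ln_b

digits = len(str(math.factorial(n)))  # exact digit count of 20!
```

The bounds come out as about 17.77 and 18.91, so "one more than the floor" brackets the true count of 19 digits exactly as described.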
{ "language": "en", "url": "https://math.stackexchange.com/questions/136831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
What is $\limsup\limits_{n\to\infty} \cos (n)$, when $n$ is a natural number? I think the answer should be $1$, but am having some difficulties proving it. I can't seem to show that, for any $\delta$ and $n > m$, $|n - k(2\pi)| < \delta$. Is there another approach to this or is there something I'm missing?
You are on the right track. If $|n-2\pi k|<\delta$ then $|\frac{n}{k}-2\pi|<\frac \delta k$. So $\frac{n}{k}$ must be a "good" approximation for $2\pi$ to even have a chance. Then it depends on what you know about rational approximations of irrational numbers. Do you know about continued fractions?
{ "language": "en", "url": "https://math.stackexchange.com/questions/136897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to find $\int{\frac{x}{\sqrt{x^2+1}}dx}$? I started simplifying $$\int{\dfrac{x}{\sqrt{x^2+1}}dx}$$ but I always get this: $$\int{x(x^2+1)^{-1/2}dx}.$$ But I don't know how to proceed from there.
Personally, I dislike the use of variable substitution, which is sort of mechanical, for problems that can be solved by applying a concept. Not to mention that changing variables is always taken extremely lightly, as if we can just plug in any expression for $x$ and presto! new variable! For example, $u = x^2+1$ is clearly not valid in all of $\mathbb R$ as it isn't injective, but it is in $\mathbb R^+$, which works out because that's all we need in this case, but no one seemed to care to use this justification! Also, for beginner calculus students, "$\mathrm dx$" means god knows what; all I need to know is that $\mathrm d(f(x))=f'(x)\mathrm dx$, and hence we encourage liberal manipulation of meaningless symbols. For these kinds of integrals just use the chain rule: $$ (f(u(x)))' = f'(u(x))u'(x)\Rightarrow f(u(x))=\int f'(u(x))u'(x) \mathrm dx$$ So here just identify $u(x)=x^2+1$ and $u'(x)=2x$, so all we need is a factor of $2$ inside the integral, which we can obtain if we also divide by $2$, which we can then factor out. I think this is a very good method in general, that is, the method of looking for when a function and its own derivative are present in an integral.
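Numerically, the chain-rule identification can be sanity-checked by differentiating the resulting antiderivative $F(x)=\sqrt{x^2+1}$ (a quick sketch, not part of the argument):

```python
import math

def F(x):
    """Candidate antiderivative: f(u(x)) with f = sqrt and u(x) = x^2 + 1."""
    return math.sqrt(x * x + 1)

def integrand(x):
    return x / math.sqrt(x * x + 1)

def deriv(g, x, h=1e-6):
    """Central-difference numerical derivative of g at x."""
    return (g(x + h) - g(x - h)) / (2 * h)
```

At every test point, the numerical derivative of $F$ matches the integrand, confirming $\int x(x^2+1)^{-1/2}\,dx = \sqrt{x^2+1}+C$.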
{ "language": "en", "url": "https://math.stackexchange.com/questions/136960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Why is the Connect Four gaming board 7x6? (or: algorithm for creating Connect $N$ board) The Connect Four board is 7x6, as opposed to 8x8, 16x16, or even 4x4. Is there a specific, mathematical reason for this? The reason I'm asking is because I'm developing a program that will be able to generate Connect $N$ boards, for any given number. At first I assumed that the board size was 2n by 2n, but then I realized it's 7x6. What's going on here? P.S.: Forgive me if my question tags are incorrect; I'm not quite sure what this falls under.
So it seems that a 7x6 board was chosen because it's "the smallest board which isn't easily shown to be a draw". In addition, it was also speculated that there should probably be an even number of columns. Therefore, it seems that the dimensions of a Connect $N$ board are a function of $N$. I see two possible functions. N.B.: I'm not sure if there's a rule about the numbers being consecutive, but I'm assuming that that is the case here.

Times 1.5 function pseudo-code:

    column_height = N * 1.5;
    If column_height is an even number:
        row_height = N + 1;
    Otherwise (if column_height is an odd number):
        column_height = (N * 1.5) + 1; // truncate the decimal portion of (N * 1.5) before adding one
        row_height = column_height + 1;

Add 3 function pseudo-code:

    column_height = N + 3;
    If column_height is an even number:
        row_height = N + 2;
    Otherwise (if column_height is an odd number):
        column_height = N + 4;
        row_height = N + 3;

The first one seems more likely, but since I'm trying to generate perfectly mathematically balanced game boards and there doesn't seem to be any symmetry that I can see, I'm still not sure. Does this seem about right?
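Read literally, the first rule transcribes into Python as follows (a sketch keeping the poster's variable conventions; note that for $N=4$ it yields a 6x5 board rather than the actual 7x6, which is part of the uncertainty being asked about):

```python
def connect_n_board(n):
    """Literal transcription of the 'times 1.5' pseudo-code above.

    Returns (column_height, row_height) in the poster's naming.
    Caveat: for n=4 this gives (6, 5), not the real 7x6 Connect Four board.
    """
    ch = n * 1.5
    if ch == int(ch) and int(ch) % 2 == 0:   # whole and even
        return int(ch), n + 1
    # otherwise: truncate, then add one to make it even
    ch = int(ch) + 1
    return ch, ch + 1
```

For example, `connect_n_board(5)` gives `(8, 9)` and `connect_n_board(6)` gives `(10, 11)` under this reading of the rule.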
{ "language": "en", "url": "https://math.stackexchange.com/questions/137103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why do engineers use the Z-transform and mathematicians use generating functions? For a (complex valued) sequence $(a_n)_{n\in\mathbb{N}}$ there is the associated generating function $$ f(z) = \sum_{n=0}^\infty a_nz^n$$ and the $z$-Transform $$ Z(a)(z) = \sum_{n=0}^\infty a_nz^{-n}$$ which only differ by the sign of the exponent of $z$, that is, both are essentially the same and carry the same information about the sequence, though encoded slightly differently. The basic idea is the same: associate a holomorphic function with the sequence and use complex calculus (or formal power series). However, the engineering books I know which treat the $Z$-transform do not even mention the word "generating function" (well one does but means the generator of a multiscale analysis...) and the mathematics books on generating function do not mention the $Z$-transform (see for example "generatingfunctionology"). I am wondering: Why is that? Has one formulation some advantage over the other? Or is it just for historical reasons? (BTW: There is not a tag for the $Z$-transform, and the closest thing I found was "integral-transforms"...)
Given a sequence of numbers $\{x[n] \colon n \in \mathbb Z\}$ the $z$-transform is defined as $$X(z) = \sum_n x[n]z^{-n}$$ which when evaluated at $z = \exp(j\omega)$ (where $j = \sqrt{-1}$ is what electrical engineers typically use for what mathematicians denote by $i$) gives $${X}(\exp(j \omega)) = \sum_n x[n] \exp(-j\omega n)$$ which is called the discrete-time Fourier Transform (DTFT) of the sequence. Engineers view this as slightly easier to use and remember than evaluating the generating function $$\hat{X}(D) = \sum_n x[n]D^{n}$$ (where $D$ denotes delay) at $D = \exp(-j\omega)$ to arrive at the same result. So, it is essentially a matter of convention.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 2, "answer_id": 1 }
An infinite finitely generated group contains an isometric copy of $\mathbb{R}$, i.e., contains a bi-infinite geodesic The question is: prove that an infinite finitely generated group $G$ contains an isometric copy of $\mathbb{R}$, i.e., contains a bi-infinite geodesic ($G$ is equipped with the word metric). I do not even know what I have to prove. It does not make sense to me. The word metric of $G$ assumes values in the natural numbers. How could there be an isometry between a subgraph of the Cayley graph of $G$ and the real line $\mathbb{R}$. I am really confused. I found this question here (sheet 6, ex. 1).
I'm just going to focus on what you've said you are confused about, namely: "How could there be an isometry between a subgraph of the Cayley graph of G and the real line $\mathbb{R}$?". We can extend the word metric on $G$ to a metric on the Cayley graph in a natural way, with each edge being an isometric copy of a unit interval. Under this metric, the Cayley graph of $\mathbb{Z}$ with respect to the generator $1$ is isometric to $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
compactness property I am a new user on Math Stack Exchange. I don't know how to solve part of this problem, so I hope that one of the users can give me a hand. Let $f$ be a continuous function from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$ with the following properties: if $A\subset \mathbb{R}^{n}$ is open then $f(A)$ is open; if $B\subset \mathbb{R}^{m}$ is compact then $f^{-1}(B)$ is compact. I want to prove that $f( \mathbb{R}^{n}) $ is closed.
Take $y \in \overline{f(\mathbb{R}^n)}$. Let $B_\varepsilon = \{x \mid d(x,y) \leq \varepsilon\}$. Now, $\emptyset \neq B_\varepsilon \cap f(\mathbb{R}^n) = f\left(f^{-1}(B_\varepsilon)\right)$. Because $f^{-1}(B_\varepsilon)$ is compact, $B_\varepsilon \cap f(\mathbb{R}^n)$ is compact, being the image of a compact set under $f$; as $\varepsilon$ decreases, these sets form a decreasing family of nonempty compact sets. Therefore, $\bigcap_\varepsilon (B_\varepsilon \cap f(\mathbb{R}^n))$ is nonempty. Now, $\emptyset \neq \bigcap_\varepsilon (B_\varepsilon \cap f(\mathbb{R}^n)) \subset \bigcap_\varepsilon B_\varepsilon = \{y\}$ implies that $y \in f(\mathbb{R}^n)$. That is, $f(\mathbb{R}^n) = \overline{f(\mathbb{R}^n)}$. By the way, the only "clopen" sets in $\mathbb{R}^m$ are $\emptyset$ and $\mathbb{R}^m$. Since $f(\mathbb{R}^n)$ is nonempty, open and closed, we have that $f(\mathbb{R}^n) = \mathbb{R}^m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
How to find perpendicular vector to another vector? How do I find a vector perpendicular to a vector like this: $$3\mathbf{i}+4\mathbf{j}-2\mathbf{k}?$$ Could anyone explain this to me, please? I have a solution to this when I have $3\mathbf{i}+4\mathbf{j}$, but could not solve if I have $3$ components... When I googled, I saw the direct solution but did not find a process or method to follow. Kindly let me know the way to do it. Thanks.
The vectors perpendicular to $(3,4,-2)$ form a two-dimensional subspace, the plane $3x+4y-2z=0$ through the origin. To get solutions, choose values for any two of $x$, $y$, and $z$, and then use the equation to solve for the third. The space of solutions could also be described as $V^{\perp}$, where $V=\{(3t,4t,-2t):t\in\Bbb R\}$ is the line (or one-dimensional vector space) spanned by $(3,4,-2)$.
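The "choose two coordinates, solve for the third" recipe is easy to mechanize (a Python sketch; the function name is ours):

```python
def perpendicular(v):
    """Return one vector perpendicular to a nonzero v = (a, b, c).

    Pick two coordinates freely and solve a*x + b*y + c*z = 0 for the third.
    """
    a, b, c = v
    if c != 0:
        return (1.0, 0.0, -a / c)   # x = 1, y = 0, solve for z
    if b != 0:
        return (1.0, -a / b, 0.0)   # x = 1, z = 0, solve for y
    return (0.0, 1.0, 0.0)          # v is along the x-axis

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

w = perpendicular((3, 4, -2))   # e.g. (1, 0, 1.5)
```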
{ "language": "en", "url": "https://math.stackexchange.com/questions/137362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78", "answer_count": 18, "answer_id": 11 }
Successive Lottery Drawings and Choosing Winning Number Consider the following scenario: Suppose on some date $D1$ the number $N$ is a winning number in a fair lottery where a "play" is considered the selection of a finite set of numbers. By "fair" I mean that the winning number will be selected at random. At some later date $D2$ another lottery, using the same rules, will be held. Suppose one picks the same number $N$ that was the winning number on $D1$ to play on $D2$. Does this selection increase/decrease or have no effect on one's chances of winning the lottery on $D2$? I believe that picking a previously winning number will have no impact on one's chance of success, for the simple reason that the universe has no way of remembering which previous numbers won and which ones did not; since the selection is random, each number has an equally likely chance of being picked regardless of whether it was picked before. Other than a basic assumption of causality, I really don't know how one would rigorously prove this, though. The counterargument, which I believe is faulty, argues against "reusing" winning numbers because the likelihood of the same number coming up twice is infinitesimally small, so of course one should not reuse numbers. The problem with this, as I see it, is that the probability of picking any two specific numbers, regardless of whether they are the same, is identical to the probability of picking the same number twice. The fault is that picking numbers this way is like trying to play two lotteries in succession - which is very different from the given problem.
Just adding to what André Nicolas said (he's accurate): some prize tiers are usually shared between all the people who got a winning combination, so the choice of numbers does have an effect there. For example, in 2005 a record 110 players won the second prize tier (500,000 or 100,000 dollar prizes, depending on Power Play) in a single Powerball drawing. The winning numbers most of them used apparently came from a fortune-cookie message. Ordinarily only about four tickets would have been expected to win at the Match 5 prize level. The odds against this happening were so long that an investigation was launched.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Sex distribution Suppose there are N male and N female students. They are randomly distributed into k groups. Is it more probable for a male student to find himself in a group with more guys and for a female student to find herself in a group with more girls? The question is motivated by an argument with my mother. She claimed that in the majority of collectives where she was, the number of women besides her was greater than the number of men, while I had the opposite impression: in most collectives where I was a member there were more men, even without counting myself. I was never in a majority-girls collective, so I think for a male it is more probable to find oneself in a majority-male collective (even if we exclude oneself).
This problem also gives the answer why you are "always" in the longer queue at the supermarket. If $k=1$ the answer is trivial: all groups are gender-balanced. Therefore we shall assume that $k>1$. Assume Samuel and Samantha were ill the day the groups were originally formed. If the two Sams are assigned to the groups randomly, the result is just as if there had been no illness. If Sam and Sam are assigned to the same group (which happens with probability $\frac1k$ if the distribution is uniform; with other distributions, your mileage may vary), we see by symmetry that more guys is exactly as probable as more gals for this group. If Sam is assigned to a different group than Sam (which happens with probability $1-\frac1k>0$ in the uniform case, but we actually need only assume that the probability is $>0$), then three cases are possible:

* The group was gender-balanced before, with some probability $p$, say (clearly $p>0$, though the exact value depends on the group size).
* Or the group had more male members.
* Or the group had more female members.

By symmetry, the probabilities for the latter two events are equal, hence are $\frac{1-p}2$ each. Then the probability that the group including Sam has more people of the same gender as Sam is at least $p+\frac{1-p}2=\frac{1+p}2>\frac12$. In total, it is more probable to find oneself in a group with more people of one's own gender than with fewer. This holds both for Sam=Samuel and Sam=Samantha.
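The symmetry argument can be illustrated with a quick Monte Carlo simulation (a sketch, not part of the proof; the parameter values are arbitrary):

```python
import random

def simulate(N=10, k=4, trials=40000, seed=1):
    """Track one fixed male student, 'Sam', among 2N students split
    uniformly into k equal groups, and compare how often his group is
    majority-male vs majority-female (counts include Sam himself).
    Group size is 2N/k = 5 here, so there are no ties."""
    rng = random.Random(seed)
    size = 2 * N // k
    own = other = 0
    for _ in range(trials):
        others = [1] * (N - 1) + [0] * N     # genders of the other 2N-1 students
        rng.shuffle(others)
        males = 1 + sum(others[: size - 1])  # Sam plus his random group-mates
        if males > size - males:
            own += 1
        elif males < size - males:
            other += 1
    return own / trials, other / trials

own_p, other_p = simulate()
```

With these parameters the majority-own-gender probability comes out around two thirds, comfortably above one half, as the argument predicts.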
{ "language": "en", "url": "https://math.stackexchange.com/questions/137568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Find equation of a plane that passes through point and contains the intersection line of 2 other planes Find equation of a plane that passes through point P $(-1,4,2)$ that contains the intersection line of the planes $$\begin{align*} 4x-y+z-2&=0\\ 2x+y-2z-3&=0 \end{align*}$$ Attempt: I found the direction vector of the intersection line by taking the cross product of vectors normal to the known planes. I got $\langle 1,10,6\rangle$. Now, I need to find a vector normal to the plane I am looking for. To do that I need one more point on that plane. So how do I proceed?
Consider the family of planes $u(4x-y+z-2)+(1-u)(2x+y-2z-3)=0$ where $u$ is a parameter. You can find the appropriate value of $u$ by substituting in the coordinates of the given point and solving for $u$; the value thus obtained can be substituted in the equation for the family to yield the particular member you need.
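Carrying the computation through numerically (a Python sketch; the names `f1`, `f2`, `u` are ours, not standard notation):

```python
# Pick the member of the family of planes through the intersection line
# that also passes through P.
P = (-1, 4, 2)

def f1(x, y, z): return 4*x - y + z - 2
def f2(x, y, z): return 2*x + y - 2*z - 3

a, b = f1(*P), f2(*P)     # a = -8, b = -5
u = b / (b - a)           # solves u*a + (1 - u)*b = 0, giving u = -5/3

# coefficients of the resulting plane  A x + B y + C z + D = 0
A = 4*u + 2*(1 - u)
B = -u + (1 - u)
C = u - 2*(1 - u)
D = -2*u - 3*(1 - u)
```

The resulting plane is $-\frac43 x + \frac{13}3 y - 7z - \frac{14}3 = 0$, or, clearing denominators, $4x - 13y + 21z + 14 = 0$.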
{ "language": "en", "url": "https://math.stackexchange.com/questions/137629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Finding subgroups of index 2 of $G = \prod\limits_{i=1}^\infty \mathbb{Z}_n$ I looked at this question and its answer. The answer uses the fact that every vector space has a basis, so there are uncountably many subgroups of index 2 if $n=p$ where $p$ is prime. Are there uncountably many subgroups of index 2 if $n$ is not prime? The problem looks the same (with minimal change), but the way we found the subgroups does not work in this case (I think).
If $n$ is odd, $G$ has no subgroups of index $2$. Indeed, if $H$ is a subgroup of index dividing $2$, and $g\in G$, then $2g\in H$ (since $G/H$ has order $2$, so $2(g+H) = 0+H$). Since every element of $G$, hence of $H$, has order dividing $n$, and $\gcd(2,n)=1$, then $\langle 2g\rangle = \langle g\rangle$, so $g\in\langle 2g\rangle\subseteq H$, hence $g+H$ is trivial. That is, $G\subseteq H$. So the only subgroup of index dividing $2$ is $G$. If $n$ is even, then let $H=2\mathbb{Z}_n$, which is of index $2$ in $\mathbb{Z}_n$. Then $G/\prod_{i=1}^{\infty} H \cong \prod_{i=1}^{\infty}(\mathbb{Z}_n/2\mathbb{Z}_n) \cong \prod_{i=1}^{\infty}\mathbb{Z}_2$. Note: $\prod_{i=1}^{\infty} H$ is not itself of index $2$ in $G$; in fact, $\prod_{i=1}^{\infty} H$ has infinite index in $G$. We are using $\prod_{i=1}^{\infty}H$ to reduce to a previously solved case. Since $\prod_{i=1}^{\infty}\mathbb{Z}_2$ has uncountably many subgroups of index $2$ by the previously solved case, by the isomorphism theorems so does $G$. Below is an answer for the wrong group (I thought $G=\prod_{n=1}^{\infty}\mathbb{Z}_n$; I leave the answer because it is exactly the same idea.) For each $n$, let $H_n$ be the subgroup of $\mathbb{Z}_n$ given by $2\mathbb{Z}_n$. Note that $H_n=\mathbb{Z}_n$ if $n$ is odd, and $H_n$ is the subgroup of index $2$ in $\mathbb{Z}_n$ if $n$ is even. Now let $\mathcal{H}=\prod_{n=1}^{\infty}H_n$. Then $$G/\mathcal{H}\cong\prod_{n=1}^{\infty}(\mathbb{Z}_n/H_n) = \prod_{n=1}^{\infty}(\mathbb{Z}/2\mathbb{Z}).$$ (In the last isomorphism, the odd-indexed quotients are trivial, the even-indexed quotients are isomorphic to $\mathbb{Z}/2\mathbb{Z}$; then delete all the trivial factors). Since $G/\mathcal{H} \cong \prod_{n=1}^{\infty}(\mathbb{Z}/2\mathbb{Z})$ has uncountably many subgroups of index $2$, so does $G$ by the isomorphism theorems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/137713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Primitive roots as roots of equations. Take $g$ to be a primitive root $\pmod p$, and $n \in \{0, 1,\ldots,p-2\}$; write down a necessary and sufficient condition for $x=g^n$ to be a root of $x^5\equiv 1\pmod p$. This should depend on $n$ and $p$ only, not $g$. How many such roots $x$ of this equation are there? This answer may only depend on $p$. At a guess for the first part, I'd say that since $g^{5n} \equiv g^{p-1}$, for $x$ to be a root we need $5n \equiv p-1 \pmod p$. No idea if this is right, and not sure what to do for the second part. Thanks for any help.
Hint. In any abelian group, if $a$ has order $n$, then $a^r$ has order $n/\gcd(n,r)$. (Your idea is fine, except that you got the wrong congruence: it should be $5n\equiv p-1\pmod{p-1}$, not modulo $p$; do you see why?) For the second part, you'll need to see what you get from the first part. That will help you figure it out.
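To make the corrected congruence concrete, here is a numerical illustration with a hypothetical small prime (not part of the hint): with $p=11$ and primitive root $g=2$, the roots of $x^5\equiv 1$ are exactly the $g^n$ with $5n\equiv 0\pmod{p-1}$, and there are $\gcd(5,p-1)$ of them.

```python
from math import gcd

def order(a, p):
    """Multiplicative order of a modulo p (a not divisible by p)."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

p, g = 11, 2   # 2 is a primitive root mod 11, and 5 divides p - 1 = 10
exponents = [n for n in range(p - 1) if (5 * n) % (p - 1) == 0]
roots = sorted(pow(g, n, p) for n in exponents)
brute = sorted(x for x in range(1, p) if pow(x, 5, p) == 1)
```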
{ "language": "en", "url": "https://math.stackexchange.com/questions/137769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Symmetric and exterior power of representation Does there exist some simple formulas for the characters $$\chi_{\Lambda^{k}V}~~~~\text{and}~~~\chi_{\text{Sym}^{k}V},$$ where $V$ is a representation of some finite group? Thanks.
This is not quite an answer, but Fulton & Harris, §2.1 on page 13, gives a formula for $k=2$: $$\chi_{\bigwedge^2 V}(g) = \frac{1}{2}\cdot\left( \chi_V(g)^2 - \chi_V(g^2)\right)$$ as well as, in the exercise below it, $$\chi_{\mathrm{Sym}^2(V)}(g) = \frac{1}{2}\cdot\left( \chi_V(g)^2 + \chi_V(g^2)\right)$$ Maybe you can look into the proof of the first equality and generalize.
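For a quick numerical check of the $\bigwedge^2$ formula (a sketch, not from Fulton & Harris): the trace of the induced map on $\bigwedge^2 V$ is the sum of the $2\times 2$ principal minors of the matrix, and the formula says this equals $\frac12(\chi_V(g)^2-\chi_V(g^2))$. Testing on permutation matrices from the standard representation of $S_3$:

```python
def trace(M):
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace_wedge2(M):
    """Trace of the induced map on the exterior square:
    the sum of all 2x2 principal minors of M."""
    n = len(M)
    return sum(M[i][i] * M[j][j] - M[i][j] * M[j][i]
               for i in range(n) for j in range(i + 1, n))

# rho(g) for the 3-cycle g = (1 2 3) in the permutation representation of S_3
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

lhs = trace_wedge2(A)
rhs = (trace(A) ** 2 - trace(matmul(A, A))) / 2
```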
{ "language": "en", "url": "https://math.stackexchange.com/questions/137951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 0 }
is it possible to get the Riemann zeros Since we know that the number of Riemann zeros on the interval $(0,E)$ is given by $N(E) = \frac{1}{\pi}\operatorname{Arg}\xi(1/2+iE)$, is it possible to get the inverse function $N^{-1}$, so that with this inverse we can evaluate the Riemann zeros $\rho$? I mean, can the Riemann zeros be obtained from the inverse function of $\operatorname{Arg}\xi(1/2+ix)$?
No, your formula is wrong: $N(E)= \frac{1}{\pi} \operatorname{Arg} \xi (1/2+iE)$ plus a nonzero term coming from the integration along the line $\Im s = E$ (you are applying an argument principle). Besides, any function $N: \mathbb{R} \rightarrow\mathbb{Z}$ can't be injective, for cardinality reasons.
{ "language": "en", "url": "https://math.stackexchange.com/questions/138041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does it really mean for something to be "trivial"? I see this word a lot when I read about mathematics. Is this meant to be another way of saying "obvious" or "easy"? What if it's actually wrong? It's like when I see "the rest is left as an exercise to the reader", it feels like a bit of a cop-out. What does this all really mean in the math communities?
It can mean different things. For example: obvious after a few moments' thought, or clear from a commonly used argument or a short one-line proof. However, it is often also used to mean the simplest example of something. For example, the trivial group is the group with one element, and the trivial vector space is the space $\{0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/138112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 5, "answer_id": 1 }
Factoring over a finite field Consider $f=x^4-2\in \mathbb{F}_3[x]$, the field with three elements. I want to find the Galois group of this polynomial. Is there an easy or slick way to factor such a polynomial over a finite field?
The coefficients are reduced modulo 3, so $$ x^4-2=x^4-3x^2+1=(x^4-2x^2+1)-x^2=(x^2-1)^2-x^2=(x^2+x-1)(x^2-x-1). $$ It is easy to see that neither $x^2+x-1$ nor $x^2-x-1$ has any roots in $\mathbb{F}_3$. As they are both quadratic, their roots lie in $\mathbb{F}_9$. Therefore the Galois group is $\operatorname{Gal}(\mathbb{F}_9/\mathbb{F}_3)$, i.e. cyclic of order two.
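A brute-force check of this factorization over $\mathbb{F}_3$ (a Python sketch with hand-rolled polynomial arithmetic, nothing beyond the standard library):

```python
def polymul_mod(f, g, p):
    """Multiply polynomials given as coefficient lists (constant term first), mod p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

p = 3
f1 = [-1, 1, 1]    # x^2 + x - 1
f2 = [-1, -1, 1]   # x^2 - x - 1
product = polymul_mod(f1, f2, p)
target = [(-2) % p, 0, 0, 0, 1]   # x^4 - 2 reduced mod 3

roots_f1 = [x for x in range(p) if (x * x + x - 1) % p == 0]
roots_f2 = [x for x in range(p) if (x * x - x - 1) % p == 0]
```

The product reduces to $x^4+1 \equiv x^4-2 \pmod 3$, and neither factor has a root in $\mathbb{F}_3$, confirming the factorization into irreducible quadratics.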
{ "language": "en", "url": "https://math.stackexchange.com/questions/138175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Probability of components to fail I want to verify my reasoning with you. An electronic system contains 15 components. The probability that any given component fails is 0.15, and components fail independently. Knowing that at least 4 and at most 7 failed, what is the probability that exactly 5 failed? My solution: $X \sim \mathrm{Binomial}(n=15, p=0.15)$. I guess what I have to calculate is $P(X=5 \mid 4 \le X \le 7) = \frac{P(\{X=5\} \cap \{4 \le X \le 7\})}{P(4 \le X \le 7)}$. Is it correct? Thank you
You already know the answer is $a=p_5/(p_4+p_5+p_6+p_7)$ where $p_k=\mathrm P(X=k)$. Further simplifications occur if one considers the ratios $r_k=p_{k+1}/p_k$ of successive weights. To wit, $$ r_k=\frac{{n\choose k+1}p^{k+1}(1-p)^{n-k-1}}{{n\choose k}p^{k}(1-p)^{n-k}}=\frac{n-k}{k+1}\color{blue}{t}\quad\text{with}\ \color{blue}{t=\frac{p}{1-p}}. $$ Thus, $$ \frac1a=\frac{p_4}{p_5}+1+\frac{p_6}{p_5}+\frac{p_7}{p_5}=\frac1{r_4}+1+r_5(1+r_6), $$ which, for $n=15$ and with $\color{blue}{t=\frac3{17}}$, yields $$ \color{red}{a=\frac1{\frac5{11\color{blue}{t}}+1+\frac{10\color{blue}{t}}6\left(1+\frac{9\color{blue}{t}}7\right)}}. $$
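A quick numeric cross-check of the closed form (a Python sketch, standard library only); both routes agree, giving $a\approx 0.254$:

```python
from math import comb

n, p = 15, 0.15

def pmf(k):
    """Binomial(n=15, p=0.15) probability mass function."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# direct conditional probability P(X=5 | 4 <= X <= 7)
a = pmf(5) / sum(pmf(k) for k in range(4, 8))

# the ratio form derived above, with t = p/(1-p) = 3/17
t = 3 / 17
a_ratio = 1 / (5 / (11 * t) + 1 + (10 * t / 6) * (1 + 9 * t / 7))
```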
{ "language": "en", "url": "https://math.stackexchange.com/questions/138224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inscrutable proof in Humphreys' book on Lie algebras and representations This is a question pertaining to Humphreys' Introduction to Lie Algebras and Representation Theory. Is there an explanation of the lemma in §4.3 (Cartan's Criterion)? I understand the proof given there, but I fail to understand how anybody could have ever devised it or had the guts to prove such a strange statement... Lemma: Let $k$ be an algebraically closed field of characteristic $0$. Let $V$ be a finite dimensional vector space over $k$, and $A\subset B\subset \mathrm{End}(V)$ two subspaces. Let $M$ be the set of endomorphisms $x$ of $V$ such that $[x,B]\subset A$. Suppose $x\in M$ is such that $\forall y\in M, \mathrm{Tr}(xy)=0$. Then, $x$ is nilpotent. The proof uses the diagonalisable $+$ nilpotent decomposition, and goes on to show that all eigenvalues of $x$ are $0$ by showing that the $\mathbb{Q}$-subspace of $k$ they generate admits only the $0$ linear functional.
This doesn't entirely answer your question, but the key ingredients are: (1) the rationals are nice in that their squares are non-negative; (2) you can get from general field elements to rationals using a linear functional $f$; (3) you get a handle on $x$ by way of the eigenvalues of $s$, the diagonalisable part of $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/138369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 1 }
Algebraically independent elements Possible Duplicate: Why does $K\to K(X)$ preserve the degree of field extensions? Suppose $t_1,t_2,\ldots,t_n$ are algebraically independent over $K$, where $K$ contains $F$. How to show that $[K(t_1,\ldots,t_n):F(t_1,\ldots,t_n)]=[K:F]$?
Using the answer in link provided by Zev, your question can be answered by simple induction over $n$. For $n=1$ we proceed along one of the answers shown over there. Assume we have shown the theorem for some $n$. Then we have $[K(t_1,\ldots,t_n):F(t_1,\ldots,t_n)]=[K:F]$, and by the theorem for $n=1$ we have also $[K(t_1,\ldots,t_n,t_{n+1}):F(t_1,\ldots,t_n,t_{n+1})]=[K(t_1,\ldots,t_n)(t_{n+1}):F(t_1,\ldots,t_n)(t_{n+1})]=[K(t_1,\ldots,t_n):F(t_1,\ldots,t_n)]=[K:F]$ which completes the proof by induction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/138465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Showing a series is a solution to a differential equation I am attempting to show that the series $y(x)=\sum_{n=0}^{\infty} a_{n}x^n$ is a solution to the differential equation $(1-x)^2y''-2y=0$ provided that $(n+2)a_{n+2}-2na_{n+1}+(n-2)a_n=0$. So I have: $$y=\sum_{n=0}^{\infty} a_{n}x^n$$ $$y'=\sum_{n=0}^{\infty}na_{n}x^{n-1}$$ $$y''=\sum_{n=0}^{\infty}a_{n}n(n-1)x^{n-2}$$ then substituting these into the differential equation I get: $$(1-2x+x^2)\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-2}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ $$\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-2}-2\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n-1}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ relabeling the indexes: $$\sum_{n=-2}^{\infty}(n+2)(n+1)a_{n+2}x^{n}-2\sum_{n=-1}^{\infty}n(n+1)a_{n+1}x^{n}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ and then cancelling the $n=-2$ and $n=-1$ terms: $$\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^{n}-2\sum_{n=0}^{\infty}n(n+1)a_{n+1}x^{n}+\sum_{n=0}^{\infty}n(n-1)a_{n}x^{n}-2\sum_{n=0}^{\infty} a_{n}x^n=0$$ but this doesn't give me what I want (I don't think), as I have $n^2$ terms: collecting coefficients gives $(n^2+3n+2)a_{n+2}-(2n^2+2n)a_{n+1}+(n^2-n-2)a_{n}=0$, not the required recurrence. I'm not sure where I have gone wrong? Thanks very much for any help
You are correct. You only need to go on and observe that the left-hand side of your last equation factorizes as: $$(n+1)\left[(n+2)a_{n+2}-2n a_{n+1}+(n-2)a_n\right]$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/138520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Intuition and derivation of the geometric mean I've run through a bunch of searches, especially here on SO, but I simply couldn't find something that answers a question that has been on my mind lately. How was the geometric mean derived? What is the intuition behind it? Most sources simply use the final equation as a justification for its existence. The $n$-th root of the product of all elements really doesn't do it justice. Could someone elaborate on how it arises and why it is interesting?
To expand on Domagoj Pandža's great answer, the "distance" function for the geometric mean is: $$ d(n) = \sum_i \left( \ln x_i - \ln n \right)^2. $$ The geometric mean $n = \prod_i x_i^{1/k}$ can be derived as the function that minimizes the value of this distance function. Proof Expand: $$ d(n) = \sum_i (\ln x_i)^2 - 2 \ln n \sum_i \ln x_i + k (\ln n)^2 $$ Differentiate with respect to $n$: $$ \frac{d}{dn}d(n) = -\frac{2}{n} \sum_i \ln x_i + \frac{2k}{n} \ln n $$ Solve $d d(n) / dn = 0$: $$ \frac{2k}{n} \ln n = \frac{2}{n} \sum_i \ln x_i\\ k \ln n = \sum_i \ln x_i \\ \ln n = \frac{1}{k} \sum_i \ln x_i \\ n = \exp \frac{1}{k} \sum_i \ln x_i \\ $$ The last line is the geometric mean, expressed in log form. Log form of geometric mean The geometric mean is: $$ n = \prod_i x_i^{1/k}, $$ which implies that: $$ \ln n = \ln \prod_i x_i^{1/k} \\ = \frac{1}{k} \sum \ln x_i. \\ $$ Thus the geometric mean may also be written as: $$ n = \exp \frac{1}{k} \sum \ln x_i. $$
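A numeric sanity check of the minimization claim (a sketch; the sample values are arbitrary):

```python
import math

xs = [2.0, 8.0, 4.0]
k = len(xs)

def d(n):
    """The log-squared 'distance' whose minimizer is the geometric mean."""
    return sum((math.log(x) - math.log(n)) ** 2 for x in xs)

# geometric mean in log form: exp((1/k) * sum of logs) = (2*8*4)^(1/3) = 4
geo = math.exp(sum(math.log(x) for x in xs) / k)
```

Perturbing the candidate minimizer in either direction increases $d$, consistent with the derivative computation above.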
{ "language": "en", "url": "https://math.stackexchange.com/questions/138589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 4 }
Finding a constant to make a valid pdf Let $f(x) = c\cdot 2^{-x^2}$. How do I find a constant $c$ such that the integral evaluates to $1$?
Hint: Rewrite $$f(x) = c \,[e^{\ln(2)}]^{-x^2} = c\, e^{-x^2\ln(2)}$$ and try to exploit the following integral together with some change of variable: $$ \int^{\infty}_0 e^{-x^2} \,dx = \frac{\sqrt{\pi}}{2} $$
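Carrying the hint through: with $u=x\sqrt{\ln 2}$ one gets $\int_{-\infty}^{\infty} c\, 2^{-x^2}\,dx = \frac{c}{\sqrt{\ln 2}}\sqrt{\pi}$, so $c=\sqrt{\ln 2/\pi}\approx 0.4697$. A crude numeric verification (a sketch; the truncation interval and step count are arbitrary):

```python
import math

# c * integral of 2^(-x^2) over R = 1  =>  c = sqrt(ln 2 / pi)
c = math.sqrt(math.log(2) / math.pi)

# trapezoidal rule on [-20, 20]; the tails beyond that are negligible
N, L = 200_000, 20.0
h = 2 * L / N
total = 0.0
for i in range(N + 1):
    x = -L + i * h
    w = 0.5 if i in (0, N) else 1.0
    total += w * c * 2.0 ** (-x * x)
total *= h
```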
{ "language": "en", "url": "https://math.stackexchange.com/questions/138652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Central Limit Theorem/Markov's inequality Here is the question: Chip dies are manufactured in a facility where it was observed that the width of the die is normally distributed with mean 5mm and standard deviation $\sigma$. The manufacturer wants to guarantee that no more than 1 out of 100 dies fall outside the range of (5mm +/- 0.5mm). What should be the maximal standard deviation $\sigma$ of this manufacturing process? My attempt at a solution: I figured I could use the central limit theorem and Markov's inequality for this one, thus: Pr{die will be in range} = 99/100. I assumed that this should be a normal R.V. (because using a Poisson R.V. to solve this would be tedious). And now I'm horribly stuck. Any advice as to where I went wrong? Thank you.
Assume, without much justification except that we were told to do so, that the width $X$ of the die has normal distribution with mean $5$ and variance $\sigma^2$. The probability that we are within $k\sigma$ of the mean $5$ (formally, $P(5-k\sigma\le X \le 5+k\sigma)$) is equal to the probability that $|Z|\le k$, where $Z$ has standard normal distribution. We want this probability to be $0.99$. If we look at a table for the standard normal, we find that $k\approx 2.57$. We want $k\sigma=0.5$ to just meet the specification. Solve for $\sigma$. We get $\sigma\approx 0.19455$, so a standard deviation of about $0.195$ or less will do the job. We did not use the Central Limit Theorem, nor the Markov Inequality, since we were asked to assume normality. The Poisson distribution has no connection with the problem. Remark: The table that we used shows that the probability that $Z\le 2.57$ is about $0.995$. It follows that $P(Z>2.57)\approx 0.005$, we have $1/2$ of $1$ percent in the right tail. We also have by symmetry $1/2$ of $1$ percent in the left tail, for a total of $1$ percent, as desired.
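Python's `statistics.NormalDist` replaces the table lookup; the exact quantile 2.5758 sharpens the rounded table value 2.57 slightly:

```python
from statistics import NormalDist

# For a 1% defect spec split across two tails we need P(|Z| <= k) = 0.99,
# i.e. k is the 99.5th percentile of the standard normal.
k = NormalDist().inv_cdf(0.995)     # ~ 2.5758; the table gives 2.57
sigma_max = 0.5 / k                 # k * sigma = 0.5 at the spec limit

# Check: with this sigma the out-of-range probability is exactly 1%.
defect_rate = 2 * (1 - NormalDist(5, sigma_max).cdf(5.5))
```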
{ "language": "en", "url": "https://math.stackexchange.com/questions/138704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lowenheim-Skolem theorem confusion This Wikipedia entry on the Löwenheim–Skolem theorem says: In mathematical logic, the Löwenheim–Skolem theorem, named for Leopold Löwenheim and Thoralf Skolem, states that if a countable first-order theory has an infinite model, then for every infinite cardinal number κ it has a model of size κ. What does the "size" of a model refer to (or mean)? Edit: If it is referring to the cardinality of a model (set), how do you get the cardinality of one model (-> It's synonymous with interpretation, right?)? What is inside the model, then? I mean, it seems sensical to define a model of a language, as a language has some constants and objects, but defining a model of a single object - a number - seems nonsensical to me. What is inside the model of an infinite number? Thanks.
Each model has a set of individuals. The size of the model is the cardinality of this set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/138786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proof of the Schwarz Lemma I have a question which is (the Schwarz Lemma): Suppose that $f:\mathbb{D}\rightarrow\mathbb{D}$ is holomorphic and suppose that $f(0)=0$; show that $\lvert f(z)\rvert \leq \lvert z \rvert$ for all $z\in\mathbb{D}$. The solution is: Let $g(z)=\frac{f(z)}{z}$ for $z\neq0$ and $g(0)=f'(0)$. Then $g$ is holomorphic in $\mathbb{D}$. Now apply the maximum principle to $g$ on the disc $\overline{D(0,r)}$ for $r<1$ to conclude that for $\lvert z \rvert \leq r$ we have $\lvert g(z) \rvert \leq\frac{1}{r}$, and then letting $r\rightarrow 1$ we get $\lvert g(z)\rvert\leq 1$, and so we get $\lvert f(z)\rvert \leq \lvert z \rvert$. I am confused as to why $\lvert f(z) \rvert \leq 1$ for $z$ on the boundary of the disc.
$f$ is a function of the unit disk into itself. This means that $|f(z)| < 1$ for all $z \in \mathbb{D}$, and in particular this is true for all $z$ in the boundary of the disk $\mathbb{D}(0,r)$ , $r<1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/138850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Advection diffusion equation The advection diffusion equation is the partial differential equation $$\frac{\partial C}{\partial t} = D\frac{\partial^2 C}{\partial x^2} - v \frac{\partial C}{\partial x}$$ with the boundary conditions $$\lim_{x \to \pm \infty} C(x,t)=0$$ and initial condition $$C(x,0)=f(x).$$ How can I transform the advection diffusion equation into a linear diffusion equation by introducing new variables $x^\ast=x-vt$ and $t^\ast=t$? Thanks for any answer.
We can simply apply the chain rule. To avoid confusion we let $ C(x,t) = C(x^* + vt,t^*) = C^*(x^*,t^*)$: $$ \frac{\partial C}{\partial x} = \frac{\partial C^*}{\partial x^*} \frac{\partial x^*}{\partial x} + \frac{\partial C^*}{\partial t^*} \frac{\partial t^*}{\partial x} = \frac{\partial C^*}{\partial x^*} $$ since $\partial x^*/\partial x = 1$ and $\partial t^*/\partial x = 0$ (remember that in the chain rule the partial derivatives of $C^*$ are taken with respect to its first and second arguments; do not confuse these with total derivatives). Similarly $\displaystyle \frac{\partial^2 C}{\partial x^2} = \frac{\partial^2 C^*}{\partial {x^*}^2} $, and $$ \frac{\partial C}{\partial t} = \frac{\partial C^*}{\partial x^*} \frac{\partial x^*}{\partial t} + \frac{\partial C^*}{\partial t^*} \frac{\partial t^*}{\partial t} = -v\frac{\partial C^*}{\partial x^*} + \frac{\partial C^*}{\partial t^*}. $$ Plugging back into the original equation, you will see that the advection term cancels after this moving-frame change of variables: you can think of the original equation as diffusion on a car moving with velocity $v$, measured by a standing observer; after the change of variables it is pure diffusion measured on the car, $$ \frac{\partial C^*}{\partial t^*} = D\frac{\partial^2 C^*}{\partial {x^*}^2}, $$ and the initial condition becomes $C^*(x^*,0) = C(x^*+vt^*,t^*)\Big\vert_{t^*=0}= f(x^*)$; the boundary conditions remain the same.
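A finite-difference spot check (the values of $D$, $v$ and the test point are arbitrary choices): the pure-diffusion Gaussian kernel, evaluated at $x^* = x - vt$, should satisfy the original advection-diffusion equation $C_t = D C_{xx} - v C_x$.

```python
import math

D, v = 0.7, 1.2  # arbitrary coefficients for the check

def C(x, t):
    # Heat kernel in the moving frame x* = x - v t: a translated Gaussian.
    return math.exp(-(x - v * t) ** 2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

# Central differences at an arbitrary interior point.
x0, t0, h = 0.3, 1.0, 1e-4
C_t  = (C(x0, t0 + h) - C(x0, t0 - h)) / (2 * h)
C_x  = (C(x0 + h, t0) - C(x0 - h, t0)) / (2 * h)
C_xx = (C(x0 + h, t0) - 2 * C(x0, t0) + C(x0 - h, t0)) / h ** 2

# Should vanish up to discretization error.
residual = C_t - (D * C_xx - v * C_x)
```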
{ "language": "en", "url": "https://math.stackexchange.com/questions/138919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Self-Dual Code; generator matrix and parity check matrix Hi ! I have a parity check matrix $H$ for a code $C$:

0 0 0 1 1 1 1 0
0 1 1 0 0 1 1 0
1 0 1 0 1 0 1 0
1 1 1 1 1 1 1 1

I am allowed to assume that 1) the dual of an $(n,k)$-code is an $[n,n-k]$-code 2) $(C^{\perp})^{\perp} = C$ (Here $\perp$ denotes the dual) I want to prove that my code $C$ is self-dual. (ie that $C=C^{\perp}$) Here is my logic: I know that, since $H$ is a parity check matrix for $C$, $H$ is a generator matrix for $C^{\perp}$. Since $C^{\perp}$ is an $[n,n-k]$-code, the generator matrix $H$ is an $(n-k) \times n$ matrix. So now looking at $H$, $n=8$ and $k=4$, so the generator matrix for $C$ is a $4 \times 8$ matrix. Now let $G=[g_{i,j}]$ be the generator matrix for $C$. $GH^T=0$ since every vector in the rowspace of $G$ is orthogonal to every vector in the rowspace of $H$. Can anyone tell me what is missing to finish off my proof? Note: I see that each row in $H$ has an even number of nonzero entries and that the distance between any two rows is even. Maybe this helps if I can write a definition of weight relating to duals...
The rows of $H$ generate $C^\perp$. By definition of the parity check, $xH^\mathrm{T}=0$ iff $x\in C$. What can you conclude from the fact that $HH^\mathrm{T}=[0]$?
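Reading the question's matrix as four rows of eight entries, the suggested computation $HH^\mathrm{T}=0$ over GF(2) can be verified directly:

```python
# The parity check matrix H from the question, one list per row.
H = [
    [0, 0, 0, 1, 1, 1, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]

def dot_mod2(u, v):
    return sum(a * b for a, b in zip(u, v)) % 2

# Every row of H is orthogonal (mod 2) to every row of H, i.e. H H^T = 0,
# which is the fact the hint asks about.
hht_is_zero = all(dot_mod2(r, s) == 0 for r in H for s in H)
```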
{ "language": "en", "url": "https://math.stackexchange.com/questions/138984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Commutativity between diagonal and unitary matrices? Quick questions:

1. If you have a diagonal matrix $A$ and a unitary matrix $B$, do $A$ and $B$ commute?
2. If $A$ and $B$ are positive definite matrices, $a$ is an eigenvalue of $A$ and $b$ is an eigenvalue of $B$, does it follow that $a+b$ is an eigenvalue of $A+B$?
For the first question, the answer is no; an explicit example is given by $A:=\pmatrix{1&0\\ 0&2}$ and $B=\frac{1}{\sqrt 2}\pmatrix{1&1\\ -1&1}$ (a rotation matrix, hence unitary; the normalization does not affect whether $A$ and $B$ commute). Another way to see it's not true is the following: take $S$ a symmetric matrix; then you can find $D$ diagonal and $U$ orthogonal (hence unitary) such that $S=U^tDU$, and if $U$ and $D$ commuted then $S$ would be diagonal. For the second question, the answer is "not necessarily", because the set $$\{a+b\mid a\mbox{ eigenvalue of }A,b\mbox{ eigenvalue of }B\}$$ may contain more elements than the dimension of the space we are working with.
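A quick check that this $B$ (normalized to be unitary) really fails to commute with $A$:

```python
import math

s = 1 / math.sqrt(2)
A = [[1, 0], [0, 2]]
B = [[s, s], [-s, s]]   # rotation by 45 degrees, hence unitary

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AB = matmul(A, B)
BA = matmul(B, A)

# B^T B = I confirms unitarity...
Bt = [[B[j][i] for j in range(2)] for i in range(2)]
BtB = matmul(Bt, B)

# ...yet A B != B A.
commute = all(abs(AB[i][j] - BA[i][j]) < 1e-12 for i in range(2) for j in range(2))
```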
{ "language": "en", "url": "https://math.stackexchange.com/questions/139054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Independent, Normally Distributed R.V. Working on this: A shot is fired at a circular target. The vertical and the horizontal coordinates of the point of impact (with the origin sitting at the target’s center) are independent and normally distributed with $\nu(0, 1)$. Show that the distance of the point of impact from the center is distributed with PDF $$p(r) = re^{-r^2/2}, r \geq 0.$$ Find the median of this distribution. So I'm guessing this would be graphed on an X and Y axis. I can intuit that I need to take the integral of the PDF from the lower bound to $m$ (or from $m$ to the upper bound), but I don't know what the normal distribution with $\nu(0, 1)$ means. Also, how would I show that the point of impact has the desired PDF? Thank you.
Let $r\ge 0$; put $R = \sqrt{X^2 + Y^2}$, where $X$ and $Y$ are the coordinates of the shot. Then $$P(R\le r) = {1\over 2\pi} \mathop{\int\!\!\int}_{B_r(0)} \exp\{-(x^2 + y^2)/2\}\,dx\,dy.$$ Change to polars to get $$P(R\le r) = {1\over 2\pi}\int_0^r \int_0^{2\pi} e^{-s^2/2}\,s\,d\theta\, ds =\int_0^r s\, e^{-s^2/2}\,ds.$$ Differentiate and you are there.
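Integrating the pdf gives the CDF $F(r) = 1 - e^{-r^2/2}$, so the median asked for in the question is $\sqrt{2\ln 2}$; a small check:

```python
import math

# CDF obtained by integrating p(r) = r exp(-r^2/2) from 0 to r.
def F(r):
    return 1 - math.exp(-r * r / 2)

# Median: F(m) = 1/2  =>  m = sqrt(2 ln 2) ~ 1.1774
median = math.sqrt(2 * math.log(2))

# Cross-check pdf = dF/dr with a central difference at an arbitrary point.
r0, h = 1.3, 1e-6
pdf_fd = (F(r0 + h) - F(r0 - h)) / (2 * h)
```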
{ "language": "en", "url": "https://math.stackexchange.com/questions/139144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Area under a curve, difference between dy and dx I am trying to find the volume generated by $ y = 1 $ and $y = x^\frac{1}{4}$ from 0 to 1, revolving around $ x = 1$. In class we did the problem with respect to $y$, so from my understanding that is taking the "rectangles" from $f(y)$, i.e. the $y$ axis. I was wondering why not just do it with respect to $x$; it would either be a vertical or horizontal slice of the function but the result would be the same. I was not able to get the correct answer for the problem but I am not sure why. Also one other question I had about this: is there a hole at $(1,1)$ in the shape? The area being subtracted is defined there, so shouldn't that be a hole since we are taking that area away? Both functions equal 1 at $x=1$.
I expect you have drawn a picture, and that it is the region below $y=1$, above $y=x^{1/4}$, from $x=0$ to $x=1$ that is being rotated about $x=1$. When you rotate, you get a cylinder with a kind of an upside down bowl carved out of it, very thin in the middle. You have asked similar questions before, so I will be brief. It is probably easiest to do it by slicing parallel to the $x$-axis. So take a slice of thickness $dy$, at height $y$. We need to find the area of cross-section. Look at the cross-section. It is a circle with a circle removed. The outer radius is $1$, and the inner radius is $1-x$. So the area of cross-section is $\pi(1^2-(1-x)^2)$. We need to express this in terms of $y$. Note that $x=y^4$. so our volume is $$\int_0^1 \pi\left(1^2-(1-y^4)^2\right)\,dy.$$ I would find it more natural to find the volume of the hollow part, and subtract from the volume of the cylinder. You could also use shells. Take a thin vertical slice, with base going from $x$ to $x+dx$, and rotate it. At $x$, we are going from $x^{1/4}$ to $1$. The radius of the shell is $1-x$, and therefore the volume is given by $$\int_0^1 2\pi(1-x)(1-x^{1/4})\,dx.$$ Multiply through, and integrate term by term. Not too bad, but slicing was easier.
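The two set-ups can be checked against each other numerically; expanding the washer integrand gives the exact volume $\pi(2/5 - 1/9) = 13\pi/45$:

```python
import math

def midpoint(f, a, b, n=100_000):
    """Composite midpoint-rule integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Washers (slices at height y): V = ∫ pi (1 - (1 - y^4)^2) dy
washers = midpoint(lambda y: math.pi * (1 - (1 - y ** 4) ** 2), 0.0, 1.0)

# Shells (vertical strips at x): V = ∫ 2 pi (1 - x)(1 - x^(1/4)) dx
shells = midpoint(lambda x: 2 * math.pi * (1 - x) * (1 - x ** 0.25), 0.0, 1.0)

# Exact value: pi * (2/5 - 1/9) = 13 pi / 45
exact = 13 * math.pi / 45
```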
{ "language": "en", "url": "https://math.stackexchange.com/questions/139217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving identities using Pythagorean, Reciprocal and Quotient Back again, with one last identity that I cannot solve: $$\frac{\cos \theta}{\csc \theta - 2 \sin \theta} = \frac{\tan\theta}{1-\tan^2\theta}.$$ The simplest I could get the left side to, if at all simpler, is $$\frac{\cot\theta}{\csc^2\theta-2}.$$ As for the right side, it has me stumped, especially since the denominator is so close to an identity yet so far away. I've tried rationalizing the denominators (both sides) to little success. From my last question, where multiplying by a '1' worked, I didn't see the association here. Thanks!
HINT: $$\begin{align*} \frac{\tan\theta}{1-\tan^2\theta}&=\frac{\sin\theta\cos\theta}{\cos^2\theta-\sin^2\theta} \end{align*}$$ $$\begin{align*} \frac{\cos\theta}{\csc\theta-2\sin\theta}&=\frac{\sin\theta\cos\theta}{1-2\sin^2\theta} \end{align*}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/139263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Question on conditional independence Consider four random vectors $X, Z, C$ and $W$ in which $Z_i = W_i+N(0,\sigma)$: iid Gaussian noise for each element of $W$ If $X$ is conditionally independent of $Z$ given $C,$ will X be conditionally independent of $W$ given $C$? Thank you very much.
Not necessarily, given the conditions as stated. We can work in one dimension. Let $\eta, \xi$ be two iid $N(0,1)$ random variables. Set $X = \eta - \xi$, $W = \eta$, $Z = \eta + \xi$, and $C=0$ (so conditional independence given $C$ is just independence). Then $X$ and $Z$ are independent (they are jointly Gaussian with zero covariance) but $X$ and $W$ are not (their covariance is 1).
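A Monte Carlo sketch of the counterexample (the sample size and seed are arbitrary choices): $\operatorname{Cov}(X,Z)=0$ while $\operatorname{Cov}(X,W)=1$.

```python
import random

random.seed(0)
n = 100_000
eta = [random.gauss(0, 1) for _ in range(n)]
xi  = [random.gauss(0, 1) for _ in range(n)]

# The construction from the answer: X = eta - xi, W = eta, Z = eta + xi.
X = [e - x for e, x in zip(eta, xi)]
W = eta
Z = [e + x for e, x in zip(eta, xi)]

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

cov_xz = cov(X, Z)   # jointly Gaussian with zero covariance: independent
cov_xw = cov(X, W)   # covariance 1: dependent
```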
{ "language": "en", "url": "https://math.stackexchange.com/questions/139379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Set of All Subsequential Limits Given $\{a_n\}_{n=0}^\infty$ and $\{b_n\}_{n=0}^\infty$ bounded sequences; show that if $\lim \limits_{n\to \infty}a_n-b_n=0$ then both sequences have the same subsequential limits. My attempt to prove this begins with: Let $E_A=\{L|L$ subsequential limit of $a_n$} and $E_B=\{L|L$ subsequential limit of $b_n$}. We need to show that $E_A=E_B$. Given bounded sequence $a_n$ and $b_n$ we know from B.W that each sequence has a subsequence that converges, therefore both $E_A$ and $E_B$ are not empty; Let $L\in E_A$. How can I show that $L\in E_B$? Thank you very much.
I think you may want to prove that:

1. Consider two sequences $\{x_n\}$ and $\{y_n\}$ such that $x_n-y_n \to l$ and $y_n \to y$; then $$x_n \to l+y.$$
2. Given a convergent sequence $\{x_n\}$ that converges to $x$, all its subsequences converge to the same limit, $x$.

Do you see how that would play out here?

* Let $r \in E_B$. That is, $r$ is a limit point of the sequence $\{b_n\}$. So, there is a subsequence of $\{b_n\}$, say $\{b_{n_k}\}$, that converges to $r$.
* Now, consider the same subsequence of $\{a_n-b_n\}$, namely $\{a_{n_k}-b_{n_k}\}$. Since this is a subsequence of a convergent sequence, $a_{n_k}-b_{n_k} \to 0$ by $(2)$.

Now putting together the two claims, by $(1)$ you have that $a_{n_k} \to r$. That is, $r \in E_A$. This proves one inclusion, $E_B \subseteq E_A$. The proof of the other inclusion is similar. For the other inclusion, as Brian observes, note that $a_n -b_n \to 0$ implies $b_n-a_n \to 0$. Now appeal to the previous part, to see that $E_A \subseteq E_B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/139438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
In the history of mathematics, has there ever been a mistake? I was just wondering whether or not there have been mistakes in mathematics. Not a conjecture that ended up being false, but a theorem which had a proof that was accepted for a nontrivial amount of time before someone found a hole in the argument. Does this happen anymore now that we have computers? I imagine not. But it seems totally possible that this could have happened back in the Enlightenment. Feel free to interpret this how you wish!
Well, there have been plenty of conjectures which everybody thought were correct, which in fact were not. The one that springs to mind is the Over-estimated Primes Conjecture. I can't seem to find a URL, but essentially there was a formula for estimating the number of primes less than $N$. Thing is, the formula always slightly over-estimates how many primes there really are... or so everybody thought. It turns out that if you make $N$ absurdly large, then the formula starts to under-estimate! Nobody expected that one. (The "absurdly large number" was something like $10^{10^{10^{10}}}$ or something silly like that.) Fermat claimed to have had a proof for his infamous "last theorem". But given that the eventual proof is a triumph of modern mathematics running to over 200 pages and understood by only a handful of mathematicians world wide, this cannot be the proof that Fermat had 300 years ago. Therefore, either 300 years of mathematicians have overlooked something really obvious, or Fermat was mistaken. (Since he never wrote down his proof, we can't claim that "other people believed it before it was proven false" though.) Speaking of which, I'm told that Gauss or Cauchy [I forget which] published a proof for a special case of Fermat's last theorem - and then discovered that, no, he was wrong. (I don't recall how long it took or how many people believed it.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/139503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "297", "answer_count": 28, "answer_id": 1 }
Solve $\frac{\cos x}{1+\sin x} + \frac{1+\sin x}{\cos x} = 2$ I am fairly good at solving trig equations yet this one equation has me stumped. I've been trying very hard but was unable to solve it. Can anyone help please? Thank you. $$\frac{\cos x}{1+\sin x} + \frac{1+\sin x}{\cos x} = 2$$ solve for $x$ in the range of $[-2\pi, 2\pi]$ I do know we have to do a difference of squares, yet after that, I don't know what to do and I get lost. Thank you.
HINT: $$\begin{align*} \frac{\cos x}{1+\sin x} + \frac{1+\sin x}{\cos x}&=\frac{\cos^2 x+(1+\sin x)^2}{\cos x(1+\sin x)}\\ &=\frac{\cos^2 x+\sin^2x+1+2\sin x}{\cos x(1+\sin x)}\;; \end{align*}$$ now use a familiar trig identity and find something to cancel.
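Carrying the hint to its end: the numerator becomes $2+2\sin x$, the factor $1+\sin x$ cancels, and the left side collapses to $2/\cos x$, so the equation forces $\cos x = 1$, i.e. $x \in \{-2\pi, 0, 2\pi\}$ on the given range. A quick numerical confirmation:

```python
import math

def lhs(x):
    return math.cos(x) / (1 + math.sin(x)) + (1 + math.sin(x)) / math.cos(x)

# Spot-check the simplification lhs(x) = 2 / cos(x) at arbitrary sample angles
# (avoiding the excluded points cos x = 0 and sin x = -1).
samples = [0.4, 1.1, 2.5, -0.7]
simplification_ok = all(abs(lhs(x) - 2 / math.cos(x)) < 1e-12 for x in samples)

# The solutions of lhs(x) = 2 on [-2*pi, 2*pi].
solutions = [-2 * math.pi, 0.0, 2 * math.pi]
```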
{ "language": "en", "url": "https://math.stackexchange.com/questions/139508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 0 }
How to find the minimum variance portfolio? I am doing some revision questions on my Portfolio Theory module, and have come across the following question: Consider an investor who has constructed a risky portfolio from N securities. The investment opportunity set is described by the equation: $$\sigma^2 = 10 - 5{\times}E(r) + 0.5\times(E(r))^2$$ Find the minimum variance portfolio. I can't find any info in my notes, but my intuition says differentiate, set to zero and rearrange for E(r)?
We want to minimize $\sigma^2$ as a function of $E(r)$. We also likely want $E(r) \ge 0$, but this won't be necessary. $$0 =\frac{d \sigma^2}{dE(r)}=-5+E(r).$$ Now, looking at the second derivative $$\frac{d^2 \sigma^2}{dE(r)^2} = 1>0$$ It is convex at every point, so we only need to check minima. And $E(r)=5$ is the minimum variance portfolio. However, the variance at that point, $\sigma^2=-2.5$, so perhaps there is a mistake in the question. Can you clarify?
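A tiny check of the vertex arithmetic, and of the negative variance that prompts the closing question:

```python
# The frontier sigma^2(E) = 10 - 5 E + 0.5 E^2 is an upward parabola; its
# vertex sits at E = 5, where sigma^2 = -2.5, which is the inconsistency
# noted in the answer.
def variance(E):
    return 10 - 5 * E + 0.5 * E ** 2

E_star = 5.0
v_star = variance(E_star)
is_minimum = v_star < variance(E_star - 0.1) and v_star < variance(E_star + 0.1)
```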
{ "language": "en", "url": "https://math.stackexchange.com/questions/139561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Conditional probability: Why can't I make this assumption? Here is the question :If we are told that among the 10 coins, at least three have landed on heads. What is the probability that there are at most five heads? In other words, P( at most five head | at least three have landed). The solution says, $P( X\leq 5 | X\geq 3) = P( 3\leq X\leq 5)/P( X\geq3) \approx 0.601$ Which completely makes sense. But initially, I got this question wrong because I assumed that $P(\text{at most five head} | \text{at least three have landed})$ is equivalent to $P(\text{at most 2 heads out of 7 toss}) = \binom{7}{2}(\frac{1}{2})^2(\frac{1}{2})^5\approx 0.1641$ Why can't I make this assumption? "Given that three have already landed head," don't I only care about $2$ more heads out of $7$ tosses? and aren't tosses all independent events? I understand the correct method but I cannot resolve as to why my original solution is incorrect.
Your calculation of $P(\text{at most $2$ heads out of $7$ tosses})$ is wrong because you must include $X=0$ and $X=1$ along with $X=2$. The correct result is $\approx 0.227$. But that is not the main point. You know that three of the ten tosses are heads, but you don't know which ones. For instance, if you are given that the first three tosses landed on heads, then your result is OK. Does this make sense? Edit: I should explain myself better. Suppose that you have tossed two coins. The probability of two heads, given that the first one is a head, is $1/2.$ The probability of two heads, given that at least one of them is a head, is $1/3.$ (Agreed?) This simple variant contains the main point of this apparent conflict.
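Both numbers in the answer can be reproduced in a few lines; `math.comb` does the binomial counting:

```python
from math import comb

def binom_pmf(n, k):
    """P(X = k) for X ~ Binomial(n, 1/2)."""
    return comb(n, k) / 2 ** n

# The correct conditional probability P(X <= 5 | X >= 3), X ~ Bin(10, 1/2).
p_3_to_5 = sum(binom_pmf(10, k) for k in range(3, 6))
p_ge_3 = 1 - sum(binom_pmf(10, k) for k in range(0, 3))
conditional = p_3_to_5 / p_ge_3          # ~ 0.601

# The "three fixed tosses landed heads" reading: at most 2 heads in 7 tosses.
at_most_2_of_7 = sum(binom_pmf(7, k) for k in range(0, 3))   # = 29/128 ~ 0.227
```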
{ "language": "en", "url": "https://math.stackexchange.com/questions/139605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What are some examples of a mathematical result being counterintuitive? As I procrastinate studying for my Maths Exams, I want to know what are some cool examples of where math counters intuition. My first and favorite experience of this is Gabriel's Horn that you see in intro Calc course, where the figure has finite volume but infinite surface area (I later learned of Koch's snowflake which is a 1d analog). I just remember doing out the integrals for it and thinking that it was unreal. I later heard the remark that you can fill it with paint, but you can't paint it, which blew my mind. Also, philosophically/psychologically speaking, why does this happen? It seems that our intuition often guides us and is often correct for "finite" things, but when things become "infinite" our intuition flat-out fails.
I think a puzzle at calculus level is the following: Given a real number $x$ and a conditionally convergent series, the series can be re-arranged so that its sum is $x$.
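A greedy sketch of this rearrangement for the alternating harmonic series $\sum (-1)^{n+1}/n$: take positive terms until the partial sum exceeds the target, then negative terms until it drops below, and repeat. The target values below are arbitrary.

```python
def rearranged_partial_sum(target, n_terms):
    """Partial sum of a greedy rearrangement of sum (-1)^(n+1)/n toward target."""
    pos, neg = 1, 2      # next odd (positive) and even (negative) denominators
    s = 0.0
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / pos   # take the next positive term 1/1, 1/3, 1/5, ...
            pos += 2
        else:
            s -= 1.0 / neg   # take the next negative term -1/2, -1/4, ...
            neg += 2
    return s

approx = rearranged_partial_sum(0.123, 100_000)
```

The overshoot at each reversal is bounded by the last term used, which tends to zero, so the partial sums converge to the target.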
{ "language": "en", "url": "https://math.stackexchange.com/questions/139699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "133", "answer_count": 45, "answer_id": 0 }
What are some examples of a mathematical result being counterintuitive? As I procrastinate studying for my Maths Exams, I want to know what are some cool examples of where math counters intuition. My first and favorite experience of this is Gabriel's Horn that you see in intro Calc course, where the figure has finite volume but infinite surface area (I later learned of Koch's snowflake which is a 1d analog). I just remember doing out the integrals for it and thinking that it was unreal. I later heard the remark that you can fill it with paint, but you can't paint it, which blew my mind. Also, philosophically/psychologically speaking, why does this happen? It seems that our intuition often guides us and is often correct for "finite" things, but when things become "infinite" our intuition flat-out fails.
Here's a counterintuitive example from The Cauchy Schwarz Master Class, about what happens to cubes and spheres in high dimensions: Consider an $n$-dimensional cube with side length 4, $B=[-2,2]^n$, with radius-1 spheres placed inside it at every corner of the smaller cube $[-1,1]^n$. I.e., the set of spheres centered at coordinates $(\pm 1,\pm 1, \dots, \pm 1)$ that all just barely touch their neighbors and the wall of the enclosing box. Place another sphere $S$ at the center of the box at $0$, large enough so that it just barely touches all of the other spheres in each corner. (The original answer illustrates this with a diagram for dimensions $n=2$ and $n=3$.) Does the box always contain the central sphere? (I.e., is $S \subset B$?) Surprisingly, no! The radius of the central sphere $S$ actually diverges as the dimension increases, by a simple calculation: the distance from the origin to a corner-sphere center $(\pm 1,\dots,\pm 1)$ is $\sqrt n$, so the central sphere has radius $\sqrt n - 1$, which grows without bound. The crossover point is dimension $n=9$, where the central sphere, of radius $\sqrt 9 - 1 = 2$, just barely touches the faces of the box, as well as each of the 512(!) spheres in the corners. In fact, in high dimensions nearly all of the central sphere's volume is outside the box.
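The crossover calculation in a few lines (the corner-sphere centers sit at distance $\sqrt n$ from the origin, so the central sphere has radius $\sqrt n - 1$):

```python
import math

def central_radius(n):
    """Radius of the central sphere in dimension n: sqrt(n) - 1."""
    return math.sqrt(n) - 1

# The central sphere stays inside the box [-2, 2]^n exactly when
# sqrt(n) - 1 <= 2, i.e. n <= 9; beyond that it pokes out of every face.
inside = [n for n in range(2, 20) if central_radius(n) <= 2]
```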
{ "language": "en", "url": "https://math.stackexchange.com/questions/139699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "133", "answer_count": 45, "answer_id": 32 }
If $b^2$ is the largest square divisor of $n$ and $a^2 \mid n$, then $a \mid b$. I am trying to prove this: $n$, $a$ and $b$ are positive integers. If $b^2$ is the largest square divisor of $n$ and $a^2 \mid n$, then $a \mid b$. I want to prove this by contradiction, and I don't want to go via the fundamental theorem of arithmetic to do the contradiction. Can I prove this purely from properties of divisibility and GCD? Since $b^2$ is the largest square divisor of $n$, $$ a^2 \le b^2 \implies a \le b. $$ Let us assume that $a \nmid b$. Now, I want to arrive at the fact that there is a positive integer $c$ such that $c > b$ and $c^2 \mid n$. This will be a contradiction to the assumption that $b^2$ is the largest square divisor of $n$. How can I do this?
Let $\operatorname{lcm}(a,b)=\frac{ab}{\gcd(a,b)}$. Since the $\gcd$ divides both $a$ and $b$, it's clear from the definition that the $\operatorname{lcm}$ is an integer divisible by both $a$ and $b$. And if $a$ does not divide $b$, then the $\operatorname{lcm}$ is strictly greater than $b$, since $a\neq \gcd(a,b)$. By this question, the square root of an integer is either an integer or irrational, so since $a^2b^2|n^2$, $ab|n$. Pick $x$ and $y$ so that $ax+by=\gcd(a,b)$. Then $\frac{n}{(\operatorname{lcm}(a,b))^2}=\frac{n}{a^2b^2}\cdot (ax+by)^2=\frac{n}{a^2b^2}\cdot (a^2x^2+2abxy+b^2y^2)=\frac{nx^2}{b^2}+\frac{2nxy}{ab}+\frac{ny^2}{a^2}$ is an integer. Thus $(\operatorname{lcm}(a,b))^2$ is a square divisor of $n$; if $a\nmid b$ it would exceed $b^2$, contradicting the maximality of $b^2$, so $a\mid b$.
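A brute-force check of the statement for small $n$ (pure Python, no factorization needed):

```python
def largest_square_divisor_root(n):
    """The largest b with b^2 | n."""
    return max(b for b in range(1, n + 1) if n % (b * b) == 0)

def claim_holds(n):
    """Every a with a^2 | n divides the maximal such b."""
    b = largest_square_divisor_root(n)
    return all(b % a == 0 for a in range(1, n + 1) if n % (a * a) == 0)

all_ok = all(claim_holds(n) for n in range(1, 300))
```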
{ "language": "en", "url": "https://math.stackexchange.com/questions/139736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Understanding the Leontief inverse What I remember from economics about input/output analysis is that it basically analyses the interdependencies between business sectors and demand. If we use matrices we have $A$ as the input-output matrix, $I$ as an identity matrix and $d$ as final demand. In order to find the total output $x$ we may solve using the Leontief inverse: $$ x = (I-A)^{-1}\cdot d $$ So here's my question: Is there a simple rationale behind this inverse? Especially when considering the form: $$ (I-A)^{-1} = I+A + A^2 + A^3\ldots $$ What happens if we change an element $a_{i,j}$ in $A$? How is this transmitted within the system? And is there decent literature about this behaviour around? Thank you very much for your help!
The equation you are concerned with relates total output $x$ to intermediate output $Ax$ plus final output $d$, $$ x = Ax + d $$. If the inverse $(I - A)^{-1}$ exists, then a unique solution to the equation above exists. Note that some changes of $a_{ij}$ may cause a determinate system to become indeterminate, meaning there can be many feasible production plans. Also, increasing $a_{ij}$ is equivalent to increasing the demand by sector $i$ for the good produced by sector $j$. Thus, as sector $i$ produces more, it will consume more of sector $j$'s goods in its production process.
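A numerical sketch of the series rationale, on a made-up 2-sector economy (the numbers in `A` and `d` are illustrative only): iterating $x \leftarrow d + Ax$ accumulates $d + Ad + A^2d + \dots$, the direct demand plus the inputs needed to produce the inputs, and converges to $(I-A)^{-1}d$ when the spectral radius of $A$ is below 1. A change in a single $a_{ij}$ then propagates through every power of $A$, affecting all sectors that feed sector $i$ directly or indirectly.

```python
A = [[0.2, 0.3],
     [0.1, 0.4]]        # spectral radius 0.5, so the series converges
d = [10.0, 20.0]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Accumulate the Neumann series via the fixed-point iteration x <- d + A x.
x = d[:]
for _ in range(200):
    Ax = mat_vec(A, x)
    x = [d[i] + Ax[i] for i in range(len(d))]

# Residual of (I - A) x = d should vanish at the fixed point.
Ax = mat_vec(A, x)
residual = max(abs(x[i] - Ax[i] - d[i]) for i in range(len(d)))
```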
{ "language": "en", "url": "https://math.stackexchange.com/questions/139801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
How to prove $\left\{ \omega\mid X(\omega)=Y(\omega)\right\} \in\mathcal{F}$, if $X$ and $Y$ are measurable? Given a probability space $(\Omega ,\mathcal{F} ,\mu)$, let $X$ and $Y$ be $\mathcal{F}$-measurable real valued random variables. How would one prove that $\left\{ \omega\mid X(\omega)=Y(\omega)\right\} \in\mathcal{F}$? My thoughts: Since $X$ and $Y$ are measurable, it is true that for each $x\in\mathbb{R}$: $\left\{ \omega\mid X(\omega)<x\right\} \in\mathcal{F}$ and $\left\{ \omega\mid Y(\omega)<x\right\} \in\mathcal{F}$. It follows that $\left\{ \omega\mid X(\omega)-Y(\omega)\leq x\right\} \in\mathcal{F}$. Therefore $\left\{ \omega \,\middle|\, -\tfrac{1}{n}\leq X(\omega)-Y(\omega)\leq \tfrac{1}{n} \right\} \in\mathcal{F}$ for each $n\in\mathbb{N}$, and hence $\left\{ \omega\mid X(\omega)=Y(\omega)\right\}=\bigcap_{n\in\mathbb{N}}\left\{ \omega \,\middle|\, -\tfrac{1}{n}\leq X(\omega)-Y(\omega)\leq \tfrac{1}{n} \right\} \in\mathcal{F}$. Am I working in the correct direction? I appreciate any constructive answer!
$$[X\ne Y]=\bigcup_{q\in\mathbb Q}\left([X\lt q]\cap[Y\geqslant q]\right)\cup\left([X\geqslant q]\cap[Y\lt q]\right)=\bigcup_{q\in\mathbb Q}\left([X\lt q]\Delta[Y\lt q]\right) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/139853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is 1100 a valid state for this machine? A room starts out empty. Every hour, either 2 people enter or 4 people leave. In exactly a year, can there be exactly 1100 people in the room? I think there can be because 1100 is even, but how do I prove/disprove it?
The process can be rephrased as making a required motion of $+2$ every time, with an optional additional motion of $-6$ at some steps. The forward part of that motion will reach $+2N$ in $N$ steps, plus some backward motion divisible by $6$ to attain the final total motion of $T=+1100$. Here $N$ and $T$ are given and the exact condition for a pair $(N,T)$ to be realizable is that $(2N - T)/6$ be a non-negative integer. That integer is equal to the number of backward moves, whether thought of as $-4$'s or $-6$'s. This answer was found in reverse, by solving the linear $2 \times 2$ system for the number of $+2$ and $-4$ moves (given $N$ and $T$), and interpreting the condition for the numbers to be non-negative integers. As explained in the previous answers, for the problem as posted, $2N = 2 \times 24 \times \text{(days in year)}$ is divisible by $6$ and $1100$ is not, so the conditions are not satisfied.
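The reachability condition can be checked mechanically, and cross-validated by brute force on small step counts (taking a 365-day year below; a leap year does not change the divisibility argument, since $2N$ stays divisible by 6):

```python
def reachable(N, T):
    """After N moves of +2 or -4, total T is attainable iff the number of
    backward moves b = (2N - T)/6 is an integer with 0 <= b <= N."""
    b, r = divmod(2 * N - T, 6)
    return r == 0 and 0 <= b <= N

def reachable_brute(N):
    """All attainable totals after N moves, by exhaustive expansion."""
    totals = {0}
    for _ in range(N):
        totals = {t + 2 for t in totals} | {t - 4 for t in totals}
    return totals

N_year = 24 * 365
year_possible = reachable(N_year, 1100)   # False: 2N is divisible by 6, 1100 is not
```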
{ "language": "en", "url": "https://math.stackexchange.com/questions/139931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 3 }
A characterization of open sets Let $(M,d)$ be a metric space. Then a set $A\subset M$ is open if, and only if, $A \cap \overline X \subset \overline {A \cap X}$ for every $X\subset M$. This is a problem from metric spaces, but I think that it only requires topology. I don't know how to do it.
Suppose first that $A$ is open. Let $X\subseteq M$, and suppose that $x\in A\cap\operatorname{cl}X$. Let $U$ be any open set containing $x$; then $A\cap U$ is an open neighborhood of $x$, and $x\in\operatorname{cl}X$, so $(A\cap U)\cap X\ne\varnothing$. Thus, $U\cap(A\cap X)\ne\varnothing$, and since $U$ was an arbitrary open set containing $x$, we conclude that $x\in\operatorname{cl}(A\cap X)$. Finally, $x$ was an arbitrary element of $A\cap\operatorname{cl}X$, so $A\cap\operatorname{cl}X\subseteq\operatorname{cl}(A\cap X)$. This proves the ($\Rightarrow$) direction. Here's a hint for the ($\Leftarrow$) direction: suppose that $A$ is not open, let $X=M\setminus A$, and show that $A\cap\operatorname{cl}X\nsubseteq\operatorname{cl}(A\cap X)$. (This is pretty easy to do.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/140012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to prove there exists a function $f \in L^1(X,\mu)$ with $f>0$ $\mu$-a.e. iff $\mu$ is $\sigma$-finite. Let $(X,\mathcal{M},\mu)$ be a measure space. How does one show that there exists a function $f \in L^1(X,\mu)$ with $f>0$ $\mu$-a.e. iff $\mu$ is $\sigma$-finite? Can you please help me out? Thank you.
* *If such a function exists, then put $A_n:=\{x\mid f(x)\geq \frac 1n\}$; each $A_n$ is measurable and of finite measure since $f$ is integrable. Let $N:=\{x\mid f(x)\leq 0\}$, which is a null set. Then $X=N\cup\bigcup_{n\geq 1}A_n$. *Conversely, if $(X,\mathcal M,\mu)$ is $\sigma$-finite, let $\{A_n\}$ be a sequence of pairwise disjoint sets such that $X=\bigcup_{n\geq 1}A_n$ and $\mu(A_n)$ is finite for each $n$. Then define $f(x)=\sum_{n\geq 1}\frac 1{n^2(\mu(A_n)+1)}\chi_{A_n}$. Then check that $f$ is measurable, almost everywhere positive and integrable.
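The integrability estimate $\int f\,d\mu = \sum_n \mu(A_n)/(n^2(\mu(A_n)+1)) \le \sum_n 1/n^2 < \infty$ behind that last check can be illustrated numerically; the choice $\mu(A_n) = n$ below is a made-up example:

```python
import math

# With f = sum_n 1/(n^2 (mu(A_n)+1)) * chi_{A_n}, the integral is
#   sum_n mu(A_n) / (n^2 (mu(A_n)+1)),
# and each term is < 1/n^2, so the series converges however large mu(A_n) is.
mu = lambda n: n  # hypothetical measures, for illustration only
terms = [mu(n) / (n**2 * (mu(n) + 1)) for n in range(1, 10001)]
integral = sum(terms)
bound = math.pi**2 / 6  # = sum 1/n^2
print(integral < bound)  # True
```

(For this particular choice the terms are $1/(n(n+1))$, so the partial sums even telescope to $1 - 1/10001$.)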
{ "language": "en", "url": "https://math.stackexchange.com/questions/140085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Determinant of a 3x3 matrix with 6 unknowns given the determinants of two 3x3 matrices with same unknowns? Given: $$ det(A) = 3 \\ det(B) = -4 $$ $$ A = \begin{pmatrix} a & b & c \\ 1 & 1 & 1\\ d & e & f \end{pmatrix} \\ B = \begin{pmatrix} a & b & c \\ 1 & 2 & 3 \\ d & e & f \end{pmatrix} \\ C = \begin{pmatrix} a & b & c \\ 4 & 6 & 8 \\ d & e & f \end{pmatrix} $$ Find $det(C)$. $$ det(A) = (af-cd)+(bd-ae)+(ce-bf) = 3 \\ det(B) = 2(af-cd)+3(bd-ae)+(ce-bf) = -4 \\ det(C) = 6(af-cd)+8(bd-ae)+4(ce-bf) = x $$ I've written this as an augmented matrix with $(af-cd), (bd-ae), (ce-bf)$ as the unknowns and found the reduced row echelon form to be: $$ \begin{pmatrix} 1 & 0 & 2 & 3 \\ 0 & 1 & -1 & -10 \\ 0 & 0 & 0 & x+2 \end{pmatrix} $$ Can I then conclude that $det(C) = -2$?
copper.hat's answer is a lovely answer, which uses very fundamental attributes of the determinant. Notice that your answer is an algebraic way of saying the same thing. They are really equivalent; preserving the column order you used, the augmented matrix is (I can't seem to get the nice vertical line...): $$ \begin{pmatrix} 1 & 1 & 1 & 3 \\ 2 & 3 & 1 & -4 \\ 6 & 8 & 4 & x \end{pmatrix} $$ Row reduction turns the third row into $(0, 0, 0 \mid x+2)$, exactly as you found, so consistency forces $x = -2$. (There is one small slip in your reduced form: the first row should read $(1, 0, 2 \mid 13)$ rather than $(1, 0, 2 \mid 3)$, but this does not affect the conclusion.) So yes, $\det(C) = -2$. The important point here is that once you've extracted your equations and put them in your system, the row reduction you perform is exactly what copper.hat did in his breakdown: the middle row of $C$ is $2$ times the middle row of $A$ plus $2$ times the middle row of $B$, so $\det(C) = 2\det(A) + 2\det(B) = 2(3) + 2(-4) = -2$.
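The row-linearity fact being used — $\det$ is linear in the middle row, and $(4,6,8) = 2(1,1,1) + 2(1,2,3)$ — can be sanity-checked with arbitrary sample values (the particular numbers below are made up):

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Arbitrary sample values for the unknowns a..f; any choice works, since the
# identity below only uses linearity of det in the middle row:
a, b, c, d, e, f = 2, -1, 3, 5, 0, 7
A = [[a, b, c], [1, 1, 1], [d, e, f]]
B = [[a, b, c], [1, 2, 3], [d, e, f]]
C = [[a, b, c], [4, 6, 8], [d, e, f]]

print(det3(C) == 2 * det3(A) + 2 * det3(B))  # True

# In the problem det(A) = 3 and det(B) = -4, hence det(C) = 2*3 + 2*(-4) = -2.
```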
{ "language": "en", "url": "https://math.stackexchange.com/questions/140144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Minimum tiles for a grid given a few conditions Today, I came across an exercise in Problem Solving Strategies by Johnson and Herr, and I was not sure of the best way to solve it. The problem given was: Below I drew up a quick sketch of a diagram. Note that each row and column should be the same height and width, which is not apparent by my sketch (sorry!). Also, there should be a horizontal line ending the last row of tiles. My line of thinking was that since there were $12$ vertical lines and $9$ horizontal lines, there are $11$ horizontal tiles per row and $8$ vertical tiles per column. Also note that the total number of intersections is given by $12(9) = 108$. (Note I am unsure whether to count the very top horizontal line as one of the nine lines and the far left line as one of the twelve vertical lines.) Each tile has $4$ intersections. Thus the minimum number of tiles to cover the area would be $\frac{108}{4} = 27$ tiles. However, due to there being an odd number of lines ($9$), $24$ tiles would only cover $8$ lines ($96$ intersections). So, to account for those last $12$ intersections, we need to add $6$ more tiles - giving us a total minimum number of tiles needed being $30$. Is this the type of logic you would use to solve similar problems to this - or is there a sneakier way perhaps?
Here's the general formula I came up with: Given $m$ horizontal lines and $n$ vertical lines, the number of tiles needed to cover the grid is $\lceil\frac{m}{2}\rceil\times\lceil\frac{n}{2}\rceil$ So with $m=9$ horizontal lines and $n=12$ vertical lines, $\lceil\frac{9}{2}\rceil\times\lceil\frac{12}{2}\rceil=\lceil4.5\rceil\times\lceil6\rceil=5\times6=30$ As far as the method goes, I started with a very simple (1 vertical by 1 horizontal line) case and then expanded it to a larger case (2 vertical by 3 horizontal) and so on. Start with a trivial case, then with a slightly more complex case, and then generalize.
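The formula is easy to check mechanically (the function name is mine):

```python
import math

def min_tiles(horizontal_lines, vertical_lines):
    # each 2x2 block of intersections can be covered by one tile,
    # rounding up when a line count is odd
    return math.ceil(horizontal_lines / 2) * math.ceil(vertical_lines / 2)

print(min_tiles(9, 12))  # 30, matching the hand computation above
```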
{ "language": "en", "url": "https://math.stackexchange.com/questions/140191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can every continuous function that is curved be expressed as a formula? By "curved", I mean that there is no portion of a graph of a function that is a straight line. (Let us first limit the case into 2-dimensional cases.. and if anyone can explain the cases in more than 2-dimensional scenarios I would appreciate.. but what I really want to know is the 2-dimensional case.) Let us also say that a function is surjective (domain-range relationship). So, can every continuous and curved function be described as a formula, such as sines, cosines, combinations of them, or whatever formula? thanks.
No. First, because when you say "whatever formula" you probably include the elementary operations, maybe fractional powers, and a few known functions (sine, cosine, exponential, say); there are many other functions, not only continuous but infinitely differentiable, that are known to be impossible to express in terms of the elementary functions mentioned above. But there is also something deeper, which is that when you say "continuous", you are graphically thinking of a differentiable function. Any function that is expressed "by a formula" is almost surely differentiable, and most likely infinitely differentiable. But there are continuous functions which are not differentiable at even a single point (the Weierstrass function is the classical example).
{ "language": "en", "url": "https://math.stackexchange.com/questions/140240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Diagonal in projective space This is exercise $2.15$ from Harris' book "Algebraic Geometry: A First Course". Show that the image of the diagonal in $\mathbb{P}^{n} \times \mathbb{P}^{n}$ under the Segre map is isomorphic to the Veronese variety $v_{2}(\mathbb{P}^{n})$. Would the idea be just to map everything to itself and ignore the repeated monomials of degree $2$? For example, if $n=1$, we get that the diagonal under the Segre mapping sends a point $([a : b],[a:b]) \mapsto [a^{2} : ab : ab : b^{2}]$. Now this almost looks like the $2$-Veronese map $\mathbb{P}^{1} \rightarrow \mathbb{P}^{2}$ given by $[s : t] \mapsto [s^{2} : st : t^{2}]$. So what I mean is simply ignore the repeated monomial $ab$ and map $[a^{2} : ab: ab : b^{2}]$ to $[a^{2} : ab : b^{2}]$. Would this work?
You have exactly the right idea. I would formulate it slightly differently. If we continue with your example, we can write down a map $\mathbf P^2 \to \mathbf P^3$ as $$ [x:y:z]\mapsto [x:y:y:z].$$ If we restrict this map to the Veronese embedded copy of $\mathbf P^1$, then we get an isomorphism onto the image of the diagonal under the Segre. This formalizes the idea that one can "ignore repeated monomials".
{ "language": "en", "url": "https://math.stackexchange.com/questions/140291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Linear algebra question about definition of basis From Wikipedia: "A basis $B$ of a vector space $V$ over a field $K$ is a linearly independent subset of $V$ that spans (or generates) $V$.(1) $B$ is a minimal generating set of $V$, i.e., it is a generating set and no proper subset of B is also a generating set.(2) $B$ is a maximal set of linearly independent vectors, i.e., it is a linearly independent set but no other linearly independent set contains it as a proper subset."(3) I tried to prove (1) => (3) => (2), to see that these are equivalent definitions, can you tell me if my proof is correct: (1) => (3): Let $B$ be linearly independent and spanning. Then $B$ is maximal: Let $v$ be any vector in $V$. Then since $B$ is spanning, $\exists b_i \in B, k_i \in K: \sum_{i=1}^n b_i k_i = v$. Hence $v - \sum_{i=1}^n b_i k_i = 0$ and hence $B \cup \{v\}$ is linearly dependent. So $B$ is maximal since $v$ was arbitrary. (3) => (2): Let $B$ be maximal and linearly independent. Then $B$ is minimal and spanning: spanning: Let $v \in V$ be arbitrary. $B$ is maximal hence $B \cup \{v\}$ is linearly dependent. i.e. $\exists b_i \in B , k_i \in K : \sum_i b_i k_i = v$, i.e. $B$ is spanning. minimal: $B$ is linearly independent. Let $b \in B$. Then $b \notin span( B \setminus \{b\})$ hence $B$ is minimal. (2) => (1): Let $B$ be minimal and spanning. Then $B$ is linearly independent: Assume $B$ not linearly independent then $\exists b_i \in B, k_i \in K: b = \sum_i b_i k_i$. Then $B \setminus \{b\}$ is spanning which contradicts that $B$ is minimal.
The proof looks good (apart from the obvious mix-up in the numbering). One thing which is not totally precise: in your second proof you write Let $v\in V$ be arbitrary. $B$ is maximal hence $B\cup\{v\}$ is linearly dependent. i.e. $\exists b_i\in B,k_i\in K: \sum_i b_ik_i=v$, i.e. $B$ is spanning. To be precise, you have $k_0, k_i\in K$, not all vanishing, such that $k_0v+\sum_i k_ib_i=0$. Since $B$ is linearly independent, $k_0=0$ would force $k_i=0$ for all $i$, contradicting that not all coefficients vanish; therefore $k_0\neq 0$, and $v=-k_0^{-1}\sum_i k_ib_i$ is in the span of $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/140359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
a function as a character I meet difficulty in Problem 4.5 in the book "Representation theory of finite group, an introductory approach" of Benjamin Steinberg : For $v=(c_1,\cdots,c_m)\in(\mathbb{Z}/2\mathbb{Z})^m$, let $\alpha(v)=\{i : c_i=[1]\}$. For each $Y\subseteq\{1,\cdots,m\}$, define a function $\chi_Y : (\mathbb{Z}/2\mathbb{Z})^m \to \mathbb{C} $ by $\chi_Y(v)=(-1)^{|\alpha(v)\cap Y|}$. * *Prove that $\chi_Y$ is a character. *Show that every irreducible character of $(\mathbb{Z}/2\mathbb{Z})^m$ is of the form $\chi_Y$ for some subset $Y$ of $\{1,\cdots,m\}$. Thanks in advance.
I think here is the solution: looking at the second question, it suggests we prove that $\chi_Y$ is a one-dimensional representation, i.e., that $\chi_Y$ is a group homomorphism into $\mathbb{C}^\times$. Let $v_1,v_2\in(\mathbb{Z}/2\mathbb{Z})^m$. Then $\alpha(v_1+v_2)=\alpha(v_1)\,\triangle\,\alpha(v_2)$ (the symmetric difference), and so $|\alpha(v_1+v_2)\cap Y| = |\alpha(v_1)\cap Y|+|\alpha(v_2)\cap Y| -2|\alpha(v_1)\cap\alpha(v_2)\cap Y|$; since $(-1)^{-2k}=1$, the result $\chi_Y(v_1+v_2)=\chi_Y(v_1)\chi_Y(v_2)$ follows. For (2): since $(\mathbb{Z}/2\mathbb{Z})^m$ is abelian, every irreducible representation is linear (one-dimensional). Let $\phi$ be one; we need to produce a set $Y_\phi$ with $\phi=\chi_{Y_\phi}$. Every irreducible representation of $(\mathbb{Z}/2\mathbb{Z})^m$ is a product of irreducible representations of the factors $\mathbb{Z}/2\mathbb{Z}$, which are the trivial character $1$ and $\rho$, where $\rho(0)=1$ and $\rho(1)=-1$. So $\phi$ is a product $f_1\cdots f_m$ where each $f_i$ is either $1$ or $\rho$, and then $Y_\phi=\{i : f_i=\rho\}$ works.
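Both claims can be verified by brute force for small $m$ (a sketch; the helper names are mine):

```python
from itertools import product, combinations, chain

m = 3
group = list(product([0, 1], repeat=m))  # (Z/2Z)^m as bit tuples
subsets = list(chain.from_iterable(
    combinations(range(m), r) for r in range(m + 1)))

def chi(Y, v):
    # chi_Y(v) = (-1)^{|alpha(v) ∩ Y|}, where alpha(v) = positions with a 1
    return (-1) ** sum(1 for i in Y if v[i] == 1)

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

# chi_Y is a homomorphism for every subset Y:
hom = all(chi(Y, add(u, v)) == chi(Y, u) * chi(Y, v)
          for Y in subsets for u in group for v in group)
print(hom)  # True

# The 2^m characters are pairwise distinct; since the group is abelian of
# order 2^m, they therefore exhaust all irreducible characters:
tables = {tuple(chi(Y, v) for v in group) for Y in subsets}
print(len(tables) == 2 ** m)  # True
```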
{ "language": "en", "url": "https://math.stackexchange.com/questions/140397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can someone walk me through this differential problem? I'm having a little difficulty understanding how to do the .05 using differentials. I'm just hoping someone can walk me through, step by step, and explain why they are during it that way. $$\sqrt[3] {27.05}$$ Edit Apologies to all, I wasn't very clear on what I was asking. I was looking for someone to explain how to find an approximate answer to the equation above using differentials. (such as @Arturo has done) @Sasha @Ross I apologize for wasting your time when answering the question using series.
The general philosophy when using differentials: You are trying to find the value of a function at a difficult point, but there is a point, very close to the difficult point, where the function is easy to evaluate. Then one can use the formula $$f(a+\Delta x) = f(a)+\Delta y \approx f(a)+dy = f(a)+f'(a)dx=f(a)+f'(a)(x-a)\quad \text{ for }x\text{ near }a,$$ where $\Delta x = dx$ is the change in $x$ (how far the difficult point is from $a$, which is $x-a$), $\Delta y$ is the change in value (which is $f(x)-f(a)$), and $dy$ is the differential of $y$ (which is $dy = f'(x)dx$). If $\Delta x=dx$ is small, then $\Delta y$ is very close to $dy$, so we can use $dy$ to approximate. So you want to do the following: * *Identify the function you are trying to evaluate. This will be your $f$. *Find a point near the point where you are trying to evaluate where the function is easy: this will be your $a$. *Compute $f(a)$ (which is supposed to be easy) and compute $f'(a)$ (which hopefully will be easy as well). *Plug into the formula. In this case, $f(x) = \sqrt[3]{x}$, and the "hard point" is $x=27.05$. Is there a point $a$ near $x$ where $f$ is easy to evaluate? Yes! $f(27)$ is easy. So take $a=27$, $\Delta x = dx = .05$, and use the approximation. You'll have to compute $f'(x)$ and $f'(27)$.
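Carrying out steps 1–4 numerically for this example, with $f(27)=3$ and $f'(27)=1/27$ (the helper name is mine):

```python
def approx_cbrt(value, a):
    """Linear (differential) approximation of value**(1/3) around the easy point a.

    f(x) = x**(1/3), f'(x) = (1/3) * x**(-2/3), and f(a + dx) ≈ f(a) + f'(a)*dx.
    """
    fa = a ** (1 / 3)
    fpa = (1 / 3) * a ** (-2 / 3)
    return fa + fpa * (value - a)

approx = approx_cbrt(27.05, 27)       # = 3 + 0.05/27
actual = 27.05 ** (1 / 3)
print(round(approx, 6))               # 3.001852
print(abs(approx - actual) < 1e-5)    # True: the error is second order in dx
```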
{ "language": "en", "url": "https://math.stackexchange.com/questions/140454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Determining the dimension and a basis for a vector space I have the following problem: Let $W$ be a vector space of all solutions to these homogenous equations: $$\begin{matrix} x &+& 2y &+& 2z &-& s &+& 3t &=& 0 \\ x &+& 2y &+& 3z &+& s &+& t &=& 0 \\ 3x &+& 6y &+& 8z &+& s &+& 5t &=& 0\end{matrix}$$ Find the dimension of the space $W$ and determine a basis for $W$. I tried solving the above kernel to get the solutions. The matrix: $$\left(\begin{matrix} 1 & 2 & 2 & -1 & 3 \\ 1 & 2 & 3 & 1 & 1 \\ 3 & 6 & 8 & 1 & 5\end{matrix}\right)$$ When performing Gauss-Jordan on it, I get the matrix rank to be $3$: $$\left(\begin{matrix} 1 & 0 & -1 & 0 & 0 \\ 0 & 5 & 2 & 0 & 0 \\ 4 & 10 & 0 & 0 & 0\end{matrix}\right)$$ So I get lost at this point. I don't know how to get the dimension nor how to determine a basis for it. Can anyone point out the next thing I should do and whether I started off good?
There is a slip in the reduction: the matrix obtained in the question is not row-equivalent to the original one, so let's start over from $$\left(\begin{matrix} 1 & 2 & 2 & -1 & 3 \\ 1 & 2 & 3 & 1 & 1 \\ 3 & 6 & 8 & 1 & 5\end{matrix}\right).$$ Subtracting the first row from the second, and $3$ times the first row from the third, we get the matrix $$\left(\begin{matrix} 1 & 2 & 2 & -1 & 3 \\ 0 & 0 & 1 & 2 & -2 \\ 0 & 0 & 2 & 4 & -4\end{matrix}\right)$$ Subtracting the second row $2$ times from the third, we get the matrix $$\left(\begin{matrix} 1 & 2 & 2 & -1 & 3 \\ 0 & 0 & 1 & 2 & -2 \\ 0 & 0 & 0 & 0 & 0\end{matrix}\right),$$ so the rank is $2$, not $3$. This means the original set of equations is equivalent to $$\begin{align} x + 2y + 2z - s + 3t &= 0 \\ z + 2s - 2t &= 0.\end{align}$$ So for any $\lambda, \mu, \rho$ we get $y = \lambda$, $s = \mu$, $t = \rho$, $z = -2\mu + 2\rho$ and $x = -2\lambda + 5\mu - 7\rho$ as a solution. In particular: * *$(\lambda, \mu, \rho) = (1,0,0)$ gives the solution $(x,y,z,s,t) = (-2, 1, 0, 0, 0)$. *$(\lambda, \mu, \rho) = (0,1,0)$ gives the solution $(x,y,z,s,t) = (5, 0, -2, 1, 0)$. *$(\lambda, \mu, \rho) = (0,0,1)$ gives the solution $(x,y,z,s,t) = (-7, 0, 2, 0, 1)$. This means that $W$ is spanned by the three vectors: $$W = \left\langle \left(\begin{matrix} -2 \\ 1 \\ 0 \\ 0 \\ 0\end{matrix}\right), \left(\begin{matrix} 5 \\ 0 \\ -2 \\ 1 \\ 0\end{matrix}\right), \left(\begin{matrix} -7 \\ 0 \\ 2 \\ 0 \\ 1\end{matrix}\right)\right\rangle,$$ and looking at the $y$, $s$, $t$ coordinates shows these three vectors are linearly independent. Can you then determine the dimension of $W$?
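As a sanity check, one can recompute the rank of the coefficient matrix in exact arithmetic and test a candidate basis directly against the original equations (`rank` is a helper written for this check):

```python
from fractions import Fraction

rows = [[1, 2, 2, -1, 3],
        [1, 2, 3, 1, 1],
        [3, 6, 8, 1, 5]]

def rank(matrix):
    # Gaussian elimination over exact rationals
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank(rows))  # 2, so the solution space has dimension 5 - 2 = 3

# One concrete candidate basis of W (a check, not the only possible choice):
basis = [(-2, 1, 0, 0, 0), (5, 0, -2, 1, 0), (-7, 0, 2, 0, 1)]
print(all(sum(c * v for c, v in zip(row, vec)) == 0
          for row in rows for vec in basis))  # True
```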
{ "language": "en", "url": "https://math.stackexchange.com/questions/140532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
About the localization of a UFD I was wondering, is the localization of a UFD also a UFD? How would one go about proving this? It seems like it would be kind of messy to prove if it is true. If it is not true, what about localizing at a prime? Or what if the UFD is Noetherian?
One slick way is via Kaplansky's characterization: a domain is a UFD iff every nonzero prime ideal contains a nonzero prime. This is easily seen to be preserved by localization, hence the proof. Alternatively, proceed directly by showing that primes stay prime in the localization if they survive (don't become units). Thus UFD localizations are characterized by the set of primes that survive. The converse is also true for atomic domains, i.e. domains where nonzero nonunits factor into atoms (irreducibles). Namely, if $\rm\:D\:$ is an atomic domain and $\rm\:S\:$ is a saturated submonoid of $\rm\:D^*$ generated by primes, then $\rm\: D_S$ UFD $\rm\:\Rightarrow\:D$ UFD $\:\!$ (a.k.a. Nagata's Lemma). This yields a slick proof of $\rm\:D$ UFD $\rm\Rightarrow D[x]$ UFD, viz. $\rm\:S = D^*\:$ is generated by primes, so localizing yields the UFD $\rm\:F[x],\:$ $\rm\:F =\,$ fraction field of $\rm\:D.\:$ Thus $\rm\:D[x]\:$ is a UFD, by Nagata's Lemma. This gives a more conceptual, more structural view of the essence of the matter (vs. traditional argument by Gauss' Lemma).
{ "language": "en", "url": "https://math.stackexchange.com/questions/140584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 4, "answer_id": 1 }
Using modular arithmetic, how can one quickly find the natural number n for which $n^5 = 27^5 + 84^5 + 110^5 + 133^5$? Using modular arithmetic, how can one quickly find the natural number n for which $n^5 = 27^5 + 84^5 + 110^5 + 133^5$? I tried factoring individual components out, but it seemed really tedious.
If there is such an $n$, it must be a multiple of $6$ and $1$ less than a multiple of $5$ (reduce the right-hand side mod $2$, $3$, and $5$, using $a^5 \equiv a$ for these prime moduli), and it must exceed $133$ but not by a whole lot, so my money's on $144$.
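The guess is easy to confirm by direct computation (this is in fact the well-known Lander–Parkin counterexample to Euler's sum of powers conjecture):

```python
total = 27**5 + 84**5 + 110**5 + 133**5

print(total == 144**5)   # True
print(144 % 6, 144 % 5)  # 0 4 — a multiple of 6, and 1 less than a multiple of 5
```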
{ "language": "en", "url": "https://math.stackexchange.com/questions/140659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Parametric equation for a line which lies on a plane Struggling to begin answering the following question: Let $L$ be the line given by $x = 3-t, y= 2+t, z = -4+2t$. $L$ intersects the plane $3x-2y+z=1$ at the point $P = (3,2,-4)$. Find parametric equations for the line through $P$ which lies on plane and is perpendicular to $L$. So far, I know that I need to find some line represented by a vector $n$ which is orthogonal to $L$. So, with the vector of $L$ represented by $v$, I have: $$n\cdot v = 0 \Rightarrow [a, b, c] \cdot [-1, 1, 2] = 0 \Rightarrow -a + b + 2c = 0$$ I am not sure how to proceed from here, or if I am even on the right track.
You know that the line you want is perpendicular to the line L, which has direction vector $\langle -1,1,2\rangle$, and that the line you want lies in the given plane, which has normal vector $\langle 3, -2, 1\rangle$. So the line you want is orthogonal to both $\langle -1,1,2\rangle$ and $\langle 3, -2, 1\rangle$ and you can use the cross product of these two vectors as the direction vector of the line you want.
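A quick numeric check of that cross product (the helper functions are mine; the opposite orientation $n \times v$ would serve equally well):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v = (-1, 1, 2)   # direction vector of L
n = (3, -2, 1)   # normal vector of the plane

d = cross(v, n)
print(d)                     # (5, 7, -1)
print(dot(d, v), dot(d, n))  # 0 0 — orthogonal to both, as required
```

With $P=(3,2,-4)$ this gives the parametric equations $x = 3+5t$, $y = 2+7t$, $z = -4-t$; substituting into $3x-2y+z$ gives $1$ for every $t$, so the line does lie in the plane.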
{ "language": "en", "url": "https://math.stackexchange.com/questions/140728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to tell if a quartic equation has a multiple root. Is there any way to tell whether a quartic equation has double or triple root(s)? $$x^4 + a x^3 + b x^2 + c x + d = 0$$
Maybe I misunderstood the question, but I think this might be useful for you. Let $f(x)\in F[x]$ be a polynomial and $f'(x)$ be the formal derivative of $f(x)$ where $F$ is any field. Then $f(x)$ has multiple roots if and only if $\gcd(f(x),f'(x))\ne 1$. If $F$ is a field of characteristic zero then you know more: If you denote $g(x)=\gcd(f(x),f'(x))$ and divide $f(x)$ by $g(x)$, i.e. you find the polynomial $h(x)$ such that $f(x)=g(x)\cdot h(x)$, then the polynomials $f(x)$ and $h(x)$ have the same roots, but the multiplicity of each root of $h(x)$ is one. You are asking about polynomial over $\mathbb C$; for this field the above results are true. Moreover, $\mathbb C$ is algebraically closed, hence every quartic polynomial has 4 roots in $\mathbb C$, if we count them with multiplicities. So in your case, you can compute $g(x)=\gcd(f(x),f'(x))$, e.g. using Euclidean algorithm. If $g(x)=1$, then all (complex) roots of $f(x)$ have multiplicity one. If $g(x)=f'(x)$ then $f(x)$ has only one root of multiplicity 4. In the remaining two cases there is a root of multiplicity 2 or 3. (However, if you're only interested in multiplicity of real roots, the situation is slightly more complicated.)
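Here is a minimal self-contained sketch of the test over the rationals, with exact `Fraction` arithmetic (coefficient lists are highest-degree first; all helper names are mine — in practice a computer algebra system does this in one `gcd` call):

```python
from fractions import Fraction

def trim(p):
    # drop leading zero coefficients (lists are highest-degree first)
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def polymod(a, b):
    # remainder of a divided by b, with exact rational coefficients
    a = [Fraction(c) for c in trim(a)]
    b = [Fraction(c) for c in trim(b)]
    while len(a) >= len(b) and any(c != 0 for c in a):
        q = a[0] / b[0]
        padded = b + [Fraction(0)] * (len(a) - len(b))  # b * x^shift
        a = trim([ai - q * bi for ai, bi in zip(a, padded)][1:] or [Fraction(0)])
    return a

def polygcd(a, b):
    # Euclidean algorithm, normalized to a monic gcd
    while any(c != 0 for c in trim(b)):
        a, b = b, polymod(a, b)
    a = [Fraction(c) for c in trim(a)]
    monic = [c / a[0] for c in a]
    return [int(c) if c.denominator == 1 else c for c in monic]

def derivative(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])] or [0]

# f = (x-1)^2 (x-2)(x-3) = x^4 - 7x^3 + 17x^2 - 17x + 6 has the double root 1:
f = [1, -7, 17, -17, 6]
print(polygcd(f, derivative(f)))  # [1, -1]: gcd = x - 1, so f has a multiple root

# x^4 - 1 has four simple roots (two of them complex), so the gcd is 1:
g = [1, 0, 0, 0, -1]
print(polygcd(g, derivative(g)))  # [1]
```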
{ "language": "en", "url": "https://math.stackexchange.com/questions/140797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Vector derivative with power of two in it I want to compute the gradient of the following function with respect to $\beta$ $$L(\beta) = \sum_{i=1}^n (y_i - \phi(x_i)^T \cdot \beta)^2$$ Where $\beta$, $y_i$ and $x_i$ are vectors. The $\phi(x_i)$ simply adds additional coefficients, with the result that $\beta$ and $\phi(x_i)$ are both $\in \mathbb{R}^d$ Here is my approach so far: \begin{align*} \frac{\partial}{\partial \beta} L(\beta) &= \sum_{i=1}^n ( \frac{\partial}{\partial \beta} y_i - \frac{\partial}{\partial \beta}( \phi(x_i)^T \cdot \beta))^2\\ &= \sum_{i=1}^n ( 0 - \frac{\partial}{\partial \beta}( \phi(x_i)^T \cdot \beta))^2\\ &= - \sum_{i=1}^n ( \partial \phi(x_i)^T \cdot \beta + \phi(x_i)^T \cdot \partial \beta))^2\\ &= - \sum_{i=1}^n ( 0 \cdot \beta + \phi(x_i)^T \cdot \textbf{I}))^2\\ &= - \sum_{i=1}^n ( \phi(x_i)^T \cdot \textbf{I}))^2\\ \end{align*} But what to do with the power of two? Have I made any mistakes? Because $\phi(x_i)^T \cdot \textbf I$ seems to be $\in \mathbb{R}^{1 \times d}$ $$= - 2 \sum_{i=1}^n \phi(x_i)^T\\$$
Vector differentiation can be tricky when you're not used to it. One way to get around that is to use summation notation until you're confident enough to perform the derivatives without it. To begin with, let's define $X_i=\phi(x_i)$ since it will save some typing, and let $X_{ni}$ be the $n$th component of the vector $X_i$. Using summation notation, you have $$\begin{align} L(\beta) & = (y_i-X_{mi}\beta_m)(y_i - X_{ni}\beta_n) \\ & = y_i y_i - 2 y_i X_{mi} \beta_m + X_{mi}X_{ni}\beta_m\beta_n \end{align}$$ To take the derivative with respect to $\beta_k$, you do $$\begin{align} \frac{\partial}{\partial\beta_k} L(\beta) & = -2 y_i X_{mi} \delta_{km} + 2 X_{mi} X_{ni} \beta_n \delta_{km} \\ & = -2 X_{mi} \delta_{km} (y_i - X_{ni} \beta_n) \\ & = -2 X_{ki} (y_i - X_{ni} \beta_n) \end{align}$$ which you can then translate back into vector notation: $$ \frac{\partial}{\partial\beta} L(\beta) = -2 \sum_{i=1}^n X_i (y_i - X_i^T\beta) $$ Does that help?
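A finite-difference check of the final formula, with a made-up feature map $\phi(x)=(1,x)$ and made-up data:

```python
def phi(x):
    # hypothetical feature map: phi(x) = (1, x), so d = 2
    return [1.0, x]

def loss(beta, xs, ys):
    return sum((y - sum(p * b for p, b in zip(phi(x), beta))) ** 2
               for x, y in zip(xs, ys))

def grad(beta, xs, ys):
    # analytic gradient: -2 * sum_i phi(x_i) * (y_i - phi(x_i)^T beta)
    g = [0.0] * len(beta)
    for x, y in zip(xs, ys):
        r = y - sum(p * b for p, b in zip(phi(x), beta))
        for k, p in enumerate(phi(x)):
            g[k] += -2.0 * p * r
    return g

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 4.0]
beta = [0.5, 1.5]
h = 1e-6
numeric = [(loss([b + h if j == k else b for j, b in enumerate(beta)], xs, ys)
            - loss([b - h if j == k else b for j, b in enumerate(beta)], xs, ys))
           / (2 * h)
           for k in range(len(beta))]
analytic = grad(beta, xs, ys)
print(all(abs(a - q) < 1e-4 for a, q in zip(analytic, numeric)))  # True
```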
{ "language": "en", "url": "https://math.stackexchange.com/questions/140878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When the group of automorphisms of an extension of fields acts transitively Let $F$ be a field, $f(x)$ a non-constant polynomial, $E$ the splitting field of $f$ over $F$, $G=\mathrm{Aut}_F\;E$. How can I prove that $G$ acts transitively on the roots of $f$ if and only if $f$ is irreducible? (if we suppose that $f$ doesn't have linear factor and has degree at least 2, then we can take 2 different roots that are not in $F$, say $\alpha,\beta$, then the automorphism that switches these 2 roots and fixes the others and fixes also $F$, is in $G$, but I'm not using the irreducibility, so where is my mistake?
I think what you needed as a hypothesis was that the extension is Galois. By this, I mean that the polynomial $f(x)$ splits into distinct linear factors in an extension, so I am just assuming the negation of the problem about having repeated roots that other folks were suggesting. In this case, the transitive action on the roots of $f(x)$ should give irreducibility, as the action permutes the roots of any irreducible factor of $f(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/140927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 2, "answer_id": 1 }
Lebesgue measurable set that is not a Borel measurable set exact duplicate of Lebesgue measurable but not Borel measurable BUT! can you please translate Miguel's answer and expand it with a formal proof? I'm totally stuck... In short: Is there a Lebesgue measurable set that is not Borel measurable? They are an order of magnitude apart so there should be plenty examples, but all I can find is "add a Lebesgue-zero measure set to a Borel measurable set such that it becomes non-Borel-measurable". But what kind of zero measure set fulfills such a property?
Let $\phi(x)$ be the Cantor function, which is a non-decreasing continuous function on the unit interval $[0,1]$. Define $\psi(x) = x + \phi(x)$, which is an increasing continuous function $\psi: [0,1] \to [0,2]$; hence for every $y \in [0,2]$ there exists a unique $x \in [0,1]$ such that $y = \psi(x)$. Thus $\psi$ is a homeomorphism, and both $\psi$ and $\psi^{-1}$ map Borel sets to Borel sets. Let $C$ be the Cantor set. One can check that $\psi(C)$ has Lebesgue measure $1$ (the complement of $C$ is a union of intervals on which $\phi$ is constant, so $\psi$ preserves their lengths); in particular $\psi(C)$ contains a non-measurable, hence non-Borel, subset $S \subseteq \psi(C)$. Its preimage $\psi^{-1}(S)$ must be Lebesgue measurable, being a subset of the null Cantor set, but it is not Borel measurable: if it were, then $S = \psi(\psi^{-1}(S))$ would be Borel as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/141017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48", "answer_count": 1, "answer_id": 0 }
What did Gauss think about infinity? I have someone who is begging for a conversation with me about infinity. He thinks that Cantor got it wrong, and suggested to me that Gauss did not really believe in infinity, and would not have tolerated any hierarchy of infinities. I can see that a constructivist approach could insist on avoiding infinity, and deal only with numbers we can name using finite strings, and proofs likewise. But does anyone have any knowledge of what Gauss said or thought about infinity, and particularly whether there might be any justification for my interlocutor's allegation?
Here is a blog post from R J Lipton which throws some light on this question. Quoting from a letter by Gauss: ... so protestiere ich zuvörderst gegen den Gebrauch einer unendlichen Größe als einer vollendeten, welcher in der Mathematik niemals erlaubt ist. Das Unendliche ist nur eine façon de parler, indem man eigentlich von Grenzen spricht, denen gewisse Verhältnisse so nahe kommen als man will, während anderen ohne Einschränkung zu wachsen gestattet ist. The blog gives this translation: ... first of all I must protest against the use of an infinite magnitude as a completed quantity, which is never allowed in mathematics. The Infinite is just a mannner of speaking, in which one is really talking in terms of limits, which certain ratios may approach as close as one wishes, while others may be allowed to increase without restriction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/141130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 3, "answer_id": 0 }
Highest power of a prime $p$ dividing $N!$ How does one find the highest power of a prime $p$ that divides $N!$ and other related products? Related question: How many zeros are there at the end of $N!$? This is being done to reduce abstract duplicates. See Coping with *abstract* duplicate questions. and List of Generalizations of Common Questions for more details.
Here's a different approach I found while thinking in terms of relating the sum of digits of consecutive numbers written in base $p$. Part of the appeal of this approach is you might have at one point learned that there is a connection but don't remember what, this gives a quick way to reconstruct it. Let's consider $s_p(n)$, the sum of digits of $n$ in base $p$. How does the sum of digits change if we add $1$ to it? It usually just increments the last digit by 1, so most of the time, $$s_p(n+1) = s_p(n)+1$$ But that's not always true. If the last digit was $p-1$ we'd end up carrying, so we'd lose $p-1$ in the sum of digits, but still gain $1$. So the formula in that case would be, $$s_p(n+1) = s_p(n) + 1 - (p-1)$$ But what if after carrying, we end up carrying again? Then we'd keep losing another $p-1$ term. Every time we carry, we leave a $0$ behind as a digit in that place, so the total number of times we lose $p-1$ is exactly the number of $0$s that $n+1$ ends in, $v_p(n+1)$. $$s_p(n+1) = s_p(n) + 1 - (p-1)v_p(n+1)$$ Now let's rearrange to make the telescoping series and sum over, $$\sum_{n=0}^{k-1} s_p(n+1) - s_p(n) = \sum_{n=0}^{k-1}1 - (p-1)v_p(n+1)$$ $$s_p(k) = k - (p-1) \sum_{n=0}^{k-1}v_p(n+1)$$ Notice that since $v_p$ is completely additive so long as $p$ is a prime number, $$s_p(k) = k - (p-1) v_p\left(\prod_{n=0}^{k-1}(n+1)\right)$$ $$s_p(k) = k - (p-1) v_p(k!)$$ This rearranges to the well-known Legendre's formula, $$v_p(k!) = \frac{k-s_p(k)}{p-1}$$
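A quick check of the resulting identity against the more familiar form $v_p(k!) = \sum_{i\ge 1} \lfloor k/p^i \rfloor$ (helper names are mine):

```python
def s(n, p):
    # digit sum of n in base p
    total = 0
    while n:
        total += n % p
        n //= p
    return total

def v_factorial(k, p):
    # Legendre's formula as a sum: v_p(k!) = sum_{i>=1} floor(k / p^i)
    total, q = 0, p
    while q <= k:
        total += k // q
        q *= p
    return total

# verify v_p(k!) == (k - s_p(k)) / (p - 1) for a range of k and small primes p
ok = all(v_factorial(k, p) * (p - 1) == k - s(k, p)
         for p in (2, 3, 5, 7) for k in range(200))
print(ok)  # True
```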
{ "language": "en", "url": "https://math.stackexchange.com/questions/141196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70", "answer_count": 9, "answer_id": 4 }
Showing that $\cos(x)$ is a contraction mapping on $[0,\pi]$ How do I show that $\cos(x)$ is a contraction mapping on $[0,\pi]$? I would normally use the mean value theorem and find $\max|-\sin(x)|$ on $(0,\pi)$, but I don't think this will work here. So I think I need to look at $|\cos(x)-\cos(y)|$ directly, but I can't see how to get this into the form $|\cos(x)-\cos(y)|\leq\alpha|x-y|$. Thanks for any help
The idea of the mean value theorem is exactly right, but one has to be careful about the interval. Because $\cos(x)$ is continuously differentiable, there is a maximum absolute value of the derivative on each closed interval, and the mean value theorem shows that this maximum works as a Lipschitz constant: $|\cos(x)-\cos(y)| \leq \max|{-\sin}| \cdot |x-y|$. On $[0,1]$ the derivative satisfies $|{-\sin(x)}| \leq \sin(1) < 1$, so $\cos(x)$ is a contraction there. On all of $[0,\pi]$, however, $\max|{-\sin(x)}| = 1$ (attained at $\pi/2$), so this argument only yields the Lipschitz constant $1$; in fact no constant $\alpha < 1$ works, since the difference quotient tends to $-1$ near $\pi/2$. So $\cos(x)$ is not a contraction on all of $[0,\pi]$, although it does satisfy the strict inequality $|\cos(x)-\cos(y)| < |x-y|$ for $x \neq y$ (use $|\cos(x)-\cos(y)| = 2\,|\sin\frac{x+y}{2}|\,|\sin\frac{x-y}{2}|$ and $|\sin t| < |t|$ for $t \neq 0$).
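One can see this numerically by sampling the difference quotient on a grid (a sketch, not a proof; the grid resolution is arbitrary):

```python
import math

def lipschitz_estimate(lo, hi, steps=400):
    # estimate the best Lipschitz constant of cos on [lo, hi] from a grid
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return max(abs(math.cos(x) - math.cos(y)) / (x - y)
               for x in xs for y in xs if x > y)

print(lipschitz_estimate(0, 1) < 1)           # True: roughly sin(1) ≈ 0.841
print(lipschitz_estimate(0, math.pi) > 0.99)  # True: the constant creeps up to 1
```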
{ "language": "en", "url": "https://math.stackexchange.com/questions/141325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the difference between regression and classification? What is the difference between regression and classification, when we try to generate output for a training data set $x$?
Classification and regression can both be applied to problems where the response variable is, respectively, categorical or continuous, and the kind of result we want is what makes us choose between the two. For example, hard classifiers (e.g. an SVM) simply try to put the example in a specific class (e.g. whether a project is "profitable" or "not profitable", without saying by how much), whereas regression gives an exact profit value as some continuous number. However, in the case of classification we can also consider probabilistic models (e.g. logistic regression), where each class (or label) gets some probability, which can be weighted by the payoff associated with each label, and thus give us a final value on the basis of which we decide which label to assign. For instance, label $A$ has a probability of $0.3$ but a huge payoff (e.g. $1000$), while label $B$ has probability $0.7$ but a very low payoff (e.g. $10$). So, to maximize expected profit, we might label the example as $A$ instead of $B$. Note: I am still not an expert; perhaps someone will correct me if I am wrong in some part.
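The expected-payoff decision rule sketched above in miniature (the probabilities and payoffs are the made-up numbers from the example):

```python
probs   = {"A": 0.3, "B": 0.7}
payoffs = {"A": 1000, "B": 10}

expected = {label: probs[label] * payoffs[label] for label in probs}
decision = max(expected, key=expected.get)

print(expected["A"] > expected["B"])  # True: expected payoffs 300 vs 7
print(decision)                       # prints A, despite B being more probable
```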
{ "language": "en", "url": "https://math.stackexchange.com/questions/141381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "139", "answer_count": 11, "answer_id": 0 }
When a field extension $E\subset F$ has degree $n$, can I find the degree of the extension $E(x)\subset F(x)?$ This is not a problem I've found stated anywhere, so I'm not sure how much generality I should assume. I will try to ask my question in such a way that answers on different levels of generality could be possible. I'm also not sure that this question isn't trivial. Let $E\subset F$ be a fields (both can be finite if needed), and let $n$ be the degree of the field extension ($n$ can be finite if needed). Can we find the degree of the extension $E(x)\subset F(x)$ of rational function fields? Say $E=\mathbb F_2$ and $F=\mathbb F_4$. Then $(F:E)=2.$ I can take $\{1,\xi\}$ to be an $E$-basis of $F$. Now let $f\in F(x),$ $$f(x)=\frac{a_kx^k+\cdots +a_0 } {b_lx^l+\cdots+b_0 }$$ for $a_0,\ldots a_k,b_0,\ldots,b_l\in F$ I can write $a_0,\ldots a_k,b_0,\ldots,b_l$ in the basis: $$\begin{eqnarray}&a_i&=p_i\xi+q_i\\&b_j&=r_j\xi+s_j\end{eqnarray}$$ But all I get is $$f(x)=\frac{p_k\xi x^k+\cdots+p_0+q_kx^k+\cdots+q_0} { r_k\xi x^k+\cdots+r_0+s_kx^k+\cdots+s_0},$$ and I have no idea what to do with this. On the other hand, my intuition is that the degree of the extension of rational function fields should only depend on the degree of the extension $E\subset F,$ even regardless of any finiteness conditions.
My proof is concerned with commutative algebra. Probably there is also a field-theoretic one. I prove that: if $E \subseteq F$ is a finite field extension, then $[F(x) : E(x)] = [F \colon E]$. This follows from: if $E \subseteq F$ is an algebraic field extension, then $F(x) \simeq E(x) \otimes_E F$ as an $E(x)$-vector space. The ring $F[x]$ is integral over $E[x]$. Consider the multiplicative subset $S = E[x] \setminus \{ 0 \}$ and the ring $A = S^{-1} F[x]$. Obviously $A$ is integral over the field $E(x)$, hence $A$ is a field (Atiyah-Macdonald 5.7); but $A$ is contained in $F(x)$ and contains $F[x]$, hence $A = F(x)$. Therefore $$ F(x) = S^{-1} F[x] = E(x) \otimes_{E[x]} F[x] = E(x) \otimes_E F. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/141480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How do I figure out the speed of a jet of water in this example? I should know how to do this but I don't. I'm not very familiar with vectors. Perhaps I will be after this. So I have a stream of water falling out of a pipe. It obviously forms a parabola of the form $f(x) = -ax^2+h$, where $h$ is the height of the pipe from the ground, $a$ is some constant and $x$ is the horizontal distance from the pipe. The question is: if I know $h$, and I know at what distance the water hits the ground, what is the initial speed of the water as it exits the pipe, assuming that it exits the pipe parallel to the ground? I'm going to try to figure this out myself, but I'd like to return to see how you guys solve this. (btw, not a homework question. I'm 30 years old and just curious.)
If the water exits horizontally (you seem to assume that), the height of the water is $y=h-\frac {gt^2}2$. The time to reach the ground comes from setting $y=0$, which gives $t=\sqrt {\frac{2h}{g}}$. The horizontal position is $x=vt$, where $v$ is the velocity on exit from the pipe. Let $d$ be the horizontal distance where the water hits the ground. We have $v=\frac dt=d\sqrt{\frac g{2h}}$. Your $a$ is my $g$, the acceleration of gravity, about $9.8\ \mathrm{m/s^2}$ or $32\ \mathrm{ft/s^2}$.
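The two formulas combine into a one-line computation; a minimal sketch (the function and variable names are mine):

```python
import math

def exit_speed(h, d, g=9.8):
    # v = d * sqrt(g / (2h)): fall time t = sqrt(2h/g), then v = d / t
    return d * math.sqrt(g / (2 * h))

# Sanity check: from h = 4.9 m the fall time is exactly 1 s,
# so landing d = 5 m away means the water left the pipe at 5 m/s.
print(exit_speed(4.9, 5.0))  # 5.0
```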
{ "language": "en", "url": "https://math.stackexchange.com/questions/141564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\|f \chi_F\|_p < \epsilon$ Let $ f \in L_p(X, \mathcal{A}, \mu)$, $1 \leq p < \infty$, and let $\epsilon > 0$. Show that there exists a set $E_\epsilon \in \mathcal{A}$ with $\mu(E_\epsilon) < \infty$ such that if $F \in \mathcal{A}$ and $F \cap E_\epsilon = \emptyset$, then $\|f \chi_F\|_p < \epsilon$. I was wondering if I could do the following: $$ \int_X |f|^p = \int_{X-E_{\epsilon}}|f|^p + \int_{E_\epsilon}|f|^p \geq \int_{F}|f|^p + \int_{E_\epsilon}|f|^p $$ $$ \int_{X}|f\chi_F|^p=\int_{F}|f|^p \leq \int_X |f|^p- \int_{E_\epsilon}|f|^p < \epsilon$$ Please just point out any fallacies within my logic as I have found this chapter's exercise to be quite difficult, so any help is appreciated.
Your estimates are fine since everything is finite; now you must exhibit the set $E_\epsilon$. For this consider $E_{\frac{1}{n}} = \{ x\in X : |f(x)|^p \geq \frac{1}{n} \}$, which has finite measure because $\mu(E_{\frac1n})\le n\int_X|f|^p\,d\mu$. Now prove that $g_n=|f\chi_{E_{\frac{1}{n}}}|^p$ gives a monotone increasing sequence with $|g_n|\leq |f|^p$ and $g_n(x)\to |f(x)|^p$ everywhere in $X$. By monotone convergence $\int_{E_{1/n}}|f|^p \to \int_X |f|^p$, so $E_\epsilon = E_{1/n}$ works for $n$ large enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/141635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Result of the product $0.9 \times 0.99 \times 0.999 \times ...$ My question has two parts:

1. How can I nicely define the infinite sequence $0.9,\ 0.99,\ 0.999,\ \dots$? One option would be the recursive definition below; is there a nicer way to do this? Maybe put it in a form that makes the second question easier to answer. $$s_{i+1} = s_i + 9\cdot10^{-i-2},\ s_0 = 0.9$$ Edit: Suggested by Kirthi Raman: $$(s_i)_{i\ge1} = 1 - 10^{-i}$$

2. Once I have the sequence, what would be the limit of the infinite product below? I find the question interesting since $0.999... = 1$, so the product should converge (I think), but to what? What is the "last number" before $1$ (I know there is no such thing) that would contribute to the product? $$\prod_{i=1}^{\infty} s_i$$
By looking at the decimal representation, it appears that: $$ \prod_{i=1}^\infty\left(1-\frac1{10^i}\right)= \sum_{i=1}^\infty \frac{8 + \frac{10^{2^i-1}-1}{10^{2i-1}} + \frac1{10^{6i-2}} + \frac{10^{4i}-1}{10^{12i-2}} }{10^{(2i-1)(3i-2)}} $$ I don't have a proof, but the pattern is so regular that I'm confident.
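Whether or not the conjectured closed form above is right, the partial products themselves are easy to examine numerically (a quick sketch):

```python
def partial_product(n):
    # product of (1 - 10**-i) for i = 1..n
    p = 1.0
    for i in range(1, n + 1):
        p *= 1 - 10.0 ** -i
    return p

for n in (1, 2, 3, 5, 30):
    print(n, partial_product(n))
# the values settle rapidly near 0.8900100999..., whose decimal digits
# show the striking pattern the conjectured formula tries to capture
```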
{ "language": "en", "url": "https://math.stackexchange.com/questions/141705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 3, "answer_id": 0 }
First-order logic advantage over second-order logic What is the advantage of using first-order logic over second-order logic? Second-order logic is more expressive and there is also a way to overcome Russell's paradox... So what makes first-order logic standard for set theory? Thanks.
First-order logic has the completeness theorem (and the compactness theorem), while second-order logic has no such theorems. This makes first-order logic pretty nice to work with. Set theory is used to transform other sorts of mathematical theories into first-order ones. Let us take as an example the natural numbers with the Peano axioms. The second-order theory (replace the induction schema by a second-order axiom) proves that there is only one model, while the first-order theory has models of every infinite cardinality, and so on. Given a universe of set theory (e.g. ZFC), we can define a set which is a model of the second-order theory, but everything we want to say about it is actually a first-order sentence in set theory, because quantifying over sets is a first-order quantification in set theory. This makes set theory a sort of interpreter: it takes a second-order theory and says "Okay, I will be a first-order theory and I can prove this second-order theory," and if the set theory is consistent then by completeness it has a model, and all the higher-order theories it can prove are also consistent. To read more:

* what is the relationship between ZFC and first-order logic?
* First-Order Logic vs. Second-Order Logic
{ "language": "en", "url": "https://math.stackexchange.com/questions/141759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
The maximum number of nodes in a binary tree of depth $k$ is $2^{k}-1$, $k \geq1$. I am confused with this statement The maximum number of nodes in a binary tree of depth $k$ is $2^k-1$, $k \geq1$. How come this is true? Let's say I have the following tree

    1
   / \
  2   3

Here the depth of the tree is 1. So according to the formula, it will be $2^1-1 = 1$. But we have $3$ nodes here. I am really confused with this depth thing. Any clarifications?
We assume that the root of a binary tree is on level 1, so in the tree you mention the depth is 2, not 1, and $2^2 - 1 = 3$ nodes.
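Counting levels from 1 as in the answer, level $k$ holds at most $2^{k-1}$ nodes, and summing the geometric series gives $2^k-1$. A quick sketch:

```python
def max_nodes(depth):
    # levels 1..depth contribute at most 2**(level - 1) nodes each,
    # and the geometric sum collapses to 2**depth - 1
    return sum(2 ** (level - 1) for level in range(1, depth + 1))

for k in range(1, 8):
    assert max_nodes(k) == 2 ** k - 1
print(max_nodes(2))  # 3: the three-node example tree fills depth 2 exactly
```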
{ "language": "en", "url": "https://math.stackexchange.com/questions/141783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 5, "answer_id": 3 }
Looking for equation that matches graph (image inside) I've been trying to come up with a one-variable function that roughly matches this graph: The closest I've gotten is $ f(d) = \dfrac{\log(c \cdot (1 - \frac{d}{c}))}{\log c} $, but this approaches $y=0$ at $x=c-1$, and steadily declines from $x=0$, instead of sharply. Are there any functions or tricks I could look into to develop this further? Thank you.
Let's try it in polar coordinates (as suggested by John). We will start with a four-petaled rose, getting the polar expression $$\rho=\frac{m-1+\cos(4\theta)}m$$ $m$ is a parameter and $m\approx 5$ seems appropriate, giving: In your case the graph will be obtained by $$x=C\rho \cos(\theta),\ y=\rho \sin(\theta)$$ To rewrite it with just one parameter you may use $\ u:=\cos(\theta)$, getting: $$\rho=\frac{m+8(u^4-u^2)}m$$ $$x=C\;\rho\;u,\ y=\rho \sqrt{1-u^2}$$ But $y$ is not a simple function of $x$ and, worse, the bump at the left is a little too large and the middle part not smooth enough... (but it had to be tried!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/141836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limit of derivative If $\displaystyle \lim_{x \to \infty} f(x) = a$, and knowing that $\displaystyle \lim_{x \to \infty} xf'(x)$ exists , how would I find that limit?
I think I've seen this question before, but I've thought of a pleasant little proof. $\lim f(x) < \infty$, so $\lim \dfrac{f(x)}{\ln x} = 0$ But $\lim \dfrac{f(x)}{\ln x} = \lim \dfrac{f'(x)}{\frac{1}{x}} = \lim x f'(x)$ if $\lim x f'(x)$ exists.
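The argument shows that when $\lim xf'(x)$ exists it must be $0$. A numerical sketch with the sample function $f(x)=2+1/x$ (my choice, picked so that $f\to 2$ and $xf'(x)=-1/x\to 0$):

```python
def f(x):
    return 2 + 1 / x

def x_fprime(x, h=1e-6):
    # central-difference estimate of x * f'(x); exactly -1/x for this f
    return x * (f(x + h) - f(x - h)) / (2 * h)

for x in (10.0, 100.0, 1000.0):
    print(x, x_fprime(x))
# approximately -0.1, -0.01, -0.001: shrinking to 0, as the proof predicts
```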
{ "language": "en", "url": "https://math.stackexchange.com/questions/141900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Divergent series and $p$-adics If we naïvely apply the formula $$\sum_0^\infty a^i = {1\over 1-a}$$ when $a=2$, we get the silly-seeming claim that $1+2+4+\ldots = -1$. But in the 2-adic integers, this formula is correct. Surely this is not a coincidence? What is the connection here? Do the $p$-adics provide generally useful methods for summing divergent series? I know very little about either divergent series or $p$-adic analysis; what is a good reference for either topic?
This is not so much an answer as a related reference. I wrote a short expository note "Divergence is not the fault of the series," Pi Mu Epsilon Journal, 8, no. 9, 588-589, that discusses this idea and its relation to 2's complement arithmetic for computers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/141971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 0 }
Is my Riemann Sum correct (Example # 2)? Possible Duplicate: Is my Riemann Sum correct? This is my second attempt, the answer seems rather odd so I thought I would have it checked as well. For the integral: $$\int_{-5}^{2} \left( x^{2} -4 \right) dx$$ My calculations: $$\begin{align*}\Delta x &= \frac7n\\\\ x_i &= -5 + \frac{7i}n\\\\ f(x_i) &= 21 - \frac{70i}{n} + \frac{49i^2}{n^2} \\\\ A&=-738 \end{align*}$$
The preliminary computations are fine. That means that the $n$th right hand Riemann sum will be: $$\begin{align*} \text{RHS} &= \sum_{i=1}^n f(x_i)\Delta x\\ &= \sum_{i=1}^n\left(21 - \frac{70i}{n} +\frac{49i^2}{n^2}\right)\frac{7}{n}\\ &= \frac{7(21)}{n}\sum_{i=1}^n1 - \frac{7(70)}{n^2}\sum_{i=1}^n i + \frac{7(49)}{n^3}\sum_{i=1}^ni^2\\ &= \frac{147}{n}(n) - \frac{490}{n^2}\left(\frac{n(n+1)}{2}\right) + \frac{343}{n^3}\left(\frac{n(n+1)(2n+1)}{6}\right)\\ &= 147 - 245\frac{n^2+n}{n^2} + \frac{343}{6}\frac{n(n+1)(2n+1)}{n^3}, \end{align*}$$ using the formulas that say that $$\begin{align*} 1+2+3+\cdots + n &= \frac{n(n+1)}{2}\\ 1^2+2^2+3^2+\cdots+n^2 &= \frac{n(n+1)(2n+1)}{6}. \end{align*}$$ Now, if we take the limit as $n\to\infty$, we have $$\begin{align*} \lim\limits_{n\to\infty}\frac{n^2+n}{n^2} &= 1\\ \lim\limits_{n\to\infty}\frac{n(n+1)(2n+1)}{n^3} &= 2, \end{align*}$$ which means the area should be $$147 -245 +\frac{343}{3} = -98 + 114+\frac{1}{3} = 16+\frac{1}{3} = \frac{49}{3}.$$
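A direct numerical check (a sketch): the right-hand sums for $f(x)=x^2-4$ on $[-5,2]$ do converge to $49/3$, confirming the algebra and ruling out the $-738$ from the question.

```python
def rhs(n, a=-5.0, b=2.0):
    # n-th right-hand Riemann sum for f(x) = x**2 - 4 on [a, b]
    dx = (b - a) / n
    return sum(((a + i * dx) ** 2 - 4) * dx for i in range(1, n + 1))

print(rhs(10), rhs(1000), 49 / 3)
# the sums close in on 49/3 = 16.333... as n grows
```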
{ "language": "en", "url": "https://math.stackexchange.com/questions/142015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
what is the difference between functor and function? As it is, what is the difference between functor and function? As far as I know, they look really similar. And is functor used in set theory? I know that function is used in set theory. Thanks.
A simpler explanation: Functions map arguments to values, while functors map arguments and functions defined over the arguments to values and functions defined over the values, respectively. Moreover, the functor mappings preserve function composition over the functions on arguments and values. Briefly, functions map elements while functors map systems (= elements together with the functions over them).
{ "language": "en", "url": "https://math.stackexchange.com/questions/142078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 2, "answer_id": 1 }
Does this polynomial evaluate to prime number whenever $x$ is a natural number? I am trying to prove or disprove the following statement: $x^2-31x+257$ evaluates to a prime number whenever $x$ is a natural number. First of all, I realized that we can't factorize this polynomial using its roots like $$ax^2+bx+c=a(x-x_1)(x-x_2)$$ because the discriminant is negative; also, I used Excel for the numbers from 1 to 1530 (just for checking), and yes, it gives me prime numbers. Unfortunately I don't know what the formula is which evaluates every natural number for a given input; maybe we may write $x_{k+1}=k+1$ for $k=0,\ldots,\infty$, but how can I use this recursive statement? I have tried putting $k+1$ instead of $x$ in this polynomial, and so I got $$(k+1)^2-31k+257=k^2+2k+1-31k+257=k^2-29k+258$$ but now this last polynomial for $k=1$ evaluates to $259-29=230$, which is not prime. But are my steps correct? Please help me.
If $x$ is divisible by $257$ then so is $x^2 - 31 x + 257$. More generally, if $f$ is any polynomial with integer coefficients, then $f(x)$ is divisible by $f(y)$ whenever $x-y$ is divisible by $f(y)$. So there are no non-constant polynomials that produce primes for all positive integers.
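A concrete instance of the argument (a sketch): taking $x=257$ makes both $x$ and the constant term divisible by $257$, so the value factors.

```python
def f(x):
    return x * x - 31 * x + 257

# Every term of f(257) is divisible by 257, and the quotient is 227:
value = f(257)
print(value, value // 257)  # 58339 227 -- composite, so the claim fails
assert value % 257 == 0 and value > 257
```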
{ "language": "en", "url": "https://math.stackexchange.com/questions/142133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
How to evaluate $\int_{-1}^{1} z^{\frac{1}{2}}\, dz$? How can I evaluate $\int_{-1}^{1} z^{\frac{1}{2}}\, dz$ with the main branch of $z^{\frac{1}{2}}$? Thanks for your help
This is an expansion on anon's comment above. Caveat: I'm not 100% certain what the "main branch" is supposed to do to the negative real axis, but I am going to assume it maps to the positive imaginary axis. To integrate from $0$ to $1$, that's no problem, that's an old-school integral of a real-valued function on the real line, and we get 2/3. From $-1$ to $0$, we have a complex valued function. I think the easiest way to do this one is to let $t = -z$. Now, because you're working with the main branch, $\sqrt{-t} = i\sqrt{t}$ for $t$ a positive real number - note, confusingly, that this identity isn't true for all complex numbers, moreover, a different choice of branch cut of the square root function can make it false. $$ \int_{-1}^0 z^{\frac{1}{2}}dz = -i\int_1^0 t^{\frac{1}{2}}dt $$ This latter integral is $\frac{2}{3}i$ so the final answer is $\frac{2}{3} + \frac{2}{3}i$.
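A numerical cross-check (a sketch): Python's `cmath.sqrt` is the principal branch and sends $-t$ to $i\sqrt t$ for $t>0$, so a crude midpoint rule along $[-1,1]$ should reproduce $\tfrac23+\tfrac23 i$.

```python
import cmath

def integral(n=200000):
    # midpoint rule for the principal sqrt along the real segment [-1, 1]
    h = 2.0 / n
    return sum(cmath.sqrt(-1.0 + (k + 0.5) * h) * h for k in range(n))

val = integral()
print(val)  # close to (2/3) + (2/3)j
```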
{ "language": "en", "url": "https://math.stackexchange.com/questions/142211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When $X_s\le Y_t$ When I have shown, for $s\le t$ and for two continuous stochastic processes, an inequality: $$ X_s \le Y_t$$ P-a.s., how can I deduce that this holds P-a.s. simultaneously for all rational $s\le t$? Thank you for your help EDIT: According to Ilya's answer, I see that we have $$P(X_s\le Y_t\text{ simultaneously for all rationals }s\le t) = 1.$$ How could we use continuity of $X,Y$ to deduce $P(X_s\le Y_t,s\le t)=1$? Of course we take sequences of rationals; however, I mess up the details. So a detailed answer on how to do this would be appreciated.
This follows from the fact that the complement of the event $[\forall s\leqslant t,\,X_s\leqslant Y_t]$ is the event $$ \left[\exists s\leqslant t,\,X_s\gt Y_t\right]\ =\left[\exists n\in\mathbb N,\,\exists s\in\mathbb Q,\,\exists t\in\mathbb Q,\,s\leqslant t,\,X_s\geqslant Y_t+\frac1n\right], $$ hence $$ \left[\exists s\leqslant t,\,X_s\gt Y_t\right]\ =\bigcup\limits_{s\leqslant t,\, s\in\mathbb Q,\,t\in \mathbb Q}\ \bigcup_{n\geqslant1}\ \left[X_s\geqslant Y_t+\frac1n\right]. $$ Since $\mathrm P(X_s\leqslant Y_t)=1$ for every $s\leqslant t$, $\mathrm P(X_s\geqslant Y_t+\frac1n)=0$ for every $n\geqslant1$. The union on the RHS of the displayed identity above is countable hence $\mathrm P(\exists s\leqslant t,\,X_s\gt Y_t)=0$. Considering the complement, one gets $$ \mathrm P(\forall s\leqslant t,\,X_s\leqslant Y_t)=1. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/142271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to prove this inequality about $e$? Possible Duplicate: Proving $(1 + 1/n)^{n+1} \gt e$ How to prove this: $$ \left(\frac{x}{x-1}\right)^x \geq e \qquad\text{for}\qquad x \in \mathbb{N}^* $$ $e$ is the base of the natural logarithm. and I think the equal satisfies if $x$ converges to infinity. Thank you!
First off, $\frac{x}{x-1} > 0$ iff $x < 0$ or $x > 1$, so we can't take the natural logarithm if $x\in[0,1]$. My answer addresses the inequality for real-valued $x$, as in the original post (proving it for $x > 1$ and disproving it for $x < 0$). Now $$ e\le \left(\frac{x}{x-1}\right)^{x}= \left(1-\frac1x\right)^{-x} \to e $$ already guarantees the limiting behavior for us. Depending on the sign of $x$, this inequality becomes $$ e^{-1/x} \le 1-\frac1x \qquad(x < 0) $$ $$ e^{-1/x} \ge 1-\frac1x \qquad(x > 1) $$ Setting $t=\frac1x\lt1$ and using the Taylor series, this translates to $$ \eqalign{ 1-t &\le e^{-t} = \sum_{n=0}^\infty\frac{(-1)^n t^n}{n!} \\ &\le 1-t+\frac{t^2}2+\frac{t^3}6+\cdots } $$ for $t \in (0,1)$, which is patently true, while for negative values of $t$, the reversed inequality is patently false (which we can easily check in the original by trying $x=-1$ since $2 < e$). Therefore I would suggest adding the stipulation that $x > 0$ (necessarily) or actually, $x > 1$.
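A quick sanity check of both claims (a sketch):

```python
import math

# The inequality holds for x > 1 (in particular all integers x >= 2) ...
for x in (2, 3, 10, 100, 1000):
    assert (x / (x - 1)) ** x >= math.e

# ... but fails for negative x: x = -1 gives (1/2)**(-1) = 2 < e.
print((-1 / (-1 - 1)) ** -1, math.e)  # 2.0 < e
```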
{ "language": "en", "url": "https://math.stackexchange.com/questions/142335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is the function "signomial"? Function $f:(0, \infty)\longrightarrow \mathbb{R}$ is called $\textbf{signomial}$ if $$ f(x)=a_0x^{r_0}+a_1x^{r_1}+\ldots+a_kx^{r_k}, $$ where $k \in \mathbb{N}^*:=\{0,1,2, \ldots\}$, and $a_i, r_i \in \mathbb{R}$, $a_i\neq 0$, $r_0<r_1<\ldots<r_k$, and $x$ is a real variable with $x>0$. My question is simple at first glance, but I cannot get it. Question: is the function $\displaystyle{\sqrt p \int_0^{\infty}\left(\frac{\sin t}{t}\right)^p}dt$, for $t>0, p\ge 2$, signomial (as a function of $p$)? Thank you for your help.
This is a non-rigorous derivation of an expansion of the function in inverse powers of $p$. I asked a question here about a rigorous justification for it. It turns out that a) the expansion was known, b) it can be rigorously justified and c) it appears to be only an asymptotic expansion, not a convergent series. However, the conclusion that the function cannot be a signomial remains valid, since the errors of the partial sums of the expansion are bounded such that each term in the expansion would have to be contained in the signomial, which would thus need to have an infinite number of terms. Let $u=\sqrt pt$. Then $$ \begin{align} \left(\frac{\sin t}t\right)^p &=\left(1-\frac16t^2+\frac1{120}t^4-\dotso\right)^p \\ &=\left(1-\frac16\frac{u^2}p+\frac1{120}\frac{u^4}{p^2}-\dotso\right)^p \\ &=\left(1+\frac1p\left(-\frac16u^2+\frac1{120}\frac{u^4}p-\dotso\right)\right)^p\;. \end{align} $$ With $$\left(1+\frac xn\right)^n=\mathrm e^x\left(1-\frac{x^2}{2n}+\frac{x^3(8+3x)}{24n^2}+\dotso\right)$$ (see Wikipedia), we have $$ \begin{align} \left(\frac{\sin t}t\right)^p &=\mathrm e^{-u^2/6}\left(1+\frac1{120}\frac{u^4}p+\dotso\right)\left(1-\frac1{72}\frac{u^4}p+\dotso\right) \\ &= \mathrm e^{-u^2/6}\left(1-\frac1{180}\frac{u^4}p+\dotso\right)\;, \end{align} $$ where the expansions are in inverse powers of $p$. The expansion cannot terminate, since otherwise the left-hand side would have to exhibit Gaussian decay, which it doesn't. Thus we have $$ \begin{align} \sqrt p\int_0^\infty\left(\frac{\sin t}t\right)^p\mathrm dt &= \int_0^\infty\mathrm e^{-u^2/6}\left(1-\frac1{180}\frac{u^4}p+\dotso\right)\mathrm du \\ &= \sqrt{\frac{3\pi}2}\left(1-\frac{3}{20}\frac1p+\dotso\right) \end{align} $$ with a non-terminating expansion in decreasing powers of $p$. 
If this were a signomial, the leading term would have to be the leading term of the expansion, then the leading term of the remainder would have to be the leading term of the remainder of the expansion, and so on; thus the expansion cannot be replicated by a finite linear combination of powers of $p$.
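The two-term expansion is easy to test numerically; here is a sketch for $p=20$ (even, so the integrand is nonnegative, and the tail beyond $t=20$ is utterly negligible since $|\sin t/t|^{20}\le 20^{-20}$ there):

```python
import math

def integral(p, upper=20.0, n=200000):
    # midpoint rule for (sin t / t)**p on (0, upper)
    h = upper / n
    return sum((math.sin((k + 0.5) * h) / ((k + 0.5) * h)) ** p * h
               for k in range(n))

p = 20
numeric = math.sqrt(p) * integral(p)
two_term = math.sqrt(3 * math.pi / 2) * (1 - 3 / (20 * p))
print(numeric, two_term)  # agree to roughly 1e-4
```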
{ "language": "en", "url": "https://math.stackexchange.com/questions/142394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that [ContextFreeLanguage - RegularLanguage] is always a context free language, but the opposite is false Let L be a context-free language and R a regular language. Show that L-R is always context-free, but R-L need not be. (Hint: try to connect both automata.) The above hint did not help me :(
Hints: express $R-L$ more basically in set-theoretic terms. Notice anything about what you get in terms of things you know about CFLs? Try some very simple $R$ (always a good tactic, at least to start).
{ "language": "en", "url": "https://math.stackexchange.com/questions/142470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
eigenvector computation Given a full-rank matrix $X$, assume that the eigen-decomposition of $X$ is known as $X=V \cdot D \cdot V^{-1}$, where $D$ is a diagonal matrix. Now let $C$ be a full-rank diagonal matrix; I want to calculate the eigen-decomposition of $C \cdot X$, that is, to find a matrix $V_c$ and a diagonal matrix $D_c$ such that $C \cdot X =V_c \cdot D_c \cdot V_c^{-1}$. Since the eigen-decomposition of $X$ is known, how can we obtain $V_c$ and $D_c$ from $V$ and $D$, respectively? Thanks!
There is no simple relation between the eigen-decompositions of $C$, $X$ and $C X$. In fact, $C X$ does not even have to be diagonalizable. About all you can say is that $\text{det}(CX) = \det(C) \det(X)$, so the product of the eigenvalues for $CX$ (counted by algebraic multiplicity) is the product for $C$ times the product for $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/142534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why $H\neq N_G(H)$? Let $K$ be a field, $f(x)$ a separable irreducible polynomial in $K[x]$. Let $E$ be the splitting field of $f(x)$ over $K$. Let $\alpha,\beta$ be distinct roots of $f(x)$. Suppose $K(\alpha)=K(\beta)$. Call $G=\mathrm{Gal}(E/K)$ and $H=\mathrm{Gal}(E/K(\alpha))$. How can I prove that $H\neq N_G(H)$? My idea was to take a $\sigma: E\rightarrow\bar{K}$ with $\sigma_{|K}=id$ and $\sigma(\alpha)=\beta$. Then $\sigma\in G\backslash H$. So if I prove that for every $\tau\in H$ one has $\sigma\tau\sigma^{-1}(\alpha)=\alpha$, then this means $\sigma\in N_G(H)$. But I don't know how to prove it, I even don't know if it is true. Any help?
Since every $\tau\in H$ by definition fixes $\alpha$, one has $\sigma\tau\sigma^{-1}(\beta)=\sigma\tau(\alpha)=\sigma(\alpha)=\beta$, so $\sigma\tau\sigma^{-1}$ fixes $\beta$ and therefore fixes $K(\beta)=K(\alpha)$ pointwise, whence $\sigma\tau\sigma^{-1}\in H$. So indeed $\sigma\in N_G(H)\setminus H$. This was too easy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/142598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there any good example about Lie algebra homomorphisms? My textbook gave an example of the trace, but I think to get a better comprehension, more examples are still needed. Any example will be helpful ~
A good source of examples is the free Lie algebra $\mathcal L(n)$, on generators $X_1,\ldots, X_n$. This is defined as the vector space with basis given by all formal bracketing expressions of generators, such as $[X_1,X_2]$, $[X_3+2X_4,[X_5,X_7]]$, etc. One then takes the quotient by relators representing antisymmetry of the bracket, the Jacobi identity and multilinearity. This gives a lot of homomorphisms. For example, $\mathcal L(n)$ maps homomorphically onto $\mathcal L(n-1)$ where the homomorphism is defined by setting a variable equal to $0$. Similarly $\mathcal L(n)$ maps to $\mathcal L(n+1)$ by inclusion, which is a Lie algebra homomorphism. Indeed, since the free Lie algebra only includes relations that are present in every Lie algebra, one can show that there is a surjective homomorphism from a free Lie algebra onto any finitely generated Lie algebra. Another example that just occurred to me: any associative ring can be thought of as a Lie algebra using the bracket $[a,b]=ab-ba$. Any ring homomorphism will induce a Lie algebra homomorphism.
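The last example is easy to verify concretely; a sketch with $2\times2$ integer matrices (nested lists with my own minimal helpers) checking that the commutator bracket satisfies the Jacobi identity:

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def bracket(a, b):
    # the Lie bracket induced by the associative product: [a, b] = ab - ba
    return sub(mul(a, b), mul(b, a))

E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]
H = [[1, 0], [0, -1]]

assert bracket(E, F) == H  # the familiar sl_2 relation [e, f] = h
jacobi = add(add(bracket(E, bracket(F, H)),
                 bracket(F, bracket(H, E))),
             bracket(H, bracket(E, F)))
print(jacobi)  # [[0, 0], [0, 0]]
```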
{ "language": "en", "url": "https://math.stackexchange.com/questions/142676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Question on y axis of Gamma probability distribution Afternoon. I'm looking into using the Gamma (Erlang) distribution for a certain quantity that I need to model. I noticed by plugging in some values for the distribution parameters that the y axis values which represent the probability that a random value from the x axis be drawn (unless I've gotten it all wrong) can jump above the value of 1, which doesn't make any sense for a probability distribution. For an example, check out this example distribution produced by Wolfram Alpha: http://www.wolframalpha.com/input/?i=Gamma%282%2C+0.1%29&a=*MC.Gamma%28-_*DistributionAbbreviation- Obviously there's some misconception on my part here. Care to point it out for me? Thanks.
A probability density function can easily be greater than $1$ in some interval, as long as the total area under the curve is $1$. As a simple example, suppose that the random variable $X$ has uniform distribution on the interval $[0,1/4]$. Then the appropriate density function is $f_X(x)=4$ on $[0,1/4]$, and $0$ elsewhere. For a more complicated example, look at a normally distributed random variable with mean $0$ and standard deviation $1/10$. You can easily verify that the density function of $X$, for $x$ close to $0$, is greater than $1$.
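Both examples check out numerically (a sketch):

```python
import math

# Uniform density 4 on [0, 1/4]: everywhere above 1, yet total area 1.
print(4 * (1 / 4))  # 1.0

# Normal density, mean 0, sigma = 1/10: its peak is well above 1.
sigma = 0.1
peak = 1 / (sigma * math.sqrt(2 * math.pi))
print(peak)  # about 3.99
```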
{ "language": "en", "url": "https://math.stackexchange.com/questions/142725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Every embedded hypersurface is locally a regular surface? I'm reading do Carmo's Riemannian Geometry, in ex6.11 d) he wrote that"every embedded hypersurface $M^{n} \subset \bar{M}^{n+1}$ is locally the inverse image of a regular value". Could anyone comment on how to show this? To be more specific, let $\bar{M}^{n+1}$ be an $n+1$ dimensional Riemannian manifold, let $M^{n}$ be some $n$ dimensional embedded submanifold of $\bar{M}$, then locally we have that $M=f^{-1}(r)$, where $f: \bar{M}^{n+1} \rightarrow \mathbb{R}$ is a differentiable function and $r$ is a regular value of $f$. Thank you very much!
By choosing good local coordinates, you can assume that $M = \mathbb{R}^n\subset\mathbb{R}^{n+1} = \overline{M}$. Specifically, assume that $M = \{x\in \mathbb{R}^{n+1} : x_{n+1} = 0\}$. Then $M = f^{-1}(0)$, where $f\colon \mathbb{R}^{n+1}\to \mathbb{R}$ is the map $f(x) = x_{n+1}$. Since $0$ is a regular value for $f$, this is exactly what you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/142797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to sketch $y=2\cos\,2x+3\sin\,2x$ , $x$ for $[-\pi,\pi]$. Use addition of ordinates to sketch the graph of $y=2\cos\,2x+3\sin\,2x$ for $x \in [-\pi,\pi]$. I know that there will be three lines in the graph; from the example it shows values such as $x=0$, $x=\frac{\pi}{4}$, $x=\frac{\pi}{2}$, and something like that, but I have no clue how to do it. Can you please explain it step by step, so that I'll be able to do other questions? The answer looks like this. Thanks!
You probably know the graph of $y=\cos(\theta)$ and of $y=\sin(\theta)$ on $[-2\pi,2\pi]$. The graph of $y=\cos(2\theta)$ on $[-\pi,\pi]$ is obtained from the graph of $y=\cos(\theta)$ on $[-2\pi,2\pi]$ by performing a horizontal compression by a factor of $2$ (we are making the change from $y=f(x)$ to $y=f(2x)$). Likewise, the graph of $y=\sin(2\theta)$ on $[-\pi,\pi]$ is the result of compressing horizontally by a factor of 2 the graph of $y=\sin(\theta)$ on $[-2\pi,2\pi]$. The graph of $y=2\cos(2\theta)$ is obtained from the graph of $y=\cos(2\theta)$ by performing a vertical stretch by a factor of $2$. The graph of $y=3\sin(2\theta)$ is obtained from the graph of $y=\sin(2\theta)$ by performing a vertical stretch by a factor of $3$. Once you have the graphs of both $y=2\cos(2\theta)$ and $y=3\sin(2\theta)$ (obtained by the simple geometric operations described above) you obtain the graph of $$y= 2\cos(2\theta) + 3\sin(2\theta)$$ by "addition of ordinate". You want to imagine that you are graphing $y=3\sin(2\theta)$ "on top of" the graph of $y=2\cos(2\theta)$, so that you end up adding the values. You can get a fairly reasonable geometric approximation by doing this.
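As a cross-check of the final sketch (supplementary to the "addition of ordinates" method itself), the sum collapses to a single sinusoid $\sqrt{13}\,\sin(2x+\varphi)$ with $\varphi=\arctan(2/3)$, which pins down the amplitude of the combined graph:

```python
import math

R = math.sqrt(13)          # amplitude, about 3.606
phi = math.atan2(2, 3)     # phase, about 0.588 rad

for k in range(-8, 9):     # sample points across [-pi, pi]
    x = k * math.pi / 8
    lhs = 2 * math.cos(2 * x) + 3 * math.sin(2 * x)
    rhs = R * math.sin(2 * x + phi)
    assert abs(lhs - rhs) < 1e-12
print(R, phi)
```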
{ "language": "en", "url": "https://math.stackexchange.com/questions/142862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
$\lim_{x \to 0}(f+g)$ if $\lim_{x \to 0}g$ does not exist Let $f$ such that $\lim_{x\to 0}f(x)=\infty$ and let $g(x)=\sin(\frac{1}{x})$. I know that $g$ does not have a limit at $x=0$, but what about $\lim_{x\rightarrow 0}(f(x)+g(x))$? Thanks!
The limit is always infinity in your setting: since $g(x)=\sin(\frac1x)$ is bounded, $f(x)+g(x)\ge f(x)-1\to\infty$ as $x\to 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/142978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Can the product of non-zero ideals in a unital ring be zero? Let $R$ be a ring with unity and $0\neq I,J\lhd R.$ Can it be that $IJ=0?$ It is possible in rings without unity. Let $A$ be a nontrivial abelian group made a ring by defining a zero multiplication on it. Then any subgroup $S$ of $A$ is an ideal, because for any $a\in A,$ $s\in S,$ we have $$sa=as=0\in S.$$ Then if $S,T$ are nontrivial subgroups of $A,$ we have $ST=0.$ A non-trivial ring with unity cannot have zero multiplication, so this example doesn't work in this case. So perhaps there is no example? I believe there should be, but I can't find one. If there isn't one, is it possible in non-unital rings with non-zero multiplication?
Take $n=ab \in \mathbb Z$ with $a,b>1$, and the ideals $I=(a)$ and $J=(b)$ in $\mathbb Z/(n)$. Both ideals are non-zero, yet $IJ=(ab)=(n)=(0)$.
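To make the example concrete, a tiny Python check with $n=6=2\cdot 3$ (the helper `ideal` is my own notation):

```python
n, a, b = 6, 2, 3        # n = a*b with a, b > 1

def ideal(gen, n):
    """Elements of the principal ideal (gen) in Z/nZ."""
    return {(gen * k) % n for k in range(n)}

I, J = ideal(a, n), ideal(b, n)
assert I == {0, 2, 4} and J == {0, 3}      # both ideals are non-zero

# The product ideal IJ is generated by the products x*y;
# every such product is a multiple of ab = n, hence 0 mod n.
products = {(x * y) % n for x in I for y in J}
assert products == {0}                      # so IJ = (0)
```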
{ "language": "en", "url": "https://math.stackexchange.com/questions/143036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Coproduct in the category of (noncommutative) associative algebras For the case of commutative algebras, I know that the coproduct is given by the tensor product, but how is the situation in the general case? (for associative, but not necessarily commutative algebras over a ring $A$). Does the coproduct even exist in general or if not, when does it exist? If it helps, we may assume that $A$ itself is commutative. I guess the construction would be something akin to the construction of the free products of groups in group theory, but it would be nice to see some more explicit details (but maybe that would be very messy?) I did not have much luck in finding information about it on the web anyway.
The following is a link to an article which provides a partial answer, namely it gives (on page 8, without proof) the coproduct of two non-commutative algebras (over a field rather than a ring, I don't know the ring case): http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.6129&rep=rep1&type=pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/143098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 2 }
What is the method to compute $\binom{n}{r}$ in a recursive manner? How do you solve this? Find out which recurrence relation involving $\dbinom{n}{r}$ is valid, and thus prove that we can compute $\dbinom{n}{r}$ in a recursive manner. I appreciate any help. Thank You
There are many recurrence relations for $\dbinom{n}{r}$. One of the most commonly used is the following: $$\binom{n+1}{r} = \binom{n}{r} + \binom{n}{r-1}.$$ There are many ways to prove this and one simple way is to look at $\displaystyle (1+x)^{n+1}$. We know that $$(1+x)^{n+1} = (1+x)^{n} (1+x).$$ Now compare the coefficient of $x^r$ on both sides. On the left hand side, the coefficient of $x^r$ is $$\dbinom{n+1}{r}.$$ On the right hand side, the $x^r$ term is obtained by multiplying the $x^r$ term of $(1+x)^n$ with the $1$ from $(1+x)$ and also by multiplying the $x^{r-1}$ term of $(1+x)^n$ with the $x$ from $(1+x)$. Hence, the coefficient of $x^r$ is given by the sum of these two terms from the right hand side, i.e. $$\dbinom{n}{r} + \dbinom{n}{r-1}.$$ As Rob and J.M have pointed out, these recurrences define the celebrated Pascal's triangle. Picture the triangle with each row corresponding to a value of $n$ starting from $0$; reading a row from left to right runs through the values of $r$ from $0$ to $n$. The $r^{th}$ value on the $(n+1)^{th}$ row is obtained as the sum of the $(r-1)^{th}$ and $r^{th}$ values on the $n^{th}$ row.
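To make the "compute in a recursive manner" part concrete, here is a memoized Python recursion built directly on the identity (memoization keeps the naive recursion from taking exponential time), checked against Python's built-in `math.comb`:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, r):
    """C(n, r) computed recursively via C(n, r) = C(n-1, r) + C(n-1, r-1)."""
    if r < 0 or r > n:
        return 0               # entries outside the triangle are zero
    if r == 0 or r == n:
        return 1               # edges of Pascal's triangle
    return binom(n - 1, r) + binom(n - 1, r - 1)

# sanity check against the standard library
for n in range(12):
    for r in range(n + 1):
        assert binom(n, r) == math.comb(n, r)
```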
{ "language": "en", "url": "https://math.stackexchange.com/questions/143150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Derivative of a random variable w.r.t. a deterministic variable I'm reading about time series and I thought of this procedure: can you differentiate a function containing a random variable. For example: $f(t) = a t + b + \epsilon$ where $\epsilon \sim N(0,1)$. Then: $df/dt = \lim\limits_{\delta t \to 0} {(f(t + \delta t) - f(t))/ \delta t} = (a \delta t + \epsilon_2 - \epsilon_1)/\delta t = a + (\epsilon_2 - \epsilon_1)/\delta t$ But: $\epsilon_2 - \epsilon_1 = \xi$ where $\xi \sim N(0,2)$. But this means that we have a random variable over an infinitesimally small value. so $\xi/\delta t$ will be infinite except for the cases when $\xi$ happens to be 0. Am I doing something wrong?
A random variable is a function from sample space to the real line. Hence $f(t)$ really stands for $f(t,\omega) = a t + b + \epsilon(\omega)$. This function can be differentiated with respect to $t$, for fixed $\omega$, of course. The resulting derivative, being a function of $\omega$, is a random variable. In this case: $$ \frac{\partial f}{\partial t}(t, \omega) = a $$ Since it does not depend on $\omega$, the derivative is deterministic, in this example.
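To see this concretely: freeze one outcome $\omega$, i.e. one sample of $\epsilon$, and the difference quotient of $t \mapsto at+b+\epsilon$ is $a$ no matter which sample was drawn. A small Python sketch (the values $a=3$, $b=1$ are arbitrary choices of mine):

```python
import random

random.seed(0)
a, b = 3.0, 1.0

for _ in range(5):
    eps = random.gauss(0.0, 1.0)       # one fixed outcome omega -> eps(omega)
    f = lambda t, e=eps: a * t + b + e
    h = 1e-6
    # difference quotient at t = 2 with omega held fixed
    slope = (f(2.0 + h) - f(2.0)) / h
    assert abs(slope - a) < 1e-4       # derivative is a, independent of omega
```

The random constant cancels in $f(t+\delta t)-f(t)$ precisely because the same $\omega$ appears in both terms; the issue in the question arises from (implicitly) drawing two independent samples.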
{ "language": "en", "url": "https://math.stackexchange.com/questions/143186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
How can one solve the equation $\sqrt{x\sqrt{x} - x} = 1-x$? $$\sqrt{x\sqrt{x} - x} = 1-x$$ I know the solution but have no idea how to solve it analytically.
Just writing out Robert's manipulation: $$\eqalign{ & \sqrt {x\sqrt x - x} = 1 - x \cr & x\sqrt x - x = {\left( {1 - x} \right)^2} \cr & x\left( {\sqrt x - 1} \right) = {\left( {1 - x} \right)^2} \cr & \sqrt x - 1 = \frac{{{{\left( {1 - x} \right)}^2}}}{x} \cr & \sqrt x = \frac{{{{\left( {1 - x} \right)}^2}}}{x} + 1 \cr & x = {\left( {\frac{{{{\left( {1 - x} \right)}^2}}}{x} + 1} \right)^2} \cr & x = {\left( {\frac{{1 - 2x + {x^2}}}{x} + 1} \right)^2} \cr & x = {\left( {\frac{1}{x} + x - 1} \right)^2} \cr & x = {x^2} - 2x + 3 - \frac{2}{x} + \frac{1}{{{x^2}}} \cr & {x^3} = {x^4} - 2{x^3} + 3{x^2} - 2x + 1 \cr & 0 = {x^4} - 3{x^3} + 3{x^2} - 2x + 1 \cr & 0 = \left( {x - 1} \right)\left( {{x^3} - 2{x^2} + x - 1} \right) \cr} $$ Note that the constant term of the cubic factor is $-1$ (expanding $(x-1)(x^3-2x^2+x-1)$ recovers the quartic). The root $x=1$ checks out in the original equation. The cubic factor has one real root, $x\approx 1.7549$, but it is extraneous: there $1-x<0$, while the left-hand side of the original equation is a non-negative square root. Its other two roots are complex, so $x=1$ is the only real solution.
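If you want to double-check the algebra numerically (note the cubic factor works out to $x^3-2x^2+x-1$), here is a short Python sketch:

```python
def quartic(x):
    return x**4 - 3*x**3 + 3*x**2 - 2*x + 1

def cubic(x):
    return x**3 - 2*x**2 + x - 1       # the cubic factor

# (x - 1) * cubic(x) reproduces the quartic at several sample points:
for x in (0.5, 1.3, -2.0, 3.7):
    assert abs((x - 1) * cubic(x) - quartic(x)) < 1e-9

# x = 1 satisfies the *original* equation sqrt(x*sqrt(x) - x) = 1 - x:
assert (1.0 * 1.0**0.5 - 1.0) ** 0.5 == 1.0 - 1.0

# The cubic's single real root (found by bisection) is extraneous:
lo, hi = 1.0, 2.0                      # cubic(1) = -1 < 0 < 1 = cubic(2)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if cubic(mid) < 0 else (lo, mid)
r = (lo + hi) / 2
assert 1.0 - r < 0                     # RHS negative there, LHS is a real sqrt >= 0
```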
{ "language": "en", "url": "https://math.stackexchange.com/questions/143248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
$f$ continuous iff $\operatorname{graph}(f)$ is compact The Problem: Let $(E,\tau_E)$ be a compact space and $(F,\tau_F)$ be a Hausdorff space. Show that a function $f:E\rightarrow F$ is continuous if and only if its graph is compact. My Work: First assume $(E,\tau_E)$ compact and $(F,\tau_F)$ a Hausdorff space. Assume $f:E\rightarrow F$ continuous. Then certainly $f(E)$ is compact. Then $$\operatorname{graph}(f)\subseteq E\times f(E)\subseteq E\times F.$$ Since the graph is closed (we know this since $(F,\tau_F)$ is Hausdorff and $f$ is continuous) and $E\times f(E)$ is compact, as the product of two compact sets, then somehow this should give us that $\operatorname{graph}(f)$ is compact. I was thinking the canonical projections would be helpful here but I'm unsure. As for the other way, I'm unsure. Any help is appreciated.
$\textbf{Attention Mathstudent:}$ I think you need to assume that $E$ is Hausdorff. Here are some preliminary thoughts on your problem. I think we are ready to prove the other direction. Recall that if you can show that given any closed set $B$ in $F$, the preimage under $f$ is also closed then you have proven that $f$ is a continuous function. Now we already know that the canonical projection $\pi_2 : E \times F \longrightarrow F$ is a continuous function. Therefore since $(E \times B) = \pi_2^{-1}(B)$ and $B$ is closed it follows that $(E \times B)$ is a closed subset of $E \times F$. Furthermore we know that $\operatorname{graph}(f)$ is a compact subset of $E \times F$ so consider $$\operatorname{graph}(f) \cap (E \times B).$$ Now note that $\operatorname{graph}(f) \cap (E \times B)$ is closed in the graph of $f$. Since a closed subspace of a compact space is compact, it follows that $$\operatorname{graph}(f) \cap (E \times B)$$ is compact. Now we use the following theorem from Munkres: $\textbf{Theorem 26.6 (Munkres)}$ Let $f : X \rightarrow Y$ be a bijective continuous function. If $X$ is compact and $Y$ is Hausdorff, then $f$ is a homeomorphism. In our case we have $f$ being $\pi_1|_{\operatorname{graph}(f)}: \operatorname{graph}(f) \longrightarrow E$, $X = \operatorname{graph}(f)$ and $Y= E$. Now note that $\pi_1$ when restricted to the graph of $f$ becomes a bijective function. To be able to apply your theorem we need that $E$ is Hausdorff (because otherwise the hypotheses of the theorem are not satisfied). Assuming this, the theorem gives that since $\Big(\operatorname{graph}(f) \cap (E \times B)\Big)$ is a compact subset of the graph, $$\pi_1|_{\operatorname{graph}(f)} \Big(\operatorname{graph}(f) \cap (E \times B)\Big) = f^{-1}(B)$$ is compact. Now we use the assumption again that $E$ is Hausdorff: Since $f^{-1}(B)$ is a compact subset of $E$ that is Hausdorff it is closed so you are done. $\hspace{6in} \square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/143306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
How does composition affect eigendecomposition? What relationship is there between the eigenvalues and vectors of linear operator $T$ and the composition $A T$ or $T A$? I'm also interested in analogous results for SVD.
Friedland has proved the following over the complex field: If the principal minors of $A$ are not zero, then for every set of $n$ numbers $\lambda_1,\dots,\lambda_n$ there exists a diagonal matrix $B$ such that $BA$ has the $\lambda_i$'s as eigenvalues. Later Dias da Silva extended it to any arbitrary algebraically closed field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/143362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
how to evaluate this integral by considering $\oint_{C_{R}} \frac{1}{z^{2}+1}\, dz$ Consider the integral $I=\int_{-\infty}^{\infty} \frac{1}{x^{2}+1}\, dx$. Show how to evaluate this integral by considering $\oint_{C_{R}} \frac{1}{z^{2}+1}\, dz$ where $C_{R}$ is the closed semicircle in the upper half plane with endpoints at $(-R, 0)$ and $(R, 0)$ plus the $x$ axis. I use $\frac{1}{z^{2}+1}=-\frac{1}{2i}\left[\frac{1}{z+i}-\frac{1}{z-i}\right]$ and I must prove without using the residue theorem that the integral along the open semicircle in the upper half plane vanishes as $R\rightarrow \infty$. Could someone help me through this problem?
See @anon's answer. For completion's sake we will examine the function $f(z) = \frac{1}{z^2 + 1}$ parametrized by $z = z_0 + Re^{i \theta}$. If we center the contour at $z_0 = 0$ then $z^2 = R^2 e^{2i\theta}$, so $f(z) = \frac{1}{R^2 e^{2i\theta} + 1}$. Given the line integral: $$\oint_{C}{f(z)\ \mathrm{d}z} = \int_a^{b}{f(C(\theta))\, C'(\theta)}\ \mathrm{d}\theta$$ where $C(\theta)$ is the parametrization of a circular contour using the variable $\theta$. Here $C(\theta) = Re^{i\theta}$ on $0 \leq \theta \leq \pi$, so $C'(\theta) = iRe^{i\theta}$. Fitting our function to the parametrization: $$\int_{C_R}{f(z)\ \mathrm{d}z} = \int_0^{\pi}{\frac{iRe^{i \theta}}{R^2 e^{2i\theta} + 1}}\ \mathrm{d}\theta$$ Rather than evaluating this exactly, we estimate its size. For $R > 1$ the reverse triangle inequality gives $\left|R^2 e^{2i\theta} + 1\right| \geq R^2 - 1$ on the arc, so the integrand has modulus at most $\frac{R}{R^2 - 1}$ and $$\left|\int_{C_R}{f(z)\ \mathrm{d}z}\right| \leq \frac{\pi R}{R^2 - 1}$$ Next apply the limit for $R \to \infty$: $$\lim_{R \to \infty}{\frac{\pi R}{R^2 - 1}} = 0$$ Thus, we have shown that as $R \to \infty$, the integral along the open semicircle "vanishes". This may go nowhere but perhaps you can even apply the Cauchy-Goursat theorem to show that the integral about the contour is 0 so long as $i$ is not in the region enclosed by the contour, i.e. $R < 1$. Otherwise, when $R > 1$, the pole at $i$ is enclosed and the closed-contour integral is no longer zero. Something to think about.
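For $R>1$ one has $|1/(z^2+1)| \le 1/(R^2-1)$ on the arc $|z|=R$, giving the estimate $\pi R/(R^2-1)$ for the arc integral. Here is a quick numerical corroboration in Python (the midpoint-rule step count is an arbitrary choice of mine):

```python
import cmath
import math

def f(z):
    return 1.0 / (z * z + 1.0)

def arc_integral(R, n=20000):
    """Midpoint-rule approximation of the integral of f over z = R e^{it}, 0 <= t <= pi."""
    dt = math.pi / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * dt
        z = R * cmath.exp(1j * t)
        total += f(z) * 1j * z * dt        # dz = i R e^{it} dt = i z dt
    return total

for R in (2.0, 10.0, 100.0):
    bound = math.pi * R / (R * R - 1.0)
    assert abs(arc_integral(R)) <= bound + 1e-6   # the estimate holds numerically
assert abs(arc_integral(100.0)) < abs(arc_integral(10.0))  # and shrinks as R grows
```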
{ "language": "en", "url": "https://math.stackexchange.com/questions/143472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$N$ submodule of $M$ and $N \cong M$ does not necessarily imply that $M=N$ Let $M, N$ be $A$-modules with $A$ being a commutative ring. Suppose that $N$ is a submodule of $M$ and also that $N$ is isomorphic to $M$. According to my understanding this does not necessarily imply that $M=N$. Is this statement accurate? If yes, in what kind of cases do we have this phenomenon?
To answer the half about "When can we expect this?": A module is called cohopfian if every injective endomorphism is surjective. A cohopfian module $M$ will not have any proper submodules isomorphic to $M$. $M$ will be cohopfian if it is any of the following:

* finite
* Artinian
* Noetherian and injective
{ "language": "en", "url": "https://math.stackexchange.com/questions/143523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
There exists a unique function $u\in C^0[-a,a]$ which satisfies this property The problem: Let $a>0$ and let $g\in C^0([-a,a])$. Prove that there exists a unique function $u\in C^0([-a,a])$ such that $$u(x)=\frac x2u\left(\frac x2\right)+g(x),$$ for all $x\in[-a,a]$. My attempt At first sight I thought to approach this problem as a fixed point problem from $C^0([-a,a])$ to $C^0([-2a,2a])$, which are both Banach spaces if equipped with the maximum norm. However I needed to define a contraction, because as it stands it is not clear whether my operator $$(Tu)(x)=\frac x2u\left(\frac x2\right)+g(x)$$ is a contraction or not. Therefore I tried to slightly modify the operator and I picked a $c>a>0$ and defined $$T_cu=\frac 1cTu.$$ $T_cu$ is in fact a contraction, hence by the contraction lemma I am guaranteed the existence and the uniqueness of a function $u_c\in C^0([-a,a])$, which is a fixed point for $T_cu.$ Clearly this is not what I wanted and it seems difficult to me to finish using this approach. Am I right that all I have done is useless? And if this were the case, how to solve this problem? Thanks in advance.
Your approach will work if $a<2$. In the general case, iterate the relation: write $$u(x)=\frac x2u\left(\frac x2\right)+g(x)$$ as $$u(x)=x^k\alpha_ku\left(\frac x{2^k}\right)+F_k(g)(x)$$ where $F_k$ is a functional of $g$ and the $\alpha_k$ satisfy the recurrence relation $\alpha_{k+1}=\frac{\alpha_k}{2^{k+1}}$ with $\alpha_1=\frac12$, so that $\alpha_k=2^{-k(k+1)/2}$. Therefore $a^k\alpha_k$ converges to $0$, and you can apply the Banach fixed point theorem to $T_k(u)(x)=x^k\alpha_ku\left(\frac x{2^k}\right)+F_k(g)(x)$ for $k$ large enough that $a^k\alpha_k<1$. Since $T_k=T_1^{\,k}$, and an operator some power of which is a contraction has a unique fixed point, $T_1$ itself has a unique fixed point, which is the desired $u$.
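The iteration idea can be tried out numerically: unroll the relation a fixed number of times and drop the tail, whose prefactor $x^k/2^{k(k+1)/2}$ dies off superexponentially in $k$. A Python sketch (the helper `solve`, the depth, and the choice $g=\cos$ are mine):

```python
import math

def solve(g, depth=60):
    """Approximate the fixed point of u(x) = (x/2) u(x/2) + g(x) by unrolling
    the relation `depth` times; the dropped tail has prefactor x^k / 2^(k(k+1)/2)."""
    def u(x, k=depth):
        if k == 0:
            return 0.0        # truncate the (negligible) tail
        return (x / 2.0) * u(x / 2.0, k - 1) + g(x)
    return u

a = 5.0
u = solve(math.cos)
for x in (-a, -1.0, 0.0, 2.5, a):
    # u satisfies the functional equation up to the truncation error
    residual = u(x) - ((x / 2.0) * u(x / 2.0) + math.cos(x))
    assert abs(residual) < 1e-9
```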
{ "language": "en", "url": "https://math.stackexchange.com/questions/143587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Is a closed subset of isolated points in a compact set necessarily finite? Suppose I have a compact set $A$ and a closed subset $\Sigma \subset A$ which only contains isolated points (that is, none of them is a limit point). Does the compactness of $A$ then force $\Sigma$ to have finite cardinality? Here is my attempt at a proof that the above question can be answered in the positive: Suppose for contradiction that $\Sigma$ contains infinitely many distinct points. EDIT: Then we can construct a sequence of points in $\Sigma$ which consists of distinct points. By compactness of $A$, this sequence must have a convergent subsequence, and by the fact that $\Sigma$ is closed, this limit lies in $\Sigma$. But then it cannot be a limit point, because all points in $\Sigma$ are isolated. So the subsequence must eventually be constant and equal to the limit, contrary to the construction of the sequence. Is the reasoning above correct? If not, what went wrong?
Compactness does not mean that sequences have convergent subsequences in a general topological space. See sequential compactness at Wikipedia. However, your statement is still true. Just stick more closely to the definition of compactness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/143647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }