How would I evaluate this limit? I have no idea how to evaluate this limit. Wolfram gives $0$, and I believe this, but I would like to see how it is done. The limit is $$\lim_{n\rightarrow\infty}\frac{x^n}{(1+x)^{n-1}}$$ assuming $x$ is positive. Thanks in advance.
$\frac{x^n}{(1+x)^{n-1}} = \frac{x^n(1+x)}{(1+x)^{n}} = \left(\frac{x}{1+x}\right)^{n}(1+x) = \left(\frac{x+1-1}{1+x}\right)^{n}(1+x) = \left(1 - \frac{1}{1+x}\right)^{n}(1+x)$ So taking the limit: $\lim_{n\to\infty} \frac{x^n}{(1+x)^{n-1}} = \lim_{n\to\infty} \left(1 - \frac{1}{1+x}\right)^{n}(1+x) = (1+x) \cdot \lim_{n\to\infty} \left(1 - \frac{1}{1+x}\right)^{n}$ Since $x>0$ we know $0 < 1 - \frac{1}{1+x} < 1$, therefore $\lim_{n\to\infty} \left(1 - \frac{1}{1+x}\right)^{n} = 0$, giving us $(1+x)\cdot 0 = 0$
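As a quick numerical sanity check (a sketch only; the closed form below just restates the factorization in the answer, which also conveniently avoids float overflow for large $n$):

```python
def ratio(x, n):
    # x^n / (1+x)^(n-1) rewritten as (1+x) * (1 - 1/(1+x))^n, as in the answer
    return (1 + x) * (1 - 1 / (1 + x)) ** n

# the base 1 - 1/(1+x) lies strictly between 0 and 1 for x > 0,
# so the power decays to 0 as n grows
for x in [0.5, 1.0, 5.0]:
    print(x, ratio(x, 500))
```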
{ "language": "en", "url": "https://math.stackexchange.com/questions/80779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solutions to Linear Diophantine equation $15x+21y=261$ Question How many positive solutions are there to $15x+21y=261$? What I got so far $\gcd(15,21) = 3$ and $3|261$ So we can divide through by the gcd and get: $5x+7y=87$ And I'm not really sure where to go from this point. In particular, I need to know how to tell how many solutions there are.
5, 7, and 87 are small enough numbers that you could just try all the possibilities. Can you see, for example, that $y$ can't be any bigger than 12?
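Following the answer's suggestion, a brute-force enumeration over the finitely many candidates (a sketch; the bounds come from $7y \le 87$ and $5x \le 87$) confirms there are exactly three positive solutions:

```python
# positive solutions of 5x + 7y = 87; y can be at most 12 since 7*13 = 91 > 87,
# and x at most 16 since 5*17 = 85 > 87 - 7
solutions = [(x, y) for y in range(1, 13) for x in range(1, 17)
             if 5 * x + 7 * y == 87]
print(solutions)  # [(16, 1), (9, 6), (2, 11)]
```

Each of these also solves the original equation $15x+21y=261$, since that is just the equation above multiplied by 3.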
{ "language": "en", "url": "https://math.stackexchange.com/questions/80822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 1 }
Help remembering a Putnam Problem I recall that there was a Putnam problem which went something like this: Find all real functions satisfying $$f(s^2+f(t)) = t+f(s)^2$$ for all $s,t \in \mathbb{R}$. There was a cool trick to solving it that I wanted to remember. But I don't know which test it was from and google isn't much help for searching with equations. Does anyone know which problem I am thinking of so I can look up that trick?
No idea. But I have a book called Putnam and Beyond by Gelca and Andreescu, and on page 185 they present a problem from a book called Functional Equations: A Problem Solving Approach by B. J. Venkatachala, from Prism Books PVT Ltd., 2002. I think the Ltd. means the publisher is British. Almost, the publisher is (or was?) in India (Bangalore): http://www.prismbooks.com/ http://www.hindbook.com/order_info.php EDIT, December 3, 2011: The book is available, at least, from an online firm in India that is similar to Amazon.com http://www.flipkart.com/m/books/8172862652 on http://www.flipkart.com/ I cannot tell whether they ship outside India. But it does suggest that contacting the publisher by email is likely to work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/80932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
the name of a game I saw a two-player game described the other day and I was just wondering if it had an official name. The game is played as follows: You start with an $m \times n$ grid, and on each node of the grid there is a rock. On your turn, you point to a rock. The rock and all other rocks "northeast" of it are removed. In other words, if you point to the rock at position $(i,j)$ then any rock at position $(r,s)$ where $r$ and $s$ satisfy $1 \leq r \leq i$ and $j \leq s \leq n$ is removed. The loser is the person who takes the last rock(s).
This game is called Chomp.
{ "language": "en", "url": "https://math.stackexchange.com/questions/80991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the Laurent expansion of $\frac{1}{\sin^3(z)}$ on $0<|z|<\pi$? How do you find the Laurent expansion of $\frac{1}{\sin^3(z)}$ on $0<|z|<\pi$? I would really appreciate someone carefully explaining this, as I'm very confused by this general concept! Thanks
use this formula $$\sum _{k=1}^{\infty } (-1)^{3 k} \left(-\frac{x^3}{\pi ^3 k^3 (\pi k-x)^3}-\frac{x^3}{\pi ^3 k^3 (\pi k+x)^3}+\frac{3 x^2}{\pi ^2 k^2 (\pi k-x)^3}-\frac{3 x^2}{\pi ^2 k^2 (\pi k+x)^3}-\frac{x^3}{2 \pi k (\pi k-x)^3}-\frac{x^3}{2 \pi k (\pi k+x)^3}+\frac{x^2}{(\pi k-x)^3}-\frac{x^2}{(\pi k+x)^3}-\frac{\pi k x}{2 (\pi k-x)^3}-\frac{3 x}{\pi k (\pi k-x)^3}-\frac{\pi k x}{2 (\pi k+x)^3}-\frac{3 x}{\pi k (\pi k+x)^3}\right)+\frac{1}{x^3}+\frac{1}{2 x}=\csc ^3(x)$$
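A quick numerical check of the principal part $\frac{1}{x^3}+\frac{1}{2x}$ that the formula isolates (a sketch; the value $\frac{17}{120}$ used as a reference below is the next Taylor coefficient, computed separately, and is an assumption not stated in the answer):

```python
import math

def csc3(x):
    return 1 / math.sin(x) ** 3

def principal_part(x):
    # the two singular terms appearing explicitly in the formula
    return 1 / x ** 3 + 1 / (2 * x)

x = 0.01
remainder = csc3(x) - principal_part(x)
print(remainder)  # close to (17/120) * x, the first regular term of the Laurent series
```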
{ "language": "en", "url": "https://math.stackexchange.com/questions/81105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How can I prove $[0,1]\cap\operatorname{int}{(A^{c})} = \emptyset$? If $A \subset [0,1]$ is the union of open intervals $(a_{i}, b_{i})$ such that each rational number in $(0, 1)$ is contained in some $(a_{i}, b_{i})$, show that boundary $\partial A= [0,1] - A$. (Spivak- calculus on manifolds) If I prove that $[0,1]\cap\operatorname{int}{(A^{c})} = \emptyset$, the proof is complete. I tried to find a contradiction, but I didn't find one.
Since you say you want to prove it by contradiction, here we go: Suppose that $x\in \mathrm{int}(A^c)\cap [0,1]$. Then there is an open interval $(r,s)$ such that $x\in (r,s)\subseteq A^c$. Every open interval contains infinitely many rational numbers, so there are lots and lots of rationals in $(r,s)$. However, since $A$ contains all rationals in $(0,1)$, then the only rationals that can be in $(r,s)$ are $0$, $1$, and rationals that are either negative or greater than $1$. Since $x\in [0,1]$, the only possibility is $x=0$ or $x=1$ (there's a small argument to be made here; I'll leave it to you). Why can we not have $x=0$? Well, if $x=0$, then $s\gt 0$, so $(r,s)$ contains $[0,\min\{s,1\})$. Are there any rationals between $0$ and $1$ that are in $[0,\min\{s,1\})$? And what happens if $x=1$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/81166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Arithmetic error in Feller's Introduction to Probability? In my copy of An Introduction to Probability by William Feller (3rd ed, v.1), section I.2(b) begins as follows: (b) Random placement of r balls in n cells. The more general case of [counting the number of ways to put] $r$ balls in $n$ cells can be studied in the same manner, except that the number of possible arrangements increases rapidly with $r$ and $n$. For $r=4$ balls in $n=3$ cells, the sample space contains already 64 points ... This statement seems incorrect to me. I think there are $3^4 = 81$ ways to put 4 balls in 3 cells; you have to choose one of the three cells for each of the four balls. Feller's answer of 64 seems to come from $4^3$. It's clear that one of us has made a very simple mistake. Who's right, me or Feller? I find it hard to believe the third edition of a universally-respected textbook contains such a simple mistake, on page 10 no less. Other possible explanations include: (1) My copy, a cheap-o international student edition, is prone to such errors and the domestic printings don't contain this mistake. (2) I'm misunderstanding the problem Feller was examining.
Assuming sampling with replacement, there are four possible balls for cell 1, four possible balls for cell 2, and four possible balls for cell 3, so there are $4^3=64$ possibilities. (Assuming sampling without replacement, there are four possible balls for cell 1, three for cell 2, and two for cell 3, for a total of $4\cdot3\cdot2=24$.) So it seems the original Feller was correct.
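The two counts discussed in the thread come from two different enumeration models, which can be made explicit (this sketch only reproduces both numbers; it does not settle which model Feller's sample space intends):

```python
from itertools import product

# model in the question: each of the 4 balls independently picks one of 3 cells
ball_to_cell = list(product(range(3), repeat=4))
# model in this answer: each of the 3 cells independently picks one of 4 balls
cell_to_ball = list(product(range(4), repeat=3))

print(len(ball_to_cell), len(cell_to_ball))  # 81 64
```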
{ "language": "en", "url": "https://math.stackexchange.com/questions/81280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
Why are Gram points for the Riemann zeta important? Given the Riemann-Siegel function, why are the Gram points important? I say if we have $S(T)$, the oscillating part of the zeros, then given a Gram point and the imaginary part of the zeros (under the Riemann Hypothesis), are the Gram points near the imaginary part of the Riemann zeros? I say that if the difference $ |\gamma _{n}- g_{n} | $ is regulated by the imaginary part of the Riemann zeros.
One thing Gram points are good for is that they help in bracketing/locating the nontrivial zeroes of the Riemann $\zeta$ function. More precisely, recall the Riemann-Siegel decomposition $$\zeta\left(\frac12+it\right)=Z(t)\exp(-i\;\vartheta(t))$$ where $Z(t)$ and $\vartheta(t)$ are Riemann-Siegel functions. $Z(t)$ is an important function for the task of finding nontrivial zeroes of the Riemann $\zeta$ function, in the course of verifying the hypothesis. (See also this answer.) That is to say, if some $t_k$ satisfies $Z(t_k)=0$, then $\zeta\left(\frac12+it_k\right)=0$. Now, Gram points $\xi_k$ are numbers that satisfy the relation $\vartheta(\xi_k)=k\pi$ for some integer $k$. They come up in the context of "Gram's law", which states that $(-1)^k Z(\xi_k)$ tends to be positive. More crudely, we can say that Gram points tend to bracket the roots of the Riemann-Siegel function $Z(t)$ (i.e. there is often a root of $Z(t)$ in between consecutive $\xi_k$). "Gram's law" doesn't always hold, however: there are a number of "bad" Gram points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/81346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Series around $s=1$ for $F(s)=\int_{1}^{\infty}\text{Li}(x)\,x^{-s-1}\,dx$ Consider the function $$F(s)=\int_{1}^{\infty}\frac{\text{Li}(x)}{x^{s+1}}dx$$ where $\text{Li}(x)=\int_2^x \frac{1}{\log t}dt$ is the logarithmic integral. What is the series expansion around $s=1$? It has a logarithmic singularity at $s=1$, and I am fairly certain (although I cannot prove it) that it should expand as something of the form $$\log (1-s)+\sum_{n=0}^\infty a_n (s-1)^n.$$ (An expansion of the above form is what I am looking for) I also have a guess that the constant term is $\pm \gamma$ where $\gamma$ is Euler's constant. Does anyone know a concrete way to work out such an expansion? Thanks!
Note that the integral $F(s)$ diverges at infinity for $s\leqslant1$ and redefine $F(s)$ for every $s\gt1$ as $$ F(s)=\int_2^{+\infty}\frac{\text{Li}(x)}{x^{s+1}}\mathrm dx. $$ An integration by parts yields $$ sF(s)=\int_2^{+\infty}\frac{\mathrm dx}{x^s\log x}, $$ and the change of variable $x^{s-1}=\mathrm e^t$ yields $$sF(s)=\mathrm{E}_1(u\log2),\qquad u=s-1,$$ where the exponential integral function $\mathrm{E}_1$ is defined, for every complex $z$ not a nonpositive real number, by $$ \mathrm{E}_1(z)=\int_z^{+\infty}\mathrm e^{-t}\frac{\mathrm dt}t. $$ One knows that, for every such $z$, $$ \mathrm{E}_1(z) = -\gamma-\log z-\sum\limits_{k=1}^\infty \frac{(-z)^{k}}{k\,k!}. $$ On the other hand, $$\frac1s=\frac1{1+u}=\sum_{n\geqslant0}(-1)^nu^n,$$ hence $$F(s)=\frac1s\mathrm{E}_1(u\log2)=\sum_{n\geqslant0}(-1)^nu^n\cdot\left(-\gamma-\log\log2-\log u-\sum\limits_{k=1}^\infty \frac{(-1)^{k}(\log2)^k}{k\,k!}u^k\right). $$ One sees that $F(1+u)$ coincides with a series in $u^n$ and $u^n\log u$ for nonnegative $n$, and that $G(u)=F(1+u)+\log u$ is such that $$G(0)=-\gamma-\log\log2.$$ Finally, $$ F(s)=-\gamma-\log\log2-\log(s-1)-\sum\limits_{n=1}^{+\infty}(-1)^{n}(s-1)^n\log(s-1)+\sum\limits_{n=1}^{+\infty}c_n(s-1)^n, $$ for some coefficients $(c_n)_{n\geqslant1}$. Due to the logarithmic terms, this is a slightly more complicated expansion than the one suggested in the question, in particular $s\mapsto G(s-1)=F(s)+\log(s-1)$ is not analytic around $s=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/81371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Are sin and cos the only continuous and infinitely differentiable periodic functions we have? Sin and cos are everywhere continuous and infinitely differentiable. Those are nice properties to have. They come from the unit circle. It seems there's no other periodic function that is also smooth and continuous. The only other even periodic functions (not smooth or continuous) I have seen are the square wave, the triangle wave, and the sawtooth wave. Are there any other well-known periodic functions?
"Are there any other well-known periodic functions?" In one sense, the answer is "no". Every reasonable periodic complex-valued function $f$ of a real variable can be represented as an infinite linear combination of sines and cosines with periods equal the period $\tau$ of $f$, or equal to $\tau/2$ or to $\tau/3$, etc. See Fourier series. There are also doubly periodic functions of a complex variable, called elliptic functions. If one restricts one of these to the real axis, one can find a Fourier series, but one doesn't do such restrictions, as far as I know, in studying these functions. See Weierstrass's elliptic functions and Jacobi elliptic functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/81411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Factorization of zeta functions and $L$-functions I'm rewriting the whole question in a general form, since that's probably easier to answer and it's also easier to spot the actual question. Assume that we have some finite extension $K/F$ of number fields and assume that the extension is not Galois. Denote the Galois closure by $E/F$. My question is the following: Is it possible to factor $\zeta_K(s)$ in terms of either Artin $L$-functions of the form $L(s,\chi,E/F)$ or $L$-functions corresponding to intermediate Galois extensions of $E/F$?
Yes. The first thing to notice is that the Zeta function is the Artin L-function associated to the trivial representation of $\mathrm{Gal}(E/K)$, i.e. $$\zeta_K(s) = L(s, \mathbb{C}, E/K),$$ where $\mathbb{C}$ is endowed with the trivial action of $\mathrm{Gal}(E/K).$ Let $G = \mathrm{Gal}(E/F)$ and $H = \mathrm{Gal}(E/K).$ Now recall that the $L$-function attached to any representation $\rho$ of $H$ is equal to the $L$-function associated to the induced representation $\rho^G_H$ of $G.$ In particular, $$L(s, \mathbb{C}, E/K) = L(s, \mathbb{C}[G]\otimes_{\mathbb{C}[H]}\mathbb{C}, E/F).$$ Therefore, if $\chi_1,...,\chi_n$ are the irreducible characters of $G$ and $\chi$ is the character of $\mathbb{C}[G]\otimes_{\mathbb{C}[H]}\mathbb{C},$ $$L(s, \mathbb{C}[G]\otimes_{\mathbb{C}[H]}\mathbb{C}, E/F) =L(s, \chi,E/F) = \displaystyle\prod_{i=1}^nL(s, \chi_i,E/F)^{\langle \chi_i, \chi \rangle},$$ And so, $$\zeta_K(s) = \displaystyle\prod_{i=1}^nL(s, \chi_i,E/F)^{\langle \chi_i, \chi \rangle}.$$ When $H$ and $G$ are explicit, the characters $\chi_1,...,\chi_n$ and $\chi$ are easily calculated and thus give an explicit factorization of $\zeta_K(s).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/81480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Limits of integration for random variable Suppose you have two random variables $X$ and $Y$. If $X \sim N(0,1)$, $Y \sim N(0,1)$ and you want to find k s.t. $\mathbb P(X+Y >k)=0.01$, how would you do this? I am having a hard time finding the limits of integration. How would you generalize $\mathbb P(X+Y+Z+\cdots > k) =0.01$? I always get confused when problems involve multiple integrals.
Hint: Are the random variables independent? If so, you can avoid integration by using the following facts: (1) the sum of independent normally distributed random variables has a normal distribution; (2) the mean of the sum of random variables is equal to the sum of the means; (3) the variance of the sum of independent random variables is the sum of the variances; (4) for a standard normal distribution $N(0,1)$: $\Phi^{-1}(0.99)\approx 2.326$.
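The hint can be turned into a short computation with the standard library's `statistics.NormalDist` (a sketch; the generalization to $m$ i.i.d. summands assumes their sum is $N(0,\sqrt m)$, which follows from the facts listed):

```python
from math import sqrt
from statistics import NormalDist

def threshold(m, alpha=0.01):
    # X_1 + ... + X_m ~ N(0, sqrt(m)) for iid N(0,1) summands,
    # so P(sum > k) = alpha  <=>  k = sqrt(m) * Phi^{-1}(1 - alpha)
    return NormalDist(mu=0, sigma=sqrt(m)).inv_cdf(1 - alpha)

print(threshold(1))  # about 2.326, matching Phi^{-1}(0.99) in the hint
print(threshold(2))  # about 3.290, the k with P(X + Y > k) = 0.01
```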
{ "language": "en", "url": "https://math.stackexchange.com/questions/81531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Divisibility by 4 I was asked to find divisibility tests for 2, 3, and 4. I could do this for 2 and 3, but not for 4. I only got as far as this: let $a_na_{n-1}\cdots a_1a_0$ be the $n$ digit number. When we expand the number over its digits, every digit from the hundreds digit onwards contributes a multiple of $100$, which is divisible by 4. So the only part of the proof that's left is to decide when $10a_1+a_0$ is divisible by 4. If we show that this happens exactly when the two-digit number $a_1a_0$ is divisible by 4, the proof is complete. Is the best way to show it just taking all combinations of $a_1,a_0$, or is there a better way?
Note that $10a_1+a_0\equiv2a_1+a_0 \pmod 4$. So for divisibility by 4, $a_0$ must be even, and in this case $2a_1+a_0=2(a_1+\frac{a_0}{2})$, which is divisible by 4 exactly when $a_1+\frac{a_0}{2}$ is even. So $a_1$ and $\frac{a_0}{2}$ must have the same parity (both even or both odd).
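The resulting rule, that a number is divisible by 4 exactly when its last two digits form a number divisible by 4, can be checked exhaustively over a range (a quick sketch):

```python
def last_two_digits_rule(n):
    # every multiple of 100 is divisible by 4, so only the last two digits matter
    return (n % 100) % 4 == 0

assert all(last_two_digits_rule(n) == (n % 4 == 0) for n in range(100000))
```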
{ "language": "en", "url": "https://math.stackexchange.com/questions/81598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Using recurrences to solve $3a^2=2b^2+1$ Is it possible to solve the equation $3a^2=2b^2+1$ for positive, integral $a$ and $b$ using recurrences? I am sure it is, as Arthur Engel in his Problem Solving Strategies has stated that as a method, but I don't think I understand what he means. Can anyone please tell me how I should go about it? Thanks. Edit: Added the condition that $a$ and $b$ are positive integers.
Yes. See, for example, the pair of sequences https://oeis.org/A054320 and https://oeis.org/A072256, where the solutions are listed. The recurrence is defined by $$a_0 = a_1 = 1; \qquad a_n = 10a_{n-1} - a_{n-2},\ n\ge 2.$$ As to how to go about solving this, there are many good references on how to do this, including Wikipedia.
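Running the recurrence from the answer on both coordinates reproduces solutions of $3a^2=2b^2+1$ (a sketch; the initial pairs $(a,b)=(1,1)$ and $(9,11)$ and the assumption that the $b$-side obeys the same recurrence are taken from the two OEIS sequences cited):

```python
def solutions(n_terms):
    # both coordinates obey s_n = 10*s_{n-1} - s_{n-2}
    a_vals, b_vals = [1, 9], [1, 11]
    while len(a_vals) < n_terms:
        a_vals.append(10 * a_vals[-1] - a_vals[-2])
        b_vals.append(10 * b_vals[-1] - b_vals[-2])
    return list(zip(a_vals, b_vals))

for a, b in solutions(8):
    assert 3 * a * a == 2 * b * b + 1

print(solutions(4))  # [(1, 1), (9, 11), (89, 109), (881, 1079)]
```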
{ "language": "en", "url": "https://math.stackexchange.com/questions/81674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many $p$-adic numbers are there? Let $\mathbb Q_p$ be the field of $p$-adic numbers. I know that the cardinality of $\mathbb Z_p$ (the integer $p$-adic numbers) is the continuum, and every $p$-adic number $x$ can be written in the form $x=p^nx^\prime$, where $x^\prime\in\mathbb Z_p$, $n\in\mathbb Z$. So is the cardinality of $\mathbb Q_p$ the continuum, or more than that?
The field $\mathbb Q_p$ is the fraction field of $\mathbb Z_p$. Since you already know that $|\mathbb Z_p|=2^{\aleph_0}$, let us show that this is also the cardinality of $\mathbb Q_p$: Note that every element of $\mathbb Q_p$ is an equivalence class of pairs in $\mathbb Z_p$, much like the rationals are with respect to the integers. Since $\mathbb Z_p\times\mathbb Z_p$ is also of cardinality continuum, we have that $\mathbb Q_p$ can be injected into this set either by the axiom of choice, or directly by choosing representatives which are co-prime. This shows that $\mathbb Q_p$ has at most continuum many elements; since $\mathbb Z_p$ is a subset of its fraction field, it also has at least that many, so the $p$-adic field has exactly $2^{\aleph_0}$ many elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/81774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Pumping lemma usage I need to know if my solution for a problem related to regular languages and the pumping lemma is correct. So, let $L = \{a^ib^jc^k \mid i, j,k \ge 0 \mbox{ and if } i = 1 \mbox{ then } j=k \}$ Now I need to use the pumping lemma to prove that this language is not regular. I wrote my proof like this: Let's assume that $L$ is regular. Let $|w|= p$ be the pumping length and $q = p -1$. Now if we consider $i = 1$ then $j=k$, so now I can pick a string from $L$ such as $w = ab^qc^q=xyz$. Since $q = p - 1$, it implies that $x = a$, $y=b^q$ and $z=c^q$. It satisfies the property $|xy| \le p$ and $|y| \gt 0$. Assuming that $L$ is regular, then $xy^iz \in L$ for all $i\ge 0$, but if we choose $i=2$ we have $xy^2z$, which means that we have more $b$'s than $c$'s, and we reach a contradiction; therefore $L$ is not regular, which completes the proof. Is my proof correct? I'm having some doubts related to my $q = p - 1$, but I think that it makes sense to choose a $q$ like that to "isolate" $y=b^q$, which will make the proof trivial afterwards. Thanks in advance.
You cannot choose $x$, $y$ and $z$. That is, the following statement does not help you prove that the language is not regular: Since $q=p−1$, it implies that $x=a$, $y=b^q$ and $z=c^q$. It satisfies the property $|xy| \le p$ and $|y|>0$. The pumping lemma states that for every regular language $L$ there is a pumping length $p$ such that every string $w \in L$ with $|w| \ge p$ admits some decomposition $w = xyz$ with $|xy| \le p$ and $|y| \ne 0$ for which $xy^iz \in L$ for all $i \ge 0$. Therefore, if you wish to prove a language is not regular, you may go by contradiction: pick one string $w \in L$ with $|w| \ge p$ and show that for all decompositions $w = xyz$ with $|xy| \le p$ and $|y| \ne 0$, there exists an $i \ge 0$ such that $xy^iz \not\in L$. Note how the quantifiers flip. You have only shown that one choice of $x$, $y$ and $z$ violates the pumping lemma, but you must show that all choices violate it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/81847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Another quadratic Diophantine equation: How do I proceed? How would I find all the fundamental solutions of the Pell-like equation $x^2-10y^2=9$ I've swapped out the original problem from this question for a couple reasons. I already know the solution to this problem, which comes from http://mathworld.wolfram.com/PellEquation.html. The site gives 3 fundamental solutions and how to obtain more, but does not explain how to find such fundamental solutions. Problems such as this have plagued me for a while now. I was hoping with a known solution, it would be possible for answers to go into more detail without spoiling anything. In an attempt to be able to figure out such problems, I've tried websites, I've tried some of my and my brother's old textbooks as well as checking out 2 books from the library in an attempt to find an answer or to understand previous answers. I've always considered myself to be good in math (until I found this site...). Still, judging from what I've seen, it might not be easy trying to explain it so I can understand it. I will be attaching a bounty to this question to at least encourage people to try. I do intend to use a computer to solve this problem and if I have solved problems such as $x^2-61y^2=1$, which will take forever unless you know to look at the convergents of $\sqrt{61}$. Preferably, I would like to understand what I'm doing and why, but failing that will settle for being able to duplicate the methodology.
You can type it into Dario Alpern's solver and tick the "step-by-step" button to see a detailed solution. EDIT: I'm a little puzzled by Wolfram's three fundamental solutions, $(7,2)$, $(13,4)$, and $(57,18)$. It seems to me that there are two fundamental solutions, $(3,0)$ and $(7,2)$, and you can get everything else by combining those two with solutions $(19,6)$ of $x^2-10y^2=1$. Using mercio's formalism, $$(7-2\sqrt{10})(19+6\sqrt{10})=13+4\sqrt{10}$$ shows you how to get $(13,4)$; $$(3+0\sqrt{10})(19+6\sqrt{10})=57+18\sqrt{10}$$ shows you how to get $(57,18)$.
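mercio's multiplication rule, $(a+b\sqrt{10})(u+v\sqrt{10}) = (au+10bv) + (av+bu)\sqrt{10}$, can be used to grow each fundamental solution by the unit $(19,6)$ (a sketch following this answer; the choice of fundamentals $(3,0)$ and $(7,\pm 2)$ is the one proposed above):

```python
def mul(sol, unit):
    a, b = sol
    u, v = unit
    # (a + b*sqrt(10)) * (u + v*sqrt(10)) = (a*u + 10*b*v) + (a*v + b*u)*sqrt(10)
    return (a * u + 10 * b * v, a * v + b * u)

unit = (19, 6)  # 19^2 - 10*6^2 = 1, so multiplying by it preserves x^2 - 10y^2
sols = set()
for s in [(3, 0), (7, 2), (7, -2)]:
    for _ in range(4):
        sols.add((abs(s[0]), abs(s[1])))
        s = mul(s, unit)

assert all(x * x - 10 * y * y == 9 for x, y in sols)
print(sorted(sols)[:4])  # includes (3, 0), (7, 2), (13, 4), (57, 18)
```

Note how $(13,4)$ and $(57,18)$, listed by Wolfram as fundamental, fall out of $(7,-2)$ and $(3,0)$ respectively, matching the answer's observation.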
{ "language": "en", "url": "https://math.stackexchange.com/questions/81917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
Probability when I have one or 3 choices I'm wondering what is "better" to have in terms of profit: Let's say we have 300 people come to your store and you have 3 products. Each individual is given (randomly) one product. If the individual likes the product he will buy it, but if he doesn't like the product, he will turn around and walk out of the store forever. Now, I'm wondering whether it is better (more probable) to have 3 products or just one, in order to maximize the total number of sold products. Important to note is that the customer doesn't decide which product he gets (he gets it randomly); he only decides if he likes it or not (50-50 chance he likes it). Any help is welcome, or maybe a link to some theory I should read in order to come up with a solution. edit: Ok, so, a little more detailed explanation: let's say that I have an unlimited number of each of the items in the store (currently 3 different items - but an unlimited number of them in stock), and let's say that each customer that comes into my store (approximately 1000 a day) either likes or doesn't like the randomly offered product. Each of the products offered is of the same popularity - we sold almost the same amount of product1, product2 and product3. Product1 sold for example just 100 more units than Product2, and Product2 sold about 100 more units than Product3, but the sold numbers are as high as 1000000 so this difference is really very low. Does this now help in determining whether we should choose only Product1 and keep forcing it, or should we leave the three products as is?
is it better (more probable) to have 3 products or just one, in order to maximize the total number of sold products. Important to note is that customer doesn't decide which product he gets (he gets it randomly), he only decides if he likes it or not (50-50 chance he likes it). Under the conditions you have described, the main objective is to maximize the total number of sold products. This number is at most the number of customers who come to the store. The only factors here are: 1 - the number of people showing up in the store, and 2 - the number of items you have in stock. Factor 1 is not described in your statement of the problem, but if you think that having more than one product will affect it, then having 3 products is better than 1. Otherwise, the only relevant factor would be factor 2, and adding new products will not add value. I hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/81972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove that $(1 - \frac{1}{n})^{-n}$ converges to $e$ This is a homework question and I am not really sure where to go with it. I have a lot of trouble with sequences and series, can I get a tip or push in the right direction?
You have: $$ x_n:=\left(1-\frac1n\right)^{-n} = \left(\frac{n-1}n\right)^{-n} = \left(\frac{n}{n-1}\right)^{n} $$ $$ = \left(1+\frac{1}{n-1}\right)^{n} = \left(1+\frac{1}{n-1}\right)^{n-1}\cdot \left(1+\frac{1}{n-1}\right) = a_n\cdot b_n. $$ Since $a_n\to \mathrm e$ and $b_n\to 1$ you obtain what you need.
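Numerically (a quick check, nothing more), the sequence approaches $e$ from above, consistent with the factorization $x_n = a_n \cdot b_n$ where $a_n \to e$ and $b_n = 1 + \frac{1}{n-1} > 1$:

```python
import math

def x_n(n):
    return (1 - 1 / n) ** (-n)

for n in [10, 1000, 10**6]:
    print(n, x_n(n), x_n(n) - math.e)  # the gap shrinks roughly like e/(2n)
```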
{ "language": "en", "url": "https://math.stackexchange.com/questions/82034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 1, "answer_id": 0 }
Is there an easy way to determine when this fractional expression is an integer? For $x,y\in \mathbb{Z}^+,$ when is the following expression an integer? $$z=\frac{(1-x)-(1+x)y}{(1+x)+(1-x)y}$$ The associated Diophantine equation is symmetric in $x, y, z$, but I couldn't do anything more with that. I tried several factoring tricks without luck. The best I could do was find three solutions such that $0<x\le y\le z$. They are: $(2,5,8)$, $(2,4,13)$ and $(3,3,7)$. The expression seems to converge pretty quickly to some non-integer between 1 and 2.
Since $$ \frac{(1-x)-(1+x)y}{(1+x)+(1-x)y} = \frac{ xy+x+y-1}{xy-x-y-1} = 1 + \frac{2(x+y) }{xy-x-y-1}, $$ the expression is not an integer once $0 < 2(x+y) < xy - x - y - 1$, which holds whenever $3(x+y) < xy - 1$. Suppose $x\leq y$; then $3(x+y) \leq 6y \leq xy-1$ if $x\geq 7$. So all solutions must have $0\leq x< 7$, and the problem is reduced to solving $7$ simpler Diophantine equations. If $x=0 $ then $ \displaystyle z= 1 - \frac{2y}{y+1}$ so the only solutions are $ (0,0,1)$ and $ (0,1,0).$ If $x=1$ then $ \displaystyle z= -y$ so $(1,m,-m)$ is a solution for $ m\geq 1.$ If $x=2$ then $ \displaystyle z = 1 + \frac{4+2y}{y-3}$ which is an integer for $y=1,2,4,5,8,13.$ I will leave you to find the others. Each of the cases is now a simple Diophantine equation.
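The case analysis can be confirmed with a brute-force search over the simplified form $z = \frac{xy+x+y-1}{xy-x-y-1}$ (a sketch; the bound $x<7$ comes from the answer, while the $y$ cutoff of 40 is an arbitrary assumption for the search):

```python
def z_value(x, y):
    num = x * y + x + y - 1
    den = x * y - x - y - 1
    if den == 0 or num % den != 0:
        return None  # undefined or not an integer
    return num // den

# the three positive solutions mentioned in the question
assert z_value(2, 5) == 8 and z_value(2, 4) == 13 and z_value(3, 3) == 7

# all triples with 0 <= x <= y and positive integer z, x below the answer's bound
found = [(x, y, z_value(x, y)) for x in range(7) for y in range(x, 40)
         if z_value(x, y) is not None and z_value(x, y) > 0]
```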
{ "language": "en", "url": "https://math.stackexchange.com/questions/82087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the equation of the plane passing through a point and a vector orthogonal I have come across this question that I need a tip for. Find the equation (general form) of the plane passing through the point $P(3,1,6)$ that is orthogonal to the vector $v=(1,7,-2)$. I would be able to do this if it said "parallel to the vector": I would set the equation up as $(x,y,z) = (3,1,6) + t(1,7,-2)$ and go from there. I don't get where I can get an orthogonal vector. Normally when I am finding an orthogonal vector I have two other vectors and do the cross product to find it. I am thinking somehow I have to get three points on the plane, but I'm not sure how to go about doing that. Any pointers? Thanks in advance.
The vector equation of a plane is $\mathbf r\cdot\mathbf n=\mathbf a\cdot\mathbf n$, where $\mathbf a$ is a point on the plane and $\mathbf n$ is a normal vector. In this case, $\mathbf a=(3,1,6)$ and $\mathbf n=(1,7,-2)$. Therefore, $$\mathbf r\cdot(1,7,-2)=(3,1,6)\cdot(1,7,-2)=3\cdot1+1\cdot7+6\cdot(-2)=3+7-12=-2.$$ Writing $\mathbf r=(x,y,z)$ gives the general form $$x+7y-2z=-2.$$
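The same dot-product computation, written as a tiny sketch:

```python
point = (3, 1, 6)    # a, a point on the plane
normal = (1, 7, -2)  # n, orthogonal to the plane

d = sum(p * n for p, n in zip(point, normal))  # a . n = 3 + 7 - 12 = -2
print(f"x + 7y - 2z = {d}")  # the general form of the plane

def on_plane(x, y, z):
    return x + 7 * y - 2 * z == d

assert on_plane(3, 1, 6)  # the given point lies on the plane
```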
{ "language": "en", "url": "https://math.stackexchange.com/questions/82151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
A circle with infinite radius is a line I am curious about the following diagram: The image implies a circle of infinite radius is a line. Intuitively, I understand this, but I was wondering whether this problem could be stated and proven formally? Under what definition of 'circle' and 'line' does this hold? Thanks!
There is no such thing as a circle of infinite radius. One might find it useful to use the phrase "circle of infinite radius" as shorthand for some limiting case of a family of circles of increasing radius, and (as the other answers show) that limit might give you a straight line.
{ "language": "en", "url": "https://math.stackexchange.com/questions/82220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 8, "answer_id": 1 }
Product rule in calculus This is a wonderful question I came across while doing calculus. We all know that $$\frac{d(AB)}{dt} = B\frac{dA}{dt} + A\frac{dB}{dt}.$$ Now if $A=B$, give an example for which $$\frac{dA^2} {dt} \neq 2A\frac{dA}{dt}.$$ I have tried many examples and couldn't get one; any help?
Let's observe the function $y=(f(x))^2$; this function can be decomposed as the composite of two functions: $y=g(u)=u^2$ and $u=f(x)$. So: $\frac { d y}{ d u}=(u^2)'_u=2u=2f(x)$ and $\frac{du}{dx}=f'(x)$. By the chain rule we know that: $\frac{dy}{dx}=\frac{dy}{du}\cdot \frac{du}{dx}=2f(x)f'(x)$
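A numerical check with a central difference (a sketch using $f=\sin$ as a concrete example, chosen here for illustration) shows the two sides of $\frac{d}{dt}f(t)^2 = 2f(t)f'(t)$ agreeing:

```python
import math

def deriv(g, t, h=1e-6):
    # central-difference approximation of g'(t)
    return (g(t + h) - g(t - h)) / (2 * h)

f = math.sin
t = 0.7
lhs = deriv(lambda u: f(u) ** 2, t)   # d/dt of f(t)^2
rhs = 2 * f(t) * deriv(f, t)          # 2 f(t) f'(t)
print(lhs, rhs)  # both approximately sin(2t) = sin(1.4)
```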
{ "language": "en", "url": "https://math.stackexchange.com/questions/82266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Solving the recurrence relation $A_n=n!+\sum_{i=1}^n{n\choose i}A_{n-i}$ I am attempting to solve the recurrence relation $A_n=n!+\sum_{i=1}^n{n\choose i}A_{n-i}$ with the initial condition $A_0=1$. By "solving" I mean finding an efficient way of computing $A_n$ for general $n$ in complexity better than $O(n^2)$. I tried using the identity $\dbinom{n+1}i=\dbinom{n}{i-1}+\dbinom{n}i$ but I still ended up with a sum over all previous $n$'s. Another approach was to notice that $2A_n=n!+\sum_{i=0}^{n}{n \choose i}A_{n-i}$ and so if $a(x)$ is the EGF for $A_n$, then we get the relation $$2a(x)=\frac{1}{1-x}+a(x)e^x,$$ so $$a(x)=\frac{1}{(1-x)(2-e^x)}$$ (am I correct here?) but I can't see how to use this EGF for the more efficient computation of $A_n$.
This isn’t an answer, but it may lead to useful references. The form of the recurrence suggests dividing through by $n!$ and substituting $B_n=\dfrac{A_n}{n!}$, after which the recurrence becomes $$B_n=1 + \sum_{i=1}^n\binom{n}i\frac{(n-i)!}{n!}B_{n-i}=1+\sum_{i=1}^n\frac{B_{n-i}}{i!}.$$ You didn’t specify an initial condition, so for the first few terms we have: $$\begin{align*} B_0&=0+B_0\\ B_1&=1+B_0\\ B_2&=2+\frac32B_0\\ B_3&=\frac72+\frac{13}6B_0\\ B_4&=\frac{17}3+\frac{25}8B_0\\ B_5&=\frac{211}{24}+\frac{541}{120}B_0 \end{align*}$$ If we set $B_n=u_n+v_nB_0$, then $$u_n=1+\sum_{i=1}^n\frac{u_{n-i}}{i!}$$ with $u_0=0$, and $$v_n=\sum_{i=1}^n\frac{v_{n-i}}{i!}$$ with $v_0=1$. The ‘natural’ denominator of $u_n$ is $(n-1)!$, while that of $v_n$ is $n!$, so I looked at the sequences $$\langle (n-1)!u_n:n\in\mathbb{N}\rangle = \langle 0,1,2,7,34,211,\dots\rangle$$ and $$\langle n!v_n:n\in\mathbb{N}\rangle=\langle 1,1,3,13,75,541,\dots\rangle\;.$$ The first is OEIS A111539, and the second appears to be OEIS A000670. There’s evidently a great deal known about the latter; there’s very little on the former. Added: And with $A_0=1$ we have $B_0=1$ and $B_n=u_n+v_n$.
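Both the recurrence and the generating function in the question can be cross-checked mechanically. The sketch below (exact rational arithmetic; the truncation order $N=10$ is an arbitrary choice) computes $A_n$ from the recurrence with $A_0=1$ and compares it with $n!$ times the series coefficients of $1/((1-x)(2-e^x))$, confirming the question's EGF:

```python
from fractions import Fraction
from math import comb, factorial

N = 10

# A_n from the recurrence A_n = n! + sum_{i=1}^n C(n,i) A_{n-i}, A_0 = 1
A = [1]
for n in range(1, N):
    A.append(factorial(n) + sum(comb(n, i) * A[n - i] for i in range(1, n + 1)))

# Power series coefficients of 2 - e^x up to order N
c = [Fraction(1)] + [-Fraction(1, factorial(k)) for k in range(1, N)]

# Invert the series: b = 1/(2 - e^x), using b_0 = 1/c_0 and
# b_n = -(1/c_0) * sum_{k=1..n} c_k b_{n-k}   (here c_0 = 1)
b = [Fraction(1)]
for n in range(1, N):
    b.append(-sum(c[k] * b[n - k] for k in range(1, n + 1)))

# Multiplying by 1/(1-x) = 1 + x + x^2 + ... takes partial sums of b
a_coeff = [sum(b[: n + 1], Fraction(0)) for n in range(N)]

# EGF coefficients times n! should reproduce A_n
A_from_egf = [a_coeff[n] * factorial(n) for n in range(N)]
```

The two lists agree, so the EGF $a(x)=1/((1-x)(2-e^x))$ is indeed correct.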
{ "language": "en", "url": "https://math.stackexchange.com/questions/82315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
How to prove $\sum_{ d \mid n} \mu(d)f(d)=\prod_{i=1}^r (1-f(p_i))$? I have to prove for $n \in \mathbb{N}>1$ with $n=\prod \limits_{i=1}^r p_i^{e_i}$. $f$ is a multiplicative function with $f(1)=1$: $$\sum_{ d \mid n} \mu(d)f(d)=\prod_{i=1}^r (1-f(p_i))$$ How should I start? Are there different cases, or can I prove it in general? Any help would be fine :)
Please see Theorem 2.18 on page $37$ in Tom Apostol's Introduction to analytic number theory book. The proof goes as follows: Define $$ g(n) = \sum\limits_{d \mid n} \mu(d) \cdot f(d)$$ * *Then $g$ is multiplicative, so to determine $g(n)$ it suffices to compute $g(p^a)$. But note that $$g(p^a) = \sum\limits_{d \mid p^{a}} \mu(d) \cdot f(d) = \mu(1)\cdot f(1) + \mu(p)\cdot f(p) = 1-f(p)$$
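For a concrete sanity check, the identity can be brute-forced for a sample multiplicative function; the sketch below uses $f(d)=d$ (an arbitrary choice — any multiplicative $f$ with $f(1)=1$ works):

```python
def prime_factors(n):
    """Distinct prime divisors of n."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def mobius(n):
    ps = prime_factors(n)
    rad = 1
    for p in ps:
        rad *= p
    if rad != n:          # n is not squarefree
        return 0
    return (-1) ** len(ps)

def f(d):                 # sample multiplicative function with f(1) = 1
    return d

def lhs(n):
    return sum(mobius(d) * f(d) for d in range(1, n + 1) if n % d == 0)

def rhs(n):
    result = 1
    for p in prime_factors(n):
        result *= 1 - f(p)
    return result

identity_holds = all(lhs(n) == rhs(n) for n in range(2, 200))
```

For instance $n=12$: the squarefree divisors are $1,2,3,6$, giving $1-2-3+6=2=(1-2)(1-3)$.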
{ "language": "en", "url": "https://math.stackexchange.com/questions/82379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Sketch the graph of $y = \frac{4x^2 + 1}{x^2 - 1}$ I need help sketching the graph of $y = \frac{4x^2 + 1}{x^2 - 1}$. I see that the domain is all real numbers except $1$ and $-1$ as $x^2 - 1 = (x + 1)(x - 1)$. I can also determine that between $-1$ and $1$, the graph lies below the x-axis. What is the next step? In previous examples I have determined the behavior near x-intercepts.
You can simplify right away with $$ y = \frac{4x^2 + 1}{x^2 - 1} = 4+ \frac{5}{x^2 - 1} =4+ \frac{5}{(x - 1)(x+1)} $$ Now when $x\to\infty$ or $x\to -\infty$, the second term goes to zero, so $y=4$ is a horizontal asymptote. When $x$ is quite large, say $1000$, the second term is very small but positive, hence the graph approaches $4$ from above (the same holds for large negative values). What remains is the behaviour as $x$ approaches $-1$ and $1$ from either side. For $x<-1$ and $x>1$ the second term is positive, while for $-1<x<1$ it is negative; therefore the function jumps from $-\infty$ to $+\infty$ across each vertical asymptote.
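The limiting behaviour is easy to confirm numerically (a quick sketch; the sample points are arbitrary choices):

```python
def y(x):
    return (4 * x**2 + 1) / (x**2 - 1)

# Horizontal asymptote y = 4, approached from above for large |x|
far = y(1000.0)

# Near the vertical asymptote at x = 1 the sign flips
just_right_of_1 = y(1.001)   # large positive
just_left_of_1 = y(0.999)    # large negative
```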
{ "language": "en", "url": "https://math.stackexchange.com/questions/82443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A counterexample to theorem about orthogonal projection Can someone give me an example of noncomplete inner product space $H$, its closed linear subspace of $H_0$ and element $x\in H$ such that there is no orthogonal projection of $x$ on $H_0$. In other words I need to construct a counterexample to theorem about orthogonal projection when inner product space is not complete.
Let $H$ be the inner product space consisting of $\ell^2$-sequences with finite support, let $\lambda = 2^{-1/2}$ and put $$ z = \sum_{n=1}^\infty \;\lambda^n \,e_n \in \ell^2 \smallsetminus H $$ Then $\langle z, z \rangle = \sum_{n=1}^\infty \lambda^{2n} = \sum_{n=1}^{\infty} 2^{-n} = 1$. The subspace $H_0 = \{y \in H\,:\,\langle z, y \rangle = 0\}$ is closed in $H$ because $\langle z, \cdot\rangle: H \to \mathbb{R}$ is continuous. The projection of $x = e_1$ to $H_0$ should be $$ y = e_1 - \langle z,e_1\rangle\, z = e_1 - \lambda z = \lambda^2 e_1 - \sum_{n=2}^\infty \lambda^{n+1}e_n \in \ell^2 \smallsetminus H_0, $$ using $1-\lambda^2 = \lambda^2$. For $k \geq 2$ put $$ z_k = \sum_{n=2}^k \;\lambda^{n} \,e_n + \frac{\lambda^{k+1}}{1-\lambda^2}\, e_{k+1} \in H. $$ Then $y_k = \lambda^2 e_1-\lambda z_k \in H_0$, because $$ \langle y_k, z\rangle =\lambda^3 - \sum_{n=2}^{k}\,\lambda^{2n+1}-\frac{\lambda^{2k+3}}{1-\lambda^2} = \sum_{n=k+1}^{\infty}\lambda^{2n+1}-\frac{\lambda^{2k+3}}{1-\lambda^2} = 0, $$ where we used $\lambda^3=\sum_{n=2}^\infty \lambda^{2n+1}$ (both sides equal $2\lambda^5$, since $\lambda^2=\tfrac12$) and $\sum_{n=k+1}^{\infty}\lambda^{2n+1}=\frac{\lambda^{2k+3}}{1-\lambda^2}$. On the other hand, we have $y_k \to y$ in $\ell^2$, so $$ \|e_1 - y\| \leq d(e_1,H_0) \leq \lim_{k\to\infty} \|e_1-y_k\| = \|e_1-y\| $$ and we're done because $y \in \overline{H}_0$ in $\ell^2$ is the only point realizing $d(e_1,\overline{H}_0)$ in $\ell^2$, thus there can be no point in $H_0$ minimizing the distance to $e_1$ because $y \notin H_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/82499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
First Order Logic (deduction proof in Hilbert system) Possible Duplicate: First order logic proof question I need to prove this: ⊢ (∀x.ϕ) → (∃x.ϕ) Using the following axioms: The only thing I did was use the deduction theorem: (∀x.ϕ) ⊢ (∃x.ϕ) And then changed (∃x.ϕ) into (~∀x.~ϕ), so: (∀x.ϕ) ⊢ (~∀x.~ϕ) How can I continue from this? I cannot use the soundness/completeness theorems. EDIT: ∀* means a finite sequence of universal quantifiers (possibly 0)
If the asterisks in your axioms mean that the axioms are to be fully universally quantified, so that they become sentences, and if your language has no constant symbols, then it will not be possible to make the desired deduction in your system. The reason is that since all the axioms are fully universally quantified, they are (vacuously) true in the empty structure, and your rule of inference is truth-preserving for any structure including the empty structure. But your desired deduction is not valid for the empty structure, since the hypothesis is vacuously true there, but the conclusion is not. So it would actually be unsound for you to able to make that deduction. Your desired validity is only valid in nonempty domains, and so you need a formal system appropriate for reasoning in nonempty domains.
{ "language": "en", "url": "https://math.stackexchange.com/questions/82567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Improper integral; exponential divided by polynomial How can I evaluate $$\int_{-\infty}^\infty {\exp(ixk)\over -x^2+2ixa+a^2+b^2} dx,$$ where $k\in \mathbb R, a>0$? Would Fourier transforms simplify anything? I know very little about complex analysis, so I am guessing there is a rather simple way to evaluate this? Thanks.
Assume $b$ is real, $b \neq 0$ and $k\neq 0$. Write $\dfrac{\exp(ixk)}{-x^2+2iax+a^2+b^2}= \dfrac{\exp(ixk)}{-(x-ia)^2+b^2}=\dfrac{\exp(i(x-ia)k)}{-(x-ia)^2+b^2} \exp(-ak)$, hence the integral becomes $I=\int_{-\infty}^\infty \dfrac{\exp(i(x-ia)k)}{-(x-ia)^2+b^2} \exp(-ak)\,dx=\int_{-\infty-ia}^{\infty -ia} \dfrac{\exp(izk)}{b^2-z^2} \exp(-ak)\,dz$, taken along the straight line parallel to the $x$-axis intercepting the imaginary axis at $-ia$. We close the contour by a great semicircle in the upper half plane if $k>0$, and in this case the two poles $z=b$ and $z=-b$ are enclosed in the contour. Now we will use the residue theorem. The poles of the fraction $\dfrac{\exp(izk)}{b^2-z^2}$ are $-b,+b$, and the integral is $I=(2\pi i)\exp(-ak)\left\lbrace \operatorname{Res}(z=b)+\operatorname{Res}(z=-b)\right\rbrace$. Since the derivative of $b^2-z^2$ is $-2z$, we have $\operatorname{Res}(z=b)=-\dfrac{e^{ibk}}{2b}$ and $\operatorname{Res}(z=-b)=\dfrac{e^{-ibk}}{2b}$, hence $I=2 \pi i \exp(-ak)\cdot\dfrac{e^{-ibk}-e^{ibk}}{2b}=2 \pi \exp(-ak)\, \frac{\sin bk}{b}$. If $k<0$ then we close the contour by a semicircle in the lower half plane, and in this case there are no poles enclosed, so the integral is zero. Now assume $b=0$ and $k\neq 0$: in this case there is a double pole at $z=0$ with residue $-ik$, and the integral becomes $2\pi k\exp(-ak)$ (if $k>0$) and $0$ if $k<0$. Finally let $k=0$: then the result is elementary and I leave it for you as an exercise.
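The residue computation can be sanity-checked numerically. Note that the residues at $z=\pm b$ carry factors of $e^{\pm ibk}$ whose combination with $2\pi i$ produces the real number $2\pi e^{-ak}\sin(bk)/b$ for $k>0$. The sketch below approximates the integral with a plain trapezoidal rule (the parameter values $a=1$, $b=2$, $k=1.5$ are arbitrary; the tail of the integrand decays like $1/x^2$, and the oscillation makes the truncation error even smaller):

```python
import cmath
import math

a, b, k = 1.0, 2.0, 1.5

def integrand(x):
    return cmath.exp(1j * x * k) / (-x**2 + 2j * a * x + a**2 + b**2)

# Trapezoidal rule on [-R, R]
R, n = 200.0, 80_000
h = 2 * R / n
total = 0.5 * (integrand(-R) + integrand(R))
for i in range(1, n):
    total += integrand(-R + i * h)
numeric = total * h

expected = 2 * math.pi * math.exp(-a * k) * math.sin(b * k) / b
```

In particular the value comes out real, as it must for this integrand and these parameters.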
{ "language": "en", "url": "https://math.stackexchange.com/questions/82642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proof of Convergence: Babylonian Method $x_{n+1}=\frac{1}{2}(x_n + \frac{a}{x_n})$ a) Let $a>0$ and the sequence $x_n$ fulfills $x_1>0$ and $x_{n+1}=\frac{1}{2}(x_n + \frac{a}{x_n})$ for $n \in \mathbb N$. Show that $x_n \rightarrow \sqrt a$ when $n\rightarrow \infty$. I have done it in two ways, but I guess I'm not allowed to use the first one and the second one is incomplete. Can someone please help me? * *We already know $x_n \rightarrow \sqrt a$, so we do another step of the iteration and see that $x_{n+1} = \sqrt a$. *Using limits, $x_n \rightarrow x, x_{n+1} \rightarrow x$ (this is the part I think is incomplete — don't I have to show $x_{n+1} \rightarrow x$? How?), we have that $$x = \frac x 2 (1 + \frac a {x^2}) \Rightarrow 1 = a/x^2 \Rightarrow x = \sqrt a$$ b) Let the sequence $x_n$ be defined as $x_{n+1}= 1 + \frac 1 {x_n} (n \in \mathbb N), x_1=1$. Show that it converges and calculate its limit. "Tip: Show that the subsequences $x_{2n}$ and $x_{2n+1}$ converge monotonically to the same limit." I didn't understand the tip; how can this help me? Does it make a difference whether the index is odd or even? Thanks in advance!
For a): The proof of convergence can be deduced from the question/answer LFT theory found in Iterative Convergence Formulation for Linear Fractional Transformation with Rational Coefficients Proof when $x_1^2 > a$ Note: If both $a$ and $x_1$ are rational numbers, then this solution is obtained without recourse to the real number system. Let $S$ represent $a$ and $K$ denote $x_1$. We have our LFT theory for $F(x) = \frac{S + Kx}{K + x}$ as espoused in the above link. Considering Proposition 2 & 3, we have a sequence $\{F^1, F^2, F^3, ..., F^n, ...\}$ of LFTs with the corresponding decreasing sequence of Ks $\{K, K_2, K_3, ..., K_n, ...\}$ and $(K_n)^2$ converges to $S$. Might as well set $K_1$ to $K$ now. Now with a little thought, you can see the following holds: $K_2 = (S + K_1^2)/2K_1$ $K_4 = (S + K_2^2)/2K_2$ $K_8 = (S + K_4^2)/2K_4$ $K_{16} = (S + K_8^2)/2K_8$ etc. But this is exactly the Babylonian Method. So the method can actually be described as calculating numbers that are a subsequence of our convergent sequence $\{K_1, K_2, K_3, ..., K_n, ...\}$ So we have shown that the squares of the numbers produced by the Babylonian Method converge to $S$. Proof when $x_1^2 < a$ Let $K$ denote $(a + x_1^2)/2x_1$. We know by the LFT theory that the square of this number is greater than $a$. So, the proof given above can now be applied, by simply 'throwing out' the first number $x_1$ of the sequence.
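Independently of the LFT machinery, the iteration itself is a few lines. Here is a sketch for $a=2$, $x_1=1$ (arbitrary choices), illustrating that after one step every iterate satisfies $x_n^2 > a$ and that the error then roughly squares at each step:

```python
def babylonian_sqrt(a, x1, n_steps=8):
    """Iterate x_{n+1} = (x_n + a/x_n)/2 and return the whole orbit."""
    xs = [x1]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(0.5 * (x + a / x))
    return xs

orbit = babylonian_sqrt(2.0, 1.0)
errors = [abs(x - 2.0**0.5) for x in orbit]
```

Even starting below $\sqrt a$, the very first step jumps above it, matching the "throwing out the first number" remark above.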
{ "language": "en", "url": "https://math.stackexchange.com/questions/82682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 7, "answer_id": 5 }
Differential Equation Breaks Euler Method Solving ${dy\over dx} = 2y^2$, $y(0)=2$ analytically yields $y(8)= -2/31$, but from using Euler's method and looking at the slope field, we see that $y(8)$ should be a really large positive answer. Why? Differential equation: $$\begin{align} &\frac{dy}{dx}=2y^2\\ &\frac{dy}{y^2} = 2\, dx\\ -&\frac{1}{y} = 2x + c\\ -&\frac{1}{2} = c\\ -&\frac{1}{y}=2x-\frac{1}{2}\\ &\frac{1}{y}=-2x+\frac{1}{2}\\ &y=\frac{1}{-2x+\frac{1}{2}}\\ &y=\frac{2}{-4x+1}\\ y(8)=-2/31\end{align}$$
As you found, the solution is $y={2\over 1-4x}$, which has a vertical asymptote at $x=1/4$. In the slope field, you should be able to convince yourself that such a function can indeed "fall along the slope vectors". The curve will shoot up to infinity as you approach $x=1/4$ from the left. To the right of $x=1/4$ the curve "comes from below". The graph of $y={2\over 1-4x}$ over $[0,1]$ is shown below:
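A sketch of the numerical side of the question: since the slope $2y^2$ is always positive, explicit Euler steps only ever increase $y$, so the method steps across the asymptote at $x=1/4$ and keeps growing instead of jumping to the negative branch (the step size and the blow-up cap are arbitrary choices):

```python
def euler(f, x0, y0, h, x_end, cap=1e9):
    """Explicit Euler; stop early once |y| exceeds cap (numerical blow-up)."""
    x, y = x0, y0
    while x < x_end:
        y = y + h * f(x, y)
        x = x + h
        if abs(y) > cap:
            break
    return x, y

x_stop, y_stop = euler(lambda x, y: 2 * y**2, 0.0, 2.0, 0.01, 8.0)

# The analytic solution y = 2/(1-4x) gives y(8) = -2/31, yet Euler
# produces a huge positive value shortly after the blow-up point x = 1/4.
```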
{ "language": "en", "url": "https://math.stackexchange.com/questions/82746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Locally compact nonarchimedian fields Is it true that if $F$ is a locally compact topological field with a proper nonarchimedean absolute value $A$, then $F$ is totally disconnected? I am aware of the classifications of local fields, but I can't think of a way to prove this directly.
Yes: the non-Archimedean absolute value yields a non-Archimedean metric (also known as an ultrametric), and every ultrametric space is totally disconnected. In fact, every ultrametric space is even zero-dimensional, as it has a base of clopen sets. Proof: Let $\langle X,d\rangle$ be an ultrametric space, meaning that $d$ is a metric satisfying $$d(x,y)\le\max\{d(x,z),d(y,z)\}$$ for any $x,y,z\in X$. Let $B(x,r)=\{y\in X:d(x,y)<r\}$; by definition $B(x,r)$ is open. Suppose that $y\in X\setminus B(x,r)$; then $d(x,y)\ge r$, and I claim that $B(x,r)\cap B(y,r)=\varnothing$. To see this, suppose that $z\in B(x,r)\cap B(y,r)$; then $$d(x,y)\le\max\{d(x,z),d(y,z)\}<r\;,$$ which is impossible. Thus, $y\notin\operatorname{cl}B(x,r)$, and $B(x,r)$ is closed. Thus, every open ball is clopen, and $X$ is zero-dimensional (and hence totally disconnected). $\dashv$ The metric associated with the non-Archimedean absolute value $\|\cdot\|$ is of course $d(x,y)=\|x-y\|$.
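The canonical concrete example of such an ultrametric comes from the $p$-adic absolute value $|x|_p = p^{-v_p(x)}$ on the integers. A brute-force sketch checking the strong triangle inequality for $p=2$ (the range of integers tested is an arbitrary choice):

```python
def v_p(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_p(n, p):
    return 0.0 if n == 0 else p ** (-v_p(n, p))

p = 2
ultrametric_holds = all(
    abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p))
    for x in range(-30, 31)
    for y in range(-30, 31)
)
```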
{ "language": "en", "url": "https://math.stackexchange.com/questions/82796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Function $f\colon 2^{\mathbb{N}}\to 2^{\mathbb{N}}$ preserving intersections and mapping sets to sets which differs only by finite number of elements Define on $2^{\mathbb{N}}$ equivalence relation $$ X\sim Y\Leftrightarrow \text{Card}((X\setminus Y)\cup(Y\setminus X))<\aleph_0 $$ Is there exist a function $f\colon 2^{\mathbb{N}}\to 2^{\mathbb{N}}$ such that $$ f(X)\sim X $$ $$ X\sim Y \Rightarrow f(X)=f(Y) $$ $$ f(X\cap Y)=f(X)\cap f(Y) $$
Let $\{X_\alpha : \alpha\in\mathcal{A}\}\subset 2^{\mathbb{N}}$ be an uncountable family of infinite subsets of $\mathbb{N}$ such that $$ \alpha,\beta\in\mathcal{A},\quad\alpha\neq\beta\Rightarrow \text{Card}(X_\alpha\cap X_\beta)<\aleph_0 $$ Such a family exists. Indeed, for each irrational number $x\in\mathbb{I}$ choose a sequence of distinct rational numbers $\{x_n\}_{n=1}^{\infty}\subset\mathbb{Q}$ tending to $x$, and let $\varphi(x)=\{x_n:n\in\mathbb{N}\}$ be the set of these rational numbers. For $x,y\in\mathbb{I}$ with $x\neq y$ we have $\text{Card}(\varphi(x)\cap\varphi(y))<\aleph_0$, since an infinite common subset would have to accumulate at both $x$ and $y$. Also, $\text{Card}(\varphi(x))=\aleph_0$ for all $x\in\mathbb{I}$. Now fix a bijection $i\colon \mathbb{Q}\to\mathbb{N}$; it induces a bijection $2^\mathbb{Q}\to 2^\mathbb{N}$ preserving intersections and cardinalities, so we may take $\{X_\alpha : \alpha\in\mathcal{A}\}=\{i(\varphi(x)):x\in\mathbb{I}\}$ (writing $i$ also for the induced map). This is the desired family. Let $\alpha,\beta\in\mathcal{A},\alpha\neq\beta$. Then $X_\alpha\cap X_\beta\sim\varnothing$, and from the second and third properties we obtain $f(X_\alpha)\cap f(X_\beta)=f(X_\alpha\cap X_\beta)=f(\varnothing)$. Now for each $\alpha\in\mathcal{A}$ consider $Y_\alpha=f(X_\alpha)\setminus f(\varnothing)$. By construction $X_\alpha$ is infinite, hence so is $f(X_\alpha)$ (it differs from $X_\alpha$ by only finitely many elements), while $f(\varnothing)\sim\varnothing$ is finite; as a consequence $Y_\alpha\neq\varnothing$. Now for all $\alpha,\beta\in\mathcal{A},\alpha\neq\beta$ we have $$ Y_\alpha\cap Y_\beta=f(X_\alpha\cap X_\beta)\setminus f(\varnothing)=\varnothing $$ Thus we have built an uncountable family of pairwise disjoint nonempty subsets $\{Y_\alpha\}$ of the countable set $\mathbb{N}$. Contradiction; hence such a function doesn't exist.
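The almost disjoint family can be made quite concrete. As a finite illustration (a sketch; decimal truncations stand in for the approximating sequences $x_n$, and the pair $\sqrt2$ and $\sqrt2+10^{-6}$ is an arbitrary choice), the truncation sets of two distinct irrationals share only the finitely many truncations of their common decimal prefix. The digits are computed exactly with integer square roots:

```python
from fractions import Fraction
from math import isqrt

def floor_sqrt2_scaled(k):
    """floor(sqrt(2) * 10**k), computed exactly with integer arithmetic."""
    return isqrt(2 * 10 ** (2 * k))

def floor_shifted_scaled(k):
    """floor((sqrt(2) + 10**-6) * 10**k), exactly."""
    if k >= 6:
        # adding the integer 10**(k-6) commutes with floor
        return floor_sqrt2_scaled(k) + 10 ** (k - 6)
    # floor(floor(y * 10**6) / 10**(6-k)) equals floor(y * 10**k)
    return (floor_sqrt2_scaled(6) + 1) // 10 ** (6 - k)

def truncation_set(scaled_floor, N):
    """The first N decimal truncations of the number, as exact fractions."""
    return {Fraction(scaled_floor(k), 10**k) for k in range(1, N + 1)}

common_8 = truncation_set(floor_sqrt2_scaled, 8) & truncation_set(floor_shifted_scaled, 8)
common_14 = truncation_set(floor_sqrt2_scaled, 14) & truncation_set(floor_shifted_scaled, 14)
```

The two numbers agree to five decimal places, so the intersection has exactly five elements and stops growing no matter how many truncations are taken.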
{ "language": "en", "url": "https://math.stackexchange.com/questions/82855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
how to find the parabola of a flying object How can you find the parabola of a flying object without testing it? What variables do you need? I want to calculate the maximum height and distance using a parabola. Is this possible? Any help will be appreciated.
I assume that you know that if an object is thrown straight upwards, with initial speed $v$, then its height $h(t)$ above the ground at time $t$ is given by $$h(t)=vt-\frac{1}{2}gt^2,$$ where $g$ is the acceleration due to gravity. The acceleration is taken to be a positive number, and is treated as constant, which is reasonable since our thrown object achieves only modest heights. In metric units, $g$ at the surface of the Earth is about $9.8$ metres per second per second. Of course the equation only holds until the object hits the ground. We are assuming that there is no air resistance, which is unrealistic unless we are on the Moon. Now imagine that we are standing at the origin, and throw a ball with speed $s$, at an angle $\theta$ to the ground, where $\theta$ is not $90$ degrees. The horizontal component of the velocity is $s\cos\theta$, and the vertical component is $s\sin\theta$. So the "$x$-coordinate" of the position at time $t$ is given by $$x=x(t)=(s\cos\theta)t.\qquad\qquad(\ast)$$ The height ($y$-coordinate) at time $t$ is given by $$y=y(t)=(s\sin\theta)t-\frac{1}{2}gt^2.\qquad\qquad(\ast\ast)$$ To obtain the equation of the curve described by the ball, we use $(\ast)$ to eliminate $t$ in $(\ast\ast)$. From $(\ast)$ we obtain $t=\dfrac{x}{s\cos\theta}$. Substitute for $t$ in $(\ast\ast)$. We get $$y=(\tan\theta)x-\frac{g}{2s^2\cos^2\theta}x^2.$$ For the maximum height reached, we do not need the equation of the parabola. For the height $y$ at time $t$ is $(s\sin\theta)t -\frac{1}{2}gt^2$. We can find the $t$ that maximizes height. More simply, the vertical component of the velocity at time $t$ is $s\sin\theta -gt$, and at the maximum height this upwards component is $0$. Now we can solve for $t$. Nor do we need the equation of the parabola to find the horizontal distance travelled. By symmetry, to reach the ground takes time equal to twice the time to reach maximum height.
The time to reach the ground is therefore $\dfrac{2s \sin\theta}{g}$, and therefore the horizontal distance travelled is $\dfrac{2s^2\sin\theta\cos\theta}{g}$. This can also be written as $\dfrac{s^2\sin 2\theta}{g}$. Comment: Note that for fixed $s$ the horizontal distance $\dfrac{s^2\sin 2\theta}{g}$ travelled until we hit the ground is a maximum when $\theta$ is $45$ degrees. So for maximizing horizontal distance, that's the best angle to throw at, if your throw speed does not depend on angle. (But it does!)
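The formulas above can be bundled into a short sketch (SI units with $g=9.8$; the launch speed of $20$ m/s is an arbitrary choice), which also confirms numerically that $45$ degrees maximizes the range:

```python
import math

g = 9.8

def trajectory_stats(s, theta_deg):
    """Max height and horizontal range for launch speed s at angle theta."""
    th = math.radians(theta_deg)
    t_peak = s * math.sin(th) / g                  # vertical speed hits zero
    h_max = s * math.sin(th) * t_peak - 0.5 * g * t_peak**2
    rng = s**2 * math.sin(2 * th) / g              # = 2 s^2 sin(th) cos(th) / g
    return h_max, rng

h45, r45 = trajectory_stats(20.0, 45.0)
ranges = {angle: trajectory_stats(20.0, angle)[1] for angle in range(5, 90, 5)}
best_angle = max(ranges, key=ranges.get)
```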
{ "language": "en", "url": "https://math.stackexchange.com/questions/82934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Convergence of rationals to irrationals and the corresponding denominators going to zero If $(\frac{p_k}{q_k})$ is a sequence of rationals that converges to an irrational $y$, how do you prove that $(q_k)$ must go to $\infty$? I thought some argument along the lines of "breaking up the interval $(0,p_k)$ into $q_k$ parts", but I'm not sure how to put it all together. Perhaps for every $q_k$, there is a bound on how close $\frac{p_k}{q_k}$ can get to the irrational $y$ ?
Hint: For every positive integer $n$, consider the set $R(n)$ of rational numbers $p/q$ such that $1\leqslant q\leqslant n$. Show that for every $n$ the distance $\delta(n)$ of $y$ to $R(n)$, defined as $\delta(n)=\inf\{|y-r|\mid r\in R(n)\}$, is positive. Apply this to any sequence $(p_k/q_k)$ converging to $y$, showing that for every positive $n$, there exists $k(n)$ such that for every $k\geqslant k(n)$, $p_k/q_k$ is not in $R(n)$, hence $q_k\geqslant n+1$.
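For a concrete instance of the hint, take $y=\sqrt2$: there the bound is explicit, since $|2q^2-p^2|\ge 1$ gives $|\sqrt2-p/q|\ge 1/\bigl(q^2(\sqrt2+p/q)\bigr)$, so $q^2\,|\sqrt2-p/q|$ stays bounded away from $0$. A brute-force sketch (the range of denominators is an arbitrary choice):

```python
SQRT2 = 2 ** 0.5
Q = 300

# For each denominator q, the best numerator is p = round(sqrt(2) * q);
# q^2 * |sqrt(2) - p/q| should stay bounded away from 0.
worst = min(q * q * abs(SQRT2 - round(SQRT2 * q) / q) for q in range(1, Q + 1))
```

So any sequence of rationals converging to $\sqrt2$ must have denominators tending to infinity, exactly as the distance argument predicts.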
{ "language": "en", "url": "https://math.stackexchange.com/questions/82990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Continuity at accumulation point The following was a homework question for my analysis class. Given any sequence $x_n$ in a metric space $(X; d)$, and $x \in X$, consider the function $f : \mathbb N^\ast = \mathbb N \cup \{\infty\} \to X$ defined by $f(n) = x_n$, for all $n\in \mathbb N$, and $f(\infty) = x$. Prove that there exists a metric on $\mathbb N^\ast$ such that the sequence $x_n$ converges to $x$ in $(X; d)$ if and only if the function $f$ is continuous. Our teacher said that since $\infty$ is the only accumulation point of $\mathbb N^\ast$, if $f$ is continuous at $\infty$, it is continuous on $\mathbb N^\ast$. So, it is enough to show it is continuous at $\infty$. But I did not understand why this is true. It is also possible that I misunderstood what my teacher meant. If you can tell me whether the statement is correct, and why, I would be grateful. Thanks in advance.
It's not clear to me what your teacher meant by $\infty$ being the only accumulation point of $\mathbb N^*$, since $\mathbb N^*$ is yet to be equipped with a metric and it makes no sense to speak of accumulation points before that. A metric on $\mathbb N^*$ satisfying the requirement is induced by mapping $\mathbb N^*$ to $[0,1]$ using $n\mapsto1/n$, with $1/\infty:=0$, and using the canonical metric on $[0,1]$. Then all natural numbers are isolated points, and the open neighbourhoods of $\infty$ are the cofinite sets. The function $f$ is continuous if and only if the preimages of all open sets are open. The preimages of open sets not containing $x$ are open because they consist of a finite number of isolated points. The preimages of open sets containing $x$ are open because $x_n$ eventually ends up in every such open set, and thus the preimage is a cofinite set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/83044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Zero divisor in $R[x]$ Let $R$ be commutative ring with no (nonzero) nilpotents. If $f(x) = a_0+a_1x+\cdots+a_nx^n$ is a zero divisor in $R[x]$, how do I show there's an element $b \ne 0$ in $R$ such that $ba_0=ba_1=\cdots=ba_n=0$?
This is the case of Armendariz rings, which I studied briefly last summer. It is an interesting topic. A ring $R$ is called Armendariz if whenever $f(x)=\sum_{i=0}^{m}a_ix^i, g(x)=\sum_{j=0}^{n}b_jx^j \in R[x]$ are such that $f(x)g(x)=0$, then $a_ib_j=0\ \forall\ i,j$. In his paper "A NOTE ON EXTENSIONS OF BAER AND P. P. -RINGS" in 1973, Armendariz proved that reduced rings are Armendariz, which is a nice generalization of your result. Proof- Let $fg=0$; assuming $m=n$ is sufficient (pad the shorter polynomial with zero coefficients). We then have $$a_0b_0=0,\\ a_1b_0+a_0b_1=0 ,\\ \vdots \\a_nb_0+\dots +a_0b_n=0$$ Since $R$ is reduced, $a_0b_0=0\implies (b_0a_0)^2=b_0(a_0b_0)a_0=0 \implies b_0a_0=0$. Now left multiplying $a_1b_0+a_0b_1=0$ by $b_0$ we get $b_0a_1b_0=-b_0a_0b_1=0 \implies (a_1b_0)^2=a_1(b_0a_1b_0)=0 \implies a_1b_0=0$. Similarly we get $a_ib_0=0\ \forall\ 1\leq i \leq n$. Now the original equations reduce to $$a_0b_1=0\\ a_1b_1+a_0b_2=0\\ \vdots\\ a_{n-1}b_1+\dots +a_0b_n=0$$ and by the same process we first get $a_0b_1=0$, then, multiplying $a_1b_1+a_0b_2=0$ on the left by $b_1$, we get $a_1b_1=0$, and so on, giving $a_ib_1=0\ \forall\ 1\le i \le n$. Repeating this, we get $a_ib_j=0\ \forall\ 0 \leq i,j \leq n$. $\hspace{5.5cm}\blacksquare$ Some other examples of Armendariz rings are: * *$\Bbb{Z}/n\Bbb{Z}\ \forall\ n$. *All domains and fields (which are reduced). *If A and B are Armendariz, then $A(+)B$, in which multiplication is defined by $(a,b)(a',b')=(aa',ab'+a'b)$, is Armendariz. *Direct products of Armendariz rings.
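The reduced case can be verified exhaustively for a tiny example: $\mathbb Z/6\mathbb Z$ is reduced (since $6$ is squarefree), and the sketch below enumerates all pairs of polynomials of degree at most $2$ over it, checking that $fg=0$ forces every product $a_ib_j$ to vanish:

```python
from itertools import product

M = 6  # Z/6Z is reduced, since 6 is squarefree

def poly_mul(f, g):
    """Multiply two coefficient lists over Z/M."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % M
    return h

armendariz_holds = True
for f in product(range(M), repeat=3):
    for g in product(range(M), repeat=3):
        if any(poly_mul(f, g)):
            continue  # fg != 0, nothing to check
        if any((a * b) % M for a in f for b in g):
            armendariz_holds = False
```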
{ "language": "en", "url": "https://math.stackexchange.com/questions/83121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 5, "answer_id": 0 }
Fourier cosine series and sum help I have been having some problems with the following problem: Find the Fourier cosine series of the function $\vert\sin x\vert$ in the interval $(-\pi, \pi)$. Use it to find the sums $$ \sum_{n\: =\: 1}^{\infty}\:\ \frac{1}{4n^2-1}$$ and $$ \sum_{n\: =\: 1}^{\infty}\:\ \frac{(-1)^n}{4n^2-1}$$ Any help is appreciated, thank you. edit:I have gotten as far as working out the Fourier cosine series using the equations for cosine series $$\phi (X) = 1/2 A_0 + \sum_{n\: =\: 1}^{\infty}\:\ A_n \cos\left(\frac{n\pi x}{l}\right)$$ and $$A_m = \frac{2}{l} \int_{0}^{l} \phi (X) \cos\left(\frac{m\pi x}{l}\right) dx $$ I have found $$A_0 = \frac{4}{l}$$ but the rest of the question is a mess on my end and then I don't know how to relate the rest of it back to those sums.
$f(x) = |\sin(x)| \quad \Rightarrow\quad f(x) = \left\{ \begin{array}{l l} -\sin(x) & \quad \forall x \in [- \pi, 0\space]\\ \sin(x) & \quad \forall x \in [\space 0,\pi\space ]\\ \end{array} \right.$ The associated Fourier coefficients are $$a_n= \frac{1}{\pi}\int_{-\pi}^\pi f(x) \cos(nx)\, dx = \frac{1}{\pi} \left[\int_{-\pi}^0 -\sin (x) \cos(nx)\, dx + \int_{0}^\pi \sin(x) \cos(nx)\, dx\right], \quad n \ge 0$$ $$b_n= \frac{1}{\pi}\int_{-\pi}^\pi f(x) \sin(nx)\, dx = \frac{1}{\pi} \left[\int_{-\pi}^0 -\sin (x) \sin(nx)\, dx + \int_{0}^\pi \sin(x) \sin(nx)\, dx\right], \quad n \ge 1$$ All the integrands are integrable, so we can go on and compute the expressions for $a_n$ and $b_n$: $$a_n = \cfrac{2 (\cos(\pi n)+1)}{\pi(1-n^2)}$$ $$b_n = 0$$ (For $n=1$ the formula for $a_n$ is of the indeterminate form $0/0$; a direct computation gives $a_1=0$, consistent with the limiting value of the formula.) That $b_n = 0$ can be deemed obvious, since the function $f(x) = |\sin(x)|$ is an even function. And $a_n$ could have been calculated as $\displaystyle a_n= \frac{2}{\pi}\int_{0}^\pi f(x) \cos(nx)\, dx $ only because the function is even. The Fourier series is $$\cfrac {a_0}{2} + \sum^{\infty}_{n=1}\left [ a_n \cos(nx) + b_n \sin (nx) \right ]$$ $$= \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{(\cos(\pi n)+1)}{(1-n^2)}\cos(nx)\right )$$ $$= \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{((-1)^n+1)}{(1-n^2)}\cos(nx)\right )$$ $$= \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{2}{(1-4n^2)}\cos(2nx)\right )$$ since for odd $n$, $((-1)^n+1) = 0$, and for even $n$, $((-1)^n+1) = 2$. At this point we can't just assume the function is equal to its Fourier series; it has to satisfy certain conditions. See Convergence of Fourier series.
Without dwelling on this (one still has to prove that the function satisfies those conditions), we assume the Fourier series converges to our function, i.e. $$f(x) = |\sin(x)| = \cfrac {2}{\pi}\left ( 1 + \sum^{\infty}_{n=1} \cfrac{2}{(1-4n^2)}\cos(2nx)\right )$$ Note that $x=0$ gives $\cos(2nx) = 1$, so $$f(0) = |\sin(0)| = \cfrac {2}{\pi}\left ( 1 + 2\sum^{\infty}_{n=1} \cfrac{1}{(1-4n^2)}\right ) =0$$ which implies that $$\sum^{\infty}_{n=1} \cfrac{1}{(1-4n^2)} = \cfrac {-1}{2}$$ and $$\boxed {\displaystyle\sum^{\infty}_{n=1} \cfrac{1}{(4n^2 -1)}= -\sum^{\infty}_{n=1} \cfrac{1}{(1-4n^2)} = \cfrac {1}{2}}$$ Observe again that when $x = \cfrac \pi 2$, $\cos (2nx) = \cos(n \pi) = (-1)^n$, thus $$f \left (\cfrac \pi 2 \right) = \left |\sin \left (\cfrac \pi 2\right )\right | = \cfrac {2}{\pi}\left ( 1 + 2\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(1-4n^2)}\right ) =1$$ which implies that $$\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(1-4n^2)} = \cfrac {1}{4}(\pi -2)$$ and $$\boxed {\displaystyle\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(4n^2 -1)}= -\sum^{\infty}_{n=1} \cfrac{(-1)^n}{(1-4n^2)} = \cfrac {1}{4}(2-\pi)}$$
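Both boxed values are easy to confirm from partial sums (a sketch; the cutoff $N$ is an arbitrary choice — note the first series even telescopes, since $\frac1{4n^2-1}=\frac12\left(\frac1{2n-1}-\frac1{2n+1}\right)$):

```python
import math

N = 200_000

s1 = sum(1.0 / (4 * n * n - 1) for n in range(1, N + 1))
s2 = sum((-1) ** n / (4 * n * n - 1) for n in range(1, N + 1))

# Telescoping gives the exact partial sum (1/2)(1 - 1/(2N+1)) for s1
expected1 = 0.5
expected2 = (2 - math.pi) / 4
```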
{ "language": "en", "url": "https://math.stackexchange.com/questions/83176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Integral of product of two functions in terms of the integral of the other function Problem: Let $f$ and $g$ be two continuous functions on $[ a,b ]$ and assume $g$ is positive. Prove that $$\int_{a}^{b}f(x)g(x)dx=f(\xi )\int_{a}^{b}g(x)dx$$ for some $\xi$ in $[ a,b ]$. Here is my solution: Since $f(x)$ and $g(x)$ are continuous, then $f(x) g(x)$ is continuous. Using the Mean value theorem, there exists a $\xi$ in $[ a,b ]$ such that $\int_{a}^{b}f(x)g(x)dx= f(\xi)g(\xi) (b-a) $ and using the Mean value theorem again, we can get $g(\xi) (b-a)=\int_{a}^{b}g(x)dx$ which yields the required equality. Is my proof correct? If not, please let me know how to correct it.
The integrals on both sides of the problem are well defined since $f$ and $g$ are continuous, and $g$ is positive so $ \displaystyle \int^b_a g(x) dx > 0.$ Thus there exists some constant $K$ such that $$ \int^b_a f(x) g(x) dx = K\int^b_a g(x) dx . $$ If $\displaystyle K > \max_{x\in [a,b]} f(x) $ then the left side is smaller than the right. If $\displaystyle K < \min_{x\in [a,b]} f(x) $ then the left side is larger than the right. Thus $ K \in f( [a,b] ).$
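For intuition, the sketch below carries this out numerically for a sample pair, $f(x)=x^2$ and $g(x)=e^x$ on $[0,1]$ (arbitrary choices): it computes $K=\int fg\,/\int g$ with Simpson's rule and recovers a valid $\xi$ by using that this particular $f$ is increasing:

```python
import math

def simpson(h_func, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = h_func(a) + h_func(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * h_func(a + i * h)
    return total * h / 3

f = lambda x: x * x
g = lambda x: math.exp(x)

K = simpson(lambda x: f(x) * g(x), 0, 1) / simpson(g, 0, 1)
xi = math.sqrt(K)   # f is increasing on [0,1], so f(xi) = K has this solution
```

Here the exact values are $\int_0^1 x^2e^x\,dx = e-2$ and $\int_0^1 e^x\,dx = e-1$, so $K=(e-2)/(e-1)$ lands in $f([0,1])=[0,1]$ as the proof requires.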
{ "language": "en", "url": "https://math.stackexchange.com/questions/83246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
Characteristic polynomial equals minimal polynomial iff $x, Mx, \ldots, M^{n-1} x$ are linearly independent I have been trying to compile conditions for when characteristic polynomials equal minimal polynomials and I have found a result that I think is fairly standard but I have not been able to come up with a proof for. Any references to a proof would be greatly appreciated. Let $M\in M_n(\Bbb F)$ and let $c_M$ be the characteristic polynomial of $M$ and $p_M$ be the minimal polynomial of $M$. How do we show that $p_M = c_M$ if and only if there exists a column vector $x$ such that $x, Mx, \ldots M^{n-1} x$ are linearly independent ?
Here is a simple proof of the "only if" part, using rational canonical form. For clarity's sake, I'll assume that $M$ is a linear map. If $M$ is such that $p_M=c_M$, then it is similar to $F$, the companion matrix of $p_M$; i.e., there is a basis $\beta = (v_1, \dots ,v_n)$ for $V$ under which the matrix of $M$ is $F$. Let $e_i$ ($i \in \{1,\dots,n\}$) be the vector with $1$ as its $i$-th component and $0$ everywhere else; then $F^{i} e_{1} = e_{i+1}$ for $i \in \{1, \dots , n-1\}$. But $e_1, \dots , e_n$ are just the coordinates of the vectors $v_1, \dots , v_n$ under the basis $\beta$ itself. So this means that $M^i v_1 = v_{i+1}$ for $i \in \{1, \dots , n-1\}$. In other words, $(v_1, M v_1, \dots , M^{n-1} v_1) = (v_1, v_2, \dots , v_n)$ is a basis for $V$, which is of course linearly independent.
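A small concrete check (a sketch): for the companion matrix of $t^3-2$, whose minimal and characteristic polynomials coincide, the vector $x=e_1$ produces a basis $x, Mx, M^2x$; for a scalar matrix, whose minimal polynomial has degree $1<n$, no vector can work since $x$ and $Mx$ are always parallel:

```python
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def det3(c0, c1, c2):
    """Determinant of the 3x3 matrix with columns c0, c1, c2."""
    return (c0[0] * (c1[1] * c2[2] - c1[2] * c2[1])
            - c1[0] * (c0[1] * c2[2] - c0[2] * c2[1])
            + c2[0] * (c0[1] * c1[2] - c0[2] * c1[1]))

# Companion matrix of t^3 - 2: minimal poly = char poly = t^3 - 2
C = [[0, 0, 2],
     [1, 0, 0],
     [0, 1, 0]]
x = [1, 0, 0]
Cx = mat_vec(C, x)
CCx = mat_vec(C, Cx)
independent = det3(x, Cx, CCx) != 0

# Contrast: M = 2*I in dimension 2, with minimal poly t - 2 of degree 1.
I2 = [[2, 0], [0, 2]]
x2 = [1, 1]
Mx2 = mat_vec(I2, x2)
dep = x2[0] * Mx2[1] - x2[1] * Mx2[0]   # 2x2 determinant; 0 for scalar M
```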
{ "language": "en", "url": "https://math.stackexchange.com/questions/83299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
What exactly happens when I do a bit rotation? I'm sitting in my office and having difficulty understanding what exactly happens when I do a bit rotation of a binary number. An example: I have the binary number 1110. By doing bit rotation, I get 1101, 1011, 0111 ... so I have 14 and get 7, 11, 13 and 14 again. I can't work out a rule from this... can somebody help me? Excuse my bad English, I'm just so excited about this problem.
Interpret the bits as representing a number in standard binary representation (as you are doing). Then, bit rotation to the right is division by $2$ modulo $15$, or more generally, modulo $2^n-1$ where $n$ is the number of bits. Put more simply, if the number is even, divide it by $2$, while if it is odd, add $15$ and then divide by $2$. So from $14$ you get $7$. Since $7$ is odd, add $15$ to get $22$ and divide by $2$ to get $11$. Add $15$ and divide by $2$ to get $13$, and so on. The calculations are somewhat different if the bits are interpreted in two's-complement representation.
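The correspondence is easy to verify exhaustively (a sketch; the word widths tested are arbitrary choices). For an $n$-bit word $x$ with $0\le x<2^n-1$, rotating right by one position is exactly "halve mod $2^n-1$" in the sense described above:

```python
def rot_right(x, n):
    """Rotate the n-bit word x right by one position."""
    return (x >> 1) | ((x & 1) << (n - 1))

def halve_mod(x, n):
    """Division by 2 modulo 2^n - 1, using representatives 0 .. 2^n - 2."""
    m = (1 << n) - 1
    return x // 2 if x % 2 == 0 else (x + m) // 2

rotation_is_halving = all(
    rot_right(x, n) == halve_mod(x, n)
    for n in (4, 5, 8)
    for x in range((1 << n) - 1)
)
```

(The all-ones word $2^n-1$ is excluded: it is fixed by rotation, while as a residue it is $\equiv 0 \pmod{2^n-1}$.)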
{ "language": "en", "url": "https://math.stackexchange.com/questions/83361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Compact Sets in Projective Space Consider the projective space ${\mathbb P}^{n}_{k}$ with field $k$. We can naturally give this the Zariski topology. Question: What are the (proper) compact sets in this space? Motivation: I wanted nice examples of spaces and their corresponding compact sets; usually my spaces are Hausdorff and my go-to topology for non-Hausdorff-ness is the Zariski topology. I wasn't really able to find any proper compact sets which makes me think I'm doing something wrong here.
You are in for a big surprise, james: every subset of $\mathbb P^n_k$ is quasi-compact. This is true more generally for any noetherian space, a space in which every decreasing sequence of closed sets is stationary. However: the compact subsets of $\mathbb P^n_k$ are the finite sets of points such that no point is in the closure of another. Reminder A topological space $X$ is quasi-compact if from every open cover of $X$ a finite cover can be extracted. A compact space is a Hausdorff quasi-compact space. Bibliography Bourbaki, Commutative Algebra, Chapter II, §4,2.
{ "language": "en", "url": "https://math.stackexchange.com/questions/83413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A fair coin is tossed $n$ times by two people. What is the probability that they get the same number of heads? Say we have Tom and John, each tosses a fair coin $n$ times. What is the probability that they get the same number of heads? I tried to do it this way: individually, the probability of getting $k$ heads for each is equal to $$\binom{n}{k} \Big(\frac12\Big)^n.$$ So, we can do $$\sum^{n}_{k=0} \left( \binom{n}{k} \Big(\frac12\Big)^n \cdot \binom{n}{k}\Big(\frac12\Big)^n \right)$$ which results in something very ugly. This ugly thing is equal to the 'simple' answer in the back of the book: $\binom{2n}{n}\left(\frac12\right)^{2n},$ but the equality was verified by WolframAlpha -- it's not obvious when you look at it. So I think there's a much easier way to solve this; can someone point it out? Thanks.
As you have noted, the probability is $$ p_n = \frac{1}{4^n} \sum_{k=0}^n \binom{n}{k} \binom{n}{k} = \frac{1}{4^n} \sum_{k=0}^n \binom{n}{k} \binom{n}{n-k} = \frac{1}{4^n} \binom{2n}{n} $$ The middle equality uses the symmetry of binomials, and the last uses Vandermonde's convolution identity.
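A brute-force check of this formula for small $n$ (a quick sketch of my own; it enumerates all $4^n$ joint outcomes directly):

```python
from math import comb
from itertools import product

def p_same_heads(n):
    """Probability both players get the same number of heads in n fair
    tosses, by enumerating all 4^n joint outcomes."""
    hits = sum(1
               for tom in product((0, 1), repeat=n)
               for john in product((0, 1), repeat=n)
               if sum(tom) == sum(john))
    return hits / 4**n

for n in range(1, 6):
    assert abs(p_same_heads(n) - comb(2 * n, n) / 4**n) < 1e-12
```

For $n=1$ this gives $1/2$ (both heads or both tails), matching $\binom{2}{1}/4$.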
{ "language": "en", "url": "https://math.stackexchange.com/questions/83489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
Whether $f(x)$ is reducible in $ \mathbb Z[x] $? Suppose that $f(x) \in \mathbb Z[x] $ has an integer root. Does it mean $f(x)$ is reducible in $\mathbb Z[x]$?
No. $x-2$ is irreducible but has an integer root $2$. If the degree of $f$ is greater than one, then yes. If $a$ is a root of $f(x)$, carry out synthetic division by $x-a$. You will get $f(x) = (x-a)g(x) + r$, and since $f(a) = 0$, $r=0$.
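A small sketch of the synthetic division step mentioned above (my own illustration; coefficients are listed from the highest degree down):

```python
def synthetic_division(coeffs, a):
    """Divide the polynomial with the given coefficients by (x - a).
    Returns (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + out[-1] * a)   # bring down, multiply by a, add
    return out[:-1], out[-1]

# f(x) = x^2 - 3x + 2 has the integer root 2, so the remainder is 0:
q, r = synthetic_division([1, -3, 2], 2)
print(q, r)   # [1, -1] 0  ->  f(x) = (x - 2)(x - 1)
```

A zero remainder exhibits the factorization $f(x)=(x-a)g(x)$, which is why degree $\ge 2$ forces reducibility.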
{ "language": "en", "url": "https://math.stackexchange.com/questions/83602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Lambert series expansion identity I have a question which goes like this: How can I show that $$\sum_{n=1}^{\infty} \frac{z^n}{\left(1-z^n\right)^2} =\sum_{n=1}^\infty \frac{nz^n}{1-z^n}$$ for $|z|<1$?
Hint: Try using the expansions $$ \frac{1}{1-x}=1+x+x^2+x^3+x^4+x^5+\dots $$ and $$ \frac{1}{(1-x)^2}=1+2x+3x^2+4x^3+5x^4+\dots $$ Expansion: $$ \begin{align} \sum_{n=1}^\infty\frac{z^n}{(1-z^n)^2} &=\sum_{n=1}^\infty\sum_{k=0}^\infty(k+1)z^{kn+n}\\ &=\sum_{n=1}^\infty\sum_{k=1}^\infty kz^{kn}\\ &=\sum_{k=1}^\infty\sum_{n=1}^\infty kz^{kn}\\ &=\sum_{k=1}^\infty\sum_{n=0}^\infty kz^{kn+k}\\ &=\sum_{k=1}^\infty\frac{kz^k}{1-z^k} \end{align} $$
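A quick numerical sanity check of the identity at points inside the unit disc (a sketch of my own, not part of the proof; with $|z|<1$ the tails beyond a few hundred terms are negligible):

```python
def lhs(z, terms=200):
    return sum(z**n / (1 - z**n)**2 for n in range(1, terms))

def rhs(z, terms=200):
    return sum(n * z**n / (1 - z**n) for n in range(1, terms))

for z in (0.1, 0.3, -0.5, 0.2 + 0.4j):
    assert abs(lhs(z) - rhs(z)) < 1e-10
```

Complex test points work too, since the derivation only needs $|z|<1$.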
{ "language": "en", "url": "https://math.stackexchange.com/questions/83680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can't write context-free grammar for language $L=\{a^n\#a^{n+2m}, n,m \geq 1\}$ Spent some time on this problem and it seems I am not able to write a context-free grammar for the language $L=\{a^n\#a^{n+2m} : n\geq 1\wedge m \geq 1, n\in \mathbb{N} \wedge m\in\mathbb{N}\}$. I am sure I am missing something obvious, but can't figure out what. I understand which strings are in $L$: for odd $n$: 1 # 3, 5, 7, 9, ...; 3 # 5, 7, 9, 11, ...; 5 # 7, 9, 11, 13, ...; i.e. one $a$ followed by # followed by 3 or 5 or ... $a$'s; three $a$'s followed by 5 or 7 or 9 $a$'s, and so on. For even $n$: 2 # 4, 6, 8, 10, ...; 4 # 6, 8, 10, 12, ...; 6 # 8, 10, 12, 14, ...; i.e. two $a$'s followed by 4 or 6 or 8 ... $a$'s, and so on. I am struggling to generate rules. I understand that I can start with one or two $a$'s, then #, then 3 or 4 $a$'s. What I can't figure out is how to recursively manage the increment of $n$, i.e. how to make sure, for example, that there are at least 7 $a$'s after 5 $a$'s followed by #. I also tried to check whether $L$ is a CFL, but with no success :( Any hints in the right direction, ideas and solutions are welcomed!
I think this is the solution: $S \rightarrow aLaT$ $L \rightarrow aLa \mid \#$ $T \rightarrow Taa \mid aa$ This language is actually just $\{ a^n\#a^n \mid n \geq 1\} \circ (aa)^+$, where $\circ$ is the concatenation operator. Which is why this CFG is so easy to construct, as it is an easily expressible language followed by an arbitrary even number of a's.
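A sketch (my own check, not part of the answer) that the strings derivable from this grammar are exactly the strings of $L$, up to a small length bound. The derivations collapse to closed form: $L \Rightarrow^* a^i\#a^i$ and $T \Rightarrow^* (aa)^j$ with $j \ge 1$, so $S$ derives $a^{i+1}\#a^{i+1+2j}$:

```python
def in_L(w):
    """Membership test for L = { a^n # a^(n+2m) : n >= 1, m >= 1 }."""
    if w.count('#') != 1:
        return False
    left, right = w.split('#')
    if set(left + right) - {'a'}:
        return False
    n, k = len(left), len(right)
    return n >= 1 and k - n >= 2 and (k - n) % 2 == 0

def grammar_strings(max_len):
    """All strings of length <= max_len derived from
    S -> aLaT, L -> aLa | '#', T -> Taa | aa."""
    out = set()
    i = 0
    while 2 * (i + 1) + 1 + 2 <= max_len:        # shortest choice: j = 1
        j = 1
        while 2 * (i + 1) + 1 + 2 * j <= max_len:
            out.add('a' * (i + 1) + '#' + 'a' * (i + 1 + 2 * j))
            j += 1
        i += 1
    return out

gen = grammar_strings(12)
for n in range(1, 11):
    for k in range(1, 12 - n):                    # total length n + 1 + k <= 12
        w = 'a' * n + '#' + 'a' * k
        assert in_L(w) == (w in gen)
```

The cross-check at the end compares the grammar's language against the membership predicate for every candidate string of length at most 12.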
{ "language": "en", "url": "https://math.stackexchange.com/questions/83740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Dense subset of given space If $E$ is a Banach space and $A$ is a subset such that $$A^{\perp}:= \{T \in E^{\ast}: T(A)=0\}=0,$$ then $$\overline{A} = E.$$ I don't see why this is true. Does $E$ have to be Banach? Thanks
Did you mean to say that $A$ is a vector subspace, or does $\overline{A}$ mean the closed subspace of $E$ generated by $A$? If $A$ were only assumed to be a subset and $\overline{A}$ means the closure, then this is false. E.g., let $A$ be the unit ball of $E$. Suppose that the closed subspace of $E$ generated by $A$, $\overline{\mathrm{span}}(A)$, is not $E$. Let $x\in E\setminus \overline{\mathrm{span}}(A)$. Using Hahn-Banach you can show that there is an element $T$ of $E^*$ such that $T(A)=\{0\}$ and $T(x)=1$. (Start by defining $T$ on the subspace $\overline{\mathrm{span}}(A)+\mathbb C x$.) No, this does not depend on $E$ being complete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/83806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is a real world application of polynomial factoring? The wife and I are sitting here on a Saturday night doing some algebra homework. We're factoring polynomials and had the same thought at the same time: when will we use this? I feel a bit silly because it always bugged me when people asked that in grade school. However, we're both working professionals (I'm a programmer, she's a photographer) and I can't recall ever considering polynomial factoring as a solution to the problem I was solving. Are there real world applications where factoring polynomials leads to solutions? or is it a stepping-stone math that will open my mind to more elaborate solutions that I actually will use? Thanks for taking the time!
You need polynomial factoring (or what's the same, root finding) for higher mathematics. For example, when you are looking for the eigenvalues of a matrix, they appear as the roots of a polynomial, the "characteristic equation". I suspect that none of this will be of any use to someone unless they continue their mathematical education at least to the junior classes like linear algebra (which deals with matrices) and differential equations (where polynomials also appear). And I would also bet that the majority of people who take these classes never end up using them in "real life".
{ "language": "en", "url": "https://math.stackexchange.com/questions/83837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "70", "answer_count": 17, "answer_id": 13 }
A metric space in which every infinite set has a limit point is separable I am struggling with one problem. I need to show that if $X$ is a metric space in which every infinite subset has a limit point then $X$ is separable (has countable dense subset in other words). I am trying to use the result I have proven prior to this problem, namely every separable metric space has a countable base (i.e. any open subset of the metric space can be expressed as a sub-collection of the countable collection of sets). I am not sure this is the right way, can anyone outline the proof? Thanks a lot in advance!
Let $\langle X,d\rangle$ be a metric space in which each infinite subset has a limit point. For any $\epsilon>0$ an $\epsilon$-mesh in $X$ is a set $M\subseteq X$ such that $d(x,y)\ge\epsilon$ whenever $x$ and $y$ are distinct points of $M$. Every $\epsilon$-mesh in $X$ is finite, since an infinite $\epsilon$-mesh would be an infinite set with no limit point. Let $\mathscr{M}(\epsilon)$ be the family of all $\epsilon$-meshes in $X$, and consider the partial order $\langle \mathscr{M}(\epsilon),\subseteq\rangle$. This partial order must have a maximal element: if it did not have one, there would be an infinite ascending chain of $\epsilon$-meshes $M_0\subsetneq M_1\subsetneq M_2\subsetneq\dots$, and $\bigcup_n M_n$ would then be an infinite $\epsilon$-mesh. Let $M_\epsilon$ be a maximal $\epsilon$-mesh; I claim that $$X=\bigcup_{x\in M_\epsilon}B(x,\epsilon)\;,$$ where as usual $B(x,\epsilon)$ is the open ball of radius $\epsilon$ centred at $x$. That is, each point of $X$ is within $\epsilon$ of some point of $M_\epsilon$. To see this, suppose that $y\in X\setminus \bigcup\limits_{x\in M_\epsilon}B(x,\epsilon)$. Then $d(y,x)\ge\epsilon$ for every $x\in M_\epsilon$, and $M_\epsilon \cup \{y\}$ is therefore an $\epsilon$-mesh strictly containing $M_\epsilon$, contradicting the maximality of $M_\epsilon$. Now for each $n\in\mathbb{N}$ let $M_n$ be a maximal $2^{-n}$-mesh, and let $$D=\bigcup_{n\in\mathbb{N}}M_n\;.$$ Each $M_n$ is finite, so $D$ is countable, and you should have no trouble showing that $D$ is dense in $X$.
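The key construction in the proof, a maximal $\epsilon$-mesh whose $\epsilon$-balls cover the space, can be sketched greedily for a finite point set (my own illustration, not part of the proof):

```python
import math
import random

def maximal_eps_mesh(points, eps):
    """Greedily build an eps-separated set; by maximality every input
    point ends up within eps of some mesh point."""
    mesh = []
    for p in points:
        if all(math.dist(p, q) >= eps for q in mesh):
            mesh.append(p)
    return mesh

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]
mesh = maximal_eps_mesh(pts, 0.2)
# separation and covering, exactly the two properties used in the proof
assert all(math.dist(p, q) >= 0.2 for p in mesh for q in mesh if p is not q)
assert all(any(math.dist(p, q) < 0.2 for q in mesh) for p in pts)
```

The two assertions mirror the proof: points of the mesh are pairwise at distance $\ge\epsilon$, and every point of the space lies within $\epsilon$ of the mesh.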
{ "language": "en", "url": "https://math.stackexchange.com/questions/83876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 0 }
Convergence of $\lim_{n \to \infty} \frac{5 n^2 +\sin n}{3 (n+2)^2 \cos(\frac{n \pi}{5})},$ I'm having trouble with this limit. The numerator diverges positively, but I do not understand how to handle the denominator. $$\lim_{n\rightarrow\infty}\frac{5 n^2 +\sin n}{3 (n+2)^2 \cos(\frac{n \pi}{5})},$$ $$\lim_{n \to \infty} \frac{5 n^2 +\sin n}{3 (n+2)^2 \cos(\frac{n \pi}{5})}= \lim_{n\to\infty}\frac {n^2(5 +\frac{\sin n}{n^2})}{3 (n+2)^2 \cos(\frac{n \pi}{5})} \cdots$$
Let's make a few comments. * *Note that the terms of the sequence are always defined: for $n\geq 0$, $3(n+2)^2$ is greater than $0$; and $\cos(n\pi/5)$ can never be equal to zero (you would need $n\pi/5$ to be an odd multiple of $\pi/2$, and this is impossible). *If $a_n$ and $b_n$ both have limits as $n\to\infty$, then so does $a_nb_n$, and the limit of $a_nb_n$ is the product of the limits of $a_n$ and of $b_n$, $$\lim_{n\to\infty}a_nb_n = \left(\lim_{n\to\infty}a_n\right)\left(\lim_{n\to\infty}b_n\right).$$ *If $b_n$ has a limit as $n\to\infty$, and the limit is not zero, then $\frac{1}{b_n}$ has a limit as $n\to\infty$, and the limit is the reciprocal of the limit of $b_n$: $$\lim_{n\to\infty}\frac{1}{b_n} = \frac{1}{\lim\limits_{n\to\infty}b_n},\qquad \text{if }\lim_{n\to\infty}b_n\neq 0.$$ As a consequence of $2$ and $3$, we have: * *If $\lim\limits_{n\to\infty}a_nb_n$ and $\lim\limits_{n\to\infty}a_n$ both exist and the latter is not equal to $0$, then $\lim\limits_{n\to\infty}b_n$ exists: Just write $\displaystyle b_n = \left(a_nb_n\right)\frac{1}{a_n}$ *Equivalently, if $\lim\limits_{n\to\infty}a_n$ exists and is not zero, and $\lim\limits_{n\to\infty}b_n$ does not exist, then $\lim\limits_{n\to\infty}a_nb_n$ does not exist either. So, consider $$a_n = \frac{5n^2 + \sin n}{3(n+2)^2},\qquad b_n =\frac{1}{\cos(n\pi/5)}.$$ We have, as you did: $$\begin{align*} \lim_{n\to\infty}a_n &= \lim_{n\to\infty}\frac{5n^2 + \sin n}{3(n+2)^2}\\ &= \lim_{n\to\infty}\frac{n^2\left(5 + \frac{\sin n}{n^2}\right)}{3n^2(1 + \frac{2}{n})^2}\\ &=\lim_{n\to\infty}\frac{5 + \frac{\sin n}{n^2}}{3(1+\frac{2}{n})^2}\\ &= \frac{5 + 0}{3(1+0)^2} = \frac{5}{3}\neq 0. \end{align*}$$ What about the sequence $(b_n)$? If $n=(2k+1)5$ is an odd multiple of $5$, then $$b_n = b_{(2k+1)5} = \frac{1}{\cos\frac{n\pi}{5}} = \frac{1}{\cos((2k+1)\pi)} = -1;$$ so the subsequence $b_{(2k+1)5}$ is constant, and converges to $-1$. 
On the other hand, if $n=10k$ is an even multiple of $5$, then $$b_n = \frac{1}{\cos\frac{n\pi}{5}} = \frac{1}{\cos(2k\pi)} = 1.$$ so the subsequence $b_{10k}$ is constant and converges to $1$. Since a sequence converges if and only if every subsequence converges and converges to the same thing, but $(b_n)$ has two subsequences that converge to different things, it follows that $(b_n)$ does not converge. (It also does not diverge to $\infty$ or to $-\infty$, since there are subsequences that are constant). And so, what can we conclude, given our observations above about products of sequences?
{ "language": "en", "url": "https://math.stackexchange.com/questions/83919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
commuting matrices & polynomials 1 I need help on this problem: Problem: Find two $3\times 3$ matrices $A$ and $B$ that commute with each other such that neither $A$ is a polynomial in $B$ nor $B$ is a polynomial in $A$.
Take $A=\operatorname{diag}(1,1,2)$ and let $B$ be the matrix with rows $[1,1,0]$, $[0,1,0]$, $[0,0,1]$. Then $AB=BA$, $B$ is not a polynomial in $A$ (any polynomial in the diagonal matrix $A$ is diagonal, but $B$ is not), and $A$ is not a polynomial in $B$: any polynomial $p$ with $p(B)=A$ would have to satisfy $p(1)=1$, since applying $p$ to the upper-left block $\begin{pmatrix}1&1\\0&1\end{pmatrix}$ must give $\operatorname{diag}(1,1)$, while the lower-right entry forces $p(1)=2$, a contradiction. J. Vermeer
{ "language": "en", "url": "https://math.stackexchange.com/questions/83962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is $O(\frac{1}{n}) = o(1)$? Sorry about yet another big-Oh notation question, I just found it very confusing. If $T(n)=\frac{5}{n}$, is it true that $T(n)=O(\frac{1}{n})$ and $T(n) = o(1)$? I think so because (if $h(n)=\frac{1}{n}$) $$ \lim_{n \to \infty} \frac{T(n)}{h(n)}=\lim_{n \to \infty} \frac{\frac{5}{n}}{\frac{1}{n}}=5>0 , $$ therefore $T(n)=O(h(n))$. At the same time (if $h(n)=1$) $$ \lim_{n \to \infty} \frac{T(n)}{h(n)}=\lim_{n \to \infty}\frac{\frac{5}{n}}{1}=0, $$ therefore $T(n)=o(h(n))$. Thanks!
If $x_n = O(1/n)$, this means there exist $N$ and $C$ such that for all $n > N$, $|x_n| \le C|1/n|$. Hence $$ \lim_{n \to \infty} \frac{|x_n|}{1} \le \lim_{n \to \infty} \frac{C|1/n|}{1} = 0. $$ This means if $x_n = O(1/n)$ then $x_n = o(1)$. The converse is not true, though: saying that $x_n = o(1)$ only means $x_n \to 0$, but there are sequences that go to zero and are not $O(1/n)$ (think of $1/\sqrt{n}$ for instance). Hope that helps,
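A quick numeric sketch of the counterexample (my own illustration): $1/\sqrt{n} \to 0$, so it is $o(1)$, but its ratio against $1/n$ equals $\sqrt{n}$, which is unbounded, so it is not $O(1/n)$:

```python
import math

ratios = []
for n in (10**2, 10**4, 10**6):
    a_n = 1 / math.sqrt(n)           # tends to 0, so a_n is o(1)
    ratios.append(a_n / (1 / n))     # equals sqrt(n): unbounded, not O(1/n)
print(ratios)                        # approximately [10.0, 100.0, 1000.0]
```

The growing ratio is exactly what rules out the existence of a constant $C$ in the big-O definition.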
{ "language": "en", "url": "https://math.stackexchange.com/questions/84021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
On the Origin and Precise Definition of the Term 'Surd' So, in the course of last week's class work, I ran across the Maple function surd() that takes the real part of an nth root. However, conversation with my professor and my own research have failed to produce even an adequate definition of the term, much less a good reason for why it is used in that context in Maple. Various dictionaries indicate that it refers to certain subsets (perhaps all of?) the irrationals, while the Wikipedia reference link uses it interchangeably with radical. However, neither of those jive with the Maple interpretation as $\mbox{Surd}(3,x) \neq\sqrt[3]{x}\;\;\;\;\;\;\;x<0$. So, the question is: what is a good definition for "surd"? For bonus points, I would be fascinated to see an origin/etymology of the word as used in mathematical context.
An irrational root of a rational number is defined as a surd. An example is a square root of $2$, namely $\sqrt{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/84075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Can we say a Markov chain with only isolated states is time reversible? By "isolated", I mean that each state of this Markov chain has probability $0$ of moving to another state, i.e. transition probability $p_{ij} = 0$ for $i \ne j$. Thus, there isn't a unique stationary distribution. But by definition, since for any stationary distribution $\pi$ we have $$ \pi_{i}p_{ij} = 0 = \pi_{j}p_{ji}, $$ it seems that we can still call this Markov chain time reversible. Does the concept "time reversible" still make sense in this situation? A bit of background: I was asked to find a Markov chain, with certain restrictions, that is NOT time reversible. But I found that if the stationary distribution exists, the chain is always reversible. So I guess that maybe my chance is a chain that doesn't have a unique stationary distribution. Maybe in this situation we can't call the chain reversible.
I do not think that constructing a Markov chain with isolated states will give you a time-irreversible Markov chain. Consider the case when you have one isolated state. Since an isolated state can never be reached from any other state, your chain is actually a union of two different Markov chains. * *A Markov chain that always stays in the isolated state (which is time reversible by definition) and *A Markov chain on the non-isolated states, which may or may not be time reversible. Thus, I do not think the above strategy will work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/84120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding a correspondence between $\{0,1\}^A$ and $\mathcal P(A)$ I got this question in homework: Let $\{0,1\}^A$ the set of all functions from A (not necessarily a finite set) to $\{0,1\}$. Find a correspondence (function) between $\{0,1\}^A$ and $\mathcal P(A)$ (The power set of $A$). Prove that this correspondence is one-to-one and onto. I don't know where to start, so I need a hint. What does it mean to find a correspondence? I'm not really supposed to define a function, right? I guess once I have the correspondence defined somehow, the proof will be easier. Any ideas? Thanks!
I'll try to say this without all the technicalities that accompany some of the earlier answers. Let $B$ be a member of $\mathcal{P}(A).$ That means $B\subseteq A$. You want to define a function $f$ corresponding to the set $B$. If $x\in A$, then what is $f(x)$? It is: $f(x)=1$ if $x\in B$ and $f(x) = 0$ if $x\not\in B$. After that, you need to show that this correspondence between $B$ and $f$ is really a one-to-one correspondence between the set of all subsets of $A$ and the set of all functions from $A$ into $\{0,1\}$. It has to be "one-to-one in both directions"; i.e. you need to check both, and you need to check that the word "all" is correct in both cases.
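For a small finite $A$ the correspondence can be spelled out in code (a sketch of my own, not part of the homework's intended proof): each subset $B$ corresponds to its characteristic function, and the two maps invert each other:

```python
from itertools import product

A = ('x', 'y', 'z')                     # a small finite example set

def to_subset(f):
    """Characteristic function (as a dict) -> the subset it describes."""
    return frozenset(a for a in A if f[a] == 1)

def to_function(B):
    """Subset -> its characteristic function."""
    return {a: 1 if a in B else 0 for a in A}

functions = [dict(zip(A, bits)) for bits in product((0, 1), repeat=len(A))]
images = {to_subset(f) for f in functions}
assert len(images) == 2 ** len(A)                      # one-to-one and onto
assert all(to_subset(to_function(B)) == B for B in images)
```

The counting assertion is the finite shadow of the bijection: $|\{0,1\}^A| = 2^{|A|} = |\mathcal{P}(A)|$.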
{ "language": "en", "url": "https://math.stackexchange.com/questions/84180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
How to determine or visualize level curves Let $f:\mathbb{C}\to\mathbb{C}$ be given by $f(z)=\int_0^z \frac{1-e^t}{t} dt-\ln z$ and put $g(x,y)=\text{Re}(f(z))$. Using a computer, how can one determine the curve $g(x,y)=0$? Thanks for the help.
Using Mathematica: ContourPlot[With[{z = x + I y}, Re[EulerGamma - ExpIntegralEi[z]]] == 0, {x, -20, 20}, {y, -20, 20}]
{ "language": "en", "url": "https://math.stackexchange.com/questions/84239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Check if point on circle is in between two other points (Java) I am struggling with the following question. I'd like to check if a point on a circle is between two other points to check if the point is in the boundary. It is easy to calculate when the boundary doesn't go over 360 degrees. But when the boundary goes over 360 degrees (e.g. 270° - 180°), the second point is smaller than the first point of the boundary. And then I don't know how to check if my point on the circle is between the boundary points, because I cannot check "first boundary point" < "my point" < "second boundary point". Is there an easy way to check this? Either a mathematical function or an algorithm would be good.
From the question comments with added symbols I have a circle with a certain sector blocked. Say for example the sector between $a = 90°$ and $b = 180°$ is blocked. I now want to check if a point $P = (x,y)$ in the circle of center $C = (x_0,y_0)$ of radius $r$ is in this sector or not to see if it is a valid point or not. In other words, what you need is the angle the $CP$ line forms with the $x$ axis of your system of reference: $$v = \operatorname{atan2}(y - y_0,\; x - x_0),$$ taken modulo $360°$ so it can be compared with the sector bounds. Notice that you still need to calculate the distance $\bar{PC}$ to make sure your point is in the circle to begin with.
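For the wrap-around comparison the question struggles with, here is a minimal sketch (my own; angles in degrees, sector measured counter-clockwise from `start` to `end`):

```python
def angle_in_sector(angle, start, end):
    """True if `angle` lies in the sector swept counter-clockwise
    from `start` to `end` (degrees); handles sectors crossing 360."""
    angle, start, end = angle % 360, start % 360, end % 360
    if start <= end:
        return start <= angle <= end
    return angle >= start or angle <= end   # sector wraps past 360

print(angle_in_sector(100, 90, 180))   # True
print(angle_in_sector(350, 270, 180))  # True  (the wrapped case, 270..180)
print(angle_in_sector(200, 270, 180))  # False
```

Normalizing everything modulo 360 first is what removes the special-casing: the only remaining distinction is whether the sector crosses the 0/360 seam.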
{ "language": "en", "url": "https://math.stackexchange.com/questions/84305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
calculus textbook avoiding "nice" numbers: all numbers are decimals with 2 or 3 sig figs Many years ago, my father had a large number of older used textbooks. I seem to remember a calculus textbook with a somewhat unusual feature, and I am wondering if the description rings a bell with anyone here. Basically, this was a calculus textbook that took the slightly unusual route of avoiding "nice" numbers in all examples. The reader was supposed to always have a calculator at their side, and evaluate everything as a decimal, and only use 2 or 3 significant figures. So for instance, rather than asking for $\int_1^{\sqrt3} \frac{1}{1+x^2}\,dx$, it might be from $x=1.2$ to $x=2.6$, say. The author had done this as a deliberate choice, since most "real-life" math problems involve random-looking decimal numbers, and not very many significant digits. Does this sound familiar to anybody? Any ideas what this textbook might have been?
Though this is probably not the book you are thinking of, Calculus for the Practical Man by Thompson does this. It is, most famously, the book that Richard Feynman learned calculus from, and was part of a whole series of math books "for the practical man". The reason I do not think it is the particular book you are thinking of is that the most recent edition was published in 1946, so there would be no mention of a calculator.
{ "language": "en", "url": "https://math.stackexchange.com/questions/84380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If a topological space has $\aleph_1$-calibre and cardinality at most $2^{\aleph_0}$, must it be star-countable? If a topological space $X$ has $\aleph_1$-calibre and the cardinality of $X$ is $\le 2^{\aleph_0}$, must it be star-countable? A topological space $X$ is said to be star-countable if whenever $\mathscr{U}$ is an open cover of $X$, there is a countable subspace $A$ of $X$ such that $X = \operatorname{St}(A,\mathscr{U})$.
Under CH the space is separable (hence, star-countable). Proof (Ofelia). On the contrary, suppose that $X$ is not separable. Under CH write $X = \{ x_\alpha : \alpha \in \omega_1 \}$ and for each $\alpha$ in $\omega_1$ define $U_\alpha$ = the complement of $\operatorname{cl}(\{ x_\beta : \beta \le \alpha \})$. The family of the $U_\alpha$ is a decreasing family of non-empty open sets; since $\aleph_1$ is a calibre of $X$, the intersection of all $U_\alpha$ must be non-empty (contradiction!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/84415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Computational complexity of least squares regression operation In a least squares regression algorithm, I have to do the following operations to compute regression coefficients: * *Matrix multiplication, complexity: $O(C^2N)$ *Matrix inversion, complexity: $O(C^3)$ *Matrix multiplication, complexity: $O(C^2N)$ *Matrix multiplication, complexity: $O(CN)$ where $N$ is the number of training examples and $C$ is the total number of features/variables. How can I determine the overall computational complexity of this algorithm? EDIT: I studied least squares regression from the book Introduction to Data Mining by Pang-Ning Tan. The explanation of linear least squares regression is available in the appendix, where a solution using the normal equation is provided (something of the form $a=(X^TX)^{-1}X^Ty$, which involves 3 matrix multiplications and 1 matrix inversion). My goal is to determine the overall computational complexity of the algorithm. Above, I have listed the 4 operations needed to compute the regression coefficients, each with its own complexity. Based on this information, can we determine the overall complexity of the algorithm? Thanks!
In this work https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/153646/eth-6011-01.pdf?sequence=1&isAllowed=y two implementation possibilities (the Gaussian elimination alternative vs. using the QR decomposition) are discussed on pages 32 and 33, if you are interested in the actual cost FLOP-wise.
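As a concrete illustration (my own sketch, not from the linked thesis), here is the normal-equation route with a comment marking where each cost in the question comes from:

```python
def normal_equations(X, y):
    """Least squares via the normal equations (X^T X) a = X^T y,
    with the small C x C system solved by Gauss-Jordan elimination."""
    N, C = len(X), len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(N)) for j in range(C)]
           for i in range(C)]                                    # O(C^2 N)
    Xty = [sum(X[k][i] * y[k] for k in range(N)) for i in range(C)]  # O(C N)
    # Gauss-Jordan elimination on the C x C system: O(C^3)
    M = [row + [b] for row, b in zip(XtX, Xty)]
    for i in range(C):
        p = max(range(i, C), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(C):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [M[i][C] / M[i][i] for i in range(C)]

X = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
y = [0.5 + 2.0 * t for t in (1.0, 2.0, 3.0, 4.0)]   # generated by a = (0.5, 2)
print(normal_equations(X, y))                        # ~[0.5, 2.0]
```

Summing the listed terms gives $O(C^2N + C^3)$ overall; since typically $N \ge C$, the $O(C^2N)$ multiplication dominates.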
{ "language": "en", "url": "https://math.stackexchange.com/questions/84495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 3, "answer_id": 2 }
proof of the Cauchy integral formula Let $D=D_{r}(c)$, $r > 0$. Show that if $f$ is continuous in $\overline{D}$ and holomorphic in $D$, then for all $z\in D$: $$f(z)=\frac{1}{2\pi i} \int_{\partial D}\frac{f(\zeta)}{\zeta - z} d\zeta$$ I don't understand this question because I don't see how it is different from the special case of the Cauchy integral formula. I would be very glad if somebody could tell me what the difference is and how to show that it is true.
If I understand correctly, $D_r(c)$ is the open ball centered at $c$ with radius $r$? If this is the case, the difference between the two is that above your $c$ is fixed, and in the special case your $c$ "moves" with the ball. Fix $c$ then; we want to show that for every $z\in D_r(c)$ we have that $$f(z)=\frac{1}{2\pi i}\int_{\partial D_r(c)}\frac{f(\zeta)}{\zeta-z}d\zeta$$ (we assume that $f$ is holomorphic in the ball). Well, if we choose $s>0$ such that $D_s(z)\subseteq D_r(c)$, then by the special case we have that $$f(z)=\frac{1}{2\pi i}\int_{\partial D_s(z)}\frac{f(\zeta)}{\zeta-z}d\zeta,$$ and we see that $\partial D_s(z)$ is homologous to $\partial D_r(c)$. We then have that $$f(z)=\frac{1}{2\pi i}\int_{\partial D_s(z)}\frac{f(\zeta)}{\zeta-z}d\zeta=\frac{1}{2\pi i}\int_{\partial D_r(c)}\frac{f(\zeta)}{\zeta-z}d\zeta.$$ Does that answer your question? Is that what your question was? The nice thing about the "general" Cauchy formula is that the curve we're integrating over no longer depends on the point at which you want to evaluate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/84563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
$\limsup $ and $\liminf$ of a sequence of subsets relative to a topology From Wikipedia if $\{A_n\}$ is a sequence of subsets of a topological space $X$, then: $\limsup A_n$, which is also called the outer limit, consists of those elements which are limits of points in $A_n$ taken from (countably) infinitely many n. That is, $x \in \limsup A_n$ if and only if there exists a sequence of points $\{x_k\}$ and a subsequence $\{A_{n_k}\}$ of $\{A_n\}$ such that $x_k \in A_{n_k}$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. $\liminf A_n$, which is also called the inner limit, consists of those elements which are limits of points in $A_n$ for all but finitely many n (i.e., cofinitely many n). That is, $x \in \liminf A_n$ if and only if there exists a sequence of points $\{x_k\}$ such that $x_k \in A_k$ and $x_k \rightarrow x$ as $k \rightarrow \infty$. According to the above definitions (or what you think is right), my questions are: * *Is $\liminf_{n} A_n \subseteq \limsup_{n} A_n$? *Is $(\liminf_{n} A_n)^c = \limsup_{n} A_n^c$? *Is $\liminf_{n} A_n = \bigcup_{n=1}^\infty\overline{\bigcap_{m=n}^\infty A_m}$? This is based on the comment by Pantelis Sopasakis following my previous question. *Is $\limsup_{n} A_n = \bigcap_{n=1}^\infty\overline{\bigcup_{m=n}^\infty A_m}$, or $\limsup_{n} A_n = \bigcap_{n=1}^\infty\operatorname{interior}(\bigcup_{m=n}^\infty A_m)$, or ...? Thanks and regards!
I must admit that I did not know these definitions, either. * *Yes, because if $x \in \liminf A_n$ you have a sequence $\{x_k\}$ with $x_k \in A_k$ and $x_k \rightarrow x$ and you can choose your subsequence $\{A_{n_k}\}$ to be your whole sequence $\{A_n\}$. *For $X = \mathbb{R}$ take $A_n = \{0\}$ for all $n$. Then you have $(\liminf A_n)^c = \{0\}^c = \mathbb{R}\setminus{\{0\}}$, but $\limsup A_n^c = \mathbb{R}$. So in general you do not have equality. *Of course you have "$\supseteq$". But if you take $A_n := [-1,1- \frac{1}{n}) \subseteq \mathbb{R}$ you have the sequence $\{x_k\}$ defined by $x_k := 1 - \frac{2}{k}$ which converges to $1$ which is not in $\overline{\bigcap_{n = m}^{\infty} A_n} = [-1,1-1/m]$ for any $m \in \mathbb{N}$. But it works of course, if $\{A_k\}$ is decreasing. *In your first assumption you have "$\subseteq$". A sequence $\{x_k\}$ with $x_k \in A_{n_k}$, where $\{A_{n_k}\}$ is some subsequence of $\{A_n\}$, lies eventually in $\bigcup_{n = m}^\infty A_n$ for all $m$ by the definition of subsequence. Now for metric spaces you have "$\supseteq$", too, as you can start by choosing $x_1 \in A_{n_1}$ with $d(x_1,x) < 1$ and proceed by induction choosing $x_{n_k} \in A_{n_k}$ with $n_k > n_{k-1}$ and $d(x_{n_k},x) < \frac{1}{k}$. I doubt that this holds in general (at least this way of proving it does not).
{ "language": "en", "url": "https://math.stackexchange.com/questions/84601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solve $f(f(n))=n!$ What am I doing wrong here ($n!$ denotes the factorial): Find $f(n)$ such that $f(f(n))=n!$ $$f(f(f(n)))=f(n)!=f(n!).$$ So $f(n)=n!$ is a solution, but it does not satisfy the original equation except for $n=1$; why? How to solve $f(f(n))=n!$?
The hypothesis is $f(f(n))=n!$. This implies that $f(n)!=f(n!)$ like you say, but unfortunately the converse is not true; you can't reverse the direction and say that a function satisfying the latter equation also satisfies $f(f(n))=n!$. For a similar situation, suppose we have $x=1$ and square it to obtain $x^2=1$; now $x=-1$ is a solution to the latter equation but clearly $-1\ne+1$ so we've "lost information" by squaring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/84660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Algebraic Structures Question I am having problems understanding what this question is asking. Any help would be appreciated. Thanks. The dihedral group $D_8$ is an 8-element subgroup of the 24-element symmetric group $S_4$. Write down all left and right cosets of $D_8$ in $S_4$ and draw conclusions regarding the normality of $D_8$ in $S_4$. According to your result determine $N_{S_4}(D_8)$.
HINT: Represent $D_8$ with your preferred notation. Perhaps it is the group generated by $(1234)$ and $(13)$. That's ok. Then write down the 8 elements. Then multiply each on the right and on the left by elements of $S_4$, i.e. write down the right and left cosets. You can just sort of do it, and I recommend it in order to get a feel for the group. Patterns will quickly emerge, and it's not that much work. By the way, I want to note that you should pay attention to which elements you multiply by each time. To see if it's normal, you're going to want to see if every left coset is a right coset.
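If you want to check by machine the pattern you find by hand, the whole computation is small enough to brute-force. A sketch in Python, using the generators $(1234)$ and $(13)$ suggested above, written $0$-indexed as tuples (the variable names and setup are mine):

```python
import itertools

# Permutations of {0,1,2,3} as tuples p with p[i] = image of i;
# composition (p*q)(i) = p[q[i]].
def comp(p, q):
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

S4 = set(itertools.permutations(range(4)))

# D8 generated by the 4-cycle (0 1 2 3) and the transposition (0 2):
r, s = (1, 2, 3, 0), (2, 1, 0, 3)
D8 = {(0, 1, 2, 3)}
while True:
    bigger = D8 | {comp(a, g) for a in D8 for g in (r, s)}
    if bigger == D8:
        break
    D8 = bigger

left = {frozenset(comp(g, h) for h in D8) for g in S4}
right = {frozenset(comp(h, g) for h in D8) for g in S4}
print(len(D8), len(left), len(right))   # 8 3 3
print(left == right)                    # False, so D8 is not normal in S4
normalizer = {g for g in S4
              if {comp(comp(g, h), inverse(g)) for h in D8} == D8}
print(normalizer == D8)                 # True: the normalizer is D8 itself
```

The output matches what the coset computation by hand should reveal: three left cosets and three right cosets that do not coincide as families, hence $D_8$ is not normal, and $N_{S_4}(D_8) = D_8$.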
{ "language": "en", "url": "https://math.stackexchange.com/questions/84831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Complex equation solution. How can i resolve it? I have this complex equation $|z+2i|=| z-2 |$. How can i resolve it? Please help me
The geometric way

The points $z$ that satisfy the equation are at the same distance from the points $2$ and $-2\,i$; that is, they are on the perpendicular bisector of the segment joining $2$ and $-2\,i$. This is a line, whose equation you should be able to find.

The algebraic way

When dealing with equations involving $|w|$, it is usually convenient to consider $|w|^2=w\,\bar w$. In your equation, if $z=x+y\,i$, this leads to $x^2+(y+2)^2=(x-2)^2+y^2$, whose solution I'll leave to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/84898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
estimate the perimeter of the island I'm assigned a task involving solving a problem that can be described as follows: Suppose I'm driving a car around a lake. In the lake there is an island of irregular shape. I have a GPS with me in the car so I know how far I've driven and every turns I've made. Now suppose I also have a camera that takes picture of the island 30 times a second, so I know how long sidewise the island appears to me all the time. Also assume I know the straight line distance between me and the island all the time. Given these conditions, if I drive around the lake for one full circle, will I be able to estimate the perimeter of the island? If yes, how? Thanks.
I am going to take a stab at this, although I would like to hear of any flaw in my argument. If you have a reference point on the island where your camera is always pointed, then, since you know your exact travel path and the distance from your path to the edge of the island, you can plot the shape of the island; finding the perimeter can then be done in multiple ways (arc-length integration being one of them). If you do not have a reference point that the camera (your eyes) can always point to, then I think the problem is tricky, and I believe there is no solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/84974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
$f\colon\mathbb{R}\to\mathbb{R}$ such that $f(x)+f(f(x))=x^2$ for all $x$? A friend came up with this problem, and we and a few others tried to solve it. It turned out to be really hard, so one of us asked his professor. I came with him, and it took me, him and the professor about an hour to say anything interesting about it. We figured out that for positive $x$, assuming $f$ exists and is differentiable, $f$ is monotonically increasing. (Differentiating both sides gives $f'(x)*[\text{positive stuff}]=2x$). So $f$ is invertible there. We also figured out that $f$ becomes arbitrarily large, and we guessed that it grows faster than any linear function. Plugging in $f^{-1}(x)$ for $x$ gives $x+f(x)=[f^{-1}(x)]^2$. Since $f(x)$ grows faster than $x$, $f^{-1}$ grows slower and therefore $f(x)=[f^{-1}(x)]^2-x\le x^2$. Unfortunately, that's about all we know... No one knew how to deal with the $f(f(x))$ term. We don't even know if the equation has a solution. How can you solve this problem, and how do you deal with repeated function applications in general?
UPDATE: This answer now makes significantly weaker claims than it used to. Define the sequence of functions $f_n$ recursively by $$f_1(t)=t,\ f_2(t) = 3.8 + 1.75(t-3),\ f_k(t) = f_{k-2}(t)^2 - f_{k-1}(t)$$ The definition of $f_2$ is rigged so that $f_2(3) = 3.8$ and $f_2(3.8) = 3^2- 3.8 = 5.2$. Set $g_k=f_k(3)=f_{k-1}(3.8)$. So the first few $g$'s are $3$, $3.8$, $5.2$, $9.24$, etc. Numerical data suggests that the $g$'s are increasing (checked for $k$ up to $40$). More specifically, it appears that $$g_n \approx e^{c 2^{n/2}} \quad \mbox{for}\ n \ \mbox {odd, where}\ c \approx 0.7397$$ $$g_n \approx e^{d 2^{n/2}} \quad \mbox{for}\ n \ \mbox {even, where}\ d \approx 0.6851$$ We have constructed the $f$'s so that $f_k(3)=g_k$ and $f_k(3.8) = g_{k+1}$. Numerical data suggests also that $f_k$ is increasing on $[3,3.8]$ (checked for $k$ up to $20$). Assuming this is so, define $$f(x) = f_{k+1} (f_{k}^{-1}(x)) \ \mathrm{where}\ x \in [g_k, g_{k+1}].$$ Note that we need the above numerical patterns to continue for this definition to make sense. This gives a function with the desired properties on $[3,\infty)$. Moreover, we can extend the definition downwards to $[2, \infty)$ by running the recursion backwards; setting $f_{-k}(t) = \sqrt{f_{-k+1}(t) + f_{-k+2}(t)}$. Note that, if $f_{-k+1}$ and $f_{-k+2}$ are increasing then this equation makes it clear that $f_{-k}$ is increasing. Also, if $g_{-k+1} < g_{-k+2}$ and $g_{-k+2} < g_{-k+3}$ then $g_{-k} = \sqrt{g_{-k+1} + g_{-k+2}} < \sqrt{g_{-k+2} + g_{-k+3}} = g_{-k+1}$, so the $g$'s remain monotone for $k$ negative. So this definition will extend our function successfully to the union of all the $[g_k, g_{k+1}]$'s, for $k$ negative and positive. This union is $[2, \infty)$. There is nothing magic about the number $3.8$; numerical experimentation suggests that $g_1$ must be chosen in something like the interval $(3.6, 3.9)$ in order for the hypothesis to hold. 
I tried to make a similar argument to construct $f:[0,2] \to [0,2]$, finding some $u$ such that the recursively defined sequence $1$, $u$, $1^2-u$, $u^2-(1^2-u)$ etcetera would be decreasing. This rapidly exceeded my computational ability. I can say that, if there is such a $u$, then $$0.66316953423239333 < u < 0.66316953423239335.$$ If you want to play with this, I would be delighted to hear of any results, but let me warn you to be very careful about round off errors!
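The numerical claims above are easy to reproduce: the $g_k$ come straight from the recursion $g_k = g_{k-2}^2 - g_{k-1}$ with the hand-tuned seed $g_2 = 3.8$. A minimal Python sketch (floats overflow quickly because of the doubly exponential growth, so only the first terms are generated):

```python
# g_1 = 3, g_2 = 3.8 (the empirically chosen seed), g_k = g_{k-2}^2 - g_{k-1}
def g_sequence(n_terms, g1=3.0, g2=3.8):
    g = [g1, g2]
    while len(g) < n_terms:
        g.append(g[-2] ** 2 - g[-1])
    return g

g = g_sequence(15)
print([round(x, 2) for x in g[:4]])          # [3.0, 3.8, 5.2, 9.24]
# The construction only makes sense if the g's keep increasing:
print(all(a < b for a, b in zip(g, g[1:])))  # True
```

Swapping the seed $3.8$ for values outside roughly $(3.6, 3.9)$ and watching the monotonicity flag flip is a quick way to see the sensitivity mentioned at the end of the answer.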
{ "language": "en", "url": "https://math.stackexchange.com/questions/85148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 3 }
Vector Mid Point vs Mid Point Formula Given $OA=(2,9,-6)$ and $OB=(6,-3,-6)$. If $D$ is the midpoint, is it $OD=((2+6)/2, (9-3)/2, (-6-6)/2)$? The correct answer is $OD=\frac{1}{2}AB=(2,-6,0)$
Your first answer is the midpoint of the line segment that joins the tip of the vector $OA$ to the tip of the vector $OB$. The one that you call the correct answer is gotten by putting the vector $AB$ into standard position, with its initial end at the origin, and then finding the midpoint.
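A quick way to see the two computations side by side, using nothing beyond the vectors in the question:

```python
OA = (2, 9, -6)
OB = (6, -3, -6)

# Midpoint of the segment joining the tips of OA and OB:
midpoint = tuple((a + b) / 2 for a, b in zip(OA, OB))
# Half of the vector AB = OB - OA, translated to start at the origin:
half_AB = tuple((b - a) / 2 for a, b in zip(OA, OB))

print(midpoint)  # (4.0, 3.0, -6.0)
print(half_AB)   # (2.0, -6.0, 0.0)
```

Both are legitimate "midpoints"; they just live in different places, which is exactly the distinction drawn above.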
{ "language": "en", "url": "https://math.stackexchange.com/questions/85224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can every group be represented by a group of matrices? Can every group be represented by a group of matrices? Or are there any counterexamples? Is it possible to prove this from the group axioms?
Every finite group is isomorphic to a matrix group. This is a consequence of Cayley's theorem: every group is isomorphic to a subgroup of the symmetric group on its underlying set. Since the symmetric group $S_n$ has a natural faithful permutation representation as the group of $n\times n$ 0-1 matrices with exactly one 1 in each row and column, it follows that every finite group is a matrix group. However, there are infinite groups which are not matrix groups, for example, the symmetric group on an infinite set or the metaplectic group. Note that every group can be represented non-faithfully by a group of matrices: just take the trivial representation. My answer above is for the question of whether every group has a faithful matrix representation.
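The permutation-matrix representation mentioned above is easy to make concrete. A small pure-Python sketch for $S_3$ (the helper names are mine), checking that the matrix product matches composition of permutations and that the representation is faithful:

```python
import itertools

# A permutation p of {0,...,n-1} becomes the 0-1 matrix M with
# M[i][j] = 1 exactly when p(j) = i, so M sends basis vector e_j to e_{p(j)}.
def perm_matrix(p):
    n = len(p)
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S3 = list(itertools.permutations(range(3)))

# Homomorphism: M_p M_q = M_{p o q} for all p, q in S_3 ...
for p in S3:
    for q in S3:
        pq = tuple(p[q[i]] for i in range(3))
        assert matmul(perm_matrix(p), perm_matrix(q)) == perm_matrix(pq)

# ... and faithful: distinct permutations give distinct matrices.
assert len({str(perm_matrix(p)) for p in S3}) == len(S3)
```

Composing this with the Cayley embedding of any finite $G$ into the symmetric group on $G$ gives the faithful matrix representation claimed in the first paragraph.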
{ "language": "en", "url": "https://math.stackexchange.com/questions/85308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56", "answer_count": 5, "answer_id": 0 }
$f:[0,1] \rightarrow \mathbb{R}$ is absolutely continuous and $f' \in \mathcal{L}_{2}$ I am studying for an exam and am stuck on this practice problem. Suppose $f:[0,1] \rightarrow \mathbb{R}$ is absolutely continuous and $f' \in \mathcal{L}_{2}$. If $f(0)=0$ does it follow that $\lim_{x\rightarrow 0} f(x)x^{-1/2}=0$?
Yes. The Cauchy-Schwarz inequality gives $$f(x)^2=\left(\int^x_0 f^\prime(y)\ dy\right)^2\leq \left(\int^x_0 1\ dy\right) \left(\int^x_0 f^\prime(y)^2\ dy\right)=x\left(\int^x_0 f^\prime(y)^2\ dy\right).$$ Dividing by $x$ and taking square roots we get $$|f(x)/\sqrt{x}|\leq \left(\int^x_0 f^\prime(y)^2\ dy\right)^{1/2}$$ and since $f^\prime$ is square integrable, the right hand side goes to zero as $x\downarrow 0$. This gives the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/85358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Solving $A(x) = 2A(x/2) + x^2$ Using Generating Functions Suppose I have the recurrence: $$A(x) = 2A(x/2) + x^2$$ with $A(1) = 1$. Is it possible to derive a function using Generating Functions? I know that Generatingfunctionology shows how to solve recurrences like $A(x) = 2A(x-1) + x$. But is it possible to solve the above recurrence as well?
I am a little confused by the way you worded this question (it seems that you have a functional equation rather than a recurrence relation), so I interpreted it in the only way that I could make sense of it. If this is not what you are looking for, then please clarify in your original question or in a comment. Let's assume that $A(x)$ is a formal power (or possibly Laurent) series, $A(x) = \sum_n a_n x^n$. Plugging this into your equation, we get $$ \sum_n a_n x^n = 2 \sum_n a_n \frac{x^n}{2^n} + x^2 $$ For $n\neq 2$, we get $a_n = 2^{1-n} a_n$, so if $n \neq 1,2$ we get $a_n = 0$. For $n=2$, we get $a_2 = a_2/2 + 1$, so $a_2 = 2$. Finally, the condition $A(1) = 1$ gives $a_1 = -1$, so we have $$ A(x) = -x + 2x^2 $$ Check: $$ 2 A(x/2) + x^2 = 2( -x/2 + x^2/2) + x^2 = -x + 2x^2 = A(x) $$
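Since the answer came from matching coefficients, the closed form is easy to double-check by sampling: $A(x) - 2A(x/2) - x^2$ is a polynomial of degree at most $2$, so vanishing at more than two points forces it to vanish identically. A quick exact check with `fractions`:

```python
from fractions import Fraction

def A(x):
    return -x + 2 * x * x   # the solution found above

assert A(1) == 1
for x in [Fraction(1, 3), Fraction(2), Fraction(-5, 7), Fraction(10)]:
    assert A(x) == 2 * A(x / 2) + x * x
print("A(x) = -x + 2x^2 satisfies the functional equation")
```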
{ "language": "en", "url": "https://math.stackexchange.com/questions/85415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Secret santa problem We decided to do secret Santa in our office. And this brought up a whole heap of problems that nobody could think of solutions for - bear with me here.. this is an important problem. We have 4 people in our office - each with a partner that will be at our Christmas meal. Steve, Christine, Mark, Mary, Ken, Ann, Paul(me), Vicki Desired outcome Nobody can know who is buying a present for anybody else. But we each want to know who we are buying our present for before going to the Christmas party. And we don't want to be buying presents for our partners. Partners are not in the office. Obvious solution is to put all the names in the hat - go around the office and draw two cards. And yes - sure enough I drew myself and Mark drew his partner. (we swapped) With that information I could work out that Steve had a 1/3 chance of having Vicki(he didn't have himself or Christine - nor the two cards I had acquired Ann or Mary) and I knew that Mark was buying my present. Unacceptable result. Ken asked the question: "What are the chances that we will pick ourselves or our partner?" So I had a stab at working that out. First card drawn -> 2/8 Second card drawn -> 12/56 Adding them together makes 28/56 i.e. 1/2. i.e. This method won't ever work... half chances of drawing somebody you know means we'll be drawing all year before we get a solution that works. My first thought was that we attach two cards to our backs... put on blindfolds and stumble around in the dark grabbing the first cards we came across... However this is a little unpractical and I'm pretty certain we'd end up knowing who grabbed what anyway. Does anybody have a solution for distributing cards that results in our desired outcome? I'd prefer a solution without a third party..
See this algorithm here: http://weaving-stories.blogspot.co.uk/2013/08/how-to-do-secret-santa-so-that-no-one.html. It's a little too long to include in a Stack Exchange answer. Essentially, we fix the topology to be a simple cycle, and then once we have a random order of participants we can also determine who to get a gift for.
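In case the link rots completely, here is a minimal sketch in Python of the cycle idea, with the no-partners constraint from the question bolted on by rejection sampling; the names and couples are the ones from the question. Note this only produces a valid assignment — the secrecy part ("nobody can know who is buying for anybody else") is what the linked protocol is really about:

```python
import random

couples = [("Steve", "Christine"), ("Mark", "Mary"),
           ("Ken", "Ann"), ("Paul", "Vicki")]
partner = {}
for a, b in couples:
    partner[a], partner[b] = b, a
people = list(partner)

def draw(rng):
    # One random cycle through everyone: nobody can draw themselves.
    # Re-shuffle until nobody draws their own partner either.
    while True:
        order = people[:]
        rng.shuffle(order)
        pairs = list(zip(order, order[1:] + order[:1]))
        if all(partner[giver] != receiver for giver, receiver in pairs):
            return dict(pairs)

assignment = draw(random.Random(0))
for giver, receiver in assignment.items():
    assert giver != receiver and partner[giver] != receiver
```

Because everyone sits on a single cycle, the "draw yourself" failure mode of the hat method disappears entirely, and only the partner-adjacency case needs re-drawing.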
{ "language": "en", "url": "https://math.stackexchange.com/questions/85470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 2 }
Limits Involving Trigonometric Functions I should prove, using the limit definition, that $$\lim_{x \rightarrow 0} \, x^{1/3}\cos(1/x) = 0.$$ I am having trouble because the second factor is rather complicated, so I think I need a transformation. What form could this function take after such a transformation?
You can solve this problem with the Squeeze Theorem. First, notice that $-1 \leq \cos(1/x) \leq 1$ for every $x \neq 0$ (the cosine graph never goes beyond these bounds, no matter what you put inside as the argument). Hence $|x^{1/3}\cos(1/x)| \leq |x|^{1/3}$, that is, $$ -|x|^{1/3} \leq x^{1/3}\cos(1/x) \leq |x|^{1/3} $$ (using the absolute value keeps the inequalities valid for negative $x$ as well). Since $|x|^{1/3}$ is continuous, $$ \lim_{x \rightarrow 0} (-|x|^{1/3}) = \lim_{x \rightarrow 0} \, |x|^{1/3} = 0, $$ and the Squeeze Theorem now guarantees that the middle limit exists and equals the common value of the outer two: $$ \lim_{x \rightarrow 0} \, x^{1/3}\cos(1/x) = 0. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/85538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Zero Dim Topological group I have this assertion which looks rather easy (or as always I am missing something): We have $G$ topological group which is zero dimensional, i.e it admits a basis for a topology which consists of clopen sets, then every open nbhd that contains the identity element of G also contains a clopen subgroup. I naively thought that if I take $\{e\}$, i.e the trivial subgroup, it's obviously closed, so it's also open in this topology, i.e clopen, and it's contained in every nbhd that contains $e$, but isn't it then too trivial. Missing something right? :-)
This is certainly false. Take $G=\mathbb{Q}$ with the standard topology and additive group structure. The topology is zero-dimensional since intersecting with $\mathbb{Q}$ the countably many open intervals whose endpoints are rational translates of $\sqrt 2$ gives a clopen basis for the topology on $G$. The trivial subgroup $\{0\}$ is certainly not open since the standard topology on $\mathbb{Q}$ is not discrete. So any clopen subgroup $H \subset G$ contains a nonzero rational number $q$ and so also $2q,3q,4q$ and so on. This shows $H$ is unbounded so, for example, the open neighbourhood $(-1,1) \cap \mathbb{Q}$ of $0 \in G$ contains no nontrival subgroup.
{ "language": "en", "url": "https://math.stackexchange.com/questions/85606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The history of set-theoretic definitions of $\mathbb N$ What representations of the natural numbers have been used, historically, and who invented them? Are there any notable advantages or disadvantages? I read about Frege's definition not long ago, which involves iterating over all other members of the universe; clearly not possible in set theories without a universal set. The one that is commonly used today to construct natural numbers from the empty set is * *$0 = \{\}$ *$S(n)=n\cup\{n\}$ but I know that another early definition was * *$0=\{\}$ *$S(n)=\{n\}$ Unfortunately I don't know who first used or popularized these last two, nor whether there were other early contenders.
Maybe this book might be useful for you, too. I'll include a short quote from §1.2 Natural numbers. Ebbinghaus et al.: Numbers, p.14 Counting with the help of number symbols marks the beginning of arithmetic. Computation counting. Until well into the nineteenth century, efforts were made to trace the idea of number back to its origins in the psychological process of counting. The psychological and philosophical terminology used for this purpose met with criticism, however, after FREGE's logic and CANTOR'S set theory had provided the logicomathematical foundations for a critical assessment of the number concept. DEDEKIND, who had been in correspondence with CANTOR since the early 1870's, proposed in his book Was sind und was sollen die Zahlen? [9] (published in 1888, but for the most part written in the years 1872—1878) a "set-theoretical" definition of the natural numbers, which other proposed definitions by FREGE and CANTOR and finally PEANO'S axiomatization were to follow. That the numbers, axiomatized in this way, are uniquely defined, (up to isomorphism) follows from DEDEKIND'S recursion theorem. Dedekind's book Was sind und was sollen die Zahlen? seems to be available online: Wikipedia article on Richard Dedekind gives two links, one at ECHO and one at GDZ.
{ "language": "en", "url": "https://math.stackexchange.com/questions/85672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Geometry problem: Line intersecting a semicircle Suppose we have a semicircle that rests on the negative x-axis and is tangent to the y-axis. A line intersects both axes and the semicircle. Suppose that the points of intersection create three segments of equal length. What is the slope of the line? I have tried numerous tricks, none of which work, sadly.
In this kind of problem, it is inevitable that plain old analytic geometry will work. A precise version of this assertion is an important theorem, due to Tarski. If "elementary geometry" is suitably defined, then there is an algorithm that will determine, given any sentence of elementary geometry, whether that sentence is true in $\mathbb{R}^n$. So we might as well see what routine computation buys us. We can take the equation of the circle to be $(x+1)^2+y^2=1$, and the equation of the line to be (what else?) $y=mx+b$. Let our semicircle be the upper half of the circle. Substitute $mx+b$ for $y$ in the equation of the circle. We get $$(1+m^2)x^2+2(1+mb)x +b^2=0. \qquad\qquad(\ast)$$ Let the root nearest the origin be $r_1$, and the next one $r_2$. Note that the line meets the $x$-axis at $x=-b/m$. From the geometry we can deduce that $-r_2=-2r_1$ and $b/m=-3r_1$, and therefore $$r_1=-\frac{b}{3m} \qquad\text{and} \qquad r_2=-\frac{2b}{3m}.$$ By looking at $(\ast)$ we conclude that $$-\frac{b}{m}=-\frac{2(1+mb)}{1+m^2} \qquad\text{and} \qquad \frac{2b^2}{9m^2}=\frac{b^2}{1+m^2}.$$ Thus the algebra gives us the candidates $m=\pm\sqrt{\frac{2}{7}}$. (Of course, the first equation was not needed.) Sadly, we should not always believe what algebraic manipulation seems to tell us. I have checked out the details for the positive candidate for the slope, and everything is fine. Our line has equation $y=\sqrt{\frac{2}{7}}x+ \frac{2\sqrt{14}}{5}$. Pleasantly, the points $r_1$ and $r_2$ turn out to have rational coordinates. However, the negative candidate is not fine. That can be checked by looking at the geometry. But it is also clear from the algebra, which has been symmetrical about the $x$-axis. The algebra was not told that we are dealing with a semicircle, not a circle. So naturally it offered us a mirror symmetric list of configurations. We conclude that the slope is $\sqrt{\dfrac{2}{7}}$.
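If you want to confirm the computation without re-doing the algebra, a few lines of floating-point arithmetic retrace the equations above for the positive candidate $m=\sqrt{2/7}$:

```python
from math import sqrt, isclose

m = sqrt(2 / 7)
b = 2 * sqrt(14) / 5

# Intersections of y = m x + b with the circle (x+1)^2 + y^2 = 1 solve
# (1+m^2) x^2 + 2(1+m b) x + b^2 = 0.
A, B, C = 1 + m * m, 2 * (1 + m * b), b * b
disc = sqrt(B * B - 4 * A * C)
r1, r2 = (-B + disc) / (2 * A), (-B - disc) / (2 * A)  # r1 nearest the origin

# Three equal segments: r2 = 2 r1 and the x-intercept -b/m sits at 3 r1.
assert isclose(r2, 2 * r1)
assert isclose(-b / m, 3 * r1)
# Both intersection points lie on the UPPER half, as required:
assert m * r1 + b > 0 and m * r2 + b > 0
print(r1, r2)   # -14/15 and -28/15: rational x-coordinates, as promised
```

Trying the same check with $m=-\sqrt{2/7}$ puts the intersection points on the lower half of the circle, which is the geometric reason that candidate is rejected.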
{ "language": "en", "url": "https://math.stackexchange.com/questions/85775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to use the Extended Euclidean Algorithm manually? I've only found a recursive algorithm of the extended Euclidean algorithm. I'd like to know how to use it by hand. Any idea?
The way to do this is due to Blankinship, "A New Version of the Euclidean Algorithm", AMM 70:7 (Sep 1963), 742-745. Say we want $a x + b y = \gcd(a, b)$, for simplicity with positive $a$, $b$ with $a > b$. Set up auxiliary vectors $(x_1, x_2, x_3)$, $(y_1, y_2, y_3)$ and $(t_1, t_2, t_3)$ and keep them such that we always have $x_1 a + x_2 b = x_3$, $y_1 a + y_2 b = y_3$, $t_1 a + t_2 b = t_3$ throughout. The algorithm itself is:

    (x1, x2, x3) := (1, 0, a)
    (y1, y2, y3) := (0, 1, b)
    while y3 <> 0 do
        q := floor(x3 / y3)
        (t1, t2, t3) := (x1, x2, x3) - q * (y1, y2, y3)
        (x1, x2, x3) := (y1, y2, y3)
        (y1, y2, y3) := (t1, t2, t3)

At the end, $x_1 a + x_2 b = x_3 = \gcd(a, b)$. It is seen that $x_3$, $y_3$ behave as in the classic Euclidean algorithm, and it is easily checked that the invariant mentioned is kept all the time. One can do away with $x_2$, $y_2$, $t_2$ and recover $x_2$ at the end as $(x_3 - x_1 a) / b$.
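The pseudocode translates almost line for line into a runnable function; a sketch in Python:

```python
def extended_gcd(a, b):
    # Invariant throughout: x1*a + x2*b == x3 and y1*a + y2*b == y3.
    x1, x2, x3 = 1, 0, a
    y1, y2, y3 = 0, 1, b
    while y3 != 0:
        q = x3 // y3
        t1, t2, t3 = x1 - q * y1, x2 - q * y2, x3 - q * y3
        x1, x2, x3 = y1, y2, y3
        y1, y2, y3 = t1, t2, t3
    return x1, x2, x3   # x1*a + x2*b == gcd(a, b)

x, y, g = extended_gcd(240, 46)
print(x, y, g)   # -9 47 2
assert x * 240 + y * 46 == g == 2
```

Working the same example by hand, row by row, is a good way to internalize the tabular method.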
{ "language": "en", "url": "https://math.stackexchange.com/questions/85830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85", "answer_count": 3, "answer_id": 1 }
Proof by Induction: Alternating Sum of Fibonacci Numbers Possible Duplicate: Show that $f_0 - f_1 + f_2 - \cdots - f_{2n-1} + f_{2n} = f_{2n-1} - 1$ when $n$ is a positive integer This is a homework question so I'm looking to just be nudged in the right direction, I'm not asking for my work to be done for me. The Fibonacci numbers are defined as follows: $f_0 = 0$, $f_1 = 1$, and $f_{n+2} = f_n + f_{n+1}$ whenever $n \geq 0$. Prove that when $n$ is a positive integer: \begin{equation*} f_0 - f_1 + f_2 + \ldots - f_{2n-1} + f_{2n} = f_{2n-1} - 1 \end{equation*} So as I understand it, this is an induction problem. I've done the basis step using $n = 1$: \begin{align*} - f_{2(1)-1} + f_{2(1)} &= f_{2(1)-1} - 1\newline - f_1 + f_2 &= f_1 - 1\newline - 1 + 1 &= 1 - 1\newline 0 &= 0 \end{align*} I've concluded that the inductive hypothesis is that $- f_{2n-1} + f_{2n} = f_{2n-1} - 1$ is true for some $n \geq 1$. From what I can gather, the inductive step is: \begin{equation*} f_0 - f_1 + f_2 + \ldots - f_{2n-1} + f_{2n} - f_{2n+1} = f_{2n} - 1 \end{equation*} However, what I find when I try to prove it using that equation is that it is incorrect. For example, when I take $n = 1$ \begin{align*} - f_{2(1)-1} + f_{2(1)} + f_{2(1)+1} &\neq f_{2(1)} - 1\newline - f_1 + f_2 - f_3 &\neq f_2 - 1\newline - 1 + 1 - 2 &\neq 1 - 1\newline -2 &\neq 0 \end{align*} I suppose that my inductive step is wrong but I'm not sure where I went wrong. Maybe I went wrong elsewhere. Any hints?
HINT $\ $ LHS,RHS both satisfy $\rm\ g(n+1) - g(n)\: =\: f_{2\:n},\ \ g(1) = 0\:.\:$ But it is both short and easy to prove by induction that the solutions $\rm\:g\:$ of this recurrence are unique. Therefore LHS = RHS. Note that abstracting away from the specifics of the problem makes the proof both much more obvious and, additionally, yields a much more powerful result - one that applies to any functions satisfying a recurrence of this form. Generally uniqueness theorems provide very powerful tools for proving equalities. For much further elaboration and many examples see my prior posts on telescopy and the fundamental theorem of difference calculus, esp. this one.
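Before (or after) writing out the induction, it is reassuring to check the identity numerically; a throwaway Python check:

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# f_0 - f_1 + f_2 - ... - f_{2n-1} + f_{2n} = f_{2n-1} - 1
for n in range(1, 25):
    lhs = sum((-1) ** k * fib(k) for k in range(2 * n + 1))
    assert lhs == fib(2 * n - 1) - 1
print("identity holds for n = 1 .. 24")
```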
{ "language": "en", "url": "https://math.stackexchange.com/questions/85894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
The construction of function If $X$ is a paracompact Hausdorff space and $\{ {X_i}\} $ $i \ge 0$ are open subsets of $X$ such that ${X_i} \subset {X_{i + 1}}$ and $\bigcup\nolimits_{i \ge o} {{X_i}} = X$, can we find a continuous function $f$ such that $f(x) \ge i + 1$ when $x \notin {X_i}$ ?
Yes, your function exists. As $X$ is paracompact and Hausdorff you get a partition of unity subordinate to your cover $\{X_i\}$, that is, a family $\{f_i : X \rightarrow [0,1]\}_{i \geq 0}$ with $\text{supp}(f_i) \subset X_i$ and every $x \in X$ has a neighborhood such that $f_i(x) = 0$ for all but finitely many $i \geq 0$ and $\sum_i f_i(x) = 1$. That implies that $g = \sum_i i \cdot f_i$ is a well defined, continuous map. For $x \notin X_k$ you have $f_i(x) = 0$ for all $i \leq k$ and that implies $$ \sum_{i \geq 0} i \cdot f_i(x) = \sum_{i \geq k+1} i \cdot f_i(x) \geq (k+1) \sum_{i \geq k+1} f_i(x) = k+1, $$ that is, $g(x) \geq k+1$ for all $x \notin X_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/85956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Reciprocal of a continued fraction I have to prove the following: Let $\alpha=[a_0;a_1,a_2,...,a_n]$ and $\alpha>0$, then $\dfrac1{\alpha}=[0;a_0,a_1,...,a_n]$ I started with $$\alpha=[a_0;a_1,a_2,...,a_n]=a_0+\cfrac1{a_1+\cfrac1{a_2+\cfrac1{a_3+\cfrac1{a_4+\cdots}}}}$$ and $$\frac1{\alpha}=\frac1{[a_0;a_1,a_2,...,a_n]}=\cfrac1{a_0+\cfrac1{a_1+\cfrac1{a_2+\cfrac1{a_3+\cfrac1{a_4+\cdots}}}}}$$ But now I don't know how to go on. In some way I have to show that $a_0$ is replaced by $0$, $a_1$ by $a_0$, and so on. Any help is appreciated.
Coming in late: there's a similar approach that will let you take the reciprocal of nonsimple continued fractions as well.

* change the denominator sequence from $[b_0;a_0,a_1,a_2...]$ to $[0; b_0,a_0,a_1,a_2...]$
* change the numerator sequence from $[c_1,c_2,c_3,...]$ to $[1,c_1,c_2,c_3...]$
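For the simple (numerator-$1$) case, the claim is easy to test mechanically by evaluating both continued fractions exactly; a small sketch with `fractions`:

```python
from fractions import Fraction

def cf_value(coeffs):
    # Evaluate [a0; a1, ..., an] from the inside out, exactly.
    v = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        v = a + 1 / v
    return v

for cf in [[2, 3, 5, 7], [1, 1, 1, 1, 1], [4, 2, 6]]:
    alpha = cf_value(cf)
    assert alpha > 0
    assert cf_value([0] + cf) == 1 / alpha   # prepending 0 inverts
```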
{ "language": "en", "url": "https://math.stackexchange.com/questions/86043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
An unintuitive probability question Suppose you meet a stranger — call him Monty — in the street, walking with a boy. He tells you that the boy is his son, and that he has another child. Assuming equal probability for boy and girl, and equal probability for Monty to walk with either child, what is the probability that the second child is a male? The way I see it, we get to know that the stranger has 2 children, meaning each of these choices is of equal probability: (M,M) (M,F) (F,M) (F,F) And we are given the additional information that one of his children is a boy, thus removing option 4 (F,F). Which means that the probability of the other child being a boy is 1/3. Am I correct?
I don't think so. The probability of this is $\frac 1 2$ which seems clear since all it depends on is the probability that the child not walking with Monty is a boy. The probability is not $\frac 1 3$! The fact that he is actually walking with a son is different than the fact that he has at least one son! Let $A$ be the event "Monty has $2$ sons" and $B$ be the event "Monty is walking with a son." Then the probability of $A$ given $B$ is $P(A | B) = \frac {P(A)} {P(B)}$ since $B$ happens whenever $A$ happens. $P(A) = \frac 1 4$ assuming the genders of the children are equally likely and the genders of the two children are independent. On the other hand, $P(B) = \frac 1 2$ by symmetry; if the genders are equally likely then Monty is just as likely to be walking with a boy as a girl. This particular problem isn't even counterintuitive; most people would guess $\frac 1 2$. Now, on the other hand, suppose $C$ is the event that Monty has at least one son. This event has probability $\frac 3 4$ and so if you happen to run into Monty on the street without a child, but you ask him if he has at least one son and he says "yes," it turns out that Monty has two sons $\frac 1 3$ of the time using the same calculation. This seems a little paradoxical, but you actually get more information from event $B$ than you do from event $C$; you know more than just that he has a son, but you can pin down which one it is: the one walking with Monty. The juxtaposition of these two results seems paradoxical. Really, I think the moral of the story is that human beings are bad at probability, since just about everyone gets this and the original Monty Hall problem wrong; in fact, people can be quite stubborn about accepting the fact that they were wrong! The common thread between the two problems is the concept of conditional probability, and the fact that humans are terrible at applying it.
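If the juxtaposition of the two answers still feels wrong, simulation settles it; a quick Monte Carlo sketch of both conditionings (seed fixed for reproducibility):

```python
import random

rng = random.Random(1)
walk_boy = walk_both = atleast = atleast_both = 0
for _ in range(200_000):
    kids = [rng.choice("BG"), rng.choice("BG")]
    shown = rng.choice(kids)              # the child Monty is walking with
    if shown == "B":                      # condition B: walking with a son
        walk_boy += 1
        walk_both += kids.count("B") == 2
    if "B" in kids:                       # condition C: has at least one son
        atleast += 1
        atleast_both += kids.count("B") == 2

print(walk_both / walk_boy)       # close to 1/2
print(atleast_both / atleast)     # close to 1/3
```

The same run exhibits both probabilities at once, which makes it clear that the difference comes from the conditioning event, not from any inconsistency.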
{ "language": "en", "url": "https://math.stackexchange.com/questions/86114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find irreducible representation of a group from reducible one? I was reading this document to answer my question. But after teaching me a whole lot of jargon (subgroup, normal subgroup, cosets, factor group, direct sums, modules and all that), the document says this: You likely realize immediately that this is not a particularly easy thing to do by inspection. It turns out that there is a very straightforward and systematic way of taking a given representation and determining whether or not it is reducible, and if so, what the irreducible representations are. However, the details of how this can be done, while very interesting, are not necessary for the agenda of these notes. Therefore, for the sake of brevity, we will not pursue them. (>_<) I want to learn to do this by hand and then write a program. Please don't ask me to learn GAP or any other software instead. How to find irreducible representation of a group from reducible one? What is that straightforward and systematic way?
I do not believe that there is a straightforward way of doing what you want for complex representations. Probably the best way is to first compute the character table of the group. There are algorithms for that, such as Dixon-Schneider, but it is not something you can just sit down and program in an afternoon. Then you can use the orthogonality relations to find the irreducible constituents of your representation. There are then algorithms you could use to construct the matrices of a representation from its character - there is one due to Dixon, for example. This method is indirect in that you are not computing the irreducible constituents directly, but using the full character table, but I don't know any better way of doing it. Strangely, this problem is a little easier for representations over finite fields, where there is a comparatively simple algorithm known as the "MeatAxe" for finding the irreducible constituents directly. (But programming it efficiently would still take a lot of effort.)
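The "use the orthogonality relations" step is the easy, programmable part once a character table is in hand. A tiny illustration with $S_3$, whose characters are real, so no complex conjugation is needed in the inner product (the table is standard; the function names are mine):

```python
# Character table of S_3: columns are the classes {e}, transpositions,
# 3-cycles (sizes 1, 3, 2); the rows below are the irreducible characters.
class_sizes = [1, 3, 2]
irreducibles = [(1, 1, 1), (1, -1, 1), (2, 0, -1)]
order = sum(class_sizes)  # |S_3| = 6

def multiplicity(chi, irr):
    # <chi, irr> = (1/|G|) * sum over classes of |class| * chi * irr
    # (valid as written only for real-valued characters)
    total = sum(s * a * b for s, a, b in zip(class_sizes, chi, irr))
    assert total % order == 0
    return total // order

# The regular representation has character (6, 0, 0); each irreducible
# occurs with multiplicity equal to its dimension:
print([multiplicity((6, 0, 0), irr) for irr in irreducibles])  # [1, 1, 2]
```

Given any character $\chi$ of a representation, this inner product against each row of the table decomposes $\chi$ into irreducibles; the genuinely hard parts, as noted above, are producing the table and then the matrices.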
{ "language": "en", "url": "https://math.stackexchange.com/questions/86184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
How to find the GCD of two polynomials How do we find the GCD $G$ of two polynomials, $P_1$ and $P_2$ in a given ring (for example $\mathbf{F}_5[x]$)? Then how do we find polynomials $a,b\in \mathbf{F}_5[x]$ so that $P_1a+ P_2b=G$? An example would be great.
If you have the factorization of each polynomial, then you know what the divisors look like, so you know what the common divisors look like, so you just pick out one of highest degree. If you don't have the factorization, the Euclidean algorithm works for polynomials in ${\bf F}_5[x]$ just as it does in the integers, which answers the first question; so does the extended Euclidean algorithm, which answers the second question. If you are unfamiliar with these algorithms, they are all over the web, and in pretty much every textbook that does field theory.
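The question asked for an example, so here is a minimal sketch of the Euclidean algorithm in $\mathbf{F}_5[x]$ in Python (all function names are mine, not a library API). Polynomials are coefficient lists, lowest degree first, and the example computes $\gcd(x^2-1,\; x^2+2x+1) = x+1$. The Bézout polynomials $a, b$ with $P_1 a + P_2 b = G$ can be obtained the same way by tracking cofactors, exactly as in the integer extended Euclidean algorithm:

```python
P = 5  # we work in F_5[x]; a polynomial is a list of coefficients mod 5

def trim(f):
    # drop leading zeros, reduce everything mod P
    while f and f[-1] % P == 0:
        f.pop()
    return [c % P for c in f]

def poly_mod(f, g):
    # remainder of f divided by g over F_5 (g must be nonzero)
    f, g = trim(f[:]), trim(g[:])
    inv_lead = pow(g[-1], P - 2, P)   # inverse of the leading coeff (Fermat)
    while len(f) >= len(g):
        c = f[-1] * inv_lead % P
        shift = len(f) - len(g)
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gc) % P
        f = trim(f)
    return f

def poly_gcd(f, g):
    f, g = trim(f[:]), trim(g[:])
    while g:
        f, g = g, poly_mod(f, g)
    return trim([c * pow(f[-1], P - 2, P) for c in f])   # normalize to monic

# gcd(x^2 - 1, x^2 + 2x + 1) = x + 1 in F_5[x]:
print(poly_gcd([-1, 0, 1], [1, 2, 1]))   # [1, 1], i.e. 1 + x
```

The example agrees with the factorizations $x^2-1 = (x-1)(x+1)$ and $x^2+2x+1 = (x+1)^2$, illustrating both approaches mentioned in the answer.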
{ "language": "en", "url": "https://math.stackexchange.com/questions/86265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Infinite product of connected spaces may not be connected? Let $X$ be a connected topological space. Is it true that the countable product $X^\omega$ of $X$ with itself (under the product topology) need not be connected? I have heard that setting $X = \mathbb R$ gives an example of this phenomenon. If so, how can I prove that $\mathbb R^\omega$ is not connected? Do we get different results if $X^\omega$ instead has the box topology?
Maybe this should have been a comment, but since I don't have enough reputation points, here it is. On this webpage, you will find a proof that the product of connected spaces is connected (using the product topology). In case of another broken link in the future, the following summary (copied from here) could be useful: The key fact that we use in the proof is that for fixed values of all the other coordinates, the inclusion of any one factor in the product is a continuous map. Hence, every slice is a connected subset. Now any partition of the whole space into disjoint open subsets must partition each slice into disjoint open subsets; but since each slice is connected, each slice must lie in one of the parts. This allows us to show that if two points differ in only finitely many coordinates, then they must lie in the same open subset of the partition. Finally, we use the fact that any open set must contain a basis open set; the basis open set allows us to alter the remaining cofinitely many coordinates. Note that it is this part that crucially uses the definition of product topology, and it is the analogous step to this that would fail for the box topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/86395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Form of rational solutions to $a^2+b^2=1$? Is there a way to determine the form of all rational solutions to the equation $a^2+b^2=1$?
I up-voted yunone's answer, but I notice that it begins by saying "if you know some field theory", and then gets into $N_{\mathbb{Q}(i)/\mathbb{Q}}(a+bi)$, and then talks about Galois groups, and then Hilbert's Theorem 90, and tau functions (where "$\tau$ is just the complex conjugation map in this case" (emphasis mine)). Sigh. I would have no hesitation about telling a class of secondary-school pupils that all Pythagorean triples are of the form $$ (a,b,c) = (m^2-n^2,2mn,m^2+n^2) $$ and that they are primitive precisely if $m$ and $n$ are coprime and not both odd. That means rational points on the circle are $$ \left(\frac a c, \frac b c\right) = \left( \frac{m^2-n^2}{m^2+n^2}, \frac{2mn}{m^2+n^2} \right) $$ and they're in lowest terms precisely under those circumstances. But how much of what appears in my first paragraph above would I tell secondary-school pupils? Maybe it's better not to lose the audience before answering the question.
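For what it's worth, the parametrization above is easy to play with in code; here is a sketch (the function name and the search bound are mine) that enumerates primitive rational points on the unit circle from coprime $m > n$, not both odd:

```python
from fractions import Fraction
from math import gcd

def rational_points(bound):
    """Points (a/c, b/c) on x^2 + y^2 = 1 from (m^2-n^2, 2mn, m^2+n^2)."""
    pts = set()
    for m in range(2, bound + 1):
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:   # coprime, not both odd
                c = m * m + n * n
                pts.add((Fraction(m * m - n * n, c), Fraction(2 * m * n, c)))
    return pts

pts = rational_points(5)
print(sorted(pts))
```

With `bound = 5` this yields six points, including the familiar $(3/5, 4/5)$ from the $(3,4,5)$ triple, all of which lie exactly on the circle thanks to exact rational arithmetic.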
{ "language": "en", "url": "https://math.stackexchange.com/questions/86443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Reference for "It is enough to specify a sheaf on a basis"? The wikipedia article on sheaves says: It can be shown that to specify a sheaf, it is enough to specify its restriction to the open sets of a basis for the topology of the underlying space. Moreover, it can also be shown that it is enough to verify the sheaf axioms above relative to the open sets of a covering. Thus a sheaf can often be defined by giving its values on the open sets of a basis, and verifying the sheaf axioms relative to the basis. However, it does not cite a specific reference for this statement. Does there exist a rigorous proof for this statement in the literature?
It is given in Daniel Perrin's Algebraic Geometry, Chapter 3, Section 2. And by the way, it is a nice introductory text for algebraic geometry, which does not cover much scheme theory, but gives a definition of an abstract variety (using sheaves, like in Mumford's Red book). Added: I just saw that Perrin leaves most of the details to the reader. For another proof, see Remark 2.6/Lemma 2.7 in Qing Liu's Algebraic Geometry and Arithmetic curves.
{ "language": "en", "url": "https://math.stackexchange.com/questions/86509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 1 }
Proving an exponential bound for a recursively defined function I am working on a function that is defined by $$a_1=1, a_2=2, a_3=3, a_n=a_{n-2}+a_{n-3}$$ Here are the first few values: $$\{1,2,3,3,5,6,8,11,14,\ldots\}$$ I am trying to find a good approximation for $a_n$. Therefore I let Mathematica diagonalize the problem; it seems to have a closed form, but Mathematica doesn't like it, and every time I simplify it gives: a_n = Root[-1 - #1 + #1^3 &, 1]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 1] + Root[-1 - #1 + #1^3 &, 3]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 2] + Root[-1 - #1 + #1^3 &, 2]^n Root[-5 + 27 #1 - 46 #1^2 + 23 #1^3 &, 3] I used this to get a numerical approximation of the biggest root: $$\text{Root}\left[x^3-x-1,1\right]=\frac{1}{3} \sqrt[3]{\frac{27}{2}-\frac{3 \sqrt{69}}{2}}+\frac{\sqrt[3]{\frac{1}{2} \left(9+\sqrt{69}\right)}}{3^{2/3}}\approx1.325$$ Looking at the function I set $$g(n)=1.325^n$$ and plotted the first 100 values of $\ln(g),\ln(a)$ ($a_n=:a(n)$) in a graph (blue = $a$, red = $g$): It seems to fit quite nicely, but now my question: How can I show that $a \in \mathcal{O}(g)$, if possible without using the closed form but just the recursion. If there were some bound for $a_n$ that's slightly worse than my $g$ but easier to show to be correct, I would be fine with that too.
Your function $a_n$ is a classical recurrence relation. It is well-known that $$a_n = \sum_{i=1}^3 A_i \alpha_i^n,$$ where $\alpha_i$ are the roots of the equation $x^3 = x + 1$. You can find the coefficients $A_i$ by solving a system of linear equations. In your case, one of the roots is real, and the other two are complex conjugates whose norm is less than $1$, so their contribution to $a_n$ tends to $0$. So actually $$a_n = A_1 \alpha_1^n + o(1).$$ Mathematica diligently found for you the value of $A_1$, and this way you can obtain your estimate (including the leading constant).
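As a numerical sanity check of this (assuming the initial values $a_1=1, a_2=2, a_3=3$ from the question), the ratio $a_n/\alpha_1^n$ does indeed stabilize, which exhibits the constant $A_1$ directly:

```python
# real root of x^3 = x + 1, via Newton's method
alpha = 1.3
for _ in range(50):
    alpha -= (alpha**3 - alpha - 1) / (3 * alpha**2 - 1)

a = [1, 2, 3]                             # a_1, a_2, a_3 in a 0-indexed list
for k in range(3, 60):
    a.append(a[k - 2] + a[k - 3])         # a_n = a_{n-2} + a_{n-3}

# a[k] is a_{k+1}, so compare against alpha^(k+1)
ratios = [a[k] / alpha ** (k + 1) for k in range(40, 60)]
print(alpha, ratios[-1])                  # the ratio has converged to A_1
```

Since the complex roots have modulus below $1$, the ratio converges geometrically, so even forty terms pin down $A_1$ to many digits — and $a_n \le (A_1 + \varepsilon)\,\alpha_1^n$ gives the desired exponential bound.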
{ "language": "en", "url": "https://math.stackexchange.com/questions/86569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Numerically solve second-order ODE I want to solve a second-order ODE of the form $$ y^{''} = \frac{a (y^{'})^2}{b y^{'}+cy+d} $$ by a numerical method (e.g., MATLAB's ode45), given initial conditions $y(0)$ and $y'(0)$. The results are weird and the numbers go out of machine range. I guess the catch is that the denominator becomes highly unstable when it converges to zero. I tried bounding it away from zero, to no avail. Could anyone provide insights on how to proceed with the numerical procedure? Thanks in advance...
Discretizing the ODE by finite differences gives $$\frac{y_2-2y_1 + y_0}{h^2} = \frac{a\left(\frac{y_1-y_0}{h}\right)^2}{b\left(\frac{y_1-y_0}{h}\right) + cy_1 + d},$$ or $$y_2 = 2y_1 - y_0 + \frac{ah(y_1-y_0)^2}{b(y_1-y_0)+chy_1+dh}.$$ Here's C++ code I wrote which has no trouble integrating this ODE for $a=-10,b=c=d=1$, initial conditions $y(0)=0,y'(0)=10$ and time step $h=0.01$. (The state variables are named `yPrev`/`yCur` rather than `y0`/`y1`, since the latter collide with the POSIX Bessel functions declared in `<math.h>` on some compilers.) I'm sure you can adapt it to whatever language you prefer:

```cpp
#include <iostream>
using namespace std;

double yPrev, yCur;   // y_{k-1} and y_k

void step(double dt)
{
    // y2 = 2*y1 - y0 + a*h*(y1-y0)^2 / (b*(y1-y0) + c*h*y1 + d*h), with a=-10, b=c=d=1
    double yNext = 2*yCur - yPrev
                 - 10*dt*(yCur-yPrev)*(yCur-yPrev) / (yCur-yPrev + dt*yCur + dt);
    yPrev = yCur;
    yCur  = yNext;
}

int main()
{
    yPrev = 0;     // y(0) = 0
    yCur  = 0.1;   // y(h) ~ y(0) + h*y'(0) = 0.01 * 10
    for (int i = 0; i < 1000; i++) {
        step(0.01);
        cout << yCur << endl;
    }
}
```
{ "language": "en", "url": "https://math.stackexchange.com/questions/86633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Homogeneous Fredholm Equation of Second Kind I'm trying to show that the eigenvalues of the following integral equation \begin{align*} \lambda \phi(t) = \int_{-T/2}^{T/2} dx \phi(x)e^{-\Gamma|t-x|} \end{align*} are given by \begin{align*} \Gamma \lambda_k = \frac{2}{1+u_k^2} \end{align*} where $u_k$ are the solutions to the transcendental equation \begin{align*} \tan(\Gamma T u_k) = \frac{2u_k}{u_k^2-1}. \end{align*} My approach was to separate this into two integrals: \begin{align*} \int_{-T/2}^{T/2} dx \phi(x)e^{-\Gamma|t-x|} = e^{-\Gamma t}\int_{-T/2}^t dx \phi(x)e^{\Gamma x} + e^{\Gamma t} \int_t^{T/2} dx \phi(x) e^{-\Gamma x}. \end{align*} Then I differentiated the eigenvalue equation twice with this modification to find that \begin{align*} \ddot{\phi} = \left(\Gamma^2-\frac{2\Gamma}{\lambda}\right)\phi \end{align*} indicating that $\phi(t)$ is a sum of exponentials $Ae^{\kappa t} + Be^{-\kappa t}$ where $\kappa^2$ is the coefficient in the previous equation. Can anyone confirm this is the correct approach? I don't think there's any way to factor the original kernel and invert the result. And I'm having trouble determining the initial conditions of the equation to set $A$ and $B$. Assuming I can find these, my instinct would be to plug $\phi$ back into the original equation and explicitly integrate and solve for $\lambda$.
This problem was solved in Mark Kac, Random Walk in the presence of absorbing barriers, Ann. Math. Statistics 16, 62-67 (1945) https://projecteuclid.org/euclid.aoms/1177731171
{ "language": "en", "url": "https://math.stackexchange.com/questions/86656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Complexity substitution of variables in multivariate polynomials I want to substitute a variable with a number in multivariate polynomials. For example, for the polynomial $$ P = (z^2+yz^3)x^2 + zx $$ I want to substitute $z$ with $3$. I have an intuition for how to do that algorithmically: I have to regard the coefficients as elements of $F[y,z]$ and make a recursive call of the method to obtain coefficients in $F[z]$, substitute there, and return the result to the "lower" recursive calls. Is that a good idea? I'm not really interested in formulating a real algorithm but more in the complexity of the substitution operation. I think that the above sketched algorithm can be bounded by $\mathcal{O}(\deg_x(P)\deg_y(P)\deg_z(P))$. Is that correct? The bound is not really tight. Any ideas for a tighter bound? Or is there a faster algorithm which is used in practice? And what is its complexity?
There are classical results (Ostrowski) on optimality of Horner's method and related evaluation schemes. These have been improved by Pan, Strassen and others. Searching on "algebraic complexity" with said authors should quickly locate pertinent literature.
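As a sketch of the recursive substitution the question describes (the nested-list representation and function names are mine): store $P$ as a polynomial in $x$ whose coefficients are polynomials in $y$ whose coefficients are polynomials in $z$, and substitute the innermost variable with Horner's rule. The work is proportional to the number of stored coefficients, which matches the $\mathcal{O}(\deg_x \deg_y \deg_z)$ bound in the dense case:

```python
def horner(coeffs, v):
    """Evaluate a coefficient list (lowest degree first) at v by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * v + c
    return acc

def subst_inner(poly, v, depth):
    """Substitute v for the innermost of `depth` nested variables."""
    if depth == 1:
        return horner(poly, v)
    return [subst_inner(c, v, depth - 1) for c in poly]

# P = (z^2 + y*z^3) x^2 + z x, nested as x -> y -> z coefficient lists
P = [
    [],                                   # x^0 coefficient: 0
    [[0, 1]],                             # x^1 coefficient: z
    [[0, 0, 1], [0, 0, 0, 1]],            # x^2 coefficient: z^2 + y*z^3
]
Q = subst_inner(P, 3, depth=3)
print(Q)                                  # [[], [3], [9, 27]] i.e. (9 + 27y)x^2 + 3x
```

Each base-case Horner evaluation is itself the optimal dense univariate scheme, which is where the Ostrowski-style optimality results mentioned above enter.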
{ "language": "en", "url": "https://math.stackexchange.com/questions/86751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Subspaces in Linear Algebra Find $\operatorname{Proj}_W v$ for the given vector $v$ and subspace $W$. Let $V$ be the Euclidean space $\mathbb{R}^4$, and $W$ the subspace with basis $[1, 1, 0, 1], [0, 1, 1, 0], [-1, 0, 0, 1]$ (a) $v = [2,1,3,0]$; the answer should be $[7/5,11/5,9/5,-3/5]$. My attempt at the solution: we can find a basis vector perpendicular to $W$, namely $[1,-2,2,1]$; then $[2, 1, 3, 0] = a[1, 1, 0, 1] + b[0, -1, 1, 0] + c[0, 2, 0, 3] + d[1,-2,2,1]$. We solve for $a,b,c,d$ and get $a = 16/3, b=29/3, c=-2/3, d=-10/3$. Now the problem is: what do I do from here?
You can do it that way (though you must have an arithmetical error somewhere; the denominator of $3$ cannot be right), and the remaining piece is then simply to take $a[1, 1, 0, 1] + b[0, -1, 1, 0] + c[0 ,2, 0,3]$, forgetting the part perpendicular to $W$. However, it is much easier to normalize your $[1,-2,2,1]$ to $n=\frac{1}{\sqrt{10}}[1,-2,2,1]$ -- then the projection map is simply $v\mapsto v - (v\cdot n)n$. (If you write that out fully, the square root even disappears).
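A quick NumPy check of this recipe (NumPy is just a convenience here) reproduces the book's answer:

```python
import numpy as np

v = np.array([2.0, 1.0, 3.0, 0.0])
n = np.array([1.0, -2.0, 2.0, 1.0])
n = n / np.linalg.norm(n)                 # unit normal of W

proj = v - (v @ n) * n                    # projection of v onto W
print(proj)                               # ~ [1.4, 2.2, 1.8, -0.6]
```

The result is orthogonal to $n$, as a projection onto $W$ must be, and equals $[7/5, 11/5, 9/5, -3/5]$.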
{ "language": "en", "url": "https://math.stackexchange.com/questions/86864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
One-to-one mapping from $\mathbb{R}^4$ to $\mathbb{R}^3$ I'm trying to define a mapping from $\mathbb{R}^4$ into $\mathbb{R}^3$ that takes the flat torus to a torus of revolution. Where the flat torus is defined by $x(u,v) = (\cos u, \sin u, \cos v, \sin v)$. And the torus of revolution by $x(u,v) = ( (R + r \cos u)\cos v, (R + r \cos u)\sin v, r \sin u)$. I think an appropriate map would be: $f(x,y,z,w) = ((R + r x)z, (R + r x)w, r y)$ where $R$, $r$ are constants greater than $0$. But now I'm having trouble showing this is one-to-one.
Shaun's answer is insufficient since there are immersions which are not 1-1. For example, the figure 8 is an immersed circle. Also, the torus covers itself, and all covering maps are immersions. http://en.wikipedia.org/wiki/Immersion_(mathematics) Your parametrization of the torus of revolution is the same as the one in http://en.wikipedia.org/wiki/Torus You just have to notice that the minimal period in both coordinates of the $uv$ plane is the same $2\pi$ in the case of both the flat and rotated tori.
{ "language": "en", "url": "https://math.stackexchange.com/questions/86930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Calculating a Taylor Polynomial of a mystery function I need to calculate a Taylor polynomial for a function $f:\mathbb{R} \to \mathbb{R}$ where we know the following $$f''(x)+f(x)=e^{-x} \quad \forall x$$ $$f(0)=0$$ $$f'(0)=2$$ How would I even start?
We have the following $$f''(x) + f(x) = e^{-x}$$ and $f(0) = 0$, $f'(0) = 2$. And thus we need to find $f^{(n)}(0)$ to construct the Taylor series. Note that we already have two values and can find $f''(0)$ since $$f''(0) + f(0) = e^{-0}$$ $$f''(0) +0 = 1$$ $$f''(0) = 1$$ So now we differentiate the original equation and get: $$f'''(x) + f'(x) = -e^{-x}$$ But since we know $f'(0) = 2$, then $$f'''(0) + f'(0) = -e^{-0}$$ $$f'''(0) + 2 = -1$$ $$f'''(0) = -3$$ And we have our third value. Differentiating one more time gives $$f^{IV}(x) + f''(x) = e^{-x}$$ So again we have $$f^{IV}(0) + f''(0) =1$$ $$f^{IV}(0) + 1 =1$$ $$f^{IV}(0) =0$$ Using this twice more you'll get $$f^{V}(0) =2$$ $$f^{VI}(0) =1$$ $$f^{VII}(0) =-3$$ In general the equation is saying that $$f^{(2n+2)}(0) + f^{(2n)}(0) = 1$$ $$f^{(2n+1)}(0) + f^{(2n-1)}(0) = -1$$ which will allow you to get all values. A little summary of the already known values: $f(0) = 0$ $f'(0) = 2$ $f''(0) = 1$ $f'''(0) = -3$ $f^{IV}(0) = 0$ $f^{V}(0) = 2$ $f^{VI}(0) = 1$ $f^{VII}(0) = -3$ Do you see a pattern?
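One can check this pattern numerically. Differentiating the ODE $k$ times gives the compact recurrence $f^{(k+2)}(0) = (-1)^k - f^{(k)}(0)$, and (as an aside not needed for the Taylor series) the ODE can also be solved in closed form as $f(x) = \tfrac12(5\sin x - \cos x + e^{-x})$, which the partial sums indeed approach:

```python
import math

derivs = [0.0, 2.0]                       # f(0), f'(0)
for k in range(30):
    derivs.append((-1) ** k - derivs[k])  # f^(k+2)(0) = (-1)^k - f^(k)(0)

def taylor(x, terms=30):
    return sum(derivs[k] * x**k / math.factorial(k) for k in range(terms))

def exact(x):
    # closed-form solution of f'' + f = e^{-x}, f(0) = 0, f'(0) = 2
    return (5 * math.sin(x) - math.cos(x) + math.exp(-x)) / 2

print(derivs[:8])                         # [0.0, 2.0, 1.0, -3.0, 0.0, 2.0, 1.0, -3.0]
print(taylor(1.0), exact(1.0))
```

The printed derivatives repeat with period $4$ as $0, 2, 1, -3$, exactly the pattern the table above suggests.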
{ "language": "en", "url": "https://math.stackexchange.com/questions/86981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Example of a real-life graph with a "hole"? Anyone ever come across a real non-textbook example of a graph with a hole in it? In Precalc, you get into graphing rational expressions, some of which reduce to a non-rational. The cancelled factors in the denominator still identify discontinuity, yet can't result in vertical asymptotes, but holes. Thanks!
A car goes 60 miles in 2 hours. So 60 miles/2 hours = 30 miles per hour. But how fast is the car going at a particular instant? It goes 0 miles in 0 hours. There you have a hole! It is for the purpose of removing that hole that limits are introduced in calculus. Then you can talk about instantaneous rates of change (such as the speed of a car at an instant), which is the topic of differential calculus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/87054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Is it possible that in a metric space $(X, d)$ with more than one point, the only open sets are $X$ and $\emptyset$? Is it possible that in a metric space $(X, d)$ with more than one point, the only open sets are $X$ and $\emptyset$? I don't think this is possible in $\mathbb{R}$, but are there any possible metric spaces where that would be true?
One of the axioms is that for $x, y \in X$ we have $d(x, y) = 0$ if and only if $x = y$. So if you have two distinct points, you should be able to find an open ball around one of them that does not contain the other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/87135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Maximizing symmetric matrices v.s. non-symmetric matrices Quick clarification on the following will be appreciated. I know that for a real symmetric matrix $M$, the maximum of $x^TMx$ over all unit vectors $x$ gives the largest eigenvalue of $M$. Why is the "symmetry" condition necessary? What if my matrix is not symmetric? Isn't the maximum of $x^TMx$ still the largest eigenvalue of $M$? Thanks.
You can decompose any asymmetric matrix $M$ into its symmetric and antisymmetric parts, $M=M_S+M_A$, where $$\begin{align} M_S&=\frac12(M+M^T),\\ M_A&=\frac12(M-M^T). \end{align}$$ Observe that $x^TM_Ax=0$ because $M_A=-M_A^T$. Then $$x^TMx=x^T(M_S+M_A)x=x^TM_Sx+x^TM_Ax=x^TM_Sx.$$ Therefore, when dealing with something of the form $x^TMx$, we may as well assume $M$ to be symmetric; if it wasn't, we could replace it with its symmetric part $M_S$ and nothing would change.
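A quick numerical illustration (the random matrix and seed are arbitrary): the quadratic form of an asymmetric $M$ agrees with that of its symmetric part, and the maximum over unit vectors is attained at the top eigenvector of $M_S$ — whose eigenvalue is in general not an eigenvalue of $M$ itself:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))           # a generic asymmetric matrix
MS = (M + M.T) / 2                        # symmetric part
MA = (M - M.T) / 2                        # antisymmetric part

x = rng.standard_normal(4)
print(x @ M @ x, x @ MS @ x, x @ MA @ x)  # first two agree; third is ~0

w, V = np.linalg.eigh(MS)                 # eigh: for symmetric matrices
u = V[:, -1]                              # unit eigenvector for the largest eigenvalue
print(u @ M @ u, w[-1])                   # the maximum of x^T M x over unit x
```

Sampling random unit vectors never beats $\lambda_{\max}(M_S)$, in line with the argument above.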
{ "language": "en", "url": "https://math.stackexchange.com/questions/87199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Common internal tangent of two circles PA is the radius of a circle with center P, and QB is the radius of a circle with center Q, so that AB is a common internal tangent of the two circles. Let M be the midpoint of AB and N be the point on line PQ such that line MN is perpendicular to PQ. Z is the point where AB and PQ intersect. If PA=5, QB=10, and PQ=17, compute PN. I tried to work the problem above and found the ratio between the triangles ZMN:PAN:BQZ to be 1:2:4. After finding that, I discovered that the distance between the two circles is 2, so after some work I found MN to be 2.5 and MZ to be 17/6, but when I used the Pythagorean theorem to find ZN I got a weird answer (8/6). Ultimately my answer for PN was incorrect and I don't know how to solve this problem. Please help me.
Since $BQ=10$, $AP=5$ and triangles $BQZ$ and $APZ$ are similar, we get $QZ=2PZ$. Because $PQ=17$, we get $PZ=17/3$ and $QZ=34/3$. Using the Pythagorean theorem, we get $BZ=16/3$ and $AZ=8/3$, and thus $AB=8$. Since $MZ=AB/6$, we get $MZ=8/6$ (and not 17/6 as you computed). Could you do the rest of the computation?
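For anyone who wants to check the arithmetic, here is a coordinate verification in exact rational arithmetic. The coordinate placement — $P$ at the origin, $Q = (17,0)$, and the tangent direction $(8/17, 15/17)$ obtained from $\sin\theta = 15/17$ — is my own construction, not part of the answer above:

```python
from fractions import Fraction as F

Pc, Qc = (F(0), F(0)), (F(17), F(0))      # circle centers
Z = (F(17, 3), F(0))                      # PZ = 17/3 as derived above
c, s = F(8, 17), F(15, 17)                # direction of line AB

def along(t):                             # point Z + t*(c, s) on line AB
    return (Z[0] + t * c, Z[1] + t * s)

A = along(F(-8, 3))                       # tangent point on the circle around P
B = along(F(16, 3))                       # tangent point on the circle around Q

def dist2(u, v):
    return (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2

M = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
PN = M[0]                                 # N is the foot of M on line PQ
print(PN)                                 # 107/17
```

The exact distances confirm $PA = 5$, $QB = 10$, $AB = 8$ and $MZ = 4/3 = 8/6$, consistent with every step of the answer.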
{ "language": "en", "url": "https://math.stackexchange.com/questions/87251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving Integral Inequality I am working on proving the below inequality, but I am stuck. Let $g$ be a differentiable function such that $g(0)=0$ and $0<g'(x)\leq 1$ for all $x$. For all $x\geq 0$, prove that $$\int_{0}^{x}(g(t))^{3}dt\leq \left (\int_{0}^{x}g(t)dt \right )^{2}$$
Since $0<g'(x)$ for all $x$, we have $g(x)\geq g(0)=0$. Now let $F(x)=\left (\int_{0}^{x}g(t)dt \right )^{2}-\int_{0}^{x}(g(t))^{3}dt$. Then $$F'(x)=2g(x)\left (\int_{0}^{x}g(t)dt \right )-g(x)^3=g(x)G(x),$$ where $$G(x)=2\int_{0}^{x}g(t)dt-g(x)^2.$$ We claim that $G(x)\geq 0$. Assuming the claim, we have $F'(x)\geq 0$ from the above equality, which implies that $F(x)\geq F(0)=0$, which proves the required statement. To prove the claim, we have $$G'(x)=2g(x)-2g(x)g'(x),$$ which is nonnegative since $g'(x)\leq 1$ and $g(x)\geq 0$ for all $x$. Therefore, $G(x)\geq G(0)=0$ as required.
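A quick numerical sanity check of the inequality (the test function $g(t) = 1 - e^{-t}$, which satisfies $g(0)=0$ and $0 < g'(t) \le 1$, and the crude trapezoidal integrator are my choices):

```python
import math

def g(t):
    return 1.0 - math.exp(-t)             # g(0) = 0, g'(t) = e^{-t} in (0, 1]

def integral(f, a, b, n=20000):
    """Plain trapezoidal rule."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

results = []
for x in (0.5, 1.0, 3.0, 10.0):
    lhs = integral(lambda t: g(t) ** 3, 0.0, x)
    rhs = integral(g, 0.0, x) ** 2
    results.append((x, lhs, rhs))
    print(x, lhs, "<=", rhs)
```

For large $x$ the gap widens quickly — the right side grows like $(x-1)^2$ while the left grows only linearly — which matches the monotonicity argument for $F$.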
{ "language": "en", "url": "https://math.stackexchange.com/questions/87305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
More Theoretical and Less Computational Linear Algebra Textbook I found what seems to be a good linear algebra book. However, I want a more theoretical as opposed to computational linear algebra book. The book is Linear Algebra with Applications 7th edition by Gareth Williams. How high quality is this? Will it provide me with a good background in linear algebra?
I may be a little late responding to this, but I really enjoyed teaching from the book Visual Linear Algebra. It included labs that used Maple that I had students complete in pairs. We then were able to discuss their findings in the context of the theorems and concepts presented in the rest of the text. I think for many of them it helped make abstract concepts like eigenvectors more concrete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/87362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 9, "answer_id": 5 }
Continued fraction: Show $\sqrt{n^2+2n}=[n; \overline{1,2n}]$ I have to show the following identity ($n \in \mathbb{N}$): $$\sqrt{n^2+2n}=[n; \overline{1,2n}]$$ I had a look at the procedure for $\sqrt{n}$ on Wiki, but I don't know how to transform it to $\sqrt{n^2+2n}$. Any help is appreciated. EDIT: I tried the following: $\sqrt{n^2+2n}>n$, so we get $\sqrt{n^2+2n}=n+\frac{n}{x}$, and $\sqrt{n^2+2n}-n=\frac{n}{x}$ and further $x=\frac{n}{\sqrt{n^2+2n}-n}$. So we get $x=\frac{n}{\sqrt{n^2+2n}-n}=\frac{n(\sqrt{n^2+2n}+n)}{(\sqrt{n^2+2n}-n)(\sqrt{n^2+2n}+n)}$ I don't know if it's right and how to go on.
HINT $\rm\ x = [\overline{1,2n}]\ \Rightarrow\ x\ = \cfrac{1}{1+\cfrac{1}{2\:n+x}}\ \iff\ x^2 + 2\:n\ x - 2\:n = 0\ \iff\ x = -n \pm \sqrt{n^2 + 2\:n} $
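A numerical check of the identity is straightforward — evaluate the periodic continued fraction $[n; 1, 2n, 1, 2n, \ldots]$ from the inside out and compare with $\sqrt{n^2+2n}$ (the evaluation routine is mine):

```python
import math

def cf_value(n, depth=60):
    """Evaluate [n; 1, 2n, 1, 2n, ...] truncated after `depth` partial quotients."""
    x = 1.0                               # seed for the tail; its effect washes out
    for i in range(depth, 0, -1):
        a = 1 if i % 2 == 1 else 2 * n    # periodic part: 1, 2n, 1, 2n, ...
        x = a + 1.0 / x
    return n + 1.0 / x

for n in range(1, 7):
    print(n, cf_value(n), math.sqrt(n * n + 2 * n))
```

For $n = 1$ this recovers the classical expansion $\sqrt{3} = [1; \overline{1,2}]$, and the agreement for each $n$ is to full float precision, matching the quadratic $x^2 + 2nx - 2n = 0$ in the hint.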
{ "language": "en", "url": "https://math.stackexchange.com/questions/87526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }