What's bad about left $\mathbb{H}$-modules? Can you give me non-trivial examples of propositions that can be formulated for every left $k$-module, hold whenever $k$ is a field, but do not hold when $k = \mathbb{H}$ or, more generally, need not hold when $k$ is a division ring (thanks, Bruno Stonek!) which is not a field? I'm asking because in the theory of vector bundles $\mathbb{H}$-bundles are usually considered alongside those over $\mathbb{R}$ and $\mathbb{C}$, and I'd like to know what to watch out for in this particular case.
Linear algebra works pretty much the same over $\mathbb H$ as over any field. Multilinear algebra, on the other hand, breaks down in places. You can see this already in the fact that when $V$ and $W$ are left $\mathbb H$-modules, the set of $\mathbb H$-linear maps $\hom_{\mathbb H}(V,W)$ is no longer, in any natural way, an $\mathbb H$-module. As Bruno notes, tensor products also break (unless you are willing to consider also right modules, or bimodules—and then it is you who broke!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/61498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
How to represent XOR of two decimal Numbers with Arithmetic Operators Is there any way to represent XOR of two decimal Numbers using Arithmetic Operators (+,-,*,/,%).
I think what Sanisetty Pavan means is that he has two non-negative integers $a$ and $b$ which we assume to be in the range $0 \leq a, b < 2^{n+1}$ and thus representable as $(n+1)$-bit vectors $(a_n, \cdots, a_0)$ and $(b_n, \cdots, b_0)$ where $$ a = \sum_{i=0}^n a_i 2^i, ~~ b = \sum_{i=0}^n b_i 2^i. $$ He wants an arithmetic expression for the integer $c$ where $$c = \sum_{i=0}^n (a_i \oplus b_i) 2^i = \sum_{i=0}^n (a_i + b_i -2 a_ib_i) 2^i = a + b - 2 \sum_{i=0}^n a_ib_i 2^i$$ in terms of $a$ and $b$ and the arithmetic operators $+, -, *, /, \%$. Presumably integer constants are allowed in the expression. The expression for $c$ above shows a little progress but I don't think it is much easier to express $\sum_{i=0}^n a_ib_i 2^i$ than it is to express $\sum_{i=0}^n (a_i \oplus b_i) 2^i$ in terms of $a$ and $b$, but perhaps Listing's gigantic formula might be a tad easier to write out, though Henning Makholm's objections will still apply. Added note: For fixed $n$, we can express $c$ as $c = a + b - 2f(a,b)$ where $f(a, b)$ is specified recursively as $$f(a, b) = (a\%2)*(b\%2) + 2f(a/2, b/2)$$ with $a\%2$ meaning the remainder when integer $a$ is divided by $2$ (that is, $a \bmod 2$) and $a/2$ meaning "integer division" which gives the integer quotient (that is, $a/2 = (a - (a\%2))/2$). Working out the recursion gives a formula with $n+1$ terms for $f(a, b)$.
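The recursion at the end translates directly into code; here is a quick Python sketch (the function names are my own, not part of the answer), using only the allowed operators plus recursion:

```python
def xor_arith(a, b):
    """XOR of two non-negative integers using only +, -, *, // and %."""
    def f(a, b):
        # f(a, b) = sum of a_i * b_i * 2^i, via the recursion
        # f(a, b) = (a % 2) * (b % 2) + 2 * f(a // 2, b // 2)
        if a == 0 or b == 0:
            return 0
        return (a % 2) * (b % 2) + 2 * f(a // 2, b // 2)

    return a + b - 2 * f(a, b)

print(xor_arith(13, 7))  # 10, i.e. 13 ^ 7
```

The recursion bottoms out after about $\log_2 \max(a,b)$ steps, matching the "formula with $n+1$ terms" remark.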
{ "language": "en", "url": "https://math.stackexchange.com/questions/61556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 1 }
Complex functions, integration Let $L_iL_j-L_jL_i = i\hbar\varepsilon_{ijk}L_k$ where $i,j,k\in\{1,2,3\}$ Let $u$ be any eigenstate of $L_3$. How might one show that $\langle L_1^2 \rangle = \int u^*L_1^2u = \int u^*L_2^2u = \langle L_2^2\rangle$ ? I can show that $\langle u|L_3L_1L_2-L_1L_2L_3|u\rangle=\int u^*(L_3L_1L_2-L_1L_2L_3)u =0$. And I know that $L_3 $ can be written as ${1\over C}(L_1L_2-L_2L_1)$. Hence I have $\int u^*L_1L_2^2L_1u=\int u^*L_2L_1^2L_2u$. But I don't seem to be getting the required form... Help will be appreciated. Added: The $L_i$'s are operators that don't necessarily commute.
Okay then. In what follows, we fix $u$, an eigenfunction of $L_3$, and for convenience write $\langle T\rangle:= \langle u|T|u\rangle$ for any operator $T$. Using self-adjointness of $L_3$, we have that $L_3 u = \lambda u$ where $\lambda \in \mathbb{R}$. And furthermore $$ \langle L_3T\rangle = \langle L_3^* T\rangle = \lambda \cdot \langle T\rangle = \langle TL_3\rangle $$ which we can also write as $$ \langle [T,L_3] \rangle = 0 $$ for any operator $T$. This implies $$ \frac{1}{\hbar}\langle L_1^2\rangle = \langle -iL_1L_2L_3 + iL_1L_3L_2 \rangle = \langle -iL_3L_1L_2 +i L_1L_3L_2\rangle = \frac{1}{\hbar}\langle L_2^2\rangle $$ The first and third equalities are via the defining relationship $[L_i,L_j] = i \hbar \epsilon_{ijk} L_k$. The middle equality is the general relationship derived above, applied to the first summand. (And is precisely the identity that you said you could show in the question.) Remark: it is important to note that the expression $\langle u| [T,A] |u\rangle = 0$ holds whenever $A$ is self-adjoint and $u$ is an eigenvector for $A$. This does not imply that $[T,A] = 0$. This is already clear in a finite-dimensional vector space where we can represent operators by matrices: consider $A = \begin{pmatrix} 1 & 0 \\ 0 & 2\end{pmatrix}$ and $T = \begin{pmatrix} 1 & 1 \\ 0 & 0\end{pmatrix}$. The commutator $[T,A] = \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}$, which is zero on the diagonal (as required), but is not the zero operator.
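The remark at the end can be confirmed numerically; a small sketch with NumPy using the same $2\times 2$ matrices (not part of the original answer):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0]])  # self-adjoint, eigenvectors e1, e2
T = np.array([[1.0, 1.0], [0.0, 0.0]])
C = T @ A - A @ T                        # commutator [T, A]

u = np.array([1.0, 0.0])                 # eigenvector of A (eigenvalue 1)
expectation = u.conj() @ C @ u           # <u|[T,A]|u>

print(expectation)                       # vanishes on every eigenvector of A
print(np.allclose(C, 0))                 # the commutator itself is NOT zero
```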
{ "language": "en", "url": "https://math.stackexchange.com/questions/61608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What's the limit of the sequence $\lim\limits_{n \to\infty} \frac{n!}{n^n}$? $$\lim_{n \to\infty} \frac{n!}{n^n}$$ I have a question: is it valid to use Stirling's Formula to prove convergence of the sequence?
We will first show that the sequence $x_n = \frac{n!}{n^n}$ converges. To do this, we will show that the sequence is both monotonic and bounded. Lemma 1: $x_n$ is monotonically decreasing. Proof. We can see this with some simple algebra: $$x_{n+1} = \frac{(n+1)!}{(n+1)^{n+1}} = \frac{n+1}{n+1}\frac{n!}{(n+1)^n} \frac{n^n}{n^n} = \frac{n!}{n^n} \frac{n^n}{(n+1)^n} = x_n \big(\frac{n}{n+1}\big)^n.$$ Since $\big(\frac{n}{n+1}\big)^n < 1$, then $x_{n+1} < x_n$. Lemma 2: $x_n$ is bounded. Proof. It is straightforward to see that $n! \leq n^n$ and $n! \geq 0$. We obtain the bounds $0 \leq x_n \leq 1$, demonstrating that $x_n$ is bounded. Together, these two lemmas along with the monotone convergence theorem prove that the sequence converges. Theorem: $x_n \to 0$ as $n \to \infty$. Proof. Since $x_n$ converges, let $s = \lim_{n \to \infty} x_n$, where $s \in \mathbb{R}$. Recall the relation in Lemma 1: $$x_{n+1} = x_n \big(\frac{n}{n+1}\big)^n = \frac{x_n}{(1+ \frac{1}{n})^n}.$$ Since $x_n \to s$, so does $x_{n+1}$. Furthermore, a standard result is the limit $(1+ \frac{1}{n})^n \to e$. With these results, we have $\frac{x_n}{(1+ \frac{1}{n})^n} \to \frac{s}{e}$ and consequently $$s = \frac{s}{e} \implies s(1 - e^{-1}) = 0$$ Since $1 \neq e^{-1}$, this statement is satisfied if and only if $s = 0$ and that concludes the proof.
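A quick numerical illustration of both lemmas (not needed for the proof; `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

def x(n):
    """x_n = n!/n^n, computed exactly as the product of k/n for k = 1..n."""
    xn = Fraction(1)
    for k in range(1, n + 1):
        xn *= Fraction(k, n)
    return xn

seq = [x(n) for n in range(1, 30)]
assert all(seq[i + 1] < seq[i] for i in range(len(seq) - 1))  # Lemma 1
assert all(0 <= t <= 1 for t in seq)                          # Lemma 2
print(float(seq[-1]))  # already tiny, consistent with the limit 0
```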
{ "language": "en", "url": "https://math.stackexchange.com/questions/61713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 1 }
Is there a way to solve for an unknown in a factorial? I don't want to do this through trial and error, and the best way I have found so far was to start dividing from 1. $n! = \text {a really big number}$ Ex. $n! = 9999999$ Is there a way to approximate n or solve for n through a formula of some sort? Update (Here is my attempt): Stirling's Approximation: $$n! \approx \sqrt{2 \pi n} \left( \dfrac{n}{e} \right ) ^ n$$ So taking the log: $(2\cdot \pi\cdot n)^{1/2} \cdot n^n \cdot e^{-n}$ $(2\cdot \pi)^{1/2} \cdot n^{1/2} \cdot n^n \cdot e^{-n}$ $.5\log (2\pi) + .5\log n + n\log n - n \log e$ $.5\log (2\pi) + \log n(.5+n) - n$ Now to solve for n: $.5\log (2\pi) + \log n(.5+n) - n = r$ $\log n(.5+n) - n = r - .5 \log (2\pi)$ Now I am a little caught up here.
I believe you can solve Stirling's formula for $n$ by way of the Lambert $W$ function, but that just leads to another expression, which cannot be directly evaluated with elementary operations. (Although, I wish scientific calculators included the Lambert $W$, at least the principal branch). I've derived a formula which is moderately accurate. If you are rounding to integer values of $n$, it will provide exact results through $170!$, and perhaps further. The spreadsheet app on my iPad will not compute factorials beyond $170!$. If you are seeking rational solutions for $n$, in other words, trying to calculate values of a sort of inverse Gamma function, it will provide reasonably close approximations for $n$ in $[2, 170]$, and as I remarked above, possibly greater. My Clumsy Little Equation: I also have derived a variant which is much more accurate for $n$ in $[2, 40\text{ish}]$, however it starts to diverge beyond that. I initially derived these approximations for my daughter, who, while working on Taylor/Maclaurin series, wanted a quick way to check for potential factorial simplifications in the denominator of terms. I couldn't find any solutions, except the obvious path through Stirling's formula and the Lambert $W$ function. If more competent mathematicians can find a tweak which can improve accuracy, please share it. I apologize in advance, as I am new here (as a contributor, that is), and I am not yet allowed to directly embed an image. Hopefully, the linked one works.
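Since the linked image with the answerer's formula is not reproduced here, a programmatic alternative worth noting: for integer $n$, the factorial can be inverted exactly by direct search, which is instant even far beyond $170!$ (a sketch of my own, not the answerer's formula):

```python
import math

def inverse_factorial(value):
    """Smallest n with n! >= value; equals n exactly when value == n!."""
    n, fact = 1, 1
    while fact < value:
        n += 1
        fact *= n
    return n

print(inverse_factorial(math.factorial(170)))  # 170
print(inverse_factorial(9999999))              # 11, since 10! < 9999999 < 11!
```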
{ "language": "en", "url": "https://math.stackexchange.com/questions/61755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 3, "answer_id": 2 }
Computer algebra system to simplify huge rational functions (of order 100 Mbytes) I have a huge rational function of three variables (which is of order ~100Mbytes if dumped to a text file) which I believe to be identically zero. Unfortunately, neither Mathematica nor Maple succeeded in simplifying the expression to zero. I substituted a random set of three integers to the rational function and indeed it evaluated to zero; but just for curiosity, I would like to use a computer algebra system to simplify it. Which computer algebra system should I use? I've heard of Magma, Macaulay2, singular, GAP, sage to list a few. Which is best suited to simplify a huge rational expression? In case you want to try simplifying the expressions yourself, I dumped available in two notations, Mathematica notation and Maple notation. Unzip the file and do <<"big.mathematica" or read("big.maple") from the interactive shell. This loads expressions called gauge and cft, both rational functions of a1, a2 and b. Each of which is non-zero, but I believe gauge=cft. So you should be able to simplify gauge-cft to zero. The relation comes from a string duality, see e.g. this paper by M. Taki.
Mathematica can actually prove that gauge-cft is exactly zero. To carry out the proof, observe that the expression for gauge is much smaller than the cft. Hence we first canonicalize gauge using Together, and then multiply cft by its denominator:
{ "language": "en", "url": "https://math.stackexchange.com/questions/61810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
2 solutions for, solve $\cos x = -1/2$? Answer sheet displays only one, does this mean there is only one? $\cos x = -1/2$ can occur in quadrants 2 or 3, that gives it 2 answers, however the answer sheet only shows one. Does this mean I'm doing something completely wrong, or are they just not showing the other one? Thanks :D
It depends upon the range answers are allowed in. If you allow $[0,2\pi)$, there are certainly two answers as you say: $\frac{2\pi}{3}$ and $\frac{4\pi}{3}$. The range of arccos is often restricted to $[0,\pi]$ so there is a unique value. If $x$ can be any real, then you have an infinite number of solutions: $\frac{2\pi}{3}+2k\pi$ or $\frac{-2\pi}{3}+2k\pi$ for any integer $k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/61899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expressing sums of products in terms of sums of powers I'm working on building some software that does machine learning. One of the problems I've come up against is that I have an array of numbers: $[{a, b, c, d}]$ And I want to compute the following efficiently: $ab + ac + ad + bc + bd + cd$ Or: $abc + abd + acd + bcd$ Where the number of variables in each group is specified arbitrarily. I have a method where I use: $f(x) = a^x + b^x + c^x + d^x$ And then compute: $f(1) = a + b + c + d$ $(f(1)^2-f(2))/2 = ab + ac + ad + bc + bd + cd$ $(f(1)^3 - 3f(2)f(1) + 2f(3))/6 = abc + abd + acd + bcd$ $(f(1)^4 - 6f(2)f(1)^2 + 3f(2)^2 + 8f(3)f(1) - 6f(4))/24 = abcd$ But I worked these out manually and I'm struggling to generalize it. The array will typically be much longer and I'll want to compute much higher orders.
See Newton's identities en.wikipedia.org/wiki/Newton's_identities
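Newton's identities give exactly the generalization worked out by hand in the question: with $p_k = f(k)$ the power sums and $e_k$ the sums of $k$-fold products, $k\,e_k = \sum_{i=1}^{k} (-1)^{i-1} e_{k-i}\, p_i$. A sketch in Python (my own code, with `Fraction` so the divisions stay exact):

```python
from fractions import Fraction

def elementary_from_powers(nums, kmax):
    """e_1..e_kmax (sums of k-fold products) from power sums p_k = sum(x**k)."""
    p = [sum(Fraction(x) ** i for x in nums) for i in range(1, kmax + 1)]
    e = [Fraction(1)]  # e_0 = 1
    for k in range(1, kmax + 1):
        # Newton's identity: k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * p_i
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]

# for [a, b, c, d] = [1, 2, 3, 4]: e_2 = ab + ac + ... + cd = 35, e_4 = abcd = 24
print([int(v) for v in elementary_from_powers([1, 2, 3, 4], 4)])  # [10, 35, 50, 24]
```

This costs $O(k^2)$ multiplications of power sums, versus the $\binom{n}{k}$ terms of the direct expansion.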
{ "language": "en", "url": "https://math.stackexchange.com/questions/61966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Trouble with partial derivatives I have no clue how to get started. I'm unable even to understand what the hint is saying. I need your help, please. Given $$u = f(ax^2 + 2hxy + by^2), \qquad v = \phi (ax^2 + 2hxy + by^2),$$ then prove that $$\frac{\partial }{\partial y} \left ( u\frac{\partial u }{\partial x} \right ) = \frac{\partial }{\partial x}\left ( u \frac{\partial v}{\partial y} \right ).$$ Hint. Given $$u = f(z),\ v = \phi(z), \text{ where } z = ax^2 + 2hxy + by^2$$
I recommend using the chain rule, i.e. given functions $f:U\subset\mathbb{R}^n\rightarrow V\subset\mathbb{R}^m$ differentiable at $x$ and $g:V_0\subset V\subset\mathbb{R}^m\rightarrow W\subset\mathbb{R}^p$ differentiable at $f(x)$ we have $$D_x(g\circ f)=D_{f(x)}(g)D_x(f)$$ where $D_x(f)$ represents the Jacobian matrix of $f$ at $x$. In your particular case when $n=2$ and $m=p=1$, we have for each coordinate that $$\left.\frac{\partial g\circ f}{\partial x_i}\right|_{(x_1,x_2)}=\left.\frac{d\,g}{dx}\right|_{f(x_1,x_2)}\left.\frac{\partial f}{\partial x_i}\right|_{(x_1,x_2)}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/62000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to show $a,b$ coprime to $n\Rightarrow ab$ coprime to $n$? Let $a,b,n$ be integers such that $\gcd(a,n)=\gcd(b,n)=1$. How to show that $\gcd(ab,n)=1$? In other words, how to show that if two integers $a$ and $b$ each have no non-trivial common divisor with an integer $n$, then their product does not have a non-trivial common divisor with $n$ either. This is a problem that is an exercise in my course. Intuitively it seems plausible and it is easy to check in specific cases but how to give an actual proof is not obvious.
Let $P(x)$ be the set of primes that divide $x$. Then $\gcd(a,n)=1$ iff $P(a)$ and $P(n)$ are disjoint. Since $P(ab)=P(a)\cup P(b)$ (*), $\gcd(a,n)=\gcd(b,n)=1$ implies that $P(ab)$ and $P(n)$ are disjoint, which means that $\gcd(ab,n)=1$. (*) Here we use that if a prime divides $ab$ then it divides $a$ or $b$.
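Not a proof, but the claim is easy to confirm by brute force over a small range (a sketch of mine, just as a sanity check):

```python
from math import gcd

ok = all(gcd(a * b, n) == 1
         for n in range(1, 40)
         for a in range(1, 40) if gcd(a, n) == 1
         for b in range(1, 40) if gcd(b, n) == 1)
print(ok)  # True
```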
{ "language": "en", "url": "https://math.stackexchange.com/questions/62072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Proof that $6^n$ always has a last digit of $6$ Without being proficient in math at all, I have figured out, by looking at series of numbers, that $6$ to the $n$-th power always seems to end with the digit $6$. Anyone here willing to link me to a proof? I've been searching google, without luck, probably because I used the wrong keywords.
If you multiply any two integers whose last digit is 6, you get an integer whose last digit is 6: $$ \begin{array} {} & {} & {} & \bullet & \bullet & \bullet & \bullet & \bullet & 6 \\ \times & {} & {} &\bullet & \bullet & \bullet & \bullet & \bullet & 6 \\ \hline {} & \bullet & \bullet & \bullet & \bullet & \bullet & \bullet & \bullet & 6 \end{array} $$ (Get 36, and carry the "3", etc.) To put it another way, if the last digit is 6, then the number is $(10\times\text{something}) + 6$. So $$ \begin{align} & \Big((10\times\text{something}) + 6\Big) \times \Big((10\times\text{something}) + 6\Big) \\ = {} & \Big((10\times\text{something})\times (10\times\text{something})\Big) \\ & {} + \Big((10\times\text{something})\times 6\Big) + \Big((10\times\text{something})\times 6\Big) + 36 \\ = {} & \Big(10\times \text{something}\Big) +36 \\ = {} & \Big(10\times \text{something} \Big) + 6. \end{align} $$
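The argument is easy to confirm numerically (this check is mine, not part of the answer):

```python
# last digit of 6^n: the answer's "10*something + 6" argument, working mod 10
# 6*6 = 36 ≡ 6 (mod 10), so the last digit is stable under multiplication by 6
assert all(str(6 ** n).endswith("6") for n in range(1, 500))
assert all(pow(6, n, 10) == 6 for n in range(1, 500))
print("last digit of 6^n is 6 for n = 1..499")
```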
{ "language": "en", "url": "https://math.stackexchange.com/questions/62126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 3 }
Proving $1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$ using induction How can I prove that $$1^3+ 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$$ for all $n \in \mathbb{N}$? I am looking for a proof using mathematical induction. Thanks
Let the induction hypothesis be $$ (1^3+2^3+3^3+\cdots+n^3)=(1+2+3+\cdots+n)^2$$ Now consider: $$ (1+2+3+\cdots+n + (n+1))^2 $$ $$\begin{align} & = \color{red}{(1+2+3+\cdots+n)^2} + (n+1)^2 + 2(n+1)\color{blue}{(1+2+3+\cdots+n)}\\ & = \color{red}{(1^3+2^3+3^3+\cdots+n^3)} + (n+1)^2 + 2(n+1)\color{blue}{(n(n+1)/2)}\\ & = (1^3+2^3+3^3+\cdots+n^3) + \color{teal}{(n+1)^2 + n(n+1)^2}\\ & = (1^3+2^3+3^3+\cdots+n^3) + \color{teal}{(n+1)^3} \end {align}$$ QED
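The identity itself (though of course not the induction argument) can also be checked mechanically; a one-liner of my own:

```python
# verify 1^3 + 2^3 + ... + n^3 == (n(n+1)/2)^2 for the first couple hundred n
ok = all(sum(k ** 3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
         for n in range(1, 200))
print(ok)  # True
```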
{ "language": "en", "url": "https://math.stackexchange.com/questions/62171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67", "answer_count": 16, "answer_id": 2 }
Limits of Expectations I've been fighting with this homework problem for a while now, and I can't quite see the light. The problem is as follows, Assume random variable $X \ge 0$, but do NOT assume that $\mathbb{E}\left[\frac1{X}\right] < \infty$. Show that $$\lim_{y \to 0^+}\left(y \, \mathbb{E}\left[\frac{1}{X} ; X > y\right]\right) = 0$$ After some thinking, I've found that I can bound $$ \mathbb{E}[1/X;X>y] = \int_y^{\infty}\frac1{x}\mathrm dP(x) \le \int_y^{\infty}\frac1{y}\mathrm dP(x) $$ since $\frac1{y} = \sup\limits_{x \in (y, \infty)} \frac1{x}$ resulting in $$ \lim_{y \to 0^+} y \mathbb{E}[1/X; X>y] \le \lim_{y \to 0^+} y \int_y^{\infty}\frac1{y}\mathrm dP(x) = P[X>0]\le1 $$ Of course, $1 \not= 0$. I'm not really sure how to proceed... EDIT: $\mathbb{E}[1/X;X>y]$ is defined to be $\int_y^{\infty} \frac{1}{x}\mathrm dP(x)$. This is the notation used in Durret's Probability: Theory and Examples. It is NOT a conditional expectation, but rather a specifier of what set is being integrated over. EDIT: Changed $\lim_{y \rightarrow 0^-}$ to $\lim_{y \rightarrow 0^+}$; this was a typo.
Hint: For any $k > 1$, $\int_y^\infty \frac{y}{x} \ dP(x) \le \int_y^{ky}\ dP(x) + \int_{ky}^\infty \frac{1}{k} \ dP(x) \le \ldots$
{ "language": "en", "url": "https://math.stackexchange.com/questions/62187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Good book for self study of a First Course in Real Analysis Does anyone have a recommendation for a book to use for the self study of real analysis? Several years ago when I completed about half a semester of Real Analysis I, the instructor used "Introduction to Analysis" by Gaughan. While it's a good book, I'm not sure it's suited for self study by itself. I know it's a rigorous subject, but I'd like to try and find something that "dumbs down" the material a bit, then between the two books I might be able to make some headway.
Bartle's book is more systematic, with very clear arguments in all the theorems and nice examples; a good one to keep at hand while studying analysis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/62212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "172", "answer_count": 29, "answer_id": 2 }
Solve to find the unknown I have been doing questions from the past year and I come across this question which stumped me: The constant term in the expansion of $\left(\frac1{x^2}+ax\right)^6$ is $1215$; find the value of $a$. (The given answer is: $\pm 3$ ) Should be an easy one, but I don't know how to begin. Some help please?
Sofia, sorry for the delayed response; I was busy with other posts. You have two choices: one is to use Pascal's triangle, and the other is to expand using the binomial theorem. You can compare the expression $$\left ( \frac{1}{x^2} + ax \right )^6$$ with $$(a+x)^6$$ where $a = 1/x^2$, $x = ax$, and $n = 6$. Here's the Pascal's-triangle way of expanding the given expression. All you need to do is to substitute the values of $a$ and $x$ respectively. $$(a + x)^0 = 1$$ $$(a + x)^1 = a + x$$ $$(a + x)^2 = (a + x)(a + x) = a^2 + 2ax + x^2$$ $$(a + x)^3 = (a + x)^2(a + x) = a^3 + 3a^2x + 3ax^2 + x^3$$ $$(a + x)^4 = (a + x)^3(a + x) = a^4 + 4a^3x + 6a^2x^2 + 4ax^3 + x^4$$ $$(a + x)^5 = (a + x)^4(a + x) = a^5 + 5a^4x + 10a^3x^2 + 10a^2x^3 + 5ax^4 + x^5$$ $$(a + x)^6 = (a + x)^5(a + x) = a^6 + 6a^5x + 15a^4x^2 + 20a^3x^3 + 15a^2x^4 + 6ax^5 + x^6$$ Here's the binomial-theorem way of expanding it out. $$(1+x)^n = 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + ...$$ using the above theorem you should get $$a^6x^6 + 6a^5x^3 + 15a^4 + \frac{20a^3}{x^3} + \frac{15a^2}{x^6}+\frac{6a}{x^9}+\frac{1}{x^{12}}$$ You can now set the constant term $15a^4$ equal to $1215$: this gives $a^4 = 81$, hence $a = \pm 3$.
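A quicker route than the full expansion: the general term of $\left(\frac1{x^2}+ax\right)^6$ is $\binom{6}{k}a^k x^{3k-12}$, which is constant exactly when $k=4$, giving $\binom{6}{4}a^4 = 15a^4 = 1215$ and so $a = \pm 3$. This can be checked with SymPy (my own sketch, not part of the answer):

```python
from sympy import symbols, expand, solve

x, a = symbols("x a")
expr = expand((1 / x**2 + a * x) ** 6)
constant_term = expr.coeff(x, 0)   # the term independent of x
print(constant_term)               # 15*a**4
# real solutions of 15*a**4 = 1215 are a = -3 and a = 3 (plus two imaginary ones)
print(solve(constant_term - 1215, a))
```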
{ "language": "en", "url": "https://math.stackexchange.com/questions/62234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Real square matrices space as a manifold Let $\mathrm{M}(n,\mathbb{R})$ be the space of $n\times n$ matrices over $\mathbb{R}$. Consider the function $m \in \mathrm{M}(n,\mathbb{R}) \mapsto (a_{11},\dots,a_{1n},a_{21}\dots,a_{nn}) \in \mathbb{R}^{n^2}$. The space $\mathrm{M}(n,\mathbb{R})$ is locally Euclidean at any point and we have a single chart atlas. I read that the function is bicontinuous, but what is the topology on $\mathrm{M}(n,\mathbb{R})$? Second question: in what sense is a $C^{\infty}$ structure defined when there are no (non-trivial) coordinate changes? Do we have to consider just the identity change? Thanks.
The topology on $\mathrm{M}(n, \mathbb{R})$ making the map you give bicontinuous is, well, the topology that makes that map bicontinuous. Less tautologically, it is the topology induced by any norm on $\mathrm{M}(n, \mathbb{R})$. (Recall that all norms on a finite-dimensional real vector space are equivalent.) Because Euclidean space has a canonical smooth structure, the fact that you have a single-chart atlas means that you can give $\mathrm{M}(n, \mathbb{R})$ the smooth structure by pulling it back from $\mathbb{R}^{n^2}$. There are no compatibility conditions to verify: the identity transition map will always be smooth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/62302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integral around unit sphere of inner product For arbitrary $n\times n$ matrices M, I am trying to solve the integral $$\int_{\|v\| = 1} v^T M v.$$ Solving this integral in a few low dimensions (by passing to spherical coordinates) suggests the answer in general to be $$\frac{A\,\mathrm{tr}(M)}{n}$$ where $A$ is the surface area of the $(n-1)$-dimensional sphere. Is there a nice, coordinate-free approach to proving this formula?
The integral is linear in $M$, so we only have to calculate it for canonical matrices $A_{kl}$ spanning the space of matrices, with $(A_{kl})_{ij}=\delta_{ik}\delta_{jl}$. The integral vanishes by symmetry for $k\neq l$, since for every point on the sphere with coordinates $x_k$ and $x_l$ there's one with $x_k$ and $-x_l$. So we only have to calculate the integral for $k=l$. By symmetry, this is independent of $k$, so it's just $1/n$ of the integral for $M$ the identity. But that's just the integral over $1$, which is the surface area $A$ of the sphere. Then by linearity the integral for arbitrary $M$ is the sum of the diagonal elements, i.e. the trace, times the coefficient $A/n$.
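A Monte Carlo sanity check of the formula (my own sketch): the average of $v^TMv$ over uniformly random unit vectors should be the integral divided by the area $A$, i.e. $\mathrm{tr}(M)/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))

# normalized Gaussian vectors are uniform on the unit sphere
v = rng.standard_normal((200_000, n))
v /= np.linalg.norm(v, axis=1, keepdims=True)

estimate = np.einsum("ij,jk,ik->i", v, M, v).mean()  # average of v^T M v
print(estimate, np.trace(M) / n)  # agree to Monte Carlo accuracy
```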
{ "language": "en", "url": "https://math.stackexchange.com/questions/62358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 0 }
A "fast" way to compute number of pairs of positive integers $(a,b)$ with lcm $N$ I am looking for a fast/efficient method to compute the number of pairs of $(a,b)$ so that its LCM is a given integer, say $N$. For the problem I have in hand, $N=2^2 \times 503$ but I am very inquisitive for a general algorithm. Please suggest a method that should be fast when used manually.
If $N$ is a prime power $p^n$, then there are $2n+1$ such (ordered) pairs -- one member must be $p^n$ itself, and the other can be a lower power. If $N=PQ$ for coprime $P$ and $Q$, then each combination of a pair for $P$ and a pair for $Q$ gives a different, valid pair for $N$. (And all pairs arise in this manner). Therefore, the answer should be the product of all $2n+1$ where $n$ ranges over the exponents in the prime factorization of $N$.
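In code (a sketch of mine; the brute-force count is only there to confirm the formula on small $N$):

```python
from math import lcm  # Python 3.9+

def count_lcm_pairs(N):
    """Ordered pairs (a, b) with lcm(a, b) = N: product of (2e+1) over exponents."""
    result, d, n = 1, 2, N
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            result *= 2 * e + 1
        d += 1
    if n > 1:
        result *= 3  # leftover prime factor with exponent 1

    return result

brute = sum(1 for a in range(1, 13) for b in range(1, 13) if lcm(a, b) == 12)
print(brute, count_lcm_pairs(12))  # both 15
print(count_lcm_pairs(2012))       # 2012 = 2^2 * 503, so (2*2+1)(2*1+1) = 15
```

So for the $N = 2^2 \times 503$ in the question, the count is $15$, computable by hand in seconds.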
{ "language": "en", "url": "https://math.stackexchange.com/questions/62417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is a finite integral domain always field? This is how I'm approaching it: let $R$ be a finite integral domain and I'm trying to show every element in $R$ has an inverse: * *let $R-\{0\}=\{x_1,x_2,\ldots,x_k\}$, *then as $R$ is closed under multiplication $\prod_{n=1}^k\ x_i=x_j$, *therefore by canceling $x_j$ we get $x_1x_2\cdots x_{j-1}x_{j+1}\cdots x_k=1 $, *by commuting any of these elements to the front we find an inverse for first term, e.g. for $x_m$ we have $x_m(x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k)=1$, where $(x_m)^{-1}=x_1\cdots x_{m-1}x_{m+1}\cdots x_{j-1}x_{j+1}\cdots x_k$. As far as I can see this is correct, so we have found inverses for all $x_i\in R$ apart from $x_j$ if I am right so far. How would we find $(x_{j})^{-1}$?
Simple arguments have already been given. Let us do a technological one. Let $A$ be a finite commutative integral domain. It is artinian, so its radical $\mathrm{rad}(A)$ is nilpotent; in particular, the non-zero elements of $\mathrm{rad}(A)$ are themselves nilpotent. Since $A$ is a domain, this means that $\mathrm{rad}(A)=0$. It follows that $A$ is semisimple, so it is a direct product of matrix rings over division rings. It is a domain, so there can only be one factor; it is commutative, so that factor must be a ring of $1\times 1$ matrices over a commutative division ring. In all, $A$ must be a field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/62548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 7, "answer_id": 2 }
What is your favorite application of the Pigeonhole Principle? The pigeonhole principle states that if $n$ items are put into $m$ "pigeonholes" with $n > m$, then at least one pigeonhole must contain more than one item. I'd like to see your favorite application of the pigeonhole principle, to prove some surprising theorem, or some interesting/amusing result that one can show students in an undergraduate class. Graduate level applications would be fine as well, but I am mostly interested in examples that I can use in my undergrad classes. There are some examples in the Wikipedia page for the pigeonhole principle. The hair-counting example is one that I like... let's see some other good ones! Thanks!
In a finite semigroup $S$, every element has an idempotent power, i.e. for every $s \in S$ there exists some $k$ such that $s^k$ is idempotent, i.e. $(s^{k})^2 = s^k$. For the proof consider the sequence $s, s^2, s^3, \ldots$ which has to repeat somewhere, let $s^n = s^{n+p}$, then inductively $s^{(n + u) + vp} = s^{n + u}$ for all $u,v \in \mathbb N_0$, so in particular for $k = np$ we have $s^{2k} = s^{np + np} = s^{np} = s^k$. This result is used in the algebraic theory of automata in computer science.
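A concrete instance in the multiplicative semigroup of integers mod $m$, following the proof's recipe $k = np$ (my own sketch):

```python
def idempotent_power(s, m):
    """Return k with (s^k)^2 = s^k in the semigroup (Z/mZ, *)."""
    seen = {}
    x, i = s % m, 1
    while x not in seen:        # pigeonhole: the powers must eventually repeat
        seen[x] = i
        x = (x * s) % m
        i += 1
    n = seen[x]                 # s^n = s^(n+p)
    p = i - n
    return n * p                # k = n*p, so s^(2k) = s^k

for s, m in [(2, 100), (3, 99), (10, 48)]:
    k = idempotent_power(s, m)
    assert pow(s, 2 * k, m) == pow(s, k, m)
print("idempotent powers found")
```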
{ "language": "en", "url": "https://math.stackexchange.com/questions/62565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89", "answer_count": 23, "answer_id": 20 }
On factorizing and solving the polynomial: $x^{101} – 3x^{100} + 2x^{99} + x^{98} – 3x^{97} + 2x^{96} + \cdots + x^2 – 3x + 2 = 0$ The actual problem is to find the product of all the real roots of this equation,I am stuck with his factorization: $$x^{101} – 3x^{100} + 2x^{99} + x^{98} – 3x^{97} + 2x^{96} + \cdots + x^2 – 3x + 2 = 0$$ By just guessing I noticed that $(x^2 – 3x + 2)$ is one factor and then dividing that whole thing we get $(x^{99}+x^{96}+x^{93} + \cdots + 1)$ as the other factor , but I really don't know how to solve in those where wild guessing won't work! Do we have any trick for factorizing this kind of big polynomial? Also I am not sure how to find the roots of $(x^{99}+x^{96}+x^{93} + \cdots + 1)=0$,so any help in this regard will be appreciated.
In regard to the first part of your question ("wild guessing"), the point was to note that the polynomial can be expressed as the sum of three polynomials, grouping same coefficients: $$ P(x)= x^{101} – 3x^{100} + 2x^{99} + x^{98} – 3x^{97} + 2x^{96} + \cdots + x^2 – 3x + 2 = A(x)+B(x)+C(x)$$ with $$\begin{eqnarray} A(x) &= x^{101} + x^{98} + \cdots + x^2 &= x^2 (x^{99} + x^{96} + \cdots + 1) \\ B(x) &= - 3 x^{100} -3 x^{97} - \cdots -3 x &= - 3 x (x^{99} + x^{96} + \cdots + 1)\\ C(x) &= 2 x^{99} + 2 x^{96} + \cdots + 2 &= 2 (x^{99} + x^{96} + \cdots + 1) \\ \end{eqnarray} $$ so $$P(x) = (x^2 - 3x +2) (x^{99} + x^{96} + \cdots + 1) $$ and applying the geometric finite sum formula: $$P(x)=(x^2 - 3x +2) ({(x^{3})}^{33} + {(x^{3})}^{32} + \cdots + 1) = (x^2 - 3x +2) \frac{x^{102}-1}{x^3-1} $$ As Andre notes in the comments, your "guessing" was dictated by the very particular structure of the polynomial, you can't hope for some general guessing recipe...
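Both the factorization and the geometric-sum form can be verified symbolically; a sketch with SymPy (mine, not part of the answer):

```python
from sympy import symbols, expand, cancel

x = symbols("x")
# P(x) = sum over k = 0..33 of x^(3k+2) - 3x^(3k+1) + 2x^(3k)
P = sum(x ** (3 * k + 2) - 3 * x ** (3 * k + 1) + 2 * x ** (3 * k) for k in range(34))
q = sum(x ** (3 * k) for k in range(34))  # 1 + x^3 + ... + x^99

assert expand(P - (x**2 - 3 * x + 2) * q) == 0     # the factorization
assert cancel(q - (x**102 - 1) / (x**3 - 1)) == 0  # geometric-sum form
print("factorization verified")
```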
{ "language": "en", "url": "https://math.stackexchange.com/questions/62655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Is there a solid where all triangles on the surface are isosceles? Are there any solids in $R^{3}$ for which, for any 3 points chosen on the surface, at least two of the lengths of the shortest curves which can be drawn on the surface to connect pairs of them are equal?
There can be no smooth surface with this property, because a smooth surface looks locally like a plane, and the plane allows non-isosceles triangles. As for non-smooth surfaces embedded in $\mathbb R^3$ -- which would need to be everywhere non-smooth for this purpose -- it is not clear to me that there is even a good general definition of curve length that would allow us to speak of "shortest curves".
{ "language": "en", "url": "https://math.stackexchange.com/questions/62804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that meas$(A)\leq 1$ Let $f:\mathbb R\to [0,1]$ be a nondecreasing continuous function and let $$A:=\{x\in\mathbb R : \exists\quad y>x\:\text{ such that }\:f(y)-f(x)>y-x\}.$$ I've already proved that: a) if $(a,b)$ is a bounded open interval contained in $A$, and $a,b\not\in A$, then $f(b)-f(a)=b-a.$ b)$A$ contains no half line. What remains is to prove that the Lebesgue measure of $A$ is less or equal than $1$. I've tried to get estimates on the integral of $\chi_A$ but i went nowhere further than just writing down the integral. I'm not sure whether points a) and b) are useful, but i've reported them down for the sake of completeness so you can use them if you want. Thanks everybody.
hint (c) show that $A$ is open (d) Any open subset of $\mathbb{R}$ is the union of countably many disjoint open intervals $\amalg (a_i,b_i)$. The Lebesgue measure is equal to $\sum (b_i - a_i)$. Now apply item (a) and the fact that $f$ is non-decreasing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/62855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to show that $\lim\limits_{x \to \infty} f'(x) = 0$ implies $\lim\limits_{x \to \infty} \frac{f(x)}{x} = 0$? I was trying to work out a problem I found online. Here is the problem statement: Let $f(x)$ be continuously differentiable on $(0, \infty)$ and suppose $\lim\limits_{x \to \infty} f'(x) = 0$. Prove that $\lim\limits_{x \to \infty} \frac{f(x)}{x} = 0$. (source: http://www.math.vt.edu/people/plinnell/Vtregional/E79/index.html) The first idea that came to my mind was to show that for all $\epsilon > 0$, we have $|f(x)| < \epsilon|x|$ for sufficiently large $x$. (And I believe I could do this using the fact that $f'(x) \to 0$ as $x \to \infty$.) However, I was wondering if there was a different (and nicer or cleverer) way. Here's an idea I had in mind: If $f$ is bounded, then $\frac{f(x)}{x}$ clearly goes to zero. If $\lim\limits_{x \to \infty} f(x)$ is either $+\infty$ or $-\infty$, then we can apply l'Hôpital's rule (to get $\lim\limits_{x \to \infty} \frac{f(x)}{x} = \lim\limits_{x \to \infty} \frac{f'(x)}{1} = 0$). However, I'm not sure what I could do in the remaining case (when $f$ is unbounded but oscillates like crazy). Is there a way to finish the proof from here? Also, are there other ways of proving the given statement?
You can do a proof by contradiction. If ${\displaystyle \lim_{x \rightarrow \infty} {f(x) \over x}}$ were not zero, then there is an $\epsilon > 0$ such that there are $x_1 < x_2 < ... \rightarrow \infty$ with ${\displaystyle\bigg|{f(x_n) \over x_n}\bigg| > \epsilon}$. Then for $a \leq x_n$ one has $$|f(x_n) - f(a)| \geq |f(x_n)| - |f(a)|$$ $$\geq \epsilon x_n - |f(a)|$$ By the mean value theorem, there is a $y_n$ between $a$ and $x_n$ such that $$|f'(y_n)| = {|f(x_n) - f(a)| \over x_n - a}$$ $$\geq {\epsilon x_n - |f(a)| \over x_n - a}$$ Letting $n$ go to infinity, this means that for sufficiently large $n$ you have $$|f'(y_n)| > {\epsilon \over 2}$$ Since each $y_n \geq a$ and $a$ is arbitrary, $f'(y)$ can't go to zero as $y$ goes to infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/62916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 5, "answer_id": 1 }
Samples and random variables Suppose I picked a sample of $n$ 20-year-olds. I measured the height of each to obtain $n$ numbers: $h_1, h_2, \ldots, h_n$. According to theory of probability/statistics, there are $n$ random variables associated with the sample, say $X_1, X_2, \ldots, X_n$. However, I do not understand the relationship between the $X_i$ and the $h_i$, I have yet to see it explained clearly in any book and so I have a few questions. * *What is the probability space corresponding to the $X_i$? It seems to me that the way one samples should effect what the probability space will look like. In this case, I am sampling without replacement and the order in which I picked the individuals is irrelevant so I believe the sample space $\Omega$ should consist of all $n$-tuples of 20-year-olds such that no two tuples contain the same individuals. In this way, $X_i(\omega)$ is the height of the $i$th individual in the $n$-tuple $\omega \in \Omega$. The sample I picked would therefore correspond to one particlar point in $\Omega$, call it $\omega_0$, such that $X_i(\omega_0) = h_i$. I surmise that the $\sigma$-algebra will be just the power set $2^\Omega$ of $\Omega$ but I haven't a clue as to what the probability measure would be. *Let $(\Gamma, 2^\Gamma, P)$ be a probability space where $\Gamma$ is the set of all 20-year-olds and let $X$ be a random variable on $\Gamma$ such that $X(\gamma)$ is the height of the individual $\gamma\in\Gamma$. What is the connection between the $X_i$ and $X$ besides that afforded to us by the law of large numbers? In particular, what is the exact relationship between the probability space of $X$ and that of the $X_i$?
Your experiment consists of choosing $n$ people from a certain population of 20-year-olds and measuring their heights. $X_i$ is the height of the $i$'th person chosen. In the particular outcome you obtained when you did this experiment, the value of $X_i$ was $h_i$. The sample space $\Omega$ is all ordered $n$-tuples of distinct individuals from the population. Since the sample space is finite (there being only a finite number of 20-year-olds in the world), the $\sigma$-algebra is indeed $2^\Omega$. If you choose your sample "at random", the probability measure assigns equal probabilities to all $\omega \in \Omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/62967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Linear functions I'm taking Algebra II, and I need help on two types of problems. The numbers may not work out, as I am making these up. 1st problem example: Using the following functions, write a linear functions. f(2) = 3 and f(5) = 4 2nd problem example: Write a solution set for the following equation (I thought you couldn't solve a linear equation?) 2x + 4y = 8 Feel free to use different numbers that actually work in your examples, these problems are just making me scratch my head.
First problem. At this level, a "linear function" is one of the form $f(x) = ax+b$ for some $a$ and $b$. If you know that $f(2)=3$ and $f(5)=4$, then by plugging in you get two equations: $$\begin{align*} 3 &= 2a + b\\ 4 &= 5a + b \end{align*}$$ From these two equations, you should be able to solve for $a$ and $b$, thus finding the function. For example, you can solve for $b$ in the first equation, substitute in the second, and solve the resulting equation for $a$; then plug that in to find the value of $b$. Second Problem. An equation like $2x+4y = 8$ does not have a unique solution, but each value of $x$ gives you a corresponding value of $y$ and vice-versa. The "solution set" of this would be a description of all the values that make the equation true. For instance, if you had the problem "Write a solution set for $x-3y=4$", you could do the following: given a value of $y$, the value of $x$ has to be $4+3y$. So one way to write the solutions is: $$\bigl\{ (4+3y, y)\,\bigm|\, y\text{ any real number}\bigr\}.$$ For each real number $y$, you get one solution. Or you could solve for $y$, to get that $y=\frac{x-4}{3}$, and write a solution set as: $$\left\{\left. \left(x, \frac{x-4}{3}\right)\,\right|\, x\text{ any real number}\right\}.$$
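For the concrete numbers in the first problem ($f(2)=3$, $f(5)=4$), the two-equation solve can be sketched in a few lines of Python (variable names are my own):

```python
# Solve 3 = 2a + b and 4 = 5a + b, i.e. find the line through (2, 3) and (5, 4).
x1, y1 = 2, 3
x2, y2 = 5, 4

a = (y2 - y1) / (x2 - x1)  # subtracting one equation from the other eliminates b
b = y1 - a * x1            # back-substitute into 3 = 2a + b

f = lambda x: a * x + b    # the recovered linear function
```

Here $a = 1/3$ and $b = 7/3$, which you can confirm satisfy both original equations.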
{ "language": "en", "url": "https://math.stackexchange.com/questions/63013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find $\gcd(f_{n+1}, f_{n+2})$ by using Euclidean algorithm for the Fibonacci numbers whenever $n>1$? Find $\gcd(f_{n+1}, f_{n+2})$ by using Euclidean algorithm for the Fibonacci numbers whenever $n>1$. How many division algorithms are needed? (Recall that the Fibonacci sequence $(f_n)$ is defined by setting $f_1=f_2=1$ and $f_{n+2}=f_{n+1}+f_n$ for all $n \in \mathbb N^*$, and look here to get information about Euclidean algorithm)
anon's answer: $$ \gcd(F_{n+1},F_{n+2}) = \gcd(F_{n+1},F_{n+2}-F_{n+1}) = \gcd(F_{n+1},F_n). $$ Therefore $$ \gcd(F_{n+1},F_n) = \gcd(F_2,F_1) = \gcd(1,1) = 1. $$ In other words, any two adjacent Fibonacci numbers are relatively prime. Since $$\gcd(F_n,F_{n+2}) = \gcd(F_n,F_{n+1}+F_n) = \gcd(F_n,F_{n+1}), $$ this is also true for any two Fibonacci numbers of distance $2$. Since $(F_3,F_6) = (2,8)=2$, the pattern ends here - or so you might think... It is not difficult to prove that $$F_{n+k+1} = F_{k+1}F_{n+1} + F_kF_n. $$ Therefore $$ \gcd(F_{n+k+1},F_{n+1}) = \gcd(F_kF_n,F_{n+1}) = \gcd(F_k,F_{n+1}). $$ Considering what happened, we deduce $$ (F_a,F_b) = F_{(a,b)}. $$
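A small Python check of the facts in the answer (indexing as in the question, $F_1=F_2=1$): adjacent Fibonacci numbers are coprime, $\gcd(F_a,F_b)=F_{\gcd(a,b)}$, and, for the "how many divisions" part, the Euclidean algorithm on $(F_{n+2},F_{n+1})$ takes exactly $n$ division steps (this last count is my own addition, verified here only for the range tested):

```python
from math import gcd

def fib(n):
    """F_1 = F_2 = 1, F_{n+2} = F_{n+1} + F_n."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def euclid_steps(a, b):
    """Number of division steps the Euclidean algorithm performs on (a, b)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# gcd(F_a, F_b) = F_gcd(a, b); in particular adjacent Fibonacci numbers are coprime
for a in range(1, 20):
    for b in range(1, 20):
        assert gcd(fib(a), fib(b)) == fib(gcd(a, b))
```

Consecutive Fibonacci numbers are in fact the classic worst case for the Euclidean algorithm, which is what makes the step count grow linearly in $n$.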
{ "language": "en", "url": "https://math.stackexchange.com/questions/63068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does solving a Rubik's cube imply alignment? Today, I got my hands on a Rubik's cube with text on it. It looks like this: Now, I would love to know whether solving the right cube will also always correctly align the text on the cube or whether it's possible to solve the cube in a way that the colors match correctly but the text is misaligned.
whether it's possible to solve the cube in a way that the colors match correctly but the text is misaligned Yes. But the total amount of misalignment (if I remember correctly; it's been a while since I played with a Rubik's cube) must be a multiple of $\pi$ radians. So if only one face has the center piece misaligned, it must be upside down. On the other hand it is possible to have two center pieces simultaneously off by quarter-turns. (This is similar to the fact that without taking the cube apart, you cannot change the orientation of an edge piece (as opposed to center piece or corner piece) while fixing everything else.) (I don't actually have a group-theoretic proof for the fact though; this is just from experience.) Edit: Henning Makholm provides a proof in the comments Here's the missing group-theoretic argument: Place four marker dots symmetrically on each center, and one marker at the tip of each corner cubie, for a total of 32 markers. A quarter turn permutes the 32 markers in two 4-cycles, which is even. Therefore every possible configuration of the dots is an even permutation away from the solved state. But misaligning one center by 90° while keeping everything else solved would be an odd permutation and is therefore impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Jordan Measures, Open sets, Closed sets and Semi-closed sets I cannot understand: $$\bar{\mu} ( \{Q \cap (0,1) \} ) = 1$$ and (cannot understand this one particularly) $$\underline{\mu} ( \{Q \cap (0,1) \} ) = 0$$ where $Q$ is rational numbers, why? I know that the measure for closed set $\mu ([0,1]) = 1$ so I am puzzled with the open set solution. Is $\underline{\mu} ( (0,1) ) = 0$ also? How is the measure with open sets in general? So far the main question, history contains some helper questions but I think this is the lion part of it what I cannot understand. More about Jordan measure here. Related * *Jordan measure with semi-closed sets here *Jordan measure and uncountable sets here
The inner measure of $S:=\mathbb Q \cap [0,1]$ is, by definition, the supremum of the measures of simple sets contained in $S$. But no non-empty simple set is contained in $S$: by density of the irrationals in $\mathbb R$, any simple set $S':=[a,a+e)$ contained in $[0,1]$ (i.e., with $0<a<a+e<1$) will necessarily hit some irrational, so it cannot be a subset of $S$. Hence the only simple set contained in $S$ is the trivial, empty set, and the empty set is defined to have measure $0$. For the outer measure, you want to find the "smallest" simple set $T$ containing $S:=\mathbb Q \cap [0,1]$. As pointed out above, by density of $\mathbb Q$ in $\mathbb R$, no interval strictly inside $[0,1]$ can contain $S$. We then only have the option of covering $S$ by sets of the type $S'':=[0, 1+\frac{1}{n})$, with $m^*(S'')=1 + \frac{1}{n}$. The infimum of the measures over all such $S''$ is $\inf\{1+\frac{1}{n} : n \in \mathbb N\}$, which is $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Produce output with certain probability using fair coin flips "Introduction to Algorithm" C.2-6 Describe a procedure that takes as input two integers a and b such that $0 < a < b$ and, using fair coin flips, produces as output heads with probability $a / b$ and tails with probability $(b - a) /b$. Give a bound on the expected number of coin flips, which should be O(1) (Hint: Represent a/b in binary.) My guess is that we can use head to represent bit 0 and tail for bit 1. Then by flipping $m = \lceil \log_2 b \rceil$ times, we obtain a $m$ bit binary based number $x$. If $ x \ge b $, then we just drop x and do the experiment again, until we can an $ x < b$. This x has probility $P\{ x < a\} = \frac a b$ and $P\{ a \le x < a\} = \frac {b - a} b$ But I'm not quite sure if my solution is what the question asks. Am I right? Edit, I think Michael and TonyK gave the correct algorithm, but Michael explained the reason behind the algorithm. The 3 questions he asked: * *show that this process requires c coin flips, in expectation, for some constant c; The expectation of c, the number of flips, as TonyK pointed out, is 2. * *show that you yell "NO!" with probability 2/3; P(yell "NO!") = P("the coin comes up tails on an odd toss") = $ \sum_{k=1}^\infty (\frac 1 2)^ k = \frac 2 3$ * *explain how you'd generalize to other rational numbers, with the same constant c It's the algorithm given by TonyK. We can restate it like this Represent a/b in binary. Define $f(Head) = 0$, and $f(Tail) = 1$. If f(nth flip's result) = "nth bit of a/b" then terminate. If the last flip is Head, we yell "Yes", otherwise "No". We have $ P \{Yes\} = \sum_{i\in I}(1/2)^i = \frac a b $ where I represent the set of index where the binary expression of a/b is 1.
Here is my way of understanding Michael and TonyK's solutions. * *$0 \lt a \lt b$, so $0 \lt {\frac ab} \lt 1$, and $\frac ab$ can be represented by an infinite binary fraction of the form $0.b_1b_2...b_{j-1}b_jb_{j+1}b_{j+2}...$ (adding trailing $0$s when $\frac ab$ is a finite binary fraction). Flipping a coin infinitely many times produces a binary fraction between $0$ and $1$, namely $0.{flip_1}{flip_2}...{flip_i}...$, with each flip corresponding to one bit. The probability that the flipping fraction $0.{flip_1}{flip_2}...{flip_i}...$ is less than the fraction $\frac ab$ is $\frac ab$, and the probability that it is greater is $\frac {b-a} {b}$, since the flips define a continuous uniform distribution on $[0,1]$. *However, flipping a coin infinitely many times is impossible, and all we need is whether the infinite string $0.{flip_1}{flip_2}...{flip_i}...$ is less or greater than $\frac ab$. We can decide this relation as soon as the first mismatched throw occurs, because any $0.b_1b_2...b_{j-1}\mathbf0x_{j+1}x_{j+2}...$ is less than $0.b_1b_2...b_{j-1}\mathbf1b_{j+1}b_{j+2}...$ when $b_j = 1$, and any $0.b_1b_2...b_{j-1}\mathbf1x_{j+1}x_{j+2}...$ is greater than $0.b_1b_2...b_{j-1}\mathbf0b_{j+1}b_{j+2}...$ when $b_j = 0$.
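A runnable sketch of TonyK's bit-matching procedure (the function name and the flip source are my own choices): flip until the coin bit disagrees with the corresponding bit of the binary expansion of $a/b$; the flip stream, read as $0.f_1f_2\ldots$, is below $a/b$ exactly when that first disagreeing flip is $0$.

```python
import random

def biased_bit(a, b, flip=lambda: random.randint(0, 1)):
    """Return 1 with probability a/b (0 < a < b) using fair coin flips.

    Compares the flip stream, read as a binary fraction 0.f1 f2 ...,
    against the binary expansion of a/b, stopping at the first mismatch.
    """
    r = a
    while True:
        r *= 2                         # next bit of a/b is floor(2r / b)
        bit = 1 if r >= b else 0
        if bit:
            r -= b
        f = flip()
        if f != bit:
            return 1 if f == 0 else 0  # stream < a/b iff the mismatched flip is 0
```

Each round terminates with probability $1/2$, so the expected number of flips is $2$, matching the $O(1)$ bound asked for in the exercise.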
{ "language": "en", "url": "https://math.stackexchange.com/questions/63207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 2 }
Solving for Inequality $\frac{12}{2x-3}<1+2x$ I am trying to solve for the following inequality: $$\frac{12}{2x-3}<1+2x$$ In the given answer, $$\frac{12}{2x-3}-(1+2x)<0$$ $$\frac{-(2x+3)(2x-5)}{2x-3}<0 \rightarrow \textrm{ How do I get to this step?}$$ $$\frac{(2x+3)(2x-5)}{2x-3}>0$$ $$(2x+3)(2x-5)(2x-3)>0 \textrm{ via multiply both sides by }(2x-3)^2$$
$$ \frac{12}{2x-3} - (1+2x) = \frac{12 - (1+2x)(2x-3) }{2x-3} = \frac{ 12 - (2x-3+4x^2-6x)}{2x-3} $$ $$= - \frac{4x^2-4x-15}{2x-3} = - \frac{(2x+3)(2x-5)}{2x-3} $$
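A quick numerical sanity check of the factorization, sampling points on either side of $x = 3/2$ (where the expression is undefined):

```python
# 12/(2x-3) - (1+2x) should equal -(2x+3)(2x-5)/(2x-3) for every x != 3/2
for x in (-5, -1.5, 0, 1, 2, 2.5, 10):
    lhs = 12 / (2 * x - 3) - (1 + 2 * x)
    rhs = -(2 * x + 3) * (2 * x - 5) / (2 * x - 3)
    assert abs(lhs - rhs) < 1e-9
```

The sample deliberately includes the roots $x=-3/2$ and $x=5/2$, where both sides vanish.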
{ "language": "en", "url": "https://math.stackexchange.com/questions/63250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How can I prove this random process to be Standard Brownian Motion? $B_t,t\ge 0$ is a standard Brownian Motion. Then define $X(t)=e^{t/2}B_{1-e^{-t}}$ and $Y_t=X_t-\frac{1}{2}\int_0^t X_u du$. The question is to show that $Y_t, t\ge 0$ is a standard Brownian Motion. I tried to calculate the variance of $Y_t$ for given $t$, but failed to get $t$..
For every nonnegative $t$, let $Z_t=B_{1-\mathrm e^{-t}}=\displaystyle\int_0^{1-\mathrm e^{-t}}\mathrm dB_s$. Then $(Z_t)_{t\geqslant0}$ is a Brownian martingale and $\mathrm d\langle Z\rangle_t=\mathrm e^{-t}\mathrm dt$ hence there exists a Brownian motion $(\beta_t)_{t\geqslant0}$ starting from $\beta_0=0$ such that $Z_t=\displaystyle\int_0^t\mathrm e^{-s/2}\mathrm d\beta_s$ for every nonnegative $t$. In particular, $X_t=\displaystyle\mathrm e^{t/2}\int_0^t\mathrm e^{-s/2}\mathrm d\beta_s$ and $$ \int_0^tX_u\mathrm du=\int_0^t\mathrm e^{u/2}\int\limits_0^u\mathrm e^{-s/2}\mathrm d\beta_s\mathrm du=\int_0^t\mathrm e^{-s/2}\int_s^t\mathrm e^{u/2}\mathrm du\mathrm d\beta_s, $$ hence $$ \int_0^tX_u\mathrm du=\int_0^t\mathrm e^{-s/2}2(\mathrm e^{t/2}-\mathrm e^{s/2})\mathrm d\beta_s=2\mathrm e^{t/2}\int_0^t\mathrm e^{-s/2}\mathrm d\beta_s-2\beta_t=2X_t-2\beta_t. $$ This proves that $Y_t=X_t-\displaystyle\frac12\int\limits_0^tX_u\mathrm du=\beta_t$ and that $(Y_t)_{t\geqslant0}$ is a standard Brownian motion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
2D transformation I have a math problem for some code I am writing. I don't have much experience with 2D transformations, but I am sure there must be a straight-froward formula for my problem. I have illustrated it here: My goal is to work out the co-ordinates of (Xp2, Yp2). Shape A is a quadrilateral that exists that can exist anywhere in 2D space. Its four co-ordinates are known. It contains a point (Xp1, Yp1), which are also known. Shape B is a rectangle with one corner at (0,0). The height and width are variable, but known. Shape A needs to be transposed to Shape B so that the new position of the point inside can be calculated. How do I work out the new co-ordinates of (Xp2, Yp2)? Cheers,
See my answer to "Tranforming 2D outline into 3D plane". The transforms and 4 point to 4 point mapping described there should be just what you need.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proofs of $\lim\limits_{n \to \infty} \left(H_n - 2^{-n} \sum\limits_{k=1}^n \binom{n}{k} H_k\right) = \log 2$ Let $H_n$ denote the $n$th harmonic number; i.e., $H_n = \sum\limits_{i=1}^n \frac{1}{i}$. I've got a couple of proofs of the following limiting expression, which I don't think is that well-known: $$\lim_{n \to \infty} \left(H_n - \frac{1}{2^n} \sum_{k=1}^n \binom{n}{k} H_k \right) = \log 2.$$ I'm curious about other ways to prove this expression, and so I thought I would ask here to see if anybody knows any or can think of any. I would particularly like to see a combinatorial proof, but that might be difficult given that we're taking a limit and we have a transcendental number on one side. I'd like to see any proofs, though. I'll hold off from posting my own for a day or two to give others a chance to respond first. (The probability tag is included because the expression whose limit is being taken can also be interpreted probabilistically.) (Added: I've accepted Srivatsan's first answer, and I've posted my two proofs for those who are interested in seeing them. Also, the sort of inverse question may be of interest. Suppose we have a function $f(n)$ such that $$\lim_{n \to \infty} \left(f(n) - \frac{1}{2^n} \sum_{k=0}^n \binom{n}{k} f(k) \right) = L,$$ where $L$ is finite and nonzero. What can we say about $f(n)$? This question was asked and answered a while back; it turns out that $f(n)$ must be $\Theta (\log n)$. More specifically, we must have $\frac{f(n)}{\log_2 n} \to L$ as $n \to \infty$.)
I made a quick estimate in my comment. The basic idea is that the binomial distribution $2^{-n} \binom{n}{k}$ is concentrated around $k= \frac{n}{2}$. Simply plugging this value in the limit expression, we get $H_n-H_{n/2} \sim \ln 2$ for large $n$. Fortunately, formalizing the intuition isn't that hard. Call the giant sum $S$. Notice that $S$ can be written as $\newcommand{\E}{\mathbf{E}}$ $$ \sum_{k=0}^{\infty} \frac{1}{2^{n}} \binom{n}{k} (H(n) - H(k)) = \sum_{k=0}^{\infty} \Pr[X = k](H(n) - H(k)) = \E \left[ H(n) - H(X) \right], $$ where $X$ is distributed according to the binomial distribution $\mathrm{Bin}(n, \frac12)$. We need the following two facts about $X$: * *With probability $1$, $0 \leqslant H(n) - H(X) \leqslant H(n) = O(\ln n)$. *From the Bernstein inequality, for any $\varepsilon \gt 0$, we know that $X$ lies in the range $\frac{1}{2}n (1\pm \varepsilon)$, except with probability at most $e^{- \Omega(n \varepsilon^2) }$. Since the function $x \mapsto H(n) - H(x)$ is monotone decreasing, we have $$ S \leqslant \color{Red}{H(n)} \color{Blue}{-H\left( \frac{n(1-\varepsilon)}{2} \right)} + \color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)}. $$ Plugging in the standard estimate $H(n) = \ln n + \gamma + O\Big(\frac1n \Big)$ for the harmonic sum, we get: $$ \begin{align*} S &\leqslant \color{Red}{\ln n + \gamma + O \Big(\frac1n \Big)} \color{Blue}{- \ln \left(\frac{n(1-\varepsilon)}{2} \right) - \gamma + O \Big(\frac1n \Big)} +\color{Green}{\exp (-\Omega(n \varepsilon^2)) \cdot O(\ln n)} \\ &\leqslant \ln 2 - \ln (1- \varepsilon) + o_{n \to \infty}(1) \leqslant \ln 2 + O(\varepsilon) + o_{n \to \infty}(1). \tag{1} \end{align*} $$ An analogous argument gets the lower bound $$ S \geqslant \ln 2 - \ln (1+\varepsilon) - o_{n \to \infty}(1) \geqslant \ln 2 - O(\varepsilon) - o_{n \to \infty}(1). \tag{2} $$ Since the estimates $(1)$ and $(2)$ hold for all $\varepsilon > 0$, it follows that $S \to \ln 2$ as $n \to \infty$.
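The limit is also easy to check numerically; a small sketch (exact integer binomials keep the $2^{-n}$ sum stable for moderately large $n$):

```python
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

def gap(n):
    """H_n - 2^{-n} * sum_{k=1}^{n} C(n, k) * H_k."""
    s = sum(math.comb(n, k) * harmonic(k) for k in range(1, n + 1))
    return harmonic(n) - s / 2 ** n

print(gap(500), math.log(2))  # compare the two values
```

The difference from $\ln 2$ shrinks as $n$ grows, consistent with the concentration argument above.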
{ "language": "en", "url": "https://math.stackexchange.com/questions/63466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 6, "answer_id": 4 }
suggest textbook on calculus I read single variable calculus this semester, and the course is using Thomas Calculus as the textbook. But this book is just too huge, a single chapter contains 100 exercise questions! Now I'm looking for a concise and complete textbook. I'm not interested in routine, computational exercises, but rather some challenging problem sets. I have quite a strong basic knowledge of calculus from high school, but I still have difficulties in solving a few questions from past exam papers. So I'm looking for more challenging exercises. In fact, I'm looking forward to solving Putnam level questions. Please suggest some textbooks with these features. Thanks in advance.
Here's a new book to add to the growing list of books that treat calculus more rigorously than usual and include numerous hard problems: https://books.google.se/books?id=HMnvCQAAQBAJ&printsec=frontcover&dq=sasane+calculus&hl=sv&sa=X&ved=0CCQQ6wEwAGoVChMIsYe1utLQyAIVJotyCh2XsQaF#v=onepage&q=sasane%20calculus&f=false Amol Sasane's book The How and Why of One Variable Calculus was released in August 2015 by Wiley. I have tried a newer edition of Thomas' Calculus, which I do not recommend: you won't be challenged at all because the exercises are so basic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 8, "answer_id": 7 }
What did Simon Norton do after 1985? Simon Norton is a mathematician that worked on finite simple groups and co-authored the Atlas of Finite Groups. With John Conway they proved there is a connection with the Monster group and the j-function:monstrous moonshine There's now a book out titled The Genius in my Basement by Alexander Masters, where he says that Simon Norton stopped doing mathematics after 1985 when John Conway left for America. I find this hard to believe because the book also talks about his immense talent and natural attraction to the subject as a youngster: * *while still at school, gained a first-class external degree from London university *Won gold at the IMO between '67 and '69 and special prizes twice. Did he continue to produce mathematical papers after 1985? I suspect the answer is yes, and would love to know what he did after 1985.
He was teaching postgraduate students in the year 1987-88; I did Part III in Cambridge that year, where he lectured Reflection Groups. The most interesting subject of all I took that year, and I regret not doing that subject in the final exam.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Invertibility of elements in a left Noetherian ring Let $A$ be a left Noetherian ring. How do I show that every element $a\in A$ which is left invertible is actually two-sided invertible?
Recall that we get an isomorphism $A^\text{op} \to \operatorname{End}_A(A)$ by sending $a$ to the endomorphism $x \mapsto xa$. Here $A^\text{op}$ is the opposite ring. If $a$ is left invertible then the corresponding endomorphism $f$ is surjective, and if we can show that $f$ is injective then $f$ is invertible in $\operatorname{End}_A(A)$, whence $a$ is invertible in both $A^\text{op}$ and $A$. It isn't any harder to prove a more general statement: If an endomorphism of a Noetherian module is surjective, then it is an isomorphism. Here are some ideas for this. If $g$ is such an endomorphism then the increasing sequence of submodules $\{\operatorname{Ker}(g^n)\}$ must stabilize. Use this and the fact that each $g^n$ is surjective to show that the kernel is trivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
The infinity of random variable The problem is: For infinite independent Bernoulli trials, prove that the total number of successful trials $N$ have the following property: $$ [N < \infty] = \bigcup\limits_{n=1}^{\infty}\,[N \le n] $$ Actually this is just part of bigger problem in a book, and the equation is given as an obvious fact and as a hint without any explanation. What does the equation exactly mean? I guess square brace means set, but what's the definition of $[N < \infty]$?
Forget everything except that $N$ is a function from $\Omega$ to $\mathbb R^+$. Then $[N<\infty]$ is the set of $\omega$ in $\Omega$ such that $N(\omega)$ is finite and $[N\le n]$ is the set of $\omega$ in $\Omega$ such that $N(\omega)\le n$. Hence $[N\le n]\subseteq[N<\infty]$ for every $n$. For the other inclusion, note that $N(\omega)$ finite implies there exists $n$ such that $N(\omega)\le n$. Hence the equality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Motivation for the term "separable" in topology A topological space is called separable if contains a countable dense subset. This is a standard terminology, but I find it hard to associate the term to its definition. What is the motivation for using this term? More vaguely, is it meant to capture any suggestive image or analogy that I am missing?
On Srivatsan's request I'm making my comment into an answer, even if I have little to add to what I said in the MO-thread. As Qiaochu put it in a comment there: My understanding is it comes from the special case of ℝ, where it means that any two real numbers can be separated by, say, a rational number. In my answer on MO I provided a link to Maurice Fréchet's paper Sur quelques points du calcul fonctionnel, Rend. Circ. Mat. Palermo 22 (1906), 1-74 and quoted several passages from it in order to support that view: The historical importance of that paper is (among many other things) that it is the place where metric spaces were formally introduced. Separability is defined as follows: Amit Kumar Gupta's translation in a comment on MO: We will henceforth call a class separable if it can be considered in at least one way as the derived set of a denumerable set of its elements. And here's the excerpt from which I quoted on MO with some more context — while not exactly accurate, I think it is best to interpret classe $(V)$ as metric space in the following: Felix Hausdorff, in his magnum opus Mengenlehre (1914, 1927, 1934) wrote (p.129 of the 1934 edition): My loose translation: The simplest and most important case is that a countable set is dense in $E$ [a metric space]; $E = R_{\alpha}$ has at most the cardinality of the continuum $\aleph_{0}^{\aleph_0} = \aleph$. A set in which a countable set is dense is called separable, with a not exactly very suggestive but already established term by M. Fréchet. A finite or separable set is called at most separable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/63793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 1, "answer_id": 0 }
A conjecture about the form of some prime numbers Let $k$ be an odd number of the form $k=2p+1$ ,where $p$ denote any prime number, then it is true that for each number $k$ at least one of $6k-1$, $6k+1$ gives a prime number. Can someone prove or disprove this statement?
$p = 59 \implies k = 2p + 1 = 119$. Neither $6k+1 = 715$ nor $6k-1 = 713$ is prime. Values of $p$ that give counterexamples include: 59 83 89 103 109 137 139 149 151 163 193 239 269 281
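The search is easy to reproduce; a sketch with naive trial division (function names are my own):

```python
def is_prime(n):
    """Trial division; fine for the small numbers involved here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_counterexample(p):
    """Prime p whose k = 2p + 1 has both 6k - 1 and 6k + 1 composite."""
    k = 2 * p + 1
    return is_prime(p) and not is_prime(6 * k - 1) and not is_prime(6 * k + 1)

counterexamples = [p for p in range(2, 300) if is_counterexample(p)]
print(counterexamples)
```

The output matches the list in the answer, with $p = 59$ the smallest counterexample.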
{ "language": "en", "url": "https://math.stackexchange.com/questions/63862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Example of a model in set theory where the axiom of extensionality does not hold? I recently started a course in set theory and it was said that a model of set theory consists of a nonempty collection $U$ of elements and a nonempty collection $E$ of ordered pairs $(u,v)$, the components of which belong to $U$. Then the elements of $U$ are sets in the model and a set $u$ is interpreted as an element of $v$ if $(u,v) \in E$. It was also said that $U$ can also be a set and then $E$ is a relation in the set $U$ so that the ordered pair $(U,E)$ is a directed graph and reversely, any ordered graph $(U,E)$ can be used as a model of set theory. There have been examples of different models now where some of the axioms of ZFC do not hold and some do, but the axiom of extensionality has always held and I for some reason don't seem to comprehend enough of that axiom and its usage. Can someone tell an example of some collections $E$ and $U$ where the axiom of extensionality wouldn't hold?
The axiom of extensionality says: $$\forall x\forall y\left(x=y\leftrightarrow \forall z\left(z\in x\leftrightarrow z\in y\right)\right)$$ Obviously, if two sets are the same then they have the same elements. So in order to violate this axiom we need to have different sets which the model would think have the same elements. If you just want a model of sets in which the axiom of extensionality does not hold, consider for $a\neq b$ the following: $$\left(U=\Big\{\{a,b\},\{a\},a\Big\}, \in\right)$$ We have that $a\in\{a\}$, and $a\in\{a,b\}$. Since $a\neq b$ we have that $\{a\}\neq\{a,b\}$, however for all $x\in U$ we have $x\in\{a\}\leftrightarrow x\in\{a,b\}$. This is because $U$ does not know about $b$. It just knows that $\{a,b\}$ and $\{a\}$ are two distinct beings. It is unable to say why, in terms of $\in$ relation. The problems begins when you want more axioms. The more axioms you would want to have, the more complicated your universe will have to get.
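The failure is easy to exhibit mechanically; a tiny sketch (the encoding is my own) of the three-element model $U=\{\{a,b\},\{a\},a\}$ with the real $\in$ relation:

```python
a, b = "a", "b"
A = frozenset({a, b})   # the set {a, b}
B = frozenset({a})      # the set {a}
U = [A, B, a]           # the universe; note that b itself is NOT in U

def mem(x, y):
    """The model's membership relation: x E y iff x is literally a member of y."""
    return isinstance(y, frozenset) and x in y

# A and B are distinct, yet no element *of U* separates them,
# so extensionality fails inside this model:
assert A != B
assert all(mem(x, A) == mem(x, B) for x in U)
```

The witness $b$ that distinguishes $\{a,b\}$ from $\{a\}$ lives outside the universe, which is exactly why the model cannot "see" the difference.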
{ "language": "en", "url": "https://math.stackexchange.com/questions/63910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
How do I reflect a function about a specific line? Starting with the graph of $f(x) = 3^x$, write the equation of the graph that results from reflecting $f(x)$ about the line $x=3$. I thought that it would be $f(x) = 3^{-x-3}$ (aka shift it three units to the right and reflect it), but it's wrong. The right answer is $f(x) = 3^{-x+6}$ but I just can't get to it! An explained step by step would be appreciated so I can follow what is being done. Thanks in advance!
Your idea will work if you just carry it fully through. First shift three units to the left, so the line of reflection becomes the y axis, then flip, and finally remember to shift three units back to the right to put the center line back where it belongs. (This gives the $f(6-x)$ solution you already know).
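A quick numeric check of that recipe (Python; the specific sample points are arbitrary): the reflected graph $g(x) = f(6-x)$ must take, at distance $d$ to the right of $x=3$, the value $f$ takes at the same distance to the left.

```python
f = lambda x: 3 ** x      # the original function
g = lambda x: f(6 - x)    # shift left by 3, flip about the y-axis, shift back

# Points symmetric about the mirror line x = 3 swap values between f and g.
symmetric = all(abs(g(3 + d) - f(3 - d)) < 1e-9 for d in (0.0, 0.5, 1.0, 2.5))
on_mirror = (g(3) == f(3))  # the mirror line itself is fixed
```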
{ "language": "en", "url": "https://math.stackexchange.com/questions/63973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Gandalf's adventure (simple vector algebra) So, I found the correct answer to this homework question, but I was hoping there was an easier way to find the same answer. Here's the question: Gandalf the Grey started in the Forest of Mirkwood at a point with coordinates $(-2, 1)$ and arrived in the Iron Hills at the point with coordinates $(-1, 6)$. If he began walking in the direction of the vector $\bf v = 5 \mathbf{I} + 1 \mathbf{J}$ and changes direction only once, when he turns at a right angle, what are the coordinates of the point where he makes the turn. The answer is $((-1/13), (18/13))$. Now, I know that the dot product of two perpendicular vectors is $0$, and the sum of the two intermediate vectors must equal $\langle 1, 5\rangle$. Also, the tutor solved the problem by using a vector-line formula which had a point, then a vector multiplied by a scalar. I'm looking for the easiest and most intuitive way to solved this problem. Any help is greatly appreciated! I'll respond as quickly as I can.
His first leg is $a(5,1)$ and his second leg is $b(1,-5)$ (because it is perpendicular to the first), with total displacement $(1,5)$. So $$5a+b=1,\qquad a-5b=5.$$ Then the turning point is $(-2,1)+a(5,1)$, which should equal (here is a check) $(-1,6)-b(1,-5)$.
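Solving that little system exactly and checking that both expressions for the turning point agree (a Python sketch using rational arithmetic):

```python
from fractions import Fraction as F

# Solve 5a + b = 1 and a - 5b = 5 exactly:
# multiply the first equation by 5 and add the second to get 26a = 10.
a = F(10, 26)        # = 5/13
b = 1 - 5 * a        # = -12/13

start, end = (F(-2), F(1)), (F(-1), F(6))
turn_fwd = (start[0] + a * 5, start[1] + a * 1)   # (-2,1) + a(5,1)
turn_bwd = (end[0] - b * 1, end[1] - b * (-5))    # (-1,6) - b(1,-5)
```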
{ "language": "en", "url": "https://math.stackexchange.com/questions/64021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Is $M^2-[M]$ a local martingale when $M$ is a local martingale? I've learned that for each continuous local martingale $M$, there's a unique continuous adapted non-decreasing process $[M]$ such that $M^2-[M]$ is a continuous local martingale. For a local martingale $M$, is there a adapted non-decreasing process $[M]$ such that $M^2-[M]$ is a local martingale? (i.e. Do we have an analogous result for discontinuous local martingales?) Thank you. (The notes I have only consider the continuous case. I tried to adapt the argument, but ran into various problems...)
The answer is yes. For a good exposition of the semimartingale theory (includes local martingales, not necessarily continuous), I recommend Peter Medvegyev's "Stochastic Integration Theory". And the general discontinuous (but still cadlag) theory is harder than the continuous case, but also fun to learn!
{ "language": "en", "url": "https://math.stackexchange.com/questions/64071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What is the logical operator for but? I saw a sentence like, I am fine but he has flu. Now I have to convert it into logical sentence using logical operators. I do not have any idea what should but be translated to. Please help me out. Thanks
I agree with Jiri on their interpretation. But coming from an AI background, I have a different sort of take: Your example "I am fine but he has flu" has to do with the common knowledge between the speaker and the audience. The speaker has a certain belief about the above common knowledge. The attempt is to warn the audience that the proposition next to 'but' is unexpected, given the proposition before 'but'. Let us denote the proposition of a sentence $S$ before 'but' as $before(S)$ and after 'but' as $after(S)$. Let's denote the information content of a proposition $B$ when $A$ is already known as $I(B|A)$. Then, 'but' means: $I(after(S)|before(S)) > I(\lnot after(S)|before(S))$. That is, the information content (surprise) of $after(S)$ is more than $\lnot after(S)$ when $before(S)$ is already known.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 8, "answer_id": 1 }
Interpolating point on a quad I have a quad defined by four arbitrary points, A, B, C and D all on the same plane. I then have a known point on that quad, P. I want to find the value of 's' as shown in the diagram above, where t and s are parametric values in the range (0, 1) that interpolate along the edges.
Let $E$ be the point you show on $AB$ and $F$ be the point on $CD$. Then $E=A+t(B-A), F=C+t(D-C), P=E+s(F-E)$, where you can write these out componentwise.
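Written out, these equations say $P$ is the bilinear interpolation of the four corners, and recovering $(t,s)$ from a known $P$ is "inverse bilinear interpolation". Below is a sketch in Python that inverts the map with a few Newton steps; the starting guess $(0.5, 0.5)$ and the iteration count are my own arbitrary choices, not part of the answer above.

```python
def bilerp(A, B, C, D, t, s):
    """P = E + s(F - E), with E = A + t(B - A) and F = C + t(D - C)."""
    E = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))
    F = (C[0] + t * (D[0] - C[0]), C[1] + t * (D[1] - C[1]))
    return (E[0] + s * (F[0] - E[0]), E[1] + s * (F[1] - E[1]))

def invert(P, A, B, C, D, iters=25):
    """Newton iteration for (t, s) such that bilerp(A,B,C,D,t,s) = P."""
    t, s = 0.5, 0.5
    for _ in range(iters):
        E = (A[0] + t * (B[0] - A[0]), A[1] + t * (B[1] - A[1]))
        F = (C[0] + t * (D[0] - C[0]), C[1] + t * (D[1] - C[1]))
        Pc = (E[0] + s * (F[0] - E[0]), E[1] + s * (F[1] - E[1]))
        rx, ry = P[0] - Pc[0], P[1] - Pc[1]
        # Jacobian columns: dP/dt and dP/ds
        dt_x = (1 - s) * (B[0] - A[0]) + s * (D[0] - C[0])
        dt_y = (1 - s) * (B[1] - A[1]) + s * (D[1] - C[1])
        ds_x, ds_y = F[0] - E[0], F[1] - E[1]
        det = dt_x * ds_y - ds_x * dt_y
        t += (rx * ds_y - ds_x * ry) / det   # Cramer's rule for the 2x2 solve
        s += (dt_x * ry - rx * dt_y) / det
    return t, s
```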
{ "language": "en", "url": "https://math.stackexchange.com/questions/64176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Is there any math operation defined to obtain vector $[4,3,2,1]$ from $[1,2,3,4]$? I mean have it been studied, does it have a name? Like Transpose, Inverse, etc.. have names. I wonder if the "inversion" of the components position have a name so then I could search material on this topic. Thanks
I do not know anything about reversal specifically, but it is a special case of what is known as a permutation matrix.
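Concretely, the reversal is multiplication by the permutation matrix with ones on the anti-diagonal, often called the exchange matrix. A quick NumPy illustration:

```python
import numpy as np

v = np.array([1, 2, 3, 4])
J = np.fliplr(np.eye(4, dtype=int))  # identity flipped left-to-right: the exchange matrix

reversed_v = J @ v                   # reverses the order of the components
```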
{ "language": "en", "url": "https://math.stackexchange.com/questions/64250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing Grades-getting average from a weighted test Ok well I have this basic problem in which there are grades (4 grades). There's an upcoming final that is weighted to be some fraction toward the final grade (2/3). I have to find what the final grade has to be to get an average grade of 80, and then 90. I completely forgot the procedure as to how to tackle this problem. Anybody have any hints/tricks to start me off?
The key to solving this problem is to realize that there are essentially two components that will go into the final grade: 1) The average of the previous four tests 2) The grade on the final Thus we can set it up as follows: Let $G =$ Grade in the class, $a =$ average score from the previous 4 tests, and $f =$ final exam score. \begin{align*} G = \frac{2f}{3} + \frac{a}{3} \end{align*} Using your numbers you can solve for whatever possibilities you need. EDIT: you can also use this approach for any different weightings by simply changing the fractional amounts that $a$ and $f$ are worth, for example if you want the final $f$ to be worth $3/4$ of the final grade then it would reflect as: \begin{align*} G = \frac{3f}{4} + \frac{a}{4} \end{align*}
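Solving that equation for the final-exam score gives $f = \bigl(G - \frac{a}{3}\bigr)\big/\frac{2}{3}$. A small sketch (the test average of 75 here is made up; plug in your own):

```python
def required_final(target, avg, final_weight=2 / 3):
    """Final-exam score f needed so that final_weight*f + (1-final_weight)*avg = target."""
    return (target - (1 - final_weight) * avg) / final_weight

# e.g. with a hypothetical previous-test average of 75:
need_80 = required_final(80, 75)
need_90 = required_final(90, 75)
```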
{ "language": "en", "url": "https://math.stackexchange.com/questions/64302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are column operations legal in matrices also? In linear algebra we have been talking a lot about the three elementary row operations. What I don't understand is why we can't multiply by any column by a constant? Since a matrix is really just a grouping of column vectors, shouldn't we be able to multiply a whole column by a constant but maintain the same set of solutions for the original and resulting matrices?
I think the reason "elementary" column operations are not valid is our convention for the form of linear equations: we agree to write a linear system the way we are used to writing it, with each column attached to one fixed variable. Try writing these equations out explicitly and applying column operations to them, and see what happens: you are changing the variables, not just the equations.
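One concrete way to see the asymmetry (a NumPy sketch; the particular system is made up): a row operation applied to both sides leaves the solution set unchanged, while scaling a column silently rescales the corresponding variable, so the "same" augmented matrix now describes a different solution.

```python
import numpy as np

A = np.array([[1., 2.], [3., 5.]])
b = np.array([5., 11.])
x = np.linalg.solve(A, b)            # solution of the original system

# Row operation: scale row 0 by 2 on BOTH A and b -> same solution.
A_row, b_row = A.copy(), b.copy()
A_row[0] *= 2
b_row[0] *= 2
x_row = np.linalg.solve(A_row, b_row)

# "Column operation": scale column 0 of A by 2, keep b -> the solution changes;
# it amounts to substituting x0 = 2*x0', i.e. renaming the variable.
A_col = A.copy()
A_col[:, 0] *= 2
x_col = np.linalg.solve(A_col, b)
```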
{ "language": "en", "url": "https://math.stackexchange.com/questions/64326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 8, "answer_id": 7 }
Showing group with $p^2$ elements is Abelian I have a group $G$ with $p^2$ elements, where $p$ is a prime number. Some (potentially) useful preliminary information I have is that there are exactly $p+1$ subgroups with $p$ elements, and with that I was able to show $G$ has a normal subgroup $N$ with $p$ elements. My problem is showing that $G$ is abelian, and I would be glad if someone could show me how. I had two potential approaches in mind and I would prefer if one of these were used (especially the second one). First: The center $Z(G)$ is a normal subgroup of $G$ so by Lagrange's theorem, if $Z(G)$ has anything other than the identity, its size is either $p$ or $p^2$. If $p^2$ then $Z(G)=G$ and we are done. If $|Z(G)|=p$ then the quotient group of $G$ factored out by $Z(G)$ has $p$ elements, so it is cyclic and I can prove from there that this implies $G$ is abelian. So can we show there's something other than the identity in the center of $G$? Second: I list out the elements of some other subgroup $H$ with $p$ elements such that the intersection of $H$ and $N$ is only the identity (if any more, due to prime order the intersected elements would generate the entire subgroups). Let $N$ be generated by $a$ and $H$ be generated by $b$. We can show $NH = G$, i.e. every element in $G$ can be written as $a^k b^l$. So for this method, we just need to show $ab=ba$ (remember, these are not general elements in the set, but the generators of $N$ and $H$). Do any of these methods seem viable? I understand one can give very strong theorems using Sylow theorems and related facts, but I am looking for an elementary solution (no Sylow theorems, facts about p-groups, centralizers) but definitions of centres and normalizers are fine.
Here is a proof of $|Z(G)|\not=p$ which does not invoke the proposition "if $G/Z(G)$ is cyclic then $G$ is abelian". Suppose $|Z(G)|=p$. Let $x\in G\setminus Z(G)$. Lagrange's theorem and $p$ being prime dictate that the centralizer $Z(x)$ of $x$ has order $1$, $p$, or $p^2$. Note that $\{x\}\uplus Z(G)\subset Z(x)$, so $Z(x)$ has at least $p+1$ elements; hence $|Z(x)|=p^2$. But this implies $x\in Z(G)$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 8, "answer_id": 7 }
Question regarding counting poker dice Problem Poker dice is played by simultaneously rolling 5 dice. How many ways can we form "1 pair", "2 pairs"? For one pair, I got the answer right away. First I consider there are 5 spots for 5 dice. Then I pick 2 places out of 5, which means there are 3 places left, so we have to choose 3 out of 3 which is 1 way. Hence, I have: $${{5}\choose{2}} \cdot 6 {{3}\choose{3}} \cdot 5 \cdot 4 \cdot 3 = 3600.$$ However, I couldn't figure out why I got two pairs wrong. First, I pick 2 places for the first pair, then its rank. Next, 2 places for the second pair, and its rank. Since there is only 1 place left, I pick the rank for the last die. $${{5}\choose{2}} \cdot 6 \cdot {{3}\choose{2}} \cdot 5 \cdot 4 = 3600.$$ But the correct answer is 1800, which means I need to divide by a factor of 2. I guess that might be because the order of the two pairs can be switched, but I wonder is there a better way to count it? I'm so confused! Any idea?
You’ve correctly identified the mistake: you’ve counted each hand twice, because the pairs can be chosen in either order. For instance, you’ve counted the hand $11223$ once with $22$ as the first pair and $33$ as the second pair, and once again with $33$ as the first pair and $22$ as the second pair. Here’s a way to count that avoids that problem. First pick the two denominations that will be pairs; this can be done in $\binom62$ ways. Then pick the remaining denomination; this can be done in $4$ ways. Now choose which of the $5$ dice will show the singleton; this can be done in $5$ ways. Finally, choose which $2$ of the remaining $4$ dice will show the smaller of the two pairs; this can be done in $\binom42$ ways. The total is then $\binom62\cdot4\cdot5\cdot\binom42=1800$ ways. The key to avoiding the double counting is to choose the positions for a specific pair. Once you know where the smaller pair and the singleton are, you automatically know where the larger pair is: there’s nothing to choose.
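With only $6^5 = 7776$ possible rolls, both counts are easy to confirm by brute force (Python):

```python
from itertools import product
from collections import Counter

def pattern(roll):
    """Sorted multiplicities of the faces, e.g. (1, 2, 2) for two pairs."""
    return tuple(sorted(Counter(roll).values()))

rolls = list(product(range(1, 7), repeat=5))
one_pair = sum(pattern(r) == (1, 1, 1, 2) for r in rolls)
two_pairs = sum(pattern(r) == (1, 2, 2) for r in rolls)
```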
{ "language": "en", "url": "https://math.stackexchange.com/questions/64426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is lambda calculus named after that specific Greek letter? Why not “rho calculus”, for example? Where does the choice of the Greek letter $\lambda$ in the name of “lambda calculus” come from? Why isn't it, for example, “rho calculus”?
Dana Scott, who was a PhD student of Church, addressed this question. He said that, in Church's words, the reasoning was "eeny, meeny, miny, moe" — in other words, an arbitrary choice for no reason. He specifically debunked Barendregt's version in a recent talk at the University of Birmingham. * *Source *Video
{ "language": "en", "url": "https://math.stackexchange.com/questions/64468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 4, "answer_id": 0 }
Questions about averaging I have some trouble with averages. Here are two questions rolled in one: why is $$\frac{\prod _{n=1}^N \left(1-\text{rnd}_n\right)}{N} \neq \prod _{n=1}^N \frac{1-\text{rnd}_n}{N},$$ where $\text{rnd}_n$ is a random Gaussian real? And how can I get $\frac{\prod _{n=1}^N \left(1-\text{rnd}_n\right)}{N}$ using only the mean and the variance of rnd, not the actual values? So I only know how rnd is shaped, but not the values, which are supposed to average out anyway. What rule about averaging am I violating?
As Ross has mentioned, you cannot know the actual value of the expressions you wrote based only on the characteristics of random variables such as mean or a variance. You can only ask for the distribution of these expressions. E.g. in the case $\xi_n$ (rnd$_n$) are iid random variables, you can use the fact that $$ \mathsf E[(1-\xi_i)(1-\xi_j)]=\mathsf E(1-\xi_i)\mathsf E(1-\xi_j) = (\mathsf E(1-\xi_i))^2$$ which leads to the fact that $$ \mathsf E \pi_N = \frac1N[\mathsf E(1-\xi_1)]^N = \frac{(1-a)^N}{N} $$ where $a = \mathsf E\xi_1$. Here I denoted $$ \pi_N = \frac{\prod\limits_{n=1}^N(1-\xi_n)}{N}. $$ This holds regardless of the distribution of $\xi$, just integrability is needed. In the same way you can also easily calculate the variance of this expression based only on the variance and expectation of $\xi$ (if you want, I can also provide it). Finally, there is a small hope that for the Gaussian r.v. $\xi$ the distribution of this expression will be nice since it includes the products of normal random variables. On your request: variance. Recall that for any r.v. $\eta$ holds $V \eta = \mathsf E \eta^2 - (\mathsf E\eta)^2$ hence $\mathsf E\eta^2 = V\eta+(\mathsf E\eta)^2$. As I told, you don't need to know the distribution of $\xi$, just its expectation $a$ and variance $\sigma^2$. Since we already calculated $\mathsf E\pi_N$, we just need to calculate $\mathsf E\pi^2_N$: $$ \mathsf E\pi_N^2 = \frac1{N^2}\mathsf E\prod\limits_{n=1}^N(1-\xi_n)^2 = \frac{1}{N^2}\prod\limits_{n=1}^N\mathsf E(1-\xi_n)^2 = \frac{1}{N^2}\left(\mathsf E(1-\xi_1)^2\right)^N. $$ Now, $$ \mathsf E(1-\xi_1)^2 = \mathsf E(1-2\xi_1+\xi^2_1) = 1-2a+\mathsf E\xi_1^2 = 1-2a+a^2+\sigma^2 = (1-a)^2+\sigma^2 $$ and $$ \mathsf E\pi_N^2 = \frac{1}{N^2}\left((1-a)^2+\sigma^2\right)^N. $$ As a consequence, $$ V\pi_N = \frac{1}{N^2}\left[\left((1-a)^2+\sigma^2\right)^N - (1-a)^{2N}\right]. $$
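A Monte Carlo sanity check of both formulas for i.i.d. Gaussian $\xi_n$ (NumPy; the parameter values $N=5$, $a=0.1$, $\sigma=0.2$ are arbitrary):

```python
import numpy as np

N, a, sigma = 5, 0.1, 0.2
rng = np.random.default_rng(0)
xi = rng.normal(a, sigma, size=(1_000_000, N))   # each row is (xi_1, ..., xi_N)
pi = np.prod(1 - xi, axis=1) / N                 # one sample of pi_N per row

mean_formula = (1 - a) ** N / N
var_formula = (((1 - a) ** 2 + sigma ** 2) ** N - (1 - a) ** (2 * N)) / N ** 2
```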
{ "language": "en", "url": "https://math.stackexchange.com/questions/64501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that every element of a finite group has an order I was reading Nielsen and Chuang's "Quantum Computation and Quantum Information" and in the appendices was a group theory refresher. In there, I found this question: Exercise A2.1 Prove that for any element $g$ of a finite group, there always exists a positive integer $r$ such that $g^r=e$. That is, every element of such a group has an order. My first thought was to look at small groups and try an inductive argument. So, for the symmetric groups of small order e.g. $S_1, S_2, S_3$ the integer $r$ is less than or equal to the order of the group. I know this because the groups are small enough to calculate without using a general proof. For example, in $S_3$ there is an element that rearranges the identity $\langle ABC \rangle$ element by shifting one character to the left e.g. $s_1 = \langle BCA \rangle$. Multiplying this element by itself produces the terms $s_1^2 = \langle CAB \rangle$; and $s_1^3 = \langle ABC \rangle$ which is the identity element, so this element has order three. I have no idea if this relation holds for $S_4$ which means I am stuck well before I get to the general case. There's a second question I'd like to ask related to the first. Is the order or period of any given element always less than or equal to the order of the group it belongs to?
Sometimes it is much clearer to argue the general case. Consider any $g \in G$, a finite group. Since $G$ is a group, we know that $g\cdot g = g^2 \in G$. Similarly, $g^n \in G$ for any $n$. So there is a sequence of elements, $$g, g^2, g^3, g^4, g^5, \ldots, g^n, \ldots $$ in $G$. Now since $G$ is finite, there must be a pair of number $m \neq n$ such that $g^m = g^n$ (well, there are many of these, but that's irrelevant to this proof). Can you finish the proof from this point? What does $g^m = g^n$ imply in a group? Hope this helps!
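The finishing step ($g^m=g^n$ with $m<n$ gives $g^{n-m}=e$) also yields an algorithm: keep multiplying by $g$ until you hit $e$. A sketch for the multiplicative group of units mod $n$ (any concrete finite group would do):

```python
from math import gcd

def order(g, n):
    """Order of g in the multiplicative group of units mod n (requires gcd(g, n) = 1)."""
    assert gcd(g, n) == 1
    k, x = 1, g % n
    while x != 1:        # must terminate: the powers of g live in a finite group
        x = x * g % n
        k += 1
    return k
```

For example, mod $15$ the powers of $2$ are $2, 4, 8, 1$, so the order of $2$ is $4$; note it divides the group order $\varphi(15)=8$. That divisibility is Lagrange's theorem, which answers the second question: the order of an element always divides, and hence is at most, the order of the group.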
{ "language": "en", "url": "https://math.stackexchange.com/questions/64575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
what kind of quadrilateral is ABCD? ABCD is a quadrilateral, given $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}$, then what kind of quadrilateral is ABCD? I guess it's a rectangle, but how to prove it? If the situation becomes $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}=\overrightarrow{DA}\cdot\overrightarrow{AB}$, I can easily prove ABCD is a rectangle. So, the question is: given $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}$, can we get $\overrightarrow{AB}\cdot\overrightarrow{BC}=\overrightarrow{BC}\cdot\overrightarrow{CD}=\overrightarrow{CD}\cdot\overrightarrow{DA}=\overrightarrow{DA}\cdot\overrightarrow{AB}$? Thanks.
Some informal degrees-of-freedom analysis: * *An arbitrary quadrilateral on a plane is described by $8$ parameters: coordinates of each vertex (to simplify matter, I don't take quotient by isometries). *A rectangle on a plane is described by $5$ parameters: endpoints of one side and (signed) length of the other side. We should not expect two equations to restrict the $8$-dimensional space of quadrilaterals down to $5$-dimensional space of rectangles. Three equations (also given in the post) are enough. The above is not a rigorous proof because two equations $f=0=g$ can be made into one $f^2+g^2=0$, etc. One needs some transversality consideration to make it work. But it's easier to just quote a geometric argument given by Henning Makholm in the comments. If you place $B$, $C$, and $D$ arbitrarily, then each of the two equations between dot products defines a line that $A$ must lie on. Putting A at the intersection between these two lines gives you a quadrilateral that satisfies the condition. So you cannot conclude anything about the angle at $C$ (i.e., it doesn't have to be a rectangle) -- nor anything about the relative lengths of $BC$ versus $CD$. A concrete example would be $A(2,5)$, $B(-1,1)$, $C(0,0)$, $D(1,0)$. Doesn't look like anything that has a nice name. Neither does $A(1,1)$, $B(1,2)$, $C(0,0)$, $D(0,2)$. The first of Henning's examples is below (the second isn't even convex)
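Henning's first concrete example is easy to verify numerically (Python; vectors are head minus tail, e.g. $\overrightarrow{AB} = B - A$):

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def vec(p, q):
    """Vector from p to q."""
    return (q[0] - p[0], q[1] - p[1])

A, B, C, D = (2, 5), (-1, 1), (0, 0), (1, 0)
AB, BC, CD, DA = vec(A, B), vec(B, C), vec(C, D), vec(D, A)

three_equal = dot(AB, BC) == dot(BC, CD) == dot(CD, DA)   # the given hypothesis
fourth = dot(DA, AB)                                      # not forced to match
```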
{ "language": "en", "url": "https://math.stackexchange.com/questions/64634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Extension of $3\sigma$ rule For the normally distributed r.v. $\xi$ there is a rule of $3\sigma$ which says that $$ \mathsf P\{\xi\in (\mu-3\sigma,\mu+3\sigma)\}\geq 0.99. $$ Clearly, this rule need not hold for other distributions. I wonder if there are lower bounds for $$ p(\lambda) = P\{\xi\in (\mu-\lambda\sigma,\mu+\lambda\sigma)\} $$ regardless of the distribution of the real-valued random variable $\xi$. If we focus only on absolutely continuous distributions, a naive approach is to consider the variational problem $$ \int\limits_{\int\limits xf(x)\,dx - \lambda\sqrt{\int\limits x^2f(x)\,dx-(\int\limits xf(x)\,dx)^2}}^{\int\limits xf(x)\,dx + \lambda\sqrt{\int\limits x^2f(x)\,dx-(\int\limits xf(x)\,dx)^2}} f(x)\,dx \to\inf\limits_f $$ which may be too naive. The other problem is that distributions need not be absolutely continuous. So my question is whether there are known lower bounds for $p(\lambda)$.
In general this is Chebyshev's inequality $$\Pr(|X-\mu|\geq k\sigma) \leq \frac{1}{k^2}.$$ Equality is achieved by the discrete distribution $\Pr(X=\mu)=1-\frac{1}{k^2}$, $\Pr(X=\mu-k\sigma)=\frac{1}{2k^2}$, $\Pr(X=\mu+k\sigma)=\frac{1}{2k^2}$. This can be approached arbitrarily closely by an absolutely continuous distribution. Letting $k=3$, this gives $$\Pr(|X-\mu|\geq 3\sigma) \leq \frac{1}{9} \approx 0.11;$$ while letting $k=10$, this gives $$\Pr(|X-\mu|\geq 10\sigma) \leq \frac{1}{100} =0.01.$$ so these bounds are relatively loose for a normal distribution. This diagram (from my page here) compares the bounds. Red is Chebyshev's inequality; blue is a one-tailed version of Chebyshev's inequality; green is a normal distribution; and pink is a one-tailed normal distribution.
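Both claims are quick to check numerically (Python, with $k=3$): the three-point distribution has mean $\mu$, variance $\sigma^2$, and tail probability exactly $1/k^2$, while the standard normal tail $\operatorname{erfc}(k/\sqrt2)$ sits far below the bound.

```python
from math import erfc, sqrt

k, mu, sigma = 3, 0.0, 1.0

# Extremal discrete distribution: X = mu w.p. 1 - 1/k^2, mu +/- k*sigma w.p. 1/(2k^2) each.
probs = [1 - 1 / k**2, 1 / (2 * k**2), 1 / (2 * k**2)]
values = [mu, mu - k * sigma, mu + k * sigma]

mean = sum(p * v for p, v in zip(probs, values))
var = sum(p * (v - mean) ** 2 for p, v in zip(probs, values))
tail = probs[1] + probs[2]              # P(|X - mu| >= k*sigma), attains 1/k^2

normal_tail = erfc(k / sqrt(2))         # P(|Z| >= k) for a standard normal
```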
{ "language": "en", "url": "https://math.stackexchange.com/questions/64739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Equicontinuous set Let $\mathcal E$ be the set of all functions $u\in C^1([0,2])$ such that $u(x)\geq 0$ for every $x\in[0,2]$ and $|u'(x)+u^2(x)|<1$ for every $x\in [0,2]$. Prove that the set $\mathcal F:=\{u_{|[1,2]}: u\in\mathcal E\}$ is an equicontinuous subset of $C^0([1,2]).$ The point I am stuck on is that I can't see how to use the strange hypothesis imposed on every $u\in\mathcal E$; in particular I solved the two differential equations $$u'(x)=1-u^2(x),\qquad u'(x)=-1-u^2(x),$$ which turn out to be the extremal cases of the given condition. In particular the two solutions are $$u_1(x)=\frac{ae^x-be^{-x}}{ae^x+be^{-x}},\qquad u_2(x)=\frac{a\cos(x)-b\sin(x)}{a\cos(x)+b\sin(x)}.$$ I feel however I'm not on the right path, so any help is appreciated. P.S. Those above are a big part of my efforts and thoughts on this problem so I hope they won't be completely useless :P Edit In the first case the derivative is $$u'_1(x)=\frac{2ab}{(ae^x+be^{-x})}\geq 0$$ while for the other function we have, for $x\in[0,2],$ $$u'_2(x)=-\frac{\sin(2x) ab}{(a\cos(x)+b\sin(x))^2}\leq 0.$$ Moreover $u_1(1)>u_2(1)$, since $$\frac{ae-be^{-1}}{ae+be^{-1}}>\frac{a\cos(1)-b\sin(1)}{a\sin(1)+b\cos(1)}\Leftrightarrow (a^2e+be^{-1})(\sin(1)-\cos(1))>0,$$ and $\sin(1)>\cos(1).$ Now, are all these bounds I've found useful for solving the problem?
Suppose $u \in \mathcal{E}$. It's enough to show $u(t) \le 3$ for all $t \in [1,2]$, since then we'll have $-10 \le u'(t) \le 1$, and any set of functions with uniformly bounded first derivatives is certainly equicontinuous. We also know that $u' \le 1$ on $[0,2]$, and so by the mean value theorem it suffices to show that $u(1) \le 2$. If $u(0) \le 1$ we are also done, so assume $u(0) > 1$. Let $v$ be the solution of $v'(t) = 1 - v(t)^2$ with $v(0) = u(0) > 1$. This is given by your formula for $u_1$ with, say, $b=1$ and some $a < -1$. I claim $u(t) \le v(t)$. This will complete the proof, since it is easy to check that $v(1) < 2$. (We have $v(1) = \frac{ae-e^{-1}}{ae+e^{-1}}$, which is increasing in $a$; compute its value at $a=-1$.) Set $w(t) = v(t) - u(t)$. We have $w(0)=0$ and $w'(t) > u(t)^2 - v(t)^2$. Suppose to the contrary there exists $s \in [0,1]$ such that $u(s) > v(s)$; let $s_0$ be the infimum of all such $s$. Then necessarily $u(s_0) = v(s_0)$, so $w(s_0)=0$ and $w'(s_0) > u(s_0)^2 - v(s_0)^2 = 0$. So for all small enough $\epsilon$, $w(s_0 + \epsilon) > 0$. This contradicts our choice of $s_0$ as the infimum. So in fact $u \le v$ and we are done.
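A numerical sanity check of the comparison argument (plain RK4 in Python; the step count and starting values are arbitrary): every solution of $v'=1-v^2$ started above $1$ satisfies $1 < v(1) < \coth 1 \approx 1.313 < 2$, and $v(1)$ grows with $v(0)$.

```python
from math import tanh

def v_at_1(v0, steps=1000):
    """Integrate v' = 1 - v^2 from v(0) = v0 to t = 1 with classic RK4."""
    h, v = 1.0 / steps, v0
    f = lambda y: 1 - y * y
    for _ in range(steps):
        k1 = f(v)
        k2 = f(v + h * k1 / 2)
        k3 = f(v + h * k2 / 2)
        k4 = f(v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return v

coth1 = 1 / tanh(1)   # the supremum of v(1), approached as v(0) -> infinity
vals = [v_at_1(v0) for v0 in (1.5, 3.0, 10.0, 100.0)]
```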
{ "language": "en", "url": "https://math.stackexchange.com/questions/64796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is the significance of the three nonzero requirements in the $\varepsilon-\delta$ definition of the limit? What are the consequences of the three nonzero requriments in the definition of the limit: $\lim_{x \to a} f(x) = L \Leftrightarrow \forall$ $\varepsilon>0$, $\exists$ $\delta>0 :\forall$ $x$, $0 < \lvert x-a\rvert <\delta \implies \lvert f(x)-L \rvert < \varepsilon$ I believe I understand that: * *if $0 = \lvert x-a\rvert$ were allowed the definition would require that $f(x) \approx L$ at $a$ ($\lvert f(a)-L \rvert < \varepsilon$); *if $\varepsilon=0$ and $\lvert f(a)-L \rvert \le \varepsilon$ were allowed the theorem would require that $f(x) = L$ near $a$ (for $0 < \lvert x-a\rvert <\delta$); and *if $\delta=0$ were allowed (and eliminating the tautology by allowing $0 \le \lvert x-a\rvert \le \delta$) the definition would simply apply to any function where $f(a) = L$, regardless of what happened in the neighborhood of $f(a)$. Of course if (2'.) $\varepsilon=0$ were allowed on its own, the theorem would never apply ($\lvert f(a)-L \rvert \nless 0$). What I'm not clear about is [A] the logical consequences of (3'.) allowing $\delta=0$ its own, so that: $\lim_{x \to a} f(x) = L \Leftrightarrow \forall$ $\varepsilon>0$, $\exists$ $\delta≥0 :\forall$ $x$, $0 < \lvert x-a\rvert <\delta \implies \lvert f(x)-L \rvert < \varepsilon$ and [B] whether allowing both 1. and 2. would be equivalent to requiring continuity?
For (3), if $\delta = 0$ was allowed the definition would apply to everything: since $|x-a| < 0$ is impossible, it implies whatever you like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Proving the AM-GM inequality for 2 numbers $\sqrt{xy}\le\frac{x+y}2$ I am having trouble with this problem from my latest homework. Prove the arithmetic-geometric mean inequality. That is, for two positive real numbers $x,y$, we have $$ \sqrt{xy}≤ \frac{x+y}{2} .$$ Furthermore, equality occurs if and only if $x = y$. Any and all help would be appreciated.
Since $x$ and $y$ are positive, we can write them as $x=u^2$, $y=v^2$ with $u,v>0$. Then $$(u-v)^2 \geq 0 \Rightarrow u^2 + v^2 \geq 2uv,$$ that is, $x+y\geq 2\sqrt{xy}$, which is precisely it. Moreover, equality holds if and only if $(u-v)^2=0$, i.e. $u=v$, i.e. $x=y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/64881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 4 }
Calculating Basis Functions for DFTs (64 Samples) I am attempting to graph some 64-sample basis functions in MATLAB, and getting inconsistent results -- which is to say, I'm getting results that are still sinusoidal, but don't have the frequency they ought. Here's a graph of what is supposed to be my c8 basis function: Unfortunately, it only has 7 peaks, which indicates that I seem to have botched the frequency somehow. I'm assuming my problem lies somewhere within how I'm trying to graph in MATLAB, and not an error in the function itself. Here's my code: n = linspace(0, 2*pi*8, 64); x = cos(2*pi*8*n/64); plot(n,x) I'm inclined to believe x has the correct formula, but I'm at a loss as to how else to formulate an 'n' to graph it with. Why am I getting a result with the incorrect frequency?
You're plotting the function $\cos n\pi/4$, which has period $8$, and thus $8$ full periods from $0$ to $64$, but you're only substituting values from $0$ to $16\pi$. Since $16\pi\approx50$, you're missing a bit less than two of the periods. From what it seems you're trying to do, you should be plotting the function from $0$ to $64$, i. e. replace 2*pi*8 by 64 in the first line.
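A way to confirm the fix in a NumPy translation (the same idea as `0:63` in MATLAB, the standard integer sample grid for DFT work): with all 64 integer samples, the signal's energy lands exactly in DFT bin 8, i.e. eight full periods across the window.

```python
import numpy as np

n = np.arange(64)                     # integer sample indices 0..63
x = np.cos(2 * np.pi * 8 * n / 64)    # the intended c8 basis function

spectrum = np.abs(np.fft.fft(x))
peak_bin = int(np.argmax(spectrum[:32]))  # search the non-negative frequencies
```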
{ "language": "en", "url": "https://math.stackexchange.com/questions/64947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that the Binet formula gives the terms of the Fibonacci Sequence? This formula provides the $n$th term in the Fibonacci Sequence, and is defined using the recurrence formula: $u_n = u_{n − 1} + u_{n − 2}$, for $n > 1$, where $u_0 = 0$ and $u_1 = 1$. Show that $$u_n = \frac{(1 + \sqrt{5})^n - (1 - \sqrt{5})^n}{2^n \sqrt{5}}.$$ Please help me with its proof. Thank you.
Alternatively, you can use the linear recursion difference formula. This works for any linear recursion (i.e. a recursion in the form $a_n=qa_{n-1}+ra_{n-2}$). Step 1 for the closed form of a linear recursion: Find the roots of the equation $x^2=qx+r$. For Fibonacci, this equation is $x^2=x+1$. The roots are $\frac{1\pm\sqrt5}2$. Step 2: When the roots are distinct (as they are here), the closed form is $a(n)=g\cdot\text{root}_1^n+h\cdot\text{root}_2^n$. For Fibonacci, this yields $a_n=g(\frac{1+\sqrt5}2)^n+h(\frac{1-\sqrt5}2)^n$. Step 3: Solve for $g$ and $h$. All you have to do now is plug two known values of the sequence into this equation. For Fibonacci, plugging in $a_0=0$ and $a_1=1$ gives $g=1/\sqrt5$ and $h=-1/\sqrt5$. You are done!
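Whichever proof you prefer, the closed form is easy to check against the recurrence numerically (Python; floating point is accurate enough here for small $n$):

```python
from math import sqrt

def binet(n):
    """Binet's formula, rounded to the nearest integer."""
    r5 = sqrt(5)
    return round(((1 + r5) ** n - (1 - r5) ** n) / (2 ** n * r5))

def fib(n):
    """Fibonacci by the recurrence u_0 = 0, u_1 = 1."""
    u, v = 0, 1
    for _ in range(n):
        u, v = v, u + v
    return u
```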
{ "language": "en", "url": "https://math.stackexchange.com/questions/65011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 7, "answer_id": 2 }
How to find primes between $p$ and $p^2$ where $p$ is an arbitrary prime number? What is the most efficient algorithm for finding prime numbers which belong to the interval $(p,p^2)$, where $p$ is some arbitrary prime number? I have heard of the Sieve of Atkin, but is there some better way for such a specific case as I described?
This is essentially the same as asking for the primes below x for arbitrary x. There are essentially only two practical sieves for the task: Eratosthenes and Atkin-Bernstein. Practically, the sieve of Eratosthenes is fastest; the Atkin-Bernstein sieve might overtake it eventually but I do not know of any implementations that are efficient for large numbers. Unless your range is very small, it will not fit in memory. In that case it is critical to use a segmented sieve; both Eratosthenes and Atkin-Bernstein do this naturally. If you're looking for an existing program, try yafu, primesieve, or primegen. The first two are modified sieves of Eratosthenes and the last is an Atkin-Bernstein implementation, though efficient only to $2^{32}$ (or p = 65521 in your case).
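For modest $p$, a plain (non-segmented) sieve of Eratosthenes up to $p^2$ already does the job; this Python sketch keeps the whole array in memory, which is exactly what the segmented variants above avoid for large $p$:

```python
def primes_between(p):
    """All primes q with p < q < p*p, via a simple sieve up to p*p."""
    limit = p * p
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            # cross out every multiple of i from i*i upward
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [q for q in range(p + 1, limit) if sieve[q]]
```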
{ "language": "en", "url": "https://math.stackexchange.com/questions/65057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to predict the tolerance value that will yield a given reduction with the Douglas-Peucker algorithm? Note: I'm a programmer, not a mathematician - please be gentle. I'm not even really sure how to tag this question; feel free to re-tag as appropriate. I'm using the Douglas-Peucker algorithm to reduce the number of points in polygons (in a mapping application). The algorithm takes a tolerance parameter that indicates how far I'm willing to deviate from the original polygon. For practical reasons, I sometimes need to ensure that the reduced polygon doesn't exceed a predetermined number of points. Is there a way to predict in advance the tolerance value that will reduce a polygon with N points to one with N' points?
Here is a somewhat nontraditional variation of the Douglas-Peucker algorithm. We will divide a given curve into pieces which are well approximated by line segments (within tolerance $\varepsilon$). Initially, there is only one piece, which is the entire curve. * *Find the piece $C$ with the highest "deviation" $d$, where the deviation of a curve is the maximum distance of any point on it from the line segment joining its end points. *If $d < \varepsilon$, then all pieces have sufficiently low deviation. Stop. *Let $p_0$ and $p_1$ be the end points of $C$, and $q$ be the point on $C$ which attains deviation $d$. Replace $C$ with the piece between $p_0$ and $q$, and the piece between $q$ and $p_1$. *Repeat. It should be easy to see how to modify step 2 so that the algorithm produces exactly $n-1$ pieces, i.e. $n$ points, for any given $n$. Exercises Things I am too lazy to do myself: * *Show that for (almost) any result of the modified algorithm, there is a corresponding tolerance on which Douglas-Peucker would produce the same result. *Use priority queues for efficient implementation of step 1.
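Here is one possible Python sketch of this variant, stopping at a target point count and using a priority queue for step 1 (the function names are mine, not standard):

```python
import heapq, math

def deviation(points, i, j):
    """Max perpendicular distance from points[i..j] to the segment points[i]-points[j],
    and the index where it is attained (None if the piece has no interior point)."""
    (x0, y0), (x1, y1) = points[i], points[j]
    dx, dy = x1 - x0, y1 - y0
    seg = math.hypot(dx, dy)
    best, best_k = 0.0, None
    for k in range(i + 1, j):
        x, y = points[k]
        if seg == 0:
            d = math.hypot(x - x0, y - y0)
        else:
            d = abs(dx * (y - y0) - dy * (x - x0)) / seg
        if d > best:
            best, best_k = d, k
    return best, best_k

def simplify(points, n_points):
    """Keep exactly n_points vertices (including both end points)."""
    kept = {0, len(points) - 1}
    heap = []  # max-heap of pieces, keyed on deviation (negated for heapq)
    d, k = deviation(points, 0, len(points) - 1)
    heapq.heappush(heap, (-d, 0, len(points) - 1, k))
    while len(kept) < n_points and heap:
        d, i, j, k = heapq.heappop(heap)
        if k is None:       # piece is already an exact straight segment
            continue
        kept.add(k)
        for a, b in ((i, k), (k, j)):
            dev, split = deviation(points, a, b)
            heapq.heappush(heap, (-dev, a, b, split))
    return [points[i] for i in sorted(kept)]

zigzag = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
print(simplify(zigzag, 3))  # [(0, 0), (1, 1), (4, 0)]
```

If every remaining piece is already an exact straight segment, the routine can return fewer than `n_points` vertices, which is the right behavior for a piecewise-linear input.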
{ "language": "en", "url": "https://math.stackexchange.com/questions/65115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Right identity and Right inverse in a semigroup imply it is a group Let $(G, *)$ be a semigroup. Suppose * *$ \exists e \in G$ such that $\forall a \in G,\ ae = a$; *$\forall a \in G, \exists a^{-1} \in G$ such that $aa^{-1} = e$. How can we prove that $(G,*)$ is a group?
It is conceptually very simple that a right inverse is also a left inverse (when there is also a right identity). It follows from the axioms above in two steps: 1) Any element $a$ with the property $aa = a$ [i.e. idempotent] must be equal to the identity $e$ in the axioms, since in that case: $$a = ae = a(aa^{-1}) = (aa)a^{-1} = aa^{-1} = e$$ This already proves the uniqueness of the [right] identity, since any identity by definition has the property of being idempotent. 2) By the axioms, for every element $a$ there is at least one right inverse element $a^{-1}$ such that $aa^{-1}=e$. Now we form the product of the same two elements in reverse order, namely $a^{-1}a$, to see if that product also equals the identity. If so, this right inverse is also a left inverse. We only need to show that $a^{-1}a$ is idempotent, and then its equality to $e$ follows from step 1: $$[a^{-1}a][ a^{-1}a] = a^{-1}(a a^{-1})a = a^{-1}ea = a^{-1}a $$ 3) It is now clear that the right identity is also a left identity. For any $a$: $$ea = (aa^{-1})a = a(a^{-1}a) = ae = a$$ 4) To show the uniqueness of the inverse: Given any elements $a$ and $b$ such that $ab=e$, then $$b = eb = a^{-1}ab = a^{-1}e = a^{-1}$$ Here, as above, the symbol $a^{-1}$ was first used to denote a representative right inverse of the element $a$. This inverse is now seen to be unique. Therefore, the symbol now signifies an operation of "inversion" which constitutes a single-valued function on the elements of the set. See Richard A. Dean, “Elements of Abstract Algebra” (Wiley, 1967), pp 30-31.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 6, "answer_id": 0 }
Real-world uses of Algebraic Structures I am a Computer science student, and in discrete mathematics, I am learning about algebraic structures. In that I am having concepts like Group,semi-Groups etc... Previously I studied Graphs. I can see a excellent real world application for that. I strongly believe in future I can use many of that in my Coding Algorithms related to Graphics. Could someone tell me real-world application for algebraic structures too...
The fact that electrons, positrons, quarks, neutrinos and other particles exist in the universe is due to the fact that the quantum states of these particles respect Poincaré invariance. Put in simpler terms: if Einstein's theory of relativity is to hold, some arguments using group theory show that these kinds of particles respect Einstein's theory and that there is no fundamental reason they shouldn't exist. Scientists have used group theory to predict the existence of many particles. We use a special kind of groups called Lie groups, which are groups and manifolds at the same time. For example, $GL(n,R)$ is the Lie group of invertible linear transformations of the $n$-dimensional Euclidean space. Symmetry operations correspond to elements living inside groups. If you map these symmetry elements to the group of invertible (and unitary) transformations of a Hilbert space (an infinite-dimensional vector space where particle quantum states live), you can study how these particle states transform under the action of the group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 2 }
If $a, b, c$ are integers, $\gcd(a,b) = 1$ then $\gcd (a,bc)=\gcd(a,c)$ If $a, b, c$ and $k$ be integers, $\gcd(a,b) = 1$ and $\gcd(a, c)=k$, then $\gcd (bc, a)=k$.
Since $\gcd(a,b)=1$, there exist two integers $x$ and $y$ such that $$ax+by=1\tag{1}$$ Also $\gcd(a,c)=k$, so there exist two integers $x_{1}$ and $y_{1}$ such that $$ax_{1}+cy_{1}=k\tag{2}$$ Now multiplying $(1)$ and $(2)$ we get, $$a^{2}xx_{1}+acxy_{1}+bayx_{1}+bcyy_{1}=k$$ $$\Rightarrow a(axx_{1}+cxy_{1}+byx_{1})+bc(yy_{1})=k$$ This shows that every common divisor of $a$ and $bc$ divides $k$. Conversely, $k$ divides $a$, and $k$ divides $c$ and hence $bc$, so $k$ is itself a common divisor of $a$ and $bc$. It follows that $\gcd(a,bc)=k.$
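A quick numerical sanity check of the identity (not a substitute for the proof), using Python's `math.gcd`:

```python
from math import gcd

checked = 0
for a in range(1, 30):
    for b in range(1, 30):
        if gcd(a, b) != 1:
            continue
        for c in range(1, 30):
            # whenever gcd(a, b) = 1, gcd(a, bc) should equal gcd(a, c)
            assert gcd(a, b * c) == gcd(a, c)
            checked += 1
print(f"identity verified in {checked} cases")
```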
{ "language": "en", "url": "https://math.stackexchange.com/questions/65366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why does this expected value simplify as shown? I was reading about the german tank problem and they say that in a sample of size $k$, from a population of integers from $1,\ldots,N$ the probability that the sample maximum equals $m$ is: $$\frac{\binom{m-1}{k-1}}{\binom{N}{k}}$$ This make sense. But then they take expected value of the sample maximum and claim: $$\mu = \sum_{m=k}^N m \frac{\binom{m-1}{k-1}}{\binom{N}{k}} = \frac{k(N+1)}{k+1}$$ And I don't quite see how to simplify that summation. I can pull out the denominator and a $(k-1)!$ term out and get: $$\mu = \frac{(k-1)!}{\binom{N}{k}} \sum_{m=k}^N m(m-1) \ldots (m-k+1)$$ But I get stuck there...
Call $B_k^N=\sum\limits_{m=k}^N\binom{m-1}{k-1}$. Fact 1: $B_k^N=\binom{N}{k}$ (because the sum of the masses of a discrete probability measure is $1$ or by a direct computation). Fact 2: For every $n\geqslant i\geqslant 1$, $n\binom{n-1}{i-1}=i\binom{n}{i}$. Now to the proof. Fact 2 for $(n,i)=(m,k)$ gives $\sum\limits_{m=k}^Nm\binom{m-1}{k-1}=\sum\limits_{m=k}^Nk\binom{m}{k}=k\sum\limits_{m=k+1}^{N+1}\binom{m-1}{(k+1)-1}=kB_{k+1}^{N+1}$. Fact 1 gives $B_{k+1}^{N+1}=\binom{N+1}{k+1}$. Fact 2 for $(n,i)=(N+1,k+1)$ (or a direct computation) gives $(k+1)B_{k+1}^{N+1}=(N+1)B_k^N$. Finally, $\mu=\dfrac{kB_{k+1}^{N+1}}{B_k^N}=k\dfrac{N+1}{k+1}$. Edit The same method yields, for every $i\geqslant0$, $$ \mathrm E(X(X+1)\cdots(X+i))=\frac{k}{k+i+1}(N+1)(N+2)\cdots(N+i+1). $$
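One can confirm the formula with exact rational arithmetic for small $N$ and $k$, for instance in Python:

```python
from fractions import Fraction
from math import comb

def expected_max(N, k):
    """Exact E[sample maximum] for a size-k sample without replacement from 1..N."""
    return sum(Fraction(m * comb(m - 1, k - 1), comb(N, k)) for m in range(k, N + 1))

for N in (10, 25, 60):
    for k in range(1, N + 1, 7):
        assert expected_max(N, k) == Fraction(k * (N + 1), k + 1)
print("E[max] = k(N+1)/(k+1) verified")
```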
{ "language": "en", "url": "https://math.stackexchange.com/questions/65398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
How can the following be calculated? How can the following series be calculated? $$S=1+(1+2)+(1+2+3)+(1+2+3+4)+\cdots+(1+2+3+4+\cdots+2011)$$
Let $S$ be our sum. Then $$S=\binom{2}{2}+\binom{3}{2}+\binom{4}{2} + \cdots + \binom{2012}{2}=\binom{2013}{3}=\frac{2013\cdot 2012\cdot 2011}{3 \cdot 2 \cdot 1}.$$ Justification: We count, in two different ways, the number of ways to choose $3$ numbers from the set $$\{1,2,3,4,\dots, n,n+1\}.$$ (For our particular problem we use $n=2012$.) First Count: It is clear that there are $\binom{n+1}{3}$ ways to choose $3$ numbers from $n+1$ numbers. Second Count: The smallest chosen number could be $1$. Then there are $\binom{n}{2}$ ways to choose the remaining $2$ numbers. Or the smallest chosen number could be $2$, leaving $\binom{n-1}{2}$ choices for the remaining $2$ numbers. Or the smallest chosen number could be $3$, leaving $\binom{n-2}{2}$ choices for the remaining $2$ numbers. And so on, up to smallest chosen number being $n-1$, in which case there are $\binom{2}{2}$ ways to choose the remaining $2$ numbers. Thus the total count is $$\binom{n}{2}+\binom{n-1}{2}+\binom{n-2}{2}+\cdots +\binom{3}{2}+\binom{2}{2}.$$ Comparing the two counts, we find that $$\binom{2}{2}+\binom{3}{2}+\binom{4}{2}+\cdots +\binom{n-1}{2}+\binom{n}{2}=\binom{n+1}{3}.$$ Comment: Similarly, it is easy to see that in general $\sum_{k=r}^n \binom{k}{r}=\binom{n+1}{r+1}.$ These natural binomial coefficient identities give a combinatorial approach to finding general formulas for the sums of consecutive squares, consecutive cubes, and so on.
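For what it's worth, a quick Python check of both forms of the sum:

```python
from math import comb

# binomial form: C(2,2) + C(3,2) + ... + C(2012,2)
S = sum(comb(m, 2) for m in range(2, 2013))
assert S == comb(2013, 3)

# triangular-number form: 1 + (1+2) + ... + (1+2+...+2011)
assert S == sum(k * (k + 1) // 2 for k in range(1, 2012))
print(S)
```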
{ "language": "en", "url": "https://math.stackexchange.com/questions/65465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Trouble counting the number of "ace high" hands in poker I'm trying to count the number of "ace high" hands in a five card poker hand. The solution from my answer key puts the count at 502,860; however, I have an argument for why this number is too high. Please help me understand where my logic is flawed. Instead of coming up with an exact answer for the number of ace high hands I will show an upper bound on the number of ace high hands. First, go through the card deck and remove all four aces leaving a deck of 48 cards. We will use this 48 card deck to form the four card "non ace" part of the "ace high" hand. First, how many ways are there to form any four card hand from a 48 card deck? This is (48 choose 4) = 194,580. Now, not all of these hands when paired with an ace will form an "ace high" hand. For example A Q Q K K would be two pair. In fact, any four card hand with at least two cards of the same rank (e.g. Queen of Spades, Queen of Hearts) will not generate an ace high hand. So let's find the number of such hands and subtract them from 194,580. I believe the number of such hands can be found by first selecting a rank for a pair from these 48 remaining cards, that is, (12 choose 1)--times the number of ways to select two suits for our rank (4 choose 2)--times the number of ways to pick the remaining 2 required cards from 46 remaining cards, that is, (46 choose 2). So, restated, given our 48 card deck we can create a four card hand that contains at least one pair this many ways: (12 choose 1)(4 choose 2) (46 choose 2) = 74,520 [pair rank] [suits of pair] [remaining 2 cards] Thus the number of four card hands that do not include at least one pair is: (48 choose 4) - 74,520 = 120,060 We can pair each of these four card sets with one of our four aces to form the number of five card hands that contain an ace, but not any single pair (or better). This is 120,060 * 4 = 480,240 hands. However, this is already less than 502,860 shown by the key... 
and I haven't even begun to start subtracting out straights. Clearly I have made a mistake, but what is it?
In your method, "two pair" hands would be subtracted twice, "three of a kind" hands would be subtracted three times, and "full house" hands 5 times.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
I need to define a family (one parameter) of monotonic curves I want to define a function family $f_a(x)$ with a parameter $a$ in $(0,1)$, where: For any $a$, $f_a(0) = Y_0$ and $f_a(X_0) = 0$ (see image) For $a = 0.5$, this function is a straight line from $(0,Y_0)$ to $(X_0, 0)$. For $a < 0.5$, up to zero (asymptotically perhaps), I want $f_a$ to be a curve below, and for $a > 0.5$, the curve should be to the other side. I didn't fill the diagram with many examples, but I hope you get the idea. Different values of $a$ always produce a distinct, monotonic curve, below all curves of larger values of $a$, and above all curves for smaller values of $a$. E.g.: when I decrease $a$, the distance of the $(0,0)$ point from the curve decreases, and if I increase $a$, it increases. Sorry for the clumsy description but I hope you got the intuition of what I'm trying to define! Any suggestion of how this function $f_a(x)$ could look like?
How about $$f_a(x) = y_0\left(1-(x/x_0)^{\frac{a}{1-a}}\right)$$ ?
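A quick numerical check of the required properties (endpoints fixed, straight line at $a=0.5$, curves ordered in $a$):

```python
def f(a, x, x0=1.0, y0=1.0):
    """Curve family: f_a(0) = y0, f_a(x0) = 0, straight line at a = 0.5."""
    return y0 * (1 - (x / x0) ** (a / (1 - a)))

for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert abs(f(a, 0.0) - 1.0) < 1e-12   # f_a(0) = y0
    assert abs(f(a, 1.0)) < 1e-12         # f_a(x0) = 0
assert abs(f(0.5, 0.5) - 0.5) < 1e-12     # a = 0.5 is the straight line
# monotone in a: at interior points a larger a gives a higher curve
assert f(0.3, 0.5) < f(0.5, 0.5) < f(0.7, 0.5)
```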
{ "language": "en", "url": "https://math.stackexchange.com/questions/65641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Examples of mapping two sets where if the two sets are isomorphic doesn't imply that mapping is also 1 to 1 I am struggling with finding an example of two sets S and T and an (onto) mapping f, where the fact that S and T are isomorphic does not imply that f is also 1 - 1. If possible, could you also give an example in which the fact that they are isomorphic would imply that f is 1 - 1? Thank You!
Hint: Can you do this for finite $T$ and $S$? What happens if $T$ and $S$ are infinite? Think of shift-like maps $\mathbb{N}\rightarrow \mathbb{N}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/65684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the cardinality of the set of all topologies on $\mathbb{R}$? This was asked on Quora. I thought about it a little bit but didn't make much progress beyond some obvious upper and lower bounds. The answer probably depends on AC and perhaps also GCH or other axioms. A quick search also failed to provide answers.
Let me give a slightly simplified version of Stefan Geschke's argument. Let $X$ be an infinite set. As in his argument, the key fact we use is that there are $2^{2^{|X|}}$ ultrafilters on $X$. Now given any ultrafilter $F$ on $X$ (or actually just any filter), $F\cup\{\emptyset\}$ is a topology on $X$: the topology axioms easily follow from the filter axioms. So there are $2^{2^{|X|}}$ topologies on $X$. Now if $T$ is a topology on $X$ and $f:X\to X$ is a bijection, there is exactly one topology $T'$ on $X$ such that $f$ is a homeomorphism from $(X,T)$ to $(X,T')$ (namely $T'=\{f(U):U\in T\}$). In particular, since there are only $2^{|X|}$ bijections $X\to X$, there are only at most $2^{|X|}$ topologies $T'$ such that $(X,T)$ is homeomorphic to $(X,T')$. So we have $2^{2^{|X|}}$ topologies on $X$, and each homeomorphism class of them has at most $2^{|X|}$ elements. Since $2^{2^{|X|}}>2^{|X|}$, this can only happen if there are $2^{2^{|X|}}$ different homeomorphism classes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52", "answer_count": 2, "answer_id": 1 }
How to prove $\log n \leq \sqrt n$ over natural numbers? It seems like $$\log n \leq \sqrt n \quad \forall n \in \mathbb{N} .$$ I've tried to prove this by induction where I use $$ \log p + \log q \leq \sqrt p \sqrt q $$ when $n=pq$, but this fails for prime numbers. Does anyone know a proof?
Here is a proof of a somewhat weaker inequality that does not use calculus: Put $m:=\lceil\sqrt{n}\>\rceil$. The set $\{2^0,2^1,\ldots,2^{m-1}\}$ is a subset of the set $\{1,2,\ldots,2^{m-1}\}$; therefore we have the inequality $m\leq 2^{m-1}$ for all $m\geq1$. It follows that $$\log n=2\log\sqrt{n}\leq 2\log m\leq 2(m-1)\log2\leq 2\log2\>\sqrt{n}\ ,$$ where $2\log2\doteq1.386$.
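Not a proof, but an exhaustive numerical check of the inequality (and of the weaker bound above) on an initial range:

```python
import math

for n in range(1, 100_000):
    assert math.log(n) <= math.sqrt(n)                      # the original claim
    assert math.log(n) <= 2 * math.log(2) * math.sqrt(n)    # the bound from the proof
print("log n <= sqrt(n) holds for n = 1..99999")
```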
{ "language": "en", "url": "https://math.stackexchange.com/questions/65793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 2 }
Cantor's completeness principle I hope everyone who has undergone a fundamental course in Analysis knows about Cantor's completeness principle. It says that for a nest of closed intervals (with lengths tending to zero), the intersection of all the intervals is a single point. I hope I can get an explanation as to why this principle holds only in the case of closed intervals, and not in the case of open intervals?
The intersection of all the open intervals centered at $0$ is just $\{0\}$, since $0$ is the only point that is a member of all of them. But the intersection of all the open intervals whose lower boundary is $0$ is empty. (After all, what point could be a member all of them?) And they are nested, in that for any two of them, one is a subset of the other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is it wrong to express $\mathop{\lim}\limits_{x \to \infty}x\sin x$ as $k\mathop{\lim}\limits_{x \to \infty}x$; $\lvert k \rvert \le 1$? Why is it wrong to write $$\mathop{\lim}\limits_{x \to \infty}x\left(\frac{1}{x}\sin x-1+\frac{1}{x}\right)=(0k-1+0)\cdot\mathop{\lim}\limits_{x \to \infty}x,$$ where $\lvert k \rvert \le 1$? And, as an aside, is there an idiom or symbol for compactly representing, in an expression, a number that is always within a range so that "where ..." can be avoided?
You can’t rewrite the limit this way: $\sin x$ is always between $-1$ and $1$, but it isn’t a constant, which is what you’re implying when you pull it outside the limit. You could write $$\lim\limits_{x\to\infty}x\left(\frac{1}{x}\sin x - 1 + \frac{1}{x}\right) = (0-1+0)\cdot\lim\limits_{x\to\infty}x,$$ provided that you explained why $\lim\limits_{x\to\infty}\dfrac{\sin x}{x}=0$. For that you really do need to write an explanation, not an equation. In an elementary course you should give more detail rather than less, so it might look something like this: $\vert \sin x\vert \le 1$ for all real $x$, so $\dfrac{-1}{x} \le \dfrac{\sin x}{x} \le \dfrac{1}{x}$ for all $x>0$, and therefore by the sandwich theorem $$0 = \lim\limits_{x\to\infty}\frac{-1}{x} \le \lim\limits_{x\to\infty}\frac{\sin x}{x} \le \lim\limits_{x\to\infty}\frac{1}{x} = 0$$ and $\lim\limits_{x\to\infty}\dfrac{\sin x}{x}=0$. In a slightly higher-level course you could simply say that $\lim\limits_{x\to\infty}\dfrac{\sin x}{x}=0$ because the numerator is bounded and the denominator increases without bound. But it’s just as easy to multiply it out to get $$\lim\limits_{x\to\infty}(\sin x - x + 1)$$ and argue that $0 \le \sin x + 1 \le 2$ for all $x$, so $-x \le \sin x - x + 1 \le 2-x$ for all $x$, and hence (again by the sandwich theorem) the limit is $-\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/65908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Combinatorics-number of permutations of $m$ A's and at most $n$ B's Prove that the number of permutations of $m$ A's and at most $n$ B's equals $\dbinom{m+n+1}{m+1}$. I'm not sure how to even start this problem.
Counting separately the arrangements with exactly $i$ B's, for $i=0,\ldots,n$, we get that the number of permutations $P_n$ satisfies $$P_n = \binom{m+n}{n} + \binom{m+(n-1)}{(n-1)} + \ldots + \binom{m+0}{0} = \sum_{i=0}^n \binom{m + i}{i}$$ Note that $$\binom{a}{b} = \binom{a-1}{b} + \binom{a-1}{b-1}$$ Repeatedly applying this to the last term, we get $$\begin{array}{rcl} \binom{a}{b} &=& \binom{a-1}{b} + \binom{a-1}{b-1} \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-2}{b-2} \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-3}{b-2} + \binom{a-3}{b-3} \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-3}{b-2} + \binom{a-4}{b-3} + \ldots \\ &=& \binom{a-1}{b} + \binom{a-2}{b-1} + \binom{a-3}{b-2} + \binom{a-4}{b-3} + \ldots + \binom{a-b-1}{0} \\ &=& \sum_{i=0}^b \binom{a-b-1+i}{i} \end{array}$$ Substituting $b$ by $a-b$ we similarly get $$\binom{a}{a-b} = \sum_{i=0}^{a-b} \binom{b-1+i}{i}$$ Replacing $b = m + 1$ and $a = n + m + 1$ we thus get $$\binom{n + m + 1}{n} = \sum_{i=0}^{n} \binom{m+i}{i} = P_n$$
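A small brute-force check of the identity, comparing the binomial sum with direct enumeration of the words:

```python
from math import comb
from itertools import permutations

def count_words(m, n):
    """Distinct arrangements of m A's and i B's, summed over i = 0..n."""
    return sum(comb(m + i, i) for i in range(n + 1))

def brute(m, n):
    # enumerate multiset permutations directly (small cases only)
    total = 0
    for i in range(n + 1):
        total += len(set(permutations("A" * m + "B" * i)))
    return total

for m in range(1, 5):
    for n in range(0, 4):
        assert count_words(m, n) == brute(m, n) == comb(m + n + 1, m + 1)
```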
{ "language": "en", "url": "https://math.stackexchange.com/questions/65947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Intuitive explanation of $(a^b)^c = a^{bc}$ What is an intuitive explanation for the rule that $(a^b)^c = a^{bc}$. I'm trying to wrap my head around it, but I can't really do it.
I will assume that $b$ and $c$ are positive integers and that $a$ is any "number" (it doesn't really matter much what $a$ is...). Suppose I have $b \times c$ copies of the number $a$. I can arrange them into a $b \times c$ rectangular array: i.e., with $b$ rows and $c$ columns. When I multiply all $b \times c$ of these $a$'s together, I get $a^{bc}$. On the other hand, suppose I look at just one column of the array. In this column I have $b$ $a$'s, so the product of all the entries in a column is $a^b$. But now I have $c$ columns altogether, so the product of all the entries is obtained by multiplying the common product of all the entries in a given column by itself $c$ times, or $(a^b)^c$. Thus $a^{bc} = (a^b)^c$. If you want to justify this identity when $b$ and $c$ are other things besides positive integers -- e.g. real numbers, or infinite cardinals -- that's another matter: please ask.
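For what it's worth, a tiny exhaustive check over small non-negative integer exponents:

```python
# verify (a^b)^c = a^(bc) for small positive bases and non-negative exponents
for a in range(1, 6):
    for b in range(0, 6):
        for c in range(0, 6):
            assert (a ** b) ** c == a ** (b * c)
```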
{ "language": "en", "url": "https://math.stackexchange.com/questions/65995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 1 }
Why PA=LU matrix factorization is better than A=LU matrix factorization? While finding A=LU for a matrix, if a zero is encountered in a pivot position then a row exchange is required. However, if the PA=LU form is used then no row exchange is required, and apparently this also requires less computation. What I am not able to understand is how finding a correct permutation matrix involves less effort than doing a row exchange during the A=LU process. Edit: Matrix PA will already have a form in which all rows are in the correct order. What I am not able to understand is why the PA=LU computation is going to be better than the A=LU computation.
Not sure if I understand your point. The purpose of a permutation matrix is exactly to do the row exchange for you. So consider $\bf PA = LU$ as the more general form of $\bf A = LU$ in that it also takes care of row exchanges when they are needed. Let's recap for my own sake: By performing some row operations on $\bf A$ (a process called elimination), you want to end up with $\bf U$. You can represent each row operation through a separate matrix, so say you need two row operations, $\bf E_1$ followed by $\bf E_2$ before you get $\bf U$, then you have $\bf E_2E_1A = U \Rightarrow \bf L = (E_2E_1)^{-1} = E_2^{-1}E_1^{-1}$. But the neat thing is that you don't need matrix inversion to find those inverses. Say for example $\bf A$ is $2 \times 2$ and $\bf E_1$ is the operation subtract 2 times row 1 from row 2, then $\bf E_1^{-1}$ is just add 2 times row 1 to row 2. In matrix language: $$ {\bf E_1} = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix} \Rightarrow {\bf E_1^{-1} = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} } $$ This translates into computational ease, because all you have to do is change the sign on the non-diagonal elements to get the inverse. Now on to permutation matrices. A permutation matrix is just an identity matrix with two of its rows exchanged. And even more conveniently, its inverse is just itself (because you get the original matrix back once you reapply the row exchange a second time), i.e. $\bf PPA = IA = A \Rightarrow P^{-1} = P$. So if row exchanges are needed, we add the additional step and write $\bf PA = LU$. Once again, a computational cakewalk.
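To make this concrete, here is a minimal pure-Python sketch of $\bf PA = LU$ with partial pivoting (real code would call LAPACK, e.g. via `scipy.linalg.lu`); the helper names are mine:

```python
def plu(A):
    """PA = LU with partial pivoting, on nested lists; returns (P, L, U)."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[0.0] * n for _ in range(n)]
    P = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        # pick the pivot: largest |entry| in column k at or below the diagonal
        piv = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[piv] = U[piv], U[k]
        P[k], P[piv] = P[piv], P[k]
        L[k], L[piv] = L[piv], L[k]   # keep already-computed multipliers aligned
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return P, L, U

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

A = [[0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0],
     [2.0, 0.0, 1.0]]      # plain A = LU would fail: the (1,1) pivot is zero
P, L, U = plu(A)
PA, LU = matmul(P, A), matmul(L, U)
assert all(abs(PA[i][j] - LU[i][j]) < 1e-12 for i in range(3) for j in range(3))
```

Note the example matrix has a zero in the top-left pivot position, so the plain $\bf A=LU$ elimination would break down immediately, while the pivoted version just swaps rows as it goes.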
{ "language": "en", "url": "https://math.stackexchange.com/questions/66051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\mathbb{Q}/\mathbb{Z}$ has a unique subgroup of order $n$ for any positive integer $n$? Viewing $\mathbb{Z}$ and $\mathbb{Q}$ as additive groups, I have an idea to show that $\mathbb{Q}/\mathbb{Z}$ has a unique subgroup of order $n$ for any positive integer $n$. You can take $a/n+\mathbb{Z}$ where $(a,n)=1$, and this element has order $n$. Why would such an element exist in any subgroup $H$ of order $n$? If not, you could reduce every representative, and then every element would have order less than $n$, but where is the contradiction?
We can approach the problem using elementary number theory. Look first at the subgroup $K$ of $\mathbb{Q}/\mathbb{Z}$ generated by (the equivalence class of) $q/r$, where $q$ and $r$ are relatively prime. Since $q$ and $r$ are relatively prime, there exist integers $x$ and $y$ such that $qx+ry=1$. Divide both sides by $r$. We find that $$x\frac{q}{r}+y=\frac{1}{r}.$$ Since $y$ is an integer, it follows that $\frac{1}{r}$ is congruent, modulo $1$, to $x\frac{q}{r}$. It follows that (the equivalence class of) $1/r$ is in $K$, and therefore generates $K$. Now let $H$ be a subgroup of $\mathbb{Q}/\mathbb{Z}$ of order $n$. Let $h$ be an element of $H$. If $h$ generates $H$, we are finished. Otherwise, $h$ generates a proper subgroup of $H$. By the result above, we can assume that $h$ is (the equivalence class of) some $1/r_1$, and that there is some $1/b$ in $H$ such that $b$ does not divide $r_1$. Let $d=\gcd(r_1,b)$. There are integers $x$ and $y$ such that $r_1x+by=d$. Divide through by $r_1b$. We find that $$x\frac{1}{b}+y\frac{1}{r_1}=\frac{d}{r_1b}.$$ It follows that (the equivalence class of) $d/(r_1b)$ is in $H$. But $r_1b/d$ is the least common multiple of $r_1$ and $b$. Call this least common multiple $r_2$. Then since $r_1$ and $b$ both divide $r_2$, the subgroup of $H$ generated by (the equivalence class of) $1/r_2$ contains both $1/r_1$ and $1/b$. If $1/r_2$ generates all of $H$, we are finished. Otherwise, there is a $1/b$ in $H$ such that $b$ does not divide $r_2$. Continue.
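The first part of the argument (that $q/r$ and $1/r$ generate the same cyclic subgroup of $\mathbb{Q}/\mathbb{Z}$ when $\gcd(q,r)=1$) is easy to check numerically, e.g. with Python's `Fraction`:

```python
from fractions import Fraction

def cyclic(q, r):
    """Subgroup of Q/Z generated by q/r: all multiples of q/r reduced mod 1."""
    return {Fraction(k * q, r) % 1 for k in range(r)}

# with gcd(q, r) = 1, q/r and 1/r generate the same subgroup, of order r
assert cyclic(3, 7) == cyclic(1, 7)
assert cyclic(5, 12) == cyclic(1, 12)
assert len(cyclic(5, 12)) == 12
```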
{ "language": "en", "url": "https://math.stackexchange.com/questions/66145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 0 }
Given a function $f(x)$, is there an analytic way to determine which integer values of $x$ give an integer value of $f(x)$? Basically, I have some function $f(x)$ and I would like to figure out which integer values of $x$ make it such that $f(x)$ is also an integer. I know that I could use brute force and try all integer values of $x$ in the domain, but I want to analyze functions with large (possibly infinite) domains so I would like an analytical way to determine the values of $x$. The function itself will always be well-behaved and inversely proportional to the variable. The domain will be restricted to the positive real axis. I thought about functions like the Dirac delta function but that only seemed to push the issue one step further back. I get the feeling that I am either going to be told that there is no way to easily determine this, or that I am misunderstanding something fundamental about functions, but I thought I'd let you all get a crack at it at least.
It's not just "some function", if it's inversely proportional to the variable $x$ that means $f(x) = c/x$ for some constant $c$. If there is any $x$ such that $x$ and $c/x$ are integers, that means $c = x c/x$ is an integer. The integer values of $x$ for which $c/x$ is an integer are then the factors of $c$. If the prime factorization of $c$ is $p_1^{n_1} \ldots p_m^{n_m}$ (where $p_i$ are primes and $n_i$ positive integers), then the factors of $c$ are $p_1^{k_1} \ldots p_m^{k_m}$ where $k_i$ are integers with $0 \le k_i \le n_i$ for each $i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding $p^\textrm{th}$ roots in $\mathbb{Q}_p$? So assume we are given some $a\in\mathbb{Z}_p^\times$ and we want to figure out if $X^p-a$ has a root in $\mathbb{Q}_p$. We know that such a root must be unique, because given two such roots $\alpha,\beta$, the quotient $\alpha/\beta$ would need to be a non-trivial $p^\textrm{th}$ root of unity and $\mathbb{Q}_p$ does not contain any. Now we can't apply Hensel, which is the canonical thing to do when looking for roots in $\mathbb{Q}_p$. What other approaches are available?
Let's take $\alpha\equiv 1\mod p^2$. Then $\alpha = 1 + p^2\beta\in\mathbb{Z}_p$, where $\beta\in\mathbb{Z}_p$. What we want to do is make Hensel's lemma work for us, after changing the equation a bit. We have $f(x) = x^p - \alpha = x^p - (1 + p^2\beta)$. We see that if $x$ is to be a solution, it must satisfy $x\equiv 1\mod p$, so $x = 1 + py$ for some $y\in\mathbb{Z}_p$. Now we have a new polynomial equation: $$ f(y) = (1 + py)^p - (1 + p^2\beta) = \sum_{i = 0}^p \begin{pmatrix} p \\ i\end{pmatrix}(py)^i - (1 + p^2\beta), $$ which reduces to $$ f(y) = \sum_{i = 1}^p \begin{pmatrix} p \\ i\end{pmatrix}(py)^i - p^2\beta. $$ So long as $p\neq 2$, we can set this equal to zero and cancel a $p^2$ from each term, and get $$ 0 = p^2 y + \begin{pmatrix} p \\ 2\end{pmatrix}(py)^2 + \ldots + (py)^p - p^2\beta = y + \begin{pmatrix} p \\ 2\end{pmatrix} y^2 + \ldots + p^{p-2}y^p - \beta. $$ Mod $p$, we can solve this equation: $$ y + \begin{pmatrix} p \\ 2\end{pmatrix} y^2 + \ldots + p^{p-2}y^p - \beta \equiv y - \beta \equiv y - \beta_0\mod p, $$ where $\beta = \beta_0 + \beta_1 p + \beta_2 p^2 + \ldots$ by $y = \beta_0$. Mod $p$, our derivative is always nonzero: $$ \frac{d}{dy}\left[y - \beta_0\right] \equiv 1\mod p, $$ so we can use Hensel's lemma and lift our modified solution mod $p$ to a solution in $\mathbb{Q}_p$. Therefore, if $\alpha\in 1 + p^2\mathbb{Z}_p$ and $p\neq 2$, there exists a $p$th root of $\alpha$ in $\mathbb{Q}_p$.
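A small numerical illustration (not part of the proof): take $p=5$ and $\beta=2$, so $\alpha=1+5^2\cdot 2=51\equiv 1\pmod{25}$. Brute-forcing $x^5\equiv\alpha\pmod{5^3}$ finds exactly the root predicted by $y\equiv\beta\pmod p$, namely $x=1+p\beta=11$:

```python
p, beta = 5, 2
alpha = 1 + p * p * beta          # alpha = 51, congruent to 1 mod p^2
mod = p ** 3

roots = [x for x in range(mod) if pow(x, p, mod) == alpha % mod]
assert 1 + p * beta in roots            # x = 11, matching y = beta mod p above
assert all(x % p == 1 for x in roots)   # every root is 1 mod p, as expected
print(roots)  # [11, 36, 61, 86, 111]
```

All five residues above agree mod $25$; they are the several mod-$5^3$ refinements of the same root, and only one of them continues to the genuine $5$th root in $\mathbb{Z}_5$.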
{ "language": "en", "url": "https://math.stackexchange.com/questions/66268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Projective Tetrahedral Representation I can embed $A_4$ as a subgroup into $PSL_2(\mathbb{F}_{13})$ (in two different ways in fact). I also have a reduction mod 13 map $$PGL_2(\mathbb{Z}_{13}) \to PGL_2(\mathbb{F}_{13}).$$ My question is: Is there a subgroup of $PGL_2(\mathbb{Z}_{13})$ which maps to my copy of $A_4$ under the above reduction map? (I know that one may embed $A_4$ into $PGL_2(\mathbb{C})$, but I don't know about replacing $\mathbb{C}$ with $\mathbb{Z}_{13}$).
Yes. Explicitly one has: $ \newcommand{\ze}{\zeta_3} \newcommand{\zi}{\ze^{-1}} \newcommand{\vp}{\vphantom{\zi}} \newcommand{\SL}{\operatorname{SL}} \newcommand{\GL}{\operatorname{GL}} $ $$ \SL(2,3) \cong G_1 = \left\langle \begin{bmatrix} 0 & 1 \vp \\ -1 & 0 \vp \end{bmatrix}, \begin{bmatrix} \ze & 0 \\ -1 & \zi \end{bmatrix} \right\rangle \cong G_2 = \left\langle \begin{bmatrix} 0 & 1 \vp \\ -1 & 0 \vp \end{bmatrix}, \begin{bmatrix} 0 & -\zi \\ 1 & -\ze \end{bmatrix} \right\rangle $$ and $$G_1 \cap Z(\GL(2,R)) = G_2 \cap Z(\GL(2,R)) = Z = \left\langle\begin{bmatrix}-1&0\\0&-1\end{bmatrix}\right\rangle \cong C_2$$ and $$G_1/Z \cong G_2/Z \cong A_4$$ This holds over any ring R which contains a primitive 3rd root of unity, in particular, in the 13-adics, $\mathbb{Z}_{13}$. The first representation has rational (Brauer) character and Schur index 2 over $\mathbb{Q}$ (but Schur index 1 over the 13-adics $\mathbb{Q}_{13}$), and the second representation is the unique (up to automorphism of $A_4$) 2-dimensional projective representation of $A_4$ with irrational (Brauer) character. You can verify that if $G_i = \langle a,b\rangle$, then $a^2 = [a,a^b] = -1$, $ a^{(b^2)} = aa^b$, and $b^3 = 1$. Modulo $-1$, one gets the defining relations for $A_4$ on $a=(1,2)(3,4)$ and $b=(1,2,3)$.
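One can verify the first presentation numerically over $\mathbb{F}_{13}$, where one may take $\zeta_3=3$ (since $3^3=27\equiv1$): the relations hold and the generated matrix group has order $24=|\operatorname{SL}(2,3)|$. A Python sketch:

```python
P = 13
ZETA = 3        # a primitive cube root of unity mod 13 (3**3 = 27 = 2*13 + 1)
ZETA_INV = 9    # 3 * 9 = 27 = 1 mod 13
assert (ZETA ** 3) % P == 1 and (ZETA * ZETA_INV) % P == 1

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % P
                       for j in range(2)) for i in range(2))

a = ((0, 1), (P - 1, 0))           # [[0, 1], [-1, 0]]
b = ((ZETA, 0), (P - 1, ZETA_INV)) # [[z, 0], [-1, z^{-1}]]
I = ((1, 0), (0, 1))
negI = ((P - 1, 0), (0, P - 1))

assert mul(a, a) == negI           # a^2 = -1
assert mul(b, mul(b, b)) == I      # b^3 = 1

# closure of <a, b> under multiplication (finite, so this is the subgroup)
group = {I}
frontier = [I]
while frontier:
    g = frontier.pop()
    for h in (a, b):
        gh = mul(g, h)
        if gh not in group:
            group.add(gh)
            frontier.append(gh)

assert len(group) == 24   # |SL(2,3)| = 24, so group/{±1} = A4 has order 12
```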
{ "language": "en", "url": "https://math.stackexchange.com/questions/66353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\log_{12} 2=m$ what's $\log_6 16$ in function of $m$? Given $\log_{12} 2=m$ what's $\log_6 16$ in function of $m$? $\log_6 16 = \dfrac{\log_{12} 16}{\log_{12} 6}$ $\dfrac{\log_{12} 2^4}{\log_{12} 6}$ $\dfrac{4\log_{12} 2}{\log_{12} 6}$ $\dfrac{4\log_{12} 2}{\log_{12} 2+\log_{12} 3}$ $\dfrac{4m}{m+\log_{12} 3}$ And this looks like a dead end for me.
Here is a zero-cleverness solution: write everything in terms of the natural logarithm $\log$ (or any other logarithm you like). Recall that $\log_ab=\log b/\log a$. Hence your hypothesis is that $m=\log2/\log12$, or $\log2=m(\log3+2\log2)$, and you look for $k=\log16/\log6=4\log2/(\log2+\log3)$. Both $m$ and $k$ are functions of the ratio $r=\log3/\log2$, hence let us try this. One gets $1=m(r+2)$ and one wants $k=4/(1+r)$. Well, $r=m^{-1}-2$, hence $k=4/(m^{-1}-1)=4m/(1-m)$. An epsilon-cleverness solution is to use at the outset logarithms of base $2$ and to mimic the proof above (the algebraic manipulations become a tad simpler).
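A quick numerical confirmation of the identity $\log_6 16 = 4m/(1-m)$ (an added check, not part of the answer):

```python
import math

m = math.log(2) / math.log(12)       # m = log_12(2)
lhs = math.log(16) / math.log(6)     # log_6(16)
rhs = 4 * m / (1 - m)                # the derived closed form

print(lhs, rhs)                      # both ≈ 1.5474
assert abs(lhs - rhs) < 1e-12
```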
{ "language": "en", "url": "https://math.stackexchange.com/questions/66405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
About Borel's lemma Borel's lemma states that for $x_0 \in \mathbf{R}$ and for a real sequence $(a_n)_{n \in \mathbf{N_0}}$ there exists a smooth function $f: \mathbf{R} \rightarrow \mathbf{R}$ such that $f^{(n)}(x_0)=a_n$ for $n \in \mathbf{N_0}$. However, is it true that for $x_1, x_2 \in \mathbf{R}$, $x_1 \neq x_2$, and for real sequences $(a_n)_{n \in \mathbf{N_0}}, (b_n)_{n \in \mathbf{N_0}}$ there exists a smooth function $f: \mathbf{R} \rightarrow \mathbf{R}$ such that $f^{(n)}(x_1)=a_n$, $f^{(n)}(x_2)=b_n$ for $n \in \mathbf{N_0}$? Thanks
Since Giuseppe has already given an answer, I will give the details of the extension. Let $A\subset \mathbb R$ be a set consisting of isolated points and, for each $a\in A$, let $\{x(a)_n\}$ be a sequence of real numbers. We can find a smooth function $f$ such that for all $a\in A$ and $n\in\mathbb N$, we have $f^{(n)}(a)=x(a)_n$. Since $\mathbb R$ is separable, this set has to be countable, hence we can write $A=\{a_n,n\in\mathbb N\}$. We write $b_{n,k}=x(a_n)_k$ for each $n$ and $k$. For each $n$, we consider $r_n$ such that $\left]a_n-3r_n,a_n+3r_n\right[\cap A=\{a_n\}$. Let $g_n$ be a bump function, $g_n=1$ on $\left]-r_n,r_n\right[$ and $\mathrm{supp}\,g_n\subset \left[-2r_n,2r_n\right]$. Put $$f_{n,j}(x)= \frac{b_{n,j}}{j!}(x-a_n)^jg_n\left(\frac{x-a_n}{\varepsilon_{n,j}}\right)$$ where $\varepsilon_{n,j}=\frac 1{1+|b_{n,j}|}\frac 1{4^jM_{n,j}}$, with $\displaystyle M_{n,j}:=\max_{0\leq i\leq j}\sup_{x\in\mathbb R}|g_n^{(i)}(x)|$. Note that $f_{n,j}$ has compact support, and putting for all $n$: $\displaystyle h_n(x):=\sum_{k=0}^{+\infty}f_{n,k}(x)$, $h_n$ is a smooth function. Indeed, for $n$ fixed and $k\geq d+1$ we have $\sup_{x\in\mathbb R}|f_{n,k}^{(d)}|\leq \frac 1{2^k}$, hence the normal convergence of the series on the whole real line for all the derivatives. A (boring but not hard) computation which uses the Leibniz rule gives that $h_n^{(k)}(a_n)=b_{n,k}$. Now, put $$f(x):=\sum_{n=0}^{+\infty}\sum_{j=0}^{+\infty}f_{n,j}(x).$$ Since the supports of the $h_n$ are pairwise disjoint, the series in $n$ converges to a smooth function. Note that we cannot extend the result to sets $S$ which contain a point which is not isolated, because the continuity of the derivatives gives a restriction on the sequences $\{x(s)_n\}$ for $s$ in a neighborhood of the non-isolated point. 
Namely, if $s_0$ is a non-isolated point and $\{s_n\}\subset S$ is a sequence which converges to $s_0$, then the sequence $\{x(s_n)_0\}$ has to converge to $x(s_0)_0$ (hence we can't prescribe arbitrary sequences).
{ "language": "en", "url": "https://math.stackexchange.com/questions/66459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
If $A$ and $B$ are positive-definite matrices, is $AB$ positive-definite? I've managed to prove that if $A$ and $B$ are positive definite then $AB$ has only positive eigenvalues. To prove $AB$ is positive definite, I also need to prove $(AB)^\ast = AB$ (so $AB$ is Hermitian). Is this statement true? If not, does anyone have a counterexample? Thanks, Josh
EDIT: Changed example to use strictly positive definite $A$ and $B$. To complement the nice answers above, here is a simple explicit counterexample: $$A=\begin{bmatrix}2 & -1\\\\ -1 & 2\end{bmatrix},\qquad B = \begin{bmatrix}10 & 3\\\\ 3 & 1\end{bmatrix}. $$ Matrix $A$ has eigenvalues $(1,3)$, while $B$ has eigenvalues approximately $(0.09, 10.91)$ — both positive. Then, we have $$AB = \begin{bmatrix} 17 & 5\\\\ -4 & -1\end{bmatrix}$$ Now, pick the vector $x=[0\ \ 1]^T$, which shows that $x^T(AB)x = -1$, so $AB$ is not positive definite.
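The counterexample can be confirmed numerically (this check is an addition to the answer):

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])
B = np.array([[10.0, 3.0], [3.0, 1.0]])

# both symmetric with strictly positive spectra, hence positive definite
assert np.all(np.linalg.eigvalsh(A) > 0)
assert np.all(np.linalg.eigvalsh(B) > 0)

AB = A @ B
x = np.array([0.0, 1.0])
print(x @ AB @ x)          # -1.0: the quadratic form goes negative
```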
{ "language": "en", "url": "https://math.stackexchange.com/questions/66520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Separability of a group and its dual Here is the following exercise: Let $G$ be an abelian topological group. $G$ has a countable topological basis iff its dual $\hat G$ has one. I am running into difficulties with the compact-open topology while trying this. Any help?
Let $G$ be a locally compact abelian topological group. Suppose $G$ has a countable topological basis $(U_n)_{n \in \mathbb{N}}$; we show $\hat{G}$ has one. For every finite subset $I$ of $\mathbb{N}$, let $O_I=\cup_{i \in I}U_i$. We define $B:=\{\bar{O_I} \mid \bar{O_I}$ is compact $\}$. $B$ is countable, because there are only countably many finite subsets of $\mathbb{N}$. $U(1)$ has a countable topological basis $(V_n)_{n \in\mathbb{N}}$. Let $O(K,V)=\{ \chi \in \hat{G} \mid \chi(K) \subset V\}$, with $K$ compact in $G$ and $V$ open in $U(1)$; the sets $O(K,V)$ are open in the compact-open topology on $\hat{G}$. Let $B'=\{O(K,V_n)\mid K \in B, n \in \mathbb{N}\}$. Then $B'$ is countable and is a topological basis of $\hat{G}$. Conversely, if $\hat{G}$ has a countable topological basis, then so does $\hat{\hat{G}}$. But $\hat{\hat{G}}=G$ (Pontryagin duality), so $G$ has a countable topological basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Determine a point $$\text{Triangle } ABC\colon\ A(4,2);\ B(-2,1);\ C(3,-2)$$ Find a point $D$ such that this equality holds: $$5\vec{AD}=2\vec{AB}-3\vec{AC}$$
First of all you will need to find the point $E$: use that $E$ lies on the line $p(A,B)$ and that $\left\vert AB \right\vert = \left\vert BE \right\vert$, so that $\vec{AE}=2\vec{AB}$. Since $p(A,C)\parallel p(F,E)$, we may write the equation $\frac{y_C-y_A}{x_C-x_A}=\frac{y_E-y_F}{x_E-x_F}$, and together with $\left\vert EF \right\vert=3 \left\vert AC \right\vert$ this determines the point $F$ (with $\vec{AF}=2\vec{AB}-3\vec{AC}$). Finally, since $\left\vert AF \right\vert=5 \left\vert AD \right\vert$ and $D$ lies on the segment from $A$ to $F$, we may write the equations $x_D=\frac{x_F+4x_A}{5}$ and $y_D=\frac{y_F+4y_A}{5}$.
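As an added numerical check (bypassing the geometric construction), one can solve the vector equation directly: $\vec{AD}=\tfrac15(2\vec{AB}-3\vec{AC})$, so $D = A + \tfrac15\bigl(2(B-A)-3(C-A)\bigr)$.

```python
from fractions import Fraction as F

A = (F(4), F(2))
B = (F(-2), F(1))
C = (F(3), F(-2))

AB = (B[0] - A[0], B[1] - A[1])     # (-6, -1)
AC = (C[0] - A[0], C[1] - A[1])     # (-1, -4)

# 5·AD = 2·AB - 3·AC  =>  AD = (2·AB - 3·AC) / 5
AD = ((2*AB[0] - 3*AC[0]) / 5, (2*AB[1] - 3*AC[1]) / 5)
D = (A[0] + AD[0], A[1] + AD[1])
print(D)    # (Fraction(11, 5), Fraction(4, 1)), i.e. D = (11/5, 4)
```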
{ "language": "en", "url": "https://math.stackexchange.com/questions/66671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
questions about residue Let $f(z)$ be a rational function on $\mathbb{C}$. If the residues of $f$ at $z=0$ and $z=\infty$ are both $0$, is it true that $\oint_{\gamma} f(z)\mathrm dz=0$ ($\gamma$ is a closed curve in $\mathbb{C}$)? Thanks.
In fact, to have $\oint_\gamma f(z)\ dz =0$ for all closed contours $\gamma$ that don't pass through a pole of $f$, what you need is that the residues at all the poles of $f$ are 0. Since the sum of the residues at all the poles (including $\infty$) is always 0, it's necessary and sufficient that the residues at all the finite poles are 0.
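As an added illustration: $f(z)=1/z^2$ has a single pole (at $0$) with residue $0$, so its integral over a closed contour around the pole vanishes. Here is a crude numerical check on the unit circle:

```python
import cmath

N = 200_000                      # number of panels on |z| = 1
total = 0j
for k in range(N):
    t0 = 2 * cmath.pi * k / N
    t1 = 2 * cmath.pi * (k + 1) / N
    zm = cmath.exp(1j * (t0 + t1) / 2)       # midpoint of the panel
    dz = cmath.exp(1j * t1) - cmath.exp(1j * t0)
    total += dz / zm**2                      # midpoint rule for ∮ z^{-2} dz

print(abs(total))                # ≈ 0 (the residue at 0 is 0)
```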
{ "language": "en", "url": "https://math.stackexchange.com/questions/66752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When is the preimage of prime ideal is not a prime ideal? If $f\colon R\to S$ is a ring homomorphism such that $f(1)=1$, it's straightforward to show that the preimage of a prime ideal is again a prime ideal. What happens though if $f(1)\neq 1$? I use the fact that $f(1)=1$ to show that the preimage of a prime ideal is proper, so I assume there is some example where the preimage of a prime ideal is not proper, and thus not prime when $f(1)\neq 1$? Could someone enlighten me on such an example?
Consider the "rng" homomorphism $f:\mathbb{Z}\to\mathbb{Q}$ where $f(n)=0$; then $(0)$ is a prime ideal of $\mathbb{Q}$, but $f^{-1}(0)=\mathbb{Z}$ is not proper, hence not prime. A different example would be $f:\mathbb{Z}\to\mathbb{Z}\oplus\mathbb{Z}$ where $f(1)=(1,0)$; then for any prime ideal $P\subset \mathbb{Z}$, we have that $I=\mathbb{Z}\oplus P$ is a prime ideal of $\mathbb{Z}\oplus\mathbb{Z}$, but $f^{-1}(I)=\mathbb{Z}$ is not proper, hence not prime.
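To see the second example concretely (an added check): the map $f(n)=(n,0)$ preserves addition and multiplication but not the identity, and every integer lands in the prime ideal $I=\mathbb{Z}\oplus 2\mathbb{Z}$, so $f^{-1}(I)=\mathbb{Z}$ is not proper.

```python
# f : Z -> Z⊕Z with f(n) = (n, 0); note f(1) = (1, 0) is not the identity (1, 1).
def f(n):
    return (n, 0)

def add(u, v): return (u[0] + v[0], u[1] + v[1])
def mul(u, v): return (u[0] * v[0], u[1] * v[1])

# f respects + and · on a sample of integers
for a in range(-6, 7):
    for b in range(-6, 7):
        assert f(a + b) == add(f(a), f(b))
        assert f(a * b) == mul(f(a), f(b))

# I = Z ⊕ 2Z is prime in Z⊕Z (the quotient is Z/2), yet every f(n) lies in I
in_I = lambda u: u[1] % 2 == 0
print(all(in_I(f(n)) for n in range(-100, 101)))   # True
```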
{ "language": "en", "url": "https://math.stackexchange.com/questions/66832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Can we use Peano's axioms to prove that integer = prime + integer? Every integer greater than 2 can be expressed as the sum of some prime number greater than 2 and some nonnegative integer: $n=p+m$. Since 3=3+0; 4=3+1; 5=3+2 or 5=5+0; etc., it is obvious that the statement is true. My question is: Can we use Peano's axioms to prove this statement (especially the sixth axiom, which states "For every natural number $n$, $S(n)$ is a natural number")?
Yes, we can use the Peano axioms to prove that integer = prime + integer: for every $n \ge 3$, simply take $p = 3$ and $m = n - 3$, so that $n = 3 + m$. The identity $n = p + m$ then follows from the basic facts about addition provable from the axioms — think of $0 + a = a$ and induction on the successor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/66882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proof that $x \Phi(x) + \Phi'(x) \geq 0$ $\forall x$, where $\Phi$ is the normal CDF As title. Can anyone supply a simple proof that $$x \Phi(x) + \Phi'(x) \geq 0 \quad \forall x\in\mathbb{R}$$ where $\Phi$ is the standard normal CDF, i.e. $$\Phi(x) = \int_{-\infty}^x \frac{1}{\sqrt{2\pi}} e^{-y^2/2} {\rm d} y$$ I have so far: Defining $f(x) = x \Phi(x) + \Phi'(x)$ we get $$ \begin{align} f'(x) & = \Phi(x) + x \Phi'(x) + \Phi''(x) \\ & = \Phi(x) + x\Phi'(x) - x\Phi'(x) \\ & = \Phi(x) \\ & >0 \end{align}$$ so it seems that if we can show $$\lim_{x\to-\infty} f(x) = 0$$ then we have our proof - am I correct? Clearly $f$ is the sum of two terms which tend to zero, so maybe I have all the machinery I require, and I just need to connect the parts in the right way! Assistance will be gratefully received. In case anyone is interested in where this question comes from: Bachelier's formula for an option struck at $K$ with time $T$ until maturity, with volatility $\sigma>0$ and current asset price $S$ is given by $$V(S) = (S - K) \Phi\left( \frac{S-K}{\sigma S \sqrt{T}} \right) + \sigma S \sqrt{T} \Phi' \left( \frac{S-K}{\sigma S \sqrt{T}} \right) $$ Working in time units where $\sigma S\sqrt{T} = 1$ and letting $x=S-K$, we have $$V(x) = x \Phi(x) + \Phi'(x)$$ and I wanted a simple proof that $V(x)>0$ $\forall x$, i.e. an option always has positive value under Bachelier's model.
Writing $\phi(x) = (2\pi)^{-1/2}\exp(-x^2/2)$ for $\Phi^{\prime}(x)$ $$ x\Phi(x) + \phi(x) = \int_{-\infty}^x x\phi(t)\mathrm dt + \phi(x) \geq \int_{-\infty}^x t\phi(t)\mathrm dt + \phi(x) = -\phi(t)\biggr\vert_{-\infty}^x + \phi(x) = 0 $$
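A numerical spot check of the inequality (added; it uses the error-function form of the normal CDF):

```python
import math

def Phi(x):     # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):     # standard normal density, Phi'(x)
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

xs = [i / 10.0 for i in range(-50, 51)]
vals = [x * Phi(x) + phi(x) for x in xs]
print(min(vals))           # positive; since f' = Phi > 0, the minimum sits at x = -5
assert all(v >= 0.0 for v in vals)
```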
{ "language": "en", "url": "https://math.stackexchange.com/questions/66958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 2 }
Normalizer of the normalizer of the Sylow $p$-subgroup If $P$ is a Sylow $p$-subgroup of $G$, how do I prove that the normalizer of the normalizer of $P$ is the same as the normalizer of $P$?
Another proof. We have that $P\in\text{Syl}_p(\mathbf{N}_G(P))$ and $\mathbf{N}_G(P)\trianglelefteq \mathbf{N}_G(\mathbf{N}_G(P))$. By Frattini's Argument: $$\mathbf{N}_G(\mathbf{N}_G(P))=\mathbf{N}_{\mathbf{N}_G(\mathbf{N}_G(P\,))}(P)\cdot\mathbf{N}_G(P)=\mathbf{N}_G(P),$$ where the last equality holds because $\mathbf{N}_{\mathbf{N}_G(\mathbf{N}_G(P\,))}(P)\subseteq \mathbf{N}_G(P)$.
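A brute-force illustration in a small group (added; here $G=S_4$ acting on $\{0,1,2,3\}$ and $P$ is a Sylow $3$-subgroup):

```python
from itertools import permutations

G = list(permutations(range(4)))     # S_4 as tuples

def mul(a, b):                       # composition: (a∘b)(i) = a[b[i]]
    return tuple(a[i] for i in b)

def inv(a):
    r = [0] * 4
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

c = (1, 2, 0, 3)                     # the 3-cycle (0 1 2)
P = {(0, 1, 2, 3), c, mul(c, c)}     # a Sylow 3-subgroup

def normalizer(H):
    return {g for g in G
            if {mul(mul(g, h), inv(g)) for h in H} == set(H)}

N = normalizer(P)
NN = normalizer(N)
assert N == NN                       # N_G(N_G(P)) = N_G(P)
print(len(N))                        # 6: the copy of S_3 on {0, 1, 2}
```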
{ "language": "en", "url": "https://math.stackexchange.com/questions/67008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 7, "answer_id": 2 }
rotation vector If $t(t),n(t),b(t)$ form a rotating, right-handed frame of orthogonal unit vectors, show that there exists a vector $w(t)$ (the rotation vector) such that $\dot{t} = w \times t$, $\dot{n} = w \times n$, and $\dot{b} = w \times b$. So I'm thinking this is related to the Frenet-Serret equations and that the matrix of coefficients of $\dot{t}, \dot{n}, \dot{b}$ with respect to $t,n,b$ is skew-symmetric. Thanks!
You have sufficient information to compute it yourself! :) Suppose that $w=aT+bN+cB$, with $a$, $b$ and $c$ some functions. Then you want, for example, that $$\kappa N = T' = w\times T = (aT+bN+cB)\times T=b N\times T+cB\times T=-bB+cN.$$ Since $\{T,N,B\}$ is an basis, this gives you some information about the coefficients. Can you finish?
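Finishing the computation (spoiler for the exercise) leads to the Darboux vector $w = \tau T + \kappa B$. Here is an added numerical check of all three equations, taking the frame at one instant to be the standard basis and using the Frenet-Serret formulas for the derivatives:

```python
import numpy as np

# Frenet-Serret: T' = κN,  N' = -κT + τB,  B' = -τN.
kappa, tau = 0.7, 1.3
T = np.array([1.0, 0.0, 0.0])
N = np.array([0.0, 1.0, 0.0])
B = np.array([0.0, 0.0, 1.0])

Tp = kappa * N
Np = -kappa * T + tau * B
Bp = -tau * N

w = tau * T + kappa * B        # the Darboux (rotation) vector

assert np.allclose(np.cross(w, T), Tp)
assert np.allclose(np.cross(w, N), Np)
assert np.allclose(np.cross(w, B), Bp)
print("w = τT + κB works")
```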
{ "language": "en", "url": "https://math.stackexchange.com/questions/67040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The sum of a polynomial over a boolean affine subcube Let $P:\mathbb{Z}_2^n\to\mathbb{Z}_2$ be a polynomial of degree $k$ over the boolean cube. An affine subcube inside $\mathbb{Z}_2^n$ is defined by a basis of $k+1$ linearly independent vectors and an offset in $\mathbb{Z}_2^n$. [See "Testing Low-Degree Polynomials over GF(2)" by Noga Alon, Tali Kaufman, Michael Krivelevich, Simon Litsyn, Dana Ron - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.1235 - for more details] Why does taking such a subcube and evaluating the sum of $P$ on all $2^{k+1}$ elements of it, always results in zero ?
Let the coordinates be $z_1,z_2,\ldots,z_n$. It suffices to do the case, when $P$ is a monomial, say $P=z_{i_1}z_{i_2}\cdots z_{i_k}$. Let's use induction on $k$. If $k=0$, then $P=1$, and the claim is clear. In the general case let us consider the coordinates $z_{i_j},1\le j\le k$. If all these take only a single value on the affine subcube, then the restriction of $P$ to the subcube is a constant, and the claim holds. On the other hand, if one of these coordinates, say $z_{i_m}$, takes both values within the subcube, then $P$ obviously vanishes identically in the zero-set of $z_{i_m}=0$, so we need to worry about the restriction of $P$ to the affine hyperplane $H_m$ determined by the equation $z_{i_m}=1$. The intersection of the subcube and $H_m$ will be another affine subcube of dimension one less, i.e. at most $k$. Fortunately also the restriction of $P$ to that smaller cube coincides with that of the monomial $P/z_{i_m}$ of degree $k-1$. Therefore the induction hypothesis applies, and we are done. [Edit:] The logic of the inductive step was a bit unclear in the first version. I think that it is clearer to first restrict to a smaller cube, and then observe that the degree also goes down. Not the other way around. [/Edit] Remark: In coding theory this is a standard duality property of the so called Reed-Muller codes. The polynomial $P$, when evaluated at all the points of $\mathbf{Z}_2^n$, gives a word of the code $RM(k,n)$. The characteristic function of the affine hypercube is of degree $n-k-1$, and is thus a word of the dual code $RM(n-k-1,n)$ that is also known to be equal to the dual: $RM(n-k-1,n)=RM(k,n)^\perp$. The duality means that these two functions both take value $=1$ at an even number of points, and the claim follows.
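An added randomized check of the statement for monomials (which suffices by linearity): sum a degree-$k$ monomial over random $(k+1)$-dimensional affine subcubes of $\mathbb{Z}_2^n$ and confirm the sum is always $0$ mod $2$.

```python
import itertools
import random

n, k = 6, 2
idx = random.sample(range(n), k)        # P(z) = product of k coordinates

def P(z):
    r = 1
    for i in idx:
        r &= z[i]
    return r

def random_subcube():
    # offset + GF(2)-span of k+1 independent vectors; independence is
    # checked by counting the span (it must have 2^(k+1) elements)
    while True:
        basis = [tuple(random.randint(0, 1) for _ in range(n))
                 for _ in range(k + 1)]
        span = set()
        for cs in itertools.product((0, 1), repeat=k + 1):
            v = tuple(sum(c * b[i] for c, b in zip(cs, basis)) % 2
                      for i in range(n))
            span.add(v)
        if len(span) == 2 ** (k + 1):
            off = tuple(random.randint(0, 1) for _ in range(n))
            return [tuple((o + w) % 2 for o, w in zip(off, v)) for v in span]

for _ in range(50):
    cube = random_subcube()
    assert sum(P(z) for z in cube) % 2 == 0
print("all subcube sums are even")
```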
{ "language": "en", "url": "https://math.stackexchange.com/questions/67158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to prove the sum of squares is minimum? Given $n$ nonnegative values. Their sum is $k$. $$ x_1 + x_2 + \cdots + x_n = k $$ The sum of their squares is defined as: $$ x_1^2 + x_2^2 + \cdots + x_n^2 $$ I think that the sum of squares is minimum when $x_1 = x_2 = \cdots = x_n$. But I can't figure out how to prove it. Can anybody help me on this? Thanks.
You can use Lagrange multipliers. We want to minimize $\sum_{i=1}^{n} x_{i}^{2}$ subject to the constraint $\sum_{i=1}^{n} x_{i} = k$. Set $J= \sum x_{i}^{2} + \lambda\sum_{i=1}^{n} x_{i}$. Then $\frac{\partial J}{\partial x_i}=0$ implies that $x_{i} = -\lambda/2$. Substituting this back into the constraint give $\lambda = -2k/n$. Thus $x_{i} = k/n$, as you thought.
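An added empirical check: any nonnegative vector rescaled to sum to $k$ has sum of squares at least $k^2/n$, the value attained at $x_i = k/n$.

```python
import random

n, k = 5, 7.0
best = k * k / n                 # sum of squares at x_i = k/n

for _ in range(1000):
    x = [random.random() for _ in range(n)]
    s = sum(x)
    x = [xi * k / s for xi in x]            # rescale so the entries sum to k
    assert sum(xi * xi for xi in x) >= best - 1e-9
print(best)                      # 9.8 = k^2 / n
```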
{ "language": "en", "url": "https://math.stackexchange.com/questions/67192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 9, "answer_id": 4 }
Interesting but elementary properties of the Mandelbrot Set I suppose everyone is familiar with the Mandelbrot set. I'm teaching a course right now in which I am trying to convey the beauty of some mathematical ideas to first year students. They basically know calculus but not much beyond. The Mandelbrot set is certainly fascinating in that you can zoom in and get an incredible amount of detail, all out of an analysis of the simple recursion $z\mapsto z^2+c$. So my plan is to show them a movie of a deep fractal zoom, and go over the definition of the Mandelbrot set. But I'd like to also show them something mathematically rigorous, and the main interesting properties I know about the Mandelbrot set are well beyond the scope of the course. I could mention connectedness, which is of course a seminal result, but that's probably not that interesting to someone at their level. So my question is whether anyone has any ideas about an interesting property of the Mandelbrot set that I could discuss at the calculus level, hopefully including an actual calculation or simple proof.
A proof that once $|z|>2$ the recursion escapes to infinity. The calculation is elementary: if $|z| > 2$ and $|z| \ge |c|$, then $|z^2+c| \ge |z|^2 - |c| \ge |z|^2 - |z| = |z|(|z|-1) > |z|$, and since the growth factor $|z|-1$ stays above $|z_0|-1>1$, the moduli grow at least geometrically. This is exactly the calculation-level fact behind the standard escape-time pictures.
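An added quick illustration with $c=1$, whose orbit $0, 1, 2, 5, 26, \dots$ leaves the disk of radius $2$ and then explodes:

```python
c = 1          # a parameter outside the Mandelbrot set
z = 0
moduli = []
for _ in range(8):
    z = z * z + c
    moduli.append(abs(z))

print(moduli)  # [1, 2, 5, 26, 677, 458330, ...] — strictly increasing
assert all(b > a for a, b in zip(moduli, moduli[1:]))
```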
{ "language": "en", "url": "https://math.stackexchange.com/questions/67239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 1 }
Do real matrices always have real eigenvalues? I was trying to show that orthogonal matrices have eigenvalues $1$ or $-1$. Let $u$ be an eigenvector of $A$ (orthogonal) corresponding to eigenvalue $\lambda$. Since orthogonal matrices preserve length, $ \|Au\|=|\lambda|\cdot\|u\|=\|u\|$. Since $\|u\|\ne0$, $|\lambda|=1$. Now I am stuck to show that lambda is only a real number. Can any one help with this?
It depends on whether you are working with vector spaces over the real numbers or over the complex numbers. In the latter case the answer is no: a real matrix, viewed as a complex linear map, can have non-real eigenvalues. In the former case the answer has to be yes, simply because eigenvalues are by definition scalars of the ground field.
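An added concrete example: the rotation matrix below is real and orthogonal, yet its eigenvalues $e^{\pm i\theta}$ are non-real. Consistent with the question's observation, they do have modulus $1$.

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals = np.linalg.eigvals(R)
print(vals)                              # e^{±iθ}, not real
assert np.allclose(np.abs(vals), 1.0)    # but each has modulus 1
```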
{ "language": "en", "url": "https://math.stackexchange.com/questions/67304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 1 }
Sequence sum question: $\sum_{n=0}^{\infty}nk^n$ I am very confused about how to compute $$\sum_{n=0}^{\infty}nk^n.$$ Can anybody help me?
If you know the value of the geometric series $\sum\limits_{n=0}^{+\infty}x^n$ at every $x$ such that $|x|<1$ and if you know that for every nonnegative integer $n$, the derivative of the polynomial function $x\mapsto x^n$ is $x\mapsto nx^{n-1}$, you might get an idea (and a proof) of the value of the series $\sum\limits_{n=0}^{+\infty}nx^{n-1}$, which is $x^{-1}$ times what you are looking for.
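Carrying the hint through (added): differentiating $\sum_{n\ge 0}x^n = \frac{1}{1-x}$ gives $\sum_{n\ge 0}nx^{n-1} = \frac{1}{(1-x)^2}$, hence $\sum_{n\ge 0}nk^n = \frac{k}{(1-k)^2}$ for $|k|<1$. A quick numerical check:

```python
k = 0.37                          # any |k| < 1
partial = sum(n * k**n for n in range(2000))
closed = k / (1.0 - k)**2

print(closed)                     # ≈ 0.9322
assert abs(partial - closed) < 1e-9
```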
{ "language": "en", "url": "https://math.stackexchange.com/questions/67364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
Group of order 15 is abelian How do I prove that a group of order 15 is abelian? Is there any general strategy to prove that a group of particular order (composite order) is abelian?
(Without: Sylow, Cauchy, semi-direct products, cyclic $G/Z(G)$ argument, $\gcd(n,\phi(n))=1$ argument. Just Lagrange and the class equation.) Firstly, if $|G|=pq$, with $p,q$ distinct primes, say wlog $p>q$, then $G$ can't have $|Z(G)|=p,q$, because otherwise there's no way to accommodate the centralizers of the noncentral elements between the center and the whole group (recall that, for all such $x$, it must strictly hold $Z(G)<C_G(x)<G$). Next, if in addition $q\nmid p-1$, then a very simple counting argument suffices to rule out the case $|Z(G)|=1$. In fact, if $Z(G)$ is trivial, then the class equation reads: $$pq=1+kp+lq \tag 1$$ where $k$ and $l$ are the number of conjugacy classes of size $p$ and $q$, respectively. (Both $k$ and $l$ are positive: if $l=0$, reducing $(1)$ modulo $p$ gives $0\equiv 1\pmod p$, and similarly $k=0$ fails modulo $q$.) Now, there are exactly $lq$ elements of order $p$ (they are the ones in the conjugacy classes of size $q$). Since each subgroup of order $p$ comprises $p-1$ elements of order $p$, and two subgroups of order $p$ intersect trivially, then $lq=m(p-1)$ for some positive integer $m$ such that $q\mid m$ (because by assumption $q\nmid p-1$). Therefore, $(1)$ yields: $$pq=1+kp+m'q(p-1) \tag 2$$ for some positive integer $m'$; but then $q\mid 1+kp$, namely $1+kp=nq$ for some positive integer $n$, which plugged into $(2)$ yields: $$p=n+m'(p-1) \tag 3$$ In order for $m'$ to be a positive integer, it must be $n=1$ (which in turn implies $m'=1$, but this is not essential here). So, $1+kp=q$: contradiction, because $p>q$. So we are left with $|Z(G)|=pq$, namely $G$ abelian (and hence, incidentally, cyclic).
{ "language": "en", "url": "https://math.stackexchange.com/questions/67407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 6, "answer_id": 4 }
Diverging sequence I can't understand diverging sequences. How can I prove that $a_n=1/n^2-\sqrt{n}$ is diverging? Where to start? What picture should I have in my mind? I tried to use $\forall A\ \exists \varepsilon>0\ \forall n^* \exists n\ge n^*: |a_n-A|\ge \varepsilon$ (the negation of convergence to $A$, for every candidate limit $A$), but how should I see this? And how can I solve the question with this property?
Now I got this: we need, for every candidate limit $A$ and some $\epsilon>0$, arbitrarily large $n$ with $|a_n-A| \ge \epsilon$. For $n\ge 1$ we have $\sqrt{n} - \frac{1}{n^2}\ge 0$, so $|a_n - A| \ge \sqrt{n} - \frac{1}{n^2} - |A|$. Suppose $u=\sqrt{n}$ $(u \ge 1)$; then $\frac{1}{n^2} = u^{-4} \le 1$, hence $|a_n - A| \ge u - u^{-4} - |A| \ge u - 1 - |A|$. So $|a_n-A| \ge \epsilon$ as soon as $u \ge \epsilon + |A| + 1$, i.e. $n \ge (\epsilon + |A| + 1)^2$. Conclusion: for every $A$, every $\epsilon>0$ and every $n^*$, any $n \ge \max\bigl(n^*, (\epsilon+|A|+1)^2\bigr)$ satisfies $|a_n-A|\ge\epsilon$, so no $A$ can be the limit and the sequence diverges (indeed $a_n\to-\infty$).
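A quick numerical check of divergence (added): since $\sqrt{n} - 1/n^2 \ge \sqrt{n} - 1$, taking $n \ge (\epsilon + |A| + 1)^2$ forces $|a_n - A| \ge \epsilon$ for any candidate limit $A$.

```python
import math

def a(n):
    return 1 / n**2 - math.sqrt(n)

eps = 0.5
for A in (-10.0, 0.0, 5.0):
    n0 = math.ceil((eps + abs(A) + 1) ** 2)
    assert all(abs(a(n) - A) >= eps for n in range(n0, n0 + 500))
print("no candidate limit survives")
```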
{ "language": "en", "url": "https://math.stackexchange.com/questions/67476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }