Determine a holomorphic function by means of its values on $ \mathbb{N} $ This is exercise 5, page 236 from Remmert, Theory of complex functions For each of the following properties produce a function which is holomorphic in a neighborhood of $ 0 $ or prove that no such function exists: i) $ f (\frac{1}{n}) = (-1)^{n}\frac{1}{n} \ $ for almost all $ n \in \mathbb{N}\ , n \neq 0 $ ii) $ f (\frac{1}{n}) = \frac{1}{n^{2} - 1 } $ for almost all $ n \in \mathbb{N}\ , n \neq 0, n \neq 1 $ iii) $|f^{(n)}(0)|\geq (n!)^{2} $ for almost all $ n \in \mathbb{N} $
Your title is misleading, as you cannot determine a holomorphic function from its values on $\mathbb{N}$. However, in this case you can determine it, using the uniqueness theorem for analytic functions: if $f$ and $g$ are two analytic functions and there is a convergent sequence $a_n$ of distinct points (with limit in the domain) such that $f(a_n)=g(a_n)$ for all $n$, then $f=g$. Thus, for example, in your (i), putting $g(z)=z$, we see that $f(1/(2n))=g(1/(2n))$, so if $f$ is analytic we must have $f=g$. But then $f(1/(2n+1)) = 1/(2n+1)$, contradicting the fact that $f(1/(2n+1)) = -1/(2n+1)$, so no such analytic function exists. (ii) is similar. As for (iii), try to think about the Taylor expansion of $f$ near $0$. What will be its radius of convergence?
{ "language": "en", "url": "https://math.stackexchange.com/questions/108431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of pairs $(a, b)$ with $\gcd(a, b) = 1$ Given $n\geq1$, how can I count the number of pairs $(a,b)$, $0\leq a,b \leq n$ such that $\gcd(a,b)=1$? I think the answer is $2\sum_{i=1}^{n}\phi(i)$. Am I right?
Perhaps it could be mentioned that if we consider the probability $P_{n}$ that two randomly chosen integers in $\{1, 2, \dots, n \}$ are coprime $$ P_{n} = \frac{\lvert \{(a, b) : 1 \le a, b \le n, \gcd(a,b) =1 \}\rvert}{n^{2}}, $$ then $$ \lim_{n \to \infty} P_{n} = \frac{6}{\pi^{2}}. $$ See this Wikipedia article.
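As a quick numerical sanity check of this limit (a sketch added here, not part of the original answer; the choice of $n$ and the brute-force approach are arbitrary):

```python
from math import gcd, pi

# Brute-force estimate of P_n, the probability that two integers in
# {1, ..., n} are coprime, compared against 6/pi^2.
n = 1000
count = sum(1 for a in range(1, n + 1)
              for b in range(1, n + 1) if gcd(a, b) == 1)
print(count / n**2)  # roughly 0.608
print(6 / pi**2)     # 0.60792710...
```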
{ "language": "en", "url": "https://math.stackexchange.com/questions/108483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Why are $\log$ and $\ln$ being used interchangeably? A definition for complex logarithm that I am looking at in a book is as follows - $\log z = \ln r + i(\theta + 2n\pi)$ Why is it $\log z = \ldots$ and not $\ln z = \ldots$? Surely the base of the log will make a difference to the answer. It also says a few lines later $e^{\log z} = z$. Yet again I don't see how this makes sense. Why isn't $\ln$ used instead of $\log$?
"$\log$" with no base generally means base the base is $e$, when the topic is mathematics, just as "$\exp$" with no base means the base is $e$. In computer programming languages, "$\log$" also generally means base-$e$ log. On calculators, "$\log$" with no base means the base is $10$ because calculators are designed by engineers. Ironically, the reasons for frequently using base-$10$ logarithms were made obsolete by calculators, which became prevalent in the early 1970s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/108547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
General solution using Euclidean Algorithm I was able to come up with the integer solution that they also have in the textbook using the same method they used but I am really puzzled how they come up with a solution for all the possible integer combinations...how do they come up with that notation/equation that represents all the integer solutions ? I am talking about the very last line.
A general rule in life: when you have a linear equation of the form $f(x_1,x_2,\dots, x_n)=c$, find one particular solution, and then find the general solution of the homogeneous equation $f(x_1,\dots,x_n)=0$; the general solution of the original equation is obtained by adding the particular solution you found to the general solution of the homogeneous equation. In our case the homogeneous equation is $957x+609y=0$. Divide by the gcd $87$ to obtain $11x=-7y$. Since $\gcd(11,7)=1$, $7$ must divide $x$ and $11$ must divide $y$, so the general solution of this equation is $(-7n,11n)$ for integer $n$.
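A sketch of how one could compute a particular solution programmatically via the extended Euclidean algorithm (the textbook's right-hand side is not quoted in the question, so $c=87$ below is purely an illustrative assumption):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b, c = 957, 609, 87                 # c is a hypothetical right-hand side
g, x0, y0 = extended_gcd(a, b)         # g == 87
x0, y0 = x0 * (c // g), y0 * (c // g)  # particular solution (needs g | c)

# General solution: shift by the homogeneous solutions (-7n, 11n).
for n in range(-2, 3):
    x, y = x0 - (b // g) * n, y0 + (a // g) * n
    assert a * x + b * y == c
print("all checks passed")
```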
{ "language": "en", "url": "https://math.stackexchange.com/questions/108567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Pointwise limit of continuous functions not Riemann integrable The following is an exercise from Stein's Real Analysis (ex. 10 Chapter 1). I know it should be easy but I am somewhat confused at this point; it mostly consists of providing the Cantor-like construction for continuous functions on the interval $[0,1]$ whose pointwise limit is not Riemann integrable. So, let $C'$ be a closed set so that at the $k$th stage of the construction one removes $2^{k-1}$ centrally situated open intervals each of length $l_{k}$ with $l_{1}+\ldots+2^{k-1}l_{k}<1$; in particular, we know that the measure of $C'$ is strictly positive. Now, let $F_{1}$ denote a piecewise linear and continuous function on $[0,1]$ with $F_{1}=1$ in the complement of the first interval removed in the construction of $C'$, $F_{1}=0$ at the center of this interval, and $0 \leq F_{1}(x) \leq 1$ for all $x$. Similarly, construct $F_{2}=1$ in the complement of the intervals in stage two of the construction of $C'$, with $F_{2}=0$ at the center of these intervals, and $0 \leq F_{2} \leq 1$, and so on, and let $f_{n}=F_{1}\cdot \ldots \cdot F_{n}$. Now, obviously $f_{n}(x)$ converges to a limit say $f(x)$ since it is decreasing and bounded and $f(x)=1$ if $x \in C'$; so in order to show that $f$ is discontinuous at every point of $C'$, one should show that there is a sequence of points $x_{n}$ so that $x_{n} \rightarrow x$ and $f(x_{n})=0$; I can't see this, so any help is welcomed, thanks a lot!
Take a point $x\in C'$ and any open interval $I$ containing $x$. Then there is an open interval $D\subseteq I $ that was removed in the construction of $C'$. Indeed, since $C'$ has no isolated points, there is a point $y\in C'\cap I$ distinct from $x$. Between $x$ and $y$, there is an open interval removed in the construction of $C'$, which we take to be our $D$. Now, by the definition of the $f_n$, there is a point $d\in D$ (namely the center of $D$) such that $f(d)=0$. To recap: given $x\in C'$ and any open interval $I$ containing $x$, there is a point $d\in I$ with $f(d)=0$. As $f(x)=1$, this implies that $f$ is not continuous at $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/108619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
group of order 28 is not simple I have a proof from notes but I don't quite understand the bold part: Abelian case: $a \in G \setminus \{1\}$. If $\langle a\rangle \neq G$, then we are done. If $\langle a\rangle = G$, then $\langle a^4\rangle $ is a proper normal subgroup of $G$. General case: WLOG we can assume $G \neq Z(G)$. If $\langle 1\rangle \neq Z(G)$, then $Z(G)$ is a proper normal subgroup of $G$. Done. Otherwise $|Z(G)|= 1$. $$ 28 = 1 + \sum_{x}\frac{|G|}{|C_G(x)|}, $$ where the sum runs over representatives $x$ of the non-central conjugacy classes. There must be some $a\in G$ such that 7 does not divide $$ \frac{|G|}{|C_G(a)|} $$ It follows that $\frac{|G|}{|C_G(a)|} = 2 $ or $4$ $\Rightarrow [G:C_G(a)] = 2$ or $4$ $\Rightarrow 28 \mid2!$ or $28\mid4!$. Therefore a group of order 28 is not simple. Why are they true?
Reading the class equation modulo 7 gives the existence of one $x$ such that $\frac{|G|}{|C_G(x)|}$ is NOT divisible by 7. Hence 7 divides $|C_G(x)|$. Now the divisors of $|G|=28$ are 1, 2, 4, 7, 14, 28. Since $\frac{|G|}{|C_G(x)|}$ cannot be 1 (the sum runs over the non-central classes) and cannot be divisible by 7, the only possibilities are 2 and 4. As for the last implication: if $G$ were simple, the action of $G$ on the cosets of $C_G(x)$ would give an injective homomorphism $G\to S_{[G:C_G(x)]}$, i.e. into $S_2$ or $S_4$, so $28$ would have to divide $2!$ or $4!$, which is absurd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/108789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Number of 5 letter words over a 4 letter group using each letter at least once Given the set $\{a,b,c,d\}$ how many 5 letter words can be formed such that each letter is used at least once? I tried solving this using inclusion - exclusion but got a ridiculous result: $4^5 - \binom{4}{1}\cdot 3^5 + \binom{4}{2}\cdot 2^5 - \binom{4}{3}\cdot 1^5 = 2341$ It seems that the correct answer is: $\frac{5!}{2!}\cdot 4 = 240$ Specifically, the sum of the number of permutations of aabcd, abbcd, abccd and abcdd. I'm not sure where my mistake was in the inclusion - exclusion approach. My universal set was all possible 5 letter words over a set of 4 letters, minus the number of ways to exclude one letter times the number of 5 letter words over a set of 3 letters, and so on. Where's my mistake?
Your mistake is in the arithmetic. What you think comes out to 2341 really does come out to 240. $4^5=1024$, $3^5=243$, $2^5=32$, $1024-(4)(243)+(6)(32)-4=1024-972+192-4=1216-976=240$
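A brute-force confirmation of the count (a quick sketch, not part of the original exchange):

```python
from itertools import product

# 5-letter words over {a, b, c, d} using every letter at least once
words = [w for w in product("abcd", repeat=5) if set(w) == set("abcd")]
print(len(words))  # 240
```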
{ "language": "en", "url": "https://math.stackexchange.com/questions/108854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Upper bounds on the size of $\operatorname{Aut}(G)$ Any automorphism of a group $G$ is a bijection that fixes the identity, so an easy upper bound for the size of $\operatorname{Aut}(G)$ for a finite group $G$ is given by \begin{align*}\lvert\operatorname{Aut}(G)\rvert \leq (|G| - 1)! \end{align*} This inequality is an equality for cyclic groups of orders $1$, $2$ and $3$ and also the Klein four-group $\mathbb{Z}_2 \times \mathbb{Z_2}$. I think it's reasonable to believe that they are the only groups with this property. The factorial $(|G| - 1)!$ is eventually huge. I searched through groups of order less than $100$ with GAP and found no other examples. The problem can be reduced to the abelian case. We can check the groups of order $< 6$ by hand. Then if $|G| \geq 6$ and the equality holds, we have $\operatorname{Aut}(G) \cong S_{|G|-1}$. Now $\operatorname{Inn}(G)$ is a normal subgroup of $\operatorname{Aut(G)}$, and is thus isomorphic to $\{(1)\}$, $A_{|G|-1}$ or $S_{|G|-1}$. This is because $A_n$ is the only proper nontrivial normal subgroup of $S_n$ when $n \geq 5$. We can see that $(|G| - 1)!/2 > |G|$ and thus $\operatorname{Inn}(G) \cong G/Z(G)$ is trivial. How to prove that there are no other groups for which the equality $\lvert\operatorname{Aut}(G)\rvert = (|G| - 1)!$ holds? Are any better upper bounds known for larger groups?
I believe this is an exercise in Wielandt's permutation groups book. $\newcommand{\Aut}{\operatorname{Aut}}\newcommand{\Sym}{\operatorname{Sym}}\Aut(G) \leq \Sym(G\setminus\{1\})$ and so if $|\Aut(G)|=(|G|-1)!$, then $\Aut(G) = \Sym(G\setminus\{1\})$ acts $|G|-1$-transitively on the non-identity elements of G. This means the elements of G are indistinguishable. Heck even subsets of the same size (not containing the identity) are indistinguishable. I finish it below: In particular, every non-identity element of G has the same order, p, and G has no proper, non-identity characteristic subgroups, like $Z(G)$, so G is an elementary abelian p-group. However, the automorphism group is $\newcommand{\GL}{\operatorname{GL}}\GL(n,p)$ which, for $p \geq 3, n\geq 2$, only acts at most $n-1$-transitively since it cannot send a basis to a non-basis. The solutions of $p^n-1 \leq n-1, p \geq 3, n \geq 2$ are quite few: none. Obviously $\GL(1,p)$ has order $p-1$ which is very rarely equal to $(p-1)!$, when $p=2, 3$. $\GL(n,2)$ still can only act $n$-transitively if $2^n-1 > n+1$, since once a basis's image is specified, the other points are determined, and the solutions of $2^n-1 \leq n+1$ are also limited: $n=1,2$. Thus the cyclic groups of order 1,2,3 and the Klein four group are the only examples.
{ "language": "en", "url": "https://math.stackexchange.com/questions/108923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 6, "answer_id": 5 }
Convergence/Divergence of infinite series $\sum_{n=1}^{\infty} \frac{(\sin n+2)^n}{n3^n}$ $$ \sum_{n=1}^{\infty} \frac{(\sin n+2)^n}{n3^n}$$ Does it converge or diverge? Can we have a rigorous proof that is not probabilistic? For reference, this question is supposedly a mix of real analysis and calculus.
The values for which $\sin(n)$ is close to $1$ (say in an interval $[1-\varepsilon ; 1]$) are somewhat regular: $1 - \varepsilon \le \sin(n)$ implies that there exists an integer $k(n)$ such that $n = 2k(n) \pi + \frac \pi 2 + a(n)$ where $|a(n)| \leq \arccos(1- \varepsilon)$. As $\varepsilon \to 0$, $\arccos(1- \varepsilon) \sim \sqrt{2 \varepsilon}$, thus we can safely say that for $\varepsilon$ small enough, $|n-2k(n) \pi - \frac{\pi}2| = |a(n)| \leq 2 \sqrt{ \varepsilon}$. If $m \gt n$ and $\sin(n)$ and $\sin(m)$ are both in $[1-\varepsilon ; 1]$, then we have the inequality $|(m-n) - 2(k(m)-k(n)) \pi| \leq |m-2k(m)\pi - \frac{\pi}2| + |n-2k(n)\pi - \frac{\pi}2| \leq 4 \sqrt { \varepsilon}$, where $(k(m)-k(n))$ is some integer $k$. Since $\pi$ has a finite irrationality measure, we know that there is a finite real constant $\mu \gt 2$ such that for any integers $n,k$ large enough, $|n-k \pi| \ge k^{1- \mu} $. By picking $\varepsilon$ small enough we can forget about the finite number of exceptions to the inequality, and we get $ 4\sqrt{\varepsilon} \ge (2k)^{1- \mu}$. Thus $(m-n) \ge 2k\pi - 4\sqrt{\varepsilon} \ge \pi(4\sqrt{\varepsilon})^{\frac1{1- \mu}} - 4\sqrt{\varepsilon} \ge A_\varepsilon = A\sqrt{\varepsilon}^{\frac1{1- \mu}} $ for some constant $A$. Therefore, we have a guarantee on the length of the gaps between equally problematic terms, and we know how this length grows as $\varepsilon$ gets smaller (as we look for more problematic terms). We can get a lower bound for the first problematic term using the irrationality measure as well: from $|n-2k(n) \pi - \frac{\pi}2| \leq 2\sqrt {\varepsilon}$, we get that for $\varepsilon$ small enough, $(4k+1)^{1- \mu} \le |2n - (4k+1) \pi| \le 4\sqrt \varepsilon$, and then $n \ge B_\varepsilon = B\sqrt\varepsilon^{\frac1{1- \mu}}$ for some constant $B$. Therefore, there exists a constant $C$ such that for all $\varepsilon$ small enough, the $k$-th integer $n$ such that $1-\varepsilon \le \sin n$ is greater than $C_\varepsilon k = C\sqrt\varepsilon^{\frac1{1- \mu}}k$. Since $\varepsilon < 1$ and $\frac 1 {1- \mu} < 0$, this bound $C_ \varepsilon$ grows when $\varepsilon$ gets smaller. And furthermore, the speed of this growth is greater if we can pick a smaller (better) value for $\mu$ (though all that matters is that $\mu$ is finite). Now let us give an upper bound on the contribution of the terms where $n$ is an integer such that $\sin (n) \in [1-2\varepsilon ; 1-\varepsilon]$: $$S_\varepsilon = \sum \frac{(2+\sin(n))^n}{n3^n} \le \sum_{k\ge 1} \frac{(1- \varepsilon/3)^{kC_{2\varepsilon}}}{kC_{2\varepsilon}} = \frac{- \log (1- (1- \varepsilon/3)^{C_{2\varepsilon}})}{C_{2\varepsilon}} \\ \le \frac{- \log (1- (1- C_{2\varepsilon} \varepsilon/3))}{C_{2\varepsilon}} = \frac{- \log (C_{2\varepsilon} \varepsilon/3)}{C_{2\varepsilon}} $$ $C_{2\varepsilon} = C \sqrt{2\varepsilon}^\frac 1 {1- \mu} = C' \varepsilon^\nu$ with $\nu = \frac 1 {2(1- \mu)} \in (-1/2, 0)$, so: $$ S_\varepsilon \le - \frac{ \log (C'/3) + (1+ \nu) \log \varepsilon}{C'\varepsilon^\nu} $$ Finally, we have to check if the series $\sum S_{2^{-k}}$ converges or not: $$ \sum S_{2^{-k}} \le \sum - \frac { \log (C'/3) - k(1+ \nu) \log 2}{C' 2^{-k\nu}} = \sum (A+Bk)(2^ \nu)^k $$ Since $2^ \nu < 1$, the series converges.
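For intuition only, here is a numerical sketch of the partial sums (this proves nothing about convergence; the summand is computed as $((2+\sin n)/3)^n/n$ to avoid overflow):

```python
from math import sin

s = 0.0
for n in range(1, 10**6 + 1):
    s += ((2 + sin(n)) / 3) ** n / n   # equals (sin(n) + 2)^n / (n * 3^n)
    if n in (10**3, 10**4, 10**5, 10**6):
        print(n, s)
```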
{ "language": "en", "url": "https://math.stackexchange.com/questions/109029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50", "answer_count": 5, "answer_id": 1 }
Integration analog of automatic differentiation I was recently looking at automatic differentiation. 1) Does something like automatic differentiation exist for integration? 2) Would the integral be equivalent to something like Euler's method? (Or am I thinking about it wrong?) edit: I am looking at some inherited code that includes https://projects.coin-or.org/ADOL-C as a black box.
If I'm reading your question correctly: I don't believe there is an algorithm that, given the algorithm for your function to be integrated and appropriate initial conditions, will give an algorithm that corresponds to the integral of your original function. However: you might wish to look into the Chebfun project by Trefethen, Battles, Driscoll, and others. What this system does is to internally represent a function given to it as a piecewise polynomial of possibly high degree, interpolated at appropriately shifted and scaled "Chebyshev points" (roots of the Chebyshev polynomial of the first kind). The resulting chebfun() object is then easily differentiated, integrated, or whatever other operation you might wish to do to the function. See the user guide for more details on this approach.
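A minimal sketch of the same idea in Python using NumPy's Chebyshev class (this merely illustrates the Chebfun approach, it is not Chebfun itself; `Chebyshev.interpolate` assumes a reasonably recent NumPy):

```python
import numpy as np
from numpy.polynomial import Chebyshev

f = lambda x: np.exp(-x**2)

# Represent f by a Chebyshev interpolant on [0, 1], then integrate
# the polynomial proxy exactly.
p = Chebyshev.interpolate(f, 50, domain=[0, 1])
F = p.integ()                 # antiderivative of the interpolant
print(F(1.0) - F(0.0))        # ~0.746824, the value of the definite integral
```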
{ "language": "en", "url": "https://math.stackexchange.com/questions/109070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
Graph-Minor Theorem for Directed Graphs? Suppose that $\vec{G}$ is a directed graph and that $G$ is the undirected graph obtained from $\vec{G}$ by forgetting the direction on each edge. Define $\vec{H}$ to be a minor of $\vec{G}$ if $H$ is a minor of $G$ as undirected graphs and direction on the edges of $\vec{H}$ are the same as the corresponding edges in $\vec{G}$. Does the Robertson-Seymour Theorem hold for directed graphs (where the above definition of minor is used and our graphs are allowed to have loops and multiple edges)?
I think the answer is yes; see 10.5 in Neil Robertson and Paul D. Seymour, "Graph Minors. XX. Wagner's conjecture", Journal of Combinatorial Theory, Series B, 92:325–357, 2004, and the preceding section: As a corollary, we deduce the following form of Wagner's conjecture for directed graphs (which immediately implies the standard form of the conjecture for undirected graphs). A directed graph is a minor of another if the first can be obtained from a subgraph of the second by contracting edges. 10.5 Let $G_i$ ($i = 1,2,\ldots$) be a countable sequence of directed graphs. Then there exist $j > i \geq 1$ such that $G_i$ is isomorphic to a minor of $G_j$. I haven't tried to understand the proof and I don't plan to try anytime soon. But I'm pretty sure that the definition of digraph minor used there is identical to your definition, and that the statement is exactly the theorem you asked for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/109121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
What is a Gauss sign? I am reading the paper "A Method for Extraction of Bronchus Regions from 3D Chest X-ray CT Images by Analyzing Structural Features of the Bronchus" by Takayuki KITASAKA, Kensaku MORI, Jun-ichi HASEGAWA and Jun-ichiro TORIWAKI and I run into a term I do not understand: In equation (2), when we say "[] expresses the Gauss sign", what does it mean?
From the context (a change of scale using discrete units), this should certainly mean floor, as on page 5 of Gauss's Werke 2: per signum $[x]$ exprimemus integrum ipsa $x$ proxime minorem, ita ut $x-[x]$ semper fiat quantitas positiva intra limites $0$ et $1$ sita ("by the sign $[x]$ we express the integer next below $x$, so that $x-[x]$ is always a positive quantity lying between the limits $0$ and $1$"), i.e. the next lower integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/109179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Positive semi-definite matrix Suppose a square symmetric matrix $V$ is given $V=\left(\begin{array}{ccccc} \sum w_{1s} & & & & \\ & \ddots & & -w_{ij} \\ & & \ddots & & \\ & -w_{ij} & & \ddots & \\ & & & & \sum w_{ns} \end{array}\right) \in\mathbb{R}^{n\times n},$ with values $w_{ij}> 0$, hence with only positive diagonal entries. Since the above matrix is diagonally dominant, it is positive semi-definite. However, I wonder if it can be proved that $a\cdot diag(V)-V~~~~~a\in[1, 2]$ is also positive semi-definite. ($diag(V)$ denotes a diagonal matrix whose entries are those of $V$, hence all positive) In case of $a=2$, the resulting $2\cdot diag(V)-V$ is also diagonally dominant (positive semi-definite), but is it possible to prove it for $a\in[1,2]$? ......................................... Note that the above proof would facilitate my actual problem; is it possible to prove $tr[(X-Y)^T[a\cdot diag(V)-V](X-Y)]\geq 0$, where $tr(\cdot)$ denotes matrix trace, for $X, Y\in\mathbb{R}^{n\times 2}$ and $a\in[1,2]$? Also note that $tr(Y^TVY)\geq tr(X^TVX)$ and $tr(Y^Tdiag(V)Y)\geq tr(X^Tdiag(V)X)$. (if that facilitates the question, assume $a=1$) ..................................................... Since the positive semi-definiteness could not generally be guaranteed for $a<2$, the problem becomes: for which restrictions on $a$ does the positive semi-definiteness of $a\cdot diag(V)-V$ still hold? Note the comment from DavideGiraudo, and his claim for the case $w_{ij}=1$, for all $i,j$. Could something similar be derived for general $w_{ij}\ge 0$?
Claim: For a symmetric real matrix $A$, $tr(X^TAX)\ge 0$ for all $X$ if and only if $A$ is positive semidefinite. Indeed, taking $X$ to be a single column vector $x$ gives $x^TAx\ge 0$; conversely, $tr(X^TAX)=\sum_i x_i^TAx_i$, summing over the columns $x_i$ of $X$, which is nonnegative when $A$ is positive semidefinite.
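A quick numerical probe of the original question (a sketch; it assumes $V$ is the weighted-Laplacian-type matrix described there, with random positive weights):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n))
W = (W + W.T) / 2               # symmetric positive weights w_ij
np.fill_diagonal(W, 0)
V = np.diag(W.sum(axis=1)) - W  # Laplacian-style matrix from the question

for a in (1.0, 1.5, 2.0):
    M = a * np.diag(np.diag(V)) - V
    # For a = 1, M has zero diagonal and nonzero entries, so its trace is 0
    # and some eigenvalue must be negative; a = 2 is diagonally dominant.
    print(a, np.linalg.eigvalsh(M).min())
```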
{ "language": "en", "url": "https://math.stackexchange.com/questions/109231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The set of limit points of an unbounded set of ordinals is closed unbounded. Let $\kappa$ be a regular, uncountable cardinal. Let $A$ be an unbounded subset of $\kappa$, i.e. $\operatorname{sup}A=\kappa$. Let $C$ denote the set of limit points $< \kappa$ of $A$, i.e. the non-zero limit ordinals $\alpha < \kappa$ such that $\operatorname{sup}(A \cap \alpha) = \alpha$. How can I show that $C$ is unbounded? I cannot even show that $C$ has any points let alone that it's unbounded. (Jech page 92) Thanks for any help.
Fix $\xi\in \kappa$; since $A$ is unbounded there is an $\alpha_0\in A$ with $\xi<\alpha_0$. Now construct recursively a strictly increasing sequence $\langle \alpha_n: n\in \omega\rangle$ of elements of $A$: given $\alpha_n$, unboundedness of $A$ yields some $\alpha_{n+1}\in A$ with $\alpha_{n+1}>\alpha_n$. Let $\alpha=\sup\{\alpha_n: n\in \omega\}.$ Since $\kappa$ is regular and uncountable, we have $\alpha<\kappa.$ It is also easy to see that $\sup(A\cap\alpha)=\alpha$, so $\alpha\in C$ and $\alpha>\xi$; hence $C$ is unbounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/109292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solving modular equations Is there a procedure to solve this or is it strictly by trial and error? $5^x \equiv 5^y \pmod {39}$ where $y > x$. Thanks.
Hint: since $39 = 3\cdot 13$ we can compute the order of $5\ ({\rm mod}\ 39)$ from its order mod $3$ and mod $13$. First, mod $13\!:\ 5^2\equiv -1\ \Rightarrow\ 5^4\equiv 1;\ \ $ Second, mod $3\!:\ 5\equiv -1\ \Rightarrow\ 5^2\equiv 1\ \Rightarrow\ 5^4\equiv 1 $. Thus $\:3,13\ |\ 5^4-1\ \Rightarrow\ {\rm lcm}(3,13)\ |\ 5^4-1,\:$ i.e. $\:39\ |\ 5^4-1,\:$ i.e. $\:5^4\equiv 1\pmod {39}$
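A one-minute computational check of the conclusion (a sketch; the helper function is ad hoc):

```python
def mult_order(a, m):
    """Smallest k >= 1 with a**k % m == 1; assumes gcd(a, m) == 1."""
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

print(pow(5, 4, 39))      # 1
print(mult_order(5, 39))  # 4, so 5^x = 5^y (mod 39) exactly when x = y (mod 4)
```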
{ "language": "en", "url": "https://math.stackexchange.com/questions/109358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proof that $\int_1^x \frac{1}{t} dt$ is $\ln(x)$ A logarithm of base b for x is defined as the number u such that $b^u=x$. Thus, the logarithm with base $e$ gives us a $u$ such that $e^u=x$. In the presentations that I have come across, the author starts with the fundamental property $f(xy) = f(x)+f(y)$ and goes on to construct the natural logarithm as $\ln(x) = \int_1^x \frac{1}{t} dt$. It would be surprising if these two definitions ended up the same, as is the case. How do we know that they are? The best that I can think of is that they share the property $f(xy) = f(x)+f(y)$, and coincide at certain obvious values (x=0, x=1). This seems weak. Is there a proof?
The following properties uniquely determine the natural log: 1) $f(1) = 0$. 2) $f$ is continuous and differentiable on $(0, \infty)$ with $f'(x) = \frac{1}{x}$. 3) $f(xy) = f(x) + f(y)$ We will show that the function $f(x) = \int_1^x \frac{1}{t} dt$ obeys properties 1,2, and 3, and is thus the natural log. 1) This is easy, since $f(1) = \int_1^1 \frac{1}{t} dt = 0$. 2) Defining $f(x) = \int_1^x \frac{1}{t} dt$, we note that since $\frac{1}{t}$ is continuous on any interval of the form $[a,b]$, where $0 < a \leq b$, then the Fundamental Theorem of Calculus tells us that $f(x)$ is (continuous and) differentiable with $f'(x) = \frac{1}{x}$ for all $x \in [a,b]$. 3) $$\begin{align} f(xy) = \int_1^{xy} \frac{1}{t}dt &= \int_1^x \frac{1}{t} dt + \int_x^{xy} \frac{1}{t} dt \\ &= f(x) + \int_{1}^{y} \frac{1}{u} du \\ &= f(x) + f(y) \end{align}$$ where in the last step we perform the substitution $t = ux$ (viewing $x$ as constant).
{ "language": "en", "url": "https://math.stackexchange.com/questions/109483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 2 }
How to calculate this limit? Doesn't seem to be difficult, but still can't get it. $\displaystyle\lim_{x\rightarrow 0}\frac{10^{-x}-1}{x}=-\ln10$
It's worth noticing that we can define the natural logarithm as $$\log x = \lim_{h \to 0} \frac{x^h-1}{h}$$ So in your case you have $$ \lim_{h \to 0} \frac{10^{-h}-1}{h}= \lim_{h \to 0} \frac{\left(\frac{1}{10}\right)^{h}-1}{h}=-\log10$$ This result holds because the limit $$ \lim_{h \to 0} \frac{x^h-1}{h} $$ is of the indeterminate form $\frac{0}{0}$, so we can apply L'Hôpital's rule, differentiating with respect to $h$, to get $$ \lim_{h \to 0} \frac{x^h-1}{h} =\lim_{h \to 0} x^h \log x = \log x $$ Obviously this is done by knowing how to handle the derivative of an exponential function with arbitrary base $x$, so you could've also solved your problem by noticing the expression is a derivative, as other answers/comments suggest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/109549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Rate of convergence of a sequence in $\mathbb{R}$ and Big O notation From Wikipedia: $f(x) = O(g(x))$ if and only if there exists a positive real number $M$ and a real number $x_0$ such that $|f(x)| \le \; M |g(x)|\mbox{ for all }x>x_0$. Also from Wikipedia: Suppose that the sequence $\{x_k\}$ converges to the number $L$. We say that this sequence converges linearly to $L$, if there exists a number $\mu \in (0, 1)$ such that $\lim_{k\to \infty} \frac{|x_{k+1}-L|}{|x_k-L|} = \mu$. If the sequence converges, and 1) $\mu = 0$, then the sequence is said to converge superlinearly; 2) $\mu = 1$, then the sequence is said to converge sublinearly. I was wondering: 1) Is it true that $\{x_n\}$ converges to $L$ either linearly, superlinearly or sublinearly only if $|x_{n+1}-L| = O(|x_n-L|)$? This is based on what I have understood from their definitions and viewing $\{ x_{n+1}-L \}$ and $\{ x_n-L \}$ as functions of $n$. Note that "only if" here means "if" may not be true, since $\mu$ may lie outside of $[0,1]$ and $\{x_n\}$ may not converge. 2) Some optimization book says that the steepest descent algorithm has linear rate of convergence, and writes $|x_{n+1}-L| = O(|x_n-L|)$. Is the usage of big O notation here expanding the meaning of linear rate of convergence? Thanks and regards!
To answer your added question, from the definition, $x_n$ converges to $L$ if and only if $|x_n-L| \to 0$ as $n \to \infty$. The existence of a positive c such that $c < 1$ and $|x_{n+1}-L| \le c|x_n-L|$ is sufficient for convergence, but not necessary. For example, if $x_n = 1/(\ln n)$, then $x_n \to 0$, but there is no $c < 1$ such that $x_{n+1} < c x_n$ for all large enough n. It can be shown that there is no slowest rate of convergence - for any rate of convergence, a slower one can be constructed. This is sort of the inverse of constructing arbitrarily fast growing functions and can lead to many interesting places.
{ "language": "en", "url": "https://math.stackexchange.com/questions/109686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The sum of the coefficients of $x^3$ in $(1-\frac{x}{2}+\frac{1}{\sqrt x})^8$ I know how to solve such questions when it's like $(x+y)^n$ but I'm not sure about this one: In $(1-\frac{x}{2}+\frac{1}{\sqrt x})^8$, what's the sum of the coefficients of $x^3$?
You can just multiply it out. Alternatively, you can reason that the terms are of the form $1^a(\frac x2)^b(\frac 1{\sqrt x})^c$ with $a+b+c=8, b-\frac c2=3$. Then $c=2b-6$, so $a+3b=14$ and $a$ needs to be $2 \text{ or } 5$. Then you need the multinomial coefficient as stated by Suresh.
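For reference, carrying out the multinomial computation by hand gives $56\cdot(-\tfrac12)^3 + 420\cdot(-\tfrac12)^4 = -7 + \tfrac{105}{4} = \tfrac{77}{4}$. A symbolic check with SymPy (a sketch; the substitution $x=t^2$ keeps all exponents integral):

```python
import sympy as sp

t = sp.symbols('t', positive=True)         # substitute x = t**2
expr = sp.expand((1 - t**2 / 2 + 1 / t)**8)
print(expr.coeff(t, 6))                    # coefficient of x^3 = t^6 -> 77/4
```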
{ "language": "en", "url": "https://math.stackexchange.com/questions/109748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
How can we find the values that a (divergent!) series tends to? Suppose we are given a series that diverges. That's right, diverges. We may interest ourselves in the limiting function(s) of its behavior. For instance, given the power series: $$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \dots$$ I am interested in finding the sum of the coefficients for $x^n$ and $x^{n+1}$ as $n$ approaches infinity. This should be fairly obvious as to what it is, but is there some way that we could do this for a general alternating series that essentially converges to two values, or more importantly, for 3 or more values (i.e. not exactly an alternating series, but one that cycles through a set of values at infinity)? MY IDEAS I guess that this can somehow be accomplished similarly to finding the limiting behavior for a convergent series. I thought that I knew how to do this, but I forgot what I thought was an easy way. MY GOAL I really would like to know if it's possible to apply a function to the values at the limit. If it is, of course, I'd like to know how to do this. This may allow us to determine what the values actually are by using multiple functions.
Note that the usual definition of the infinite sum is a very different kind of thing from an ordinary finite sum. It introduces ideas from topology, analysis or metric spaces which aren't present in the original definition of sum. So when generalising from finite sums to infinite sums we have quite a bit of choice in how these concepts are introduced and it's something of a prejudice to call the standard method from analysis the sum. There are quite a few different approaches to summing infinite series, many of which give finite values in places where the usual summation method fails. Examples are Abel summation, Borel summation and Cesàro summation. The mathematician Hardy wrote an entire book on techniques to sum divergent series. These alternative summation methods aren't just a mathematical curiosity. They play a role in physics where they can help sum some of the series that arise from sums over Feynman diagrams. These results often agree with results computed by more orthodox methods.
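As a concrete illustration (a sketch using Grandi's series $1-1+1-1+\cdots$, whose Cesàro sum is $\frac12$ even though the ordinary sum diverges):

```python
# Cesàro summation: average the partial sums instead of taking their limit.
N = 100000
partial_sums, s = [], 0
for n in range(N):
    s += (-1) ** n            # partial sums alternate 1, 0, 1, 0, ...
    partial_sums.append(s)

print(sum(partial_sums) / N)  # -> 0.5
```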
{ "language": "en", "url": "https://math.stackexchange.com/questions/109815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Showing $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$ Given that n is a positive integer show that $\gcd(n^3 + 1, n^2 + 2) = 1$, $3$, or $9$. I'm thinking that I should be using the property of gcd that says if a and b are integers then gcd(a,b) = gcd(a+cb,b). So I can do things like decide that $\gcd(n^3 + 1, n^2 + 2) = \gcd((n^3+1) - n(n^2+2),n^2+2) = \gcd(1-2n,n^2+2)$ and then using Bezout's theorem I can get $\gcd(1-2n,n^2+2)= r(1-2n) + s(n^2 +2)$ and I can expand this to $r(1-2n) + s(n^2 +2) = r - 2rn + sn^2 + 2s$ However after some time of chasing this path using various substitutions and factorings I've gotten nowhere. Can anybody provide a hint as to how I should be looking at this problem?
Let $\:\rm d = (n^3+1,\:n^2+2).\:$ Observe that $\rm \ d \in \{1,\:3,\:9\} \iff\ d\:|\:9\iff 9\equiv 0\pmod d\:.$ mod $\rm (n^3\!-a,n^2\!-b)\!:\ a^2 \equiv n^6 \equiv b^3\:$ so $\rm\:a=-1,\:b = -2\:\Rightarrow 1\equiv -8\:\Rightarrow\: 9\equiv 0\:. \ \ $ QED Or, if you don't know congruence arithmetic, since $\rm\: x-y\:$ divides $\rm\: x^2-y^2$ and $\rm\: x^3-y^3$ $\rm n^3-a\ |\ n^6-a^2,\:\ n^2-b\ |\ n^6-b^3\ \Rightarrow\ (n^3-a,n^2-b)\ |\ n^6-b^3-(n^6-a^2) = a^2-b^3 $ Note how much simpler the proof is using congruences vs. divisibility relations on binomials. Similar congruential proofs arise when computing modulo ideals generated by binomials.
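An empirical check (a sketch) that only the values $1$, $3$, $9$ occur:

```python
from math import gcd

values = {gcd(n**3 + 1, n**2 + 2) for n in range(1, 100000)}
print(sorted(values))  # [1, 3, 9]
```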
{ "language": "en", "url": "https://math.stackexchange.com/questions/109876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 2 }
Is the supremum of an ordinal the next ordinal? I apologize for this naive question. Let $\eta$ be an ordinal. Isn't the supremum of $\eta$ just $\eta+1$? If this is true, the supremum is only necessary if you consider sets of ordinals.
There are two notions of supremum for a set of ordinals; let $A$ be a set of ordinals: 1) $\sup^+(A)=\sup\{\alpha+1\mid\alpha\in A\}$, and 2) $\sup(A) =\sup\{\alpha\mid\alpha\in A\}$. Since most of the time we care about suprema below limit ordinals (e.g. $A=\omega$) the notions coincide. If $A=\{\alpha\}$ then indeed $\sup(A)=\alpha$ and $\sup^+(A)=\alpha+1$. The reason there are two notions is that $\sup^+(A)$ is defined as $\min\{\alpha\in\mathrm{Ord}\mid A\subseteq\alpha\}$ and $\sup(A)=\bigcup A$. Both of these notions are useful, and it is easy to see that if $A$ has no maximal element then these indeed coincide. However, the distinction can be useful from time to time, e.g. when $A$ has a maximal element, such as a set of successor ordinals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/109954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Sum of alternating reciprocals of logarithm of 2,3,4... How to determine convergence/divergence of this sum? $$\sum_{n=2}^\infty \frac{(-1)^n}{\ln(n)}$$ Why can't we conclude that the sum $\sum_{k=2}^\infty (-1)^k\frac{k}{p_k}$, with $p_k$ the $k$-th prime, converges, since $p_k \sim k \cdot \ln(k)$?
You are correct. The alternating series test suffices; no need to look at the Dirichlet test. Think about it: the sequence satisfies $|a_n|\rightarrow 0$ as $n\rightarrow\infty$ and decreases monotonically ($a_n>a_{n+1}$). That means that you add and remove terms that shrink to $0$. What you add you remove partially in the next term, and the terms shrink to 0. Eventually your sum converges to a number, perhaps extremely slowly, but it converges. If the terms didn't shrink to 0 but to a value $c$, i.e. $a_n\rightarrow c$, then the partial sums would keep oscillating by about $\pm c$ around some middle value, hence not converge. We know this is not the case for $a_n=\frac{1}{p_n}$ (since there are infinitely many primes) and $a_n=\frac{1}{\log{n}}$: clearly $a_n$ tends to $0$ in both cases. For $a_n=\frac{n}{p_n}$, you are correct to assume that $\frac{p_n}{n}=\mathcal{O}\left(\log{n}\right)$. This means that $a_n=\frac{n}{p_n}=\mathcal{O}\left(\frac{1}{\log{n}}\right)\rightarrow 0$. From "The $k$th prime is greater than $k(\log k + \log \log k-1)$ for $k \ge 2$", $\frac{p_n}{n}=\frac{1}{a_n}$ is stuck between $\log n+\log\log n-1<\frac{p_n}{n}<\log{n}+\log\log n,$ so $\frac{1}{\log n+\log\log n}<\frac{n}{p_n}<\frac{1}{\log{n}+\log\log n-1}.$ From the above inequality, we clearly see that the value $a_n=\frac{n}{p_n}$ is squeezed to $0$ by bounds that vanish at infinity. The sum converges so slowly that you may think that it oscillates around a value $\pm c$, but this is not the case. Similarly you can show that $\sum_n \frac{(-1)^nn^{s}}{p_n }$ converges for $s\leq 1$, but will diverge for $Re(s)>1$. And you have $\sum_n \frac{n^{s}}{p_n }$ converge if $Re(s)<0$. You see that modulating the sequence $a_n=\frac{n^{s}}{p_n }$ with $(-1)^n$ moves the region of convergence from $Re(s)<0$ to $Re(s)\leq 1$. The sum $\sum_n \frac{(-1)^nn^{s}}{p_n }$ is not differentiable for $Re(s)=0$, therefore it should exhibit a fractal-like appearance on the imaginary line at the boundary. As it is for the Prime Zeta function, $Re(s)=0$ is a natural boundary for the derivative of $\sum_n \frac{(-1)^nn^{s}}{p_n }$. For the sum itself, it is a "softer" natural boundary!
{ "language": "en", "url": "https://math.stackexchange.com/questions/110009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Inverse function that takes connected set to non-connected set I've been struggling with providing examples of the following: 1) A continuous function $f$ and a connected set $E$ such that $f^{-1}(E)$ is not connected 2) A continuous function $g$ and a compact set $K$ such that $g^{-1}(K)$ is not compact
Take any space $X$ which is not connected and not compact. For example, you could think of $\mathbf R - \{0\}$. Map this to a topological space consisting of one point. [What properties does such a space have?]
{ "language": "en", "url": "https://math.stackexchange.com/questions/110069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
CLT for arithmetic mean of centred exponentially distributed RVs $X_1, X_2,\ldots$ are independent, exponentially distributed random variables with $m_k:=EX_k=\sqrt{2k}$, $v_k:=\operatorname{Var} X_k=2k$. I want to analyse the weak convergence of $Y_n=\dfrac{\sum_{i=1}^n (X_i-m_i)}{n}$. CLT: When the Lindeberg condition $b^{-2}_n\sum_{i=1}^n E(|X_i-m_i|^2\cdot 1_{|X_i-m_i|>\epsilon\cdot b_n}) \longrightarrow 0$, for $n\to\infty$ and all $\epsilon>0$, is fulfilled, I can state $\frac{\sum_{i=1}^n (X_i-m_i)}{b_n}\Rightarrow N(0,1)$, i.e. it converges weakly to a normally distributed random variable. Here $b^2_n=\sum_{i=1}^n \operatorname{Var} X_i=\sum_{i=1}^n 2i=n(n+1)$. So if the condition were fulfilled, we would have $$Y_n=\frac{\sum_{i=1}^n (X_i-m_i)}{n}=\sqrt{\frac{n+1}{n}}\frac{\sum_{i=1}^n (X_i-m_i)}{b_n},$$ where $$\frac{\sum_{i=1}^{n}(X_i-m_i)}{b_n}\Rightarrow N(0,1)\text{ and }\sqrt{\frac{n+1}{n}}=\sqrt{1+\frac{1}{n}}\to 1,$$ hence $$Y_n\Rightarrow N(0,1).$$ Now the problem is to prove the Lindeberg condition: $$E(|X_i-m_i|^2\cdot 1_{|X_i-m_i|>\epsilon\cdot b_n})=\int_{|x-m_i|>\epsilon\cdot b_n}(x-m_i)^2\frac{1}{\sqrt{2i}}e^{-\frac{x}{\sqrt{2i}}}\;dx.$$ But that's where it ends. Am I on the right track for solving this? Can I switch $\sum$ and $\int$ in the condition?
Let $x_{k,n}=\mathrm E((X_k-m_k)^2:|X_k-m_k|\geqslant\varepsilon b_n)$. Since $X_k/m_k$ follows the distribution of a standard exponential random variable $X$, $$ x_{k,n}=m_k^2\,\mathrm E((X-1)^2:(X-1)^2\geqslant\varepsilon^2 b_n^2/m_k^2). $$ Since $m_k\leqslant m_n$ for $k\leqslant n$, the truncation threshold is lowest for $k=n$; so, writing $y_n:=\mathrm E((X-1)^2:(X-1)^2\geqslant\varepsilon^2 b_n^2/m_n^2)$, we get $x_{k,n}\leqslant m_k^2\,y_n$ for every $k\leqslant n$ and $\sum\limits_{k=1}^nx_{k,n}\leqslant b_n^2\,y_n$, where $b_n^2=\sum\limits_{k=1}^nm_k^2$. If $y_n\to0$, this yields $\sum\limits_{k=1}^nx_{k,n}\ll b_n^2$, which is Lindeberg's condition. But $\mathrm E((X-1)^2:X\geqslant t)=(t^2+1)\mathrm e^{-t}$ for every $t\geqslant0$, hence $y_n=O\left((b_n/m_n)^2\,\mathrm e^{-\varepsilon b_n/m_n}\right)$; since $b_n/m_n\to\infty$, this is enough to prove that $y_n\to0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/110119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Irrationality of "primes coded in binary" For fun, I have been considering the number $$ \ell := \sum_{p} \frac{1}{2^p} $$ It is clear that the sum converges and hence $\ell$ is finite. $\ell$ also has the binary expansion $$ \ell = 0.01101010001\dots_2 $$ with a $1$ in the $p^{th}$ place and zeroes elsewhere. I have also computed a few terms (and with the help of Wolfram Alpha, Plouffe's Inverter, and this link from Plouffe's Inverter) I have found that $\ell$ has the decimal expansion $$ \ell = .4146825098511116602481096221543077083657742381379169778682454144\dots. $$ Based on the decimal expansion and the fact that $\ell$ can be well approximated by rationals, it seems exceedingly likely that $\ell$ is irrational. However, I have been unable to prove this. Question: Can anyone provide a proof that $\ell$ is irrational?
That $\ell$ is irrational is clear. There are arbitrarily large gaps between consecutive primes, so the binary expansion of $\ell$ cannot be periodic. Any rational has a periodic binary expansion. The fact that there are arbitrarily large gaps between consecutive primes comes from observing that if $n>1$, then all of $n!+2, n!+3, \dots, n!+n$ are composite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/110187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 1 }
Derivation of asymptotic solution of $\tan(x) = x$. An equation that seems to come up everywhere is the transcendental $\tan(x) = x$. Normally when it comes up you content yourself with a numerical solution usually using Newton's method. However, browsing today I found an asymptotic formula for the positive roots $x$: $x = q - q^{-1} - \frac23 q^{-3} + \cdots$ with $q = (n + 1/2) \pi$ for positive integers $n$. For instance here: http://mathworld.wolfram.com/TancFunction.html, and here: http://mathforum.org/kb/message.jspa?messageID=7014308 found from a comment here: Solution of tanx = x?. The Mathworld article says that you can derive this formula using series reversion, however I'm having difficulty figuring out exactly how to do it. Any help with a derivation would be much appreciated.
You may be interested in N. G. de Bruijn's book Asymptotic Methods in Analysis, which treats the equation $\cot x = x$. What follows is essentially a minor modification of that section in the book. The central tool we will use is the Lagrange inversion formula. The formula given in de Bruijn differs slightly from the one given on the wiki page so I'll reproduce it here. Lagrange Inversion Formula. Let the function $f(z)$ be analytic in some neighborhood of the point $z=0$ of the complex plane. Assuming that $f(0) \neq 0$, we consider the equation $$w = z/f(z),$$ where $z$ is the unknown. Then there exist positive numbers $a$ and $b$ such that for $|w| < a$ the equation has just one solution in the domain $|z| < b$, and this solution is an analytic function of $w$: $$z = \sum_{k=1}^{\infty} c_k w^k \hspace{1cm} (|w| < a),$$ where the coefficients $c_k$ are given by $$c_k = \frac{1}{k!} \left\{\left(\frac{d}{dz}\right)^{k-1} (f(z))^k\right\}_{z=0}.$$ Essentially what this says is that we can solve the equation $w = z/f(z)$ for $z$ as a power series in $w$ when $|w|$ and $|z|$ are small enough. Okay, on to the problem. We wish to solve the equation $$\tan x = x.$$ As with many asymptotics problems, we need a foothold to get ourselves going. Take a look at the graphs of $\tan x$ and $x$: We see that in each interval $\left(\pi n - \frac{\pi}{2}, \pi n + \frac{\pi}{2}\right)$ there is exactly one solution $x_n$ (i.e. $\tan x_n = x_n$), and, when $n$ is large, $x_n$ is approximately $\pi n + \frac{\pi}{2}$. But how do we show this second part? Since $\tan$ is $\pi$-periodic we have $$\tan\left(\pi n + \frac{\pi}{2} - x_n\right) = \tan\left(\frac{\pi}{2} - x_n\right)$$ $$\hspace{2.4 cm} = \frac{1}{\tan x_n}$$ $$\hspace{2.6 cm} = \frac{1}{x_n} \to 0$$ as $n \to \infty$, where the second-to-last equality follows from the identites $$\sin\left(\frac{\pi}{2} - \theta\right) = \cos \theta,$$ $$\cos\left(\frac{\pi}{2} - \theta\right) = \sin \theta.$$ Since $-\frac{\pi}{2} < \pi n + \frac{\pi}{2} - x_n < \frac{\pi}{2}$ and since $\tan$ is continuous in this interval we have $\pi n + \frac{\pi}{2} - x_n \to 0$ as $n \to \infty$. Thus we have shown that $x_n$ is approximately $\pi n + \frac{\pi}{2}$ for large $n$. Now we begin the process of putting the equation $\tan x = x$ into the form required by the Lagrange inversion formula. Set $$z = \pi n + \frac{\pi}{2} - x$$ and $$w = \left(\pi n + \frac{\pi}{2}\right)^{-1}.$$ Note that we do this because when $|w|$ is small (i.e. when $n$ is large) we may take $|z|$ small enough such that there will be only one $x$ (in the sense that $x = \pi n + \frac{\pi}{2} - z$) which satisfies $\tan x = x$. Plugging $x = w^{-1} - z$ into the equation $\tan x = x$ yields, after some simplifications along the lines of those already discussed, $$\cot z = w^{-1} - z,$$ which rearranges to $$w = \frac{\sin z}{\cos z + z\sin z} = z/f(z),$$ where $$f(z) = \frac{z(\cos z + z\sin z)}{\sin z}.$$ Here note that $f(0) = 1$ and that $f$ is analytic at $z = 0$. We have just satisfied the requirements of the inversion formula, so we may conclude that we can solve $w = z/f(z)$ for $z$ as a power series in $w$ in the form given earlier in the post. We have $c_1 = 1$ and, since $f$ is even, it can be shown that $c_{2k} = 0$ for all $k$. 
Calculating the first few coefficients in Mathematica gives $$z = w + \frac{2}{3}w^3 + \frac{13}{15}w^5 + \frac{146}{105}w^7 + \frac{781}{315}w^9 + \frac{16328}{3465}w^{11} + \cdots.$$ Substituting this into $x = w^{-1} - z$ and using $w = \left(\pi n + \frac{\pi}{2}\right)^{-1}$ gives the desired series for $x_n$ when $n$ is large enough: $$x_n = \pi n + \frac{\pi}{2} - \left(\pi n + \frac{\pi}{2}\right)^{-1} - \frac{2}{3}\left(\pi n + \frac{\pi}{2}\right)^{-3} - \frac{13}{15}\left(\pi n + \frac{\pi}{2}\right)^{-5} - \frac{146}{105}\left(\pi n + \frac{\pi}{2}\right)^{-7} - \frac{781}{315}\left(\pi n + \frac{\pi}{2}\right)^{-9} - \frac{16328}{3465}\left(\pi n + \frac{\pi}{2}\right)^{-11} + \cdots$$
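A quick numerical comparison (a sketch assuming SciPy is available) between the true root in each interval and the truncated asymptotic series:

```python
import numpy as np
from scipy.optimize import brentq

COEFFS = (1.0, 2/3, 13/15, 146/105, 781/315, 16328/3465)

def series_root(n):
    q = (n + 0.5) * np.pi
    return q - sum(c * q ** (-(2 * k + 1)) for k, c in enumerate(COEFFS))

for n in (1, 5, 20):
    q = (n + 0.5) * np.pi
    # The root x_n lies just below the pole of tan at q.
    root = brentq(lambda x: np.tan(x) - x, q - 1.0, q - 1e-9)
    print(n, root, series_root(n))
```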
{ "language": "en", "url": "https://math.stackexchange.com/questions/110256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 1, "answer_id": 0 }
Do the algebraic and geometric multiplicities determine the minimal polynomial? Let $T$ denote some linear transformation of a finite-dimensional space $V$ (say, over $\mathbb{C}$). Suppose we know the eigenvalues $\{\lambda_i\}_i$ and their associated algebraic multiplicities $\{d_i\}_i$ and geometric multiplicities $\{r_i\}_i$ of $T$; can we determine the minimal polynomial of $T$ from this information? If the answer is no, is there a nice way to produce different linear transformations with the same eigenvalues and associated algebraic and geometric multiplicities? Some background: It is well-known that for a given linear transformation, the minimal polynomial divides the characteristic polynomial: $m_T|p_T$. And I found in a paper a proof that $$m_T|\prod_i(x-\lambda_i)^{d_i-r_i+1}\ ,\ \ \ \ p_T|m_T\prod_i(x-\lambda_i)^{r_i}$$ And then I want to know if there are any better results.
No, the algebraic and geometric multiplicities do not determine the minimal polynomial. Here is a counterexample: Consider the Jordan matrices $J_1, J_2$: $$J_1 = \left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right) ~~ J_2 = \left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right) $$ both have only one eigenvalue, namely 1, so it has algebraic multiplicity 4 in each case. They also both have geometric multiplicity 2, since there are 2 Jordan blocks in both matrices (check the Wikipedia article on Jordan normal form for more information). However, they have different minimal polynomials: $$\begin{align} m_{J_1}(x) = (x - 1)^2 \\ m_{J_2}(x) = (x - 1)^3 \end{align}$$ so the algebraic and geometric multiplicities do not determine the minimal polynomial.
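A small numerical verification (a sketch) that $(J_1-I)^2=0$ while $(J_2-I)^2\neq 0$:

```python
import numpy as np

J1 = np.array([[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1], [0, 0, 0, 1]], float)
J2 = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)
I = np.eye(4)

for name, J in (("J1", J1), ("J2", J2)):
    N = J - I
    nil = [np.allclose(np.linalg.matrix_power(N, k), 0) for k in (1, 2, 3)]
    print(name, nil)  # J1: [False, True, True]; J2: [False, False, True]
```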
{ "language": "en", "url": "https://math.stackexchange.com/questions/110307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Number of Solutions of $3\cos^2(x)+\cos(x)-2=0$ I'm trying to figure out how many solutions there are for $$3\cos^2(x)+\cos(x)-2=0.$$ I can come up with at least two solutions I believe are correct, but I'm not sure if there is a third.
Unfortunately, there are infinitely many solutions. Note, for example, that $\pi$ is a solution. Then we also have that $\pi + 2k \pi$ is a solution for all $k$. To see where the solutions come from, treat the equation as a quadratic in $\cos x$: $3\cos^2 x+\cos x-2=(3\cos x-2)(\cos x+1)=0$, so $\cos x = \frac{2}{3}$ (two solutions per period) or $\cos x = -1$ (one solution per period). But between $0$ and $2\pi$, there are 3 solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/110386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Closed form for $\int_0^\infty \frac{x^n}{1 + x^m}\,dx$ I've been looking at $$\int_0^\infty \frac{x^n}{1 + x^m}\,dx$$ It seems that it always evaluates in terms of $\sin X$ and $\pi$, where $X$ is to be determined. For example: $$\int_0^\infty \frac{x}{1 + x^3}\,dx = \frac{\pi}{3}\frac{1}{\sin \frac{\pi}{3}} = \frac{2\pi}{3\sqrt 3}$$ $$\int_0^\infty \frac{x}{1 + x^4}\,dx = \frac{\pi}{4}$$ $$\int_0^\infty \frac{x^2}{1 + x^5}\,dx = \frac{\pi}{5}\frac{1}{\sin \frac{2\pi}{5}}$$ So I guess there must be a closed form - the use of $\Gamma(x)\Gamma(1-x)$ first comes to my mind because of the $\dfrac{\pi}{\sin \pi x}$ appearing. Note that the arguments are always the ratio of the exponents, like $\dfrac{1}{4}$, $\dfrac{1}{3}$ and $\dfrac{2}{5}$. Is there any way of finding it? I'll work on it and update with any ideas. UPDATE: The integral reduces to finding $$\int_{-\infty}^\infty \frac{e^{a t}}{e^t + 1}\,dt$$ with $a =\dfrac{n+1}{m}$, which converges only if $0 < a < 1$. Using series I find the solution is $$\sum_{k = -\infty}^\infty \frac{(-1)^k}{a + k}$$ Can this be put in terms of the Digamma Function or something of the sort?
The general formula (for $m > n+1$ and $n \ge 0$) is $\frac{\pi}{m} \csc\left(\frac{\pi (n+1)}{m}\right)$. IIRC the usual method involves a wedge-shaped contour of angle $2 \pi/m$. EDIT: Consider $\oint_\Gamma f(z)\ dz$ where $f(z) = \frac{z^n}{1+z^m}$ (using the principal branch if $m$ or $n$ is a non-integer) and $\Gamma$ is the closed contour below: $\Gamma_1$ goes to the right along the real axis from $\epsilon$ to $R$, so $\int_{\Gamma_1} f(z)\ dz = \int_\epsilon^R \frac{x^n\ dx}{1+x^m}$. $\Gamma_3$ comes in along the ray at angle $2 \pi/m$. Since $e^{(2 \pi i/m) m} = 1$, $\int_{\Gamma_3} f(z)\ dz = - e^{2 \pi i (n+1)/m} \int_{\Gamma_1} f(z)\ dz$. $\Gamma_2$ is a circular arc at distance $R$ from the origin. Since $m > n+1$, the integral over it goes to $0$ as $R \to \infty$. Similarly, the integral over the small circular arc at distance $\epsilon$ goes to $0$ as $\epsilon \to 0$. So we get $$ \lim_{R \to \infty, \epsilon \to 0} \int_\Gamma f(z)\ dz = (1 - e^{2 \pi i (n+1)/m}) \int_0^\infty \frac{x^n\ dx}{1+x^m}$$ The meromorphic function $f(z)$ has one singularity inside $\Gamma$, a pole at $z = e^{\pi i/m}$ where the residue is $- e^{\pi i (n+1)/m}/m$. So the residue theorem gives you $$ \int_0^\infty \frac{x^n\ dx}{1+x^m} = \frac{- 2 \pi i e^{\pi i (n+1)/m}}{ m (1 - e^{2 \pi i (n+1)/m})} = \frac{\pi}{m} \csc\left(\frac{\pi(n+1)}{m}\right)$$
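The closed form can be sanity-checked numerically (a sketch assuming SciPy is available):

```python
from math import pi, sin
from scipy.integrate import quad

def closed_form(n, m):
    return pi / (m * sin(pi * (n + 1) / m))

for n, m in [(1, 3), (1, 4), (2, 5)]:
    val, err = quad(lambda x, n=n, m=m: x**n / (1 + x**m), 0, float("inf"))
    print((n, m), val, closed_form(n, m))
```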
{ "language": "en", "url": "https://math.stackexchange.com/questions/110457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109", "answer_count": 11, "answer_id": 3 }
Calculating the percentage difference of two numbers The basic problem is this: "I have this number x and I ask you to give me another number y. If the number you give me is some percentage c different than my number then I do not want it." Given that you will know x and c, how do you calculate whether or not I should take y? The naive approach I came up with is to just divide y / x < c but this fails for obvious reason (try y bigger than x). The next approach is that the percentage difference is really just a ratio of the smaller number divided by the larger number. So therefore we could try min(x, y) / max(x, y) < c. However this does not work, here is an example: x = 1.2129 y = 1.81935 c = 50% If we do the above we get 1.2129 / 1.81935 = 0.67 which is greater than 0.50. The problem here is that I obtained y by multiplying 1.2129 by 1.5, therefore y is only 50% greater than x. Why? I still don't understand why the above formula doesn't work. Eventually through some googling I stumbled across the percentage difference formula but even this doesn't suit my needs. It is abs(x - y) / ((x + y) / 2). However, this does not yield the result I am looking for. abs(x - y) = abs(1.2129 - 1.81935 ) = 0.60645. (x + y) / 2 = 3.03225 / 2 = 1.516125 0.60645 / 1.516125 = 0.4 Eventually I ended up writing some code to evaluate x * c < y < x * (1 + c). As the basic idea is that we don't want any y that is 50% less than my number, nor do we want any number that is 50% greater than my number. Could someone please help me identify what I'm missing here? It seems like there ought to be another way that you can calculate the percentage difference of two arbitrary numbers and then compare it to c.
What you're missing is what you want. The difference between your two numbers is clearly $|x-y|$, but the "percentage" depends on how you want to write $|x-y|/denominator$. You could choose for a denominator $|x|$, $|x+y|$, $\max \{x,y\}$, $\sqrt{x^2 + y^2}$, for all I care, it's just a question of choice. Personally I'd rather use $|x|$ as a denominator, but that's just because I think it'll fit for this problem ; if this is not the solution to your problem, then choose something else. That is because when you say that you want the difference between $x$ and $y$ to be $c$% or less than your number $x$, for me it means that $$ |x-y| < \frac c{100} |x| \qquad \Longleftrightarrow \qquad \frac{|x-y|}{|x|} < \frac{c}{100} $$ so that choosing $|x|$ as a denominator makes most sense. Hope that helps,
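In code, the suggested convention is a one-liner (a sketch; the function name and the strict-inequality convention are mine, not the poster's):

```python
def acceptable(x, y, c):
    """True when y differs from x by less than c percent of |x|."""
    return abs(x - y) < (c / 100.0) * abs(x)

# y = 1.81935 is exactly 50% above x = 1.2129, so it is rejected:
print(acceptable(1.2129, 1.81935, 50))  # False
```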
{ "language": "en", "url": "https://math.stackexchange.com/questions/110503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Combinatorics-N boys and M girls are learning acting skills from a theatre in Mumbai. N boys and M girls are learning acting skills from a theatre in Mumbai. To perform a play on ‘Ramayana’ they need to form a group of P actors containing not less than 4 boys and not less than 1 girl. The theatre requires you to write a program that tells them the number of ways the group can be formed.
Assume that $M \ge 1$, $N\ge 4$, and $P\ge 5$. In how many ways can we choose $P$ people, with no sex restrictions? We are choosing $P$ people from $M+N$. The number of choices is $$\tbinom{M+N}{P}.$$ However, choosing $0$, $1$, $2$, or $3$ boys is forbidden. In how many ways can we choose $0$ boys, and therefore $P$ girls? The answer is $\binom{M}{P}$. For reasons that will be apparent soon, we write that as $\binom{N}{0}\binom{M}{P}$. There are $\binom{N}{1}\binom{M}{P-1}$ ways to choose $1$ boy and $P-1\,$ girls. There are $\binom{N}{2}\binom{M}{P-2}$ ways to choose $2$ boys and $P-2\,$ girls. There are $\binom{N}{3}\binom{M}{P-3}$ ways to choose $3$ boys and $P-3\,$ girls. We need to deal with a small technical complication. Suppose, for example, that we are choosing $0$ boys and therefore $P$ girls, but the number $M$ of available girls is less than $P$. The answer above remains correct if we use the common convention that $\binom{m}{n}=0$ if $m<n$. Finally, at least $1$ girl must be chosen. So the $\binom{N}{P}\binom{M}{0}$ all-boy choices are forbidden. The total number of forbidden choices is therefore $$\tbinom{N}{0}\tbinom{M}{P}+\tbinom{N}{1}\tbinom{M}{P-1}+\tbinom{N}{2}\tbinom{M}{P-2} +\tbinom{N}{3}\tbinom{M}{P-3}+\tbinom{N}{P}\tbinom{M}{0}.$$ Finally, subtract the above number from the total number $\binom{M+N}{P}$ of choices with no sex restrictions.
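A direct translation into code (a sketch; it relies on Python's math.comb returning 0 when the lower index exceeds the upper, matching the convention in the answer, and assumes $P\ge 5$):

```python
from math import comb

def group_count(N, M, P):
    """Groups of P actors with at least 4 of the N boys and 1 of the M girls."""
    total = comb(N + M, P)
    too_few_boys = sum(comb(N, b) * comb(M, P - b) for b in range(4))
    all_boys = comb(N, P)
    return total - too_few_boys - all_boys

print(group_count(5, 3, 5))  # 15 = C(5,4)*C(3,1): 4 boys and 1 girl
```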
{ "language": "en", "url": "https://math.stackexchange.com/questions/110560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\int_0^\infty e^{-x^n}\,\mathrm{d}x$ Is there a general approach to evaluating definite integrals of the form $\int_0^\infty e^{-x^n}\,\mathrm{d}x$ for arbitrary $n\in\mathbb{Z}$? I imagine these lack analytic solutions, so some sort of approximation is presumably required. Any pointers are welcome.
For $n=0$ the integral is divergent, and if $n<0$ then $\lim_{x\to+\infty}e^{-x^n}=1$, so the integral is not convergent. For $n>0$ we make the substitution $t:=x^n$; then $$I_n:=\int_0^{+\infty}e^{-x^n}dx=\int_0^{+\infty}e^{-t}t^{\frac 1n-1}\frac 1ndt=\frac 1n\Gamma\left(\frac 1n\right),$$ where $\Gamma(\cdot)$ is the usual Gamma function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/110602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Simple algebra. How can you factor out a common factor if it is $\frac{1}{\text{factor}}$ in one of the cases? I'm sure this is simple. I want to pull out a factor as follows... I have the expression $$\frac{a(\sqrt{x}) - (b + c)(\frac{1}{\sqrt{x}})c}{x}.$$ It would be useful for me to pull out the $\sqrt{x}$ from the numerator and try to simplify to remove the denominator, but how can I pull out the $\sqrt{x}$ from the right-most term $\frac{1}{\sqrt{x}}$? Thanks for your help!
$$ \frac{a(\sqrt{x}) - (b + c)({\frac{\sqrt{x}}{x}})c}{x}\;\tag{1}$$ $$=\frac{a(\color{red}{\sqrt{x}}) - (b + c)({\frac{\color{red}{\sqrt{x}}}{x}})c}{\color{red}{\sqrt x}\sqrt x}\;\tag{2}$$ $$=\frac{(\color{red}{\sqrt x})[a - (b + c)({\frac{1}{x}})c]}{\color{red}{\sqrt x}\sqrt x}\;\tag{3}$$ $$=\frac{[a - (b + c)({\frac{1}{x}})c]}{\sqrt{x}}$$ And finally: $$=\frac{[ax - (b + c)c]}{x\sqrt{x}}$$ Hope it helps. $(1)$: Rewriting ${1\over\sqrt x}$ as ${\sqrt x \over x}$ $(2)$: Rewriting $x$ as $\sqrt x \sqrt x$, now $\sqrt x$ is the common factor in both numerator and denominator $(3)$: Pulling the common factor ($\sqrt x$) out in the numerator
{ "language": "en", "url": "https://math.stackexchange.com/questions/110669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Help with a short paper - cumulative binomial probability estimates I was hoping someone could help me with a brief statement I can't understand in a book. The problem I have is with the final line of the following section of Lemma 2.2 (on the second page): Since $|\mathcal{T}_j|$ is binomially distributed with expectation $\mu = n(\log{n})^2 2^{-\sqrt{\log{n}}}$, by the standard estimates, we have that the probability that $\mathcal{T}_j$ has more than $2\mu$ elements is at most $e^{-\mu/3} < n^{-2}$. Then, with probability at least $1-\frac{\log{n}}{n^2}$, the sum of the sizes of these sets is at most $n(\log{n})^3 2^{-\sqrt{\log{n}}}$. Why is this?
Wikipedia is your friend. In general, when a paper mentions using technique X, if you are not aware of technique X, then look it up. It will be impossible to fill the gap without knowing about X. In the case at hand, X is the Chernoff bound (also Hoeffding's inequality, and even more names). It's indeed pretty standard, so it's good for you to get to know it. It's a theorem of the "concentration of measure" type, saying that if you average many well-behaved random variables, then you get something which is concentrated roughly like it should according to the central limit theorem. The central limit theorem itself doesn't give you anything about the speed of convergence to a normal distribution, and so to get an actual "large deviation bound" you use something like Chernoff. Sometimes you need more refined results, and then you use Berry-Esséen (q.v.).
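For reference, one standard multiplicative form of the Chernoff bound (presumably the "standard estimates" the authors invoke): if $X$ is a sum of independent indicator random variables with $\mathbb{E}[X]=\mu$, then for $0<\delta\le 1$ $$\Pr[X\ge(1+\delta)\mu]\le e^{-\delta^2\mu/3}.$$ Taking $\delta=1$ gives exactly the $e^{-\mu/3}$ bound quoted in the lemma, and a union bound over the sets $\mathcal{T}_j$ (of which there are apparently about $\log n$) then yields the $1-\frac{\log n}{n^2}$ statement.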
{ "language": "en", "url": "https://math.stackexchange.com/questions/110730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A contest problem about multiple roots If $a$ is real, what is the only real number that could be a multiple root of $x^3 +ax+1=0$? No one in my class knows how to do it, so I have to ask it here.
Let the multiple root be $r$, and let the other root be $s$. If $r$ is to be real, then $s$ must be real also. From Vieta's formulas, we have $2r + s = 0$ and $r^2s = -1$. The first equation gives $s = -2r$, which we plug into the second equation to get $r^2s = -2r^3 = -1$, so $r = \boxed{\left(\frac12\right)^{1/3}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/110804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Combining Taylor expansions How do you Taylor expand the function $F(x)={x\over \ln(x+1)}$ using standard results? (I know that WA offers the answer, but I want to know how to get it myself.) I know that $\ln(x+1)=x-{x^2\over 2}+{x^3\over 3}-\cdots$ But I don't know how to take the reciprocal. In general, given a function $g(x)$ with a known Taylor series, how might I find $(g(x))^n$, for some $n\in \mathbb Q$? Also, how might I evaluate expressions like $\ln(1+g(x))$ where I know the Taylor expansion of $g(x)$ (and $\ln x$). How do I combine them? Thank you.
You have $$F(x)=\frac{x}{\sum_{n\ge 1}\frac{(-1)^{n+1}}nx^n}=\frac1{\sum_{n\ge 0}{\frac{(-1)^n}{n+1}}x^n}=\frac1{1-\frac{x}2+\frac{x^2}3-+\dots}\;.$$ Suppose that $F(x)=\sum\limits_{n\ge 0}a_nx^n$; then you want $$1=\left(1-\frac{x}2+\frac{x^2}3-+\dots\right)\left(a_0+a_1x+a_2x^2+\dots\right)\;.$$ Multiply out and equate coefficients: $$\begin{align*} 1&=a_0\,;\\ 0&=a_1-\frac{a_0}2=a_1-\frac12,\text{ so }a_1=\frac12\,;\\ 0&=a_2-\frac{a_1}2+\frac{a_0}3=a_2-\frac14+\frac13=a_2+\frac1{12},\text{ so }a_2=-\frac1{12}\,; \end{align*}$$ and so on. In general $$a_n=\frac{a_{n-1}}2-\frac{a_{n-2}}3+\dots+(-1)^{n+1}\frac{a_0}{n+1}$$ for $n>0$, so $$\begin{align*}&a_3=-\frac1{24}-\frac16+\frac14=\frac1{24}\;,\\ &a_4=\frac1{48}+\frac1{36}+\frac18-\frac15=-\frac{19}{720}\;,\\ &a_5=-\frac{19}{1440}-\frac1{72}-\frac1{48}-\frac1{10}+\frac16=\frac{3}{160}\;, \end{align*}$$ and if there’s a pattern, it isn’t an obvious one, but you can get as good an approximation as you want in relatively straightforward fashion; $$F(x)=1+\frac{x}2-\frac{x^2}{12}+\frac{x^3}{24}-\frac{19x^4}{720}+\frac{3x^5}{160}+\dots$$ already gives two or three decimal places over much of the interval of convergence.
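If you want more coefficients, the recurrence is easy to automate exactly; here is a small sketch in Python using rational arithmetic (the function name is just illustrative):

```python
from fractions import Fraction

def recip_log_coeffs(N):
    """Coefficients a_0..a_N of x/ln(1+x), computed from the recurrence above."""
    # b_k = (-1)^k/(k+1) are the coefficients of ln(1+x)/x, with b_0 = 1
    b = [Fraction((-1) ** k, k + 1) for k in range(N + 1)]
    a = [Fraction(1)]
    for n in range(1, N + 1):
        # coefficient of x^n in the product must vanish: sum_k b_k a_{n-k} = 0
        a.append(-sum(b[k] * a[n - k] for k in range(1, n + 1)))
    return a

print(recip_log_coeffs(5))
# values: 1, 1/2, -1/12, 1/24, -19/720, 3/160
```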
{ "language": "en", "url": "https://math.stackexchange.com/questions/110869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Area of a trapezoid from given the two bases and diagonals Find the area of trapezoid with bases $7$ cm and $20$ cm and diagonals $13$ cm and $5\sqrt{10} $ cm. My approach: Assuming that the bases of the trapezoid are the parallel sides, the solution I can think of is a bit ugly, * *Find the other two non-parallel sides of the trapezoid by using this formula. *Find the height using this $$ h= \frac{\sqrt{(-a+b+c+d)(a-b+c+d)(a-b+c-d)(a-b-c+d)}}{2(b-a)}$$ Now, we can use $\frac12 \times$ sum of the parallel sides $\times$ height. But, this is really messy and I am not sure if this is correct or feasible without electronic aid, so I was just wondering how else we could solve this problem?
Let's denote $a=20$, $b=7$, $d_1=13$, $d_2=5 \sqrt{10}$, and let $h$ be the height. Drop perpendiculars from the endpoints of the shorter base to the longer one, and let $x$ and $y$ be the two pieces of the longer base outside their feet, so that each diagonal has horizontal span $b+x$ or $b+y$. You should solve the following system of equations: $$\begin{cases} d_1^2-(b+x)^2=d_2^2-(b+y)^2 \\ a-b=x+y \end{cases}$$ After you find the values of $x$ and $y$, calculate $h$ from one of the following equations: $h^2=d_2^2-(b+y)^2$, or $h^2= d_1^2-(b+x)^2$. Then calculate the area: $A=\frac{a+b}{2} \cdot h$.
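Carrying this out for the numbers in the question: subtracting the two height equations gives $(b+y)^2-(b+x)^2=d_2^2-d_1^2=81$, i.e. $(2b+x+y)(y-x)=27(y-x)=81$, so $y-x=3$; together with $x+y=a-b=13$ this yields $x=5$, $y=8$. Then $h^2=d_1^2-(b+x)^2=169-144=25$, so $h=5$ and $A=\frac{20+7}{2}\cdot 5=67.5\ \text{cm}^2$.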
{ "language": "en", "url": "https://math.stackexchange.com/questions/110921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to get from $a\sqrt{1 + \frac{b^2}{a^2}}$ to $\sqrt{a^2 + b^2}$ I have the following expression: $a\sqrt{1 + \frac{b^2}{a^2}}$. If I plug this into Wolfram Alpha, it tells me that, if $a, b$ are positive, this equals $\sqrt{a^2 + b^2}$. How do I get that result? I can't see how that could be done. Thanks
$$a\sqrt{1 + \frac{b^2}{a^2}}$$ $$=a\sqrt{\frac{a^2 + b^2}{a^2}}$$ $$=a\frac{\sqrt{a^2 + b^2}}{|a|}$$ So when $a$ and $b$ are positive, $|a|=a$. Hence: $$=\sqrt{a^2 + b^2}$$ Without the assumption: $$\sqrt{a^2} =|a|=\begin{cases} a && a \geq 0\\ -a &&a < 0\\ \end{cases}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/110994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to deal with multiplication inside of integral? I have an indefinite integral like this: \begin{aligned} \ \int x^3 \cdot \sin(4+9x^4)dx \end{aligned} I have to integrate it and I have no idea where to start. I have basic formulas for integrating, but I need to split this expression somehow or do something else.
Note that $$(4+9x^4)' = 36x^3,$$ so with the substitution $u = 4+9x^4$, $du = 36x^3\,dx$, your integral becomes $$\int x^3 \sin(4+9x^4)dx$$ $$\dfrac{1}{36}\int 36x^3 \sin(4+9x^4)dx$$ $$\dfrac{1}{36}\int \sin u \,du,$$ which you can easily solve: the result is $-\dfrac{1}{36}\cos(4+9x^4)+C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove by induction that $n!>2^n$ Possible Duplicate: Proof the inequality $n! \geq 2^n$ by induction Prove by induction that $n!>2^n$ for all integers $n\ge4$. I know that I have to start from the basic step, which is to confirm the above for $n=4$, being $4!>2^4$, which equals to $24>16$. How do I continue though. I do not know how to develop the next step. Thank you.
Hint: prove inductively that a product is $> 1$ if each factor is $>1$. Apply that to the product $$\frac{n!}{2^n}\: =\: \frac{4!}{2^4} \frac{5}2 \frac{6}2 \frac{7}2\: \cdots\:\frac{n}2$$ This is a prototypical example of a proof employing multiplicative telescopy. Notice how much simpler the proof becomes after transforming into a form where the induction is obvious, namely: $\:$ a product is $>1$ if all factors are $>1$. Many inductive proofs reduce to standard inductions.
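For comparison, the conventional inductive step the question asks for: assuming $n!>2^n$ for some $n\ge 4$, $$(n+1)! = (n+1)\cdot n! > (n+1)\cdot 2^n \ge 5\cdot 2^n > 2\cdot 2^n = 2^{n+1},$$ which together with the base case $4!>2^4$ completes the induction.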
{ "language": "en", "url": "https://math.stackexchange.com/questions/111146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 2 }
Linear Algebra: Find a matrix A such that T(x) is Ax for each x I am having difficulty solving this problem in my homework: (In my notation, $[x;y]$ represents a matrix of 2 rows, 1 column) Let $\mathbf{x}=[x_1;x_2]$, $v_1=[-3;5]$ and $v_2=[7;-2]$ and let $T\colon\mathbb{R}^2\to\mathbb{R}^2$ be a linear transformation that maps $\mathbf{x}$ into $x_1v_1+x_2v_2$. Find a matrix $A$ such that $T(\mathbf{x})$ is $A\mathbf{x}$ for each $\mathbf{x}$. I am pretty clueless. So I assume that I start off with the following: $x_1v_1 + x_2v_2 = x_1[-3;5] + x_2[7;-2]$ But I do not know what to do from here, or if this is even the correct start!
If I understand you correctly, I would say that $$A = \left(\begin{array}{rr}-3&7\\5&-2\end{array}\right) \ \textrm{and} \ x'=Ax.$$ You can see this if you write $$x = \left(\begin{array}{c}x_1\\x_2\end{array}\right).$$ Then $$x_1'= -3\cdot x_1 + 7\cdot x_2 = x_1 \cdot v_{11} + x_2\cdot v_{21}$$ and $$x_2'= 5\cdot x_1-2\cdot x_2 = x_1\cdot v_{12} + x_2\cdot v_{22}$$ (here $v_{12}$ is the second element of the vector $v_1$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/111213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What does $\sin^{2k}\theta+\cos^{2k}\theta$ equal? What is the sum $\sin^{2k}\theta+\cos^{2k}\theta$ equal to? Besides mathematical induction, other approaches are desired.
If you let $z_k=\cos^k(\theta)+i\sin^k(\theta)\in\Bbb C$, it is clear that $$ \cos^{2k}(\theta)+\sin^{2k}(\theta)=||z_k||^2. $$ When $k=1$ the complex point $z_1$ describes (under the usual Argand-Gauss identification $\Bbb C=\Bbb R^2$) the circumference of radius $1$ centered in the origin, and your expression gives $1$. For any other value $k>1$, the point $z_k$ describes a closed curve $\cal C_k\subset\Bbb R^2$ and your expression simply computes the square distance of the generic point from the origin. There's no reason to expect that this expression may take a simpler form than it already has.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Density function with absolute value Let $X$ be a random variable distributed with the following density function: $$f(x)=\frac{1}{2} \exp(-|x-\theta|) \>.$$ Calculate: $$F(t)=\mathbb P[X\leq t], \mathbb E[X] , \mathrm{Var}[X]$$ I have problems calculating $F(t)$ because of the absolute value. I'm doing it by case statements but it just doesn't get me to the right answer. So it gets to this: $$ \int_{-\infty}^\infty\frac{1}{2} \exp(-|x-\theta|)\,\mathrm dx $$
The very best thing you can do in solving problems such as these is to sketch the given density function first. It does not have to be a very accurate sketch: if you drew a peak of $\frac{1}{2}$ at $x=\theta$ and decaying curves on either side, that's good enough! Finding $F_X(t)$: * *Pick a number $t$ that is smaller than $\theta$ (where that peak is) and remember that $F_X(t)$ is just the area under the exponential curve to the left of $t$. You can find this area by integration. *Think why it must be that $F_X(\theta) = \frac{1}{2}$. *Pick a $t > \theta$. Once again, you have to find $F_X(t)$ which is the area under the density to the left of $t$. This is clearly the area to the left of $\theta$ (said area is $\frac{1}{2}$, of course!) plus the area under the curve between $\theta$ and $t$ which you can find by integration. Or you can be clever about it and say that the area to the right of $t = \theta + 5$ must, by symmetry, equal the area to the left of $\theta - 5$, which you found previously. Since the total area is $1$, we have $F_X(\theta+5)=1-F_X(\theta-5)$, or more generally, $$F_X(\theta + \alpha) = 1 - F_X(\theta - \alpha).$$ Finding $E[X]$: Since the pdf is symmetric about $\theta$, it should work out that $E[X]=\theta$ but we do need to check that the integral does not work out to be of the undefined form $\infty-\infty$.
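For reference, carrying out these steps (integrating the two exponential branches) gives $$F(t)=\begin{cases}\frac12 e^{\,t-\theta}, & t\le\theta,\\[2pt] 1-\frac12 e^{-(t-\theta)}, & t>\theta,\end{cases}\qquad \mathbb E[X]=\theta,\qquad \mathrm{Var}[X]=2,$$ the last two being the standard mean and variance of a Laplace distribution with scale parameter $1$.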
{ "language": "en", "url": "https://math.stackexchange.com/questions/111325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Solution to Locomotive Problem (Mosteller, Fifty Challenging Problems in Probability) My question concerns the solution Professor Mosteller gives for the Locomotive Problem in his book, Fifty Challenging Problems in Probability. The problem is as follows: A railroad numbers its locomotives in order 1, 2, ..., N. One day you see a locomotive and its number is 60. Guess how many locomotives the company has. Mosteller's solution uses the "symmetry principle". That is, if you select a point at random on a line, on average the point you select will be halfway between the two ends. Based on this, Mosteller argues that the best guess for the number of locomotives is 119 (locomotive #60, plus an equal number on either "side" of 60 gives 59 + 59 + 1 = 119. While I feel a bit nervous about challenging the judgment of a mathematician of Mosteller's stature, his answer doesn't seem right to me. I've picked a locomotive at random and it happens to be number 60. Given this datum, what number of locomotives has the maximum likelihood? It seems to me that the best answer (if you have to choose a single value) is that there are 60 locomotives. If there are 60 locomotives, then the probability of my selecting the 60th locomotive at random is 1/60. Every other total number of locomotives gives a lower probability for selecting #60. For example, if there are 70 locomotives, I have only a 1/70 probability of selecting #60 (and similarly, the probability is 1/n for any n >= 60). Thus, while it's not particularly likely that there are exactly 60 locomotives, this conclusion is more likely than any other. Have I missed something, or is my analysis correct?
Choosing $2\times 60 - 1$ gives an unbiased estimate of $N$. Choosing $60$ gives a maximum likelihood estimate of $N$. But these two types of estimator are often different, and indeed this example is the one used by Wikipedia to show that the bias of maximum-likelihood estimators can be substantial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 1, "answer_id": 0 }
Lemma vs. Theorem I've been using Spivak's book for a while now and I'd like to know what is the formal difference between a Theorem and a Lemma in mathematics, since he uses the names in his book. I'd like to know a little about the etymology but mainly about why we choose Lemma for some findings, and Theorem for others (not personally, but mathematically, i.e. why should one classify a finding as lemma and not as theorem). It seems that Lemmas are rather minor findings that serve as a keystone to proving a Theorem, but that is as far as I can go. NOTE: This question doesn't address my concern, so please avoid citing it as a duplicate.
There is no mystery regarding the use of these terms: an author of a piece of mathematics will label auxiliary results that are accumulated in the service of proving a major result lemmas, and will label the major results as propositions or (for the most major results) theorems. (Sometimes people will not use the intermediate term proposition; it depends on the author.) Exactly how this is decided is a matter of authorial judgement. There is a separate issue, which is the naming of certain well-known traditional results, such as Zorn's lemma, Nakayama's lemma, Yoneda's lemma, Fatou's lemma, Gauss's lemma, and so on. Those names are passed down by tradition, and you don't get to change them, whatever your view is on the importance of the results. As to how they originated, one would have to investigate the literature.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 6, "answer_id": 1 }
Examples of patterns that eventually fail Often, when I try to describe mathematics to the layman, I find myself struggling to convince them of the importance and consequence of "proof". I receive responses like: "surely if Collatz is true up to $20×2^{58}$, then it must always be true?"; and "the sequence of number of edges on a complete graph starts $0,1,3,6,10$, so the next term must be 15 etc." Granted, this second statement is less logically unsound than the first since it's not difficult to see the reason why the sequence must continue as such; nevertheless, the statement was made on a premise that boils down to "interesting patterns must always continue". I try to counter this logic by creating a ridiculous argument like "the numbers $1,2,3,4,5$ are less than $100$, so surely all numbers are", but this usually fails to be convincing. So, are there any examples of non-trivial patterns that appear to be true for a large number of small cases, but then fail for some larger case? A good answer to this question should: * *be one which could be explained to the layman without having to subject them to a 24 lecture course of background material, and *have as a minimal counterexample a case which cannot (feasibly) be checked without the use of a computer. I believe conditions 1. and 2. make my question specific enough to have in some sense a "right" (or at least a "not wrong") answer; but I'd be happy to clarify if this is not the case. I suppose I'm expecting an answer to come from number theory, but can see that areas like graph theory, combinatorics more generally and set theory could potentially offer suitable answers.
This might be a simple example. If we inscribe a circle of radius 1 in a square of side 2, the ratio of the area of the circle to the square is $\frac{\pi}{4}$. You can show that any time we put a square number of circles into this square, the ratio of the area of the circles to that of the square is (for the simple symmetric arrangement) again $\frac{\pi}{4} $. So for 1, 4, 9, 16 circles, this packing is the best we can do. I had mistakenly assumed, based on this "obvious" pattern, that the limit of optimal packings of circles into the square did not converge, but rather continued to drop down to this same ratio every time a square number was reached. This turns out not to be true, as I learned here. The pattern breaks down at n=49 circles. At 49 circles the optimal packing, given here, is not the simple square lattice arrangement. There are many other examples, but this served as a reminder for me.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "612", "answer_count": 41, "answer_id": 24 }
Solving the equation $- y^2 - x^2 - xy = 0$ Ok, this is really easy and it's getting me crazy because I forgot nearly everything I knew about maths! I've been trying to solve this equation and I can't seem to find a way out. I need to find out when the following equation is valid: $$\frac{1}{x} - \frac{1}{y} = \frac{1}{x-y}$$ Well, $x \not= 0$, $y \not= 0$, and $x \not= y$ but that's not enough I suppose. The first thing I did was passing everything to the left side: $$\frac{x-y}{x} - \frac{x-y}{y} - 1 = 0$$ Multiplying through by $xy$ to remove the fractions: $$xy - y^2 - x^2 + xy - xy = 0$$ But then I get stuck.. $$- y^2 - x^2 + xy = 0$$ How can I know when the above equation is valid?
Writing $j=e^{2\pi i/3}$ for a primitive cube root of unity (so $j+j^2=-1$ and $j^3=1$), we have $x^2-xy+y^2=(x+jy)(x+j^2y)$, so $x=y(1\pm\sqrt{-3})/2$. In particular, there are no real solutions with $x,y\neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
For $f$ Riemann integrable prove $\lim_{n\to\infty} \int_0^1x^nf(x)dx=0.$ Suppose $f$ is a Riemann integrable function on $[0,1]$. Prove that $\lim_{n\to\infty} \int_0^1x^nf(x)dx=0.$ This is what I am thinking: Fix $n$. Then by Jensen's Inequality we have $$0\leq\left(\int_0^1x^nf(x)dx\right)^2 \leq \left(\int_0^1x^{2n}dx\right)\left(\int_0^1f^2(x)dx\right)=\left(\frac{1}{2n+1}\right)\left(\int_0^1f^2(x)dx\right).$$Thus, if $n\to\infty$ then $$0\leq \lim_{n\to \infty}\left(\int_0^1x^nf(x)dx\right)^2 \leq 0$$ and hence we get what we want. How correct (or incorrect) is this?
That looks great. If someone doesn't know Jensen's inequality, this is still seen just with Cauchy-Schwarz. Another quick method is the dominated convergence theorem. Gerry's and Peter's answers are both far simpler though.
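For reference, the simplest bound alluded to: a Riemann integrable $f$ is bounded, say $|f|\le M$ on $[0,1]$, so $$\left|\int_0^1 x^n f(x)\,dx\right|\le M\int_0^1 x^n\,dx=\frac{M}{n+1}\to 0.$$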
{ "language": "en", "url": "https://math.stackexchange.com/questions/111561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Is $3^x \lt 1 + 2^x + 3^x \lt 3 \cdot 3^x$ right? This is from my lecture notes, where it is used to solve: But when $x = 0$, $(1 + 2^x + 3^x = 3) \gt (3^0 = 1)$? The thing is, how do I choose what expression should go on the left & right side?
When $x=0$, the left side $3^0=1$, the center is $3$ as you say, and the right side is $3\cdot 3^0=3 \cdot 1=3$ so the center and right sides are equal. But you want this for large $x$, so could restrict the range to $x \gt 1$, say.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Notation of the summation of a set of numbers Given a set of numbers $S=\{x_1,\dotsc,x_{|S|}\}$, where $|S|$ is the size of the set, what would be the appropriate notation for the sum of this set of numbers? Is it $$\sum_{x_i \in S} x_i \qquad\text{or}\qquad \sum_{i=1}^{|S|} x_i$$ or something else?
Say I have a set $A$ under an operation with the properties of $+$; then $$\sum_{i\in A} x_i$$ is how I write it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 2 }
If $A,B\in M(2,\mathbb{F})$ and $AB=I$, then $BA=I$ This is Exercise 7, page 21, from Hoffman and Kunze's book. Let $A$ and $B$ be $2\times 2$ matrices such that $AB=I$. Prove that $BA=I.$ I wrote $BA=C$ and I tried to prove that $C=I$, but I got stuck on that. I am supposed to use only elementary matrices to solve this question. I know that there is this question, but in those answers they use more than I am allowed to use here. I would appreciate your help.
Since $AB= I$, we have $\det(AB) = \det(A)\cdot\det(B) = 1$, hence $\det(B)\neq 0$ and $B$ is invertible. Now let $BA= C$. Then $BAB= CB$, which (using $AB=I$) gives $B= CB$, that is, $C = B\cdot B^{-1} = I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
Fibonacci numbers modulo $p$ If $p$ is prime, then $F_{p-\left(\frac{p}{5}\right)}\equiv 0\bmod p$, where $F_j$ is the $j$th Fibonacci number, and $\left(\frac{p}{5}\right)$ is the Jacobi symbol. Who first proved this? Is there a proof simple enough for an undergraduate number theory course? (We will get to quadratic reciprocity by the end of the term.)
Here's a proof that only uses a little Galois theory of finite fields (and QR). I don't know if it's any of the proofs referenced by Gerry. Recall that $$F_n = \frac{\phi^n - \varphi^n}{\phi - \varphi}$$ where $\phi, \varphi$ are the two roots of $x^2 = x + 1$. Crucially, this formula remains valid over $\mathbb{F}_{p^2}$ where $p$ is any prime such that $x^2 = x + 1$ has distinct roots, thus any prime not equal to $5$. We distinguish two cases: * *$x^2 = x + 1$ is irreducible. This is true for $p = 2$ and for $p > 2, p \neq 5$ it's true if and only if the discriminant $5$ isn't a square $\bmod p$ (i.e. $\sqrt{5}\notin\mathbb{F}_p$), hence if and only if $\left( \frac{5}{p} \right) = -1$, hence by QR if and only if $\left( \frac{p}{5} \right) = -1$. In this case $x^2 = x + 1$ splits over $\mathbb{F}_{p^2}$ and the Frobenius map $x \mapsto x^p$ generates its Galois group, hence $\phi^p \equiv \varphi \bmod p$. It follows that $\phi^{p+1} \equiv \phi \varphi \equiv -1 \bmod p$ and the same is true for $\varphi$, hence that $F_{p+1} \equiv 0 \bmod p$. *$x^2 = x + 1$ is reducible. This is false for $p = 2$ and for $p > 2, p \neq 5$ it's true if and only if $\left( \frac{p}{5} \right) = 1$. In this case $x^2 = x + 1$ splits over $\mathbb{F}_p$, hence $\phi^{p-1} \equiv 1 \bmod p$ and the same is true for $\varphi$, hence $F_{p-1} \equiv 0 \bmod p$. The case $p = 5$ can be handled separately. Maybe this is slightly ugly, though.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Resources for learning Elliptic Integrals During a quiz my Calc 3 professor made a typo. He corrected it in class, but he offered a challenge to anyone who could solve the integral. The (original) question was: Find the length of the curve as described by the vector valued function $\vec{r} = \frac{1}{3}t^{3}\vec{i} + t^{2}\vec{j} + 4t\vec{k} $ where $0 \le t \le 3$ This give us: $\int_0^3 \! \sqrt{t^{4}+4t^{2}+16} \, \mathrm{d}t$ Wolfram Alpha says that the solution to this involves Incomplete Elliptic Integrals of the First and Second Kinds. I was wondering if anyone had any level appropriate resources where I can find information about how to attack integrals like this. Thanks in advance.
There are plenty of places to look (for example, most any older 2-semester advanced undergraduate "mathematics for physicists" or "mathematics for engineers" text), but given that you're in Calculus III, some of these might be too advanced. If you can find a copy (your college library may have a copy, or might be able to get a copy using interlibrary loan), I strongly recommend the treatment of elliptic integrals at the end of G. M. Fichtenholz's book The Indefinite Integral (translated to English by Richard A. Silverman in 1971). Also, the books below might be useful, but Fichtenholz's book would be much better suited for you, I think. (I happen to have a copy of Fichtenholz's book and Bowman's book, by the way.) Arthur Latham Baker, Elliptic functions: An Elementary Text-book for Students of Mathematics (1906) http://books.google.com/books?id=EjYaAAAAYAAJ Alfred Cardew Dixon, The Elementary Properties of the Elliptic Functions With Examples (1894) http://books.google.com/books?id=Gx4SAAAAYAAJ Frank Bowman, Introduction to Elliptic Functions With Applications (reprinted by Dover Publications in 1961)
{ "language": "en", "url": "https://math.stackexchange.com/questions/111921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Why are perpendicular bisectors 'lines'? Given two points $p$ and $q$ their bisector is defined to be $l(p,q)=\{z:d(p,z)=d(q,z)\}$. Due to the construction in Euclidean geometry, we know that $l(p,q)$ is a line, that is, for $x,y,z\in l(p,q)$ (with $y$ between $x$ and $z$), we have $d(x,y)+d(y,z)=d(x,z)$, which characterizes lines. I wonder whether this is true for other geometries. That is, does the bisector always satisfy the above characterization? I think about this problem when trying to prove bisectors are 'lines' in hyperbolic geometry (upper half plane) where the metric is different from Euclidean, only to notice even the Euclidean case is not so easy. Any advice would be helpful!
Let $A$ and $B$ be the two given points and let $M$ be the midpoint of $AB$, i.e., $M\in A\vee B$ and $d(M,A)=d(M,B)$. Let $X\ne M$ be an arbitrary point with $d(X,A)=d(X,B)$. Then the triangles $\Delta(X,A,M)$ and $\Delta(X,B,M)$ are congruent as corresponding sides have equal length. It follows that $\angle(XMA)=\angle(XMB)={\pi\over2}$ which implies that the line $m:=X\vee M$ is the unique normal to $A\vee B$ through $M$. Conversely, if $Y$ is an arbitrary point on this line, then $d(Y,M)=d(Y,M)$, $d(M,A)=d(M,B)$ and $\angle(Y,M,A)=\angle(Y,M,B)={\pi\over2}$. Therefore the triangles $\Delta(Y,M,A)$ and $\Delta(Y,M,B)$ are congruent, and we conclude that $d(Y,A)=d(Y,B)$. The above argument is valid in euclidean geometry as well as in spherical and hyperbolic geometry. Note that a spherical triangle is completely determined (up to a motion or reflection on $S^2$) by the lengths of its three sides or by the lengths of two sides and the enclosed angle, and the same is true concerning hyperbolic triangles.
{ "language": "en", "url": "https://math.stackexchange.com/questions/111975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can there exist a non-constant continuous function that has a derivative of zero everywhere? Somebody told me that there exists a continuous function with a derivative of zero everywhere that is not constant. I cannot imagine how that is possible and I am starting to doubt whether it's actually true. If it is true, could you show me an example? If it is not true, how would you go about disproving it?
Since there are no restrictions on the domain, it is actually possible. Let $f:(0,1)\cup(2,3)\to \mathbb R$ be defined by $f(x)=\left\{ \begin{array}{ll} 0 & \mbox{if } x \in (0,1) \\ 1 & \mbox{if } x\in (2,3) \end{array} \right.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/112047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Simplest Example of a Poset that is not a Lattice A partially ordered set $(X, \leq)$ is called a lattice if for every pair of elements $x,y \in X$ both the infimum and supremum of the set $\{x,y\}$ exists. I'm trying to get an intuition for how a partially ordered set can fail to be a lattice. In $\mathbb{R}$, for example, once two elements are selected the completeness of the real numbers guarantees the existence of both the infimum and supremum. Now, if we restrict our attention to a nondegenerate interval $(a,b)$ it is clear that no two points in $(a,b)$ have either a supremum or infimum in $(a,b)$. Is this the right way to think of a poset that is not a lattice? Is there perhaps a more fundamental example that would yield further clarity?
The set $\{x,y\}$ in which $x$ and $y$ are incomparable is a poset that is not a lattice, since $x$ and $y$ have neither a common lower nor common upper bound. (In fact, this is the simplest such example.) If you want a slightly less silly example, take the collection $\{\emptyset, \{0\}, \{1\}\}$ ordered by inclusion. This is a poset, but not a lattice since $\{0\}$ and $\{1\}$ have no common upper bound.
{ "language": "en", "url": "https://math.stackexchange.com/questions/112117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 6, "answer_id": 1 }
$C[0,1]$ is not a Hilbert space Prove that the space $C[0,1]$ of continuous functions from $[0,1]$ to $\mathbb{R}$ with the inner product $ \langle f,g \rangle =\int_{0}^{1} f(t)g(t)dt \quad $ is not a Hilbert space. I know that I have to find a Cauchy sequence $(f_n)_n$ which converges to a function $f$ which is not continuous, but I can't construct such a sequence $(f_n)_n$. Any help?
You are right to claim that in order to prove that the subspace $C[0,1]$ of $L^2[0,1]$ is not complete, it is sufficient to "find [in $C[0,1]$] a Cauchy sequence $(f_n)_n$ [i.e. Cauchy for the $L^2$-norm] which converges [in $L^2[0,1]$] to a function $f$ which is not continuous". You will not even need to check that $(f_n)_n$ is $L^2$-Cauchy: this will follow from the $L^2$-convergence. The sequence of functions $f_n\in C[0,1]$ defined by $$f_n(x)=\begin{cases}n\left(x-\frac12\right)&\text{if }\left|x-\frac12\right|\le\frac1n\\+1&\text{if }x-\frac12\ge\frac1n\\-1&\text{if }x-\frac12\le-\frac1n\end{cases}$$ satisfies $|f_n|\le1$ and converges pointwise to the function $f$ defined by $$f(x)=\begin{cases}0&\text{if }x=\frac12\\+1&\text{if }x>\frac12\\-1&\text{if }x<\frac12.\end{cases}$$ By the dominated convergence theorem, we deduce $\|f_n-f\|_2\to0.$ Because of its jump, the function $f$ is discontinuous, and more precisely: not equal almost everywhere to any continuous function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/112168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 2 }
Solving the system $\sum \sin = \sum \cos = 0$. Can we solve the system of equations: $$\sin \alpha + \sin \beta + \sin \gamma = 0$$ $$\cos \alpha + \cos \beta + \cos \gamma = 0$$ ? (i.e. find the possible values of $\alpha, \beta, \gamma$)
Developing on Gerenuk's answer, you could consider the complex numbers $$ z_1=\cos \alpha+i\sin \alpha,\ z_2=\cos \beta+i\sin\beta,\ z_3=\cos \gamma+i\sin \gamma$$ Then you know that $z_1,z_2,z_3$ are on the unit circle, and the centroid of the triangle formed by the points of affixes $z_i$ is of affix $\frac{z_1+z_2+z_3}{3}=0$. From classical geometry, we can see that if the centroid of a triangle is the same as the center of the circumscribed circle, then the triangle is equilateral. This proves that $\alpha,\beta,\gamma$ are of the form $\theta,\theta+\frac{2\pi}{3},\theta+\frac{4\pi}{3}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/112411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Is the domain of a one-to-one function a set if the target is a set? This is probably very naive, but suppose I have an injective map from a class into a set; may I conclude that the domain of the map is a set as well?
If a function $f:A\to B$ is injective, we can assume without loss of generality that $f$ is surjective too (by passing to the subclass $f[A]$ of $B$); therefore $f^{-1}:B\to A$ is also a bijection. If $B$ is a set then every subclass of $B$ is a set, so $f^{-1}:B\to A$ is a bijection from a set, and by the axiom of replacement $A$ is a set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/112467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Basis for adjoint representation of $sl(2,F)$ Consider the Lie algebra $sl(2,F)$ with standard basis $x=\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$, $j=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, $h=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$. I want to find the Casimir element of the adjoint representation of $sl(2,F)$. How can I go about this? Thanks.
A representation for a Lie algebra $\mathfrak{g}$ is a Lie algebra homomorphism $\varphi:\mathfrak{g} \to \mathfrak{gl}(V)$ for some vector space $V$. Of course, every representation corresponds to a module action. In the case of this representation the module action would be $g \cdot v = \varphi(g)(v)$. It is not clear what you mean by "basis for the representation". Do you mean a basis for the linear transformations $\varphi(g)$? That would be a basis for $\varphi(\mathfrak{g})$ (the image of $\mathfrak{g}$ in $\mathfrak{gl}(V)$). Or do you mean a basis for the module $V$? The adjoint representation is the map $\mathrm{ad}:\mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$ defined by $\mathrm{ad}(g)=[g,\cdot]$. In the case that $\mathfrak{g}=\mathfrak{sl}_2$, $\mathfrak{g}$ has a trivial center so $\mathrm{ad}$ is injective. Thus a basis for $\mathfrak{g}$ maps directly to a basis for $\mathrm{ad}(\mathfrak{g})$. Therefore, if by "basis for the representation" you mean a basis for the space of linear transformations $\mathrm{ad}(\mathfrak{sl}_2)$, then "Yes" $\mathrm{ad}_e, \mathrm{ad}_f,$ and $\mathrm{ad}_h$ form a basis for this space. On the other hand, if you mean "basis for the module" then $e,f,$ and $h$ themselves form a basis for $V=\mathfrak{sl}_2$. By the way, if you are looking for matrix representations for $ad_e,ad_f,ad_h$ relative to the basis $e,f,h$. Simply compute commutators: $[e,e]=0$, $[e,f]=h$, $[e,h]=-2e$. Thus the coordinate matrix of $ad_e$ is $$\begin{bmatrix} 0 & 0 & -2 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
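Following the same recipe for the other two basis elements: $[f,e]=-h$, $[f,f]=0$, $[f,h]=2f$ and $[h,e]=2e$, $[h,f]=-2f$, $[h,h]=0$, so relative to the ordered basis $e,f,h$ the remaining coordinate matrices are $$\mathrm{ad}_f = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 2 \\ -1 & 0 & 0 \end{bmatrix}, \qquad \mathrm{ad}_h = \begin{bmatrix} 2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$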
{ "language": "en", "url": "https://math.stackexchange.com/questions/112550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are the possible values for $\gcd(a^2, b)$ if $\gcd(a, b) = 3$? I was looking back at my notes on number theory and I came across this question. Let $a$, $b$ be positive integers such that $\gcd(a, b) = 3$. What are the possible values for $\gcd(a^2, b)$? I know it has to do with their prime factorization decomposition, but where do I go from here?
If $p$ is a prime, and $p|a^2$, then $p|a$; thus, if $p|a^2$ and $p|b$, then $p|a$ and $p|b$, hence $p|\gcd(a,b) = 3$. So $\gcd(a^2,b)$ must be a power of $3$. Also, $3|a^2$ and $3|b$, so $3|\gcd(a^2,b)$; so $\gcd(a^2,b)$ is a multiple of $3$. If $3^{2k}|a^2$, then $3^k|a$ (you can use prime factorization here); so if $3^{2k}|\gcd(a^2,b)$, then $3^k|\gcd(a,b) = 3$. Thus, $k\leq 1$. That is, no power of $3$ greater than $3^2$ can divide $\gcd(a^2,b)$. In summary: $\gcd(a^2,b)$ must be a power of $3$, must be a multiple of $3$, and cannot be divisible by $3^3=27$. What's left? Now give examples to show all of those possibilities can occur.
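One way to finish with examples: $a=b=3$ gives $\gcd(a^2,b)=\gcd(9,3)=3$, while $a=3$, $b=9$ gives $\gcd(9,9)=9$; both pairs satisfy $\gcd(a,b)=3$, so the possible values are exactly $3$ and $9$.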
{ "language": "en", "url": "https://math.stackexchange.com/questions/112608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
The inclusion $j:L^{\infty}(0,1)\to L^1(0,1)$ is continuous but not compact. I'm stuck on this problem, namely I cannot find a bounded subset in $L^\infty(0,1)$ such that it is not mapped by the canonical inclusion $$j: L^\infty(0,1)\to L^1(0,1)$$ onto a relatively compact subset in $L^1(0,1)$. Can anybody provide me an example? Really I don't see the point. My thoughts are wondering on the fact that the ball of $L^\infty(0,1)$ is norm dense in $L^1(0,1)$ so the inclusion cannot be compact, however, as i said, no practical examples come to my mind. Thank you very much in advance.
This is actually just a variant of a special case of NKS’s example, but it may be especially easy to visualize with this description. For $n\in\mathbb{Z}^+$ and $x\in(0,1)$ let $f_n(x)$ be the $n$-th bit in the unique non-terminating binary expansion of $x$. Then $\|f_n\|_\infty=1$, but $\|f_n-f_m\|_1=\frac12$ whenever $n\ne m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/112668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Infinite distinct factorizations into irreducibles for an element Consider the factorization into irreducibles of $6$ in $\mathbb{Z}[\sqrt{-5}]$. We have $6=2 \times 3$ and $6=(1+\sqrt{-5}) \times (1-\sqrt{-5})$, i.e. $2$ distinct factorizations. And, $$6^2=3 \times 3\times2\times2$$ $$=(1+\sqrt{-5}) \times (1-\sqrt{-5}) \times (1+\sqrt{-5}) \times (1-\sqrt{-5})$$ $$=(1+\sqrt{-5}) \times (1-\sqrt{-5})\times3\times2.$$ More generally, $6^n$ will have $n+1$ distinct factorizations into irreducibles in $\mathbb{Z}[\sqrt{-5}]$ by a simple combinatorial argument. But, can we construct a ring in which there exists an element that has an infinite number of distinct factorizations into irreducibles? To make life harder, can we construct an extension of $\mathbb{Z}$ in which this happens? I have been thinking about this for a while and have managed to find no foothold.. Any help is appreciated.
If you are only interested in behaviour in the ring of integers of a number field (such as $\mathbb{Z}[\sqrt{-5}]$) then you will never get infinitely many different factorisations of an element. These different factorisations come from reordering the (finitely many) prime ideals in the unique factorisation of the ideal generated by your element.
{ "language": "en", "url": "https://math.stackexchange.com/questions/112846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
For $x+y=n$, $y^x < x^y$ if $x<y$ (updated) I'd like to use this property for my research, but it's somewhat messy to prove. $$\text{For all natural numbers $x,y$ such that $x+y=n$ and $1<x<y<n$, then $y^x < x^y$}.$$ For example, let $x=3, y=7$. Then $y^x = y^3 = 343$ and $x^y = 3^7 = 2187$. Any suggestion on how to prove this?
I proved this in the special case $x = 99, y = 100$, here. As others have pointed out, what you really want to hold is the following: Statement: Let $x, y \in \mathbb{R}$. Then $y > x > e$ implies $x^y > y^x$. Proof: Write $y = x + z$, where $z > 0$. Then, $$\begin{align} x^y > y^x &\iff x^x x^z > y^x \\ &\iff x^z > \left(\frac{x+z}{x} \right)^x \\ &\iff x^z > \left( 1 + \frac{z}{x} \right)^x. \end{align}$$ The right hand side $\left(1 + \frac{z}{x} \right)^x$ is monotone increasing with limit $e^z$. Since the left hand side is strictly greater than $e^z$ (as $x > e$), it follows that the inequality always holds.
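As a sanity check on why $x>e$ is needed: for integers, $x=2$, $y=3$ gives $y^x=9>8=x^y$, so the claim as stated in the question really requires $x\ge 3$.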
{ "language": "en", "url": "https://math.stackexchange.com/questions/112900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
"Every linear mapping on a finite dimensional space is continuous" From Wiki Every linear function on a finite-dimensional space is continuous. I was wondering what the domain and codomain of such linear function are? Are they any two topological vector spaces (not necessarily the same), as along as the domain is finite-dimensional? Can the codomain be a different normed space (and may not be finite-dimensional)? I asked this because I saw elsewhere the same statement except the domain is a finite-dimensional normed space, and am also not sure if the codomain can be a different normed space (and may not be finite-dimensional). Thanks and regards!
The special case of a linear transformation $A: \mathbb{R}^n \to \mathbb{R}^n$ being continuous leads nicely into the definition and existence of the operator norm of a matrix as proved in these notes. To summarise that argument, if we identify $M_n(\mathbb{R})$ with $\mathbb{R^{n^2}}$, and suppose that $v \in \mathbb{R}^n$ has co-ordinates $v_j$, then by properties of the Euclidean and sup norm on $\mathbb{R}^n$ we have: $\begin{align}||Av|| &\leq \sqrt{n} \,||Av||_{\sup} \\&= \sqrt{n}\max_i\bigg|\sum_{j}a_{ij}\,v_j\bigg|\\&\leq \sqrt{n}\max_i \sum_{j}|a_{ij}\,v_j|\\&\leq \sqrt{n} \max_i n\big(\max_j|a_{ij} v_j|\big)\\&\leq n\sqrt{n} \max_i \big(\max_j |a_{ij}| \max_j |v_j|\big)\\&= n\sqrt{n}\max_{i,j}|a_{ij}|||v||_{\sup}\\&\leq n \sqrt{n} \max_{i,j}|a_{ij}|||v|| \end{align}$ $\Rightarrow ||Av|| \leq C ||v||$ where $C = n\sqrt{n}\displaystyle\max_{i,j}|a_{ij}|$ is independent of $v$. So if $\varepsilon>0$ is given, choose $\delta = \dfrac{\varepsilon}{C}$ and for $v, w \in \mathbb{R}^n$ with $||v-w||< \delta$ consider $||Av - Aw || = ||A(v-w) || \leq C ||v-w || < \delta C= \varepsilon$, from which we conclude that $A$ is uniformly continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/112985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59", "answer_count": 3, "answer_id": 2 }
What is the Jacobian? What is the Jacobian of the function $f(u+iv)={u+iv-a\over u+iv-b}$? I think the Jacobian should be something of the form $\left(\begin{matrix} {\partial f_1\over\partial u} & {\partial f_1\over\partial v} \\ {\partial f_2\over\partial u} & {\partial f_2\over\partial v} \end{matrix}\right)$ but I don't know what $f_1,f_2$ are in this case. Thank you.
You could just write $(u+iv-(a_1+a_2 i))/(u+iv-(b_1+b_2 i))$ where $u,v,a_1,a_2,b_1,b_2$ are real. Then multiply the numerator and denominator by the complex conjugate of the denominator to find the real and imaginary parts. Then later, exploit the Cauchy--Riemann equations to conclude that the matrix must have the form $\begin{bmatrix} {}\ \ \ c & d \\ -d& c\end{bmatrix}$ where $c$ and $d$ are some real numbers and $f\;'(z)=c+id$.
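If one carries this through (or simply differentiates), $$f\;'(z)=\frac{(z-b)-(z-a)}{(z-b)^2}=\frac{a-b}{(z-b)^2},$$ so with $c+id=f\;'(z)$ the Jacobian determinant is $c^2+d^2=|f\;'(z)|^2=\dfrac{|a-b|^2}{|z-b|^4}$.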
{ "language": "en", "url": "https://math.stackexchange.com/questions/113027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Indefinite integral of $\cos^{3}(x) \cdot \ln(\sin(x))$ I need help. I have to integrate $\cos^{3}(x) \cdot \ln(\sin(x))$ and I don't know how to solve it. In our book it says that we have to solve it using the substitution method. If somebody knows it, please help me.
Substitute: $\sin x = t \Rightarrow \cos x\,dx = dt$, hence: $I=\int (1-t^2)\cdot \ln (t) \,dt$. This integral you can solve using the integration by parts method.
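Carrying out the integration by parts (differentiate $\ln t$, integrate $1-t^2$): $$\int (1-t^2)\ln t \,dt = \left(t-\frac{t^3}{3}\right)\ln t-\int\left(1-\frac{t^2}{3}\right)dt = \left(t-\frac{t^3}{3}\right)\ln t - t + \frac{t^3}{9}+C,$$ and then substitute back $t=\sin x$.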
{ "language": "en", "url": "https://math.stackexchange.com/questions/113184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof of $\sum_{0 \le k \le a} {a \choose k} {b \choose k} = {a+b \choose a}$ $$\sum_{0 \le k \le a}{a \choose k}{b \choose k} = {a+b \choose a}$$ Is there any way to prove it directly? Using that $\displaystyle{a \choose k}=\frac{a!}{k!(a-k)!}$?
How about this proof? (Actually an extended version of your identity.) * *http://en.wikipedia.org/wiki/Chu-Vandermonde_identity#Algebraic_proof I don't think it is "direct" enough, though...
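For a direct argument: since $\binom{a}{k}=\binom{a}{a-k}$, the left-hand side equals $\sum_k\binom{a}{a-k}\binom{b}{k}$, which counts the ways to choose $a$ objects out of a pool of $a$ red and $b$ blue ones (take $a-k$ red and $k$ blue), and that is $\binom{a+b}{a}$.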
{ "language": "en", "url": "https://math.stackexchange.com/questions/113267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Must a measure on $2^{\mathbb{N}}$ be atomless to be a measure on $[0,1]$? This question comes from section 4.4, page 17, of this paper. Let $\mu$ be a Borel measure on Cantor space, $2^\mathbb{N}$. The authors say that If the measure is atomless, via the binary expansion of reals we can view it also as a Borel measure on $[0,1]$. Is it necessary that $\mu$ be atomless?
The existence of the measure on $[0,1]$ has nothing to do with atoms, per se. Let $\varphi: 2^\mathbb{N}\to [0,1]$ be defined by $\varphi(x)=\sum_{n=0}^\infty {x(n)/2^{n+1}}$. This map is Borel measurable, and so for any Borel measure $\mu$ on $2^\mathbb{N}$, the image measure $\mu\circ\varphi^{-1}$ is a Borel measure on $[0,1]$. The authors mention this condition, I think, so they can go back and forth between the two viewpoints. That is, for atomless measures the map $\mu\mapsto \mu\circ\varphi^{-1}$ is one-to-one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/113317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Disprove uniform convergence of $\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}$ in $[0,\infty)$ How would I show that $\sum_{n=1}^{\infty} \frac{x}{(1+x)^n}$ does not uniformly converge in $[0,\infty)$? I don't know how to approach this problem. Thank you.
This is almost the same as Davide's answer: let $$f_n(x)={x\over (1+x)^n},\ n\in\Bbb N^+;\ \ \text{ and }\ \ f(x)= \sum\limits_{n=1}^\infty {x\over(1+x)^n}.$$ Since, for $x>0$, the series $\sum\limits_{n=1}^\infty {1\over(1+x)^n}$ is a geometric series with $r={1\over 1+x}$: $$ f(x)=x\sum_{n=1}^\infty {1\over(1+x)^n} =x\cdot{ 1/(1+x)\over 1-\bigl(1/(1+x)\bigr)} =x\cdot{1\over x}=1, $$ for $x>0$. As $f(0)=0$, we see that $f(x)$ converges pointwise to a discontinuous function on $[0,\infty)$. Since a uniform limit of continuous functions (here, the partial sums $\sum_{n=1}^N f_n$) is continuous, and as each term $f_n$ is continuous on $[0,\infty)$, it follows that the series does not converge uniformly on $[0,\infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/113352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
an open ball in $\mathbb{R^n}$ is connected Show that an open ball in $\mathbb{R^n}$ is a connected set. Attempt at a Proof: Let $r>0$ and $x_o\in\mathbb{R^n}$. Suppose $B_r(x_o)$ is not connected. Then, there exist $U,V$ open in $\mathbb{R^n}$ that disconnect $B_r(x_o)$. Without loss of generality, let $a\in B_r(x_o)$: $a\in U$. Since $U$ is open, for some $r_1>0$, $B_{r_1}(a)\subseteq U$. Since $(U\cap B_r(x_o))\cap (V\cap B_r(x_o))=\emptyset$, $a\not\in V$. Thus, $\forall b\in V, d(a,b)>0$. But then for some $b'\in V: b'\in B_r(x_o)$ and some $r>0$, $d(a,b')>r$. Contradiction since both $a$ and $b'$ were in the ball of radius $r$. Is this the general idea?
$\mathbb{R}=(-\infty,\infty)$ is connected, and a finite product of connected spaces is connected, so $\mathbb{R}^n$ is connected. Since an open ball in $\mathbb{R}^n$ is homeomorphic to $\mathbb{R}^n$ itself (translate the center to the origin, rescale to the unit ball, and apply $x\mapsto x/(1-|x|)$), the result follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/113383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Show that $\tan 3x =\frac{ \sin x + \sin 3x+ \sin 5x }{\cos x + \cos 3x + \cos 5x}$ I was able to prove this but it is too messy and very long. Is there a better way of proving the identity? Thanks.
More generally, for any arithmetic sequence, denoting $z=\exp(i x)$ and $2\ell=an+2b$, we have $$\begin{array}{c l} \blacktriangle & =\frac{\sin(bx)+\sin\big((a+b)x\big)+\cdots+\sin\big((na+b)x\big)}{\cos(bx)+\cos\big((a+b)x\big)+\cdots+\cos\big((na+b)x\big)} \\[2pt] & \color{Red}{\stackrel{1}=} \frac{1}{i}\frac{z^b\big(1+z^a+\cdots+z^{na}\big)-z^{-b}\big(1+z^{-a}+\cdots+z^{-na}\big)}{z^b\big(1+z^a+\cdots+z^{na}\big)+z^{-b}\big(1+z^{-a}+\cdots+z^{-na}\big)} \\[2pt] & \color{LimeGreen}{\stackrel{2}=}\frac{1}{i}\frac{z^b-z^{-b}z^{-na}}{z^b+z^{-b}z^{-na}} \\[2pt] & \color{Blue}{\stackrel{3}=}\frac{(z^\ell-z^{-\ell})/2i}{(z^\ell+z^{-\ell})/2} \\[2pt] & \color{Red}{\stackrel{1}{=}}\frac{\sin (\ell x)}{\cos(\ell x)}. \end{array}$$ Hence $\blacktriangle$ is $\tan(\ell x)$ - observe $\ell$ is the average of the first and last term in the arithmetic sequence. $\color{Red}{(1)}$: Here we use the formulas $$\sin \theta = \frac{e^{i\theta}-e^{-i\theta}}{2i} \qquad \cos\theta = \frac{e^{i\theta}+e^{-i\theta}}{2}.$$ $\color{LimeGreen}{(2)}$: Here we divide numerator and denominator by $1+z^a+\cdots+z^{na}$. $\color{Blue}{(3)}$: Multiply numerator and denominator by $z^{na/2}/2$. Note: there are no restrictions on $a$ or $b$ - they could even be irrational!
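In particular, the identity in the question is the case $a=2$, $b=1$, $n=2$: the angles are $x,3x,5x$ and $\ell=\frac{1+5}{2}=3$, giving $\tan 3x$.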
{ "language": "en", "url": "https://math.stackexchange.com/questions/113451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 2 }
Finite Rings whose additive structure is isomorphic to $\mathbb{Z}/(n \mathbb{Z})$ I am having trouble proving the following conjecture: If $R$ is a ring with $1_R$ different from $0_R$ s.t. its additive structure is isomorphic to $\mathbb{Z}/(n \mathbb{Z})$ for some $n$, must $R$ always be isomorphic to the ring $\mathbb{Z}/(n \mathbb{Z})$ ? How do we go about defining a ring isomorphism with a proper multiplication on $R$?
Combine the following general facts: For any ring $R$, the prime ring (i.e. the subring generated by $1$) is isomorphic to the quotient of $\mathbb Z$ by the annihilator of $R$ in $\mathbb Z$. Any cyclic group $R$ is isomorphic to the quotient of $\mathbb Z$ by the annihilator of $R$ in $\mathbb Z$. (This is Mariano's answer with slightly different words.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/113505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Normal distribution involving $\Phi(z)$ and standard deviation The random variable X has normal distribution with mean $\mu$ and standard deviation $\sigma$. $\mathbb{P}(X>31)=0.2743$ and $\mathbb{P}(X<39)=0.9192$. Find $\mu$ and $\sigma$.
Hint: Write, $$ \tag{1}\textstyle P[\,X>31\,] =P\bigl[\,Z>{31-\mu\over\sigma}\,\bigr]=.2743\Rightarrow {31-\mu\over\sigma} = z_1 $$ $$\tag{2}\textstyle P[\,X<39\,] =P\bigl[\,Z<{39-\mu\over\sigma}\,\bigr]=.9192\Rightarrow {39-\mu\over\sigma} =z_2 , $$ where $Z$ is the standard normal random variable. You can find the two values $z_1$ and $z_2$ from a cdf table for the standard normal distribution. Then you'll have two equations in two unknowns. Solve those for $\mu$ and $\sigma$. For example, to find $z_1$ and $z_2$, you can use the calculator here. It gives the value $z$ such that $P[Z<z]=a$, where you input $a$. To use the calculator for the first equation first write $$\textstyle P\bigl[\,Z<\underbrace{31-\mu\over\sigma}_{z_1}\,\bigr]=1-P\bigl[\,Z>{31-\mu\over\sigma}\,\bigr] =1-.2743=.7257.$$ You input $a=.7257$, and it returns $z_1\approx.59986$. To use the calculator for the second equation, $$\textstyle P\bigl[\,Z<\underbrace{39-\mu\over\sigma}_{z_2}\,\bigr]= .9192,$$ input $a=.9192$, the calculator returns $z_2\approx1.3997$. So, you have to solve the system of equations: $$ \eqalign{ {31-\mu\over\sigma}&=.59986\cr {39-\mu\over\sigma}&=1.3997\cr } $$ (The solution is $\sigma\approx 10$, $\mu\approx 25$.)
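If you'd rather not interpolate from a table, here is a minimal numerical sketch (assuming SciPy is available):

```python
from scipy.stats import norm

z1 = norm.ppf(1 - 0.2743)  # P[Z > z1] = 0.2743  ->  z1 ~ 0.5999
z2 = norm.ppf(0.9192)      # P[Z < z2] = 0.9192  ->  z2 ~ 1.3997
# 31 = mu + z1*sigma and 39 = mu + z2*sigma:
sigma = (39 - 31) / (z2 - z1)
mu = 31 - z1 * sigma
print(mu, sigma)  # approximately 25 and 10
```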
{ "language": "en", "url": "https://math.stackexchange.com/questions/113579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a simple expression for this series expansion without a piecewise definition I am doing some practice Calculus questions and I ran into the following problem which ended up having a reduction formula with a neat expansion that I was wondering how to express in terms of a series. Here it is: consider $$ I_{n} = \int_{0}^{\pi /2} x^n \sin(x) dx $$ I obtained the reduction formula $$ I_{n} = n\left(\frac{\pi}{2}\right)^{n-1} - n I_{n-1}. $$ I started incorrectly computing up to $I_{6}$ with the reduction formula $$ I_{n} = n\left(\frac{\pi}{2}\right)^{n-1} - I_{n-1} $$ by accident which ended up having a way more interesting pattern than the correct reduction formula. So, after computing $I_{0} = 1$, the incorrect reduction expansion was, $$ I_{1} = 0 \\ I_{2} = \pi \\ I_{3} = \frac{3\pi^2}{2^2} - \pi \\ I_{4} = \frac{4\pi^3}{2^3} - \frac{3\pi^2}{2^2} + \pi \\ I_{5} = \frac{5\pi^4}{2^4} - \frac{4\pi^3}{2^3} + \frac{3\pi^2}{2^2} - \pi \\ I_{6} = \frac{6\pi^5}{2^5} - \frac{5\pi^4}{2^4} + \frac{4\pi^3}{2^3} - \frac{3\pi^2}{2^2} + \pi \\ $$ Note that $\pi = \frac{2\pi}{2^1}$, of course, which stays in the spirit of the pattern. How could I give a general expression for this series without defining a piecewise function for the odd and even cases? I was thinking of having a term in the summand with $(-1)^{2i+1}$ or $(-1)^{2i}$ depending on it was a term with an even or odd power for $n$, but that led to a piecewise defined function. I think that it will look something like the following, where $f(x)$ is some function that handles which term gets a negative or positive sign depending on whether $n$ is an even or odd power in that term: $$\sum\limits_{i=1}^{n} n \left(\frac{\pi}{2} \right)^{n-1} f(x)$$ Any ideas on how to come up with a general expression for this series?
$$ \color{green}{I_n=\sum\limits_{i=2}^{n} (-1)^{n-i}\cdot i\cdot\left(\frac{\pi}{2} \right)^{i-1}} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/113655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do introductory real analysis courses teach bottom up? A big part of introductory real analysis courses is getting intuition for the $\epsilon-\delta\,$ proofs. For example, these types of proofs come up a lot when studying differentiation, continuity, and integration. Only later is the notion of open and closed sets introduced. Why not just introduce continuity in terms of open sets first (E.g. it would be a better visual representation)? It seems that the $\epsilon-\delta$ definition would be more understandable if a student is first exposed to the open set characterization.
I'm with Alex Becker: I first learned convergence of sequences using epsilons and deltas, and only later moved on to continuity of functions. It worked out great for me. I don't believe that the abstraction from topology would be useful at this point. The ideas of "$x$ is near $y$", "choosing $\epsilon$ as small as you want", etc., are better expressed by epsilon-delta arguments, because they quantify/translate the words "near" and "small". Maybe one could talk about "sizes of intervals", grasping the idea of "open neighborhood" while retaining the epsilons.
{ "language": "en", "url": "https://math.stackexchange.com/questions/113698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 7, "answer_id": 6 }
Extension and Self Injective Ring Let $R$ be a self injective ring. Then $R^n$ is an injective module. Let $M$ be a submodule of $R^n$ and let $f:M\to R^n$ be an $R$-module homomorphism. By injectivity of $R^n$ we know that we can extend $f$ to $\tilde{f}:R^n\to R^n$. My question is that if $f$ is injective, can we also find an injective extension $\tilde{f}:R^n\to R^n$? Thank you in advance for your help.
The result also holds without any commutativity for quasi-Frobenius rings. Recall that a quasi-Frobenius ring is a ring which is one-sided self-injective and one-sided Noetherian. Such rings also happen to be two-sided self-injective and two-sided Artinian. For every finitely generated projective module $P$ over a quasi-Frobenius ring $R$, a well-known fact is that isomorphisms of submodules of $P$ extend to automorphisms of $P$. (You can find this on page 415 of Lam's Lectures on Modules and Rings.) Obviously your $P=R^n$ is f.g. projective, and injecting $M$ into $P$ just results in an isomorphism between $M$ and its image, so there you have it! In fact, this result seems a bit overkill for your original question, so I would not be surprised if a class properly containing the QF rings and satisfying your condition exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/113756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Is the product of symmetric positive semidefinite matrices positive definite? I see on Wikipedia that the product of two commuting symmetric positive definite matrices is also positive definite. Does the same result hold for the product of two positive semidefinite matrices? My proof of the positive definite case falls apart for the semidefinite case because of the possibility of division by zero...
Actually, one has to be very careful in the way one interprets the results of Meenakshi and Rajian (referenced in one of the posts above). Symmetry is inherent in their definition of positive definiteness. Thus, their result can be stated very simply as follows: If $A$ and $B$ are symmetric and PSD, then $AB$ is PSD iff $AB$ is symmetric. A direct proof for this result can be given as follows. If $AB$ is PSD, it is symmetric (by Meenakshi and Rajian's definition of PSD). If it is symmetric, it is PSD since the eigenvalues of $AB$ are non-negative. To summarize, all the stuff about normality in their paper is not required (since normality of $AB$ is equivalent to the far simpler condition of symmetry of $AB$ when $A$ and $B$ are symmetric PSD). The most important point here is that if one adopts a more general definition for PSD ($x^TAx\ge 0$) and if one now considers cases where the product $AB$ is unsymmetric, then their results do not go through.
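A small numerical illustration (a sketch with NumPy; the example matrices are my own): two symmetric PSD matrices whose product is unsymmetric, together with a vector on which the product's quadratic form goes negative:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])  # symmetric PSD (eigenvalues 1, 0)
B = np.array([[1.0, 1.0], [1.0, 1.0]])  # symmetric PSD (eigenvalues 2, 0)

M = A @ B                                # [[1, 1], [0, 0]] -- not symmetric
x = np.array([1.0, -2.0])
print(x @ M @ x)                         # -1.0 < 0, so x^T (AB) x can be negative
```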
{ "language": "en", "url": "https://math.stackexchange.com/questions/113842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68", "answer_count": 4, "answer_id": 0 }
Why is there no continuous square root function on $\mathbb{C}$? I know that what taking square roots for reals, we can choose the standard square root in such a way that the square root function is continuous, with respect to the metric. Why is that not the case over $\mathbb{C}$, with respect the the $\mathbb{R}^2$ metric? I suppose what I'm trying to ask is why is there not continuous function $f$ on $\mathbb{C}$ such that $f(z)^2=z$ for all $z$? This is what I was reading, but didn't get: Suppose there exists some $f$, and restrict attention to $S^1$. Given $t\in[0,2\pi)$, we can write $$ f(\cos t+i\sin t)=\cos(\psi (t))+i\sin(\psi (t)) $$ for unique $\psi(t)\in\{t/2,t/2+\pi\}$. (I don't understand this assertion of why the displayed equality works, and why $\psi$ only takes those two possible values.) If $f$ is continuous, then $\psi:[0,2\pi)\to[0,2\pi)$ is continuous. Then $t\mapsto \psi(t)-t/2$ is continuous, and takes values in $\{0,\pi\}$ and is thus constant. This constant must equal $\psi(0)$, so $\psi(t)=\psi(0)+t/2$. Thus $\lim_{t\to 2\pi}\psi(t)=\psi(0)+\pi$. Then $$ \lim_{t\to 2\pi} f(\cos t+i\sin t)=-f(1). $$ (How is $-f(1)$ found on the RHS?) Since $f$ is continuous, $f(1)=-f(1)$, impossible since $f(1)\neq 0$. I hope someone can clear up the two problems I have understanding the proof. Thanks.
Here is a proof for those who know a little complex function theory. Suppose $(f(z))^2=z$ for some continuous $f$. By the implicit function theorem, $f(z)$ is complex differentiable (=holomorphic) for all $z\neq0$ in $\mathbb C$. However since $f$ is continuous at $0$, it is also differentiable there thanks to Riemann's extension theorem. Differentiating $z=f(z)^2$ at $z=0$ leads to $1=2f(0)f'(0)=2\cdot0\cdot f'(0)=0 \;$. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/113876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
How can I evaluate an expression like $\sin(3\pi/2)$ on a calculator and get an answer in terms of $\pi$? I have an expression like this that I need to evaluate: $$16\sin(2\pi/3)$$ According to my book the answer is $8\sqrt{3}$. However, when I'm using my calculator to get this I get an answer like $13.86$. What I want to know, is it possible to make a calculator give the answer without evaluating $\pi$, so that $\pi$ is kept separate in the answer? And the same for in this case, $\sqrt{3}$. If the answer involves a square root, I want my calculator to say that, I don't want it to be evaluated. I am using the TI-83 Plus if that makes a difference.
Here’s something I used to tell students that might help. Among the angles that you’re typically expected to know the trig. values for ($30,$ $45,$ $60$ degrees and their cousins in the other quadrants), the only irrational values for the sine, cosine, tangent have the following magnitudes: $$\frac{\sqrt{2}}{2}, \;\; \frac{\sqrt{3}}{2}, \;\; \sqrt{3}, \;\; \frac{\sqrt{3}}{3}$$ Note that if you square each of these, you get: $$\frac{1}{2}, \;\; \frac{3}{4}, \;\; 3, \;\; \frac{1}{3}$$ Now consider the decimal expansions of these fractions: $$0.5, \;\; 0.75, \;\; 3, \;\; 0.3333…$$ The important thing to notice is that if you saw any of these decimal expansions, you’d immediately know its fractional equivalent. (O-K, most people would know it!) Now you can see how to use a relatively basic calculator to determine the exact value of $\sin\left(2\pi / 3 \right).$ First, use your calculator to find a decimal for $\sin\left(2\pi / 3 \right).$ Using a basic calculator (mine is a TI-32), I get $0.866025403.$ Now square the result. Doing this, I get $0.75.$ Therefore, I know that the square of $\sin\left(2\pi / 3 \right)$ is equal to $\frac{3}{4},$ and hence $\sin\left(2\pi / 3 \right)$ is equal to $\sqrt{\frac{3}{4}}.$ The positive square root is chosen because I got a positive value for $\sin\left(2\pi / 3 \right)$ when I used my calculator. Finally, we can rewrite this as $\frac{\sqrt{3}}{\sqrt{4}}=\frac{\sqrt{3}}{2}.$ What follows are some comments I posted in sci.math (22 June 2006) about this method. By the way, I used to be very concerned in the early days of calculators that students could obtain all the exact values of the $30,$ $45,$ and $60$ degree angles by simply squaring the calculator output, recognizing the equivalent fraction of the resulting decimal [note that the squares of the sine, cosine, tangent of these angles all come out to fractions that any student would recognize (well, they used to be able to recognize) from its decimal expansion], and then taking the square root of the fraction. As the years rolled by, I got to where I didn't worry about this at all, because even when I taught this method in class (to help students on standardized tests and to help them for other teachers who were even more insistent about using exact values than I was), the majority of my students had more trouble doing this than just memorizing the values!
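The square-and-recognize trick can even be automated. Here is a sketch in Python, where `Fraction.limit_denominator` plays the role of "recognizing" the decimal as a simple fraction (the denominator bound of 100 is my choice):

```python
from fractions import Fraction
from math import sin, pi

v = sin(2 * pi / 3)                          # calculator output: 0.86602540...
sq = Fraction(v * v).limit_denominator(100)  # recognize 0.7499999... as 3/4
print(sq)                                    # 3/4
# The exact value, taking the sign of v, is sqrt(3)/sqrt(4) = sqrt(3)/2:
print(f"sqrt({sq.numerator})/sqrt({sq.denominator})")
```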
{ "language": "en", "url": "https://math.stackexchange.com/questions/113926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Finding an indefinite integral I have worked through and answered correctly the following question: $$\int x^2\left(8-x^3\right)^5dx=-\frac{1}{3}\int\left(8-x^3\right)^5\left(-3x^2\right)dx$$ $$=-\frac{1}{3}\times\frac{1}{6}\left(8-x^3\right)^6+c$$ $$=-\frac{1}{18}\left(8-x^3\right)^6+c$$ however I do not fully understand all of what I have done or why I have done it (other than I used principles I saw in a similar example question). Specifically, I picked $-\frac{1}{3}$ to multiply the whole of the integral because it is the reciprocal of $-3$, but I do not fully understand why it is necessary to perform this step. The next part I do not understand is: on the second line, what causes the $\left(-3x^2\right)$ to disappear? Here is what I think is happening: $$-\frac{1}{3}\times-3x^2=x^2$$ therefore $$\int x^2\left(8-x^3\right)^5dx=-\frac{1}{3}\int\left(8-x^3\right)^5\left(-3x^2\right)dx$$ But, as stated before, I picked the reciprocal of $-3$ because it was the coefficient of the derivative of the expression $8-x^3$, not because it would leave an expression equivalent to $x^2$. For example, if I alter the question slightly to $$\int x^3\left(8-x^3\right)^5dx$$ then by picking $-\frac{1}{3}$ the following statement would be false? $$\int x^3\left(8-x^3\right)^5dx=-\frac{1}{3}\int\left(8-x^3\right)^5\left(-3x^2\right)dx$$ Also $$\int-3x^2\,dx=-3\left(\frac{1}{3}x^3\right)+c$$ $$=-x^3+c$$ which is why I am confused as to why, when integrating the full question, $-3x^2$ seems to disappear.
You correctly recognised $x^2$ as "almost" the derivative of $8-x^3$. So put $u = 8 - x^3$, and find $du/dx = -3x^2$. Then your integral becomes $$-\frac{1}{3}\int\left(-3x^2\right)\left(8-x^3\right)^5dx = -\frac{1}{3}\int u^5\,\frac{du}{dx}\,dx = -\frac{1}{3}\int u^5\,du,$$ which is rather easier to follow. This is the change-of-variable procedure, which is the reverse of the chain rule for derivatives. (To verify this procedure, put $I$ equal to the integral, and compare $dI/dx$ and $dI/du$ using the chain rule.)
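If you want to double-check the whole computation symbolically, here is a sketch using SymPy:

```python
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.integrate(x**2 * (8 - x**3)**5, x)
claimed = -sp.Rational(1, 18) * (8 - x**3)**6

# Two antiderivatives of the same function may differ only by a constant,
# so the derivative of their difference must vanish:
print(sp.simplify(sp.diff(antiderivative - claimed, x)))  # 0
```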
{ "language": "en", "url": "https://math.stackexchange.com/questions/113992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
What is the proof that covariance matrices are always semi-definite? Suppose that we have two different discrete signal vectors of $N^\text{th}$ dimension, namely $\mathbf{x}[i]$ and $\mathbf{y}[i]$, each one having a total of $M$ sets of samples/vectors. $\mathbf{x}[m] = [x_{m,1} \,\,\,\,\, x_{m,2} \,\,\,\,\, x_{m,3} \,\,\,\,\, ... \,\,\,\,\, x_{m,N}]^\text{T}; \,\,\,\,\,\,\, 1 \leq m \leq M$ $\mathbf{y}[m] = [y_{m,1} \,\,\,\,\, y_{m,2} \,\,\,\,\, y_{m,3} \,\,\,\,\, ... \,\,\,\,\, y_{m,N}]^\text{T}; \,\,\,\,\,\,\,\,\, 1 \leq m \leq M$ And, I build up a covariance matrix in-between these signals. $\{C\}_{ij} = E\left\{(\mathbf{x}[i] - \bar{\mathbf{x}}[i])^\text{T}(\mathbf{y}[j] - \bar{\mathbf{y}}[j])\right\}; \,\,\,\,\,\,\,\,\,\,\,\, 1 \leq i,j \leq M $ Where, $E\{\}$ is the "expected value" operator. What is the proof that, for all arbitrary values of $\mathbf{x}$ and $\mathbf{y}$ vector sets, the covariance matrix $C$ is always semi-definite ($C \succeq0$) (i.e., not negative definite; all of its eigenvalues are non-negative)?
A symmetric matrix $C$ of size $n\times n$ is semi-definite if and only if $u^tCu\geqslant0$ for every $n\times1$ (column) vector $u$, where $u^t$ is the $1\times n$ transposed (line) vector. If $C$ is a covariance matrix in the sense that $C=\mathrm E(XX^t)$ for some $n\times 1$ random vector $X$, then the linearity of the expectation yields that $u^tCu=\mathrm E(Z_u^2)$, where $Z_u=u^tX$ is a real valued random variable, in particular $u^tCu\geqslant0$ for every $u$. If $C=\mathrm E(XY^t)$ for two centered random vectors $X$ and $Y$, then $u^tCu=\mathrm E(Z_uT_u)$ where $Z_u=u^tX$ and $T_u=u^tY$ are two real valued centered random variables. Thus, there is no reason to expect that $u^tCu\geqslant0$ for every $u$ (and, indeed, $Y=-X$ provides a counterexample).
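An empirical illustration of both halves of this answer (a sketch with NumPy; the dimensions and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))       # 1000 samples of a 3-dim vector X
C = (X.T @ X) / len(X)                   # empirical E[X X^T]
print(np.linalg.eigvalsh(C).min())       # non-negative (up to rounding error)

Y = -X                                   # centered, but E[X Y^T] = -C
print(np.linalg.eigvalsh((X.T @ Y) / len(X)).min())  # clearly negative
```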
{ "language": "en", "url": "https://math.stackexchange.com/questions/114072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42", "answer_count": 2, "answer_id": 0 }
Find the class of equivalence of a element of a given equivalence relation. Yesterday on my Abstract Algebra course, we were having a problem with equivalence relations. We had a given set: $$A = \{a, b, c\}$$ We found all the partitions of $A$, and one of them was: $$P = \{ \{a\} , \{b, c\} \}$$ Then we built an equivalence relation $S$ from this partition, where two elements are in equivalence relation if $a$ and $b$ belong to the same cell. So the relation of equivalence is: $$S = \{ (a,a) , (b,b) , (c,c) , (b,c) , (c,b) \}$$ After this the professor, without explaining anything wrote: The class of equivalence of $(b,c)$: $[(b,c)] = \{ (b,c) , (c,b) \}$ So can anyone explain this last line? Because I don't understand it.
When you have an equivalence relation $R$ on a set $X$, and an element $x\in X$, you can talk about the equivalence class of $x$ (relative to $R$), which is the set $$[x] = \{y\in X\mid (x,y)\in R\} = \{y\in X\mid (y,x)\in R\} = \{y\in X\mid (x,y),(y,x)\in R\}.$$ But I note that your professor did not say "equivalence class", he said "Class of Equivalence". That suggests he may be referring to some other (defined) concept. I would suggest that you talk to your professor directly and ask him what he means by "Class of Equivalence", and whether it is the same thing as "equivalence class"; explain what your understanding of "equivalence class" is, and why you would be confused if he said "The equivalence class of $(b,c)$ is $[(b,c)]=\{(b,c),(c,b)\}$" (at least, I would be confused because in order to talk about "the equivalence class of $(b,c)$", I would need some equivalence relation defined on some set that contains $(b,c)$, and we don't seem to have that on hand).
{ "language": "en", "url": "https://math.stackexchange.com/questions/114111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Estimating a probability of head of a biased coin The question is: We assume a uniform (0,1) prior for the (unknown) probability of a head. A coin is tossed 100 times with 65 of the tosses turning out heads. What is the probability that the next toss will be head? Well, the most obvious answer is of course prob = 0.65, but I am afraid this is too simple. However, I really don't know what is wrong with this answer? I think I need to use the fact that we assume a uniform [0,1] before we begin tossing the coin, but I am not sure how to proceed.
$0.65$ is the maximum-likelihood estimate, but for the problem you describe, it is too simple. For example, if you toss the coin just once and you get a head, then that same rule would say "prob = 1". Here's one way to get the answer. The prior density is $f(p) = 1$ for $0\le p\le 1$ (that's the density for the uniform distribution). The likelihood function is $L(p) = \binom{100}{65} p^{65}(1-p)^{35}$. Bayes' theorem says you multiply the prior density by the likelihood and then normalize, to get the posterior density. That tells you the posterior density is $$ g(p) = \text{constant}\cdot p^{65}(1-p)^{35}. $$ The "constant" can be found by looking at this. We get $$ \int_0^1 p^{65} (1-p)^{35} \; dp = \frac{1}{101\binom{100}{65}}, $$ and therefore $$g(p)=101\binom{100}{65} p^{65}(1-p)^{35}. $$ The expected value of a random variable with this distribution is the probability that the next outcome is a head. That is $$ \int_0^1 p\cdot 101\binom{100}{65} p^{65}(1-p)^{35}\;dp. $$ This can be evaluated by the same method: $$ 101\binom{100}{65} \int_0^1 p\cdot p^{65}(1-p)^{35}\;dp = 101\binom{100}{65} \int_0^1 p^{66}(1-p)^{35}\;dp $$ $$ = 101\binom{100}{65} \cdot \frac{1}{\binom{101}{66}\cdot 102} = \frac{66}{102} = \frac{11}{17}. $$ This is an instance of Laplace's rule of succession (Google that term!). Laplace used it to find the probability that the sun will rise tomorrow, given that it's risen every day for the 6000-or-so years the universe has existed.
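You can confirm the $11/17$ numerically: with a uniform prior, the posterior after 65 heads in 100 tosses is the Beta$(66,36)$ distribution computed above, and its mean is exactly Laplace's rule of succession $(h+1)/(n+2)$. A sketch with SciPy:

```python
from fractions import Fraction
from scipy.stats import beta

heads, tosses = 65, 100
posterior = beta(heads + 1, tosses - heads + 1)  # Beta(66, 36)
print(posterior.mean())                          # 0.647058... = 66/102
print(Fraction(heads + 1, tosses + 2))           # rule of succession: 11/17
```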
{ "language": "en", "url": "https://math.stackexchange.com/questions/114168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
prove that $g\geq f^2$ The problem is this: Let $(f_n)$ a sequence in $L^2(\mathbb R)$ and let $f\in L^2(\mathbb R)$ and $g\in L^1(\mathbb R)$. Suppose that $$f_n \rightharpoonup f\;\text{ weakly in } L^2(\mathbb R)$$ and $$f_n^2 \rightharpoonup g\;\text{ weakly in } L^1(\mathbb R).$$ Show that $$f^2\leq g$$ almost everywhere on $\mathbb R$. I admit I'm having problems with this since It's quite a long time I don't deal with these kind of problems. Even an hint is welcomed. Thank you very much.
Well, it is a property of weak convergence that every weakly convergent sequence is bounded and $$||f||\leq \liminf ||f_{n}||$$ Then for every measurable $\Omega \subseteq \mathbb{R}$ with finite measure we have $$\left(\int_\Omega f^2\right)^{\frac{1}{2}}\leq \liminf\left(\int_\Omega f_n^2\right)^{\frac{1}{2}}$$ That implies $$\int_\Omega f^2\leq \liminf\int_\Omega f_n^2\tag{1}$$ Note that the function $h$ equal to $1$ on $\Omega$ and $0$ otherwise is in $L^{\infty}(\mathbb{R})$, which is the dual of $L^{1}(\mathbb{R})$, so $$\int_{\Omega}f_n^2\rightarrow\int_\Omega g\tag{2}$$ Combining (1) and (2) we get $$\int_{\Omega}f^2\leq\int_\Omega g$$ Since $\Omega$ is arbitrary we have $f^2\leq g$ almost everywhere.$\blacksquare$
{ "language": "en", "url": "https://math.stackexchange.com/questions/114214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Gentzen's Consistency Proof confusion I have recently run into some confusion. Some texts say that Gentzen's Consistency Proof shows that transfinite induction up to $\varepsilon_0$ holds, while other texts say that the induction can be carried out for the ordinals less than $\varepsilon_0$, but not for $\varepsilon_0$ itself. Which one is correct? Thanks.
Since $\epsilon_0$ is a limit ordinal, when you say induction up to $\epsilon_0$ you mean induction over every ordinal $<\epsilon_0$. So the confusion is only in the terminology used, as both statements mean the same thing. For example, induction on all the countable ordinals would be just the same as induction up to $\omega_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/114274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Product of adjacency matrices I was wondering if there was any meaningful interpretation of the product of two $n\times n$ adjacency matrices of two distinct graphs.
The dot product of the adjacency matrix with itself is a measure of similarity between nodes. For instance, take the non-symmetric directed adjacency matrix $$A = \begin{pmatrix} 1&0&1&0\\ 0&1&0&1\\ 1&0&0&0\\ 1&0&1&0 \end{pmatrix}.$$ Then the product $A^TA$ (the Gram matrix) gives the un-normalized similarity between column $i$ and column $j$, which is the symmetric matrix $$A^TA = \begin{pmatrix} 3&0&2&0\\ 0&1&0&1\\ 2&0&2&0\\ 0&1&0&1 \end{pmatrix}.$$ This is much like the Gram matrix of a linear kernel in an SVM. An alternate version of the kernel is the RBF kernel. The RBF kernel is simply a measure of similarity between two datapoints that can be looked up in the $n\times n$ matrix; likewise, so is the linear kernel. A Gram matrix is simply the product of a matrix's transpose with itself. Now say you have a matrix $B$ which is also a non-symmetric directed adjacency matrix, $$B = \begin{pmatrix} 1&0&1&0\\ 1&0&0&0\\ 1&0&0&0\\ 1&0&0&1 \end{pmatrix}.$$ Then $A^TB$ is the non-symmetric matrix $$A^TB = \begin{pmatrix} 3&0&1&1\\ 1&0&0&0\\ 2&0&1&1\\ 1&0&0&0 \end{pmatrix}.$$ Column $i$ of matrix $A$ and column $j$ of matrix $B$ are proportionately similar according to the above matrix. Thus the product of the transpose of the first matrix with the second matrix is a measure of similarity of nodes characterized by their edges.
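These computations are easy to reproduce (a quick sketch with NumPy):

```python
import numpy as np

A = np.array([[1,0,1,0],[0,1,0,1],[1,0,0,0],[1,0,1,0]])
B = np.array([[1,0,1,0],[1,0,0,0],[1,0,0,0],[1,0,0,1]])

print(A.T @ A)  # Gram matrix: entry (i, j) compares columns i and j of A
print(A.T @ B)  # cross-similarity between columns of A and columns of B
```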
{ "language": "en", "url": "https://math.stackexchange.com/questions/114334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 3, "answer_id": 2 }
What are all pairs $(a,b)$ such that if $Ax+By \equiv 0 \pmod n$ then we can conclude $ax+by = 0 \pmod n$? All these are good pairs: $$(0, 0), (A, B), (2A, 2B), (3A, 3B), \ldots \pmod{n}$$ But are there any other pairs? actually it was a programming problem with $A,B,n \leq 10000$ but it seems to have a pure solution.
If $\rm\:c\ |\ A,B,n\:$ cancel $\rm\:c\:$ from $\rm\:Ax + By = nk.\:$ So w.l.o.g. $\rm\:(A,B,n) = 1,\:$ i.e. $\rm\:(A,B)\equiv 1$. Similarly, restricting to "regular" $\rm\:x,y,\:$ those such that $\rm\:(x,y,n) = 1,\:$ i.e. $\rm\:(x,y)\equiv 1,\:$ yields Theorem $\rm\:\ If\:\ (A,B)\equiv 1\equiv (x,y)\:\ and\:\ Ax+By\equiv 0,\ then\:\ ax+by\equiv 0\iff aB\equiv bA$ Proof $\ $ One easily verifies $$\rm\:\ \ B(ax+by)\: =\: (aB-bA)x + b(Ax+By) $$ $$\rm -A(ax+by)\: =\: (aB-bA)y - a(Ax+By)$$ $(\Rightarrow)\ $ Let $\rm\:z = aB-bA.\:$ By above $\rm\:ax+by\equiv 0\ \:\Rightarrow\ xz,\:yz\equiv 0 \ \Rightarrow\ z \equiv (x,y)z\equiv 0$. $(\Leftarrow)\ $ Let $\rm\:z = ax+by.\:$ By above $\rm\:aB-bA\equiv 0\ \Rightarrow\ Az,Bz\equiv 0\ \Rightarrow\ z \equiv (A,B)z\equiv 0.\ \ $ QED Note $\rm\ (x,y)\equiv 1\pmod n\:$ means $\rm\:ix+jy = 1 + kn\:$ for some $\rm\:i,j,k\in \mathbb Z$ Thus we infer $\rm\:xz,yz\equiv 0\ \Rightarrow z \equiv (ix+jy)z\equiv i(xz)+j(yz)\equiv 0\pmod n$ i.e. $\rm\ \ ord(z)\ |\ x,y\ \Rightarrow\ ord(z)\ |\ (x,y) = 1\ $ in the additive group $\rm\:(\mathbb Z/n,+)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/114374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating the partial derivative of the following I think I may be missing something here, $$f(x,y)=\begin{cases} \dfrac{xy(x^{2}-y^{2})}{x^{2}+y^{2}} & (x,y)\neq (0,0)\\ 0 & (x,y)=(0,0)\end{cases}$$ Let $X(s,t)= s\cos(\alpha)+t\sin(\alpha)$ and $Y(s,t)=-s\sin(\alpha)+t\cos(\alpha)$, where $\alpha$ is a constant, and let $F(s,t)=f(X(s,t), Y(s,t))$. Show that $$ \left.\frac{\partial^2 F}{\partial s^2}\frac{\partial^2 F}{\partial t^2} - \left( \frac{\partial^2 F}{\partial s\partial t}\right)^2 = \left(\frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\partial y}\right)^2\right)\right| _{x=X(s,t),y=Y(s,t)} $$ I decided to try and substitute my $X(s,t)$ and $Y(s,t)$ into $f(x,y)$, however I'm just wondering if there is an alternative approach, as it gives a lot of terms; many thanks in advance. I have gone away and had a think about the answer and am still not sure where to put my best foot forward with it, so: $ \frac{\partial^2 F}{\partial s^2}=\cos^{2}\alpha\,\frac{\partial^2 F}{\partial X^2}$ $\Rightarrow$ $ \frac{\partial^2 F}{\partial t^2}=\sin^{2}\alpha\,\frac{\partial^2 F}{\partial X^2}$ Now using the fact that $\frac{\partial^2 F}{\partial X^2}$ is equal to $\frac{\partial^2 f}{\partial x^2} \big| _{x=X(s,t)}$ to calculate our $\frac{\partial^2 F}{\partial X^2}$. Now $\frac{\partial^2 f}{\partial x^2}= \frac{-4x^{4}y-20x^{2} y^{3}+8x^{3}y^{3}-4x y^{5}+4x^{5} y+10x^{3} y^{3}+6x y^{5}}{(x^{2}+y^{2})^{3}}$, hence do I make the substitution here? There seem to be far too many terms and I haven't even got to the RHS; many thanks in advance.
Let $f:\ (x,y)\mapsto f(x,y)$ be an arbitrary function and put $$g(u,v):=f(u\cos\alpha + v\sin\alpha, -u\sin\alpha+v \cos\alpha)\ .$$ Using the abbreviations $$c:=\cos\alpha, \quad s:=\sin\alpha,\quad \partial_x:={\partial\over\partial x}, \quad\ldots$$ we have (note that $c$ and $s$ are constants) $$\partial_u=c\partial_x-s\partial_y, \quad \partial_v =s\partial_x+ c\partial_y\ .$$ It follows that $$\eqalign{g_{uu}&\cdot g_{vv}-\bigl(g_{uv}\bigr)^2 \cr &=(c\partial_x-s\partial_y)^2 f\cdot (s\partial_x+ c\partial_y)^2 f -\bigl((c\partial_x-s\partial_y)(s\partial_x+ c\partial_y)f\bigr)^2 \cr &=(c^2 f_{xx}-2cs f_{xy}+s^2 f_{yy})(s^2 f_{xx}+2cs f_{xy}+c^2 f_{yy}) -\bigl(cs f_{xx}+(c^2-s^2) f_{xy}-cs f_{yy}\bigr)^2 \cr &=\ldots =f_{xx}\, f_{yy} -\bigl(f_{xy})^2\ . \cr}$$ This shows that the stated identity is true for any function $f$ and not only for the $f$ considered in the original question. There should be a way to prove this identity "from a higher standpoint", i.e., without going through this tedious calculation.
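The "tedious calculation" at the end can be delegated to a computer algebra system. Here is a sketch in SymPy that verifies the key algebraic step, treating the second partials of $f$ as free symbols:

```python
import sympy as sp

a, fxx, fxy, fyy = sp.symbols('alpha f_xx f_xy f_yy')
c, s = sp.cos(a), sp.sin(a)

# Second derivatives of g via the chain rule, exactly as in the answer:
guu = c**2*fxx - 2*c*s*fxy + s**2*fyy
gvv = s**2*fxx + 2*c*s*fxy + c**2*fyy
guv = c*s*fxx + (c**2 - s**2)*fxy - c*s*fyy

print(sp.simplify(guu*gvv - guv**2 - (fxx*fyy - fxy**2)))  # 0
```

As for the "higher standpoint": the Hessian of $g$ is $R^T H R$ for the rotation matrix $R$ with $\det R=1$, so $\det(R^THR)=\det(R)^2\det(H)=\det(H)$, which is exactly the stated identity.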
{ "language": "en", "url": "https://math.stackexchange.com/questions/114435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to pronounce "$\;\setminus\;$" (the symbol for set difference) A question for English speakers. When using (or reading) the symbol $\setminus$ to denote set difference — $$A\setminus B=\{x\in A|x\notin B\}$$ — how do you pronounce it? If you please, indicate in a comment on your answer what region you're from (what dialect you have). This is a poll question. Please do not repeat answers! Rather, upvote an answer if you pronounce the symbol the same way the answerer does, and downvote it not at all. Please don't upvote answers for other reasons. Thanks!
I usually say "A without B," but it depends on my mood that day.
{ "language": "en", "url": "https://math.stackexchange.com/questions/114488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 10, "answer_id": 4 }
Coupon Problem generalized, or Birthday problem backward. I want to solve a variation on the Coupon Collector Problem, or (alternately) a slight variant on the standard Birthday Problem. I have a slight variant on the standard birthday problem. In the standard Coupon Collector problem, someone is choosing coupons at random (with replacement) from n different possible coupons. Duplicate coupons do us no good; we need a complete set. The standard question is "What is the expected number of coupons (or probability distribution in number of coupons) to collect them all? In the standard birthday problem, we choose k items from n options with replacement (such as k people in a room, each with one of 365 possible birthdays) and try to determine the probability distribution for how many unique values there will be (will they have the same birthday?). In my problem, someone has chosen k items from n options and I know that there were p distinct values, but I don't know what k was. If p=n this is the coupon problem, but I want to allow for values of p that are less than n. I want to determine the probability distribution for k (actually, all I need is the expected value of k, but the distribution would be interesting as well) as a function of p and n.
This is a statistical problem, not a probabilistic problem: you have observed data (the value $p$) and seek to infer the underlying probabilistic process (the parameter $k$). The process going from $k$ to $p$ is understood, but the reverse is much more difficult. You cannot "solve" this problem of parameter estimation. The Maximum Likelihood Estimator of $k$ would be $\hat k = p$. Other statistical criteria would lead to different estimates.
{ "language": "en", "url": "https://math.stackexchange.com/questions/114544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is there a closed-form solution to this linear algebra problem? $A$ and $B$ are non-negative symmetric matrices, whose entries sum to 1.0. Each of these matrices has $\frac{N^2-N}{2}+N-1$ degrees of freedom. $D$ is the diagonal matrix defined as follows (in Matlab code): $$D=\text{diag}(\text{diag}(A*\text{ones}(N)))^{-1}$$ We are given the matrix $B$. Does this problem have a closed-form solution to $A$ (assuming one exists), such that $$ADA=B$$ If so, what is it? If not, what's the best method to find an approximate solution?
The diagonal entries of $D$ are the reciprocals of the row sums of $A$. The row sums of $B$ are those of $A$. Thus $D$ is known. Then $A$ can be obtained as $$A=\frac1{\sqrt D}\sqrt{\sqrt DB\sqrt D}\frac1{\sqrt D}\;,$$ or, if you prefer, $$A=D^{-1/2}\left(D^{1/2}BD^{1/2}\right)^{1/2}D^{-1/2}\;.$$ According to this post, this is the unique symmetric positive-definite solution of $ADA=B$. The square root of $D$ is straightforward; the remaining square root can be computed by diagonalization or by various other methods. To see that the solution is consistent in that the $A$ so obtained does indeed have the same row sums as $B$, note that $$\left(D^{1/2}BD^{1/2}\right)\left(D^{-1/2}\mathbf 1\right)=D^{1/2}B\mathbf 1=D^{1/2}D^{-1}\mathbf 1=D^{-1/2}\mathbf 1\;,$$ where $\mathbf 1$ is the vector consisting entirely of $1$s. Thus $D^{-1/2}\mathbf 1$ is an eigenvector with eigenvalue $1$ of $D^{1/2}BD^{1/2}$, and thus also of $$D^{1/2}AD^{1/2}=\left(D^{1/2}BD^{1/2}\right)^{1/2},$$ and thus $$DA\mathbf1=D^{1/2}\left(D^{1/2}AD^{1/2}\right)\left(D^{-1/2}\mathbf1\right)=D^{1/2}D^{-1/2}\mathbf1=\mathbf1$$ as desired. Perhaps a more concise way of saying all this is that we should apply a transform $x=D^{1/2}x'$ to get $$x^\top Ax=x'^\top A'x'\quad\text{with}\quad A'=D^{1/2}AD^{1/2}\;,\\ x^\top Bx=x'^\top B'x'\quad\text{with}\quad B'=D^{1/2}BD^{1/2}\;,\\ \mathbf1'=D^{-1/2}\mathbf1\;,$$ and then the equation becomes $A'^2=B'$ and the row sum conditions become $A'\mathbf1'=B'\mathbf1'=\mathbf1'$.
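A sketch of the construction in NumPy/SciPy (`sqrtm` computes the principal matrix square root; the test matrix is a made-up example, and `np.real` just discards negligible imaginary round-off):

```python
import numpy as np
from scipy.linalg import sqrtm

def solve_ada_eq_b(B):
    r = B.sum(axis=1)                       # row sums of B = row sums of A
    Dh = np.diag(np.sqrt(1.0 / r))          # D^{1/2}, where D = diag(1/r)
    Dhi = np.diag(np.sqrt(r))               # D^{-1/2}
    return np.real(Dhi @ sqrtm(Dh @ B @ Dh) @ Dhi)

rng = np.random.default_rng(0)
M = rng.random((4, 4))
B = M @ M.T                                 # symmetric positive definite
B /= B.sum()                                # entries sum to 1
A = solve_ada_eq_b(B)
D = np.diag(1.0 / A.sum(axis=1))
print(np.allclose(A @ D @ A, B))            # True
```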
{ "language": "en", "url": "https://math.stackexchange.com/questions/114630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving Cauchy's Generalized Mean Value Theorem This is an exercise from Stephen Abbott's Understanding Analysis. The hint it gives on how to solve it is not very clear, in my opinion, so I would like for a fresh set of eyes to go over it with me: pp 143 Exercise 5.3.4. (a) Supply the details for the proof of Cauchy's Generalized Mean Value Theorem (Theorem 5.3.5.). Theorem 5.3.5. (Generalized Mean Value Theorem). If $f$ and $g$ are continuous on the closed interval $[a,b]$ and differentiable on the open interval $(a,b)$, then there exists a point $c\in(a,b)$ where$$[f(b)-f(a)]g'(c)=[g(b)-g(a)]f'(c).$$If $g'$ is never zero on $(a,b)$, then the conclusion can be stated as$$\frac{f'(c)}{g'(c)}=\frac{f(b)-f(a)}{g(b)-g(a)}.$$ *Hint: This result follows by applying the Mean Value Theorem to the function*$$h(x)=[f(b)-f(a)]g(x)-[g(b)-g(a)]f(x)$$ First of all, I know that the Mean Value Theorem (MVT) states that if $f:[a,b]\to\mathbb{R}$ is continuous on $[a,b]$ and differentiable on $(a,b)$, then there exists a point $c\in(a,b)$ where$$f'(c)=\frac{f(b)-f(a)}{b-a}.$$ If we assume that $h$ has the above properties, then applying the MVT to it, for some $c\in(a,b)$, would yield$$h'(c)=\frac{h(b)-h(a)}{b-a}=$$ $$\frac{[f(b)-f(a)]g(b)-[g(b)-g(a)]f(b) \quad - \quad [f(b)-f(a)]g(a)+[g(b)-g(a)]f(a)}{b-a}=$$ $$[f(b)-f(a)]\left(\frac{g(b)-g(a)}{b-a}\right) \quad - \quad[g(b)-g(a)]\left(\frac{f(b)-f(a)}{b-a}\right)=$$ $$[f(b)-f(a)]g'(c) \quad - \quad [g(b)-g(a)]f'(c).$$This is the best I could achieve; I have no clue on how to reach the second equation in the above theorem. Do you guys have any ideas? Thanks in advance!
Note that $$\begin{eqnarray}h(a)&=&[f(b)-f(a)]g(a)-[g(b)-g(a)]f(a)\\ &=&f(b)g(a)-g(b)f(a)\\ &=&[f(b)-f(a)]g(b)-[g(b)-g(a)]f(b)\\ &=&h(b)\end{eqnarray}$$ and so $h'(c)=0$ for some point $c\in (a,b)$. Then differentiate $h$ normally and note that this makes $c$ the desired point.
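A small symbolic check that $h(a)=h(b)$ holds for arbitrary $f$ and $g$ (a sketch using SymPy):

```python
import sympy as sp

a, b, x = sp.symbols('a b x')
f, g = sp.Function('f'), sp.Function('g')

h = (f(b) - f(a)) * g(x) - (g(b) - g(a)) * f(x)
print(sp.expand(h.subs(x, a) - h.subs(x, b)))  # 0, so Rolle's theorem applies
```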
{ "language": "en", "url": "https://math.stackexchange.com/questions/114694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
How many combinations of 6 items are possible? I have 6 items and want to know how many combinations are possible in sets of any amount. (no duplicates) e.g. It's possible to have any of the following: 1,2,3,4,5,6 1,3,5,6,2 1 1,3,4 there cannot be duplicate combinations: 1,2,3,4 4,3,2,1 Edit: for some reason I cannot add more comments. @miracle173 is correct. Also {1,1} is not acceptable
You are asking for the number of subsets of a set with $n$ elements, $\{1,2,3,\dots,n\}$. Each subset can be represented by a binary string; e.g., for the set $\{1,2,3,4,5,6\}$ the string 001101 means the subset that
- does not contain the element 1 of the set, because the 1st left character of the string is 0,
- does not contain the element 2, because the 2nd left character is 0,
- does contain the element 3, because the 3rd left character is 1,
- does contain the element 4, because the 4th left character is 1,
- does not contain the element 5, because the 5th left character is 0,
- does contain the element 6, because the 6th left character is 1,

so 001101 means the subset $\{3,4,6\}$. Therefore there are as many subsets as strings of length $n$. With $n$ binary digits one can count from $0$ to $2^n-1$, so there are $2^n$ such strings and $2^n$ subsets of $\{1,\dots,n\}$. The string 00...0 means the empty subset; if you don't want to count the empty subset then you have only $2^n-1$ subsets.
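The binary-string correspondence translates directly into code (a sketch using bitmasks):

```python
def subsets(n):
    """All subsets of {1, ..., n}, one per binary string of length n."""
    for mask in range(2 ** n):                      # 2^n strings; 0 = empty set
        yield {i + 1 for i in range(n) if mask >> i & 1}

items = list(subsets(6))
print(len(items))            # 64 = 2^6 subsets, or 63 excluding the empty set
```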
{ "language": "en", "url": "https://math.stackexchange.com/questions/114750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
For $x_1,x_2,x_3\in\mathbb R$ that $x_1+x_2+x_3=0$ show that $\sum_{i=1}^{3}\frac{1}{x^2_i} =({\sum_{i=1}^{3}\frac{1}{x_i}})^2$ Show that if $ x_1,x_2,x_3 \in \mathbb{R}$ , and $x_1+x_2+x_3=0$ , we can say that: $$\sum_{i=1}^{3}\frac{1}{x^2_i} = \left({\sum_{i=1}^{3}\frac{1}{x_i}}\right)^2.$$
Hint: What is the value of $\frac{1}{x_1x_2}+\frac{1}{x_2x_3}+\frac{1}{x_3x_1}$ when $x_1+x_2+x_3=0$? Once you have that value, proceed by expanding $\left(\sum_{i=1}^3 \frac{1}{x_i}\right)^2$ using the formula $(a+b+c)^2= a^2+b^2+c^2+ 2(ab+bc+ac)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/114788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof of a formula involving Euler's totient function: $\varphi (mn) = \varphi (m) \varphi (n) \cdot \frac{d}{\varphi (d)}$ The third formula on the wikipedia page for the Totient function states that $$\varphi (mn) = \varphi (m) \varphi (n) \cdot \dfrac{d}{\varphi (d)} $$ where $d = \gcd(m,n)$. How is this claim justified? Would we have to use the Chinese Remainder Theorem, as they suggest for proving that $\varphi$ is multiplicative?
You can write $\varphi(n)$ as a product $\varphi(n) = n \prod\limits_{p \mid n} \left( 1 - \frac 1p \right)$ over primes. Using this identity, we have $$ \varphi(mn) = mn \prod_{p \mid mn} \left( 1 - \frac 1p \right) = mn \frac{\prod_{p \mid m} \left( 1 - \frac 1p \right) \prod_{p \mid n} \left( 1 - \frac 1p \right)}{\prod_{p \mid d} \left( 1 - \frac 1p \right)} = \varphi(m)\varphi(n) \frac{d}{\varphi(d)} $$
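A brute-force check of the identity, rearranged to avoid division (a sketch using SymPy's `totient`):

```python
from math import gcd
from sympy import totient

# phi(mn) = phi(m) phi(n) d / phi(d)  <=>  phi(mn) phi(d) = phi(m) phi(n) d
for m in range(1, 50):
    for n in range(1, 50):
        d = gcd(m, n)
        assert totient(m * n) * totient(d) == totient(m) * totient(n) * d
```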
{ "language": "en", "url": "https://math.stackexchange.com/questions/114841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 2 }
Rationality test for a rational power of a rational It has been known since Pythagoras that 2^(1/2) is irrational. It is also obvious that 4^(1/2) is rational. There is also a fun proof that even the power of two irrational numbers can be rational. Can you, in general, compute whether the power of two rational numbers is rational? The reason I am asking, besides curiosity, is that the Fraction-type in Python always returns a float on exponentiation. If there is a quick way to tell if it could be accurately expressed as a fraction, the power function could conceivably only return floats when it has to. EDIT: By popular demand, I changed 0.5 to 1/2 to make it clearer that it is a fraction and not a float.
We can do this much quicker than using prime factorization. Below I show how to reduce the problem to testing if an integer is a (specific) perfect power - i.e. an integer perfect power test. Lemma $\ $ If $\rm\,R\,$ and $\,\rm K/N\:$ are rationals, $\rm\:K,N\in\mathbb Z,\ \gcd(K,N)=1,\,$ then $$\rm\:R^{K/N}\in\Bbb Q\iff R^{1/N}\in \mathbb Q\qquad$$ Proof $\ (\Rightarrow)\ $ If $\,\rm\color{#0a0}{R^{K/N}\in\Bbb Q},\,$ then by $\rm\:gcd(N,K) = 1\:$ we have a Bezout equation $$\rm 1 = JN+I\:\!K\, \overset{\!\div\ N}\Rightarrow\ 1/N = J + IK/N\ \Rightarrow\ R^{1/N} =\ R^J(\color{#0a0}{R^{K/N}})^I \in \mathbb Q$$ $(\Leftarrow)\ \ \rm\:R^{1/N}\in \mathbb Q\ \Rightarrow\ R^{K/N} = (R^{1/N})^K\in \mathbb Q.\ \ \small\bf QED$ So we've reduced the problem to determining if $\rm\:R^{1/N} = A/B \in \mathbb Q.\,$ If so then $\rm\: R = A^N/B^N\:$ and $\rm\:gcd(A,B)=1\:$ $\Rightarrow$ $\rm\:gcd(A^N,B^N) = 1,\:$ by unique factorization or Euclid's Lemma. By uniqueness of reduced fractions, this is true iff the lowest-terms numerator and denominator of $\rm\:R\:$ are both $\rm\:N'th\:$ powers of integers. So we reduce to the problem of checking if an integer is a perfect power. This can be done very quickly, even in the general case, see D. J. Bernstein, Detecting powers in almost linear time. 1997.
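Putting the lemma to work, here is a sketch of the resulting test in Python (the helper names are mine, and the integer root uses plain binary search rather than the almost-linear-time algorithm cited above):

```python
from fractions import Fraction

def iroot(x, n):
    """Exact floor of the integer n-th root of x >= 0, by binary search."""
    lo, hi = 0, max(1, x)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** n <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

def rational_power(R, K, N):
    """Return R**(K/N) as a Fraction if rational, else None.
    Assumes R > 0 is a Fraction and gcd(K, N) = 1 with N > 0."""
    a, b = iroot(R.numerator, N), iroot(R.denominator, N)
    if a ** N == R.numerator and b ** N == R.denominator:
        return Fraction(a, b) ** K      # R^{1/N} rational => R^{K/N} rational
    return None                         # by the lemma, R^{K/N} is irrational

print(rational_power(Fraction(4), 1, 2))      # 2
print(rational_power(Fraction(2), 1, 2))      # None: sqrt(2) is irrational
print(rational_power(Fraction(8, 27), 2, 3))  # 4/9
```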
{ "language": "en", "url": "https://math.stackexchange.com/questions/114914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
How to show that these two random number generating methods are equivalent? Let $U$, $U_1$ and $U_2$ be independent uniform random numbers between 0 and 1. Can we show that generating random number $X$ by $X = \sqrt{U}$ and $X = \max(U_1,U_2)$ are equivalent?
For every $x$ in $(0,1)$, $\mathrm P(\max\{U_1,U_2\}\leqslant x)=\mathrm P(U_1\leqslant x)\cdot\mathrm P(U_2\leqslant x)=x\cdot x=x^2$ and $\mathrm P(\sqrt{U}\leqslant x)=\mathrm P(U\leqslant x^2)=x^2$ hence $\max\{U_1,U_2\}$ and $\sqrt{U}$ follow the same distribution.
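A quick simulation showing the two generators agree in distribution (a sketch with NumPy; both samples should match the quantile function $F^{-1}(q)=\sqrt{q}$ of the common cdf $F(x)=x^2$):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
max_sample = np.maximum(rng.random(n), rng.random(n))  # max(U1, U2)
sqrt_sample = np.sqrt(rng.random(n))                   # sqrt(U)

for q in (0.25, 0.5, 0.75):
    print(np.quantile(max_sample, q), np.quantile(sqrt_sample, q), np.sqrt(q))
```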
{ "language": "en", "url": "https://math.stackexchange.com/questions/114950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Estimate the sample deviation in one pass We've learned this algorithm in class but I'm not sure I've fully understood the correctness of this approach. It is known as Welford's algorithm for the sum of squares, as described in: Welford, B.P., "Note on a Method for Calculating Corrected Sums of Squares and Products", Technometrics, Vol. 4, No. 3 (Aug., 1962), pp. 419-420 The two-pass algorithm to estimate the sample deviation is simple. We first estimate the mean by one pass and then calculate the standard deviation. In short, it is $s_n^2 = \frac{1}{n-1} \sum_{i=1}^n (Y_i - \hat{Y})^2$. The one-pass approach has three steps. 1) $v_1 = 0$ 2) $v_k = v_{k-1} + \frac{1}{k(k-1)} (\sum_{i=1}^{k-1} Y_i - (k-1) Y_k)^2 (2 \leq k \leq n)$ 3) $s_n^2 = \frac{v_n}{n-1}$. Can someone help me understand how this approach works?
$v_n$ is going to be $\sum_{i=1}^n (Y_i - \overline{Y}_n)^2$ where $\overline{Y}_n = \frac{1}{n} \sum_{i=1}^n Y_i$. Note that by expanding out the square, $v_n = \sum_{i=1}^n Y_i^2 - \frac{1}{n} \left(\sum_{i=1}^n Y_i\right)^2$. In terms of $m_k = \sum_{i=1}^k Y_i$, we have $$v_n = \sum_{i=1}^n Y_i^2 - \frac{1}{n} m_n^2 = \sum_{i=1}^{n-1} Y_i^2 + Y_n^2 - \frac{1}{n} (m_{n-1} + Y_n)^2$$ On the other hand, $$v_{n-1} = \sum_{i=1}^{n-1} Y_i^2 - \frac{1}{n-1} m_{n-1}^2$$ Since $(m_{n-1} + Y_n)^2 = m_{n-1}^2 + 2 Y_n m_{n-1} + Y_n^2$, the difference between these is $$ v_n - v_{n-1} = \left(1 - \frac{1}{n}\right) Y_n^2 + \left(\frac{1}{n-1} - \frac{1}{n}\right) m_{n-1}^2 - \frac{2}{n} Y_n m_{n-1} $$ $$ = \frac{1}{n(n-1)} m_{n-1}^2 - \frac{2}{n} Y_n m_{n-1} +\frac{n-1}{n} Y_n^2 $$ $$ = \frac{1}{n(n-1)} \left(m_{n-1} - (n-1) Y_n\right)^2 $$
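Translated into code, the three steps of the one-pass algorithm look like this (a sketch; `m` carries the running sum $m_k=\sum_{i\le k} Y_i$ and `v` the running sum of squared deviations $v_k$):

```python
def one_pass_variance(ys):
    """Sample variance s_n^2 in a single pass (Welford, 1962); needs len(ys) >= 2."""
    m, v = 0.0, 0.0
    for k, y in enumerate(ys, start=1):
        if k > 1:                      # step 2: v_k = v_{k-1} + (m_{k-1} - (k-1) Y_k)^2 / (k (k-1))
            v += (m - (k - 1) * y) ** 2 / (k * (k - 1))
        m += y                         # m is now the sum of the first k values
    return v / (k - 1)                 # step 3: s_n^2 = v_n / (n - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(one_pass_variance(data))         # 4.5714..., matching the two-pass formula
```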
{ "language": "en", "url": "https://math.stackexchange.com/questions/115008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }