If all paths with the same endpoints are homotopic, then the space is simply connected. Let $X$ be a path connected space such that any two paths in $X$ having the same end points are path homotopic. Then prove that $X$ is simply connected. I am totally stuck on this problem. Can someone help me please? Thanks for your time.
If every two paths having the same end points are path homotopic, then every loop $w: S^1 \longrightarrow X$ based at a point $x_0$ is path homotopic to the constant path at $x_0$, since both are paths from $x_0$ to $x_0$. Together with path connectedness, that is the definition of simply connected, isn't it?
{ "language": "en", "url": "https://math.stackexchange.com/questions/541064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If P = NP, then 3-SAT can be solved in P Prove that if $P = NP$, then there is an algorithm that can find a boolean assignment for a 3-SAT problem in P time if it exists. $P = NP$ only says that we can decide whether a 3-SAT problem is satisfiable, but it doesn't say anything about how to find a satisfying assignment. Does anyone know how to tackle this problem?
Take your formula and check if it is satisfiable. If so, conjoin $x_1$ to your formula, where $x_1$ is your first variable, and check if it is still satisfiable. If so, then there is an assignment where $x_1$ is true; otherwise, there is one where $x_1$ is false. Conjoin $x_1$ or $\neg x_1$ and repeat for all of the other variables.
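A minimal Python sketch of this search-to-decision reduction, with a brute-force satisfiability check standing in for the polynomial-time decider that $P = NP$ would provide (the clause encoding and function names are my own):

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force SAT decision -- a stand-in for the hypothetical
    polynomial-time decider.  Clauses are tuples of nonzero ints;
    literal k means variable k is true, literal -k means its negation."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses, n):
    """The self-reduction: pin each variable in turn with a unit clause."""
    if not satisfiable(clauses, n):
        return None
    fixed = list(clauses)
    assignment = {}
    for v in range(1, n + 1):
        if satisfiable(fixed + [(v,)], n):  # still satisfiable with v True?
            fixed.append((v,))
            assignment[v] = True
        else:                               # then v False must work
            fixed.append((-v,))
            assignment[v] = False
    return assignment
```

Only $n + 1$ calls to the decider are made, so if the decider runs in polynomial time, the whole search does too.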
{ "language": "en", "url": "https://math.stackexchange.com/questions/541114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Evaluation of $\lim_{n\rightarrow \infty}\frac{1}{2n}\cdot \ln \binom{2n}{n}$ Evaluate $$\lim_{n\rightarrow \infty}\frac{1}{2n}\cdot \ln \binom{2n}{n}.$$ $\underline{\bf{My\;\;Try}}::$ Let $\displaystyle y = \lim_{n\rightarrow \infty}\frac{1}{2n}\cdot \ln \binom{2n}{n} = \lim_{n\rightarrow \infty}\frac{1}{2n}\cdot \ln \left(\frac{(n+1)\cdot (n+2)\cdot (n+3)\cdots (n+n)}{(1)\cdot (2)\cdot (3)\cdots (n)}\right)$ $\displaystyle y = \lim_{n\rightarrow \infty}\frac{1}{2n}\cdot \left\{\ln \left(\frac{n+1}{1}\right)+\ln \left(\frac{n+2}{2}\right)+\ln \left(\frac{n+3}{3}\right)+\cdots+\ln \left(\frac{n+n}{n}\right)\right\}$ $\displaystyle y = \lim_{n\rightarrow \infty}\frac{1}{2n}\cdot \sum_{r=1}^{n}\ln \left(\frac{n+r}{r}\right) = \lim_{n\rightarrow \infty}\frac{1}{2n}\cdot \sum_{r=1}^{n}\ln \left(\frac{1+\frac{r}{n}}{\frac{r}{n}}\right)$ Now, using a Riemann sum, $\displaystyle y = \frac{1}{2}\int_{0}^{1}\ln \left(\frac{x+1}{x}\right)dx = \frac{1}{2}\int_{0}^{1}\ln (x+1)dx-\frac{1}{2}\int_{0}^{1}\ln (x)dx = \ln (2)$ My question is: is there any other method, such as Stirling's approximation, the Stolz–Cesàro theorem, or the ratio test? If yes, then please explain here. Thanks
Besides the elegant demonstration given by achille hui, I think that the simplest way to solve this problem is to use Stirling's approximation. At first order, Stirling's approximation is $n! = \sqrt{2 \pi n} (n/e)^n$. It is very good. Have a look at http://en.wikipedia.org/wiki/Stirling%27s_approximation. They have a very good page for that. For sure, you need to express the binomial coefficient as a ratio of factorials. Try that and you will be amazed to see how simple your problem becomes.
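A quick numerical sanity check of the limit (stdlib Python; Stirling predicts the error decays roughly like $\ln(\pi n)/(4n)$, so convergence is slow but visible):

```python
import math

def f(n):
    # (1 / 2n) * ln C(2n, n), using exact big-integer binomials
    return math.log(math.comb(2 * n, n)) / (2 * n)

# the values creep up towards ln 2 = 0.6931...
for n in (10, 100, 1000):
    print(n, f(n), math.log(2) - f(n))
```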
{ "language": "en", "url": "https://math.stackexchange.com/questions/541232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Gröbner bases: Polynomial equations. Does a solution $x$ of $G \cap k[x_1, .., x_i]$ imply a solution of $G \cap k[x_1, .., x_i, x_{i+1}]$, with $x$ plugged in? I have been studying Gröbner bases for a while now and have seen a few examples in my textbook / exercises. Let $\mathcal k$ be a field and $\mathcal k[x_1,..,x_n]$ a polynomial ring. I wish to solve a system of equations $f_1, .., f_k \in \mathcal k[x_1,..,x_n]$, where $G$ is a Gröbner basis for $I = <f_1, ..,f_k>$ (the ideal generated by $f_1,..,f_k$): $f_1(x_1,..,x_n) = 0, .., f_k(x_1, .. ,x_n) = 0$ In the examples / exercises, all solutions of $G \cap \mathcal k[x_1,.., x_i]$ also turn out to work for solving the polynomial equations $G \cap \mathcal k[x_1,.., x_i, x_{i+1}]$, all the way up to $i = n$. I have not encountered a solution $x$ of $G \cap \mathcal k[x_1,.., x_i]$ that makes the system of equations $G \cap \mathcal k[x_1,.., x_i, x_{i+1}]$ unsolvable when $x$ is plugged in. So does a solution $x$ of $G \cap \mathcal k[x_1,.., x_i]$ imply that there exists $x_{i+1} \in \mathcal k$ such that $G \cap \mathcal k[x_1,.., x_i, x_{i+1}]$ is solved for $x, x_{i+1}$? Please give a counterexample if possible, thanks.
For $\mathbb C$, the Extension Theorem tells us when we can extend a partial solution to a complete one. Theorem 1 (The Extension Theorem) Let $I=<f_1,...,f_s>\subset \mathbb C[x_1,...,x_n]$. For each $1\le i \le s$, write $f_i=g_i(x_2,...,x_n)x_1^{N_i} + $ terms in which $x_1$ has degree $< N_i$, where $N_i\ge 0$ and $g_i \in \mathbb C[x_2,...,x_n]$ is nonzero. Let $I_1$ be the first elimination ideal of $I$ and suppose we have a partial solution $(a_2,...,a_n) \in V(I_1)$. If $(a_2,..,a_n) \not\in V(g_1,...,g_s)$, then $\exists a_1 \in \mathbb C$ s.t. $(a_1,...,a_n) \in V(I)$. That is, if a partial solution causes the leading coefficients $g_1,...,g_s$ to vanish simultaneously, then we can't extend it. The accepted answer does have a solution over the complex field, but not the reals. A better example is $$ xy = 1, \\ xz = 1 $$ The Gröbner basis is $\{y-z, xz-1\}$, so the first elimination ideal is $<y-z>$, from which we get the partial solutions $y=z=a$ for every $a\in \mathbb R$. However, since $y$ and $z$ are the leading coefficients of our original system, the partial solution $(0,0)$ causes them to vanish simultaneously, so we cannot extend this partial solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/541316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove one-to-one function Let $S$ be the set of all strings of $0$'s and $1$'s, and define $D:S \rightarrow \mathbb{Z}$ as follows: For all $s\in S$, $D(s)= \text{the number of}\,\, 1$'s in $s$ minus the number of $0$'s in $s$. a. Is $D$ one-to-one (injective)? Prove or give a counterexample if it is false. b. Is $D$ onto (surjective)? Prove or give a counterexample. a. I found $s_1 \neq s_2$ with $D(s_1)= D(s_2)$: $11000\neq 10100$ but $D(11000)=-1$ and $D(10100)=-1$. So does that mean that $D$ is not one-to-one? b. ?
a) Yes, the function is indeed not injective, and your idea is correct! b) You have to show that, given an arbitrary integer $n$, then you can find a string $S$ such that $D(S)=n$. For example, if $n=3$, then take $S=111$...
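Both parts can be checked concretely in a few lines of Python (using the obvious encoding of strings, and assuming the empty string counts as a string; prefix with e.g. "10" for $n=0$ if it does not):

```python
def D(s):
    return s.count("1") - s.count("0")

# (a) not injective: two different strings with the same value
assert "11000" != "10100" and D("11000") == D("10100") == -1

# (b) surjective: exhibit a preimage for any integer n
def preimage(n):
    return "1" * n if n >= 0 else "0" * (-n)

for n in range(-6, 7):
    assert D(preimage(n)) == n
```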
{ "language": "en", "url": "https://math.stackexchange.com/questions/541407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Extending a connected open set Assume $\emptyset\neq V\subseteq U\subseteq\mathbb{R}^n$ are open and connected sets so that $U\setminus\overline{V}$ is connected as well. Given any point $x\in U$, is there always a connected open set $W\subseteq U$ so that $\{x\}\cup V\subseteq W$ and $U\setminus\overline{W}$ is connected? In other words, can $V$ be extended to a connected open set containing a given point so that the complement of the closure of the extended set is still connected?
There are two cases to consider: * *$n\ge 2$. Observe that in this case, for every open connected set $S\subset \mathbb R^n$, any $a\in S$ and any sufficiently small $r> 0$, the complement of the closed ball $$ S\setminus \overline{B(a, r)} $$ is still connected. Now, if $V$ is dense in $U$ then the only meaningful answer is to take $W=U$ (for any choice of $x$) and then $U\setminus \bar W$ and $W$ are both connected (the first one is empty of course). If $V$ is not dense in $U$, we can do a bit better than this: The subset $U\setminus (\{x\}\cup \bar V)$ is open and nonempty. Pick any point $a$ in this complementary set and let $$ W= U \setminus \overline{B(a, r)} $$ where $r>0$ is sufficiently small. Then $$ U\setminus \bar W = B(a, r) $$ (the open ball) is connected and nonempty and $W$ is also connected by the above remark. *$n=1$ (I will leave out the case $n=0$). Then both $U$ and $V$ are intervals and you can take $W$ to be the smallest open interval containing $V$ and $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/541532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
logarithm equation with different bases. Why is this like it is? :D $$\dfrac{1}{\log_ae} = \ln(a)$$ I'm solving some exercises and I ran into this. Maybe it's really banal, but please explain it to me...
We can prove $$\log_ab=\frac{\log_cb}{\log_ca}$$ where $a,c >0,\ne1$ $$\implies \log_ab\cdot\log_ba=\cdots=1$$ and conventionally the natural logarithm is written as $\displaystyle \ln a$, which means $\displaystyle \log_ea$
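A one-line numerical check of the identity, using the fact that Python's `math.log(x, base)` computes $\log_{base}x$:

```python
import math

for a in (2.0, 10.0, 0.5, 7.3):          # any base a > 0, a != 1
    lhs = 1 / math.log(math.e, a)        # 1 / log_a(e)
    rhs = math.log(a)                    # ln(a)
    assert math.isclose(lhs, rhs)
```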
{ "language": "en", "url": "https://math.stackexchange.com/questions/541604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
An overview of analysis I'm looking for a book that gives an overview of analysis, a bit like Shafarevich's Basic Notions of Algebra but for analysis. The book I have in mind would give definitions, theorems, examples, and sometimes sketches of proofs. It would cover a broad swathe of analysis (real, complex, functional, differential equations) and discuss a range of applications (i.e. in physics and in prime numbers). I've looked at the Analysis I volume of the Encyclopaedia of Mathematical Sciences which Shafarevich's book is also a part of, but it focuses more on methods and isn't quite what I have in mind. Thank you!
Loomis & Sternberg's Advanced Calculus is available online. It is a classic that goes well beyond what people normally call calculus (differential equations, differential geometry, variational principles, ...). Personally I really like Sternberg's books, but it is a full-blown textbook rather than a survey. Or maybe Aleksandrov & Kolmogorov, Mathematics: Its Content, Methods and Meaning is closer to what you are looking for. It is more of a survey, but with a lot of depth. It is much broader than what you asked, but it is 1100+ pages and it is biased towards analysis-related topics. Another area to explore is advanced applied mathematics texts; these are often primarily analysis oriented, comprehensive and, well, application oriented, and you can find books by people like Kreyszig, Lanczos, ... Dover has published a number of books like this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/541684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 0 }
Maximum N that will hold this true Find the largest positive integer $N$ such that $$\sqrt{64 + 32^{403} + 4^{N+3}}$$ is an integer Is $N = 1003$?
Note that with $N=2008$ we have $ (2^{N+3}+8)^2=4^{N+3}+2\cdot 8\cdot 2^{N+3}+64=4^{N+3}+2^{2015}+64=64+32^{403}+4^{N+3},$ so we conjecture that the maximal value is $2008$. If $2^{2015}+2^6+2^{2N+6}$ is a perfect square then so is $\frac1{64}$ of it, i.e. $2^{2009}+1+2^{2N}=m^2$ for some $m\in\mathbb N$. But if $N> 2008$, we have $$(2^N)^2=2^{2N}<2^{2009}+1+2^{2N}<1+2^{N+1}+2^{2N}=(2^N+1)^2$$ so that $2^N<m<2^N+1$, which is absurd.
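Python's exact big integers make both halves of this argument easy to confirm (`math.isqrt` computes the exact integer square root):

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

# N = 2008 works, and the square root is 2^(N+3) + 8 as claimed:
s = 64 + 32**403 + 4**(2008 + 3)
assert is_square(s) and isqrt(s) == 2**2011 + 8

# spot-check that the next few larger N fail:
for N in range(2009, 2014):
    assert not is_square(64 + 32**403 + 4**(N + 3))
```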
{ "language": "en", "url": "https://math.stackexchange.com/questions/541756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
basic induction probs Hello guys I have this problem which has been really bugging me. And it goes as follows: Using induction, we want to prove that all human beings have the same hair colour. Let S(n) be the statement that “any group of n human beings has the same hair colour”. Clearly S(1) is true: in any group of just one, everybody has the same hair colour. Now assume S(k), that in any group of k everybody has the same hair colour. If we replace any one in the group with someone else, they still make a total of k and hence have the same hair colour. This works for any initial group of people, meaning that any group of k + 1 also has the same hair colour. Therefore S(k + 1) is true. I can't seem to figure out where the problem lies in this proof. I have tried a few things and I have concluded that the base case is correct. But other than that I can't seem to find the flaw in the proof.
The base case is correct. Inductive step: Assume that the result is true for $n = k$, which is to say that everybody in a group of $k$ people has the same hair colour. For the proof to work, you now have to prove that $P(k)$ implies $P(k+1)$, but it doesn't. Just because it's true that for any group of $k$ people, everyone in the group has the same hair colour, it doesn't follow that for any group of $k+1$ people, everyone in the group has the same hair colour. One counterexample suffices to disprove the claim: take a group with one person in, who has red hair. Now add someone with blonde hair.
{ "language": "en", "url": "https://math.stackexchange.com/questions/541852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that a group of order 30 has at least three different normal subgroups Prove that a group of order 30 has at least three different normal subgroups. Prove: $30=2\cdot3\cdot5$ There are $2$-Sylow, $3$-Sylow and $5$-Sylow subgroups. If $t_p$= number of $p$-Sylow-subgroups. Then $t_2$=$1$, $3$, $5$, $15$ and $t_3$=$1$, $10$ and $t_5$=$1$, 6. Therefore I can claim that the group is not simple. So $t_3$=1 or $t_5$=1. Which I can prove. So now we know that the group has another non-trivial normal subgroup. But this is just one normal subgroup. How can I show that there are (at least) three different normal subgroups ?
Let $|G|=30$. Assume that it has no nontrivial normal subgroups. Then there must exist more than one $5$-Sylow subgroup and more than one $3$-Sylow subgroup. By the Sylow theorems, you can then prove that there must be $10$ $3$-Sylow subgroups and $6$ $5$-Sylow subgroups. Any two of these subgroups intersect trivially, i.e., their intersection equals {$e$}. This is true because each of them has prime order, so every non-identity element generates the subgroup; hence if a non-identity element lay in two of these subgroups, the subgroups would be equal. The total number of elements in $G$, however, is at least the number of distinct elements in all these subgroups; but $10\cdot(3-1)+6\cdot(5-1)+1>30$, which is a contradiction. QED.
{ "language": "en", "url": "https://math.stackexchange.com/questions/541946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find the coordinates of intersection of a line and a circle There is a circle with a radius of $25$ ft and origin at $(0, 0)$ and a line segment from (0, -31) to (-37, 8). Find the intersections of the line and circle. I am asking for somebody to analyze what I am doing wrong in calculating the answer, given the question above in its exact format. I almost never get an answer correct, and I would like an explanation as to why this appears to be the case. I begin with finding the formula of the line I used the slope formula, $\Delta y \over{\Delta x}$ to get ${-31 - 8 \over{0 - (-37)}} = {-39 \over{37}}$, which becomes $1.054$, so that $y = 1.054x + b$ $(0, -31)$ is the y-intercept when $x = 0$, so $y = 1.054x - 31$. Formula of circle The format I learned is $(x - h)^2 + (y - k)^2 = r^2$. Substitute known info: $(x - 0)^2 + (y - 0)^2 = 25$ Simplify: $x^2 + y^2 = 25$ Substitution I took the circle's equation $x^2 + y^2 = 25$ and plugged in the value of y: $x^2 + (0.621x - 31)^2 = 25$. I expanded and rounded it into $x^2 + 1.111x^2 + -65.348x + 961 = 25$. It simplifies to $2.111x^2 - 65.348x + 936$. Quadratic formula The template: $x = {-b \pm \sqrt{b^2 - 4ac }\over{2a}}$ is used by plugging in values: ${65.348 \pm \sqrt{4270.361 - 4*2.111*936 }\over{2*2.111}} = {65.348 \pm \sqrt{4270.361 - 7903.584}\over{4.222}} = {65.348 \pm \sqrt{-3633.223}\over{4.222}}$. Either I do not know what to do next or (as I suspect) this is the unfortunate answer to the problem. The problem continues with questions about how long somebody is in the circle if they follow the line at a certain rate, which leads me to believe that there are real roots, but I certainly cannot find them. Edit: slope and template errors corrected, answer is still incorrect.
The line equation is $y=(-39/37)x-31.$ To get the closest point on this line to the origin, intersect it with the perpendicular to it from $(0,0),$ which has equation $y=(37/39)x$. The nearest point on the line to $(0,0)$ is then seen to be $(a,b)$ where $a=-44733/2890,\ b=-42439/2890.$ The distance to the origin is then $\sqrt{a^2+b^2} \approx 21.336.$ Since this is less than the required radius of $25$ there should be two points for the solution. [I could have sworn that initially the given radius was 5...]
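For what it's worth, here is a sketch of the corrected computation in Python. Two assumptions: the circle of radius $25$ has equation $x^2+y^2 = 625$ (not $25$, which appears to be one slip in the attempt above), and exact fractions are used for the coefficients so that no rounding of the slope creeps in:

```python
from fractions import Fraction
import math

# line through (0, -31) and (-37, 8): y = k*x + c
k = Fraction(8 - (-31), -37 - 0)   # slope -39/37
c = Fraction(-31)                  # y-intercept

# substitute into x^2 + y^2 = 25^2 and collect: A x^2 + B x + C = 0
A = 1 + k * k
B = 2 * k * c
C = c * c - 25**2

disc = float(B * B - 4 * A * C)
assert disc > 0                    # positive, so two intersection points

points = []
for sign in (+1, -1):
    x = (float(-B) + sign * math.sqrt(disc)) / (2 * float(A))
    y = float(k) * x + float(c)
    assert abs(x * x + y * y - 625) < 1e-6   # on the circle
    points.append((x, y))

print(points)
```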
{ "language": "en", "url": "https://math.stackexchange.com/questions/542040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
finding determinant as an function in given matrix Calculate the determinant of the following matrix as an explicit function of $x$. (It is a polynomial in $x$. You are asked to find all the coefficients.) \begin{bmatrix}1 & x & x^{2} & x^{3} & x^{4}\\ x^{5} & x^{6} & x^{7} & x^{8} & x^{9}\\ 0 & 0 & 0 & x^{10} & x^{11}\\ 0 & 0 & 0 & x^{12} & x^{13}\\ 0 & 0 & 0 & x^{14} & x^{15} \end{bmatrix} Can someone help me with this question?
Another way to look at this: the bottom three rows can't have rank more than $2$, since they have only two nonzero columns, so the whole matrix can't have rank more than $4$, and therefore is singular.
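The rank argument can be cross-checked exactly with a small Laplace expansion over `fractions.Fraction`, so no floating-point noise can hide the zero; evaluating at a few values of $x$:

```python
from fractions import Fraction

def det(M):
    """Determinant by Laplace expansion along the first row
    (exponential in n, but fine for a 5x5)."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def matrix(x):
    x, z = Fraction(x), Fraction(0)
    return [[1,     x,     x**2,  x**3,  x**4],
            [x**5,  x**6,  x**7,  x**8,  x**9],
            [z,     z,     z,     x**10, x**11],
            [z,     z,     z,     x**12, x**13],
            [z,     z,     z,     x**14, x**15]]

for x in (2, -3, Fraction(1, 7)):
    assert det(matrix(x)) == 0   # the determinant vanishes identically
```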
{ "language": "en", "url": "https://math.stackexchange.com/questions/542148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Taylor polynomial about the origin Find the 3rd degree Taylor polynomial about the origin of $$f(x,y)=\sin (x)\ln(1+y)$$ So I used this formula to calculate it $$p=f(0,0)+(f_x(0,0)x+f_y(0,0)y)+(\frac{1}{2}f_{xx}(0,0)x^2+f_{xy}(0,0)xy+\frac{1}{2}f_{yy}(0,0)y^2)+(\frac{1}{6}f_{xxx}(0,0)x^3+\frac{1}{2}f_{xxy}(0,0)x^2y+\frac{1}{2}f_{xyy}(0,0)xy^2+\frac{1}{6}f_{yyy}(0,0)y^3)$$ I get $x(\ln(1)+y-\frac{\ln(1)x^2}{6}-\frac{y^2}{2})$ But as you can see, this is a very tedious task (especially if I have to do this on my midterm). There exists a Taylor series for $\sin(x)$ and $\ln(1+y)$. If I only keep the terms with degree $\le 3$, I have $$\sin(x)\ln(1+y)=(x-\frac{x^3}{3!})(y-\frac{y^2}{2}+\frac{y^3}{3}) \\=xy-\frac{xy^2}{2}$$ (I multiply the two and remove terms with degree > 3 from the answer) The two polynomials are different. Is the second method even a valid way to determine Taylor polynomial?
The answers are the same. $\ln(1) = 0$. And yes, your technique is correct.
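A quick numerical check (stdlib Python) that $xy - xy^2/2$ really is the cubic Taylor polynomial: a degree-3 match means the remainder is $O(h^4)$, so halving the step should cut the error by roughly $2^4 = 16$.

```python
import math

f = lambda x, y: math.sin(x) * math.log(1 + y)
p = lambda x, y: x * y - x * y**2 / 2

e1 = abs(f(0.1, 0.1) - p(0.1, 0.1))
e2 = abs(f(0.05, 0.05) - p(0.05, 0.05))
print(e1 / e2)   # close to 16, as a fourth-order remainder predicts
```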
{ "language": "en", "url": "https://math.stackexchange.com/questions/542237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Do there exist functions such that $f(f(x)) = -x$? I am wondering about this. A well-known class of functions are the "involutive functions" or "involutions", which have that $f(f(x)) = x$, or, equivalently, $f(x) = f^{-1}(x)$ (with $f$ bijective). Now, consider the "anti-involution" equation $f(f(x)) = -x$. It is possible for a function $f: \mathbb{C} \rightarrow \mathbb{C}$ to have $f(f(z)) = -z$. Take $f(z) = iz$. But what about this functional equation $f(f(x)) = -x$ for functions $f: \mathbb{R} \rightarrow \mathbb{R}$, instead of $f: \mathbb{C} \rightarrow \mathbb{C}$? Do such functions exist? If so, can any be described or constructed? What about in the more general case of functions on arbitrary groups $G$ where $f(f(x)) = x^{-1}$ (or $-x$ for abelian groups), $f: G \rightarrow G$? Can we always get such $f$? If not, what conditions must there be on the group $G$ for such $f$ to exist?
Your primary question has been asked and answered already. Your follow-up question can be answered by similar means. The group structure on $G$ is actually irrelevant: all you're actually using is that each element is paired with an inverse. Another way to phrase this question is that you have a group action of the cyclic group with two elements $C_2 = \{ \pm 1 \} $ acting on a set $X$. Your question amounts to wondering if this can be extended to an action of $C_4 = \{ \pm 1, \pm i\}$. The method of my answer to the other question can still be applied, with a small change. Let $A$ be the set of all fixed points of $C_2$. That is, the set of elements such that $(-1) \cdot x = x$. Let $B$ be the set of all two-element orbits. That is, the set of all unordered pairs $\{ \{a,b\} \mid (-1) \cdot a = b \wedge a \neq b \}$. Now, partition $A$ into sets of one or two elements each, and partition $B$ into sets of two elements each. Also choose an ordering for each unordered pair in $B$. Now, * *For each of the one-element partitions $\{ x \}$ of $A$, define $i \cdot x = x$ *For each of the two-element partitions $\{ x, y \}$ of $A$, define $i \cdot x = y$ and $i \cdot y = x$. *For each two-element partition $\{ (a,b), (c,d) \}$ of $B$, define $i \cdot a = c$, $i \cdot c = b$, $i \cdot b = d$, and $i \cdot d = a$. Every action of $C_4$ on $X$ that extends the action of $C_2$ is of the above form. (just look at the orbits: the one, two, and four-element orbits correspond to the three bullets respectively) Thus, we can make such an extension if and only if $B$ is either infinite or finite of even cardinality.
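The orbit-pairing recipe can be made completely concrete for $G = \mathbb Z$: here $A = \{0\}$ stays fixed, and the two-element orbits $\{n,-n\}$ are paired as $\{1,-1\}$ with $\{2,-2\}$, $\{3,-3\}$ with $\{4,-4\}$, and so on, giving four-cycles such as $1 \to 2 \to -1 \to -2 \to 1$:

```python
def f(n):
    if n == 0:
        return 0                              # the fixed point
    if n > 0:
        return n + 1 if n % 2 == 1 else -(n - 1)
    return -f(-n)                             # extend oddly to negatives

# f is a "square root" of negation on the integers:
for n in range(-1000, 1001):
    assert f(f(n)) == -n
```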
{ "language": "en", "url": "https://math.stackexchange.com/questions/542312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
union of two contractible spaces, having nonempty path-connected intersection, need not be contractible Show that the union of two contractible spaces, having nonempty path-connected intersection, need not be contractible. Can someone give me a proper example please? I could not think of one.
Consider the sphere $S^2$ with two open subsets $U,V$, s.t. $U$ contains everything but the south pole and $V$ contains everything but the north pole. They are both contractible, their intersection is homotopy equivalent to a circle, which is path connected, but their union is $S^2$, which is not contractible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/542464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Multiplicative Inverses in Non-Commutative Rings My abstract book defines inverses (units) as solutions to the equation $ax=1$ then stipulates in the definition that $xa=1=ax$, even in non-commutative rings. But I'm having trouble understanding why this would be true in the generic case. Can someone help me understand? Book is "Abstract Algebra : An Introduction - Third Edition" By Thomas W. Hungerford. ISBN-13: 978-1-111-56962-4 (Chapter 3.2) Edit: Ok, I think I got it. Left inverses are not necessarily also right inverses. However, if an element has a left inverse and a right inverse, then those inverses are equal: $$ lx = 1 \\ xr = 1\\ lxr = r \\ lxr = l \\ r = l $$ Source:http://www.reddit.com/r/math/comments/1pdeyf/why_do_multiplicative_inverses_commute_even_in/cd17pcb Also, I assumed that 'unit' was synonymous with 'has a multiplicative inverse'. It is not; a 'unit' is an element that has both a left and right inverse, not just an inverse in general.
Note: In a ring R with 1, if EVERY non-zero element x has a left inverse, then R is a division ring (so ax=1 implies xa=1). More generally, if EVERY non-zero element has either a left or a right inverse, then R is a division ring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/542552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
absolute value inequalities When answering this kind of inequality $|2x^2-5x+2| < |x+1|$ I test the four combinations: when both sides are +, when one is + and the other is -, the opposite, and when both are -. When I check the negative options, do I need to flip the inequality sign? Thanks
Another way - no tricks, just systematically looking at all cases, where we can write the inequality without the absolute value sign, to convince you that all possibilities are covered. Following the definition of the absolute value function, the RHS is easy to rewrite as $$|x+1| = \begin{cases} x+1 & x \ge -1\\ -x-1 &x < -1 \end{cases}$$ Noting $2x^2-5x+2 = (2x-1)(x-2)$, we have the LHS as $$|2x^2-5x+2| = \begin{cases} 2x^2-5x+2 & x \le \frac12 \text{ or } x \ge 2\\ -2x^2+5x-2 &\frac12 < x < 2 \end{cases}$$ Thus we can break the number line into four regions (using break points at $x = -1, \frac12, 2$) and write the inequality in each region without any absolute value symbols. Then we solve the inequality in each region, and combine results in the end. Details follow. Region 1: $x < -1$ Here the inequality is $2x^2-5x+2 < -x - 1$, which is equivalent to $2x^2 - 4x + 3 < 0$ or $2(x-1)^2+1 < 0$, which is not possible. Region 2: $-1 \le x \le \frac12$ Here we have the inequality as $2x^2-5x+2 < x + 1 \iff 2x^2 - 6x + 1 < 0$, which is true when $\frac12(3-\sqrt7) < x < \frac12(3+\sqrt7)$. Taking the part of this solution which is in the region being considered, we have solutions for $x \in (\frac12(3-\sqrt7), \frac12]$ Region 3: $\frac12 < x < 2$ Here the inequality is $-2x^2+5x-2 < x + 1 \iff 2(x- 1)^2 + 1 > 0$, which holds true for all $x$. So the entire region is a solution. Region 4: $2 \le x$ Here the inequality is $2x^2-5x+2 < x + 1$, which is exactly the same form as Region 2, and has the same solutions. Hence $2 \le x < \frac12(3+\sqrt7)$ is the solution from this Region. Combining solutions from all regions, we have $\frac12(3-\sqrt7) < x < \frac12(3+\sqrt7)$, or $|x - \frac32| < \frac{\sqrt7}{2}$, as the complete solution set (which others have already pointed out).
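A brute-force cross-check of the final solution set (random sampling with stdlib Python; samples landing essentially on the boundary would be skipped, though with random floats that practically never happens):

```python
import math
import random

lo = (3 - math.sqrt(7)) / 2
hi = (3 + math.sqrt(7)) / 2

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-5.0, 5.0)
    if min(abs(x - lo), abs(x - hi)) < 1e-9:
        continue                     # too close to the boundary to compare
    holds = abs(2 * x * x - 5 * x + 2) < abs(x + 1)
    assert holds == (lo < x < hi)
```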
{ "language": "en", "url": "https://math.stackexchange.com/questions/542636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Why do we subtract 1 when calculating permutations in a ring? $10$ persons are to be arranged in a ring shape. The number of ways to do that is $9!.$ I wonder why we subtract $1$ in all such cases. I can imagine that if A,B,C,D are sitting in a row then B,C,D,A would give me a different arrangement, but had they been sitting at a circular table then both of the above arrangements would mean the same thing. The same goes for CDAB or DABC. But how does it all lead to the $(n-1)!$ formula? I can imagine individual cases, but how are we generalizing it overall?
Imagine that the table is arranged so that one of the seats is due north of the centre of the table. Seat the $n$ people around the table. Let $p_1$ be the person sitting in that north seat, and let the other $n-1$ be $p_2,p_3,\ldots,p_n$ clockwise around the table. Now rotate the table and the seats one place counterclockwise: $p_2$ is now in the north seat, and the others going clockwise around the table are $p_3,p_4\ldots,p_n,p_1$. Repeat: $p_3$ is now sitting to the north, and clockwise around the table we have $p_4,p_5,\ldots,p_n,p_1,p_2$. If we keep doing this, we bring each person in succession to the northern spot, and after $n$ of these rotations everything and everyone is back in its original position. From the standpoint of absolute compass directions we had $n$ different arrangements, but from the standpoint of the order of people around the table these $n$ arrangements were all the same. Thus, each cyclic permutation of the diners corresponds to $n$ different absolute permutations, i.e., permutations in which we care about the absolute seating position of each diner and not just who sits next to whom. Since there are $n!$ absolute permutations, there must be $\frac{n!}n=(n-1)!$ cyclic permutations. Alternatively, you can think of it this way: given a seating of the diners around the table, you can designate any seat as the head of the table and list the diners clockwise around the table starting at the head. That gives $n$ different lists, but they all correspond to the same cyclic permutation of the diners. Since $n!$ different lists are possible, there must again be $\frac{n!}n=(n-1)!$ different cyclic permutations, each corresponding to $n$ of the lists.
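For small $n$ the count $(n-1)!$ can be verified by brute force: generate every seating as a tuple, rotate each so that person $0$ sits at the head, and count the distinct canonical forms.

```python
from itertools import permutations
from math import factorial

def circular_count(n):
    seen = set()
    for p in permutations(range(n)):
        i = p.index(0)
        seen.add(p[i:] + p[:i])    # canonical form: person 0 at the head
    return len(seen)

for n in range(1, 8):
    assert circular_count(n) == factorial(n - 1)
```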
{ "language": "en", "url": "https://math.stackexchange.com/questions/542750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Approximation of pi Given that $\frac{\pi^2}{6}=\sum_{n=1}^{\infty}\left(\frac{1}{n^2}\right)$, I have to write a program in C that finds an approximation of $\pi$ using the formula $S_n=\sum_{i=1}^{n}\left(\frac{1}{i^2}\right)$. Then the approximation is: $\sqrt{6\cdot S_n}$ Could you tell me the result for $n=100$ so I can check if my output is right? Thanks in advance! Could you also tell me why computing the sum backwards gives a better approximation?
Mathematica is a wonderful tool for these kinds of computations; it is worth learning how to use it, and its baby brother Wolfram Alpha. Here is what I get: link
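The question asks for C, but for checking the value here is the same computation in a few lines of Python. The backwards sum adds the smallest terms first, which reduces floating-point rounding loss; the effect is pronounced in single precision but negligible in double precision at $n=100$, where the truncation error of roughly $0.0095$ dominates:

```python
import math

def approx_pi(n, backwards=False):
    order = range(n, 0, -1) if backwards else range(1, n + 1)
    s = 0.0
    for i in order:
        s += 1.0 / (i * i)
    return math.sqrt(6 * s)

print(approx_pi(100))                  # about 3.132, a bit under pi
print(approx_pi(100, backwards=True))
```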
{ "language": "en", "url": "https://math.stackexchange.com/questions/542937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Difficult Derivative? I'm in a single-variable calculus course, in which we recently covered logarithmic differentiation. The professor proved that it works when $f(x)>0$ and when $f(x)<0$. I've been trying to find a way to differentiate such a function when $f(x)=0$, but I'm not sure if it's possible. I've thought of this example, which resists all my efforts to differentiate, but seems to be differentiable (and even appears to have a value of zero). Find $$f'\left(\frac{3\pi}{2}\right)\quad \rm where \quad f(x)=(\sin{x} + 1)^x .$$ Is there any way I can find this derivative (if it exists), beyond numerically computing the limit from the definition of the derivative? Or, vice versa, how can I prove that this derivative doesn't exist? Thanks, Reggie
Graph of $f(x)$: Graph of $f'(x)$: See both of the above graphs. $f(x)$ is actually not differentiable at $x= 1.5\pi$. The graph of $f'(x)$ has a vertical asymptote at $x = 1.5\pi$. The function's second derivative may suggest that it is increasing/decreasing at $x= 1.5\pi$, but the first derivative doesn't exist there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/543174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 3 }
Differential equation with infinitely many solutions The problem is to solve, for $-1<x<1$, $$y'(x)=\frac{4x^3y(x)}{x^2+y(x)^2}$$ with $y(0)=0$. I need to show that this equation has infinitely many solutions. Note that $\frac{4x^3y(x)}{x^2+y(x)^2}$ is undefined at $(x,y)=(0,0)$, but note that $\frac{4x^3y(x)}{x^2+y(x)^2}=2x^2\left(\frac{2xy(x)}{x^2+y(x)^2}\right)=O(x^2)$ since $\left|\frac{2xy(x)}{x^2+y(x)^2}\right|\le 1$. I do not know how to solve the problem with an explicit formula. Can anyone point out how to prove that this problem has infinitely many solutions?
As you observed, the function $$F(x,y)=\frac{4x^3y}{x^2+y^2},$$ extended by $F(0,0)$, is continuous. Also, its partial derivative with respect to $y$ is bounded near the origin: $$\left|\frac{\partial F}{\partial y}\right| = \left|\frac{4x^3(x^2-y^2)}{(x^2+y^2)^2}\right| = 4|x| \frac{x^2|x^2-y^2|}{(x^2+y^2)^2}\le 4|x|$$ By the Picard–Lindelöf theorem the initial value problem with $y(0)=0$ has a unique solution. Of course, this solution is $y(x)\equiv 0$. Maybe you misunderstood the problem?
{ "language": "en", "url": "https://math.stackexchange.com/questions/543250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Any suggestions about good Analysis Textbooks that covers the following topics? I am an undergraduate math major student. I took two courses in Advanced Calculus (Real Analysis): one in Single variable Analysis, and the second in Multivariable Analysis. We basically used Rudin's Book "Principles of Mathematical Analysis" as a textbook, and we covered the first 9 chapters of the book(Namely: 1. Real and Complex Number Systems. 2- Basic Topology. 3-Numerical Sequences and Series. 4- Continuity. 5- Differentiation. 6- Riemann-Stieltjes Integral. 7- Sequences and Series of Functions. 8- Some special Functions. 9-Functions of several variables). I am looking for a good (and easy to read) textbook, preferably with many examples (or solved problems) that covers the following topics: * *$\sigma$-algebras and measures; *the measure theoretic integral (in particular, the N-dimensional Lebesgue integral); *convergence theorems; *product measures; *absolute continuity; *signed measures; *the Lebesgue-Stieltjes integral. This is also another description of the topics covered that I found on the syllabus of the course: "Brief review of set operations and countable sets. Measure theory, integration theory, Lebesgue measure and integrals on $\mathbb R^n$, product measure, Tonelli-Fubini theorem. Functions of bounded variation, absolutely continuous functions" I appreciate any kind of suggestion about a good textbook that I can use to learn the topics above by self-study. I prefer if you can tell me about the easy-to-read ones with examples and solved problems, because it's very hard for me to understand analysis without solving examples and problems. Thanks in advance for the help!
T. Tao, Analysis II covers the topics that you need. But for the measure theory, I think Paul R. Halmos, Measure Theory, is a good choice.
{ "language": "en", "url": "https://math.stackexchange.com/questions/543338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Proving a Property of a Set of Positive Integers I have a question as such: A set $\{a_1, \ldots , a_n \}$ of positive integers is nice iff there are no non-trivial (i.e. those in which at least one component is different from $0$) solutions to the equation $$a_1x_1 + \ldots + a_nx_n = 0$$ with $x_1, \ldots, x_n \in \{ -1, 0, 1 \}$. Prove that any nice set with $n$ elements necessarily contains at least one element that is $\geq \frac{2^n}{n}$. Here's my work so far: Let $A = \{a_1, \ldots, a_n \}$. Let $P = a_1x_1 + \ldots + a_nx_n$. $A$ is nice iff there are only trivial solutions to $P$. There is a nontrivial solution to $P$ iff $\exists B,C \subset A$ s.t. $sum(B)=sum(C)$. Note that $B\cap C = \emptyset$, but $B\cup C$ does not necessarily equal $A$: once there are such subsets, the remaining elements of $A$ can be paired with $x_i = 0$. Moreover, either $B$ or $C$ has to be multiplied by $x_i = -1$ so that the total is $0$. The contrapositive of the initial statement is that if $\forall a \in A, a < \frac{2^n}{n}$, then $A$ is not nice. We note that a non-nice set can also contain an element $\geq \frac{2^n}{n}$. It is only that if a set is nice, it necessarily contains such an element. Just the fact that a set contains such an element doesn't actually tell us whether the set is nice or not. The question that I arrived at is this: why is it that if $\forall a\in A, a < \frac{2^n}{n}$, then $A$ has a non-trivial solution? If I can show why this is true, it would constitute a proof of the contrapositive, and I would be done. By the way, if anyone can think of a better title, please do suggest it.
Hint: If the sums of two distinct subsets are equal, then the set is not nice. There are $2^n$ subsets. If the numbers are all $\lt \frac{2^n}{n}$, then the sum of all the numbers is less than $2^n$. Now use the Pigeonhole Principle.
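The pigeonhole argument can be illustrated by brute force (Python; the two example sets are mine, not from the question). With $n=5$ all elements below $2^5/5=6.4$ force the total sum under $2^5$, so two of the $2^5$ subsets must share a sum, while a genuinely nice set such as the powers of two has all subset sums distinct:

```python
from itertools import combinations

def equal_sum_subsets(s):
    """Return two distinct subsets of s with equal sum, or None."""
    seen = {}
    elems = sorted(s)
    for r in range(len(elems) + 1):
        for sub in combinations(elems, r):
            t = sum(sub)
            if t in seen and seen[t] != sub:
                return seen[t], sub
            seen.setdefault(t, sub)
    return None

# n = 5, all elements < 2^5 / 5 = 6.4: a collision is forced
collision = equal_sum_subsets({1, 2, 3, 4, 6})
assert collision is not None
a, b = collision
assert sum(a) == sum(b) and a != b

# powers of two: every subset sum is distinct, so the set is nice,
# and indeed it contains an element >= 2^5 / 5
assert equal_sum_subsets({1, 2, 4, 8, 16}) is None
assert max({1, 2, 4, 8, 16}) >= 2**5 / 5
```

If the two subsets returned happen to overlap, discarding their common elements leaves disjoint subsets with equal sums, which yields the non-trivial $\pm 1$ solution.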
{ "language": "en", "url": "https://math.stackexchange.com/questions/543446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\cos x+\cos 3x+\cos 5x+\cos 7x=0$, Any quick methods? How to solve the following equation by a quick method? \begin{eqnarray} \\\cos x+\cos 3x+\cos 5x+\cos 7x=0\\ \end{eqnarray} If I solve the equation in the usual way, it takes me a long time. I have typed it into a solution generator to see the steps. One of the steps shows that: \begin{eqnarray} \\\cos x+\cos 3x+\cos 5x+\cos 7x&=&0\\ \\-4\cos x+40\cos ^3x-96\cos ^5x+64\cos ^7x&=&0\\ \end{eqnarray} How can I obtain this form? It seems very quick. Or does such a quick method not exist? Thank you for your attention
Use product identity $2\cos(x)\cos (y)=\cos(x+y)+\cos(x-y)$ So $\cos x+\cos7x=2\cos4x\cos3x$ And $\cos3x+\cos5x=2\cos4x\cos x$. Factoring out $2\cos4x$, we get $2\cos4x (\cos3x+\cos x)=0$. You can solve the 1st factor now right? For the second factor, use the above identity again and you will be done.:)
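A quick numerical confirmation (Python; my own sanity check, not part of the original answer) that the factorization holds identically and agrees with the degree-$7$ polynomial quoted in the question:

```python
import math

for k in range(1, 50):
    x = 0.13 * k  # arbitrary sample points
    lhs = sum(math.cos(m * x) for m in (1, 3, 5, 7))
    # the product-to-sum factorization from the answer
    factored = 2 * math.cos(4 * x) * (math.cos(3 * x) + math.cos(x))
    # applying the identity once more: cos 3x + cos x = 2 cos 2x cos x
    fully = 4 * math.cos(4 * x) * math.cos(2 * x) * math.cos(x)
    # the polynomial in cos x quoted in the question
    c = math.cos(x)
    poly = -4 * c + 40 * c**3 - 96 * c**5 + 64 * c**7
    assert math.isclose(lhs, factored, abs_tol=1e-9)
    assert math.isclose(lhs, fully, abs_tol=1e-9)
    assert math.isclose(lhs, poly, abs_tol=1e-9)

# e.g. cos(4x) = 0 at x = pi/8, so x = pi/8 is a root of the equation
assert math.isclose(sum(math.cos(m * math.pi / 8) for m in (1, 3, 5, 7)),
                    0.0, abs_tol=1e-9)
```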
{ "language": "en", "url": "https://math.stackexchange.com/questions/543506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 0 }
In a non-abelian group, if $C(a)=\langle a\rangle$ then $a\not\in Z(G)$. Suppose $G$ is a non-abelian group and $a∈G$. Prove that if $C(a)= \langle a \rangle$ then $a\not\in Z(G)$. I just don't understand this proof at all. Would someone mind walking me through the entire proof?
Since $C(a) = \langle a\rangle$ is abelian and $G$ is not, $C(a) \neq G$. Now $a\notin Z(G)$, because otherwise by definition $C(a) = G$.
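A concrete instance of this situation (my own addition, checked in Python): in $G=S_3$, for a $3$-cycle $a$ one has $C(a)=\langle a\rangle$, and indeed $a\notin Z(G)$:

```python
from itertools import permutations

def compose(p, q):
    # permutations of {0,1,2} as tuples; (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))        # S_3, a non-abelian group
a = (1, 2, 0)                           # a 3-cycle

centralizer = {g for g in G if compose(g, a) == compose(a, g)}
generated = {(0, 1, 2), a, compose(a, a)}   # <a> = {e, a, a^2}
center = {g for g in G if all(compose(g, h) == compose(h, g) for h in G)}

assert centralizer == generated          # C(a) = <a>
assert a not in center                   # a is not central
assert center == {(0, 1, 2)}             # Z(S_3) is trivial
```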
{ "language": "en", "url": "https://math.stackexchange.com/questions/543567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the order of the sum of log x? Let $$f(n)=\sum_{x=1}^n\log(x)$$ What is $O(f(n))$? I know how to deal with sums of powers of $x$. But how to solve for a sum of logs?
Using Stirling's formula we have, for $n$ sufficiently large, $$ f(n)=\sum_{k=1}^n\log k=\log(n!)\simeq\log(\sqrt{2\pi}e^{-n}n^{n+1/2})=n\log n-n+\tfrac12\log(2\pi n). $$ Hence $$ f(n)=O(n\log n). $$
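Numerically (Python; a check I added, using `math.lgamma` since $\log\Gamma(n+1)=\log(n!)$), Stirling's approximation is extremely accurate and the growth order is $n\log n$:

```python
import math

def stirling(n):
    # log( sqrt(2*pi) * e^{-n} * n^{n + 1/2} )
    return 0.5 * math.log(2 * math.pi) - n + (n + 0.5) * math.log(n)

for n in [10, 100, 1000, 10**5]:
    f = math.lgamma(n + 1)              # exact log(n!)
    # the Stirling error is about 1/(12 n)
    assert abs(f - stirling(n)) < 1 / (10 * n)
    # and log(n!) <= n log n, so f(n) = O(n log n)
    assert f <= n * math.log(n)

# the ratio f(n) / (n log n) tends to 1 (slowly, like 1 - 1/log n)
assert math.lgamma(10**5 + 1) / (10**5 * math.log(10**5)) > 0.9
```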
{ "language": "en", "url": "https://math.stackexchange.com/questions/543634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
factoring cubic polynomial equation using Cramer's rule. 1) I have a question about factoring cubic polynomials. In my note it says: "Any polynomial equation with positive powers whose coefficients add to $0$ will have a root of $1$. Also, if the sum of the coefficients of the even powers equals the sum of the coefficients of the odd powers, $-1$ is a root." However, all of my problems were solved with the factor $(x-1)$ if the coefficients add to $0$, and vice versa. I am totally confused about when to use $1$ and when to use $-1$. Example: $x^4+17x^3-3x+k = 0$, so the root is as follows: a) $1$, $k = -49$; b) $-1$, $k = -12$. 2) My 2nd question is about finding a $3\times3$ determinant: we repeat the 1st and 2nd rows and make 3 crisscross products to the left and to the right. But I have no clue what is meant here: "where $D_{ij}$ = the $(n-1)\times(n-1)$ determinant formed by striking out row $i$ and column $j$." Example: $$\begin{pmatrix} 1 & 3 & 5 & 6 & 8\\ 6 & 7 & 2 & 7 & 1\\ 0 & -1 & 6 & 4 & 10\\ 3 & 4 & 5 & 1 & 6\\ 8 & 7 & 9 & 4 & -2 \end{pmatrix} \qquad D_{ij} = \begin{pmatrix} 6 & 7 & 2 & 1\\ 0 & -1 & 6 & 10\\ 3 & 4 & 5 & 6\\ 8 & 7 & 9 & -2 \end{pmatrix}$$ for $n = 3$, what is the value of $\Delta$? My question is, I do not understand which row and which column are referred to in this. Any explanation would be highly appreciated. Thanks.
If the root is $1$, the polynomial is divisible by $x - 1$; if the root is $-1$, the polynomial is divisible by $x + 1$. Unfortunately I'm getting $k = -15$ for the root of $1$: adding the coefficients we have $1 + 17 - 3 + k = 0$, so $15 + k = 0$ and $k = -15$. I checked it out by actually dividing through by $x - 1$. For the second, the coefficients of the even powers add to $1 + k$. The coefficients of the odd powers add to $17 - 3 = 14$. So $1 + k = 14$ and $k = 13$. Again, I double-checked this with division. The problem I see here is that you were not offered $x = 1$, $k = -15$ or $x = -1$, $k = 13$. Did you give us the wrong polynomial? Start with any matrix. Then $D_{ij}$ is the determinant of the submatrix that you get by crossing out the $i$th row and $j$th column of the original matrix. For example, if $A$ is a $3\times3$ matrix defined as $$\begin{pmatrix} a & b & c\\ d & e & f\\ g & h & i \end{pmatrix}$$ then $D_{23}$ is the determinant of $$\begin{pmatrix} a & b \\ g & h \end{pmatrix}$$ which is $ah-bg$. To get the determinant of the $3\times3$ matrix above you can use $a\cdot D_{11} - b\cdot D_{12} + c\cdot D_{13}$.
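The striking-out operation and the cofactor expansion $a\cdot D_{11} - b\cdot D_{12} + c\cdot D_{13}$ can be sketched in code (Python; the numeric example matrix is my own, not from the question):

```python
def minor(matrix, i, j):
    """Submatrix with row i and column j struck out (1-indexed)."""
    return [[row[c] for c in range(len(row)) if c != j - 1]
            for r, row in enumerate(matrix) if r != i - 1]

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 1, j + 1))
               for j in range(len(m)))

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]

# D_23: strike out row 2 and column 3
assert minor(M, 2, 3) == [[1, 2], [7, 8]]

# a*D_11 - b*D_12 + c*D_13 reproduces the determinant
a, b, c = M[0]
expansion = (a * det(minor(M, 1, 1))
             - b * det(minor(M, 1, 2))
             + c * det(minor(M, 1, 3)))
assert expansion == det(M) == -3
```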
{ "language": "en", "url": "https://math.stackexchange.com/questions/543700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Maximum Likelihood Find the maximum likelihood estimator of $f(x|\theta) = \frac{1}{2}e^{(-|x-\theta|\:)}$, $-\infty < x < \infty$ ; $-\infty < \theta < \infty$. I am confused about how to deal with the absolute value here.
The likelihood function $$L(x;\theta)=\prod\limits_{i=1}^n\frac{1}{2}e^{-|x_i-\theta|}=\frac{1}{2^n}e^{-\sum\limits_{i=1}^n|x_i-\theta|}$$ $$\ln L(x;\theta)=-n\ln2-\sum_{i=1}^{n}|x_i-\theta|$$ $$\frac{\partial\ln L(x;\theta)}{\partial\theta}=\sum_{i=1}^n \text{sign } (x_i-\theta)$$ because $|x|'=\text{sign }x,x\ne0$. So maximizing $\ln L$ amounts to minimizing $\sum_{i=1}^n|x_i-\theta|$, which happens when as many of the $x_i$ lie above $\theta$ as below it: the MLE is the sample median, $\hat\theta=\operatorname{median}(x_1,\dots,x_n)$.
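The derivative changes sign at the sample median, so the log-likelihood is maximized there — here is a quick Python check with a made-up sample (my own illustration):

```python
import math
import statistics

data = [0.5, 1.2, 2.2, 3.4, 7.8]            # made-up sample, n odd

def loglik(theta):
    # ln L = -n ln 2 - sum |x_i - theta|
    return -len(data) * math.log(2) - sum(abs(x - theta) for x in data)

theta_hat = statistics.median(data)          # = 2.2
grid = [t / 100 for t in range(-200, 1001)]  # candidate thetas
assert all(loglik(theta_hat) >= loglik(t) for t in grid)
assert theta_hat == 2.2
```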
{ "language": "en", "url": "https://math.stackexchange.com/questions/543765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof by cases, inequality I have the following exercise: For all real numbers $x$, if $x^2 - 5x + 4 \ge 0$, then either $x \leq 1$ or $x \geq 4$. I need help identifying the cases and an explanation of how to approach them. Please don't solve it for me.
HINT: If $(x-a)(x-b)\ge0$ Now the product of two terms is $\ge0$ So, either both $\ge0$ or both $\le0$ Now in either case, find the intersection of the ranges of $x$
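Since $x^2-5x+4=(x-1)(x-4)$, the equivalence can also be spot-checked on a grid of sample points (Python; my own verification, not part of the hint):

```python
def claim_holds(x):
    lhs = x * x - 5 * x + 4 >= 0          # (x - 1)(x - 4) >= 0
    rhs = x <= 1 or x >= 4
    return lhs == rhs

# check the equivalence on a grid, including the endpoints 1 and 4
assert all(claim_holds(k / 10) for k in range(-100, 200))
assert claim_holds(1.0) and claim_holds(4.0)
```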
{ "language": "en", "url": "https://math.stackexchange.com/questions/543831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show there is a closed interval $[a, b]$ such that the function $f(x) = |x|^{\frac1{2}}$ is continuous but not Lipschitz on $[a, b]$. Hi guys, I was given this as an "exercise" in my calculus class and we weren't told what a Lipschitz function is, so I really need some help. Here's the question again: Show there is a closed interval $[a, b]$ such that the function $f(x) = |x|^{\frac1{2}}$ is continuous but not Lipschitz on $[a, b]$.
Consider the interval $[0,1]$, then clearly $f(x) = x^{1/2}$ is continuous. Can you show that $f$ is not Lipschitz on $[0,1]$? (Hint: Use the Mean-Value theorem on a closed sub-interval of $(0,1/n]$)
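The failure of any Lipschitz bound near $0$ can also be seen numerically (Python; my own illustration, separate from the Mean-Value-theorem argument): the difference quotient $\frac{\sqrt x-\sqrt 0}{x-0}=x^{-1/2}$ eventually exceeds any proposed constant $K$:

```python
import math

def quotient(x):
    # |f(x) - f(0)| / |x - 0| for f(x) = sqrt(x), x > 0
    return math.sqrt(x) / x

# the quotient behaves like x^{-1/2} and blows up as x -> 0+
for k in range(1, 5):
    q = quotient(10.0 ** (-2 * k))
    assert math.isclose(q, 10.0 ** k, rel_tol=1e-9)

# so no Lipschitz constant K can work on [0, 1]
for K in [1, 10, 500, 10**6]:
    x = 1.0 / (2 * K) ** 2            # then quotient(x) = 2K > K
    assert 0 < x <= 1 and quotient(x) > K
```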
{ "language": "en", "url": "https://math.stackexchange.com/questions/543936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Exponential of the matrix I want to calculate the matrix exponential $e^{tA}$ of the matrix $A$ whose first row is $(0,1)$ and whose second row is $(-1,0)$. It would be sufficient if you would show me the most important steps.
First, expand the exponential in its Taylor series. Then work out what happens to the matrix when it is raised to the power $n$. The last step is to sum up all the matrices and recognize, in the entries of the resulting matrix, the Taylor series of some familiar functions.
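Carrying this recipe out in code (Python; a sketch I added): for $A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ one finds $A^2=-I$, so the even and odd parts of the series give $\cos t$ and $\sin t$, and the truncated Taylor sum matches the standard rotation matrix $\begin{pmatrix}\cos t&\sin t\\-\sin t&\cos t\end{pmatrix}$:

```python
import math

def mat_mult(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    # truncated Taylor series: sum_k (tA)^k / k!
    result = [[0.0, 0.0], [0.0, 0.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]   # (tA)^0 = I
    tA = [[t * a for a in row] for row in A]
    for k in range(terms):
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / math.factorial(k)
        power = mat_mult(power, tA)    # now (tA)^{k+1}
    return result

A = [[0, 1], [-1, 0]]
assert mat_mult(A, A) == [[-1, 0], [0, -1]]     # A^2 = -I

t = 0.7
E = expm(A, t)
expected = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
assert all(math.isclose(E[i][j], expected[i][j], abs_tol=1e-12)
           for i in range(2) for j in range(2))
```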
{ "language": "en", "url": "https://math.stackexchange.com/questions/543992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
$(A\cap B)\cup C = A \cap (B\cup C)$ if and only if $C \subset A$ I have a set identity: $(A \cap B) \cup C = A \cap (B \cup C)$ if and only if $C \subset A$. I started with Venn diagrams, and here is the result (diagram omitted): it is evident that the set identity is correct. So I started to prove it algebraically: 1) According to the distributive law: $(A \cap B) \cup C = (A \cup C) \cap (B \cup C)$ 2) ... I got stuck a little, because $C$ is a subset of $A$. I thought of pulling out $(B \cup C)$, but it seems like a wrong step to me. How do I prove this identity, keeping in mind that $C \subset A$? (Updated Venn diagram for $C ⊈ A$ omitted.)
Here is a full algebraic proof. Let's first expand the definitions: \begin{align} & (A \cap B) \cup C = A \cap (B \cup C) \\ \equiv & \;\;\;\;\;\text{"set extensionality"} \\ & \langle \forall x :: x \in (A \cap B) \cup C \;\equiv\; x \in A \cap (B \cup C) \rangle \\ \equiv & \;\;\;\;\;\text{"definition of $\;\cap\;$ and $\;\cup\;$, both twice"} \\ & \langle \forall x :: (x \in A \land x \in B) \lor x \in C \;\equiv\; x \in A \land (x \in B \lor x \in C) \rangle \\ \end{align} Now we have a choice to make: do we distribute $\;\lor\;$ over $\;\land\;$ in the left hand side, or $\;\land\;$ over $\;\lor\;$ in the right hand side? Since this expression is completely symmetric, we arbitrarily choose to distribute on the left hand side, and continue our logical simplification after that: \begin{align} \equiv & \;\;\;\;\;\text{"distribute $\;\lor\;$ over $\;\land\;$ on left hand side"} \\ & \langle \forall x :: (x \in A \lor x \in C) \land (x \in B \lor x \in C) \;\equiv\; x \in A \land (x \in B \lor x \in C) \rangle \\ \equiv & \;\;\;\;\;\text{"extract common conjunct out of $\;\equiv\;$"} \\ & \langle \forall x :: x \in B \lor x \in C \;\Rightarrow\; (x \in A \lor x \in C \equiv x \in A) \rangle \\ \equiv & \;\;\;\;\;\text{"simplify one way of rewriting $\;\Rightarrow\;$"} \\ & \langle \forall x :: x \in B \lor x \in C \;\Rightarrow\; (x \in C \Rightarrow x \in A) \rangle \\ \equiv & \;\;\;\;\;\text{"combine both antecedents"} \\ & \langle \forall x :: (x \in B \lor x \in C) \land x \in C \;\Rightarrow\; x \in A \rangle \\ \equiv & \;\;\;\;\;\text{"simplify antecedent: use $\;x \in C\;$ in other side of $\;\land\;$"} \\ & \langle \forall x :: x \in C \;\Rightarrow\; x \in A \rangle \\ \end{align} Now we only have to wrap up: \begin{align} \equiv & \;\;\;\;\;\text{"definition of $\;\subseteq\;$"} \\ & C \subseteq A \\ \end{align}
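Since the identity involves only finitely many membership patterns, the "if and only if" can also be verified exhaustively over a small universe (a Python check I added):

```python
from itertools import chain, combinations

universe = {0, 1, 2, 3}
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(universe, r) for r in range(len(universe) + 1))]

for A in subsets:
    for B in subsets:
        for C in subsets:
            lhs = (A & B) | C
            rhs = A & (B | C)
            # equality holds exactly when C is a subset of A
            assert (lhs == rhs) == (C <= A)
```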
{ "language": "en", "url": "https://math.stackexchange.com/questions/544071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
If $R,S$ are reflexive relations, so are $R \oplus S$ and $R \setminus S$? Suppose $R$ and $S$ are reflexive relations on a set $A$. Prove or disprove each of these statements. a) $R\oplus S$ is reflexive. b) $R\setminus S$ is reflexive. I think both of a) and b) are false, but I'm having trouble with coming up with counterexamples.
Hint: What does it mean to say $(x,x) \in R \oplus S$, respectively $(x,x) \in R \setminus S$? Can both $R$ and $S$ be reflexive if this is the case? The above amounts to a proof by contradiction. But we can avoid this; for example, by the following argument: Let $x \in A$. We know that $(x,x)\in R$ and $(x,x) \in S$. Therefore, by definition of $R \oplus S$ (resp. $R \setminus S$), $(x,x)\notin R \oplus S$. Hence $R \oplus S$ is not reflexive.
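A concrete demonstration of the argument (Python; the small relations are my own example): for reflexive $R,S$ on $A$, every diagonal pair $(x,x)$ drops out of both $R\oplus S$ and $R\setminus S$:

```python
A = {1, 2, 3}
diagonal = {(x, x) for x in A}

R = diagonal | {(1, 2), (2, 3)}       # reflexive
S = diagonal | {(3, 1)}               # reflexive

sym_diff = (R - S) | (S - R)          # R ⊕ S
difference = R - S                    # R \ S

# neither contains any (x, x), so neither is reflexive
assert sym_diff & diagonal == set()
assert difference & diagonal == set()
assert sym_diff == {(1, 2), (2, 3), (3, 1)}
```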
{ "language": "en", "url": "https://math.stackexchange.com/questions/544142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finite Sum $\sum\limits_{k=1}^{m-1}\frac{1}{\sin^2\frac{k\pi}{m}}$ Question : Is the following true for any $m\in\mathbb N$? $$\begin{align}\sum_{k=1}^{m-1}\frac{1}{\sin^2\frac{k\pi}{m}}=\frac{m^2-1}{3}\qquad(\star)\end{align}$$ Motivation : I reached $(\star)$ by using a computer. It seems true, but I can't prove it. Can anyone help? By the way, I've been able to prove $\sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{{\pi}^2}{6}$ by using $(\star)$. Proof : Let $$f(x)=\frac{1}{\sin^2x}-\frac{1}{x^2}=\frac{(x-\sin x)(x+\sin x)}{x^2\sin^2 x}.$$ We know that $f(x)\gt0$ if $0\lt x\le {\pi}/{2}$, and that $\lim_{x\to 0}f(x)=1/3$. Hence, letting $f(0)=1/3$, we know that $f(x)$ is continuous and positive at $x=0$. Hence, since $f(x)\ (0\le x\le {\pi}/2)$ is bounded, there exists a constant $C$ such that $0\lt f(x)\lt C$. Hence, substituting $x={(k\pi)}/{(2n+1)}$ for this, we get $$0\lt \frac{1}{\frac{(2n+1)^2}{{\pi}^2}\sin^2\frac{k\pi}{2n+1}}-\frac{1}{k^2}\lt\frac{{\pi}^2C}{(2n+1)^2}.$$ Then, the sum of these from $1$ to $n$ satisfies $$0\lt\frac{{\pi}^2\cdot 2n(n+1)}{(2n+1)^2\cdot 3}-\sum_{k=1}^{n}\frac{1}{k^2}\lt\frac{{\pi}^2Cn}{(2n+1)^2}.$$ Here, we used $(\star)$ with $m=2n+1$, together with the symmetry $\sin\frac{k\pi}{2n+1}=\sin\frac{(2n+1-k)\pi}{2n+1}$. Then, considering $n\to\infty$ leads to what we desired.
(First time I write in a math blog, so forgive me if my contribution ends up being useless) I recently bumped into this same identity while working on Fourier transforms. By these means you can show in fact that $$\sum_{k=-\infty}^{+\infty}\frac{1}{(x-k)^2}=\frac{\pi^2}{\sin^2 (\pi x)}.\tag{*}\label{*}$$ Letting $x=\frac13$ yields $$\begin{align}\sum_{k=-\infty}^{+\infty}\frac{1}{(3k-1)^2}&= \sum_{k=0}^{+\infty}\left[\frac{1}{(3k+1)^2}+ \frac{1}{(3k+2)^2} \right]=\\ &=\sum_{k=1}^{+\infty} \frac{1}{k^2} - \sum_{k=1}^{+\infty}\frac{1}{(3k)^2}=\\ &=\frac{8}{9}\sum_{k=1}^{+\infty} \frac{1}{k^2}\\ &\stackrel{\eqref{*}}= \frac{4\pi^2}{27}\end{align},$$ which in the end gives $$\sum_{k=1}^{+\infty} \frac{1}{k^2} = \frac{\pi^2}{6}.\tag{**}\label{**}$$ Now, back to \eqref{*}, we can generalize and take $x=m/n$, with $n$ and $m$ positive integers, obtaining $$\sum_{k=0}^{+\infty}\left[\frac{1}{(nk+m)^2} + \frac{1}{(nk+n-m)^2}\right]=\frac{\pi^2}{n^2 \sin^2\left(\frac{\pi m}{n}\right)}.$$ Finally taking the sum of LHS and RHS of last equation for $m=1,2,\dots,n-1$ yields $$\begin{align}\sum_{m=1}^{n-1}\sum_{k=0}^{+\infty}\left[\frac{1}{(nk+m)^2} + \frac{1}{(nk+n-m)^2}\right]&=\frac{\pi^2}{n^2} \sum_{m=1}^{n-1}\frac{1}{\sin^2\left(\frac{\pi m}{n}\right)}\\ \sum_{k=0}^{+\infty}\sum_{m=1}^{n-1}\left[\frac{1}{(nk+m)^2} + \frac{1}{(nk+n-m)^2}\right]&=\frac{\pi^2}{n^2} \sum_{m=1}^{n-1}\frac{1}{\sin^2\left(\frac{\pi m}{n}\right)}\\ 2\sum_{k=1}^{+\infty}\frac{1}{k^2} - 2\sum_{k=1}^{+\infty}\frac{1}{(nk)^2} &=\frac{\pi^2}{n^2} \sum_{m=1}^{n-1}\frac{1}{\sin^2\left(\frac{\pi m}{n}\right)}\\ \frac{2(n^2-1)}{n^2}\sum_{k=1}^{+\infty}\frac{1}{k^2} &=\frac{\pi^2}{n^2} \sum_{m=1}^{n-1}\frac{1}{\sin^2\left(\frac{\pi m}{n}\right)}. \end{align}$$ Using then \eqref{**} leads to the result $$\sum_{m=1}^{n-1} \frac{1}{\sin^2\left(\frac{\pi m}{n}\right)} = \frac{n^2-1}{3}.$$
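Both the finite identity $(\star)$ and the limiting value $\zeta(2)=\pi^2/6$ are easy to confirm numerically (Python; my own check, added for completeness):

```python
import math

# the finite identity: sum_{k=1}^{m-1} 1/sin^2(k*pi/m) = (m^2 - 1)/3
for m in range(2, 40):
    s = sum(1 / math.sin(k * math.pi / m) ** 2 for k in range(1, m))
    assert math.isclose(s, (m * m - 1) / 3, rel_tol=1e-9)

# partial sums of 1/k^2 approach pi^2/6 (the tail beyond N is about 1/N)
partial = sum(1 / k**2 for k in range(1, 100001))
assert abs(partial - math.pi**2 / 6) < 1e-4
```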
{ "language": "en", "url": "https://math.stackexchange.com/questions/544228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49", "answer_count": 8, "answer_id": 1 }
How many ways can 8 children facing each other in a circle change seats so that each faces a different child. Need some help with this problem. A carousel has eight seats each representing a different animal. Eight children are seated on the carousel but facing inward, so each child is staring at another. In how many ways can they change seats so that each faces a different child. Was thinking P(8,8) for the total positions, but not sure where to go from there.
I calculate 23040 ways. The way I see the problem, we have to assign 4 pairs of children (sitting opposite to each other) to 4 distinct slots. First let us calculate the number of ways to seat the children if we fix the pairs of opposite children. The pairs can be assigned in $4! = 24$ ways to the slots. Since each of the pairs can be flipped, we have to multiply by $2^4 = 16$. So for fixed pairs of children, we have $4!*2^4 = 384$ ways to place them. Now let us see how many possibilities there are to pair the children. I will call the children a,b,c,d,e,f,g,h from here on. Let us assume that the start configuration is $(a,b),(c,d),(e,f),(g,h)$. So we are looking for the number of all such sets of four pairs in which none of the above is included. There are $6*5 = 30$ ways to build pairs of the form $(a,x),(b,y)$ with $x \in \{c,d,e,f,g,h\}, y \in \{c,d,e,f,g,h\} \setminus \{x\}$. Now there are four children left, and at least two of them are included in a forbidden pair, so they cannot be matched, which leaves only $2$ possibilities to match the other two pairs. So the number of desired sets is $6*5*2 = 60$. In total we get $384 * 60 = 23040$ ways to seat the children in the desired way. Question: is there a general combinatorial formula which could be used to get to the 60?
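The count can be confirmed by brute force over all $8!$ seatings (Python; my own verification, labelling children by their starting seat so the original facing pairs are $\{i,\,i+4\}$):

```python
from itertools import permutations

original_pairs = {frozenset({i, i + 4}) for i in range(4)}

count = 0
for seating in permutations(range(8)):   # seating[seat] = child
    # opposite seats are i and i+4; nobody may face their old partner
    if all(frozenset({seating[i], seating[i + 4]}) not in original_pairs
           for i in range(4)):
        count += 1

assert count == 23040                     # = 60 * 4! * 2^4
```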
{ "language": "en", "url": "https://math.stackexchange.com/questions/544317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cohomological definition of the Chow ring Let $X$ be a smooth projective variety over a field $k$. One can define the Chow ring $A^\bullet(X)$ to be the free group generated by irreducible subvarieties, modulo rational equivalence. Multiplication comes from intersection. The problem is, verifying that everything is well-defined is quite a pain. My question is Is there a definition of $A^\bullet(X)$ that is purely cohomological? In other words, is it possible to give a definition of $A^\bullet(X)$ that works for any ringed space? Probably the fact that this definition is equivalent to the usual one will be a theorem of some substance. Note: I am not claiming that the usual definition of the Chow ring is "bad" in any way. I just think it would be nice to know if there was a "high-level" approach.
Another result I just ran across. In Hulsbergen's book Conjectures in arithmetic algebraic geometry he mentions the following theorem "of Grothendieck." For a general ringed space, let $$ K_0(X)^{(n)} = \{x\in K_0(X)_{\mathbf Q}:\psi^r(x) = r^n x\text{ for all }r\geqslant 1\} $$ where $\psi^r$ is the $r$-th Adams operation. The theorem states: If $X$ is a smooth variety over a field $k$, then $\operatorname{A}^n(X)_{\mathbf Q}\simeq K_0(X)^{(n)}$. Unfortunately, besides the obvious inference that this result is somewhere in SGA 6, no specific reference is given.
{ "language": "en", "url": "https://math.stackexchange.com/questions/544383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Completely baffled by this question involving putting matrices in matrices This is homework, so only hints please. Let $A\in M_{m\times m}(\mathbb{R})$ , $B\in M_{n\times n}(\mathbb{R})$ . Suppose there exist orthogonal matrices $P$ and $Q$ such that $P^{T}AP$ and $Q^{T}BQ$ are upper triangular. Let $C$ be any $m\times n$ matrix. Then show that there is an orthogonal $R$ , not depending on $C$ , such that $$R^{T} \begin{bmatrix}A & C\\ 0 & B \end{bmatrix}R$$ is upper triangular. I haven't done any proofs before using matrices containing matrices in them. I have absolutely no idea where to begin. Any ideas?
Take $$R = \operatorname{diag}(P,Q) = \begin{bmatrix} P \\ & Q \end{bmatrix},$$ and see what happens. Ask if you get stuck.
{ "language": "en", "url": "https://math.stackexchange.com/questions/544477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Quotient is principal Let $R$ be a finite commutative ring, let $J$ be a maximal ideal of $R$ and $n$ some positive integer greater or equal than $2$. Is it always true that every ideal of the quotient $R/J^{n}$ is principal?
No. For instance if $R$ is local then $J^n = 0$ for sufficiently large $n$ and then you are just asking whether a finite, local commutative ring must be principal. The answer is certainly not. Examples have come up before on this site. One natural one is $R = \mathbb{F}_p[x,y]/\langle x,y \rangle^2$. This is somehow especially instructive: the maximal ideal $\mathfrak{m} = \langle x,y \rangle$ is not monogenic since $\operatorname{dim}_{R/\mathfrak{m}} \mathfrak{m}/\mathfrak{m}^2 = \operatorname{dim}_{\mathbb{F}_p} \mathfrak{m} = 2$. I felt I understood Zariski tangent spaces a bit better after thinking about this example. Another simple example is $\mathbb{Z}[\sqrt{-3}]/\langle 2 \rangle$. More generally, if $R$ is any nonmaximal order in a ring of integers in a number field, then there will be some ideal $I$ of $R$ such that $R/I$ is finite and nonprincipal. Added: Over the summer I had the occasion to think about finite non/principal rings. For more information about when finite local rings are principal, and especially the connection to residue rings of orders in number fields, see $\S$ 1 of this paper.
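The example $R=\mathbb{F}_p[x,y]/\langle x,y\rangle^2$ can be checked by direct enumeration for $p=2$ (Python; the encoding of elements as triples $(a,b,c)$ for $a+bx+cy$, with $x^2=xy=y^2=0$, is my own):

```python
from itertools import product

p = 2
R = list(product(range(p), repeat=3))          # (a, b, c) <-> a + b x + c y

def mult(u, v):
    a1, b1, c1 = u
    a2, b2, c2 = v
    # all degree-2 terms vanish in R
    return (a1 * a2 % p, (a1 * b2 + a2 * b1) % p, (a1 * c2 + a2 * c1) % p)

maximal = {r for r in R if r[0] == 0}          # m = <x, y>, 4 elements

# every principal ideal (g) = {r g : r in R}; none of them equals m
principal_ideals = [{mult(r, g) for r in R} for g in R]
assert len(maximal) == 4
assert all(ideal != maximal for ideal in principal_ideals)

# yet m is generated by the two elements x and y together
x, y = (0, 1, 0), (0, 0, 1)
assert {tuple((mult(r, x)[i] + mult(s, y)[i]) % p for i in range(3))
        for r in R for s in R} == maximal
```

This matches the tangent-space count: $\dim_{R/\mathfrak m}\mathfrak m/\mathfrak m^2=2$, so one generator can never suffice.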
{ "language": "en", "url": "https://math.stackexchange.com/questions/544550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Upper bound for $(1-1/x)^x$ I remember the bound $$\left(1-\frac1x\right)^x\leq e^{-1}$$ but I can't recall under which condition it holds, or how to prove it. Does it hold for all $x>0$?
Starting from $e^x \geq 1+x$ for all $x \in \mathbb{R}$: For all $x \in \mathbb{R}$ $$ e^{-x} \geq 1-x. $$ For all $x \neq 0$ $$ e^{-1/x} \geq 1-\frac{1}{x}. $$ And, since $t \mapsto t^x$ is increasing on $[0,\infty)$, for $x \geq 1$ $$ e^{-1} \geq \left(1-\frac{1}{x}\right)^x. $$
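Numerically (Python; my own check of the result above), the bound holds for all $x\ge 1$, and $(1-1/x)^x$ increases toward $e^{-1}$:

```python
import math

def g(x):
    return (1 - 1 / x) ** x

xs = [1, 1.5, 2, 5, 10, 100, 10**4, 10**6]
vals = [g(x) for x in xs]

# the bound (1 - 1/x)^x <= 1/e for x >= 1
assert all(v <= math.exp(-1) for v in vals)

# the values increase toward 1/e
assert all(a < b for a, b in zip(vals, vals[1:]))
assert math.isclose(g(10**6), math.exp(-1), rel_tol=1e-5)
```

For $0<x<1$ the base $1-1/x$ is negative, so the expression is not even real there — consistent with the restriction $x\ge 1$ in the answer.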
{ "language": "en", "url": "https://math.stackexchange.com/questions/544667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Arithmetic sequence in a Lebesgue measurable set Let $A\subseteq[a,b]$ be Lebesgue measurable, such that $m(A)>\frac{2n-1}{2n}(b-a)$. I need to show that $A$ contains an arithmetic progression of $n$ numbers ($a_1, a_1+d, \ldots, a_1+(n-1)d$ for some $d$). I thought about dividing $[a,b]$ into $n$ equal parts, and showing that if I put one part on top of the other, there must be at least one overlapping point that occurs in every part, but I haven't succeeded in showing that. Thank you.
Hint: You are on the right track. Have you noticed that the length of each of your sub-intervals is $\frac{b-a}{n}$, while the total length of all the missing pieces is only $\frac{b-a}{2n}$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/544763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
What is a convex optimisation problem? Objective function convex, domain convex or codomain convex? My teacher in the course Mat-2.3139 did not want to answer this question because it would take too much time. So what does a convex optimisation problem actually mean? Convex objective function? Convex domain or convex codomain? Or something else?
I am not yet sure whether it is a general term for all kinds of "something-convex" problems or a specific term for certain mathematical problems. It could be both: some people, like your teacher, may decide to use it as a general term for anything with "something-convex" in it, while others stick to a precise interpretation. I prefer the latter. An appeal to authority: Convex Optimization by Boyd and Vandenberghe has $17908$ citations in Google Scholar, and says this: [quotation omitted]
{ "language": "en", "url": "https://math.stackexchange.com/questions/544847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Example involving Uniform Continuity Question: Could someone give an example of a sequence of uniformly continuous real-valued functions on the reals such that they converge point-wise to a function that is continuous but not uniformly continuous. My attempt so far: I managed to prove this is true in the case of uniform convergence, so I'm convinced there is an example. I considered triangles, symmetrical about the $y$-axis, that got taller and closer; however, this converges point-wise to $0$, which is uniformly continuous.
I'm not convinced this is the simplest example (I certainly wouldn't want to make it explicit), but it was fun :) Lines are uniformly continuous and quadratics are not. Also, a continuous function is uniformly continuous on a compact domain. In addition, it's not too hard to show that if you glue two uniformly continuous functions together, the result is uniformly continuous. Now create a function that approximates a quadratic by being identical near the vertex and, at some point on each side, breaking off into the tangent lines. Then consider a sequence of such functions whose "breakoff points" have $x$-values that go to $\pm\infty$; these satisfy the premises and give the conclusion, so you're good.
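To make the construction concrete (Python; the explicit formula below is my reading of the answer): let $f_n(x)=x^2$ for $|x|\le n$ and continue with the tangent lines $f_n(x)=2n|x|-n^2$ beyond, so each $f_n$ is Lipschitz with constant $2n$ (hence uniformly continuous), while the pointwise limit is $x^2$:

```python
def f(n, x):
    if abs(x) <= n:
        return x * x
    return 2 * n * abs(x) - n * n     # tangent line at x = ±n

# the two pieces agree at the break points, so f_n is continuous
assert f(3, 3) == 9 == 2 * 3 * 3 - 9

# difference quotients never exceed 2n: check on a grid for n = 2
n = 2
pts = [k / 10 for k in range(-100, 101)]
quots = [abs(f(n, a) - f(n, b)) / abs(a - b)
         for a in pts for b in pts if a != b]
assert max(quots) <= 2 * n + 1e-9

# pointwise convergence to x^2: for fixed x, f_n(x) = x^2 once n >= |x|
for x in [0.0, 1.5, -4.0, 7.25]:
    assert all(f(m, x) == x * x for m in range(8, 12))
```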
{ "language": "en", "url": "https://math.stackexchange.com/questions/544941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Assignment: determining sets are bases of $\mathbb{R}^3$ This is question from an assignment I'm working on: Which two of the following three sets in $\mathbb{R}^3$ is a basis of $\mathbb{R}^3$? \begin{align*} B_1&=\{(1,0,1),(6,4,5),(-4,-4,7)\}\\ B_2&=\{(2,1,3),(3,1,-3),(1,1,9)\}\\ B_3&=\{(3,-1,2),(5,1,1),(1,1,1)\} \end{align*} Thanks to software, I know that the answer is $B_1$ and $B_3$, as $B_2$ the only linearly dependent set of the three - and in this case, an LD set can't be a basis for $\mathbb{R}^3$. Manually, I've put all three sets in RREF, and all three can be reduced to $$ \left[\begin{array}{ccc|c} {1}&{0}&{0}&{0}\\ {0}&{1}&{0}&{0}\\ {0}&{0}&{1}&{0} \end{array}\right] $$ This also checks out when computed by software. Since all three sets reduce to the same RREF, how can I prove that these sets are linearly (in)dependent?
Let $V=\{v_1,v_2,\ldots,v_n\}$ be a set of $n$ vectors in $\mathbb{R}^n$. Then $V$ is linearly independent iff the matrix $A=(v_1,v_2,\ldots,v_n)$ is invertible, where $v_i$ is the $i$th column of $A$. Proving this result is an easy exercise.
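Applying the determinant criterion directly to the three sets from the question (Python; a check I added, which confirms that only $B_2$ is dependent):

```python
def det3(u, v, w):
    # determinant of the 3x3 matrix with rows u, v, w
    (a, b, c), (d, e, f), (g, h, i) = u, v, w
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

B1 = [(1, 0, 1), (6, 4, 5), (-4, -4, 7)]
B2 = [(2, 1, 3), (3, 1, -3), (1, 1, 9)]
B3 = [(3, -1, 2), (5, 1, 1), (1, 1, 1)]

assert det3(*B1) == 40      # nonzero: B1 is a basis
assert det3(*B2) == 0       # zero: B2 is linearly dependent
assert det3(*B3) == 12      # nonzero: B3 is a basis
```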
{ "language": "en", "url": "https://math.stackexchange.com/questions/545029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Three Variables-Inequality with $a+b+c=abc$ $a$,$b$,$c$ are positive numbers such that $~a+b+c=abc$ Find the maximum value of $~\dfrac{1}{\sqrt{1+a^{2}}}+\dfrac{1}{\sqrt{1+b^{2}}}+\dfrac{1}{\sqrt{1+c^{2}}}$
Trigonometric substitution looks good for this, especially if you know sum of cosines of angles in a triangle are $\le \frac32$. However if you want an alternate way... Let $a = \frac1x, b = \frac1y, c = \frac1z$. Then we need to find the maximum of $$F = \sum \frac{x}{\sqrt{x^2+1}}$$ with the condition now as $xy + yz + zx = 1$. Using this condition to homogenise, we have $$\begin{align} F &= \sum \frac{x}{\sqrt{x^2 + xy + yz + zx}} \\ &= \sum \frac{x}{\sqrt{(x+y)(x+z)}} \\ &= \sum \frac{x\sqrt{(x+y)(x+z)}}{(x+y)(x+z)} \\ &\le \sum \frac{x\left((x+y)+(x+z)\right)}{2\cdot(x+y)(x+z)} \quad \text{by AM-GM}\\ &= \frac12 \sum \left(\frac{x}{x+y}+\frac{x}{x+z} \right) \\ &= \frac12 \sum \left(\frac{x}{x+y}+\frac{y}{y+x} \right) \quad \text{rearranging terms across sums}\\ &= \frac32 \end{align}$$
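Numerically (Python; my own spot check), the constrained expression never exceeds $\tfrac32$, and the bound is attained at $a=b=c=\sqrt3$ — the equilateral configuration, consistent with the triangle-cosine remark above:

```python
import math
import random

def value(a, b, c):
    return sum(1 / math.sqrt(1 + t * t) for t in (a, b, c))

# equality case: a = b = c = sqrt(3) satisfies a + b + c = abc
s = math.sqrt(3)
assert math.isclose(3 * s, s**3, rel_tol=1e-12)
assert math.isclose(value(s, s, s), 1.5, rel_tol=1e-12)

# random points on the constraint surface: c = (a+b)/(ab-1), needs ab > 1
random.seed(0)
for _ in range(2000):
    a = random.uniform(0.5, 5)
    b = random.uniform(0.5, 5)
    if a * b <= 1.001:
        continue
    c = (a + b) / (a * b - 1)
    assert math.isclose(a + b + c, a * b * c, rel_tol=1e-9)
    assert value(a, b, c) <= 1.5 + 1e-9
```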
{ "language": "en", "url": "https://math.stackexchange.com/questions/545107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Braid Groups Mapped to Symmetric Groups How can I construct five elements, in terms of the braid generators $\sigma_1, \sigma_2$, that are in the kernel of the homomorphism from the braid group on three strands to the symmetric group on three letters? I tried, but my braids keep turning out to be the identity braid.
By definition, the kernel of the canonical homomorphism $B_n\to S_n$ is the pure braid group on $n$ letters $P_n$, which is the group of all braids whose strings start and end on the same point of the disk (they don't permute the end points of the strings). Given this, you should be able to see that elements like $\sigma_i^2$ or $(\sigma_1\sigma_2\sigma_1\sigma_2)^3$ are mapped to the trivial permutation because they do not permute the ends of the strings. The best way to create elements in the pure braid group is to pick an arbitrary non-trivial element $g\in B_n$, see what element $g$ gets mapped to under the canonical homomorphism to $S_n$ and then, because $S_n$ is finite, the image of $g$ must have finite order, so taking some power $g^k$ will give an element of the pure braid group. Because $B_n$ has no torsion elements, $g^k$ will never be trivial when $g$ is non-trivial.
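Under the canonical homomorphism $B_3\to S_3$, $\sigma_1\mapsto(1\;2)$ and $\sigma_2\mapsto(2\;3)$. Checking images in code (Python; my own sketch, and the list of five words is my choice) gives five braid words in the kernel, answering the original request — each is a nontrivial braid since $B_3$ is torsion-free:

```python
def compose(p, q):
    # permutations of {0,1,2} as tuples; (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

IDENT = (0, 1, 2)
s1 = (1, 0, 2)   # image of sigma_1: swap strands 1, 2
s2 = (0, 2, 1)   # image of sigma_2: swap strands 2, 3

def image(word):
    """Image in S_3 of a braid word, e.g. [1, 1] for sigma_1^2."""
    p = IDENT
    for g in word:
        p = compose(s1 if g == 1 else s2, p)
    return p

kernel_words = [
    [1, 1],                  # sigma_1^2
    [2, 2],                  # sigma_2^2
    [1, 2, 1, 2, 1, 2],      # (sigma_1 sigma_2)^3
    [2, 1, 2, 1, 2, 1],      # (sigma_2 sigma_1)^3
    [1, 1, 2, 2],            # sigma_1^2 sigma_2^2
]
assert all(image(w) == IDENT for w in kernel_words)

# but sigma_1 sigma_2 itself permutes the endpoints (a 3-cycle)
assert image([1, 2]) != IDENT
```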
{ "language": "en", "url": "https://math.stackexchange.com/questions/545184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Stochastic differential I'm really new to stochastic processes, so please help me. How can I solve this stochastic differential equation? $$dX = A(t)Xdt$$ $$X(0) = X_0$$ Here $A\colon[0,\infty)\to \mathbb{R}$ is continuous and $X_0$ is a real random variable.
The (deterministic) ODE $$\frac{dx(t)}{dt} = A(t) \cdot x(t) $$ can be solved by transforming it into $$\log x(t)-c = \int^t \frac{1}{x(s)} dx(s) = \int^t A(s) \, ds$$ The same approach applies to this stochastic differential equation. We use Itô's formula for $f(x) := \log x$ and the Itô process $X_t$ and obtain $$\log X_t - \log x_0 = \int_0^t \frac{1}{X_s} \, dX_s = \int_0^t A(s) \, ds$$ (Note that the second term in Itô's formula equals $0$.) Consequently, $$X_t = X_0 \cdot \exp \left( \int_0^t A(s) \, ds \right)$$ This means that the solutions of the ODE and the SDE coincide, and as @user39360 already remarked, that's not surprising since there is no stochastic part in your stochastic differential equation at all - and so the stochastic differential equation becomes an ordinary differential equation. Let me finally remark that the approach described above (i.e. considering the corresponding ordinary differential equation and adapting the idea of the solution method in the deterministic case) often works quite well in a more general setting.
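A quick numerical sanity check of the closed form (my own sketch): for $A(t)=\cos t$ the solution $X_t = X_0\exp(\sin t)$ should agree with a simple Euler discretisation of $dX = A(t)X\,dt$.

```python
import math

# Euler scheme for dX = A(t) X dt versus the exact solution X0 * exp(int A).

def euler(A, x0, t_end, n_steps):
    dt = t_end / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps):
        x += A(t) * x * dt
        t += dt
    return x

x0, t_end = 2.0, 1.5
approx = euler(math.cos, x0, t_end, 200_000)
exact = x0 * math.exp(math.sin(t_end))   # X0 * exp(int_0^t cos s ds)
print(abs(approx - exact) < 1e-3)        # True
```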
{ "language": "en", "url": "https://math.stackexchange.com/questions/545424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What do $x\in[0,1]^n$ and $x\in\left\{ 0,1\right\}^n$ mean? $x\in[0,1]^n$ $x\in\{0,1\}^n$ Thank you in advance.
The first entry is the interval $ [0, 1] $ in the reals, raised to the power $n$. E.g. for $n=2$, you can think of this as the unit square, i.e. $ x \in [0, 1]^2 $ iff it is in the unit square. The second entry $\{0, 1\}$ is simply the set containing only $0$ and $1$. Picturing our $n=2$ square from the first example, this set describes only the $4$ corners of the square. Therefore $ x \in \{0, 1\}^2 = \{(0,0), (0,1), (1,0), (1,1) \} $ iff it is one of the vertices of the square. Hopefully you can see how this extends to general $n$.
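The picture above can be made concrete in a few lines (my own sketch): $\{0,1\}^n$ is the finite set of corners of the unit $n$-cube, while $[0,1]^n$ is the whole cube.

```python
from itertools import product

def in_unit_cube(x):
    # membership test for [0, 1]^n
    return all(0 <= c <= 1 for c in x)

corners = set(product((0, 1), repeat=2))   # {0, 1}^2
print(sorted(corners))             # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(len(corners))                # 4, i.e. 2^2 points
print(in_unit_cube((0.3, 0.7)))    # True: interior points lie in [0, 1]^2 ...
print((0.3, 0.7) in corners)       # False: ... but not in {0, 1}^2
```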
{ "language": "en", "url": "https://math.stackexchange.com/questions/545520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Recovering vector-valued function from its Jacobian Matrix Consider a function $f:\Omega\subset\mathbb{R}^n\rightarrow\mathbb{R}^m$, for which the Jacobian matrix $J_f(x_1,...,x_n)= \left( \begin{array}{ccc} \frac{\partial f_1}{\partial x_1} & ... & \frac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \frac{\partial f_m}{\partial x_1} & ... & \frac{\partial f_m}{\partial x_n} \end{array} \right) $ is given. Also, assume the component functions of $J_f$ are continuously differentiable on $\Omega$, and $\Omega$ is simply connected. If $m=1$ and $n=2$, it is well known that the function $f$ can be recovered from $J_f$ (in this case the gradient) if and only if $\frac{\partial}{\partial x_2}\frac{\partial f_1}{\partial x_1}=\frac{\partial}{\partial x_1}\frac{\partial f_1}{\partial x_2}$. So my question is whether there is a generalization of this result for arbitrary values of $m$ and $n$. I would appreciate any references! Thank you!
Your question is very closely related to: * *Frobenius integrability theorem *Integrability conditions for differential systems I suspect that the first reference will be of most use.
{ "language": "en", "url": "https://math.stackexchange.com/questions/545634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
evaluating $\sum_{n=1}^\infty \frac{n^2}{3^n}$ I am trying to compute the sum $$\sum_{n=1}^\infty \frac{n^2}{3^n}.$$ I would prefer a nice method without differentiation, but if differentiation makes it easier, then that's fine. Can anyone help me? Thanks.
We have \begin{gather*} S=\sum_{n=1}^{\infty}\frac{n^2}{3^n}=\frac{1}{3}\sum_{n=0}^{\infty}\frac{(n+1)^2}{3^{n}}=\frac{1}{3}\left(\sum_{n=0}^{\infty}\frac{n^2}{3^{n}}+2\sum_{n=0}^{\infty}\frac{n}{3^{n}}+\sum_{n=0}^{\infty}\frac{1}{3^{n}}\right)=\frac{1}{3}\left(S+2 S_1+S_2\right). \end{gather*} Then $$ S_2=\sum_{n=0}^{\infty}\frac{1}{3^{n}}=\frac{3}{2}, $$ $$S_1=\sum_{n=0}^\infty \frac{n}{3^n}=\sum_{n=1}^\infty \frac{n}{3^n}= \frac{1}{3}\sum_{n=1}^\infty \frac{n}{3^{n-1}}= \frac{1}{3}\sum_{n=0}^\infty \frac{n+1}{3^{n}}= \frac{1}{3}\left[\sum_{n=0}^\infty \frac{n}{3^{n}}+\sum_{n=0}^\infty \frac{1}{3^{n}}\right] =\frac{1}{3}\left[S_1+ S_2\right].$$ By solving this equation you get $S_1=\dfrac{3}{4}.$ Thus we have the equation $$ S=\frac{1}{3}\left(S+2 \cdot \frac{3}{4}+\frac{3}{2}\right). $$ By solving it you will get $S=\dfrac{3}{2}.$
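A quick numerical confirmation (my own sketch): the partial sums of $S$, $S_1$ and $S_2$ settle on $\frac32$, $\frac34$ and $\frac32$ respectively.

```python
# Partial sums of the three series used in the answer.
S = sum(n * n / 3 ** n for n in range(1, 60))
S1 = sum(n / 3 ** n for n in range(60))
S2 = sum(1 / 3 ** n for n in range(60))
print(round(S, 10), round(S1, 10), round(S2, 10))   # 1.5 0.75 1.5
```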
{ "language": "en", "url": "https://math.stackexchange.com/questions/545708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
For sets $A,B,C$, $(A\setminus B)\subset (A\setminus C)\cup (C\setminus B)$ First of all, I am sorry for my bad English, I am from Brazil :-) I have a problem with the proof of a set theory exercise. Here it is: $A,B,C$ are three sets. Show that: $$(A\setminus B) \subset (A\setminus C) \cup (C\setminus B)$$ It is clear by looking at diagrams, but I do not know how to show it! It is not too hard, I think... Thank you for the help, Igor
If you take out from $A$ the things that are in $B$, each thing that is left is either not in $C$ (so it belongs to $A\setminus C$) or else it is in $C$ but not in $B$ (so it belongs to $C\setminus B$). The formula is nothing more than this.
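The element-wise argument can also be brute-force checked over every triple of subsets of a small universe (my own sketch):

```python
from itertools import combinations, product

# Check A \ B <= (A \ C) | (C \ B) for all 16^3 triples of subsets of {0,1,2,3}.
U = (0, 1, 2, 3)
powerset = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

ok = all((A - B) <= (A - C) | (C - B)
         for A, B, C in product(powerset, repeat=3))
print(ok)   # True
```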
{ "language": "en", "url": "https://math.stackexchange.com/questions/545794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to prove that $\sum_{k=0}^n \binom nk k^2=2^{n-2}(n^2+n)$ I know that $$\sum_{k=0}^n \binom nk k^2=2^{n-2}(n^2+n),$$ but I cannot find a way to prove it. I tried induction but it did not work. On the wiki they say that I should use differentiation, but I do not know how to apply it to the binomial coefficient. Thanks for any response. (Presumptive) Source: Theoretical Exercise 1.12(b), P18, A First Course in Pr, 8th Ed, by S Ross
This is equivalent to proving $$\sum_{k=0}^n k^2 {n\choose k}=n(n+1)2^{n-2}.$$ Given $n$ people we can form a committee of $k$ people in ${n\choose k}$ ways. Once the committee is formed we can pick a committee leader and a committee planner. If we allow each person to hold both job titles there are $k$ ways for this to happen. If no person is allowed to have both job titles, then there are $k$ choices for the committee leader and $k-1$ choices for the committee planner for a total of $k(k-1)$ choices for a different committee leader and committee planner. The total number of choices for a committee leader and a committee planner is $k+k(k-1)=k^2$ . We now sum for all possible $k$ values from $0$ to $n$ to form every committee of $k$ people with a committee leader and a committee planner, that is, $\sum_{k=0}^n k^2 {n\choose k}$. We can count the same thing by first picking committee leaders and committee planners and then filling in the committee with the remaining people. If we allow each person to hold both job titles, then there are $n$ ways for this to be done. The remaining $n-1$ people can form the committee in $2^{n-1}$ ways since each person has two choices, either they are in the committee or they are not in the committee. The total number of committees where each person can hold both titles is $n2^{n-1}$. If no person is allowed to hold both job titles, then there are $n$ choices for a committee leader and $n-1$ choices for a committee planner. The remaining $n-2$ people can form the committee in $2^{n-2}$ ways since each person has two choices, either they are in the committee or they are not in the committee. The total number of ways to form a committee where no person can have both job titles is $n(n-1)2^{n-2}$. Thus the total number of ways to form the committee is $n2^{n-1}+n(n-1)2^{n-2}=n(n+1)2^{n-2}$. Hence $\sum_{k=0}^n k^2 {n\choose k}=n(n+1)2^{n-2}$. 
Another way to do this is to consider $$(1+x)^n=\sum_{k=0}^n {n\choose k}x^k.$$ Differentiating both sides with respect to $x$ we obtain $$n(1+x)^{n-1}=\sum_{k=1}^n k{n\choose k}x^{k-1}.$$ Multiplying both sides of the equation by $x$ and differentiating with respect to $x$ we obtain $$n(1+x)^{n-1}+nx(n-1)(1+x)^{n-2}=\sum_{k=1}^nk^2{n\choose k}x^{k-1}.$$ Letting $x=1$ and simplifying the left-hand side we see that $$n(n+1)2^{n-2}=\sum_{k=1}^n k^2{n\choose k}.$$
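Either proof can be confirmed by brute force for small $n$ (my own sketch):

```python
from math import comb

# Check sum_k k^2 C(n, k) == n (n + 1) 2^(n - 2) for n = 2 .. 15.
ok = all(
    sum(k * k * comb(n, k) for k in range(n + 1)) == n * (n + 1) * 2 ** (n - 2)
    for n in range(2, 16)
)
print(ok)   # True
```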
{ "language": "en", "url": "https://math.stackexchange.com/questions/545879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
When is $\binom{n}{k}$ divisible by $n$? Is there any way of determining if $\binom{n}{k} \equiv 0\pmod{n}$? Note that I am aware of the case when $n = p$ is a prime. Other than that there does not seem to be any sort of pattern (I checked up to $n=50$). Are there any known special cases where the problem becomes easier? As a place to start I was thinking of using $e_p(n!)$, defined as: $$e_p(n!) = \sum_{k=1}^{\infty}\left \lfloor\frac{n}{p^k}\right \rfloor$$ which counts the exponent of $p$ in $n!$ (Legendre's theorem, I believe?). Then, knowing the prime factorization of $n$, perhaps we can determine if these primes appear more times in the numerator of $\binom{n}{k}$ than in the denominator. Essentially I am looking to see if this method has any traction to it and what other types of research have been done on this problem (along with any proofs of results) before. Thanks!
Well, $n \mid \binom{n}{k}$ holds for $n = p^2$ when $k$ is not divisible by $p.$ Also for $n=2p$ when $k$ is not divisible by $2$ or $p,$ and for $n=3p$ when $k$ is not divisible by $3$ or $p.$
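These special cases are easy to confirm by brute force (my own sketch):

```python
from math import comb

def divisible(n, k):
    return comb(n, k) % n == 0

# n = p^2: C(n, k) ≡ 0 (mod n) whenever p does not divide k
case_p2 = all(divisible(p * p, k)
              for p in (2, 3, 5, 7)
              for k in range(1, p * p) if k % p != 0)

# n = 2p: same conclusion whenever k is divisible by neither 2 nor p
case_2p = all(divisible(2 * p, k)
              for p in (3, 5, 7, 11)
              for k in range(1, 2 * p) if k % 2 != 0 and k % p != 0)

print(case_p2, case_2p)   # True True
```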
{ "language": "en", "url": "https://math.stackexchange.com/questions/545962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 3, "answer_id": 0 }
Is the set of all sums-of-rationals-that-give-one countable? Some (but not all) sums of rational numbers give us 1 as a result. For instance: $$\frac12 + \frac12 = 1$$ $$\frac13 + \frac23 = 1$$ $$\frac37 + \frac{3}{14} + \frac{5}{14} = 1$$ Is the set of all of these sums countable? Sums that differ only in the ordering of their addends are the same. So $(\frac25+\frac35)$ is the same element as $(\frac35+\frac25)$, and shouldn't be counted twice. Also, to avoid duplicates, all fractions should be in their irreducible form. No need to count $(\frac26 + \frac23)$ if we already have $(\frac13 + \frac23)$. If the set is countable, what would be a mapping function from $\mathbb N$ to this set?
Since the set of rationals is countable, the set of all finite sets of rationals is countable, and therefore the set of all finite sums of rationals is countable. In particular, the set of finite sets of rationals with sum $1$ is countable. An explicit bijection between this set and $\Bbb N$ would be messy. Added: If you limit yourself to finite sums of positive rationals, you can order the sums as follows. For $n\in\Bbb Z^+$ let $\mathscr{F}_n$ be the set of all multisets $F$ of positive rationals such that $\sum F=1$, $|F|\le n$, and whenever $\frac{p}q\in F$ is in lowest terms, then $q\le n$. For example, the only member of $\mathscr{F}_1$ is $\{\!\{1\}\!\}$, and the only members of $\mathscr{F}_2$ are $\{\!\{1\}\!\}$ and $\left\{\!\!\left\{\frac12,\frac12\right\}\!\!\right\}$, and $$\mathscr{F}_3=\left\{\{\!\{1\}\!\},\left\{\!\!\left\{\frac12,\frac12\right\}\!\!\right\},\left\{\!\!\left\{\frac13,\frac13,\frac13\right\}\!\!\right\},\left\{\!\!\left\{\frac23,\frac13\right\}\!\!\right\}\right\}\;.$$ If $F=\{\!\{r_1,\ldots,r_\ell\}\!\},G=\{\!\{s_1,\ldots,s_m\}\!\}\in\mathscr{F}_n$, where $r_1\ge\ldots\ge r_\ell$ and $s_1\ge\ldots\ge s_m$, define $F\prec_n G$ iff either $\ell<m$, or $\ell=m$ and $r_k>s_k$, where $k=\min\{i:r_i\ne s_i\}$; $\prec_n$ is a strict linear order on $\mathscr{F}_n$. Let $\mathscr{F}=\bigcup_{n\in\Bbb Z^+}\mathscr{F}_n$, and for each $F\in\mathscr{F}$ let $n(F)=\min\{k\in\Bbb Z^+:F\in\mathscr{F}_k\}$. For $F,G\in\mathscr{F}$ define $F\prec G$ iff either $n(F)<n(G)$, or $n(F)=n(G)$ and $F\prec_{n(F)} G$; then $\langle\mathscr{F},\prec\rangle$ is a strict linear order that is order-isomorphic to $\langle\Bbb N,<\rangle$. I’d not care to try to write down the order isomorphism explicitly, but this does at least give the explicit ordering of $\mathscr{F}$ corresponding to such a bijection.
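The finite families $\mathscr{F}_n$ from the ordering above can be enumerated directly (my own sketch; `F` is a made-up helper name):

```python
from fractions import Fraction
from itertools import combinations_with_replacement

def F(n):
    """Multisets of at most n positive rationals, each with denominator
    at most n in lowest terms, summing to 1."""
    candidates = sorted({Fraction(p, q) for q in range(1, n + 1)
                                        for p in range(1, q + 1)})
    found = set()
    for size in range(1, n + 1):
        for combo in combinations_with_replacement(candidates, size):
            if sum(combo) == 1:
                found.add(combo)
    return found

# Matches the examples in the answer: F_1 = {{1}}, F_2 adds {1/2, 1/2},
# F_3 adds {1/3, 1/3, 1/3} and {2/3, 1/3}.
print(len(F(1)), len(F(2)), len(F(3)))   # 1 2 4
```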
{ "language": "en", "url": "https://math.stackexchange.com/questions/546043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
About the Collatz conjecture I worked on the Collatz conjecture extensively for fun and practice about a year ago (I'm a CS student, not a mathematician). Today, I was browsing the Project Euler webpage, which has a question related to the conjecture (longest Collatz sequence). This reminded me of my earlier work, so I went to Wikipedia to see if there are any big updates. I found this claim The longest progression for any initial starting number less than 100 million is 63,728,127, which has 949 steps. For starting numbers less than 1 billion it is 670,617,279, with 986 steps, and for numbers less than 10 billion it is 9,780,657,630, with 1132 steps.[11][12] Now, do I get this correctly: there is no known way to pick a starting number $N$ such that the progression will last at least $K$ steps? Or, at least I don't see the point of the statement otherwise. I know how this can be done (and can prove that it works). It does not prove the conjecture either true or false, since it is only a lower bound for the number of steps. Anyway, I could publish the result if you think it's worth that? When I was previously working on the problem, I thought it was not.
The answer to your question is yes; one can work the Collatz relation backwards to build numbers that last arbitrarily long (for a simple example, just take powers of 2). However the purpose of the wikipedia records is to see if we can find SMALL numbers that last a long time.
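The powers-of-2 remark can be seen in a few lines (my own sketch): $2^k$ takes exactly $k$ steps, so trajectories of any prescribed minimum length are trivial to produce; finding a *small* number with a long trajectory is the hard part.

```python
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Powers of 2 give arbitrarily long (if uninteresting) trajectories:
print([collatz_steps(2 ** k) for k in (5, 10, 50)])   # [5, 10, 50]
# The record-holder quoted from Wikipedia in the question:
print(collatz_steps(63_728_127))                       # 949
```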
{ "language": "en", "url": "https://math.stackexchange.com/questions/546140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Find the cusps for the congruence subgroup $\Gamma_0(p)$ Find the cusps for the congruence subgroup $\Gamma_0(p)$. How does one go about doing this? I know the definition of a cusp - the orbit for the action of $G$, in this case $\Gamma_0(p)$, on $\mathbb Q\cup\{\infty\}$ - but I don't see how to get myself started, though. The solution is "the classes of $0$ and $\infty$".
First of all, you can try to see what is in the orbit of $\infty$. If $z$ is in the orbit of $\infty$, then there exists a matrix in $\Gamma_0(p)$, say $\gamma=\left(\begin{matrix} a & b \\ pc & d\end{matrix}\right)$, such that $\gamma \infty =z$. This implies $\frac{a}{pc}=z$, so the numbers in the orbit of infinity are the rational numbers of the form $\frac{a}{pc}$ where $\gcd(a,pc)=1$ and $a\neq 0$ (because $ad-pbc=1$). On the other hand, it is easy to check that all the other rational numbers form a single second orbit, and that $0$ is in that orbit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/546209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Graph Ramsey Theory for Multiple Copies of Graphs I had the following question from Graph Ramsey theory. Show that if $m \geq 2$, then $$ R((m+1)K _{3},K _{3})\geq R(mK _{3},K _{3}) + 3. $$ Thanks.
In S. A. Burr, P. Erdős, and J. H. Spencer. Ramsey theorems for multiple copies of graphs, Trans. Amer. Math. Soc., 209 (1975), 87-99. MR53 #13015, it is shown that if $m\ge 2$ and $m\ge n\ge 1$, then $r(mK_3,nK_3)=3m+2n$. This gives your result immediately by taking $n=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/546270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim_{n\rightarrow\infty}x_{n+1}-x_n$ Let $f(x)$ be continuously differentiable on $[0,1]$ and $$x_n = f\left(\frac{1}{n}\right)+f\left(\frac{2}{n}\right)+f\left(\frac{3}{n}\right)+\ldots+f\left(\frac{n-1}{n}\right)$$ Find $$\lim_{n\rightarrow\infty}\left(x_{n+1}-x_n\right)$$ Confusion: I just found a subtle problem in my solution. By the definition of the definite integral, as in $\lim_{n\rightarrow\infty}\sum_{i=1}^{n}f(x_i^*)\Delta x$, the sample point $x_i^*$ should be included in its $i$th subinterval $[x_{i-1},x_i]$. In this case, I am not sure whether the subinterval $[\frac{i-1}{n(n+1)},\frac{i}{n(n+1)}]$ includes the sample points $\frac{i}{n+1}$ and $\xi_i$ or not. I cannot deduce it from the given condition: $$\xi_i \in [\frac{i}{n+1},\frac{i}{n}] \implies \frac{i-1}{n(n+1)}\lt \frac{i}{n+1}\lt\xi_i\lt?\lt\frac{i}{n}$$ If those sample points are not always included in the corresponding subinterval, I might not be able to apply the definition of the definite integral here. I hope somebody can take a look at this solution. Update: I rewrote part of my solution and fixed the problem I had before. Thanks very much for all the help!
Because $f(x)$ is continuously differentiable on $[0,1]$, by the mean value theorem: $$f\left(\frac{i}{n+1}\right)-f\left(\frac{i}{n}\right) = f'(\xi_i)(\frac{i}{n+1} - \frac{i}{n}) \text{ where } \xi_i \in \left[\frac{i}{n+1},\frac{i}{n}\right] \tag{1}$$ Then, by the given formula for $x_n$, we have \begin{align*} \ x_{n+1} - x_n &= \left[f\left(\frac{1}{n+1}\right)-f\left(\frac{1}{n}\right)\right]+\dots +\left[f\left(\frac{n-1}{n+1}\right)-f\left(\frac{n-1}{n}\right)\right]+\left[f\left(\frac{n}{n+1}\right)-f\left(1\right)\right] + f\left(1\right) \\&=f(1) - \sum_{i=1}^{n}i\cdot f'(\xi_i)\left(\frac{1}{n(n+1)}\right) \end{align*} Hence, by the definition of the definite integral, we have: \begin{align*} \\\lim_{n\rightarrow\infty}\left(x_{n+1}-x_n\right) &= f(1) - \lim_{n \rightarrow \infty}\sum_{i=1}^{n}i\cdot f'(\xi_i)\left(\frac{1}{n(n+1)}\right) \\&=f(1) - \lim_{n\rightarrow\infty}\sum_{i=1}^{n}\frac{i}{n+1}\cdot f'(\xi_i)\cdot\frac{1}{n} \\&=f(1) - \int_{0}^{1}xf'(x)dx \space\space\space\dots\text{See Note} \\&=f(1) - \left[xf(x)|_{0}^{1}-\int_{0}^{1}f(x)dx\right] \\ &=\int_{0}^{1}f(x)dx \end{align*} Note that: \begin{align*} \\ \sum_{i=1}^{n}i\cdot f'(\xi_i)\left(\frac{1}{n(n+1)}\right) = \sum_{i=1}^{n}\frac{i}{n+1}\cdot f'(\xi_i)\cdot\frac{1}{n} \end{align*} By (1), it is obvious that: $$\frac{i-1}{n} \lt \frac{i}{n+1} \lt \xi_i \lt \frac{i}{n}\space\space\space\text{(where } i \le n) \tag{2}$$ which means $\xi_i, \frac{i}{n+1} \in \left[\frac{i-1}{n},\frac{i}{n}\right]$.
Also by $(2)$, we know: $$\sum_{i=1}^{n}\frac{i}{n+1}\cdot f'(\xi_i)\cdot\frac{1}{n} \le \sum_{i=1}^{n}\xi_i\cdot f'(\xi_i)\cdot\frac{1}{n}$$ Because $\frac{i}{n+1}$ and $\xi_i$ share the same subinterval, it implies that if we let $x_{i}^* = \xi_i$(sample points) and $\Delta x=\frac{1}{n}$, then: $$\lim_{n\rightarrow\infty}\sum_{i=1}^{n}\frac{i}{n+1}\cdot f'(\xi_i)\cdot\frac{1}{n}=\lim_{n\rightarrow\infty}\sum_{i=1}^{n}\xi_i\cdot f'(\xi_i)\cdot\frac{1}{n}=\int_{0}^{1}xf'(x)dx$$ Hence, it is ok to apply the definition of definite integration here.
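As a numerical illustration of the final answer (my own sketch): with $f(t)=t^2$ the limit should be $\int_0^1 f = \frac13$.

```python
# x_n = sum of f(i/n) for i = 1 .. n-1; the difference x_{n+1} - x_n should
# approach the integral of f over [0, 1].

def x_seq(n, f):
    return sum(f(i / n) for i in range(1, n))

f = lambda t: t * t
n = 20_000
diff = x_seq(n + 1, f) - x_seq(n, f)
print(abs(diff - 1 / 3) < 1e-4)   # True
```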
{ "language": "en", "url": "https://math.stackexchange.com/questions/546380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
At what time is the speed minimum? The position function of a particle is given by $r(t) = \langle-5t^2, -1t, t^2 + 1t\rangle$. At what time is the speed minimum?
The velocity vector is $\left<-10t,-1,2t+1\right>$. Thus the speed is $\sqrt{(-10t)^2 +(-1)^2+(2t+1)^2}$. We want to minimize this, or equivalently we want to minimize $104t^2+4t+2$. This is a problem that can even be solved without calculus, by completing the square. If negative $t$ is allowed, we get $t=-\frac{4}{208}=-\frac{1}{52}$. If by nature $t\ge 0$, the minimum speed is at $t=0$.
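A quick exact-arithmetic check of the vertex (my own sketch):

```python
from fractions import Fraction

# speed^2 = 104 t^2 + 4 t + 2 is minimised at t = -b/(2a) = -4/208 = -1/52.

def speed_sq(t):
    return 104 * t * t + 4 * t + 2

t_min = Fraction(-4, 2 * 104)
print(t_min)                                                  # -1/52
print(speed_sq(t_min) < speed_sq(t_min + Fraction(1, 100)))   # True
print(speed_sq(t_min) < speed_sq(t_min - Fraction(1, 100)))   # True
```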
{ "language": "en", "url": "https://math.stackexchange.com/questions/546436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Cesaro mean approaching average of left and right limits Let $f\in L^1(\mathbb{R}/2\pi\mathbb{Z})$, where $\mathbb{R}/2\pi\mathbb{Z}$ means that $f$ is periodic with period $2\pi$. Let $\sigma_N$ denote the Cesaro mean of the Fourier series of $f$. Suppose that $f$ has a left and right limit at $x$. Prove that as $N$ approaches infinity, $\sigma_N(x)$ approaches $\dfrac{f(x^+)+f(x^-)}{2}$. We can write $\sigma_N(x)=(f\ast F_N)(x)$, where $F_N$ is the Fejer kernel given by $F_N(x)=\dfrac{\sin^2(Nx/2)}{N\sin^2(x/2)}$. We have $F_N(x)\geq 0$ and $\int_{-\pi}^{\pi}F_N(x)dx=2\pi$. We want to show that $\left|\dfrac{f(x^+)+f(x^-)}{2}-(f\ast F_N)(x)\right|\rightarrow 0$ as $N\rightarrow\infty$. How can we go from here?
Note that you can write $$(f*F_N)(x)-\frac{f(x^+)+f(x^-)}{2}=$$$$\int_{-\pi}^{0}f(x-y)F_N(y)\,dy-\frac{f(x^+)}{2}+\int_{0}^{\pi}f(x-y)F_N(y)\,dy-\frac{f(x^-)}{2}=$$$$\int_{-\pi}^{0}(f(x-y)-f(x^+))F_N(y)\,dy+\int_{0}^{\pi}(f(x-y)-f(x^-))F_N(y)\,dy,$$ since $F_N$ is symmetric around $0$, so its integral from $-\pi$ to $0$ is equal to its integral from $0$ to $\pi$, so they are both equal to $\frac{1}{2}$. Now, use that $|f(x-y)-f(x^+)|$ is small for $y$ small and negative, also $|f(x-y)-f(x^-)|$ is small for $y$ small and positive, and the fact that $F_N$ is nonnegative and goes uniformly to $0$ away from $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/546529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove this inequality $\frac{x}{x^3+y^2+z}+\frac{y}{y^3+z^2+x}+\frac{z}{z^3+x^2+y}\le 1$ for $x+y+z=3$ Let $x,y,z$ be positive numbers such that $x+y+z=3$. Show that $$\dfrac{x}{x^3+y^2+z}+\dfrac{y}{y^3+z^2+x}+\dfrac{z}{z^3+x^2+y}\le 1$$ My try: $$(x^3+y^2+z)(\dfrac{1}{x}+1+z)\ge 9$$ so $$\dfrac{x}{x^3+y^2+z}+\dfrac{y}{y^3+z^2+x}+\dfrac{z}{z^3+x^2+y}\le\dfrac{6+xy+yz+xz}{9}\le 1$$ Are there other nice methods? Thank you
Note that $2+x^3=x^3+1+1\geqslant 3x$, $y^2+1\geqslant 2y$, thus$$\dfrac{x}{x^3+y^2+z}=\frac{x}{3+x^3+y^2-x-y}\leqslant\frac{x}{3x+2y-x-y}=\frac{x}{2x+y}.$$Similarly, we can get $$\dfrac{y}{y^3+z^2+x}\leqslant\frac{y}{2y+z},\,\,\,\,\,\dfrac{z}{z^3+x^2+y}\leqslant\frac{z}{2z+x}.$$It suffices to show$$\frac{x}{2x+y}+\frac{y}{2y+z}+\frac{z}{2z+x}\leqslant 1\iff \frac{y}{2x+y}+\frac{z}{2y+z}+\frac{x}{2z+x}\geqslant 1.$$By Cauchy's inequality, we get$$(y(2x+y)+z(2y+z)+x(2z+x))\left(\frac{y}{2x+y}+\frac{z}{2y+z}+\frac{x}{2z+x}\right)\geqslant (x+y+z)^2.$$As $y(2x+y)+z(2y+z)+x(2z+x)=(x+y+z)^2$, so $$\frac{y}{2x+y}+\frac{z}{2y+z}+\frac{x}{2z+x}\geqslant 1.$$
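A randomized numerical check of the inequality (my own sketch), with equality at $x=y=z=1$:

```python
import random

# Sample positive (x, y, z) on the simplex x + y + z = 3 and verify the bound.

def lhs(x, y, z):
    return (x / (x**3 + y**2 + z)
            + y / (y**3 + z**2 + x)
            + z / (z**3 + x**2 + y))

random.seed(0)
ok = True
for _ in range(10_000):
    a, b = sorted(random.uniform(0, 3) for _ in range(2))
    x, y, z = a, b - a, 3 - b
    if min(x, y, z) > 0 and lhs(x, y, z) > 1 + 1e-12:
        ok = False

print(ok, abs(lhs(1, 1, 1) - 1) < 1e-12)   # True True
```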
{ "language": "en", "url": "https://math.stackexchange.com/questions/546675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
A matrix with given row and column sums There is a set of equations like $A_x + A_y + A_z = P$ $B_x + B_y + B_z = Q$ $C_x + C_y + C_z = R$ where only the values of $P, Q, R$ are known. Also, we have $A_x + B_x + C_x = I$ $A_y + B_y + C_y = J$ $A_z + B_z + C_z = K$ where only the values of $I, J$ and $K$ are known. Is there any way to know the individual values of $A_x, B_x, C_x, A_y, A_z$ and the rest? Substituting the above equations yields the result that $I + J + K = P + Q + R$, but how can I get the individual component values? Is any other information required to solve these equations? Here's a good complementary question. If solutions exist, how can we generate all of them? Are there some algorithms?
You are trying to solve $$\left(\begin{matrix} 1&1&1&0&0&0&0&0&0\\ 0&0&0&1&1&1&0&0&0\\ 0&0&0&0&0&0&1&1&1\\ 1&0&0&1&0&0&1&0&0\\ 0&1&0&0&1&0&0&1&0\\ 0&0&1&0&0&1&0&0&1 \end{matrix}\right) \left(\begin{matrix} A_x\\A_y\\A_z\\B_x\\B_y\\B_z\\C_x\\C_y\\C_z \end{matrix}\right) = \left(\begin{matrix} P\\Q\\R\\I\\J\\K \end{matrix}\right)$$ If there is any solution at all, the solution set is a four*-dimensional affine space, and for a solution to exist the equation $P+Q+R=I+J+K$ must hold, as you already found out. *If the LHS is $Ax$, then the dimension (when the solution set is nonempty) is $9 - {\rm rg}(A) = 9-5 = 4$; the rank is only five because the sum of the first three rows is equal to the sum of the last three rows.
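On the complementary question of generating solutions (my own sketch, assuming $P+Q+R=I+J+K$): one explicit solution is the rank-one "independence table" with entries $\text{row}_r\cdot\text{col}_c/\text{total}$, which always reproduces the prescribed row and column sums; every other solution differs from it by an element of the four-dimensional null space (tables whose row and column sums are all zero).

```python
from fractions import Fraction

# Made-up marginals with matching totals: rows = (P, Q, R), cols = (I, J, K).
rows = [Fraction(6), Fraction(3), Fraction(9)]
cols = [Fraction(4), Fraction(5), Fraction(9)]
total = sum(rows)
assert total == sum(cols)   # the compatibility condition P+Q+R = I+J+K

# Rank-one particular solution: entry_rc = row_r * col_c / total.
X = [[r * c / total for c in cols] for r in rows]

print([sum(row) for row in X] == rows)                              # True
print([sum(X[i][j] for i in range(3)) for j in range(3)] == cols)   # True
```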
{ "language": "en", "url": "https://math.stackexchange.com/questions/546733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Derivatives question help The question is: Find the derivative of $f(x)=e^c + c^x$. Assume that $c$ is a constant. Wouldn't $f'(x)= ce^{c-1} + xc^{x-1}$? It keeps saying this answer is incorrect. What am I doing wrong?
Consider $$f(x)=e^{c}+c^{x}$$ where $c$ is a constant. We know that since $c$ is a constant, $e^c$ is also a constant making ${d\over dx}(e^c)=0$. Also, ${d\over dx}(c^x)=c^x\ln c$. The reason for this is because ${d\over dx}(c^x)={d\over dx}{(e^{\ln c})^x}={d\over dx}({e^{{(\ln c}){x}})}=e^{({\ln c)} x}\cdot {d\over dx} {(\ln c)}x=(e^{\ln c})^x\cdot (\ln c)=c^x\ln c$. Thus $$f'(x)={c^x\ln c}.$$
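A numerical spot check of the result (my own sketch): with $c=5$, a central difference of $f$ should match $c^x\ln c$, and the constant $e^c$ contributes nothing.

```python
import math

c = 5.0
f = lambda x: math.exp(c) + c ** x
fprime = lambda x: c ** x * math.log(c)

x0 = 1.3
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference
print(abs(numeric - fprime(x0)) < 1e-4)       # True
```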
{ "language": "en", "url": "https://math.stackexchange.com/questions/546809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Need help to prove this by using natural deduction. I am trying to prove the following by using natural deduction, and I want to prove it in both directions. $$\exists x(P (x) \implies A) \equiv \forall xP (x) \implies A$$ I have some difficulty showing it by using natural deduction. Any kind of help will be appreciated.
$(\Longrightarrow)$ Suppose $\exists x(P (x) \implies A)$. This means we can pick an $x_0$ such that $P (x_0) \implies A$. Now suppose $\forall x P (x)$. In particular, this gives us $P(x_0)$. Then, by modus ponens, $A$. So we have shown that $\forall x P(x) \implies A$. $(\Longleftarrow)$ Suppose $\forall x P(x) \implies A$. Either $\forall x P(x)$ or $\neg \forall x P(x)$. We will consider these cases separately: $\hspace{1cm}$ Case 1. $\forall x P(x)$. By modus ponens, we have $A$. Because $A$ is true, we can pick an $x_0$ at random and declare $\neg P(x_0) \lor A$. This is equivalent to $P (x_0) \implies A$. Therefore, $\exists x(P (x) \implies A)$. $\hspace{1cm}$ Case 2. $\neg \forall x P(x)$. This means we can pick an $x_0$ such that $\neg P(x_0)$. Because $\neg P(x_0)$, we must have $\neg P(x_0) \lor A$. This is equivalent to $P (x_0) \implies A$. Therefore, $\exists x(P (x) \implies A)$. In either case, we have $\exists x(P (x) \implies A)$.
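Since the argument is a semantic case analysis, one can also confirm the equivalence by brute force over finite nonempty domains (my own sketch; this checks truth values, not the deduction itself):

```python
from itertools import product

# For every predicate P on a domain of size 1..3 and every truth value A,
# the two formulas agree.
ok = True
for size in (1, 2, 3):
    for A in (False, True):
        for P in product((False, True), repeat=size):
            left = any((not p) or A for p in P)   # exists x (P(x) -> A)
            right = (not all(P)) or A             # (forall x P(x)) -> A
            ok = ok and (left == right)
print(ok)   # True
```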
{ "language": "en", "url": "https://math.stackexchange.com/questions/546891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to calculate $\int_0^\frac{\pi}{2}(\sin^3x +\cos^3x) \, \mathrm{d}x$? $$\int_0^\frac{\pi}{2}(\sin^3x +\cos^3x) \, \mathrm{d}x$$ How do I compute this? I tried the trigonometric manipulation but I can't get the answer.
Hint: $\sin^3 x dx = -\sin^2 xd(\cos x) = (\cos^2 x - 1)d(\cos x)$. Do the same for $\cos^3 x dx$.
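Following the hint, the antiderivatives are $\frac{\cos^3 x}{3}-\cos x$ and $\sin x-\frac{\sin^3 x}{3}$, and the value over $[0,\pi/2]$ works out to $\frac43$. A crude midpoint-rule check (my own sketch):

```python
import math

# Midpoint rule for the integral of sin^3 x + cos^3 x over [0, pi/2].
n = 100_000
h = (math.pi / 2) / n
total = sum((math.sin((i + 0.5) * h) ** 3 + math.cos((i + 0.5) * h) ** 3) * h
            for i in range(n))
print(abs(total - 4 / 3) < 1e-8)   # True
```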
{ "language": "en", "url": "https://math.stackexchange.com/questions/546986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 0 }
Prove $f(x)= e^{2x} + x^5 + 1$ is one to one Prove $$f(x)= e^{2x} + x^5 + 1$$ is one to one. So my solution is: Suppose $$ f(x_1)=f(x_2),$$ then I am stuck here: $$e^{2x_1}-e^{2x_2}=x^5_1 -x^5_2.$$ How do I proceed? Also, after that I found out that $$(f^{-1})'(2)= 4.06$$ but how do I find $$(f^{-1})''(2)$$ Do I differentiate $(f^{-1})'$ again?
You can also prove it is one-to-one this way: use the fact that $x_1 \neq x_2 \implies f(x_1) \neq f(x_2)$. (Since $f'(x) = 2e^{2x} + 5x^4 > 0$ for all $x$, the function is strictly increasing, so distinct inputs give distinct outputs.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/547047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 4 }
Existence of an injection Let $A$ and $B$ be two sets. Prove the existence of an injection from $A$ to $B$ or an injection from $B$ to $A$. I don't know how to proceed, since I don't have any information on $A$ or $B$ to begin with. Does anybody have a hint ?
To prove the existence of an injection between two sets $A$ and $B$ from the Axiom of Choice, or from Zorn's Lemma (which is equivalent) the rough idea is that we build a bijection between larger and larger subsets of $A$ and $B$, starting with the empty function. At every "step" we match one of the remaining elements of $A$ to one of the remaining elements of $B$, continuing until we run out of elements in one (or both) of the sets. When we run out of elements in one of the sets, say in $A$ first, it means that we have matched every element of $A$ to an element of $B$, giving an injection $A \to B$. If instead we run out of room in $B$ first, we get an injection $B \to A$. There are two different ways to formalize this idea. The first one uses the Axiom of Choice to choose, at every step, elements of $A$ and $B$ from among those which haven't yet been paired up, so that we can pair them up and extend our function. Once we have a pair of "choice functions" $f$ and $g$ that choose elements of $A$ and $B$ for us respectively, the notion of proceeding by infinitely many "steps" is formalized using transfinite recursion. Instead of using the combination of the Axiom of Choice and transfinite recursion, one can use Zorn's Lemma instead. This works out to be essentially the same argument overall, but it has the advantage that it abstracts away the condition you need to check in order to apply transfinite recursion, so can just check this condition and you don't need to know how transfinite recursion works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/547118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
System of equations with 3 unknowns Solve $$\begin{matrix}i \\ ii \\ iii\end{matrix}\left\{\begin{matrix}x-y-az=1\\ -2x+2y-z=2\\ 2x+2y+bz=-2\end{matrix}\right.$$ For which $a$ does the system have * *no solution *one solution *$\infty$ solutions I did one problem like this and got a fantastic solution from @amzoti. Now, I think that if I see another example, I will really get it. EDIT Here is my attempt with rref and here with equations Problems * *I don't know how to handle the $b$ in the end. *Does it ever lead to, speaking in "matrix terms", the case 0 0 0 | 0 so that I'll have infinitely many solutions?
The complete matrix of your system is $$ \begin{bmatrix} 1 & -1 & -a & 1 \\ -2 & 2 & -1 & 2\\ 2 & 2 & b & -2 \end{bmatrix} $$ and with Gaussian elimination you get $$ \begin{bmatrix} 1 & -1 & -a & 1 \\ 0 & 0 & -1-2a & 4\\ 0 & 4 & b+2a & -4 \end{bmatrix} $$ (sum to the second row the first multiplied by $2$ and to the third row the first multiplied by $-2$). Now swap the second and third rows: $$ \begin{bmatrix} 1 & -1 & -a & 1 \\ 0 & 4 & b+2a & -4 \\ 0 & 0 & -1-2a & 4 \end{bmatrix} $$ You see that you have solutions if and only if $-1-2a\ne0$. Otherwise the last equation would become $0=4$ that obviously has no solution. The solution is unique for $a\ne-1/2$ and does not exist for $a=-1/2$.
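The elimination can be replayed in exact arithmetic (my own sketch; `solve3` is a made-up helper that works for any $a\neq-\frac12$ and any $b$):

```python
from fractions import Fraction

def solve3(a, b):
    """Return the unique solution (x, y, z) if it exists, else None."""
    a, b = Fraction(a), Fraction(b)
    M = [[Fraction(1), Fraction(-1), -a, Fraction(1)],
         [Fraction(-2), Fraction(2), Fraction(-1), Fraction(2)],
         [Fraction(2), Fraction(2), b, Fraction(-2)]]
    M[1] = [u + 2 * v for u, v in zip(M[1], M[0])]   # R2 <- R2 + 2 R1
    M[2] = [u - 2 * v for u, v in zip(M[2], M[0])]   # R3 <- R3 - 2 R1
    M[1], M[2] = M[2], M[1]                          # swap R2 and R3
    if M[2][2] == 0:
        return None          # last row reads 0 = 4, i.e. no solution
    z = M[2][3] / M[2][2]
    y = (M[1][3] - M[1][2] * z) / M[1][1]
    x = M[0][3] + y + a * z                          # from x - y - a z = 1
    return x, y, z

no_sol = solve3(Fraction(-1, 2), 7) is None          # a = -1/2: inconsistent
x, y, z = solve3(1, 7)                               # a = 1, b = 7
checks = (x - y - z == 1, -2 * x + 2 * y - z == 2, 2 * x + 2 * y + 7 * z == -2)
print(no_sol, checks)   # True (True, True, True)
```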
{ "language": "en", "url": "https://math.stackexchange.com/questions/547233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
are any two vector spaces with the same (infinite) dimension isomorphic? Is it true that any 2 vector spaces with the same (infinite) dimension are isomorphic? I think that it is true, since we can build a mapping from $V$ to $\mathbb{F}^{N}$ where the cardinality of $N$ is the dimension of the vector space - where by $\mathbb{F}^{N}$ I mean the subset of the full cartesian product - where each element contains only finite non zero coordinates?
Your argument is quite correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/547314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Rotation of 2D polar graph in a 3D space along some fixed axis? Does there exist some systematic way of rotating a 2-D polar graph $r=f(\theta)$ around some axis in a 3D space? For example: $f(\theta)=cos(\theta)$ in 2-D looks like: If we want to rotate the above plot along the y-axis (in 3D of-course) the plot should look like donut, as shown below: The Question is how to get the mathematical equation of the above "donut", either in rectangular, spherical coordinate system, or cylindrical system? Thanks !
In spherical coordinates, your $\theta_{2D}$ is given by: $$\theta_{2D} = \pi/2 - \theta$$ And you have $r = f(\theta_{2D})$. So for your graph you'd have: $$r = \cos(\pi/2 - \theta) = \sin(\theta)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/547437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Combinatorics: Mathematic Recurrence I got $\textbf{(a)}$ $f(n) = c_12^n + c_2(-10)^n$ and solved $\textbf{(b)}$ similarly. However, (c)-(f) is not factorizable. How do I proceed?
c) $(A - 1 + \sqrt{6})(A - 1 - \sqrt{6})$
d) $(A + 4)(A - 2)(A - 6)$
e) $(A + 3)(A - 1)(A - 1)$
f) $(A + 1)(A + 1)(A + 1)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/547512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Showing the simplifying steps of this equality ... Can someone please show me how these are equivalent in steps$$\frac{(h_2^2 - h_3^2 )}{\dfrac{1}{h_3}-\dfrac{1}{h_2}}=h_2 h_3 (h_2+h_3)$$ I thought it simplifies to $$(h_2^2-h_3^2)(h_3-h_2)$$This would be much appreciated, I cant wrap my head around it.
Multiplying the numerator & the denominator by $h_2h_3\ne0,$ $$\frac{(h_2^2 - h_3^2 )}{\dfrac{1}{h_3}-\dfrac{1}{h_2}}= \frac{h_2h_3(h_2-h_3)(h_2+h_3)}{(h_2-h_3)}=h_2h_3(h_2+h_3)$$ assuming $h_2-h_3\ne0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/547578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
"Normalizing" a Function One of our homework problems for my linear algebra course this week is: On $\mathcal{P}_2(\mathbb{R})$, consider the inner product given by $$ \left<p, q\right> = \int_0^1 p(x)q(x) \, dx $$ Apply the Gram-Schmidt procedure to the basis $\left\{1, x, x^2\right\}$ to produce an orthonormal basis for $\mathcal{P}_2(\mathbb{R})$. Generating an orthogonal basis is trivial but I'm not quite sure how to go about normalizing the functions I get to "length one". For vectors, it's easy since you just divide by the magnitude but what about for functions?
The norm in this space is $$\|u\| = \sqrt{\langle u, u\rangle} = \sqrt{\int_0^1 \left(u(x)\right)^2 dx}$$ So once you have a basis of three functions, compute the norms (i.e. compute the integral of the square, and square root it) and divide the function by the norm. In particular, show that $$\left\| \frac{u}{\|u\|}\right\| = 1$$
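If you want to sanity-check the resulting basis, here is a rough sketch using exact rational arithmetic for the inner products (the helper names `inner` and `sub` are ad hoc, not standard):

```python
from fractions import Fraction
from math import sqrt, isclose

def inner(p, q):
    # <p,q> = integral of p(x)q(x) over [0,1]; polynomials as coefficient lists [c0, c1, ...]
    return sum(Fraction(a) * Fraction(b) / (i + j + 1)
               for i, a in enumerate(p) for j, b in enumerate(q))

def sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (n - len(p))
    q = list(q) + [Fraction(0)] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

basis = [[Fraction(1)], [Fraction(0), Fraction(1)], [Fraction(0), Fraction(0), Fraction(1)]]
ortho = []
for v in basis:                       # Gram-Schmidt, exact arithmetic
    for u in ortho:
        c = inner(v, u) / inner(u, u)
        v = sub(v, [c * a for a in u])
    ortho.append(v)

print(ortho)   # 1, x - 1/2, x^2 - x + 1/6
# divide each by its norm to normalize (the squared norms are 1, 1/12, 1/180)
onb = [[float(c) / sqrt(inner(p, p)) for c in p] for p in ortho]
```

The orthogonal polynomials come out as $1$, $x-\tfrac12$, $x^2-x+\tfrac16$, which normalize to $1$, $\sqrt3(2x-1)$, $\sqrt5(6x^2-6x+1)$.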
{ "language": "en", "url": "https://math.stackexchange.com/questions/547661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a Sequence Does Not Converge I have a sequence as such: $$\left( \frac{1+(-1)^k}{2}\right)_{k \in \mathbb{N}}$$ Obviously it doesn't converge, because it alternates between $0,1$ for all $k$. But how do I prove this fact? More generally, how do I prove that a sequence does not converge? Are there any neat ways other than "suppose, for contradiction, that the sequence converges. Then..."?
A sequence of real numbers converges if and only if it's a Cauchy sequence; that is, if for all $\epsilon > 0$, there exists an $N$ such that $$n, m \ge N \implies |a_n - a_m| < \epsilon$$ If you haven't shown or seen this, I'd strongly suggest trying to prove it (or look it up in pretty much any basic analysis book). No matter what we choose for $N$ here, however, just choose $n = N + 1$ and $m = N$, with $\epsilon = \frac{1}{2}$. Then $|a_n - a_m| = 1 > \epsilon$, a contradiction. Alternatively, show that if a sequence is convergent, then every subsequence is convergent to the same limit. Then choose appropriate subsequences.
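The subsequence route is easy to see numerically; a minimal sketch (illustrative only):

```python
# The even- and odd-indexed subsequences of a_k = (1 + (-1)^k)/2 settle on
# different values, so the sequence cannot converge.
a = lambda k: (1 + (-1) ** k) / 2
evens = {a(k) for k in range(0, 100, 2)}
odds = {a(k) for k in range(1, 100, 2)}
print(evens, odds)   # {1.0} {0.0}: two subsequences, two different limits
```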
{ "language": "en", "url": "https://math.stackexchange.com/questions/547741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Groups with a R.E. set of defining relations Reading around I found the following two assertion: 1) Every countable abelian group has a recursively enumerable set of defining relations. 2) Every countable locally finite group has a recursively enumerable set of defining relations. How can we prove this directly? We need to find an algorithm which halts exactly when we give as input a relation of the group. As concern abelian groups, I was thinking in this way: If $w$ is an element of the free group of countable rank, then: * *If $w$ is a commutator (I think we can check this in a finite number of steps), halt. *If $w$ is not a commutator, then? And for countable locally finite groups?
I don't believe these claims. Notice that, for any set $X$ of prime numbers, we can build a countable, abelian, locally finite group in which $X$ is the set of primes that occur as orders of elements. Just take the direct sum (not product, because I want it to be countable and locally finite) of cyclic groups $\mathbb Z/p$ for all $p\in X$. By varying $X$, we get uncountably many non-isomorphic groups of this sort. But there are only countably many recursively enumerable sets, so I don't see how you could have r.e. sets of relations to define all these groups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/547862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Field and Algebra What is the difference between "algebra" and "field"? In term of definition in Abstract algebra. (In probability theory, sigma-algebra is a synonym of sigma-field, does this imply algebra is the same as field?)
An algebra is a ring that has the added structure of a field of scalars and a coherent (see below) multiplication. Some examples of algebras: * *$M_n(F)$, where $F$ is any field. *$C(T)$, continuous real (or complex)-valued functions on a topological space $T$ (here the scalars could be either the real or the complex numbers). *$B(X)$, bounded operators over a Banach space $X$, with complex (or real) scalars. *$F[x]$, polynomials over a field $F$. A field, on the other hand, is a commutative ring where every nonzero element is invertible (i.e. a commutative division ring). Note: "coherent multiplication" means that given $x,y$ in the algebra and $\alpha,\beta$ in the field, $$ \alpha(x+y)=\alpha x+\alpha y,\ \ (\alpha+\beta)x=\alpha x+\beta x, \ \ (\alpha\beta)x=\alpha(\beta x),\ \ (\alpha x)y=x(\alpha y). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/547947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
How prove this two function which is bigger? let function $$f_{n}(x)=\left(1+x+\dfrac{1}{2!}x^2+\cdots+\dfrac{1}{n!}x^n\right)\left(\dfrac{x^2}{x+2}-e^{-x}+1\right)e^{-x},x\ge 0,n\in N^{+}$$ if $\lambda_{1},\lambda_{2},\mu_{1},\mu_{2}$ is postive numbers,and such $\mu_{1}+\mu_{2}=1$ Question: following which is bigger: $$f_{n}\left[(\lambda_{1}\mu_{1}+\lambda_{2}\mu_{2})\left(\dfrac{\mu_{1}}{\lambda_{1}}+\dfrac{\mu_{2}}{\lambda_{2}}\right)\right], f_{n}\left[\dfrac{(\lambda_{1}+\lambda_{2})^2}{4\lambda_{1}\lambda_{2}}\right]$$ My try: since $$e^x=1+x+\dfrac{1}{2!}x^2+\cdots+\dfrac{1}{n!}x^n+\cdots+$$ But this problem is $$1+x+\dfrac{1}{2!}x^2+\cdots+\dfrac{1}{n!}x^n$$ so How prove it?Thank you This problem is from http://tieba.baidu.com/p/2682214392
If you are willing to rely on the problem setter to guarantee that there is a solution, you can do the following: If $\mu_1=\mu_2=\frac 12$ they are equal. If $\mu_1=1, \mu_2=0$, the first is $f_n(1)$ and the second is $f_n(\frac 14(\frac {\lambda_1}{\lambda_2}+ \frac {\lambda_2}{\lambda_1}+2))$, which is $f_n$ of something greater than $1$. Provided $f_n$ is increasing on $[1,\infty)$, the second is larger. We have not shown that this is true for all $n, \lambda$'s and $\mu$'s, but if there is a single answer, it must be this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/548033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding numbers with a prime number of divisors? The problem is to find the number of numbers in [l,r] that have a prime number of divisors. Example: for $l=1,r=10$ the answer is $6$, since $2,3,4,5,7,9$ are the numbers having a prime number of divisors. The constraints are $l,r\le10^{12}$ and $r-l\le10^6$. Can someone help me with the fastest solution? Here is my approach: I stored the primes 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37 in an array. Then:

    loop through i = 1 to 1000000
        if i is prime:
            for j in primes:
                calculate val = i^(j-1)
                add val to the answer list

For each query [l,r], the answer is (all primes in [l,r]) + (the numbers in the list that lie in [l,r]), but finding all primes in [l,r] takes too much time. Any suggestion?
A number $n$ has a prime number of divisors exactly when $n = p^{\,q-1}$ for primes $p$ and $q$: since $d$ is multiplicative and $d(p^a)=a+1$, a number with two distinct prime factors has a composite divisor count. So your answer is simply the count of prime numbers in $[l,r]$ (the case $q=2$) plus the numbers of the form $p^{\,q-1}$ with $q$ an odd prime, i.e. $p^2, p^4, p^6, p^{10},\dots$ See this for more details. By the way, I am also trying this problem of codechef...but not AC yet :(
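For small ranges the characterization is easy to brute-force; a sketch (the helper names `d` and `is_prime` are ad hoc):

```python
def d(n):                 # number of divisors, brute force
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def is_prime(n):
    return n >= 2 and all(n % k for k in range(2, int(n ** 0.5) + 1))

good = [n for n in range(1, 11) if is_prime(d(n))]
print(good)               # [2, 3, 4, 5, 7, 9], matching the example

# each such n is p^(q-1) with q = d(n) prime: the primes (q = 2), plus 4 = 2^2 and 9 = 3^2
for n in good:
    assert any(p ** (d(n) - 1) == n for p in range(2, n + 1))
```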
{ "language": "en", "url": "https://math.stackexchange.com/questions/548299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Determine run-time of an algorithm Probably a stupid question but I don't get it right now. I have an algorithm with an input n. It needs n + (n-1) + (n-2) + ... + 1 steps to finish. Is it possible to give a runtime estimation in Big-O notation?
Indeed, it is. Since we have $$n+(n-1)+(n-2)+\cdots+1=\frac{n(n+1)}2,$$ then you should be able to show rather readily that the runtime will be $\mathcal{O}(n^2)$.
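A quick numeric illustration of the closed form and the quadratic growth (illustrative only):

```python
for n in (1, 10, 1000):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2   # the closed form n(n+1)/2
ratios = [sum(range(1, n + 1)) / n ** 2 for n in (10, 100, 10000)]
print(ratios)   # the ratio tends to 1/2, i.e. the step count grows like n^2
```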
{ "language": "en", "url": "https://math.stackexchange.com/questions/548394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Automorphism of $\mathbb{Z}\rtimes\mathbb{Z}$ I'm looking for a description of $\mathrm{Aut}(\mathbb{Z}\rtimes\mathbb{Z})$. I've tried an unsuccessfully combinatorical approac, does anymore have some hints? Thank you.
Write the group as $G=\langle x,t\mid txt^{-1}=x^{-1}\rangle$. Exercise: the automorphism group is generated by the automorphisms $u,s,z$, where $$u(x)=x,\; u(t)=tx,\quad s(x)=x^{-1},\; s(t)=t, \quad z(x)=x,\; z(t)=t^{-1};$$ check that it is isomorphic to the direct product of an infinite dihedral group (generated by $u,s$) and the cyclic group of order two $\langle z\rangle$. Deduce that the outer automorphism group is a direct product of two cyclic groups of order two, generated by the representatives of $u$ and $z$. Hint: first check that $\langle x\rangle$ is a characteristic subgroup of $G$ (warning: it is not the derived subgroup).
{ "language": "en", "url": "https://math.stackexchange.com/questions/548457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is $B\otimes_A C$ a ring, where $B,C$ are $A$-algebra Given $B,C$ be two $A$-algebra, it is said that $B\otimes_AC$ has a ring structure, with the multiplication being defined as: $$ (b_1\otimes c_1)\cdot(b_2\otimes c_2):=(b_1b_2)\otimes (c_1c_2). $$ However I don't see an easy way to check it is well-defined. For, given another pair of representatives: $$ (b_1'\otimes c_1')\cdot(b_2'\otimes c_2'):=(b_1'b_2')\otimes (c_1'c_2') $$ where $$ b_1\otimes c_1=b_1'\otimes c_1'\quad\text{and}\quad b_2\otimes c_2=b_2'\otimes c_2'. $$ How to verify that $$ (b_1b_2)\otimes (c_1c_2)=(b_1'b_2')\otimes (c_1'c_2')? $$
That $B$ is an $A$-algebra means that it comes equipped with maps $u_B : A \to B$ (unit) and $m_B : B \otimes_A B \to B$ (multiplication) of $A$-modules such that certain diagrams commute (write them down!). Similarly for $C$. But then $B \otimes_A C$ has the following $A$-algebra structure: The unit is $A \cong A \otimes_A A \xrightarrow{u_B \otimes u_C} B \otimes_A C$. The multiplication is $(B \otimes_A C) \otimes_A (B \otimes_A C) \cong (B \otimes_A B) \otimes_A (C \otimes_A C) \xrightarrow{m_B \otimes m_C} B \otimes_A C.$ So it is just induced by functoriality of the tensor product, as well as the usual associativity and symmetry isomorphisms. In fact, this procedure works in any symmetric monoidal category. No need to calculate with elements here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/548551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
transition kernel I've got some trouble with transition kernels. We look at a Markov process with state space $(S,\mathcal{S})$ and initial distribution $\mu^0$. We have a transition kernel $P:S\times \mathcal{S}\to[0,1]$. Now I have to show that $P^n(x,B):=\int_{S}P^{n-1}(y,B)P(x,dy)$ for $n\geq 2$, with $P^1:=P$, is also a transition kernel. How do I prove that $P^n$ is measurable for every fixed $x\in S$? Can I write $\int_{S}P^{n-1}(y,B)P(x,dy)=\int_{S}P^{n-1}(y,B)dP(x,y)$ and then look at the measurable function $f(x):=1_{A_1\times A_2}$, so that $f(x)P^{n-1}(y,B)P(x,dy)$ is measurable? I would take a family $f_n$ of elementary functions with $f_n\to f$ for $n\to\infty$. I'm not really sure; measure theory wasn't my best course. I hope anyone is able to understand me, because I'm not really able to write English in a nice way.
We have to show that for each $n\geqslant 1$, * *if $x\in S$ is fixed, then the map $S\in\mathcal{S}\mapsto P^n(x,S)$ is a probability measure, and *if $B\in\cal S$ is fixed, the map $x\in S\mapsto P^n(x,B)$ is measurable. We proceed by induction. We assume that $P^{n-1}$ is a transition kernel. If $x\in S$ is fixed, since for all $y$ we have $P^{n-1}(y,S)=1$, we have $P^n(x,S)=1$. We have $P^n(x,\emptyset)=0$ and if $S_i$ are pairwise disjoint measurable sets we get what we want using linearity of the integral and a monotone convergence argument. We now show the second bullet. It indeed follows from an approximation argument. Fix $B\in \cal S$. Since $y\mapsto P^{n-1}(y,B)$ is non-negative and measurable, there exists a non-decreasing sequence of measurable functions $(g_k)$ which have the form $\sum_{i=1}^{N_k}c_{k,i}\chi_{A_{k,i}}$, and such that $g_k(y)\uparrow P^{n-1}(y,B)$. We have by a monotone convergence argument that $$P^n(x,B)=\lim_{k\to\infty}\sum_{i=1}^{N_k}c_{k,i}\int \chi_{A_{k,i}}(y)P(x,dy)=\lim_{k\to\infty}\sum_{i=1}^{N_k}c_{k,i}P(x,A_{k,i}).$$ We wrote the map $x\mapsto P(x,B)$ as a pointwise limit of a sequence of measurable functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/548633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$x_{n+1}\le x_n+\frac{1}{n^2}$ for all $n\ge 1$; does $(x_n)$ converge? If $(x_n)$ is a sequence of nonnegative real numbers such that $x_{n+1}\le x_n+\frac{1}{n^2}$ for all $n\ge 1$, does $(x_n)$ converge? Can someone help me please?
Since $$\sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2) = \frac{\pi^2}{6} < \infty,$$ $x_{n}$ is bounded from above by $x_1 + \frac{\pi^2}{6}$. Since $x_n$ is also non-negative, it is a bounded sequence, and hence both its lim sup and lim inf exist. Let $L$ be the lim inf of $x_n$. For any $\epsilon > 0$, pick an $N$ such that $\displaystyle \sum_{n=N}^\infty \frac{1}{n^2} < \frac{\epsilon}{2}$. By definition of $L$, there is an $M > N$ such that $x_M < L + \frac{\epsilon}{2}$. For any $n > M$, we have $$x_n \;\;<\;\; x_M + \sum_{k=M}^{n-1} \frac{1}{k^2} \;\;<\;\; L + \frac{\epsilon}{2} + \sum_{k=M}^{\infty}\frac{1}{k^2} \;\;<\;\; L + \epsilon $$ This implies $$\limsup_{n\to\infty} x_n \le L + \epsilon$$ Since $\epsilon$ is arbitrary, we get $$\limsup_{n\to\infty} x_n \le L = \liminf_{n\to\infty} x_n \implies \limsup_{n\to\infty} x_n = \liminf_{n\to\infty} x_n = L $$ i.e. the limit exists and equals $L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/548705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Find transformation function when densities are known I need some help with the following probability/statistics problem: Let $X$ be a continuous random variable with density $f_{X}(x) = \begin{cases} 2x\,\mathrm{e}^{-x^{2}} & x > 0 \\ 0 & \text{elsewhere} \end{cases}$. Find the transformation function $g$ such that the density of $Y=g(X)$ is $$f_{Y}(y) = \begin{cases} {\frac{1}{2\sqrt{y}}} & 0 < y < 1 \\ 0 & \text{elsewhere} \end{cases}$$ Any help would be much appreciated.
Find the transformation function $g$ such that the density of $Y=g(X)$ is... Assume that $Y=g(X)$ and that $g$ is increasing, then the change of variable theorem yields $$ g'(x)f_Y(g(x))=f_X(x). $$ In the present case, one asks that, for every positive $x$, $$ \frac{g'(x)}{2\sqrt{g(x)}}=2x\mathrm e^{-x^2}, $$ thus, $$ g(x)=(c-\mathrm e^{-x^2})^2. $$ Since $g$ sends $(0,+\infty)$ to $(0,1)$, one gets $c=1$ and a solution is $$ Y=(1-\mathrm e^{-X^2})^2. $$ Assuming instead that $g$ is decreasing, one gets $$ g'(x)f_Y(g(x))=-f_X(x), $$ hence $$ g(x)=(c+\mathrm e^{-x^2})^2, $$ and the same argument shows that $c=0$ hence another solution is $$ Y=\mathrm e^{-2X^2}. $$ This new solution should not come as a surprise since $1-\mathrm e^{-X^2}$ and $\mathrm e^{-X^2}$ are both uniformly distributed on $(0,1)$ and the density of $Y$ is nothing but the density of the square of a uniform random variable on $(0,1)$. Actually, there exists tons of different functions $g$ such that $Y=g(X)$ has the desired distribution hence the order to "Find the transformation function such that..." is odd.
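One can sanity-check the increasing solution numerically; the sketch below assumes, as in the computation above, that $f_X(x)=2x\mathrm e^{-x^2}$ on $x>0$ (so $X^2$ is exponential with rate $1$):

```python
from math import exp, log, sqrt, isclose

F_X = lambda x: 1 - exp(-x * x)      # CDF of X when f_X(x) = 2x*exp(-x^2), x > 0

def F_Y(t):
    # Y = (1 - exp(-X^2))^2 is increasing in X, so {Y <= t} = {X <= x_t}
    x_t = sqrt(-log(1 - sqrt(t)))    # solves (1 - exp(-x^2))^2 = t
    return F_X(x_t)

for t in (0.1, 0.25, 0.5, 0.9):
    # the target density 1/(2*sqrt(y)) on (0,1) has CDF sqrt(t)
    assert isclose(F_Y(t), sqrt(t))
print("CDF of Y matches sqrt(t)")
```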
{ "language": "en", "url": "https://math.stackexchange.com/questions/548796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Topology: Example of a compact set but its closure not compact Can anyone gives me an example of a compact subset such that its closure is not compact please? Thank you.
Write $\tau$ for the standard topology on $\Bbb R$. Consider now $\tau_0=\{U\cup\{0\}\mid U\in\tau\}\cup\{\varnothing\}$. It's not hard to verify that $\tau_0$ is a topology on $\Bbb R$. It's also not hard to see that $\overline{\{0\}}=\Bbb R$. However one can easily engineer an open cover without a finite subcover.
{ "language": "en", "url": "https://math.stackexchange.com/questions/548865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 6, "answer_id": 1 }
How to calculate simple trigonometric problem I tried for an hour or so to solve this but I can't show the way to the solution. How does one solve the below problem? $\tan(\sin^{-1}(1/3))$? Is the solution periodic because it is a tangent?
Please look in the first row of the Wikipedia table on Relationships between trigonometric functions. There you find the relation that $$\tan(\arcsin(x))=\frac{x}{\sqrt{1-x^2}}$$ This gives you immediately the correct result of $$\frac{1}{2\sqrt{2}}$$ Now, your task is to understand where this relation comes from. As for the periodicity question: $\sin^{-1}$ denotes the principal value in $[-\pi/2,\pi/2]$, so $\tan(\sin^{-1}(1/3))$ is a single number, not a periodic family of values. As MichaelE2 pointed out, an alternative would be to ask WolframAlpha
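A one-line numerical check of the identity and the value (illustrative):

```python
from math import asin, tan, sqrt, isclose

assert isclose(tan(asin(1 / 3)), (1 / 3) / sqrt(1 - 1 / 9))  # the identity
assert isclose(tan(asin(1 / 3)), 1 / (2 * sqrt(2)))
print(tan(asin(1 / 3)))
```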
{ "language": "en", "url": "https://math.stackexchange.com/questions/548984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Funtions between sets If $A$ is a set with $m$ elements and $B$ a set with $n$ elements, how many functions are there from $A$ to $B$. If $m=n$ how many of them are bijections? I got $n^m$ for my first answer. I wasn't sure for the bijection bit is it just $n$?
The number of functions from $A$ to $B$ is equal to the number of lists of $m$ elements where each element of the list is an element of $B$. Since we have $n$ choices for each, the answer is $n^m$. For the second one we have $n=m$. Call the elements of $A$: $a_1,a_2,\dots,a_n$. Therefore the number of bijections from $A$ to $B$ is the number of lists of the form $b_1,b_2,\dots,b_n$ with each $b_i \in B$, such that if $x\neq y$ then $b_x \neq b_y$. But this is just the number of permutations of $n$ elements, which is $n!$, because we have $n$ choices for the first element, $n-1$ choices for the second, and in general $n-k+1$ choices for the $k$-th element.
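For small $m, n$ the two counts can be verified by brute force; a sketch (illustrative):

```python
from itertools import product, permutations

m, n = 3, 3
B = range(n)
funcs = list(product(B, repeat=m))        # a function A -> B as a length-m list
assert len(funcs) == n ** m               # 3^3 = 27
bijections = [f for f in funcs if len(set(f)) == n]
assert len(bijections) == len(list(permutations(B)))   # 3! = 6
print(len(funcs), len(bijections))
```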
{ "language": "en", "url": "https://math.stackexchange.com/questions/549213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is a repeating decimal a rational number? $$\frac{1}{3}=.33\bar{3}$$ is a rational number, but the $3$ keeps on repeating indefinitely (infinitely?). How is this a ratio if it shows this continuous pattern instead of being a finite ratio? I understand that $\pi$ is irrational because it extends infinitely without repetition, but I am confused about what makes $1/3=.3333\bar{3}$ rational. It is clearly repeating, but when you apply it to a number, the answers are different: $.33$ and $.3333$ are part of the same concept, $1/3$, yet: $.33$ and $.3333$ are different numbers: $.33/2=.165$ and $.3333/2=.16665$, yet they are both part of $1/3$. How is $1/3=.33\bar{3}$ rational?
$$\begin{align} 0.3333333333333\ldots &= 0.3 +0.03 +0.003 +0.0003+ \ldots\\ &=\frac{3}{10} + \frac{3}{100} + \frac{3}{1000}+ \frac{3}{10000} +\ldots \end{align}$$ If you know the sum of a geometric sequence, that is: $$a+aq+aq^2+aq^3+\ldots = \frac{a}{1-q} \quad\text{ if $|q| < 1$}$$ you can use it to conclude that for $q = \frac{1}{10}$: $$\frac{3}{10} + \frac{3}{100} + \frac{3}{1000}+ \frac{3}{10000} +\ldots =\frac{\frac{3}{10}}{1-\frac{1}{10}}=\frac{1}{3} $$
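The closed form and the partial sums can be checked with exact rational arithmetic; a sketch (illustrative):

```python
from fractions import Fraction

a, q = Fraction(3, 10), Fraction(1, 10)
assert a / (1 - q) == Fraction(1, 3)      # the closed form a/(1-q)

# partial sums 0.3, 0.33, 0.333, ... approach 1/3, with the gap shrinking to 0
partial = sum(Fraction(3, 10 ** k) for k in range(1, 20))
assert Fraction(1, 3) - partial == Fraction(1, 3 * 10 ** 19)
print(float(partial))
```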
{ "language": "en", "url": "https://math.stackexchange.com/questions/549254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 1 }
Probability task (Find probability that the chosen ball is white.) I have this task in my book: First box contains $10$ balls, from which $8$ are white. Second box contains $20$ from which $4$ are white. From each box one ball is chosen. Then from previously chosen two balls, one is chosen. Find probability that the chosen ball is white. The answer is $0.5$. Again I get the different answer: There are four possible outcomes when two balls are chosen: $w$ -white, $a$ - for another color $(a,a),(w,a),(a,w),(w,w)$. Each outcome has probability: $\frac{2}{10} \cdot \frac{16}{20}; \frac{8}{10} \cdot \frac{16}{20}; \frac{2}{10} \cdot \frac{4}{20}; \frac{8}{10} \cdot \frac{4}{20};$ In my opinion the probability that the one ball chosen at the end is white is equal to the sum of last three probabilities $\frac{8}{10} \cdot \frac{16}{20} + \frac{2}{10} \cdot \frac{4}{20} + \frac{8}{10} \cdot \frac{4}{20}=\frac{21}{25}$. Am I wrong or there is a mistake in the answer in the book?
In case of $(w,a)$ or $(a,w)$ you need to consider that one of these two balls is chosen (randomly as by coin tossing, we should assume). Therefore these cases have to be weighted by a factor of $\frac 12$.
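The weighting by $\frac12$ can be checked by enumerating the four outcomes with exact fractions (variable names are ad hoc):

```python
from fractions import Fraction

pw1, pw2 = Fraction(8, 10), Fraction(4, 20)   # P(white) for box 1 and box 2
half = Fraction(1, 2)
total = Fraction(0)
for w1 in (1, 0):                 # 1 = white drawn from box 1
    for w2 in (1, 0):             # 1 = white drawn from box 2
        p_draw = (pw1 if w1 else 1 - pw1) * (pw2 if w2 else 1 - pw2)
        # the final ball is each of the two drawn balls with probability 1/2
        total += p_draw * (half * w1 + half * w2)
assert total == Fraction(1, 2)
print(total)   # 1/2, matching the book's answer
```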
{ "language": "en", "url": "https://math.stackexchange.com/questions/549329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Intersection of $\{ [n\sqrt{2}]\mid n \in \mathbb{N}^* \}$ and $\{ [n(2+\sqrt{2})]\mid n \in \mathbb{N}^* \}$ Find the intersection of sets $A$ and $B$ where $$A = \{ [n\sqrt{2}]\mid n \in \mathbb{N}^* \}$$ $$B = \{ [n(2+\sqrt{2})]\mid n \in \mathbb{N}^* \}.$$ ([$x$] is the integer part of $x$) Using the computer, we found common elements. Does anyone have an idea to solve?
Suppose that there exist $m,n\in\mathbb{N^*}$ such that $[n\sqrt{2}]=[(2+\sqrt{2})m]=t\in\mathbb{N^*}.$ Then $t<n\sqrt{2}<t+1,\quad t<(2+\sqrt{2})m<t+1\implies \dfrac{t}{\sqrt{2}}<n<\dfrac{t+1}{\sqrt{2}},\quad \dfrac{t}{2+\sqrt{2}}<m<\dfrac{t+1}{2+\sqrt{2}}\stackrel{(+)}{\implies} t<n+m<t+1$ (adding the two chains, using $\frac{1}{\sqrt{2}}+\frac{1}{2+\sqrt{2}}=1$), which is false$\implies A\cap B=\emptyset.$
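The disjointness can also be tested exactly in integer arithmetic, using $\lfloor n\sqrt2\rfloor=\operatorname{isqrt}(2n^2)$ and $\lfloor n(2+\sqrt2)\rfloor = 2n+\operatorname{isqrt}(2n^2)$; a sketch (the bound $N$ is arbitrary):

```python
from math import isqrt

N = 100_000
A = {isqrt(2 * n * n) for n in range(1, N + 1)}          # floor(n*sqrt(2))
B = {2 * n + isqrt(2 * n * n) for n in range(1, N + 1)}  # floor(n*(2+sqrt(2)))
assert not (A & B)                                # no common elements up to n = N
assert sorted(A | B)[:10] == list(range(1, 11))   # together they tile 1, 2, 3, ...
print("disjoint up to n =", N)
```

That the two sequences partition the positive integers is Beatty's theorem, since $\frac1{\sqrt2}+\frac1{2+\sqrt2}=1$.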
{ "language": "en", "url": "https://math.stackexchange.com/questions/549395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
How to find integrals using limits? How to find integrals using limits? The question arise when I see that to find the derivative of a function $f(x)$ we need to find: $$\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$$ and it works fine for finding derivatives of every function you can give. But is there a similar approach to find integrals using limits? Thanks!
Yes. The simplest way to write $\int_a^b f(x)\, dx$ as a limit is to divide the range of integration into $n$ equal intervals, estimate the integral over each interval as the interval's width times the value of $f$ at either the start, middle, or end of the interval, and sum those. Mathematically, $$\int_a^b f(x)\, dx = \lim_{n \to \infty} \frac{b-a}{n} \sum_{k=0}^{n-1} f\left(a+(k+c)\frac{b-a}{n}\right) $$ where $c = 0, 1/2, 1$ for the function to be evaluated at, respectively, the start, middle, or end of the subinterval. For a reasonable class of functions, this limit converges to the integral. Generalizations of this have provided many careers in mathematics.
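Here is a minimal sketch of that limit in practice, approximating $\int_0^2 x^2\,dx = 8/3$ (the test function and $n$ are arbitrary choices):

```python
def riemann(f, a, b, n, c):
    # (b-a)/n * sum of f(a + (k+c)(b-a)/n); c = 0, 1/2, 1 for left/mid/right sums
    return (b - a) / n * sum(f(a + (k + c) * (b - a) / n) for k in range(n))

f = lambda x: x * x
for c in (0.0, 0.5, 1.0):
    assert abs(riemann(f, 0.0, 2.0, 100_000, c) - 8 / 3) < 1e-3
print(riemann(f, 0.0, 2.0, 100_000, 0.5))
```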
{ "language": "en", "url": "https://math.stackexchange.com/questions/549479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Can we have $x^n\equiv (x+1)^n \pmod m$ for large enough $n$? $x^n\equiv (x+1)^n$ For what values of m and n can we find an x that solves this?
It is not clear from your question what is fixed and what varies. To illustrate: suppose you want a solution with $n=10$ --- is that a large enough $n$? Very well, then --- pick your favorite $x$, say, $x=42$. Then calculate $43^{10}-42^{10}$ and call it $Q$. Then if $m$ is $Q$, or any factor of $Q$, you will have $(x+1)^n\equiv x^n\pmod m$. If that's not what you want, please edit your question to clarify. EDITing in things from the comments: * *For fixed $n\gt3$ it is probably hard to find a useful characterization of those $m$ for which there is a solution. *For fixed $m$, let $n=\phi(m)$, then $x^n\equiv(x+1)^n\equiv1\pmod m$ provided $x$ and $x+1$ are relatively prime to $m$. *There are (lots of) solutions not of the form of item 2.
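The construction in the first paragraph is easy to test; a sketch (with $n=10$, $x=42$ as in the answer; the search bound for a small factor is an arbitrary choice):

```python
n, x = 10, 42
Q = (x + 1) ** n - x ** n
assert pow(x + 1, n, Q) == pow(x, n, Q)               # m = Q itself works
m = next(d for d in range(2, 10_000) if Q % d == 0)   # a small factor of Q
assert pow(x + 1, n, m) == pow(x, n, m)               # any factor of Q works too
print(Q, m)
```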
{ "language": "en", "url": "https://math.stackexchange.com/questions/549541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Any group of order $12$ must contain a normal Sylow subgroup This is part of a question from Hungerford's section on Sylow theorems, which is to show that any group with order 12, 28, 56, or 200 has a normal Sylow subgroup. I am just trying the case for $|G| = 12$ first. I have read already that one can't conclude in general that $G$ will have a normal Sylow 2-subgroup or a normal Sylow 3-subgroup, so I am a bit confused on how to prove this. Here is my start, but it's not that far. Let $n_2, n_3$ denote the number of Sylow $2$- and $3$-subgroups, respectively. Then: $n_2 \mid 3$ and $n_2 \equiv 1 (\operatorname{mod } 2)$, so $n_2 = 1$ or $3$. Similarly, $n_3 \mid 4$ and $n_3 \equiv 1 (\operatorname{mod } 3)$, so $n_3 = 1$ or $4$. I was thinking to assume that $n_2 \ne 1$, and prove that in this case, $n_3$ must equal $1$. But I don't see how to do this. One could also do it the other way: if $n_3 \ne 1$, then show $n_2 = 1$.
Hint: If $n_3 = 4$, then how many elements of order 3 are there in the group? How many elements does that leave for your groups of order 4?
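The count behind the hint can be checked concretely in $A_4$, a group of order 12 with $n_3=4$; a brute-force sketch (the helper names are ad hoc):

```python
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def sign(p):                       # +1 for even permutations (inversion count)
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def order(p):
    e, q, k = tuple(range(len(p))), p, 1
    while q != e:
        q, k = compose(q, p), k + 1
    return k

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
order3 = [p for p in A4 if order(p) == 3]
print(len(A4), len(order3))   # 12 and 8
# 8 elements of order 3 plus the identity leave only 3 elements:
# exactly enough for a single (hence normal) Sylow 2-subgroup of order 4
```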
{ "language": "en", "url": "https://math.stackexchange.com/questions/549806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Bayes' Theorem with multiple random variables I'm reviewing some notes regarding probability, and the section regarding Conditional Probability gives the following example: $P(X,Y|Z)=\frac{P(Z|X,Y)P(X,Y)}{P(Z)}=\frac{P(Y,Z|X)P(X)}{P(Z)}$ The middle expression is clearly just the application of Bayes' Theorem, but I can't see how the third expression is equal to the second. Can someone please clarify how the two are equal?
We know $$P(X,Y)=P(X)P(Y|X)$$ and $$P(Y,Z|X)=P(Y|X)P(Z|X,Y)$$ (to understand this, note that if you ignore the fact that everything is conditioned on $X$ then it is just like the first example). Therefore \begin{align*} P(Z|X,Y)P(X,Y)&=P(Z|X,Y)P(X)P(Y|X)\\ &=P(Y,Z|X)P(X) \end{align*} Which derives the third expression from the second. (However I don't have any good intuition for what the third expression means. Does anyone else?)
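The identity can be checked on a toy joint distribution; a sketch with made-up weights (all names and numbers are illustrative):

```python
from fractions import Fraction
from itertools import product

# a toy joint distribution P(X,Y,Z) on {0,1}^3 with arbitrary weights summing to 1
w = {s: Fraction(k + 1, 36) for k, s in enumerate(product((0, 1), repeat=3))}
assert sum(w.values()) == 1

def P(pred):
    return sum(p for s, p in w.items() if pred(*s))

x, y, z = 1, 0, 1
Pxyz = P(lambda X, Y, Z: (X, Y, Z) == (x, y, z))
Pxy = P(lambda X, Y, Z: (X, Y) == (x, y))
Px = P(lambda X, Y, Z: X == x)
# P(Z|X,Y)*P(X,Y) and P(Y,Z|X)*P(X) both collapse to the joint P(X,Y,Z)
assert (Pxyz / Pxy) * Pxy == (Pxyz / Px) * Px == Pxyz
print(Pxyz)
```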
{ "language": "en", "url": "https://math.stackexchange.com/questions/549887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 3, "answer_id": 0 }
Proving that $A \cap (A \cup B) = A$ . Please check solution For homework I need to prove the folloving: $$ A \cap (A \cup B) = A $$ I did that in the following manner: $$ A \cap (A \cup B)\\ x \in A \land (x \in A \lor x \in B)\\ (x \in A\ \land x \in A) \lor (x \in A \land x \in B)\\ x \in A \lor (x \in A \land x \in B)\\ \text{now I have concluded that x belongs to A,}\\ \text{because of the boolean expression, because I get something like this:}\\ x \in A \lor (x \in A \land x \in B) = x \in A\\ \text{when $x \in A$ is true the whole expression will be true}\\ $$ Did I do it right??? thanks
Using fundamental laws of Set Algebra $$\begin{cases}A \cap (A \cup B) & Given\\ =(A \cap A) \cup (A \cap B) & \text{Distributive Law}\\ =A \cup (A \cap B) & A \cap A = A\\ =A & A \cap B \subset A \end{cases}$$
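The absorption law is also easy to verify exhaustively over a small universe; a sketch (the universe $U$ is arbitrary):

```python
from itertools import combinations

U = {1, 2, 3, 4}
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]
for A in subsets:
    for B in subsets:
        assert A & (A | B) == A          # the absorption law
print("holds for all", len(subsets) ** 2, "pairs")
```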
{ "language": "en", "url": "https://math.stackexchange.com/questions/550076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Prove that $ \int_{0}^{1}\frac{\sqrt{1-x^2}}{1-x^2\sin^2 \alpha}dx = \frac{\pi}{4\cos^2 \frac{\alpha}{2}}$. Prove that $\displaystyle \int_{0}^{1}\frac{\sqrt{1-x^2}}{1-x^2\sin^2 \alpha}dx = \frac{\pi}{4\cos^2 \frac{\alpha}{2}}$. $\bf{My\; Try}::$ Let $x = \sin \theta$, Then $dx = \cos \theta d\theta$ $\displaystyle = \int _{0}^{1}\frac{\cos \theta }{1-\sin^2 \theta \cdot \sin^2 \alpha}\cdot \cos \theta d\theta = \int_{0}^{1}\frac{\cos ^2 \theta }{1-\sin^2 \theta \cdot \sin ^2 \alpha}d\theta$ $\displaystyle = \int_{0}^{1}\frac{\sec^2 \theta}{\sec^4 \theta -\tan^2 \theta \cdot \sec^2 \theta \cdot \sin^2 \alpha}d\theta = \int_{0}^{1}\frac{\sec^2 \theta }{\left(1+\tan ^2 \theta\right)^2-\tan^2 \theta \cdot \sec^2 \theta\cdot \sin^2 \alpha}d\theta$ Let $\tan \theta = t$ and $\sec^2 \theta d\theta = dt$ $\displaystyle \int_{0}^{1}\frac{1}{(1+t^2)^2-t^2 (1+t^2)\sin^2 \alpha}dt$ Now How can I solve after that Help Required Thanks
Substitute $x=\frac{t}{\sqrt{\cos a+t^2}}$. Then $dx= \frac{\cos a\, dt}{(\cos a+t^2)^{3/2}}$ and, averaging the integrand with its image under $t\to 1/t$ (which leaves the integral over $(0,\infty)$ unchanged), \begin{align}\int_{0}^{1}\frac{\sqrt{1-x^2}}{1-x^2\sin^2 a}dx =& \int_{0}^{\infty}\frac{\sqrt{\cos a}\ \frac1{t^2} }{(t^2+ \frac1{t^2} )\cos a +(1+\cos^2a)}\,dt\\ =& \ \frac{\sqrt{\cos a}}2\int_{0}^{\infty}\frac{d\left(t-\frac1{t}\right)}{(t-\frac1{t} )^2\cos a +(1+\cos a)^2}\\ =& \ \frac{\pi}{2(1+\cos a)} = \frac{\pi}{4\cos^2 \frac a2}\\ \end{align}
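A numerical sanity check of the closed form, using a plain midpoint Riemann sum (the square-root behaviour of the integrand at $x=1$ is mild enough for this crude sum to converge; the sample values of $\alpha$ are arbitrary):

```python
import math

def lhs(alpha, n=100000):
    """Midpoint Riemann sum of sqrt(1-x^2) / (1 - x^2 sin^2(alpha)) on [0, 1]."""
    h = 1.0 / n
    s2 = math.sin(alpha) ** 2
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        total += math.sqrt(1.0 - x * x) / (1.0 - x * x * s2)
    return total * h

for alpha in (0.3, 0.7, 1.1):
    rhs = math.pi / (4.0 * math.cos(alpha / 2.0) ** 2)
    assert abs(lhs(alpha) - rhs) < 1e-4
```

At $\alpha = 0$ both sides reduce to the familiar quarter-circle area $\pi/4$, which is a good first case to try.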
{ "language": "en", "url": "https://math.stackexchange.com/questions/550145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Is the empty set always part of the result of an intersection? From very basic set theory we have that: "The empty set is inevitably an element of every set." Then, is it correct to assume that the intersection of $A = \{1, 2, 3\}$ and $B = \{3, 4, 5\}$ is actually $\{\emptyset, 3\}$, and not just $\{3\}$? Thank you
While it is true that for all sets $A$ we have $\emptyset \subseteq A$, it is not true that for all sets $A$, $\emptyset \in A$. The intersection of $A$ and $B$ is defined by $A \cap B = \{x \mid x \in A$ and $x \in B\}$. What is important here is the difference between being a subset of a set and being an element of it. Do you now see why the empty set is not an element of $A\cap B$, with reference to your example?
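The subset-versus-element distinction can be checked concretely on the sets from the question (in Python, `<=` on sets is the subset test; a `frozenset` stands in for $\emptyset$ in the membership test because plain sets are not hashable):

```python
A = {1, 2, 3}
B = {3, 4, 5}

# The intersection contains exactly the common elements.
assert A & B == {3}

# The empty set is a subset of every set, including the intersection ...
assert set() <= A and set() <= B and set() <= (A & B)

# ... but it is not an element of the intersection.
assert frozenset() not in (A & B)
```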
{ "language": "en", "url": "https://math.stackexchange.com/questions/550244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that interval $(a, b)$ is not open in $\mathbb{R}^2$ I know that the interval $(a, b)$ is open in $\mathbb{R}$. To show that $(a,b)$ is open in $\mathbb{R}$, I argued as follows: Let $x\in (a,b)$. It is enough to find an open ball containing the point $x$ that is included in the interval $(a,b)$. It suffices to take $0<\epsilon\leq \min \{\vert b-x\vert, \vert a-x \vert\}$. Then $D(x,\epsilon)$ contains the point $x$ and $D(x,\epsilon)\subseteq (a,b)$. I am not sure this argument is sound, so please correct it if needed. What I do not know how to prove is the following fact: the interval $(a,b)$ is not open in $\mathbb{R^2}$. If anyone has the opportunity to help and check this example, thank you in advance.
As kahen pointed out, what you want to say is that $$(a,b)\times\{0\}$$ is not open in $\Bbb R^2$. Now, pick any point ${\bf x}=(x,0)$ with $a<x<b$. Then for every $\varepsilon>0$, the ball $B({\bf x},\varepsilon)$ contains elements of the form $(x_1,x_2)$ with $x_2\neq 0$ (prove this!), so $B({\bf x},\varepsilon)$ cannot be contained in $(a,b)\times \{0\}$. This means $(a,b)\times\{0\}$ is not open, since we've found a point (in fact, all of them) that is not an interior point. In fact, ${\rm int}_{\Bbb R^2}\big((a,b)\times\{0\}\big)=\varnothing$.
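The "prove this!" step has an explicit witness: the point $(x, \varepsilon/2)$ always lies in $B((x,0),\varepsilon)$ but has nonzero second coordinate. A small check with Python's `math.dist` (Euclidean distance, Python 3.8+); the particular values of $a$, $b$, $x$ are arbitrary:

```python
import math

def in_ball(p, center, eps):
    """True if point p lies in the open Euclidean ball B(center, eps)."""
    return math.dist(p, center) < eps

a, b = 0.0, 1.0
x = 0.4                       # any point with a < x < b
for eps in (1.0, 0.1, 1e-6):
    q = (x, eps / 2)          # witness: inside the ball ...
    assert in_ball(q, (x, 0.0), eps)
    assert q[1] != 0.0        # ... but off the segment (a, b) x {0}
```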
{ "language": "en", "url": "https://math.stackexchange.com/questions/550350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Intuitive reasoning why quintics are unsolvable I know that quintics are in general unsolvable, whereas lower-degree equations are solvable, but the formal explanation is very hard. I would like an intuitive reason why this is so, accessible to a bright high school student, or even why it should be so. I have also read somewhere that any degree-$n$ equation can be depressed to the form $ax^n + bx + c$. I would also like to know why or how this happens, at least for lower-degree equations. I know that this question might be too broad and difficult, but this is a thing that has troubled me a lot. To give some background, I recently figured out how to solve the cubic and started calculus, but quartics and above elude me. EDIT: It was mentioned in the comments that not every degree-$n$ equation can be depressed to the form $ax^n + bx + c$, although I recall having read something like this; anyway, I wanted to find out the same for quintics.
@Sawarnik, what are you referring to exactly when you write that "the formal explanation is very hard"? The basic idea is actually quite simple. I looked through the comments above and none seem to mention the smallest nonabelian simple group, which happens to be the group $A_5$ of order $60$. This group is not contained in any $S_n$ for $n\leq 4$, which implies that all those groups are solvable, or equivalently that every equation of degree $\leq 4$ is solvable. Solvability of the Galois group corresponds to the solvability in radicals of the polynomial. The lowest degree of a polynomial which makes it possible to have a nonsolvable Galois group is therefore $5$.
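The order bookkeeping behind this argument is easy to verify by brute force: $A_5$ is the set of even permutations of five symbols, so $|A_5| = 60$, while $|S_n| = n! \leq 24$ for $n \leq 4$, so by Lagrange's theorem no such $S_n$ can contain a copy of $A_5$:

```python
from itertools import permutations
from math import factorial

def is_even(perm):
    """A permutation is even iff it has an even number of inversions."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return inversions % 2 == 0

order_A5 = sum(1 for p in permutations(range(5)) if is_even(p))
assert order_A5 == 60

# |S_n| = n! is too small for n <= 4 to contain a subgroup of order 60.
assert all(factorial(n) < 60 for n in range(1, 5))
```

Of course this only checks the sizes; the simplicity of $A_5$ itself is the deeper fact behind unsolvability.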
{ "language": "en", "url": "https://math.stackexchange.com/questions/550401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 4, "answer_id": 3 }
Norm of random vector plus constant Suppose that $w$ is a multivariate standard normal vector and $c$ a real vector of the same size. I know that for positive $x$ $$P(||w+c||^2\geq x)\ \geq \ P(||w||^2\geq x)$$ but I cannot prove it. We use the Euclidean norm. In dimension $2$ or $3$, drawings show that the inequality above holds, but I need a proof.
Very rough sketch for a general case: Perhaps the equivalent $P(\|w+c\|^2\leq x)\ \leq \ P(\|w\|^2\leq x)$ is more intuitive to deal with. For both probabilities, one evaluates the probability that $w$ lies in a subset of $\mathbb{R}^d$ of $d$-dimensional Lebesgue measure (let us call this measure $\mu$) equal to $m$, where $m$ is the volume of a ball of radius $\sqrt{x}$, i.e. $m = \int_{\|w\|^2 \leq x} dw$. Consider the problem $$\sup_{\substack{\mathcal{A}\subset\mathbb{R}^d \\ \mu(\mathcal{A}) \leq m}}\int_{\mathcal{A}}f(w)dw,$$ where $f(\cdot)$ is the PDF of our Gaussian. The solution is obtained by starting with the points of $\mathbb{R}^d$ where $f(w)$ takes its largest values and adding points until $\mu(\mathcal{A}) = m$. But since $f(w) = \mathtt{const}\cdot\exp(-\mathtt{const}\cdot\|w\|^2)$, the solution is of the form $\mathcal{A} = \{w\in\mathbb{R}^d:\|w\|^2 \leq y\}$ for some $y$ that satisfies $\mu(\mathcal{A}) = m$. We immediately have $y = x$, and therefore $$P(w\in\mathcal{A}) \leq P(\|w\|^2 \leq x)$$ for any $\mathcal{A}$ with $\mu(\mathcal{A}) = \mu(\{w\in\mathbb{R}^d:\|w\|^2 \leq x\})$. In particular, taking $\mathcal{A} = \{w:\|w+c\|^2\leq x\}$, which is a ball of the same volume, gives $P(\|w+c\|^2\leq x)\ \leq \ P(\|w\|^2\leq x)$.
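A Monte Carlo spot check of the original inequality (the dimension, shift $c$, and threshold $x$ below are arbitrary illustrative choices, not from the question; standard library only):

```python
import random

random.seed(0)
d, c, x, trials = 3, (2.0, 0.0, 0.0), 4.0, 100_000

hits_shifted = hits_centered = 0
for _ in range(trials):
    w = [random.gauss(0.0, 1.0) for _ in range(d)]
    if sum((wi + ci) ** 2 for wi, ci in zip(w, c)) >= x:
        hits_shifted += 1
    if sum(wi * wi for wi in w) >= x:
        hits_centered += 1

# Empirically P(||w + c||^2 >= x) dominates P(||w||^2 >= x).
assert hits_shifted >= hits_centered
```

This matches the known fact that a noncentral chi-square stochastically dominates the central chi-square with the same degrees of freedom.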
{ "language": "en", "url": "https://math.stackexchange.com/questions/550477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }