discrete math Quantifiers? In the following exercises, V(x,y) means "x + y = 2xy", where x and y are integers. Determine the truth value of the statement: ∀x∃y¬V(x,y). What this says is: for every x, there exists a y such that the negation of V holds? And what is the negation of V? Also, for this question, U(x,y) means "2x + 3y = xy", where x and y are integers. Determine the truth value of the statement: ∃x∀yU(x,y). So there exists an x such that for every y, 2x+3y=xy? Does that mean there is one x such that for any value y, 2x+3y=xy? Or?
The negation of V is $x + y \neq 2xy$. $\forall x \exists y\,¬V(x,y)$ means that whatever number $x$ you are given, you can find another (not necessarily distinct) number $y$ for which $x + y \neq 2xy$. Do you think this is true or false? This is true; think of a simple way of proving it! $∃x∀yU(x,y)$ means that there is one $x$ (at least one, I should say) such that $U(x,y)$ is true for all integers $y$! So you are correct. This statement seems less reasonable. If, say, you think it is false, a good way of showing it is to prove that its negation $¬(∃x∀yU(x,y))=\forall x \exists y\, ¬U(x,y)$ is true.
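A small brute-force search agrees with both conclusions. This is only evidence, not a proof — it inspects a finite range of integers, and the helper names are mine:

```python
def not_V(x, y):
    # negation of V: x + y != 2xy
    return x + y != 2 * x * y

def U(x, y):
    # 2x + 3y = xy
    return 2 * x + 3 * y == x * y

# ∀x∃y ¬V(x,y): every x in the range has a witness y.
for x in range(-20, 21):
    assert any(not_V(x, y) for y in range(-20, 21))

# ∃x∀y U(x,y): in fact every x in the range has a counterexample y,
# which supports the negation ∀x∃y ¬U(x,y) being true.
for x in range(-20, 21):
    assert any(not U(x, y) for y in range(-20, 21))

print("finite checks passed")
```

For the first statement, note that $y = 0$ already works as a witness for every $x \neq 0$.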
{ "language": "en", "url": "https://math.stackexchange.com/questions/550582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A compact Hausdorff space is metrizable if it is locally metrizable A space $X$ is locally metrizable if each point $x$ of $X$ has a neighborhood that is metrizable in the subspace topology. Show that a compact Hausdorff space $X$ is metrizable if it is locally metrizable. Hint: Show that $X$ is a finite union of open subspaces, each of which has a countable basis. I tried to use compactness, but I do not know whether the open sets are compact subspaces.
For every $x\in X$, there exists a neighborhood $U_x$ which is metrizable. These neighborhoods cover $X$, i.e., $X=\bigcup_x U_x$. Now use the definition of compactness to reduce this to a finite union, $X=U_1\cup\ldots\cup U_n$. Each of these sets is metrizable, so pick metrics which are defined locally on each $U_i$. Lastly, use a partition of unity to patch together the local metrics into a global one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/550659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Proving that the union of the limit points of sets is equal to the limit points of the union of the sets My instructor gave us an exercise to do, to show that this equality holds: $$\bigcup_{k=1}^m A_k'=\left(\bigcup_{k=1}^m A_k\right)^{\!\prime}.$$ My thought on approaching this question is to argue by contradiction: suppose that there is an element that satisfies the left side of the equality and not the right side. Then there is a deleted ball with radius $r$ such that $B(x,r)$ intersected with the left side of the equality is non-empty, while there is a deleted ball with radius $r$ such that $B(x,r)$ intersected with the right side of the equality is empty. Is this the right approach in proving this? This is as far as I can go. Thanks for the help in advance.
It’s very easy to show that $A_\ell'\subseteq\left(\bigcup_{k=1}^mA_k\right)'$ for each $\ell\in\{1,\ldots,m\}$ and hence that $$\bigcup_{k=1}^mA_k'\subseteq\left(\bigcup_{k=1}^mA_k\right)'\;;$$ the harder part is showing that $$\left(\bigcup_{k=1}^mA_k\right)'\subseteq\bigcup_{k=1}^mA_k'\;.\tag{1}$$ Proof by contradiction is a reasonable guess, but it’s not needed. Suppose that $x\in\left(\bigcup_{k=1}^mA_k\right)'$; then for each $n\in\Bbb Z^+$ there is an $$x_n\in B\left(x,\frac1n\right)\cap\bigcup_{k=1}^mA_k\;,$$ and it’s clear that the sequence $\langle x_n:n\in\Bbb Z^+\rangle$ converges to $x$. For each $n\in\Bbb Z^+$ there is a $k_n\in\{1,\ldots,m\}$ such that $x_n\in A_{k_n}$.

* Show that there is an $\ell\in\{1,\ldots,m\}$ such that $\{n\in\Bbb Z^+:k_n=\ell\}$ is infinite. Let $L=\{n\in\Bbb Z^+:k_n=\ell\}$.
* Show that $\langle x_n:n\in L\rangle$ is a sequence in $A_\ell$ converging to $x$.
* Conclude that $(1)$ must hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/550920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
A nice number is an integer ending in 3 or 7 when written out in decimal. Prove that every nice number has a prime factor that is also a nice number. My teacher just asked me a question like this but I do not know how to start and work it out at all. Can someone help me out with that?
Hint: The number is odd, so all of its prime factors are odd. Could they all be nasty (end in $1$ or $5$ or $9$)? Examine products of nasty numbers. You will find they are all nasty. Remark: Note that a nice number can have some nasty prime factors. For example, $77$ has the nasty prime factor $11$, but it also has the nice prime factor $7$. Note also that the product of nice numbers can be nasty, for example $3\cdot 7=21$.
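The claim can be verified computationally on a finite range (an illustration of the hint, not a substitute for the proof; the helper names are mine):

```python
def prime_factors(n):
    # Trial-division factorization, fine for small n.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_nice(n):
    # "Nice" = decimal representation ends in 3 or 7.
    return n % 10 in (3, 7)

# Every nice number up to the bound has at least one nice prime factor.
for n in range(2, 5000):
    if is_nice(n):
        assert any(is_nice(p) for p in prime_factors(n)), n

print("all nice numbers up to 5000 have a nice prime factor")
```

The example from the remark: `prime_factors(77)` is `[7, 11]` — one nice factor, one nasty.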
{ "language": "en", "url": "https://math.stackexchange.com/questions/551073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How find this sum $\sum_{n=1}^{\infty}n\sum_{k=2^{n-1}}^{2^n-1}\frac{1}{k(2k+1)(2k+2)}$ Find the sum $$\sum_{n=1}^{\infty}n\sum_{k=2^{n - 1}}^{2^{n}\ -\ 1}\dfrac{1}{k(2k+1)(2k+2)}$$ My try: note $$\dfrac{1}{k(2k+1)(2k+2)}=\dfrac{2}{(2k)(2k+1)(2k+2)}=\left(\dfrac{1}{(2k)(2k+1)}-\dfrac{1}{(2k+1)(2k+2)}\right)$$ Then I get stuck.
Are you really sure about the outer $n$? If not, Wolfram Alpha gives a sum that telescopes nicely $$\sum_{k=2^{n-1}}^{2^n-1} \frac{1}{k (2 k+1) (2 k+2)}=-2^{-n-1}-\psi^{(0)}(2^{n-1})+\psi^{(0)}(2^n)-\psi^{(0)}\left(\frac{1}{2}+2^n\right) +\psi^{(0)}\left(\frac{1}{2}+2^{n-1}\right)$$ Update: It is not hard to prove user64494's result manually. Without the outer $n$, the blocks $2^{n-1}\le k\le 2^n-1$ tile all of $k\ge 1$, so $$S_2=\sum_{n=1}^{\infty}\sum_{k=2^{n-1}}^{2^n-1}\frac{1}{k(2k+1)(2k+2)}=\sum_{k=1}^{\infty}\frac{1}{k(2k+1)(2k+2)}$$ $$=\sum_{k=1}^{\infty}\left[2\left(\frac{1}{2k}-\frac{1}{2k+1}\right)-\left( \frac{1}{2k}-\frac{1}{2k+2}\right)\right]$$ $$=2\sum_{k=1}^{\infty}\left(\frac{1}{2k}-\frac{1}{2k+1}\right)-\frac12\sum_{k=1}^{\infty}\left(\frac1k-\frac{1}{k+1}\right).$$ Since $\ln{2}=1-\frac12+\frac13-\cdots$, we have $$1-\ln{2}=\sum_{k=1}^{\infty}\left(\frac{1}{2k}-\frac{1}{2k+1}\right).$$ And the second sum is a nicely telescoping one that equals $1$. Therefore $$S_2=2(1-\ln{2})-\frac12=\frac32-2\ln{2}.$$ But this is unfortunately not what you asked for. The real sum you are asking for doesn't get a closed form from either WolframAlpha or Maple, so I don't think there is an easy way to get it, at least for me.
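The value $S_2=\frac32-2\ln 2$ can be checked numerically (an illustration; the terms decay like $1/(4k^3)$, so a partial sum converges fast):

```python
import math

# Partial sum of 1/(k(2k+1)(2k+2)) over k >= 1 (the version without the outer n).
s = sum(1.0 / (k * (2 * k + 1) * (2 * k + 2)) for k in range(1, 200000))
closed_form = 1.5 - 2 * math.log(2)

# Tail beyond k = 200000 is about 1/(8 * 200000^2), far below the tolerance.
assert abs(s - closed_form) < 1e-9
print(s, closed_form)
```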
{ "language": "en", "url": "https://math.stackexchange.com/questions/551111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Why is $2x^3 + x$, where $x \in \mathbb{N}$, always divisible by 3? So, do you guys have any ideas? Sorry if this might seem like a dumb question, but I have asked everyone I know and we haven't got a clue.
$x \bmod 3\in\{0,1,2\}$ for $x\in \mathbb{N}$. In other words, if you divide any natural number by $3$, the remainder will be $0$ or $1$ or $2$.

* $x \bmod 3=0 \Rightarrow (2x^3 + x) \bmod 3 = 0$
* $x \bmod 3=1 \Rightarrow (2x^3 + x) \bmod 3 = 0$
* $x \bmod 3=2 \Rightarrow (2x^3 + x) \bmod 3 = 0$
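The three residue cases above can be checked mechanically, plus a spot-check over a range (an illustration of the case split, not a replacement for it):

```python
# (2r^3 + r) mod 3 for each residue class r = 0, 1, 2.
cases = [(2 * r**3 + r) % 3 for r in (0, 1, 2)]
assert cases == [0, 0, 0]

# Spot-check over the first thousand natural numbers.
assert all((2 * x**3 + x) % 3 == 0 for x in range(1000))

print(cases)  # [0, 0, 0]
```

Since $(x+3k)^3 \equiv x^3 \pmod 3$, checking the residues $0,1,2$ really does cover every natural number.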
{ "language": "en", "url": "https://math.stackexchange.com/questions/551148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 8, "answer_id": 4 }
Limit of $\frac{1}{a} + \frac{2}{a^2} + \cdots + \frac{n}{a^n}$ What is the limit of the sequence $\frac{1}{a} + \frac{2}{a^2} + \cdots + \frac{n}{a^n}$, where $a$ is a constant and $n \to \infty$? An answer with a proof would be best.
Let $f(x) = 1+x + x^2+x^3+ \cdots$. Then the radius of convergence of $f$ is $1$, and inside this disc we have $f(x) = \frac{1}{1-x}$, and $f'(x) = 1+2x+3x^2+\cdots = \frac{1}{(1-x)^2}$. Suppose $|x|<1$, then we have $xf'(x) = x+2x^2+3x^3+\cdots = \frac{x}{(1-x)^2}$. If we choose $|a| >1$, then letting $x = \frac{1}{a}$ we have $\frac{1}{a} + \frac{2}{a^2}+ \frac{3}{a^3}+ \cdots = \frac{\frac{1}{a}}{(1-\frac{1}{a})^2}= \frac{a}{(a-1)^2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/551234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
is this conjecture true or false? I want to know whether this conjecture is true or false: $$\Large e^{\frac{ \ln x}{x}} \notin \mathbb{Z} $$ for every $x \in \mathbb{R} \setminus \{1,-1,0\}$.
You should ask this only for $x>0$, as the expression is not well defined otherwise. You can rule out the case $x\in(0,1)$ easily, since it implies $\ln(x)/x<0$. Now find the maximum of $\ln(x)/x$ on $(1,\infty)$, and conclude.
{ "language": "en", "url": "https://math.stackexchange.com/questions/551337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
How to calculate area of this shape? I was trying to solve a complicated problem when I came across this one. I believe that there is enough information to calculate the area. Can you help me to find a general formula for the area of this shape, in terms of $x,\alpha,\beta$? I forgot to write on the figure: $|AB|$ is tilted $45^\circ$ w.r.t. the "ground", $\beta<\alpha$, and $|AB|$ is not parallel to $|DC|$. $|CB|=|DA|=1$ unit and $|AB|=x$.
Set up a coordinate system with its origin at $B$. Then, in this coordinate system, we have: $$ A = (x_0, y_0) = \frac{1}{\sqrt2}(x,x) $$ $$ B = (x_1, y_1) = (0,0) $$ $$ C = (x_2,y_2) = (-\cos\beta, \sin\beta) $$ $$ D = (x_3,y_3) = \frac{1}{\sqrt2}(x,x) + (-\cos\alpha, \sin\alpha) $$ Then apply the shoelace area formula (indices taken modulo $n$, with $n=4$ here; the absolute value handles either orientation of the vertices): $$ \text{Area} = \frac12\left|\sum_{i=0}^{n-1}(x_iy_{i+1} -x_{i+1}y_i)\right| $$
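The shoelace computation can be sketched as follows (the sample values of $x,\alpha,\beta$ are arbitrary illustrations, and the vertex order $A,B,C,D$ is assumed to trace the boundary):

```python
import math

def polygon_area(pts):
    # Shoelace formula; abs() makes the result orientation-independent.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

# Sanity check on a unit square.
assert polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]) == 1.0

# The quadrilateral from the answer, for illustrative parameter values.
x, alpha, beta = 2.0, math.radians(70), math.radians(50)
A = (x / math.sqrt(2), x / math.sqrt(2))
B = (0.0, 0.0)
C = (-math.cos(beta), math.sin(beta))
D = (A[0] - math.cos(alpha), A[1] + math.sin(alpha))
print(polygon_area([A, B, C, D]))
```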
{ "language": "en", "url": "https://math.stackexchange.com/questions/551522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Convert a WFF to Clausal Form I'm given the following question: Convert the following WFF into clausal form: \begin{equation*} \forall(X)(q(X)\to(\exists(Y)(\neg(p(X,Y)\vee r(X,Y))\to h(X,Y))\wedge f(X))) \end{equation*} This is what I've gotten so far, but I'm not confident that I'm in the proper form at the end. First, eliminate the implications: \begin{gather} \forall(X)(q(X)\to(\exists(Y)((p(X,Y)\vee r(X,Y))\vee h(X,Y))\wedge f(X)))\\ \forall(X)(\neg q(X)\vee(\exists(Y)((p(X,Y)\vee r(X,Y))\vee h(X,Y))\wedge f(X))) \end{gather} Move the quantifiers out front: \begin{gather} \forall(X)\exists(Y)(\neg q(X)\vee((p(X,Y)\vee r(X,Y)\vee h(X,Y))\wedge f(X))) \end{gather} Skolemize existential quantifiers with $g(X)/Y$: \begin{gather} \forall(X)(\neg q(X)\vee((p(X,g(X))\vee r(X,g(X))\vee h(X,g(X)))\wedge f(X))) \end{gather} Remove universal quantifiers: \begin{gather} \neg q(X)\vee((p(X,g(X))\vee r(X,g(X))\vee h(X,g(X)))\wedge f(X)) \end{gather}
To write $\neg q(X)\lor\Big(\big(p(X,g(X))\lor r(X,g(X))\lor h(X,g(X))\big)\land f(X)\Big)$ in clausal form, you first need to put it in conjunctive normal form. To do so, you can distribute the leading disjunction over the conjunction in parentheses to yield: $$\big(\neg q(X)\lor f(X)\big)\land\big(\neg q(X)\lor p(X,g(X))\lor r(X,g(X))\lor h(X,g(X))\big)$$ Each part of the conjunction corresponds to a clause, which is a finite disjunction of atomics. Thus, the clausal form of your original wff is: $$\bigg\{\Big\{\neg q(X),f(X)\Big\},\Big\{\neg q(X),p(X,g(X)),r(X,g(X)),h(X,g(X))\Big\}\bigg\}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/551594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Asymptotic relationship demonstration I have to demonstrate that if $$ \begin{split} f_1(n) &= \Theta(g_1(n)) \\ f_2(n) &= \Theta(g_2(n)) \\ \end{split} $$ then $$ f_1(n) + f_2(n) = \Theta(\max\{g_1(n),g_2(n)\}) $$ Actually I have already proved that $$f_1(n)+f_2(n) = O(\max\{g_1(n),g_2(n)\}).$$ My problem is $$f_1(n)+f_2(n) = \Omega(\max\{g_1(n),g_2(n)\}),$$ could you help me?
Note that if $f = \Theta(g)$ then $g = \Theta(f)$, and since you already proved the $O()$ bound, you can conclude that $f_1 + f_2 = O(g_1 + g_2)$ and, by a symmetric argument, that $g_1 + g_2 = O(f_1 + f_2)$, i.e. $f_1 + f_2 = \Omega(g_1 + g_2)$. Since $g_1 + g_2 \geq \max\{g_1(n), g_2(n)\}$ (for nonnegative functions), this gives the $\Omega()$ bound you seek.
{ "language": "en", "url": "https://math.stackexchange.com/questions/551714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Can somebody help me to understand this? (Baire Category Theorem) Theorem $\mathbf{6.11}$ (Baire Category Theorem) Every residual subset of $\Bbb R$ is dense in $\Bbb R$. $\mathbf{6.4.5}$ Suppose that $\bigcup_{n=1}^\infty A_n$ contains some interval $(c,d)$. Show that there is a set, say $A_{n_0}$, and a subinterval $(c\,',d\,')\subset(c,d)$ so that $A_{n_0}$ is dense in $(c\,',d\,')$. (Note: This follows, with correct interpretation, directly from the Baire category theorem.) I'm trying to understand why there exists such a subinterval $(c', d')$ using the Baire Category Theorem, but I don't know how to apply it because I don't know which is the residual set in this case, i.e., a set whose complement can be represented as a countable union of nowhere dense sets. $\textbf{EDIT:}$ I forgot to say that the sets $A_n$'s are closed.
To apply the Baire category theorem (BCT), I assume that it is intended that the sets $\{A_n\}_{n=1}^{\infty}$ are closed subsets of $\mathbb{R}$. Now, let $c<a<b<d$ and consider $[a,b]$, which is a closed subset of a complete metric space, and is therefore a complete metric space itself. So, by the BCT it cannot be a countable union of nowhere dense closed subsets. Since $\cup_{n=1}^{\infty} \big([a,b]\cap A_n\big) = [a,b]$ is such a countable union of closed subsets, at least one of the sets $[a,b] \cap A_{n_0}$ must be somewhere dense. This means that there is a subinterval $(c',d') \subset [a,b]$ such that $[a,b]\cap A_{n_0}$ is dense in $(c',d')$. Now, since $[a,b] \cap A_{n_0}$ is closed, this will imply that $(c',d') \subset [a,b] \cap A_{n_0}$ (why!?).
{ "language": "en", "url": "https://math.stackexchange.com/questions/551760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
deducibility from peano axioms I have to solve the following problem: Using $\exists$ Introduction prove that PA$\vdash x\leq y \wedge y\leq z \longrightarrow x\leq z$. I used that if $x\leq y$ then $\ \exists \ r\ x+r=y$, and in the same way $\exists t \ y +t=z$, but in logical terms I don't know how to use these equalities and the Peano axioms to get the result.
See Wikipedia Peano Axioms. The point of this answer is to show that you can define an ordering of the natural numbers before you define addition. Proposition 1: If $n$ is any number except $0$, there exists an $m$ such that $n = S(m)$. Moreover, if the successor of both $m$ and $m^{'}$ is equal to $n$, then $m = m^{'}$. Proof: Exercise. Definitions: The Decrement mapping $D$, $n \mapsto D(n)$, sends any nonzero number $n$ to the $m$ with $S(m) = n$. We also define $1$ to be $S(0)$. Note that $D(1) = 0$. Note that for any nonzero $n$ we have the 'keep decrementing unless you can't' number expressions: (1) $D(n)$, $D(D(n))$, $D(D(D(n)))$, ... It is understood that this list might terminate, since you can't apply $D$ to $0$. Proposition 2: If $n$ is any nonzero number, the decrementing terms (1) end in the number $0$. Proof: It is true for $n = 1$. Assume it is true for $n$, and consider $S(n)$. But $D(S(n)) = n$, so that for $S(n)$, compared to the terms for $n$, you simply get a new starting term. QED The logical equivalences $m \ne n \;\; \text{iff} \;\; D(m) \ne D(n)$ are true provided both $m$ and $n$ are nonzero. By repeatedly applying Proposition 2 you will find that either $m$ or $n$ gets decremented to $0$ first. Definition: If $m$ and $n$ are two distinct nonzero numbers, we say that $m$ is less than $n$, written $m \lt n$, if $m$ is decremented to zero before $n$. We define the relation $\le$ in the obvious fashion and agree that for all natural numbers $n$, $\;0 \le n$. Proposition 3: $x\leq y \wedge y\leq z \longrightarrow x\leq z$ Proof: Assume $x, y, z$ are three distinct nonzero numbers. $x\leq y \wedge y\leq z \longrightarrow x\leq z$ IF AND ONLY IF $D(x)\leq D(y) \wedge D(y)\leq D(z) \longrightarrow D(x)\leq D(z)$ As before, you see that $x$ is decremented to zero before both $y$ and $z$, so that $x \le z$. We leave it to the reader to finish the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/551873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof that a limit point compact metric space is compact. If $(X,d)$ is limit point compact, show it is compact. I have found multiple proofs of this that first show that limit point compactness implies sequential compactness, which in turn implies compactness. I was wondering if there is a direct proof showing that limit point compactness implies compactness in a metric space.
Pedantic comments are in parentheses. WLOG, take $C = \{1\} \cup \{1 + \frac{1}{n} : n \in \mathbb{N}\}$. Let $\scr{U}$ be an open cover of $C$. Then $\exists U_d \in \scr{U}$ such that $1 \in U_d$. Since $U_d$ is open, $1$ is an interior point of $U_d$. Therefore $\exists \epsilon > 0$ such that $N_{\epsilon}(1) \subseteq U_d$. Note that $N_{\epsilon}(1) = (1 - \epsilon, 1 + \epsilon)$. Now let $N \in \mathbb{N}$ with $N > \frac{1}{\epsilon}$, and let $n \geq N$. Then $n > \frac{1}{\epsilon}$, so $(1 + \frac{1}{n}) - 1 = \frac{1}{n} < \epsilon$, and hence $1 + \frac{1}{n} \in N_{\epsilon}(1)$ for every $n \geq N$. (Here the definition of a limit point was used: removing $\{1\}$ from $C$, the point $1$ still lies in the closure.) Finally, $$\forall\, 1 \leq i \leq N - 1\left[\exists U_i \in \scr{U} : 1 + \frac{1}{i} \in U_i\right]$$ since $\scr{U}$ covers $C$. (Here we are simply taking neighborhoods of the finitely many points outside $N_{\epsilon}(1)$.) Now we can take $\bigcup_{i = 1}^N U_i$, where $U_N = U_d$. This is the finite subcover we are looking for (and we shall term this property compactness!!) Generally, however, we can say: let $X$ be a metric space in which every infinite subset $A \subset X$ has a limit point. Let $\epsilon > 0$. Then if $\{x_i\}$ is a sequence in $X$, it converges to some point $x$ such that $d(x_n, x) < \epsilon$ for all large $n$. Consider $N_{\epsilon}(x)$. As shown at the top of this answer, $N_{\epsilon}(x)$ is expressible as a union of open balls with rational radius because it contains a countable dense subset. Thus $\bigcup_{i = 1}^n U_i = A$ where $U_n \supset N_{\epsilon}(x)$ and the $U_i$ have centers $x_i$. An alternate answer can be found here, #24: http://math.mit.edu/~rbm/18.100B/HW4-solved.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/551921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
prove a statement (complements of unions) I want to prove this statement: $$(A_1 \cup A_2)^c = {A_1}^c \cup {A_2}^c$$ where the $c$ means the complement. Any help would be greatly appreciated.
This is false in the general case. For example, if $A_1$ is the set of integers ($\{\ldots,-2,-1,0,1,2,\ldots\}$) and $A_2$ is the set of positive real numbers, and the universe is the set of all real numbers, then $(A_1 \cup A_2)^c$ doesn't contain $-1$ or $\frac12$, but $A_1^c \cup A_2^c$ contains them.
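A finite analogue of this counterexample can be run directly with Python sets (the particular sets here are mine; the correct identity, by De Morgan, uses intersection rather than union):

```python
U = set(range(10))
A1 = {n for n in U if n % 2 == 0}   # plays the role of "the integers"
A2 = {n for n in U if n > 4}        # plays the role of "the positive reals"

def comp(S):
    return U - S

lhs = comp(A1 | A2)
assert lhs != comp(A1) | comp(A2)   # the claimed identity fails
assert lhs == comp(A1) & comp(A2)   # De Morgan: complement of union = intersection of complements

print(lhs)  # {1, 3}
```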
{ "language": "en", "url": "https://math.stackexchange.com/questions/551994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Determine homogeneous transformation matrix for reflection about the line $y = mx + b$, or specifically $y = 2x - 6$ Determine the homogeneous transformation matrix for reflection about the line $y = mx + b$, or specifically $ y = 2x - 6$. I use $mx - y +b =0$: $\text{slope} = m$, $\tan(\theta)= m$ intersection with the axes: $x =0$ is $y = -b$ and $y =0$ is $x = \dfrac{b}{m}$ My question is, what can I do next?
Note that every linear map satisfies $\mathbf A\cdot \vec{0}=\vec{0}$, so no $2\times 2$ matrix can flip over $y=2x-6$: the reflection must map $\vec 0 \mapsto \left(\frac{24}{5},-\frac{12}{5}\right)$. You might want to do something about that — this is exactly why the problem asks for a homogeneous transformation matrix, which can encode the translation as well.
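One standard construction (a sketch, not necessarily the route your course expects): translate the line to pass through the origin, reflect across $y=mx$ using the linear reflection $\frac{1}{1+m^2}\begin{pmatrix}1-m^2 & 2m\\ 2m & m^2-1\end{pmatrix}$, then translate back, all as $3\times3$ homogeneous matrices:

```python
def matmul(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(M, p):
    # Apply a 3x3 homogeneous matrix to a homogeneous point [x, y, 1].
    return [sum(M[i][k] * p[k] for k in range(3)) for i in range(3)]

def reflection_about_line(m, b):
    # Shift y = m*x + b onto y = m*x, reflect, shift back.
    d = 1 + m * m
    R = [[(1 - m * m) / d, 2 * m / d, 0],
         [2 * m / d, (m * m - 1) / d, 0],
         [0, 0, 1]]
    T = [[1, 0, 0], [0, 1, -b], [0, 0, 1]]     # translate line to origin
    Tinv = [[1, 0, 0], [0, 1, b], [0, 0, 1]]   # translate back
    return matmul(Tinv, matmul(R, T))

M = reflection_about_line(2, -6)

# Points on the line y = 2x - 6 are fixed by the reflection.
for x in (0.0, 3.0, 5.0):
    p = [x, 2 * x - 6, 1.0]
    q = apply(M, p)
    assert all(abs(q[i] - p[i]) < 1e-12 for i in range(3))

print(M)
```

As a cross-check, `apply(M, [0, 0, 1])` sends the origin to $\left(\frac{24}{5},-\frac{12}{5}\right)$, the point named in the answer.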
{ "language": "en", "url": "https://math.stackexchange.com/questions/552129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$f=g$ almost everywhere $\Rightarrow |f|=|g|$ almost everywhere? Suppose $(X, \mathcal{M}, \mu)$ is a measure space. Assume $f: X\to\overline{\mathbb{R}}$ and $g: X\to\overline{\mathbb{R}}$ are measurable maps. Here $\overline{\mathbb{R}}$ denotes the set of extended real numbers. My question is: If $f=g$ almost everywhere, does it follow that $|f|=|g|$ almost everywhere? I know the answer is "Yes" if $X$ is a complete measure space: If $f=g$ a.e. then $E=\{x\in X: f(x)\neq g(x)\}$ is a null set, i.e. $\mu(E)=0$. It is clear that $$ F=\{x\in X : |f(x)|\neq |g(x)|\}\subseteq E $$ Since $X$ is complete, all subsets of null sets are in $\mathcal{M}$, and so $\mu(F)=0$, and $|f|=|g|$ a.e. What happens when $X$ is not complete? Thanks for your time :)
Note that since $f$ and $g$ are measurable, then so are $|f|$ and $|g|$ by continuity of $|\cdot|$, and hence, $h=|f|-|g|$ is measurable. Noting that $$F=\left(h^{-1}\bigl(\{0\}\bigr)\right)^c,$$ we readily have $F\in\mathcal M$. By nonnegativity and monotonicity of measure, since $F\subseteq E$ and $\mu(E)=0,$ then $\mu(F)=0,$ and so $|f|=|g|$ a.e. There is no need for $(X,\mathcal M,\mu)$ to be a complete measure space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/552201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
90 people with ten friends in the group. Prove it's possible to have each person invite 3 people such that each knows at least two others A high school has 90 alumni, each of whom has ten friends among the other alumni. Prove that each alumnus can invite three people to lunch so that each of the four people at the lunch table will know at least two of the others. I know each vertex has degree 10 and it's a simple graph, with the pigeonhole principle working its way in there somehow. Any help would be appreciated.
Pick any vertex $v$. It is connected to 10 vertices. Keep these 11 vertices aside; we are left with 79 vertices on the other side. Realize that the central case is when we have five disjoint pairs (that is, each pair consists of distinct vertices) of edge-connected vertices among the 10 vertices $v$ is connected to. Thus we have 5 triangles with $v$ being the only common vertex in each of them. Case 1: if we have an extra edge between any two of the ten vertices, we can obtain a square or a figure similar to two triangles with a common base, either of which gives us the set of four required vertices. Case 2: if not, we have at least 80 edges (this is the reason I called it the central configuration: at present there are 8 edges for each of the ten vertices $v$ is connected to; if I subtract an edge from here, the number 80 will only increase, and if I move away from the idea of 5 disjoint pairs by using 5 or more edges within the 10 vertices $v$ is connected to, I will land in Case 1) emanating out of our set of 11 vertices towards the set of 79 vertices. By the pigeonhole principle, at least 1 of the 79 vertices is connected to two of the ten vertices, leading to a square.
{ "language": "en", "url": "https://math.stackexchange.com/questions/552375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why are these equations equal? I have racked my brain to death trying to understand how these two expressions can be equal: $$\frac{1}{1-q} = 1 + q + q^2 + q^3 + \cdots$$ as found in http://www.math.dartmouth.edu/archive/m68f03/public_html/partitions.pdf From what I understand, if I substitute $5$ for $q$ the answer: $$\frac{1}{1-5} = -\frac{1}{4}$$ is much different than: $$1+5+25+125+\cdots$$ Any help understanding what is going on would be GREATLY appreciated. Thank you all for your time
$S=1+q+q^2+\dots$ For any $k\in \mathbf{N}$, let $S_k=1+q+q^2+\dots+q^k$. Then $qS_k=q+q^2+q^3+\dots+q^{k+1}$. So, $(1-q)S_k=S_k-qS_k=1+q+q^2+\dots+q^k-(q+q^2+q^3+\dots+q^{k+1})=1-q^{k+1}$ If $|q|<1$, then $q^{k+1} \longrightarrow 0$ as $k \longrightarrow \infty$, so $$S=\lim_{k \to \infty}S_k=\lim_{k \to \infty}\frac{1-q^{k+1}}{1-q} =\frac{1}{1-q}$$ Note, however, that the function $$f(x)=\frac{1}{1-x}$$ is defined for all $x \neq 1$, but $f(x) = 1+x+x^2+\dots$ only for $|x| <1$ — which is exactly why substituting $q=5$ breaks down.
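The contrast between $|q|<1$ and $|q|>1$ shows up immediately in the partial sums (a numeric illustration; the function name is mine):

```python
def geom_partial(q, N):
    # Partial sum 1 + q + q^2 + ... + q^N.
    return sum(q**n for n in range(N + 1))

# For |q| < 1 the partial sums approach 1/(1-q): here 1/(1-0.5) = 2.
q = 0.5
assert abs(geom_partial(q, 60) - 1 / (1 - q)) < 1e-12

# For q = 5 they just keep growing; 1/(1-5) = -1/4 is never approached.
print([geom_partial(5, N) for N in range(5)])  # [1, 6, 31, 156, 781]
```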
{ "language": "en", "url": "https://math.stackexchange.com/questions/552442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
$\left( \frac{1 \cdot 2}{p} \right) + \left( \frac{2 \cdot 3}{p} \right) + \cdots + \left( \frac{(p-2)(p-1)}{p} \right) = -1$ Let $p$ be an odd prime number. Prove that $$\left( \frac{1 \cdot 2}{p} \right) + \left( \frac{2 \cdot 3}{p} \right) + \left( \frac{3 \cdot 4}{p} \right) + \cdots + \left( \frac{(p-2)(p-1)}{p} \right) = -1$$ where $\left( \frac{a}{p}\right)$ is the Legendre symbol. This seems to be a tricky one! I've tried using the property $\left( \frac{ab}{p} \right)=\left( \frac{a}{p}\right) \left( \frac{b}{p} \right)$ and prime factoring all the non-primes but to no avail. I had a quick thought of induction haha, but that was silly. I tried factoring the common Legendre symbols like $\left( \frac{3}{p}\right) \left[ \left( \frac{2}{p} \right) + \left( \frac{4}{p} \right) \right]$ but that didn't bring anything either. And I've been looking for pairwise cancellation with $1$ term leftover, but that does not seem to work. Can you help?
Let $a^\ast$ be the inverse of $a$ modulo $p$. Then $$\left(\frac{a(a+1)}{p}\right)=\left(\frac{a(a+aa^\ast)}{p}\right)=\left(\frac{a^2(1+a^\ast)}{p}\right)=\left(\frac{1+a^\ast}{p}\right).$$ As $a$ ranges from $1$ to $p-2$, the number $1+a^\ast$ ranges, modulo $p$, through the integers from $2$ to $p-1$. But the sum from $1$ to $p-1$ of the Legendre symbols is $0$, so our sum is $-1$.
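The identity can be verified for small primes by computing Legendre symbols with Euler's criterion, $\left(\frac{a}{p}\right) \equiv a^{(p-1)/2} \pmod p$ (a finite check of the result proved above; function names are mine):

```python
def legendre(a, p):
    # Euler's criterion: result is 0, 1, or p-1 (i.e. -1 mod p).
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def S(p):
    # Sum of (a(a+1)/p) for a = 1, ..., p-2.
    return sum(legendre(a * (a + 1) % p, p) for a in range(1, p - 1))

for p in (3, 5, 7, 11, 13, 101):
    assert S(p) == -1, p

print("sum is -1 for each tested odd prime")
```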
{ "language": "en", "url": "https://math.stackexchange.com/questions/552521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
How does the mathematical form of nCr ensure that the result is a whole number The mathematical form for nCr is (n!)/(r!(n-r)!). How does this form ensure that nCr is indeed a whole number? Is there a mathematical proof?
The nice thing is that it is the combinatorial argument that $\frac{n!}{r!(n-r)!}$ counts something (i.e., the number of ways to choose $r$ items from $n$ items, unordered) that proves, in the most rigorous sense, that it is a natural number. What further proof is needed?
{ "language": "en", "url": "https://math.stackexchange.com/questions/552590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating integral with standard normal distribution. I have a problem solving this, because I think that to solve it I need to compute the cdf of the standard normal distribution, plug in the $Y$ value, and calculate. However, at the bottom I found that the integral from zero to infinity of 1 goes to infinity, and I cannot derive the answer. Can you tell me what the problem is and what I can do? I appreciate your help in advance :)
You have managed to state a closed form for $F_X(x)$. This is not in fact possible for a normal distribution, so you have an error in your integral about a third of the way down: the integral of $\displaystyle e^{-\frac12 t^2}$ is not $\displaystyle -\frac{1}{t} e^{-\frac12 t^2}$. There is an easier solution, but it uses a shortcut which your teacher might not accept here:

* For a non-negative random variable, you have $E[Y] = \displaystyle\int_{t=0}^\infty \Pr(Y \gt t)\, dt$.
* $X^2$ is indeed non-negative, so you want $E[X^2]$.
* For $X$ with a standard $N(0,1)$ Gaussian distribution, $E[X^2]=1$.
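The final fact, $E[X^2]=1$ for standard normal $X$, is easy to sanity-check by Monte Carlo (an illustration, not a derivation; sample size and tolerance are arbitrary choices of mine):

```python
import random

random.seed(0)  # fixed seed for reproducibility
N = 200_000

# Sample mean of X^2 for X ~ N(0, 1).
est = sum(random.gauss(0, 1)**2 for _ in range(N)) / N

# Standard error is sqrt(2/N) ≈ 0.003, so 0.05 is a very generous tolerance.
assert abs(est - 1.0) < 0.05
print(est)
```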
{ "language": "en", "url": "https://math.stackexchange.com/questions/552669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is $(tr(A))^n\geq n^n \det(A)$ for a symmetric positive definite matrix $A\in M_{n\times n} (\mathbb{R})$ If $A\in M_{n\times n} (\mathbb{R})$ is a positive definite symmetric matrix, the question is to check whether: $$(tr(A))^n\geq n^n \det(A)$$ What I have tried: as $A\in M_{n\times n} (\mathbb{R})$ is a positive definite symmetric matrix, all its eigenvalues are positive. Let $a_i>0$ be the eigenvalues of $A$; then I have: $tr(A)=a_1+a_2+\dots +a_n$ and $\det(A)=a_1a_2\dots a_n$. For the given inequality to be true, I should have $(tr(A))^n\geq n^n \det(A)$, i.e., $\big( \frac{tr(A)}{n}\big)^n \geq \det(A)$, i.e., $\big( \frac{a_1+a_2+\dots+a_n}{n}\big)^n \geq a_1a_2\dots a_n$. I guess this should be true as a more general form of the A.M.–G.M. inequality, whose two-variable case says $\left(\frac{a+b}{2}\right)^2\geq ab$ for $a,b >0$. So, I believe $(tr(A))^n\geq n^n \det(A)$ should be true. Please let me know if I am correct, or try providing some hints if I am wrong. EDIT: As everyone says that I am correct, I would now like to "prove" the result which I used, namely the generalization of the A.M.–G.M. inequality. I tried but could not work this result out in detail, so I would be thankful if someone could help me in this case.
This is really a Calculus problem! Indeed, let us look for the maximum of $h(x_1,\dots,x_n)=x_1^2\cdots x_n^2$ on the sphere $x_1^2+\cdots+x_n^2=1$ (a compact set, hence the maximum exists). First note that if some $x_i=0$, then $h(x)=0$ which is obviously the minimum. Hence we look for a conditioned critical point with no $x_i=0$. For this we compute the gradient of $h$, namely $$ \nabla h=(\dots,2x_iu_i,\dots),\quad u_i=\prod_{j\ne i}x_j^2, $$ and to be a conditioned critical point (Lagrange) it must be orthogonal to the sphere, that is, parallel to $x$. This implies $u_1=\cdots=u_n$, and since no $x_i=0$ we conclude $x_1=\pm x_i$ for all $i$. Since $x$ is in the sphere, $x_1^2+\cdots+x_1^2=1$ and $x_1^2=1/n$. At this point we get the maximum of $h$ on the sphere: $$ h(x)=x_1^{2n}=1/n^n. $$ And now we can deduce the bound. Let $a_1,\dots,a_n$ be positive real numbers and write $a_i=\alpha_i^2$. The point $z=(\alpha_1,\dots,\alpha_n)/\sqrt{\alpha_1^2+\cdots+\alpha_n^2}$ is in the sphere, hence $$ \frac{1}{n^n}\ge h(z)=\frac{\alpha_1^2\cdots\alpha_n^2}{(\alpha_1^2+\cdots+\alpha_n^2)^n}=\frac{a_1\cdots a_n}{(a_1+\cdots+a_n)^n}, $$ and we are done.
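The bound $\left(\frac{a_1+\cdots+a_n}{n}\right)^n \geq a_1\cdots a_n$ established above can be spot-checked on random positive tuples (finite random sampling, not a proof; the small multiplicative slack guards against floating-point rounding):

```python
import random

def amgm_holds(a):
    # ((a1+...+an)/n)^n >= a1*...*an, up to floating-point slack.
    n = len(a)
    prod = 1.0
    for ai in a:
        prod *= ai
    return (sum(a) / n)**n >= prod * (1 - 1e-12)

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 6)
    a = [random.uniform(0.01, 10.0) for _ in range(n)]
    assert amgm_holds(a)

print("AM-GM bound held on all samples")
```

Equality occurs exactly when all $a_i$ coincide, matching the critical point $x_1^2=\cdots=x_n^2=1/n$ found on the sphere.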
{ "language": "en", "url": "https://math.stackexchange.com/questions/552780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
$F:G\to B$ is an isomorphism between groups implies $F^{-1}$ is an isomorphism $F$ is an isomorphism from group $G$ onto group $B$. Prove $F^{-1}$ is an isomorphism from $B$ onto $G$. I do not know which direction to go in.
Because $F$ is a bijection, $F^{-1}$ is a bijection as well. Now take $a,b\in B$ and write $a=F(x)$, $b=F(y)$ with $x,y\in G$ (possible since $F$ is onto). Since $$F(x\cdot y)=F(x)\cdot F(y)=a\cdot b,$$ applying $F^{-1}$ to both sides gives $$F^{-1}(a\cdot b)=x\cdot y=F^{-1}(a)\cdot F^{-1}(b)$$ Thus $F^{-1}$ is an isomorphism from $B$ to $G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/552867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
$\frac{\prod_{i=1}^n (1+x_i)-1}{\prod_{i=1}^n (1+x_i/\delta)-1} \stackrel{\text{?}}{\le} \frac{(1+x_n)^n-1}{(1+x_n/\delta)^n-1} $ . Let $x_1 \le x_2 \le \cdots \le x_n$. Let $\delta>1$ be some positive real numbers. I assume that $0\le x_i <1$, for $i=1,\ldots,n$ and $x_n >0$. Does the following expression hold? $$ \frac{\prod_{i=1}^n (1+x_i)-1}{\prod_{i=1}^n (1+x_i/\delta)-1} \stackrel{\text{?}}{\le} \frac{(1+x_n)^n-1}{(1+x_n/\delta)^n-1} $$
By putting: $$A=\sum_{i=1}^n\log(1+x_i),\quad A_\delta=\sum_{i=1}^n\log(1+x_i/\delta),$$ $$B=\sum_{i=1}^n\log(1+x_n),\quad B_\delta=\sum_{i=1}^n\log(1+x_n/\delta),$$ the inequality is equivalent to: $$e^{A+B_\delta}+e^{A_\delta}+e^{B}\leq e^{B+A_\delta}+e^{A}+e^{B_\delta}.$$ Provided that $A+B_\delta\leq B+A_\delta$, the latter follows from Karamata's inequality. So we have only to show that: $$ B-A \geq B_\delta-A_\delta, \tag{1}$$ i.e. $$ \sum_{i=1}^n\log\left(\frac{1+x_n}{1+x_i}\right)\geq \sum_{i=1}^n\log\left(\frac{\delta+x_n}{\delta+x_i}\right).\tag{2}$$ This is quite trivial since for any $1>C>D>0$ the function $f(x)=\log\left(\frac{x+C}{x+D}\right)$ is positive and decreasing on $\mathbb{R}^+$ since its derivative is $\frac{D-C}{(x+C)(x+D)}<0$, $f(0)>0$ and $\lim_{x\to+\infty}f(x)=0.$
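The inequality can also be spot-checked numerically; here is a quick sketch (the helper name and the sampling ranges are my own choices) that tries random sorted tuples and random $\delta>1$:

```python
import random

def lhs_rhs(xs, delta):
    """Both sides of the claimed inequality for sorted xs in [0,1)."""
    n = len(xs)
    p = 1.0
    q = 1.0
    for x in xs:
        p *= 1 + x
        q *= 1 + x / delta
    xn = xs[-1]
    lhs = (p - 1) / (q - 1)
    rhs = ((1 + xn) ** n - 1) / ((1 + xn / delta) ** n - 1)
    return lhs, rhs

random.seed(0)
for _ in range(2000):
    n = random.randint(2, 6)
    xs = sorted(random.uniform(0.01, 0.99) for _ in range(n))
    delta = random.uniform(1.01, 10.0)
    lhs, rhs = lhs_rhs(xs, delta)
    assert lhs <= rhs + 1e-12, (xs, delta, lhs, rhs)
print("inequality holds on all random trials")
```

Equality holds when all the $x_i$ coincide, as expected from the proof.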
{ "language": "en", "url": "https://math.stackexchange.com/questions/552955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Integral of a positive function is positive? Question: Let $f:[a.b]\to \Bbb R \in R[a,b]$ s.t. $f(x)>0 \ \forall x \in \Bbb R.$ Is $\int _a^b f(x)\,dx>0$ ? What We thought: We know how to prove it for weak inequality, for strong inequality - no clue :-)
If $f$ is continuous, we can argue like this: let $m$ be the infimum of the function on $[a,b]$. By the extreme value theorem the infimum is attained at some point of $[a,b]$, so $m \gt 0$, and therefore $\int _a^b f(x)\,dx \ge m(b-a) \gt 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/553031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is math built on assumptions? I just came across this statement when I was lecturing a student on math and strictly speaking I used: Assuming that the value of $x$ equals <something>, ... One of my students just rose and asked me: Why do we assume so much in math? Is math really built on assumptions? I couldn't answer him, for, as far as I know, a lot of things are just assumptions. For example: * *$1/\infty$ equals zero, *$\sqrt{-1}$ is $i$, etc. So would you guys mind telling whether math is built on assumptions?
Mathematical and ordinary language and reasoning are the same in this respect. If there is a general question of whether and how human thought relies on hypotheticals, that's an interesting subject to explore, but the only reason it appears to be happening more often in mathematics is that the rules of mathematical speech are more exacting and force the assumptions to be brought out explicitly. In everyday communication there is a larger collection of assumptions, most of which are not articulated, or are assumed (often wrongly) to be carried by context or shared background.
{ "language": "en", "url": "https://math.stackexchange.com/questions/553105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103", "answer_count": 23, "answer_id": 7 }
Show $a_{n+1}=\sqrt{2+a_n},a_1=\sqrt2$ is monotone increasing and bounded by $2$; what is the limit? Let the sequence $\{a_n\}$ be defined as $a_1 = \sqrt 2$ and $a_{n+1} = \sqrt {2+a_n}$. Show that $a_n \le$ 2 for all $n$, $a_n$ is monotone increasing, and find the limit of $a_n$. I've been working on this problem all night and every approach I've tried has ended in failure. I keep trying to start by saying that $a_n>2$ and showing a contradiction but have yet to conclude my proof.
Hints: 1) Given $a_n$ is monotone increasing and bounded above, what can we say about the convergence of the sequence? 2) Assume the limit is $L$, then it must follow that $L = \sqrt{2 + L}$ (why?). Now you can solve for a positive root.
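Following the hints, a few iterations make the boundedness, the monotonicity, and the limit visible — a small sketch (variable names are mine). The limit $L$ satisfies $L=\sqrt{2+L}$, i.e. $L^2-L-2=(L-2)(L+1)=0$, and the positive root is $L=2$:

```python
import math

# Iterate a_{n+1} = sqrt(2 + a_n) from a_1 = sqrt(2).
a = math.sqrt(2)
for n in range(60):
    assert a <= 2.0 + 1e-15            # bounded above by 2
    nxt = math.sqrt(2 + a)
    assert nxt >= a - 1e-15            # monotone increasing
    a = nxt

# The limit L solves L = sqrt(2 + L), i.e. L^2 - L - 2 = 0, so L = 2.
assert abs(a - 2.0) < 1e-12
print("a_n ->", a)
```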
{ "language": "en", "url": "https://math.stackexchange.com/questions/553160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Where am I going wrong on this probability question? A person has to travel from A to D changing buses at stops B and C en-route. The maximum waiting time at either stop can be 8 minutes each, but any time of waiting up to 8 minutes is equally likely at both places. He can afford up to 13 minutes of total waiting time, if he is to arrive at D on time. What is the probability that the person will arrive late at D? The options given are: a) 8/13 b) 13/64 c) 119/128 d) 9/128 My logic: Since the person can wait for max 8 minutes at each stop so he/she will be late if he/she has to wait for either 14, 15 or 16 minutes. The number of ways this can happen is: Person waits for 8 minutes at B: Probability for this to occur is 1/8 (since it says that any time of waiting upto 8 minutes is equally likely at both places). So in order to get late he/she will have to wait for either 6 or 7 or 8 minutes at C. The probability for which will be 3/8. Person waits for 7 minutes at B: Probability will be 1/8 So probability at C will be 2/8 (either waits for 7 or 8 min) Person waits for 6 minutes at B: Probability will be 1/8. Probability at C will be 1/8 (can wait for only 8 min in order to get late). So total probability is: (1/8)(3/8)+(1/8)(2/8)+(1/8)*(1/8) = 6/64. Now we can consider this same scenario when he/she waits for 8 min at C. So complete probability is 2*(6/64) = 12/64 = 3/16 And that is nowhere in the options! Please tell me where am I going wrong. NOTE: Here I have assumed that he he/she has to wait for atleast 1 min...if I dont assume that and also take into consideration that he/she might not have to wait at all then instead of 1/8 we will have 1/9. But that also is not giving me the correct solution.
You are assuming that the waiting time is a natural number. A more reasonable assumption is that the waiting time is uniformly distributed over the interval $[0,8]$ You can think of the two waiting times as choosing a point in the square $[0,8] \times [0,8]$ and the chance of exceeding a total of $13$ minutes is the part of the area above $x+y=13$. That will get you one of the choices.
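Carrying out this suggestion (my own computation, following the hint's geometric picture): the region above $x+y=13$ inside the square $[0,8]\times[0,8]$ is a right triangle with legs $8+8-13=3$, so the probability is $\frac{9/2}{64}=\frac{9}{128}$ — option (d). A quick script confirms it:

```python
import random
from fractions import Fraction

# Waiting times (x, y) are uniform on the square [0, 8] x [0, 8].
# "Late" means x + y > 13: the region above the line x + y = 13
# is a right triangle with legs 8 + 8 - 13 = 3.
late_area = Fraction(3 * 3, 2)          # triangle area 9/2
total_area = Fraction(8 * 8)            # square area 64
p_late = late_area / total_area
assert p_late == Fraction(9, 128)       # option (d)

# Monte Carlo cross-check.
random.seed(1)
trials = 200_000
late = sum(1 for _ in range(trials)
           if random.uniform(0, 8) + random.uniform(0, 8) > 13)
print("exact:", p_late, " simulated:", late / trials)
```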
{ "language": "en", "url": "https://math.stackexchange.com/questions/553236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
finite dimensional range implies compact operator Let $X,Y$ be normed spaces over $\mathbb C$. A linear map $T\colon X\to Y$ is compact if $T$ carries bounded sets into relatively compact sets (i.e sets with compact closure). Equivalently if $x_n\in X$ is a bounded sequence, there exist a subsequence $x_{n_k}$ such that $Tx_{n_k}$ is convergent. I want to prove that if $T\colon X\to Y$ has finite dimensional range, then it's compact.
You also require the boundedness of $T$: the fact is that boundedness together with finite rank of $T$ gives compactness of $T$. This you can find in any good book on functional analysis; I would recommend J. B. Conway.
{ "language": "en", "url": "https://math.stackexchange.com/questions/553313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 3 }
Homework - set theory infinite union A question from my homework I'm having trouble understanding. We are given: $A(1) = \{\varnothing\}$, $A(n+1) = A(n)\cup (A(n)\times A(n))$ $A=A(1)\cup A(2)\cup A(3)\cup \cdots \cup A(n)\cup A(n+1) \cup \cdots$ to infinity The questions are: 1) show that $A\times A \subseteq A$ 2) Is $A \times A = A$? Thank you for your help. I've tried writing $A(2)$ but it gets really complicated and I'm having trouble understanding what the sets are. Let alone solve the question.
HINT: For (2), note that not all the elements of $A$ are ordered pairs. Also, let's write $A(2)$, but to make it simpler let's call $A(1)=X$. Then $A(2)=X\cup(X\times X)=\{\varnothing\}\cup\{\langle\varnothing,\varnothing\rangle\}=\{\varnothing,\langle\varnothing,\varnothing\rangle\}$. Not very difficult; $A(3)$ will have five elements, since the pair $\langle\varnothing,\varnothing\rangle$ lies both in $A(2)$ and in $A(2)\times A(2)$, so it is counted only once in the union.
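One way to see the counting concretely is to model the construction in code (a sketch; encoding $\varnothing$ as `frozenset()` and ordered pairs as tuples is my own choice). Note that $\langle\varnothing,\varnothing\rangle$ lies in both $A(2)$ and $A(2)\times A(2)$, so it contributes only once to the union:

```python
# Model \varnothing as frozenset() and ordered pairs <x, y> as tuples.
def step(A):
    """A(n+1) = A(n) | (A(n) x A(n))."""
    return A | {(x, y) for x in A for y in A}

A = {frozenset()}            # A(1)
sizes = [len(A)]
for _ in range(3):
    A = step(A)
    sizes.append(len(A))

# <0,0> lies in both A(2) and A(2)xA(2), so |A(3)| = 2 + 4 - 1 = 5.
print(sizes)   # [1, 2, 5, 26]
```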
{ "language": "en", "url": "https://math.stackexchange.com/questions/553403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Question on Matrix Transform Operations Given $e = Y - XB$, where $ e = \begin{bmatrix} e_1 \\ \vdots \\ e_n \\ \end{bmatrix} $, $ Y= \begin{bmatrix} y_1 \\ \vdots \\ y_n \\ \end{bmatrix} $, $ X= \begin{bmatrix} 1 & x_1 & x^2_1 \\ \vdots & \vdots & \vdots \\ 1 & x_n & x^2_n \\ \end{bmatrix} $, and $ B = \begin{bmatrix} a \\ b \\ c \\ \end{bmatrix} $. Define the scalar sum of the squares as $$ \begin{equation} SSQ = e^T e \end{equation} $$ Substitute $Y - XB$ for $e$. $$ \begin{equation} SSQ = (Y-XB)^T (Y-XB) \tag{2} \end{equation} $$ I have seen the expansion of $(2)$ written as $$ \begin{equation} SSQ = Y^TY - 2B^TX^TY + X^TB^TXB \tag{3} \end{equation} $$ I don't completely understand the expansion. If the expansion follows the FOIL method, I understand $Y^TY$ and $X^TB^TXB$. I don't understand how the middle term is created. I would have expected $-X^TB^TY-Y^TXB$. I guess I don't understand why one can substitute $Y^T$ for $Y$, $X^T$ for $X$, and $B^T$ for $B$, which would give $(3)$. Thanks for helping this novice amateur mathematician.
The transpose is contravariant, meaning the rule $(AB)^T=B^TA^T$ is to be used. So, we have $$Y^TY - B^TX^TY - Y^TXB +B^TX^TXB\,.$$ Now, $Y^TXB$ is a $1\times 1$ matrix (it is the scalar product of the vectors $Y$ and $XB$), and a $1\times 1$ matrix is trivially symmetric, so we have $$Y^TXB=(Y^TXB)^T=B^TX^TY\,.$$ This is why the two middle terms combine into $-2B^TX^TY$.
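The whole identity, including the merged middle term, can be verified on random data — a small self-contained sketch (the helper functions `matT` and `matmul` are mine):

```python
import random

def matT(M):
    """Transpose of a matrix stored as a list of rows."""
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    """Naive matrix product of two lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

random.seed(2)
xs = [random.uniform(-2, 2) for _ in range(6)]
X = [[1.0, x, x * x] for x in xs]
Y = [[random.uniform(-5, 5)] for _ in xs]
B = [[0.3], [-1.2], [0.7]]

E = [[y[0] - xb[0]] for y, xb in zip(Y, matmul(X, B))]
ssq = matmul(matT(E), E)[0][0]

# Expansion using the transpose rule (XB)^T = B^T X^T:
YtY = matmul(matT(Y), Y)[0][0]
BtXtY = matmul(matT(B), matmul(matT(X), Y))[0][0]
BtXtXB = matmul(matT(B), matmul(matT(X), matmul(X, B)))[0][0]

assert abs(ssq - (YtY - 2 * BtXtY + BtXtXB)) < 1e-9
print("SSQ = Y'Y - 2 B'X'Y + B'X'XB verified numerically")
```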
{ "language": "en", "url": "https://math.stackexchange.com/questions/553482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Cutting up a circle to make a square We know that there is no paper-and-scissors solution to Tarski's circle-squaring problem (my six-year-old daughter told me this while eating lunch one day) but what are the closest approximations, if we don't allow overlapping? More precisely: For N pieces that together will fit inside a circle of unit area and a square of unit area without overlapping, what is the maximum area that can be covered? N=1 seems obvious: (90.9454%) A possible winner for N=3: (95%) It seems likely that with, say, N=10 we could get very close indeed but I've never seen any example, and I doubt that my N=3 example above is even the optimum. (Edit: It's not!) And I've no idea what the solution for N=2 would look like. This page discusses some curved shapes that can be cut up into squares. There's a nice simple proof here that there's no paper-and-scissors solution for the circle and the square.
Ok not my greatest piece of work ever, but here's a suggestion in 2 parts:
{ "language": "en", "url": "https://math.stackexchange.com/questions/553571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 4, "answer_id": 0 }
The irrationality of the square root of 2 Is there a proof to the irrationality of the square root of 2 besides using the argument that a rational number is expressed to be p/q?
This question has been asked before. Please search Math.SE for an answer to your question before asking. Regardless, yes, there are lots of ways to prove the irrationality of $\sqrt{2}$. Here are some pertinent resources: * *https://mathoverflow.net/questions/32011/direct-proof-of-irrationality/32017#32017 (That's a really neat proof of the type for which you are looking.) *http://en.wikipedia.org/wiki/Square_root_of_2#Proofs_of_irrationality *Direct proof for the irrationality of $\sqrt 2$. *Irrationality proofs not by contradiction
{ "language": "en", "url": "https://math.stackexchange.com/questions/553635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Computing Coefficients for Generalized Combinatorial Sets I'm new to Combinatorics and am wondering if there is a well-known generalization to the binomial coefficients in the following sense: The binomial coefficients, $n \choose d$, can be considered as the number of ways in which $d$ objects can be chosen from among $n$ such that order of selection doesn't matter and no object may be selected more than once, while the multi-set coefficients, ${\big(\!{n\choose d}\!\big)}={n+d-1\choose d}$, enumerate the ways in which $d$ objects can be chosen from among $n$ objects such that order doesn't matter but any object may be chosen any number of times. Is there a middle ground? Specifically, one in which $d$ objects are chosen from among $n$ such that order doesn't matter but the $i^{th}$ object may be selected no more than $k_i$ times? Heuristically, there are two fairly obvious approaches for calculating these numbers (aside from brute force). Let's call these numbers $C(n,d;K)$ for convenience, where $K\equiv \{k_1,k_2,...,k_n\}$. First, one could find the coefficients of polynomials of the form $$ {\large\prod_{i=0}^{n}{\huge(}\sum_{j=0}^{k_i}(x^j){\huge)}=\sum_{m=0}^{N}C(n,m;K)x^m} $$ where $k_0\equiv 0$ and $N=\sum_{i=0}^{n}k_i$. On the other hand, one could use an algorithm similar in logic to that of Pascal's Triangle wherein the $C(n,d;K)$ are calculated by summing $k_n$ terms with $n$-value $n'=n-1$ and with $K'=K-\{k_n\}$ and $N'=N-k_n$ (i.e. from the previous "line" in a generalized Pascal's Triangle). $$ \large{C(n,d;K)=\sum_{m=d-k_n}^{d}C(n',m;K')} $$ with $C(n',m;K')=0$ for all $m<0$ and all $m>N'$. 
To help clarify, here's an example: Suppose we have 4 object types with the following $k$-values $k_1=1,\ k_2=2,\ k_3=3,\ k_4=4.$ From this we can construct a generalized Pascal's Triangle: $\underline{n=0}:\ 1\\\underline{n=1}:\ 1\ \ 1\\\underline{n=2}:\ 1\ \ 2\ \ 2\ \ 1\\\underline{n=3}:\ 1\ \ 3\ \ 5\ \ 6\ \ 5\ \ 3\ \ 1\\\underline{n=4}:\ 1\ \ 4\ \ 9\ \ 15\ \ 20\ \ 22\ \ 20\ \ 15\ \ 9\ \ 4\ \ 1$ Each term in the $n$th row of the triangle is computed by sliding a window of size $k_n+1$ across the $(n-1)^{th}$ row of the triangle and summing the terms within that window (i.e. for the 3rd row [recall that $k_3=3$], $1=0+0+0+1$, $3=0+0+1+2$, $5=0+1+2+2$, and so on). Like Pascal's Triangle, these generalized triangles have many convenient properties. For instance, the $n^{th}$ row always has $N_n+1$ terms in it; the rows are always symmetrical; and, the final row in the triangle (the one which provides the coefficients we are looking for) doesn't depend on the order in which you compute the previous rows (i.e. the $n=4$ row would look the same in the above example even if the $k$-values were swapped around [e.g. $k_1=2,\ k_2=4$, etc.]). Perhaps the most interesting property is the following: $$ \large{\sum_{m=0}^{N}C(n,m;K)=\prod_{j=0}^{n}(k_j+1)} $$ So in the above example this implies that the sum of the $n^{th}$ row is $(n+1)!$, or in the limiting case of the binomial coefficients (in which all $k_i=1$), the sum of the $n^{th}$ row is $2^n$ - as expected. Clearly, there are methods for computing these general set coefficients, but I have been unable to find any literature on a concise formula for the $C(n,d;K)$ given all the parameters of the problem or even anything about efficient methods for computing one of these coefficients without the use of the polynomial generating function cited above. Does anyone know of any such methods or formulae? Any help would be greatly appreciated. Thanks!
Ok, so you want to buy $d$ items in a store with $n$ elements, but where item $i$ only has a stock of $s_i$. What you are looking for is the coefficient of $x^d$ in the product $(1+x^1+ x^2+ \dots+ x^{s_1})(1+x^1+ x^2+ \dots+ x^{s_2})\dots (1+x^1+ x^2+ \dots+ x^{s_n})=$ $\dfrac{(1-x^{s_1+1})(1-x^{s_2+1})\dots (1-x^{s_n+1})}{(1-x)^n}$ But, I don't think you can get something nicer than that. There is a good method for when all of the stocks are equal. I leave a link here.
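Extracting the coefficient of $x^d$ from this product amounts to repeatedly convolving with $1+x+\dots+x^{s_i}$, which is exactly the sliding-window triangle described in the question. A sketch (function name mine) that reproduces the $n=4$ row of the question's example for stocks $(1,2,3,4)$:

```python
def bounded_rows(ks):
    """Rows of the generalized Pascal triangle for stock limits ks."""
    row = [1]                      # n = 0
    rows = [row]
    for k in ks:
        n_new = len(row) + k       # new row gains k_i terms
        new = [sum(row[m] for m in range(d - k, d + 1)
                   if 0 <= m < len(row))
               for d in range(n_new)]
        rows.append(new)
        row = new
    return rows

rows = bounded_rows([1, 2, 3, 4])
print(rows[4])   # [1, 4, 9, 15, 20, 22, 20, 15, 9, 4, 1]
assert sum(rows[4]) == 2 * 3 * 4 * 5   # row sum = prod (k_i + 1)
```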
{ "language": "en", "url": "https://math.stackexchange.com/questions/553730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Showing Inequality using Gauss Function If $\alpha, \beta\in \Bbb{R}$ and $m, n\in \Bbb{N}$ show that the inequality $[(m+n)\alpha]+[(m+n)\beta] \ge [m\alpha]+[m\beta]+[n\alpha+n\beta]$ holds iff m=n I thought that we have to find a counter-example such that when $m\neq n$, there always exists certain (m, n) which makes the inequality doesn't hold. But it was too confusing to do so. I also tried basic inequalities with Gauss functions, but I was rather afraid I might not be able to find counter-examples in that case. Could it be possible to break it through using rather simple methods rather than seperating each possible case and find a counter-example in each case? Thanks!
You can proceed as follows. Lemma 1. The inequality is true if $m=n$. Proof of lemma 1. Replacing $m$ with $n$, the inequality becomes $$ [2n\alpha]+[2n\beta] \geq [n\alpha]+[n\beta]+[n(\alpha+\beta)] \tag{1} $$ Let us put $x=n\alpha-[n\alpha],y=n\beta-[n\beta]$. Then $x,y$ are in $[0,1)$, $[2n\alpha]=2[n\alpha]+[2x], [2n\beta]=2[n\beta]+[2y]$ and $[n(\alpha+\beta)]=[n\alpha]+[n\beta]+[x+y]$, so that (1) reduces to $[2x]+[2y] \geq [x+y]$. Now $[x+y]$ can only be $0$ or $1$ ; if it is $0$ the inequality is clear, and if it is $1$ then at least one of $x,y$ is larger than $\frac{1}{2}$, which concludes the proof of Lemma 1. Lemma 2. If $m < n$, the inequality becomes false for $\alpha=\frac{2m^2+1}{2mn(m+n)}, \beta=\frac{2mn-1}{2mn(m+n)}$. Proof of lemma 2. Note that $\alpha+\beta=\frac{1}{n}$, $(m+n)\alpha=1-\frac{2m(n-m)-1}{2mn}$ and $(m+n)\beta=1-\frac{1}{2mn}$. We deduce that $$ [(m+n)\alpha]=[(m+n)\beta]=[n\alpha]=[n\beta]=0, [n(\alpha+\beta)]=1. $$ Lemma 3. If $m > n$ and $m$ is not a multiple of $n$, the inequality becomes false for $\alpha=\frac{1}{m}, \beta=\frac{q}{m}$, where $q=[\frac{m}{n}]$. Proof of lemma 3. Indeed, we have $(m+n)\alpha=1+\frac{n}{m}$, so $[(m+n)\alpha]=1$. Since $m$ is not a multiple of $n$, we have $q <\frac{m}{n}$, so $(m+n)\beta=q+q\frac{n}{m}$ yields $[(m+n)\beta]=q$. Trivially $[m\alpha]=1$ and $[m\beta]=q$ ; also $n(\alpha+\beta)=\frac{n}{m}(q+1)$ and $\frac{m}{n} \leq q+1 \leq \frac{m}{n}+1 < \frac{2m}{n}$, so $[n(\alpha+\beta)]=1$. Finally $$ [(m+n)\alpha]+[(m+n)\beta]=1+q < 1+q+1= [m\alpha]+[m\beta]+[n\alpha+n\beta] $$ Lemma 4. If $m > n$ and $m$ is a multiple of $n$, say $m=tn$ for some integer $t\geq 2$, the inequality becomes false for $\alpha=\frac{3t+1}{2nt(t+1)}$, $\beta=\frac{2t^2-1}{2nt(t+1)}$. Proof of lemma 4. 
The following equalities hold : $$ \begin{array}{rclcl} (m+n)\alpha &=& 1+\frac{t+1}{2t} &=& 2-\frac{t-1}{2t} \\ (m+n)\beta &=& t-1+\frac{2t-1}{2t} &=& t-\frac{1}{2t} \\ m\alpha &=& 1+\frac{t-1}{2(t+1)} &=& 2-\frac{t+3}{2t+2} \\ m\beta &=& t-1+\frac{1}{2(t+1)} &=& t-\frac{2t+1}{2t+2} \\ n(\alpha+\beta) &=& 1+\frac{1}{2(t+1) } & & \\ \end{array} $$ so $$ [(m+n)\alpha]+[(m+n)\beta]=1+(t-1) < 1+(t-1)+1= [m\alpha]+[m\beta]+[n\alpha+n\beta] $$
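All four lemmas are easy to spot-check with exact rational arithmetic — a sketch (the helper name `holds` is mine) verifying one counterexample from each of the three families, plus one $m=n$ case where the inequality does hold:

```python
from fractions import Fraction
from math import floor

def holds(m, n, alpha, beta):
    """The inequality [(m+n)a]+[(m+n)b] >= [ma]+[mb]+[n(a+b)]."""
    lhs = floor((m + n) * alpha) + floor((m + n) * beta)
    rhs = floor(m * alpha) + floor(m * beta) + floor(n * (alpha + beta))
    return lhs >= rhs

# Lemma 1: the inequality holds when m = n.
assert holds(3, 3, Fraction(1, 2), Fraction(1, 3))

# Lemma 2 counterexample for m < n:
m, n = 2, 5
alpha = Fraction(2 * m * m + 1, 2 * m * n * (m + n))
beta = Fraction(2 * m * n - 1, 2 * m * n * (m + n))
assert not holds(m, n, alpha, beta)

# Lemma 3 counterexample for m > n with n not dividing m:
m, n = 7, 3
q = m // n
assert not holds(m, n, Fraction(1, m), Fraction(q, m))

# Lemma 4 counterexample for m = t*n:
t, n = 3, 2
m = t * n
alpha = Fraction(3 * t + 1, 2 * n * t * (t + 1))
beta = Fraction(2 * t * t - 1, 2 * n * t * (t + 1))
assert not holds(m, n, alpha, beta)
print("all three counterexample families verified")
```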
{ "language": "en", "url": "https://math.stackexchange.com/questions/553823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How find this $\lim_{n\to\infty}\sum_{i=1}^{n}\left(\frac{i}{n}\right)^n$ How find this $$\lim_{n\to\infty}\sum_{i=1}^{n}\left(\dfrac{i}{n}\right)^n$$ I think this answer is $\dfrac{e}{e-1}$ and I think this problem have more nice methods,Thank you
For each fixed $x$, Bernoulli's inequality $$ (1 + h)^{\alpha} \geq 1 + \alpha h,$$ which holds for $h \geq -1$ and $\alpha \geq 1$, shows that whenever $x \leq n+1$ we have $$ \left( 1 - \frac{x}{n+1} \right)^{n+1} = \left\{ \left( 1 - \frac{x}{n+1} \right)^{(n+1)/n} \right\}^{n} \geq \left( 1 - \frac{x}{n} \right)^{n}. $$ In particular, it follows that $$ \left( 1 - \tfrac{k}{n} \right)_{+}^{n} \nearrow e^{-k} \quad \text{as} \quad n\to\infty. $$ Here, the symbol $(x)_{+} := \max\{0, x\}$ denotes the positive part function. Thus whenever $m < n$, the following inequality holds $$ \sum_{k=0}^{m} \left( 1 - \tfrac{k}{n} \right)^{n} \leq \sum_{k=0}^{n-1} \left( 1 - \tfrac{k}{n} \right)^{n} \leq \sum_{k=0}^{\infty} e^{-k}. $$ We fix $m$ and take limits to obtain $$ \liminf_{n\to\infty} \sum_{k=0}^{m} \left( 1 - \tfrac{k}{n} \right)^{n} \leq \liminf_{n\to\infty} \sum_{k=0}^{n-1} \left( 1 - \tfrac{k}{n} \right)^{n} \leq \limsup_{n\to\infty} \sum_{k=0}^{n-1} \left( 1 - \tfrac{k}{n} \right)^{n} \leq \sum_{k=0}^{\infty} e^{-k}. \tag{1} $$ But since $$ \liminf_{n\to\infty} \sum_{k=0}^{m} \left( 1 - \tfrac{k}{n} \right)^{n} = \sum_{k=0}^{m} e^{-k}. $$ taking $m \to \infty$ to the inequality $\text{(1)}$, we have $$ \sum_{k=0}^{\infty} e^{-k} \leq \liminf_{n\to\infty} \sum_{k=0}^{n-1} \left( 1 - \tfrac{k}{n} \right)^{n} \leq \limsup_{n\to\infty} \sum_{k=0}^{n-1} \left( 1 - \tfrac{k}{n} \right)^{n} \leq \sum_{k=0}^{\infty} e^{-k}, $$ from which it follows that $$ \lim_{n\to\infty} \sum_{i=1}^{n} \left( \frac{i}{n} \right)^{n} = \lim_{n\to\infty} \sum_{k=0}^{n-1} \left( 1 - \tfrac{k}{n} \right)^{n} = \sum_{k=0}^{\infty} e^{-k} = \frac{e}{e-1}. $$
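A numeric check of the limit (my own script): the partial sums approach $e/(e-1)\approx 1.58198$, roughly at rate $c/n$:

```python
import math

def s(n):
    """The partial sum sum_{i=1}^{n} (i/n)^n."""
    return sum((i / n) ** n for i in range(1, n + 1))

limit = math.e / (math.e - 1)
for n in (10, 100, 1000):
    print(n, s(n))
assert abs(s(2000) - limit) < 1e-2
```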
{ "language": "en", "url": "https://math.stackexchange.com/questions/553895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
measurability of metric space valued functions Let's say that we have a measure space $(X, \Sigma)$ and a metric space $(Y, d)$ with its Borel sigma algebra. If $f_n: X\rightarrow Y$ is an arbitrary sequence of measurable functions, then I already know that if $f$ is a pointwise everywhere limit, then $f$ is measurable. But if I don't assume that it converges everywhere, and instead ask where does it converge, is the set of points at which the $f_n$ converge measurable?
Not in general. Consider the functions $f_n:X\to X\times [0,1]$ defined by $f_n(x)=(x,1/n)$. This sequence converges everywhere to $x\mapsto (x,0)$. Now I pull a trick: define $Y=(X\times [0,1])\setminus (E\times \{0\})$ where $E\subset X $ is a nonmeasurable set. The functions $f_n:X\to Y$, defined as above, converge only on the set $X\setminus E$, which is not measurable. But it is true (as Davide Giraudo said) that the set of all $x\in X$ such that $(f_n(x))$ is a Cauchy sequence is measurable. Specifically, it is $\bigcap_k \bigcup_N \bigcap_{m\ge N} \bigcap_{n\ge N} A(m,n,k)$ where $$A(m,n,k)=\{x\in X : d(f_m(x),f_n(x))<1/k\}$$ Consequently, the set of convergence is measurable when $Y$ is complete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/554000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Judicious guess for the solution of differential equation $y''-2y'+5y=2(\cos t)^2 e^t$ I want to find the solutions of the differential equation: $y''-2y'+5y=2(\cos t)^2 e^t$. I want to do this with the judicious guessing method and therefore I want to write the right part of the differential equations as the imaginary part of a something. How can I do this?
Hint: $(\cos t)^2 = (1 + \cos(2 t))/2$.
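Working out the hint (my own derivation, not part of the original answer): the right-hand side is $(1+\cos 2t)e^t$, and the substitution $y=e^t u$ turns the equation into $u''+4u=1+\cos 2t$. The $\cos 2t$ forcing is resonant, so a particular solution is $u=\frac14+\frac{t\sin 2t}{4}$, i.e. $y_p=e^t\left(\frac14+\frac{t\sin 2t}{4}\right)$. A numeric residual check:

```python
import math

# With y = e^t * u the equation y'' - 2y' + 5y = (1 + cos 2t) e^t
# becomes u'' + 4u = 1 + cos 2t, whose particular solution is
# u = 1/4 + t sin(2t)/4.  Check the residual of
# y_p = e^t (1/4 + t sin(2t)/4) against 2 cos(t)^2 e^t numerically.
def u(t):   return 0.25 + t * math.sin(2 * t) / 4
def up(t):  return (math.sin(2 * t) + 2 * t * math.cos(2 * t)) / 4
def upp(t): return math.cos(2 * t) - t * math.sin(2 * t)

for t in (-2.0, -0.5, 0.0, 0.7, 1.3, 3.1):
    e = math.exp(t)
    y   = e * u(t)
    yp  = e * (u(t) + up(t))            # (e^t u)'  = e^t (u + u')
    ypp = e * (u(t) + 2 * up(t) + upp(t))
    residual = ypp - 2 * yp + 5 * y
    assert math.isclose(residual, 2 * math.cos(t) ** 2 * e, abs_tol=1e-9)
print("y_p = e^t (1/4 + t sin(2t)/4) solves the equation")
```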
{ "language": "en", "url": "https://math.stackexchange.com/questions/554091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
sufficient condition for a polynomial to have roots in $[0,1]$ Question is to check : which of the following is sufficient condition for a polynomial $f(x)=a_0 +a_1x+a_2x^2+\dots +a_nx^n\in \mathbb{R}[x] $ to have a root in $[0,1]$. * *$a_0 <0$ and $a_0+a_1+a_2+\dots +a_n >0$ *$a_0+\frac{a_1}{2}+\frac{a_2}{3}+\dots +\frac{a_n}{n+1}=0$ *$\frac{a_0}{1.2}+\frac{a_1}{2.3}+\dots+\frac{a_n}{(n+1).(n+2)} =0$ First of all i tried by considering degree $1$ polynomial and then degree $2$ polynomial and then degree $3$ polnomial hoping to see some patern but could not make it out. And then, I saw that $a_0= f(0)$ and $f(1)=a_0+a_1+a_2+\dots +a_n$. So, if $f(0)<0$ and $f(1)>0$ it would be sufficient for $f$ to have root in $[0,1]$ In first case we have $a_0 <0$ i.e., $f(0)<0$ and $f(1)>1>0$. So, first condition should be implying existence of a root in $[0,1]$ for second case, let $f(x)$ be a linear polynomial i.e., $f(x)=a_0+a_1x$ Now, $a_0+\frac{a_1}{2}=0$ implies $0\leq x=\frac{-a_0}{a_1}=\frac{1}{2}< 1$ So, this might be possibly give existence in case of linear polynomials. Now, $\frac{a_0}{1.2}+\frac{a_1}{2.3}=0$implies $0\leq x=\frac{-a_0}{a_1}=\frac{1}{3}< 1$ So, this might be possibly give existence in case of linear polynomials. So, for linear polynomials all the three conditions imply existence of a root in $[0,1]$. But, i guess this can not be generalized for higher degree polynomial. I think there should be some "neat idea" than checking for roots and all. I am sure about first case but I have no idea how to consider the other two cases. please provide some hints to proceed further.
For the third case we consider the polynomial $F(x)=\frac{a_0}{1\cdot 2}x^2+\frac{a_1}{2\cdot 3}x^3+\dots + \frac{a_n}{(n+1)(n+2)}x^{n+2}$. We now assume the third condition, i.e., $\frac{a_0}{1\cdot 2}+\frac{a_1}{2\cdot 3}+\dots+\frac{a_n}{(n+1)(n+2)} =0$. In that case, for the polynomial $F(x)$ we have $F(0)=0$ and $F(1)=0$ (by the given condition). So, by Rolle's theorem, $F'(x)=\frac{a_0}{1}x+\frac{a_1}{2}x^2+\dots+ \frac{a_n}{n+1}x^{n+1}$ has a root in $[0,1]$, say at $c\in [0,1]$. Now $F'(x)$ has two zeros, $F'(0)=0$ and $F'(c)=0$, so applying Rolle's theorem again gives a root of $F''$ in $[0,c]$, where $F''(x)=a_0+a_1x+a_2x^2+\dots+a_nx^n$. To conclude, I now set $f(x)=F''(x)$; with the given condition, $f$ has a root in $[0,c]$ for some $c\in [0,1]$, and in particular $f(x)=a_0+a_1x+a_2x^2+\dots+a_nx^n$ has a root in $[0,1]$. Combining this with my previous observation on the second case: $f(x)=a_0 +a_1x+a_2x^2+\dots +a_nx^n\in \mathbb{R}[x] $ has a root in $[0,1]$ in all three of the following cases: * *$a_0 <0$ and $a_0+a_1+a_2+\dots +a_n >0$ *$a_0+\frac{a_1}{2}+\frac{a_2}{3}+\dots +\frac{a_n}{n+1}=0$ *$\frac{a_0}{1\cdot 2}+\frac{a_1}{2\cdot 3}+\dots+\frac{a_n}{(n+1)(n+2)} =0$ P.S.: This is completely for the sake of my reference, and all the credit goes to the two users above who have helped me go through this idea.
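The third condition can also be watched in action on a concrete example (my own choice of coefficients): with $a_0=a_1=1$ the condition forces $a_2=-8$, since $\frac12+\frac16-\frac{8}{12}=0$, and $f(x)=1+x-8x^2$ indeed changes sign on $[0,1]$:

```python
from fractions import Fraction

# Pick coefficients satisfying condition (3):
# a0/(1*2) + a1/(2*3) + a2/(3*4) = 0.  With a0 = a1 = 1 this forces
# a2 = -8, since 1/2 + 1/6 - 8/12 = 0.
a = [1, 1, -8]
assert sum(Fraction(ai, (i + 1) * (i + 2)) for i, ai in enumerate(a)) == 0

def f(x):
    return sum(ai * x**i for i, ai in enumerate(a))

# f changes sign on [0, 1], so it has a root there; bisect for it.
lo, hi = 0.0, 1.0
assert f(lo) * f(hi) < 0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
print("root near", lo)   # (1 + sqrt(33))/16, approximately 0.4215
assert 0 <= lo <= 1 and abs(f(lo)) < 1e-9
```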
{ "language": "en", "url": "https://math.stackexchange.com/questions/554262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
How to find the distance between two planes? The following shows the whole question. Find the distance d between two planes \begin{eqnarray} \\C1:x+y+2z=4 \space \space~~~ \text{and}~~~ \space \space C2:3x+3y+6z=18.\\ \end{eqnarray} Find the other plane $C3\neq C1$ that has the distance d to the plane $C2$. According to the example my teacher gave me, the answer should be : Am I right? However, I do not know what a normal is, or why there are P(5) and Q($-\frac{1}{2}$). Thank you for your attention
The term "normal" means perpendicularity. A normal vector of a plane in three-dimensional space points in the direction perpendicular to that plane. (This vector is unique up to non-zero multiples.) For determining the distance between the planes $C_1$ and $C_2$ you have to understand what the projection of a vector onto the direction of a second one is, see https://en.wikipedia.org/wiki/Vector_projection. The projection of any vector $\vec{QP}$ pointing from one of the planes to the other onto the direction of the common normal vector $\vec{n}=(1,1,2)$ (or $\vec{n}=(3,3,6)$, because length does not matter but only direction does) is a vector perpendicular to both planes and pointing from one to the other. So its length has to be the distance. Finding $C_3$ is easy as well. $C_3$ is the image of $C_1$ under reflection about $C_2$. So $C_3$ is the set of points $(x,y,z)$ which can be written as $2(x'',y'',z'')-(x',y',z')$ with $(x'',y'',z'')$ in $C_2$ and $(x',y',z')$ in $C_1$. Checking what is $3$(first coordinate)$+3$(second coordinate)$+6$(third coordinate) for the points of $C_3$, we get $3(2x''-x')+3(2y''-y')+6(2z''-z')=2(3x''+3y''+6z'')-3(x'+y'+z')=2\cdot18-3\cdot 4=12$. Hence the equation for the plane $C_3$ is $3x+3y+6z=24$ or $x+y+2z=8$.
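The projection recipe gives $d=\lvert 6-4\rvert/\sqrt 6 = 2/\sqrt 6$ after rewriting $C_2$ as $x+y+2z=6$, and the reflected plane $x+y+2z=8$ sits at the same distance — a quick sketch (the helper name is mine):

```python
import math

def dist_to_plane(p, n, d):
    """Distance from point p to the plane n . x = d."""
    num = abs(sum(ni * pi for ni, pi in zip(n, p)) - d)
    return num / math.sqrt(sum(ni * ni for ni in n))

n = (1, 1, 2)                      # common normal of C1, C2, C3
P = (0, 0, 3)                      # a point of C2: x + y + 2z = 6
d_C1 = dist_to_plane(P, n, 4)      # C1: x + y + 2z = 4
d_C3 = dist_to_plane(P, n, 8)      # C3: x + y + 2z = 8
assert math.isclose(d_C1, 2 / math.sqrt(6))
assert math.isclose(d_C1, d_C3)    # C3 is the reflection of C1 in C2
print("d =", d_C1)
```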
{ "language": "en", "url": "https://math.stackexchange.com/questions/554380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 8, "answer_id": 3 }
Show that 2 surfaces are tangent in a given point Show that the surfaces $ \Large\frac{x^2}{a^2} + \Large\frac{y^2}{b^2} = \Large\frac{z^2}{c^2}$ and $ x^2 + y^2+ \left(z - \Large\frac{b^2 + c^2}{c} \right)^2 = \Large\frac{b^2}{c^2} \small(b^2 + c^2)$ are tangent at the point $(0, ±b,c)$ To show that 2 surfaces are tangent, is it necessary and suficient to show that both points are in both surfaces and the tangent plane at those points is the same? Because if we just show that both points are in both surfaces, the surfaces could just intercept each other. And if we just show the tangent plane is the same at 2 points of the surfaces, those points do not need to be the same. Thanks!
The respective gradients of the surfaces are locally perpendicular to them: $$ \nabla f_1 = 2\left(\frac{x}{a^2}, \frac{y}{b^2}, -\frac{z}{c^2} \right) \\ \nabla f_2 = 2\left(x, y, z - \frac{b^2+c^2}{c}\right) $$ At any point of common tangency, the gradients are proportional. Therefore $\nabla f_1 = \lambda \nabla f_2$. This is seen to be true if you substitute $\left(0, \pm b, c\right)$ into the expressions for the gradients, the proportionality constant being $b^2$. QED.
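The proportionality $\nabla f_2 = b^2\,\nabla f_1$ at $(0,\pm b,c)$ is easy to confirm numerically for sample values of $a,b,c$ (a sketch; the concrete values are my own):

```python
import math

def grad_f1(p, a, b, c):
    x, y, z = p
    return (2 * x / a**2, 2 * y / b**2, -2 * z / c**2)

def grad_f2(p, b, c):
    x, y, z = p
    return (2 * x, 2 * y, 2 * (z - (b**2 + c**2) / c))

a, b, c = 1.5, 2.0, 3.0
for s in (+1, -1):
    p = (0.0, s * b, c)
    g1 = grad_f1(p, a, b, c)
    g2 = grad_f2(p, b, c)
    # g2 = b^2 * g1 at the points of tangency
    for u, v in zip(g1, g2):
        assert math.isclose(v, b * b * u, abs_tol=1e-12)
print("gradients are parallel at (0, +-b, c) with factor b^2")
```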
{ "language": "en", "url": "https://math.stackexchange.com/questions/554443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What really is an indeterminate form? We can apply l’Hôpital’s Rule to the indeterminate quotients $ \dfrac{0}{0} $ and $ \dfrac{\infty}{\infty} $, but why can’t we directly apply it to the indeterminate difference $ \infty - \infty $ or to the indeterminate product $ 0 \cdot \infty $? Furthermore, why can’t we call $ \infty + \infty $ and $ \infty \cdot \infty $ indeterminate forms? I’m new to calculus, so please clear up my concepts if you can. Thanks!
The phrase “indeterminate form” is used to mean a function that we can't compute the limit of by simply applying some general theorem. One can easily show that, if $\lim_{x\to x_0}f(x)=a$ and $\lim_{x\to x_0}g(x)=b$, then $$ \lim_{x\to x_0}(f(x)+g(x))=a+b $$ when $a,b\in\mathbb{R}$. One can also extend this to the case when one or both $a$ and $b$ are infinite: * *If $a=\infty$ and $b\in\mathbb{R}$, then $\lim_{x\to x_0}(f(x)+g(x))=\infty$ *If $a=\infty$ and $b=\infty$, then $\lim_{x\to x_0}(f(x)+g(x))=\infty$ *If $a=-\infty$ and $b\in\mathbb{R}$, then $\lim_{x\to x_0}(f(x)+g(x))=-\infty$ *If $a=-\infty$ and $b=-\infty$, then $\lim_{x\to x_0}(f(x)+g(x))=-\infty$ (Note: $a$ and $b$ can be interchanged; $x_0$ can also be $\infty$ or $-\infty$.) However, it's not possible to extend this to the case where $a=\infty$ and $b=-\infty$ (or conversely). We summarize this statement by saying that $\infty-\infty$ is an indeterminate form. For instance, * *if $f(x)=x$ and $g(x)=1-x$, then clearly $\lim_{x\to\infty}(f(x)+g(x))=1$ *if $f(x)=x^2$ and $g(x)=x$, then $\lim_{x\to\infty}(f(x)+g(x))=\infty$ Other cases are possible. There are similar criteria for functions of the form $f(x)/g(x)$; if the limits of the two functions exist, then we can easily say something about the limit of the quotient, except in the cases when * *$\lim_{x\to x_0}f(x)=0$ and $\lim_{x\to x_0}g(x)=0$, or *$\lim_{x\to x_0}f(x)=\pm\infty$ and $\lim_{x\to x_0}g(x)=\pm\infty$ Therefore it's traditional to summarize this lack of general theorems in these cases by saying that $0/0$ and $\infty/\infty$ are indeterminate forms. There's nothing mysterious: we just know that, in order to compute (or show the existence of) a limit that appears to be in one of the indeterminate forms, we have to do more work than simply calculate a quotient. For instance, apply l'Hôpital's theorem, or cleverly rewrite the function, or use other methods such as Taylor expansion.
Also the case when $\lim_{x\to x_0}f(x)\ne0$ and $\lim_{x\to x_0}g(x)=0$ is somewhat delicate, but not really “indeterminate”: the limit of the quotient either doesn't exist or is infinite ($\infty$ or $-\infty$).
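The two $\infty-\infty$ examples above can be checked numerically (taking $g(x)=-x$ in the second example, so that it is genuinely of the form $\infty-\infty$); this is a rough numerical illustration, not a proof:

```python
# Two sums of the form infinity - infinity with different limits:
# f(x) = x,   g(x) = 1 - x  ->  f + g is identically 1
# f(x) = x^2, g(x) = -x     ->  f + g grows without bound
def sum1(x):
    return x + (1 - x)

def sum2(x):
    return x ** 2 + (-x)

for x in (10.0, 1e3, 1e6):
    print(x, sum1(x), sum2(x))
```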
{ "language": "en", "url": "https://math.stackexchange.com/questions/554521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 0 }
Unitary trivialization over Riemann surfaces with boundary I am puzzled by the proof of Proposition 2.66 in the book "Introduction to Symplectic Topology" by Salamon, McDuff. The Proposition states that every Hermitian vector bundle $E \rightarrow \Sigma$ over a compact smooth Riemann surface $\Sigma$ with $\partial \Sigma \neq \emptyset$ admits a unitary trivialization. The first part of the proof consists of a Lemma, which states that we can trivialize the bundle along curves. (This part is clear.) Then it is claimed that this construction carries over to produce a trivialization on the disc if one just uses trivializations along the rays starting at the origin. I don't understand why these trivializations should fit together smoothly. Near the origin this follows from the construction, but globally each ray is obtained by a different patching procedure.
Perhaps you may use the Levi-Civita connection to move the independent vectors at the origin to the whole disc along radial paths, which gives you a frame of independent vector fields, hence a trivialization?
{ "language": "en", "url": "https://math.stackexchange.com/questions/554573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integral $\int_0^\infty\frac{1}{x\,\sqrt{2}+\sqrt{2\,x^2+1}}\cdot\frac{\log x}{\sqrt{x^2+1}}\mathrm dx$ I need your assistance with evaluating the integral $$\int_0^\infty\frac{1}{x\,\sqrt{2}+\sqrt{2\,x^2+1}}\cdot\frac{\log x}{\sqrt{x^2+1}}dx$$ I tried manual integration by parts, but it seemed to only complicate the integrand more. I also tried to evaluate it with a CAS, but it was not able to handle it.
This is not a full answer, but here is how far I got; maybe someone can complete it. Since $x \ge 0$ on the domain of integration, $x\sqrt 2 = \sqrt{2x^2}$, so rationalizing the denominator gives $\frac{1}{x\,\sqrt{2}+\sqrt{2\,x^2+1}}=\frac{1}{\sqrt{2\,x^2}+\sqrt{2\,x^2+1}}=\frac{\sqrt{2\,x^2+1}-\sqrt{2\,x^2}}{\left(\sqrt{2\,x^2+1}\right)^2-\left(\sqrt{2\,x^2}\right)^2}=\sqrt{2\,x^2+1}-\sqrt{2\,x^2}.$ So you are looking at $$\int_0^\infty\left(\sqrt{\frac{2\,x^2+1}{x^2+1}} - \sqrt{\frac{2\,x^2}{x^2+1}}\right) \log x\,dx.$$ I was then looking at the substitution $t = \frac{x^2}{x^2+1}$, but then I don't know how to proceed.
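The rationalization step can be sanity-checked numerically at a few sample points (just a spot check of the algebra, not part of the derivation):

```python
import math

def original(x):
    return 1.0 / (x * math.sqrt(2) + math.sqrt(2 * x * x + 1))

def rationalized(x):
    # conjugate-rationalized form; valid for x >= 0, where x*sqrt(2) = sqrt(2 x^2)
    return math.sqrt(2 * x * x + 1) - math.sqrt(2 * x * x)

for x in (0.0, 0.5, 1.0, 3.0, 10.0):
    print(x, original(x), rationalized(x))
```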
{ "language": "en", "url": "https://math.stackexchange.com/questions/554624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 3, "answer_id": 2 }
Hyperplanes in finite and infinite dimension vector spaces. I know that hyperplanes of an $n$-dimensional vector space are subspaces of dimension $n-1$; this is in finite-dimensional spaces. But what about infinite-dimensional spaces: what are hyperplanes there? Are they the same?
I know this is an old question, but it seems to me that no one has answered it in a "correct" fashion yet, concerning "infinite" of course. The most general definition I've seen is the following: Let $H$ be a subspace of a vector space $X$. $H$ is called a hyperplane if $H \neq X$ and for every subspace $V$ such that $H \subseteq V$, exactly one of the following holds: $V = X$ or $V = H$. Lemma: A subspace $H$ of $X$ is a hyperplane iff there exists $e \in X \setminus H$ such that $$ \langle \{e\} \cup H \rangle = \{ \lambda e + h : \lambda \in \mathbb{R}, h \in H\} = X $$ Just for fun: Theorem: A subspace $H$ of a vector space $X$ is a hyperplane iff there is a non-zero linear function $$ l : X \to \mathbb{R}$$ such that $$ H = \{x \in X : l(x) = 0\} = \operatorname{Ker}(l) $$ Good luck proving this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/554736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Calculating $\int_{- \infty}^{\infty} \frac{\sin x \,dx}{x+i} $ I'm having trouble calculating the integral $$\int_{- \infty}^\infty \frac{\sin x}{x+i}\,dx $$ using residue calculus. I've previously encountered expressions of the form $$\int_{- \infty}^\infty f(x) \sin x \,dx $$ where you would consider $f(z)e^{iz}$ on an appropriate contour (half circle), do away with the part of the contour that wasn't on the real axis by letting the radius go to infinity, then recover the imaginary part of the answer to get back the sine. However here, I can't replace the sine with $e^{iz}$ in my complex function because $$\operatorname{Im} \frac{e^{ix}}{x+i} \neq \frac{\sin x}{x+i}\cdots$$ How to remedy this? I'm not sure if substituting $\sin x = \frac{1}{2i}(e^{ix}-e^{-ix})$ and solving two integrals is how this problem is meant to be solved, although I'm 99% sure it would work
You're on the right track. Rewrite $\sin{x}=(e^{i x}-e^{-i x})/(2 i)$, but evaluate separately. For $e^{i x}$, consider $$\frac{1}{2 i}\oint_{C_+} dz \frac{e^{i z}}{z+i}$$ where $C_{+}$ is the semicircle of radius $R$ in the upper half plane. The integral is then zero because the pole at $z=-i$ is outside the contour. Thus, by Jordan's lemma, we have $$\int_{-\infty}^{\infty} dx \frac{e^{i x}}{x+i}=0$$ For the other piece, consider $$\frac{1}{2 i}\oint_{C_-} dz \frac{e^{-i z}}{z+i}$$ where $C_{-}$ is the semicircle of radius $R$ in the lower half plane. By Jordan's lemma and the residue theorem, $$-\int_{-\infty}^{\infty} dx \frac{e^{-i x}}{x+i}= i 2 \pi e^{-i (-i)} = \frac{i 2 \pi}{e}$$ Note the minus sign before the integral because we require the contour to be traversed in a positive sense (counterclockwise). Therefore $$\int_{-\infty}^{\infty} dx \frac{\sin{x}}{x+i} = \frac{1}{2 i} \frac{i 2 \pi}{e} = \frac{\pi}{e}$$
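The value $\pi/e$ can be checked numerically; writing $\frac{\sin x}{x+i}=\frac{(x-i)\sin x}{x^2+1}$, the imaginary part is odd and integrates to $0$, so it suffices to integrate the real part (a crude stdlib-only check, accurate to a few thousandths because of the truncated tails):

```python
import math

def f(x):
    # real part of sin(x)/(x+i); the imaginary part, -sin(x)/(x^2+1), is odd
    return x * math.sin(x) / (x * x + 1)

# trapezoidal rule on [-L, L]; the truncated tails contribute O(1/L)
L, n = 200.0, 400_000
h = 2 * L / n
total = 0.5 * (f(-L) + f(L)) + sum(f(-L + i * h) for i in range(1, n))
approx = h * total
print(approx, math.pi / math.e)
```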
{ "language": "en", "url": "https://math.stackexchange.com/questions/554810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Integration: product containing square root. How can I integrate the following expression? I have tried using u-substitution, but I am having problems with integrating the entire expression. So far, I have the following: Any help on this is highly appreciated!
You have it. $u=1+e^{-kt}$ and $du=-ke^{-kt}dt$, so $dt = -\frac{du}{k}$ where $k$ is just a constant. Rewrite this and you have $\int \sqrt{u}\frac{du}{-k}$, which I bet you know how to integrate!
{ "language": "en", "url": "https://math.stackexchange.com/questions/554917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof: $2^{n-1}(a^n+b^n)>(a+b)^n$ If $n \in \mathbb{N}$ with $n \geq 2$ and $a,b \in \mathbb{R}$ with $a+b >0$ and $a \neq b$, then $$2^{n-1}(a^n+b^n)>(a+b)^n.$$ I tried to do it with induction. The induction basis was no problem, but I got stuck in the induction step $n \to n+1$: $2^n(a^{n+1}+b^{n+1})>(a+b)^{n+1} $ $ \Leftrightarrow 2^n(a\cdot a^n + b\cdot b^n)>(a+b)(a+b)^n$ $\Leftrightarrow a(2a)^n+ b(2b)^n>(a+b)(a+b)^n$ don't know what to do now :/
You can write that as $$\frac{{{a^n} + {b^n}}}{2} > {\left( {\frac{{a + b}}{2}} \right)^n}$$ Think convexity.
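In the spirit of the hint (Jensen's inequality for the map $t\mapsto t^n$), the claimed strict inequality can at least be spot-checked numerically; this is a sanity check, not the induction argument the exercise asks for:

```python
import random

def check(a, b, n):
    # the claimed strict inequality 2^(n-1)(a^n + b^n) > (a+b)^n
    return 2 ** (n - 1) * (a ** n + b ** n) > (a + b) ** n

random.seed(1)
trials = 0
while trials < 2_000:
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    if a + b > 0.01 and abs(a - b) > 0.01:  # hypotheses of the statement
        assert check(a, b, random.randint(2, 8))
        trials += 1
print("no counterexample in 2000 random trials")
```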
{ "language": "en", "url": "https://math.stackexchange.com/questions/555002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Determination of the prime ideals lying over $2$ in a quadratic order Let $K$ be a quadratic number field. Let $R$ be an order of $K$, $D$ its discriminant. By this question, $1, \omega = \frac{(D + \sqrt D)}{2}$ is a basis of $R$ as a $\mathbb{Z}$-module. Let $x_1,\cdots, x_n$ be a sequence of elements of $R$. We denote by $[x_1,\cdots,x_n]$ the $\mathbb{Z}$-submodule of $R$ generated by $x_1,\cdots, x_n$. My Question Is the following proposition correct? If yes, how do you prove it? Proposition Case 1 $D$ is even. $P = [2, \omega]$ is a prime ideal and $2R = P^2$. Case 2 $D \equiv 1$ (mod $8$). $P = [2, \omega]$ and $P' = [2, 1 + \omega]$ are distinct prime ideals and $2R = PP'$. Moreover $P' = \sigma(P)$, where $\sigma$ is the unique non-identity automorphism of $K/\mathbb{Q}$. Case 3 $D \equiv 5$ (mod $8$). $2R$ is a prime ideal.
I realized after I posted this question that the proposition is not correct. I will prove a corrected version of the proposition. We need some notation. Let $\sigma$ be the unique non-identity automorphism of $K/\mathbb{Q}$. We denote $\sigma(\alpha)$ by $\alpha'$ for $\alpha \in R$. We denote $\sigma(I)$ by $I'$ for an ideal $I$ of $R$. Proposition Let $K$ be a quadratic number field, $d$ its discriminant. Let $R$ be an order of $K$, $D$ its discriminant. By this question, $D \equiv 0$ or $1$ (mod $4$). By this question, $1, \omega = \frac{(D + \sqrt D)}{2}$ is a basis of $R$ as a $\mathbb{Z}$-module. Let $f$ be the order of $\mathcal{O}_K/R$ as a $\mathbb{Z}$-module. Then $D = f^2d$ by this question. We suppose gcd$(f, 2) = 1$. Case 1 $D$ is even. Since $D \equiv 0$ (mod $4$), $D \equiv 0, 4$ (mod $8$). If $D \equiv 0$ (mod $8$), let $P = [2, \omega]$. If $D \equiv 4$ (mod $8$), let $P = [2, 1 + \omega]$. Then $P$ is a prime ideal and $2R = P^2$. Moreover $P = P'$. Case 2 $D \equiv 1$ (mod $8$). $P = [2, \omega]$ and $P' = [2, 1 + \omega]$ are distinct prime ideals and $2R = PP'$. Case 3 $D \equiv 5$ (mod $8$). $2R$ is a prime ideal. We need the following lemmas to prove the proposition. Lemma 1 Let $K, R, D, \omega$ be as in the proposition. Let $P = [2, b + \omega]$, where $b$ is a rational integer. Then $P$ is an ideal if and only if $(2b + D)^2 - D \equiv 0$ (mod $8$). Moreover, if $P$ satisfies this condition, $P$ is a prime ideal. Proof: By this question, $P = [2, b + \omega]$ is an ideal if and only if $N_{K/\mathbb{Q}}(b + \omega) \equiv 0$ (mod $2$). $N_{K/\mathbb{Q}}(b + \omega) = (b + \omega)(b + \omega') = \frac{2b + D + \sqrt D}{2}\frac{2b + D - \sqrt D}{2} = \frac{(2b + D)^2 - D}{4}$. Hence $P$ is an ideal if and only if $(2b + D)^2 - D \equiv 0$ (mod $8$). Since $N(P) = 2$, $P$ is a prime ideal. Lemma 2 Let $K, R, D, \omega$ be as in the proposition. 
Suppose gcd$(f, 2) = 1$ and there exist no prime ideals of the form $P = [2, b + \omega]$, where $b$ is an integer. Then $2R$ is a prime ideal of $R$. Proof: Let $P$ be a prime ideal of $R$ lying over $2$. Then $P \cap \mathbb{Z} = 2\mathbb{Z}$. By this question, there exist integers $b, c$ such that $P = [2, b + c\omega], c \gt 0, 2 \equiv 0$ (mod $c$), $b \equiv 0$ (mod $c$). Then $c = 1$ or $2$. By the assumption, $c = 2$. Hence $P = [2, 2\omega] = 2R$. Proof of the proposition Case 1 $D$ is even. Let $P = [2, b + \omega]$, where $b$ is an integer. We may assume that $b = 0$ or $1$. By Lemma 1, $P$ is a prime ideal if and only if $(2b + D)^2 - D \equiv 0$ (mod $8$). Suppose $D \equiv 0$ (mod $8$). Then $(2b + D)^2 - D \equiv 0$ (mod $8$) if and only if $b = 0$. Hence $P = [2, \omega]$ is an ideal of $R$. $P' = [2, \omega'] = [2, D - \omega] = [2, -\omega] = [2, \omega] = P$. Suppose $D \equiv 4$ (mod $8$). Then $(2b + D)^2 - D \equiv 0$ (mod $8$) if and only if $b = 1$. Hence $P = [2, 1 + \omega]$ is an ideal of $R$. $P' = [2, 1 + \omega'] = [2, 1 + D - \omega] = [2, -1 - D + \omega] = [2, 1 + \omega] = P$. Since gcd$(f, 2) = 1$, $P$ is regular by this question. Hence $PP' = 2R$ by this question. Case 2 $D \equiv 1$ (mod $8$). If $b = 0, 1$, then $(2b + D)^2 - D \equiv (2b + 1)^2 - 1 \equiv 0$ (mod $8$). Hence, by Lemma 1, $P = [2, \omega]$ and $Q = [2, 1 + \omega]$ are prime ideals of $R$. $P' = [2, \omega'] = [2, D - \omega] = [2, - D + \omega] = [2, 1 + \omega] = Q$. Since gcd$(f, 2) = 1$, $P$ is regular by this question. Hence $PP' = 2R$ by this question. Case 3 $D \equiv 5$ (mod $8$). Consider the following congruence equation. $(2b + D)^2 - D \equiv (2b + 5)^2 - 5 \equiv 0$ (mod $8$). Since $b$ does not satisfy this congruence equation when $b = 0$ or $1$, there exist no ideals of the form $[2, b + \omega]$. Hence $2R$ is a prime ideal by Lemma 2.
{ "language": "en", "url": "https://math.stackexchange.com/questions/555045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find an ordered basis $B$ for $M_{n\times n}(\mathbb{R})$ such that $[T]_B$ is a diagonal matrix for $n > 2$ I have a homework problem that I'm stuck on. It is problem 5.1.17 in the Friedberg, Insel, and Spence Linear Algebra book for reference. "Let T be the linear operator on $M_{n\times n}(\mathbb{R})$ defined by $T(A) = A^t$" is the beginning of the problem. The part I'm concerned with is part d, "Find an ordered basis $B$ for $M_{n\times n}(\mathbb{R})$ such that $[T]_B$ is a diagonal matrix for $n > 2$?" I completed part c), which is the same part but for $2\times2$ matrices instead. I'm stuck on figuring this part out.
Hint: What matrices satisfy $A^\top =\lambda A$ for some $\lambda$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/555157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The number of integral solutions for the equation $x-y = x^2 + y^2 - xy$ Find the number of integral solutions of the equation $x-y = x^2 + y^2 - xy$ and of the equation $x+y = x^2 + y^2 - xy$
Added: The approach below is ugly: It would be most comfortable to delete. We look at your second equation. Look first at the case $x\ge 0$, $y\ge 0$. We have $x^2+y^2-xy=(x-y)^2+xy$. Thus $x^2+y^2-xy\ge xy$. So if the equation is to hold, we need $xy\le x+y$. Note that $xy-x-y=(x-1)(y-1)-1$. The only way we can have $xy-x-y\le 0$ is if $(x-1)(y-1)=0$ or $(x-1)(y-1)=1$. In the first case, we have $x=1$ or $y=1$. Suppose that $x=1$. Then we are looking at the equation $1+y=1+y^2-y$, giving $y=0$ or $y=2$. By symmetry we also have the solution $y=1$, $x=0$ or $x=2$. If $(x-1)(y-1)=1$, we have $x=0$, $y=0$ or $x=2$, $y=2$. Now you can do an analysis of the remaining $3$ cases $x\lt 0$, $y\ge 0$; $y\lt 0$, $x\ge 0$; $x\lt 0$, $y\lt 0$. There is less to these than meets the eye. The first two cases are essentially the same. And since $x^2+y^2-xy=\frac{1}{2}((x-y)^2+x^2+y^2)$, we have $x^2+y^2-xy\ge 0$ for all $x,y$, so $x\lt 0$, $y\lt 0$ is impossible.
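The case analysis can be cross-checked by brute force: since $x^2+y^2-xy=\frac12((x-y)^2+x^2+y^2)$ grows quadratically while $x\pm y$ grows only linearly, every solution must be small, so a modest search window already finds them all:

```python
# Brute-force search for integer solutions of both equations.
window = range(-20, 21)
sols_minus = sorted((x, y) for x in window for y in window
                    if x - y == x * x + y * y - x * y)
sols_plus = sorted((x, y) for x in window for y in window
                   if x + y == x * x + y * y - x * y)
print(sols_minus)  # [(0, -1), (0, 0), (1, 0)]
print(sols_plus)   # [(0, 0), (0, 1), (1, 0), (1, 2), (2, 1), (2, 2)]
```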
{ "language": "en", "url": "https://math.stackexchange.com/questions/555235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Twisted sheaf $\mathcal{F}(n)$. Let $\mathcal{F}$ be a sheaf on a scheme $X$ and $O_X(k)$ as usual. We define $\mathcal{F}(n) = \mathcal{F} \otimes_{O_X} O_X(n)$, I don't undertand this definition. What is this tensor product? Then, if we can try to find examples, if $n=1$ and $\mathcal{F}=\mathcal{O}_{\mathbb{A}^1_k}$, who is $\mathcal{O}_{\mathbb{A}^1_k}(1)$?, and $\mathcal{O}_{\mathbb{A}^1_k}(n)$?
I am sorry, but I think $O_X(n)$ can only be defined when the underlying scheme is projective. So I think the proper example would be $\mathcal{O}_{\mathbb{P}^1_k}(1)$, and this is just the Serre twisting sheaf (the dual of the tautological line bundle), which is made from the degree one parts of the original graded ring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/555310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 2 }
What are the most overpowered theorems in mathematics? What are the most overpowered theorems in mathematics? By "overpowered," I mean theorems that allow disproportionately strong conclusions to be drawn from minimal / relatively simple assumptions. I'm looking for the biggest guns a research mathematician can wield. This is different from "proof nukes" (that is, applying advanced results to solve much simpler problems). It is also not the same as requesting very important theorems, since while those are very beautiful and high level and motivate the creation of entire new disciplines of mathematics, they aren't always commonly used to prove other things (e.g. FLT), and if they are they tend to have more elaborate conditions that are more proportional to the conclusions (e.g. classification of finite simple groups). Answers should contain the name and statement of the theorem (if it is named), the discipline(s) it comes from, and a brief discussion of why the theorem is so good. I'll start with an example. The Feit-Thompson Theorem. All finite groups of odd order are solvable. Solvability is an incredibly strong condition in that it immediately eliminates all the bizarre, chaotic things that can happen in finite nonabelian simple groups. Solvable groups have terminating central and derived series, a composition series made of cyclic groups of prime order, a full set of Hall subgroups, and so on. The fact that we can read off all that structure simply by looking at the parity of a group's order is amazing.
there are no theorems whose conclusions contain more than their fully-spelled-out premises. all purported examples of such just use, e.g., definitions that allow a short syntactic statement of the theorem while shoving all the actual work of it under the rug. well, unless math is inconsistent at any rate - in which case we have bigger problems.
{ "language": "en", "url": "https://math.stackexchange.com/questions/555316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "159", "answer_count": 33, "answer_id": 30 }
Proof of natural log identities I need to prove a few of the following identities from a real analysis perspective; this means I do not have access to the $\ln e^2 = 2$ type definition of the log function. I am developing the log function from the definition $\log x = \int_1^x \frac1t \mathrm dt$ for $0 < x$. I need to prove that: $\ln(ab) = \ln(a) + \ln(b)$ first. I imagine the proof for that will apply itself pretty directly to $\ln(x^n) = n\ln(x)$. I've been given the hint that the idea here is to define $f(x) = \ln(ax)$ and show that $f'(x) = (\ln x)'$, implying $f(x) = L(ax) = L(x) + k$, where I can show that $k = L(a)$. Any help with starting this?
Let $f(x) = \ln(x)$ and let $g(x) = \ln(ax)$, where $a$ is some constant. We claim that $f$ and $g$ only differ by a constant. To see this, it suffices to prove that they have the same derivative. Indeed, by the fundamental theorem of calculus and chain rule, we have that: \begin{align*} g'(x) &= \frac{d}{dx} \ln(ax) = \frac{d}{dx} \int_1^{ax} \frac{dt}{t} = \frac{1}{ax} \cdot a = \frac{1}{x} = \frac{d}{dx} \int_1^{x} \frac{dt}{t} = \frac{d}{dx} \ln(x) = f'(x) \end{align*} Hence, we have that $g(x) = f(x) + k$ for some constant $k$. Now substitute $x=1$. This yields: \begin{align*} g(1) &= f(1) + k \\ \ln(a \cdot 1) &= \ln(1) + k \\ \int_1^{a \cdot 1}\frac{dt}{t} &= \int_1^1\frac{dt}{t} + k \\ \int_1^{a}\frac{dt}{t} &= 0 + k \\ \ln(a) &= k \\ \end{align*} So we have that $\ln(ax) = \ln(a) + \ln(x)$, as desired.
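The identity can also be sanity-checked straight from the integral definition, approximating $L(x)=\int_1^x \frac{dt}{t}$ by a midpoint rule (a numerical illustration only; the proof above does not depend on it):

```python
def log_int(x, n=100_000):
    # midpoint-rule approximation of log(x) = integral of dt/t from 1 to x, x > 0
    h = (x - 1.0) / n
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(n))

a, b = 2.0, 3.0
print(log_int(a * b), log_int(a) + log_int(b))  # both approximate log(6)
```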
{ "language": "en", "url": "https://math.stackexchange.com/questions/555406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Understanding an Approximation I am reading the paper A Group-theoretic Approach to Fast Matrix Multiplication and there is an approximation in the paper I don't fully understand. In the proof of Theorem 3.3. it is stated that $$ \frac{\ln (n(n+1)/2)!)}{\ln (1!2!\dots n!)}= 2+ \frac{2-\ln 2}{\ln n}+O\left( \frac{1}{\ln^2 n}\right). $$ This is how far I got: I used Stirling's Approximation to estimate the numerator, leading to $$ \ln \left( \frac{n(n+1)}{2}!\right)= \frac{n(n+1)}{2} \ln \left( \frac{n(n+1)}{2}\right)- \frac{n(n+1)}{2}+O\left(\ln \frac{n(n+1)}{2}\right). $$ For the denominator I also use Stirling's approximation and $\ln (ab) = \ln a + \ln b$, leading to $$ \ln (1!2!\dots n!) = \sum_{k=1}^n k\ln k -k + O(\ln k). $$ I couldn't find a nice closed-form expression for $\sum_{k=1}^n k\ln k$ (is there one?), so I approximated it by $\int x \ln x \, dx = \frac{x^2\ln x} 2 -\frac {x^2} 4$, leading to $$ \ln (1!2!\dots n!) \approx \frac{n^2 \ln n} 2 - \frac{3n^2 } 4 +O(n\ln n). $$ Now, since the leading term of our fraction is, after cancelling, about $\frac{\ln(n^2)}{\ln n}$, I can see where the leading $2$ on the RHS comes from. But what about $\frac{2-\ln 2}{\ln n}$? Can it be calculated by a more precise bound on the denominator, e.g. by avoiding the integral?
The highest order term in $\log \prod\limits_{k=1}^n k!$ is $\frac{n^2}{2}\log n$, so in the desired $$\log \left(\frac{n(n+1)}{2}\right){\Large !} = \left(2 + \frac{2-\log 2}{\log n} + O\left(\frac{1}{\log^2 n}\right)\right)\log \prod_{k=1}^n k!,\tag{1}$$ we get on the right hand side a term $O\left(\frac{n^2}{\log n}\right)$ which we cannot (need not) specify more precisely. Hence in the approximations, we can ignore every term of order $\frac{n^2}{\log n}$ or less. What we need to establish $(1)$ is $$\begin{align} \log \left(\frac{n(n+1)}{2}\right){\Large !} &= \frac{n^2}{2}\log \frac{n(n+1)}{2} - \frac{n^2}{2} + O(n\log n),\tag{2}\\ \log \prod_{k=1}^n k! &= \frac{n^2}{2}\log n - \frac{3n^2}{4} + O(n\log n).\tag{3} \end{align}$$ We obtain $$\begin{align} \log \left(\frac{n(n+1)}{2}\right){\Large !} - 2\log \prod_{k=1}^n k! &= \frac{n^2}{2} \log n(n+1) - \frac{n^2}{2}\log 2 - \frac{n^2}{2}\\ &\quad - \frac{n^2}{2} \log n^2 + \frac{3n^2}{2} + O(n\log n)\\ &= \frac{n^2}{2}\log\frac{n+1}{n} + \frac{n^2}{2}(2-\log 2) + O(n\log n)\\ &= \frac{n^2}{2}(2-\log 2) + O(n\log n)\\ &= \frac{2-\log 2}{\log n}\left(\log \prod_{k=1}^nk! + O(n^2)\right) + O(n\log n) \end{align}$$ and hence $$\log \left(\frac{n(n+1)}{2}\right){\Large !} = \left(2 + \frac{2-\log 2}{\log n}\right)\log \prod_{k=1}^n k! + O\left(\frac{n^2}{\log n}\right).$$ Since $\log \prod k! \in \Theta\left(n^2\log n\right)$, the division yields $$\frac{\log \left(\frac{n(n+1)}{2}\right){\Large !}}{\log \prod\limits_{k=1}^nk!} = 2 + \frac{2-\log 2}{\log n} + O\left(\frac{1}{\log^2 n}\right)$$ as desired. It remains to establish $(2)$ and $(3)$. For that, we use the most significant terms of Stirling's approximation $$\log k! = \left(k+\frac12\right)\log k - k + \frac12\log 2\pi + \frac{1}{12k} + O\left(\frac{1}{k^2}\right).$$ We obtain $(2)$ by ignoring all terms of order less than $n^2$ in Stirling's approximation. 
For $(3)$, a comparison with the integral $\int_1^n t\log t\,dt$ yields $$\sum_{k=1}^n k\log k = \frac{n^2}{2}\log n - \frac{n^2}{4} + O(n\log n),$$ and hence $$\log \prod_{k=1}^n k! = \sum_{k=1}^n (k\log k - k) + O(n\log n) = \frac{n^2}{2}\log n - \frac{3n^2}{4} + O(n\log n).$$
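The asymptotics can be checked numerically with `math.lgamma` (using $\log k! =$ `lgamma(k+1)`); at $n=1000$ the ratio already agrees with $2+(2-\log 2)/\log n$ up to a deviation consistent with the $O(1/\log^2 n)$ term:

```python
import math

def ratio(n):
    N = n * (n + 1) // 2
    num = math.lgamma(N + 1)                                # log(N!)
    den = sum(math.lgamma(k + 1) for k in range(1, n + 1))  # log(1! 2! ... n!)
    return num / den

n = 1000
r = ratio(n)
pred = 2 + (2 - math.log(2)) / math.log(n)
print(r, pred, (r - pred) * math.log(n) ** 2)  # the last factor stays bounded
```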
{ "language": "en", "url": "https://math.stackexchange.com/questions/555481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
is the number algebraic? Is the number $\alpha=1+\sqrt{2}+\sqrt{3}$ algebraic? My first attempt was to try a polynomial for which $p(\alpha)=0$ for some $p(x)=a_{0}+a_{1}x+\cdots +a_{n-1}x^{n-1}$, i.e. set $x=1+\sqrt{2}+\sqrt{3}$ and then square it many times to get rid of the irrationals. This procedure was futile. Second attempt: I remember from the lecture we had that if $L=K(\alpha, \beta)$ with $\alpha,\beta$ algebraic over $K$ then $[L:K]<\infty$; moreover $[K(\gamma):K]<\infty$ for $\gamma=\alpha\pm \beta$ and $\gamma=\alpha\beta$ and $\gamma=\frac{\alpha}{\beta},\beta\neq 0$ as $K(\gamma)\subseteq L$, hence $\gamma$ is algebraic over $K$. So applying the above to the problem, $\alpha=1+\sqrt{2}+\sqrt{3}$ is then algebraic over $K$, right? Or is there any other way to show it? But I would like to know if it is possible to find a minimal polynomial having $\alpha$ as a zero? How to do that?
I can't do this in my head but I hope the pattern is obvious $$\begin{align} & (x - 1 - \sqrt{2} - \sqrt{3})(x - 1 - \sqrt{2} + \sqrt{3}) (x - 1 + \sqrt{2} - \sqrt{3})(x - 1 + \sqrt{2} + \sqrt{3})\\ = & x^4-4x^3-4x^2+16x-8 \end{align}$$ In general, given any two algebraic numbers $\alpha, \beta$ with minimal polynomial $f(x), g(x) \in \mathbb{Q}[x]$. If we let $\alpha_1, \alpha_2, \ldots, \alpha_{\deg f}$ and $\beta_1, \beta_2, \ldots, \beta_{\deg g}$ be the two complete sets of roots of $f(x)$ and $g(x)$ in some splitting field of $f(x)g(x)$. i.e $$f(x) = \prod_{i=1}^{\deg f}(x - \alpha_i)\quad\text{ and }\quad g(x) = \prod_{j=1}^{\deg g}(x - \beta_j)$$ $\alpha + \beta$ will then be a root of a polynomial with degree $\deg f \cdot \deg g$. The polynomial can be defined as $$h(x) = \prod_{i=1}^{\deg f}\prod_{j=1}^{\deg g} ( x - \alpha_i - \beta_j ) = \prod_{i=1}^{\deg f} g(x - \alpha_i) = \prod_{j=1}^{\deg g} f(x - \beta_j) $$ The coefficients of $h(x)$ are symmetric polynomials with integer coefficients in $\alpha_i$ and $\beta_j$. This means they can be expressed in terms of elementary symmetric polynomials in either $\alpha_i$ or in $\beta_j$. i.e. in terms of coefficients $f(x)$ and $g(x)$ which belongs to $\mathbb{Q}$. As a result, the polynomial $h(x) \in \mathbb{Q}[x]$ and hence $\alpha + \beta$ is algebraic. The $h(x)$ so constructed need not be minimal polynomial of $\alpha + \beta$. However, one of its irreducible factor will be the one you want.
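A quick floating-point check confirms that $1+\sqrt2+\sqrt3$, together with the other three sign combinations, is a root of $x^4-4x^3-4x^2+16x-8$:

```python
import math

def p(x):
    return x ** 4 - 4 * x ** 3 - 4 * x ** 2 + 16 * x - 8

roots = [1 + s2 * math.sqrt(2) + s3 * math.sqrt(3)
         for s2 in (1, -1) for s3 in (1, -1)]
for root in roots:
    print(root, p(root))  # p(root) vanishes up to floating-point rounding
```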
{ "language": "en", "url": "https://math.stackexchange.com/questions/555553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Chaoticity and randomness in a time series Suppose we have a time series : $X=\{X_t,t\in T\}$. How can we check if the data $X_t$ are random or they are the result of some chaotic behaviour of a nonlinear dynamical system? Is there some test useful to prove the chaoticity of the series? Thanks.
You can apply chaotic time series analysis to the data. There are some useful tools about this subject here. Such implementations try to find positive Lyapunov exponents from the data set based on the studies of Rosenstein et al. ("A practical method for calculating largest Lyapunov exponents from small data sets") or Wolf et al. ("Determining Lyapunov exponents from a time series"). You can also check Julien Sprott's book about this subject. If you can find positive Lyapunov exponents with the above methods, you can say that the time series is chaotic. However, as Henning Makholm stated, randomness is a different case. You can apply statistical randomness tests to a sufficiently large data set, but only to conclude that the hypothesis of randomness cannot be rejected at a given significance level.
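As a minimal illustration of the idea (this is not the Rosenstein or Wolf algorithm, which reconstruct the exponent from delay-embedded data; it only shows what a positive exponent looks like), one can estimate the largest Lyapunov exponent of a known chaotic system directly from its derivative. For the logistic map at $r=4$ the exact value is $\log 2 \approx 0.693 > 0$:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=200_000, transient=1_000):
    # average log|f'(x)| along an orbit of f(x) = r x (1 - x)
    x = x0
    for _ in range(transient):
        x = r * x * (1 - x)
    total, count = 0.0, 0
    for _ in range(n):
        d = abs(r * (1 - 2 * x))
        if d > 0:  # skip the measure-zero point x = 1/2
            total += math.log(d)
            count += 1
        x = r * x * (1 - x)
    return total / count

print(lyapunov_logistic())  # close to log(2) ~ 0.693; positive => chaotic
```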
{ "language": "en", "url": "https://math.stackexchange.com/questions/555675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of elements in a group and its subgroups (GS 2013) Every countable group has only countably many distinct subgroups. The above statement is false. How to show it? One counterexample may be sufficient, but I am blind to find it out. I have considered some counterexample only like $(\mathbb{Z}, +)$, $(\mathbb{Q}, +)$ and $(\mathbb{R}, +)$. Is there any relationship between number of elements in a group and number of its subgroup? I do not know. Please discuss a little. What will be if the group be uncountable? Thank you for your help.
The countably infinite sum $S$ of copies of the group $G=\mathbb Z/2\mathbb Z$ (cyclic group of order two) indexed by $I$ is countable. Every subset $A$ of the index set $I$ corresponds to a subgroup of $S$ consisting of elements with nonzero components only in the copies of $G$ corresponding to the subset $A\subset I$. This gives uncountably many subgroups because the set of subsets of $I$ is uncountable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/555724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Equivalence relation class $\bar{0}$ In the set $\mathbb{Z}$ we define the following relation: $$a\Re b \iff a\equiv b \pmod2\text{ and }a\equiv b \pmod3$$ 1) Prove that $\Re$ is an equivalence relation. (Done) 2) Describe the equivalence class $\bar{0}$. How many different equivalence classes exist? My thought on $\bar{0}$ is: $$\bar{0} = \{a \in \mathbb{Z} : a\equiv 0 \pmod2\text{ and }a\equiv 0 \pmod3\}$$
Your thoughts?: exactly. Now, $a \equiv 0 \pmod 2 $ and $a \equiv 0 \pmod 3 \implies a \equiv 0 \pmod 6$. So, the equivalence class of $\bar{0}$ is equal to the set of all integer multiples of $6$: $$\bar{0} = \{6k\mid k\in \mathbb Z\}$$ Can you see that the equivalence classes of $\Re$ are the residue classes, modulo $6$?
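Since $a\equiv b\pmod 2$ and $a\equiv b\pmod 3$ together are equivalent to $a\equiv b\pmod 6$, the classes are exactly the six residue classes mod $6$; a small check over a finite window of integers:

```python
def related(a, b):
    return (a - b) % 2 == 0 and (a - b) % 3 == 0

# collect the equivalence class of each r inside a finite window of Z
window = range(-30, 30)
classes = {frozenset(x for x in window if related(x, r)) for r in window}
print(len(classes))  # 6, one class per residue mod 6
```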
{ "language": "en", "url": "https://math.stackexchange.com/questions/555800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What are some examples of notation that really improved mathematics? I've always felt that the concise, suggestive nature of the written language of mathematics is one of the reasons it can be so powerful. Off the top of my head I can think of a few notational conventions that simplify problem statements and ideas, (these are all almost ubiquitous today): * *$\binom{n}{k}$ *$\left \lfloor x \right \rfloor$ and $\left \lceil x \right \rceil$ *$\sum f(n)$ *$\int f(x) dx$ *$[P] = \begin{cases} 1 & \text{if } P \text{ is true;} \\ 0 & \text{otherwise} \end{cases}$ The last one being the Iverson Bracket. A motivating example for the use of this notation can be found here. What are some other examples of notation that really improved mathematics over the years? Maybe also it is appropriate to ask what notational issues exist in mathematics today? EDIT (11/7/13 4:35 PM): Just thought of this now, but the introduction of the Cartesian Coordinate System for plotting functions was a HUGE improvement! I don't think this is outside the bounds of my original question and note that I am considering the actual graphical object here and not the use of $(x,y)$ to denote a point in the plane.
So - what about fraction notation? Using this: $$\frac{a+b}{c+d}$$ Instead of this: $$(a+b)\div(c+d)$$ And to some extent anything else that trades vertical space for horizontal compactness, e.g.: $$\sum^{10}_{x=1}x^2$$ instead of (for example) $\mathrm{sum}(x,1,10,x\uparrow 2)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/555895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "133", "answer_count": 28, "answer_id": 6 }
Proving A Trigonometric Identity- Double Angles $(\cos(2x)-\sin(2x))(\sin(2x)+\cos(2x)) = \cos(4x)$ I'm trying to prove that the left side equals the right side. I'm just stuck on which double angle formula of cosine to use.
$$(\cos(2x)-\sin(2x))(\sin(2x)+\cos(2x))=(\cos^2(2x)-\sin^2(2x)) = \cos(4x)$$ From $$\cos(a+b)=\cos a \cos b-\sin a\sin b$$ if $a=2x,b=2x$ then $$\cos(4x)=\cos^22x-\sin^22x$$
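A numerical spot check of the identity at a few values of $x$:

```python
import math

def lhs(x):
    return (math.cos(2 * x) - math.sin(2 * x)) * (math.sin(2 * x) + math.cos(2 * x))

def rhs(x):
    return math.cos(4 * x)

for x in (0.0, 0.3, 1.0, 2.5, -1.7):
    print(x, lhs(x), rhs(x))
```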
{ "language": "en", "url": "https://math.stackexchange.com/questions/555934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are there thoughtfully simple concepts that we cannot currently prove? I was driving and just happened to wonder if there existed some concepts that are simple to grasp, yet are not provable via current mathematical techniques. Does anyone know of concepts that fit this criteria? I imagine the level of simple could vary considerably from person to person, myself being on the very low end of things.
we know that $e$ is transcendental and $\pi$ is transcendental, but we still don't know whether $e + \pi$ and $e - \pi$ are transcendental or not. we do know that at least one of them is transcendental, which follows from a simple calculation: if both were algebraic, then their sum $2e$ would be algebraic too, contradicting the transcendence of $e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/556033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Colored blocks and towers George got a big box from his parents. In this box are colored blocks. He has white, black, red, blue and orange blocks. These blocks are all exactly the same size and he has the same number of blocks of each color. George will build towers of $10$ blocks high. Two towers are equal if they have the same color on every level of the tower. How many different towers can George build? A. Without white blocks? B. With exactly $6$ white blocks? C. With two blocks of each color? A. Is this just $4^{10}$? B. Is this ${{10}\choose{6}} + 4^4$? C. Is this $5!$ ?
For C), we choose the $2$ positions (from the $10$) that will be occupied by colour 1. Then we choose the $2$ positions, from the remaining $8$, that are occupied by colour $2$. And so on. For B), for every way of selecting where the whites will go, there are $4^4$ ways to fill in the rest. So you need to multiply, not add. The answer to A) is correct.
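If it helps to see the numbers, here is a small sketch of the three counts described above (the variable names are mine):

```python
from math import comb, factorial

# A: no white blocks -- 4 colour choices at each of the 10 levels.
count_a = 4**10
# B: choose the 6 levels for white, then fill the other 4 levels with 4 colours
# (multiply, not add).
count_b = comb(10, 6) * 4**4
# C: choose 2 levels for colour 1, then 2 of the remaining 8 for colour 2, etc.,
# which telescopes to 10! / (2!)^5.
count_c = comb(10, 2) * comb(8, 2) * comb(6, 2) * comb(4, 2) * comb(2, 2)
assert count_c == factorial(10) // 2**5
print(count_a, count_b, count_c)   # 1048576 53760 113400
```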
{ "language": "en", "url": "https://math.stackexchange.com/questions/556278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Test for convergence/divergence of $\sum_{k=1}^\infty \frac{k^2-1}{k^3+4}$ Given the series $$\sum_{k=1}^\infty \frac{k^2-1}{k^3+4}.$$ I need to test for convergence/divergence. I can compare this to the series $\sum_{k=1}^\infty \frac{1}{k}$, which diverges. To use the comparison test, won't I need to show that $\frac{k^2-1}{k^3+4}>\frac{k^3}{k^4}=\frac{1}{k}$, in order to state the original series diverges? This doesn't seem to hold, I feel like I'm missing the obvious. Any help is appreciated. Thanks.
Hint: For $k\ge3$, $$ \frac{k^2-1}{k^3+4}\ge\frac1{k+1} $$
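The hint can be checked exactly for small $k$ (a sanity check, not a proof — cross-multiplying, the inequality reduces to $k^2 - k - 5 \ge 0$, which holds from $k = 3$ on):

```python
from fractions import Fraction

# Exact rational check of (k^2 - 1)/(k^3 + 4) >= 1/(k + 1) for k = 3, ..., 199.
for k in range(3, 200):
    assert Fraction(k**2 - 1, k**3 + 4) >= Fraction(1, k + 1)
```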
{ "language": "en", "url": "https://math.stackexchange.com/questions/556414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Calculating the probability of an event occurring in a specific time period I am confused at how to approach the following question, i.e. what probability formula I am supposed to use. If the probability of a flood is 0.12 during a year, what is the probability of two floods over the next 10 years...? I have thought perhaps trying Geometric distribution at first, but it didn't seem to work out properly. I also tried Poisson, but it turned out to be quite a small number... which doesn't seem viable. So my question is, how can I go about solving this and which probability distribution am I supposed to use? Thanks in advance
Correct me if I'm wrong, but wouldn't this just be another Poisson computation? Assuming you could have more than one flood per year, the expected number of floods in a 10-year period is $\lambda_{10} = \lambda_1 \cdot 10 = 0.12 \cdot 10 = 1.2$. So then rerun the Poisson formula for this new period of 10 years with $x = 2$: $$P(2) = \frac{1.2^2 e^{-1.2}}{2!} \approx 0.216859,$$ i.e. about a $21.69\%$ chance of getting EXACTLY 2 floods over 10 years.
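A short sketch of that computation using only the standard library (the rounding tolerance is mine):

```python
import math

# Poisson pmf with rate lam = 0.12 floods/year * 10 years = 1.2,
# evaluated at exactly x = 2 floods.
lam = 0.12 * 10
p_two = lam**2 * math.exp(-lam) / math.factorial(2)
assert abs(p_two - 0.216859) < 1e-5   # about a 21.69% chance
```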
{ "language": "en", "url": "https://math.stackexchange.com/questions/556481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Subgroup of a nilpotent group Let $G$ be nilpotent and $H \le G$. Let $P_1,P_2,\ldots,P_k$ be the Sylow subgroups of $H$. Is it true that $H = P_1 P_2 \cdots P_k$? I know that when $G$ is nilpotent, it is the direct product of its Sylow subgroups, but is that true for a subgroup of $G$ as well?
Yes, because a subgroup of a nilpotent group is also nilpotent. Let $1=G_0\leq G_1\leq \cdots\leq G_n=G$ be a central series of $G$; that means $[G_i,G]\leq G_{i-1}$. Let $H_i=H\cap G_i$. Then $1=H_0\leq H_1\leq\cdots\leq H_n=H$ is a central series of $H$, because $$[H_i,H]=[H\cap G_i,H]\leq H\cap [G_i,G]\leq H\cap G_{i-1}=H_{i-1}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/556617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the sum of (n-1)+(n-2)+...+(n-k)? What is the sum of this series ? $(n-1)+(n-2)+(n-3)+...+(n-k)$ $(n-1)+(n-2)+...+3+2+1 = \frac{n(n-1)}{2}$ So how can we find the sum from $n-1$ to $n-k$ ?
$$(n-1)+(n-2)+\cdots+(n-k)=\underbrace{n+n+\cdots +n}_{\text{$k$ copies}}-(1+2+\cdots+k)=nk-\frac{k}{2}(k+1)$$
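A brute-force spot-check of the closed form (illustrative only; the helper names are mine):

```python
# Compare the direct sum (n-1) + (n-2) + ... + (n-k) with nk - k(k+1)/2.
def direct(n, k):
    return sum(n - i for i in range(1, k + 1))

def closed(n, k):
    return n * k - k * (k + 1) // 2

for n in range(1, 20):
    for k in range(1, 20):
        assert direct(n, k) == closed(n, k)
```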
{ "language": "en", "url": "https://math.stackexchange.com/questions/556807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Differentiation of $f(x_1, x_2) = \frac{x_1^2 + x_2^2}{x_2 - x_1 + 2}$ this question might sound stupid to you, but I am having problems right now to differentiate this function: $$f(x_1, x_2) = \frac{x_1^2 + x_2^2}{x_2 - x_1 + 2}$$ I know the solution, from wolfram alpha, however I do not know how to come up with it by my own. I would appreciate your answer, if you could show me how to differentiate by $x_1$?
You need to use the definition of partial derivative, before getting used to sentences like "treat $x_2$ as constant". Let us do it: $$\frac{\partial f}{\partial x_1}(x_1,x_2):=\lim_{t\rightarrow 0}\frac{f(x_1+t,x_2)-f(x_1,x_2)}{t}=\lim_{t\rightarrow 0}\frac{(x_1^2+x_2^2)(x_2-x_1+2)-(x_1^2+x_2^2)(x_2-x_1+2)+t(2x_1+t)(x_2-x_1+2)+t(x_1^2+x_2^2)}{t(x_2-x_1-t+2)(x_2-x_1+2)}= \\\lim_{t\rightarrow 0}\frac{(2x_1+t)(x_2-x_1+2)+(x_1^2+x_2^2)}{(x_2-x_1-t+2)(x_2-x_1+2)}=\frac{2x_1(x_2-x_1+2)+(x_1^2+x_2^2)}{(x_2-x_1+2)^2},$$ which is the desired result.
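As a sanity check of the final formula (not a substitute for the definition-based limit above), one can compare it against a central finite difference at an arbitrary point; the point and step size below are my own choices:

```python
def f(x1, x2):
    return (x1**2 + x2**2) / (x2 - x1 + 2)

def dfdx1(x1, x2):
    # The result derived above:
    # (2*x1*(x2 - x1 + 2) + x1^2 + x2^2) / (x2 - x1 + 2)^2
    return (2*x1*(x2 - x1 + 2) + x1**2 + x2**2) / (x2 - x1 + 2)**2

x1, x2, h = 0.5, 1.5, 1e-6
numeric = (f(x1 + h, x2) - f(x1 - h, x2)) / (2*h)
assert abs(numeric - dfdx1(x1, x2)) < 1e-8
```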
{ "language": "en", "url": "https://math.stackexchange.com/questions/556891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
some weak conditions about Vitali convergence theorem As we have known,the Vitali convergence theorem is stated: Let $(X,\mathbb{M},\mu)$ be a positive measure space.If (i)$\mu(X)<\infty$; (ii)$\{f_n\}$ is uniformly integrable; (iii)$f_n(x)\to f(x)~~a.e.as~~n\to\infty$; (iv)$|f(x)|<\infty~~a.e$; then $f\in L^1(\mu)$ and $$\lim_{n\to\infty}\int_X |f_n-f|d\mu=0.$$ Call a set $\Phi\subset L^1(\mu)$ uniformly integrable if to each $\varepsilon>0$ corresponds a $\delta>0$ such that $$|\int_E fd\mu|<\varepsilon$$ whenever $f\in\Phi$ and $\mu(E)<\delta$. So I want to give some weaker conditions to verify whether the Vitali theorem is still valid or not. First,if $\mu$ is Lebesgue measure on $(-\infty,\infty)$,and moreover,$\{\parallel f_n\parallel_1\}$ is assumed to be bounded.How about the Vitali theorem? Second,can we omit the hypothesis (iv) when we consider the Lebesgue measure on a bounded interval?And can this be extended to finite measures? Any help will be appreciated.
Here is the strongest version of Vitali's theorem (from O. Kavian, Introduction à la théorie des points critiques, Springer, 1993) Definition. A sequence $\{f_n\}_n$ in $L^1(\Omega)$ is equi-integrable if the following condition is satisfied: for every $\varepsilon>0$ there exists a measurable set $A$ of finite measure and there exists $\delta>0$ such that * *for every $n \geq 1$, $\int_{\complement A} |f_n|<\varepsilon$; *for every measurable $E \subset \Omega$ whose measure is less than $\delta$, there results $\int_E |f_n|<\varepsilon$ for every $n \geq 1$. Theorem. Let $\{f_n\}_n$ be a sequence in $L^1(\Omega)$ that converges almost everywhere to some measurable function $f$. The sequence $\{f_n\}_n$ converges to $f$ in $L^1(\Omega)$ if and only if it is equi-integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/556975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the equation for a line tangent to a circle from a point outside the circle? I need to know the equation for a line tangent to a circle and through a point outside the circle. I have found a number of solutions which involve specific numbers for the circles equation and the point outside but I need a specific solution, i.e., I need an equation which gives me the $m$ and the $b$ in $f(x) = mx + b$ for this line.
The key point is to see that the radius drawn to the point of tangency is perpendicular to the tangent line, so the circle's center, the point of tangency, and your external point form a right triangle. See https://stackoverflow.com/questions/1351746/find-a-tangent-point-on-circle
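Here is one hedged sketch of how that observation turns into the $m$ and $b$ the question asks for. The tangent points lie at angles $\alpha \pm \beta$ as seen from the centre, where $\alpha$ is the direction from the centre $C$ to the external point $P$ and $\beta = \arccos(r/|P-C|)$. The function name and interface are my own, and note that a vertical tangent has no slope-intercept form (the division by `tx - px` would fail):

```python
import math

def tangent_lines(cx, cy, r, px, py):
    """Return [(m, b), (m, b)] for the two tangents from (px, py)
    to the circle with centre (cx, cy) and radius r."""
    dx, dy = px - cx, py - cy
    d = math.hypot(dx, dy)
    if d <= r:
        raise ValueError("point is not outside the circle")
    alpha = math.atan2(dy, dx)          # direction from centre to P
    beta = math.acos(r / d)             # half-angle subtended by the tangents
    lines = []
    for sign in (+1, -1):
        tx = cx + r * math.cos(alpha + sign * beta)   # tangent point
        ty = cy + r * math.sin(alpha + sign * beta)
        m = (ty - py) / (tx - px)       # fails if a tangent is vertical
        b = py - m * px
        lines.append((m, b))
    return lines
```

For the unit circle centred at the origin and $P=(2,0)$ this gives slopes $\pm 1/\sqrt 3$.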
{ "language": "en", "url": "https://math.stackexchange.com/questions/557036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to prove a function has no maximum I have a function: $$p\cdot(w+6s)^2+(1-p)\cdot(w-s)^2$$ where $p\in(0,1)$, $w>0$ and $s\geq0$ is a choice variable. I am looking for the maximum of the function with respect to $s$, but can quickly see that the maximum is undefined as the function tends towards infinity as $s$ increases. The question is how can I prove that in a concise way? Thanks in advance
Suppose $s \ge 0$. You have $p (w+6s)^2+(1-p)\cdot(w-s)^2 \ge p (w+6s)^2 \ge p s^2$, hence $\sup_s p (w+6s)^2+(1-p)\cdot(w-s)^2 = \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/557136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove that a set is countable Let $A\subseteq \Bbb R$, where $A$ is an infinite set of positive numbers. Suppose there exists $k \in \Bbb Z$ such that for all finite $B\subseteq A$, $$\sum_{b\in B}b<k.$$ Prove $A$ is countable. I have a hint: $A_n=\lbrace a \in A \mid a>\frac1n\rbrace$. I understand that $B \in \Bbb Q$ and $k$ can be the last element of $A$ but I'm not really sure if that's right or what to do with it. Any help would be appreciated.
HINT: Prove that each $A_n$ must be finite, and hence that $A=\bigcup_{n\in\Bbb Z^+}A_n$ is countable. If $A_n$ is infinite, and all of its elements are bigger than $\frac1n$, then there is a finite $B\subseteq A_n$ such that ... ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/557246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Radius and Interval of Convergence of $\sum (-1)^n \frac{ (x+2)^n }{n2^n}$ I have found the radius of convergence $R=2$ and the interval of convergence $I =[-2,2)$ for the following infinite series: $\sum_{n=1}^\infty (-1)^n \frac{ (x+2)^n }{n2^n}$ Approach: let $a_n = (-1)^n \frac{ (x+2)^n }{n2^n}$ Take the ratio test $\lim_{n\to \infty} \left| \frac{a_{n+1}}{a_n} \right| = \left| -\frac{1}{2}(x+2)\right|$ So it converges for $-2 < (x+2) < 2, R = 2$ Test end points if $x = -2$ $\sum_{n=1}^\infty (-1)^n(0) =0 < \infty$ , converges if $x = 2$ $\sum_{n=1}^\infty (-1)^n \frac{4^n}{n2^n} = \sum_{n=1}^\infty (-1)^n \frac{2^{2n}}{n2^n} = \sum_{n=1}^\infty (-1)^n \frac{2^n}{n}$ let $b_n = \frac{2^n}{n}$ $b_{n+1} > b_n$ for every value of $n$ By the Alternating Series Test, $\sum_{n=1}^\infty (-1)^n b_n$ diverges Hence, $R = 2$ and $I = [-2, 2)$ Is my approach correct? Are there any flaws, or better ways to solve?
The radius of convergence is right. The interval of convergence started out OK, you wrote that for sure we have convergence if $-2\lt x+2\lt 2$. But this should become $-4\lt x\lt 0$. The endpoint testing was inevitably wrong, since the incorrect endpoints were being tested. We have convergence at $x=0$ (alternating series) and divergence at $x=-4$ (harmonic series).
{ "language": "en", "url": "https://math.stackexchange.com/questions/557319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Choosing a branch for $\log$ when comparing $\prod_{n=1}^\infty(1+a_n)$ and $\sum_{n=1}^\infty \log{(1+a_n)}$ On Ahlfors on p. 191 he is talking about the relation between $\prod_{n=1}^\infty (1+a_n)$ and $\sum_{n=1}^\infty \log(1+a_n)$. He says: Since the $a_n$ are complex, we must agree on a definite branch of the logarithms, and we decide to choose the principal branch in each term. He makes no mention of the possibility that one of the terms might lie on the negative real axis. What then?
Since a necessary condition for the product to converge is that the factors converge to $1$, when testing the convergence of $\prod\limits_{n=1}^\infty(1+a_n)$, we can assume that $a_n \to 0$, otherwise the product is trivially divergent. So for all but finitely many terms we have $\lvert a_n\rvert < 1$, and then $\operatorname{Re} 1+a_n > 0$, so we can use the principal branch of the logarithm for these. The finitely many terms of the product where $\operatorname{Re} 1+ a_n \leqslant 0$ don't influence the convergence of the product (only the value it converges to, if it converges), so they can be ignored for the purpose. The terms with $a_n = -1$ must be removed from the product (and corresponding logarithm sum) when considering convergence, the terms with $a_n < -1$ could be retained, with an arbitrary choice of the logarithm for these finitely many terms, but it's simpler to also exclude them (but it would have been better to be explicit about that).
{ "language": "en", "url": "https://math.stackexchange.com/questions/557391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Bound of power series coefficients of a growth-order-one entire function An entire function $f(z)$ satisfies $$|f(z)| \leq A_\varepsilon e^{2\pi(M+\varepsilon)|z|}$$ for every positive $\varepsilon$. I want to show that $$\limsup_{n \to \infty}\ \left|f^{(n)}(0)\right|^{1/n} \leq 2\pi M.$$ Alternatively, we can state the result as $$\limsup_{n \to \infty}\ (n! |a_n|)^{1/n} \leq 2\pi M,$$ where $f(z) = \sum_{n=0}^\infty a_n z^n$ is the power series expansion. I can prove this result with the Paley-Wiener theorem when $f$ is of moderate decrease; I've no idea otherwise. Note that this is a pretty good bound, since equality holds when $f(z)=e^{2\pi Mz}$, just to give one example. P.S. Sorry for the bad question title, since I can't really come up with an illuminating summarization of this problem. Please feel free to edit if you think of something better.
We have $$\begin{align} \left\lvert f^{(n)}(0)\right\rvert &= \frac{n!}{2\pi} \left\lvert\int_{\lvert z\rvert = R} \frac{f(z)}{z^{n+1}}\,dz\right\rvert\\ &\leqslant \frac{n!}{2\pi} \int_0^{2\pi} \frac{\lvert f(Re^{i\varphi})\rvert}{R^n}\,d\varphi\\ &\leqslant \frac{n! A_\varepsilon e^{2\pi(M+\varepsilon)R}}{R^n} \end{align}$$ for all $R > 0$. The right hand side is minimised for $R = \dfrac{n}{2\pi(M+\varepsilon)}$, which gives the estimate $$\left\lvert f^{(n)}(0)\right\rvert \leqslant A_\varepsilon \frac{n!e^n\bigl(2\pi(M+\varepsilon)\bigr)^n}{n^n},$$ and taking $n$-th roots, $$\left\lvert f^{(n)}(0)\right\rvert^{1/n} \leqslant \sqrt[n]{A_\varepsilon}\sqrt[n]{n!}\frac{e}{n}2\pi(M+\varepsilon),$$ from which we deduce $$\limsup_{n\to\infty} \left\lvert f^{(n)}(0)\right\rvert^{1/n} \leqslant 2\pi(M+\varepsilon).$$ Since that holds for all $\varepsilon > 0$, we have indeed $$\limsup_{n\to\infty} \left\lvert f^{(n)}(0)\right\rvert^{1/n} \leqslant 2\pi M.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/557449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$QR$ decomposition of rectangular block matrix So I am running an iterative algorithm. I have a matrix $W$ of dimensions $n\times p$ which is fixed for every iteration and a matrix $\sqrt{3\rho} \boldsymbol{I}$ of dimension $p\times p$ where the $\rho$ parameter changes at every iteration. And for every iteration I need to evaluate the QR decomposition of the matrix: $$\widetilde{W} = \left[\begin{array}{c} W \\ \sqrt{3\rho} \boldsymbol{I} \end{array} \right] $$ which is a matrix of dimension $(n+p)\times p$. Since $W$ is fixed, I am wondering if there is any easy way to evaluate the QR decomposition of the matrix $\widetilde{W}$ by just looking at the QR decomposition of $W$? I hope to avoid evaluating the QR decomposition each time for each different $\rho$.
EDIT: in the following treat $\mathbf{W}$ as a square matrix. It is thus zero padded with extra rows of zero and singular values also and otherwise unchanged. For simplicity I drop the $\sqrt{3\rho}$ and use $\alpha$ instead. Using the SVD of the matrix $\mathbf{W}$, the problem can be transformed into one of doing QR with a fixed unitary matrix $\mathbf{V}^*$ and a parameter diagonal matrix $\mathbf{D}$. Here I show the transform. Given the SVD of $\mathbf{W} = \mathbf{U}\mathbf{S}\mathbf{V}^*$: $$\pmatrix{\mathbf{W} \\ \alpha \mathbf{I}} = \pmatrix{\mathbf{U}\mathbf{S}\mathbf{V}^* \\ \alpha \mathbf{I}} = \pmatrix{\mathbf{U} & \mathbf{0} \\ \mathbf{0} & \mathbf{V} }\pmatrix{\mathbf{S} \\ \alpha \mathbf{I}}\mathbf{V}^*$$ Since $\mathbf{S}$ is diagonal, $\mathbf{S} = \operatorname{diag}(s_0 , s_1 , s_2, \dots)$, it is easy to do Givens rotations on pairs of rows of the matrix $\pmatrix{\mathbf{S} \\ \alpha\mathbf{I}}$ for a single block diagonal matrix $\pmatrix{\mathbf{D} \\ \mathbf{0}}$ so that we have $$\pmatrix{\mathbf{W} \\ \alpha \mathbf{I}} = \pmatrix{\mathbf{U} & \mathbf{0} \\ \mathbf{0} & \mathbf{V} }\pmatrix{\mathbf{G}_s & \mathbf{G}_{\alpha} \\ \mathbf{G}_{\alpha} & -\mathbf{G}_s } \pmatrix{\mathbf{D} \\ \mathbf{0}}\mathbf{V}^*$$ Where $$\mathbf{G}_s = \operatorname{diag}\left(\frac{s_0}{\sqrt{s_0^2 + \alpha^2}} , \frac{s_1}{\sqrt{s_1^2 + \alpha^2}}, \frac{s_2}{\sqrt{s_2^2 + \alpha^2}}, \dots\right)$$ And $$\mathbf{G}_{\alpha} = \operatorname{diag}\left(\frac{\alpha}{\sqrt{s_0^2 + \alpha^2}} , \frac{\alpha}{\sqrt{s_1^2 + \alpha^2}}, \frac{\alpha}{\sqrt{s_2^2 + \alpha^2}}, \dots\right)$$ And $$\mathbf{D} = \operatorname{diag}\left(\sqrt{s_0^2 + \alpha^2} , \sqrt{s_1^2 + \alpha^2}, \sqrt{s_2^2 + \alpha^2}, \dots\right)$$ Thus we have an easy parametrization of the SVD. 
Let $$\mathbf{Q} = \pmatrix{\mathbf{U} & \mathbf{0} \\ \mathbf{0} & \mathbf{V} }\pmatrix{\mathbf{G}_s & \mathbf{G}_{\alpha} \\ \mathbf{G}_{\alpha} & -\mathbf{G}_s } = \pmatrix{\mathbf{U}\mathbf{G}_s & \mathbf{U}\mathbf{G}_{\alpha} \\ \mathbf{V}\mathbf{G}_{\alpha} & -\mathbf{V}\mathbf{G}_s}$$ where $\mathbf{Q}$ can relatively easily be checked as unitary. This then gives the SVD with parameter $\alpha$ as $$\pmatrix{\mathbf{W} \\ \alpha \mathbf{I}} = \mathbf{Q}\pmatrix{\mathbf{D} \\ \mathbf{0}}\mathbf{V}^*$$ Since this just might suit your problem's needs I will end my response here. I did however post the followup question here.
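A numerical check of this construction (my own sketch, following the edit's convention that $\mathbf{W}$ is square; `alpha` stands in for $\sqrt{3\rho}$ and the random seed is arbitrary):

```python
import numpy as np

# Verify that Q is orthogonal and that Q [D; 0] V* reproduces [W; alpha*I].
rng = np.random.default_rng(0)
p = 4
W = rng.standard_normal((p, p))
alpha = 1.7  # plays the role of sqrt(3*rho)

U, s, Vt = np.linalg.svd(W)
V = Vt.T
d = np.sqrt(s**2 + alpha**2)
Gs, Ga, D = np.diag(s / d), np.diag(alpha / d), np.diag(d)

Q = np.block([[U @ Gs, U @ Ga],
              [V @ Ga, -V @ Gs]])
R = np.vstack([D, np.zeros((p, p))])

target = np.vstack([W, alpha * np.eye(p)])
assert np.allclose(Q.T @ Q, np.eye(2 * p))   # Q is orthogonal
assert np.allclose(Q @ R @ Vt, target)       # reproduces [W; alpha*I]
```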
{ "language": "en", "url": "https://math.stackexchange.com/questions/557537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Computing derivative of function between matrices Let $M_{k,n}$ be the set of all $k\times n$ matrices, $S_k$ be the set of all symmetric $k\times k$ matrices, and $I_k$ the identity $k\times k$ matrix. Let $\phi:M_{k,n}\rightarrow S_k$ be the map $\phi(A)=AA^t$. Show that $D\phi(A)$ can be identified with the map $M_{k,n}\rightarrow S_k$ with $B\rightarrow BA^t+AB^t$. I don't really understand how to compute the map $D\phi(A)$. Usually when there is a map $f:\mathbb{R}^s\rightarrow\mathbb{R}^t$, I compute the map $Df(x)$ by computing the partial derivatives $\partial f_i/\partial x_j$ for $i=1,\ldots,t$ and $j=1,\ldots,s$. But here we have a map from $M_{k,n}$ to $S_k$. How can we show that $D\phi(A)\cdot B=BA^t+AB^t$?
The derivative at $A$ is a linear map $D\phi(A)$ such that $$ \frac{\|\phi(A+H)-\phi(A)-D\phi(A)H\|}{\|H\|}\to0\ \ \mbox{ as } H\to0. $$ (the spirit of this is that $\phi(A+H)-\phi(A)\sim D\phi(A)H$, where one thinks of $H$ as the variable). In our case, we have $$ \phi(A+H)-\phi(A)=(A+H)(A+H)^T-AA^T=AH^T+HA^T+HH^T. $$ So $D\phi(A)H=AH^T+HA^T$ as the term $HH^T$ satisfies $\|HH^T\|=\|H\|^2$.
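Since the remainder in the expansion is exactly $HH^T$, the identity $\phi(A+H)-\phi(A) = AH^T + HA^T + HH^T$ can be checked on random matrices (an illustration; the shapes and seed are my choice):

```python
import numpy as np

# phi(A+H) - phi(A) - (A H^T + H A^T) should equal H H^T exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
H = rng.standard_normal((3, 5))

phi = lambda M: M @ M.T
remainder = phi(A + H) - phi(A) - (A @ H.T + H @ A.T)
assert np.allclose(remainder, H @ H.T)
```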
{ "language": "en", "url": "https://math.stackexchange.com/questions/557623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
In a triangle $\angle A = 2\angle B$ iff $a^2 = b(b+c)$ Prove that in a triangle $ABC$, $\angle A = 2\angle B$, if and only if: $$a^2 = b(b+c)$$ where $a, b, c$ are the sides opposite to $A, B, C$ respectively. I attacked the problem using the Law of Sines, and tried to prove that if $\angle A$ was indeed equal to $2\angle B$ then the above equation would hold true. Then we can prove the converse of this to complete the proof. From the Law of Sines, $$a = 2R\sin A = 2R\sin (2B) = 4R\sin B\cos B$$ $$b = 2R\sin B$$ $$c = 2R\sin C = 2R\sin(180 - 3B) = 2R\sin(3B) = 2R(\sin B\cos(2B) + \sin(2B)\cos B)$$ $$=2R(\sin B(1 - 2\sin^2 B) +2\sin B\cos^2 B) = 2R(\sin B -2\sin^3 B + 2\sin B(1 - \sin^2B))$$ $$=\boxed{2R(3\sin B - 4\sin^3 B)}$$ Now, $$\implies b(b+c) = 2R\sin B[2R\sin B + 2R(3\sin B - 4\sin^3 B)]$$ $$=4R^2\sin^2 B(1 + 3 - 4\sin^2 B)$$ $$=16R^2\sin^2 B\cos^2 B = a^2$$ Now, to prove the converse: $$c = 2R\sin C = 2R\sin (180 - (A + B)) = 2R\sin(A+B) = 2R\sin A\cos B + 2R\sin B\cos A$$ $$a^2 = b(b+c)$$ $$\implies 4R^2\sin^2 A = 2R\sin B(2R\sin B + 2R\sin A\cos B + 2R\sin B\cos A) $$ $$ = 4R^2\sin B(\sin B + \sin A\cos B + \sin B\cos A)$$ I have no idea how to proceed from here. I tried replacing $\sin A$ with $\sqrt{1 - \cos^2 B}$, but that doesn't yield any useful results.
$$a^2-b^2=bc\implies \sin^2A-\sin^2B=\sin B\sin C\text{ as }R\ne0$$ Now, $\displaystyle\sin^2A-\sin^2B=\sin(A+B)\sin(A-B)=\sin(\pi-C)\sin(A-B)=\sin C\sin(A-B)\ \ \ \ (1)$ $$\implies \sin B\sin C=\sin C\sin(A-B)$$ $$\implies \sin B=\sin(A-B)\text{ as }\sin C\ne0$$ $$\implies B=n\pi+(-1)^n(A-B)\text{ where }n\text{ is any integer} $$ If $n$ is even, $n=2m$(say) $\implies B=2m\pi+A-B\iff A=2B-2m\pi=2B$ as $0<A,B<\pi$ If $n$ is odd, $n=2m+1$(say) $\implies B=(2m+1)\pi-(A-B)\iff A=(2m+1)\pi$ which is impossible as $0<A<\pi$ Conversely, if $A=2B$ $\displaystyle\implies a^2-b^2=4R^2(\sin^2A-\sin^2B)=4R^2\sin C\sin(A-B)$ (using $(1)$) $\displaystyle=4R^2\sin C\sin(2B-B)=2R\sin B\cdot 2R\sin C=\cdots$
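A quick numerical illustration of the forward direction (not a proof): with $A = 2B$ and $C = \pi - 3B$, take $2R = 1$ so the sides are just the sines, and the relation $a^2 = b(b+c)$ holds to machine precision.

```python
import math

# With A = 2B and C = pi - 3B, sides proportional to the sines
# satisfy a^2 = b*(b + c).
for B in [0.2, 0.5, 0.9]:                 # need 0 < B < pi/3 so that C > 0
    A, C = 2*B, math.pi - 3*B
    a, b, c = math.sin(A), math.sin(B), math.sin(C)   # take 2R = 1
    assert abs(a**2 - b*(b + c)) < 1e-12
```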
{ "language": "en", "url": "https://math.stackexchange.com/questions/557704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Polynomials: irreducibility $\iff$ no zeros in F. Given is the polynomial $f \in F[x]$, with $\deg(f)=3$. I have to prove that $f$ is irreducible iff $f$ has no zeros in $F$. "$\Rightarrow$": let's prove the contrapositive: "if $f$ has zeros in $F$, then $f$ is not irreducible." If $f$ has zeros in $F$, it means there's at least one zero and we can represent $f$ as a product of two polynomials: $f = (ax + b) \cdot q(x)$, where $\deg(q)=2$. But for a polynomial to be irreducible, the only way to factor it into two polynomials is in a way that one of them is of degree $0$, i.e. it's not irreducible. There are gaps in the proof, which follow from some blind spots in my understanding of the material. Please help. QUESTIONS: 1) If a polynomial of degree 3 has a zero point, can I say we can represent it in one of the following ways? $f=(ax+b)q(x)\\ f = (ax+b)(cx+d)g(x)\\ f = (ax+b)(cx+d)(gx+h)$ What about polynomials of type $ax^3 + b$? I can also say that the zero point is $\sqrt[3]{-\frac ba}$. Which one should I use? 2) I saw people writing $(x - a)$ when they factor a polynomial or talk about zero points. In which cases should I use the $(x-a)$ and $(ax+b)$ notations? Should I have used the $(x-a_1)$ notation in 1)? 3) In my proof I do nothing with the fact that $\deg(f) = 3$. I guess this fact is given for some reason. Could you tell me please, what the reason is? Sure, some people will find these questions stupid and the answers straightforward, but I need your help to put every piece of information about polynomials in the right places in my mind.
If $f(x)$ is a polynomial and $g(x)$ is any nonzero polynomial, you can always write $f(x)=q(x)g(x)+r(x)$ with polynomials $q(x)$, $r(x)$, $\deg r<\deg g$ (polynomial division with remainder). Especially, if you let $g(x)=x-a$ be a linear polynomial,you obtain $f(x)=q(x)\cdot(x-a)+r$ where $r$ is constant. If $a$ happens to be a root of $f$, observe that from $f(a)=q(a)\cdot 0+r$ you obtain $r=0$, i.e. $f(x)=q(x)(x-a)$. So if $\deg f>1$ and $f(a)=0$ for some $a\in F$, $f$ cannot be irreducible. The property that $\deg f=3$ is needed for the other direction of the proof (For example $x^4-5x^2+6$ is reducible in $\mathbb Q[x]$ but has no root in $\mathbb Q$). Assume $f$ is reducible, say $f(x)=g(x)h(x)$ for some nonconstant polynomials $g,h$. What can you say about the degrees of $g$ and $h$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/557775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Integral converges/diverges Question: $$p \in \Bbb R,\qquad \int _0 ^\infty \frac {\sin x}{x^p}dx$$ For what values of $p$ does this integral converge/diverge? Thoughts We've tried using the Dirichlet criterion and bounding it from below, after splitting it into $[0,1]$ and $[1,\infty)$. There seems to be some kind of trick we are probably missing.
For $p = 1$, it is convergent. You can show it by using integration by parts: $$ \int_{1}^{b}\frac{\sin x}{x}\,dx = -\frac{\cos b}{b} + \cos 1 - \int_{1}^{b}\frac{\cos x}{x^2}\,dx. $$ The last integral converges absolutely as $b \to \infty$ by comparison with $\int_1^\infty \frac{dx}{x^2}$, so the limit exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/557889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What am I doing wrong? Integration and limits I need some help identifying what I'm doing wrong here. What is the limit of $y(x)$ when $x→∞$ if $y$ is given by: $$y(x) = 10 + \int_0^x \frac{22(y(t))^2}{1 + t^2}\,dt$$ What I've done: 1) Differentiating both sides (and using the Fundamental Theorem of Calculus): $$\frac{dy}{dx} = 0 + \frac{22(y(x))^2}{1 + x^2}$$ 2) $$\frac{-1}{22y} = \arctan x$$ And after moving around and stuff I end up with the answer: $\quad\dfrac{-1}{11 \pi}.$ What's wrong?
To add to my comment above, the answer should be $\frac{-1}{22y} + \frac{1}{22\cdot 10} = \arctan x$ (you dropped the constant of integration fixed by the initial condition $y(0)=10$). This rearranges to $y(x) = \frac{-1}{22}\left(\arctan x - \frac{1}{220}\right)^{-1}$. On a rigorous level, we know that this holds since $y(x)\geq 10\; \forall\; x$. Now $\lim_{x\to\infty} y(x) = \frac{-1}{22}\left(\frac{\pi}{2} - \frac{1}{220}\right)^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/557994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is my reasoning correct here regarding uniform convergence? Let $f_n(x) = x^n$. Is this sequence of functions uniformly convergent on the closed interval $[0, 1]$? My reasoning Consider $0 \le x < 1$. $f_n(x)$ converges pointwise to the zero function on this interval. $f_n(x)$ is uniformly convergent if there exists some $N$ such that for $n \ge N$ $$|f_n(x) - f(x)| < \epsilon$$ for all $x$, $\epsilon > 0$. In particular, $|f_N(x) - f(x)| < \epsilon$. Choose $\epsilon = \frac{1}{4}$ So is $|f_N(x) - f(x)| < \epsilon$ for every $x$? I.e. is $|f_N(x)| = |x^N| = x^N < \frac{1}{4}$ for every $x$? Well I claim that no matter what $N$ we take, if we take $$x \ge 4^{\frac{-1}{N}}$$ Then $x^N \nless \frac{1}{4}$ so the convergence isn't uniform. Is that reasoning correct?
You're certainly very close to a full (and correct!) solution. Let's first straighten a couple things out: 1) The $N$ you choose can depend on $\epsilon$. That is, uniform convergence of $f_n \to f$ means: given an $\epsilon >0$ there exists an $N \in \mathbb{N}$ such that for all $x$ in the domain of $f$ it holds that $|f_n(x) - f(x)| < \epsilon$. 2) Let's take your value of $\epsilon = 1/4$. Then, what you can say is that there exists an $N$ (depending on the value $1/4$) such that for every $n \geq N$ and any $x \in [0,1]$, $|f_n(x) -f(x)|<\epsilon$ (if it does converge uniformly). However, as you noted if $x \in (4^{-1/n},1)$ then $|f_n(x) - 0| \geq (4^{-1/n})^n = 1/4 = \epsilon$. This proves that $f_n$ does not converge uniformly to the function $f(x)$ where $f(x) \equiv 0$ on $[0,1)$. 3) If $f_n \to f$ uniformly and $f_n \to g$ pointwise, it must be that $f = g$. As you said, $f_n \to 0$ pointwise on $[0,1)$ so the only candidate function $f(x)$ is the function $f(x) \equiv 0$ on $[0,1)$. Now your proof is done because you have found a contradiction! There is another way to prove this as well. Here's how: Note that each $f_n$ is continuous on $[0,1]$. Therefore, if $f_n \to f$ uniformly, it must be that $f$ is continuous on $[0,1]$. However, pointwise $f_n$ converges to the function $$f(x) = \begin{cases} 0 & x \in [0,1) \\ 1 & x=1 \end{cases}$$ which is not continuous. Since uniform convergence implies pointwise convergence and $f_n \to f$ pointwise with $f$ not continuous, it must be that $f_n$ does not converge uniformly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/558076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
estimation of a parameter I am confused at how to approach the following question, i.e. what probability formula I am supposed to use. The question is: $x_i = \alpha + \omega_i, $ for $i = 1, \ldots, n,$ where $\alpha$ is a non-zero constant, but unknown, parameter to be estimated, and the $\omega_i$ are uncorrelated, zero-mean Gaussian random variables with known variances $\sigma_i^2$. Note that $\sigma_i^2$ and $\sigma_j^2$, for $i \neq j$, may be distinct. We wish to estimate $\alpha$ from a weighted sum of the $x_i$, i.e. $$\hat{\alpha} = \sum^n_{i=1}b_ix_i$$ Determine $b_i$, $i= 1, \ldots, n$, such that $\hat{\alpha}$ is unbiased and the variance of $\hat{\alpha}$ is as small as possible. I have tried to use the unbiasedness condition and get that $\sum_{i=1}^nb_i = 1$. I don't know how to use the condition that the variance of $\hat{\alpha}$ is as small as possible.
For the unbiasedness, we have $$ E\left(\hat{\alpha}\right)=E\left(\sum_{i=1}^nb_ix_i\right)=E\left(\sum_{i=1}^nb_i(\alpha+\omega_i)\right)=\alpha\sum_{i=1}^nb_i + E\left(\sum_{i=1}^nb_i\omega_i\right)=\alpha\sum_{i=1}^nb_i $$ and we get that $\sum_{i=1}^nb_i=1$ as you say. Now, what follows is to simply make this homoscedastic so that we can use the Gauss-Markov theorem. Divide through by $\sigma_i$: $$ \frac{x_i}{\sigma_i}=\frac{\alpha}{\sigma_i}+\frac{\omega_i}{\sigma_i}\Rightarrow x_i^*=\alpha\frac{1}{\sigma_i}+\omega_i^* $$ where $\omega_i^*\sim N(0, 1)$ (stars indicate variance adjusted). This satisfies the usual OLS conditions, so by the Gauss-Markov theorem OLS is efficient and unbiased. The estimator then is: $$ \hat{\alpha}=\arg\min_{a}\sum_{i=1}^n(x_i^*-a\frac{1}{\sigma_i})^2\Rightarrow-2\sum_{i=1}^n\frac{(x_i^*-a\frac{1}{\sigma_i})}{\sigma_i}=0\\ \sum_{i=1}^n\frac{x_i}{\sigma_i^2}=\sum_{i=1}^n\frac{a}{\sigma^2_i}\\ \sum_{i=1}^n\frac{x_i}{\sigma^2_i}=a\sum_{i=1}^n\frac{1}{\sigma^2_i}\\ \frac{\sum_{i=1}^n\frac{x_i}{\sigma^2_i}}{\sum_{i=1}^n\frac{1}{\sigma^2_i}}=a $$ so the weights are $$ b_i=\frac{\frac{1}{\sigma^2_i}}{\sum_{i=1}^n\frac{1}{\sigma^2_i}}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/558151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find the sum of the series For any integer $n$ define $k(n) = \frac{n^7}{7} + \frac{n^3}{3} + \frac{11n}{21} + 1$ and $$f(n) = \begin{cases} 0 & \text{if } k(n) \text{ is an integer} \\ \frac{1}{n^2} & \text{if } k(n) \text{ is not an integer.} \end{cases}$$ Find $\sum_{n = - \infty}^{\infty} f(n)$. I do not know how to solve such problems about series, so I could not try anything. Thank you for your help.
HINT: Using Fermat's Little theorem, $$n^p-n\equiv0\pmod p$$ where $p$ is any prime and $n$ is any integer $\displaystyle\implies n^7-n\equiv0\pmod 7\implies \frac{n^7-n}7$ is an integer Show that $k(n)$ is integer for all integer $n$
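Following the hint: $k(n) = \frac{3n^7 + 7n^3 + 11n}{21} + 1$, and Fermat's little theorem modulo $3$ and modulo $7$ shows the numerator is always divisible by $21$, so $f(n) = 0$ for every $n$ and the series sums to $0$. A brute-force check over a finite range (illustrative only, not the proof):

```python
# k(n) = (3n^7 + 7n^3 + 11n)/21 + 1 is an integer for every integer n.
for n in range(-1000, 1001):
    assert (3*n**7 + 7*n**3 + 11*n) % 21 == 0
```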
{ "language": "en", "url": "https://math.stackexchange.com/questions/560219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\sqrt{x}$ is continuous on its domain $[0, \infty).$ Prove that the function $\sqrt{x}$ is continuous on its domain $[0,\infty)$. Proof. Since $\sqrt{0} = 0,$ we consider the function $\sqrt{x}$ on $[a,\infty)$ where $a$ is a real number and $a \neq 0.$ Let $\delta=2\sqrt{a}\epsilon.$ Then, for all $x$ in the domain with $\left | x-x_0\right | < \delta$ we have $\left| \sqrt{x}-\sqrt{x_0}\right| = \left| \frac{x-x_0}{ \sqrt{x}+\sqrt{x_0}} \right| < \left|\frac{\delta}{2\sqrt{a}}\right|=\epsilon.$ Can I do this?
Prove that $\sqrt{x}$ is continuous on its domain $[0,\infty)$ I have used the following definition and theorem: Definition 1: A sequence of real numbers $\{x_n\}$ is said to converge to a real number $a \in \mathbb{R}$ if and only if for every $\epsilon>0$ there is an $N \in \mathbb{N}$ such that $n \geq N$ implies $|x_n-a|<\epsilon$. Theorem 1: Suppose that $E$ is a nonempty subset of $\mathbb{R}$, that $a \in E$, and that $f: E\rightarrow \mathbb{R}$. Then the following statements are equivalent: * $f$ is continuous at $a \in E$. * If $x_n$ converges to $a$ and $x_n \in E$, then $f(x_n) \rightarrow f(a)$ as $n \rightarrow \infty$. Proof If $x_n \rightarrow a$ implies that $f(x_n) \rightarrow f(a)$ as $n\rightarrow \infty$, then the function $f$ must be continuous. (This is just another way of stating the $\epsilon$-$\delta$ statement.) Suppose that $x_n \rightarrow x$ as $n \rightarrow \infty$ ($x$ plays the role of $a$). We need to prove that if $x_n \rightarrow x$ then $\sqrt{x_n} \rightarrow \sqrt{x}$ as $n \rightarrow \infty$. If this is true then $\sqrt{x}$ is continuous. Case 1: $x=0$. (We cannot divide by zero, so we have to split up the proof, which will become clear in Case 2.) Let $\epsilon > 0$ and choose $N$ such that $n \geq N$ implies $|x_n-0|=x_n<\epsilon^2$, hence $\sqrt{x_n}<\epsilon$ for $n \geq N$. So $\sqrt{x_n} \rightarrow 0=\sqrt{0}$ as $n \rightarrow \infty$. (Remember that $\epsilon$ can be made arbitrarily small; given a large enough $N$, $\sqrt{x_n}$ will be smaller still, so it approaches $0$.) Case 2: $x > 0$. $$|f(x_n)-f(x)|=|\sqrt{x_n}-\sqrt{x}|=\left|(\sqrt{x_n}-\sqrt{x})\cdot \frac{\sqrt{x_n}+\sqrt{x}}{\sqrt{x_n}+\sqrt{x}}\right|=\frac{|x_n-x|}{\sqrt{x_n}+\sqrt{x}}$$ Since $\sqrt{x_n}\geq 0$ we can write $|\sqrt{x_n}-\sqrt{x}| \leq \frac{|x_n-x|}{\sqrt{x}}$. As $n \rightarrow \infty$, $x_n \rightarrow x$, so the right-hand side converges to $0$.
Since $0 \leq |\sqrt{x_n}-\sqrt{x}| \leq \frac{|x_n-x|}{\sqrt{x}}$, it follows from the Squeeze Theorem that $\sqrt{x_n} \rightarrow \sqrt{x}$ as $n \rightarrow \infty$. And therefore $\sqrt{x}$ is continuous on the domain $[0,\infty)$.
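As a numerical illustration of the Case 2 estimate (not part of the proof), one can check the bound $|\sqrt{x_n}-\sqrt{x}| \le |x_n-x|/\sqrt{x}$ along a sample sequence converging to $x=2$:

```python
import math

# Illustrate the Case-2 bound along the sample sequence x_n = 2 + (-1)^n / n.
x = 2.0
gaps = []
for n in range(1, 10_001):
    x_n = x + (-1) ** n / n
    gap = abs(math.sqrt(x_n) - math.sqrt(x))
    # the Case-2 estimate, with a tiny cushion for floating-point rounding
    assert gap <= abs(x_n - x) / math.sqrt(x) + 1e-15
    gaps.append(gap)
```

The gaps shrink to zero as $n$ grows, exactly as the squeeze argument predicts.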
{ "language": "en", "url": "https://math.stackexchange.com/questions/560307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Defining irreducible polynomials recursively: how far can we go? Fix $n\in\mathbb N$ and a starting polynomial $p_n=a_0+a_1x+\dots+a_nx^n$ with $a_k\in\mathbb Z\ \forall k$ and $a_n\ne0$. Define $p_{n+1},p_{n+2},\dots$ recursively by $p_r = p_{r-1}+a_rx^r$ such that $a_r\in \mathbb N$ is the smallest such that $p_r$ is irreducible over $\mathbb Q$. It should not be too hard to prove (but how?) that there will always be an $r_0$ such that $a_r=1\ \forall r>r_0$. Let $r_0$ be smallest possible. E.g. for $n=0$ and $p_0\equiv 1$, we have to go as far as $r_0=11$, getting before that $(a_0,\dots,a_{11})=(1,1,1,2,2,3,1,2,1,3,1,2)$. Questions (apart from proving the existence of $r_0$): * *Is it possible to construct, for a certain $n$, a polynomial $p_n$ such that $a_{n+1}$ is bigger than $3$ or even arbitrarily large? (From the above example, for $n=4$ and $p_n=1+x+x^2+2x^3+2x^4$, we get $a_5=3$, likewise for $n=8$ and $p_n=1+x+x^2+2x^3+2x^4+3x^5+x^6+2x^7+x^8$, we get $a_9=3$.) * *Is it possible to construct, for a certain $n$, a polynomial $p_n$ such that $r_0-n$ is bigger than $11$? If so, how big can $r_0-n$ be?
The answer to your first question is YES (and a similar method can probably answer positively your second question as well). One can use the following lemma : Lemma Let $n,g$ be positive integers. Then there is a polynomial $Q_{n,g}\in {\mathbb Z}[X]$ of degree $\leq n$ such that $Q_{n,g}(i)=i^g$ for any integer $i$ between $1$ and $n+1$. Answer to question 1 using the lemma. Take $p_n=-Q_{n,n+2}$, so that $p_n(x)=-x^{n+2}$ for $1\leq x \leq n+1$. Then for $a\leq n+1$, the polynomial $ax^{n+1}+p_n(x)$ is zero when $x=a$, so it is not irreducible. We deduce $a_{n+1} \geq n+2$ for this $p_n$. Proof of lemma By Lagrange interpolation, we know that there is a unique polynomial $Q_{n,g}\in {\mathbb Q}[X]$ satisfying the conditions. What we need to show is that the coefficients of $Q_{n,g}$ are integers. If $g\leq n$, then $Q_{n,g}(x)=x^g$ and we are done. For $g>n$, the identity $$ \frac{x^g-1}{x-1}=\sum_{k=0}^{g-1} \binom{g}{k+1} (x-1)^k $$ shows that when $2 \leq x \leq n+1$, we have $$ x^g=1+(x-1)\bigg( \sum_{k=0}^{g-1} \binom{g}{k+1} Q_{n-1,k}(x-1)\bigg) $$ and so we may take $$ Q_{n,g}(x)=1+(x-1)\bigg(\sum_{k=0}^{g-1} \binom{g}{k+1} Q_{n-1,k}(x-1)\bigg) $$ and we are done by induction.
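The lemma lends itself to a quick computational check. The sketch below is my own (the `lagrange_coeffs` helper is not from the answer); it builds $Q_{n,g}$ by Lagrange interpolation in exact rational arithmetic and confirms that the coefficients come out integral:

```python
from fractions import Fraction

def lagrange_coeffs(points):
    """Coefficients (ascending powers) of the unique polynomial of degree
    < len(points) through the given (x, y) points, as exact Fractions."""
    m = len(points)
    coeffs = [Fraction(0)] * m
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]                  # running product of (x - xj), j != i
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            basis = [Fraction(0)] + basis      # multiply the running product by x ...
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]  # ... then subtract xj times the old poly
            denom *= xi - xj
        for k, b in enumerate(basis):
            coeffs[k] += yi * b / denom
    return coeffs

def Q(n, g):
    """The lemma's Q_{n,g}: interpolates (i, i^g) for i = 1, ..., n+1."""
    return lagrange_coeffs([(i, Fraction(i) ** g) for i in range(1, n + 2)])

# e.g. Q_{3,5} should have integer coefficients and match i^5 at i = 1..4
c = Q(3, 5)
assert all(f.denominator == 1 for f in c)
assert all(sum(f * i ** k for k, f in enumerate(c)) == i ** 5 for i in range(1, 5))
```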
{ "language": "en", "url": "https://math.stackexchange.com/questions/560383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Show $|1-e^{ix}|^2=2(1-\cos x)$ Show $|1-e^{ix}|^2=2(1-\cos x)$ $$|1-e^{ix}||1-e^{ix}|=1-2e^{ix}+e^{2ix}=e^{ix}(e^{-ix}-2+e^{ix})=e^{ix}(2\cos x-2)=-2e^{ix}(1-\cos x)$$ Not sure how they got rid of the $-e^{ix}$ factor. Did I expand the absolute values wrong? thank you
$$ |1-e^{ix}|^{2} = |1-\cos x - i \sin x|^{2} = (1-\cos x)^{2} + (-\sin x)^{2} = 1 - 2 \cos x + \cos^{2} x + \sin^{2} x $$ $$= 2 - 2 \cos x$$
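A quick floating-point check of the identity — note the key point the question missed: $|z|^2 = z\bar z$, not $z^2$:

```python
import cmath
import math

for x in [0.1, 1.0, 2.5, -3.0, math.pi]:
    z = 1 - cmath.exp(1j * x)
    # |z|^2 equals z * conj(z), NOT z**2 -- that was the slip in the question
    assert abs(abs(z) ** 2 - 2 * (1 - math.cos(x))) < 1e-12
    assert abs(z * z.conjugate() - 2 * (1 - math.cos(x))) < 1e-12
```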
{ "language": "en", "url": "https://math.stackexchange.com/questions/560461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Number of ways of sorting distinct elements into 4 sets This was on a test I just had. The first part says: "A person donates nine antique clocks to four different museums. Supposing all clocks are identical and he can distribute them in any way he chooses, how many ways are there of donating the clocks"? This is choosing elements with replacement, which has the well-known formula $n+r-1\choose r$ where $n=4$ (museums) and $r=9$ (clocks) in our case. The next part is what has me stumped: "Now suppose each clock is different. In how many ways can they be donated"? He can still distribute them any way he chooses, so the first museum could get 8, the second 1 and the others none. I still can't figure out how to do this. I thought one would first want to choose an arrangement of the 9 objects, of which there are 9! such arrangements. But then I'm not sure how to account for all the possible numbers of elements that each museum could get.
Every clock goes to one of four museums, so there are $4$ choices for the first clock, $4$ for the second,... for a total of $4^9$ in all by the rule of product for combinatorics.
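Both parts can be cross-checked by brute force on a downsized instance (3 clocks and 4 museums, chosen just for checking):

```python
from itertools import product
from math import comb

clocks, museums = 3, 4

# Distinct clocks: each clock independently picks a museum -> museums**clocks
distinct = sum(1 for _ in product(range(museums), repeat=clocks))
assert distinct == museums ** clocks == 64

# Identical clocks: only the counts per museum matter (stars and bars)
identical = sum(
    1
    for counts in product(range(clocks + 1), repeat=museums)
    if sum(counts) == clocks
)
assert identical == comb(clocks + museums - 1, museums - 1) == 20

# The original problem: 9 distinct clocks into 4 museums
assert 4 ** 9 == 262144
```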
{ "language": "en", "url": "https://math.stackexchange.com/questions/560565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Adding one Random Real I'm studying Kanamori's book The Higher Infinite and now I'm stuck. I want to prove that forcing with Borel sets of positive measure adds one random real. Let me state the theorem: Let $M$ be a countable transitive model for ZFC. $\mathcal{B} = \{A \in Bor(\omega^\omega): \mu(A) > 0\}$ and $p \leq q \leftrightarrow p \subset q.$ Let $G$ be $\mathcal{B}^{M}$-generic over $M$. Then there is a unique $x \in \omega^\omega$ such that for any closed Borel code $c \in M$, $$x \in A_{c}^{M[G]} \leftrightarrow A_{c}^{M} \in G$$ and $M[x] = M[G]$. The proof first asserts: "(*)For any $n \in \omega$, $$\{C \in \mathcal{B}^{M}: C\text{ is closed }\land \exists k (C \subset \{f \in \omega^\omega: f(n) = k\})\},$$ is dense in $\mathcal{B}^M$." My first question: Why? Is it obvious? I think I need at least a hint! [...]Some irrelevant comments. "Arguing in $M[G]$: there is a unique $x \in \omega^\omega$ specified by $$\{x\} = \bigcap \{A_{c}^{M[G]}: c \in M \text{ is a closed code }\land A_{c}^{M} \in G\}$$ since this is an intersection of closed sets with the finite intersection property and (*) holds" Can someone explain it to me more detailed? Thank you!
Let $A_k=\{f\in\omega^\omega: f(n)=k\}$. Fix $C\in \mathcal B^M$ and note that $\bigcup_{k\in\omega}A_k=\omega^\omega$, thus $C=C\cap (\bigcup_{k\in\omega}A_k)=\bigcup_{k\in\omega}(C\cap A_k)$. It follows that there is a $k$ so that $C\cap A_k$ has positive measure. By inner regularity of the measure, shrink $C\cap A_k$ to a closed subset $C'$ of positive measure; then $C'\leq C$ and $C'$ satisfies $(*)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/560635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Find the maximum and minimum values of $A \cos t + B \sin t$ Let $A$ and $B$ be constants. Find the maximum and minimum values of $A \cos t + B \sin t$. I differentiated the function and found the solution to it as follows: $f'(t)= B \cos t - A \sin t$ $B \cos t - A \sin t = 0 $ $t = \cot^{-1}(\frac{A}{B})+\pi n$ However, I got stuck here on how to formulate the minimum and maximum points. Any explanation would be appreciated.
$A\cos t+ B \sin t = \sqrt{A^2+B^2} ( \frac{A}{\sqrt{A^2+B^2}} \cos t + \frac{B}{\sqrt{A^2+B^2}} \sin t)$. Choose $\theta$ such that $e^{i \theta} = \frac{A}{\sqrt{A^2+B^2}} + i\frac{B}{\sqrt{A^2+B^2}} $. Then $A\cos t+ B \sin t = \sqrt{A^2+B^2} ( \cos \theta \cos t + \sin \theta \sin t) = \sqrt{A^2+B^2} \cos(\theta-t)$. It follows that the extreme values are $\pm \sqrt{A^2+B^2}$.
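A dense-sampling check of the closed form $\pm\sqrt{A^2+B^2}$ (the sample values of $A,B$ are arbitrary):

```python
import math

for A, B in [(3.0, 4.0), (-2.0, 5.0), (1.0, 1.0)]:
    # sample A cos t + B sin t over one full period
    samples = [A * math.cos(t) + B * math.sin(t)
               for t in (k * 2 * math.pi / 100_000 for k in range(100_000))]
    r = math.hypot(A, B)          # sqrt(A^2 + B^2)
    assert abs(max(samples) - r) < 1e-6
    assert abs(min(samples) + r) < 1e-6
```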
{ "language": "en", "url": "https://math.stackexchange.com/questions/560711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Holomorphic function having finitely many zeros in the open unit disc Suppose $f$ is continuous on the closed unit disc $\overline{\mathbb{D}}$ and is holomorphic on the open unit disc $\mathbb{D}$. Must $f$ have finitely many zeros in $\mathbb{D}$? I know that this is true if $f$ is holomorphic in $\overline{\mathbb{D}}$ (by compactness of the closed unit disc), but I'm not sure of what happens when I just consider $\mathbb{D}$.
No: $f$ can have infinitely many zeros in $\mathbb{D}$. Any accumulation point of the zeros must be on the boundary, of course. You can show in this case that the power series centered at $0$ has radius of convergence $1$. For a specific example, I think the following works: For $|z| \leq 1$, $$f(z) = (z-1)\prod_{n=1}^\infty\frac{z-1+1/n^2}{1-(1-1/n^2)z}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/560784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to solve this mathematically This is a question given in my computer science class. We are given a global variable $5$. Then we are to use keyboard event handlers to do the following: On event keydown double the variable and on event keyup subtract $3$ from this variable. The question is after $12$ presses of any key on the keyboard ( a keydown and keyup event is the same as one press), what will be the value of this variable? The value after $ 4$ presses is $35$. I can implement this and get the answer in less than 10 lines of code, but this is one of those questions I feel there should be a mathematical formula one can derive and apply to the question and arrive at an answer. So is it possible to solve this one with math? I already tried: $\lim\limits_{x \to 12} (2v - 3)x$ where $v$ is the initial variable and I am trying to represent an equation that will evaluate to the new value of the variable as $x$ gets to $12$. However since I am not very mathematically minded, that was as far as I got. Thanks
Limits are way more powerful machinery than necessary. Assuming $x_0$ is really $5$, we're looking at the recurrence \begin{gather*} x_0=5 \\[0.3ex] x_{n+1}=2x_n-3. \end{gather*} The steady state is $x=2x-3\implies x=3$, suggesting we write $$x_{n+1}-3=2(x_n-3),$$ which gives $x_n-3=(x_0-3)\, 2^n$, or $$x_n=3+(5-3)\cdot 2^n=3+2^{n+1}.$$ In particular, after $12$ presses the value is $x_{12}=3+2^{13}=8195$.
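A direct simulation of the presses, checked against the closed form $x_n = 3 + 2^{n+1}$:

```python
v = 5
values = [v]
for press in range(12):
    v = 2 * v - 3            # keydown doubles, keyup subtracts 3
    values.append(v)

# closed form x_n = 3 + 2**(n+1) holds for every step
assert all(values[n] == 3 + 2 ** (n + 1) for n in range(13))
assert values[4] == 35       # matches the value quoted after 4 presses
assert values[12] == 8195    # the value after 12 presses
```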
{ "language": "en", "url": "https://math.stackexchange.com/questions/560828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to divide a circle with two perpendicular chords to minimize (and maximize) the following expression Consider a circle with two perpendicular chords, dividing the circle into four regions $X, Y, Z, W$(labeled): What is the maximum and minimum possible value of $$\frac{A(X) + A(Z)}{A(W) + A(Y)}$$ where $A(I)$ denotes the area of $I$? I know (instinctively) that the value will be maximum when the two chords will be the diameters of the circle, in that case, the area of the four regions will be equal and the value of the expression will be $1$. I don't know how to rigorously prove this, however. And I have absolutely no idea about minimizing the expression.
When the considered quotient is not constant it has a maximum value $\mu>1$, and the minimum value is then ${1\over \mu}$. I claim that $$\mu={\pi+2\over\pi-2}\doteq4.504\ ,\tag{1}$$ as conjectured by MvG. Proof. I'm referring to the above figure. Since the sum of the four areas is constant the quotient in question is maximal when $$S(\alpha,p):={\rm area}(X)+{\rm area}(Z)\qquad\left(-{\pi\over2}\leq\alpha\leq {\pi\over2}, \quad 0\leq p\leq1\right)$$ is maximal. Turning the turnstile around the point $(p,0)$ and looking at the "infinitesimal sectors" so arising we see that $$\eqalign{{\partial S\over\partial\alpha}&={1\over2}\bigl((r_1^2-r_4^2)+(r_3^2-r_2^2)\bigr)\cr &={1\over2}\bigl((r_1+r_3)^2-(r_4+r_2)^2\bigr)+(r_2r_4-r_1r_3)\cr &=2\bigl((1-p^2\sin^2\alpha)-(1-p^2\cos^2\alpha)\bigr)+0\cr &=2p^2(\cos^2\alpha-\sin^2\alpha)\ , \cr}$$ where $r_2r_4=r_1r_3=1-p^2$ by the power of the point $(p,0)$. From this we conclude the following: Starting at $\alpha=-{\pi\over2}$ we have $S={\pi\over2}$, then $S$ decreases for $-{\pi\over2}<\alpha<-{\pi\over4}$, from then on increases until $\alpha={\pi\over4}$, and finally decreases again to $S={\pi\over2}$ at $\alpha={\pi\over2}$. It follows that for the given $p\geq0$ the area $S$ is maximal at $\alpha={\pi\over4}$. We now fix $\alpha={\pi\over4}$ and move the center $(p,0)$ of the turnstile from $(0,0)$ to $(1,0)$. Instead of "infinitesimal sectors" we now have "infinitesimal trapezoids" and obtain $${\partial S\over\partial p}={1\over\sqrt{2}}\bigl((r_2+r_3)-(r_1+r_4)\bigr)>0\qquad(0<p\leq1)\ .$$ It follows that $S$ is maximal when $\alpha={\pi\over4}$ and $p=1$. In this case one has $S=1+{\pi\over2}$, which leads to the $\mu$ given in $(1)$.
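As a sanity check on $(1)$, one can estimate the areas for the extremal configuration ($p=1$, $\alpha=\pi/4$, i.e. two perpendicular chords through the boundary point $(1,0)$) on a grid and compare the ratio with $(\pi+2)/(\pi-2)$. This is my own numerical sketch, not part of the proof; which pair of sign-regions corresponds to $X\cup Z$ is immaterial, since we take the larger pair over the smaller:

```python
import math

h = 0.002                          # grid spacing (smaller -> more accurate, slower)
steps = int(round(2 / h))
pair_a = pair_b = 0                # cell counts for the two pairs of opposite regions
for i in range(steps):
    x = -1 + (i + 0.5) * h
    for j in range(steps):
        y = -1 + (j + 0.5) * h
        if x * x + y * y >= 1:
            continue               # outside the unit disk
        s1 = y - (x - 1)           # side of the chord through (1,0) with slope +1
        s2 = y + (x - 1)           # side of the chord through (1,0) with slope -1
        if (s1 > 0) == (s2 > 0):
            pair_a += 1            # one pair of opposite regions
        else:
            pair_b += 1            # the other pair

ratio = max(pair_a, pair_b) / min(pair_a, pair_b)
assert abs(ratio - (math.pi + 2) / (math.pi - 2)) < 0.08
```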
{ "language": "en", "url": "https://math.stackexchange.com/questions/560929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Complex numbers problem I have to evaluate $\left(\frac{\sqrt3+5i}{4+2\sqrt3i}\right)^n$ where $n=80996$.
$$\frac{\sqrt3+5i}{4+2\sqrt3i}=\frac{\sqrt3+5i}{4+2\sqrt3i}\cdot\frac{4-2\sqrt3i}{4-2\sqrt3i}=\frac{14\sqrt3+14i}{28}=\frac{\sqrt3+i}2=\cos\frac{\pi}6+i\sin\frac{\pi}6;$$$$\left(\frac{\sqrt3+5i}{4+2\sqrt3i}\right)^n=\left(\cos\frac{\pi}6+i\sin\frac{\pi}6\right)^n=\cos\frac{n\pi}6+i\sin\frac{n\pi}6.$$ Since $80996\equiv 8\pmod{12}$, for $n=80996$ this equals $\cos\frac{8\pi}6+i\sin\frac{8\pi}6=\cos\frac{4\pi}3+i\sin\frac{4\pi}3=-\frac12-\frac{\sqrt3}2\,i.$
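Since $\left(\cos\frac\pi6+i\sin\frac\pi6\right)^{12}=\cos 2\pi+i\sin 2\pi=1$, only $n \bmod 12$ matters. A floating-point check of the simplification and of the power for the question's $n$ (my own verification sketch):

```python
import cmath
import math

z = (math.sqrt(3) + 5j) / (4 + 2 * math.sqrt(3) * 1j)
# the simplification: z equals cos(pi/6) + i sin(pi/6)
assert abs(z - cmath.exp(1j * math.pi / 6)) < 1e-12

n = 80996
assert n % 12 == 8                          # so z**n = exp(i * 8*pi/6) = exp(i * 4*pi/3)
expected = cmath.exp(1j * 4 * math.pi / 3)  # = -1/2 - (sqrt(3)/2) i
assert abs(expected - (-0.5 - math.sqrt(3) / 2 * 1j)) < 1e-12
assert abs(z ** n - expected) < 1e-6        # direct power agrees, up to rounding
```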
{ "language": "en", "url": "https://math.stackexchange.com/questions/561013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
If $3c$ is a perfect square then why must $c$ be of form $3x^2$ for some $x$? ( $x,c$ integers) For example $3c=36$, then $36=3*4^2$ so $x=4$.
Primes (such as $3$) have the property that if they divide a product, they divide (at least) one of the factors: If $p\mid ab$ then $p\mid a$ or $p\mid b$. Here we take $p=3$ and from the assumption $3c=m^2=m\cdot m$ conclude that $3$ divides one of the two factors on the right, that is $3\mid m$ at any rate. So $m=3x$ for some integer $x$, hence $3c=m^2$ turns into $3c=9x^2$, i.e. $c=3x^2$.
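A brute-force check of the claim over the first thousand squares (standard library only):

```python
import math

for m in range(1, 1000):
    square = m * m
    if square % 3 == 0:              # 3c is a perfect square, c = square / 3
        c = square // 3
        # the claim: c has the form 3 * x**2
        assert c % 3 == 0
        x = math.isqrt(c // 3)
        assert c == 3 * x * x
```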
{ "language": "en", "url": "https://math.stackexchange.com/questions/561116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simple Algebra Equation I have a simple part of a question to solve. The problem is my answer is different to the solution in my textbook. The equation is: $$\frac{5v}{6} = \frac{(\frac{1}{2}a+b+\frac{1}{2} c)v}{a+b+c}$$ I am supposed to get $$\frac{2}{3}(a+b+c) = b$$ But I simply get: $$b=2a +2c$$ I get my answer by cross multiplying. I then use my answer to get $\frac{b}{a+b+c}$ as some fraction. I have not worked onto this stage as I am unsure about the above work. What am I doing wrong here?
$b=2a+2c$ Adding $2b$ on both sides gives $b + 2b =2a+2b+2c$ Or better $3b =2a+2b+2c$ And if you throw the $3$ to the other side you get: $b =\frac{2}{3}(a+b+c)$
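The two forms are the same relation; exact rational arithmetic over a few arbitrary sample values confirms it:

```python
from fractions import Fraction as F

for a, c in [(F(1), F(2)), (F(3), F(-1)), (F(7, 2), F(5, 3))]:
    b = 2 * a + 2 * c                     # the asker's form
    assert b == F(2, 3) * (a + b + c)     # the textbook's form
    # and the original equation holds with these a, b, c (the factor v cancels)
    assert F(5, 6) == (a / 2 + b + c / 2) / (a + b + c)
```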
{ "language": "en", "url": "https://math.stackexchange.com/questions/561210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding abundant numbers from 1 to 10 million using a sum my task is to implement an algorithm in C for finding abundant numbers from 1 to 10 million. Firstly, I don't really understand the mathematics. There are several ways to do it, but an efficient and fast one (for that BIG input of 10 million) might be by summing - NOT dividing, NOT multiplying, NOT EVEN using the remainder after division. Just sum. But I'm really confused WHAT to sum. Please help, I appreciate every single answer.
The following naive sieve approach takes only 30 seconds on my modest PC.

    #include <stdio.h>
    #include <stdlib.h>

    #define abLimit 10000000

    int main(void) {
        int i, j, *xp;
        xp = calloc(abLimit, sizeof *xp);
        for (i = 2; i < abLimit; i++)
            for (j = i * 2; j < abLimit; j += i)
                xp[j] += i;               /* add i to each proper multiple of i */
        for (i = 2; i < abLimit; i++)
            if (xp[i] + 1 > i)            /* the +1 accounts for the divisor 1 */
                printf("%d is abundant\n", i);
        free(xp);
        return 0;
    }

Basically, it builds a table of the sum of proper divisors of each $n$ (the divisor $1$ is added in at the comparison), so $n$ is abundant exactly when that sum exceeds $n$. It does not require any factorisation into primes, which makes it quick. I suggest you dry run it for a small value of abLimit to see what it's doing. Some pruning may reduce the run-time to under a second.
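Following the dry-run suggestion, here is the same summing idea in Python on a small limit (my own paraphrase of the sieve, including the divisor $1$ directly in the sum):

```python
limit = 100
aliquot = [0] * limit           # aliquot[n] will hold the sum of proper divisors of n
for i in range(1, limit):       # i = 1 includes the divisor 1 in the sums
    for j in range(2 * i, limit, i):
        aliquot[j] += i         # j is a proper multiple of i, so i divides j

abundant = [n for n in range(2, limit) if aliquot[n] > n]
assert abundant[:6] == [12, 18, 20, 24, 30, 36]
```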
{ "language": "en", "url": "https://math.stackexchange.com/questions/561328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does $f_n(x) = \chi_{(n, n+1)}$ converge uniformly on $\mathbb{R}$ Does $f_n(x) = \chi_{(n, n+1)}$ converge uniformly to the zero function on $\mathbb{R}$? It seems to me that it doesn't. Say $\exists$ $N$ such that for all $n \ge N$ $$|\chi_{(n, n+1)} - 0|< \epsilon$$ with $\epsilon > 0$, i.e. $|\chi_{(n, n+1)}|< \epsilon$. In particular, $|\chi_{(N, N+1)}|< \epsilon$. Well, if $N < x < N + 1$, then $f_N(x) = 1 \not< \epsilon$. So $f_n(x)$ is not uniformly convergent; its rate of convergence depends on $x$. Have I got all that correct? If so, how do I interpret the dependence on $x$? Is it that if $x$ is very large it takes longer to converge to the zero function than when $x$ is very small?
You can prove this by the definition. For $\epsilon_0=\frac12$ and any $N\in \mathbb{N}$, choose $n_0=N+1>N$ and $x_0=N+\frac32$; then $|f_{n_0}(x_0)|=|\chi_{(N+1, N+2)}(N+\tfrac32)|=1>\epsilon_0$. This means that $f_n$ does not converge uniformly to $0$ on $\mathbb{R}$.
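Numerically the same picture: each fixed $x$ eventually gives $f_n(x)=0$, yet $\sup_x |f_n(x)|=1$ for every $n$ — precisely the failure of uniform convergence. A small sketch:

```python
def f(n, x):
    """The indicator function of the open interval (n, n+1)."""
    return 1.0 if n < x < n + 1 else 0.0

# pointwise: for each fixed x, f_n(x) = 0 once n >= x
for x in [0.5, 3.7, 10.2]:
    assert all(f(n, x) == 0.0 for n in range(11, 100))

# but the sup over x never shrinks: evaluate at x = n + 1/2
assert all(f(n, n + 0.5) == 1.0 for n in range(100))
```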
{ "language": "en", "url": "https://math.stackexchange.com/questions/561430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\int_{0}^{1}{\frac{\sin{x}}{x}\mathrm dx}$ converges As title says, I need to show that the following integral converges, and I can honestly say I don't really have an idea of where to start. I tried evaluating it using integration by parts, but that only left me with an $I = I$ situation. $$\int \limits_{0}^{1}{\frac{\sin{x}}{x} \mathrm dx}$$
Notice that $\left|\frac{\sin x}{x}\right| \le 1$ for all $0 < x \le 1$ (since $|\sin x| \leq x$ for $x > 0$), so the integrand is continuous and bounded on $(0,1]$ and we have \begin{eqnarray*} \left|\int_0^1 \frac{\sin x}{x} \, \operatorname{d}\!x\right| &\le& \int_0^1 \left|\frac{\sin x}{x} \right| \operatorname{d}\!x \\ \\ &\le& \int_0^1 1 \, \operatorname{d}\!x \\ \\ &=& 1, \end{eqnarray*} so the integral converges.
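For comparison with the bound, the actual value is $\operatorname{Si}(1) \approx 0.946$. A midpoint-rule estimate using only the standard library (my own sketch):

```python
import math

def integrand(x):
    # sin(x)/x has a removable singularity at 0 with limit 1
    return math.sin(x) / x if x != 0 else 1.0

N = 100_000
h = 1.0 / N
# midpoint rule on [0, 1]
approx = h * sum(integrand((k + 0.5) * h) for k in range(N))

assert approx < 1.0                     # consistent with the bound above
assert abs(approx - 0.946083) < 1e-5    # Si(1) = 0.9460830...
```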
{ "language": "en", "url": "https://math.stackexchange.com/questions/561547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }