Darboux Theorem Proof http://en.wikipedia.org/wiki/Darboux%27s_theorem_%28analysis%29 I'm having a bit of trouble understanding the proof of Darboux's Theorem on the IVP of derivatives. Why should there be an extremum such that $g'(x) = 0$ from the fact that $g'(a)>0$ and $g'(b)<0$ ?
Suppose that $g$ attains a local maximum at $a$. Then $$\lim_{x \to a+} \frac{g(x)-g(a)}{x-a} \leq 0.$$ Analogously, if $g$ attains a local maximum at $b$, then $$\lim_{x \to b-} \frac{g(x)-g(b)}{x-b}\geq 0.$$ But these contradict $g'(a)>0$ and $g'(b)<0$, respectively. Hence the maximum, which exists since $g$ is continuous on $[a,b]$, must lie in the open interval $(a,b)$. Actually, you can visualize the setting: $g'(a)>0$ means that $g$ leaves $a$ in an increasing way, and it then reaches $b$ in a decreasing way. Therefore $g$ must attain a maximum somewhere strictly between $a$ and $b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/192426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sum of the sequence What is the sum of the following sequence $$\begin{align*} (2^1 - 1) &+ \Big((2^1 - 1) + (2^2 - 1)\Big)\\ &+ \Big((2^1 - 1) + (2^2 - 1) + (2^3 - 1) \Big)+\ldots\\ &+\Big( (2^1 - 1)+(2^2 - 1)+(2^3 - 1)+\ldots+(2^n - 1)\Big) \end{align*}$$ I tried to solve this. I reduced it to the following form $$n(2^1) + (n-1)\cdot2^2 + (n-2)\cdot2^3 +\ldots$$ but I'm not able to take it further. Can anyone help me solve it? By the way, it's not a homework problem; it came up while working on a puzzle. Thanks in advance.
Let's note that $$(2^1 - 1) + (2^2 - 1) + \cdots + (2^k - 1) = 2^{k+1} - 2-k$$ where we have used the geometric series. Thus, the desired sum is actually $$\sum_{k=1}^n{2^{k+1}-2-k}$$. As this is a finite sum, we can evaluate each of the terms separately. We get the sum is $$2\left(\frac{2^{n+1}-1}{2-1}-1\right) - 2n- \frac{n(n+1)}{2} = 2^{n+2}-4 - 2n-\frac{n(n+1)}{2} $$
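The closed form can be sanity-checked against a brute-force sum (a quick sketch, not part of the original answer):

```python
def brute_sum(n):
    # Build the sum term by term: the k-th summand is (2^1-1)+(2^2-1)+...+(2^k-1).
    return sum(sum(2**j - 1 for j in range(1, k + 1)) for k in range(1, n + 1))

def closed_form(n):
    # 2^(n+2) - 4 - 2n - n(n+1)/2; integer division is exact since n(n+1) is even
    return 2**(n + 2) - 4 - 2 * n - n * (n + 1) // 2
```

For example, `closed_form(2)` gives $1 + (1+3) = 5$, matching the brute-force value.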
{ "language": "en", "url": "https://math.stackexchange.com/questions/192520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Can you help me understand this definition for the limit of a sequence? I'm reading the textbook "Calculus - Early Transcendentals" by Jon Rogawski for my Calculus III university course. I'm trying for the life of me to understand the wording of this definition, and I wonder if it can be said in simpler terms to get the basic point across. A sequence $a_n$ converges to a limit $L$, and we write $$\lim_{n\to\infty} a_n=L$$ if, for every $\epsilon > 0$, there is a number $M$ such that $|a_n - L| < \epsilon$ for all $n > M$. If no limit exists, we say that $\{a_n\}$ diverges. It looks like a very straightforward rule, but I just can't make sense of all the variables in the definition.
Let’s call $\{a_n,a_{n+1},a_{n+2},\dots\}$ the $n$-tail of the sequence. Now suppose that I give you a target around the number $L$: I pick some positive leeway $\epsilon$ and want you to hit the interval $(L-\epsilon,L+\epsilon)$. We’ll say that the sequence hits that target if some tail of the sequence lies entirely inside the interval. For instance, if $a_n=\frac1{2^n}$, the $4$-tail of the sequence hits the target $\left(-\frac1{10},\frac1{10}\right)$ with leeway $\frac1{10}$ around $0$: the $4$-tail is $$\left\{\frac1{2^4},\frac1{2^5},\frac1{2^6},\dots\right\}=\left\{\frac1{16},\frac1{32},\frac1{64},\dots\right\}\;,$$ and all of these fractions are between $-\frac1{10}$ and $\frac1{10}$. It’s not hard to see that no matter how small a leeway $\epsilon$ I choose, some tail of that sequence hits the target $(-\epsilon,\epsilon)$: I just have to find an $n$ large enough so that $\frac1{2^n}<\epsilon$, and then the $n$-tail will hit the target. Of course, in my example the $4$-tail of the sequence also hits the target $\left(0,\frac18\right)$ with leeway $\frac1{16}$ around $\frac1{16}$. However, there are smaller targets around $\frac1{16}$ that aren’t hit by any tail of the sequence. For instance, no tail hits the target $\left(\frac1{16}-\frac1{32},\frac1{16}+\frac1{32}\right)=\left(\frac1{32},\frac3{32}\right)$: no matter how big $n$ is, $$\frac1{2^{n+6}}\le\frac1{2^6}=\frac1{64}\;,$$ so $\frac1{2^{n+6}}$ is in the $n$-tail but not in the target. When we say that $\lim\limits_{n\to\infty}a_n=L$, we’re saying that no matter how small you set the leeway $\epsilon$ around $L$, the centre of the target, some tail of the sequence hits that tiny target. Thus, $\lim\limits_{n\to\infty}\frac1{2^n}=0$, and $\lim\limits_{n\to\infty}\frac1{2^n}\ne\frac1{16}$: no matter how tiny a target centred on $0$ you set, there is a tail of the sequence that hits it, but I just showed a target around $\frac1{16}$ that isn’t hit by any tail of the sequence.
One way to sum this up: $\lim\limits_{n\to\infty}a_n=L$ means that no matter how small an open interval you choose around the number $L$, there is some tail of the sequence that lies entirely inside that interval. You may have to ignore a huge number of terms of the sequence before that tail, but there is a tail small enough to fit.
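The definition can be made concrete in code: for $a_n = 1/2^n$ and $L = 0$, here is a small sketch (my own illustration, not from the book) that produces a workable $M$ for any given leeway $\epsilon$:

```python
def find_M(eps):
    """For a_n = 1/2**n and L = 0: return an M with |a_n - L| < eps for all n > M."""
    n = 1
    while 1 / 2**n >= eps:
        n += 1
    # a_n is decreasing, so once one term is inside the target, the whole tail is
    return n - 1

M = find_M(0.1)   # the tail beyond M lies entirely inside (-0.1, 0.1)
```

With $\epsilon = \frac1{10}$ this returns $M = 3$: the $4$-tail from the example above is exactly the part of the sequence that fits inside the target.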
{ "language": "en", "url": "https://math.stackexchange.com/questions/192586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Prove with MATLAB whether a set of n points is coplanar I need to find a way to prove if a set of n points are coplanar. I found this elegant way on one of the MATLAB forums but I don't understand the proof. Can someone help me understand the proof please? " The most insightful method of solving your problem is to find the mean square orthogonal distance from the set of points to the best-fitting plane in the least squares sense. If that distance is zero, then the points are necessarily coplanar, and otherwise not. Let x, y , and z be n x 1 column vectors of the three coordinates of the point set. Subtract from each, their respective mean values to get V, and form from it the positive definite matrix A, V = [x-mean(x),y-mean(y),z-mean(z)]; A = (1/n)*V'*V; Then from [U,D] = eig(A); select the smallest eigenvalue in the diagonal matrix D. This is the mean square orthogonal distance of the points from the best fitting plane and the corresponding eigenvector of U gives the coefficients in the equation of that plane, along with the fact that it must contain the mean point (mean(x),mean(y),mean(z))." Here is the link from where I obtained this information. http://www.mathworks.com/matlabcentral/newsreader/view_thread/25094
If you subtract the centroid from every point and put the centered points as the rows of a matrix (the matrix V above), the resulting matrix has rank at most 2 exactly when the points are coplanar. In that case $\mathbf A = \frac1n\mathbf V^T\mathbf V$ will have one eigenvalue equal to, or numerically close to, 0. Consider that V*U = 0 yields the equation of the plane: U is a normal vector orthogonal to every centered point. Then consider that V'*V*U = V'*0 = 0 can be interpreted as A*U = 0*U, which by definition makes U the eigenvector associated with the eigenvalue 0 of A.
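The MATLAB recipe translates directly; here is a hedged NumPy sketch of the same idea (the function name and tolerance are my own choices):

```python
import numpy as np

def coplanarity(points, tol=1e-10):
    """points: sequence of (x, y, z). Returns (is_coplanar, normal, mean_point).

    The smallest eigenvalue of A = V^T V / n is the mean square orthogonal
    distance from the points to the best-fitting plane; the matching
    eigenvector is the plane's normal."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    V = pts - mean                    # centered points as rows, shape (n, 3)
    A = V.T @ V / len(pts)            # 3x3 positive semidefinite
    evals, evecs = np.linalg.eigh(A)  # eigh returns eigenvalues in ascending order
    return evals[0] < tol, evecs[:, 0], mean

# four points lying in the plane z = 2x + 3y + 1
flat = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6)]
ok, normal, mean = coplanarity(flat)
```

Perturbing any one point off the plane makes the smallest eigenvalue clearly nonzero, so the same test reports non-coplanarity.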
{ "language": "en", "url": "https://math.stackexchange.com/questions/192641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$p$ is polynomial, set bounded open set with at most $n$ components Assume $p$ is a non-constant polynomial of degree $n$. Prove that the set $\{z:|p(z)| \lt 1\}$ is a bounded open set with at most $n$ connected components. Give an example to show that the number of components can be less than $n$. Thanks. EDIT: Thanks, I meant connected components.
Hints:

* For boundedness: show that $|p(z)| \to \infty$ as $|z|\to \infty$.
* For openness: the preimage $p^{-1}(A)$ of an open set $A$ under a continuous function $p$ is again open. Polynomials are continuous, so just write your set as a preimage.
* For the connected components: recall the fundamental theorem of algebra. What may these components have to do with polynomial roots? How many of them can a polynomial have?

The example for fewer components is straightforward, just think of it geometrically (draw a curve).
{ "language": "en", "url": "https://math.stackexchange.com/questions/192709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to solve $x^3=-1$? How to solve $x^3=-1$? I got following: $x^3=-1$ $x=(-1)^{\frac{1}{3}}$ $x=\frac{(-1)^{\frac{1}{2}}}{(-1)^{\frac{1}{6}}}=\frac{i}{(-1)^{\frac{1}{6}}}$...
Observe that $(e^{a i})^3 = e^{3 a i}$ and $-1=e^{\pi i}$.
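Spelling the hint out: $e^{3ai} = e^{i\pi + 2k\pi i}$ forces $3a = \pi + 2k\pi$, so the three solutions are $e^{i\pi(2k+1)/3}$ for $k = 0, 1, 2$. A quick numerical check (my own sketch):

```python
import cmath

# the three cube roots of -1: e^{i*pi*(2k+1)/3}, k = 0, 1, 2
roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]
# k = 1 gives e^{i*pi} = -1, the single real root
```

The other two roots are $\frac12 \pm \frac{\sqrt3}{2}i$, the nonreal sixth roots of unity.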
{ "language": "en", "url": "https://math.stackexchange.com/questions/192742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 8, "answer_id": 2 }
How to prove that this function is continuous at zero? Assume that $g : [0, \infty) \rightarrow \mathbb R$ is continuous and $\phi :\mathbb R \rightarrow \mathbb R$ is continuous with compact support with $0\leq \phi(x) \leq 1$, $\phi(x)=1$ for $ x \in [0,1]$ and $\phi(x)=0$ for $x\geq 2$. I wish to prove that $$ \lim_{x \rightarrow 0^-} \sum_{n=1}^\infty \frac{1}{2^n} \phi(-nx) g(-nx)=g(0). $$ I try in the following way. Let $f(x)=\sum_{n=1}^\infty \frac{1}{2^n} \phi(-nx) g(-nx)$ for $x \leq 0$. For $\varepsilon>0$ there exists $n_0 \in \mathbb N$ such that $\sum_{n\geq n_0} \frac{1}{2^n} \leq \frac{\varepsilon}{2|g(0)|}$. For $x<0$ there exists $m(x) \in \mathbb N$, $m(x)> n_0$ such that $\phi(-nx)=0$ for $n>m(x)$. Then $$ |f(x)-f(0)|\leq \sum_{n=1}^\infty \frac{1}{2^n} |\phi(-nx)g(-nx)-\phi(0)g(0)|= \sum_{n=1}^{m(x)} +\sum_{n\geq m(x)} $$ (because $\phi(0)=1$, $\sum_{n=1}^\infty \frac{1}{2^n}=1$). The second term is majorized by $\frac{\varepsilon}{2}$, but I don't know what to do with the first one because $m(x)$ depends on $x$.
Let $h(x)=\phi(x)g(x)$. Then $h\colon[0,\infty)\to\mathbb R$ is continuous and bounded by some $M$ and $h(x)=0$ for $x\ge2$. Given $\epsilon>0$, find $\delta$ such that $0\le x<\delta$ implies $|h(x)-h(0)|<\frac\epsilon3$. Then for $m\in \mathbb N$ $$\sum_{n=1}^\infty \frac1{2^n} h(nx)-h(0)=\sum_{n=1}^{m} \frac1{2^n} (h(nx)-h(0))+\sum_{n=m+1}^\infty \frac1{2^n} h(nx)-\frac1{2^m}h(0)$$ If $m<\frac\delta x$, then $$\left |\sum_{n=1}^{m} \frac1{2^n} (h(nx)-h(0))\right |<\sum_{n=1}^\infty \frac 1{2^n}\frac\epsilon3=\frac\epsilon3.$$ For the middle part, $$\left|\sum_{n=m+1}^\infty \frac1{2^n} h(nx)\right|<\frac M{2^m}$$ and finally $\left|\frac1{2^m}h(0)\right|\le \frac M{2^m}$. Fix such an $m$ with $m>\log_2(\frac {3M}\epsilon)$, so that each of the last two parts is less than $\frac\epsilon3$; we then find that $$\left|\sum_{n=1}^\infty \frac1{2^n} h(nx)-h(0)\right|<\epsilon$$ for all $x$ with $0<x<\frac\delta m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/192806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
I need some help solving a Dirichlet problem using a conformal map I'm struggling here, trying to understand how to do this, and after 4 hours of reading, I still can't get around the concept and how to use it. Basically, I have this problem: $A=\{(x,y) : x\ge 0,\ 0\le y\le \pi\}$, with $U(x,0) = B$, $U(x,\pi) = C$, and $U_x(0,y) = 0$. I know that inside $A$, the Laplacian of $U$ is $0$. So I have to find $U$, and $U$ must meet those requirements. I don't have to use any form of differential equation. I'm supposed to find some sort of conformal transformation in order to make the domain a little more easy to understand, and then I should just get a result. The problem is, I think I don't know how to do that. If any of you could help me understand how to solve this one, I might get the main idea and I could try to reproduce the resolution in similar cases. Thank you very much, and I'm sorry for my English.
The domain is simple enough already. Observe that there is a function of the form $U=\alpha y+\beta$ which satisfies the given conditions.
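Spelling that out: the boundary data depend only on $y$, so try $U = \alpha y + \beta$; the conditions force $\beta = B$ and $\alpha = (C-B)/\pi$, i.e. $U(x,y) = B + (C-B)\,y/\pi$, which is linear (hence harmonic) and satisfies $U_x \equiv 0$. A quick numerical check of this guess (with arbitrary sample values of $B$ and $C$):

```python
import math

B, C = 2.0, 5.0   # arbitrary boundary values for the check

def U(x, y):
    return B + (C - B) * y / math.pi

def laplacian(x, y, h=1e-3):
    # discrete 5-point Laplacian; vanishes (up to rounding) since U is linear
    return (U(x + h, y) + U(x - h, y) + U(x, y + h) + U(x, y - h) - 4 * U(x, y)) / h**2
```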
{ "language": "en", "url": "https://math.stackexchange.com/questions/192868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute: $\sum\limits_{n=1}^{\infty}\frac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)\cdot (2n+1)}$ Compute the sum: $$\sum_{n=1}^{\infty}\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)\cdot (2n+1)}$$ At the moment, I only know that it's convergent and this is not hard to see if you look at the answers here I received for other problem with a similar series. For the further steps I need some hints if possible. Thanks!
Starting with the power series derived using the binomial theorem, $$ (1-x)^{-1/2}=1+\tfrac12x+\tfrac12\tfrac32x^2/2!+\tfrac12\tfrac32\tfrac52x^3/3!+\dots+\tfrac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots2n}x^n+\cdots $$ and integrating, we get the series for $$ \sin^{-1}(x)=\int_0^x(1-t^2)^{-1/2}\mathrm{d}t=\sum_{n=0}^\infty\tfrac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots2n}\frac{x^{2n+1}}{2n+1} $$ Setting $x=1$, we get $$ \sum_{n=1}^\infty\tfrac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots2n}\frac{1}{2n+1}=\sin^{-1}(1)-1=\frac\pi2-1 $$
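A quick numerical check of the final identity (a sketch, not part of the original answer): the partial sums should creep up toward $\pi/2 - 1 \approx 0.5708$.

```python
import math

# accumulate the running product 1*3*...*(2n-1) / (2*4*...*(2n)) iteratively
total = 0.0
ratio = 1.0
for n in range(1, 200_001):
    ratio *= (2 * n - 1) / (2 * n)
    total += ratio / (2 * n + 1)
```

The terms behave like $n^{-3/2}$, so convergence is slow but monotone from below.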
{ "language": "en", "url": "https://math.stackexchange.com/questions/192919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 2 }
about Jacobian and eigenvalues I am studying the dynamical system given by a discrete standard map $$x_{n+1} = f(x_n, y_n)$$ $$y_{n+1} = g(x_n, y_n)$$ First of all, could anyone explain the difference between a stationary point and a fixed point? As stated in some books, a point satisfying $f(x_0, y_0)=0$ and $g(x_0, y_0)=0$ is a stationary point. But other books call it a fixed point. So are they the same thing? My other question is: if I write down the Jacobian as $J$, what do the eigenvalues of $J$ really mean? Are the eigenvalues related to the angular frequency of the period-$n$ orbits (in some material, they give the frequencies based on the eigenvalues)? Thanks.
I will concatenate $x$ and $y$ and work with a single state-transition equation $$x_{k+1} = f (x_k)$$ where $f : \mathbb{R}^n \to \mathbb{R}^n$. Given a state $x$, function $f$ gives you the next state $f (x)$. It's an infinite state machine! Suppose that $f (\bar{x}) = \bar{x}$ for some isolated point $\bar{x} \in \mathbb{R}^n$. We say that $\bar{x}$ is a fixed point, because if $x_0 = \bar{x}$, then $x_1 = f (x_0) = f (\bar{x}) = \bar{x}$, and $x_2 = f (x_1) = f(\bar{x}) = \bar{x}$, and so on. In other words, if the system starts at $x_0 = \bar{x}$, it stays there for all $k \geq 0$. Hence the word "fixed" in "fixed point". The Taylor series expansion of $f$ around $\bar{x}$ is $$f (x) = f (\bar{x}) + (D f) (\bar{x}) (x - \bar{x}) + \text{H.O.T}$$ where $(D f)$ is the Jacobian matrix-valued function, and "H.O.T." stands for "higher-order terms". In a sufficiently small neighborhood of $\bar{x}$, we can neglect the higher-order terms, and thus $$x_{k+1} = f (x_k) \approx f (\bar{x}) + (D f) (\bar{x}) (x_k - \bar{x})$$ and, since $f (\bar{x}) = \bar{x}$, we obtain $x_{k+1} - \bar{x} \approx (D f) (\bar{x}) (x_k - \bar{x})$. Let $e_k := x_{k} - \bar{x}$ be the error vector that measures the deviation from $\bar{x}$, and let $A := (D f) (\bar{x})$. We finally obtain the error dynamics $e_{k+1} \approx A \, e_k$, which yields $e_k \approx A^k \, e_0$. In other words, in a sufficiently small neighborhood of $\bar{x}$, the magnitudes of the eigenvalues of matrix $A$ tell us whether the error vector will converge to the origin or diverge, or equivalently, whether $x_k$ will converge to the fixed point $\bar{x}$ or diverge. If the former is the case, we say that the fixed point $\bar{x}$ is locally stable, whereas if the latter is the case we say that the fixed point $\bar{x}$ is unstable.
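As a concrete sketch of this recipe (using the Hénon map with its classic parameters as a stand-in, since the asker's map isn't given): find a fixed point, evaluate the Jacobian there, and read stability off the eigenvalue magnitudes.

```python
import numpy as np

a, b = 1.4, 0.3   # classic Henon parameters

def f(v):
    x, y = v
    return np.array([1 - a * x**2 + y, b * x])

# a fixed point solves x = 1 - a x^2 + b x, i.e. a x^2 + (1 - b) x - 1 = 0
x_bar = (-(1 - b) + ((1 - b)**2 + 4 * a) ** 0.5) / (2 * a)
fixed = np.array([x_bar, b * x_bar])

# Jacobian of f, evaluated at the fixed point
J = np.array([[-2 * a * x_bar, 1.0], [b, 0.0]])
eigvals = np.linalg.eigvals(J)
```

Here one eigenvalue has magnitude greater than $1$ and the other less than $1$, so this fixed point is a saddle: nearby orbits are attracted along one direction and repelled along the other.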
{ "language": "en", "url": "https://math.stackexchange.com/questions/192984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Probability of throwing multiple dice of at least a given face with a set of dice I know how to calculate the probability of throwing at least one die of a given face with a set of dice, but can someone tell me how to calculate more than one (e.g., at least two)? For example, I know that the probability of throwing at least one roll of 4 or higher with three 6-sided dice is 189/216, or 1 - (3/6 x 3/6 x 3/6). How do I calculate throwing at least two 4s with four 6-sided dice?
You are asking for the distribution of the number $X_n$ of successes in $n$ independent trials, where each trial is a success with probability $p$. Almost by definition, this distribution is binomial with parameters $(n,p)$, that is, for every $0\leqslant k\leqslant n$, $$ \mathrm P(X_n=k)={n\choose k}\cdot p^k\cdot(1-p)^{n-k}. $$ The probability of throwing at least two 4s with four 6-sided dice is $\mathrm P(X_4\geqslant2)$ with $p=\frac16$. Using the identity $\mathrm P(X_4\geqslant2)=1-\mathrm P(X_4=0)-\mathrm P(X_4=1)$, one gets $$ \mathrm P(X_4\geqslant2)=1-1\cdot\left(\frac16\right)^0\cdot\left(\frac56\right)^4-4\cdot\left(\frac16\right)^1\cdot\left(\frac56\right)^3=\frac{19}{144}. $$
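The same computation in exact arithmetic (a small sketch using Python's standard library):

```python
from fractions import Fraction
from math import comb

def at_least(k, n, p):
    """P(X_n >= k) for X_n ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

prob = at_least(2, 4, Fraction(1, 6))   # at least two 4s with four dice
```

Using `Fraction(1, 6)` for $p$ keeps everything exact, so the result comes out as the rational number $\frac{19}{144}$ rather than a float.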
{ "language": "en", "url": "https://math.stackexchange.com/questions/193050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Picking out columns from a matrix using MAGMA How do I form a new matrix from a given one by picking out some of its columns, using MAGMA?
You can actually do this really easily in Magma using the ColumnSubmatrix command, no looping necessary. For a matrix $A$, you can use it in a few ways to build $B$ from a selection of columns.

1st, 2nd, $\ldots$, 5th columns:

    B := ColumnSubmatrix(A, 5);

3rd, 4th, $\ldots$, 7th columns (5 columns starting at column 3):

    B := ColumnSubmatrix(A, 3, 5);

OR

    B := ColumnSubmatrixRange(A, 3, 7);

2nd, 5th, 8th columns: this is trickier, as Magma doesn't let you do this cleanly for columns. But it does let you pick out an arbitrary sequence of rows, so take rows 2, 5 and 8 of the transpose and transpose back. You can obviously replace [2,5,8] with any arbitrary sequence.

    B := Transpose(Matrix(Transpose(A)[[2,5,8]]));
{ "language": "en", "url": "https://math.stackexchange.com/questions/193100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Completeness and Topological Equivalence How can I show that if every metric topologically equivalent to a given metric is complete, then the metric space is compact? Any help will be appreciated.
I encountered this result in Queffélec's book Topologie; the proof is due to A. Ancona. It's known as Bing's theorem. We can assume WLOG that $d\leq 1$, otherwise replace $d$ by $\frac d{1+d}$. We assume that $(X,d)$ is not compact; then we can find a sequence $\{x_n\}$ without accumulation points. We define $$d'(x,y):=\sup_{f\in B}|f(x)-f(y)|,$$ where $B=\bigcup_{n\geq 1}B_n$ and $$B_n:=\{f\colon X\to \Bbb R,|f(x)-f(y)|\leq \frac 1nd(x,y)\mbox{ and }f(x_j)=0,j>n\}.$$ Since $d'\leq d$, we have to show that $Id\colon (X,d')\to (X,d)$ is continuous. We fix $a\in X$, and by the assumption on $\{x_k\}$, for all $\varepsilon>0$ we can find $n_0$ such that $d(x_k,a)>\varepsilon$ whenever $k\geq n_0$. We define $$f(x):=\max\left(\frac{\varepsilon -d(x,a)}{n_0},0\right).$$ By the inequality $|\max(0,s)-\max(0,t)|\leq |s-t|$, we get that $f\in B_{n_0}$. This gives equivalence between the two metrics. Now we check that $\{x_n\}$ is still Cauchy for $d'$. Fix $\varepsilon>0$, $N\geq\frac 1{\varepsilon}$ and $p,q\geq N$. Let $f\in B$, and $n$ such that $f\in B_n$.

* If $n\geq N$, then $|f(x_p)-f(x_q)|\leq \frac 1nd(x_p,x_q)\leq \frac 1n\leq \varepsilon$;
* if $n<N$ then $|f(x_p)-f(x_q)|=0$.

So $\{x_n\}$ is $d'$-Cauchy; but a sequence without accumulation points cannot converge, so $d'$ is a topologically equivalent metric that is not complete, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/193152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Prove that $x^2 + 5xy+7y^2 \ge 0$ for all $x,y \in\mathbb{R}$ This is probably really easy for all of you, but my question is how do I prove that $x^2 + 5xy+7y^2 \ge 0$ for all $x,y\in\mathbb{R}$ Thanks for the help!
$$x^2+5xy+7y^2=\left(x+\frac{5y}2\right)^2 + \frac{3y^2}4\ge 0$$ (not $>0$ as $x=y=0$ leads to 0).
{ "language": "en", "url": "https://math.stackexchange.com/questions/193275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Bijection between the set of classes of positive definite quadratic forms and the set of classes of quadratic numbers in the upper half plane Let $\Gamma = SL_2(\mathbb{Z})$. Let $\mathfrak{F}$ be the set of binary quadratic forms over $\mathbb{Z}$. Let $f(x, y) = ax^2 + bxy + cy^2 \in \mathfrak{F}$. Let $\alpha = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right)$ be an element of $\Gamma$. We write $f^\alpha(x, y) = f(px + qy, rx + sy)$. Since $(f^\alpha)^\beta$ = $f^{\alpha\beta}$, $\Gamma$ acts on $\mathfrak{F}$. Let $f, g \in \mathfrak{F}$. If $f$ and $g$ belong to the same $\Gamma$-orbit, we say $f$ and $g$ are equivalent. Let $f = ax^2 + bxy + cy^2 \in \mathfrak{F}$. We say $D = b^2 - 4ac$ is the discriminant of $f$. It is easy to see that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). If $D$ is not a square integer and gcd($a, b, c) = 1$, we say $f$ is primitive. If $D < 0$ and $a > 0$, we say $f$ is positive definite. We denote the set of positive definite primitive binary quadratic forms of discriminant $D$ by $\mathfrak{F}^+_0(D)$. By this question, $\mathfrak{F}^+_0(D)$ is $\Gamma$-invariant. We denote the set of $\Gamma$-orbits on $\mathfrak{F}^+_0(D)$ by $\mathfrak{F}^+_0(D)/\Gamma$. Let $\mathcal{H} = \{z \in \mathbb{C}; Im(z) > 0\}$ be the upper half complex plane. Let $\alpha = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right)$ be an element of $\Gamma$. Let $z \in \mathcal{H}$. We write $$\alpha z = \frac{pz + q}{rz + s}$$ It is easy to see that $\alpha z \in \mathcal{H}$ and $\Gamma$ acts on $\mathcal{H}$ from the left. Let $\alpha \in \mathbb{C}$ be an algebraic number. If the minimal polynomial of $\alpha$ over $\mathbb{Q}$ has degree $2$, we say $\alpha$ is a quadratic number. Let $\alpha$ be a quadratic number. There exists a unique polynomial $ax^2 + bx + c \in \mathbb{Z}[x]$ having $\alpha$ as a root such that $a > 0$ and gcd$(a, b, c) = 1$. $D = b^2 - 4ac$ is called the discriminant of $\alpha$.
Let $\alpha \in \mathcal{H}$ be a quadratic number. There exists a unique polynomial $ax^2 + bx + c \in \mathbb{Z}[x]$ having $\alpha$ as a root such that $a > 0$ and gcd$(a, b, c) = 1$. Let $D = b^2 - 4ac$. Clearly $D < 0$ and $D$ is not a square integer. It is easy to see that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). Conversely, suppose $D$ is a negative non-square integer such that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). Then there exists a quadratic number $\alpha \in \mathcal{H}$ whose discriminant is $D$. We denote by $\mathcal{H}(D)$ the set of quadratic numbers of discriminant $D$ in $\mathcal{H}$. It is easy to see that $\mathcal{H}(D)$ is $\Gamma$-invariant. Hence $\Gamma$ acts on $\mathcal{H}(D)$ from the left. We denote the set of $\Gamma$-orbits on $\mathcal{H}(D)$ by $\mathcal{H}(D)/\Gamma$. Let $f = ax^2 + bxy + cy^2 \in \mathfrak{F}^+_0(D)$. We denote $\phi(f) = (-b + \sqrt{D})/2a$, where $\sqrt{D} = i\sqrt{|D|}$. It is clear that $\phi(f) \in \mathcal{H}(D)$. Hence we get a map $\phi\colon \mathfrak{F}^+_0(D) \rightarrow \mathcal{H}(D)$. My question: Is the following proposition true? If yes, how do we prove it? Proposition Let $D$ be a negative non-square integer such that $D \equiv 0$ (mod $4$) or $D \equiv 1$ (mod $4$). Then the following assertions hold. (1) $\phi\colon \mathfrak{F}^+_0(D) \rightarrow \mathcal{H}(D)$ is a bijection. (2) $\phi(f^\sigma) = \sigma^{-1}\phi(f)$ for $f \in \mathfrak{F}^+_0(D), \sigma \in \Gamma$. Corollary $\phi$ induces a bijection $\mathfrak{F}^+_0(D)/\Gamma \rightarrow \mathcal{H}(D)/\Gamma$.
Proof of (1) We define a map $\psi\colon \mathcal{H}(D) \rightarrow \mathfrak{F}^+_0(D)$ as follows. Let $\theta \in \mathcal{H}(D)$. $\theta$ is a root of the unique polynomial $ax^2 + bx + c \in \mathbb{Z}[x]$ such that $a > 0$ and gcd$(a, b, c) = 1$. $D = b^2 - 4ac$. We define $\psi(\theta) = ax^2 + bxy + cy^2$. Clearly $\psi$ is the inverse map of $\phi$. Proof of (2) Let $f = ax^2 + bxy + cy^2$. Let $\sigma = \left( \begin{array}{ccc} p & q \\ r & s \end{array} \right) \in \Gamma$. Let $f^\sigma = kx^2 + lxy + my^2$. Let $\theta = \phi(f)$. Let $\gamma = \sigma^{-1}\theta$. Then $\theta = \sigma \gamma$. $a\theta^2 + b\theta + c = 0$. Hence $a(p\gamma + q)^2 + b(p\gamma + q)(r\gamma + s) + c(r\gamma + s)^2 = 0$. The left hand side of this equation is $f(p\gamma + q, r\gamma + s) = k\gamma^2 + l\gamma + m$. Since $f^\sigma$ is positive definite by this question, $k \gt 0$. Since gcd$(k, l, m)$ = 1 by the same question, $\psi(\gamma) = f^\sigma$. Hence $\phi(f^\sigma) = \gamma = \sigma^{-1}\theta$. This proves (2).
{ "language": "en", "url": "https://math.stackexchange.com/questions/193329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is the set statement: (A⊕B) = (A ∪ B) true? "When is the set statement: (A⊕B) = (A ∪ B) a true statement? Is it true sometimes, never, or always? If it is sometimes, state the cases where it is." How would you go about finding the answer to the question or ones like this one? Thanks for your time!
If $\oplus$ denotes the symmetric difference, a good way to approach this problem is by drawing two Venn diagrams. In the diagram for $A\oplus B$, shade everything that lies in exactly one of the two sets; in the diagram for $A\cup B$, shade everything that lies in at least one of them. The area that's filled in for $A\cup B$ but not for $A\oplus B$ is $A\cap B$. So: what do you need to be true about $A\cap B$ to make the two diagrams have the same area filled in?
{ "language": "en", "url": "https://math.stackexchange.com/questions/193396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
A matrix is diagonalizable, so what? I mean, you can say it's similar to a diagonal matrix, it has $n$ independent eigenvectors, etc., but what's the big deal of having diagonalizability? Can I solidly perceive the differences between two linear transformation one of which is diagonalizable and the other is not, either by visualization or figurative description? For example, invertibility can be perceived. Because non-invertible transformation must compress the space in one or more certain direction to $0$. Like crashing a space flat.
Up to change in basis, there are only 2 things a matrix can do.

* It can act like a scaling operator, where it takes certain key vectors (eigenvectors) and scales them, or
* it can act as a shift operator, where it takes a first vector, sends it to a second vector, the second vector to a third vector, and so forth, then sends the last vector in a group to zero.

It may be that for some collection of vectors it does scaling whereas for others it does shifting, or it can also do linear combinations of these actions (block scaling and shifting simultaneously). For example, the matrix $$ P \begin{bmatrix} 4 & & & & \\ & 3 & 1 & & \\ & & 3 & 1 &\\ & & & 3 & \\ & & & & 2 \end{bmatrix} P^{-1} = P\left( \begin{bmatrix} 4 & & & & \\ & 3 & & & \\ & & 3 & &\\ & & & 3 & \\ & & & & 2 \end{bmatrix} + \begin{bmatrix} 0& & & & \\ & 0& 1 & & \\ & & 0& 1 &\\ & & & 0& \\ & & & &0 \end{bmatrix}\right)P^{-1} $$ acts as the combination of a scaling operator on all the columns of $P$

* $p_1 \rightarrow 4 p_1$, $p_2 \rightarrow 3 p_2$, ..., $p_5 \rightarrow 2 p_5$,

plus a shifting operator on the 2nd, 3rd and 4th columns of $P$:

* $p_4 \rightarrow p_3 \rightarrow p_2 \rightarrow 0$.

This idea is the main content behind the Jordan normal form. Being diagonalizable means that the matrix does no shifting, only scaling. For a more thorough explanation, see this excellent blog post by Terry Tao: http://terrytao.wordpress.com/2007/10/12/the-jordan-normal-form-and-the-euclidean-algorithm/
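The scaling/shifting dichotomy is visible numerically: a diagonalizable matrix yields a full basis of eigenvectors, while a pure shear (a single Jordan block) does not. A small NumPy sketch with my own example matrices:

```python
import numpy as np

scale = np.array([[2.0, 0.0], [0.0, 3.0]])   # pure scaling: diagonalizable
shear = np.array([[1.0, 1.0], [0.0, 1.0]])   # Jordan block: does shifting, not diagonalizable

def eigenbasis_rank(A, tol=1e-8):
    """Number of linearly independent eigenvectors numpy finds for A."""
    _, vecs = np.linalg.eig(A)
    return np.linalg.matrix_rank(vecs, tol=tol)
```

The shear has the single (repeated) eigenvalue $1$ but only a one-dimensional eigenspace, so no basis of eigenvectors exists and the matrix is not similar to any diagonal matrix.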
{ "language": "en", "url": "https://math.stackexchange.com/questions/193460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 5, "answer_id": 2 }
Isomorphic subgroups, finite index, infinite index Is it possible to have a group $G,$ which has two different, but isomorphic subgroups $H$ and $H',$ such that one is of finite index, and the other one is of infinite index? If not, why is that not possible. If there is a counterexample please give one.
Yes. For $n\in\Bbb N$ let $G_n$ be a copy of $\Bbb Z/2\Bbb Z$, and let $G=\prod_{n\in\Bbb N}G_n$ be the direct product. Then $H_0=\{0\}\times\prod_{n>0}G_n$ is isomorphic to $H_1=\prod_{n\in\Bbb N}A_n$, where $A_n=\{0\}$ if $n$ is odd and $A_n=G_n$ if $n$ is even. Clearly $[G:H_0]=2$ and $[G:H_1]$ is infinite. Of course $H_0$ and $H_1$ are both isomorphic to $G$, so I could have used $G$ instead of $H_0$, but I thought that you might like the subgroups to be proper.
{ "language": "en", "url": "https://math.stackexchange.com/questions/193525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Detect when a point belongs to a bounding box with distances I have a box with known bounding coordinates (latitudes and longitudes): latN, latS, lonW, lonE. I have a mystery point P with unknown coordinates. The only data available is the distance from P to any point p, dist(p,P). I need a function that tells me whether this point is inside or outside the box.
The distance measurement from any point gives you a circle around that point as a locus of possible positions of $P$. Make any such measurement from a point $A$. If the question is not settled after this (i.e. if the circle crosses the boundary of the rectangle), make a measurement from any other point $B$. The two intersecting circles leave only two possibilities for the location of $P$. If you are lucky, both options are inside or both outside the rectangle and we are done. Otherwise, a third measurement taken from any point $C$ not collinear with $A$ and $B$ will settle the question of the exact position of $P$ (and after that we easily see whether $P$ is inside or not). One may wish to choose the first point $A$ in an "optimal" fashion such that the probability of a definite answer is maximized. While this requires knowledge about some a priori distribution of where $P$ might be, the center of the rectangle seems like a good idea: the result is inconclusive only if the measured distance is between half the smallest side of the rectangle and half the diagonal of the rectangle.
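The two-circle step can be sketched in code (my own implementation of the standard circle-circle intersection formula, working in plane coordinates rather than latitude/longitude): given the distances from two known points $A$ and $B$, recover the at-most-two candidate positions and test each against the box.

```python
import math

def circle_intersections(ax, ay, ra, bx, by, rb):
    """Intersection points of the circles centered at A and B with radii ra, rb."""
    d = math.hypot(bx - ax, by - ay)
    if d == 0 or d > ra + rb or d < abs(ra - rb):
        return []                     # circles are concentric, too far apart, or nested
    # distance from A to the chord's midpoint, then the half-chord length
    t = (ra**2 - rb**2 + d**2) / (2 * d)
    h = math.sqrt(max(ra**2 - t**2, 0.0))
    mx, my = ax + t * (bx - ax) / d, ay + t * (by - ay) / d
    ox, oy = -(by - ay) / d, (bx - ax) / d   # unit vector along the chord
    return [(mx + h * ox, my + h * oy), (mx - h * ox, my - h * oy)]

def inside_box(p, xmin, xmax, ymin, ymax):
    return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
```

If both candidates fall on the same side of the box boundary, the answer is already determined; otherwise a third measurement is needed, exactly as described above.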
{ "language": "en", "url": "https://math.stackexchange.com/questions/193606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 4 }
Proving that $\sum_{j=1}^{n}|z_{j}|^{2}\le 1$ when $|\sum_{j=1}^{n}z_{j}w_{j}| \le 1$ for all $\sum_{j=1}^{n}|w_{j}|^{2}\le 1$ The problem is like this: Fix $n$ a positive integer. Suppose that $z_{1},\cdots,z_{n} \in \mathbb C$ are complex numbers satisfying $|\sum_{j=1}^{n}z_{j}w_{j}| \le 1$ for all $w_{1},\cdots,w_{n} \in \mathbb C$ such that $\sum_{j=1}^{n}|w_{j}|^{2}\le 1$. Prove that $\sum_{j=1}^{n}|z_{j}|^{2}\le 1$. For this problem, I so far have that $|z_{i}|^{2}\le 1$ for all $i$ by plugging $(0, \cdots,0,1,0,\cdots,0)$ for $w=(w_{1},\cdots,w_{n} )$ Also, by plugging $(1/\sqrt{n},\cdots,1/\sqrt{n})$ for $w=(w_{1},\cdots,w_{n} )$ we could have $|z_{1}+\cdots+z_{n}|\le \sqrt{n}$ I wish we can conclude that $|z_{i}|\le 1/\sqrt{n}$ for each $i$. Am I in the right direction? Any comment would be grateful!
To have a chance of success, one must choose a family $(w_j)_j$ adapted to the input $(z_j)_j$. If $z_j=0$ for every $j$, the result holds. Otherwise, try $w_j=c^{-1}\bar z_j$ with $c^2=\sum\limits_{k=1}^n|z_k|^2$.
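The hint is easy to verify numerically (a sketch with an arbitrary nonzero vector $z$ of my choosing): with $w_j=\bar z_j/c$ one gets $\sum|w_j|^2=1$ and $\bigl|\sum z_jw_j\bigr|=c$, so the hypothesis forces $c\le 1$ and hence $\sum|z_j|^2=c^2\le 1$.

```python
import math

z = [0.3 + 0.4j, -0.2 + 0.1j, 0.5 - 0.3j]       # arbitrary test vector
c = math.sqrt(sum(abs(zj)**2 for zj in z))      # c^2 = sum |z_j|^2

w = [zj.conjugate() / c for zj in z]            # the adapted choice from the hint

norm_w_sq = sum(abs(wj)**2 for wj in w)         # should equal 1
pairing = abs(sum(zj * wj for zj, wj in zip(z, w)))   # should equal c
```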
{ "language": "en", "url": "https://math.stackexchange.com/questions/193650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove that $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq \frac{9}{a+b+c} : (a, b, c) > 0$ Please help me for prove this inequality: $$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\geq \frac{9}{a+b+c} : (a, b, c) > 0$$
The inequality can be written as: $$\left(a+b+c\right) \cdot \left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right) \geq 9 .$$ Now we apply the $AM$-$GM$ inequality to each parenthesized factor. So: $\displaystyle \frac{a+b+c}{3} \geq \sqrt[3]{abc} \tag{1}$ and $\displaystyle \frac{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}}{3} \geq \frac{1}{\sqrt[3]{abc}} \tag{2}.$ Multiplying relation $(1)$ by relation $(2)$, we obtain: $$\left(\frac{a+b+c}{3}\right) \cdot \left(\frac{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}}{3}\right) \geq \frac{\sqrt[3]{abc}}{\sqrt[3]{abc}}=1, $$ which is exactly the desired inequality.
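As a sanity check (not a proof), one can sweep positive triples numerically and confirm the rewritten form of the inequality; a Python sketch, with the minimum attained at $a=b=c$ as AM-GM predicts:

```python
# Check (a+b+c)(1/a+1/b+1/c) >= 9 on a grid of positive a, b, c.
# Equality should occur exactly when a = b = c.
def lhs(a, b, c):
    return (a + b + c) * (1 / a + 1 / b + 1 / c)

vals = [0.1 * k for k in range(1, 31)]          # 0.1, 0.2, ..., 3.0
worst = min(lhs(a, b, c) for a in vals for b in vals for c in vals)
print(worst)                                    # about 9, up to rounding
```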
{ "language": "en", "url": "https://math.stackexchange.com/questions/193771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
What's the probability that the other side of the coin is gold? 4 coins are in a bucket: 1 is gold on both sides, 1 is silver on both sides, and 2 are gold on one side and silver on the other side. I randomly grab a coin from the bucket and see that the side facing me is gold. What is the probability that the other side of the coin is gold? I had thought that the probability is $\frac{1}{3}$ because there are 3 coins with at least one side of gold, and only 1 of these 3 coins can be gold on the other side. However, I suspect that the sides might be unique, which derails my previous logic.
50%. GIVEN that the first side you see is gold, what is the chance that you have the double-gold coin? Assume you do this experiment a hundred times. In 50% of the cases you pull out a coin and see a gold side; the other 50% you see a silver side. In the latter case we have to discard the experiment and only count the cases where we see gold. There is initially a 25% chance of double-gold, 25% chance double-silver, and 50% chance half-and-half. We discard the 25 cases where you draw the double-silver, and the 25 cases where you draw a half-and-half silver-side up. So of the 50 cases remaining, half are double-gold and half are gold-up silver-down. Hence given that you have drawn a coin and see gold on top, there is a 50% chance there is gold on the bottom.
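The case count can be verified by exhaustively enumerating the eight equally likely (coin, face-up) outcomes; a Python sketch (the encoding of the coins is mine):

```python
# 4 coins, each of which can land with either side up: 8 equally likely cases.
coins = [("G", "G"), ("S", "S"), ("G", "S"), ("G", "S")]

gold_up = 0    # cases where the visible side is gold
gold_down = 0  # ...and the hidden side is gold as well

for coin in coins:
    for up in (0, 1):
        if coin[up] == "G":
            gold_up += 1
            if coin[1 - up] == "G":
                gold_down += 1

print(gold_down, "out of", gold_up)  # 2 out of 4: probability 1/2
```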
{ "language": "en", "url": "https://math.stackexchange.com/questions/193851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
How can I get sequence $4,4,2,4,4,2,4,4,2\ldots$ into equation? How can I write an equation that expresses the nth term of the sequence: $$4, 4, 2, 4, 4, 2, 4, 4, 2, 4, 4, 2,\ldots$$
$$ f(n) = \begin{cases} 4 \text{ if } n \equiv 0 \text{ or } 1 \text{ (mod 3)}\\ 2 \text{ if } n \equiv 2 \text{ (mod 3)} \end{cases} $$
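This assumes the sequence is indexed from $n=0$; a quick check in Python (with a 1-based convention, the condition would instead be $n \equiv 0 \pmod 3$):

```python
# f(n) for n = 0, 1, 2, ... should reproduce 4, 4, 2, 4, 4, 2, ...
def f(n):
    return 2 if n % 3 == 2 else 4

print([f(n) for n in range(9)])  # [4, 4, 2, 4, 4, 2, 4, 4, 2]
```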
{ "language": "en", "url": "https://math.stackexchange.com/questions/193897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 17, "answer_id": 9 }
Example of a bijection between two sets I am trying to come up with a bijective function $f$ between the set : $\left \{ 2\alpha -1:\alpha \in \mathbb{N} \right \}$ and the set $\left \{ \beta\in \mathbb{N} :\beta\geq 9 \right \}$, but I couldn't figure out how to do it. Can anyone come up with such a bijective function? Thanks
Given some element $a$ of $\{ 2\alpha -1 \colon \alpha \in \mathbb{N} \}$, try the function $f(a)=\frac{a+1}{2}+9$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/193950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
(Help with) A simple yet specific argument to prove Q is countable I was asked to prove that $\mathbb{Q}$ is countable. Though there are several proofs to this I want to prove it through a specific argument. Let $\mathbb{Q} = \{x|n.x+m=0; n,m\in\mathbb{Z}\}$ I would like to go with the following argument: given that we know $\mathbb{Z}$ is countable, there are only countable many n and countable many m , therefore there can only be countable many equations AND COUNTABLE MANY SOLUTIONS. The instructor told me that though he liked the argument, it doesn't follow directly that there can only be countable many solutions to those equations. Is there any way of proving that without it being a traditional proof of "$\mathbb{Q}$ is countable"?
Your argument can work, but as presented here there are several gaps in it to be closed: * *Your definition of $\mathbb Q$ does not work unless you exclude the case $m=n=0$ -- otherwise everything is a solution. (Thanks, Brian). *You need to point out explicitly that each choice of $n$ and $m$ has at most one solution. *Just because there are countably many choices for $n$ and countably many choices for $m$, you haven't proved that there are countably many combinations of $n$ and $m$. You need either to explicitly reference an earlier proof that the Cartesian product of countable sets is countable, or prove this explicitly. *Since each rational can be hit by more than one $(m,n)$ pair, what you prove is really that $\mathbb Q$ is at most countable. You will need an argument that it doesn't have some lower cardinality. (Unless your definition of "countable" is "at most the cardinality of $\mathbb N$" rather than "exactly the cardinalty of $\mathbb N$", which happens). An explicit appeal to the Cantor-Bernstein theorem may be in order.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Any concrete example of ''right identity and left inverse do not imply a group''? In the abstract algebra class, we have proved the fact that right identity and right inverse imply a group, while right identity and left inverse do not. My question: Are there any good examples of sets (with operations on) with right identity and left inverse, not being a group? To be specific, suppose $(X,\cdot)$ is a set with a binary operation satisfies the following conditions: (i) $(a\cdot b)\cdot c=a\cdot (b\cdot c)$ for any $a,b,c\in X$; (ii) There exists $e\in X$ such that for every $a\in X$, $a\cdot e=a$; (iii) For any $a\in X$, there exists $b\in X$ such that $b\cdot a=e$. I want an example of $(X,\cdot)$ which is not a group.
$$\matrix{a&a&a\cr b&b&b\cr c&c&c\cr}$$ That is, $xy=x$ for all $x,y$.
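That this "everything is a left zero" table satisfies conditions (i)–(iii) of the question, yet is not a group, can be checked mechanically; a Python sketch (the string encoding of the elements is mine):

```python
# X = {a, b, c} with x*y = x. Any element serves as a right identity; fix e = 'a'.
X = ["a", "b", "c"]
op = lambda x, y: x
e = "a"

# (i) associativity: (x*y)*z = x = x*(y*z)
assert all(op(op(x, y), z) == op(x, op(y, z)) for x in X for y in X for z in X)
# (ii) right identity: x*e = x
assert all(op(x, e) == x for x in X)
# (iii) left inverse: e*x = e, so e itself is a left inverse of every x
assert all(op(e, x) == e for x in X)
# ...but e is not a two-sided identity, so (X, op) is not a group
assert any(op(e, x) != x for x in X)
print("all conditions verified")
```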
{ "language": "en", "url": "https://math.stackexchange.com/questions/194083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If $f(n) = \sum_{i = 0}^n X_{i}$, then show by induction that $f(n) = f(n - 1) + X_{n-1}$ I am trying to solve this problem by induction. The sad part is that I don't have a very strong grasp on solving by inductive proving methods. I understand that there is a base case and that I need an inductive step that will set $k = n$ and then one more step that basically sets $k = n + 1$. Here is the problem I am trying to solve: If $f(n) = \sum_{i = 0}^n X_{i}$, then show by induction that $f(n) = f(n - 1) + X_{n-1}$. Can I have someone please try to point me in the right direction? *EDIT: I update the formula to the correct one. I wasn't sure how to typeset it correctly and left errors in my math. Thank you for those that helped. I'm still having the problem but now I have the proper formula posted.
To prove by induction, you need to prove two things. First, you need to prove that your statement is valid for $n=1$. Second, you have to show that the validity of the statement for $n=k$ implies the validity of the statement for $n=k+1$. Putting these two bits of information together, you effectively show that your statement is valid for any value of $n$, since starting from $n=1$, the second bit that you proved above shows that the statement is also valid for $n=2$, and then from the validity of the statement for $n=2$ you know it will also be valid for $n=3$ and so on. As for your problem, we proceed as follows: (Part 1) Given the nature of your problem, it is safe to assume $f(0)=0$ (trivial sum with no elements equals zero). In this case it is trivial to see that $f(1)=f(0)+X_{1}$. (Part 2) Assume the proposition is true for $n=k$, so $$f(k)=\sum_{i=1}^kX_i=f(k-1)+X_k$$ Adding $X_{k+1}$ to both sides of the equation above we get $$f(k)+X_{k+1}=\sum_{i=1}^{k+1}X_i=f(k+1)$$ So, the truth of the statement for $n=k$ implies the truth of the statement for $n=k+1$, and so the result is proved for all $n$. As Mr. Newstead said above, the induction step is not really necessary because of the nature of the problem, but it's important to have this principle in mind for more complicated proofs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that in a discrete metric space, every subset is both open and closed. I need to prove that in a discrete metric space, every subset is both open and closed. Now, I find it difficult to imagine what this space looks like. I think it consists of all sequences containing ones and zeros. Now in order to prove that every subset is open, my books says that for $A \subset X $, $A$ is open if $\,\forall x \in A,\,\exists\, \epsilon > 0$ such that $B_\epsilon(x) \subset A$. I was thinking that since $A$ will also contain only zeros and ones, it must be open. Could someone help me ironing out the details?
Let $(X,d)$ be a metric space with the discrete metric, so $d(a,x)\in\{0,1\}$ for all $a,x\in X$. Suppose $A \subset X$ and let $x\in A$ be arbitrary. Setting $r = \frac{1}{2}$, if $a \in B(x,r)$ then $d(a,x) < \frac{1}{2}$, which forces $d(a,x)=0$, i.e. $a=x$, and so $a \in A$. Hence $B(x,r)\subseteq A$, and $A$ is open. (1) To show that $A$ is closed, it suffices to note that the complement of $A$ is also a subset of $X$ and hence open by (1); therefore $A$ must be closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 4, "answer_id": 0 }
Hamiltonian Cycle Problem At the moment I'm trying to prove the statement: $K_n$ is an edge disjoint union of Hamiltonian cycles when $n$ is odd. ($K_n$ is the complete graph with $n$ vertices) So far, I think I've come up with a proof. We know the total number of edges in $K_n$ is $n(n-1)/2$ (or $n \choose 2$) and we can split our graph into individual Hamiltonian cycles of degree 2. We also know that for n vertices all having degree 2, there must consequently be $n$ edges. Thus we write $n(n-1)/2 = n + n + ... n$ (here I'm just splitting $K_n$'s edges into some number of distinct Hamiltonian paths) and the deduction that $n$ must be odd follows easily. However, the assumption I made - that we can always split $K_n$ into Hamiltonian paths of degree 2 if $K_n$ can be written as a disjoint union described above - I'm having trouble proving. I've only relied on trying different values for $n$ trials and it hasn't faltered yet. So, I'm asking: If it is true, how do you prove that if $K_n$ can be split into distinct Hamiltonian cycles, it can be split in such a way that each Hamiltonian cycle is of degree 2?
What you are looking for is a Hamilton cycle decomposition of the complete graph $K_n$, for odd $n$. An example of how this can be done (among many other results in the area) is given in: D. Bryant, Cycle decompositions of complete graphs, in Surveys in Combinatorics, vol. 346, Cambridge University Press, 2007, pp. 67–97. For odd $n$, let $n=2r+1$, take $\mathbb{Z}_{2r} \cup \{\infty\}$ as the vertex set of $K_n$ and let $D$ be the orbit of the $n$−cycle \[(\infty, 0, 1, 2r − 1, 2, 2r − 2, 3, 2r − 3,\ldots , r − 1, r + 1, r)\] under the permutation $\rho_{2r}$ [Here $\rho_{2r}=(0,1,\ldots,2r-1)$]. Then $D$ is a decomposition of $K_n$ into $n$-cycles. The paper illustrates this starter cycle for a Hamilton cycle decomposition of $K_{13}$; if you rotate the starter, you obtain the other Hamilton cycles in the decomposition. The method of using a "starter" cycle under the action of a cyclic automorphism is typical in graph decomposition problems.
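The starter-cycle construction can be verified directly in code: for $n=2r+1$, rotating the starter $r$ times should yield edge-disjoint Hamilton cycles covering all $\binom{n}{2}$ edges. A Python sketch (the vertex $\infty$ is encoded as the string `'inf'`; the construction follows the description quoted above, the code itself is mine):

```python
# Starter n-cycle on Z_{2r} ∪ {∞} and its rotations under ρ: v ↦ v+1 (mod 2r).
def starter(r):
    c = ["inf", 0]
    for k in range(1, r):
        c += [k, 2 * r - k]
    c.append(r)
    return c                 # e.g. r = 3 gives ['inf', 0, 1, 5, 2, 4, 3]

def rotate(cycle, i, r):
    return [v if v == "inf" else (v + i) % (2 * r) for v in cycle]

def edges(cycle):
    n = len(cycle)
    return {frozenset((cycle[j], cycle[(j + 1) % n])) for j in range(n)}

r = 6                        # K_13, the example discussed in the paper
n = 2 * r + 1
all_edges = set()
for i in range(r):
    all_edges |= edges(rotate(starter(r), i, r))

# r edge-disjoint Hamilton cycles cover r·n = n(n-1)/2 = 78 edges of K_13.
print(len(all_edges), n * (n - 1) // 2)
```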
{ "language": "en", "url": "https://math.stackexchange.com/questions/194247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show me some pigeonhole problems I'm preparing myself for a combinatorics test. A part of it will concentrate on the pigeonhole principle. Thus, I need some hard to very hard problems on the subject to solve. I would be thankful if you can send me links, books, or even just a lone problem.
This turned up in a routine google search of the phrase "pigeonhole principle exercise" and appears to be training problems for the New Zealand olympiad team. It contains numerous problems and has some solutions in the back.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 3 }
Character of $S_3$ I am trying to learn about the characters of a group but I think I am missing something. Consider $S_3$. This has three elements which fix one thing, two elements which fix nothing and one element which fixes everything. So its character should be $\chi=(1,1,1,0,0,3)$ since the trace is just equal to the number of fixed elements (using the standard representation of a permutation matrix). Now I think this is an irreducible representation, so $\langle\chi,\chi\rangle$ should be 1. But it's $\frac{1}{6}(1+1+1+9)=2$. So is the permutation matrix representation actually reducible? Or am I misunderstanding something?
The permutation representation is reducible. It has a subrepresentation spanned by the vector $(1,1,1)$. Hence, the permutation representation is the direct sum of the trivial representation and a $(n-1=2)$-dimensional irreducible representation.
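The computation $\langle\chi,\chi\rangle = 2$ and the decomposition are easy to confirm by brute force over $S_3$; a Python sketch using `itertools.permutations`:

```python
from itertools import permutations

# chi(g) = number of fixed points of g: the character of the permutation rep.
chi = [sum(1 for i in range(3) if g[i] == i) for g in permutations(range(3))]

inner = sum(c * c for c in chi) / 6       # <chi, chi> = 2, so chi is reducible
triv = sum(chi) / 6                       # multiplicity of the trivial rep = 1
std = sum((c - 1) ** 2 for c in chi) / 6  # <chi-1, chi-1> = 1: the remaining
print(inner, triv, std)                   # 2-dimensional part is irreducible
```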
{ "language": "en", "url": "https://math.stackexchange.com/questions/194418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Finding $\pi$ through sine and cosine series expansions I am working on a problem in Partha Mitra's book Observed Brain Dynamics (the problem was originally from Rudin's textbook Real and Complex Analysis, and appears on page 54 of Mitra's book). Unfortunately, the book I have does not contain any solutions... Here is the question: By considering the first few terms of the series expansion of sin(x) and cos(x), show that there is a real number $x_0$ between 0 and 2 for which $\cos(x_0)=0$ and $\sin(x_0)=1$. Then, define $\pi=2x_0$, and show that $e^{i\pi/2}=i$ (and therefore that $e^{i\pi}=-1$ and $e^{2\pi i}=1$. Attempt at a solution: In a previous problem I derived the series expansions for sine and cosine as $$ \sin(x) = \sum_{n=0}^{\infty} \left[ (-1)^n \left( \frac{x^{2n+1}}{(2n+1)!} \right) \right] $$ $$ \cos(x) = \sum_{n=0}^{\infty} \left[ (-1)^n \left( \frac{x^{2n}}{(2n)!} \right) \right] $$ My thought is that you can show that $\cos(0)=1$ (trivially), and that $\cos(2)<0$ (less trivially). This then implies that there is a point $x_0$ between 0 and 2 where $\cos(x_0)=0$, since the cosine function is continuous. However, I do not understand how you could then show that $\sin(x_0)=1$ at this same point. My approach may be completely off here. I believe that the second part of this problem ("Then, define $\pi=2x_0$...") will be easier once I get past this first part. Thanks so much for the help. Also - I swear this is not a homework assignment. I am reading through this book on my own to improve my math.
How to show that $\sin (x_0)=1$ if $\cos (x_0)=0$? Quite simply: $$\sin^2 x+\cos^2 x=1$$ (you may also want to specify that $\sin x$ is positive in the given range)
{ "language": "en", "url": "https://math.stackexchange.com/questions/194462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exercise on finite intermediate extensions Let $E/K$ be a field extension, and let $L_1$ and $L_2$ be intermediate fields of finite degree over $K$. Prove that $[L_1L_2:K] = [L_1 : K][L_2 : K]$ implies $L_1\cap L_2 = K$. My thinking process so far: I've gotten that $K \subseteq L_1 \cap L_2$ because trivially both are intermediate fields over K. I want to show that $L_1 \cap L_2 \subseteq K$, or equivalently that any element of $L_1 \cap L_2$ is also an element of $K$. So I suppose there exists some element $x\in L_1 \cap L_2\setminus K$. Well then I know that this element is algebraic over $K$, implying that $L_1:K=[L_1:K(x)][K(x):K]$, and similarly for $L_2$, implying that that these multiplied together equal $L_1L_2:K$ by hypothesis. And now I’m stuck in the mud... not knowing exactly where the contradiction is.
The assumption implies $[L_1L_2:L_1]=[L_2:K]$. Hence $K$-linearly independent elements $b_1,\ldots ,b_m\in L_2$ are $L_1$-linearly independent, considered as elements of $L_1L_2$. In particular this holds for the powers $1,x,x^2,\ldots ,x^{m-1}$ of an element $x\in L_2$, where $m$ is the degree of the minimal polynomial of $x$ over $K$. Thus $[K(x):K]= [L_1(x):L_1]$. This shows that the minimal polynomial over $K$ of every $x\in L_1\cap L_2$ has degree $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/194513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
The integral relation between Perimeter of ellipse and Quarter of Perimeter Ellipse Equation $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$ $x=a\cos t$, $y=b\sin t$ $$L(\alpha)=\int_0^{\alpha}\sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2}\,dt$$ $$L(\alpha)=\int_0^\alpha\sqrt{a^2\sin^2 t+b^2 \cos^2 t}\,dt $$ $$L(2\pi)=\int_0^{2\pi}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag{Perimeter of ellipse}$$ $$L(\pi/2)=\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag{Quarter of Perimeter}$$ Geometrically, we can write $L(2\pi)=4L(\pi/2)$ $$4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt=\int_0^{2\pi}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag1$$ If I change variable in integral of $L(2\pi)$ $$L(2\pi)=\int_0^{2\pi}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag{Perimeter of ellipse}$$ $t=4u$ $$L(2\pi)=4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du$$ According to result (1), $$L(2\pi)=4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du=4\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt$$ $$\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du=\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 t}\,dt \tag2$$ How to prove the relation $(2)$ analytically? Thanks a lot for answers
I proved the relation via using analytic way. I would like to share the solution with you. $$\int_0^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 4u}\,du=K$$ $u=\pi/4-z$ $$K=\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (\pi-4z)}\,dz$$ $\sin (\pi-4z)=\sin \pi \cos 4z-\cos \pi \sin 4z= \sin 4z$ $$\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (\pi-4z)}\,dz=\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $$\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz=\int_{-\pi/4}^{0}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz+\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $$\int_{-\pi/4}^{0}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $z=-p$ $$\int_{\pi/4}^{0}\sqrt{b^2+(a^2-b^2)\sin^2 (-4p)}\,(-dp)=\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4p)}\,dp$$ $$K=\int_{-\pi/4}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz=2\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $$K=2\int_{0}^{\pi/4}\sqrt{b^2+(a^2-b^2)\sin^2 (4z)}\,dz$$ $z=\pi/8-v$ $$K=2\int_{-\pi/8}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $$K=2\int_{-\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv+2\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $$\int_{-\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $v=-h$ $$\int_{\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (-4h)}\,(-dh)=\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4h)}\,dh$$ $$K=2\int_{-\pi/8}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv+2\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $$K=4\int_{0}^{\pi/8}\sqrt{b^2+(a^2-b^2)\cos^2 (4v)}\,dv$$ $v=\pi/8-t/4$ $$K=4\int_{\pi/2}^{0}\sqrt{b^2+(a^2-b^2)\cos^2 (4(\pi/8-t/4))}\,(-dt/4)$$ $$K=\int_{0}^{\pi/2}\sqrt{b^2+(a^2-b^2)\cos^2 (\pi/2-t)}\,dt$$ $$K=\int_{0}^{\pi/2}\sqrt{b^2+(a^2-b^2)\sin^2 (t)}\,dt$$
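Identity (2) can also be confirmed numerically before wading through the substitutions; a midpoint-rule sketch in Python (the values $a=3$, $b=2$ are arbitrary):

```python
import math

# Midpoint rule for ∫_0^{π/2} sqrt(b² + (a²−b²) sin²(k·t)) dt with k = 4 or 1.
def quarter(a, b, k, N=50_000):
    h = (math.pi / 2) / N
    return h * sum(
        math.sqrt(b * b + (a * a - b * b) * math.sin(k * (j + 0.5) * h) ** 2)
        for j in range(N)
    )

a, b = 3.0, 2.0
q4, q1 = quarter(a, b, 4), quarter(a, b, 1)
print(q4, q1)   # the two integrals agree: both equal a quarter-perimeter
```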
{ "language": "en", "url": "https://math.stackexchange.com/questions/194596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that the group $G$ is of order $12$ I am studying some exercises about semi-direct products and facing this solved one: Show that the order of group $G=\langle a,b| a^6=1,a^3=b^2,aba=b\rangle$ is $12$. Our aim is to show that $|G|\leq 12$ and then $G=\mathbb Z_3 \rtimes\mathbb Z_4=\langle x\rangle\rtimes\langle y\rangle$. So we need a homomorphism from $\mathbb Z_4$ to $\mathrm{Aut}(\mathbb Z_3)\cong\mathbb Z_2=\langle t\rangle$ to construct the semi-direct product as we wish: $$\phi:=\begin{cases} 1\longrightarrow \mathrm{id},& \\\\y^2\longrightarrow \mathrm{id},& \\\\y\longrightarrow t,& \\\\y^3\longrightarrow t, \end{cases} $$ Here, I do know how to construct $\mathbb Z_3 \rtimes_{\phi}\mathbb Z_4$ by using $\phi$ and according to the definition. My question starts from this point: The solution suddenly goes another way instead of doing $(a,b)(a',b')=(a\phi_b(a'),bb')$. It takes $$\alpha=(x,y^2), \beta=(1,y)$$ and notes that these elements satisfy the relations in group $G$. All is right and understandable, but how could I find such elements $\alpha, \beta$? Is finding generators like $\alpha, \beta$ really the standard approach for this kind of problem? Thanks for your help.
The subgroup $A$ generated by $a^2$ is normal and of order 3. The subgroup $B$ generated by $b$ is of order 4. The intersection of these is trivial, so the product $AB$ has order 12; hence $G$ has order at least 12. To show it has order exactly 12, we need to see that $a\in AB$; but $b^2=a^3=a^2a$ so $$a=a^{-2}b^2\in AB.$$ Thus the group is the semidirect product of $A$ by $B$, where $$ba^2b^{-1}=(ba)(ab^{-1})=a^{-1}(ba)b^{-1}=a^{-1}(a^{-1}b)b^{-1}=a^{-2}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/194665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is there a direct, elementary proof of $n = \sum_{k|n} \phi(k)$? If $k$ is a positive natural number then $\phi(k)$ denotes the number of natural numbers less than $k$ which are prime to $k$. I have seen proofs that $n = \sum_{k|n} \phi(k)$ which basically partitions $\mathbb{Z}/n\mathbb{Z}$ into subsets of elements of order $k$ (of which there are $\phi(k)$-many) as $k$ ranges over divisors of $n$. But everything we know about $\mathbb{Z}/n\mathbb{Z}$ comes from elementary number theory (division with remainder, bezout relations, divisibility), so the above relation should be provable without invoking the structure of the group $\mathbb{Z}/n\mathbb{Z}$. Does anyone have a nice, clear, proof which avoids $\mathbb{Z}/n\mathbb{Z}$?
Claim: The number of pairs of positive integers $(a, b)$ satisfying $a+b=n$ (for given $n$) and $\gcd(a, b) = d$, where $d \mid n$ and $d < n$, is $\phi(n/d)$. Proof: Let $a=xd$ and $b=yd$. We want the number of solutions of $x+y=\frac{n}{d}$ in positive integers with $\gcd(x, y) =1$. Since $\gcd(x,y)=\gcd(x,x+y)=\gcd(x,n/d)$, we are counting the $x$ with $1 \le x < n/d$ and $\gcd(x, n/d)=1$; as $n/d > 1$, there are exactly $\phi(n/d)$ of them. ________________________ The number of pairs of positive integers $(a, b)$ satisfying $a+b=n$ is $n-1$. But this can be counted in a different way: if $(a, b)$ is such a pair, then $\gcd(a, b) = d$ for some divisor $d$ of $n$ with $d<n$ (note that $\gcd(a,b)$ divides $a+b=n$ and is smaller than $n$). So the claim gives $$\sum_{d\mid n,\, d<n} \phi(n/d) = \sum_{m\mid n,\, m>1}\phi(m) = n-1,$$ and adding $\phi(1)=1$ to both sides yields $\sum_{d\mid n}\phi(d)=n.$
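Both the divisor-sum identity and the pair-counting claim behind it can be tested exhaustively for small $n$; a Python sketch (pairs with $\gcd(a,b)=d$ exist only for proper divisors $d<n$, since $a,b\ge 1$):

```python
from math import gcd

def phi(m):  # Euler's totient, by brute force
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

for n in range(1, 121):
    # the identity: n = Σ_{d|n} φ(d)
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n
    # the claim: #{(a,b) : a,b ≥ 1, a+b = n, gcd(a,b) = d} = φ(n/d) for d|n, d<n
    for d in (d for d in range(1, n) if n % d == 0):
        count = sum(1 for a in range(1, n) if gcd(a, n - a) == d)
        assert count == phi(n // d)
print("verified for n up to 120")
```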
{ "language": "en", "url": "https://math.stackexchange.com/questions/194705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 8, "answer_id": 6 }
The limit of the sum is the sum of the limits I was wondering why the statement in the title is true only if the functions we are dealing with are continuous. Here's the context (perhaps not required): (The upper equation there is just a limit of two sums, and the lower expression is two limits of those two sums.), and if anyone wonders, that's the original source (a pdf explaining the proof of the product rule). P.S. In the context it's given that $g$ and $f$ are differentiable, anyway I only provided it to illustrate the question; my actual question is simply general.
The limit of the sum is not always equal to the sum of the limits, even when the individual limits exist. For example: define $h_n(i)=1/\sqrt{n^2+i}$. For each fixed $i$, the limit of $h_n(i)$ is zero as $n$ goes to infinity. But the limit of the sum $h_n(1)+h_n(2)+\cdots+h_n(n)$ as $n$ goes to infinity is not zero. The limit of this sum is actually $1$, by the sandwich theorem: $$\frac{n}{\sqrt{n^2+n}}\leq \sum_{i=1}^n \frac{1}{\sqrt{n^2+i}} \leq \frac{n}{\sqrt{n^2+1}},$$ and both bounds tend to $1$.
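A numerical illustration of this counterexample (the value $n=10^6$ is arbitrary):

```python
import math

# Each term 1/sqrt(n² + i) is tiny, yet the n-term sum is essentially 1.
n = 10**6
largest = 1 / math.sqrt(n * n + 1)
total = sum(1 / math.sqrt(n * n + i) for i in range(1, n + 1))
print(largest, total)   # roughly 1e-06 and 0.9999998
```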
{ "language": "en", "url": "https://math.stackexchange.com/questions/194761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Calculate $\lim\limits_{x\to a}\frac{a^{a^{x}}-{a^{x^{a}}}}{a^x-x^a}$ Please help me solving $\displaystyle\lim_{x\to a}\frac{a^{a^{x}}-{a^{x^{a}}}}{a^x-x^a}$
The ratio is $R(x)=\dfrac{u(t)-u(s)}{t-s}$ with $u(z)=a^z$, $t=a^x$ and $s=x^a$. When $x\to a$, $t\to a^a$ and $s\to a^a$ hence $R(x)\to u'(a^a)$. Since $u(z)=\exp(z\log a)$, $u'(z)=u(z)\log a$. In particular, $u'(a^a)=u(a^a)\log a$. Since $u(a^a)=a^{a^a}$, $\lim\limits_{x\to a}R(x)=a^{a^a}\log a$.
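A quick numerical check of this answer, taking $a=2$ so that the claimed limit is $2^{2^2}\log 2 = 16\log 2 \approx 11.09$:

```python
import math

a = 2.0
R = lambda x: (a ** (a ** x) - a ** (x ** a)) / (a ** x - x ** a)

claimed = a ** (a ** a) * math.log(a)    # a^(a^a) · log a = 16·log 2
print(R(a + 1e-5), R(a - 1e-5), claimed) # both one-sided values are close
```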
{ "language": "en", "url": "https://math.stackexchange.com/questions/194813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 2 }
Linearly disjoint vs. free field extensions Consider two field extensions $K$ and $L$ of a common subfield $k$ and suppose $K$ and $L$ are both subfields of a field $\Omega$, algebraically closed. Lang defines $K$ and $L$ to be 'linearly disjoint over $k$' if any finite set of elements of $K$ that are linearly independent over $k$ stays linearly independent over $L$ (it is, in fact, a symmetric condition). Similarly, he defines $K$ and $L$ to be 'free over $k$' if any finite set of elements of $K$ that are algebraically independent over $k$ stays algebraically independent over $L$. He shows right after that if $K$ and $L$ are linearly disjoint over $k$, then they are free over $k$. Anyway, Wikipedia gives a different definition for linearly disjointness, namely $K$ and $L$ are linearly disjoint over $k$ iff $K \otimes_k L$ is a field, so I was wondering: do we have a similar description of 'free over $k$' in terms of the tensor product $K \otimes_k L$? It should be a weaker condition than $K \otimes_k L$ being a field, perhaps it needs to be a integral domain?
The condition of being linearly disjoint or free depends much on the "positions" of $K, L$ inside $\Omega$, while the isomorphism class of the $k$-algebra $K\otimes_k L$ doesn't. For instance, consider $\Omega=\mathbb C(X,Y)$, $K=\mathbb C(X)$, $L_1=\mathbb C(Y)$ and $L_2=K$. Then $$K\otimes_\mathbb C L_1\simeq K\otimes_{\mathbb C} L_2$$ as $\mathbb C$-algebras. But $K, L_1$ are linearly disjoint (so free) in $\Omega$, not $K, L_2$. This example shows that in general, neither the linear disjointness nor the freeness can be determined by intrinsic properties of $K\otimes_k L$. If $K$ or $L$ is algebraic over $k$, then it is true that linear disjointness is equivalent to $K\otimes_k L$ being a field. But in this situation the freeness is automatic whether or not the tensor product is a field (it can even be non-reduced).
{ "language": "en", "url": "https://math.stackexchange.com/questions/194875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove: If $a\mid m$ and $b\mid m$ and $\gcd(a,b)=1$ then $ab\mid m\,$ [LCM = product for coprimes] Prove: If $a\mid m$ and $b\mid m$ and $\gcd(a,b)=1$ then $ab\mid m$ I thought that $m=ab$ but I was given a counterexample in a comment below. So all I really know is $m=ax$ and $m=by$ for some $x,y \in \mathbb Z$. Also, $a$ and $b$ are relatively prime since $\gcd(a,b)=1$. One of the comments suggests to use Bézout's identity, i.e., $aq+br=1$ for some $q,r\in\mathbb{Z}$. Any more hints? New to this divisibility/gcd stuff. Thanks in advance!
Write $ax+by=1$, $m=aa'$, $m=bb'$. Let $t=b'x+a'y$. Then $abt=abb'x+baa'y=m(ax+by)=m$ and so $ab \mid m$. Edit: Perhaps this order is more natural and less magical: $m = m(ax+by) = max+mby = bb'ax+aa'by = ab(b'x+a'y)$.
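The witness $t=b'x+a'y$ can be computed explicitly with the extended Euclidean algorithm; a Python sketch exercising the construction:

```python
from math import gcd

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

for a, b in [(3, 4), (5, 12), (7, 9), (8, 15)]:
    g, x, y = ext_gcd(a, b)
    assert g == 1 and a * x + b * y == 1     # Bézout: ax + by = 1
    for m in range(1, 1000):
        if m % a == 0 and m % b == 0:        # m is a common multiple
            ap, bp = m // a, m // b          # m = a·a' = b·b'
            t = bp * x + ap * y
            assert a * b * t == m            # explicit witness: ab | m
print("verified")
```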
{ "language": "en", "url": "https://math.stackexchange.com/questions/194961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Question about Minimization Let $J$ be a convex functional defined on a Hilbert space $H$, with real values. What hypotheses should I assume so that the following problem has a solution?: $J(u) = \inf \left\{{J(v); v \in K}\right\} , u \in K$ for every closed convex $K \subset H.$ I began by using the theorem: a functional $J:E\rightarrow\mathbb{R}$ defined over a normed space $E$ is semi-continuous inferiorly if for every sequence $(u_n)_{n\in \mathbb{N}}$ converging to $u$ we have $\lim_{n\rightarrow \infty}\inf J(u_n)\geq J(u)$. But I don't know how to obtain equality.
You get equality by taking $u_n$ such that $J(u_n)\to \inf_K J$. Indeed, the weak limit is also an element of $K$ and therefore cannot have a smaller value of the functional than the infimum. The term is "lower semicontinuous", by the way. What you need from $J$ is being bounded from below, and lower semicontinuous with respect to weak convergence of sequences. And if you allow unbounded $K$, it helps to have $J\to\infty $ at infinity, because this forces the sequence $u_n$ to be bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
evaluation of the integral $\int_{0}^{x} \frac{\cos(ut)}{\sqrt{x-t}}dt $ Can the integral $$\int_{0}^{x} \frac{\cos(ut)}{\sqrt{x-t}}dt $$ be expressed in terms of elementary functions, or in terms of the sine and cosine integrals? If possible I would need a hint, thanks. From fractional calculus I guess this integral is (up to a constant) the half derivative of the sine function: $ \sqrt \pi \frac{d^{1/2}}{dx^{1/2}}\sin(ux) $ or similar. Of course I could expand the cosine into a power series and then integrate term by term, but I would like, if possible, a closed expression for my integral.
Let $t=x-y^2$. We then have $dt = -2ydy$. Hence, we get that \begin{align} I = \int_0^x \dfrac{\cos(ut)}{\sqrt{x-t}} dt & = \int_0^{\sqrt{x}} \dfrac{\cos(u(x-y^2))}{y} \cdot 2y dy\\ & = 2 \cos(ux) \int_0^{\sqrt{x}}\cos(uy^2)dy + 2 \sin(ux) \int_0^{\sqrt{x}}\sin(uy^2)dy\\ & = \dfrac{\sqrt{2 \pi}}{\sqrt{u}} \left(\cos(ux) C\left(\sqrt{\dfrac{2ux}{\pi}}\right) + \sin(ux) S\left(\sqrt{\dfrac{2ux}{\pi}}\right) \right) \end{align} where $C(z)$ and $S(z)$ are the Fresnel integrals.
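The closed form can be sanity-checked numerically, using the normalization $C(z)=\int_0^z\cos(\pi t^2/2)\,dt$, $S(z)=\int_0^z\sin(\pi t^2/2)\,dt$ for the Fresnel integrals (the parameter values $u=1$, $x=2$ are arbitrary); a Python sketch:

```python
import math

def midpoint(f, lo, hi, N=50_000):
    h = (hi - lo) / N
    return h * sum(f(lo + (j + 0.5) * h) for j in range(N))

u, x = 1.0, 2.0

# Left side, written via the substitution t = x - y² (kills the singularity):
I = midpoint(lambda y: 2 * math.cos(u * (x - y * y)), 0.0, math.sqrt(x))

# Right side, with numerically evaluated Fresnel integrals C and S:
z = math.sqrt(2 * u * x / math.pi)
C = midpoint(lambda t: math.cos(math.pi * t * t / 2), 0.0, z)
S = midpoint(lambda t: math.sin(math.pi * t * t / 2), 0.0, z)
rhs = math.sqrt(2 * math.pi / u) * (math.cos(u * x) * C + math.sin(u * x) * S)

print(I, rhs)   # the two values agree
```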
{ "language": "en", "url": "https://math.stackexchange.com/questions/195138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Inequality. $\sum{(a+b)(b+c)\sqrt{a-b+c}} \geq 4(a+b+c)\sqrt{(-a+b+c)(a-b+c)(a+b-c)}.$ Let $a,b,c$ be the side-lengths of a triangle. Prove that: I. $$\sum_{cyc}{(a+b)(b+c)\sqrt{a-b+c}} \geq 4(a+b+c)\sqrt{(-a+b+c)(a-b+c)(a+b-c)}.$$ What I have tried: \begin{eqnarray} a-b+c&=&x\\ b-c+a&=&y\\ c-a+b&=&z. \end{eqnarray} So $a+b+c=x+y+z$ and $2a=x+y$, $2b=y+z$, $2c=x+z$ and our inequality becomes: $$\sum_{cyc}{\frac{\sqrt{x}\cdot(x+2y+z)\cdot(x+y+2z)}{4}} \geq 4\cdot(x+y+z)\cdot\sqrt{xyz}. $$ Or, introducing one more notation, $S=x+y+z$, we obtain: $$\sum_{cyc}{\sqrt{x}(S+y)(S+z)} \geq 16S\cdot \sqrt{xyz} \Leftrightarrow$$ $$S^2(\sqrt{x}+\sqrt{y}+\sqrt{z})+S(y\sqrt{x}+z\sqrt{x}+x\sqrt{y}+z\sqrt{y}+x\sqrt{z}+y\sqrt{z})+xy\sqrt{z}+yz\sqrt{x}+xz\sqrt{y} \geq 16S\sqrt{xyz}.$$ To complete the proof we have to prove that: $$y\sqrt{x}+z\sqrt{x}+x\sqrt{y}+z\sqrt{y}+x\sqrt{z}+y\sqrt{z} \geq 16\sqrt{xyz}. $$ Is this last inequality true? II. Knowing that: $$p=\frac{a+b+c}{2}$$ we can rewrite the inequality: $$\sum_{cyc}{(2p-c)(2p-a)\sqrt{2(p-b)}} \geq 8p \sqrt{2^3 \cdot (p-a)(p-b)(p-c)} \Leftrightarrow$$ $$\sum_{cyc}{(2p-c)(2p-a)\sqrt{(p-b)}} \geq 16p \sqrt{(p-a)(p-b)(p-c)}$$ Does this help me? Thanks :)
Notice that the inequality proposed is proved once we establish $$\frac{(a+b)(b+c)}{\sqrt{a+b-c}\sqrt{-a+b+c}}+\frac{(b+c)(c+a)}{\sqrt{a-b+c}\sqrt{-a+b+c}}+\frac{(c+a)(a+b)}{\sqrt{a-b+c}\sqrt{a+b-c}}\geq 4(a+b+c).$$ Using AM-GM on the denominators in the LHS, we establish that * *$\sqrt{a+b-c}\sqrt{-a+b+c}\leq b$, *$\sqrt{a-b+c}\sqrt{-a+b+c}\leq c$, *$\sqrt{a-b+c}\sqrt{a+b-c}\leq a$. Therefore $$\operatorname{LHS}\geq \frac{(a+b)(b+c)}{b}+\frac{(b+c)(c+a)}{c}+\frac{(c+a)(a+b)}{a}=3(a+b+c)+\frac{ca}{b}+\frac{ab}{c}+\frac{bc}{a}$$ To finish with the proof it suffices then to prove $$\frac{ca}{b}+\frac{ab}{c}+\frac{bc}{a}\geq a+b+c.$$ This follows from AM-GM, indeed * *$$\frac{\frac{ac}{b}+\frac{ab}{c}}{2}\geq a,$$ *$$\frac{\frac{bc}{a}+\frac{ab}{c}}{2}\geq b,$$ *$$\frac{\frac{ac}{b}+\frac{bc}{a}}{2}\geq c;$$ summing up these last inequalities, we get the desired result, hence finishing the proof. Notice that the condition of $a,b,c$ being the sides of a triangle is essential in the first usage of AM-GM.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Average Distance Between Random Points on a Line Segment Suppose I have a line segment of length $L$. I now select two points at random along the segment. What is the expected value of the distance between the two points, and why?
You can picture this problem from a discrete approach, then extend it to the real number line. Among the first $L$ natural numbers, the difference between any two of them ranges from $1$ through $L-1$, and exactly $N$ pairs of numbers from our set are $L-N$ units apart. Taking that into account, we sum over all pairs: $$\sum_{x=1}^{L-1} x(L-x) = \frac{(L-1)L(L+1)}{6}.$$ Dividing by the number of pairs, $\binom{L}{2}=\frac{L(L-1)}{2}$, gives an average distance of $\frac{L+1}{3}$, which becomes $L/3$ as we allow for infinitely many more numbers in the $[0,L]$ interval.
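As an illustration (my addition), the discrete average can be checked exactly with rational arithmetic, and the continuum value $L/3$ by a seeded Monte Carlo:

```python
import random
from fractions import Fraction
from itertools import combinations

# Exact discrete check: the average distance among 1..L is (L+1)/3.
L = 100
dists = [abs(a - b) for a, b in combinations(range(1, L + 1), 2)]
avg = Fraction(sum(dists), len(dists))
print(avg)   # 101/3

# Monte Carlo check of the continuum value L/3 on [0, 3].
rng = random.Random(0)
trials = 200_000
est = sum(abs(rng.uniform(0, 3) - rng.uniform(0, 3)) for _ in range(trials)) / trials
print(est)   # ~ 1.0
```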
{ "language": "en", "url": "https://math.stackexchange.com/questions/195245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63", "answer_count": 11, "answer_id": 4 }
Open cover rationals proper subset of R? If I were to cover each rational number by a non-empty open interval, would their union always be R? It seems correct to me intuitively, but I am quite certain it is wrong. Thanks
If you enumerate the rationals as a sequence $x_1, x_2, \dots$, you can then take a sequence of open intervals $(x_1-\delta, x_1+\delta), (x_2-\delta/2, x_2+\delta/2), (x_3-\delta/4, x_3+\delta/4), \dots$ which gives an open cover for $\mathbb{Q}$ of total length $4\delta$, which can be made as small as you wish, by choosing $\delta$ sufficiently small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 0 }
Is there a geometric proof to answers about the 3 classical problems? I know that there is a solution to this topic using algebra (for example, this post). But I would like to know if there is a geometric proof to show this impossibility. Thanks.
No such proof is known. Note that this would in fact be meta-geometric: You do not give a construction of an object from givens, but you make a statement about all possible sequences of operations with your drawing instruments. Therefore it is a good idea to classify all points constructible from standard given points. This set of points has no truly geometric properties (after all, they are dense in the standard Euclidean plane, hence arbitrarily good approximations can be constructed) but nice algebraic properties (algebraic numbers with certain properties).
{ "language": "en", "url": "https://math.stackexchange.com/questions/195369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Trouble with formulation of an analytic geometry question I'm having trouble understanding a certain question, so I am asking for an explanation of it. The question is asked in a different language, so my translation will probably be mistake-ridden, I hope you guys can overlook the mistakes (and of course correct them): Show that for each $ a $ the circle $ (x-a)^2 + (y-a)^2 = a^2 $ touches the axes. This is literally how the question is formulated, I'm sure that it isn't a hard question so if one of you can explain what they mean by this question I would appreciate it!
A slightly different take on your question would be to realize that if your circle touches the $Y$ axis, it must do so at a point $(0, y)$. Substitute $x=0$ in the equation of your circle; can you find a value for $y$? The answer for touching the $X$ axis can be found in a similar way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Kernel of Linear Functionals Problem: Prove that for all non-zero linear functionals $f:M\to\mathbb{K}$, where $M$ is a vector space over the field $\mathbb{K}$, the subspace $f^{-1}(0)$ is of co-dimension one. Could someone solve this for me?
The following is a proof in the finite dimensional case: The dimension of the image of $f$ is 1 because $\textrm{im} f$ is a subspace of $\Bbb{K}$ that has dimension 1 over itself. Since $\textrm{im} f \neq 0$ it must be the whole of $\Bbb{K}$. By rank nullity, $$\begin{eqnarray*} 1 &=& \dim \textrm{im} f \\ &=& \dim_\Bbb{K} M- \dim \ker f\end{eqnarray*}$$ showing that $\ker f$ has codimension 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Bypassing a series of stochastic stoplights In order for me to drive home, I need to sequentially bypass $(S_1, S_2, ..., S_N)$ stoplights that behave stochastically. Each stoplight, $S_i$ has some individual probability $r_i$ of being red, and an associated probability, $g_i$, per minute of time of turning from red to green. What is the probability density function for the number of minutes I spend waiting at the $N$ stoplights on my way home? Update 2: The first update is incorrect since the variable $T$ is a mix of discrete and continuous measures (as Sasha noted). To generate our distribution for $T$, and assuming all lights are the same, we need to compute the weighted sum: Distribution for $x = T = \sum^N_{j=1} Pr[j$ lights are red upon approach$] * Erlang[j, g]$ Here, Pr[$j$ lights are red upon approach] is just the probability of $j$ successes in $N$ trials, where the probability of success is $r$. In the case where all the lights are unique, we perform the same sort of weighted sum with the hypoexponential distribution, where we have to account for all possible subsets of the lights, with unique $g_i$, being red. Update 1 (see update 2 first, this is incorrect!): from Raskolnikov and Sasha's comments, I'm supposing that the following is the case: If we allow all the stoplights, $S_i$, to be the same, following from (http://en.wikipedia.org/wiki/Erlang_distribution), we have an Erlang (or Gamma) distribution where $k = N$ and the rate parameter is $\lambda = \frac{g}{r}$. 
This gives us a mean waiting time at all the red lights, $x = T$ minutes, of $\frac{k}{\lambda} = \frac{N}{(\frac{g}{r})}$ and the following PDF for $x = T$: $\frac{\lambda^k x^{k-1} e^{-\lambda x}}{(k-1)!}$ = $\frac{(\frac{g}{r})^N T^{N-1} e^{-(\frac{g}{r}) T}}{(N-1)!}$ Now if all of the stoplights are not the same, following from (http://en.wikipedia.org/wiki/Hypoexponential_distribution), we have a hypoexponential distribution where $k = N$ and the rate parameters are $(\lambda_1, \lambda_2, ..., \lambda_N) = ((\frac{g_1}{r_1}), (\frac{g_2}{r_2}), ..., (\frac{g_N}{r_N}))$. This gives us a mean waiting time at all of the red lights, $x = T$ minutes, of $\sum^{k}_{i=1} \frac{1}{\lambda_i} = \sum^{N}_{i=1} \frac{1}{(\frac{g_i}{r_i})}$. I'm having trouble, however, understanding how to correctly calculate the PDF for the hypoexponential distribution. Is the above correct? (answer: no, but the means are correct)
Let $T_i$ denote the time you wait on each stop-light. Because $\mathbb{P}(T_i = 0) = 1-r_i > 0$, $T_i$ is not a continuous random variable, and thus does not have a notion of density. Likewise, the total wait-time $T = T_1+\cdots+T_N$ also has a non-zero probability of being equal to zero, and hence has no density. Incidentally, the sum of exponential random variables with different exponents is known as hypoexponential distribution.
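To make the answer's point concrete, here is a small seeded simulation under one natural reading of the model (an assumption on my part: each light is independently red with probability $r$ on arrival, and a red light's remaining red time is exponential with rate $g$ per minute). The sample mean matches $Nr/g$, and the visible point mass at $T=0$ — with probability $(1-r)^N$ — is exactly why $T$ has no density.

```python
import random

def simulate_total_wait(N, r, g, trials=100_000, seed=1):
    rng = random.Random(seed)
    total, zeros = 0.0, 0
    for _ in range(trials):
        t = 0.0
        for _ in range(N):
            if rng.random() < r:          # light is red on arrival
                t += rng.expovariate(g)   # remaining red time, rate g per minute
        total += t
        zeros += (t == 0.0)
    return total / trials, zeros / trials

mean, p_zero = simulate_total_wait(N=5, r=0.4, g=2.0)
print(mean)     # ~ N*r/g = 1.0
print(p_zero)   # ~ (1 - 0.4)**5 ≈ 0.0778, the atom at T = 0
```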
{ "language": "en", "url": "https://math.stackexchange.com/questions/195656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Publishing an article after a book? If I first publish an article, afterward I may publish a book containing materials from the article. What's about the reverse: If I first publish a book does it make sense to publish its fragment as an article AFTERWARD?
"If I first publish a book does it make sense to publish its fragment as an article AFTERWARD?" Sure, why not? You might write a book for one audience, and very usefully re-publish a fragment in the form of a journal article for another audience. I have done this with some stuff buried near the end of a long textbook book aimed at beginning grad students, which colleagues are unlikely to read, but which I thought (and the journal editor thought) was interesting enough for stand-alone publication in one of the journals. And I've done things the other way around too, taken something from a book aimed at a professional audience and reworked it as an article in a publication with a more general readership. [This was techie philosophy rather than straight maths, but I'd have thought the same principles would apply.] So I guess you'd need to think about whether you would or would not be reaching significantly different audiences. (Of course, re-publication for the sake of padding out a CV is not a good idea!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/195721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Compute expectation of certain $N$-th largest element of uniform sample A premier B-school has 2009 students.The dean,a math enthusiast,asks each student to submit a randomly chosen number between 0 and 1.She then ranks these numbers in a list of decreasing order and decides to use the 456th largest number as a fraction of students that are going to get an overall pass grade this year.What is the expected fraction of students that get a passing grade? I am not able to think in any direction. As it is really difficult to comprehend.
This is a question on order statistics. Let $U_i$ denote independent random variables, uniformly distributed on unit interval. The teacher picks $m$-th largest, or $n+1-m$-th smallest number in the sample $\{U_1, U_2, \ldots, U_n\}$, which is denoted as $U_{n-m+1:n}$. It is easy to evaluate the cumulative distribution function of $U_{n-m+1:n}$, for $0<u<1$ $$\begin{eqnarray} \mathbb{P}\left(U_{n-m+1:n} \leqslant u \right) &=& \sum_{k=n-m+1}^n \mathbb{P}\left( U_{1:n} \leqslant u, \ldots, U_{k:n} \leqslant u, U_{k+1:n} >u, \ldots, U_{n:n} >u \right) \\ &=& \mathbb{P}\left( \sum_{k=1}^n [ U_k \leqslant u] \geqslant n-m+1\right) \end{eqnarray} $$ where $[U_k \leqslant u]$ denotes the Iverson bracket. It equals 1 is the condition holds, and zero otherwise. Since $U_k$ are independent, $[U_k \leqslant u]$ are independent identically distributed $0-1$ random variables: $$ \mathbb{E}\left( [ U_k \leqslant u] \right) = \mathbb{P}\left(U_k \leqslant u\right) = u $$ The sum of $n$ iid Bernoulli random variables equals in distribution to a binomial random variable, with parameters $n$ and $u$. Thus: $$ F(u) = \mathbb{P}\left(U_{n-m+1:n} \leqslant u \right) = \sum_{k=n-m+1}^n \binom{n}{k} u^{k} (1-u)^{n-k} $$ The mean can be computed by integrating the above: $$\begin{eqnarray} \mathbb{E}\left(U_{n-m+1:n}\right) &=& \int_0^1 u F^\prime(u) \mathrm{d}u = \left. u F(u) \right|_{u=0}^{u=1} - \int_0^1 F(u) \mathrm{d} u \\ &=& 1- \sum_{k=n-m+1}^n \binom{n}{k} \int_0^1 u^{k} (1-u)^{n-k} \mathrm{d} u \\ &=& 1 - \sum_{k=n-m+1}^n \binom{n}{k} B(k+1, n-k+1) \\ &=& 1 - \sum_{k=n-m+1}^n \frac{n!}{k! (n-k)!} \cdot \frac{(k)! (n-k)!}{(n+1)!} \\ &=& 1 - \sum_{k=n-m+1}^n \frac{1}{n+1} = 1 - \frac{m}{n+1} \end{eqnarray} $$ Using $n=2009$ and $m=456$ the exact fraction equals: $$ \left.\mathbb{E}\left(U_{n-m+1:n}\right)\right|_{n=2009,m=456} = \frac{259}{335} \approx 0.77313 $$
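A seeded Monte Carlo check of the formula $\mathbb{E}(U_{n-m+1:n}) = 1 - \frac{m}{n+1}$ (a small case for speed, plus a rougher run of the actual numbers from the problem):

```python
import random

def mth_largest_mean(n, m, trials, seed=2):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = sorted(rng.random() for _ in range(n))
        total += sample[n - m]   # m-th largest = (n-m+1)-th smallest
    return total / trials

m_small = mth_largest_mean(9, 3, trials=50_000)
m_big = mth_largest_mean(2009, 456, trials=500)
print(m_small)   # formula: 1 - 3/10 = 0.7
print(m_big)     # formula: 259/335 ≈ 0.77313
```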
{ "language": "en", "url": "https://math.stackexchange.com/questions/195772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
In Need of Ideas for a Small Fractal Program I am a freshman in high school who needs a math related project, so I decided on the topic of fractals. Being an avid developer, I thought it would be awesome to write a Ruby program that can calculate a fractal. The only problem is that I am not some programming god, and I have not worked on any huge projects (yet). So I need a basic-ish fractal 'type' to do the project on. I am a very quick learner, and my math skills greatly outdo that of my peers (I was working on derivatives by myself last year). So does anybody have any good ideas? Thanks!!!! :) PS: my school requires a live resource for every project we do, so would anybody be interested in helping? :)
Maybe you want to consider iterated function systems
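One concrete starting point (my example, not the answerer's): the "chaos game" renders the attractor of an iterated function system in a dozen lines. The sketch below is Python for brevity — porting it to Ruby is mostly mechanical — and generates points on the Sierpinski triangle; feed the points to any plotting library to see the fractal.

```python
import random

def sierpinski(n_points=20_000, seed=0):
    """Chaos game: repeatedly jump halfway toward a random vertex
    of a triangle; the iterates settle onto the Sierpinski attractor."""
    rng = random.Random(seed)
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
    x, y = 0.1, 0.1
    pts = []
    for i in range(n_points):
        vx, vy = rng.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        if i > 20:                 # discard the initial transient
            pts.append((x, y))
    return pts

pts = sierpinski()
print(len(pts))
```

Each map $w_i(p)=(p+v_i)/2$ is one contraction of the IFS; swapping in different affine maps (for instance the four maps of the Barnsley fern) yields other classic fractals.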
{ "language": "en", "url": "https://math.stackexchange.com/questions/195830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Integer solutions to $ x^2-y^2=33$ I'm currently trying to solve a programming question that requires me to calculate all the integer solutions of the following equation: $x^2-y^2 = 33$ I've been looking for a solution on the internet already but I couldn't find anything for this kind of equation. Is there any way to calculate and list the integer solutions to this equation? Thanks in advance!
Hint $\ $ Like sums of squares, there is also a composition law for differences of squares, so $\rm\quad \begin{eqnarray} 3\, &=&\, \color{#0A0}2^2-\color{#C00}1^2\\ 11\, &=&\, \color{blue}6^2-5^2\end{eqnarray}$ $\,\ \Rightarrow\,\ $ $\begin{eqnarray} 3\cdot 11\, &=&\, (\color{#0A0}2\cdot\color{blue}6+\color{#C00}1\cdot 5)^2-(\color{#0A0}2\cdot 5+\color{#C00}1\cdot\color{blue}6)^2\, =\, 17^2 - 16^2\\ &=&\, (\color{#0A0}2\cdot\color{blue}6-\color{#C00}1\cdot 5)^2-(\color{#0A0}2\cdot 5-\color{#C00}1\cdot\color{blue}6)^2\, =\, \ 7^2\ -\ 4^2 \end{eqnarray}$
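Since the question came from a programming task, here is a direct way to enumerate all integer solutions (my own sketch, complementing the hint above): write $33=(x-y)(x+y)$ and run over factor pairs of matching parity.

```python
def diff_of_squares_solutions(n):
    """All integer (x, y) with x^2 - y^2 = n, for n > 0.
    From n = (x - y)(x + y): each divisor d = x - y pairs with
    e = n // d = x + y; d and e must have equal parity so that
    x = (d + e)/2 and y = (e - d)/2 come out integral."""
    sols = set()
    for d in range(1, n + 1):
        if n % d == 0:
            e = n // d
            if (d + e) % 2 == 0:
                x, y = (d + e) // 2, (e - d) // 2
                sols.update({(x, y), (x, -y), (-x, y), (-x, -y)})
    return sorted(sols)

print(diff_of_squares_solutions(33))   # the 8 solutions (±7, ±4), (±17, ±16)
```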
{ "language": "en", "url": "https://math.stackexchange.com/questions/195904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 0 }
What is the point of the Thinning Rule? I am studying predicate calculus on some lecture notes on my own. I have a question concerning a strange rule of inference called the Thinning Rule which is stated from the writer as the third rule of inference for the the formal system K$(L)$ (after Modus Ponens and the Generalisation Rule): TR) $ $ if $\Gamma \vdash \phi$ and $\Gamma \subset \Delta$, then $\Delta \vdash \phi$. Well, it seems to me that TR is not necessary at all since it is easily proven from the very definition of formal proof (without TR, of course). I am not able to see what is the point here. The Notes are here http://www.maths.ox.ac.uk/system/files/coursematerial/2011/2369/4/logic.pdf (page 14-15)
After a lot of research here and there I think I have found the correct answer, thanks to Propositional and Predicate Calculus by Derek Goldrei. So I will try to answer my own question. The fact is that when we are dealing with Predicate Calculus we have the following Generalization Rule: GR) If $x_i$ is not free in any formula in $\Gamma$, then from $\Gamma \vdash \phi$ infer $\Gamma \vdash \forall x_i \phi$. So we easily see that the Thinning Rule TR) $ $ If $\Gamma \vdash \phi$ and $\Gamma \subset \Delta$, then $\Delta \vdash \phi$. is a metatheorem of the Propositional Calculus (where no quantifications, and hence no Generalization Rule, occur), but the easy argument for it breaks down in the Predicate Calculus. As a matter of fact, it can happen that $x_i$ has a free occurrence in some formula $\psi$ with $\psi \in \Delta$ but $\psi \notin \Gamma$, where $\Gamma \subset \Delta$. In such a case (without TR), if we have obtained $\Gamma \vdash \forall x_i \phi$ from $\Gamma \vdash \phi$ by GR, we cannot conclude that $\Delta \vdash \forall x_i \phi$, because we can no longer apply GR with $\Delta$ as the premise set. This is the reason for the Thinning Rule.
{ "language": "en", "url": "https://math.stackexchange.com/questions/195942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Analysis problem with volume I'm looking for a complete answer to this problem. Let $U,V\subset\mathbb{R}^d$ be open sets and $\Phi:U\rightarrow V$ be a homeomorphism. Suppose $\Phi$ is differentiable in $x_0$ and that $\det D\Phi(x_0)=0$. Let $\{C_n\}$ be a sequence of open(or closed) cubes in $U$ such that $x_0$ is inside the cubes and with its sides going to $0$ when $n\rightarrow\infty$. Denoting the $d$-dimension volume of a set by $\operatorname{Vol}(.)$, show that $$\lim_{n\rightarrow\infty}\frac{\operatorname{Vol}(\Phi(C_n))}{\operatorname{Vol}(C_n)}=0$$ I know that $\Phi$ cant be a diffeomorphism in $x_0$, but a have know idea how to use this, or how to do anything different. Thanks for helping.
Assume $x_0=\Phi(x_0)=0$, and put $d\Phi(0)=:A$. By assumption the matrix $A$ (or $A'$) has rank $\leq d-1$; therefore we can choose an orthonormal basis of ${\mathbb R}^d$ such that the first row of $A$ is $=0$. With respect to this basis $\Phi$ assumes the form $$\Phi:\quad x=(x_1,\ldots, x_d)\mapsto(y_1,\ldots, y_d)\ ,$$ and we know that $$y_i(x)=a_i\cdot x+ o\bigl(|x|\bigr)\qquad(x\to 0)\ .$$ Here the $a_i$ are the row vectors of $A$, whence $a_1=0$. Let an $\epsilon>0$ be given. Then there is a $\delta>0$ with $$\bigl|y_1(x)\bigr|\leq \epsilon|x|\qquad\bigl(|x|\leq\delta\bigr)\ .$$ Furthermore there is a constant $C$ (not depending on $\epsilon$) such that $$\bigl|y(x)\bigr|\leq C|x|\qquad\bigl(|x|\leq\delta\bigr)\ .$$ Consider now a cube $Q$ of side length $r>0$ containing the origin. Its volume is $r^d$. When $r\sqrt{d}\leq\delta$ all points $x\in Q$ satisfy $|x|\leq\delta$. Therefore the image body $Q':=\Phi(Q)$ is contained in a box with center $0$, having side length $2\epsilon r\sqrt{d}$ in $y_1$-direction and side length $2C\sqrt{d}\>r$ in the $d-1$ other directions. It follows that $${{\rm vol}_d(Q')\over{\rm vol}_d(Q)}\leq 2^d\ d^{d/2}\> C^{d-1}\ \epsilon\ .$$ From this the claim easily follows by some juggling of $\epsilon$'s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/196007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
how to calculate the exact value of $\tan \frac{\pi}{10}$ I have an extra homework: to calculate the exact value of $ \tan \frac{\pi}{10}$. From WolframAlpha calculator I know that it's $\sqrt{1-\frac{2}{\sqrt{5}}} $, but i have no idea how to calculate that. Thank you in advance, Greg
Your textbook probably has an example, where $\cos(\pi/5)$ (or $\sin(\pi/5)$) has been worked out. I betcha it also has formulas for $\sin(\alpha/2)$ and $\cos(\alpha/2)$ expressed in terms of $\cos\alpha$. Take it from there.
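Carrying out the hint numerically (my addition): with the worked value $\cos(\pi/5)=\frac{1+\sqrt5}{4}$ and the half-angle identity $\tan^2(\alpha/2)=\frac{1-\cos\alpha}{1+\cos\alpha}$, the claimed closed form drops out.

```python
import math

c = (1 + math.sqrt(5)) / 4                # cos(pi/5)
t = math.sqrt((1 - c) / (1 + c))          # tan(pi/10) via the half-angle formula
print(t)
print(math.tan(math.pi / 10))
print(math.sqrt(1 - 2 / math.sqrt(5)))    # WolframAlpha's form
```

Rationalizing $\frac{1-c}{1+c}=\frac{3-\sqrt5}{5+\sqrt5}$ indeed simplifies to $1-\frac{2}{\sqrt5}$.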
{ "language": "en", "url": "https://math.stackexchange.com/questions/196067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 0 }
How to prove if function is increasing Need to prove that function $$P(n,k)=\binom{n}{k}= \frac{n!}{k!(n-k)!}$$ is increasing when $\displaystyle k\leq\frac{n}{2}$. Is this inductive maths topic?
As $n$ increases with $k$ fixed, $P$ increases: passing from $n$ to $n+1$ multiplies the numerator $n!$ by $n+1$, but multiplies the factor $(n-k)!$ in the denominator only by the smaller number $n+1-k$. Hence $$\frac{P(n+1,k)}{P(n,k)}=\frac{n+1}{n+1-k}\geq 1,$$ so $P$ increases as $n$ increases. As $k$ increases, $k!$ increases while $(n-k)!$ decreases, so the question is which one changes at the greater rate. Define the functions $A(x)=x!$ and $B(x)=(n-x)!$, so that the denominator of $P(n,k)$ is $A(k)B(k)$. Then $$A(k+1)B(k+1)=(k+1)\,k!\cdot\frac{(n-k)!}{n-k}=A(k)B(k)\cdot\frac{k+1}{n-k}.$$ So $P(n,k+1)\geq P(n,k)$ exactly when $\frac{k+1}{n-k}\leq 1$, i.e. when $2k+1\leq n$. This holds for every step $k\to k+1$ with $k+1\leq n/2$, since then $2k+1\leq n-1$. Therefore $P$ is an increasing function of both $n$ and $k$ on the stated range.
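The two monotonicity claims are easy to confirm exhaustively for small parameters:

```python
from math import comb

for n in range(2, 60):
    # increasing in k: every step k -> k+1 that stays within k+1 <= n/2
    for k in range(n // 2):
        assert comb(n, k) <= comb(n, k + 1)
    # increasing in n at fixed k
    for k in range(n // 2 + 1):
        assert comb(n, k) <= comb(n + 1, k)
print("ok")
```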
{ "language": "en", "url": "https://math.stackexchange.com/questions/196100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Bi-Lipschitzity of maximum function Assume that $f(re^{it})$ is a bi-Lipschitz of the closed unit disk onto itself with $f(0)=0$. Is the function $h(r)=\max_{0\le t \le 2\pi}|f(re^{it})|$ bi-Lipschitz on $[0,1]$?
It is easy to prove that $h$ is Lipschitz whenever $f$ is. Indeed, we simply take the supremum of the uniformly Lipschitz family of functions $\{f_t\}$, where $f_t(r)=|f(re^{it})|$. Also, $h$ is bi-Lipschitz whenever $f$ is. Let $D_r$ be the closed disk of radius $r$. Let $L$ be the Lipschitz constant of the inverse $f^{-1}$. The preimage under $f$ of the $\epsilon/L$-neighborhood of $f(D_r)$ is contained in $D_{r+\epsilon}$. Therefore, $h(r+\epsilon)\ge h(r)+\epsilon/L$, which means the inverse of $h$ is also Lipschitz. Answer to the original question: is $h$ smooth? No, it's no better than Lipschitz. I don't feel like drawing an elaborate picture, but imagine the concentric circles being mapped onto circles with two smooth "horns" on opposite sides. For some values of $r$ the left horn is longer, for others the right horn is longer. Your function $h(r)$ ends up being the maximum of two smooth functions (lengths of horns). This makes it non-differentiable at the values of $r$ where one horn overtakes the other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Using the integral definition Possible Duplicate: Natural Logarithm and Integral Properties I was asked to prove that ln(xy) = ln x + ln y using the integral definition. While I'm not asking for any answers on the proof, I was wondering how to interpret and set-up this proof using the "integral definition" (As I am unsure what that means.) EDIT And to prove that ln(x/y) = ln x - ln y Is it right to say this? $$\ln(\frac{x}{y})=\int_1^{\frac{x}{y}} \frac{dt}{t}=\int_1^x \frac{dt}{t}-\int_x^{\frac{x}{y}}\frac{dt}{t}.$$
By definition, $$\ln w=\int_1^w \frac{dt}{t}.$$ Thus $$\ln(xy)=\int_1^{xy} \frac{dt}{t}=\int_1^x \frac{dt}{t}+\int_x^{xy}\frac{dt}{t}.$$ Now make an appropriate change of variable to conclude that the last integral on the right is equal to $\ln y$.
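A quick numerical illustration of the argument (my addition; midpoint-rule quadrature stands in for the integral definition, and the test point $x=3.7$, $y=2.2$ is arbitrary):

```python
import math

def log_integral(b, n=100_000):
    """ln(b) as the integral of dt/t from 1 to b (midpoint rule), b >= 1."""
    h = (b - 1) / n
    return sum(h / (1 + (i + 0.5) * h) for i in range(n))

x, y = 3.7, 2.2
lhs = log_integral(x * y)
rhs = log_integral(x) + log_integral(y)
print(lhs, rhs, math.log(x * y))
```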
{ "language": "en", "url": "https://math.stackexchange.com/questions/197327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does $\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3)=\pi$? Playing around on wolframalpha shows $\tan^{-1}(1)+\tan^{-1}(2)+\tan^{-1}(3)=\pi$. I know $\tan^{-1}(1)=\pi/4$, but how could you compute that $\tan^{-1}(2)+\tan^{-1}(3)=\frac{3}{4}\pi$ to get this result?
Consider, $z_1= \frac{1+2i}{\sqrt{5}}$, $z_2= \frac{1+3i}{\sqrt{10} }$, and $z_3= \frac{1+i}{\sqrt{2} }$, then: $$ z_1 z_2 z_3 = \frac{1}{10} (1+2i)(1+3i)(1+i)=-1 $$ Take the argument of both sides and use the property that $\arg(z_1 z_2 z_3) = \arg(z_1) + \arg(z_2) + \arg(z_3)$ (valid modulo $2\pi$; since each $\arg z_i$ lies in $(0,\pi/2)$, the sum lies in $(0,3\pi/2)$ and therefore equals $\arg(-1)=\pi$ exactly): $$ \arg(z_1) + \arg(z_2) + \arg(z_3) = \pi$$ The LHS we can write as: $$ \tan^{-1} ( \frac{2}{1}) +\tan^{-1} ( \frac{3}{1} ) + \tan^{-1} (1) = \pi$$ Tl;dr: Complex number multiplication corresponds to tangent angle addition
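A one-liner confirms both the sum and the fact that the (unnormalized) product is real and negative:

```python
import math

total = math.atan(1) + math.atan(2) + math.atan(3)
print(total - math.pi)     # ~ 0

z = (1 + 2j) * (1 + 3j) * (1 + 1j)
print(z)                   # (-10+0j)
```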
{ "language": "en", "url": "https://math.stackexchange.com/questions/197393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67", "answer_count": 6, "answer_id": 5 }
If $\int_0^\infty f\text{d}x$ exists, does $\lim_{x\to\infty}f(x)=0$? Are there examples of functions $f$ such that $\int_0^\infty f\text{d}x$ exists, but $\lim_{x\to\infty}f(x)\neq 0$? I curious because I know for infinite series, if $a_n\not\to 0$, then $\sum a_n$ diverges. I'm wondering if there is something similar for improper integrals.
If $\lim_{x\to+\infty}f(x)=l>0$, then $\exists M>0:l-\varepsilon<f(x)<l+\varepsilon\quad \forall x>M$, and so $$ \int_M^{+\infty}f(x)dx>\int_M^{+\infty}(l-\varepsilon)dx=+\infty $$ if $\varepsilon$ is sufficiently small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Balanced but not convex? In a topological vector space $X$, a subset $S$ is convex if \begin{equation}tS+(1-t)S\subset S\end{equation} for all $t\in (0,1)$. $S$ is balanced if \begin{equation}\alpha S\subset S\end{equation} for all $|\alpha|\le 1$. So if $S$ is balanced then $0\in S$, $S$ is uniform in all directions and $S$ contains the line segment connecting 0 to another point in $S$. Due to the last condition it seems to me that balanced sets are convex. However I cannot prove this, and there are also evidence suggesting the opposite. I wonder whether there is an example of a set that is balanced but not convex. Thanks!
The interior of a regular pentagram centered at the origin is balanced but not convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Entire functions representable in power series How to prove that an entire function $f$, whose power series expansion about every point has at least one coefficient equal to $0$, is a polynomial?
Define $F_n:=\{z\in \Bbb C, f^{(n)}(z)=0\}$. Since for each $n$, $f^{(n)}$ is holomorphic it's in particular continuous, hence $F_n$ is closed. Since we can write at each $z_0$, $f(z)=\sum_{k=0}^{+\infty}\frac{f^{(k)}(z_0)}{k!}(z-z_0)^k$, the hypothesis implies that $\bigcup_{n\geq 0}F_n=\Bbb C$. As $\Bbb C$ is complete, by Baire's categories theorem, one of the $F_n$ has a non empty interior, say $F_N$. Then $f^{(N)}(z)=0$ for all $z\in B(z_0,r)$, for some $z_0\in \Bbb C$ and some $r>0$. As $B(z_0,r)$ is not discrete and $\Bbb C$ is connected, we have $f^{(N)}\equiv 0$, hence $f$ is a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Semi-direct product of different groups make a same group? We can prove that both of: $S_3=\mathbb Z_3\rtimes\mathbb Z_2$ and $\mathbb Z_6=\mathbb Z_3\rtimes\mathbb Z_2$ So two different groups (not isomorphic, in the examples above) can be described as semi-direct products of the same pair of groups ($\mathbb Z_3$ and $\mathbb Z_2$). I hope my question makes sense: Is there any group which can be described as a semi-direct product of two different pairs of groups? Thanks.
Such a group of smallest order is $D_8$, the Dihedral group of order 8. Write $D_8=\langle x,y\colon x^4=y^2=1, y^{-1}xy=x^{-1}\rangle=\{1,x,x^2,x^3, y,xy,x^2y,x^3y \}$. * *$H=\langle x\rangle$,$K=\langle y\rangle$, then $D_8=H\rtimes K\cong C_4\rtimes C_2$. *$H=\langle x^2,y\rangle$, $K=\langle xy\rangle$, then $D_8=H\rtimes K\cong (C_2\times C_2)\rtimes C_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
$\int\frac{x^3}{\sqrt{4+x^2}}$ I was trying to calculate $$\int\frac{x^3}{\sqrt{4+x^2}}$$ Doing $x = 2\tan(\theta)$, $dx = 2\sec^2(\theta)~d\theta$, $-\pi/2 < 0 < \pi/2$ I have: $$\int\frac{\left(2\tan(\theta)\right)^3\cdot2\cdot\sec^2(\theta)~d\theta}{2\sec(\theta)}$$ which is $$8\int\tan(\theta)\cdot\tan^2(\theta)\cdot\sec(\theta)~d\theta$$ now I got stuck ... any clues what's the next substitution to do? I'm sorry for the formatting. Could someone please help me with the formatting?
You have not chosen an efficient way to proceed. However, let us continue along that path. Note that $\tan^2\theta=\sec^2\theta-1$. So you want $$\int 8(\sec^2\theta -1)\sec\theta\tan\theta\,d\theta.$$ Let $u=\sec\theta$. Remark: My favourite substitution for this problem and close relatives is a variant of the one used by Ayman Hourieh. Let $x^2+4=u^2$. Then $2x\,dx=2u\,du$, and $x^2=u^2-4$. So $$\int \frac{x^3}{\sqrt{x^2+4}}\,dx=\int \frac{(u^2-4)u}{u}\,du=\int (u^2-4)\,du.$$
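Finishing the remark's substitution gives $\int(u^2-4)\,du = \frac{u^3}{3}-4u$ with $u=\sqrt{x^2+4}$, so a candidate antiderivative is $F(x)=\frac{(x^2+4)^{3/2}}{3}-4\sqrt{x^2+4}$ (this last step is my completion, not the answerer's). A numerical derivative check:

```python
import math

def F(x):
    # candidate antiderivative from the u = sqrt(x^2 + 4) substitution
    u = math.sqrt(x * x + 4)
    return u ** 3 / 3 - 4 * u

def f(x):
    return x ** 3 / math.sqrt(4 + x * x)

for x in (-2.0, 0.5, 1.7, 3.0):
    h = 1e-6
    central = (F(x + h) - F(x - h)) / (2 * h)   # central difference ≈ F'(x)
    print(x, central, f(x))
```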
{ "language": "en", "url": "https://math.stackexchange.com/questions/197744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
What is the value of $w+z$ if $1<w<x<y<z$? I am having trouble solving the following problem: If the product of the integers $w,x,y,z$ is $770$, and if $1<w<x<y<z$, what is the value of $w+z$? (ans = $13$) Any suggestions on how I could solve this problem?
Find the prime factorization of the number. That is always a great place to start when you have a problem involving a product of integer. Now here you are lucky, you find $4$ prime numbers to the power of one, so you know your answer is unique.
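In code, the hint amounts to three steps: factor, confirm four distinct primes to the first power, sum the extremes.

```python
def prime_factors(n):
    # trial division; returns prime factors with multiplicity, in order
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

fs = prime_factors(770)
print(fs)                # [2, 5, 7, 11]
print(fs[0] + fs[-1])    # w + z = 13
```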
{ "language": "en", "url": "https://math.stackexchange.com/questions/197820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Maclaurin expansion of $\arcsin x$ I'm trying to find the first five terms of the Maclaurin expansion of $\arcsin x$, possibly using the fact that $$\arcsin x = \int_0^x \frac{dt}{(1-t^2)^{1/2}}.$$ I can only see that I can interchange differentiation and integration but not sure how to go about this. Thanks!
As has been mentioned in other answers, the series for $\frac1{\sqrt{1-x^2}}$ is most easily found by substituting $x^2$ into the series for $\frac1{\sqrt{1-x}}$. But for fun we can also derive it directly by differentiation. To find $\frac{\mathrm d^n}{\mathrm dx^n}\frac1{\sqrt{1-x^2}}$ at $x=0$, note that any factors of $x$ in the numerator produced by differentiating the denominator must be differentiated by some later differentiation for the term to contribute at $x=0$. Thus the number of contributions is the number of ways to group the $n$ differential operators into pairs, with the first member of each pair being applied to the numerator and the second member being applied to the factor $x$ produced by the first. This number is non-zero only for even $n=2k$, and in that case it is $\frac{(2k)!}{2^kk!}$. Each such grouping accumulates further factors $1\cdot3\cdot\cdots\cdot(2k-1)=\frac{(2k)!}{2^kk!}$ from the exponents in the denominator. Thus the value of the $n$-th derivative at $x=0$ is $\frac{(2k)!^2}{4^k(k!)^2}$, so the Maclaurin series is $$ \frac1{\sqrt{1-x^2}}=\sum_{k=0}^\infty\frac1{(2k)!}\frac{(2k)!^2}{4^k(k!)^2}x^{2k}=\sum_{k=0}^\infty\frac{\binom{2k}k}{4^k}x^{2k}\;. $$ Then integrating yields $$ \arcsin x=\sum_{k=0}^\infty\frac{\binom{2k}k}{4^k(2k+1)}x^{2k+1}\;. $$
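The final series is easy to check against `math.asin` (partial sums converge for $|x|<1$):

```python
import math

def arcsin_series(x, terms=60):
    return sum(math.comb(2 * k, k) / (4 ** k * (2 * k + 1)) * x ** (2 * k + 1)
               for k in range(terms))

for x in (0.0, 0.1, -0.3, 0.5):
    print(x, arcsin_series(x), math.asin(x))
```

Reading off the first terms gives $x + \frac{x^3}{6} + \frac{3x^5}{40} + \cdots$, matching the standard expansion.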
{ "language": "en", "url": "https://math.stackexchange.com/questions/197874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 8, "answer_id": 0 }
3-dimensional array I apologize if my question is ill posed as I am trying to grasp this material and poor choice of tagging such question. At the moment, I am taking an independent studies math class at my school. This is not a homework question, but to help further my understanding in this area. I've been looking around on the internet for some understanding on the topic of higher dimensional array. I'm trying to see what the analogue of transposing a matrix is in 3-dimensional array version. To explicitly state my question, can you transpose a 3-dimensional array? What does it look like? I know for the 2d version, you just swap the indices. For example, given a matrix $A$, entry $a_{ij}$ is sent to $a_{ji}$, vice versa. I'm trying to understand this first to then answer a question on trying to find a basis for the space of $n \times n \times n$ symmetric array. Thank you for your time.
There is no single transformation corresponding to taking the transpose. The reason is that while there is only one non-identity permutation of a pair of indices, there are five non-identity permutations of three indices. There are two that leave none of them fixed: one takes $a_{ijk}$ to $a_{jki}$, the other to $a_{kij}$. Exactly what permutations will matter for your underlying question depends on how you’re defining symmetric for a three-dimensional array.
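To see the permutations concretely, NumPy's `transpose` takes an explicit axis permutation — a small sketch, assuming NumPy is available:

```python
import numpy as np
from itertools import permutations

a = np.arange(27).reshape(3, 3, 3)
perms = list(permutations((0, 1, 2)))   # 6 permutations, 5 non-identity

# b = np.transpose(a, p) satisfies b[i, j, k] = a[(i, j, k) rearranged by p]
b = np.transpose(a, (0, 2, 1))          # b[i, j, k] = a[i, k, j]
c = np.transpose(a, (1, 2, 0))          # c[i, j, k] = a[k, i, j]
print(a[0, 1, 2], b[0, 2, 1], c[1, 2, 0])  # all three equal

# one natural definition of "symmetric": invariant under all six permutations
sym = sum(np.transpose(a, p) for p in perms) / 6.0
print(all(np.allclose(np.transpose(sym, p), sym) for p in perms))  # True
```

The symmetrization in the last two lines works because composing two axis permutations is again an axis permutation, so averaging over the whole group produces an invariant array.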
{ "language": "en", "url": "https://math.stackexchange.com/questions/197934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A dubious proof using Weierstrass-M test for $\sum^n_{k=1}\frac{x^k}{k}$ I have been trying to prove the uniform convergence of the series $$f_{n}(x)=\sum^n_{k=1}\frac{x^k}{k}$$ Obviously, the series converges only for $x\in(-1,1)$. Consequently, I decided to split this into two intervals: $(-1,0]$ and $[0,1)$ and see if it converges on both of them using the Weierstrass M-test. For $x\in(-1,0]$, let's take $q\in(-1,x)$. We thus have: $$\left|\frac{x^k}{k}\right|\leq\left|x^k\right|\leq\left|q^k\right|$$ and since $\sum|q^n|$ is convergent, $f_n$ should be uniformly convergent on the given interval. Now let's take $x\in[0,1)$ and $q\in(x,1)$. Now, we have: $$\left|\frac{x^k}{k}\right|=\frac{x^k}{k}\leq\ x^k\leq{q^k}$$ and once again, we obtain the uniform convergence of $f_n$. However, not sure of my result, I decided to cross-check it by checking whether $f_n$ is Cauchy. For $x\in(-1,0]$, I believe it was a positive hit, since for $m>n$ we have: $$\left|f_{m}-f_{n}\right|=\left|f_{n+1}+f_{n+2}+...f_{m}\right|\leq\left|\frac{x^n}{n}\right|\leq\frac{1}{n}$$ which is what we needed. However, I haven't been able to come up with a method to show the same for $x\in[0,1)$. Now, I am not so sure whether $f_n$ is uniformly convergent on $[0,1)$. If it is, then how can we show it otherwise, and if it isn't, then how can we disprove it? Also, what's equally important - what did I do wrong in the Weierstrass-M test?
Weierstrass M-test only gives you uniform convergence on intervals of the form $[-q,q]$, where $0<q<1$. Your proof shows this. You also get uniform convergence on the interval $[-1,0]$, but to see this you need other methods. For example the standard estimate for the cut-off error of a monotonically decreasing alternating series will work here. As David Mitra pointed out, the convergence is not uniform on the interval $[0,1)$. Elaborating on his argument: No matter how large an $n$ we choose, the divergence of the harmonic series tells us that $\sum_{k=n+1}^{n+p}(1/k)>2$ for $p$ large enough. We can then select a number $a\in(0,1)$ such that the powers $a^k, n<k\le n+p$ are all larger than $1/2$. Then it follows that for all $x\in(a,1)$ $$ \sum_{k=n+1}^{n+p}\frac{x^k}k\ge \sum_{k=n+1}^{n+p}\frac{a^k}k>\frac12\sum_{k=n+1}^{n+p}\frac1k>1. $$ Thus the Cauchy condition fails on the subinterval $(a,1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/197977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Which axiom shows that a class of all countable ordinals is a set? As stated in the title, which axiom in ZF shows that a class of all countable (or any cardinal number) ordinals is a set? Not sure which axiom, that's all.
A combination of them, actually. The proof I've seen requires power set and replacement, and I think union or pairing, too, but I can't recall off the top of my head. I can post an outline of that proof, if you like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
From set of differential equations to set of transfer functions (MIMO system) I want to know how I can get from a set of differential equations to a set of transfer functions for a multi-input multi-output system. I can do this easily with Matlab or by computing $G(s) = C[sI - A]^{-1}B + D$. I have the following two equations: $$ \ddot{y}_1 + 2\dot{y}_1 + \dot{y}_2 + u_1 = 0 \\ \dot{y}_2 - y_2 + u_2 - \dot{u}_1 = 0 $$There are 2 inputs, $y_i$, and 2 outputs, $u_i$. At first I thought when I want to retrieve the transfer function from $y_1$ to $u_1$ that I had to set $y_2$ and $u_2$ equal to zero. Thus I would have been left with, $\ddot{y}_1 + 2\dot{y}_1 + u_1 = 0$ and $\dot{u}_1 = 0$. However this does not lead to the correct answer, $$ y_1 \rightarrow u_1: \frac{-s^2 - s + 1}{s^3 + s^2 - 2 s} $$ I also thought about substituting the two formulas in each other. So expressing $y_2$ and $u_2$ in terms of $y_1$ and $u_1$ however this also lead to nothing. Can someone explain to me how to obtain the 4 transfer functions, $y_1 \rightarrow u_1$, $y_1 \rightarrow u_2$, $y_2 \rightarrow u_1$ and $y_2 \rightarrow u_2$?
I am guessing that you are looking for the transfer function from $u$ to $y$; this would be consistent with current nomenclature. Taking Laplace transforms gives $$ (s^2+2s) \hat{y_1} + s\hat{y_2} + \hat{u_1} = 0\\ (s-1)\hat{y_2} + \hat{u_2}-s \hat{u_1} = 0 $$ Solving algebraically gives $$\hat{y_1} = \frac{1-s-s^2}{s(s+2)(s-1)} \hat{u_1} + \frac{1}{(s+2)(s-1)}\hat{u_2} \\ \hat{y_2} = \frac{s}{s-1} \hat{u_1} -\frac{1}{s-1} \hat{u_2} $$ from which all four transfer functions can be read off.
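As a cross-check, the same Laplace-domain system can be solved symbolically — a sketch assuming SymPy is available. Note in particular that a factor of $s$ cancels in the $\hat u_2 \to \hat y_1$ term:

```python
import sympy as sp

s, y1, y2, u1, u2 = sp.symbols('s y1 y2 u1 u2')

# Laplace-transformed equations (zero initial conditions assumed)
sol = sp.solve([(s**2 + 2*s)*y1 + s*y2 + u1,
                (s - 1)*y2 + u2 - s*u1], [y1, y2])

G11 = sp.simplify(sol[y1].subs({u1: 1, u2: 0}))   # u1 -> y1
G12 = sp.simplify(sol[y1].subs({u1: 0, u2: 1}))   # u2 -> y1
G21 = sp.simplify(sol[y2].subs({u1: 1, u2: 0}))   # u1 -> y2
G22 = sp.simplify(sol[y2].subs({u1: 0, u2: 1}))   # u2 -> y2
print(G11)   # (1 - s - s**2) / (s (s+2) (s-1)), up to rearrangement
print(G12)   # 1 / ((s+2) (s-1)) -- the factor of s cancels here
print(G21, G22)
```

Extracting each transfer function by substituting a unit for one input and zero for the other works because the solved expressions are linear in $\hat u_1$, $\hat u_2$.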
{ "language": "en", "url": "https://math.stackexchange.com/questions/198110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do I simplify this limit with function equations? $$\lim_{x \to 5} \frac{f(x^2)-f(25)}{x-5}$$ Assuming that $f$ is differentiable for all $x$, simplify. (It does not say what $f(x)$ is at all) My teacher has not taught us any of this, and I am unclear about how to proceed.
$f$ is differentiable, so $g(x) = f(x^2)$ is also differentiable. Let's find the derivative of $g$ at $x = 5$ using the definition. $$ g'(5) = \lim_{x \to 5} \frac{g(x) - g(5)}{x - 5} = \lim_{x \to 5} \frac{f(x^2) - f(25)}{x - 5} $$ Now write $g'(5)$ in terms of $f$ to get the desired result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Basic set questions I would really appreciate it if you could explain the set notation here $$\{n ∈ {\bf N} \mid (n > 1) ∧ (∀x,y ∈ {\bf N})[(xy = n) ⇒ (x = 1 ∨ y = 1)]\}$$ 1) What does $∀x$ mean? 2) I understand that $n ∈ {\bf N} \mid (n > 1) ∧ (∀x,y ∈ {\bf N})$ means $n$ is part of set $\bf N$ such that $(n > 1) ∧ (∀x,y ∈ {\bf N})$. What do the $[\;\;]$ and $⇒$ mean? 3) Prove that if $A ⊆ B$ and $B ⊆ C$, then $A ⊆ C$ I could prove it by drawing a Venn diagram but is there a better way?
1) $(\forall x)$ is the universal quantifier. It means "for all $x$". 2) $[\;]$ is the same as parentheses. Probably the author did not want to use too many round parentheses because it would get too confusing. $\Rightarrow$ means "implies". 3) Suppose $x \in A$. Since $A \subset B$, by definition $x \in B$. Since $B \subset C$, then $x \in C$. So $x \in A$ implies $x \in C$. This is precisely the definition of $A \subset C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Example where $f\circ g$ is bijective, but neither $f$ nor $g$ is bijective Can anyone come up with an explicit example of two functions $f$ and $g$ such that: $f\circ g$ is bijective, but neither $f$ nor $g$ is bijective? I tried the following: $$f:\mathbb{R}\rightarrow \mathbb{R^{+}} $$ $$f(x)=x^{2}$$ and $$g:\mathbb{R^{+}}\rightarrow \mathbb{R}$$ $$g(x)=\sqrt{x}$$ $f$ is not injective, and $g$ is not surjective, but $f\circ g$ is bijective Any other examples?
If we define $f:\mathbb{R}^2 \to \mathbb{R}$ by $f(x,y) = x$ and $g:\mathbb{R} \to \mathbb{R}^2$ by $g(x) = (x,0)$ then $f \circ g : \mathbb{R} \to \mathbb{R}$ is bijective (it is the identity) but $f$ is not injective and $g$ is not surjective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 7, "answer_id": 0 }
Graphing Cubic Functions I'm having a Little bit of trouble in Cubic Functions, especially when i need to graph the Turning Point, Y-intercepts, X-intercepts etc. My class teacher had told us to use Gradient Method: lets say: $$f(x)=x^3+x^2+x+2$$ We can turn this equation around by using the Gradient Method: $$f'(x)=3x^2+2x+1$$ so it a quadratic equation. But i would like to find out more about this method too, if anyone knows. Basically i am not good at sketching graphs so if anyone has a website that might help me find out more about cubic functions and how to graph them, or if anyone can help me out, i'll be thankful. Thanks.
I used to think that the Gradient Method is for plotting functions in 2 variables. However, this answer may give you some pointers. You could start by examining the function domain. In your case, all $x$ values are valid candidates. Next, set $x=0$ then $y=0$ to get the intercepts. Setting $x=0$ yields $y=2$, so the point $(0,2)$ is on your graph. Now setting $y=0$ means that you need to solve the following for $x$: $$x^3+x^2+x+2=0$$ Solving such equations is sometimes obvious at least for the first root, but in this case, it is not. You either follow a numeric method or use a calculator such as Cubic Eqn Solver or use the formula in Wolfram-Cubic equation formula or Wiki-Cubic Function. To get a plot of the function we'll just use the real root value found by either of the above methods, so the point $(-1.35320,0)$ is on the function graph as well. Note that the other 2 roots are complex, hence the function intersects the x-axis only once. Now we can move on to finding the critical points so that you can determine the concavity of the function. Using the derivative method for testing, you can determine the local minimum and maximum of the function. The subject is a bit lengthy to include in detail here; I suggest you read about it in a book such as (page 191 and above of): Google Books - Calculus of single variable. In your case, the first derivative has no real roots. The second derivative $$6x+2=0$$ has a root at $x=-0.333$; this indicates that $(-0.333,1.741)$ is a point of inflection. To further study the shape, take 2 points immediately before and after the point of inflection to determine the shape of the curve around this point. Use the information obtained so far together with a few other points to determine the approximate curve shape. There is a free web-based graph plotter at Desmos-Graph Plotter - Calculator that may also be useful for you. Here is a sample showing the function and its first derivative:
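For the numeric route mentioned above, a short Python sketch (assuming NumPy) finds the real root, checks that there are no real critical points, and locates the inflection point:

```python
import numpy as np

coeffs = [1, 1, 1, 2]                    # x^3 + x^2 + x + 2
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(real_roots)                        # ≈ [-1.35321], the lone x-intercept

disc = 2**2 - 4*3*1                      # discriminant of f'(x) = 3x^2 + 2x + 1
print(disc)                              # -8 < 0: no real critical points

x_inf = -1/3                             # from f''(x) = 6x + 2 = 0
print(x_inf, np.polyval(coeffs, x_inf))  # inflection point ≈ (-0.333, 1.741)
```

The exact inflection value is $f(-\frac13) = \frac{47}{27} \approx 1.7407$.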
{ "language": "en", "url": "https://math.stackexchange.com/questions/198445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is Fractal perimeter always infinite? Looking for information on fractals through google I have read several time that one characteristic of fractals is : * *finite area *infinite perimeter Although I can feel the area is finite (at least on the picture of fractal I used to see, but maybe it is not necessarly true ?), I am wondering if the perimeter of a fractal is always infinite ? If you think about series with positive terms, one can find : * *divergent series : harmonic series for example $\sum_0^\infty{\frac{1}{n}}$ *convergent series : $\sum_0^\infty{\frac{1}{2^n}}$ So why couldn't we imagine a fractal built the same way we build the Koch Snowflake but ensuring that at each iteration the new perimeter has grown less than $\frac{1}{2^n}$ or any term that make the whole series convergent ? What in the definition of fractals allows or prevent to have an infinite perimeter ?
If a fractal has infinitely many sides, then it must have an infinite perimeter, especially if the sides are perfectly straight: the perimeter of most shapes is found by adding up the lengths of the sides, and since the fractal has infinitely many sides, it should have an infinite perimeter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 4 }
Exactly one nontrivial proper subgroup Question: Determine all the finite groups that have exactly one nontrivial proper subgroup. MY attempt is that the order of group G has to be a positive nonprime integer n which has only one divisor since any divisor a of n will form a proper subgroup of order a. Since 4 is the only nonprime number that has only 1 divisor which is 2, All groups of order 4 has only 1 nontrivial proper subgroups (Z4 and D4)
Let $H$ be the only non-trivial proper subgroup of the finite group $G$. Since $H$ is proper, there must exist an $x \notin H$. Now consider the subgroup $\langle x\rangle$ of $G$. This subgroup cannot be equal to $H$, nor is it trivial, hence $\langle x\rangle = G$, that is $G$ is cyclic, say of order $n$. The number of subgroups of a cyclic group of order $n$ equals the number of divisors of $n$. So $n$ must have three divisors. This can only be the case if $n$ is the square of a prime number. So, $G \cong C_{p^2}$.
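The divisor-counting step at the end can be checked by brute force — a small Python sketch (function names are mine):

```python
def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# a cyclic group C_n has one subgroup per divisor of n, so "exactly one
# nontrivial proper subgroup" means exactly 3 divisors: 1, d, n
orders = [n for n in range(2, 200) if num_divisors(n) == 3]
print(orders)   # [4, 9, 25, 49, 121, 169] -- precisely the squares of primes
print(all(is_prime(round(n**0.5)) for n in orders))  # True
```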
{ "language": "en", "url": "https://math.stackexchange.com/questions/198744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Multigrid tutorial/book I was reading Press et. al., "Numerical Recipes" book, which contain section about multigrid method for numerically solving boundary value problems. However, the chapter is quite brief and I would like to understand multigrids to a point where I will be able to implement more advanced and faster version than that provided in the book. The tutorials I found so far are very elaborate and aimed on graduate students. I have notions on several related topics (relaxation methods, preconditioning), but still the combination of PDEs and the multigrid methods is mind-blowing for me... Thanks for any tips for a good explanatory book, website or article.
First, please don't be bluffed by those fancy terms coined by computational scientists, and don't worry about preconditioning or conjugate gradient. The multigrid method for numerical PDE can be viewed as a standalone subject; basically what it does is make use of the "information" on both finer and coarser meshes in order to solve a linear equation system (obtained from the discretization of the PDE on these meshes), and it does this in an iterative fashion. IMHO Vassilevski from Lawrence Livermore National Laboratory puts up a series of very beginner-oriented lecture notes, where he introduces the motivation and preliminaries first: how to get the $Ax = b$ type linear equation system from a boundary value problem of $-\Delta u = f$ with $u = g$ on $\partial \Omega$, what a condition number is and how it affects our iterative solvers. Then he introduces all the well-established aspects of multigrid: what the basic idea in two-grid is, how we do smoothing on the finer mesh and error correction on the coarser mesh, V-cycle, W-cycle, etc. Algebraic multigrid (the multigrid that uses information from the mesh is often called a geometric method) and the adaptive methods are covered too. Some example codes for the Poisson equation can be easily googled. If you have more time, this book has a user-friendly and comprehensive introduction on this topic together with some recent advancements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Multiplicative but non-additive function $f : \mathbb{C} \to \mathbb{R}$ I'm trying to find a function $f : \mathbb{C} \to \mathbb{R}$ such that * *$f(az)=af(z)$ for any $a\in\mathbb{R}$, $z\in\mathbb{C}$, but *$f(z_1+z_2) \ne f(z_1)+f(z_2)$ for some $z_1,z_2\in\mathbb{C}$. Any hints or heuristics for finding such a function?
HINT: Look at $z$ in polar form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/198863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Check that a curve is a geodesic. Suppose $M$ is a two-dimensional manifold. Let $\sigma:M \rightarrow M$ be an isometry such that $\sigma^2=1$. Suppose that the fixed point set $\gamma=\{x \in M| \sigma(x)=x\}$ is a connected one-dimensional submanifold of $M$. The question asks to show that $\gamma$ is the image of a geodesic.
Let $N=\{x\in M:\sigma(x)=x\}$ and fix $p\in N$. Exercise 1: Prove that either $1$ or $-1$ is an eigenvalue of $d\sigma_p:T_p(M)\to T_p(M)$. Exercise 2: Prove that if $v\in T_p(M)$ is an eigenvector of $d\sigma_p:T_p(M)\to T_p(M)$ of sufficiently small norm, then the unique geodesic $\gamma:I\to M$ for some open interval $I\subseteq \mathbb{R}$ such that $\gamma(0)=p$ and $\gamma'(0)=v$ has image contained in $N$. (Hint: isometries of Riemannian manifolds take geodesics to geodesics.) I hope this helps!
{ "language": "en", "url": "https://math.stackexchange.com/questions/198916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find delta with a given epsilon for $\lim_{x\to-2}x^3 = - 8$ Here is the problem. If $$\lim_{x\to-2}x^3 = - 8$$ then find $\delta$ to go with $\varepsilon = 1/5 = 0.2$. Is $\delta = -2$?
Sometimes Calculus students are under the impression that in situations like this there is a unique $\delta$ that works for the given $\epsilon$ and that there is some miracle formula or computation for finding it. This is not the case. In certain situations there are obvious choices for $\delta$, in certain situations there are not. In any case you are asking for some $\delta\gt 0$ (!!!) such that for all $x$ with $|x-(-2)|\lt\delta$ we have $|x^3-(-8)|\lt 0.2$. Once you have found some $\delta\gt 0$ that does it, every smaller $\delta\gt 0$ will work as well. This means that you can guess some $\delta$ and check whether it works. In this case this is not so difficult as $x^3$ increases if $x$ increases. So you only have to check what happens if you plug $x=-2-\delta$ and $x=-2+\delta$ into $x^3$ and then for all $x$ with $|x-(-2)|$ you will get values of $x^3$ that fall between these two extremes. For an educated guess on $\delta$, draw a sketch. This should be enough information to solve this problem.
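The guess-and-check procedure can be scripted — a small Python sketch, sampling points near $-2$ (the function name is mine):

```python
def works(delta, eps=0.2, a=-2.0, samples=10001):
    # sample x with |x - a| < delta and test |x^3 - (-8)| < eps
    for i in range(samples):
        x = (a - delta) + 2*delta * i / (samples - 1)
        if abs(x - a) < delta and not abs(x**3 + 8) < eps:
            return False
    return True

print(works(0.5))    # False: far too generous
print(works(0.02))   # False: still slightly too big
print(works(0.016))  # True, since (2.016)^3 - 8 ≈ 0.194 < 0.2
```

The largest workable $\delta$ here is $\sqrt[3]{8.2} - 2 \approx 0.01655$, which matches the monotonicity argument above: only the endpoints $x=-2\pm\delta$ need checking.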
{ "language": "en", "url": "https://math.stackexchange.com/questions/198998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Diameter of Nested Sequence of Compact Set Possible Duplicate: the diameter of nested compact sequence Let $(E_j)$ be a nested sequence of compact subsets of some metric space; $E_{j+1} \subseteq E_j$ for each $j$. Let $p > 0$, and suppose that each $E_j$ has diameter $\ge p$ . Prove that $$E = \bigcap_{j=1}^{\infty} E_j$$ also has diameter $\ge p$.
For each $j$ pick two points $x_j, y_j \in E_j$ such that $d(x_j,y_j) \ge p$. Since $x_j \in E_1$ for all $j$, and $E_1$ is compact, the sequence $(x_j)$ has a convergent subsequence $(x_{\sigma(j)})$ say, and likewise $(y_{\sigma(j)})$ has a convergent subsequence $(y_{\tau \sigma(j)})$. What can you say about the limits of these subsequences?
{ "language": "en", "url": "https://math.stackexchange.com/questions/199060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Breakdown of solution to inviscid Burgers equation Let $u = f(x-ut)$ where $f$ is differentiable. Show that $u$ (amost always) satisfies $u_t + uu_x = 0$. What circumstances is it not necessarily satisfied? This is a question in a tutorial sheet I have been given and I am slightly stuck with the second part. To show that $u$ satisfies the equation I have differentiated it to get: $u_t = -f'(x-ut)u$ $u_x = f'(x-ut)$ Then I have substituted these results into the original equation. The part I am unsure of is where it is not satisfied. If someone could push me in the right direction it would be much appreciated.
We have \[ u_t = f'(x-ut)(x-ut)_t = -f'(x-ut)(u_t t + u) \iff \bigl(1 + tf'(x-ut)\bigr)u_t = -uf'(x-ut) \] and \[ u_x = f'(x-ut)(x-ut)_x = f'(x-ut)(1 - u_xt) \iff \bigl(1 + tf'(x-ut)\bigr)u_x = f'(x-ut) \] Which gives that \[ \bigl(1 + tf'(x-ut)\bigr)(u_t +uu_x) = 0 \] so at each point either $1 + tf'(x-ut) = 0$ or $u_t + uu_x = 0$.
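The implicit differentiation can be verified symbolically — a sketch assuming SymPy is available:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
f = sp.Function('f')
ut, ux = sp.symbols('u_t u_x')

F = u - f(x - u*t)                      # the implicit relation F = 0

# total derivatives of F, with Derivative(u, .) renamed to plain symbols
dFdt = sp.diff(F, t).subs(sp.Derivative(u, t), ut)
dFdx = sp.diff(F, x).subs(sp.Derivative(u, x), ux)

ut_expr = sp.solve(dFdt, ut)[0]         # = -u f' / (1 + t f')
ux_expr = sp.solve(dFdx, ux)[0]         # =    f' / (1 + t f')
res = sp.simplify(ut_expr + u*ux_expr)
print(res)                              # 0, wherever 1 + t f'(x - u t) != 0
```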
{ "language": "en", "url": "https://math.stackexchange.com/questions/199106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How do we know how many branches the inverse function of an elementary function has? How do we know how many branches the inverse function of an elementary function has ? For instance Lambert W function. How do we know how many branches it has at e.g. $z=-0.5$ , $z=0$ , $z=0.5$ or $z=2i$ ?
Suppose your elementary function $f$ is entire and has an essential singularity at $\infty$ (as in the case you mention, with $f(z) = z e^z$). Then Picard's "great" theorem says that $f(z)$ takes on every complex value infinitely often, with at most one exception. Thus for every $w$ with at most one exception, the inverse function will have infinitely many branches at $w$. Determining whether that exception exists (and what it is) may require some work. In this case it is easy: $f(z) = 0$ only for $z=0$ because the exponential function is never $0$, so the exception is $w=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/199180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to find the least path consisting of the segments AP, PQ and QB Let $A = (0, 1)$ and $B = (2, 0)$ in the plane. Let $O$ be the origin and $C = (2, 1)$ . Consider $P$ moves on the segment $OB$ and $Q$ move on the segment $AC$. Find the coordinates of $P$ and $Q$ for which the length of the path consisting of the segments $AP, PQ$ and Q$B$ is least.
Hint: Let $A'$ be the point one unit above $A$. Let $B'$ be the point one unit below $B$. Join $A'$ and $B'$ by a straight line. Show that gives the length of the minimal path.
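A brute-force grid search confirms the length $\sqrt{13}$ and recovers the optimal points (a Python sketch; the discretization is mine):

```python
import math

def path_length(p, q):
    # A=(0,1) -> P=(p,0) -> Q=(q,1) -> B=(2,0)
    return math.hypot(p, 1) + math.hypot(q - p, 1) + math.hypot(2 - q, 1)

best = min((path_length(i/200, j/200), i/200, j/200)
           for i in range(401) for j in range(401))
print(best)   # length ≈ sqrt(13) ≈ 3.60555, near P = (2/3, 0), Q = (4/3, 1)
```

The minimizing path has all three segments with the same slope magnitude, as the reflection argument predicts.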
{ "language": "en", "url": "https://math.stackexchange.com/questions/199230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Child lamp problem A street lamp is 12 feet above the ground. A child 3 feet in height amuses itself by walking in such a way that the shadow of its head moves along lines chalked on the ground. (1) How would the child walk if the chalked line is (a) straight, (b) a circle, (c) a square? (2) What difference would it make if the light came from the sun instead of a lamp? Example: The problem is from Sawyer's "Mathematician's Delight". Note: Since this is my first post here, I would like to note that this is not homework. I am just trying to improve my math/problem solving skills.
Draw a line from each point of the chalked line to the base of the lamp; similar triangles show that when the child's head makes a shadow at a given point, the child is $\frac 14$ of the way along the line from that point to the lamp. So the child walks in the same shape: line, circle, or square, with size $\frac 34$ of the figure. For the sun, the rays are parallel, so the child's head traces the same figure as the chalk line.
{ "language": "en", "url": "https://math.stackexchange.com/questions/199286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
What is the inverse function of $\ x^2+x$? I think the title says it all; I'm looking for the inverse function of $\ x^2+x$, and I have no idea how to do it. I thought maybe you could use the quadratic equation or something. I would be interesting to know.
If you want to invert $x^2 + x$ on the interval $x \ge -1/2$, write $y = x^2 + x$, so $x^2 + x -y = 0$. Use the quadratic formula with $a=1$, $b=1$, and $c=-y$ to find $$ x= \frac{-1 + \sqrt{1+4y}}{2}.$$ (The choice of $+\sqrt{1+4y}$ rather than $-\sqrt{1+4y}$ is because we are finding the inverse of the right side of the parabola. If you want to invert the left side, you would use the other sign.)
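A quick numeric check of this inverse on the branch $x \ge -1/2$ (plain Python):

```python
import math

def f(x):
    return x*x + x

def f_inv(y):
    # inverts the right branch: defined for y >= -1/4, returns x >= -1/2
    return (-1 + math.sqrt(1 + 4*y)) / 2

for x0 in (-0.5, 0.0, 1.0, 3.7):
    print(x0, f_inv(f(x0)))   # recovers x0 on the branch x >= -1/2
```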
{ "language": "en", "url": "https://math.stackexchange.com/questions/199377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 0 }
Find the necessary and sufficient conditions on $a$, $b$ so that $ax^2 + b = 0$ has a real solution. This question is really confusing me, and I'd love some help but not the answer. :D Is it asking: What values of $a$ and $b$ result in a real solution for the equation $ax^2 + b = 0$? $a = b = 0$ would obviously work, but how does $x$ come into play? There'd be infinitely many solutions if $x$ can vary as well ($a = 1$, $x = 1$, $b = -1$, etc.). I understand how necessary and sufficient conditions work in general, but how would it apply here? I know it takes the form of "If $p$ then $q$" but I don't see how I could apply that to the question. Is "If $ax^2 + b = 0$ has a real solution, then $a$ and $b =$ ..." it?
I assume the question is "find conditions that are necessary and sufficient to guarantee solutions" rather than "find necessary conditions and also find sufficient conditions for a solution." If the former is the case, then you're asked for constraints on $a$ and $b$ such that (1) if the conditions are met then $ax^2+b$ is zero for some $x$ and (2) if the conditions are not met then $ax^2+b$ isn't zero for any $x$. So, when will $ax^2+b$ have some value $x$ for which the expression is zero? Well, as André suggested, try to solve $ax^2+b=0$ "mechanically". By subtracting we have the equivalent equation $ax^2 = -b$ and we'd like to divide by $a$ to get $x^2=-b/a$. Unfortunately, we can't do that if $a=0$, so we need to consider two cases * *$a=0$ *$a\ne 0$ In the first case, our equation becomes $0\cdot x^2+b=0$, namely $b=0$ and if $b=0$ (and, of course $a=0$) then any $x$ will satisfy this. In other words, there's a solution (actually infinitely many solutions) if $a=b=0$, as you've already noted. Now, in case (2) we can safely divide by $a$ to get $x^2=-b/a$. When does this have a solution? We know that $x^2\ge 0$ no matter what $x$ is, so when will our new equation have a solution? You said you don't want the full answer, so I'll denote the answer you discover by $C$, which will be of the form "some condition on $a\text{ and }b$". Once you've done that, your full answer will be: $ax^2+b=0$ has a solution if and only if * *$a=b=0$, or *Your condition $C$. These are sufficient, since either guarantees a solution, and necessary, since if neither is true, then there won't be a solution (since we exhausted all possibilities for solutions).
{ "language": "en", "url": "https://math.stackexchange.com/questions/199430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Homomorphism of free modules $A^m\to A^n$ Let's $\varphi:A^m\to A^n$ is a homomorphism of free modules over commutative (associative and without zerodivisors) unital ring $A$. Is it true that $\ker\varphi\subset A^m$ is a free module? Thanks a lot!
Here is a counterexample which is in some sense universal for the case $m = 2, n = 1$. Let $R = k[x, y, z, w]/(xy - zw)$ ($k$ a field). This is an integral domain because $xy - zw$ is irreducible. The homomorphism $R^2 \to R$ given by $$(m, n) \mapsto (xm - zn)$$ has a kernel which contains both $(y, w)$ and $(z, x)$. If the kernel is free then it must be free on these two elements by degree considerations, but $x(y, w) = (zw, xw) = w(z, x)$ is a relation between them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/199495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is this AM/GM refinement correct or not? In Chap 1.22 of their book Mathematical Inequalities, Cerone and Dragomir prove the following interesting inequality. Let $A_n(p,x)$ and $G_n(p,x)$ denote resp. the weighted arithmetic and the weighted geometric means, where $x_i\in[a,b]$ and $p_i\ge0$. $P_n$ is the sum of all $p_i$. Then the following holds: $$ \exp\left[\frac{1}{b^2P_n^2}\sum\limits_{i<j} p_ip_j(x_i-x_j)^2\right]\le\frac{A_n(p,x)}{G_n(p,x)} \le\exp\left[\frac{1}{a^2P_n^2}\sum\limits_{i<j} p_ip_j(x_i-x_j)^2\right] $$ The relevant two pages of the book may be consulted here. I need help to figure out what is wrong with my next arguments. I will only be interested in the LHS of the inequality. Let $n=3$ and let $p_i=1$ for all $i$ and hence $P_n=3$. Let $x,y,z\in[a,b]$. We can assume that $b=\max\{x,y,z\}$. Our inequality is equivalent to: $$ f(x,y,z)=\frac{x+y+z}{3\sqrt[3]{xyz}}-\exp\left[\frac{(x-y)^2+(x-z)^2+(y-z)^2}{9\max\{x,y,z\}^2}\right]\ge0 $$ According to Mathematica $f(1, 2, 2)=-0.007193536514508<0$ which means that the inequality as stated is incorrect. Moreover, if I plot the values of $f(x,2,2)$ here is what I get: You can download my Mathematica notebook here. As you can see our function is negative for some values of $x$ which means that the inequality does not hold for those values. Obviously it is either me that is wrong or Cerone and Dragomir's derivation. I have read their proofs and I can't find anything wrong so I suspect there is a flaw in my exposition above. Can someone help me find it?
Your modesty in suspecting that the error is yours is commendable, but in fact you found an error in the book. The "simple calculation" on p. $49$ is off by a factor of $2$, as you can easily check using $n=2$ and $p_1=p_2=1$. Including a factor $\frac12$ in the inequality makes it come out right. You can also check this by using $f(x)=x^2$, $n=2$, $p_1=p_2=1$ and $x_1=-1$, $x_2=1$ in inequality $(1.151)$ on p. $48$. Then the difference between the average of the function values and the function value of the average is $1$, and the book's version of the inequality says that it's $2$.
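A quick numeric reproduction of the $f(1,2,2)$ counterexample, and of the corrected inequality with the factor $\frac12$ (Python; the function name is mine):

```python
import math

def ratio_and_bound(xs, factor):
    # equal weights p_i = 1, so P_n = n; b = max(xs)
    n, b = len(xs), max(xs)
    A = sum(xs) / n
    G = math.prod(xs) ** (1.0 / n)
    s = sum((xs[i] - xs[j])**2 for i in range(n) for j in range(i + 1, n))
    return A / G, math.exp(factor * s / (b*b * n*n))

r, lo = ratio_and_bound((1.0, 2.0, 2.0), 1.0)   # bound as stated in the book
print(r - lo)                                    # ≈ -0.0071935: inequality fails
r, lo = ratio_and_bound((1.0, 2.0, 2.0), 0.5)   # with the missing factor 1/2
print(r - lo)                                    # ≈ 0.0218: inequality holds
```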
{ "language": "en", "url": "https://math.stackexchange.com/questions/199567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proof: Symmetric and Positive Definite If $A$ is a symmetric and positive definite matrix and matrix $B$ has linearly independent columns , is it true that $B^T A B$ is symmetric and positive definite?
If the matrices are real, yes: take $x\in\Bbb R^m$ with $x\neq 0$. Then $Bx\in \Bbb R^n$, and since the columns of $B$ are linearly independent, $Bx\neq 0$. $A$ being positive definite, we have $x^tB^tABx=(Bx)^tA(Bx)>0$. Symmetry is immediate: $(B^tAB)^t=B^tA^tB=B^tAB$. But if the matrices are complex it's not true: take $A=I_2$, $B:=\pmatrix{1&0\\0&i}$; then $B^tAB=\pmatrix{1&0\\0&-1}$, which is not positive definite. It is true again if you replace the transpose by the adjoint (the entrywise conjugate of the transpose).
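A numeric illustration of both situations (assuming NumPy; the random instance is mine): a real full-column-rank $B$ gives a symmetric positive definite $B^tAB$, while a complex $B$ with a plain (non-conjugate) transpose does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n*np.eye(n)            # symmetric positive definite
B = rng.standard_normal((n, m))      # full column rank with probability 1

C = B.T @ A @ B
print(np.allclose(C, C.T))           # True: symmetric
print(np.linalg.eigvalsh(C).min())   # > 0: positive definite

# complex case: plain transpose (not conjugate transpose) fails
Bc = np.array([[1, 0], [0, 1j]])
Cc = Bc.T @ np.eye(2) @ Bc
print(np.diag(Cc).real)              # [ 1. -1.]: not positive definite
```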
{ "language": "en", "url": "https://math.stackexchange.com/questions/199623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cardinality of $R[x]/\langle f\rangle$ via canonical remainder reps. Suppose $R$ is a field and $f$ is a polynomial of degree $d$ in $R[x]$. How do you show that each coset in $R[x]/\langle f\rangle$ may be represented by a unique polynomial of degree less than $d$? Secondly, if $R$ is finite with $n$ elements, how do you show that $R[x]/\langle f\rangle$ has exactly $n^d$ cosets?
Hint $ $ Recall $\rm\ R[x]/(f)\:$ has a complete system of reps being the least degree elements in the cosets, i.e. the remainders mod $\rm\:f,\:$ which uniquely exist by the Polynomial Division Algorithm. Therefore the cardinality of the quotient ring equals the number of such reps, i.e. the number of polynomials $\rm\in R[x]\:$ with degree smaller than that of $\rm\:f.$ Remark $\ $ This is a generalization of the analogous argument for $\rm\:\Bbb Z/m.\:$ The argument generalizes to any ring with a Division (with Remainder) Algorithm, i.e. any Euclidean domain, as explained in the linked answer.
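The counting claim can also be checked by brute force. Below is a Python sketch (the prime $p=3$ and the monic $f=x^2+1$ are arbitrary illustrative choices) that reduces every polynomial of degree $<4$ over $\Bbb Z_3$ modulo $f$ and confirms there are exactly $n^d = 3^2 = 9$ distinct remainders, i.e. cosets:

```python
from itertools import product

def poly_mod(g, f, p):
    """Remainder of g on division by a monic f over Z_p (coeffs low -> high)."""
    g = [c % p for c in g]
    d = len(f) - 1
    while len(g) > d:
        c = g[-1]
        if c == 0:
            g.pop()
            continue
        shift = len(g) - 1 - d
        for i, fc in enumerate(f):          # subtract c * x^shift * f
            g[shift + i] = (g[shift + i] - c * fc) % p
        g.pop()                              # leading coefficient is now 0
    return tuple(g + [0] * (d - len(g)))     # pad to a fixed length d

p, f = 3, [1, 0, 1]                          # f = x^2 + 1, monic of degree 2
d = len(f) - 1
remainders = {poly_mod(list(g), f, p) for g in product(range(p), repeat=4)}
assert len(remainders) == p ** d             # 9 cosets, one per remainder
```

Taking $f$ monic sidesteps inverting the leading coefficient; over a field every nonzero $f$ can be scaled to be monic without changing the ideal $\langle f\rangle$.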
{ "language": "en", "url": "https://math.stackexchange.com/questions/199694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Solving $(x+y) \exp(x+y) = x \exp(x)$ for $y$. While thinking about the Lambert $W$ function I had to consider Solving $(x+y) \exp(x+y) = x \exp(x)$ for $y$. This is what I arrived at: (for $x$ and $y$ not zero) $(x+y) \exp(x+y) = x \exp(x)$ $x\exp(x+y) + y \exp(x+y) = x \exp(x)$ $\exp(y) + (y/x) \exp(y) = 1$ $(y/x) \exp(y) = 1 - \exp(y)$ $y/x = (1-\exp(y))/\exp(y)$ $x/y = \exp(y)/(1-\exp(y))$ $x = y\exp(y)/(1-\exp(y))$ $1/x = 1/(y\exp(y)) - 1/y$ And then I got stuck. Can we solve for $y$ by using Lambert $W$ function? Or how about an expression with integrals?
The solution of $ (x+y) \exp(x+y) = x \exp(x) $ is given in terms of the Lambert W function Let $z=x+y$, then we have $$ z {\rm e}^{z} = x {\rm e}^{x} \Rightarrow z = { W} \left( x{{\rm e}^{x}} \right) \Rightarrow y = -x + { W} \left( x{{\rm e}^{x}} \right) \,. $$ Added: Based on the comment by Robert, here are the graphs of $ y = -x + { W_0} \left( x{{\rm e}^{x}} \right) $ and $ y = -x + { W_{-1}} \left( x{{\rm e}^{x}} \right) $
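Numerically, the nontrivial branch can be exhibited directly. SciPy users would call `scipy.special.lambertw(x*exp(x), k=-1)`; to stay self-contained, the sketch below computes $W_{-1}$ by bisection (solving $ze^z = xe^x$ on $z \le -1$) for the illustrative value $x=-0.5$:

```python
from math import exp

def lambert_w_minus1(t, lo=-50.0, hi=-1.0):
    """Solve z * exp(z) = t for z <= -1 by bisection (valid for -1/e <= t < 0)."""
    g = lambda z: z * exp(z) - t
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:    # root lies in [lo, mid]
            hi = mid
        else:                      # root lies in [mid, hi]
            lo = mid
    return 0.5 * (lo + hi)

x = -0.5
z = lambert_w_minus1(x * exp(x))   # z = W_{-1}(x e^x)
y = z - x                          # nontrivial solution y = -x + W_{-1}(x e^x)

assert abs((x + y) * exp(x + y) - x * exp(x)) < 1e-9
assert abs(y) > 0.1                # not the trivial solution y = 0
```

Using $W_0$ here would just return $y=0$ (since $W_0(xe^x)=x$ for $x\ge -1$), which is why the $W_{-1}$ branch is the interesting one for $-1<x<0$.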
{ "language": "en", "url": "https://math.stackexchange.com/questions/199829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculus and Physics Help! If a particle's position is given by $x = 4-12t+3t^2$ (where $t$ is in seconds and $x$ is in meters): a) What is the velocity at $t = 1$ s? Ok, so I have an answer: $v = \frac{dx}{dt} = -12 + 6t$ At $t = 1$, $v = -12 + 6(1) = -6$ m/s But my problem is that I want to see the steps of using the formula $v = \frac{dx}{dt}$ in order to achieve $-12 + 6t$... I am in physics with calc, and calc is only a co-requisite for this class, so I'm taking it while I'm taking physics. As you can see calc is a little behind. We're just now learning limits in calc, and I was hoping someone could help me figure this out.
You see, the problem here is that the question asks for a velocity at $t=1$. This means it requires an instantaneous velocity, which is by definition the derivative of the position function at $t=1$. If you don't want to use derivative rules for some reason and you don't mind a little extra work, then you can calculate the velocity from a limit. (In reality this is the same thing as taking the derivative, as you will later see. A derivative is just a limit itself.) We start from the average velocity, given by $\overline{v} = \frac{\Delta x}{\Delta t}$. I use $\overline{v}$ to denote the average velocity. Between time $t$ and $t+\Delta t$ we have $$\overline{v} = \frac{x(t+\Delta t) - x(t)}{\Delta t} = \frac{\left(3(t+\Delta t)^2 - 12(t+\Delta t)+4\right)-\left(3t^2-12t+4\right)}{\Delta t} $$ Simplifying a bit $$\overline{v} = \frac{6t\Delta t + 3(\Delta t)^2 -12\Delta t}{\Delta t}$$ Now comes the calculus. If we take the limit as $\Delta t \rightarrow 0$, that is, if we take the time interval to be smaller and smaller so that the average velocity approaches the instantaneous one, then the instantaneous velocity $v$ is $$v = \lim_{\Delta t\rightarrow 0}\frac{6t\Delta t + 3(\Delta t)^2 -12\Delta t}{\Delta t}=\lim_{\Delta t\rightarrow 0}\left(6t-12+3\Delta t\right)=6t-12$$ At $t=1$ this gives $v = 6(1)-12 = -6$ m/s, matching the answer you already had. Notice that $6t-12$ is exactly the derivative you had before, and the steps above are in fact a calculation of the derivative from first principles.
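The limiting process can also be watched numerically. A short Python sketch evaluating the average velocity at $t = 1$ for shrinking $\Delta t$ shows the values settling on $-6$ m/s:

```python
def x(t):
    return 4 - 12 * t + 3 * t ** 2   # position in meters

def avg_velocity(t, dt):
    return (x(t + dt) - x(t)) / dt   # average velocity over [t, t + dt]

# Each average equals -6 + 3*dt, so the values close in on -6 as dt shrinks:
samples = [avg_velocity(1.0, dt) for dt in (1.0, 0.1, 0.01, 1e-6)]
assert abs(samples[0] - (-3.0)) < 1e-9    # dt = 1    -> -3
assert abs(samples[-1] - (-6.0)) < 1e-4   # dt = 1e-6 -> essentially -6
```

This is exactly the limit computed above, sampled at a few finite values of $\Delta t$ instead of taken symbolically.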
{ "language": "en", "url": "https://math.stackexchange.com/questions/199865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Replacing one of the conditions of a norm Consider the definition of a norm on a real vector space X. I want to show that replacing the condition $\|x\| = 0 \Leftrightarrow x = 0\quad$ with $\quad\|x\| = 0 \Rightarrow x = 0$ does not alter the the concept of a norm (a norm under the "new axioms" will fulfill the "old axioms" as well). Any hints on how to get started?
All you need to show is that $\|0\|=0$. Let $x$ be any element of the normed space. What is $\|0\cdot x\|$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/199956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the discrete metric cannot be obtained from a norm when $X\neq\{0\}$ If $X \neq \{ 0\}$ is a vector space, how does one go about showing that the discrete metric on $X$ cannot be obtained from any norm on $X$? I know this hinges on $X$ containing a nonzero vector, but I am having problems formalizing a proof for this. This is also my final question for some time; after this I will reread the answers, and not stop until I can finally understand these strange spaces.
You know that the discrete metric only takes values of $1$ and $0$. Now suppose it comes from some norm $\lVert\cdot\rVert$. Then for any $\alpha$ in the underlying field of your vector space and $x,y \in X$, you must have that $$\lVert\alpha(x-y)\rVert = \lvert\alpha\rvert\,\lVert x-y\rVert.$$ Now pick $x \neq y$ (possible since $X\neq\{0\}$: take $x\neq 0$ and $y=0$), so that $\lVert x-y\rVert = 1$. The right-hand side can be made arbitrarily large by choosing $\lvert\alpha\rvert$ large, while the left-hand side is a value of the discrete metric, hence at most $1$. Consequently the discrete metric does not come from any norm on $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/200023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Confusion related to the concatenation of two grammars I have this confusion. Let's say I have two languages produced by type 3 grammars $G_1 = \langle V_{n1}, V_t, P_1, S_1\rangle$ and $G_2 = \langle V_{n2}, V_t, P_2, S_2\rangle$. I need to find a type 3 grammar $G_3$ such that $L(G_3) = L(G_1)L(G_2)$. I can't use $S_3 \rightarrow S_1S_2$ to get the concatenation, because that production is not type 3. So what should I do?
First change one of the grammars, if necessary, to make sure that they have disjoint sets of non-terminal symbols. If you’re allowing only productions of the forms $X\to a$ and $X\to Ya$, make the new grammar $G$ generate a word of $L(G_2)$ first and then a word of $L(G_1)$: replace every production of the form $X\to a$ in $G_2$ by the production $X\to S_1a$. If you’re allowing only productions of the forms $X\to a$ and $X\to aY$, replace every production of the form $X\to a$ in $G_1$, by the production $X\to aS_2$. If you’re allowing productions of both forms, you’re not talking about type $3$ grammars.
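As an illustration (with made-up toy grammars, using the $X\to a$, $X\to aY$ convention), the sketch below applies the stated rewrite — every $G_1$ production $X\to a$ becomes $X\to aS_2$ — and checks by brute-force derivation that the combined grammar generates exactly $L(G_1)L(G_2)$ up to a length bound:

```python
def generate(prods, start, maxlen):
    """All terminal strings of length <= maxlen derivable from `start`.

    Productions X -> a are stored as (a, None), X -> aY as (a, "Y")."""
    done, frontier = set(), {("", start)}
    while frontier:
        new = set()
        for prefix, nt in frontier:
            for a, nxt in prods[nt]:
                s = prefix + a
                if len(s) > maxlen:
                    continue
                if nxt is None:
                    done.add(s)
                else:
                    new.add((s, nxt))
        frontier = new          # each step adds one terminal, so this stops
    return done

g1 = {"S": [("a", None), ("a", "S")]}      # L(G1) = a+
g2 = {"T": [("b", None), ("b", "T")]}      # L(G2) = b+
# Concatenation grammar: replace S -> a by S -> aT, keep G2's rules intact.
g3 = {"S": [("a", "T"), ("a", "S")], "T": g2["T"]}

L1 = generate(g1, "S", 3)
L2 = generate(g2, "T", 3)
L3 = generate(g3, "S", 4)
assert L3 == {u + v for u in L1 for v in L2 if len(u + v) <= 4}
```

The disjoint-nonterminals step matters: here `S` and `T` are already distinct, so `g2`'s rules can be merged in unchanged.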
{ "language": "en", "url": "https://math.stackexchange.com/questions/200087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A problem involving Laplace operator $\Omega$ is a bounded open set in $\mathbb R^n$, consider the number $ r = \inf \{ \left\| {du} \right\|_{{L^2}(\Omega )}^2:u \in H_0^1(\Omega ),{\left\| u \right\|_{{L^2}(\Omega )}} = 1\}$ If for some $v\in H_0^1(\Omega )$ the infimum is achieved, then is $\Delta v\in L^2(\Omega)$?
Let $$ f, g: H_0^1(\Omega) \to \mathbb{R}, f(u)=\|\nabla u\|_{L^2(\Omega)}^2,\ g(u)=\|u\|_{L^2(\Omega)}^2. $$ Then $$ r=\inf\{f(u):\ u \in H_0^1(\Omega),\ g(u)=1\}. $$ If $$ r=f(v), $$ where $v$ belongs to $H_0^1(\Omega)$ and satisfies $g(v)=1$, then by the Lagrange multiplier rule there is a $\lambda \in \mathbb{R}$ such that $$ Df(v)\cdot h=\lambda Dg(v)\cdot h \quad \forall h \in H_0^1(\Omega), $$ i.e. $$ \int_\Omega\nabla v\cdot\nabla h=\lambda\int_\Omega vh \quad \forall h \in H_0^1(\Omega). $$ The latter shows that $v$ is a weak solution of the PDE $$ -\Delta u=\lambda u, \ u \in H_0^1(\Omega). $$ Taking $h=v$ gives $\lambda=f(v)=r$. Hence $\Delta v =-\lambda v=-f(v)v \in L^2(\Omega)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/200156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is an abstract simplicial complex a quiver? Let $\Delta$ be an abstract simplicial complex. Then for $B\in \Delta$ and $A\subseteq B$ we have that $A\in\Delta$. If we define $V$ to be the set of faces of $\Delta$, construct a directed edge from $B$ to $A$ if $A$ is a face of $B$ (i.e. $A\subseteq B$) and define $E$ to be the set of directed edges, then will $\Gamma=(V,E)$ be a quiver?
Yes, and it's the poset of faces ordered by inclusion.
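To make this concrete, here is a small Python sketch that builds the question's construction for the full simplex on $\{1,2,3\}$. Since the condition $A\subseteq B$ includes $A=B$, the resulting quiver has a loop at every face — which is fine, as quivers allow loops and multiple arrows:

```python
from itertools import combinations

def faces(vertices):
    """All nonempty faces of the full simplex on the given vertex set."""
    vs = sorted(vertices)
    return [frozenset(c) for r in range(1, len(vs) + 1)
            for c in combinations(vs, r)]

V = faces({1, 2, 3})
E = [(B, A) for B in V for A in V if A <= B]   # arrow B -> A when A is a face of B

assert len(V) == 7    # 3 vertices + 3 edges + 1 triangle
assert len(E) == 19   # sum over faces B of (2^|B| - 1), loops included
```

Dropping the pairs with $A=B$ would give the strict face poset as a directed graph instead; both are quivers.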
{ "language": "en", "url": "https://math.stackexchange.com/questions/200200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem with Ring $\mathbb{Z}_p[i]$ and integral domains Let $$\Bbb Z_p[i]:=\{a+bi\;:\; a,b \in \Bbb Z_p\,\,,\,\, i^2 = -1\}$$ -(a)Show that if $p$ is not prime, then $\mathbb{Z}_p[i]$ is not an integral domain. -(b)Assume $p$ is prime. Show that every nonzero element in $\mathbb{Z}_p[i]$ is a unit if and only if $x^2+y^2$ is not equal to $0$ ($\bmod p$) for any pair of elements $x$ and $y$ in $\mathbb{Z}_p$. (a)I think that I can prove the first part of this assignment. Let $p$ be not prime. Then there exist $x,y$ such that $p=xy$, where $1<x<p$ and $1<y<p$. Then $(x+0i)(y+0i)=xy=0$ in $\mathbb{Z}_p[i]$. Thus $(x+0i)(y+0i)=0$ in $\mathbb{Z}_p[i]$. Since none of $x+0i$ and $y+0i$ is equal to $0$ in $\mathbb{Z}_p[i]$, we have $\mathbb{Z}_p[i]$ is not an integral domain. However, I don't know how to continue from here.
Note that $(a+bi)(a-bi)=a^2+b^2$. If $a^2+b^2\equiv0\pmod p$, then $a+bi$ is not a unit. And if $a^2+b^2$ is not zero modulo $p$, then it's invertible modulo $p$, so $a+bi$ is a unit.
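A brute-force sketch for two small primes (chosen purely for illustration) confirms both directions: mod $3$ the form $x^2+y^2$ has no nonzero root and every nonzero element of $\Bbb Z_3[i]$ is a unit, while mod $5$ we have $1^2+2^2\equiv 0$ and $1+2i$ fails to be a unit:

```python
def units(p):
    """Return (units, nonzero elements) of Z_p[i], elements as pairs a + bi."""
    nonzero = [(a, b) for a in range(p) for b in range(p) if (a, b) != (0, 0)]

    def mul(x, y):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, reduced mod p
        return ((x[0] * y[0] - x[1] * y[1]) % p,
                (x[0] * y[1] + x[1] * y[0]) % p)

    us = [x for x in nonzero if any(mul(x, y) == (1, 0) for y in nonzero)]
    return us, nonzero

# p = 3: x^2 + y^2 is never 0 mod 3 for (x, y) != (0, 0), so all 8 are units
u3, nz3 = units(3)
assert not any((a * a + b * b) % 3 == 0 for a, b in nz3)
assert len(u3) == len(nz3) == 8

# p = 5: 1^2 + 2^2 = 5 = 0 mod 5, so 1 + 2i is a zero divisor, not a unit
u5, nz5 = units(5)
assert (1, 2) not in u5
```

In fact for $p=3$ this exhibits $\Bbb Z_3[i]$ as the field with $9$ elements, while $\Bbb Z_5[i]$ has $8$ nonzero zero divisors (the pairs with $a\equiv\pm 2b$).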
{ "language": "en", "url": "https://math.stackexchange.com/questions/200259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }