About convex function The exercise is about convex functions: How to prove that $f(t)=\int_0^t g(s)ds$ is convex in $(a,b)$ whenever $0\in (a,b)$ and $g$ is increasing in $[a,b]$? I proved that $$f(x)\leq \frac{x-a'}{b'-a'}f(b')+\left(1-\frac{x-a'}{b'-a'}\right)f(a')$$ when we have $$x=\left(1-\frac{x-a'}{b'-a'}\right)a'+\frac{x-a'}{b'-a'}b'$$
A slightly different approach: We need to show $f(x+\lambda(y-x)) \leq f(x) + \lambda (f(y)-f(x))$, with $\lambda \in (0,1)$. Suppose $x<y$. Then $$f(x+\lambda(y-x)) - f(x) = \int_{x}^{x+\lambda(y-x)} g(s) \; ds$$ Using the change of variables $t=\frac{s-x}{\lambda}+x$, we get $$\int_{x}^{x+\lambda(y-x)} g(s) \; ds = \int_{x}^{y} g(\lambda(t-x)+x) \; \lambda \; dt \leq \lambda \int_{x}^{y} g(t) \; dt = \lambda(f(y)-f(x)),$$ where the second to last step follows because $\lambda(t-x)+x \leq t$, and $g$ is increasing. If $x>y$, let $\mu = 1-\lambda$ (note $\mu \in (0,1)$), then we have already shown that $$f(y+\mu(x-y)) \leq f(y) + \mu (f(x)-f(y)).$$ Since $c+\mu(d-c) = c+(1-\lambda)(d-c) = d+\lambda(c-d)$, the desired result follows.
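For a quick numerical sanity check of this inequality, here is a short Python sketch; the increasing $g(s)=s^3+s$ and the sampling interval are arbitrary choices, not part of the problem:

```python
import numpy as np
from scipy.integrate import quad

g = lambda s: s**3 + s          # any increasing g on the interval will do
f = lambda t: quad(g, 0, t)[0]  # f(t) = integral of g from 0 to t

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(-1.0, 1.0, 2)
    lam = rng.uniform(0.0, 1.0)
    # the convexity inequality proved above, up to floating-point slack
    assert f(x + lam * (y - x)) <= f(x) + lam * (f(y) - f(x)) + 1e-9
print("convexity inequality holds on all samples")
```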
{ "language": "en", "url": "https://math.stackexchange.com/questions/143721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Hypergeometric series I found the following: "Assume further that this equation has a series solution $\sum a_ix^i$ whose coefficients are connected by a two-term recurrence formula. Then, such a series can be expressed in terms of hypergeometric series." [Bragg, 1969] How can we do this conversion? Thanks in advance
The (freely downloadable) book A = B, by Petkovsek, Wilf, and Zeilberger, is, generally speaking, a must-read. The authors explain, in particular, how to deduce a hypergeometric series from a recurrence relation and the other way round.
{ "language": "en", "url": "https://math.stackexchange.com/questions/143784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
inequality in a differential equation Let $u:\mathbb{R}\to\mathbb{R}^3$, where $u(t)=(u_1(t),u_2(t), u_3(t))$, be a function that satisfies $$\frac{d}{dt}|u(t)|^2+|u|^2\le 1,\tag{1}$$where $|\cdot|$ is the Euclidean norm. According to Temam's book (paragraph 2.2, page 32, inequality (2.10)), inequality (1) implies $$|u(t)|^2\le|u(0)|^2\exp(-t)+1-\exp(-t),\tag{2}$$but I do not understand why (1) implies (2).
The basic argument would go like this. Go ahead and let $f(t) = |u(t)|^2$, so that equation (1) says $f'(t) + f(t) \leq 1$. We can rewrite this as $$\frac{f'(t)}{1-f(t)}\leq 1.$$ Let $g(t) = \log(1 - f(t))$. Then this inequality is exactly that $$-g'(t)\leq 1.$$ It follows that $$g(t) = g(0) + \int_0^t g'(s)\,ds\geq g(0) - t.$$ Plugging in $\log(1 - f(t))$ for $g$, we see that $$\log(1-f(t)) \geq \log(1 - f(0)) - t.$$ Exponentiating both sides gives $$1 - f(t) \geq e^{-t}(1 - f(0)),$$ which is exactly the inequality you are looking for. Edit: In the case when $f(t)>1$, the argument above doesn't apply because $g(t)$ is not defined. Instead, take $g(t) = \log(f(t) -1)$, so that $g'(t)\leq -1$. It follows that $g(t) \leq g(0) - t$ (at least for small enough $t$ that $g$ remains defined), which upon exponentiating again gives the desired inequality.
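For a quick numerical illustration of the bound, here is a small sketch: we integrate $f' = 1 - f - p(t)$ for an arbitrary nonnegative $p$ (so that $f'+f\le 1$ holds) and compare $f$ against the claimed bound. The choices $p(t)=0.3\sin^2 t$ and $f(0)=4$ are arbitrary; taking $f(0)>1$ also exercises the case treated in the edit:

```python
import numpy as np
from scipy.integrate import solve_ivp

# f' = 1 - f - p(t) with p >= 0 guarantees f' + f <= 1
p = lambda t: 0.3 * np.sin(t) ** 2
sol = solve_ivp(lambda t, f: 1 - f - p(t), (0, 10), [4.0],
                dense_output=True, rtol=1e-9)

t = np.linspace(0, 10, 200)
f = sol.sol(t)[0]
bound = 4.0 * np.exp(-t) + 1 - np.exp(-t)   # f(0) e^{-t} + 1 - e^{-t}
print(np.all(f <= bound + 1e-8))            # True
```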
{ "language": "en", "url": "https://math.stackexchange.com/questions/143914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A simple Riemann mapping question Let $\Delta$ denote the open unit disc. Let $G$ be a simply connected region and $G\neq\mathbb{C}$. Suppose $f:G\rightarrow\Delta$ is a one-to-one holomorphic map with $f(a)=0$ and $f'(a)>0$ for some $a$ in $G$. Let $g$ be any other holomorphic, one-to-one map of $G$ onto $\Delta$. Express $g$ in terms of $f$. Attempt: Set $\alpha=g(a)$ and $$\phi_\alpha(z)=\frac{z-\alpha}{1-\overline{\alpha}z}\in\operatorname{Aut}\Delta.$$ Then $f\circ g^{-1}\circ \phi_\alpha^{-1}$ is an automorphism of the unit disc which fixes 0. Hence $f\circ g^{-1}\circ \phi_\alpha^{-1}(z)=e^{i\theta}z$ for some $\theta\in[0,2\pi)$. Therefore $g(z)=\phi_\alpha^{-1}(e^{-i\theta}f(z))$. Question: Since I haven't used the fact that $f'(a)>0$ (and I know that this makes $f$ unique), is there a way to be more specific about the form of $g$ using this information? Thanks in advance.
Your solution works for any fixed $f$ whether or not $f'(a)>0$. The condition $f'(a)>0$ is just a convenient way to fix the phase of the derivative; this affects the exact value of $\theta$ in your argument, but not its existence or uniqueness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/143971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$T :\mathbb {R^7}\rightarrow \mathbb {R^7} $ is defined by $T(x_1,x_2,\ldots x_6,x_7) = (x_7,x_6,\ldots x_2,x_1)$ pick out the true statements. Consider the linear transformation $T :\mathbb {R^7}\rightarrow \mathbb {R^7} $ defined by $T(x_1,x_2,\ldots x_6,x_7) = (x_7,x_6,\ldots x_2,x_1)$. Which of the following statements are true? 1- $\det T = 1$ 2- There is a basis of $\mathbb {R^7}$ with respect to which $T$ is a diagonal matrix, 3- $T^7=I$ 4- The smallest $n$ such that $T^n = I$ is even. What I have done so far is I have tried it for $T :\mathbb {R^2}\rightarrow \mathbb {R^2} $ and found that all the statements are true. Can I generalize my conclusion to $\mathbb {R^7} $? Do I need to find the $7\times 7$ matrix? Is there any other approach?
We can start guessing the eigenvectors: with eigenvalue $1$, we have eigenvectors $e_1 + e_7$, $e_2 + e_6$, $e_3 + e_5$, and $e_4$; with eigenvalue $-1$, we have eigenvectors $e_1 - e_7$, $e_2 - e_6$, $e_3 - e_5$. These seven eigenvectors form a basis of $\mathbb{R}^7$, so with respect to this basis $T$ will be diagonal. Also, since the determinant is the product of the eigenvalues, $\det T = 1^4 \cdot (-1)^3 = -1$. We can easily see that $T$ switches three pairs of coordinates, so in order to come back to $x \in \mathbb{R}^7$ after applying $T$ repeatedly $n$ times on $x$, $n$ has to be even and in particular it cannot be $7$ (or alternatively: if $T^n = I$, $\det T^n = (\det T)^n = (-1)^n = \det I = 1$, so $n$ is even).
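Here is a short numerical check of these claims with NumPy (the coordinate-reversal matrix is just the anti-diagonal identity):

```python
import numpy as np

T = np.fliplr(np.eye(7))                 # reverses the order of the 7 coordinates
print(np.sort(np.linalg.eigvals(T).real))    # three -1's and four 1's
print(round(np.linalg.det(T)))               # -1
print(np.allclose(np.linalg.matrix_power(T, 2), np.eye(7)))  # True: T^2 = I, so n = 2
```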
{ "language": "en", "url": "https://math.stackexchange.com/questions/144037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Finding the rank of a matrix Let $A$ be a $5\times 4$ matrix with real entries such that the space of all solutions of the linear system $AX^t = (1,2,3,4,5)^t$ is given by $\{(1+2s, 2+3s, 3+4s, 4+5s)^t :s\in \mathbb{R}\}$, where $t$ denotes the transpose of a matrix. Then what would be the rank of $A$? Here is my attempt: the number of linearly independent solutions of a non-homogeneous system of linear equations is given by $n-r+1$, where $n$ refers to the number of unknowns and $r$ denotes the rank of the coefficient matrix $A$. Based on this fact, we may write $n-r+1 =1$, since there seems to be only one linearly independent solution (here I am confused). Am I right? Or how can I do it right? Thanks
The rank theorem says that if $A$ is the coefficient matrix of a consistent system of linear equations with $n$ variables, then the number of free variables (parameters) is $n - \operatorname{rank}(A)$. By using this we have $1 = 4 - \operatorname{rank}(A)$, thus $\operatorname{rank}(A) = 3$. Thanks to Dr Arturo sir for clearing my doubt.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
A probabilistic method I am trying to study for an exam and I found an assignment that I can't solve. Consider a board of $n \times n$ cells, where $n = 2k$, $k\ge 2$. Each of the numbers from $S = \{1,\ldots,\frac{n^2}{2}\}$ is written to two cells so that each cell contains exactly one number. How can I show that $n$ cells $c_{i, j}$ can be chosen with one cell per row and one cell per column such that no pair of cells contains the same number? I have tried for several hours but I can't get it right. I think random permutations can help here, but I am not sure.
The standard probabilistic approach would be the following: For each $i$, calculate the probability that both $i$s are in the permutation, provided that they are not already in the same row/column (if they are, the probability is zero, of course). This gives $\dfrac 1 {n(n-1)}$, since there are $n(n-1)$ possible selections from the rows containing $i$ and only one that gives the two $i$s. Sum these probabilities for all $i$ to get an upper bound for the expected number of pairs in a randomly chosen permutation. This gives $\dfrac{n}{2(n-1)}$. Observe that this expected value is smaller than $1$, which implies that there has to be at least one permutation without a pair of the same values.
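A quick Monte Carlo sketch of the expectation argument; the random board layout below is just one arbitrary instance:

```python
import random

def average_pairs(n, trials=20000):
    # write each number from 0..n^2/2 - 1 into two randomly chosen cells
    cells = list(range(n * n))
    random.shuffle(cells)
    board = [0] * (n * n)
    for num in range(n * n // 2):
        board[cells[2 * num]] = board[cells[2 * num + 1]] = num
    total = 0
    for _ in range(trials):
        perm = random.sample(range(n), n)        # a random column for each row
        chosen = [board[r * n + perm[r]] for r in range(n)]
        total += n - len(set(chosen))            # number of repeated values
    return total / trials

print(average_pairs(6))   # stays below the bound n/(2(n-1)) = 0.6
```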
{ "language": "en", "url": "https://math.stackexchange.com/questions/144191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A strange characterization of continuity The problem I'm going to post, may appear a bit routine at first sight but it is not so! Suppose that a,b are two real numbers and $f:(a,b)\rightarrow \mathbb{R}$ satisfies: $f((c,d))$ is a bounded open interval for EVERY subinterval $(c,d)$ of $(a,b)$. Can we conclude that $f$ is continuous on $(a,b)$ then? A little thought will reveal that the problem is equivalent to asking whether we can conclude that $f$ is monotone on $(a,b)$, under the same hypotheses. But I am finding it as much impossible to prove that $f$ is continuous, as to find a discontinuous counterexample, satisfying the hypotheses!!
This answer is based on the answer Brian M. Scott gave at the link Siminore mentioned. Take the interval $(0,1)$. Let $\equiv$ be the equivalence relation on $\mathbb{R}$ given by $x\equiv y$ if and only if $x-y\in\mathbb{Q}$. Each equivalence class is countable and $|(0,1)|=|\mathbb{R}|=|\mathbb{R}^\mathbb{N}|$, so there is a bijection $f:\big\{[x]:x\in\mathbb{R}\big\}\to(0,1)$. Let $\pi:x\mapsto[x]$ be the canonical projection. Then for every open interval $(a,b)$, we have $f\circ\pi\big((a,b)\big)=(0,1)$, which also shows that $f\circ\pi$ is not continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Are there infinitely many primes of the form $n^2 - d$, for any $d$ not a square? Clearly, for $d$ a square number, there is at most one prime of the form $n^2 - d$, since $n^2-d=(n+\sqrt d)(n-\sqrt d)$. What about when $d$ is not a square number?
There's a host of conjectures that assert that there are infinitely many primes of the form $n^2-d$ for fixed non-square $d$. For example Hardy and Littlewood's Conjecture F, the Bunyakovsky Conjecture, Schinzel's Hypothesis H and the Bateman-Horn Conjecture. As given by Shanks 1960, a special case of Hardy and Littlewood's Conjecture F, related to this question, is as follows: Conjecture: If $a$ is an integer which is not a negative square, $a \neq -k^2$, and if $P_a(N)$ is the number of primes of the form $n^2+a$ for $1 \leq n \leq N$, then \[P_a(N) \sim \frac{1}{2} h_a \int_2^N \frac{dn}{\log n}\] where $h_a$ is the infinite product \[h_a=\prod_{w\,\nmid\, a} \left(1-\left(\frac{-a}{w}\right) \frac{1}{w-1}\right)\] taken over all odd primes $w$ which do not divide $a$, where $(-a/w)$ is the Legendre symbol. The integral is (up to multiplicative/additive constants) the logarithmic integral function.
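A quick empirical count, as a sketch only (it assumes SymPy is available and, of course, illustrates the conjectured abundance without proving anything):

```python
from sympy import isprime

# count primes of the form n^2 - d for a few non-square values of d
for d in (2, 3, 5, 6, 7):
    count = sum(1 for n in range(3, 10**4) if isprime(n * n - d))
    print(d, count)
```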
{ "language": "en", "url": "https://math.stackexchange.com/questions/144334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Are there any situations where you can only memorize rather than understand? I realize that you should understand theorems, equations etc. rather than just memorizing them, but are there any circumstances where memorizing is necessary? (I have always considered math a logical subject, where every fact can be deduced using logic rather than through memory.)
If there are any, I'm certain that humans haven't discovered them. If there truly is a situation in mathematics where you can only memorize and there is no logical reasoning you can use to get there, then there comes an interesting question: how do we know that this formula is true? Everything we know about in mathematics we know from reasoning. If we can't figure it out with reasoning, then we simply can't figure out what there is to memorize.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Original author of an exponential generating function for the Bernoulli numbers? The Bernoulli numbers were being used long before Bernoulli wrote about them, but according to Wikipedia, "The Swiss mathematician Jakob Bernoulli (1654–1705) was the first to realize the existence of a single sequence of constants B0, B1, B2, ... which provide a uniform formula for all sums of powers." Did he publish an exponential generating function as such for the series and was he the first to do so? If not, who published it first? According to Wikipedia again, Abraham de Moivre was the first to introduce the concept of generating functions per se in 1730. This question is motivated by MSE-Q143499. Let me try to make the question clearer so that responses won't involve the multitude of uses or properties of the Bernoulli numbers, which are fascinating, but not what I'm addressing by this question. Who first published $$\displaystyle\frac{t}{e^t-1}=\sum B_n \frac{t^n}{n!}$$ as an encoding of the Bernoulli numbers?
I highly recommend the book Sources in the Development of Mathematics: Infinite Series and Products from the Fifteenth to the Twenty-first Century, by Ranjan Roy (Cambridge University Press, 2011). Got it from the library a couple of weeks ago. It has almost $1000$ pages of treasures. On page $23$, Roy writes: "In the early $1730$'s, Euler found a generating function for the Bernoulli numbers, apparently unaware that Bernoulli had already defined these numbers in a different way." (The generating function is the one that you gave.) Roy is very careful about sources, so it seems very likely that Euler was first. The only explicit reference to a paper that I found is to De Seriebus Quibusdam Considerationes (Euler), apparently written in $1740$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Is it wrong to tell children that $1/0 =$ NaN is incorrect, and should be $∞$? I was on the tube and overheard a dad questioning his kids about maths. The children were probably about 11 or 12 years old. After several more mundane questions he asked his daughter what $1/0$ evaluated to. She stated that it had no answer. He asked who told her that and she said her teacher. He then stated that her teacher had "taught it wrong" and it was actually $∞$. I thought the dad's statement was a little irresponsible. Does that seem like a reasonable attitude? I suppose this question is partly about morality.
The usual meaning of $a/b=c$ is that $a=b\cdot c$. Since for $b=0$ we have $0\cdot x=0$ for any $x$, there simply isn't any $c$ such that $1=0\cdot c$, unless we throw the properties of arithmetic to the garbage (i.e. adding new elements which do not respect laws like $a(x+y)=ax+ay$). So "undefined" or "not a number" is the most correct answer possible. However: It is sometimes useful to break the laws of arithmetic by adding new elements such as "$\infty$" and even defining $1/0=\infty$. It is very context-dependent and assumes everyone understands what's going on. This is certainly not something to be stated to kids as some general law of Mathematics. Also: I believe that the common misconception of "$1/0=\infty$" comes from elementary Calculus, where the following equality holds: $\lim_{x\to 0^+}\frac{1}{x} = \infty$. This cannot be simplified to a statement like $\frac{1}{0}=\infty$ because of two problems:

* $\lim_{x\to 0^-}\frac{1}{x} = -\infty$, so the "direction" of the limit matters; moreover, because of this, $\lim_{x\to 0}\frac{1}{x}$ is undefined.
* By writing $\lim f(x)=\infty$ we don't really mean that something gets the value "$\infty$" - in Calculus $\infty$ is what we call "potential infinity" - it describes a property of a function (namely, that for every $N>0$ we can find $x_N$ such that $f(x_N)>N$ and $x_N$ is in some specific neighborhood).
{ "language": "en", "url": "https://math.stackexchange.com/questions/144526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50", "answer_count": 12, "answer_id": 0 }
Constructing Young subgroups of $S_4$ Given the symmetric group $S_4$ and a subgroup $H\subset S_4$, I want to construct a Young subgroup $Y\subset S_4$ such that $Y$ is minimal, meaning that there is no other Young subgroup $Y'$ with $H\subset Y'\subset Y$. I understand the problem in two ways: 1) Consider a partition of the set $\{1,2,3,4\}$. A Young subgroup is the direct product of the symmetric groups on the components of the partition. So the Young subgroup $Y$ must contain all combinations of all permutations on all subsets forming the partition. So we need only to complete the list of permutations that are missing in $H$ to obtain $Y$. For example, consider $H=\{1,(12)(34)\}$; then this subgroup "corresponds" to the partition $\{1,2\}\cup\{3,4\}$, and to obtain $Y$ we just need to add to $H$ the permutations $(12)$ and $(34)$. 2) The second way to do this is to write every permutation in $H$ as a product of disjoint cycles, then to write every cycle as a product of transpositions. Then $Y$ will be the subgroup generated by all these transpositions. For example, if $H=\{1,(134),(143)\}$ then we write $H=\{1,(13)(14),(14)(13)\}$ and then $Y$ is just the subgroup generated by $(13)$ and $(14)$, which is isomorphic to $S_3$. Are my thoughts correct, and is there another way to solve the problem?
It might be easier to think in terms of the orbits of the actions of $H$ and $Y$ on the set $\{1,2,3,4\}$, as the $Y$-orbits must contain the $H$-orbits (since $H$ is a subgroup of $Y$). The $Y$-orbits are simply the partition of $\{1,2,3,4\}$ which defines $Y$. This obviously agrees with your answer to the first example. For the second example you wrote there that $Y$ would be the group generated by $(13)$ and $(14)$, namely the subgroup of $S_4$ that fixes 2. This agrees with the observation that the orbits under $H$ are $\{1,3,4\}$ and $\{2\}$, and this set of orbits is the partition that defines $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
One sided limit of an increasing function defined on an open interval Let $f:(a,b)\to \mathbb{R}$ be a strictly increasing function. Does the limit $\lim_{x\to a^+}f(x)$ necessarily exist, and is it a real number or $-\infty$? If so, is it true that $\ell=\lim_{x\to a^+}f(x)\le f(x) \ \ \forall x\in (a,b)$? Please provide proofs.
As for the second statement, I think there is a more precise result. Below is my attempt; please check whether it is correct. Lemma. Let $f\colon D\subset\mathbb{R}\to\mathbb{R}.$ Suppose that $a, b\in D$ with $a<b, $ and the point $a$ is a right-sided limit point of $D.$ Suppose $f$ is strictly increasing on $(a,b)\cap D.$ If $f$ has a right limit $f(a+0)$ at $a,$ then \begin{gather*} f(x)>f(a+0),\qquad\forall x\in (a,b)\cap D. \end{gather*} Proof: Let $A=f(a+0):=\lim\limits_{x\to a+0}f(x).$ If $A=-\infty,$ then there is nothing to show. So we assume that $A\in\mathbb{R}.$ We prove the statement by contradiction. If there exists $x_1\in (a,b)\cap D$ such that $f(x_1)<A,$ then, for $\epsilon_1=\frac{A-f(x_1)}{2},$ there exists $\delta_1>0$ with $\delta_1<x_1-a,$ such that for all $x\in (a,b)\cap D,$ if $0<x-a<\delta_1,$ then \begin{gather*} f(x)>A-\epsilon_1=A-\frac{A-f(x_1)}{2}=\frac{A+f(x_1)}{2}>f(x_1). \end{gather*} Pick $y_1\in (a,a+\delta_1)\cap D$ (such a $y_1$ exists since $a$ is a right-sided limit point of $D$); then $a<y_1<a+\delta_1<a+(x_1-a)=x_1,$ and $f(y_1)>f(x_1).$ But this contradicts the fact that $f$ is strictly increasing on $(a,b)\cap D.$ If instead there exists $x_2\in (a,b)\cap D$ such that $f(x_2)=A,$ then pick arbitrarily $x_3\in (a,x_2)\cap D$; by the strict increasingness of $f$ on $(a,b)\cap D,$ we have $f(x_3)<f(x_2)=A.$ By what we have just proved, a contradiction also occurs. Therefore we have proved that $f(x)>f(a+0)$ for all $x\in (a,b)\cap D.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/144637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Multiple choice question from general topology Let $X =\mathbb{N}\times \mathbb{Q}$ with the subspace topology of $\mathbb{R}^2$ and $P = \{(n, \frac{1}{n}): n\in \mathbb{N}\}$. Then in the space $X$, pick out the true statements: 1 $P$ is closed but not open 2 $P$ is open but not closed 3 $P$ is both open and closed 4 $P$ is neither open nor closed What can we say about the boundary of $P$ in $X$? I always struggle to figure out subspace topologies, though I am aware of the basic definition and theory of the subspace topology. I need a bit of explanation here about how to decide whether $P$ is open or closed in the subspace topology. Thanks for your care
For example, 1 is true. You can see that $P$ is not open by looking at an $\varepsilon$-ball around any point $p = (n, \frac1n )$ of $P$: because $\mathbb Q$ is dense in $\mathbb R$, there will be a rational $q \ne \frac1n$ such that $(n,q)$ is inside the ball, so the ball is not contained in $P$, and hence $P$ is not open. Also, it's closed: think about why its complement is open. (Around any point of the complement you can make an $\varepsilon$-ball that doesn't intersect $P$.) Now that we have established that 1 is true, we know that 2, 3 and 4 are false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Could someone please explain what this question is asking? I have some trouble understanding the following question: Suppose we have the 1st fundamental form $E \, dx^2+2F \, dx \, dy+G \, dy^2$ and we are given that for any $u,v$, the curves given by $x=u, y=v$ are geodesics. Show that ${\partial \over \partial y}\left({F\over \sqrt{G}}\right)={\partial \sqrt{G}\over \partial x}$. I don't understand what "$x=u, y=v$ are geodesics" means. So the path is a constant point?? That doesn't make sense! Can anybody understand what it is saying?
Remember that $(u,v)$ is a local system of coordinates on a neighborhood in your surface. If a first fundamental form is given, then implicitly a system of local coordinates (a diffeomorphism) is given as well. "$x=u$" and "$y=v$" mean that you are looking at the images of the coordinate lines: for each fixed value $u$ the curve $x=u$ (with $y$ varying), and for each fixed value $v$ the curve $y=v$ (with $x$ varying). Each of these image curves, taken separately, must be a geodesic. It makes no sense to look at the image of a single point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Subgroups of finite solvable groups. Solvable? I am attempting to prove that, given a non-trivial normal subgroup $N$ of a finite group $G$, we have that $G$ is solvable iff both $N$, $G/N$ are solvable. I was able to show that if $N,G/N$ are solvable, then $G$ is; also, that if $G$ is solvable, then $G/N$ is. I am stuck showing that $N$ must be solvable if $G$ is. It seems intuitive that any subgroup of a finite solvable group is necessarily solvable, as well. Is this true in general? For normal subgroups? How can I go about proving this result? Edit: By solvable, I mean we have a finite sequence $1=G_0\unlhd ... \unlhd G_k=G$ such that $G_{j+1}/G_j$ is abelian for each $1\leq j<k$.
With your definition, to show that if $G$ is solvable then $N$ is solvable, let $$ 1 =G_0 \triangleleft G_1\triangleleft\cdots\triangleleft G_{m-1}\triangleleft G_m=G$$ be such that $G_{i+1}/G_{i}$ is abelian for each $i$. Note: We do not need to assume that $N$ is normal; the argument below works just as well for any subgroup of $G$, not merely normal ones. Let $N_i = G_i\cap N$. Note that since $G_i\triangleleft G_{i+1}$, then $N_i\triangleleft N_{i+1}$: indeed, if $x\in N_i$ and $y\in N_{i+1}$, then $yxy^{-1}\in N$ (since $x,y\in N$) and $yxy^{-1}\in G_i$ (since $G_i\triangleleft G_{i+1}$), hence $yxy^{-1}\in N\cap G_i = N_i$. So we have a sequence $$1 = N_0\triangleleft N_1\triangleleft\cdots\triangleleft N_{m} = N.$$ Thus, it suffices to show that $N_{i+1}/N_i$ is abelian for each $i$. Note that $N_{i} = N\cap G_i = (N\cap G_{i+1})\cap G_i = N_{i+1}\cap G_i$. Now we simply use the isomorphism theorems: $$\frac{N_{i+1}}{N_i} =\frac{N_{i+1}}{N_{i+1}\cap G_i} \cong \frac{N_{i+1}G_i}{G_i} \leq \frac{G_{i+1}}{G_i}$$ since $N_{i+1},G_i$ are both subgroups of $G_{i+1}$ and $G_i$ is normal in $G_{i+1}$, so $N_{i+1}G_i$ is a subgroup of $G_{i+1}$. But $G_{i+1}/G_i$ is abelian by assumption, hence $N_{i+1}/N_i$ is (isomorphic to) a subgroup of an abelian group, hence also abelian, as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/144812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Show that $\forall n \in \mathbb{N} \left ( \left [(2+i)^n + (2-i)^n \right ]\in \mathbb{R} \right )$ Show that $\forall n \in \mathbb{N} \left ( \left [(2+i)^n + (2-i)^n \right ]\in \mathbb{R} \right )$ My Trig is really rusty and weak so I don't understand the given answer: $(2+i)^n + (2-i)^n $ $= \left ( \sqrt{5} \right )^n \left (\cos n\theta + i \sin n\theta \right ) + \left ( \sqrt{5} \right )^n \left (\cos (-n\theta) + i \sin (-n\theta) \right ) $ $= \left ( \sqrt{5} \right )^n \left ( \cos n\theta + \cos (-n\theta) + i \sin n\theta + i \sin (-n\theta) \right ) $ $= \left ( \sqrt{5} \right )^n 2\cos n\theta$ Could someone please explain this?
Hint $\ $ Scaling the equation by $\sqrt{5}^{\:-n}$ and using Euler's $\: e^{{\it i}\:\!x} = \cos(x) + {\it i}\: \sin(x),\ $ it becomes $$\smash[b]{\left(\frac{2+i}{\sqrt{5}}\right)^n + \left(\frac{2-i}{\sqrt{5}}\right)^n} =\: (e^{{\it i}\:\!\theta})^n + (e^{- {\it i}\:\!\theta})^n $$ But $$\smash[t]{ \left|\frac{2+i}{\sqrt{5}}\right| = 1\ \Rightarrow\ \exists\:\theta\!:\ e^{{\it i}\:\!\theta} = \frac{2+i}{\sqrt{5}} \ \Rightarrow\ e^{-{\it i}\:\!\theta} = \frac{1}{e^{i\:\!\theta}} = \frac{\sqrt{5}}{2+i} = \frac{2-i}{\sqrt 5}}$$ Remark $\ $ This is an example of the method that I describe here, of transforming the equation into a simpler form that makes obvious the laws or identities needed to prove it. Indeed, in this form, the only nontrivial step in the proof becomes obvious, viz. for complex numbers on the unit circle, the inverse equals the conjugate: $\: \alpha \alpha' = 1\:\Rightarrow\: \alpha' = 1/\alpha.$
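A one-line numeric spot-check of the claim for small $n$ (floating point, so the imaginary part is tested up to rounding):

```python
# (2+i)^n + (2-i)^n should be real (in fact an integer) for every n
for n in range(1, 15):
    z = (2 + 1j) ** n + (2 - 1j) ** n
    assert abs(z.imag) < 1e-9
    print(n, round(z.real))
```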
{ "language": "en", "url": "https://math.stackexchange.com/questions/144901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to solve this inequation Given two real numbers $0<a<1$ and $0<\delta<1$, I want to find a positive integer $i$ (the smaller the better) such that $$\frac{a^i}{i!} \le \delta.$$
Here is a not-very-good answer. Let $i$ be the result of rounding $\log\delta/\log a$ up to the nearest integer. Then $i\ge\log\delta/\log a$, so $i\log a\le\log\delta$ (remember, $\log a\lt0$), so $a^i\le\delta$, so $a^i/i!\le\delta$.
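Since $a<1$, the sequence $a^i/i!$ is strictly decreasing in $i$, so the smallest such $i$ can also be found by a direct scan; a sketch:

```python
def smallest_i(a, delta):
    """Smallest positive integer i with a**i / i! <= delta, for 0 < a < 1."""
    i, term = 1, a          # term = a**i / i!
    while term > delta:
        i += 1
        term *= a / i       # a**i/i! = (a**(i-1)/(i-1)!) * a/i
    return i

print(smallest_i(0.9, 1e-6))   # 10, much smaller than ceil(log(delta)/log(a)) = 132
```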
{ "language": "en", "url": "https://math.stackexchange.com/questions/144959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Path lifting theorem http://www.maths.manchester.ac.uk/~jelena/teaching/AlgebraicTopology/PathLifting.pdf I'm trying to generalize this theorem, and I was wondering about the proof given here (and similarly in Hatcher's): can you replace $S^{1}$ with a general space $X$? The circle seems not to be that important in the proof, so if you choose the subsets of a general $X$, can't you make the same deduction? Is $S^{1}$ really that important for the proof? I can't see how this generalizes to a general covering $\tilde{X}\to X$.
We have the following generalization (this can e.g. be found in Munkres "Topology", 3rd edition): All spaces are assumed to be connected, locally path connected. Lemma 79.1 (The general lifting lemma): Let $p: E\to X$ be a covering map; let $p(e_0) = x_0$. Let $f: Y\to X$ be a continuous map with $f(y_0) = x_0$. The map $f$ can be lifted to a map $\tilde f: Y \to E$ such that $\tilde f(y_0) = e_0$ if and only if $$ f_\ast (\pi_1(Y,y_0)) \subset p_\ast(\pi_1(E,e_0))$$ Furthermore, if such a lifting exists, it is unique. Sketch of Proof: The "only if" direction follows immediately from $$f_\ast(\pi_1(Y,y_0)) = (p_\ast\circ \tilde f_\ast)(\pi_1(Y,y_0)) \subset p_\ast(\pi_1(E,e_0))$$ For the other direction, we really don't have much of a choice in the definition of $\tilde f$: Given $y\in Y$, choose a path $\gamma: [0,1]\to Y$ from $y_0$ to $y$. Then $f\circ \gamma$ is a path from $x_0$ to $f(y)$ and we want $$f\circ \gamma = p\circ (\tilde f\circ \gamma)$$ So $\tilde f\circ \gamma$ must necessarily be a lifting of $f\circ \gamma$, starting at $e_0$. In particular, we must have $$\tilde f(y) = \tilde f(\gamma(1)) = \text{endpoint of the lifting of $f\circ \gamma$}$$ Now local path-connectedness is used to show that this map is in fact continuous and the condition on the image of the fundamental group is used to prove that $\tilde f$ is well-defined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/145028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
How are the "real" spherical harmonics derived? How were the real spherical harmonics derived? The complex spherical harmonics: $$ Y_l^m( \theta, \phi ) = K_l^m P_l^m( \cos{ \theta } ) e^{im\phi} $$ But the "real" spherical harmonics are given on this wiki page as $$ Y_{lm} = \begin{cases} \frac{1}{\sqrt{2}} ( Y_l^m + (-1)^mY_l^{-m} ) & \text{if } m > 0 \\ Y_l^m & \text{if } m = 0 \\ \frac{1}{i \sqrt{2}}( Y_l^{-m} - (-1)^mY_l^m) & \text{if } m < 0 \end{cases} $$ (Note: $Y_{lm}$ is the real spherical harmonic function and $Y_l^m$ is the complex-valued version defined above.) What's going on here? Why are the real spherical harmonics defined this way and not simply as $ \Re{( Y_l^m )} $?
Why are the real spherical harmonics defined this way and not simply as $\Re{(Y_l^m)}$? Well yes it is! The real spherical harmonics can be rewritten as followed: $$Y_{lm} = \begin{cases} \sqrt{2}\Re{(Y_l^m)}=\sqrt{2}N_l^m\cos{(m\phi)}P_l^m(\cos \theta) & \text{if } m > 0 \\ Y_l^0=N_l^0P_l^0(\cos \theta) & \text{if } m = 0 \\ \sqrt{2}\Im{(Y_l^m)}=\sqrt{2}N_l^{|m|}\sin{(|m|\phi)}P_l^{|m|}(\cos \theta) & \text{if } m < 0 \end{cases} $$ (Some texts denote lowercase $y$ for real harmonics). If you look at the table, the negative $m$ is the imaginary part of the positive $m$ (but not vice versa).
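A numeric spot-check of the $m>0$ case (a sketch; it assumes SciPy's `sph_harm(m, l, theta, phi)` convention, where `theta` is the azimuthal and `phi` the polar angle, with the Condon-Shortley phase included):

```python
import numpy as np
from scipy.special import sph_harm

l, m = 3, 2
theta, phi = 0.7, 1.1          # arbitrary azimuthal and polar angles

Y_pos = sph_harm(m, l, theta, phi)
Y_neg = sph_harm(-m, l, theta, phi)

combo = (Y_pos + (-1) ** m * Y_neg) / np.sqrt(2)   # definition from the question
closed = np.sqrt(2) * Y_pos.real                   # claimed closed form
print(np.isclose(combo.imag, 0), np.isclose(combo.real, closed))  # True True
```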
{ "language": "en", "url": "https://math.stackexchange.com/questions/145080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
Complex eigenvalues of real matrices The matrix $$A = \begin{pmatrix} 40 & -29 & -11\\ -18 & 30 & -12 \\ 26 & 24 & -50 \end{pmatrix}$$ has a certain complex number $l\neq0$ as an eigenvalue. Which of the following must also be an eigenvalue of $A$: $$l+20, l-20, 20-l, -20-l?$$ It seems that complex eigenvalues occur in conjugate pairs. It is clear that the determinant of the matrix is zero (each row sums to $0$), so $0$ seems to be one of the eigenvalues. Please suggest.
Hint: The trace of the matrix is $40+30+(-50)$. As you observed, $0$ is an eigenvalue.
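Checking numerically (a quick sketch):

```python
import numpy as np

A = np.array([[40, -29, -11],
              [-18, 30, -12],
              [26, 24, -50]])
print(A.sum(axis=1))            # each row sums to 0, so 0 is an eigenvalue
print(np.linalg.eigvals(A))     # 0 and a conjugate pair l, 20 - l summing to the trace 20
```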
{ "language": "en", "url": "https://math.stackexchange.com/questions/145135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Examples of mathematical induction What are the best examples of mathematical induction available at the secondary-school level---totally elementary---that do not involve expressions of the form $\bullet+\cdots\cdots\cdots+\bullet$ where the number of terms depends on $n$ and you're doing induction on $n$? Postscript three years later: I see that I phrased this last part in a somewhat clunky way. I'll leave it there but rephrase it here: --- that are not instances of induction on the number of terms in a sum?
I like the ones that involve division. For instance, prove that $7 \mid 11^n-4^n$ for $n=1, 2, 3, \cdots$ Another example would be perhaps proving that $$(3+\sqrt{5})^n+(3-\sqrt{5})^n$$ is an even integer for all natural numbers $n$.
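Both examples are easy to spot-check by machine before proving them by induction; a sketch (the recurrence $a_n = 6a_{n-1} - 4a_{n-2}$ used below follows because $3\pm\sqrt5$ are the roots of $x^2-6x+4=0$):

```python
# 7 divides 11^n - 4^n
for n in range(1, 21):
    assert (11 ** n - 4 ** n) % 7 == 0

# a_n = (3+sqrt(5))^n + (3-sqrt(5))^n via the integer recurrence, a_0 = 2, a_1 = 6
a, b = 2, 6
for n in range(2, 21):
    a, b = b, 6 * b - 4 * a
    assert b % 2 == 0            # an even integer at every step
print("both claims hold for all n <= 20")
```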
{ "language": "en", "url": "https://math.stackexchange.com/questions/145189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 19, "answer_id": 13 }
Area of a Spherical Triangle from Side Lengths I am currently working on a proof involving finding bounds for the f-vector of a simplicial $3$-complex given an $n$-element point set in $\mathbb{E}^3$, and (for a reason I won't explain) am needing to find the answer to the following embarrassingly easy (I think) question. What is the area of a spherical triangle with all side lengths equal to $\pi / 3$? I have tried using L'Huilier's Theorem, but the first term $$\tan(\frac{1}{2}s)=\tan(\frac{1}{2}(\frac{3\pi}{3}))=\tan(\frac{\pi}{2})$$ is undefined, where $s$ is the semiperimeter (perimeter divided by 2). Any ideas for how to compute this?
As an equilateral spherical triangle gets arbitrarily small, its angles all approach $\pi/3$. So one might say there is a degenerate spherical triangle whose angles are in fact all $\pi/3$ and whose area is $0$.
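For the triangle actually asked about (sides $\pi/3$), note that the semiperimeter is $s=\pi/2$, so L'Huilier's first factor is $\tan(s/2)=\tan(\pi/4)=1$ rather than $\tan(\pi/2)$, and the formula goes through; a quick numerical sketch, cross-checked with the spherical law of cosines:

```python
from math import tan, atan, acos, sqrt, pi

a = b = c = pi / 3
s = (a + b + c) / 2                      # semiperimeter = pi/2

# L'Huilier: tan(E/4)^2 = tan(s/2) tan((s-a)/2) tan((s-b)/2) tan((s-c)/2)
t = tan(s / 2) * tan((s - a) / 2) * tan((s - b) / 2) * tan((s - c) / 2)
print(4 * atan(sqrt(t)))                 # spherical excess E = area, about 0.5513

# spherical law of cosines: cos A = (cos a - cos b cos c) / (sin b sin c) = 1/3
A = acos(1 / 3)
print(3 * A - pi)                        # same value
```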
{ "language": "en", "url": "https://math.stackexchange.com/questions/145255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving that surjective endomorphisms of Noetherian modules are isomorphisms and a semi-simple and noetherian module is artinian. I am revising for my Rings and Modules exam and am stuck on the following two questions: $1.$ Let $M$ be a noetherian module and $ \ f : M \rightarrow M \ $ a surjective homomorphism. Show that $f : M \rightarrow M $ is an isomorphism. $2$. Show that if a semi-simple module is noetherian then it is artinian. Both these questions seem like they should be fairly straightforward to prove but I cannot seem to solve them.
$1.$ Let $\,f:M\to M\,$ be an epimorphism of $\,R\,$-modules, with $\,M\,$ Noetherian. i) Show that $\,M\,$ can be made into an $\,R[t]\,$-module by defining $\,tm:=f(m)\,,\,\forall m\in M$ ii) Putting $\,I:=\langle t\rangle=tR[t]\,$ , show that $\,MI=M\,$ iii) Apply Nakayama's Lemma to deduce that there exists $\,1+g(t)t\in 1+I\,$ s.t. $\,(1+g(t)t)M=0$ iv) Finally, take $\,y\in \ker f\,$ and show $\,y=0\,$ by applying (iii)
{ "language": "en", "url": "https://math.stackexchange.com/questions/145310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 6, "answer_id": 2 }
The topology on $\mathbb{R}$ with sub-basis consisting of all half open intervals $[a,b)$. Let $\tau$ be the topology on $\mathbb{R}$ with sub-basis consisting of all half open intervals $[a,b)$. How would you find the closure of $(0,1)$ in $\tau$? I'm trying to find the smallest closed set containing $(0,1)$ in that topology, but then I realised I don't fully understand what an 'open' interval is. Is an open interval in this topology one that is half open like in the sub-basis?
Hints: (i) Show that $(-\infty,b)$ is in $\tau$ for every $b$. (ii) Show that $[a,+\infty)$ is in $\tau$ for every $a$. (iii) Deduce that $[a,b)$ is closed in $\tau$ for every $a\lt b$. (iv) Show that $a$ is in the closure of $(a,b)$ with respect to $\tau$ for every $a\lt b$. (v) Conclude that the closure of $(a,b)$ is $[a,b)$ for every $a\lt b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/145393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Solve equations using the $\max$ function How do you solve equations that involve the $\max$ function? For example: $$\max(8-x, 0) + \max(272-x, 0) + \max(-100-x, 0) = 180$$ In this case, I can work out in my head that $x = 92.$ But what is the general procedure to use when the number of $\max$ terms is arbitrary? Thanks for the help; here is a Python solution for the problem if anyone is interested.

```python
def solve_max(y, a):
    # solve sum(max(y_i - x, 0) for y_i in y) == a for x
    y = sorted(y)
    for idx, y1 in enumerate(y):
        y_left = y[idx:]               # terms that are still positive
        y_sum = sum(y_left)
        x = (y_sum - a) / len(y_left)
        if x <= y1:
            return x

print(solve_max([8, 272, -100], 180))  # 92.0
```
Check each of the possible cases. In your equations the "critical" points (i. e. the points where one of the max's switches) are $8$, $272$ and $-100$. For $x \le -100$ your equation reads \[ 8-x + 272 - x + (-100-x) = 180 \iff 180 - 3x = 180 \] which doesn't have a solution in $(-\infty, -100]$. For $-100 \le x \le 8$, we have \[ 8-x + 272 - x = 180 \iff 280 - 2x = 180 \] and the only solution $50\not\in [-100, 8]$. For $8 \le x \le 272$ we have \[ 272-x = 180 \iff x = 92 \] so here we have a solution. And finally for $x \ge 272$ the equation reads \[ 0 = 180 \] so no more solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/145458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Metrization of the cofinite topology Can you help me please with this question? Let $X$ be a non-empty set with the cofinite topology. Is $\left ( X,\tau_{\operatorname{cofinite}} \right ) $ a metrizable space? Thanks a lot!
* If $X$ is finite, the cofinite topology is the discrete one, which is metrizable, for example using the distance $d$ defined by $d(x,y):=\begin{cases}0&\mbox{ if }x=y,\\ 1&\mbox{ otherwise} .\end{cases}$
* If $X$ is infinite, it's not a Hausdorff space. Indeed, let $x,y\in X$, and assume that $U$ and $V$ are two disjoint open subsets of $X$ containing respectively $x$ and $y$; then $U=X\setminus F_1$ and $V=X\setminus F_2$ where $F_1$ and $F_2$ are finite. Hence $\emptyset =X\cap F_1^c\cap F_2^c=X\setminus (F_1\cup F_2)$, which is a contradiction since $X$ is infinite. Since every metric space is Hausdorff, the space is not metrizable in this case. (Note that each singleton is still closed in such a space, so it is $T_1$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/145516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What is the $x(t)$ function for $\dot{v} = a v^2 + bv + c$? How to solve $$\frac{dv}{dt} = av^2 + bv + c$$ to obtain $x(t)$, where $a$, $b$ and $c$ are constants, $v$ is velocity, $t$ is time and $x$ is position. The boundaries for the first integral are $v_0$, $v_t$ and $0$, $t$, and the boundaries for the second integral are $0$, $x_{max}$ and $0$, $t$.
"Separating variables" means writing $${dv\over av^2+bv +c}=dt\ .\qquad(1)$$ The next step depends on the values of $a$, $b$, $c$. Assuming $a>0$ one has $$a v^2+bv +c={1\over a}\Bigl(\bigl(av +{b\over 2}\bigr)^2+{4ac -b^2\over 4}\Bigr)\ ,$$ so that after a linear substitution of the dependent variable $v$ the equation $(1)$ transforms into one of $${du\over 1+u^2}= p\ dt,\quad{du\over u^2}=p\ dt,\quad {du\over u^2-1}=p \ dt\ ,$$ depending on the value of $4ac-b^2$. Up to scalings and shifts the first form implies $$\arctan u = t\quad{\rm or}\quad u=\tan t\ .$$ It follows that (again neglecting scalings and shifts) $$x(t)=\int_{t_0}^t v(t)\ dt=\int _{t_0}^t \tan t\ dt=-\log(\cos t)\Bigr|_{t_0}^t\ .$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/145570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Choosing squares from a grid so that no two chosen squares are in the same row or column How many ways can 3 squares be chosen from a 5x5 grid so that no two chosen squares are in the same row or column? Why is this not simply $\binom{5}{3}\cdot\binom{5}{3}$? I figured that there were $\binom{5}{3}$ ways to choose $3$ different "$x$-axis" coordinates and then same for the "$y$-axis". Thus I would multiply them. Thanks
Here's the hard way to do the problem: inclusion-exclusion. There are $25\choose3$ ways to choose 3 squares from the 25. Now you have to subtract the ways that have two squares in the same row or column. There are 10 ways to choose the row/column, $5\choose2$ ways to choose the two squares in the row/column, and 23 choices remaining for the third square, so all up you must subtract $10\times{5\choose2}\times23$. Now you have to put back in all the configurations you subtracted out twice. These are the ones with two in the same row and two in the same column, of which there are $25\times4\times4$, and also the ones in which there are three in the same row/column, of which there are $10\times{5\choose3}$. The ones with 3 in a column were counted in once, then subtracted out 3 times, so they have to be put back in twice. So the answer is $${25\choose3}-10\times{5\choose2}\times23+25\times4\times4+2\times10\times{5\choose3}$$ which comes out to 600.
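The easy way, for comparison: choose the three rows ($\binom53$ ways), the three columns ($\binom53$ ways), and then one of $3!$ ways to match rows to columns, giving $10\cdot10\cdot6=600$; this matching factor is exactly what the plain $\binom53\binom53$ misses. A brute-force check:

```python
from itertools import combinations

squares = [(r, c) for r in range(5) for c in range(5)]
count = sum(1 for trio in combinations(squares, 3)
            if len({r for r, c in trio}) == 3 and len({c for r, c in trio}) == 3)
print(count)   # 600
```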
{ "language": "en", "url": "https://math.stackexchange.com/questions/145667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove that $\log(n) = O(\sqrt{n})$ How to prove $\log(n) = O(\sqrt{n})$? How do I find the $c$ and the $n_0$? I understand that to start, I need to find something that $\log(n)$ is smaller than, but I'm having a hard time coming up with the example.
$\log(x) < \sqrt{x}$ for all $x>0$ because $\log(x) /\sqrt{x}$ has a single maximum value $2/e<1$ (at $x=e^2$).
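A quick numeric confirmation of the stated maximum (a sketch; the log-spaced grid is arbitrary):

```python
import math

# scan log(x)/sqrt(x); the maximum should be 2/e, attained at x = e^2
ratio, x_best = max((math.log(x) / math.sqrt(x), x)
                    for x in (math.exp(k / 100) for k in range(-400, 2000)))
print(ratio, 2 / math.e)    # both about 0.7358
print(x_best, math.e ** 2)  # both about 7.389
```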
{ "language": "en", "url": "https://math.stackexchange.com/questions/145739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 4, "answer_id": 2 }
Cantor set: Lebesgue measure and uncountability I have to prove two things. First, that the Cantor set has Lebesgue measure $0$. If we regard the supersets $C_n$, where $C_0 = [0,1]$, $C_1 = [0,\frac{1}{3}] \cup [\frac{2}{3},1]$ and so on, then by construction $C_n$ consists of $2^n$ intervals, each of length $3^{-n}$. The Lebesgue measure of each such interval is $\lambda ( [x, x + 3^{-n}]) = 3^{-n}$, therefore the measure of $C_n$ is $\frac{2^n}{3^n} = e^{(\ln(2)-\ln(3)) n }$, which goes to zero as $n \rightarrow \infty$. But does this prove it? The other thing I have to prove is that the Cantor set is uncountable. I found that I should construct a surjective function to $[0,1]$, but I'm totally puzzled about how to do this. Thanks for the help
I decided I would answer the latter part of your question in a really pretty way. You can easily create a surjection from the Cantor set to $[0,1]$ by using binary numbers. Binary numbers are simply numbers represented in base $2$, so that the only digits that can be used are $0$ and $1$. If you take any number in the cantor set, which are numbers in base $3$ written exclusively with the digits $0$ and $2$, you can make the rule that if you see a $2$, turn it into a one. You have now mapped the Cantor set to the interval $[0,1]$ in binary (and therefore in any base), which proves uncountability.
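A tiny sketch of this digit map (the finite digit list is a truncation; a genuine Cantor point has infinitely many ternary digits):

```python
def cantor_to_unit_interval(ternary_digits):
    """Map a Cantor-set point, given by base-3 digits that are all 0 or 2,
    into [0,1] by reading each 2 as the binary digit 1."""
    return sum((d // 2) / 2 ** (k + 1) for k, d in enumerate(ternary_digits))

print(cantor_to_unit_interval([2, 0, 2, 2]))   # 0.1011 in binary = 0.6875
```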
{ "language": "en", "url": "https://math.stackexchange.com/questions/145803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
A finite-dimensional vector space cannot be covered by finitely many proper subspaces? Let $V$ be a finite-dimensional vector space, and let $V_i$ be a proper subspace of $V$ for every $1\leq i\leq m$, for some integer $m$. In my linear algebra text, I've seen the result that $V$ can never be covered by $\{V_i\}$, but I don't know how to prove it correctly. I've written down my false proof below: First we may prove the result when each $V_i$ is a codimension-1 subspace. Since $codim(V_i)=1$, we can pick a vector $e_i\in V$ s.t. $V_i\oplus\mathcal{L}(e_i)=V$, where $\mathcal{L}(v)$ is the linear subspace spanned by $v$. Then we choose $e=e_1+\cdots+e_m$; I want to show that none of the $V_i$ contains $e$, but I failed. Could you tell me a simple and correct proof of this result? Ideas of proof are also welcome~ Remark: As @Jim Conant mentioned, this is possible over a finite field, so I assume the base field of $V$ to be a number field.
This question was asked on MathOverflow several years ago and received many answers: please see here. One of these answers was mine. I referred to this expository note, which has since appeared in the January 2012 issue of the American Mathematical Monthly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/145869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 0 }
Compact complex surfaces with $h^{1,0} < h^{0,1}$ I am looking for an example of a compact complex surface with $h^{1,0} < h^{0,1}$. The bound that $h^{1,0} \leq h^{0,1}$ is known. In the Kähler case, $h^{p,q}=h^{q,p}$, so the example cannot be (for example) a projective variety or a complex torus. Does anyone know of such an example? Thanks.
For a compact Kähler manifold, $h^{p,q} = h^{q, p}$, so the odd Betti numbers are even. For a compact complex surface, the only potentially non-zero odd Betti numbers are $b_1$ and $b_3$; note that by Poincaré duality, they are equal. So if $X$ is a compact complex surface, and $X$ is Kähler, then $b_1$ is even. Surprisingly, the converse is also true. That is: Let $X$ be a compact complex surface. Then $X$ is Kähler if and only if $b_1$ is even. In particular, in complex dimension two, the existence of a Kähler metric is a purely topological question. The above statement was originally a conjecture of Kodaira and was first shown by using the Enriques-Kodaira classification, with the final case of $K3$ surfaces done by Siu in $1983$. In $1999$, Buchdahl and Lamari independently gave direct proofs which did not rely on the classification. As $b_1 = h^{1,0} + h^{0,1}$ for compact complex surfaces (see Barth, Hulek, Peters, & Van de Ven Compact Complex Surfaces (second edition), Chapter IV, Theorem $2.7$ (i)), and $h^{p,q} = h^{q,p}$ for compact Kähler manifolds, we can restate the above result as follows: Let $X$ be a compact complex surface. Then $X$ is Kähler if and only if $h^{1,0} = h^{0,1}$. For any compact complex surface, one can show that the map $H^{1,0}_{\bar{\partial}}(X) \to H^{0,1}_{\bar{\partial}}(X)$, $\alpha \mapsto [\bar{\alpha}]$ is well-defined and injective, so $h^{1,0} \leq h^{0,1}$. Combining with the above result, we have the following: Let $X$ be a compact complex surface. Then $h^{1,0} < h^{0,1}$ if and only if $X$ is non-Kähler. So surfaces of class VII (in particular Hopf surfaces), and Kodaira surfaces satisfy the condition $h^{1, 0} < h^{0,1}$. Note, in the non-Kähler case, we actually have $h^{1,0} = h^{0,1} - 1$ (see Chapter IV, Theorem $2.7$ (iii) of the aforementioned book).
{ "language": "en", "url": "https://math.stackexchange.com/questions/145920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Prove or disprove: $(\mathbb{Q}, +)$ is isomorphic to $(\mathbb{Z} \times \mathbb{Z}, +)$? Prove or disprove: $\mathbb{Q}$ is isomorphic to $\mathbb{Z} \times \mathbb{Z}$. I mean the groups $(\mathbb Q, +)$ and $(\mathbb Z \times \mathbb Z,+).$ Is there an isomorphism?
Yet another way to see the two cannot be isomorphic as additive groups: if $a,b\in\mathbb{Q}$, and neither $a$ nor $b$ are equal to $0$, then $\langle a\rangle\cap\langle b\rangle\neq\{0\}$; that is, any two nontrivial subgroups intersect nontrivially. To see this, write $a=\frac{r}{s}$, $b=\frac{u}{v}$, with $r,s,u,v\in\mathbb{Z}$, $\gcd(r,s)=\gcd(u,v)=1$. Then $(su)a = (rv)b\neq 0$ lies in the intersection, so the intersection is nontrivial. However, in $\mathbb{Z}\times\mathbb{Z}$, the elements $(1,0)$ and $(0,1)$ are both nontrivial, but $\langle (1,0)\rangle\cap\langle (0,1)\rangle = \{(0,0)\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 7, "answer_id": 4 }
Terminology question What do you call a set of points with the following property? For any point and any number $\epsilon$, you can find another point in the set that is less than $\epsilon$ away from the first point. An example would be the rationals, because for any $\epsilon$ there is some positive rational number smaller than it, and you can just add that number to your point to get the required second point. Thanks!
Turning my comment into an answer: Such a set is said to be dense-in-itself. The term perfect is also sometimes used, but I prefer to avoid it, since it has other meanings in general topology. One can also describe such a set by saying that it has no isolated points. All of this terminology applies to topological spaces in general, not just to metric spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
sequence of decreasing compact sets In Royden (3rd edition, p. 192): Assertion 1: Let $K_n$ be a decreasing sequence of compact sets, that is, $K_{n+1} \subset K_n$. Let $O$ be an open set with $\bigcap_1^\infty K_n \subset O$. Then $K_n \subset O$ for some $n$. Assertion 2: From this, we can easily see that $\bigcap_1^\infty K_n$ is also compact. I know this is trivial if $K_1$ is $T_2$ (Hausdorff). But is it true if we assume only $T_0$ or $T_1$? Any counterexample is greatly appreciated.
Here's a $T_1$ space for which Assertion 2 fails. Take the set of integers $\mathbb Z$, and say that a set is open iff it is either a subset of the negative integers or else is cofinite. Then let $K_n$ be the complement of $\{0, 1, \ldots, n\}$. Each $K_n$ is compact, but $\bigcap_{n=1}^\infty K_n$ is the set of negative integers, which is open and noncompact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Relating Gamma and factorial function for non-integer values. We have $$\Gamma(n+1)=n!,\ \ \ \ \ \Gamma(n+2)=(n+1)!$$ for integers, so if $\Delta$ is some real value with $$0<\Delta<1,$$ then $$n!\ <\ \Gamma(n+1+\Delta)\ <\ (n+1)!,$$ because $\Gamma$ is monotone there and so there is another number $f$ with $$0<f<1,$$ such that $$\Gamma(n+1+\Delta)=(1-f)\times n!+f\times(n+1)!.$$ How can we make this more precise? Can we find $f(\Delta)$? Or if we know the value $\Delta$, which will usually be the case, what $f$ will be a good approximation?
Asymptotically, as $n \to \infty$ with fixed $\Delta$, $$ f(n,\Delta) = \dfrac{\Gamma(n+1+\Delta)-\Gamma(n+1)}{\Gamma(n+2)-\Gamma(n+1)} = n^\Delta \left( \dfrac{1}{n} + \dfrac{\Delta(1+\Delta)}{2n^2} + \dfrac{\Delta(-1+\Delta)(3\Delta+2)(1+\Delta)}{24n^3} + \ldots \right) - \dfrac{1}{n} $$
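A quick numerical check of this expansion (a sketch; it uses $\Gamma(n+2)-\Gamma(n+1)=n\,\Gamma(n+1)$ and `lgamma` to keep the ratio of huge gamma values stable):

```python
import math

def f_exact(n, d):
    # (Gamma(n+1+d) - Gamma(n+1)) / (Gamma(n+2) - Gamma(n+1)),
    # with the denominator rewritten as n * Gamma(n+1)
    return (math.exp(math.lgamma(n + 1 + d) - math.lgamma(n + 1)) - 1) / n

def f_asym(n, d):
    return n**d * (1 / n + d * (1 + d) / (2 * n**2)
                   + d * (d - 1) * (3 * d + 2) * (1 + d) / (24 * n**3)) - 1 / n

for n in (10, 100, 1000):
    print(n, f_exact(n, 0.3), f_asym(n, 0.3))   # agreement improves with n
```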
{ "language": "en", "url": "https://math.stackexchange.com/questions/146336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What's the difference between $\mathbb{Q}[\sqrt{-d}]$ and $\mathbb{Q}(\sqrt{-d})$? Sorry to ask this, I know it's not really a maths question but a definition question, but Googling didn't help. When asked to show that elements in each are irreducible, is it the same?
The notation $\rm\:R[\alpha]\:$ denotes a ring-adjunction, and, analogously, $\rm\:F(\alpha)\:$ denotes a field adjunction. Generally if $\alpha$ is a root of a monic $\rm\:f(x)\:$ over a domain $\rm\:D\:$ then $\rm\:D[\alpha]\:$ is a field iff $\rm\:D\:$ is a field. The same is true for arbitrary integral extensions of domains. See this post for a detailed treatment of the quadratic case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Determine the conditional probability mass function of the size of a randomly chosen family containing 2 girls. Suppose that 15 percent of the families in a certain community have no children, 20 percent have 1, 35 percent have 2, and 30 percent have 3 children; suppose further that each child is equally likely (and independently) to be a boy or a girl. If a family is chosen at random from this community, then with $B$ the number of boys and $G$ the number of girls, determine the conditional probability mass function of the size of a randomly chosen family containing 2 girls. My attempt There are exactly three ways this can happen: 1) family has exactly 2 girls 2) family has 2 girls and 1 boy 3) family has all 3 girls The first one is pretty simple. Given that you are going to "select" exactly two children, find the probability that they are BOTH girls (it's a coin flip, so p = 50% = 0.5): $0.5^2 = 0.25$ So the probability that the family has exactly 2 girls is the probability that the family has exactly two children times the probability that those two children will be girls: $\frac{1}{4} \cdot 35\% = 8.75\%$ Now find the probability that, given the family has exactly 3 children, exactly two are girls. Now you flip 3 times but only need to "win" twice - this is a binomial experiment. There are 3 choose 2 = 3 ways to have exactly two girls: 1st, 2nd, or 3rd is a boy... interestingly the probability of having any particular permutation is just $0.5^3 = 1/8$ (because it's still $0.5 \times 0.5$ for two girls, then $0.5$ for one boy). So the chance of exactly 2 girls is: $\frac{3}{8}$ Now find the probability for having exactly 3 girls... that's easy, there's only one way, you just have all 3 girls, probability is just $\frac{1}{8}$. Now, add these up: $\frac{3}{8} + \frac{1}{8} = \frac{4}{8} = \frac{1}{2}$ So now use the percent of families with exactly 3 children to find this portion of the probability: $\frac{1}{2} \cdot 30\% = 15\%$ Hence, add the two probabilities... here it is in full detail: $$\mathbb{P}(\text{contains 2 girls}) = \mathbb{P}(\text{2 children}) \cdot \mathbb{P}(\text{2 girls}\mid\text{2 children}) + \mathbb{P}(\text{3 children}) \cdot \mathbb{P}(\text{2 or 3 girls}\mid\text{3 children})$$ $\frac{1}{4} \cdot 35\% + 30\% \times \left(\frac{3}{8} +\frac{ 1}{8}\right)$ $8.75\% + 15\% = 23.75\%$ Is my attempt correct?
It’s correct as far as it goes, but it’s incomplete. You’ve shown that $23.75$% of the families have at least two girls, but that doesn’t answer the question. What you’re to find is probability mass function of the family size given that the family has two girls. In other words, you want to calculate $$\Bbb P(B+G=x\mid G\ge 2)$$ for the various possible values of $x$. This is very easy and obvious for $x=0$ and $x=1$, so I’ll skip to $x=2$. You calculated that $8.75$% of all the families have exactly two girls and no boys. What fraction of the families with at least two girls is this? It’s $$\frac{8.75}{23.75}=\frac7{19}\;,$$ so the conditional probability that a randomly chosen family has exactly two children given that it has at least two girls is $7/19$: $\Bbb P(B+G=2\mid G\ge 2)=7/19$. From here you should be able to finish it, I think.
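A short exact enumeration confirming these numbers and finishing the pmf (a sketch using exact fractions):

```python
from fractions import Fraction as F
from math import comb

size_prob = {0: F(15, 100), 1: F(20, 100), 2: F(35, 100), 3: F(30, 100)}

def p_at_least_two_girls(s):
    # children are independently girls with probability 1/2
    return sum(F(comb(s, g), 2 ** s) for g in range(2, s + 1))

joint = {s: p * p_at_least_two_girls(s) for s, p in size_prob.items()}
total = sum(joint.values())                 # P(G >= 2) = 19/80 = 23.75%
pmf = {s: j / total for s, j in joint.items()}
print(total, pmf)                           # sizes 2 and 3 get 7/19 and 12/19
```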
{ "language": "en", "url": "https://math.stackexchange.com/questions/146486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
"Weierstrass preparation" of $\mathbb{C}[[X,Y]]$ In Lang's book "Algebra", theorem 9.2, it said that suppose $f\in \mathbb{C}[[X,Y]]$, then by some conditions imposed to $f$, $f$ can be written as a product of a polynomial $g\in \mathbb{C}[[X]][Y]$ and a unit $u$ in $\mathbb{C}[[X,Y]]$. It suggests the following claim is not true in general. Let $f\in \mathbb{C}[[X,Y]]$, then there exists a $g\in \mathbb{C}[X,Y]$ and a unit $u\in \mathbb{C}[[X,Y]]$ such that $f=gu$. I would like to find a counter-example. Thanks.
It is known that there are transcendental power series $h(X)\in \mathbb C[[X]]$ over $\mathbb C[X]$. Note that $Xh(X)$ is also transcendental. Let $$f(X,Y)=Y-Xh(X)\in\mathbb C[[X,Y]].$$ Suppose $f=gu$ with $g$ polynomial and $u$ invertible. Consider the ring homomorphism $\phi: \mathbb C[[X,Y]]\to \mathbb C[[X]]$ which maps $X$ to $X$ and $Y$ to $Xh(X)$. Applying this homomorphism to $f=gu$, we get $$0 = g(X, Xh(X))\phi(u), \quad \phi(u)\in \mathbb C[[X]]^*.$$ So $g(X, Xh(X))=0$. As $Xh(X)$ is transcendental over $\mathbb C[X]$, and $g(X,Y)\in \mathbb C[X][Y]$, this implies that $g(X,Y)=0$. Hence $f(X,Y)=0$, absurd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/146566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to prove $\mathcal{l}(D+P) \leq \mathcal l{(D)} + 1$ Let $X$ be an irreducible curve, and define $\mathcal{L}(D)$ as usual for $D \in \mathrm{Div}(X)$. Define $l(D) = \mathrm{dim} \ \mathcal{L}(D)$. I'd like to show that for any divisor $D$ and point $P$, $\mathcal{l}(D+P) \leq \mathcal l{(D)} + 1$. Say $D = \sum n_i P_i$. I can prove this provided $P$ is not any of the $P_i$, by considering the map $\lambda : \mathcal{L}(D) \to k$, $f \mapsto f(P)$. This map has kernel $\mathcal{L}(D-P)$, and rank-nullity gives the result. But if $P$ is one of the $P_i$, say $P=P_j$ then I'm struggling. Any help would be appreciated. Thanks
Here is an elementary formulation, without sheaves. Let $t\in Rat(X)$ be a uniformizing parameter at $P$ (that is, $t$ vanishes with order $1$ at $P$) and let $n_P\in \mathbb Z$ be the coefficient of $D=\sum n_QQ$ at $P$. You then have an evaluation map $$\lambda: \mathcal L(D+P)\to k:f\mapsto (t^{n_P +1}\cdot f)(P)$$ and you can conclude with the rank-nullity theorem, or in more sophisticated terminology with the exact sequence of $k$-vector spaces $$ 0\to \mathcal L(D)\to \mathcal L(D+P)\stackrel {\lambda}{\to} k $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/146686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Scheduling 12 teams competing at 6 different events I have a seemingly simple question. There are 12 teams competing in 6 different events. Each event sees two teams compete at a time. Is there a way to arrange the schedule so that no two teams meet twice and no team repeats an event? Thanks. Edit: Round 1: All 6 events happen at the same time. Round 2: All 6 events happen at the same time. And so on until Round 6.
A solution to the specific problem is here (each column lists the six pairings held at that event):

    Event 1    Event 2    Event 3    Event 4    Event 5    Event 6
     1 -  2    11 -  1     1 -  3     6 -  1    10 -  1     1 -  9
     3 -  4     2 -  3     4 -  2     2 - 11     2 -  9    10 -  2
     5 -  6     4 -  5     5 -  7     7 -  4     3 - 11     4 -  8
     7 -  8     6 -  7     8 -  6     3 - 10     4 -  6     7 -  3
     9 - 10     8 -  9    10 - 11     9 -  5     8 -  5    11 -  5
    11 - 12    12 - 10     9 - 12    12 -  8     7 - 12    12 -  6

which came from the following webpage: http://www.crowsdarts.com/roundrobin/sched12.html
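A small sanity check of the two stated constraints (a sketch I added; the table is transcribed into Python and tested):

```python
events = [
    [(1, 2), (3, 4), (5, 6), (7, 8), (9, 10), (11, 12)],   # Event 1
    [(11, 1), (2, 3), (4, 5), (6, 7), (8, 9), (12, 10)],   # Event 2
    [(1, 3), (4, 2), (5, 7), (8, 6), (10, 11), (9, 12)],   # Event 3
    [(6, 1), (2, 11), (7, 4), (3, 10), (9, 5), (12, 8)],   # Event 4
    [(10, 1), (2, 9), (3, 11), (4, 6), (8, 5), (7, 12)],   # Event 5
    [(1, 9), (10, 2), (4, 8), (7, 3), (11, 5), (12, 6)],   # Event 6
]
pairs = [frozenset(m) for ev in events for m in ev]
assert len(set(pairs)) == 36               # no two teams ever meet twice
for ev in events:                          # each team appears at each event exactly once
    assert sorted(t for m in ev for t in m) == list(range(1, 13))
print("schedule satisfies both constraints")
```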
{ "language": "en", "url": "https://math.stackexchange.com/questions/146763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Given an integral symplectic matrix and a primitive vector, is their product also primitive? Given a matrix $A \in Sp(k,\mathbb{Z})$, and a column $k$-vector $g$ that is primitive (that is, $g \neq m r$ for any integer $m$ with $|m| > 1$ and any integer column $k$-vector $r$), why does it follow that $Ag$ is also primitive? Can we take $A$ from a larger space than the space of integral symplectic matrices?
Suppose $\,Ag\,$ is non-primitive; then $\,Ag=mr\,$ for some integer $m$ with $|m|>1$ and some integer vector $r$. Since $A\in Sp(k,\mathbb{Z})$ has determinant $1$, its inverse $A^{-1}$ is again an integer matrix, so $\,g=mA^{-1}r\,$ with $A^{-1}r$ an integer vector, which means $\,g\,$ is not primitive. (Note that the argument only uses that $A$ and $A^{-1}$ are both integral, so it works for any $A$ in the larger group $GL(k,\mathbb{Z})$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/146822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is solving non-linear recurrence relations "hopeless"? I came across a non-linear recurrence relation I want to solve, and most of the places I look for help will say things like "it's hopeless to solve non-linear recurrence relations in general." Is there a rigorous reason or an illustrative example as to why this is the case? It would seem to me that the correct response would be "we just don't know how to solve them," or "there is no solution using elementary functions," but there might be a solution in the form of, say, an infinite product or a power series or something. Just for completion, the recurrence relation I'm looking at is (slightly more than just non-linear, and this is a simplified version): $p_n = a_n b_n\\ a_n = a_{n-1} + c \\ b_n = b_{n-1} + d$ And $a_0 > 0, b_0 > 0, c,d$ fixed constants
Although it is possible to solve selected non-linear recurrence relations if you happen to be lucky, in general all sorts of peculiar and difficult-to-characterize things can happen. One example is found in chaotic systems. These are hypersensitive to initial conditions, meaning that the behavior after many iterations is extremely sensitive to tiny variations in the initial conditions, and thus any formula expressing the relationship will grow impossibly large. These recurrence equations can be amazingly simple, with $x_{n+1} = 4x_n(1-x_n)$, $x_0$ between 0 and 1, as one of the classic simple examples (i.e. merely quadratic; this is the logistic map). User @Did has already given the Mandelbrot set example--similarly simple to express, and similarly difficult to characterize analytically (e.g. by giving a finite closed-form solution). Finally, note that to solve every non-linear recurrence relation would imply that one could solve the Halting problem, since one could encode a program as initial states and the workings of the Turing machine as the recurrence relations. So it is certainly hopeless in the most general case. (Which highly restricted cases admit solutions is still an interesting question.)
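To see the hypersensitivity concretely, here is a tiny numerical sketch (mine, not from the answer above): two logistic-map orbits starting $10^{-10}$ apart decorrelate after a few dozen iterations, so no fixed-precision closed form in terms of $x_0$ could track them.

```python
x, y = 0.4, 0.4 + 1e-10          # two nearly identical initial conditions
for n in range(1, 61):
    x, y = 4*x*(1 - x), 4*y*(1 - y)
    if n % 10 == 0:
        print(n, abs(x - y))     # the gap roughly doubles each step until it is O(1)
```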
{ "language": "en", "url": "https://math.stackexchange.com/questions/147075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 0 }
Proving that a space is disconnected Show that a subspace $T$ of a topological space $S$ is disconnected iff there are nonempty sets $A,B \subset T$ such that $T= A\cup B$ and $\overline{A} \cap B = A \cap \overline{B} = \emptyset$. Where the closure is taken in $S$. I've used this relatively simple proof for many of these slightly different types of questions so I was wondering if it's the right method. It seems pretty good, except for the 'where the closure is taken in $S$ part'. $T$ is disconnected if and only if there exists a partition $A,B \subset T$ such that $T = A \cup B$ and $A \cap B = \emptyset$. Also, $A$ and $B$ are both open and closed therefore $\overline{A} = A$ and $\overline{B} = B$. The result follows.
It looks fine to me. In particular, since $\,A\subset \overline{A}\,$ we have $\,A\cap B\subset \overline{A}\cap B\,$, so if the right-hand intersection is empty then so is the left-hand one, which is the usual definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Birational map between product of projective varieties What is an example of a birational morphism between $\mathbb{P}^{n} \times \mathbb{P}^{m} \rightarrow \mathbb{P}^{n+m}$?
The subset $\mathbb A^n\times \mathbb A^m$ is open dense in $\mathbb P^n\times \mathbb P^m$ and the subset $\mathbb A^{n+m}$ is open dense in $\mathbb P^{n+m}$. Hence the isomorphism $\mathbb A^n\times \mathbb A^m\stackrel {\cong}{\to} \mathbb A^{n+m}$ is the required birational isomorphism. The astonishing point is that a rational map need only be defined on a dense open subset, which explains the uneasy feeling one may have toward the preceding argument, which may look like cheating. The consideration of "maps" which are not defined everywhere is typical of algebraic (or complex analytic) geometry, as opposed to other geometric theories like topology, differential geometry,...
{ "language": "en", "url": "https://math.stackexchange.com/questions/147190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Need to understand question about not-a-knot spline I am having some trouble understanding what the question below is asking. What does the given polynomial $P(x)$ have to do with deriving the not-a-knot spline interpolant for $S(x)$? Also, since not-a-knot is a boundary condition, what does it mean to derived it for $S(x)$? For general data points $(x_1, y_1), (x_2, y_2),...,(x_N , y_N )$, where $x_1 < x_2 < . .. < x_N$ and $N \geq 4$, Assume that S(x) is a cubic spline interpolant for four data points $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, and $(x_4, y_4)$ $$ S(x) = \begin{cases} p_1(x), & [x_1,x_2] \\ p_2(x), & [x_2,x_3] \\ p_3(x), & [x_3,x_4] \\ \end{cases} $$ Suppose $P (x) = 2x^3 + 5x +7$ is the cubic interpolant for the same four points $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, $(x_4, y_4)$ where $x_1 < x_2 < x_3 < x_4$ are knots. What is the not-a-knot spline interpolant $S(x)$?
If $S$ is a N-a-K spline with knots $x_1, \dotsc, x_4$ then it satisfies the spline conditions: twelve equations in twelve unknowns. (Twelve coefficients, six equations to prescribe values at the knots and six more to force continuity of derivatives up to third order at $x_2$ and $x_3$.) Since consecutive pieces agree together with their derivatives up to third order at the (two) inner knots, and two cubics that agree to third order at a point are identical, it follows that $p_1 = p_2 = p_3$. So $S$ and $P$ are both cubic Lagrange interpolation polynomials for the same four points, and therefore $S=P$.
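A quick numerical check (my sketch; it relies on the fact that SciPy's CubicSpline uses the not-a-knot condition by default, and the four knots are arbitrary choices):

```python
import numpy as np
from scipy.interpolate import CubicSpline

P = lambda x: 2*x**3 + 5*x + 7
x = np.array([0.0, 1.0, 2.5, 4.0])   # any four distinct knots
S = CubicSpline(x, P(x))             # bc_type='not-a-knot' is the default

t = np.linspace(0.0, 4.0, 101)
print(np.max(np.abs(S(t) - P(t))))   # ~1e-12: the spline reproduces P up to rounding
```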
{ "language": "en", "url": "https://math.stackexchange.com/questions/147257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Weierstrass Factorization Theorem Are there any generalisations of the Weierstrass Factorization Theorem, and if so where can I find information on them? I'm trying to investigate infinite products of the form $$\prod_{k=1}^\infty f(z)^{k^a}e^{g(z)},$$ where $g\in\mathbb{Z}[z]$ and $a\in\mathbb{N}$.
The Weierstrass factorization theorem provides a way of constructing an entire function with any prescribed set of zeros, provided the set of zeros does not have a limit point in $\mathbb{C}$. I know that this generalizes to being able to construct a function holomorphic on a region $G$ with any prescribed set of zeros in $G$, provided that the set of zeros does not have a limit point in $G$. These are theorems VII.5.14 and VII.5.15 in Conway's Functions of One Complex Variable. They lead to the (important) corollary that every meromorphic function on an open set $\Omega$ is a ratio of functions holomorphic on $\Omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Symmetric Matrix as the Difference of Two Positive Definite Symmetric Matrices Prove that any real symmetric matrix can be expressed as the difference of two positive definite symmetric matrices. I was trying to use the fact that real symmetric matrices are diagonalisable, but the confusion I am having is this: if $A$ is invertible and $B$ is a positive definite diagonal matrix, is $ABA^{-1}$ positive definite? Thanks for any help.
Let $ A^{*} $ be the adjoint of $ A $ and $S$ the positive square root of the positive self-adjoint operator $ S^{2}=A^{*}A $ (e.g. Rudin, ``Functional Analysis'', Mc Graw-Hill, New York 1973, p. 313-314, Th. 12.32 and 12.33) and write $ P=S+A $, $ N=S-A $. Let $n$ be the (finite) dimension of the space on which $A$ acts and $\lambda_{i}, i=1\dots n$ the eigenvalues of $A$. The eigenvalues of $S$ are $|\lambda_{i}|\ge0$, those of $P$ are $0$ if $\lambda_{i}\le0$ and $2|\lambda_{i}|$ if $\lambda_{i}>0$, and those of $N$ are $0$ if $\lambda_{i}\ge0$ and $2|\lambda_{i}|$ if $\lambda_{i}<0$. Thus $S$, $P$ and $N$ are positive definite according to the definition given by Rudin in Th. 12.32. $ A=S-N $ and $ A=(P-N)/2 $ are two possible decompositions of $A$ into the difference of two positive definite operators.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Change of Basis Calculation I've just been looking through my Linear Algebra notes recently, and while revising the topic of change of basis matrices I've been trying something: "Suppose that our coordinates are $x$ in the standard basis and $y$ in a different basis, so that $x = Fy$, where $F$ is our change of basis matrix, then any matrix $A$ acting on the $x$ variables by taking $x$ to $Ax$ is represented in $y$ variables as: $F^{-1}AF$ " Now, I've attempted to prove the above, is my intuition right? Proof: We want to write the matrix $A$ in terms of $y$ co-ordinates. a) $Fy$ turns our y co-ordinates into $x$ co-ordinates. b) pre multiply by $A$, resulting in $AFy$, which is performing our transformation on $x$ co-ordinates c) Now, to convert back into $y$ co-ordinates, pre multiply by $F^{-1}$, resulting in $F^{-1}AFy$ d) We see that when we multiply $y$ by $F^{-1}AF$ we perform the equivalent of multiplying $A$ by $x$ to obtain $Ax$, thus proved. Also, just to check, are the entries in the matrix $F^{-1}AF$ still written in terms of the standard basis? Thanks.
Without saying much, here is how I usually remember the statement and also the proof in one big picture: \begin{array}{ccc} x_{1},\dots,x_{n} & \underrightarrow{\;\;\; A\;\;\;} & Ax_{1},\dots,Ax_{n}\\ \\ \uparrow F & & \downarrow F^{-1}\\ \\ y_{1},\dots,y_{n} & \underrightarrow{\;\;\; B\;\;\;} & By_{1},\dots,By_{n} \end{array} And $$By=F^{-1}AFy$$
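A small numerical illustration of that commuting square (my own sketch with a made-up basis and map; as to the asker's final sub-question, the entries of $B=F^{-1}AF$ are coordinates with respect to the new basis, not the standard one):

```python
import numpy as np

F = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # columns = new basis vectors in standard coordinates
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])       # the map in standard coordinates
B = np.linalg.inv(F) @ A @ F     # the same map in y-coordinates

y = np.array([1.0, 2.0])         # a point in the new coordinates
x = F @ y                        # the same point in standard coordinates
print(np.allclose(F @ (B @ y), A @ x))   # True: both routes around the square agree
```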
{ "language": "en", "url": "https://math.stackexchange.com/questions/147441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove Continuous functions are borel functions Take $f: (a,b) \to \mathbb{R}$, continuous at every $x_{0}\in (a,b)$, and take $\Omega = (a,b)$, $F = (a,b) \cap B(\mathbb{R})$, where $B(\mathbb{R})$ is the Borel $\sigma$-algebra. Prove $f$ is a borel function by showing that $\{x \in(a,b): f(x) < c \}$ is in $F$. I know that continuity of $f$ means that for all $x_{0}\in(a,b)$ and all $\varepsilon>0$ there exists a $\delta>0$ such that $|x-x_{0}| < \delta$ implies $|f(x)-f(x_{0})| < \varepsilon$. But then I am stuck; how would I use these facts to help me? Thanks in advance for any help
To expand on Thomas E.'s comment: if $f$ is continuous, $f^{-1}(O)$ for $O$ open is again open. $\{x \in (a,b) : f(x) < c \} = f^{-1}((- \infty , c)) \cap (a,b)$. Now all you need to show to finish this proof is that $f^{-1}((- \infty , c))$ is in the Borel sigma algebra of $\mathbb R$. Edit (in response to comment) Reading your comment I think that your lecturer shows that $S := \{x \in (a,b) : f(x) < c \} $ is open. In a metric space, such as $\mathbb R$ with the Euclidean metric, a set $S$ is open if for all $x_0$ in $S$ you can find a $\delta > 0$ such that $(x_0-\delta, x_0+\delta) \subset S$. To show this, your lecturer picks an arbitrary $x_0 \in S$. Then by the definition of $S$ you know that $f(x_0) < c$. This means there exists an $\varepsilon > 0$ such that $f(x_0) + \varepsilon < c$, for $\varepsilon$ small enough. Since $f$ is continuous you know you can find a $\delta_1 > 0$ such that $x \in (x_0 - \delta_1, x_0 + \delta_1) $ implies that $|f(x_0) - f(x)| < \varepsilon$. Now you don't know whether $(x_0 - \delta_1, x_0 + \delta_1) $ is contained in $(a,b)$. But you know that since $(a,b)$ is open you can find a $\delta_2 > 0$ such that $(x_0 - \delta_2, x_0 + \delta_2) \subset (a,b)$. Now picking $\delta := \min (\delta_1, \delta_2)$ gives you that $(x_0 - \delta, x_0 + \delta) \subset S$ because $(f(x_0) - \varepsilon, f(x_0) + \varepsilon) \subset (-\infty , c)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that the order of an element in the group N is the lcm(order of the element in N's factors p and q) How would you prove that $$\operatorname{ord}_N(\alpha) = \operatorname{lcm}(\operatorname{ord}_p(\alpha),\operatorname{ord}_q(\alpha))$$ where $N=pq$ ($p$ and $q$ are distinct primes) and $\alpha \in \mathbb{Z}^*_N$ I've got this: The order of an element $\alpha$ of a group is the smallest positive integer $m$ such that $\alpha^m = e$ where $e$ denotes the identity element. And I guess that the right side has to be the $\operatorname{lcm}()$ of the orders from $p$ and $q$ because they are relatively prime to each other. But I can't put it together, any help would be appreciated!
Hint. There are natural maps $\mathbb{Z}^*_N\to\mathbb{Z}^*_p$ and $\mathbb{Z}^*_N\to\mathbb{Z}^*_q$ given by reduction modulo $p$ and reduction modulo $q$. This gives you a homomorphism $\mathbb{Z}^*_N\to \mathbb{Z}^*_p\times\mathbb{Z}^*_q$. What is the kernel of the map into the product? What is the order of an element $(x,y)$ in the product?
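For intuition, a brute-force check of the identity being proved (a sketch of mine; it assumes Python 3.9+ for math.lcm):

```python
from math import gcd, lcm

def order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) == 1)."""
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

p, q = 7, 11
N = p * q
for a in range(2, N):
    if gcd(a, N) == 1:
        assert order(a, N) == lcm(order(a, p), order(a, q))
print("ord_N(a) = lcm(ord_p(a), ord_q(a)) verified for N =", N)
```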
{ "language": "en", "url": "https://math.stackexchange.com/questions/147567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Differential equation problem I am looking at the differential equation: $$\frac{dR}{d\theta} + R = e^{-\theta} \sec^2 \theta.$$ I understand how to use $e^{\int 1 d\theta}$ to multiply both sides which gives me: (looking at left hand side of equation only) $$e^\theta \frac{dR}{d\theta} + e^\theta R.$$ However I am not sure how to further simplify the left hand side of the equation before integrating. Can someone please show me the process for doing that? Thanks kindly for any help.
We have $$\frac{d R(\theta)}{d \theta} + R(\theta) = \exp(-\theta) \sec^2(\theta)$$ Multiply throughout by $\exp(\theta)$, we get $$\exp(\theta) \frac{dR(\theta)}{d \theta} + \exp(\theta) R(\theta) = \sec^{2}(\theta)$$ Note that $$\frac{d (R(\theta) \exp(\theta))}{d \theta} = R(\theta) \exp(\theta) + \exp(\theta) \frac{d R(\theta)}{d \theta}.$$ Hence, we get that $$\frac{d(R(\theta) \exp(\theta))}{d \theta} = \sec^2(\theta).$$ Integrating it out, we get $$R(\theta) \exp(\theta) = \tan(\theta) + C$$ This gives us that $$R(\theta) = \exp(-\theta) \tan(\theta) + C \exp(-\theta).$$ EDIT I am adding what Henry T. Horton points out in the comments and elaborating it a bit more. The idea behind the integrating factor is to rewrite the left hand side as a derivative. For instance, if we have the differential equation in the form \begin{align} \frac{d R(\theta)}{d \theta} + M(\theta) R(\theta) & = N(\theta) & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(1) \end{align} the goal is to find the "integrating factor" $L(\theta)$ such that when we multiply the differential equation by $L(\theta)$, we can rewrite the equation as \begin{align} \frac{d (L(\theta)R(\theta))}{d \theta} & = L(\theta) N(\theta) & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(2) \end{align} The above is the key ingredient in the solving process. So the question is, how to determine the function $L(\theta)$? Since the above two equations are the same, except that the second equation is multiplied by $L(\theta)$, we can expand the second equation and divide by $L(\theta)$ to get the first equation. Expanding the second equation, we get that \begin{align} L(\theta) \frac{d R(\theta)}{d \theta} + \frac{d L(\theta)}{d \theta} R(\theta) & = L(\theta) N(\theta) & (3) \end{align} Dividing the third equation by $L(\theta)$, we get that \begin{align} \frac{d R(\theta)}{d \theta} + \frac{\frac{d L(\theta)}{d \theta}}{L(\theta)} R(\theta) & = N(\theta) & \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,(4) \end{align} Comparing this with the first equation, if we set $$\frac{\frac{d L(\theta)}{d \theta}}{L(\theta)} = M(\theta)$$ then the solution to the first and second equation will be the same. Hence, we need to find $L(\theta)$ such that $$\frac{dL(\theta)}{d \theta} = M(\theta) L(\theta).$$ Note that $\displaystyle L(\theta) = \exp \left(\int_0^{\theta} M(t)dt \right)$ will do the job and this is termed the integrating factor. Hence, once we the first equation in the form of the second equation, we can then integrate out directly to get $$ L(\theta) R(\theta) = \int_{\theta_0}^{\theta} L(t) N(t) dt + C$$ and thereby conclude that $$R(\theta) = \dfrac{\displaystyle \int_{\theta_0}^{\theta} L(t) N(t) dt}{L(\theta)} + \frac{C}{L(\theta)}$$ where the function $\displaystyle L(\theta) = \exp \left(\int_0^{\theta} M(t)dt \right)$ and $C$ is a constant.
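As a cross-check, SymPy's ODE solver reproduces the result above (a sketch I added, assuming SymPy is available):

```python
import sympy as sp

theta = sp.symbols('theta')
R = sp.Function('R')
ode = sp.Eq(R(theta).diff(theta) + R(theta), sp.exp(-theta) * sp.sec(theta)**2)
print(sp.dsolve(ode, R(theta)))
# expected: Eq(R(theta), (C1 + tan(theta))*exp(-theta))
```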
{ "language": "en", "url": "https://math.stackexchange.com/questions/147633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the intersection of these two planes. Find the intersection of $8x + 8y +z = 35$ and $x = \left(\begin{array}{cc} 6\\ -2\\ 3\\ \end{array}\right) +$ $ \lambda_1 \left(\begin{array}{cc} -2\\ 1\\ 3\\ \end{array}\right) +$ $ \lambda_2 \left(\begin{array}{cc} 1\\ 1\\ -1\\ \end{array}\right) $ So, I have been trying this two different ways. One is to convert the vector form to Cartesian (the method I have shown below) and the other was to convert the provided Cartesian equation into a vector equation and try to find the equation of the line that way, but I was having some trouble with both methods. Converting to Cartesian method: normal = $ \left(\begin{array}{cc} -4\\ 1\\ -3\\ \end{array}\right) $ Cartesian of x $=-4x + y -3z = 35$ Solving simultaneously with $8x + 8y + z = 35$, I get the point $(7, 0, -21)$ to be on both planes, i.e., on the line of intersection. Then taking the cross of both normals, I get a parallel vector for the line of intersection to be $(25, -20, -40)$. So, I would have the vector equation of the line to be: $ \left(\begin{array}{cc} 7\\ 0\\ -21\\ \end{array}\right) +$ $\lambda \left(\begin{array}{cc} 25\\ -20\\ -40\\ \end{array}\right) $ But my provided answer is: $ \left(\begin{array}{cc} 6\\ -2\\ 3\\ \end{array}\right)+ $ $ \lambda \left(\begin{array}{cc} -5\\ 4\\ 8\\ \end{array}\right) $ I can see that the directional vector is the same, but why doesn't the provided answer's point satisfy the Cartesian equation I found? Also, how would I do this if I converted the original Cartesian equation into a vector equation? Would I just equate the two vector equations and solve using an augmented matrix? I tried it a few times but couldn't get a reasonable answer, perhaps I am just making simple errors, or is this not the correct method for vector form?
It's just a simple sign mistake. The equation should be $$-4x+y-3z=-35$$ instead of $$-4x+y-3z=35.$$ Your solution will work fine then.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Product Measures Consider the case $\Omega = \mathbb R^6 , F= B(\mathbb R^6)$ Then the projections $X_i(\omega) = x_i$, $\omega=(x_1,x_2,\ldots,x_6) \in \Omega$, are random variables, $i=1,\ldots,6$. Fix $S_n(\omega) = S_0\, u^{\sum_{i=1}^n X_i(\omega)}\, d^{\,n-\sum_{i=1}^n X_i(\omega)}$, $\omega \in \Omega$, $n=1,\ldots,6$. Choose the measure P = $\bigotimes_{i=1}^6 Q$ on ($\Omega,F$) where $Q$ denotes the measure $p\delta_1 + q\delta_0 $ on $(\mathbb R, B(\mathbb R))$ for some $p,q>0$ such that $p+q = 1$. Show that the projections $X_i(\omega), i=1,\ldots,6$ are mutually independent. Since $X_i(\omega)$ is a random variable, am I correct in saying that to show their independence I must show that their sigma algebras $\sigma(X_i(\omega))$ are independent? How would I go about doing this? Thanks very much!
Yes, that is correct. You have to show that the $\sigma$-algebras $\sigma(X_1),\ldots,\sigma(X_6)$ are independent (note that I have omitted the $\omega$ in $\sigma(X_i(\omega))$, because that is not what you want). Now, recall that $$ \sigma(X_i)=\sigma(\{X_i^{-1}(A)\mid A\in \mathcal{B}(\mathbb{R})\}), $$ and hence it is enough to show that the collections $\{X_i^{-1}(A)\mid A\in \mathcal{B}(\mathbb{R})\}$, $i=1,\ldots,6$, are independent. Now, if $A\in\mathcal{B}(\mathbb{R})$ then $$ X_i^{-1}(A)=\{(x_1,\ldots,x_6)\in\mathbb{R}^6\mid x_i\in A\}=\mathbb{R}\times\cdots \times A\times\cdots\times\mathbb{R}, $$ where $A$ is on the $i$'th place. If $j\neq i$, then $$ X_i^{-1}(A)\cap X_j^{-1}(B)=\mathbb{R}\times\cdots \times A\times B\times\cdots\times\mathbb{R}, $$ where $A$ is on the $i$'th place and $B$ is on the $j$'th place. Now $$ P(X_i^{-1}(A)\cap X_j^{-1}(B))=Q(\mathbb{R})^{4}Q(A)Q(B)=Q(A)Q(B)=P(X_i^{-1}(A))P(X_j^{-1}(B)), $$ and hence the events are independent for every choice of $A,B\in\mathcal{B}(\mathbb{R})$. The same computation with more factors, $P\big(\bigcap_{i\in I}X_i^{-1}(A_i)\big)=\prod_{i\in I}Q(A_i)$ for any $I\subseteq\{1,\ldots,6\}$, gives mutual rather than merely pairwise independence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Computing conditional probability out of joint probability If I am given a complete table for the joint probability $$P(A,B,C,D,E)$$ how can I compute an arbitrary conditional probability out of it, for instance: $$P(A|B)$$
$$\mathbb{P}(A=a \vert B=b) = \frac{\mathbb{P}(A=a, B=b)}{\mathbb{P}(B=b)} = \frac{\displaystyle \sum_{c,d,e} \mathbb{P}(A=a, B=b, C=c, D=d, E=e)}{\displaystyle \sum_{a,c,d,e} \mathbb{P}(A=a, B=b, C=c, D=d, E=e)}$$
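In code, the two marginalisations are just sums over axes (a sketch I added, with a made-up $2^5$ joint table; NumPy broadcasting handles the division by $\mathbb{P}(B=b)$):

```python
import numpy as np

rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2, 2, 2))   # a made-up table P(A, B, C, D, E)
joint /= joint.sum()                  # normalise to a probability distribution

pAB = joint.sum(axis=(2, 3, 4))       # numerator:   P(A=a, B=b), summing out C, D, E
pB = pAB.sum(axis=0)                  # denominator: P(B=b)
pA_given_B = pAB / pB                 # P(A=a | B=b)
print(pA_given_B.sum(axis=0))         # [1. 1.]: each conditional distribution sums to 1
```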
{ "language": "en", "url": "https://math.stackexchange.com/questions/147831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why are zeros/roots (real) solutions to an equation of an n-degree polynomial? I can't really put a proper title on this one, but I seem to be missing one crucial point. Why do roots of a function like $f(x) = ax^2 + bx + c$ provide the solutions when $f(x) = 0$? What does that $y = 0$ mean for the solutions, the intercept at the $x$ axis? Why aren't the solutions at $f(x) = 403045$ or some other arbitrary $n$? What makes the x-intercept special?
One reason is that it makes solving an equation simple, especially if $f(x)$ is written only as the product of a few terms. This is because $a\cdot b = 0$ implies either $a = 0$ or $b = 0$. For example, take $f(x) = (x-5)(x+2)(x-2)$. To find the values of $x$ where $f(x) = 0$ we see that $x$ must be $5$, $-2$, or $2$. To find the values of $x$ so that $f(x) = 5$, well, we can't conclude anything immediately because having 3 numbers multiply to 5 (or any non-zero number) doesn't tell us anything about those 3 numbers. This makes 0 special.
{ "language": "en", "url": "https://math.stackexchange.com/questions/147932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 4 }
Graph decomposition What is the smallest $n \in \mathbb{N}$ with $ n \geq5$ such that the edge set of the complete graph $K_n$ can be partitioned (decomposed) into edge disjoint copies of $K_4$? I got a necessary condition for the decomposition: that $12 \mid n(n-1)$ and $3\mid n-1$, which implies $n \geq 13$. But can $K_{13}$ indeed be decomposed into edge disjoint copies of $K_4$?
The degree of $K_9$ is 8, whereas the degree of $K_4$ is 3. Since $3$ does not divide $8$, there is no $K_4$ decomposition of $K_9$. $K_n$ has a decomposition into edge-disjoint copies of $K_4$ whenever $n \equiv 1 \text{ or 4 } (\text{mod} 12)$, so the next smallest example after $K_4$ is $K_{13}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solving polynomial differential equation I have $a(v)$ where $a$ is acceleration and $v$ is velocity. $a$ can be described as a polynomial of degree 3: $$a(v) = \sum\limits_{i=0}^3 p_i v^i = \sum\limits_{i=0}^3 p_i \left(\frac{dd(t)}{dt}\right)^i,$$ where $d(t)$ is distance with respect to time. I want to solve (or approximate) this equation for $d(t)$, but it's been a few years since I graduated, and I seem to have forgotten most of my math skills :)
Since the acceleration is the derivative of velocity, you can write $$ \frac{\mathrm{d} v}{\mathrm{d} t} = p_0 + p_1 v + p_2 v^2 + p_3 v^3 $$ separating the variables we get the integral form $$ \int \frac{\mathrm{d}v}{p_0 + p_1 v + p_2 v^2 + p_3 v^3} = \int \mathrm{d}t = t + c$$ which we can integrate using partial fractions (also see this page). To summarise the method: Using the fundamental theorem of algebra we can factor the polynomial $$ p_0 + p_1 v + p_2 v^2 + p_3 v^3 = p_3 (v + \alpha_1)(v + \alpha_2)(v + \alpha_3) $$ where the $\alpha$s are the roots of the polynomial (assume they are distinct for now; repeated roots will require some additional work). Then we look for $\beta_1,\beta_2,\beta_3$ such that $$ \sum \frac{\beta_i}{v+\alpha_i} = \frac{1}{(v+\alpha_1)(v+\alpha_2)(v+\alpha_3)} $$ Expanding the sum you see that this requires $$\begin{align} \beta_1 + \beta_2 + \beta_3 &= 0 \\ \beta_1 (\alpha_2 + \alpha_3) + \beta_2(\alpha_1+\alpha_3) + \beta_3(\alpha_1 + \alpha_2) &= 0 \\ \beta_1 \alpha_2\alpha_3 + \beta_2\alpha_1\alpha_3 + \beta_3 \alpha_1\alpha_2 &= 1 \end{align}$$ which is a linear system that can be solved. This way we reduce our integral equation to $$ t + c = \frac{1}{p_3}\int \frac{\beta_1}{v + \alpha_1} + \frac{\beta_2}{v+\alpha_2} + \frac{\beta_3}{v+\alpha_3} \mathrm{d}v $$ where the $\alpha$ and $\beta$ coefficients are determined from the polynomial you started with. This gives us the implicit solution $$ p_3t + C = \beta_1 \ln (v+\alpha_1) + \beta_2 \ln(v+\alpha_2) + \beta_3 \ln(v+\alpha_3) $$ or $$ e^{p_3 t + C} = (v+\alpha_1)^{\beta_1}(v+\alpha_2)^{\beta_2}(v+\alpha_3)^{\beta_3} \tag{*}$$ However, this is generally where one gets stuck. To obtain $d$ from $v$ you have to integrate $v$ one more time. But now equation (*) may not have a nice analytic representation for $v$, nevermind a simple integral for you to obtain $d$. In those cases the best you can do is probably ask Mathematica. (Sometimes you may get lucky. For example, if your polynomial is a perfect cube, then you have $$ \int \frac{\mathrm{d}v}{p(v+q)^3} = -\frac{1}{2p(v+q)^2} + C $$ then you get that $$ v + q = \frac{1}{\sqrt{C - 2pt}} $$ which one can easily integrate to get $d = \int v~\mathrm{d}t$. But those depend on the special form of the coefficients $p_i$ which you have not specified.)
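If a closed form is out of reach, a numerical approximation of $d(t)$ is straightforward (my sketch; the coefficients $p_0,\dots,p_3$ and the time span are hypothetical placeholders):

```python
import numpy as np
from scipy.integrate import solve_ivp

p = [0.5, -0.1, -0.02, -0.001]   # hypothetical p_0 .. p_3

def rhs(t, y):
    """y = [d, v]:  d' = v  and  v' = a(v)."""
    d, v = y
    return [v, p[0] + p[1]*v + p[2]*v**2 + p[3]*v**3]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], dense_output=True, rtol=1e-8)
print(sol.sol(10.0))             # [distance, velocity] at t = 10
```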
{ "language": "en", "url": "https://math.stackexchange.com/questions/148131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Curve arc length parametrization definition I did some assignments related to curve arc length parametrization. But what I can't seem to find online is a formal definition of it. I've found procedures and ways to find a curve's equation by arc length parametrization, but I'm still missing a formal definition which I have to write in my assignment. I saw many links related to the topic http://homepage.smc.edu/kennedy_john/ArcLengthParametrization.pdf but they all seem too long and don't provide a short, concise definition. Could anyone help me writing a formal definition of curve arc length parametrization?
Suppose $\gamma:[a,b]\rightarrow {\Bbb R}^n$ is a smooth curve with $\gamma'(t) \not = 0$ for $t\in[a,b]$. Define $$s(t) = \int_a^t ||\gamma'(\xi)||\,d\xi$$ for $t\in[a,b]$. This function $s$ has a positive derivative, so it possesses a differentiable inverse. You can use it to get a unit-speed reparametrization of your curve.
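Numerically the same recipe looks like this (a sketch of mine, using an ellipse as the example curve): compute $s(t)$ by quadrature, invert it by interpolation, and resample.

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 2001)
gamma = np.stack([np.cos(t), 3*np.sin(t)], axis=1)      # an ellipse: not unit speed

speed = np.linalg.norm(np.gradient(gamma, t, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(0.5*(speed[1:] + speed[:-1])*np.diff(t))])

s_grid = np.linspace(0.0, s[-1], 200)
t_of_s = np.interp(s_grid, s, t)                        # numerical inverse of s(t)
curve = np.stack([np.cos(t_of_s), 3*np.sin(t_of_s)], axis=1)

steps = np.linalg.norm(np.diff(curve, axis=0), axis=1)
print(steps.std() / steps.mean())   # ~0: samples now equally spaced in arc length
```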
{ "language": "en", "url": "https://math.stackexchange.com/questions/148296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Given n raffles, what is the chance of winning k in a row? I was reading this interesting article about the probability of of tossing heads k times in a row out of n tosses. The final result was $$P = 1-\frac{\operatorname{fib}_k(n+2)}{2^n}\;,$$ where $\operatorname{fib}_k(n)$ is the $n$-th $k$-step Fibonacci number. However, I could not figure out how to adapt it to cases where the probability is not half but just some generic $p$. How do I approach that, and is there a generic solution for all $p$? To be clear, $n$ is the number of raffles, $p$ is the probability of winning a single one, and $k$ is the number of consecutive successes required. $P$ is the desired value.
We can proceed as follows. Let $p$ be the probability that we flip a head, and $q=1-p$ the probability that we flip tails. Let us search for the probability that we do NOT have at least $k$ heads in a row at some point after $n$ flips, which we will denote $P(n,k)$. Given a sequence of coin tosses (of length at least $k)$ which does not have $k$ heads in a row, the end of the sequence must be a tail followed by $i$ heads, where $0\leq i<k$. We will let $P(n,k,i)$ denote the probability that a string of length $n$ has no run of $k$ heads AND ends with exactly $i$ heads. Clearly $P(n,k)=\sum P(n,k,i)$. (Also note that we can still work with $n<k$ by treating a string of just $i$ heads as being in the class $(n,k,i)$). Suppose we have a series of $n$ coin flips, with no run of $k$ heads, and we are in the class $(n,k,i)$. What can happen if we flip the coin once more? If we get tails, we end up in class $(n+1,k,0)$, which happens with probability $q$, and if we get heads, we end up in the class $(n+1,k,i+1)$ which happens with probability $p$. The only caveat is that if $i=k-1$, our string will have $k$ heads in a row if the next flip is a head. From this, and using the fact that the $(n+1)$st flip is independent of the flips that came before, we can calculate: $$P(n+1,k,i+1)=pP(n,k,i) \qquad 0\leq i<k-1, $$ and so $$P(n,k,i)=p^iP(n-i,k,0) \qquad 0\leq i<k.$$ This could have been seen more directly by noting that the only way to be in the class $(n,k,i)$ is to have a string in class $(n-i,k,0)$ and to then have $i$ heads in a row, which happens with probability $p^i$. This means that we only need to use things of the form $P(n,k)$ and $P(n,k,0)$ in our calculations. By similar reasoning about how strings come about, we have $$P(n+1,k,0)=qP(n,k)=q\sum_{i=0}^{k-1} P(n,k,i)=q\sum_{i=0}^{k-1} p^iP(n-i,k,0).$$ This gives us a nice linear recurrence relation for $P(n,k,0)$ very similar to the one for the $k$-Fibonacci numbers, and dividing by $q$, we see that $P(n,k)$ satisfies the same recurrence. Adding the initial condition $P(n,k)=1$ if $n<k$ allows us to easily generate the values we need. Moreover, if we multiply our recurrence by $p^{-(n+1)}$, we get a slightly simpler recurrence for $Q(n+1,k,0)=p^{-(n+1)}P(n+1,k,0)$, namely $$Q(n+1,k,0)=\frac{q}{p} \sum_{i=0}^{k-1} Q(n-i,k,0).$$ When $p=q$, this becomes the recurrence for the $k$-Fibonacci numbers.
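The recurrence is easy to check against brute force (a sketch I added; the dynamic program below tracks the classes $(\cdot,k,i)$ from the answer):

```python
from itertools import product

def p_no_run(n, k, p):
    # f[i] = P(no k-run so far AND the sequence ends with exactly i heads)
    f = [1.0] + [0.0] * (k - 1)
    for _ in range(n):
        g = [0.0] * k
        g[0] = (1 - p) * sum(f)        # a tail puts us in class (., k, 0)
        for i in range(1, k):
            g[i] = p * f[i - 1]        # a head extends the trailing run
        f = g
    return sum(f)

def brute(n, k, p):
    total = 0.0
    for flips in product((0, 1), repeat=n):   # 1 = head (a win)
        run = best = 0
        for c in flips:
            run = run + 1 if c else 0
            best = max(best, run)
        if best < k:
            h = sum(flips)
            total += p**h * (1 - p)**(n - h)
    return total

print(p_no_run(10, 3, 0.3), brute(10, 3, 0.3))   # the two values agree
```

The probability of winning $k$ raffles in a row is then 1 - p_no_run(n, k, p), which for $p=1/2$ reproduces the Fibonacci formula quoted in the question.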
{ "language": "en", "url": "https://math.stackexchange.com/questions/148353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
cohomology of a finite cyclic group I apologize if this is a duplicate. I don't know enough about group cohomology to know if this is just a special case of an earlier post with the same title. Let $G=\langle\sigma\rangle$ where $\sigma^m=1$. Let $N=1+\sigma+\sigma^2+\cdots+\sigma^{m-1}$. Then it is claimed in Dummit and Foote that $$\cdots\mathbb{Z} G \xrightarrow{\;\sigma -1\;} \mathbb{Z} G \xrightarrow{\;N\;} \mathbb{Z} G \xrightarrow{\;\sigma -1\;} \cdots \xrightarrow{\;N\;} \mathbb{Z} G \xrightarrow{\;\sigma -1\;} \mathbb{Z} G \xrightarrow{\;\text{aug}\;} \mathbb{Z} \longrightarrow 0$$ is a free resolution of the trivial $G$-module $\mathbb{Z}$. Here $\mathbb{Z} G$ is the group ring and $\text{aug}$ is the augmentation map which sums coefficients. It's clear that $N( \sigma -1) = 0$ so that the composition of consecutive maps is zero. But I can't see why the kernel of a map should be contained in the image of the previous map. any suggestions would be greatly appreciated. Thanks for your time.
As (writing $n$ for the order of $\sigma$) $(\sigma-1)(c_0+c_1\sigma+\dots+c_{n-1}\sigma^{n-1})=(c_{n-1}-c_0)+(c_0-c_1)\sigma+\dots+(c_{n-2}-c_{n-1})\sigma^{n-1}$, the element $a=c_0+c_1\sigma+\dots+c_{n-1}\sigma^{n-1}$ is in the kernel of $\sigma-1$ iff all $c_i$'s are equal, i.e. iff $a=Nc$ for some $c\in\mathbb{Z}$. Similarly, $Na=(\sum c_i)N$, so here the kernel is given by the condition $\sum c_i=0$, but this means $a=(\sigma-1)(-c_0-(c_0+c_1)\sigma-(c_0+c_1+c_2)\sigma^2-\cdots)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is unique ergodicity important or interesting? I have a very simple motivational question: why do we care if a measure-preserving transformation is uniquely ergodic or not? I can appreciate that being ergodic means that a system can't really be decomposed into smaller subsystems (the only invariant pieces are really big or really small), but once you know that a transformation is ergodic, why do you care if there is only one measure which it's ergodic with respect to or not?
Unique ergodicity is defined for topological dynamical systems and it tells you that the time average of any function converges pointwise to a constant (see Walters: Introduction to Ergodic Theory, th 6.19). This property is often useful. Any ergodic measure preserving system is isomorphic to a uniquely ergodic (minimal) topological system (see http://projecteuclid.org/euclid.bsmsp/1200514225).
{ "language": "en", "url": "https://math.stackexchange.com/questions/148502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Density of the set $S=\{m/2^n| n\in\mathbb{N}, m\in\mathbb{Z}\}$ on $\mathbb{R}$? Let $S=\{\frac{m}{2^n}| n\in\mathbb{N}, m\in\mathbb{Z}\}$, is $S$ a dense set on $\mathbb{R}$?
Yes, it is: given an open interval $(a,b)$ (suppose $a$ and $b$ are positive), you can find $n\in\mathbb{N}$ such that $1/2^n<|b-a|$. Then consider the set: $$X=\{k\in \mathbb{N}:\ k/2^n \ge b\}$$ This is a subset of $\mathbb{N}$, so by the well-ordering principle $X$ has a least element $k_0$, and it is enough to take $(k_0-1)/2^n\in(a,b)$: indeed $(k_0-1)/2^n<b$ because $k_0-1\notin X$, and $(k_0-1)/2^n\ge b-1/2^n>b-(b-a)=a$. The same works if $a$, $b$, or both are negative (because $(a,b)$ is bounded).
{ "language": "en", "url": "https://math.stackexchange.com/questions/148558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What exactly is nonstandard about Nonstandard Analysis? I have only a vague understanding of nonstandard analysis from reading Reuben Hersh & Philip Davis, The Mathematical Experience. As a physics major I do have some education in standard analysis, but wonder what the properties are that the nonstandardness (is that a word?) is composed of. Is it more than defining numbers smaller than any positive real as the tag suggests? Can you give examples? Do you know of a gentle introduction to the nonstandard properties?
To complement the fine answers given earlier, I would like to address directly the question of the title: "What exactly is nonstandard about Nonstandard Analysis?" The answer is: "Nothing" (the name "nonstandard analysis" is merely a descriptive title of a field of research, chosen by Robinson). This is why some scholars try to avoid using the term in their publications, preferring to speak of "infinitesimals" or "analysis over the hyperreals", as for example in the following popular books: Goldblatt, Robert, Lectures on the hyperreals. Graduate Texts in Mathematics, 188. Springer-Verlag, New York, 1998 Vakil, Nader, Real analysis through modern infinitesimals. Encyclopedia of Mathematics and its Applications, 140. Cambridge University Press, Cambridge, 2011. More specifically, there is nothing "nonstandard" about Robinson's theory in the sense that he is working in a classical framework that a majority of mathematicians work in today, namely the Zermelo-Fraenkel set theory, and relying on classical logic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 3 }
Homotopic to a Constant I'm having a little trouble understanding several topics from algebraic topology. This question covers a range of topics I have been looking at. Can anyone help? Thanks! Suppose $X$ and $Y$ are connected manifolds, $X$ is simply connected, and the universal cover of $Y$ is contractible. Why is every continuous mapping from $X$ to $Y$ homotopic to a constant?
Let $\tilde{Y} \xrightarrow{\pi} Y$ be the universal cover of $Y$. Since $X$ is simply connected, any continuous map $X \xrightarrow{f} Y$ can be factorized as a continuous map $X \xrightarrow{\tilde{f}} \tilde{Y} \xrightarrow{\pi} Y$. Since $\tilde{Y}$ is contractible, there is a point $y \in \tilde{Y}$ and a homotopy $h$ between the identity map on $\tilde{Y}$ and the constant map $y$ : $h : \begin{array}{c}\tilde{Y} \xrightarrow{id} \tilde{Y} \\ \Downarrow \\ \tilde{Y} \xrightarrow{y} \{y\}\end{array}$ Composing this homotopy with $\tilde{f}$ and $\pi$, you get a homotopy $h'(t,x) = \pi(h(t,\tilde{f}(x)))$ $h': \begin{array}{rcl}X \xrightarrow{\tilde{f}} & \tilde{Y} \xrightarrow{id} \tilde{Y} &\xrightarrow{\pi} Y \\ &\Downarrow &\\ X \xrightarrow{\tilde{f}} & \tilde{Y} \xrightarrow{y} \{y\} & \xrightarrow{\pi} \{\pi(y)\} \end{array}$ between $f$ and the constant map $\pi(y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Cumulative probability and predicted take in a raffle? Not sure if this is the right term! If I have a raffle with 100 tickets at $5 each, and people pull a ticket sequentially, how do I calculate the likely return before the winning ticket is drawn? I'm half way there: I get that you work out the cumulative probability. But how do I add the prices to work out a likely return? I want to be able to change the number of tickets sold to adjust the return.
The calculations seem to involve a strange kind of raffle, in which the first ticket is sold (for $5$ dollars). We check whether this is the winning ticket. If it is not, we sell another ticket, and check whether it is the winner. And so on. You seem to be asking for the expected return. This is $5$ times $E(X)$, where $X$ is the total number of tickets until we reach the winning ticket. The random variable $X$ has a distribution which is a special case of what is sometimes called the Negative Hypergeometric Distribution. (There are other names, such as Inverse Hypergeometric Distribution.) The general negative hypergeometric allows the possibility of $r$ "winning tickets" among the $N$ tickets, and the possibility that we will allow sales until $k$ winning tickets have turned up. You are looking at the special case $r=k=1$ (I am using the notation of the link). In your case, if the total number of tickets is $N$, of which only one is a winning one, we have $$E(X)=\frac{N+1}{2}.$$ Taking $N=100$, and $5$ dollars a ticket, the expected return is $5\frac{101}{2}$ dollars. Remark: If the model I have described is not the model you have in mind, perhaps the question can be clarified.
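A quick simulation of this model (a sketch of mine): the winning position is uniform on $\{1,\dots,N\}$, so the average revenue approaches $5\cdot\frac{N+1}{2}$.

```python
import random

N, price, trials = 100, 5, 200_000
total = sum(random.randint(1, N) for _ in range(trials))  # tickets sold per raffle
print(price * total / trials)    # close to 5 * 101 / 2 = 252.5 dollars
```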
{ "language": "en", "url": "https://math.stackexchange.com/questions/148799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exponential objects in a cartesian closed category: $a^1 \cong a$ Hi I'm having problems with coming up with a proof for this simple property of cartesian closed categories (CCC) and exponential objects, namely that for any object $a$ in a CCC $C$ with an initial object $0$, $a$ is isomorphic to $a^1$ where $1$ is the terminal object of $C$. In most of the category theory books i've read this is usually left as an exercise, but for some reason I can't get a handle on it.
You can also reason as follows, without the Yoneda lemma. But proving uniqueness of right adjoints is cumbersome without using Yoneda, and easy with. Anyway, here it goes: The functor $(-)\times 1$ is isomorphic to the identity functor. The identity functor is a right adjoint of itself, so the identity functor is also right adjoint to $(-)\times 1$. Then uniqueness of right adjoints gives that $(-)^1$ is isomorphic to the identity functor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Sum of three primes Can all natural numbers ($n\ge 6$) be represented as the sum of three primes? With a computer I checked up to $10000$, but couldn't prove it.
It was proved by Vinogradov that every large enough odd integer is the sum of at most $3$ primes, and it seems essentially certain that apart from a few uninteresting small cases, every odd integer is the sum of $3$ primes. Even integers are a different matter. To prove that every even integer $n$ is the sum of three primes, one would have to prove the Goldbach Conjecture, since one of the three primes must be $2$, and therefore $n-2$ must be the sum of two primes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/148924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Spectra of restrictions of bounded operators Suppose $T$ is a bounded operator on a Banach Space $X$ and $Y$ is a non-trivial closed invariant subspace for $T$. It is fairly easy to show that for the point spectrum one has $\sigma_p(T_{|Y})\subseteq\sigma_p(T)$ and this is also true for the approximate point spectrum, i.e. $\sigma_a(T_{|Y})\subseteq\sigma_a(T)$. However I think it is not true in general that $\sigma(T_{|Y})\subseteq\sigma(T)$. We also have $$ \partial(\sigma(T_{|Y}))\subseteq\sigma_a(T_{|Y})\subseteq\sigma_a(T) $$ Hence $\sigma(T_{|Y})\cap\sigma(T)\ne\emptyset$. Moreover, if $\sigma(T)$ is discrete then $\partial(\sigma(T_{|Y}))$ is also discrete, which implies that $\partial(\sigma(T_{|Y}))=\sigma(T_{|Y})$, so at least in this case the inclusion $\sigma(T_{|Y})\subseteq\sigma(T)$ holds true. So for example holds true for compact, strictly singular and quasinilpotent operators. Question 1: Is it true, as I suspect, that $\sigma(T_{|Y})\subseteq\sigma(T)$ doesn't hold in general? A counterexample will be appreciated. On $l_2$ will do, as I think that on some Banach spaces this holds for any operators. For example, if $X$ is hereditary indecomposable (HI), the spectrum of any operator is discrete. Question 2 (imprecise): If the answer to Q1 is 'yes', is there some known result regarding how large the spectrum of the restriction can become? Thank you.
For example, consider the right shift operator $R$ on $X = \ell^2({\mathbb Z})$, $Y = \{y \in X: y_j = 0 \ \text{for}\ j < 0\}$. Then $Y$ is invariant under $R$, and $\sigma(R)$ is the unit circle while $\sigma(R|_Y)$ is the closed unit disk.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Analyze the convergence or divergence of the sequence $\left\{\frac1n+\sin\frac{n\pi}{2}\right\}$ Analyze the convergence or divergence of the following sequence a) $\left\{\frac{1}{n}+\sin\frac{n\pi}{2}\right\}$ The first one is divergent because of the $\sin\frac{n\pi}{2}$ term, which takes the values, for $n = 1, 2, 3, 4, 5, \dots$: $$1, 0, -1, 0, 1, 0, -1, 0, 1, \dots$$ As you can see, it's divergent. To formally prove it, I could simply notice that it has constant subsequences of $1$s, $0$s, and $-1$s, all of which converge to different limits. If the sequence converged, all of its subsequences would converge to the same limit. Is my procedure correct?
You’re on the right track, but you’ve left out an important step: you haven’t said anything to take the $1/n$ term into account. It’s obvious what’s happening, but you still have to say something. Let $a_n=\frac1n+\sin\frac{n\pi}2$. If $\langle a_n:n\in\Bbb Z^+\rangle$ converged, say to $L$, then the sequence $\left\langle a_n-\frac1n:n\in\Bbb Z^+\right\rangle$ would converge to $L-0=L$, because $\left\langle\frac1n:n\in\Bbb Z^+\right\rangle$ converges to $0$. Now make your (correct) argument about $\left\langle\sin\frac{n\pi}2:n\in\Bbb Z^+\right\rangle$ not converging and thereby get a contradiction. Then you can conclude that $\langle a_n:n\in\Bbb Z^+\rangle$ does not converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Evaluating $\lim\limits_{n\to\infty} \left(\frac{1^p+2^p+3^p + \cdots + n^p}{n^p} - \frac{n}{p+1}\right)$ Evaluate $$\lim_{n\to\infty} \left(\frac{1^p+2^p+3^p + \cdots + n^p}{n^p} - \frac{n}{p+1}\right)$$
The result is more general. Fact: For any function $f$ regular enough on $[0,1]$, introduce $$ A_n=\sum_{k=1}^nf\left(\frac{k}n\right)\qquad B=\int_0^1f(x)\mathrm dx\qquad C=f(1)-f(0) $$ Then, $$ \lim\limits_{n\to\infty}A_n-nB=\frac12C $$ For any real number $p\gt0$, if $f(x)=x^p$, one sees that $B=\frac1{p+1}$ and $C=1$, which is the result in the question. To prove the fact stated above, start from Taylor's formula: for every $0\leqslant x\leqslant 1/n$ and $1\leqslant k\leqslant n$, $$ f(x+(k-1)/n)=f(k/n)-\left(\tfrac1n-x\right)f'(k/n)+u_{n,k}(x)/n $$ where $u_{n,k}(x)\to0$ when $n\to\infty$, uniformly on $k$ and $x$, say $|u_{n,k}(x)|\leqslant v_n$ with $v_n\to0$. Integrating this on $[0,1/n]$ and summing from $k=1$ to $k=n$, one gets $$ \int_0^1f(x)\mathrm dx=\frac1n\sum_{k=1}^nf\left(\frac{k}n\right)-\int_0^{1/n}u\,\mathrm du\cdot\sum_{k=1}^nf'\left(\frac{k}n\right)+\frac1nu_n $$ where $|u_n|\leqslant v_n$. Reordering, this says that $$ A_n=nB+\frac12\frac1n\sum_{k=1}^nf'\left(\frac{k}n\right)-u_n=nB+\frac12\int_0^1f'(x)\mathrm dx+r_n-u_n $$ with $r_n\to0$, thanks to the Riemann integrability of the function $f'$ on $[0,1]$. The proof is complete since $r_n-u_n\to0$ and the last integral is $f(1)-f(0)=C$.
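A quick numerical sanity check of the limit (a sketch I added; note that $\sum k^p/n^p=\sum (k/n)^p$):

```python
def a_minus_nB(n, p):
    return sum((k / n)**p for k in range(1, n + 1)) - n / (p + 1)

for p in (1, 2, 3):
    print(p, a_minus_nB(10_000, p))   # each value is close to C/2 = 1/2
```

For $p=1$ the expression is exactly $\frac{n+1}{2}-\frac{n}{2}=\frac12$ for every $n$, and for $p=2$ one can check by hand that it equals $\frac12+\frac1{6n}$.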
{ "language": "en", "url": "https://math.stackexchange.com/questions/149142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 7, "answer_id": 1 }
Finding the second-degree polynomial that is the best approximation for cos(x) So, I need to find the second-degree polynomial that is the best approximation for $f(x) = cos(x)$ in $L^2_w[a, b]$, where $w(x) = e^{-x}$, $a=0$, $b=\infty$. "Best approximation" for f is a function $\hat{\varphi} \in \Phi$ such that: $||f - \hat{\varphi}|| \le ||f - \varphi||,\; \forall \varphi \in \Phi$ I have several methods available: * *Lagrange interpolation *Hermite interpolation Which would be the most appropriate?
In your $L^2$ space the Laguerre polynomials form an orthonormal family, so if you use the polynomial $$ P(x)=\sum_{i=0}^n a_i L_i(x), $$ you will get the approximation error $$ ||P(x)-\cos x||^2=\sum_{i=0}^n(a_i-b_i)^2+\sum_{i>n}b_i^2, $$ (Possibly you need to add a constant to account for the squared norm of the component of cosine, if any, that is orthogonal to all the polynomials. If the Laguerre polynomials form a complete orthonormal family, then this extra term is not needed. Anyway, having that extra term will not affect the solution of this problem.) where $$ b_k=\langle L_k(x)|\cos x\rangle=\int_0^{\infty}L_k(x)\cos x e^{-x}\,dx. $$ I recommend that you calculate $b_0$, $b_1$ and $b_2$, and then try and figure out how you should select the numbers $a_i$ to minimize the error and meet your degree constraint.
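The three coefficients can be computed symbolically (a sketch of mine using SymPy's built-in Laguerre polynomials):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
for k in range(3):
    b = sp.integrate(sp.laguerre(k, x) * sp.cos(x) * sp.exp(-x), (x, 0, sp.oo))
    print(k, sp.simplify(b))   # b0 = 1/2, b1 = 1/2, b2 = 1/4
```

(The values also follow by hand from $\int_0^\infty x^m e^{-x}\cos x\,dx=\operatorname{Re}\frac{m!}{(1-i)^{m+1}}$.)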
{ "language": "en", "url": "https://math.stackexchange.com/questions/149300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limit points of sets Find all limit points of given sets: $A = \left\{ (x,y)\in\mathbb{R}^2 : x\in \mathbb{Z}\right\}$ $B = \left\{ (x,y)\in\mathbb{R}^2 : x^2+y^2 >1 \right\}$ I don't know how to do that. Are there any standard ways to do this?
1) Is set A closed or not? If it is, we're done; otherwise there's some point not in it that is a limit point of A 2) As before, but perhaps even easier.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Notation for infinite product in reverse order This question is related to notation of infinite product. We know that, $$ \prod_{i=1}^{\infty}x_{i}=x_{1}x_{2}x_{3}\cdots $$ How do I denote $$ \cdots x_{3}x_{2}x_{1} ? $$ One approach could be $$ \prod_{i=\infty}^{1}x_{i}=\cdots x_{3}x_{2}x_{1} $$ I need to use this expression in a bigger expression so I need a good notation for this. Thank you in advance for your help.
(With tongue in cheek:) what about this? $$\left(x_n\prod_{i=1}^\infty \right)\;$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/149398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
Calculate $\int_\gamma \frac{1}{(z-z_0)^2}dz$ This is the statement of the fundamental theorem of contour integration that I have: If $f:D\subseteq\mathbb{C}\rightarrow \mathbb{C}$ is a continuous function on a domain $D \subseteq \mathbb{C}$ and $F:D\subseteq \mathbb{C} \rightarrow \mathbb{C}$ satisfies $F'=f$ on $D$, then for each contour $\gamma$ we have that: $\int_\gamma f(z) dz =F(Z_1)-F(Z_0)$ where $\gamma:[a,b]\rightarrow D$ with $\gamma(a)=Z_0$ and $\gamma(b)=Z_1$. $F$ is the antiderivative of $f$. Let $\gamma(t)=Re^{it}, \ 0\le t \le 2\pi, \ R>0$. In my example it said $\int_\gamma \frac{1}{(z-z_0)^2}dz=0$. I'm trying to calculate it out myself, but I got stuck. I get that $f(z)=\frac{1}{(z-z_0)^2}$ has an antiderivative $F(z)=-\frac{1}{(z-z_0)}$. Thus by the fundamental theorem of contour integration: $\int_\gamma \frac{1}{(z-z_0)^2}dz =F(Z_1)-F(Z_0)\\=F(\gamma(2\pi))-F(\gamma(0))\\=F(Re^{2\pi i})-F(R)\\=-\frac{1}{Re^{2\pi i}-z_0} +\frac{1}{R-z_0}\\=-\frac{1}{Re^{i}-z_0} +\frac{1}{R-z_0}$ How does $\int_\gamma \frac{1}{(z-z_0)^2}dz=0$?
$\gamma(2\pi)=Re^{2\pi i}=R=Re^0=\gamma(0)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/149444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\lim_{x\rightarrow a}\|f(x)\|$ and $\lim_{x\rightarrow a}\frac{\|f(x)\|}{\|x-a\|}$ Given any function $f: \mathbb{R^n} \to \mathbb{R^m}$ , if $$\lim_{x\rightarrow a}\|f(x)\| = 0$$ then does $$\lim_{x\rightarrow a}\frac{\|f(x)\|}{\|x-a\|} = 0 $$ as well? Is the converse true?
For the first part, consider e.g. the case $m = n$ with $f$ defined by $f(x) = x - a$ for all $x$ to see that the answer is no. For the second part, the answer is yes. If $\lim_{x \to a} \|f(x)\|/\|x-a\| = L$ exists (we do not need to assume that it is $0$), then since $\lim_{x \to a} \|x - a\| = 0$ clearly exists, we have that $$ \lim_{x \to a} \|f(x)\| = \lim_{x \to a}\left( \|x - a\| \cdot \frac{\|f(x)\|}{\|x-a\|}\right) = \lim_{x \to a} \|x - a\| \cdot \lim_{x \to a} \frac{\|f(x)\|}{\|x-a\|} = 0 \cdot L = 0 $$ exists and is $0$ by standard limit laws.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a uniform way to define angle bisectors using vectors? Look at the left figure. $x_1$ and $x_2$ are two vectors with the same length (norm). Then $x_1+x_2$ is along the bisector of the angle subtended by $x_1$ and $x_2$. But look at the upper right figure. When $x_1$ and $x_2$ are collinear and in reverse directions, $x_1+x_2=0$ and no longer represent the bisector of the angle (in this case 180 deg). The bisector should be perpendicular to $x_1$ and $x_2$. (The $x_1+x_2$ works well for the case shown in the lower right figure.) Question: Is there a way to represent the bisector for all the three cases? I don't want to exclude the upper right case. Is it possibly helpful to introduce some infinity elements?
I also would like to give a solution, which I am currently using in my work. The key idea is to use a rotation matrix. Suppose the angle between $x_1$ and $x_2$ is $\theta$. Let $R(\theta/2)$ be a rotation matrix, which rotates a vector by $\theta/2$. Then $$y=R(\theta/2)x_1$$ is a unified way to express the bisector. Of course, we also need to pay attention to the details, which can be determined straightforwardly: * *does the rotation matrix rotate a vector clockwise or counterclockwise? *how do we define the angle $\theta$? *should the bisector be $y=R(\theta/2)x_1$ or $y=R(\theta/2)x_2$? EDIT: I give an example here. Consider two unit-length vectors $x_1$ and $x_2$, which will give two angles: one in $[0,\pi]$ and the other in $(\pi,2\pi)$. We can define the angle $\theta$ such that rotating $x_1$ counterclockwise by $\theta$ about the origin yields $x_2$. Here $\theta\in[0,2\pi)$. Consequently, define the rotation matrix $R(\theta/2)$ to rotate a vector counterclockwise by $\theta/2$. (The formula for this kind of $R$ is given here.) Thus $R(\theta/2)x_1$ is a unit-length vector lying on the bisector of $\theta$. Another point, mentioned by coffemath, is how to compute the angle given two vectors. Of course, it is not enough to only use $\cos \theta=x_1^Tx_2$, because $\cos \theta$ gives two angles whose sum is $2\pi$. However, if we carefully define the angle $\theta$ and $R$ we can also compute $\sin \theta$. For example, define the angle and rotation matrix as above. Then define $x_2^{\perp}=R(\pi/2)x_2$. Then it can be calculated that $x_1^Tx_2^{\perp}=-\sin \theta$. Hence, from both $\cos\theta$ and $\sin\theta$, we can compute $\theta$.
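To make the recipe concrete, here is a minimal numerical sketch (my own NumPy-based illustration; the function name and the use of `atan2` to recover $\theta$ are choices I made, not part of the answer above):

```python
import numpy as np

def bisector(x1, x2):
    """Unit vector along the bisector of the angle from x1 to x2,
    measured counterclockwise, following the rotation-matrix recipe above."""
    x1 = np.asarray(x1, dtype=float) / np.linalg.norm(x1)
    x2 = np.asarray(x2, dtype=float) / np.linalg.norm(x2)
    # Signed angle theta in [0, 2*pi) rotating x1 counterclockwise onto x2.
    theta = np.arctan2(x2[1], x2[0]) - np.arctan2(x1[1], x1[0])
    theta %= 2 * np.pi
    h = theta / 2
    R = np.array([[np.cos(h), -np.sin(h)],
                  [np.sin(h),  np.cos(h)]])  # counterclockwise rotation by theta/2
    return R @ x1

print(bisector([1, 0], [0, 1]))    # ~ [0.707, 0.707]
print(bisector([1, 0], [-1, 0]))   # ~ [0, 1]: the degenerate case now works
```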
{ "language": "en", "url": "https://math.stackexchange.com/questions/149628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Why is the expected value $E(X^2) \neq E(X)^2$? I wish to use the Computational formula of the variance to calculate the variance of a normal-distributed function. For this, I need the expected value of $X$ as well as the one of $X^2$. Intuitively, I would have assumed that $E(X^2)$ is always equal to $E(X)^2$. In fact, I cannot imagine how they could be different. Could you explain how this is possible, e.g. with an example?
May as well chime in :) Expectation is linear pretty much by definition, so $E(aX + b) = aE(X) + b$. But squaring is not linear: for $f(x) = ax$ we have $f(x^2) = ax^2 \not= (ax)^2 = f(x)^2$ in general, so there is no reason for $E$ to commute with squaring. A concrete example: let $X$ take the values $\pm 1$ with probability $1/2$ each. Then $E(X) = 0$, so $E(X)^2 = 0$; but $X^2 = 1$ always, so $E(X^2) = 1$. In fact $E(X^2) - E(X)^2 = \operatorname{Var}(X) \geq 0$, with equality only when $X$ is (almost surely) constant. So, it's not true :)
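A quick simulation makes the gap visible; this is just an illustrative sketch (plain Python, fair $\pm 1$ coin assumed):

```python
import random

# X takes values -1 and +1 with equal probability, so E(X) = 0 but E(X^2) = 1.
samples = [random.choice([-1, 1]) for _ in range(100_000)]

mean = sum(samples) / len(samples)                     # estimates E(X)   ~ 0
mean_sq = sum(x * x for x in samples) / len(samples)   # estimates E(X^2) = 1

print(mean ** 2, mean_sq)   # roughly 0.0 vs 1.0 -- clearly not equal
```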
{ "language": "en", "url": "https://math.stackexchange.com/questions/149723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 9, "answer_id": 7 }
Positive Operator Value Measurement Question I'm attempting to understand some of the characteristics of Positive Operator Valued Measurement (POVM). For instance in Nielsen and Chuang, they obtain a set of measurement operators $\{E_m\}$ for states $|\psi_1\rangle = |0\rangle, |\psi_2\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$. They end up obtaining the following set of operators: \begin{align*} E_1 &\equiv \frac{\sqrt{2}}{1+\sqrt{2}} |1\rangle \langle 1 |, \\ E_2 &\equiv \frac{\sqrt{2}}{1+\sqrt{2}} \frac{(|0\rangle - |1\rangle) (\langle 0 | - \langle 1 |)}{2}, \\ E_3 &\equiv I - E_1 - E_2 \end{align*} Basically, I'm oblivious to how they were able to obtain these. I thought that perhaps they found $E_1$ by utilizing the formula: \begin{align*} E_1 = \frac{I - |\psi_2\rangle \langle \psi_2|}{1 + |\langle \psi_1|\psi_2\rangle|} \end{align*} However, when working it out, I do not obtain the same result. I'm sure it's something dumb and obvious I'm missing here. Any help on this would be very much appreciated. Thanks.
Yes, that formula gives the right operators, but you have the subindices swapped: with $|\psi_2\rangle$ in the formula you get $E_2$, and with $|\psi_1\rangle$ you get $E_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is critical Hausdorff measure a Frostman measure? Let $K$ be a compact set in $\mathbb{R}^d$ of Hausdorff dimension $\alpha<d$, $H_\alpha(\cdot)$ the $\alpha$-dimensional Hausdorff measure. If $0<H_\alpha(K)<\infty$, is it necessarily true that $H_\alpha(K\cap B)\lesssim r(B)^\alpha$ for any open ball $B$? Here $r(B)$ denotes the radius of the ball $B$. This seems to be true when $K$ enjoys some self-similarity, e.g. when $K$ is the standard Cantor set. But I am not sure if it is also true for the general sets.
Consider e.g. $\alpha=1$, $d=2$, and let $K$ be the union of a sequence of line segments of lengths $1/n^2$, $n = 1,2,3,\ldots$, all with one endpoint at $0$. Then for $0 < r < 1$, if $B$ is the ball of radius $r$ centred at $0$, $H_1(K \cap B) = \sum_{n \le r^{-1/2}} r + \sum_{n > r^{-1/2}} n^{-2} \approx r^{1/2}$, which is much larger than $r$ for small $r$; so the bound $H_1(K\cap B)\lesssim r(B)$ fails.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that $N(\gamma) = 1$ if, and only if, $\gamma$ is a unit in the ring $\mathbb{Z}[\sqrt{n}]$ Prove that $N(\gamma) = 1$ if, and only if, $\gamma$ is a unit in the ring $\mathbb{Z}[\sqrt{n}]$ Where $N$ is the norm function that maps $\gamma = a+b\sqrt{n} \mapsto \left | a^2-nb^2 \right |$ I have managed to prove $N(\gamma) = 1 \Rightarrow \gamma$ is a unit (I think), but cannot prove $\gamma$ is a unit $\Rightarrow N( \gamma ) = 1$ Any help would be appreciated, cheers
Hint $\rm\ \ unit\ \alpha\iff \alpha\:|\: 1\iff \alpha\alpha'\:|\:1 \iff unit\ \alpha\alpha',\ $ since $\rm\:\alpha\:|\:1\iff\alpha'\:|\:1' = 1$
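As a concrete sanity check of the hint, here is a small sketch for $n = 2$ (the pair representation and the helper names `mul`/`norm` are my own, purely illustrative):

```python
# Sanity check in Z[sqrt(2)]: represent a + b*sqrt(2) as the pair (a, b).
def mul(x, y, n=2):
    a, b = x
    c, d = y
    return (a * c + n * b * d, a * d + b * c)

def norm(x, n=2):
    a, b = x
    return abs(a * a - n * b * b)

alpha = (1, 1)         # 1 + sqrt(2), norm |1 - 2| = 1
print(norm(alpha))                # 1
print(mul(alpha, (-1, 1)))        # (1, 0): (1+sqrt2)(-1+sqrt2) = 1, so alpha is a unit
```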
{ "language": "en", "url": "https://math.stackexchange.com/questions/149886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Peano postulates I'm looking for a set containing an element 0 and a successor function s that satisfies the first two Peano postulates (s is injective and 0 is not in its image), but not the third (the one about induction). This is of course exercise 1.4.9 in MacLane's Algebra book, so it's more or less homework, so if you could do the thing where you like point me in the right direction without giving it all away that'd be great. Thanks!
Since your set has 0 and a successor function, it must contain $\Bbb N$. The induction axiom is what ensures that every element is reachable from 0. So throw in some extra non-$\Bbb N$ elements that are not reachable from 0 and give them successors. There are several ways to do this. Geometrically, $\Bbb N$ is a ray with its endpoint at 0. The Peano axioms force it to be this shape. Each axiom prevents a different pathology. For example, the axiom $Sn\ne 0$ is required to prevent the ray from curling up into a circle. It's a really good exercise to draw various pathological shapes and then see which ones are ruled out by which axioms, and conversely, for each axiom, to produce a pathology which is ruled out by that axiom. Addendum: I just happened to be reading Frege's Theorem and the Peano Postulates by G. Boolos, and on p.318 it presents a variation of this exercise that you might enjoy. Boolos states a version of the Peano axioms: * *$\forall x. {\bf 0}\ne {\bf s}x$ *$\forall x.\forall y.{\bf s}x={\bf s}y\rightarrow x=y$ *(Induction) $\forall F. (F{\bf 0}\wedge \forall x(Fx\rightarrow F{\bf s}x)\rightarrow \forall x. F x) $ And then says: Henkin observed that (3) implies the disjunction of (1) and (2)… It is easy to construct models in which each of the seven conjunctions ±1±2±3 other than –1–2+3 holds; so no other dependencies among 1, 2, and 3 await discovery. Your job: find the models!
{ "language": "en", "url": "https://math.stackexchange.com/questions/149944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Removing redundant sets from an intersection Let $I$ be a non-empty set and $(A_i)_{i\in I}$ a family of sets. Is it true that there exists a subset $J\subset I$ such that $\bigcap_{j\in J}A_j=\bigcap_{i\in I}A_i$ and, for any $j_0\in J$, $\bigcap_{j\in J-\{j_0\}}A_j\neq\bigcap_{j\in J}A_j$? If $I=\mathbb{N}$, the answer is yes (if I am not mistaken): $J$ can be constructed by starting with $\mathbb{N}$ and, at the $n$-th step, removing $n$ if that does not affect the intersection. What if $I$ is uncountable? I guess the answer is still "yes" and tried to prove it by generalizing the above approach using transfinite induction, but I failed. The answer "yes" or "no" and a sketch of a proof (resp. a counterexample) would be nice.
The answer is no, even in the case $I=\mathbb N$. To see this, consider the collection $A_i=[i,\infty)\subset \mathbb R$. Then $\bigcap\limits_{i\in I}A_i=\emptyset$ and this remains true if we intersect over any infinite subset $J\subseteq I$, yet is false if we intersect over a finite subset. Thus there is no minimal subset $J$ such that $\bigcap\limits_{i\in I}A_i=\bigcap\limits_{j\in J}A_j$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/149995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Alternative proof of the limit of the quotient of two sums. I found the following problem by Apostol: Let $a \in \Bbb R$ and $s_n(a)=\sum\limits_{k=1}^n k^a$. Find $$\lim_{n\to +\infty} \frac{s_n(a+1)}{ns_n(a)}$$ After some struggling and fruitless ideas I considered the following solution. If $a > -1$, then $$\int_0^1 x^a dx=\frac{1}{a+1}$$ is well defined. Thus, let $$\lambda_n(a)=\frac{s_n(a)}{n^{a+1}}$$ It is clear that $$\lim\limits_{n\to +\infty} \lambda_n(a)=\int_0^1 x^a dx=\frac{1}{a+1}$$ and thus $$\lim_{n\to +\infty} \frac{s_n(a+1)}{ns_n(a)}=\lim_{n \to +\infty} \frac{\lambda_n(a+1)}{\lambda_n(a)}=\frac{a+1}{a+2}$$ Can you provide any other proof for this? I used mostly integration theory but maybe there are other simpler ideas (or more complex ones) that can be used. (If $a=-1$ then the limit is zero, since it is simply $H_n^{-1}$ which goes to zero since the harmonic series is divergent. For the case $a <-1$, the simple inequalities $s_n(a+1) \le n\cdot n^{a+1} = n^{a+2}$ and $s_n(a) \ge 1$ show that the limit is also zero.)
The argument below works for any real $a > -1$. We are given that $$s_n(a) = \sum_{k=1}^{n} k^a$$ Let $a_k = 1$ and $A(t) = \displaystyle \sum_{k \leq t} a_k = \left \lfloor t \right \rfloor$. Hence, $$s_n(a) = \int_{1^-}^{n^+} t^a dA(t)$$ The integral is to be interpreted as a Riemann-Stieltjes integral. Now integrating by parts, we get that $$s_n(a) = \left. t^a A(t) \right \rvert_{1^-}^{n^+} - \int_{1^-}^{n^+} A(t) a t^{a-1} dt = n^a \times n - a \int_{1^-}^{n^+} \left \lfloor t \right \rfloor t^{a-1} dt\\ = n^{a+1} - a \int_{1^-}^{n^+} (t -\left \{ t \right \}) t^{a-1} dt = n^{a+1} - a \int_{1^-}^{n^+} t^a dt + a \int_{1^-}^{n^+}\left \{ t \right \} t^{a-1} dt\\ = n^{a+1} - a \left. \dfrac{t^{a+1}}{a+1} \right \rvert_{1^-}^{n^+} + a \int_{1^-}^{n^+}\left \{ t \right \} t^{a-1} dt\\ =n^{a+1} - a \dfrac{n^{a+1}-1}{a+1} + a \int_{1^-}^{n^+}\left \{ t \right \} t^{a-1} dt\\ = \dfrac{n^{a+1}}{a+1} + \dfrac{a}{a+1} + \mathcal{O} \left( a \times 1 \times \dfrac{n^a}{a}\right)\\ = \dfrac{n^{a+1}}{a+1} + \mathcal{O} \left( n^a \right)$$ Hence, we get that $$\lim_{n \rightarrow \infty} \dfrac{s_n(a)}{n^{a+1}/(a+1)} = 1$$ Hence, now $$\dfrac{s_{n}(a+1)}{n s_n(a)} = \dfrac{\dfrac{s_n(a+1)}{n^{a+2}/(a+2)}}{\dfrac{s_n(a)}{n^{a+1}/(a+1)}} \times \dfrac{a+1}{a+2}$$ Hence, we get that $$\lim_{n \rightarrow \infty} \dfrac{s_{n}(a+1)}{n s_n(a)} = \dfrac{\displaystyle \lim_{n \rightarrow \infty} \dfrac{s_n(a+1)}{n^{a+2}/(a+2)}}{\displaystyle \lim_{n \rightarrow \infty} \dfrac{s_n(a)}{n^{a+1}/(a+1)}} \times \dfrac{a+1}{a+2} = \dfrac11 \times \dfrac{a+1}{a+2} = \dfrac{a+1}{a+2}$$ Note that the argument needs to be slightly modified for $a = -1$ or $a = -2$. However, the two cases can be argued directly. If $a=-1$, then we want $$\lim_{n \rightarrow \infty} \dfrac{s_n(0)}{n s_n(-1)} = \lim_{n \rightarrow \infty} \dfrac{n}{n H_n} = 0$$ If $a=-2$, then we want $$\lim_{n \rightarrow \infty} \dfrac{s_n(-1)}{n s_n(-2)} = \dfrac{6}{\pi^2} \lim_{n \rightarrow \infty} \dfrac{H_n}{n} = 0$$ In general, for $a <-2$, note that both $s_n(a+1)$ and $s_n(a)$ converge. Hence, the limit is $0$. For $a \in (-2,-1)$, $s_n(a)$ converges but $s_n(a+1)$ diverges more slowly than $n$. Hence, the limit is again $0$. Hence, to summarize, $$\lim_{n \rightarrow \infty} \dfrac{s_n(a+1)}{n s_n(a)} = \begin{cases} \dfrac{a+1}{a+2} & \text{ if }a>-1\\ 0 & \text{ if } a \leq -1 \end{cases}$$
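A quick numerical check of the limit is straightforward; this sketch (plain Python, parameters chosen arbitrarily) compares the ratio with $(a+1)/(a+2)$ for a few values of $a > -1$:

```python
# Numerically check lim s_n(a+1) / (n * s_n(a)) = (a+1)/(a+2) for a > -1.
def s(n, a):
    return sum(k ** a for k in range(1, n + 1))

for a in (0.5, 1.0, 3.0):
    n = 100_000
    ratio = s(n, a + 1) / (n * s(n, a))
    print(a, ratio, (a + 1) / (a + 2))   # the two last columns nearly agree
```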
{ "language": "en", "url": "https://math.stackexchange.com/questions/150059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Showing that if $R$ is a commutative ring and $M$ an $R$-module, then $M \otimes_R (R/\mathfrak m) \cong M / \mathfrak m M$. Let $R$ be a local ring, and let $\mathfrak m$ be the maximal ideal of $R$. Let $M$ be an $R$-module. I understand that $M \otimes_R (R / \mathfrak m)$ is isomorphic to $M / \mathfrak m M$, but I verified this directly by defining a map $M \to M \otimes_R (R / \mathfrak m)$ with kernel $\mathfrak m M$. However I have heard that there is a way to show these are isomorphic using exact sequences and using exactness properties of the tensor product, but I am not sure how to do this. Can anyone explain this approach? Also can the statement $M \otimes_R (R / \mathfrak m) \cong M / \mathfrak m M$ be generalised at all to non-local rings?
Moreover: if $I$ is a right ideal of a (possibly noncommutative) ring $R$ and $M$ a left $R$-module, then $M/IM\cong (R/I)\otimes_R M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Area of ellipse given foci? Is it possible to get the area of an ellipse from the foci alone? Or do I need at least one point on the ellipse too?
If the foci are points $p,q\in\mathbb{R}^{2}$ on a horizontal line and a point on the ellipse is $c\in\mathbb{R}^{2}$, then the string length $\ell=\left|p-c\right|+\left|q-c\right|$ (the distance from the first focus to the point on the ellipse to the second focus) determines the semi-axis lengths. Using the Pythagorean theorem, the vertical semi-axis has length $\sqrt{\frac{\ell^{2}}{4}-\frac{\left|p-q\right|^{2}}{4}}$. Using the fact that the horizontal semi-axis is along the line joining $p$ to $q$, the horizontal semi-axis has length $\frac{\ell}{2}$. Thus the area is $\pi\sqrt{\frac{\ell^{2}-\left|p-q\right|^{2}}{4}}\frac{\ell}{2}$ ($\pi$ times the product of the two semi-axis lengths, analogous to the circle area formula $\pi r^{2}$). In particular, the foci alone do not determine the area: you also need the string length $\ell$, i.e. at least one point on the ellipse.
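For illustration, here is a small sketch of this computation (Python; the function name and the convention of passing the foci plus the string length $\ell$ are my own choices):

```python
import math

def ellipse_area(p, q, ell):
    """Area of the ellipse with foci p, q and string length ell
    (ell = |p-c| + |q-c| for any point c on the ellipse); requires ell > |p-q|."""
    f = math.dist(p, q)                   # distance between the foci
    a = ell / 2                           # semi-major axis
    b = math.sqrt(ell ** 2 - f ** 2) / 2  # semi-minor axis, by Pythagoras
    return math.pi * a * b

print(ellipse_area((0, 0), (0, 0), 2.0))    # coincident foci: circle of radius 1, area pi
print(ellipse_area((-3, 0), (3, 0), 10.0))  # a = 5, b = 4, area 20*pi
```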
{ "language": "en", "url": "https://math.stackexchange.com/questions/150169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove an L$^p$ type inequality Let $a,b\in[0,\infty)$ and let $p\in[1,\infty)$. How can I prove$$a^p+b^p\le(a^2+b^2)^{p/2}.$$
Some hints: * *By homogeneity, we can assume that $b=1$. *Let $f(t):=(t^2+1)^{p/2}-t^p-1$ for $t\geq 0$. We have $f'(t)=pt\left((t^2+1)^{p/2-1}-t^{p-2}\right)$. Since $t^2+1\geq t^2$, the factor in parentheses is non-negative when $p\geq 2$ and non-positive when $p\leq 2$, so $f$ is increasing or decreasing accordingly. *Noting that $f(0)=0$, deduce the wanted inequality for $p\geq 2$ (for $1\leq p\leq 2$ it is reversed).
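A quick numerical sanity check of the direction of the inequality (an illustrative sketch; the values of $a$ and $p$ are chosen arbitrarily):

```python
# By homogeneity take b = 1 and compare a^p + 1 with (a^2 + 1)^(p/2).
def lhs(a, p):
    return a ** p + 1

def rhs(a, p):
    return (a * a + 1) ** (p / 2)

for p in (1.0, 1.5, 2.0, 3.0):
    a = 2.0
    print(p, lhs(a, p) <= rhs(a, p))   # False for p < 2, True for p >= 2
```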
{ "language": "en", "url": "https://math.stackexchange.com/questions/150316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finitely Generated Group Let $G$ be finitely generated; My question is: Does there always exist $H\leq G$, $H\not=G$, of finite index? Of course if $G$ is finite it is true. But what if $G$ is infinite?
No. I suspect there are easier and more elegant ways to answer this question, but the following argument is one way to see it: * *There are finitely generated infinite simple groups: * *In 1951, Higman constructed the first example in A Finitely Generated Infinite Simple Group, J. London Math. Soc. (1951) s1-26 (1), 61–64. *Very popular are Thompson's groups. *I happen to like the Burger–Mozes family of finitely presented infinite simple torsion-free groups, described in Lattices in product of trees. Publications Mathématiques de l'IHÉS, 92 (2000), p. 151–194 (full disclosure: I wrote my thesis under the direction of M.B.). *See P. de la Harpe, Topics in Geometric Group Theory, Complement V.26 for further examples and references. *If a group $G$ has a finite index subgroup $H$ then $H$ contains a finite index normal subgroup of $G$, in particular no infinite simple group can have a non-trivial finite index subgroup. See also Higman's group for an example of a finitely presented group with no non-trivial finite quotients. By the same reasoning as above it can't have a non-trivial finite index subgroup.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simplifying quotient or localisation of a polynomial ring Let $R$ be a commutative unital ring and $g\in R[X]$ a polynomial with the property that $g(0)$ is a unit in $R$ and $g(1)=1$. Is there any possible way to understand either $$R[X]/g$$ or $$g^{-1}R[X]$$ better? Here $g^{-1}R[X]$ is the localised ring for the multiplicative set $\{g^n\}$. I can't find a way to incorporate the extra conditions on $g$. It would be favorable if the rings were expressable using the ring $R[X,X^{-1}]$ or the ideal $X(X-1)R[X]$. Also any graded ring with a simple expression in degree zero is much appreciated. Background: I'm working on some exact sequences in the K-theory of rings. The two rings above are part of some localisation sequence and understanding either one of them would help me simplifying things. Edit: I tried to go for the basic commutative algebra version of it, since I expected it to be easy. What I am actually trying to show is that if $S$ is the set of all those $g$ with the property as described above, then there is an exact sequence (assume $R$ regular, or take homotopy K-theory) $$0\to K_i(R)\to K_i(S^{-1}(R[X]))\to K_i(R)\to 0$$ where the first map comes from the natural inclusion and the second map should be something like the difference of the two possible evaluation maps (as described in @Martin's answer). There are some reasons to believe that this is true (coming from a way more abstract setting), but it still looks strange to me. Moreover we should be able to work with one $g$ at a time and then take the colimit. With this edit I assume this question should go to MO as well -_-
The only thing which comes into my mind is the following: $g(1)=1$ (or $g(0)$ is a unit) ensures that $R[X] \to R$, $x \mapsto 1$ (or $x \mapsto 0$), extends to a homomorphism $g^{-1} R[X] \to R$. For more specific answers, a more specific question is necessary ;).
{ "language": "en", "url": "https://math.stackexchange.com/questions/150435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What are some books that I should read on 3D mathematics? I'm a first-year high school student who has been making games in 2D most of the time, but I started working on a 3D project for a change. I'm using a high-level engine that abstracts most of the math away from me, but I'd like to know what I'm dealing with! What books should I read on 3D mathematics? Terms like "rotation matrices" should be explained in there, for example. I could, of course, go searching these things on the interweb, but I really like books and I would probably miss something out by self-educating, which is what I do most of the time anyway. I mostly know basic mathematics; derivatives of polynomial functions are the limit of my current knowledge, but I probably do have some holes in areas like trigonometry (we didn't start learning that in school yet, so basically I'm only familiar with sin, cos and atan2).
"Computer Graphics: Principles and Practice, Third Edition, remains the most authoritative introduction to the field. The first edition, the original “Foley and van Dam,” helped to define computer graphics and how it could be taught. The second edition became an even more comprehensive resource for practitioners and students alike. This third edition has been completely rewritten to provide detailed and up-to-date coverage of key concepts, algorithms, technologies, and applications." This quote from Amazon.com represents the high regard this text on computer graphics has commanded for decades, as Foley & van Dam presents algorithms for generating CG as well as answers to more obscure issues such as clipping and examples of different methods for rendering in enough detail to actually implement solutions
{ "language": "en", "url": "https://math.stackexchange.com/questions/150510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
UFDs are integrally closed Let $A$ be a UFD, $K$ its field of fractions, and $f \in A[T]$ a monic polynomial. I'm trying to prove that if $f$ has a root $\alpha \in K$, then in fact $\alpha \in A$. I'm trying to exploit some fact about irreducibility; will it help? I haven't done anything with splitting fields, but this is something I can look for.
Overkill: It suffices to show that $A$ is integrally closed, which we prove using Serre's criterion. For this, we recall some of the definitions, including the definitions of the properties $R_n$ and $S_n$ that appear in Serre's criterion. We will assume $A$ is locally Noetherian (can one assume less for this approach to work?) Background Definition. A ring $A$ is said to satisfy the criterion $R_n$ if for every prime ideal $\mathfrak p$ of $A$ such that $\operatorname{ht}(\mathfrak p)\le n$, the localization $A_{\mathfrak p}$ is a regular local ring, which means that the maximal ideal can be generated by $\operatorname{ht}(\mathfrak p)$ elements. Definition. A Noetherian ring $A$ is said to satisfy the criterion $S_n$ if for every prime ideal $\mathfrak p$ of $A$, we have the inequality $$\operatorname{depth}(A_{\mathfrak p}) \ge \min\{n,\operatorname{ht}(\mathfrak p)\}$$ This relies on the notion of depth, which is the length of a maximal regular sequence in the maximal ideal. Exercise. Give a definition of the $S_n$ condition for modules. (Note: there are actually two distinct definitions in the literature, which only agree when the annihilator of the module is a nilpotent ideal.) Exercise. Show that a Noetherian ring $A$ is reduced if and only if $A$ satisfies $R_0$ and $S_1$. With these definitions out of the way, we now state the criterion that we will use. Theorem. (Serre's Criterion). A Noetherian integral domain $A$ is integrally closed if and only if $A$ has the properties $R_1$ and $S_2$. Proof that UFDs are Integrally Closed Firstly, localizations of UFDs are UFDs, while intersections of integrally closed domains in the field of fractions of $A$ are integrally closed. Recalling that $A=\bigcap_{\mathfrak p\in\operatorname{Spec}A} A_{\mathfrak p}$, we may assume $A$ is local. Now, $A$ is $R_1$ because prime ideals $\mathfrak p$ of height $1$ are principal and thus $A_{\mathfrak p}$ is a DVR. Also, $A$ is $S_1$ because $A$ is an integral domain and thus reduced, while for any irreducible $f\in A$ we have $A/fA$ is an integral domain, so $A/fA$ is $S_1$. This implies \begin{equation*} \operatorname{depth} A \ge \min\{2,\dim A\} \end{equation*} The argument works for any local UFD, in particular the localizations of $A$. So, $A$ is $S_2$. By Serre's criterion, $A$ is integrally closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 3, "answer_id": 1 }
Classification of automorphisms of projective space Let $k$ be a field, $n$ a positive integer. Vakil's notes, 17.4.B: Show that all the automorphisms of the projective scheme $P_k^n$ correspond to $(n+1)\times(n+1)$ invertible matrices over $k$, modulo scalars. His hint is to show that $f^\star \mathcal{O}(1) \cong \mathcal{O}(1).$ ($f$ is the automorphism. I don't know if $\mathcal{O}(1)$ is the conventional notation; if it's unclear, it's an invertible sheaf over $P_k^n$.) I can show what he wants assuming this, but can someone help me find a clean way to show this?
Well, $f^*(\mathcal{O}(1))$ must be a line bundle on $\mathbb{P}^n$. In fact, $f^*$ gives a group automorphism of $\text{Pic}(\mathbb{P}^n) \cong \mathbb{Z}$, with inverse $(f^{-1})^*$. Thus, $f^*(\mathcal{O}(1))$ must be a generator of $\text{Pic}(\mathbb{P}^n)$, either $\mathcal{O}(1)$ or $\mathcal{O}(-1)$. But $f^*$ is also an automorphism on the space of global sections, again with inverse $(f^{-1})^*$. Since $\mathcal{O}(1)$ has an $(n+1)$-dimensional vector space of global sections, but $\mathcal{O}(-1)$ has no non-zero global sections, it is impossible for $f^*(\mathcal{O}(1))$ to be $\mathcal{O}(-1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Some interesting questions on completeness (interesting to me anyway) Suppose $(Y,\Vert\cdot\Vert)$ is a complete normed linear space. If the vector space $X\supset Y$ with the same norm $\Vert\cdot\Vert$ is a normed linear space, then is $(X,\Vert\cdot\Vert)$ necessarily complete? My guess is no. However, I am not aware of any examples. Side interest: If $X$ and $Y$ are Banach (with possibly different norms), I want to make $X \times Y$ Banach. But I realize that in order to do this, we cannot use the same norm as we did for $X$ and $Y$, because it's not like $X \subseteq X \times Y$ or $Y \subseteq X \times Y$. What norm (if there is one) on $X \times Y$ will guarantee us a Banach space? I'm sure these questions are standard ones in functional analysis. I just haven't come across them in my module. Thanks in advance.
You have to look at infinite-dimensional spaces. For example, let $X:=\{(x_n)_n : \exists k\in\Bbb N,\ x_n=0\mbox{ if }n\geq k\}$, the space of finitely supported real sequences, with the $\ell^2$ norm, and let $Y:=\{(x,0,0,\ldots) : x\in\Bbb R\}\subset X$. Then $Y$ is complete (it is one-dimensional), but $X$ is not complete: $X$ is a strict, dense (hence non-closed) subspace of the Banach space $\ell^2$ of square-summable sequences. However, for two Banach spaces $X$ and $Y$, you can put norms on $X\times Y$ such that this space is a Banach space. For example, if $N$ is a monotone norm on $\Bbb R^2$ (one for which $0\le a\le a'$ and $0\le b\le b'$ imply $N(a,b)\le N(a',b')$, e.g. any $\ell^p$ norm), define $\lVert(x,y)\rVert:=N(\lVert x\rVert_X,\lVert y\rVert_Y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Extremely Tricky Probability Question? Here's the question. It's quite difficult: David is given a deck of 40 cards. There are 3 gold cards in the deck, 3 silver cards in the deck, 3 bronze cards in the deck and 3 black cards in the deck. If David draws a gold card on his first turn, he will win $50. (The object is to get at least one gold card). The other colored cards are used to help him get the gold card, while the remaining 28 do nothing. David initially draws a hand of 6 cards, and will now try to draw a gold card, if he did not already draw one. He may now use the other cards to help him. All of the differently colored cards may be used in the first turn. David can use a silver card to draw 1 more card. David can use a bronze card to draw 1 more card. However, he can only use 1 of these per turn. David can use a black card to look at the top 3 cards of the deck, and add one of them to his hand. He then sends the rest back to the deck and shuffles. He can only use 1 of these cards per turn. What are the odds David draws the gold card on his first turn?
We can ignore the silver cards: whenever we see one, we should simply replace it with another draw. Similarly, if you have a bronze, you should draw immediately (but subsequent bronzes don't let you replace them). So the deck is really $36$ cards, $3$ gold, $3$ black, and $30$ other (including the 2 bronzes after the first). You win if there is a gold in the first $6$, or a black in the first six, no gold in 1-6 and a gold in 7-9, or a black in 1-6, another in 1-9, no gold in 1-9 and a gold in 10-12, or a black in 1-6, another in 1-9, another in 1-12, no gold in 1-12 and a gold in 13-15. All these possibilities are disjoint, so you can just add them up.
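The enumeration can be sanity-checked with a quick Monte Carlo over the reduced 36-card deck; this sketch simulates the disjoint cases exactly as listed above (it models this answer's reduction, not the original drawing mechanics, and the trial count is arbitrary):

```python
import random

DECK = list('GGG' + 'BBB' + 'X' * 30)   # reduced deck: 3 gold, 3 black, 30 other

def wins(deck):
    def gold_in(n):
        return 'G' in deck[:n]
    def blacks_in(n):
        return deck[:n].count('B')
    # Returning at the first satisfied case makes the "no gold earlier"
    # conditions of the later cases hold automatically.
    if gold_in(6):
        return True                                  # gold in the first 6
    if blacks_in(6) >= 1 and gold_in(9):
        return True                                  # black in 1-6, gold in 7-9
    if blacks_in(6) >= 1 and blacks_in(9) >= 2 and gold_in(12):
        return True                                  # two blacks seen, gold in 10-12
    if blacks_in(6) >= 1 and blacks_in(9) >= 2 and blacks_in(12) >= 3 and gold_in(15):
        return True                                  # three blacks seen, gold in 13-15
    return False

trials = 200_000
hits = sum(wins(random.sample(DECK, len(DECK))) for _ in range(trials))
print(hits / trials)   # estimated win probability
```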
{ "language": "en", "url": "https://math.stackexchange.com/questions/150738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Relation between min/max, boundedness, compactness, and continuity While reading through Kantorovitz's book on functional analysis, I had a query that needs clarification. If $X$ is compact, then $C_{B}(X)$ (the bounded continuous functions with the sup-norm) coincides with $C(X)$ (the continuous real-valued functions with the sup-norm), since if $f:X \rightarrow \mathbb{R}$ is continuous and $X$ is compact, then $\vert f \vert$ is bounded. May I know how the above relates to the corollary that states: Let $X$ be a compact topological space. If $f \in C(X)$, then $\vert f \vert$ has a minimum and a maximum value on $X$. I believe the relation here is that the function is bounded and hence relates to the corollary, but I hope someone can clarify just to be sure. Thank You.
That is essentially what you said, restated a little: Every continuous real function on a compact space is bounded. We know that the image of a compact set under a continuous function is compact, and compactness of the image implies its boundedness. A function is bounded exactly when its image is bounded, so it's proved! Then $C(X)=C_B(X)$. The minimum and maximum are a bonus: their existence implies boundedness, so your reasoning was correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/150991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conditions for integrability Michael Spivak, in his "Calculus", writes: Although it is possible to say precisely which functions are integrable, the criterion for integrability is too difficult to be stated here I request someone to please state that condition. Thank you very much!
This is commonly called the Riemann-Lebesgue Theorem, or the Lebesgue Criterion for Riemann Integration (the wiki article). The statement is that a function on $[a,b]$ is Riemann integrable iff * *It is bounded *It is continuous almost everywhere, or equivalently the set of its discontinuities has Lebesgue measure zero
{ "language": "en", "url": "https://math.stackexchange.com/questions/151060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
An alternating series ... Find the limit of the following series: $$ 1 - \frac{1}{4} + \frac{1}{6} - \frac{1}{9} + \frac{1}{11} - \frac{1}{14} + \cdots $$ If I go the integration way all is fine for a while, but then things become pretty ugly. I'm trying to find out if there is some easier way to follow.
Let $S = 1 - x^{3} + x^{5} -x^{8} + x^{10} - x^{13} + \cdots$. Then what you want is $\int_{0}^{1} S \ dx$. But we have \begin{align*} S &= 1 - x^{3} + x^{5} -x^{8} + x^{10} - x^{13} + \cdots \\ &= -(x^{3}+x^{8} + x^{13} + \cdots) + (1+x^{5} + x^{10} + \cdots) \\ &= -\frac{x^{3}}{1-x^{5}} + \frac{1}{1-x^{5}} \end{align*} Now you have to evaluate $\displaystyle \int_{0}^{1}\frac{1-x^{3}}{1-x^{5}} \, dx$. Expanding the integrand as $\sum_{k\ge 0}\left(x^{5k}-x^{5k+3}\right)$ and integrating term by term recovers the series as $\sum_{k\ge 0}\left(\frac{1}{5k+1}-\frac{1}{5k+4}\right)$, which the digamma reflection formula $\psi(1-x)-\psi(x)=\pi\cot(\pi x)$ evaluates to $\frac{\pi}{5}\cot\frac{\pi}{5}\approx 0.8648$. (The original answer showed WolframAlpha's closed-form output as an image.)
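A numerical cross-check of that value (an illustrative sketch; the closed form $\frac{\pi}{5}\cot\frac{\pi}{5}$ is the evaluation via the digamma reflection formula noted above):

```python
import math

# Partial sum of 1 - 1/4 + 1/6 - 1/9 + ... = sum_k (1/(5k+1) - 1/(5k+4)).
partial = sum(1 / (5 * k + 1) - 1 / (5 * k + 4) for k in range(1_000_000))
closed = math.pi / 5 / math.tan(math.pi / 5)   # (pi/5) * cot(pi/5)

print(partial)   # ~ 0.86480
print(closed)    # ~ 0.86480
```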
{ "language": "en", "url": "https://math.stackexchange.com/questions/151113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Simple recurrence relation in three dimensions I have the following recurrence relation: $$f[i,j,k] = f[i-1,j,k] + f[i,j-1,k] + f[i,j,k-1],\quad \mbox{for } i \geq j+k,$$ starting with $f[0,0,0]=1$, for $i$, $j$, and $k$ non-negative. Is there any way to find a closed form expression for $f[i,j,k]$? Note that this basically is a three dimensional version of the Catalan triangle, for which $f[i,j] = f[i-1,j] + f[i,j-1]$, for $i \geq j$, starting with $f[0,0]=1$. For this, a closed form expression is known: $f[i,j] = \frac{(i+j)!(i-j+1)}{j!(i+1)!}$. Appreciate your help!
With the constraint $i \geq j+k$ I got the following formula (inspired by the Fuss-Catalan tetrahedra formula, page 10, and with my thanks to Brian M. Scott for pointing out my errors): $$f[i,j,k]=\binom{i+1+j}{j} \binom{i+j+k}{k} \frac{i+1-j-k}{i+1+j}\ \ \text{for}\ i \geq j+k\ \ \text{and}\ \ 0\ \ \text{else}$$ plane $k=0$ $ \begin{array} {lllll} 1\\ 1 & 1\\ 1 & 2 & 2\\ 1 & 3 & 5 & 5\\ 1 & 4 & 9 & 14 & 14\\ \end{array} $ plane $k=1$ $ \begin{array} {l} 0\\ 1 \\ 2 & 4\\ 3 & 10 & 15\\ 4 & 18 & 42 & 56\\ 5 & 28 & 84 & 168 & 210\\ \end{array} $ plane $k=2$ $ \begin{array} {l} 0\\ 0\\ 2 \\ 5 & 15\\ 9 & 42 & 84\\ 14 & 84 & 252 & 420\\ 20 & 144 & 540 & 1320 & 1980\\ \end{array} $ Without the $i \geq j+k$ constraint we get simply: $$f[i,j,k]=\frac{(i+j+k)!}{i!j!k!}$$ That is the trinomial expansion (the 3D extension of Pascal's triangle: the Pascal tetrahedron), at least with the rules: * *$f[0,0,0]=1$ *$f[i,j,k]=0$ if $i<0$ or $j<0$ or $k<0$ *$f[i,j,k] = f[i-1,j,k] + f[i,j-1,k] + f[i,j,k-1]$ in the remaining cases
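The closed form can be checked mechanically against the recurrence; this is a small verification sketch (Python; the range bounds are arbitrary, and integer divisibility by $i+1+j$ is assumed, which the assertion would catch if it failed):

```python
from functools import lru_cache
from math import comb

def f_closed(i, j, k):
    if i < j + k:
        return 0
    return comb(i + 1 + j, j) * comb(i + j + k, k) * (i + 1 - j - k) // (i + 1 + j)

@lru_cache(maxsize=None)
def f_rec(i, j, k):
    # The rules above: zero for negative indices or i < j + k, base case at the origin.
    if i < 0 or j < 0 or k < 0:
        return 0
    if (i, j, k) == (0, 0, 0):
        return 1
    if i < j + k:
        return 0
    return f_rec(i - 1, j, k) + f_rec(i, j - 1, k) + f_rec(i, j, k - 1)

assert all(f_closed(i, j, k) == f_rec(i, j, k)
           for i in range(12) for j in range(12) for k in range(12))
print("closed form agrees with the recurrence for 0 <= i, j, k < 12")
```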
{ "language": "en", "url": "https://math.stackexchange.com/questions/151193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }