I need help with the following divisibility problem. Find all prime numbers $m$ and $n$ such that $mn \mid 12^{n+m}-1$ and $m = n+2$.
You want to solve $p(p+2)\mid(12^{p+1}-1)(12^{p+1}+1)$. Hint: First exclude $p=2,3$; for the remaining primes, Fermat's little theorem gives $12^{p+1}\equiv 12^2=144\pmod p$, so $$\begin{align} 12^{p+1}-1 &\equiv 143 = 11 \cdot 13 \pmod p,\\ 12^{p+1}+1 &\equiv 145 = 5 \cdot 29 \pmod p, \end{align}$$ and deduce that $p$ must be one of $5,11,29$. Edit: I'll just add more details: We want $p$ to divide $(12^{p+1}-1)(12^{p+1}+1)$, so $p$ must divide one of the two factors of this product. Suppose $p\mid 12^{p+1}-1$; since $12^{p+1}-1\equiv 143\pmod p$ (the congruence follows from Fermat's little theorem), this means $p\mid 143$ and hence $p=11$ or $p=13$. If $p+2$ is prime then we automatically have $p+2\mid 12^{p+1}-1$, again by Fermat's theorem (as $p+1=(p+2)-1$), so $p=11$ is a solution; $p=13$ isn't, as $p+2=15$ is not prime. In the other case, $p\mid 12^{p+1}+1$, we get $p\mid 145$, so $p=5$ or $p=29$; for both, $p+2$ (namely $7$ and $31$) is prime and divides $12^{p+1}-1$, so we get two more solutions, $p=5$ and $p=29$.
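A quick brute-force scan corroborates the conclusion (a Python sketch; the bound $10^4$ is an arbitrary choice of mine):

```python
from sympy import isprime

def works(p):
    # test p*(p+2) | 12^(p+(p+2)) - 1 via modular exponentiation
    m = p * (p + 2)
    return pow(12, 2 * p + 2, m) == 1

hits = [p for p in range(2, 10**4)
        if isprime(p) and isprime(p + 2) and works(p)]
print(hits)  # [5, 11, 29]
```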
{ "language": "en", "url": "https://math.stackexchange.com/questions/129582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On solvable quintics and septics Here is a nice sufficient (but not necessary) condition on whether a quintic is solvable in radicals or not. Given, $x^5+10cx^3+10dx^2+5ex+f = 0\tag{1}$ If there is an ordering of its roots such that, $x_1 x_2 + x_2 x_3 + x_3 x_4 + x_4 x_5 + x_5 x_1 - (x_1 x_3 + x_3 x_5 + x_5 x_2 + x_2 x_4 + x_4 x_1) = 0\tag{2}$ or alternatively, its coefficients are related by the quadratic in $f$, $(c^3 + d^2 - c e) \big((5 c^2 - e)^2 + 16 c d^2\big) = (c^2 d + d e - c f)^2 \tag{3}$ then (1) is solvable. This also implies that if $c\neq0$, then it has a solvable twin, $x^5+10cx^3+10dx^2+5ex+f' = 0\tag{4}$ where $f'$ is the other root of (3). The Lagrange resolvents are the roots of, $z^4+fz^3+(2c^5-5c^3e-4d^2e+ce^2+2cdf)z^2-c^5fz+c^{10} = 0\tag{5}$ so, $x = z_1^{1/5}+z_2^{1/5}+z_3^{1/5}+z_4^{1/5}\tag{6}$ Two questions though: I. Does the septic (7th deg) analogue, $x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 - (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0\tag{7}$ imply such a septic is solvable? II. The septic has a $5! = 120$-deg resolvent. While this is next to impossible to explicitly construct, is it feasible to construct just the constant term? Equating it to zero would then imply a family of solvable septics, just like (3) above. More details and examples for (2) like the Emma Lehmer quintic in my blog.
This problem is old but quite interesting. I have an answer to (I) which depends on some calculations in $\textsf{GAP}$ and Mathematica. I haven't thought about (II). Suppose an irreducible septic has roots $x_1,\ldots,x_7$ that satisfy $$ x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 - (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0. $$ I claim that the Galois group is solvable. Since the polynomial is irreducible, the Galois group must act transitively on the roots. Up to conjugacy, there are only seven transitive permutation groups of degree 7, of which only three are non-solvable. These are:

* The symmetric group $S_7$.
* The alternating group $A_7$.
* The group $L(7) \cong \mathrm{PSL}(2,7)$ of symmetries of the Fano plane.

Since $L(7) \subset A_7 \subset S_7$, we can suppose that the Galois group contains a copy of $L(7)$ and attempt to derive a contradiction. Now, the given identity involves a circular order on the seven roots of the septic. Up to symmetry, there are only three possible circular orders on the points of the Fano plane, corresponding to the three elements of $D_7\backslash S_7/L(7)$. (This result was computed in $\textsf{GAP}$.) Bijections of the Fano plane with $\{1,\ldots,7\}$ corresponding to these orders are shown below. Thus, we may assume that the Galois group contains the symmetries of one of these three Fano planes. Before tackling these cases individually, observe in general that the pointwise stabilizer of a line in the Fano plane is a Klein four-group, where every element is a product of two transpositions. For example, in the first plane, the symmetries that fix $1$, $2$, and $7$ are precisely $(3\;4)(5\;6)$, $(3\;5)(4\;6)$, and $(3\;6)(4\;5)$. These are the only sorts of elements of $L(7)$ that we will need for the argument.

Cases 1 and 2: In each of the first two planes, $\{3,5,7\}$ is a line, and thus $(1\;2)(4\;6)$ is an element of $L(7)$. Applying this permutation to the roots in the equation $$ x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 - (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0 $$ and subtracting from the original yields the equation $$ (2x_1-2x_2-x_4+x_6)(x_7-x_3) \;=\; 0. $$ Since $x_3\ne x_7$, we conclude that $2x_1 + x_6 = 2x_2+x_4$. Now, no three of $1,2,4,6$ are collinear in either of the first two planes. It follows that there is a symmetry of each of the planes that fixes $1$ and $6$ but switches $2$ and $4$, namely $(2\;4)(3\;7)$ for the first plane and $(2\;4)(5\;7)$ for the second plane. Thus we have two equations $$ 2x_1 + x_6 = 2x_2+x_4,\qquad 2x_1 + x_6 = x_2+2x_4, $$ and subtracting gives $x_2=x_4$, a contradiction.

Case 3: The argument for the last plane is similar but slightly more complicated. Observe that each of the following eight permutations is a symmetry of the third plane: $$ \text{the identity},\qquad (2\;7)(3\;4),\qquad (3\;6)(5\;7),\qquad (1\;3)(4\;5) $$ $$ (1\;2)(3\;6),\qquad (3\;7)(5\;6),\qquad (3\;5)(6\;7), \qquad (1\;2)(5\;7) $$ We apply each of these permutations to the equation $$ x_1 x_2 + x_2 x_3 + \dots + x_7 x_1 - (x_1 x_3 + x_3 x_5 + \dots + x_6 x_1) = 0, $$ adding together the first four results, and then subtracting the other four results. According to Mathematica, this gives the equation $$ 5(x_2-x_1)(x_3-x_5+x_6-x_7)=0. $$ Since $x_1\ne x_2$, it follows that $x_3+x_6=x_5+x_7$. As in the last case, observe that no three of $3,5,6,7$ are collinear, so there exists a symmetry of the plane that fixes $3$ and $5$ and switches $6$ and $7$, namely $(6\;7)(1\;4)$. 
This gives us two equations $$ x_3+x_6=x_5+x_7,\qquad x_3+x_7=x_5+x_6 $$ and subtracting yields $x_6=x_7$, a contradiction.
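The Case 1-2 computation is easy to reproduce symbolically; here is a sketch in Python/SymPy (the tuple encoding of the permutation action is my own bookkeeping, not from the original GAP/Mathematica sessions):

```python
from sympy import symbols, expand, factor

x = symbols('x1:8')  # x[0], ..., x[6] stand for the roots x_1, ..., x_7

def S(order):
    """The defining expression with the roots arranged in the given 1-based order."""
    a = [x[i - 1] for i in order]
    cyc1 = sum(a[i] * a[(i + 1) % 7] for i in range(7))  # x1x2 + x2x3 + ... + x7x1
    cyc2 = sum(a[i] * a[(i + 2) % 7] for i in range(7))  # x1x3 + x3x5 + ... + x6x1
    return cyc1 - cyc2

# apply (1 2)(4 6) to the roots and subtract from the original
diff = expand(S((1, 2, 3, 4, 5, 6, 7)) - S((2, 1, 3, 6, 5, 4, 7)))
print(factor(diff))  # (x7 - x3)*(2*x1 - 2*x2 - x4 + x6), up to sign and ordering
```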
{ "language": "en", "url": "https://math.stackexchange.com/questions/129655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Can a non-proper variety contain a proper curve Let $f:X\to S$ be a finite type, separated but non-proper morphism of schemes. Can there be a projective curve $g:C\to S$ and a closed immersion $C\to X$ over $S$? Just to be clear: A projective curve is a smooth projective morphism $X\to S$ such that the geometric fibres are geometrically connected and of dimension 1. In simple layman's terms: Can a non-projective variety contain a projective curve? Feel free to replace "projective" by "proper". It probably won't change much.
Sure: $\mathbb{P}_k^2-\{pt\}$ is not proper over $\operatorname{Spec}(k)$, yet it contains a proper $\mathbb{P}_k^1$, namely any line in $\mathbb{P}_k^2$ avoiding the deleted point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/129713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show the existence of a complex differentiable function defined outside $|z|=4$ with derivative $\frac{z}{(z-1)(z-2)(z-3)}$ My attempt I wrote the given function as a sum of rational functions (via partial fraction decomposition), namely $$ \frac{z}{(z-1)(z-2)(z-3)} = \frac{1/2}{z-1} + \frac{-2}{z-2} + \frac{3/2}{z-3}. $$ This then allows me to formally integrate the function. In particular, I find that $$ F(z) = 1/2 \log(z-1) - 2 \log(z-2) + 3/2 \log(z-3) $$ is a complex differentiable function on the set $\Omega = \{z \in \mathbb{C}: |z| > 4\}$ with the derivative we want. So this seems to answer the question, as far as I can tell. The question then asks if there is a complex differentiable function on $\Omega$ whose derivative is $$ \frac{z^2}{(z-1)(z-2)(z-3)}. $$ Again, I can write this as a sum of rational functions and formally integrate to obtain the desired function on $\Omega$ with this particular derivative. Woo hoo. My question Is there more to this question that I'm not seeing? I was also able to write the first given derivative as a geometric series and show that this series converged for all $|z| > 3$, but I don't believe this helps me to say anything about the complex integral of this function. In the case that it does, perhaps this is an alternative avenue to head down? Any insight/confirmation that I'm not overlooking something significant would be much appreciated. Note that this is an old question that often appears on study guides for complex analysis comps (one being my own), so that's in part why I'm thinking (hoping?) there may be something deeper here. For possible historical context, the question seems to date back to 1978 (see number 7 here): http://math.rice.edu/~idu/Sp05/cx_ucb.pdf Thanks for your time.
The key seems to be that the coefficients $1/2$, $-2$ and $3/2$ sum to $0$. So if you choose branch cuts for the three logarithms such that the cuts coincide throughout the $|z|>4$ region, then the jumps at the cuts will cancel each other out (each raw logarithm always jumps by $2\pi i$ across its cut) and leave a single continuous (hence holomorphic) function on $\{z\in\mathbb C\mid |z|>4\}$.
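As a sanity check, the partial fractions and the zero coefficient sum can be recomputed in a couple of lines (Python/SymPy sketch):

```python
from sympy import symbols, apart, Rational

z = symbols('z')
print(apart(z / ((z - 1) * (z - 2) * (z - 3)), z))
# 1/(2*(z - 1)) - 2/(z - 2) + 3/(2*(z - 3)), up to ordering

print(Rational(1, 2) - 2 + Rational(3, 2))  # 0: the log jumps cancel
```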
{ "language": "en", "url": "https://math.stackexchange.com/questions/129773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Interpreting $F[x,y]$ for $F$ a field. First, is it appropriate to write $F[x,y] = (F[x])[y] $? In particular, if $F$ is a field, then we know $F[x]$ is a Euclidean domain. Are there necessary and sufficient conditions for when $F[x,y]$ is also a Euclidean domain?
In most constructions of polynomial rings (e.g., as almost null sequences with values in the ground ring), the rings $F[x,y]$, $(F[x])[y]$, and $(F[y])[x]$ are formally different objects: $F[x,y]$ is the set of all functions $m\colon \mathbb{N}\times\mathbb{N}\to F$ with $m(i,j)=0$ for almost all $(i,j)$; $(F[x])[y]$ is the set of all functions $f\colon\mathbb{N}\to F[x]$ with $f(n)=0$ for almost all $n$; and $(F[y])[x]$ is the set of all functions $g\colon\mathbb{N}\to F[y]$ with $g(m)=0$ for almost all $m$. However, there are natural isomorphisms between them, $$(F[x])[y]\cong F[x,y]\cong (F[y])[x].$$ Informally, this corresponds to the fact that we can write a polynomial in $x$ and $y$ by "putting $y$'s together" or by "putting $x$'s together". So, for instance, $1+x+2y + 3x^2y - 7xy^4 + 8x^2y^2$ can be written as $$\begin{align*} 1+x+2y + 3x^2y - 7xy^4 + 8x^2y^2 &= (1+2y) + (1-7y^4)x + (3y+8y^2)x^2\\ &= (1+x) + (2+3x^2)y + (8x^2)y^2 - (7x)y^4. \end{align*}$$ So, yes, you can say that $F[x,y]$ is "essentially" equal to $(F[x])[y]$. However, $F[x,y]$ is never a Euclidean domain, because it is never a PID: $\langle x,y\rangle$ is never principal. In fact:

Theorem. Let $D$ be a domain. Then $D[x]$ is a PID if and only if $D$ is a field.

Proof. If $D$ is a field, then $D[x]$ is a Euclidean domain, hence a PID. If $D$ is not a field, then let $a\in D$ be an element that is a nonzero nonunit. Then $\langle a,x\rangle$ is not a principal ideal: if $\langle a,x\rangle = \langle r\rangle$, then $r|a$, hence $r$ must be a polynomial of degree $0$; since $r|x$, then $r$ must be a unit or an associate of $x$; since it must be degree $0$, $r$ must be a unit. So $\langle a,x\rangle$ is principal if and only if it is equal to the entire ring. But $D[x]/\langle a,x\rangle \cong D/\langle a\rangle\neq 0$ (since $\langle a\rangle\neq D$), so $\langle a,x\rangle\neq D[x]$. Hence $\langle a,x\rangle$ is not principal. $\Box$
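The two regroupings in the example are exactly what SymPy's `collect` produces (a small Python illustration of the isomorphisms above):

```python
from sympy import symbols, collect

x, y = symbols('x y')
p = 1 + x + 2*y + 3*x**2*y - 7*x*y**4 + 8*x**2*y**2

print(collect(p, x))  # the polynomial viewed in (F[y])[x]
print(collect(p, y))  # the same polynomial viewed in (F[x])[y]
```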
{ "language": "en", "url": "https://math.stackexchange.com/questions/129830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Spread out the zeros in a binary sequence Suppose I have a machine that processes units at a fixed rate. If I want to run it at a lower rate, I have to leave gaps in the infeed. For example, if you want to process 3/4, then you would feed in 3, leave a gap, then repeat. This could be encoded as a binary sequence: 1110. If you want to process 3/5, then a simple sequence would be 11100. This however leaves an unnecessarily large gap. Perhaps they want as constant a rate as possible down the line. A better sequence would be 11010. Where the gaps are spread as far as possible (remembering that the sequence repeats). Hopefully this has explained the problem. I shall now attempt to phrase it more mathematically. Given a fraction $a/b$, generate a binary sequence of length $b$, that contains exactly $a$ ones, and where the distance between zeros is maximised if the sequence were repeated. In my attempts so far, I've worked out the building blocks of the sequence, but not quite how to put it together. For instance, 77/100 should be composed of 8 lots of 11110 and 15 lots of 1110. Since $8\times5+15\times4=100$ and $8\times4+15\times3=77$. The next step of splicing together my group of 8 and my group of 15 eludes me (in general).
Consider the following algorithm: let $a_i$ be the digit output on the $i$-th step, and let $b_i = b_{i-1}+a_i$ (with $b_0 = 0$) be the number of ones output through step $i$ (so $i-b_i$ is the number of zeros output through step $i$). Define $a_{i+1}$ to be zero iff $\frac{b_i}{i+1} > p$ (where $p$ is the given proportion), and one otherwise. This way, you get the zeros as far away from each other as possible, and the construction also supports irrational proportions $p$. Moreover, for rational $p$ the output of this algorithm is periodic.
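A direct transcription of this rule into Python (a sketch; exact fractions avoid floating-point trouble exactly at the threshold):

```python
from fractions import Fraction

def spread(p, length):
    """Greedy rule: emit 0 iff the proportion of ones so far already exceeds p."""
    out, ones = [], 0
    for i in range(1, length + 1):
        if Fraction(ones, i) > p:   # b_{i-1} / i > p  ->  emit a zero
            out.append(0)
        else:
            out.append(1)
            ones += 1
    return ''.join(map(str, out))

print(spread(Fraction(3, 4), 12))  # density 3/4
print(spread(Fraction(3, 5), 15))  # density 3/5, eventually periodic
```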
{ "language": "en", "url": "https://math.stackexchange.com/questions/129889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Specify the convergence of function series The task is to specify the convergence of the following infinite function series (pointwise, almost uniform and uniform convergence): a) $\displaystyle \sum_{n=1}^{+\infty}\frac{\sin(n^2x)}{n^2+x^2}, \ x\in\mathbb{R}$ b) $\displaystyle\sum_{n=1}^{+\infty}\frac{(-1)^n}{n+x}, \ x\in[0;+\infty)$ c) $\displaystyle\sum_{n=0}^{+\infty}x^2e^{-nx}, \ x\in(0;+\infty)$ I know basic facts about these types of convergence and the Weierstrass M-test, but I still have problems with using this in practice.
First observe that each of the series converges pointwise on its given interval (using standard comparison tests and results on $p$-series, geometric series, and alternating series). Towards determining uniform convergence, let's first recall the Weierstrass $M$-test: Suppose $(f_n)$ is a sequence of real-valued functions on the set $I$ and $(M_n)$ is a sequence of positive real numbers such that $|f_n(x)|\le M_n$ for $x\in I$, $n\in\Bbb N$. If the series $\sum M_n$ is convergent then $\sum f_n$ is uniformly convergent on $I$. It is worthwhile to consider the heart of the proof of this theorem: Under the given hypotheses, if $m>n$, then for any $x\in I$ $$\tag{1} \bigl| f_{n+1}(x)+\cdots+f_m(x)\bigr| \le \bigl|f_{n+1}(x)\bigr|+\cdots+\bigl|f_m(x)\bigr| \le M_{n+1}+\cdots+M_m. $$ So if $\sum M_n$ converges, we can make the right hand side of $(1)$ as small as we wish. Noting that the right hand side of $(1)$ is independent of $x$, we can conclude that $\sum f_n$ is uniformly Cauchy on $I$, and thus uniformly convergent on $I$.

Now on to your problem: To apply the $M$-test, you have to find appropriate $M_n$ for the series under consideration. Keep in mind that the $M_n$ have to be positive, summable, and bound the $|f_n|$. Sometimes they are easy to find, as in the series in a). Here note that for any $n\ge 1$ and $x\in\Bbb R$, $$ \biggl| {\sin(n^2x)\over n^2+x^2}\biggr|\le {1\over n^2}. $$ So, take $M_n={1\over n^2}$ and apply the $M$-test. The series in a) converges uniformly on $\Bbb R$.

Sometimes finding the $M_n$ is not so easy. This is the case in c). Crude approximations for $f_n(x)=x^2e^{-nx}$ will not help. However, we could try to find the maximum value of $f_n$ over $(0,\infty)$ and perhaps this will give us what we want. And indeed, doing this (using methods from differential calculus), we discover that the maximum value of $f_n(x)=x^2e^{-nx}$ over $(0,\infty)$ is ${4e^{-2}\over n^2}$. And now the road towards using the $M$-test is paved...

Sometimes the $M$-test doesn't apply. This is the case for the series in b); the required $M_n$ can't be found (at least, I can't find them). However, here, the proof of the $M$-test gives us an idea. Since the series in b) is alternating (that is, for each $x\in[0,\infty)$, the series $\sum\limits_{n=1}^\infty{(-1)^n\over x+n}$ is alternating), perhaps we can show it is uniformly Cauchy on $[0,\infty)$. Indeed we can: For any $m\ge n$ and $x\ge0$ $$\tag{2} \Biggl|\,{(-1)^n\over n+x}+{(-1)^{n+1}\over (n+1)+x}+\cdots+{ (-1)^m\over m+x}\,\Biggr|\ \le\ {1\over n+x}\le {1\over n}. $$ The term on the right hand side of $(2)$ is independent of $x$ and can be made as small as desired. So, the series in b) is uniformly Cauchy on $[0,\infty)$, and thus uniformly convergent on $[0,\infty)$.
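The calculus step in part c) can be reproduced mechanically (Python/SymPy sketch):

```python
from sympy import symbols, exp, diff, solve, simplify

x = symbols('x', positive=True)
n = symbols('n', positive=True)
f = x**2 * exp(-n*x)

crit = solve(diff(f, x), x)          # [2/n]  (x = 0 is excluded by positivity)
print(crit)
print(simplify(f.subs(x, crit[0])))  # 4*exp(-2)/n**2, the M_n to use
```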
{ "language": "en", "url": "https://math.stackexchange.com/questions/130023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If "multiples" is to "product", "_____" is to "sum" I know this might be a really simple question for those fluent in English, but I can't find the term that describes numbers that make up a sum. The numbers of a certain product are called "multiples" of that "product". Then what are the numbers of a certain sum called?
To approach the question from another direction: A "multiple" of 7 is a number that is the result of multiplying 7 with something else. If you try to generalize that to sums, you get something like: A "__" of 7 is a number that is the result of adding 7 to something. But that's everything -- at least as long as we allow negative numbers; and if we don't allow negative numbers, then it just means something that is larger than 7. Neither of these concepts feels useful enough in itself to be worth deciding on a particular noun for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/130093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 0 }
Combinations of Lego brick figures in an array of random bricks I have an assignment where a robot should assemble some Lego figures of the Simpsons. See the figures here: Simpsons figures! To start with, we have some identically sized, differently colored Lego bricks on a conveyor belt. See image. My problem is to find out which combinations of figures to make out of the bricks on the conveyor belt. The bricks on the conveyor belt can vary. Here is an example: Conveyor bricks: 5x yellow, 2x red, 3x blue, 1x white, 4x green, 1x orange. From these bricks I can make:

* Homer(Y,W,B), Marge(B,Y,G), Bart(Y,O,B), Lisa(Y,R,Y), Rest(Y,G,G,G), OR
* Marge(B,Y,G), Marge(B,Y,G), Marge(B,Y,G), Lisa(Y,R,Y), Rest(R,W,G,O), OR
* ...

Any way to automate this? Any suggestions for literature or theories? Algorithms I should check out? Thank you in advance for any help you can provide.
The problem of minimizing the number of unused blocks is an integer linear programming problem, equivalent to maximizing the number of blocks that you do use. Integer programming problems are in general hard, and I don’t know much about them or the methods used to solve them. In case it turns out to be at all useful to you, though, here’s a more formal description of the problem of minimizing the number of unused blocks. You have seven colors of blocks, say colors $1$ through $7$; the input consists of $c_k$ blocks of color $k$ for some constants $c_k\ge 0$. You have five types of output (Simpsons), which I will number $1$ through $5$ in the order in which they appear in this image. If the colors are numbered yellow $(1)$, white $(2)$, light blue $(3)$, dark blue $(4)$, green $(5)$, orange $(6)$, and red $(7)$, the five output types require colors $(1,2,3),(4,1,5),(1,6,4),(1,7,1)$, and $(1,3)$. To make $x_k$ Simpsons of type $k$ for $k=1,\dots,5$ requires $$\begin{align*} x_1+x_2+x_3+2x_4+x_5&\;\text{blocks of color }1,\\ x_1&\;\text{blocks of color }2,\\ x_1+x_5&\;\text{blocks of color }3,\\ x_2+x_3&\;\text{blocks of color }4,\\ x_2&\;\text{blocks of color }5,\\ x_3&\;\text{blocks of color }6,\text{ and}\\ x_4&\;\text{blocks of color }7\;. \end{align*}$$ This yields the following system of inequalities: $$\left\{\begin{align*} x_1+x_2+x_3+2x_4+x_5&\le c_1\\ x_1&\le c_2\\ x_1+x_5&\le c_3\\ x_2+x_3&\le c_4\\ x_2&\le c_5\\ x_3&\le c_6\\ x_4&\le c_7\;. \end{align*}\right.\tag{1}$$ Let $b=c_1+c_2+\dots+c_7$, the total number of blocks, and let $$f(x_1,x_2,\dots,x_5)=3(x_1+x_2+x_3+x_4)+2x_5\;,$$ the number of blocks used (each of the first four figure types uses three blocks, and the fifth uses two, so $f\le b$). You want to maximize $f(x_1,\dots,x_5)$ subject to the constraints in $(1)$ and the requirement that the $x_k$ be non-negative integers.
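For what it's worth, this small ILP can be solved directly in Python with `scipy.optimize.milp` (SciPy 1.9+); the capacity vector below is a placeholder I made up, to be replaced by the actual conveyor counts $c_1,\dots,c_7$:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# objective: blocks used = 3(x1+x2+x3+x4) + 2*x5; milp minimizes, so negate
c_obj = -np.array([3, 3, 3, 3, 2])

# rows = colors 1..7, columns = figure types 1..5 (color usage from the text)
A = np.array([
    [1, 1, 1, 2, 1],   # color 1 (yellow)
    [1, 0, 0, 0, 0],   # color 2 (white)
    [1, 0, 0, 0, 1],   # color 3 (light blue)
    [0, 1, 1, 0, 0],   # color 4 (dark blue)
    [0, 1, 0, 0, 0],   # color 5 (green)
    [0, 0, 1, 0, 0],   # color 6 (orange)
    [0, 0, 0, 1, 0],   # color 7 (red)
])
cap = np.array([5, 1, 3, 2, 4, 1, 2])   # placeholder counts c_1..c_7

res = milp(c=c_obj,
           constraints=LinearConstraint(A, 0, cap),
           integrality=np.ones(5),
           bounds=Bounds(0, np.inf))
print(res.x, -res.fun)   # optimal figure counts and the number of blocks used
```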
{ "language": "en", "url": "https://math.stackexchange.com/questions/130166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating conditional probability for markov chain I have a Markov chain with state space $E = \{1,2,3,4,5\}$ and transition matrix below: $$ \begin{bmatrix} 1/2 & 0 & 1/2 & 0 & 0 \\\ 1/3 & 2/3 & 0 & 0 & 0 \\\ 0 & 1/4 & 1/4 & 1/4 & 1/4 \\\ 0 & 0 & 0 & 3/4 & 1/4 \\\ 0 & 0 & 0 & 1/5 & 4/5\ \end{bmatrix} $$ How would I find the conditional probabilities of $\mathbb{P}(X_2 = 5 | X_0 =1)$ and $\mathbb{P}(X_3 = 1 | X_0 =1)$? I am trying to use the formula (or any other formula, if anyone knows of any) $p_{ij}^{(n)} = \mathbb{P}(X_n = j | X_0 =i)$, the probability of going from state $i$ to state $j$ in $n$ steps. So $\mathbb{P}(X_2 = 5 | X_0 =1) = p_{15}^2$, so I read the entry in $p_{15}$, and get the answer is $0^2$, but the answer in my notes say it is $1/8$? Also, I get for $\mathbb{P}(X_3 = 1 | X_0 =1) = p_{11}^3 = (\frac{1}{2})^3 = 1/8$, but the answer says it is $1/6$?
The notation $p^{(2)}_{15}$ is not to be confused with the square of $p_{15}$ since it stands for the $(1,5)$ entry of the square of the transition matrix. Thus, $$ p^{(2)}_{15}=\sum_{i=1}^5p_{1i}p_{i5}. $$ Likewise for $p^{(3)}_{11}$, which is the $(1,1)$ entry of the cube of the transition matrix, that is, $$ p^{(3)}_{11}=\sum_{i=1}^5\sum_{j=1}^5p_{1i}p_{ij}p_{j1}. $$ In the present case, there are only two ways to start from $1$ and to be back at $1$ after three steps, either the path $1\to1\to1\to1$, or the path $1\to3\to2\to1$. The first path has probability $\left(\frac12\right)^3=\frac18$ and the second path has probability $\frac12\frac14\frac13=\frac1{24}$, hence $p^{(3)}_{11}=\frac18+\frac1{24}=\frac16$.
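Numerically (a quick NumPy check, with the transition matrix copied from the question):

```python
import numpy as np

P = np.array([
    [1/2, 0,   1/2, 0,   0  ],
    [1/3, 2/3, 0,   0,   0  ],
    [0,   1/4, 1/4, 1/4, 1/4],
    [0,   0,   0,   3/4, 1/4],
    [0,   0,   0,   1/5, 4/5],
])

print(np.linalg.matrix_power(P, 2)[0, 4])  # 0.125    = 1/8
print(np.linalg.matrix_power(P, 3)[0, 0])  # 0.1666... = 1/6
```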
{ "language": "en", "url": "https://math.stackexchange.com/questions/130217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Critical points of $f(x,y)=x^2+xy+y^2+\frac{1}{x}+\frac{1}{y}$ I would like some help finding the critical points of $f(x,y)=x^2+xy+y^2+\frac{1}{x}+\frac{1}{y}$. I tried solving $f_x=0, f_y=0$ (where $f_x, f_y$ are the partial derivatives) but the resulting equation is very complex. The exercise has a hint: think of $f_x-f_y$ and $f_x+f_y$. However, I can't see where to use it. Thanks!
After doing some computations I found the following (let's hope I didn't make any mistakes). You need to solve the equations $$f_x = 2x + y - \frac{1}{x^2} = 0, \qquad f_y = 2y + x -\frac{1}{y^2} = 0;$$ therefore, after subtracting and adding them as in the hint, we get $$\begin{align} f_x - f_y &= x - y - \frac{1}{x^2} + \frac{1}{y^2} = 0 \\ f_x + f_y &= 3x + 3y -\frac{1}{x^2} - \frac{1}{y^2} = 0 \end{align} $$ but you can factor them a little bit to get $$ \begin{align} f_x - f_y &= x - y + \frac{x^2 - y^2}{x^2 y^2} = (x - y) \left ( 1 + \frac{x+ y}{x^2 y^2}\right ) = 0\\ f_x + f_y &= 3(x + y) -\frac{x^2 + y^2}{x^2 y^2} = 0 \end{align} $$ Now from the first equation you get two conditions: either $x = y$ or $x+y = -x^2 y^2$. If $x = y$, you can go back to your first equation for $f_x$ and substitute to get $$2x + x - \frac{1}{x^2} = 0 \implies 3x = \frac{1}{x^2} \implies x = \frac{1}{\sqrt[3]{3}},$$ and then you get the critical point $\left ( \dfrac{1}{\sqrt[3]{3}}, \dfrac{1}{\sqrt[3]{3}} \right )$. Now if instead $x + y = -x^2 y^2$, then substituting into the equation $f_x + f_y = 0$ gives $$ 3(-x^2 y^2) - \frac{x^2 + y^2}{x^2 y^2} = 0 \implies 3x^4 y^4 + x^2 + y^2 = 0 \implies x = y = 0. $$ But $x=y=0$ is actually a point where the partial derivatives, and indeed your original function, are not defined, so it yields no further critical points.
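One can confirm the critical point with SymPy by direct substitution (Python sketch):

```python
from sympy import symbols, diff, simplify, cbrt

x, y = symbols('x y', real=True)
f = x**2 + x*y + y**2 + 1/x + 1/y

c = 1 / cbrt(3)   # the candidate (3^(-1/3), 3^(-1/3))
print(simplify(diff(f, x).subs({x: c, y: c})))  # 0
print(simplify(diff(f, y).subs({x: c, y: c})))  # 0
```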
{ "language": "en", "url": "https://math.stackexchange.com/questions/130277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Improving Gift Wrapping Algorithm I am trying to solve task 2 from exercise 3.4.1 of Computational Geometry in C by Joseph O'Rourke. The task asks to improve the gift wrapping algorithm for building the convex hull of a set of points. Exercise: During the course of gift wrapping, it's sometimes possible to identify points that cannot be on the convex hull and to eliminate them from the set "on the fly". Work out rules to accomplish this. What is a worst-case set of points for your improved algorithm? The only thing I came up with is that we can eliminate from the candidate list all points that already form part of the convex hull boundary. This gives a slight improvement, $O(\frac{hn}{2})$ instead of $O(hn)$, which in terms of Big-O notation is actually the same. After a little bit of searching I found that an improvement can be made by ray shooting, but I don't understand how to apply ray shooting to our case (the points are not sorted and don't form a convex hull, so there is no efficient search for the vertex that a ray shot would find). If you have any ideas on how to improve the gift wrapping algorithm, please share them. Thanks!
Hint: You're already computing the angles of the lines from the current point to all other points. What does the relationship of these angles to the angle of the line to the starting point tell you? (This also doesn't improve the time complexity of the algorithm.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/130338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the minimal polynomials of trigonometric expressions quickly If in an exam I had to calculate the minimal polynomials of, say, $\sin(\frac{2 \pi}{5})$ or $\cos(\frac{2 \pi}{19})$, what would be the quickest way to do it? I can use the identity $e^{ \frac{2 i \pi}{n}} = \cos(\frac{2 \pi}{n}) + i\sin(\frac{ 2 \pi}{n})$ and raise to the power $n$, but this gets nasty in the $n = 19$ case... Thanks
$\cos(2\pi/19)$ has degree 9, and I doubt anyone would put it on an exam. $\sin(2\pi/5)$ is a bit more reasonable. Note that $\sin(4\pi/5)=2\sin(2\pi/5)\cos(2\pi/5)$, and also $\sin(4\pi/5)=\sin(\pi/5)$, and $\cos(2\pi/5)=1-2\sin^2(\pi/5)$, and with a few more identities like that you should be able to pull out a formula. Or use your idea in the form $2i\sin(2\pi/5)=z-z^{-1}$, where $z=e^{2\pi i/5}$, then, squaring, $-4\sin^2(2\pi/5)=z^2-2+z^{-2}$, $$-8i\sin^3(2\pi/5)=z^3-3z+3z^{-1}-z^{-3}=z^{-2}-3z+3z^{-1}-z^2$$ and a similar formula for the 4th power, and remember that $z^2+z+1+z^{-1}+z^{-2}=0$
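For checking such hand computations, SymPy can produce the minimal polynomials directly (Python sketch; the degree-9 case may take a moment):

```python
from sympy import minimal_polynomial, sin, cos, pi, symbols

x = symbols('x')
print(minimal_polynomial(sin(2*pi/5), x))   # 16*x**4 - 20*x**2 + 5
print(minimal_polynomial(cos(2*pi/19), x))  # a degree-9 polynomial
```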
{ "language": "en", "url": "https://math.stackexchange.com/questions/130412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Residue integral: $\int_{- \infty}^{+ \infty} \frac{e^{ax}}{1+e^x} dx$ with $0 \lt a \lt 1$. I'm self studying complex analysis. I've encountered the following integral: $$\int_{- \infty}^{+ \infty} \frac{e^{ax}}{1+e^x} dx \text{ with } a \in \mathbb{R},\ 0 \lt a \lt 1. $$ I've done the substitution $e^x = y$. What kind of contour can I use in this case ?
There is a nice question related to this problem, and we can evaluate it using real analysis: $$I=\operatorname{PV}\!\int_{-\infty }^{\infty }\frac{e^{ax}}{1-e^{x}}\,dx,\qquad 0<a<1,\tag{$*$}$$ taken as a principal value because of the pole at $x=0$. Letting $x\rightarrow -x$ gives $$I=\operatorname{PV}\!\int_{-\infty }^{\infty }\frac{e^{-ax}}{1-e^{-x}}\,dx.\tag{$**$}$$ Adding $(*)$ and $(**)$, and using $\frac{e^{ax}}{1-e^{x}}=-\frac{e^{-(1-a)x}}{1-e^{-x}}$, we get $$2I=\int_{-\infty }^{\infty }\left[\frac{e^{ax}}{1-e^{x}}+\frac{e^{-ax}}{1-e^{-x}}\right]dx=\int_{-\infty }^{\infty }\left(\frac{e^{-ax}}{1-e^{-x}}-\frac{e^{-(1-a)x}}{1-e^{-x}}\right)dx.$$ The last integrand is even in $x$ (and bounded near $0$), so $$I=\int_{0}^{\infty }\left(\frac{e^{-ax}}{1-e^{-x}}-\frac{e^{-(1-a)x}}{1-e^{-x}}\right)dx=\int_{0}^{\infty }\left(\frac{e^{-x}}{x}-\frac{e^{-(1-a)x}}{1-e^{-x}}\right)dx-\int_{0}^{\infty }\left(\frac{e^{-x}}{x}-\frac{e^{-ax}}{1-e^{-x}}\right)dx,$$ where we added and subtracted $\frac{e^{-x}}{x}$ so that each integral converges separately. By Gauss's integral representation of the digamma function, $\Psi(s)=\int_{0}^{\infty}\left(\frac{e^{-x}}{x}-\frac{e^{-sx}}{1-e^{-x}}\right)dx$, this gives $$I=\Psi (1-a)-\Psi (a)=\frac{\pi }{\tan(\pi a)}.$$ This problem was proposed by Cornel Ioan Vălean; it is very easy to solve by complex analysis, but splendid to evaluate by real analysis.
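A quick numerical confirmation of the final identity (Python/mpmath sketch):

```python
from mpmath import mp, digamma, pi, tan

mp.dps = 30
for a in (0.1, 0.3, 0.7):
    print(a, digamma(1 - a) - digamma(a) - pi / tan(pi * a))  # ~0 for 0 < a < 1
```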
{ "language": "en", "url": "https://math.stackexchange.com/questions/130472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
Is there a computer program that does diagram chases? After having done many tedious, robotic proofs that various arrows in a given diagram have certain properties, I've begun to wonder whether or not someone has made this automatic. You should be able to tell a program where the objects and arrows are, and which arrows have certain properties, and it should be able to list all of the provable implications. Has this been done? Humans should no longer have to do diagram chases by hand!
I am developing a Mathematica package called WildCats which allows the computer algebra system Mathematica to perform category theory computations (even graphical computations). The posted version 0.36 is available at https://sites.google.com/site/wildcatsformma/ A much more powerful version 0.50 will be available at the beginning of May. The package is totally free (GPL licence), but requires the commercial Mathematica system available at www.wolfram.com. The educational, student and home editions are very moderately priced. One of the things you can already do in WildCats 0.36 is apply a functor to a diagram and see the resulting diagram. In the new version 0.50, you will also be able to do diagram chasing. Needless to say, I greatly appreciate user feedback and comments.
{ "language": "en", "url": "https://math.stackexchange.com/questions/130527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 1, "answer_id": 0 }
Probability - Coin Toss - Find Formula The problem statement, all variables and given/known data: Suppose a fair coin is tossed $n$ times. Find simple formulae in terms of $n$ and $k$ for a) $P(k-1 \mbox{ heads} \mid k-1 \mbox{ or } k \mbox{ heads})$ b) $P(k \mbox{ heads} \mid k-1 \mbox{ or } k \mbox{ heads})$ Relevant equations: $P(k \mbox{ heads in } n \mbox{ fair tosses})=\binom{n}{k}2^{-n}\quad (0\leq k\leq n)$ The attempt at a solution: I'm stuck on the conditional probability. I've dabbled with it a little bit but I'm confused what $k-1$ intersect $k$ is. This is for review and not homework. The answer to a) is $k/(n+1)$. I tried $P(k-1 \mbox{ heads} \mid k \mbox{ heads})=P(k-1 \cap K)/P(K \mbox{ heads})=P(K-1)/P(K).$ I also was thinking about $$P(A\mid A,B)=P(A\cap (A\cup B))/P(A\cup B)=P(A\cup (A\cap B))/P(A\cup B)=P(A)/(P(A)+P(B)-P(AB))$$
By the usual formula for conditional probability, an ugly form of the answer is $$\frac{\binom{n}{k-1}(1/2)^n}{\binom{n}{k-1}(1/2)^n+\binom{n}{k}(1/2)^n}.$$ Cancel the $(1/2)^n$. Now the usual formula for $\binom{a}{b}$ plus a bit of algebra will give what you want. We can simplify the calculation somewhat by using the fact that $\binom{n}{k-1}+\binom{n}{k}=\binom{n+1}{k}$, which has a nice combinatorial proof, and is built into the "Pascal triangle" definition of the binomial coefficients. As for the algebra, $\binom{n}{k-1}=\frac{n!}{(k-1)!(n-k+1)!}$ and $\binom{n+1}{k}=\frac{(n+1)!}{k!(n-k+1)!}$. When you divide, there is a lot of cancellation.
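The resulting identity $k/(n+1)$ is easy to spot-check exactly (Python sketch):

```python
from math import comb
from fractions import Fraction

for n in range(1, 9):
    for k in range(1, n + 1):
        lhs = Fraction(comb(n, k - 1), comb(n, k - 1) + comb(n, k))
        assert lhs == Fraction(k, n + 1)
print("P(k-1 heads | k-1 or k heads) = k/(n+1) verified for n <= 8")
```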
{ "language": "en", "url": "https://math.stackexchange.com/questions/130562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
All partial derivatives are 0. I know that for a function $f$ all of its partial derivatives are $0$. Thus, $\frac{\partial f_i}{\partial x_j} = 0$ for any $i = 1, \dots, m$ and any $j = 1, \dots, n$. Is there any easy way to prove that $f$ is constant? The results seems obvious but I'm having a hard time expressing it in words explicitly why it's true.
It's not hard to see that, given $\mathbf{p},\mathbf{q}\in\mathbb{R}^n$, $$\mathbf{f}(\mathbf{p})-\mathbf{f}(\mathbf{q})=\bigl(f_1(\mathbf{p})-f_1(\mathbf{q}),\,f_2(\mathbf{p})-f_2(\mathbf{q}),\,\ldots,\,f_m(\mathbf{p})-f_m(\mathbf{q})\bigr)=\left(\int_{\gamma} \nabla f_1(\mathbf{r})\cdot d\mathbf{r},\ \int_{\gamma} \nabla f_2(\mathbf{r})\cdot d\mathbf{r},\ \ldots,\ \int_{\gamma} \nabla f_m(\mathbf{r})\cdot d\mathbf{r}\right)=0.$$ Indeed, according to the gradient theorem, given any curve $\gamma$ with end points $\mathbf{p},\mathbf{q} \in \mathbb{R}^n$, we have $$ f_i\left(\mathbf{p}\right)-f_i\left(\mathbf{q}\right) = \int_{\gamma} \nabla f_i(\mathbf{r})\cdot d\mathbf{r}, $$ and $\nabla f_i = \frac{\partial f_i}{\partial x_1 }\mathbf{e}_1 + \cdots + \frac{\partial f_i}{\partial x_n }\mathbf{e}_n=0$ for all $i$, by assumption. Hence $\mathbf{f}(\mathbf{p})=\mathbf{f}(\mathbf{q})$ for all $\mathbf{p},\mathbf{q}$, i.e. $\mathbf{f}$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/130639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Notation for modules. Given a module $A$ for a group $G$, and a subgroup $P\leq G$ with unipotent radical $U$, I have encountered the notation $[A,U]$ in a paper. Is this standard module-theoretic notation, and if so, what does it mean? In the specific case I am looking at, it works out that $[A,U]$ is equal to the submodule of the restriction of $A$ to $P$ generated by the fixed-point space of $A$ with respect to $U$, but whether this is the case in general I do not know. If anyone could enlighten me on this notation, it would be greatly appreciated.
In group theory it is standard to view $G$-modules $A$ as embedded in the semi-direct product $G \ltimes A$. Inside the semidirect product, the commutator subgroup $[A,U]$ makes sense for any subgroup $U \leq G$, and since $A$ is normal in $G \ltimes A$, we get $[A,U] \leq A$; in the end we need make no reference to the semi-direct product. If we let $A$ be a right $G$-module written multiplicatively, so that the $G$-action is written as exponentiation, then $$[A,U] = \langle [a,u] : a \in A, u \in U \rangle = \langle a^{-1} a^u : a \in A, u \in U \rangle.$$ If you have a left $G$-module $A$ written additively, with the $G$-action written as multiplication, then we get $$[A,U] = \langle a - u\cdot a : a \in A, u \in U \rangle = \sum_{u \in U} \operatorname{im}(1-u),$$ which is just the sum of the images of $A$ under the operators $1-u$ (nilpotent here, since $U$ is unipotent), which is probably a fairly interesting thing to consider. In some sense this is the dual of the centralizer: $A/[A,U]$ is the largest quotient of $A$ centralized by $U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/130802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Where's my mistake? This is partially an electrical engineering problem, but the actual issue apparently lies in my maths. I have the equation $\frac{V_T}{I_0} \cdot e^{-V_D\div V_T} = 100$ where $V_T$ and $I_0$ are given and I am solving for $V_D$. These are my steps: Divide both sides by $\frac{V_T}{I_0}$: $$e^{-V_D\div V_T} = 100 \div \frac{V_T}{I_0}$$ Simplify: $$e^{-V_D\div V_T} = \frac{100\cdot I_0}{V_T}$$ Find the natural log of both sides: $$-V_D\div V_T = \ln(\frac{100\cdot I_0}{V_T})$$ Multiply by $V_T$: $$-V_D = V_T \cdot \ln(\frac{100\cdot I_0}{V_T})$$ Multiply by $-1$: $$V_D = -V_T \cdot \ln(\frac{100\cdot I_0}{V_T})$$ Now if I substitute in the numbers $V_T \approx 0.026$, $I_0 \approx 8 \cdot 10^{-14}$: $$V_D = -0.026 \cdot \ln(\frac{8 \cdot 10^{-12}}{0.026})$$ Simplify a little more: $$ V_D = \frac{26}{1000} \cdot -\ln(\frac{4}{13} \cdot 10^{-9})$$ and you finally end up with $V_D \approx 0.569449942$. There is an extra step to the problem as well: the question calls not for $V_D$, but for $V_I$, which is a source voltage and can basically be determined by solving this: $$V_I \cdot \frac{1}{40} = V_D$$ I.e. $V_I = 40V_D$; which makes $V_I \approx 22.77799768$. However, this is off by quite a bit (the answer is apparently $1.5742888791$). Official Solution: We find $V_D$ to be $\approx$ 0.57. (Woo! I did get that portion right.) Since we know that $\frac{V_D}{i_D}$ is 100, we find $i_D$ to be 0.00026. Background: $\frac{V_D}{i_D}$ is the resistance that I was originally solving for. $V_D$ was the voltage across the element, and $i_D$ was the current through the element. However, unless I'm making some stupid mistake: if $i_D = \frac{V_D}{100}$, then how did they get 0.00026? Completing the rest of the solution's method (quite convoluted in comparison to mine; I checked mine through another method, and $V_I$ does in fact equal $40V_D$), with the correct value, 0.0057, I arrived at exactly the same final value as before. Would it be fair to say that it is likely that my logic is correct?
You are confusing resistance with incremental resistance, I think. The incremental resistance only matters for small signal analysis. The problem is to set the operating point so that the incremental resistance will be the required value. This involves computing $V_D, I_D$. However, you cannot use the incremental resistance to compute $I_D$ in terms of $V_D$. Also, you are forgetting to account for the $3.9k\Omega$ series resistance. You correctly computed $V_D = 0.57 V$ required so the incremental resistance is $100 \Omega$. However, the current through the diode at $V_D$ is given by the diode equation, which you haven't included above. The diode equation is $I_D = I_0 (e^{V_D/V_T}-1)$. Plugging in your numbers gives $I_D = 260 \mu A$ (basically $\frac{V_T}{100}$, since $\frac{V_D}{V_T}$ is large). The question was to figure the input voltage $V_I$ that will set the diode operating point at $V_D =0.57 V, I_D=260 \mu A$. In the first instance, there is a series resistance of $R_S = 3.9 k\Omega$, so to figure the required $V_I$ you need $$ V_I = R_S I_D+V_D$$ Plugging in the numbers gives $V_I = 1.58V$ or so.
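Putting numbers to this (plain Python arithmetic; component values as given in the problem):

```python
import math

VT = 0.026    # thermal voltage, volts
I0 = 8e-14    # saturation current, amps
RS = 3.9e3    # series resistance, ohms
r  = 100.0    # required incremental resistance, ohms

ID = VT / r                    # 260 uA, since r = V_T / I_D for a diode
VD = VT * math.log(ID / I0)    # diode equation inverted (the -1 is negligible)
VI = RS * ID + VD              # KVL around the input loop
print(ID, VD, VI)              # ~2.6e-4, ~0.57, ~1.58
```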
{ "language": "en", "url": "https://math.stackexchange.com/questions/130909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can there be a scalar function with a vector variable? Can I define a scalar function which has a vector-valued argument? For example, let $U$ be a potential function in 3-D, and its variable is $\vec{r}=x\hat{\mathrm{i}}+y\hat{\mathrm{j}}+z\hat{\mathrm{k}}$. Then $U$ will have the form of $U(\vec{r})$. Is there any problem?
That's a perfectly fine thing to do. The classic example of such a field is the temperature in a room: the temperature $T$ at each point $(x,y,z)$ is a function of a $3$-vector, but the output of the function is just a scalar ($T$). $\phi^4$ scalar fields are also an example, as are utility functions in economics.
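In code, such a field is literally a function that takes a vector and returns a number; a Python sketch with a made-up potential $U(\vec r)=1/|\vec r|$:

```python
import numpy as np

def U(r):
    """Scalar potential U(r) = 1/|r| evaluated at a 3-vector r."""
    return 1.0 / np.linalg.norm(r)

print(U(np.array([1.0, 2.0, 2.0])))  # 1/3
```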
{ "language": "en", "url": "https://math.stackexchange.com/questions/130953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Proof that $\sum\limits_{k=1}^\infty\frac{a_1a_2\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}=\frac{1}{x}$ regarding $\zeta(3)$ and Apéry's proof I recently printed a paper that asks to prove the "amazing" claim that for all $a_1,a_2,\dots$ $$\sum_{k=1}^\infty\frac{a_1a_2\cdots a_{k-1}}{(x+a_1)\cdots(x+a_k)}=\frac{1}{x}$$ and thus (probably) that $$\zeta(3)=\frac{5}{2}\sum_{n=1}^\infty {2n\choose n}^{-1}\frac{(-1)^{n-1}}{n^3}$$ Since the paper gives no information on $a_n$, should it be possible to prove that the relation holds for any "context-reasonable" $a_1$? For example, letting $a_n=1$ gives $$\sum_{k=1}^\infty\frac{1}{(x+1)^k}=\frac{1}{x}$$ which is true. The article is "A Proof that Euler Missed..." An Informal Report - Alfred van der Poorten.
Formally, the first identity is repeated application of the rewriting rule $$\dfrac 1 x = \dfrac 1 {x+a} + \dfrac {a}{x(x+a)} $$ to its own rightmost term, first with $a = a_1$, then $a=a_2$, then $a=a_3, \ldots$ The only convergence condition on the $a_i$'s is that the $n$th term in the infinite sum go to zero. [i.e. that $a_1 a_2 \dots a_n / (x+a_1)(x+a_2) \dots (x+a_n)$ converges to zero for large $n$].
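The telescoping can be watched symbolically: after $n$ applications the remainder is $\frac{a_1\cdots a_n}{x(x+a_1)\cdots(x+a_n)}$ (Python/SymPy sketch):

```python
from sympy import symbols, cancel, Mul

x = symbols('x')
a = symbols('a1:5')  # a1, a2, a3, a4

n = 4
term = lambda k: Mul(*a[:k - 1]) / Mul(*[x + a[j] for j in range(k)])
partial = sum(term(k) for k in range(1, n + 1))
remainder = Mul(*a[:n]) / (x * Mul(*[x + a[j] for j in range(n)]))
print(cancel(1/x - partial - remainder))  # 0
```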
{ "language": "en", "url": "https://math.stackexchange.com/questions/131004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 2, "answer_id": 0 }
Finding all reals such that two field extensions are equal. So we want to find a $u$ such that $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. I obtained that $u$ works if it is of the following form: $$u=\sqrt[6]{2^a5^b},$$ where $a\equiv 1\pmod{2}$, $a\equiv 0\pmod{3}$, $b\equiv 0\pmod{2}$, and $b\equiv 1\pmod{3}$. This works since $$u^3=\sqrt{2^a5^b}=2^{\frac{a-1}{2}}5^{\frac{b}{2}}\sqrt{2}$$ and also $$u^2=\sqrt[3]{2^a5^b}=2^{\frac{a}{3}}5^{\frac{b-1}{3}}\sqrt[3]{5}.$$ Thus we have that $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})\subseteq \mathbb{Q}(u)$. Note that $\sqrt{2}$ has degree $2$ (i.e., $[\mathbb{Q}(\sqrt{2}):\mathbb{Q}]=2$) and also that $\sqrt[3]{5}$ has degree $3$. As $\gcd(2,3)=1$, we have that $[\mathbb{Q}(\sqrt{2},\sqrt[3]{5}):\mathbb{Q}]=6$. Note that this is also the degree of the extension of $u$, since one could check that the set $\{1,u,\dots,u^5\}$ is $\mathbb{Q}$-independent. Ergo, we must have equality. That is, $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. My question is: How can I find all such $w$ such that $\mathbb{Q}(w)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$? This is homework, so I would rather have hints than a spoiler answer. I believe that they are all of the form described above, but a priori I do not know how to prove this is true. My idea was the following: since $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ has degree $6$, if $w$ is such that the desired equality is satisfied, then $w$ is a root of an irreducible polynomial of degree $6$; moreover, we ought to be able to find rational numbers so that $$\sqrt{2}=\sum_{i=0}^5q_iw^i$$ and $$\sqrt[3]{5}=\sum_{i=0}^5p_iw^i.$$ But from here I do not know how to show that the $u$'s described above are the only ones with this property (it might be false; a priori I don't really know).
If we take $u = \sqrt{2} + \sqrt[3]{5}$, such a $u$ almost always turns out to work. In fact, let's see whether a rational linear combination of $\sqrt{2}$ and $\sqrt[3]{5}$ will work. Write $u = a\sqrt{2} + b\sqrt[3]{5}$ for rationals $a$ and $b$, both nonzero (if either is zero, $u$ clearly cannot generate the whole field). Clearly we have that $\Bbb{Q}(u)\subseteq \Bbb{Q}(\sqrt{2},\sqrt[3]{5})$. To show the other inclusion, we just need to show that, say, $\sqrt{2} \in \Bbb{Q}(u)$, for then $\sqrt[3]{5} = \frac{u - a\sqrt{2}}{b}$ will be in $\Bbb{Q}(u)$. Here is a quick and easy way of doing this: from $u = a\sqrt{2} + b\sqrt[3]{5}$ we get $\left(\frac{u - a\sqrt{2}}{b}\right)^3 = 5$. Expanding the left hand side by the binomial theorem and clearing the $b^3$, we get $$ u^3 - 3\sqrt{2}\,u^2a + 6ua^2 - 2a^3\sqrt{2} = 5b^3.$$ Rearranging, we get $$\sqrt{2} = \frac{u^3 + 6ua^2 -5b^3}{ 3u^2a + 2a^3 },$$ and the denominator $a(3u^2+2a^2)$ is nonzero because $u$ is real and $a\neq0$. Since $\Bbb{Q}(u)$ is a field, the right hand side is in $\Bbb{Q}(u)$, so $\sqrt{2}$ is in here. Done!
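For the simplest choice $a=b=1$, i.e. $u=\sqrt2+\sqrt[3]5$, the rearranged formula reads $\sqrt{2}=\frac{u^3+6u-5}{3u^2+2}$, which SymPy confirms by pure expansion (Python sketch):

```python
from sympy import sqrt, cbrt, expand

u = sqrt(2) + cbrt(5)
# check sqrt(2)*(3u^2 + 2) == u^3 + 6u - 5
print(expand(u**3 + 6*u - 5 - sqrt(2) * (3*u**2 + 2)))  # 0
```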
{ "language": "en", "url": "https://math.stackexchange.com/questions/131051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
A more general case of the Laurent series expansion? I was recently reading about Laurent series for complex functions. I'm curious about a seemingly similar situation that came up in my reading. Suppose $\Omega$ is a doubly connected region such that $\Omega^c$ (its complement) has two components $E_0$ and $E_1$. So if $f(z)$ is a complex, holomorphic function on $\Omega$, how can it be decomposed as $f=f_0(z)+f_1(z)$ where $f_0(z)$ is holomorphic outside $E_0$, and $f_1(z)$ is holomorphic outside $E_1$? Many thanks.
I'll suppose both $E_0$ and $E_1$ are bounded. Let $\Gamma_0$ and $\Gamma_1$ be disjoint positively-oriented simple closed contours in $\Omega$ enclosing $E_0$ and $E_1$ respectively, and $\Gamma_2$ a large positively-oriented circle enclosing both $\Gamma_0$ and $\Gamma_1$. Let $\Omega_1$ be the region inside $\Gamma_2$ but outside $\Gamma_0$ and $\Gamma_1$. Then for $z \in \Omega_1$ we have by Cauchy's integral formula, $$ f(z) = \frac{1}{2\pi i} \left( \int_{\Gamma_2} \frac{f(\zeta)\ d\zeta}{\zeta - z} - \int_{\Gamma_0} \frac{f(\zeta)\ d\zeta}{\zeta - z} - \int_{\Gamma_1} \frac{f(\zeta)\ d\zeta}{\zeta - z} \right)$$ If you're not familiar with this version of Cauchy's formula, you can draw thin "corridors" connecting $-\Gamma_0$, $-\Gamma_1$ and $\Gamma_2$ into a single closed contour enclosing $z$. If $$f_k(z) = \frac{1}{2\pi i} \int_{\Gamma_k} \frac{f(\zeta)\ d\zeta}{\zeta - z}$$ this says $f(z) = f_2(z) - f_0(z) - f_1(z)$, where $f_2(z)$ is analytic everywhere inside $\Gamma_2$, $f_0(z)$ is analytic everywhere outside $\Gamma_0$, and $f_1(z)$ is analytic everywhere outside $\Gamma_1$. Moreover, the values of $f_k(z)$ don't depend on the choice of contours, as long as $z$ is inside $\Gamma_2$ and outside $\Gamma_0$ and $\Gamma_1$. By making $\Gamma_2$ sufficiently large and $\Gamma_0$ and $\Gamma_1$ sufficiently close to $E_0$ and $E_1$, any point in $\Omega$ can be included. So we actually have $f(z) = f_2(z) - f_0(z) - f_1(z)$ everywhere in $\Omega$, with $f_2(z)$ entire, $f_0(z)$ analytic outside $E_0$ and $f_1(z)$ analytic outside $E_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
graph theory connectivity This "cut induced" confuses me... I don't really understand what it is saying. I am not understanding what connectivity is in graph theory. I thought connectivity is when you have a tree, because all the vertices are connected, but the above mentions something weird like components. Could someone please explain what they are and what connectivity really is?
Connectivity and components Intuitively, a graph is connected if you can't break it into pieces which have no edges in common. More formally, we define connectivity to mean that there is a path joining any two vertices - where a path is a sequence of vertices joined by edges. The example of $Q_3$ in your question is obviously not connected - none of the vertices in the bit on the left are connected to vertices in the bit on the right. Alternatively, there is no path from the vertex marked 000 to the vertex marked 001. (As an aside - all trees are connected - a tree is defined as a connected graph with no cycles. But there are many other connected graphs.) So if a graph is not connected, then we know it can be broken up into pieces which have no edges in common. These pieces are known as components. The components are themselves connected - they are called the maximal connected subgraphs because it is impossible to add another vertex to them and still have a connected graph. All connected graphs have only one component - the graph itself. Cut induced You can think of the cut induced as being the set of edges which connect some collection of vertices to the rest of the graph. In the diagram you give, the set called $A$ is the collection of vertices within the dotted line. The cut induced by $A$ is then the collection of edges which cross the dotted line - the edges which connect the vertices inside the dotted line to those outside it. Edges joining vertices inside the shaded area are not part of the cut induced, and neither are edges joining vertices on the outside of the dotted line. More formally, the complement of $A$ is exactly those vertices which are not in $A$. So the cut induced by $A$ is the collection of edges joining vertices in $A$ to vertices in the complement of $A$.
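If it helps to experiment, here is a tiny breadth-first search that splits a graph into its components (Python sketch; the example graph is made up):

```python
from collections import deque

def components(adj):
    """adj maps each vertex to its set of neighbours; returns the components."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

# two triangles with no edges between them -> two components
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: {5, 6}, 5: {4, 6}, 6: {4, 5}}
print(components(adj))  # [{1, 2, 3}, {4, 5, 6}]
```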
{ "language": "en", "url": "https://math.stackexchange.com/questions/131240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Probability theory. 100 items and 3 controllers 100 items are checked by 3 controllers. What is the probability that each of them will check more than 25 items? Here is the full quotation of the problem from the workbook: "A set of 100 articles is randomly allocated for testing among three controllers. Find the probability that each controller gets at least 25 articles to test."
Let $N_i$ denote the number of items checked by controller $i$. One asks for $1-p$ where $p$ is the probability that some controller got less than $k=25$ items. Since $(N_1,N_2,N_3)$ is exchangeable and since at most two controllers can get less than $k$ items, $p=3u-3v$ where $u=\mathrm P(N_1\lt k)$ and $v=\mathrm P(N_1\lt k,N_2\lt k)$. Furthermore, $v\leqslant uw$ with $w=\mathrm P(M\lt k)$ where $M$ is binomial $(m,\frac12)$ with $m=75$ hence $w\ll1$. And $N_1$ is binomial $(n,\frac13)$. Numerically, $u\approx2.805\%$ and $w\approx0.1\%$ hence $1-p\approx1-3u\approx91.6\%$.
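The estimate is easy to corroborate with an exact sum over the multinomial distribution (Python sketch, brute force over all admissible splits):

```python
from math import comb
from fractions import Fraction

n, k = 100, 25
third = Fraction(1, 3)
p = Fraction(0)
for n1 in range(k, n + 1):
    for n2 in range(k, n - n1 + 1):
        n3 = n - n1 - n2
        if n3 >= k:
            # multinomial probability of the split (n1, n2, n3)
            p += comb(n, n1) * comb(n - n1, n2) * third**n
print(float(p))  # ~0.916, matching the 1 - 3u approximation
```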
{ "language": "en", "url": "https://math.stackexchange.com/questions/131301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Reduction of a $2$ dimensional quadratic form I'm given a matrix $$A = \begin{pmatrix}-2&2\\2&1\end{pmatrix}$$ and we're asked to sketch the curve $\underline{x}^T A \underline{x} = 2$ where I assume $x = \begin{pmatrix}x\\y \end{pmatrix}$. Multiplying this out gives $-2 x^2+4 x y+y^2 = 2$. Also, I diagonalised this matrix by creating a matrix, $P$, of normalised eigenvectors and computing $P^T\! AP = B$. This gives $$B = \begin{pmatrix}-3&0\\0&2\end{pmatrix}$$ and so now multiplying out $\underline{x}^T B \underline{x} = 2$ gives $-3x^2 + 2y^2 = 2$. Plugging these equations into Wolfram Alpha gives different graphs, can someone please explain what I'm doing wrong? Thanks!
A quadratic form is "equivalent" to its diagonalized form only in the sense that the two forms lie in the same equivalence class under a change of basis; the resulting polynomials are of course different (as you showed), otherwise we wouldn't need the equivalence classes! Concretely, $B=P^TAP$ means that if you set $\underline{x} = P\underline{u}$, then $\underline{x}^TA\underline{x} = \underline{u}^TB\underline{u}$, so $-3u^2+2v^2=2$ is the same curve expressed in the rotated coordinates $\underline{u}=(u,v)^T$ given by the eigenvectors. Since $P$ is orthogonal, the two Wolfram Alpha plots differ only by a rotation of the plane: both show congruent hyperbolas.
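A quick numerical illustration of the orthogonal diagonalization (NumPy sketch):

```python
import numpy as np

A = np.array([[-2., 2.], [2., 1.]])
w, P = np.linalg.eigh(A)   # orthonormal eigenvectors, eigenvalues ascending
print(w)                   # [-3.  2.]
print(P.T @ A @ P)         # diag(-3, 2): the matrix B, reached by a rotation
```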
{ "language": "en", "url": "https://math.stackexchange.com/questions/131355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is $2^{2n} = O(2^n)$? Is $2^{2n} = O(2^n)$? My solution is: $2^n 2^n \leq C_{1}2^n$ $2^n \leq C_{1}$, TRUE. Is this correct?
If $2^{2n}=O(2^n)$, then there is a constant $C$ and an integer $M$ such that for all $n\ge M$, the inequality $2^{2n}\le C 2^n$ holds. This would imply that $2^n\cdot 2^n\le C 2^n$ for all $n\ge M$, which in turn implies $$\tag{1} 2^n\le C \quad {\bf for\ all } \quad n\ge M. $$ Can such $C$ and $M$ exist? Note the right hand side of $(1)$ is fixed, and the left hand side...
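A tiny numeric illustration of why no constant $C$ can work (Python):

```python
for n in (1, 5, 10, 20, 40):
    print(n, 2**(2 * n) // 2**n)  # the ratio 2^n grows without bound
```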
{ "language": "en", "url": "https://math.stackexchange.com/questions/131420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The graph of the Fourier Transform I am trying to grasp the Fourier transform. I have read a few websites about it, and I think I don't understand it very well. I know how to transform simple functions, but there are a few things puzzling me. The Fourier transform takes a function from the time domain to the frequency domain, so now I have $\widehat{f}(\nu)$; this is a complex-valued function, so for every frequency I get a complex number.

* What does this number represent? What is the interpretation of the real and imaginary parts of $\widehat{f}(\nu)$?
* How can I graph $\widehat{f}(\nu)$? As I understand it, if the function is not an even function, $\widehat{f}(\nu)$ will have complex values with nonzero imaginary part. Do I need to plot it in 3D, or do I just plot $|\widehat{f}(\nu)|$? I am asking about plotting because, for example, Wikipedia shows a plot of the sinc function, which is the Fourier transform of the square function; that is easy to draw because the transform is real in that case. And I am wondering about other functions.

I would also be very grateful for any useful links that can shed some light on the idea of the Fourier transform and a little of the theory behind it, preferably done step by step.
You can refer to the following link, where you can find an intuitive explanation of the Fourier transform. The frequency-domain values after the Fourier transform represent the contribution of each frequency to the signal.
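In practice one usually plots the magnitude $|\widehat f(\nu)|$ and, if needed, the phase as a second real plot; a NumPy sketch with a made-up signal:

```python
import numpy as np

t = np.linspace(0, 1, 400, endpoint=False)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

F = np.fft.rfft(f)
nu = np.fft.rfftfreq(t.size, d=t[1] - t[0])

mag, phase = np.abs(F), np.angle(F)  # the usual pair of real-valued plots
print(nu[np.argsort(mag)[-2:]])      # the two dominant frequencies: 12 and 5 Hz
```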
{ "language": "en", "url": "https://math.stackexchange.com/questions/131469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Various types of TQFTs I am interested in topological quantum field theory (TQFT). It seems that there are many types of TQFTs. The first book I picked up is "Quantum invariants of knots and 3-manifolds" by Turaev, but it doesn't say which type of TQFT is treated in the book. I found at least two TQFTs which contain Turaev's name, namely the Turaev-Viro and Turaev-Reshetikhin TQFTs. I have searched for the definitions of various TQFTs for a few days, but I couldn't find good resources. I would like to know:

* which type of TQFT is treated in Turaev's book;
* good resources for definitions of various TQFTs (or, if it is not difficult to answer here, please give me the definitions);
* whether they are essentially different objects or some are generalizations of the others.
Review of a recent Turaev book at http://www.ams.org/journals/bull/2012-49-02/S0273-0979-2011-01351-9/ also discussing earlier volumes to some extent. Here we go, the book you ask about: http://www.ams.org/journals/bull/1996-33-01/S0273-0979-96-00621-0/home.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/131545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Positive series problem: $\sum\limits_{n\geq1}a_n=+\infty$ implies $\sum_{n\geq1}\frac{a_n}{1+a_n}=+\infty$ Let $\sum\limits_{n\geq1}a_n$ be a positive series, and $\sum\limits_{n\geq1}a_n=+\infty$, prove that: $$\sum_{n\geq1}\frac{a_n}{1+a_n}=+\infty.$$
We can divide into cases:

* If $a_n \to 0$: then $a_n < 1$ for all $n$ greater than some $n_0$, so for such $n$ we have $\frac{a_n}{1+a_n} > \frac{a_n}{2}$, and the series diverges by comparison.
* If $a_n$ converges to a limit $L \neq 0$: then $\frac{a_n}{1+a_n} \to \frac{L}{1+L} \neq 0$, so the terms do not tend to zero and the series diverges.
* If $a_n$ is unbounded: it has a subsequence tending to $+\infty$, along which $\frac{a_n}{1+a_n} \to 1$, so again the terms do not tend to zero and the series diverges.
* If $a_n$ is bounded but not convergent: take a convergent subsequence. If some convergent subsequence has a nonzero limit, then along it $\frac{a_n}{1+a_n}$ stays away from zero, and the series diverges as before. If every convergent subsequence tends to zero, then (being bounded) $a_n \to 0$, and we are back in the first case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/131678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 4 }
Surface Element in Spherical Coordinates In spherical polars, $$x=r\cos(\phi)\sin(\theta)$$ $$y=r\sin(\phi)\sin(\theta)$$ $$z=r\cos(\theta)$$ I want to work out an integral over the surface of a sphere - ie $r$ constant. I'm able to derive through scale factors, ie $\delta(s)^2=h_1^2\delta(\theta)^2+h_2^2\delta(\phi)^2$ (note $\delta(r)=0$), that: $$h_1=r\sin(\theta),h_2=r$$ $$dA=h_1h_2=r^2\sin(\theta)$$ I'm just wondering is there an "easier" way to do this (eg. Jacobian determinant when I'm varying all 3 variables). I know you can supposedly visualize a change of area on the surface of the sphere, but I'm not particularly good at doing that sadly.
I've come across the picture you're looking for in physics textbooks before (say, in classical mechanics). A bit of googling and I found this one for you! Alternatively, we can use the first fundamental form to determine the surface area element. Recall that this is the metric tensor, whose components are obtained by taking the inner product of two tangent vectors on your space, i.e. $g_{i j}= X_i \cdot X_j$ for tangent vectors $X_i, X_j$. We make the following identification for the components of the metric tensor, $$ (g_{i j}) = \left(\begin{array}{cc} E & F \\ F & G \end{array} \right), $$ so that $E = <X_u, X_u>, F=<X_u,X_v>,$ and $G=<X_v,X_v>.$ We can then make use of Lagrange's Identity, which tells us that the squared area of a parallelogram in space is equal to the sum of the squares of its projections onto the Cartesian plane: $$|X_u \times X_v|^2 = |X_u|^2 |X_v|^2 - (X_u \cdot X_v)^2.$$ Here's a picture in the case of the sphere: This means that our area element is given by $$ dA = | X_u \times X_v | du dv = \sqrt{|X_u|^2 |X_v|^2 - (X_u \cdot X_v)^2} du dv = \sqrt{EG - F^2} du dv. $$ So let's finish your sphere example. We'll find our tangent vectors via the usual parametrization which you gave, namely, $X(\phi,\theta) = (r \cos(\phi)\sin(\theta),r \sin(\phi)\sin(\theta),r \cos(\theta)),$ so that our tangent vectors are simply $$ X_{\phi} = (-r\sin(\phi)\sin(\theta),r\cos(\phi)\sin(\theta),0), \\ X_{\theta} = (r\cos(\phi)\cos(\theta),r\sin(\phi)\cos(\theta),-r\sin(\theta)) $$ Computing the elements of the first fundamental form, we find that $$ E = r^2 \sin^2(\theta), \hspace{3mm} F=0, \hspace{3mm} G= r^2. $$ Thus, we have $$ dA = \sqrt{r^4 \sin^2(\theta)}d\theta d\phi = r^2\sin(\theta) d\theta d\phi $$
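The first-fundamental-form computation can be delegated to SymPy (Python sketch; the final $\sqrt{\sin^2\theta}=\sin\theta$ step uses $0\le\theta\le\pi$):

```python
from sympy import symbols, sin, cos, sqrt, simplify, Matrix

r = symbols('r', positive=True)
phi, theta = symbols('phi theta')

X = Matrix([r*cos(phi)*sin(theta), r*sin(phi)*sin(theta), r*cos(theta)])
Xp, Xt = X.diff(phi), X.diff(theta)

E = simplify(Xp.dot(Xp))  # r**2*sin(theta)**2
F = simplify(Xp.dot(Xt))  # 0
G = simplify(Xt.dot(Xt))  # r**2
print(E, F, G)
print(simplify(sqrt(E*G - F**2)))  # r**2*Abs(sin(theta)) = r**2*sin(theta) on [0, pi]
```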
{ "language": "en", "url": "https://math.stackexchange.com/questions/131735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 6, "answer_id": 0 }
Deriving the exponential distribution from a shift property of its expectation (equivalent to memorylessness). Suppose $X$ is a continuous, nonnegative random variable with distribution function $F$ and probability density function $f$. If for $a>0,\ E(X|X>a)=a+E(X)$, find the distribution $F$ of $X$.
About the necessary hypotheses (and in relation to a discussion somewhat buried in the comments to @bgins's answer), here is a solution which does not assume that the distribution of $X$ has a density, but only that $X$ is integrable and unbounded (otherwise, the identity in the post makes no sense). A useful tool here is the complementary CDF $G$ of $X$ (the survival function), defined by $G(a)=\mathrm P(X\gt a)$. Let $m=\mathrm E(X)$; note that $m\gt0$ by hypothesis. For every $a\geqslant0$, the identity in the post is equivalent to $\mathrm E(X-a\mid X\gt a)=m$, which is itself equivalent to $\mathrm E((X-a)^+)=m\mathrm P(X\gt a)=mG(a)$. Now, for every $x$ and $a$, $$ (x-a)^+=\int_a^{+\infty}[x\gt z]\,\mathrm dz. $$ Integrating this with respect to the distribution of $X$ yields $$ \mathrm E((X-a)^+)=\int_a^{+\infty}\mathrm P(X\gt z)\,\mathrm dz, $$ hence, for every $a\gt0$, $$ mG(a)=\int_a^{+\infty}G(z)\,\mathrm dz. $$ This proves ${}^{(\ast)}$ that $G$ is infinitely differentiable on $(0,+\infty)$ and that $mG'(a)=-G(a)$ for every $a\gt0$. Since the derivative of the function $a\mapsto G(a)\mathrm e^{ma}$ is zero on $a\gt0$ and $G$ is continuous from the right on $(0,+\infty)$, one gets $G(a)=G(0)\mathrm e^{-ma}$ for every $a\geqslant0$. Two cases arise: either $G(0)=1$, then the distribution of $X$ is exponential with parameter $1/m$; or $G(0)\lt1$, then the distribution of $X$ is a barycenter of a Dirac mass at $0$ and an exponential distribution. If the distribution of $X$ is continuous, the former case occurs. ${}^{(\ast)}$ By the usual seesaw technique: the RHS converges hence the RHS is a continuous function of $a$, hence the LHS is also a continuous function of $a$, hence the RHS integrates a continuous function of $a$, hence the RHS is a $C^1$ function of $a$, hence the LHS is also a $C^1$ function of $a$... and so on.
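The key identity $\mathrm E(X-a\mid X>a)=\mathrm E(X)$ for the exponential distribution is easy to confirm symbolically; a small SymPy check, assuming the rate parameterization $f(x)=\lambda e^{-\lambda x}$:

    import sympy as sp

    x, a, lam = sp.symbols('x a lam', positive=True)
    pdf = lam * sp.exp(-lam * x)                 # Exp(lam) density
    tail = sp.integrate(pdf, (x, a, sp.oo))      # P(X > a) = exp(-lam*a)
    cond_mean = sp.integrate((x - a) * pdf, (x, a, sp.oo)) / tail
    print(sp.simplify(cond_mean))                # 1/lam = E(X), independent of a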
{ "language": "en", "url": "https://math.stackexchange.com/questions/131807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Similarity between $I+N$ and $e^N$ when $N$ is nilpotent Let $$ N=\begin{pmatrix}0&1&&\\&\ddots&\ddots&\\&&0&1\\&&&0 \end{pmatrix}_{n\times n} $$ and $I$ is the identity matrix of order $n$. How to prove $I+N\sim e^N$? Clarification: this is the definition of similarity, which is not the same as equivalence. Update: I noticed a stronger relation, that $A\sim N$, if $$ A=\begin{pmatrix}0&1&*&*\\&\ddots&\ddots&*\\&&0&1\\&&&0 \end{pmatrix}_{n\times n} $$ and $*$'s are arbitrary numbers.
By subtracting $I$ this is equivalent to asking about the similarity class of a nilpotent square matrix of size $n$. The similarity type of a nilpotent matrix $N$ is determined by the dimensions of the kernels of the powers of $N$. In the upper triangular case the list of dimensions of $\ker N^i$ is $1,2,3,4,\dots,n$ for both of the matrices you consider. Hence they are similar.
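This is easy to test in examples; the SymPy sketch below (the entries above the superdiagonal are arbitrary choices of mine) prints the kernel dimensions $[1,2,3,4]$ for both matrices:

    import sympy as sp

    n = 4
    N = sp.zeros(n)
    for i in range(n - 1):
        N[i, i + 1] = 1                      # the Jordan block from the question

    A = N.copy()
    A[0, 2], A[0, 3], A[1, 3] = 5, -2, 7     # arbitrary entries above the superdiagonal

    for M in (N, A):
        print([len((M**k).nullspace()) for k in range(1, n + 1)])  # [1, 2, 3, 4] both times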
{ "language": "en", "url": "https://math.stackexchange.com/questions/131865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integrating a spherically symmetric function over an $n$-dimensional sphere I've become confused reading over some notes that I've received. The integral in question is $\int_{|x| < \sqrt{R}} |x|^2\, dx$ where $x \in \mathbb{R}^n$ and $R > 0$ is some positive constant. The notes state that because of the spherical symmetry of the integrand this integral is the same as $\omega_n \int_0^{\sqrt{R}} r^2 r^{n-1}\, dr$. Now neither $\omega_n$ nor $r$ are defined. Presumably $r = |x|$, but I am at a loss as to what $\omega_n$ is (is it related maybe to the volume or surface area of the sphere?). I am supposing that the factor $r^{n-1}$ comes from something like $n-1$ successive changes to polar coordinates, but I am unable to fill in the details and would greatly appreciate any help someone could offer in deciphering this explanation.
The $\omega_n$ is meant to be the surface area of the unit sphere $S^{n-1}\subset\mathbb{R}^n$. See http://en.wikipedia.org/wiki/N-sphere for notation. Also, you are correct that $r = |x|$, and the $r^{n-1}$ comes from the Jacobian of the transformation from rectangular to spherical coordinates. (The $\omega_n$ also comes from this transformation).
{ "language": "en", "url": "https://math.stackexchange.com/questions/131925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the axiom of choice separated from the other axioms? I don't know much about set theory or foundational mathematics, this question arose just out of curiosity. As far as I know, the widely accepted axioms of set theory is the Zermelo-Fraenkel axioms with the axiom of choice. But the last axiom seems to be the most special out of these axioms. A lot of theorems specifically mention that they depend on the axiom of choice. So, what is so special about this axiom? I know that a lot of funny results occur when we assume the axiom of choice, such as the Banach-Tarski paradox. However, we are assuming the other ZF axioms at the same time. So why do we blame it to the axiom of choice, not the others? To me, the axiom of regularity is less convincing than the axiom of choice (though it's probably due to my lack of understanding).
The basic axiom of "naive set theory" is general comprehension: For any property $P$, you may form the set consisting of all elements satisfying $P$. Russell's paradox shows that general comprehension is inconsistent, so you need to break it down into more restricted types of comprehension. The other axioms of ZF (except for well-foundedness) are all special cases of general comprehension. For example, the Power Set axiom asserts that the class of all subsets of $X$ is a set. Replacement with respect to $\phi(x,y)$ asserts that the class of pairs $(x,y)$ satisfying $\phi(x,y)$ is a set. Separation is obviously a sub-case of general comprehension. Choice is very different, because it asserts the existence of a set which does not satisfy a specific defining sentence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 4, "answer_id": 0 }
simple-connectedness of convex and star domains Let $D$ be a convex domain in the complex plane; is $D$ a simply connected domain? What about when $D$ is a star domain?
Yes, star domains are simply connected: every loop in a star domain can be contracted to the center point by a straight-line homotopy. The disc with one point removed is not simply connected, but it is also not convex. Open convex sets are among the star domains. None of this is special to $\mathbb{C}$; it holds in any $\mathbb{R}^d$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to expand a non-differentiable function as a power series? If a function has derivatives of all orders at $0$ and $\lim_{n\to \infty}\left(f(x)-\sum_{i=1}^{n} a_{i}x^i\right)=0$ for every $x \in (-r,r)$, then it can be expanded as the power series $\sum a_{n}x^n$. My question is: if this function is not differentiable at $0$, how can we expand it as $\sum a_{n}x^n$ satisfying $\lim_{n\to \infty}\left(f(x)-\sum_{i=1}^{n} a_{i}x^i\right)=0$ for every $x \in (-r,r)$? Is it unique?
If $\sum_{j=0}^\infty a_j x^j$ converges for every $x$ in an interval $(-r,r)$, then the radius of convergence of the series is at least $r$, and the sum is analytic in the disk $\{z \in {\mathbb C}: |z| < r\}$. So if $f(x)$ is not analytic in $(-r,r)$, in particular if it is not differentiable at $0$, there is no way to represent it as $\sum_n a_n x^n$ with $\sum_{j=0}^n a_j x^j \to f(x)$. However, you can try a series $\sum_n a_n x^n$ such that some subsequence of partial sums $P_N(x) = \sum_{j=0}^N a_j x^j$ converges to $f(x)$. Suppose $f$ is continuous on $[-r,r]$ except possibly at $0$. I'll let $a_0 = f(0)$ and $N_0 = 0$. Given $a_j$ for $0 \le j \le N_k$, let $g_k(x) = (f(x) - P_{N_k}(x))/x^{N_k}$ for $x \ne 0$, $g_k(0) = 0$. Since $g_k$ is continuous on $E_k = [-r, -r/(k+1)] \cup \{0\} \cup [r/(k+1), r]$, Stone-Weierstrass says there is a polynomial $h_k(x)$ with $|g_k(x) - h_k(x)| < r^{-N_k}/(k+1)$ on $E_k$. Moreover we can assume $h_k(0) = g_k(0) = 0$. Let $N_{k+1} = N_k + \deg(h_k)$, and let $a_j$ be the coefficient of $x^j$ in $x^{N_k} h_k(x)$ for $N_k < j \le N_{k+1}$. Thus $P_{N_{k+1}}(x) = P_{N_k}(x) + x^{N_k} h_k(x)$ so that $|P_{N_{k+1}}(x) - f(x)| = |x|^{N_k} |g_k(x) - h_k(x)| < 1/(k+1)$ for $x \in E_k \backslash \{0\}$ (we already know $P_{N_{k+1}}(0) = f(0)$). Since the union of the $E_k$ is all of $[-r,r]$, the partial sums $P_{N_k}(x)$ converge to $f(x)$ pointwise on $[-r,r]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can the asymptotics of a Mellin (or inverse Laplace) transform be evaluated? I mean, given the Mellin inverse integral $ \int_{c-i\infty}^{c+i\infty}ds\,F(s)x^{-s} $, can we evaluate this integral, at least as $ x \rightarrow \infty $? Can the same be done for $ \int_{c-i\infty}^{c+i\infty}ds\,F(s)\exp(st) $ as $ t \rightarrow \infty $? Why, or why not, can this be evaluated in order to get the asymptotic behaviour of inverse Mellin transforms?
Yes, we can evaluate the integral above (it is the inverse Mellin transform), but how depends on $F(s)$. What is your $F(s)$? Then we can see how to solve it. Sometimes it is very difficult to find the inverse; it all depends on what $F(s)$ is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
convergence with respect to integral norm but not pointwise I want to give an example of a sequence of functions $f_1, f_2, f_3, \dots$ that converges with respect to the metric $d(f,g) = \int_a^b |f(x) - g(x)| dx$ but does not converge pointwise. I'm thinking of a sequence $f_n$ of piecewise triangular functions whose areas tend to $0$ but which doesn't converge pointwise. I just can't manage to formalize it.
You can suppose that $g(x) = 0$, because your metric is translation invariant; $d(f_n,g) = d(f_n-g,0)$. Think of the sequence $f_n : [0,1] \to \mathbb R$ defined by $f_n(x) = x^n$ if $n$ is odd, and $(1-x)^n$ if $n$ is even. Therefore, $$ d(f_{2n+1},0) = \int_0^1 x^{2n+1} \, dx = \frac 1{2n+2} \underset{ n \to \infty} {\longrightarrow} 0 $$ and $$ d(f_{2n},0) = \int_0^1 (1-x)^{2n} \, dx = \frac 1{2n+1} \underset{ n \to \infty} {\longrightarrow} 0, $$ but $f_n$ does not converge pointwise at $0$ and $1$ because the values there oscillate between $0$ and $1$. I know my answer uses the idea of peaking "the triangles" alternately at $0$ and $1$ from Brian's answer, but I still think it is worth seeing a more-or-less "trivial" example (using polynomial functions not defined by parts), so I kept my answer here anyway. Hope that helps,
{ "language": "en", "url": "https://math.stackexchange.com/questions/132238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Finding the singularities of affine and projective varieties I'm having trouble calculating singularities of varieties: when my professor covered it it was the last lecture of the course and a shade rushed. I'm not sure if the definition I've been given is standard, so I will quote it to be safe: the tangent space of a projective variety $X$ at a point $a$ is $T_aX = a + \mbox{ker}(\mbox{Jac}(X))$. A variety is smooth at $a \in X$ if $a$ lives in a unique irreducible component $x_i$ of $X$ and $\dim T_a(X) = \dim X_i$, where dimension of a variety has been defined to be the degree of the Hilbert polynomial of $X$. A projective variety is smooth if its affine cone is. I tried to calculate a few examples and it all went very wrong. Example: The Grassmannian $G(2,4)$ in its Plucker embedding is $V(X_{12} x_{34} - x_{13}x_{24}+ x_{14}x_{23}) \subset \mathbb{P}^5$ I calculated the Hilbert polynomial to be $\frac{1}{12}d^4+...$, so it has dimension 4 (as expected), but I get $$\mbox{Jac}(G(2,4))= [x_{34}, -x_{24}, x_{23}, x_{14}, -x_{13}, x_{12}]$$ Which has rank 1 where $x \ne 0$, so nullity 5. So assumedly $\dim T_aX = \dim( a + \mbox{ker}(\mbox{Jac}(X))) = \dim \mbox{ker} \mbox{Jac}(X) = \mbox{nullity} (\mbox{Jac}(X))$. Which isn't 4? Which is a bit silly, as the Grassmannian is obviously smooth. I'm probably going wrong somewhere, but I've gotten myself thoroughly confused. Any help would be greatly appreciated. Thanks!
You are confusing $\mathbb P^5$ and $\mathbb A^6$. The calculation you did is valid for the cone $C\subset \mathbb A^6$ with equation $ x_{12} x_{34} - x_{13}x_{24}+ x_{14}x_{23}=0$. It is of codimension $1$ (hence of dimension $5$), and smooth outside of the origin, as your jacobian matrix shows. The image $\mathbb G(2,4)\subset \mathbb P^5$ of the Grassmannian under the Plücker embedding is $\mathbb P(C\setminus \lbrace 0\rbrace) \subset \mathbb P^5$. It is also smooth of codimension $1$, hence of dimension $4$ as expected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
nilpotent ideals Possible Duplicate: The set of all nilpotent element is an ideal of R An element $a$ of a ring $R$ is nilpotent if $a^n = 0$ for some positive integer $n$. Let $R$ be a commutative ring, and let $N$ be the set of all nilpotent elements of $R$. (a) I'm trying to show that $N$ is an ideal, and that the only nilpotent element of $R/N$ is the zero element. (c) What are the nilpotent elements of $R = \mathbb{Z}_{24}$? And what is the quotient ring $R/N$ in that case? (what known ring is it isomorphic to?)
The first question on why the set of all nilpotent elements in a commutative ring $R$ is an ideal (also called the nilradical of $R$, denoted by $\mathfrak{R}$) has already been answered numerous times on this site. I will tell you why $R/\mathfrak{R}$ has no nilpotent elements. Suppose in the quotient we have an element $\overline{a}$ such that $\overline{a}^n = 0$. But then multiplication in the quotient is well defined so that $\overline{a}^n = \overline{a^n}$. This means that $a^n$ must be in $\mathfrak{R}$, so that there exists a $k$ such that $(a^n)^k =0$. But then this means that $a^{nk} = 0$ so that $a \in \mathfrak{R}$. In other words, in the quotient $\overline{a} = 0$, proving that the quotient ring $R/ \mathfrak{R}$ has no nilpotent elements. For question (c) I think you can do it by bashing out the algebra, but let me tell you that $\Bbb{Z}_{24}$ is guaranteed to have nilpotent elements because $24$ is not square free (its prime factorisation is $24 = 2^{3} \cdot 3$).
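For part (c), a one-line search confirms what the factorization predicts: the nilpotents of $\Bbb{Z}_{24}$ are the multiples of $6 = 2\cdot 3$, and the quotient $\Bbb{Z}_{24}/N$ is cyclic of order $6$, i.e. isomorphic to $\Bbb{Z}_6$. A quick check in Python:

    # a is nilpotent mod 24 iff a**k ≡ 0 (mod 24) for some k; k = 3 already suffices here
    nilpotents = [a for a in range(24) if any(pow(a, k, 24) == 0 for k in range(1, 6))]
    print(nilpotents)  # [0, 6, 12, 18] -- the ideal (6), so Z_24 / N is isomorphic to Z_6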
{ "language": "en", "url": "https://math.stackexchange.com/questions/132369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to find the Laplace transform? Jobs arrive to a computer facility according to a Poisson process with rate $\lambda$ jobs / hour. Each job requires a service time $X$ which is uniformly distributed between $0$ and $T$ hours independently of all other jobs. Let $Y$ denote the service time for all jobs which arrive during a one hour period. How should I find the Laplace transform of $Y$?
The defining property of a Poisson process (a particular Lévy process) is that the number of events in a time interval $(t, t + T]$ follows a Poisson distribution with parameter $\lambda T$. This relation is given as \begin{equation} P[N(t+T)-N(t)=k] = \frac{(\lambda T)^k e^{- \lambda T}}{k!}, \end{equation} where $N(t + T) - N(t)$ is the number of events in the time interval $(t, t + T]$. It will be obvious to you that $\lambda$ is the rate parameter. In the simplest case, the waiting time to the first arrival has the exponential density \begin{equation} f(t) = \lambda e^{-\lambda t}. \end{equation} Taking Laplace transforms yields \begin{equation} \widehat{f}(s) = \frac{\lambda}{\lambda + s}. \end{equation} I'll leave it to you to fill in the more specific details; I've only dropped in the textbook facts about a Poisson process and the Laplace transform of such a density.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the runtime of a modulus operation Hi I have an algorithm for which I would like to provide the total runtime:

    def foo(x):
        s = []
        if len(x) % 2 != 0:
            return False
        else:
            for i in range(len(x) // 2):
                # some more operations
                pass
            return True

The loop is in O(n/2) but what is O() of the modulus operation? I guess it does not matter much for the overall runtime of the algorithm, but I would really like to know.
There are two meanings for "run time". Many people have pointed out that if we assume the numbers fit in one register, the mod operation is atomic and takes constant time. If we want to look at arbitrary values of $x$, we need to look at the model of computation that is used. The standard way of measuring computational complexity for arbitrary inputs is with the Turing machine model using binary representation for numbers. In this model, the run time to compute $x \% 2$ is $O(\log x)$, which is also the number of bits in the binary representation of $x$. This is because $x \% 2$ is just the last bit of $x$ in binary notation. Moreover, it is impossible to use this model and compute the function in time $f(x)$ for any $f$ with $\lim_{x \to \infty} f(x)/\log(x) < 1$. This is because with that sort of bound, there would be values of $x$ large enough that the machine could not even get to the last bit of $x$ before time expires. So, in this strong sense, the run time is "exactly" $\log x$ in the Turing machine model.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Two linearly independent eigenvectors with eigenvalue zero What is the only $2\times 2$ matrix that only has eigenvalue zero but does have two linearly independent eigenvectors? I know there is only one such matrix, but I'm not sure how to find it.
Answer is the zero matrix, obviously. EDIT, here is a simple reason: let the matrix be $(c_1\ c_2)$, where $c_1$ and $c_2$ are both $2\times1$ column vectors. For any eigenvector $(a_1 \ a_2)^T$ with eigenvalue $0$, $a_1c_1 + a_2c_2 = 0$. Similarly, for another eigenvector $(b_1 \ b_2)^T$, $b_1c_1 + b_2c_2 = 0$. Eliminating $c_2$ gives $(a_1b_2 - a_2b_1)c_1 = 0$, and $a_1b_2 - a_2b_1 \neq 0$ since the eigenvectors are linearly independent, therefore $c_1=0$. From this, $c_2=0$ also.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Inequality ${n \choose k} \leq \left(\frac{en}{ k}\right)^k$ This is from page 3 of http://www.math.ucsd.edu/~phorn/math261/9_26_notes.pdf (Wayback Machine). Copying the relevant segment: Stirling’s approximation tells us $\sqrt{2\pi n} (n/e)^n \leq n! \leq e^{1/(12n)} \sqrt{2\pi n} (n/e)^n$. In particular we can use this to say that $$ {n \choose k} \leq \left(\frac{en}{ k}\right)^k$$ I tried the tactic of combining bounds from $n!$, $k!$ and $(n-k)!$ and it didn't work. How does this bound follow from Stirling's approximation?
First of all, note that $n!/(n-k)! \le n^k$. Use Stirling only for $k!$. $${n \choose k} \le \frac{n^k}{k!} \le \frac{n^k}{\sqrt{2\pi k}\,(k/e)^k} \le \frac{n^k}{(k/e)^k} = \left(\frac{en}{k}\right)^k$$
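A brute-force sanity check of the inequality over a small range (just reassurance, not a proof):

    from math import comb, e

    # spot-check C(n, k) <= (e*n/k)**k for all 1 <= k <= n < 60
    assert all(comb(n, k) <= (e * n / k) ** k
               for n in range(1, 60) for k in range(1, n + 1))
    print("inequality holds for all tested (n, k)")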
{ "language": "en", "url": "https://math.stackexchange.com/questions/132625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
What does $2^x$ really mean when $x$ is not an integer? We all know that $2^5$ means $2\times 2\times 2\times 2\times 2 = 32$, but what does $2^\pi$ mean? How is it possible to calculate that without using a calculator? I am really curious about this, so please let me know what you think.
If this helps: for every real $x$, $$2^x=e^{x\log 2}.$$ This is a smooth function that is defined everywhere. Another way to think about this (in a more straightforward manner than others described): We know $$a^{b+c}=a^ba^c.$$ Then say, for example, $b=c=1/2$. Then we have: $$a^{1}=a=a^{1/2}a^{1/2}.$$ Thus $a^{1/2}=\sqrt{a}$ is a number that equals $a$ when multiplied by itself. Now we can find the value of $a^{p/q}$ (for integers $p$ and $q$). We know $(a^x)^y=a^{xy}$, thus $(a^{p/q})^{q/p}=a^1=a$. Other exponents may be derived similarly.
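Concretely, that definition is how one computes $2^\pi$ in practice; a quick Python check that $e^{\pi\log 2}$ agrees with the built-in power:

    import math

    print(math.exp(math.pi * math.log(2)))  # 8.82497782...
    print(2 ** math.pi)                     # same value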
{ "language": "en", "url": "https://math.stackexchange.com/questions/132703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "218", "answer_count": 8, "answer_id": 2 }
Existence of a sequence. While reading about Weierstrass' Theorem and holomorphic functions, I came across a statement that said: "Let $U$ be any connected open subset of $\mathbb{C}$ and let $\{z_j\} \subset U$ be a sequence of points that has no accumulation point in $U$ but that accumulates at every boundary point of $U$." I was curious as to why such a sequence exists. How would I be able to construct such a sequence?
Construct your sequence in stages indexed by positive integers $N$. At stage $N$, enumerate those points $(j+ik)/N^2$ for integers $j,k$ with $|j|+|k|\le N^3$ that are within distance $1/N$ of the boundary of $G$. EDIT: Oops, not so clear that this will give you points in $G$. I'll have to be a bit less specific. At stage $N$, consider $K_N = \partial G \cap \overline{D_N(0)}$ where $\partial G$ is the boundary of $G$ and $D_r(a) = \{z: |z-a|<r\}$ . Since this is compact, it can be covered by finitely many open disks $D_{1/N}(a_k)$, $k=1,\ldots,m$, centred at points $a_k \in K_N$. Since $a_k \in \partial G$, $G \cap D_{1/N}(a_k) \ne \emptyset$. So we take a point $z_j \in G \cap D_{1/N}(a_k)$ for $k=1,\ldots,m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
give a counterexample of monoid If $G$ is a monoid, $e$ is its identity, if $ab=e$ and $ac=e$, can you give me a counterexample such that $b\neq c$? If not, please prove $b=c$. Thanks a lot.
Let's look at the endomorphisms of a particular vector space: namely let our vector space $V := \mathbb{R}^\infty$, so an element of $V$ looks like a countable (not necessarily convergent) sequence of real numbers. The set of linear maps $\phi: V \rightarrow V$ forms a monoid under composition (prove it!). Let $R: V \rightarrow V$ be the right shift map, namely it takes $R: (x_0, x_1, \dots) \mapsto (0,x_0,x_1, \dots)$. Let $L: V \rightarrow V$ be the left shift map $L: (x_0, x_1, \dots) \mapsto (x_1, x_2, \dots)$; clearly $L \circ R = \textrm{id}_V = e$. Now define $R' : V \rightarrow V$ where $R' : (x_0, x_1, \dots) \mapsto (x_0, x_0, x_1, x_2, \dots)$. We also have $L \circ R' = \textrm{id}_V = e$, but these are different maps. There are probably simpler examples, but this is pretty explicit so I thought it would be good to see.
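A finite truncation of the example can be played with directly; the sketch below uses tuples as stand-ins for the sequences (only an illustration, since the real example needs infinite sequences):

    def L(s):  return s[1:]             # left shift
    def R(s):  return (0,) + s          # right shift
    def R2(s): return (s[0],) + s       # the map R' from the answer

    s = (3, 1, 4, 1, 5)
    print(L(R(s)) == s, L(R2(s)) == s)  # True True: both are right inverses of L
    print(R(s), R2(s))                  # (0, 3, 1, 4, 1, 5) vs (3, 3, 1, 4, 1, 5)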
{ "language": "en", "url": "https://math.stackexchange.com/questions/132787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
Capelli Lemma for polynomials I have seen this lemma given without proof in some articles (see example here), and I guess it is well known, but I couldn't find an online reference for a proof. It states like this: Let $K$ be a field and $f,g \in K[x]$. Let $\alpha$ be a root of $f$ in the algebraic closure of $K$. Then $f \circ g$ is irreducible over $K$ if and only if $f$ is irreducible over $K$ and $g-\alpha$ is irreducible over $K(\alpha)$. Can you please give a proof for this?
Let $\theta$ be a root of $g-\alpha$. From $g(\theta)=\alpha$ we get that $f(g(\theta))=0$. Now all it is a matter of field extensions. Notice that $[K(\theta):K]\le \deg (f\circ g)=\deg f\deg g$, $[K(\theta):K(\alpha)]\le \deg(g-\alpha)$ $=$ $\deg g$ and $[K(\alpha):K]\le\deg f$. Each inequality becomes equality iff the corresponding polynomial is irreducible. But $[K(\theta):K]=[K(\theta):K(\alpha)][K(\alpha):K]$ and then $f\circ g$ is irreducible over $K$ iff $g -\alpha$ is irreducible over $K(\alpha)$ and $f$ is irreducible over $K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Correct precedence of division operators Say I have the following operation - $6/3/6$. I get different answers depending on which division I perform first. $6/3/6 = 2/6 = .33333...$ $6/3/6 = 6/.5 = 12$ So which answer is correct?
By convention it's done from left to right, but by virtually universal preference it's not done at all; one uses parentheses. However, I see students writing fractions like this: $$ \begin{array}{c} a \\ \hline \\ b \\ \hline \\ c \end{array} $$ Similarly they write $\sqrt{b^2 - 4a} c$ or $\sqrt{b^2 - 4}ac$ or even $\sqrt{b^2 -{}}4ac$ when they actually need $\sqrt{b^2 - 4ac}$, etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/132919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of a specific event at a given trial Recently I have found some problems about probabilities that ask for the probability of an event at a given trial. A dollar coin is tossed several times until one gets the "one dollar" face up. What is the probability that one has to toss the coin at least $3$ times? I thought of applying the binomial law. But the binomial law gives the probability for a number of favorable trials, and the question asks about a specific trial. How can I solve this kind of problem? Is there a methodology that one can apply to this kind of problem?
Hint: It is the same as the probability of two tails in a row, because you need to toss at least $3$ times precisely if the first two tosses are tails. And the probability of two tails in a row is $\frac{1}{4}$. Remark: For fun, let's also do this problem the hard way. We need at least $6$ tosses if we toss $2$ tails then a head, or if we toss $3$ tails then a head, or we toss $4$ tails then a head, or we toss $5$ tails then a head, or $\dots$. The probability of $2$ tails then a head is $\left(\frac{1}{2}\right)^3$. The probability of $3$ tails then a head is $\left(\frac{1}{2}\right)^4$. The probability of $4$ tails then a head is $\left(\frac{1}{2}\right)^5$. And so on. Add up. The required probability is $$\left(\frac{1}{2}\right)^3 +\left(\frac{1}{2}\right)^4+\left(\frac{1}{2}\right)^5+\cdots.$$ This is an infinite geometric series, which can be summed in the usual way. We get $\frac{1}{4}$.
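A simulation agrees with the $\frac14$; the sketch below simply counts runs whose first two tosses are both tails (reading random() < 0.5 as a tail):

    import random

    random.seed(0)
    trials = 10**6
    hits = sum(1 for _ in range(trials)
               if random.random() < 0.5 and random.random() < 0.5)
    print(hits / trials)  # close to 0.25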
{ "language": "en", "url": "https://math.stackexchange.com/questions/133008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Splitting field of $x^6+x^3+1$ over $\mathbb{Q}$ I am trying to find the splitting field of $x^6+x^3+1$ over $\mathbb{Q}$. Finding the roots of the polynomial is easy (substituting $x^3=t$, finding the two roots of the polynomial in $t$ and then taking a 3-rd root from each one). The roots can be seen here [if there is a more elegant way of finding the roots it will be nice to hear]. Is it true that the splitting field is $\mathbb{Q}((-1)^\frac{1}{9})$? I think so from the way the roots look, but I am unsure. Also, I am having trouble finding the minimal polynomial of $(-1)^\frac{1}{9}$; it seems that it would be a polynomial of degree 9, but of course the degree can't be more than 6... can someone please help with this?
You've got something wrong: the roots of $t^2+t+1$ are the complex cubic roots of one, not of $-1$: $t^3-1 = (t-1)(t^2+t+1)$, so every root of $t^2+t+1$ satisfies $\alpha^3=1$). That means that you actually want the cubic roots of some of the cubic roots of $1$; that is, you want some ninth roots of $1$ (not of $-1$). Note that $$(x^6+x^3+1)(x-1)(x^2+x+1) = x^9-1.$$ So the roots of $x^6+x^3+1$ are all ninth roots of $1$. Moreover, those ninth roots should not be equal to $1$, nor be cubic roots of $1$ (the roots of $x^2+x+1$ are the nonreal cubic roots of $1$): since $x^9-1$ is relatively prime to $(x^9-1)' = 9x^8$, the polynomial $x^9-1$ has no repeated roots. So any root of $x^9-1$ is either a root of $x^6+x^3+1$, or a root of $x^2+x+1$, or a root of $x-1$, but it cannot be a root of two of them. If $\zeta$ is a primitive ninth root of $1$ (e.g., $\zeta = e^{i2\pi/9}$), then $\zeta^k$ is also a ninth root of $1$ for all $k$; it is a cubic root of $1$ if and only if $3|k$, and it is equal to $1$ if and only if $9|k$. So the roots of $x^6+x^3+1$ are precisely $\zeta$, $\zeta^2$, $\zeta^4$, $\zeta^5$, $\zeta^7$, and $\zeta^8$. They are all contained in $\mathbb{Q}(\zeta)$, which is necessarily contained in the splitting field. Thus, the splitting field is $\mathbb{Q}(\zeta)$, where $\zeta$ is any primitive ninth root of $1$.
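SymPy reproduces exactly the factorization used here:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.factor(x**9 - 1))
    # (x - 1)*(x**2 + x + 1)*(x**6 + x**3 + 1): the roots of the last factor
    # are precisely the primitive ninth roots of unity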
{ "language": "en", "url": "https://math.stackexchange.com/questions/133079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
How to find the x defined in the picture? $O$ is the center of the circle circumscribing the triangle $ABC$. $EF \parallel BC$. We only know $a,b,c$ $(a=|BC|, b=|AC|, c=|AB|)$. $x=|EG|=?$ Could you please give me a hand to see an easy way to find the $x$ that depends on the given $a,b,c$?
This can be done using trigonometry. Let $D$ be the foot of the perpendicular from $O$ to $BC$. Then we have that $\angle{BOD} = \angle{BAC} (= \alpha, \text{say})$. Let $\angle{CBA} = \beta$, and let the radius of the circumcircle be $R$. Let $I$ be the foot of the perpendicular from $G$ on $BC$. Then, considering $\triangle BOD$, we get $DB = R\sin \alpha$ and $GI = OD = R \cos \alpha$. Considering $\triangle BGI$, $BI = GI \cot \beta = R \cos \alpha \cot \beta$. Thus $x = R - OG = R - (BD - BI) = R - R\sin \alpha + R \cos \alpha \cot \beta$. Now, $R$ and the trigonometric functions of $\alpha$ and $\beta$ can be expressed in terms of $a,b,c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Explaining Horizontal Shifting and Scaling I always find myself wanting for a clear explanation (to a college algebra student) for the fact that horizontal transformations of graphs work in the opposite way that one might expect. For example, $f(x+1)$ is a horizontal shift to the left (a shift toward the negative side of the $x$-axis), whereas a cursory glance would cause one to suspect that adding a positive amount should shift in the positive direction. Similarly, $f(2x)$ causes the graph to shrink horizontally, not expand. I generally explain this by saying $x$ is getting a "head start". For example, suppose $f(x)$ has a root at $x = 5$. The graph of $f(x+1)$ is getting a unit for free, and so we only need $x = 4$ to get the same output before as before (i.e. a root). Thus, the root that used to be at $x=5$ is now at $x=4$, which is a shift to the left. My explanation seems to help some students and mystify others. I was hoping someone else in the community had an enlightening way to explain these phenomena. Again, I emphasize that the purpose is to strengthen the student's intuition; a rigorous algebraic approach is not what I'm looking for.
What does the graph of $g(x) = f(x+1)$ look like? Well, $g(0)$ is $f(1)$, $g(1)$ is $f(2)$, and so on. Put another way, the point $(1,f(1))$ on the graph of $f(x)$ has become the point $(0,g(0))$ on the graph of $g(x)$, and so on. At this point, drawing an actual graph and showing how the points on the graph of $f(x)$ move one unit to the left to become points on the graph of $g(x)$ helps the student understand the concept. Whether the student absorbs the concept well enough to utilize it correctly later is quite another matter.
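A two-line numeric version of the "root moves left" observation, with a made-up $f$ having a root at $x=5$:

    f = lambda x: x - 5      # any function with a root at x = 5 would do
    g = lambda x: f(x + 1)   # the horizontally shifted graph
    print(f(5), g(4))        # 0 0 -- the root of g sits at x = 4, one unit to the LEFT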
{ "language": "en", "url": "https://math.stackexchange.com/questions/133185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 18, "answer_id": 6 }
Why is this element in this tensor product not zero? $R=k[[x,y]]/(xy)$, $k$ a field. This ring is local with maximal ideal $m=(x,y)R$. Then the book proves that $x\otimes y\in m\otimes m$ is not zero, but I don't understand what's going on: if the tensor product is $R$-linear, then $x\otimes y=1\otimes xy=1\otimes 0=0$. Where is the mistake? And also the book proves that this element is torsion: $(x+y)(x\otimes y)=(x+y)x\otimes y=(x+y)\otimes(xy)=(x+y)\otimes0=0$. Why does $(x+y)x\otimes y=(x+y)\otimes(xy)$ hold?
For your first question, $1$ does not lie in $m$, so $1 \otimes xy$ is not actually an element of $m \otimes m$. $R$-linearity implies $a(b\otimes c)=(ab)\otimes c$, but (and this is important), if you have $(ab)\otimes c$, and $b$ does not lie in $m$, then "$ab$" is not actually a factorization of the element inside $m$, and if we try to factor out $a$, we get $a(b\otimes c)$, which is nonsensical since $b$ does not lie in $m$. For your second question, $(x+y)x\otimes y=x((x+y)\otimes y)=(x+y)\otimes(xy)$. The reason we were allowed to take out the $x$ in this case was because $x+y$ lies in $m$, so all of the elements in the equations remained in $m \otimes m$. Cheers, Rofler
{ "language": "en", "url": "https://math.stackexchange.com/questions/133391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Techniques for (upper-)bounding LP maximization I have a huge maximization linear program (variables grow as a factorial of a parameter). I would like to bound the objective function from above. I know that looking at the dual bounds the objective function of the primal program from below. I know the structure of the constraints (I know how they are generated, from all permutations of a certain set). I am asking if there are some techniques to find an upper bound on the value of the objective function. I realize that the technique is very dependent on the structure of the constraints, but I am hoping to find many techniques so that hopefully one of them would be suitable for my linear program.
In determining the value of a primal maximization problem, primal solutions give lower bounds and dual solutions give upper bounds. There's really only one technique for getting good solutions to large, structured LPs, and that's column generation, which involves solving a problem-specific optimization problem over the set of variables.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that a limit does not exist Given the function $$f(x)= \left\{\begin{matrix} 1 & x \gt 0 \\ 0 & x =0 \\ -1 & x \lt 0 \end{matrix}\right\}$$ What is $\lim_{a}f$ for all $a \in \mathbb{R},a \gt 0$? It seems easy enough to guess that the limit is $1$, but how do I take into account the fact that $f(x)=-1$ when $x \lt 0$? Thanks
Here $\lim_{x\to 0^+}f(x)=1$ and $\lim_{x\to 0^-}f(x)=-1$; the two one-sided limits are not equal, therefore the limit at $0$ does not exist. (For $a>0$ the situation is different: $f$ is constantly $1$ on a neighborhood of $a$, so $\lim_{x\to a}f(x)=1$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/133534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
When is $X^n-a$ is irreducible over F? Let $F$ be a field, let $\omega$ be a primitive $n$th root of unity in an algebraic closure of $F$. If $a$ in $F$ is not an $m$th power in $F(\omega)$ for any $m\gt 1$ that divides $n$, how to show that $x^n -a$ is irreducible over $F$?
I will assume "$m \geq 1$", since otherwise $a \in F(\omega)$; but $F(\omega)$ is an extension of $F$ of degree at most $n-1$, not $n$, so $x^n-a$ would have to be reducible. Let $b^n=a$ (from the algebraic closure of $F$). $x^n-a$ is irreducible even over $F(\omega)$. Otherwise $$x^n-a= \prod_{k=0}^{n-1} (x-\omega^k b) = (x^p + \cdots + \omega^{o} b^p)(x^{n-p} + \cdots + \omega^{o'} b^{n-p})$$ for some $0<p<n$, so $b^p$ and $b^{n-p}$ are in $F(\omega)$ (the powers of $\omega$ are invertible in $F(\omega)$). Consequently $b^{\gcd(p,n-p)}$ is in $F(\omega)$, but $\gcd(p,n-p)$ divides $n$, so $(b^{\gcd})^{\frac{n}{\gcd}} = a$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 2, "answer_id": 0 }
Question about the independence definition. Why does the independence definition require that every subfamily of events $A_1,A_2,\ldots,A_n$ satisfies $P(A_{i_1}\cap \cdots \cap A_{i_k})=\prod_j P(A_{i_j})$, where $i_1 < i_2 < \cdots < i_k$ and $k \le n$? My doubt arose from this: Suppose $A_1,A_2$ and $A_3$ are such that $P(A_1\cap A_2\cap A_3)=P(A_1)P(A_2)P(A_3)$. Then $$P(A_1\cap A_2)=P(A_1\cap A_2 \cap A_3) + P(A_1\cap A_2 \cap A_3^c)$$ $$=P(A_1)P(A_2)(P(A_3)+P(A_3^c))=P(A_1)P(A_2).$$ So it seems to me that if $P(A_1\cap A_2\cap A_3)=P(A_1)P(A_2)P(A_3)$ then $P(A_i\cap A_j)=P(A_i)P(A_j)$, i.e., independence of the biggest collection implies that of the smaller ones. Why am I wrong? The calculations seem right to me; maybe my conclusion from them is wrong?
$P(ABC)=P(A)P(B)P(C)$ does not imply that $P(ABC^C)=P(A)P(B)P(C^C)$, which it seems you're using. Consider, for instance, $C=\emptyset$. However, see this question. Another example: Let $S=\{a,b,c,d,e,f\}$ with $P(a)=P(b)={1\over8}$, and $P(c)=P(d)=P(e)=P(f)={3\over16}$. Let $A=\{a,d,e\}$, $B=\{a,c,e\}$, and $C=\{a,c,d\}$. Then $\ \ \ \ \ \ P(ABC)=P(\{a\})={1\over8}$ and $\ \ \ \ \ \ P(A)P(B)P(C)= {1\over2}\cdot{1\over2}\cdot{1\over2}={1\over8}$. But $\ \ \ \ \ \ P(ABC^C)=P(\{e\})= {3\over16}$ while $\ \ \ \ \ \ P(A)P(B)P(C^C) = {1\over2}\cdot{1\over2}\cdot{1\over2}={1\over8}$. In fact no two of the events $A$, $B$, and $C$ are independent.
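Exact arithmetic with Python's Fraction makes the bookkeeping in this example painless; a quick verification sketch:

    from fractions import Fraction as F
    from itertools import combinations

    P = {'a': F(1, 8), 'b': F(1, 8),
         'c': F(3, 16), 'd': F(3, 16), 'e': F(3, 16), 'f': F(3, 16)}
    A, B, C = {'a', 'd', 'e'}, {'a', 'c', 'e'}, {'a', 'c', 'd'}
    prob = lambda E: sum(P[s] for s in E)

    print(prob(A & B & C) == prob(A) * prob(B) * prob(C))   # True
    print([(prob(X & Y), prob(X) * prob(Y))
           for X, Y in combinations([A, B, C], 2)])
    # every pair gives (5/16, 1/4), so no two of the events are independent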
{ "language": "en", "url": "https://math.stackexchange.com/questions/133646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Problem of finding a subgroup without Sylow's theorem. Let $G$ be a group of order $p^n$ where $p$ is prime and $n \geq 3$. By Sylow's theorem, we know that $G$ has a subgroup of order $p^2$. But I wonder how to prove this without Sylow's theorem.
Well, we could apply the fact that the center of such a group $G$ is nontrivial (proof). Since the center is nontrivial, it has order $p$ or $p^m$ for some $1<m\leq n$. In the former case, $G/Z(G)$ is of order $p^{n-1}$, so $Z(G/Z(G))$ is nontrivial and abelian, hence (e.g., by the Fundamental Theorem of Finite Abelian Groups) contains a subgroup of order $p$; its preimage under the quotient map $G\to G/Z(G)$ is a subgroup of $G$ of order $p^2$. The latter case is even easier, as $Z(G)$ is then an abelian group of order $p^m$ with $m>1$, and such a group has a subgroup of order $p^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
limits of the sequence $n/(n+1)$ Given the problem: Determine the limits of the sequnce $\{x_n\}^ \infty_{ n=1}$ $$x_n = \frac{n}{n+1}$$ The solution to this is: step1: $\lim\limits_{n \rightarrow \infty} x_n = \lim\limits_{n \rightarrow \infty} \frac{n}{n + 1}$ step2: $=\lim\limits_{n \rightarrow \infty} \frac{1}{1+\frac{1}{n}}$ step3: $=\frac{1}{1 + \lim\limits_{n \rightarrow \infty} \frac{1}{n}}$ step4: $=\frac{1}{1 + 0}$ step5: $=1$ I get how you go from step 2 to 5 but I don't understand how you go from step 1 to 2. Again, I'm stuck on the basic highschool math. Please help
Divide the numerator and denominator by $n$. Why is this legal, in other words, why does this leave your fraction unchanged? Because $$\frac {\frac a n} {\frac b n}=\frac {a \cdot \frac 1 n} {b \cdot \frac 1 n}=\frac a b$$ where the last equality is because $\dfrac 1 n$'s get cancelled. Further, remember the fact that: $$\frac{a+b}{n}=\frac a n+\frac b n$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/133796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Integral related to $\sum\limits_{n=1}^\infty\sin^n(x)\cos^n(x)$ Playing around in Mathematica, I found the following: $$\int_0^\pi\sum_{n=1}^\infty\sin^n(x)\cos^n(x)\ dx=0.48600607\ldots =\Gamma(1/3)\Gamma(2/3)-\pi.$$ I'm curious... how could one derive this?
For giggles: $$\begin{align*} \sum_{n=1}^\infty\int_0^\pi \sin^n u\,\cos^n u\;\mathrm du&=\sum_{n=1}^\infty\frac1{2^n}\int_0^\pi \sin^n 2u\;\mathrm du\\ &=\frac12\sum_{n=1}^\infty\frac1{2^n}\int_0^{2\pi} \sin^n u\;\mathrm du\\ &=\frac12\sum_{n=1}^\infty\frac1{2^{2n}}\int_0^{2\pi} \sin^{2n} u\;\mathrm du\\ &=2\sum_{n=1}^\infty\frac1{2^{2n}}\int_0^{\pi/2} \sin^{2n} u\;\mathrm du\\ &=\pi\sum_{n=1}^\infty\frac1{2^{4n}}\binom{2n}{n}=\pi\sum_{n=1}^\infty\frac{(-4)^n}{16^n}\binom{-1/2}{n}\\ &=\pi\left(\frac1{\sqrt{1-\frac14}}-1\right)=\pi\left(\frac2{\sqrt 3}-1\right) \end{align*}$$ where the oddness of the sine function was used in the third line to remove zero terms, the Wallis formula and the binomial identity $\dbinom{2n}{n}=(-4)^n\dbinom{-1/2}{n}$ were used in the fifth line, after which we finally recognize the binomial series and evaluate accordingly. Of course, Alex's solution is vastly more compact...
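Numerically, the series, the closed form, and the original integral (after summing the geometric series $\sum_{n\ge1} s^n = s/(1-s)$ with $s=\sin u\cos u$, $|s|\le\frac12$, under the integral sign) all agree; an mpmath check:

    from mpmath import mp, nsum, binomial, quad, sin, cos, sqrt, pi

    mp.dps = 25
    series = nsum(lambda n: pi * binomial(2*n, n) / 16**n, [1, mp.inf])
    closed = pi * (2 / sqrt(3) - 1)
    integral = quad(lambda u: sin(u)*cos(u) / (1 - sin(u)*cos(u)), [0, pi])
    print(series, closed, integral)  # all three print 0.4860060...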
{ "language": "en", "url": "https://math.stackexchange.com/questions/133858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Why use absolute value for the Cauchy-Schwarz Inequality? I see the Cauchy-Schwarz Inequality written as follows $$|\langle u,v\rangle| \leq \lVert u\rVert \cdot\lVert v\rVert.$$ Why is the absolute value of $\langle u,v\rangle$ specified? Surely it is apparent that if the right hand side is greater than or equal to, for example, $5$, then it will be greater than or equal to $-5$?
I assumed that we work in a real inner product space; otherwise, of course, we have to put the modulus. The inequality $\langle u,v\rangle\leq \lVert u\rVert\lVert v\rVert$ is also true, but doesn't give any information if $\langle u,v\rangle\leq 0$, since in this case it is just the trivial fact that a non-negative number is greater than or equal to a non-positive one. What is not trivial is that $\lVert u\rVert\lVert v\rVert$ is at least the absolute value. But in fact the assertions $$\forall u,v \quad \langle u,v\rangle\leq \lVert u\rVert\lVert v\rVert$$ and $$\forall u,v\quad |\langle u,v\rangle|\leq \lVert u\rVert\lVert v\rVert$$ are equivalent. Indeed, the second implies the first, and considering successively $u$ and $-u$ in the first gives the second one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/133945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Changing the bilinear form on a Euclidean space with an orthonormal basis. I'm having trouble getting my head around how Euclidean spaces, bilinear forms and dot product all link in with each other. I am told that on a Euclidean space any bilinear form is denoted by $$\tau(u,v) = u\cdot v$$ and in an orthonormal basis we have $$u \cdot v = \underline{u}^T \underline{v}$$ but what, say, if we have an orthonormal basis on a vector space together with a positive definite symmetric bilinear form (so a Euclidean space) then $$\tau(u,v) = \underline{u}^T \underline{v}$$ but what now if we keep the same vector space, the same orthonormal basis, and the same vectors $u,v$ but we change the positive definite symmetric bilinear form $\tau$ surely the computation $\underline{u}^T \underline{v}$ will be the same but the computation $\tau(u,v)$ will surely change? Can someone please explain this?
Orthonormal is defined with respect to $\tau$. That is, with no positive definite symmetric bilinear form $\tau$ around, the statement "$\{v_1,...,v_n\}$ is an orthonormal basis" is meaningless. Once you have such a $\tau$, then you can say "$\{v_1,...,v_n\}$ is an orthonormal basis with respect to $\tau$." This means that $\tau(v_i,v_i) = 1$ and $\tau(v_i, v_j) = 0$ when $i\neq j$. Often, once a $\tau$ has been chosen, one doesn't write "with respect to $\tau$", but technically it should always be there. So, when you say ...what now if we keep the same vector space, the same orthonormal basis, and the same vectors u,v but we change the positive definite symmetric bilinear form τ surely the computation $\underline{u}^T\underline{v}$ will be the same but the computation $\tau(u,v)$ will surely change (emphasis mine) you have to be careful because you can't stick with the same orthonormal basis. When you change $\tau$, this changes whether or not your orthonormal basis is still orthonormal. So before computing $\underline{u}^T\underline{v}$, you must first find a new orthonormal basis, then compute $u$ and $v$ in this basis, and then compute $\underline{u}^T\underline{v}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Summing an unusual series: $\frac {x} {2!(n-2)!}+\frac {x^{2}} {5!(n-5)!}+\dots +\frac {x^{\frac{n}{3}}} {(n-1)!}$ How to sum the following series $$\frac {x} {2!(n-2)!}+\frac {x^{2}} {5!(n-5)!}+\frac {x^{3}} {8!(n-8)!}+\dots +\frac {x^{\frac{n}{3}}} {(n-1)!}$$ $n$ being a multiple of 3. This question is from a book; I did not make this up. I can see a pattern, as the $i$th term can be written as $\frac {x^i}{(3i-1)!(n+1-3i)!}$, but I am unsure what is going on with the indexing variable's range. Any help would be much appreciated.
Start with $$ (1 + x)^n = \sum_{r=0}^{n} \binom{n}{r} x^r$$ Multiply by $x$ $$ f(x) = x(1 + x)^n = \sum_{r=0}^{n} \binom{n}{r} x^{r+1}$$ Now if $w$ is a primitive cube-root of unity then $$f(x) + f(wx) + f(w^2 x) = 3\sum_{k=1}^{n/3} \binom{n}{3k-1} x^{3k}$$ Replace $x$ by $\sqrt[3]{x}$ and divide by $n!$.
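A numeric spot-check of this recipe for a few valid $n$, in plain Python with complex arithmetic (the tolerance is loose to absorb floating-point error):

    from math import factorial
    import cmath

    def lhs(n, x):
        return sum(x**k / (factorial(3*k - 1) * factorial(n + 1 - 3*k))
                   for k in range(1, n // 3 + 1))

    def rhs(n, x):                        # the root-of-unity filter above
        w = cmath.exp(2j * cmath.pi / 3)
        f = lambda t: t * (1 + t)**n
        c = x ** (1/3)
        return (f(c) + f(w*c) + f(w*w*c)).real / (3 * factorial(n))

    for n in (3, 6, 9):
        print(n, abs(lhs(n, 2.0) - rhs(n, 2.0)) < 1e-9)   # True for each n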
{ "language": "en", "url": "https://math.stackexchange.com/questions/134148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding an ON basis of $L_2$ The set $\{f_n : n \in \mathbb{Z}\}$ with $f_n(x) = e^{2\pi inx}$ forms an orthonormal basis of the complex space $L_2([0,1])$. I understand why it's orthonormal, but not why it's a basis.
It is known that an orthonormal system $\{f_n:n\in\mathbb{Z}\}$ is a basis if $$ \operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))=L_2([0,1]) $$ where $\operatorname{cl}_{L_2}$ means the closure in the $L_2$ norm. Denote by $C_0([0,1])$ the space of continuous functions on $[0,1]$ which equal $0$ at the points $0$ and $1$. It is known that for each $f\in C_0([0,1])$ the Fejér sums of $f$ converge uniformly to $f$. This means that $$ C_0([0,1])\subset\operatorname{cl}_{C}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\})) $$ where $\operatorname{cl}_{C}$ means the closure in the uniform norm. Since we always have the inequality $\|f\|_{L_2([0,1])}\leq\|f\|_{C([0,1])}$, it follows that $$ C_0([0,1])\subset\operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\})) $$ It remains to say that $C_0([0,1])$ is a dense subspace of $L_2([0,1])$, i.e. $$ \operatorname{cl}_{L_2}(C_0([0,1]))=L_2([0,1]) $$ so we obtain $$ \operatorname{cl}_{L_2}(\operatorname{span}(\{f_n:n\in\mathbb{Z}\}))\supset \operatorname{cl}_{L_2}(C_0([0,1]))=L_2([0,1]) $$ and the reverse inclusion is trivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Irreducibility of polynomials This is a very basic question, but one that has frustrated me somewhat. I'm dealing with polynomials and trying to see if they are irreducible or not. Now, I can apply Eisenstein's Criterion and deduce for some prime p if a polynomial over Z is irreducible over Q or not and I can sort of deal with basic polynomials that we can factorise easily. However I am looking at the polynomial $t^3 - 2$. I cannot seem to factor this down, but a review book is asking for us to factorise into irreducibles over a) $\mathbb{Z}$, b) $\mathbb{Q}$, c) $\mathbb{R}$, d) $\mathbb{C}$, e) $\mathbb{Z}_3$, f) $\mathbb{Z}_5$, so obviously it must be reducible in one of these. Am I wrong in thinking that this is irreducible over all? (I tried many times to factorise it into any sort of irreducibles but the coefficients never match up so I don't know what I am doing wrong). I would really appreciate if someone could explain this to me, in a very simple way. Thank you.
$t^3-2=(t-\sqrt[3]{2})(t^2+(\sqrt[3]2)t+\sqrt[3]4)$
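For the remaining parts, SymPy can factor over each coefficient domain directly; a sketch (note that SymPy may print mod-$p$ coefficients as symmetric residues):

    import sympy as sp

    t = sp.symbols('t')
    print(sp.factor(t**3 - 2))                        # stays t**3 - 2: irreducible over Z and Q
    print(sp.factor(t**3 - 2, extension=sp.cbrt(2)))  # the real factorization above
    print(sp.factor(t**3 - 2, modulus=3))             # a cubed linear factor over Z_3
    print(sp.factor(t**3 - 2, modulus=5))             # linear times irreducible quadratic over Z_5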
{ "language": "en", "url": "https://math.stackexchange.com/questions/134408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
A basic estimate for Sobolev spaces Here is a statement that I came upon whilst studying Sobolev spaces, which I cannot quite fill in the gaps: If $s>t>u$ then we can estimate: \begin{equation} (1 + |\xi|)^{2t} \leq \varepsilon (1 + |\xi|)^{2s} + C(\varepsilon)(1 + |\xi|)^{2u} \end{equation} for any $\varepsilon > 0$ (here $\xi \in \mathbb{R}^n$ and $C(\varepsilon)$ is a constant, dependent on $\varepsilon$). How can I show this? Many thanks for hints!
Let $f(x)=(1+x)^{2(t-u)}-\varepsilon(1+x)^{2(s-u)}$. We have $f(0)=1-\varepsilon$ and since $s-u>t-u$ and $\varepsilon>0$, we know $f(x)\to-\infty$ as $x\to\infty$. Hence $C(\varepsilon):=\displaystyle\sup_{0\leq x<\infty}f(x)<\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Are groups algebras over an operad? I'm trying to understand a little bit about operads. I think I understand that monoids are algebras over the associative operad in sets, but can groups be realised as algebras over some operad? In other words, can we require the existence of inverses in the structure of the operad? Similarly one could ask the same question about (skew-)fields.
No, there is no operad whose algebras are groups. Since there are many variants of operads a more precise answer is that if one considers what are known as (either symmetric or non-symmetric) coloured operads then there is no operad $P$ such that morphisms $P\to \bf Set$, e.g., $P$-algebras, correspond to groups. In general, structures that can be captured by symmetric operads are those that can be defined by a first order equational theory where the equations do not repeat arguments (for non-symmetric operads one needs to further demand that the order in which arguments appear on each side of an equation is the same). The common presentation of the theory of monoids is such and indeed there is a corresponding operad. The common presentation of the theory of groups is not of this form (because of the axiom for existence of inverses). This however does not prove that no operad can describe groups since it does not show that no other (super clever) presentation of groups can exist which is of the desired form. It can be shown that the category of algebras in $Set$ for an operad $P$ has certain properties that are not present in the category $Grp$ of groups. This does establish that no operad exists that describes groups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
A finitely presented group Given the presented group $$G=\Bigl\langle a,b\Bigm| a^2=c,\ b(c^2)b,\ ca(b^4)\Bigr\rangle,$$ determine the structure of the quotient $G/G'$,where G' is the derived subgroup of $G$ (i.e., the commutator subgroup of $G$). Simple elimination shows $G$ is cyclic (as it's generated by $b$) of order as a divisor of $10$, how to then obtain $G/G'$? Note $G'$ is the derived group, i.e it's the commutator subgroup of $G$.
Indeed, the group $G/G'$ is generated by $bG'$: let $\alpha$ denote the image of $a$ in $G/G'$ and $\beta$ the image of $b$. Then we have the relations $\alpha^4\beta^2 = \alpha^3\beta^4 = 1$; from there we obtain $$\beta^2 = \alpha^{-4} = \alpha^{-1}\alpha^{-3} = \alpha^{-1}\beta^{4},$$ so $\alpha = \beta^{2}$. And therefore $\alpha^4\beta^2 = \beta^8\beta^2 = \beta^{10}=1$. So the order of $\beta$ divides $10$. Therefore $G/G'$ is a quotient of $\langle x\mid x^{10}\rangle$, the cyclic group of order $10$. Now consider the elements $x^2$ and $x$ in $K=\langle x\mid x^{10}\rangle$. We have $x\Bigl( (x^4)^2\Bigr)x=1$ and $x^4x^2(x^4) =1$. Therefore, there is a homomorphism $G\to K$ that maps $a$ to $x^2$ and $b$ to $x$, which trivially factors through $G/G'$. Therefore, $G/G'$ has the cyclic group of order $10$ as a quotient. Since $G/G'$ is a quotient of the cyclic group of order $10$ and has the cyclic group of order $10$ as a quotient, it follows that $G/G'$ is cyclic of order $10$ (generated by $bG'$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/134655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
solution to a differential equation I am given the following differential equations: $x'= - y$ and $y' = x$. How can I solve them? Thanks for helping! Greetings
Let $\displaystyle X(t)= \binom{x(t)}{y(t)}$ so $$ X' = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array} \right)X .$$ This has solution $$ X(t)= \exp\biggl( t\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array} \right) \biggr) X(0)= \left( \begin{array}{cc} \cos t & -\sin t \\ \sin t & \cos t \\ \end{array} \right)\binom{x(0)}{y(0)}$$ so $$ x(t) = x(0)\cos t - y(0)\sin t \ \ \text{ and } \ \ y(t) = x(0)\sin t + y(0)\cos t . $$ (One checks directly that $x'=-y$ and $y'=x$.)
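SymPy's dsolve confirms the trigonometric (not exponential) form of the solution:

    import sympy as sp

    t = sp.symbols('t')
    x, y = sp.symbols('x y', cls=sp.Function)
    sol = sp.dsolve([sp.Eq(x(t).diff(t), -y(t)),
                     sp.Eq(y(t).diff(t), x(t))])
    print(sol)  # x(t) and y(t) come out as combinations of cos(t) and sin(t)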
{ "language": "en", "url": "https://math.stackexchange.com/questions/134720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If p then q misunderstanding? The statement $P\rightarrow Q$ means: if $P$ then $Q$.

    p | q | p->q
    _____________
    T | F | F
    F | F | T
    T | T | T
    F | T | T

Let's say: if I'm hungry $h$ - I'm eating $e$.

    p      | q      | p->q
    _______________________
    h      | not(e) | F
    not(h) | not(e) | T
    h      | e      | T
    not(h) | e      | T   // ?????

If I'm not hungry, I'm eating? (This does not make any sense...) Can you please explain that for me?
Rather than your example about food (which is not very good; a lot of people are starving and not eating), let us consider a more mathematical one: if $n$ equals 2 then $n$ is even. I — How to interpret the truth table ? Fix $n$ an integer. Let $p$ denote the assertion “$n = 2$”, and $q$ the assertion “$n$ is even”. These two assertions can be true or false, depending on $n$. What does it mean that the assertion “$p \to q$” is true ? This means precisely : * *$p$ true and $q$ false is not possible ; *$p$ false and $q$ false is a priori possible (e.g. with $n = 3$) ; *$p$ true and $q$ true is a priori possible (e.g. with $n = 2$) ; *$p$ false and $q$ true is a priori possible (e.g. with $n = 4$). II — Some common errors The assertion “($n$ is divisible by 2) $\to$ ($n$ is divisible by 3)” is definitely not true for all $n$, but it can be true, for example for $n=3$, or $n=6$. The fact that “false implies true” should not be read as “if not $p$ then $q$”. Indeed “false implies false” also holds, so if $p$ is false then $q$ may be either false or true: the implication tells us nothing in that case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Taking the derivative of $y = \dfrac{x}{2} + \dfrac {1}{4} \sin(2x)$ Again a simple problem that I can't seem to get the derivative of. I have $\frac{x}{2} + \frac{1}{4}\sin(2x)$ and I am getting $\frac{x^2}{4} + \frac{4\sin(2x)}{16}$. This is all very wrong, and I do not know why.
You are dealing with a sum of functions, $f(x) = \frac{x}{2}$ and $g(x)= \frac{1}{4} \sin(2 x)$, so you use linearity of the derivative: $$ \frac{d}{d x} \left( f(x) + g(x) \right) = \frac{d f(x)}{d x} + \frac{d g(x)}{d x} $$ To evaluate these derivatives, use $\frac{d}{d x}\left( c f(x) \right) = c \frac{d f(x)}{d x}$, for a constant $c$. Thus $$ \frac{d}{d x} \left( \frac{x}{2} + \frac{1}{4} \sin(2 x) \right) = \frac{1}{2} \frac{d x}{d x} + \frac{1}{4} \frac{d \sin(2 x)}{d x} $$ To evaluate the derivative of the sine term, you need the chain rule: $$ \frac{d}{d x} y(h(x)) = y^\prime(h(x)) h^\prime(x) $$ where $y(x) = \sin(x)$ and $h(x) = 2x$. Now finish it off using a table of derivatives. (Note that your attempt integrated rather than differentiated: $\frac{x^2}{4}$ is an antiderivative of $\frac{x}{2}$, not its derivative.)
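If you want to confirm the end result, a one-line SymPy sketch (the exact printed form may vary, but the value is what matters):

# Check the derivative with SymPy: d/dx [x/2 + sin(2x)/4] = 1/2 + cos(2x)/2.
import sympy as sp

x = sp.symbols('x')
print(sp.simplify(sp.diff(x/2 + sp.sin(2*x)/4, x)))  # cos(x)**2, i.e. 1/2 + cos(2x)/2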
{ "language": "en", "url": "https://math.stackexchange.com/questions/134855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Can someone check my work on this integral? $$ \begin{align} \int_0^{2\pi}\log|e^{i\theta} - 1|d\theta &= \int_0^{2\pi}\log(1-\cos(\theta))d\theta \\ &= \int_0^{2\pi}\log(\cos(0) - \cos(\theta))\,d\theta\\ &= \int_0^{2\pi}\log\left(-2\sin\left(\frac{\theta}{2}\right)\sin\left(\frac{-\theta}{2}\right)\right)\,d\theta\\ &= \int_0^{2\pi}\log\left(2\sin^2\left(\frac{\theta}{2}\right)\right)\,d\theta\\ &= \int_0^{2\pi}\log(2)d\theta + 2\int_0^{2\pi}\log\left(\sin\left(\frac{\theta}{2}\right)\right)\,d\theta\\ &= 2\pi \log(2) + 4\int_0^\pi \log\big(\sin(t)\big)\,dt\\ &=2\pi \log(2) - 4\pi \log(2) = -2\pi \log(2) \end{align} $$ Where $\int_0^\pi \log(\sin(t))\,dt = -\pi \log(2)$ according to this. The first step where I removed the absolute value signs is the one that worries me the most. Thanks.
You can use Jensen's formula, I believe: http://mathworld.wolfram.com/JensensFormula.html Edit: Jensen's formula implies that your integral is zero. The slip is in your very first step: $|e^{i\theta} - 1| = \sqrt{(\cos\theta-1)^2+\sin^2\theta} = \sqrt{2-2\cos\theta} = 2\left|\sin\frac{\theta}{2}\right|$, which is not $1-\cos\theta$. With the correct integrand, $$\int_0^{2\pi}\log|e^{i\theta}-1|\,d\theta = 2\pi\log 2 + 2\int_0^\pi\log\big(\sin t\big)\,dt = 2\pi\log 2 - 2\pi\log 2 = 0.$$
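A numerical confirmation, sketched with mpmath (its quadrature copes with the logarithmic singularities at the endpoints):

# The integral of log|e^{it} - 1| over [0, 2*pi] should be 0.
from mpmath import mp, quad, log, expj, pi

mp.dps = 30
print(quad(lambda t: log(abs(expj(t) - 1)), [0, 2*pi]))  # ~ 0 to high precision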
{ "language": "en", "url": "https://math.stackexchange.com/questions/134902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Morphism between projective schemes induced by surjection of graded rings Ravi Vakil 9.2.B is "Suppose that $S \rightarrow R$ is a surjection of graded rings. Show that the induced morphism $\text{Proj }R \rightarrow \text{Proj }S$ is a closed embedding." I don't even see how to prove that the morphism is affine. The only ways I can think of to do this are to either classify the affine subspaces of Proj S, or to prove that when closed morphisms are glued, one gets a closed morphism. Are either of those possible, and how can this problem be done?
I think a good strategy is to verify the statement locally, and then check that the local maps glue, as you said. Let us call $\phi:S\to R$ your surjective graded morphism, and $\phi^\ast:\textrm{Proj}\,\,R\to \textrm{Proj}\,\,S$ the corresponding morphism. Note that $$\textrm{Proj}\,\,R=\bigcup_{t\in S_1}D_+(\phi(t))$$ because $S_+$ (the irrelevant ideal of $S$) is generated by $S_1$ (as an ideal), so $\phi(S_+)R$ is generated by $\phi(S_1)$. For any $t\in S_1$ you have a surjective morphism $S_{(t)}\to R_{(\phi(t))}$ (sending $x/t^n\mapsto \phi(x)/\phi(t)^n$, for any homogeneous $x\in S$ of degree $n$), which corresponds to the canonical closed immersion of affine schemes $\phi^\ast_t:D_+(\phi(t))\hookrightarrow D_+(t)$. It remains to glue the $\phi^\ast_t$'s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/134964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
How to show that if a matrix A is diagonalizable, then a similar matrix B is also diagonalizable? So a matrix $B$ is similar to $A$ if for some invertible $S$, $B=S^{-1}AS$. My idea was to start with saying that if $A$ is diagonalizable, that means $A={X_A}^{-1}\Lambda_A X_A$, where $X$ is the eigenvector matrix of $A$, and $\Lambda$ is the eigenvalue matrix of $A$. And I basically want to show that $B={X_B}^{-1}\Lambda_B X_B$. This would mean $B$ is diagonalizable right? I am given that similar matrices have the same eigenvalues, and if $x$ is an eigenvector of $B$, then $Sx$ is an eigenvector of $A$. That is, $Bx=\lambda x \implies A(Sx)=\lambda(Sx)$. Can someone enlighten me please? Much appreciated.
Hint: Substitute $A = X_A^{-1} \Lambda X_A$ into $B = S^{-1} A S$ and use the formula $D^{-1}C^{-1} = (CD)^{-1}$.
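A numerical illustration of where the hint leads, sketched with NumPy (the random matrices here are hypothetical data, invertible with probability one): substituting gives $B = (X_A S)^{-1}\Lambda (X_A S)$, so $X_A S$ diagonalizes $B$.

# If A = inv(X) @ Lam @ X and B = inv(S) @ A @ S, then X @ S diagonalizes B.
import numpy as np

rng = np.random.default_rng(0)
Lam = np.diag([1.0, 2.0, 3.0])
X = rng.standard_normal((3, 3))   # hypothetical eigenvector matrix
A = np.linalg.inv(X) @ Lam @ X
S = rng.standard_normal((3, 3))   # hypothetical similarity
B = np.linalg.inv(S) @ A @ S
XS = X @ S
print(np.allclose(np.linalg.inv(XS) @ Lam @ XS, B))  # True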
{ "language": "en", "url": "https://math.stackexchange.com/questions/135020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
what's the ordinary derivative of the kronecker delta function? What's the ordinary derivative of the Kronecker delta function? I have used "ordinary" in order not to confuse the reader with the covariant derivative. I have tried the following: $$\delta[x-n]=\frac{1}{2\pi}\int_{0}^{2\pi}e^{i(x-n)t}dt$$ but that doesn't work, since there $x,n \in \mathbb{Z}$, while I am looking for the case $x \in \mathbb{R}$.
Maybe it is already too late, but I will answer. If I am wrong, please correct me. Let's obtain the Kronecker delta via a Fourier transform, getting a sinc function: $$\delta_{k,0} = \frac{1}{a}\int_{-\frac{a}{2}}^{\frac{a}{2}} e^{-\frac{i 2 \pi k x}{a}} \, dx = \frac{\sin (\pi k)}{\pi k}$$ This function looks like: [Figure: the sinc function $\frac{\sin(\pi k)}{\pi k}$, with the Kronecker delta values marked as orange dots at the integers.] Calculating the derivative we get: $$\frac{d \delta_{k,0}}{dk} = \frac{\cos (\pi k)}{k}-\frac{\sin (\pi k)}{\pi k^2} = \frac{\cos (\pi k)}{k} = \frac{(-1)^k}{k}$$ for nonzero $k \in \mathbb Z$ (at $k=0$ the derivative is $0$). On a plot it looks like: [Figure: the derivative of the sinc function, with its values at the integers marked.]
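A symbolic sketch with SymPy, differentiating the sinc interpolation and sampling it at the integers:

# d/dk of sin(pi k)/(pi k), evaluated at nonzero integers: (-1)^k / k.
import sympy as sp

k = sp.symbols('k')
d = sp.diff(sp.sin(sp.pi*k) / (sp.pi*k), k)
print(d)                                    # cos(pi*k)/k - sin(pi*k)/(pi*k**2)
print([d.subs(k, n) for n in range(1, 5)])  # [-1, 1/2, -1/3, 1/4]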
{ "language": "en", "url": "https://math.stackexchange.com/questions/135064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
How to verify whether the following function is convex or not? Consider the function $$f(x)=\frac{x^{n_{1}}}{1-x}+\frac{(1-x)^{n_{2}}}{x},x\in(0,1)$$ where $n_{1}$ and $n_2$ are some fixed positive integers. My question: Is $f(x)$ convex for any fixed $n_1$ and $n_2$? The second derivative of the function $f$ is very complicated, so I hope there exists another method to verify convexity.
According to Wikipedia: in mathematics, a real-valued function defined on an interval is called convex (or convex downward, or concave upward) if the graph of the function lies below the line segment joining any two points of the graph. Equivalently, a function is convex if its epigraph (the set of points on or above the graph of the function) is a convex set. More generally, this definition makes sense for functions defined on a convex subset of any vector space: a real-valued function $f : X \to \mathbb{R}$ defined on a convex set $X$ in a vector space is called convex if, for any two points $x_1,x_2 \in X$ and any $t \in [0,1]$, we have $$f(t x_1+(1-t) x_2)\leq t f(x_1)+(1-t)f(x_2).$$ Now take some fixed values of $n_1$ and $n_2$, say $5$ and $10$, and test this inequality.
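A grid-based sketch of that test in Python (numerical evidence only, not a proof; the grid sizes and tolerance are arbitrary choices):

# Probe the convexity inequality for f on (0,1) with n1 = 5, n2 = 10.
import numpy as np

n1, n2 = 5, 10
f = lambda x: x**n1/(1 - x) + (1 - x)**n2/x

xs = np.linspace(0.01, 0.99, 99)
ts = np.linspace(0.0, 1.0, 21)
print(all(f(t*a + (1 - t)*b) <= t*f(a) + (1 - t)*f(b) + 1e-9
          for a in xs for b in xs for t in ts))  # True on this grid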
{ "language": "en", "url": "https://math.stackexchange.com/questions/135231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Probability that a coin lands on tails an odd number of times when it is tossed $100$ times A coin is tossed $100$ times. Find the probability that tails occurs an odd number of times! I do not know the answer, but I tried this: there are these $4$ possible outcomes in which a run of $100$ tosses can unfold. * *head occurs odd times *head occurs even times *tail occurs odd times *tail occurs even times Getting a head is equally likely as getting a tail, and similarly for odd times and even times. Thus, all of these events must have the same probability, i.e. $\dfrac{1}{4}$. Is this the correct answer? Is there an alternate way of solving this problem? Let's hear it!
There are only two possible outcomes: either both heads and tails come up an even number of times, or they both come up an odd number of times. This is so because if heads came up $x$ times and tails came up $y$ times then $x+y=100$, and the even number $100$ can't be the sum of an even and an odd number. A good way to solve this problem is to notice that if we have a sequence of 100 coin tosses in which tails came up an odd number of times, then by flipping the result of the first toss you get a sequence where tails came up an even number of times (and no matter what came up in the first toss!). Hence you have a bijection between the set of sequences where tails occurs an odd number of times and the set of sequences where tails occurs an even number of times. Since these two sets partition all $2^{100}$ equally likely outcomes and have the same size, the probability is $\frac{1}{2}$.
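An exact check in Python with binomial coefficients:

# P(odd number of tails in 100 fair tosses): sum of C(100, k) over odd k.
from math import comb

odd = sum(comb(100, k) for k in range(1, 101, 2))
print(odd == 2**99, odd / 2**100)  # True 0.5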
{ "language": "en", "url": "https://math.stackexchange.com/questions/135325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Convergence to the stable law I am reading the book Kolmogorov A.N., Gnedenko B.V., Limit distributions for sums of independent random variables. From the general theory there it is known that if $X_i$ are symmetric i.i.d. r.v. such that $P(|X_1|>x)=x^{-\alpha},\, x \geq 1$, then $(X_1+\ldots+X_n)n^{-1/\alpha}\to Y$, where the c.f. of $Y$ equals $\varphi_Y(t)=e^{-c|t|^{\alpha}}, \alpha \in (0,2]$, so $Y$ has a stable distribution. I want to check it without using those general theorems. So I start as follows: $X_1$ has density of distribution $f_X(x)=|x|^{-\alpha-1}\alpha/2, |x|>1$. Using Levy's theorem one must prove that $\varphi^n_{X_1}(t/n^{1/\alpha})\to \varphi_Y(t),\, n \to \infty$ for all $t\in \mathbb R$. $$\varphi_{X_1}(t/n^{1/\alpha})=\int_{1}^{\infty}\cos(tx/n^{1/\alpha})\alpha x^{-\alpha-1}\,dx,$$ and for every $t$ it is evident that $\varphi_{X_1}(t/n^{1/\alpha})\to 1, n \to \infty$, so we have the indeterminate form $1^\infty$. So we are to find $\lim n(\varphi_{X_1}(t/n^{1/\alpha})-1)$, but $\varphi_{X_1}(t/n^{1/\alpha})\sim 1+\frac{1}{2}(2txn^{-1/\alpha})^2$, and I can only say something about $\alpha=2$, so I got stuck here. Perhaps I made a mistake somewhere. Could you please help me? Thanks.
In your integral make the change of variables $z = \frac{x}{n^{1/\alpha}}$. This brings a factor $\frac 1n$ out front. Then write $\cos(tz) = 1 + (\cos(tz) -1)$. Integrate the $1$ explicitly, and the integral involving $\cos(tz)-1$ converges because the integrand is nice at zero (near $z=0$ it behaves like $z^{1-\alpha}$, which is integrable for $\alpha<2$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/135401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Newton polygons This question is primarily to clear up some confusion I have about Newton polygons. Consider the polynomial $x^4 + 5x^2 +25 \in \mathbb{Q}_{5}[x]$. I have to decide if this polynomial is irreducible over $\mathbb{Q}_{5}$. So, I compute its Newton polygon. On doing this I find that the vertices of the polygon are $(0,2)$, $(2,1)$ and $(4,0)$. The segments joining $(0,2)$ and $(2,1)$, and $(2,1)$ and $(4,0)$ both have slope $-\frac{1}{2}$, and both segments have length $2$ when we take their projections onto the horizontal axis. Am I correct in concluding that the polynomial $x^4 +5x^2 +25$ factors into two quadratic polynomials over $\mathbb{Q}_{5}$, and so is not irreducible? I am deducing this on the basis of the following definition of a pure polynomial given in Gouvea's P-adic Numbers, An Introduction (and the fact that irreducible polynomials are pure): A polynomial is pure if its Newton polygon has one slope. What I interpret this definition to mean is that a polynomial $f(x) = a_nx^n + ... + a_0$ $\in \mathbb{Q}_{p}[x]$ (with $a_na_0 \neq 0$) is pure, iff the only vertices on its Newton polygon are $(0,v_p(a_0))$ and $(n, v_p(a_n))$. Am I right about this, or does the polynomial $x^4 + 5x^2+25$ also qualify as a pure polynomial?
There is no vertex at $(2,1)$. In my opinion, the right way to think of a Newton Polygon of a polynomial is as a closed convex body in ${\mathbb{R}}^2$ with vertical sides on both right and left. A point $P$ is only a vertex if there's a line through it touching the polygon at only one point. So this polynomial definitely is pure, and N-polygon theory does not help you at all. Easiest, I suppose, will be to write down what the roots are and see that any one of them generates an extension field of ${\mathbb{Q}}_5$ of degree $4$: voilà, your polynomial is irreducible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/135451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Find all connected 2-sheeted covering spaces of $S^1 \lor S^1$ This is exercise 1.3.10 in Hatcher's book "Algebraic Topology". Find all the connected 2-sheeted and 3-sheeted covering spaces of $X=S^1 \lor S^1$, up to isomorphism of covering spaces without basepoints. I need some start-help with this. I know there is a bijection between the subgroups of index $n$ of $\pi_1(X) \approx \mathbb{Z} *\mathbb{Z}$ and the n-sheeted covering spaces, but I don't see how this can help me find the covering spaces (preferably draw them). From the pictures earlier in the book, it seems like all the solutions are wedge products of circles (perhaps with some orientations?). So the question is: How should I think when I approach this problem? Should I think geometrically, group-theoretically, a combination of both? Small hints are appreciated. NOTE: This is for an assignment, so please don't give away the solution. I'd like small hints or some rules on how to approach problems like this one. Thanks!
A covering space of $S^1 \lor S^1$ is just a certain kind of graph, with edges labeled by $a$'s and $b$'s, as shown in the full-page picture on pg. 58 of Hatcher's book. Just try to draw all labeled graphs of this type with exactly two or three vertices. Several of these are already listed in parts (1) through (6) of the figure, but there are several missing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/135497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
Using the definition of a concave function prove that $f(x)=4-x^2$ is concave (do not use derivative). Let $D=[-2,2]$ and $f:D\rightarrow \mathbb{R}$ be $f(x)=4-x^2$. Sketch this function. Using the definition of a concave function prove that it is concave (do not use derivative). Attempt: $f(x)=4-x^2$ is a down-facing parabola with vertex at $(0,4)$. I know that. But what is $D=[-2,2]$ given for? Is it the domain or a point? Then, how do I prove that $f(x)$ is concave using the definition of a concave function? I got the inequality which should hold for $f(x)$ to be concave: For two distinct non-negative values of $x$ ($u$ and $v$), $f(u)=4-u^2$ and $f(v)=4-v^2$. Condition of a concave function: $ \lambda(4-u^2)+(1-\lambda)(4-v^2)\leq4-[\lambda u+(1-\lambda)v]^2$ I do not know what to do next.
If you expand your inequality, and fiddle around you can end up with $$ (\lambda u-\lambda v)^2\leq (\sqrt{\lambda}u-\sqrt{\lambda}v)^2. $$ Without loss of generality, you may assume that $u\geq v$. This allows you to drop the squares. Another manipulation gives you something fairly obvious. Now, work your steps backwards to give a valid proof.
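A symbolic sketch with SymPy that factors the concavity gap directly, confirming where the fiddling lands:

# Concavity gap of f(x) = 4 - x**2: it factors as lam*(1-lam)*(u-v)**2 >= 0.
import sympy as sp

u, v, lam = sp.symbols('u v lam')
f = lambda t: 4 - t**2
gap = f(lam*u + (1 - lam)*v) - (lam*f(u) + (1 - lam)*f(v))
print(sp.factor(gap))  # -lam*(lam - 1)*(u - v)**2, nonnegative for lam in [0, 1]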
{ "language": "en", "url": "https://math.stackexchange.com/questions/135553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Evaluate the $\sin$, $\cos$ and $\tan$ without using a calculator? Evaluate the $\sin$, $\cos$ and $\tan$ without using a calculator. For $150$ degrees the right answers are $\frac{1}{2}$, $-\frac{\sqrt{3}}{2}$ and $-\frac{1}{\sqrt{3}}$. For $-315$ degrees the right answers are $\frac{1}{\sqrt{2}}$, $\frac{1}{\sqrt{2}}$ and $1$.
You can look up cos and sin on the unit circle. The angles in question reduce to those of the special right triangles 30-60-90 and 45-45-90. Note that $-315 \equiv 45 \pmod{360}$. For tan, use the identity $\tan{\theta} = \frac{\sin{\theta}}{\cos \theta}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/135698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Classical contradiction in logic I am studying for my logic finals and I came across this question: Prove $((E\implies F)\implies E)\implies E$ I don't understand how there is a classical contradiction in this proof. Using natural deduction, could some one explain why there is a classical contradiction in this proof? Thank you in advance!
This proof follows the OP's attempt at a proof. The main difference is that explosion (X) is used on line 5. [The Fitch-style proof appeared here as an image.] The classical contradictions are lines 4 and 8. According to Wikipedia, in classical logic a contradiction consists of a logical incompatibility between two or more propositions. For line 4, the two lines that show logical incompatibility are 2 and 3. For line 8, the two lines that show logical incompatibility are 2 and 7.
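For readers who cannot see the image, here is one reconstruction consistent with the cited line numbers (a sketch only; it assumes a Fitch-style system with explosion X, conditional introduction/elimination, and a classical double-negation step):

 1. (E -> F) -> E          assumption
 2. | ~E                   assumption
 3. | | E                  assumption
 4. | | contradiction      2, 3
 5. | | F                  X (explosion), 4
 6. | E -> F               ->I, 3-5
 7. | E                    ->E, 1, 6
 8. | contradiction        2, 7
 9. ~~E                    ~I, 2-8
10. E                      ~~E elimination (the classical step), 9
11. ((E -> F) -> E) -> E   ->I, 1-10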
{ "language": "en", "url": "https://math.stackexchange.com/questions/135760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Combinatorics - An unproved slight generalization of a familiar combinatorial identity One of the most basic and famous combinatorial identities is that $$\sum_{i=0}^n \binom{n}{i} = 2^n \; \forall n \in \mathbb{Z^+} \tag 1$$ There are several ways to make generalizations of $(1)$; one is the following. Rewrite $(1)$: $$\sum_{a_1,a_2 \in \mathbb{N}; \; a_1+a_2=n} \frac{n!}{a_1! a_2!} = 2^n \; \forall n \in \mathbb{Z^+} \tag 2$$ Generalize $(2)$: $$\sum_{a_1,...,a_k \in \mathbb{N}; \; a_1+...+a_k=n} \frac{n!}{a_1!...a_k!} = k^n \; \forall n,k \in \mathbb{Z^+} \tag 3$$ Using double counting, it is easy to prove that $(3)$ is true. So we have a generalization of $(1)$. The question is whether we can generalize the identity below using the same idea $$\sum_{i=0}^n \binom{n}{i}^2 = \binom{2n}{n} \; \forall n \in \mathbb{Z^+} \tag 4$$ which means to find $f$ in $$\sum_{a_1,...,a_k \in \mathbb{N}; \; a_1+...+a_k=n} \left ( \frac{n!}{a_1!...a_k!} \right )^2 = f(n,k) \; \forall n,k \in \mathbb{Z^+} \tag 5$$ That is the problem I have been trying to solve for a few days now without achieving anything. If anyone has any ideas, please share. Thank you. P.S: It is not an assignment, and sorry for my bad English. Supplement 1: I think I need to make it clear: the problem I suggested is about finding $f$ which satisfies $(5)$. I also show the way I found the problem; its only purpose is that it may provide ideas for solving. Supplement 2: I think I have proved the identity for $f(n,3)$ in the comment below $$f(n,3) = \sum_{i=0}^n \binom{n}{i}^2 \binom{2i}{i} \tag 6$$ by using double counting: We double-count the number of ways to choose a sex-equal subgroup, half of which are on the special duty, from the group which includes $n$ men and $n$ women (the empty subgroup is counted). The first way of counting: The number of satisfying subgroups which contain $2i$ people is $\binom{n}{i}^2 \binom{2i}{i}$. So the number of satisfying subgroups is the RHS of $(6)$. The second way of counting: The number of satisfying subgroups which contain $2(a_2+a_3)$ people, $a_2$ women on the duty and $a_3$ men on the duty is $$\left ( \frac{n!}{a_1!a_2!a_3!} \right )^2$$. So the number of satisfying subgroups is the LHS of $(6)$. Hence, $(6)$ is proved.
Induction using Vandermonde's identity $$ \sum_{i_1+i_2=m}\binom{n_1}{i_1}\binom{n_2}{i_2}=\binom{n_1+n_2}{m}\tag{1} $$ yields $$ \sum_{i_1+i_2+\dots+i_k=m}\binom{n_1}{i_1}\binom{n_2}{i_2}\dots\binom{n_k}{i_k}=\binom{n_1+n_2+\dots+n_k}{m}\tag{2} $$
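For what it's worth, a brute-force check of identity $(6)$ for small $n$, sketched in Python:

# Check (6): sum over a1+a2+a3 = n of multinomial(n; a1,a2,a3)^2
# equals sum_i C(n,i)^2 * C(2i,i).
from math import comb, factorial

def lhs(n):
    return sum((factorial(n) // (factorial(a)*factorial(b)*factorial(n-a-b)))**2
               for a in range(n + 1) for b in range(n + 1 - a))

def rhs(n):
    return sum(comb(n, i)**2 * comb(2*i, i) for i in range(n + 1))

print(all(lhs(n) == rhs(n) for n in range(8)))  # True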
{ "language": "en", "url": "https://math.stackexchange.com/questions/135813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Differential operator - what is $\frac{\partial f}{\partial x}x$? given $f$ - a smooth function, $f\colon\mathbb{R}^2\to \mathbb{R}$. I have a differential operator that takes $f$ to $\frac{\partial f}{\partial x}x$, but I am unsure what this is. If, for example, the operator took $f$ to $\frac{\partial f}{\partial x}$ then I understand that this operator differentiates with respect to $x$. Does $\frac{\partial f}{\partial x}x$ differentiate with respect to $x$ and then do what? Can someone please help with this? (I'm confused...)
Let $D$ be a (first order) differential operator (i.e. a derivation) and let $h$ be a function. (In your example $h$ is the function $x$ and $D$ is $\frac{\partial}{\partial x}$). I claim that there is a differential operator $hD$ given by $$ f \mapsto hDf. $$ (This means multiply $h$ by $Df$ pointwise.) For this claim to hold, we just need it to be true that the Leibniz (product) rule holds, i.e. $$ hD(fg) = f(hD)g + g(hD)f. $$ Since $D$ itself satisfies the Leibniz rule, this is true. (We also need the operator $hD$ to take sums to sums, which it does.) Put less formally, to compute $x \frac{\partial}{\partial x}$ on a function $f$, take the partial with respect to $x$, then multiply what you get by $x$.
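A concrete SymPy sketch of the operator $f \mapsto x\,\partial f/\partial x$, checking the Leibniz rule explicitly:

# D = x * d/dx is a derivation: D(gh) = g*D(h) + h*D(g).
import sympy as sp

x, y = sp.symbols('x y')
D = lambda f: x * sp.diff(f, x)

g, h = x**2 * y, sp.sin(x) + y
print(D(g))                                          # 2*x**2*y
print(sp.simplify(D(g*h) - (g*D(h) + h*D(g))) == 0)  # True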
{ "language": "en", "url": "https://math.stackexchange.com/questions/135940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
moment generating function of exponential distribution I have a question concerning the aforementioned topic :) So, with $f_X(t)={\lambda} e^{-\lambda t}$, we get: $$\phi_X(t)=\int_{0}^{\infty}e^{tX}\lambda e^{-\lambda X}dX =\lambda \int_{0}^{\infty}e^{(t-\lambda)X}dX =\lambda \frac{1}{t-\lambda}\left[ e^{(t-\lambda)X}\right]_0^\infty =\frac{\lambda}{t-\lambda}[0-(1)]$$ but only if $(t-\lambda)<0$, else the integral does not converge. But how do we know that $(t-\lambda)<0$? Or do we not know it at all, and can only give the best approximation? Or (of course) am I doing something wrong? :) Yours, Marie!
You did nothing wrong. The moment generating function of $X$ simply isn't defined, as your work shows, for $t\ge\lambda$. For $t<\lambda$ your computation goes through and gives $\phi_X(t)=\frac{\lambda}{\lambda-t}$.
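A SymPy sketch of the same computation (the exact shape of the output may vary by version; typically it is a Piecewise whose convergent branch carries the condition $t<\lambda$):

# The MGF integral of an Exponential(lam) converges only for t < lam.
import sympy as sp

t, lam, X = sp.symbols('t lam X', positive=True)
mgf = sp.integrate(sp.exp(t*X) * lam * sp.exp(-lam*X), (X, 0, sp.oo))
print(sp.simplify(mgf))  # lam/(lam - t) under the condition t < lam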
{ "language": "en", "url": "https://math.stackexchange.com/questions/136009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Prove that $|e^{a}-e^{b}|<|a-b|$ Possible Duplicate: $|e^a-e^b| \leq |a-b|$ Could someone help me through this problem? Let a, b be two complex numbers in the left half-plane. Prove that $|e^{a}-e^{b}|<|a-b|$
By the integral form of the mean value theorem along the segment from $b$ to $a$, $$ e^a - e^b = (a-b)\int_0^1 e^{b+t(a-b)}\,dt, \qquad\text{so}\qquad |e^a - e^b| \leqslant |a-b|\max_{z\in [a,b]} |e^z|, $$ where $[a,b]$ denotes the segment joining $a$ and $b$. But $a$ and $b$ have negative real part, and then every $z$ on $[a,b]$ also has negative real part, so $|e^z| = e^{\operatorname{Re} z} < 1$. Hence the $\max$ is less than one, and thus $$ |e^a - e^b| < |a-b|. $$
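A quick random spot-check with NumPy (sampled points are strictly inside the left half-plane, so the strict inequality should hold):

# Test |e^a - e^b| < |a - b| for random a, b with Re < 0.
import numpy as np

rng = np.random.default_rng(1)
a = -rng.random(1000) - 0.01 + 1j*rng.standard_normal(1000)
b = -rng.random(1000) - 0.01 + 1j*rng.standard_normal(1000)
print(np.all(np.abs(np.exp(a) - np.exp(b)) < np.abs(a - b)))  # True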
{ "language": "en", "url": "https://math.stackexchange.com/questions/136075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding $\frac{dy}{dt}$ given a curve and the speed of the particle Today I was doing some practice problems for the AP Calculus BC exam and came across a question I was unable to solve. In the xy-plane, a particle moves along the parabola $y=x^2-x$ with a constant speed of $2\sqrt{10}$ units per second. If $\frac{dx}{dt}>0$, what is the value of $\frac{dy}{dt}$ when the particle is at the point $(2,2)$? I first tried to write the parabola as a parametric equation such that $x(t)=at$ and $y(t)=(at)^2-at$ and then find a value for $a$ such that $\displaystyle\int_0^1\sqrt{(x'(t))^2+(y'(t))^2}dt=2\sqrt{10}$. However, since it was a multiple choice question we were probably not supposed to spend more than 3min on the question so I though that my approach was probably incorrect. The only information that I know for sure is that since $\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}\rightarrow\frac{dy}{dt}=(2x-1)\frac{dx}{dt}$ and we are evaluating at $x=2$ and so $\frac{dy}{dt}=3\frac{dx}{dt}$. Other than that I am not sure how to proceed and so any help would be greatly appreciated!
As you noticed, $dy=(2x-1)dx$. The speed is constant, so that $\dot{x}^2+\dot{y}^2=40$. So you get the system $$\begin{cases} \dot{y}=(2x-1)\dot{x} \\\ \dot{x}^2+\dot{y}^2=40 \end{cases}.$$ By substitution, $\left[1+(2x-1)^2 \right]\dot{x}^2 = 40$, whence, at $x=2$, $\dot{x}^2=4$. Since $\dot{x}>0$, you find $\dot{x}=2$, and then $\dot{y}=(2x-1)\dot{x}=3 \cdot 2 = 6$. I hope I did not make mistakes.
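A two-line arithmetic check of the final numbers:

# dx/dt = 2, dy/dt = 6 at (2, 2): speed is 2*sqrt(10) and dy/dt = (2x - 1) dx/dt.
from math import sqrt, isclose

x, dxdt, dydt = 2, 2, 6
print(isclose(sqrt(dxdt**2 + dydt**2), 2*sqrt(10)), dydt == (2*x - 1)*dxdt)  # True True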
{ "language": "en", "url": "https://math.stackexchange.com/questions/136143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Wedge product $d(u\, dz)= \bar{\partial}u \wedge dz$. How to show that if $u \in C_0^\infty(\mathbb{C})$ then $d(u\, dz)= \bar{\partial}u \wedge dz$. Thank you.
Note that $$d(u\,dz)=du\wedge dz+(-1)^0u\,ddz=du\wedge dz=(\partial u+\bar{\partial} u)\wedge dz.$$ Since $$\partial u=\frac{\partial u}{\partial z}dz\hspace{2mm}\mbox{ and }\hspace{2mm}\bar{\partial} u=\frac{\partial u}{\partial \bar{z}}d\bar{z},$$ we have $$d(u\,dz)=\frac{\partial u}{\partial z}dz\wedge dz+\frac{\partial u}{\partial \bar{z}}d\bar{z}\wedge dz=\frac{\partial u}{\partial \bar{z}}d\bar{z}\wedge dz=\bar{\partial} u\wedge dz,$$ where we have used $dz\wedge dz=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that every connected graph has a spanning tree, working from the graph "down" I am confused about how to approach this. It says: Show that every connected graph has a spanning tree. It's possible to find a proof that starts with the graph and works "down" towards the spanning tree. I was told that a proof by contradiction may work, but I'm not seeing how to use it. Is there a visual, drawing-type of proof? I appreciate any tips or advice.
Let $G$ be a simple connected graph. If $G$ has no cycle, then $G$ is a spanning tree of itself. If $G$ has a cycle, remove one edge lying on that cycle: the graph stays connected, since any path that used the removed edge can be rerouted around the rest of the cycle. Repeat; since $G$ has finitely many edges, the process terminates, and the resulting connected acyclic graph is a spanning tree. A runnable sketch of this procedure follows.
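Here is the promised sketch of the deletion argument in plain Python (no libraries; the connectivity test is a simple depth-first search):

# "Top-down" spanning tree: repeatedly delete an edge that lies on a cycle
# (equivalently, a non-bridge); connectivity is preserved throughout.
def connected(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == set(vertices)

def spanning_tree(vertices, edges):
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        for e in edges:
            rest = [f for f in edges if f != e]
            if connected(vertices, rest):   # e lies on a cycle: safe to drop
                edges = rest
                changed = True
                break
    return edges                            # |V| - 1 edges: a spanning tree

V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4)]
print(spanning_tree(V, E))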
{ "language": "en", "url": "https://math.stackexchange.com/questions/136249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
How to prove this equality $ t(1-t)^{-1}=\sum_{k\geq0} 2^k t^{2^k}(1+t^{2^k})^{-1}$? Prove the equality $\quad t(1-t)^{-1}=\sum_{k\geq0} 2^k t^{2^k}(1+t^{2^k})^{-1}$. I have tried to use the Taylor expansion of the left-hand side to prove it, but I failed. I don't know how the $k$ and $2^k$ on the right occur. This exercise appears shortly after the Jacobi identity in the book Advanced Combinatorics (page 118, Ex. 10(2)). Any hints about the proof? Thank you in advance.
This is all for $|t|<1$. Start with the geometric series $$\frac{t}{1-t} = \sum_{n=1}^\infty t^n$$ On the right side, each term expands as a geometric series $$\frac{2^k t^{2^k}}{1+t^{2^k}} = \sum_{j=1}^\infty 2^k (-1)^{j-1} t^{j 2^k}$$ If we add this up over all nonnegative integers $k$, for each integer $n$ you get a term in $t^n$ whenever $n$ is divisible by $2^k$, with coefficient $+2^k$ when $n/2^k$ is odd and $-2^k$ when $n/2^k$ is even. So if $2^p$ is the largest power of $2$ that divides $n$, the coefficient of $t^n$ will be $2^p -\sum_{k=0}^{p-1} 2^k = 1$.
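A quick numerical sketch of the identity at a sample point (the series converges extremely fast, so a short partial sum suffices):

# Partial sum of 2^k t^(2^k)/(1 + t^(2^k)) versus t/(1 - t) at t = 0.7.
t = 0.7
s = sum(2**k * t**(2**k) / (1 + t**(2**k)) for k in range(60))
print(s, t / (1 - t))  # both ~ 2.3333333333333335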
{ "language": "en", "url": "https://math.stackexchange.com/questions/136333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
What is so interesting about the zeroes of the Riemann $\zeta$ function? The Riemann $\zeta$ function plays a significant role in number theory and is defined by $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} \qquad \text{ for } \sigma > 1 \text{ and } s= \sigma + it$$ The Riemann hypothesis asserts that all the non-trivial zeroes of the $\zeta$ function lie on the line $\text{Re}(s) = \frac{1}{2}$. My question is: Why are we interested in the zeroes of the $\zeta$ function? Does it give any information about something? What is the use of writing $$\zeta(s) = \prod_{p} \biggl(1-\frac{1}{p^s}\biggr)^{-1}$$
Here is a visual supplement to Eric's answer, based on this paper by Riesel and Göhl, and Mathematica code by Stan Wagon: The animation demonstrates the eventual transformation from Riemann's famed approximation to the prime counting function $$R(x)=\sum_{k=1}^\infty \frac{\mu(k)}{k} \mathrm{li}(\sqrt[k]{x})=1+\sum_{k=1}^\infty \frac{(\log\,x)^k}{k\,k!\zeta(k+1)}$$ to the actual prime-counting function $\pi(x)$, through a series of successive corrections based on the nontrivial roots of $\zeta(s)$. (Here, $\mu(k)$ is the Möbius function and $\mathrm{li}(x)$ is the logarithmic integral.) See the Riesel/Göhl paper for more details.
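To get a feel for how good $R(x)$ already is before any zero-corrections, here is a small sketch using mpmath's zeta for the Gram series and SymPy's primepi (the 100-term truncation is an arbitrary but ample choice for these $x$):

# Riemann's R(x) via the Gram series, compared with pi(x).
from mpmath import mp, zeta, log, factorial
from sympy import primepi

mp.dps = 30

def R(x, terms=100):
    s = mp.mpf(1)
    for k in range(1, terms):
        s += log(x)**k / (k * factorial(k) * zeta(k + 1))
    return s

for x in (100, 1000, 10000):
    print(x, float(R(x)), int(primepi(x)))  # R(x) tracks pi(x) closely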
{ "language": "en", "url": "https://math.stackexchange.com/questions/136417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72", "answer_count": 5, "answer_id": 2 }
Find the area of a surface of revolution I'm a calculus II student and I'm completely stuck on one question: Find the area of the surface generated by revolving the right-hand loop of the lemniscate $r^2 = \cos2 θ$ about the vertical line through the origin (y-axis). Can anyone help me out? Thanks in advance
Note some useful relationships and identities: $r^2 = x^2 + y^2$, $\cos 2\theta = \cos^2\theta - \sin^2\theta$, $\sin \theta = \frac{y}{r} = \frac{y}{\sqrt{x^2 + y^2}}$, $\sin^2 \theta = \frac{y^2}{x^2 + y^2}$. These hint at the possibility of doing this in Cartesian coordinates.
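For a numerical target to check your work against, here is a different route than the Cartesian hints above: in polar form the right loop is $\theta\in[-\pi/4,\pi/4]$, and (assuming my simplification $r^2+(dr/d\theta)^2 = 1/\cos 2\theta$, so that $2\pi x\,ds = 2\pi\cos\theta\,d\theta$, which you should verify) the area comes out to $2\sqrt{2}\,\pi$.

# Revolution area via the polar route: integrand simplifies to 2*pi*cos(theta).
from math import pi, cos, sqrt
from scipy.integrate import quad

val, err = quad(lambda t: 2*pi*cos(t), -pi/4, pi/4)
print(val, 2*sqrt(2)*pi)  # both ~ 8.885765876316732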
{ "language": "en", "url": "https://math.stackexchange.com/questions/136486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Show that $ a,b,c, \sqrt{a}+ \sqrt{b}+\sqrt{c} \in\mathbb Q \implies \sqrt{a},\sqrt{b},\sqrt{c} \in\mathbb Q $ Assume that $a,b,c, \sqrt{a}+ \sqrt{b}+\sqrt{c} \in\mathbb Q$ are rational; prove that $\sqrt{a},\sqrt{b},\sqrt{c} \in\mathbb Q$ are rational. I know this can be proved; I would like to know whether there is an easier way than the following. $\sqrt a + \sqrt b + \sqrt c = p \in \mathbb Q$, $\sqrt a + \sqrt b = p- \sqrt c$, $a+b+2\sqrt a \sqrt b = p^2+c-2p\sqrt c$, $2\sqrt a\sqrt b=p^2+c-a-b-2p\sqrt c$, $4ab=(p^2+c-a-b)^2+4p^2c-4p(p^2+c-a-b)\sqrt c$, $\sqrt c=\frac{(p^2+c-a-b)^2+4p^2c-4ab}{4p(p^2+c-a-b)}\in\mathbb Q$ (when $p\neq 0$ and $p^2+c-a-b\neq 0$).
[See here and here for an introduction to the proof. They are explicitly worked special cases] As you surmised, induction works, employing our prior Lemma (case $\rm\:n = 2\:\!).\:$ Put $\rm\:K = \mathbb Q\:$ in Theorem $\rm\ \sqrt{c_1}+\cdots+\!\sqrt{c_{n}} = k\in K\ \Rightarrow \sqrt{c_i}\in K\:$ for all $\rm i,\:$ if $\rm\: 0 < c_i\in K\:$ an ordered field. Proof $\: $ By induction on $\rm n.$ Clear if $\rm\:n=1.$ It is true for $\rm\:n=2\:$ by said Lemma. Suppose that $\rm\: n>2.$ It suffices to show one of the square-roots is in $\rm K,\:$ since then the sum of all of the others is in $\rm K,\:$ so, by induction, all of the others are in $\rm K$. Note that $\rm\:\sqrt{c_1}+\cdots+\sqrt{c_{n-1}}\: =\: k\! -\! \sqrt{c_n}\in K(\sqrt{c_n})\:$ so all $\,\rm\sqrt{c_i}\in K(\sqrt{c_n})\:$ by induction. Therefore $\rm\ \sqrt{c_i} =\: a_i + b_i\sqrt{c_n}\:$ for some $\rm\:a_i,\:\!b_i\in K,\:$ for $\rm\:i=1,\ldots,n\!-\!1$. Some $\rm\: b_i < 0\:$ $\Rightarrow$ $\rm\: a_i = \sqrt{c_i}-b_i\sqrt{c_n} = \sqrt{c_i}+\!\sqrt{b_i^2 c_n}\in K\:\Rightarrow \sqrt{c_i}\in K\:$ by Lemma $\rm(n=2).$ Else all $\rm b_i \ge 0.\:$ Let $\rm\: b = b_1\!+\cdots+b_{n-1} \ge 0,\:$ and let $\rm\: a = a_1\!+\cdots+a_{n-1}.\:$ Then $$\rm \sqrt{c_1}+\cdots+\!\sqrt{c_{n}}\: =\: a+(b\!+\!1)\:\sqrt{c_n} = k\in K\:\Rightarrow\:\!\sqrt{c_n}= (k\!-\!a)/(b\!+\!1)\in K$$ Note $\rm\:b\ge0\:\Rightarrow b\!+\!1\ne 0.\:$ Hence, in either case, one of the square-roots is in $\rm K.\ \ $ QED Remark $ $ Note that the proof depends crucially on the positivity of the square-root summands. Without such the proof fails, e.g. $\:\sqrt{2} + (-\sqrt{2})\in \mathbb Q\:$ but $\rm\:\sqrt{2}\not\in\mathbb Q.\:$ It is instructive to examine all of the spots where positivity is used in the proof (above and Lemma), e.g. to avoid dividing by $\,0$. See also this post on linear independence of square-roots (Besicovitch's theorem).
{ "language": "en", "url": "https://math.stackexchange.com/questions/136556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
$\lim\limits_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite How do I prove that $ \displaystyle\lim_{n \to{+}\infty}{\sqrt[n]{n!}}$ is infinite?
Using $\text{AM} \ge \text{GM}$ $$ \frac{1 + \frac{1}{2} + \dots + \frac{1}{n}}{n} \ge \sqrt[n]{\frac{1}{n!}}$$ $$\sqrt[n]{n!} \ge \frac{n}{H_n}$$ where $H_n = 1 + \frac{1}{2} + \dots + \frac{1}{n} \le \log n+1$ Thus $$\sqrt[n]{n!} \ge \frac{n}{\log n+1}$$
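A numeric sanity check of the bound (using log-gamma to avoid forming huge integers):

# (n!)^(1/n) >= n/(log n + 1); both sides grow without bound.
from math import exp, log, lgamma

for n in (10, 100, 1000, 10000):
    root = exp(lgamma(n + 1) / n)   # (n!)^(1/n), since lgamma(n+1) = log n!
    print(n, round(root, 2), round(n / (log(n) + 1), 2))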
{ "language": "en", "url": "https://math.stackexchange.com/questions/136626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 12, "answer_id": 4 }
How to explain integrals and derivatives to a 10 years old kid? I have a sister that is interested in learning "what I do". I'm a 17 years old math loving person, but I don't know how to explain integrals and derivatives with some type of analogies. I just want to explain it well so that in the future she could remember what I say to her.
This is a question for a long car journey: both speed and distance travelled are available, and the relationship between the two can be explored and sampled, and later plotted and examined. Along the way you can ask why both speed and distance are important, and how keeping track of one over time lets you recover the other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Reference request for "Hodge Theorem" I have been told about a theorem (it was called the Hodge Theorem), which states the following isomorphism: $H^q(X, E) \simeq Ker(\Delta^q).$ Where $X$ is a Kähler Manifold, $E$ an Hermitian vector bundle on it and $\Delta^q$ is the Laplacian acting on the space of $(0,q)$-forms $A^{0,q}(X, E)$. Unfortunately I couldn't find it on the web. Does anyone know a reliable reference for such a theorem? (Specifically, I'm looking for a complete list of the hypotheses needed and for a proof.) Thank you!
There is a proof of the Hodge theorem in John Roe's book, Elliptic Operators, Topology and Asymptotic Methods. The proof is only two pages long and very readable. However, he only proves it for the classical Laplace operator, while the statement holds for any generalized Laplace operator. Another place you can find a reference is Richard Melrose's notes on microlocal analysis, which you can find on his homepage. But the proof is difficult to read without some background.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Cardinality of the power set of the set of all primes Please show me what I am doing wrong... Given the set $P$ of all primes I can construct the set $Q$ being the power set of P. Now let $q$ be an element in $Q$. ($q = \{p_1,p_2,p_3,\ldots\}$ where every $p_n$ is an element in $P$.) Now I can map every $q$ to a number $k$, where $k$ is equal to the product of all elements of $q$. ($k = p_1p_2p_3\ldots$) (for an empty set $q$, $k$ may be equal to one) Let the set $K$ consist of all possible values of $k$. Now because of the uniqueness of the prime factorization I can also map every number $k$ in $K$ to a $q$ in $Q$. (letting $k=1$ map to $q=\{\}$) Thus there exists a bijection between $Q$ and $K$. But $K$ is a subset of the natural numbers which are countable, and $Q$, being the power set of $P$, needs to be uncountably infinite (by Cantor's theorem), since $P$ is countably infinite. This is a contradiction since there cannot exist a bijection between two sets of different cardinality. What am I overlooking?
Many (most) of the elements $q$ have an infinite number of elements. Then you cannot form the product of those elements. You have shown that the set of finite subsets of the primes is countable, which is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/136799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }