Given a simple graph and its complement, prove that either of them is always connected. I was tasked to prove that when given 2 graphs $G$ and $\bar{G}$ (complement), at least one of them is always a connected graph.
Well, I always post my attempt at a solution, but here I'm totally stuck. I tried raw algebraic manipulations with the number of components, circuit ranks, etc., but to no avail.
So I really hope someone could give me a hint on how to approach this problem.
| Let $G$ be a simple disconnected graph
$\Longrightarrow\ \exists$ at least $2$ vertices, say $u$ and $v$, such that there does not exist a path between $u$ and $v$.
$\Longrightarrow$ No vertex of $G$ is adjacent to both $u$ and $v$ (Why?). $u$ and $v$ are obviously not adjacent.
$\Longrightarrow$ In $\bar{G}$ all the vertices are adjacent to either $u$ or $v$ or both and $u$ is adjacent to $v$.
$\Longrightarrow$ Take any two vertices in $\bar{G}$, say $a$ and $b$; each of them is adjacent to $u$ or $v$ (or both). Since $u$ and $v$ are adjacent, there must exist a path through $u$ or $v$ (or both) that connects $a$ and $b$.
$\Longrightarrow$ Hence, $\bar{G}$ is connected.
Same holds for $G$, if $\bar{G}$ is disconnected. Thus, at least one of $G$ and $\bar{G}$ must be connected.
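For anyone who wants to sanity-check the statement empirically, here is a minimal Python sketch (assuming the third-party networkx library is available; the graph size and edge probability are arbitrary choices):

```python
import networkx as nx

# For random graphs, at least one of G and its complement is always connected.
for trial in range(100):
    G = nx.gnp_random_graph(8, 0.3, seed=trial)
    assert nx.is_connected(G) or nx.is_connected(nx.complement(G))
print("G or its complement was connected in every trial")
```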
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 9,
"answer_id": 3
} |
Compressed sensing, approximately sparse, power law An $x$ in $\mathbb{R}^n$ is said to be sparse if many of its coefficients are zero. $x$ is said to be compressible (approximately sparse) if many of its coefficients are close to zero, i.e., let $x=(x_1,x_2,\dots,x_n)$ and sort the absolute values of the coefficients in decreasing order, with new indices, as $|x_{(1)}|\geq|x_{(2)}|\geq\cdots\geq|x_{(n)}|$. Then $x$ is said to be compressible if $|x_{(k)}|\leq C k^{-r}$, $r>0$, for all $k=1,2,\dots,n$, where $C$ is a fixed constant. The coefficients $x_k$ don't follow any probability distribution.
My questions:
1) With this definition, every $x\in\mathbb{R}^n$ satisfies this power law, as I can always find a constant $C$ (because I have only finitely many values) such that the above power law is satisfied. But this shouldn't be the case, because not every $x$ in $\mathbb{R}^n$ is compressible.
2) How do I choose $C$ so that this definition makes sense?
3) Is this definition even right? What am I missing?
The definition looks just like the rule for determining convergent series: using tests such as the ratio test, you can tell whether the corresponding series converges. As I recall, the harmonic series $\sum\frac{1}{k}$ doesn't converge, and for small $r$ the decay is slow; with $\frac{1}{\sqrt{k}}$, I wouldn't bet any money that's a sparse system. Are you sure $r > 0$ and not $r > 1$?
I should have made this a comment, sorry...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Getting the angles of a non-right triangle when all lengths are known I have a triangle and I know the lengths of all the sides, but I need to know the angles. It needs to work with non-right triangles as well.
I know it is possible, and I could have easily done this years ago when I was in trig, but it has completely slipped my mind.
I'd prefer a solution that I can code into a function, or something that does not require constructing right triangles from it.
| As has been mentioned in comments, the formula that you're looking for is the Law of Cosines.
The three formulations are:
$$c^2 = a^2 + b^2 - 2ab \cos(C)$$
$$b^2 = a^2 + c^2 -2ac \cos(B)$$
$$a^2 = b^2 + c^2 - 2bc \cos(A)$$
You can use this law to find all three angles. Alternatively, once you have one of the angles, you can find the next using the Law of Sines, which states that:
$${\sin{A} \over a} = {\sin{B} \over b} = {\sin{C} \over c}$$
And of course, once you have solved for two of the angles, you need only subtract their sum from $180$ degrees to get the measure of the third.
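Since the asker wants something to code into a function, here is a minimal Python sketch of the Law of Cosines rearranged for the angles (the function name and argument order are my own choices):

```python
import math

def triangle_angles(a, b, c):
    """Angles (A, B, C) in degrees opposite sides a, b, c, via the Law of
    Cosines; assumes a, b, c satisfy the triangle inequality."""
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    C = 180.0 - A - B  # the angles of a triangle sum to 180 degrees
    return A, B, C

print(triangle_angles(3, 4, 5))  # (36.87..., 53.13..., 90.0)
```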
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Binomial coefficient $\sum_{k \leq m} \binom{m-k}{k} (-1)^k$ This is example 3 from Concrete Mathematics (Section 5.2, p. 177 in the 1995 edition). Although the proof is given in the book (based on a recurrence), I was trying to find an alternative solution, noticing that half of the terms are 0 (denote for simplicity $\frac{m}{2}=s$):
$$
\sum_{k=0}^{m} \binom{m-k}{k} (-1)^k = \sum_{k=0}^{s}\binom{s+k}{s-k}(-1)^{s-k}
$$
I think it would be good to absorb the $(-1)^k$ term into the binomial coefficient, but I can't think of much else. Wolfram Alpha doesn't give a closed expression for either this or the one in the book.
| For a nice proof, see Arthur T. Benjamin and Jennifer J. Quinn: Proofs that Really Count, Identity 172, pp. 85-86.
Sketch of the proof:
Clearly, $\binom{m-k}k$ counts the number of ways that you can tile a $1×m$ board with $1×2$ dominoes (denoted by $d$) and $1×1$ squares (denoted by $s$) such that the number of dominoes is $k$ (and the number of squares is $m-2k$).
Let $\mathcal E$ denote the set of $m$-tilings with an even number of dominoes, let $\mathcal O$ denote the set of $m$-tilings with an odd number of dominoes.
With these notations, we have to calculate $|\mathcal E|-|\mathcal O|$. The answer is $0$ or $\pm1$, depending on the remainder of $m$ modulo $6$ (see sigma.z.1980's answer for the details). As a proof, we will give a(n almost) bijection between $\mathcal E$ and $\mathcal O$.
Since I don't want to draw figures, I encode tilings in a sequence (from left to right), for example $sdd$ means a square followed by two dominoes. For a tiling (sequence) $S$, I denote the concatenation $S\dots S$ ($t$ times) by $S^t$, and let $S^0$ be the empty sequence. The size of a domino is $2$, the size of a square is $1$, and the size of a sequence is the sum of the sizes of its elements.
With these notations, every tiling $S$ can be uniquely written as $S=(sd)^tR$ for some $t\in\Bbb N_0$, where $R$ is a tiling such that it doesn't start with $sd$. If the size of $R$ is at least $2$, then $R$ either starts with $d$ or with $ss$. If it starts with $d$, we replace this $d$ with $ss$; if it starts with $ss$, we replace this $ss$ with $d$. Note that we defined an involution $\phi$ on the set of $m$-tilings here and $\phi$ changes the parity of the number of dominoes, so we found the required correspondence between $\mathcal E$ and $\mathcal O$.
If $m$ has the form $3l+2$, then $\phi(S)$ is defined for all tilings $S$, thus $|\mathcal E|-|\mathcal O|=0$. If $m$ has the form $6l+1$, then $\phi(S)$ is not defined for $S=(sd)^{2l}s$ (which is in $\mathcal E$), and it is defined everywhere else, thus $|\mathcal E|-|\mathcal O|=1$. The rest of the cases are analogous.
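If you want to check the mod-6 pattern numerically, here is a quick Python sketch of the signed count $|\mathcal E|-|\mathcal O|$ (written directly as the original sum):

```python
from math import comb

def signed_tilings(m):
    # sum_k C(m-k, k) * (-1)^k, i.e. |E| - |O| for 1 x m boards
    return sum((-1)**k * comb(m - k, k) for k in range(m // 2 + 1))

for m in range(13):
    print(m, m % 6, signed_tilings(m))  # the value depends only on m mod 6
```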
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
orthonormal basis in $l^{2}$ I need an orthonormal basis in $l^{2}$. One possible choice would be to take the sequences $\{1,0,0,0,...\}, \{0,1,0,0,...\}, \{0,0,1,0,...\}$, but I need a basis where only finitely many components of the basis vectors are zero. Does anyone know a way to construct such a basis? One possible vector for such a basis would be $\{1,1/2,1/3,1/4,...\}$ divided by its norm. However, I don't know how to find similar vectors that are orthogonal to this one and to each other.
How about this: consider $w = (1,1/2,1/3,1/4,\dots)$ (or any other vector with no zero entry), complement it with the standard basis vectors $e_1$, $e_2$, $\dots$ to a complete set, and apply Gram-Schmidt orthonormalization?
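As an illustration of the hint, here is a small numpy sketch in a finite-dimensional truncation (the truncation size is an arbitrary choice; numpy's QR factorization plays the role of Gram-Schmidt on the columns):

```python
import numpy as np

N = 6                                   # finite-dimensional truncation
w = 1.0 / np.arange(1, N + 1)           # (1, 1/2, 1/3, ...)
V = np.column_stack([w] + [np.eye(N)[:, i] for i in range(N - 1)])
Q, _ = np.linalg.qr(V)                  # QR = Gram-Schmidt on the columns
print(np.round(Q.T @ Q, 10))            # identity: columns are orthonormal
print(Q[:, 0])                          # first vector: w normalized (up to sign)
```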
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Do two injective functions prove bijection? I'm trying to prove $|A| = |B|$, and I have two injective functions $f:A \to B$ and $g:B \to A$. Is this enough to prove there is a bijection, which would prove $|A| = |B|$? It seems logical that it is, but I can't find a definitive answer on this.
All I found is this yahoo answer:
One useful tool for proving that two sets admit a bijection between
them is a theorem which says that if there is an injective function $f: A \to B$ and an injective function $g: B \to A$ then there is a bijective
function $h: A \to B$. The theorem doesn't really tell you how to find $h$,
but it does prove that $h$ exists. The theorem has a name, but I forget
what it is.
But he doesn't give the theorem's name, and Yahoo answers are often unreliable, so I don't dare to base my proof on just this quote.
Yes, this is true; it is called the Cantor–Bernstein–Schroeder theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Complex number: equation I would like a hint to solve this equation: $\forall n\geq 1$,
$$\sum_{k=0}^{2^n-1}e^{itk}=\prod_{k=1}^{n}\{1+e^{it2^{k-1}}\} \qquad \forall t \in \mathbb{R}.$$
I went for induction but without too much success; I will keep trying, but if you have a hint...
Many thanks.
| Fix $n\in \mathbb{N}$ and $t\in \mathbb{R}$.
* If $t\equiv 0 \pmod{2\pi}$, your equality is obviously true, for it reduces to:
$$\sum_{k=0}^{2^n -1} 1 =2^n = \prod_{k=1}^n 2$$
(remember that $e^{\imath\ t}$ is $2\pi$-periodic).
* Now, assume $t\not\equiv 0 \pmod{2\pi}$. Evaluate separately:
$$\begin{split}
(1-e^{\imath\ t})\ \sum_{k=0}^{2^n-1}e^{itk} &= \sum_{k=0}^{2^n-1}e^{itk} - \sum_{k=1}^{2^n}e^{itk}\\
&= 1-e^{\imath\ t\ 2^n}
\end{split}$$
and:
$$\begin{split}
(1-e^{\imath\ t})\ \prod_{k=1}^{n} (1+e^{it2^{k-1}}) &= (1-e^{\imath\ t})\ (1+e^{\imath\ t})\ \prod_{k=2}^{n} (1+e^{it2^{k-1}})\\
&= (1-e^{\imath\ t\ 2})\ (1+e^{\imath\ t\ 2})\ \prod_{k=3}^{n} (1+e^{it2^{k-1}})\\
&= (1-e^{\imath\ t\ 4})\ (1+e^{\imath\ t\ 4})\ \prod_{k=4}^{n} (1+e^{it2^{k-1}})\\
&= \cdots\\
&= (1-e^{\imath\ t\ 2^{n-1}})\ (1+e^{\imath\ t\ 2^{n-1}})\\
&= 1-e^{\imath\ t\ 2^n}\; ;
\end{split}$$
therefore you got:
$$(1-e^{\imath\ t})\ \sum_{k=0}^{2^n-1}e^{itk} = (1-e^{\imath\ t})\ \prod_{k=1}^{n} (1+e^{it2^{k-1}})\; ,$$
which yields the desired equality (because $1-e^{\imath\ t}\neq 0$).
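A quick numerical sanity check of the identity (arbitrary test values for $n$ and $t$), sketched in Python:

```python
import numpy as np

n, t = 5, 0.7  # arbitrary test values
lhs = sum(np.exp(1j * t * k) for k in range(2**n))
rhs = np.prod([1 + np.exp(1j * t * 2**(k - 1)) for k in range(1, n + 1)])
print(np.isclose(lhs, rhs))  # True
```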
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
What's the difference between tuples and sequences? Both are ordered collections that can have repeated elements. Is there a difference? Are there other terms that are used for similar concepts, and how are these terms different?
Using a basic set-theoretic definition, a tuple $(a, b, c, \dots)$ represents an element of the Cartesian product of sets $A \times B \times C \times \cdots$.
In a vector space, a tuple represents the components of a vector in terms of basis vectors.
A sequence, on the other hand, represents a function (usually from the natural numbers) to some set $A$, and strictly speaking a sequence is then a subset of $\mathbb{N} \times A$.
For numeric sequences, it makes sense to consider whether they are convergent. One could add the elements of a numeric sequence to get a series and consider whether the corresponding series is convergent. I can't think of any equivalent concept for tuples, even when they comprise numeric values.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 8,
"answer_id": 0
} |
Double Subsequences Suppose that $\{a_{n}\}$ and $\{b_{n}\}$ are bounded. Prove that $\{a_{n}b_{n}\}$ has a convergent subsequence.
In class this is how my professor argued:
By the Bolzano-Weierstrass Theorem, there exists a subsequence $\{a_{n_k}\}$ that converges to $a$. Since $\{b_n\}$ is bounded, $\{b_{n_k}\}$ is also bounded. So by the Bolzano-Weierstrass Theorem, there exists a subsequence of $\{b_{n_k}\}$ namely $\{b_{n_{{k_j}}}\}$ such that $\{b_{n_{{k_j}}}\}$ converges to $b$.
In particular $\{a_{n_{{k_j}}}\}$ will converge to $a$. And note that $\{a_{n_{{k_j}}}b_{n_{{k_j}}}\}$ is a subsequence of $\{a_{n}b_{n}\}$. So $a_{n_{{k_j}}}b_{n_{{k_j}}} \to ab$.
My question is: why do we have to use so many subsequences? Is it wrong to argue as follows?
$\{a_{n}\},\{ b_{n} \}$ are both bounded, so by the Bolzano-Weierstrass Theorem, both sequences have a convergent subsequence. Namely $a_{n_k} \to a$ and $b_{n_k} \to b$. Then note that $\{a_{n_k}b_{n_k}\}$ is a subsequence of $\{a_{n}b_{n}\}$ which converges to $ab$. And we are done.
| The problem with your argument is that by writing $\{a_{n_k}\}$ and $\{b_{n_k}\}$ you are implicitly (and incorrectly) assuming that the convergent subsequences of $\{a_n\}$ and $\{b_n\}$ involve the same terms. In general, there is no reason why this should be the case. We need to introduce all of the subsequences that we do to get around this problem.
EDIT: Here's a specific example.
Take $a_n$ to be $0$ for $n$ odd, and $(-1)^{n/2}$ for $n$ even. Take $b_n$ to be $(-1)^{(n+1)/2}$ for $n$ odd, and $0$ for $n$ even. These are bounded sequences. Bolzano-Weierstrass guarantees that some subsequence converges (and indeed, there are plenty of convergent subsequences, with different limit points). One possibility would be to take the terms $a_n$ for $n = 0,4,8,16,...$ and $b_n$ for $n = 0,2,4,6,...$ In other words, $a_{n_k} = a_{4k}$, and $b_{n_k} = b_{2k}$. Therefore, your candidate subsequence $\{a_{n_k} b_{n_k}\}$ of $\{a_n b_n\}$ would be the sequence $\{a_{4k} b_{2k}\}$. But this doesn't make any sense, because the terms of the product sequence $\{a_n b_n\}$ are products where the indices on the $a$'s and $b$'s match.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Partial derivative involving trace of a matrix Suppose that I have a symmetric Toeplitz $n\times n$ matrix
$$\mathbf{A}=\left[\begin{array}{cccc}a_1&a_2&\cdots& a_n\\a_2&a_1&\cdots&a_{n-1}\\\vdots&\vdots&\ddots&\vdots\\a_n&a_{n-1}&\cdots&a_1\end{array}\right]$$
where $a_i \geq 0$, and a diagonal matrix
$$\mathbf{B}=\left[\begin{array}{cccc}b_1&0&\cdots& 0\\0&b_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&b_n\end{array}\right]$$
where $b_i = \frac{c}{\beta_i}$ for some constant $c>0$, with $\beta_i>0$. Let
$$\mathbf{M}=\mathbf{A}(\mathbf{A}+\mathbf{B})^{-1}\mathbf{A}$$
Can one express a partial derivative $\partial_{\beta_i} \operatorname{Tr}[\mathbf{M}]$ in closed form, where $\operatorname{Tr}[\mathbf{M}]$ is the trace operator?
| Define some variables for convenience
$$\eqalign{
P &= {\rm Diag}(\beta) \cr
B &= cP^{-1} \cr
b &= {\rm diag}(B) \cr
S &= A+B \cr
M &= AS^{-1}A \cr
}$$
all of which are symmetric matrices, except for $b$ which is a vector.
Then the function and its differential can be expressed in terms of the Frobenius (:) product as
$$\eqalign{
f &= {\rm tr}(M) \cr
&= A^2 : S^{-1} \cr\cr
df &= A^2 : dS^{-1} \cr
&= -A^2 : S^{-1}\,dS\,S^{-1} \cr
&= -S^{-1}A^2S^{-1} : dS \cr
&= -S^{-1}A^2S^{-1} : dB \cr
&= -S^{-1}A^2S^{-1} : c\,dP^{-1} \cr
&= c\,S^{-1}A^2S^{-1} : P^{-1}\,dP\,P^{-1} \cr
&= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : dP \cr
&= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : {\rm Diag}(d\beta) \cr
&= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big)^T d\beta \cr
}$$
So the derivative is
$$\eqalign{
\frac{\partial f}{\partial\beta} &= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big) \cr
&= \frac{1}{c}\,{\rm diag}\big(BS^{-1}A^2S^{-1}B\big) \cr
&= \Big(\frac{b\circ b}{c}\Big)\circ{\rm diag}\big(S^{-1}A^2S^{-1}\big) \cr\cr
}$$
which uses Hadamard ($\circ$) products in the final expression. This is the same as joriki's result, but with more details.
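For anyone who wants to double-check the formula numerically, here is a sketch comparing it against central finite differences (random test matrices; the names and sizes are illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n, c = 4, 2.0
A = toeplitz(rng.uniform(0.1, 1.0, n))      # symmetric Toeplitz with a_i >= 0
beta = rng.uniform(0.5, 2.0, n)

def f(beta):
    S = A + np.diag(c / beta)               # S = A + B, with B = c * P^{-1}
    return np.trace(A @ np.linalg.solve(S, A))

Pinv = np.diag(1.0 / beta)
Sinv = np.linalg.inv(A + np.diag(c / beta))
grad = c * np.diag(Pinv @ Sinv @ A @ A @ Sinv @ Pinv)

eps = 1e-6                                  # central finite differences
fd = np.array([(f(beta + eps * np.eye(n)[i]) - f(beta - eps * np.eye(n)[i])) / (2 * eps)
               for i in range(n)])
print(np.allclose(grad, fd, atol=1e-5))     # True
```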
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Formula for calculating residue at a simple pole. Suppose $f=P/Q$ is a rational function and suppose $f$ has a simple pole at $a$. Then a formula for calculating the residue of $f$ at $a$ is
$$
\text{Res}(f(z),a)=\lim_{z\to a}(z-a)f(z)=\lim_{z\to a}\frac{P(z)}{\frac{Q(z)-Q(a)}{z-a}}=\frac{P(a)}{Q'(a)}.
$$
In the second equality, how does the $Q(z)-Q(a)$ appear? I only see that it would equal $\lim_{z\to a}\frac{P(z)}{\frac{Q(z)}{z-a}}$.
| Since the pole at $\,a\,$ is simple we have that
$$Q(z)=(z-a)H(z)\,\,,\,H(z)\,\,\text{a polynomial}\,\,,\,P(a)\cdot H(a)\neq 0\,$$
Thus, as polynomials are defined and differentiable everywhere:
$$Res_{z=a}(f)=\lim_{z\to a}\frac{P(z)}{H(z)}=\frac{P(a)}{H(a)}$$
and, of course,
$$Q'(z)=H(z)+(z-a)H'(z)\xrightarrow [z\to a]{}H(a)$$
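As a sanity check, sympy can compute both sides of the formula for a concrete example (the particular $P$ and $Q$ are arbitrary):

```python
from sympy import symbols, residue, simplify

z = symbols('z')
P, Q = z + 2, (z - 1) * (z + 3)   # simple pole at z = 1
a = 1
print(residue(P / Q, z, a))                            # 3/4
print(simplify(P.subs(z, a) / Q.diff(z).subs(z, a)))   # 3/4, matching P(a)/Q'(a)
```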
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Theorem formulation "Given ..., then ..." or "For all ..., ..."? When formulating a theorem, which of the following forms would be preferred, and why? Or is there another even better formulation? Are there reasons for or against mixing them in one paper?
Formulation 0: If $x\in X$, then (expression involving $x$).
Formulation 1: Given $x$ in $X$, then (expression involving $x$).
Formulation 2: For all $x$ in $X$ it holds that (expression involving $x$).
I guess I'm a simpleton; I always preferred a theorem stated memorably:
Theorem 1. Even numbers are interesting.
Compared to...
Theorem 2. If $x$ is an even number, then $x$ is interesting.
...which is long-winded; or...
Theorem 3. Given an even number $x$, $x$ is interesting.
Theorem 3 also has odd consonance (writing "...$x$, $x$ is ..." seems like bad style).
Theorem 4. For all even numbers $x$, we have $x$ be interesting.
This also suffers from peculiar consonance ("we have $x$ be interesting" is quite alien, despite being proper grammar).
Theorem 1 has a quick and succinct enunciation ("Even numbers are interesting"), the proof can begin with a specification "Let $x$ be an even number. Then [proof omitted]."
Also we see theorem 1 has additional merit: it avoids needless symbols.
There is one warning I should give: each theorem is different. Some theorems can be stated beautifully without symbols (e.g., Theorem 1). Others cannot be coherently stated without symbols. There is no "iron law" on how theorems should be formulated; it's a case-by-case problem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What is the inverse cycle of permutation? Given cyclic permutations, for example,
$σ = (123)$, $σ_{2} = (45)$,
what are the inverse cycles $σ^{-1}$, $σ_2^{-1}$?
Regards.
Every permutation of $n>1$ elements can be expressed as a product of 2-cycles, and every 2-cycle (transposition) is its own inverse. Therefore the inverse of a permutation is obtained by reversing the product of its 2-cycles, just as $(ab)^{-1} = b^{-1}a^{-1}$.
For the examples in the question, $\sigma^{-1} = (132)$ and $\sigma_2^{-1} = (45)$.
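For the concrete cycles in the question, sympy agrees (a minimal sketch; note that sympy's permutations are zero-indexed, so the cycle $(123)$ becomes $(0\,1\,2)$):

```python
from sympy.combinatorics import Permutation

sigma = Permutation(0, 1, 2)       # the cycle (1 2 3), zero-indexed
sigma2 = Permutation(3, 4)         # the cycle (4 5), zero-indexed
print((sigma ** -1).cyclic_form)   # [[0, 2, 1]], i.e. the cycle (1 3 2)
print((sigma2 ** -1).cyclic_form)  # [[3, 4]]: a transposition is its own inverse
```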
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 4,
"answer_id": 3
} |
What are the probabilities of getting a "Straight flush" in a poker game? I'm not much of a fan of poker, but I'd like to study the game.
What are the probabilities of getting a Straight flush in a Poker game considering this factors?
* Number of players
* How the cards are dealt
* Who the first player is
| If you are dealt five cards, there are $4\times10 =40$ possible straight flushes ($4\times 9 =36$ if you exclude royal flushes) out of the ${52 \choose 5}= 2598960$ possible hands. So the probability is $\dfrac{40}{2598960} = \dfrac{1}{64974} \approx 0.00001539\ldots$.
The probability will increase if you can have more than five cards to choose from. The probability that somebody will have a straight flush will increase if the number of players increases. It may reduce if you might drop out of the betting before seeing all five cards.
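The five-card figure is easy to reproduce in Python:

```python
from math import comb

hands = comb(52, 5)              # 2598960 possible five-card hands
straight_flushes = 4 * 10        # 10 per suit, from A-to-5 up to the royal flush
print(straight_flushes / hands)  # ~1.539e-05, i.e. 1/64974
```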
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/122984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Definition of the gamma function I know that the Gamma function with argument $(-\frac{1}{2})$ -- in other words $\Gamma(-\frac{1}{2})$ -- is equal to $-2\pi^{1/2}$. However, the definition is $\Gamma(k)=\int_0^\infty t^{k-1}e^{-t}\,dt$; how can $\Gamma(-\frac{1}{2})$ be obtained from the definition? WA says the integral does not converge...
The functional equation $\Gamma(z+1)=z\Gamma(z)$ allows you to define $\Gamma(z)$ for all $z$ with real part greater than $-1$, other than $z=0$: just set $\Gamma(z) = \Gamma(z+1)/z$, and the integral definition of $\Gamma(z+1)$ does converge. The value $\Gamma(\frac12)=\sqrt\pi$ is well known and can be derived from Euler's reflection formula; hence $\Gamma(-\frac12)=\Gamma(\frac12)/(-\frac12)=-2\sqrt\pi$.
(Repeated use of the functional equation shows that $\Gamma$ can be defined for all complex numbers other than nonpositive integers.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
If $f$ is continuously differentiable and $f'(r) < 1,$ then $x'=f(x/t)$ has no other solution tangent at zero to $\phi(t)=rt$ Suppose $f:\mathbb{R}\to\mathbb{R}$ is a continuously differentiable function such that $f(r)=r$ for some $r.$ Then how can one show that
If $f'(r) < 1,$ then the problem
$$x'=f(x/t)$$ has no other solution tangent at zero to $\phi(t)=rt, t>0$.
Tangent here means
$$\lim_{t\to 0^{+}}\frac{\psi(t)-\phi(t)}{t}=0$$
I could only prove that $\psi(0^+)=0$ and $\psi'(0^+)=r.$ The problem is how to use the fact that $f'(r) < 1.$
| Peter Tamaroff gave a very good hint in comments. Here is what comes out of it.
Suppose that $x$ is a solution tangent to $rt$ and not equal to it. Since solution curves do not cross, either (i) $x(t)>rt$ for all $t>0$, or (ii) $x(t)<rt$ for all $t>0$. I will consider (i), the other case being similar.
By assumption, $x/t\to r$ as $t\searrow 0$. From $$f(x/t)=r+f'(r)(x/t-r)+o(x/t-r)$$ and $f'(r)<1$ we obtain
$$t(x/t)'=f(x/t)-x/t = (f'(r)-1)(x/t-r)+o(x/t-r)$$
which is negative for small $t$. This means that $x/t$ increases as $t\searrow 0$, contradicting $x/t\to r$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof by exhaustion: all positive integral powers of two end in 2, 4, 6 or 8 While learning about various forms of mathematical proofs, my teacher presented an example question suitable for proof by exhaustion:
Prove that all $2^n$ end in 2, 4, 6 or 8 ($n\in\mathbb{Z},n>0$)
I have made an attempt at proving this, but I cannot complete the proof without making assumptions that reduce the rigour of the answer.
All positive integral powers of two can be represented as one of the four cases ($k\in\mathbb{Z},k>0$, same for $y$):
* $2^{4k}=16^k=10y+6$
* $2^{4k+1}=2*16^k=10y+2$
* $2^{4k+2}=4*16^k=10y+4$
* $2^{4k+3}=8*16^k=10y+8$
The methods of proving the four cases above were similar; here is the last one:
$8*16^k=8*(10+6)^k$
Using binomial expansion,
$8*(10+6)^k=8*\sum\limits_{a=0}^k({k \choose a}10^a6^{k-a})$
All of the sum terms where $a\neq0$ end in zero, as they are multiples of $10^a$ and therefore multiples of 10. The sum term where $a=0$ is $6^k$, because ${k\choose0}=10^0=1$. Therefore, the result of the summation ends in six.
Assuming that all positive integral powers of six end in six, and eight multiplied by any number ending in six ends in eight, all powers of two of the form $2^{4k+3}$ end in eight.
That conclusion doesn't seem very good because of the two assumptions I make. Can I assume them as true, or do I need to explicitly prove them? If I do need to prove them, how can I do that?
| Hint $\ $ mod $10,\:$ the powers of $\:2\:$ repeat in a cycle of length $4,\:$ starting with $2,\:$ since
$$\rm 2^{K+4} = 2^K(1+15) = 2^K + 30\cdot2^{K-1}\equiv\: 2^K\ \ (mod\ 10)\quad for\quad K\ge 1$$
Now it suffices to prove by induction that if $\rm\:f:\mathbb N\to \mathbb N = \{1,2,3\ldots\}\:$ then
$$\rm f(n+4)\: =\: f(n)\ \ \Rightarrow\ \ f(n)\in \{f(1),\:f(2),\:f(3),\:f(4)\}$$
Informally: once a cyclic recurrence begins to loop, all subsequent values remain in the loop.
Similarly, suppose there are integers $\rm\:a,b,\:$ such that $\rm\: f(n+2)\ =\: a\:f(n+1) + b\:f(n)\:$ for all $\rm\:n\ge 1.\:$ Show that $\rm\:f(n)\:$ is divisible by $\rm\:gcd(f(1),f(2))\:$ for $\rm\:n\ge 1$.
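A one-liner in Python makes the length-4 cycle of last digits visible:

```python
print([2**n % 10 for n in range(1, 13)])
# [2, 4, 8, 6, 2, 4, 8, 6, 2, 4, 8, 6]
```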
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Proving that a natural number is divisible by $3$ I am trying to show that $n^2 \bmod 3 = 0$ implies $n \bmod 3 = 0$.
This is part of a calculus course and I don't know anything about number theory. Any ideas how it can be done? Thanks!
| Hint $\rm\ (1+3k)^2 = 1 + 3\:(2k+3k^2)$
and $\rm\ \ \ (2+3k)^2 = 1 + 3\:(1+4k+3k^2)$
Said mod $3\!:\ (\pm1)^2 \equiv 1\not\equiv 0\ \ $ (note $\rm\: 2\equiv -1$)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Analysis Problem: Prove $f$ is bounded on $I$ Let $I=[a,b]$ and let $f:I\to {\mathbb R}$ be a (not necessarily continuous) function with the property that for every $x∈I$, the function $f$ is bounded on a neighborhood $V_{d_x}(x)$ of $x$. Prove that $f$ is bounded on $I$.
Thus far I have that,
Suppose, for contradiction, that $f$ is unbounded on $I$: then for all $n\in\mathbb{N}$ there exists $x_n\in[a,b]$ such that $|f(x_n)|>n$. By the Bolzano-Weierstrass theorem, since $I$ is bounded, the sequence $X=(x_n)$ is bounded, which implies there is a convergent subsequence $X'=(x_{n_r})$ of $X$ that converges to some $c\in[a,b]$. Since $I$ is closed and the elements of $X'$ belong to $I$, it follows from a previous theorem that I proved that $c\in I$. Here is where I get stuck: I want to use that the function $f$ is bounded on a neighborhood $V_{d_x}(x)$ of each $x$ somehow to show that $f$ is bounded on $I$. I'm not sure how to proceed.
(Here, $f$ being bounded on a neighborhood of $c$ means there exist a $d$-neighborhood $V_d(c)$ of $c$ and a constant $M>0$ such that $|f(x)|\leq M$ for all $x$ in $I \cap V_d(c)$.)
I would like to try a proof by contradiction somehow.
| If you cannot use the Heine-Borel theorem, argue via sup. Here's a sketch:
Let $A= \{ x \in I : f \text{ is bounded in } [a,x] \}$. Then $A$ is not empty, because $a\in A$. Also, $A$ is bounded above, because every element of $A$ is at most $b$.
Prove that if $x\in A$ and $x<b$ then $x+h\in A$ for some $h>0$ using that $f$ is locally bounded at $x$. This means that no $x< b$ is an upper bound for $A$, which implies that $b=\sup A$.
Finally, using that $f$ is locally bounded at $b$, argue that $b \in A$, thus proving that $f$ is bounded in $I$.
This proof appears in Spivak's Calculus.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
} |
The preimage of $(-\infty,a]$ under $f$ is closed for $a \in \mathbb{R}$, then $f$ is semi-continuous. So I've been thinking about this for the last two hours, but I am stuck.
Suppose $f:X \to \mathbb{R}$ where $X$ is a topological space.
$f$ is said to be semicontinuous if for any $x \in X$ and $\epsilon > 0$, there is a neighborhood of $x$ such that $f(x) - f(x') < \epsilon$ for all $x'$ in the neighborhood of $x$.
The question is: if $f^{-1}((-\infty,a])$ is closed for every $a \in \mathbb{R}$, show that $f$ is lower semi-continuous.
I started with choosing an $x \in f^{-1}((-\infty,a])$ and letting $\epsilon > 0$. So far I don't know much characterization of closed sets in a topological space except it is the complements of open sets.
Not sure if this is correct, but I approached this problem with the idea of nets. Since $f^{-1}((-\infty,a])$ is closed, for each $x \in f^{-1}((-\infty,a])$ there's a net $\{x_i\}_{i \in I}$ that converges to $x$ (not sure if I'm allowed to do that). Pick any neighborhood of $x$, denoted $N_x$ (which will contain terms from $f^{-1}((-\infty,a])$), which contains an open set containing $x$. Let $f(x) = b$. Then set $N_{x'}:=[N_x \backslash f^{-1}((-\infty, b))] \backslash [\mathrm{the \ boundary \ of \ this \ set \ to \ the \ left}]$. So this gives me an open set that contains $x$ and such that $f(x) - f(x') < \epsilon$ for all $x' \in N_{x'}$. So $f$ must be semi-continuous.
Not sure if there is more to know about closed sets in a topological space, except its complement is open.
Any hint on how to think about this is appreciated.
I’m guessing from what you’ve written that your definition of lower semi-continuity is such that you want to start with an arbitrary $x_0\in X$ and $\epsilon>0$ and show that there is an open nbhd $U$ of $x_0$ such that $f(x)>f(x_0)-\epsilon$ for every $x\in U$. You know that for any $a\in\Bbb R$, $f^{-1}[(-\infty,a]]$ is closed, so its complement, which is $f^{-1}[(a,\infty)]$, must be open. Take $a=f(x_0)-\epsilon$, and let $U=f^{-1}[(a,\infty)]$. Does this $U$ meet the requirements?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What does proving the Collatz Conjecture entail? From the get-go: I'm not trying to prove the Collatz Conjecture where hundreds of smarter people have failed. I'm just curious.
I'm wondering where one would have to start in proving the Collatz Conjecture. That is, based on the nature of the problem, what's the starting point for attempting to prove it? I know that it can be represented in many forms as an equation (that you'd have to recurse over):
$$\begin{align*}
f(n) &=
\left\{
\begin{array}{ll}
n/2 &\text{if }n \bmod 2=0 \\
3n+1 &\text{if }n \bmod 2=1
\end{array}
\right.\\
\strut\\
a_i&=
\left\{
\begin{array}{ll}
n &\text{if }i=0\\
f(a_{i-1})&\text{if }i>0
\end{array}
\right.\\
\strut\\
a_i&=\frac{1}{2}a_{i-1} - \frac{1}{4}(5a_{i-1} + 2)((-1)^{a_{i-1}} - 1)
\end{align*}$$
Can you just take the equation and go from there?
Other ways I thought of would be attempting to prove it for only odd or even numbers, or trying to find an equation that matches the graph of a number vs. its "Collatz length".
I'm sure there's other ways; but I'm just trying to understand what, essentially, proving this conjecture would entail and where it would begin.
| Proving this conjecture indirectly would entail two things:
* Proving that there is no number $n$ which increases indefinitely
* Proving that there is no number $n$ which loops indefinitely (besides the 4, 2, 1 loop)
If one does these things then you have an answer to the Collatz conjecture (and if you find a case of either of these things you have disproven the Collatz conjecture, obviously).
Of course this is just one approach that comes to mind, there are other possible methods which are beyond my own knowledge
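A minimal Python sketch of the iteration in question (the function name is my own; this only computes stopping times, it proves nothing):

```python
def collatz_length(n):
    """Number of steps for n to reach 1 under the Collatz map."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print([collatz_length(n) for n in range(1, 10)])
# [0, 1, 7, 2, 5, 8, 16, 3, 19]
```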
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 6,
"answer_id": 3
} |
How do I scale my error bars when I scale my data? I am plotting distributions of data with the standard deviation and median of my data. Now when I want to scale my median by another variable, how do I need to modify the standard deviation?
| Let $X$ be some real-value random variable and $m$ be its median:
$$
\mathsf P\{X\leq m\} = \mathsf P\{X>m\}.
$$
Clearly, to scale median by the factor $\lambda> 0$ you just scale $X$ by the same factor since
$$
\mathsf P\{\lambda X\leq \lambda m\} = \mathsf P\{\lambda X>\lambda m\}.
$$
Note that although for the variance we have $\mathsf V[\lambda X] = \lambda^2 \mathsf V[X]$, the standard deviation scales with the same factor $\lambda$, being the square root of the variance:
$$
\sigma[\lambda X] = |\lambda| \sigma[X].
$$
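A quick numpy illustration of both scaling rules (arbitrary sample data):

```python
import numpy as np

x = np.random.default_rng(1).normal(10.0, 2.0, 100_000)
lam = 3.5
print(np.median(lam * x), lam * np.median(x))  # equal: the median scales by lambda
print(np.std(lam * x), lam * np.std(x))        # equal: the std scales by |lambda|
```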
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Multiplying exponents with variables inside Why is
$$(-1)^n(2^{n+2}) = (-2)^{n+2} ?$$
My thinking is that $-1^n \times 2^{n+2}$ should be $-2^{2n+2}$, but clearly this is not the case. Why is the variable essentially ignored? Is there a special case of multiplication I'm unaware of?
| The exponent rules (for positive integer exponents, at any rate) are:
* $(a^n)^m = a^{nm}$
* $(ab)^n = a^nb^n$
* $a^na^m = a^{n+m}$.
Here, $a$ and $b$ are any real numbers, and $n$ and $m$ are positive integers. (The rules are valid in greater generality, but one has to be careful with the values of $a$ and $b$; also, the 'explanation' below is not valid for exponents that are not positive integers.)
To see these, remember what the symbols mean: $a^1 = a$, and $a^{n+1}=a^na$; that is, $a^k$ "means"
$$a^k = \underbrace{a\times a\times\cdots\times a}_{k\text{ factors}}$$
The following can be proven formally with induction, but informally we have:
$$\begin{align*}
(a^n)^m &= \underbrace{a^n\times a^n\times\cdots\times a^n}_{m\text{ factors}}\\
&= \underbrace{\underbrace{a\times\cdots\times a}_{n\text{ factors}}\times\cdots \times \underbrace{a\times\cdots\times a}_{n\text{ factors}}}_{m\text{ products}}\\
&= \underbrace{a\times\cdots \times a\times a\times\cdots \times a\times\cdots \times a}_{nm\text{ factors}}\\
&= a^{nm}
\end{align*}$$
Similarly,
$$\begin{align*}
(ab)^n &= \underbrace{(ab)\times (ab)\times\cdots\times (ab)}_{n\text{ factors}}\\
&= \underbrace{(a\times a\times\cdots \times a)}_{n\text{ factors}}\times\underbrace{(b\times b\times\cdots \times b)}_{n\text{ factors}}\\
&= a^nb^n,
\end{align*}$$
and
$$\begin{align*}
a^{n+m} &= \underbrace{a\times a\times\cdots\times a}_{n+m\text{ factors}}\\
&= \underbrace{(a\times a\times \cdots \times a)}_{n\text{ factors}}\times\underbrace{(a\times a\times \cdots \times a)}_{m\text{ factors}}\\
&= a^na^m.
\end{align*}$$
You have
$$(-1)^n2^{n+2}.$$
Because the bases are different ($-1$ and $2$), you do not apply rule 3 above (which is what you seem to want to do); instead, you want to try to apply rule 2. You can't do that directly because the exponents are different. However, since $(-1)^2 = (-1)(-1) = 1$, we can first do this:
$$(-1)^n2^{n+2} = 1(-1)^n2^{n+2} = (-1)^2(-1)^n2^{n+2};$$
then we apply rule 3 to $(-1)^2(-1)^n$ to get $(-1)^{2+n} = (-1)^{n+2}$, and now we have the situation of rule 2, so we get:
$$(-1)^n2^{n+2} = (-1)^{n+2}2^{n+2} = \bigl( (-1)2\bigr)^{n+2} = (-2)^{n+2}.$$
(You seem to be trying to apply a weird combination of rules 2 and 3, to get that $a^nb^m = (ab)^{n+m}$; this is almost always false; the exponent rules don't let you do that)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Coercivity vs boundedness of operator The definitions of coercivity and boundedness of a linear operator $L$ between two $B$-spaces look similar: $\lVert Lx\rVert\geq M_1\lVert x\rVert$ and $\lVert Lx\rVert\leq M_2\lVert x\rVert$ for some constants $M_1$ and $M_2$. Thus, in order to show the existence of a solution of a PDE $Lu=f$, one needs to show that $L$ is coercive. But what if my operator $L$ happens to be bounded with $M_2 \leq M_1$?
What is the intuition behind these two concepts, given that they are based on computing and comparing the same quantities?
With boundedness everything is clear, because it is well known that a linear operator $L$ is continuous iff $L$ is bounded. With continuity of $L$ you can solve the equation by successive approximations. Moreover, you can apply the whole theory developed for continuous functions and, in particular, for continuous linear operators. Since continuity is a very natural condition when solving differential equations, we require $L$ to be bounded.
As for coercivity, note that it, in particular, implies injectivity. Injectivity guarantees uniqueness of the solution $u$ of the equation $Lu=f$. But when you are solving such an equation, it is desirable that the solution depend on the right-hand side $f$ continuously. Well, this property depends on $L$, and it is sufficient to require coercivity of $L$. Speaking functional-analytically, a bounded coercive operator is a linear homeomorphism between its domain and its range. Hence there is a "nice" correspondence between the initial data $f$ and the solution $u$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Evaluate $\int\limits_0^{\frac{\pi}{2}} \frac{\sin(2nx)\sin(x)}{\cos(x)}\, dx$
How to evaluate
$$ \int\limits_0^{\frac{\pi}{2}} \frac{\sin(2nx)\sin(x)}{\cos(x)}\, dx $$
I don't know how to deal with it.
| Method 1. Let $I(n)$ denote the integral. Then by addition formula for sine and cosine,
$$\begin{align*}
I(n+1) + I(n)
&= \int_{0}^{\frac{\pi}{2}} \frac{[\sin((2n+2)x) + \sin(2nx)]\sin x}{\cos x} \; dx \\
&= \int_{0}^{\frac{\pi}{2}} 2\sin((2n+1)x) \sin x \; dx \\
&= \int_{0}^{\frac{\pi}{2}} [\cos(2nx) - \cos((2n+2)x)] \; dx \\
&= 0,
\end{align*}$$
if $n \geq 1$. Thus we have $I(n+1) = -I(n)$ and by double angle formula for sine,
$$I(1) = \int_{0}^{\frac{\pi}{2}} \frac{\sin (2x) \sin x}{\cos x} \; dx = \int_{0}^{\frac{\pi}{2}} 2 \sin^2 x \; dx = \frac{\pi}{2}.$$
Therefore we have
$$I(n) = (-1)^{n-1} \frac{\pi}{2}.$$
Method 2. By the substitution $x \mapsto \pi - x$ and $x \mapsto -x$, we find that
$$\int_{0}^{\frac{\pi}{2}} \frac{\sin (2nx) \sin x}{\cos x} \; dx = \int_{\frac{\pi}{2}}^{\pi} \frac{\sin (2nx) \sin x}{\cos x} \; dx = \int_{-\frac{\pi}{2}}^{0} \frac{\sin (2nx) \sin x}{\cos x} \; dx.$$
Thus we have
$$\begin{align*}
I(n)
& = \frac{1}{4} \int_{-\pi}^{\pi} \frac{\sin (2nx) \sin x}{\cos x} \; dx \\
& = \frac{1}{4} \int_{|z|=1} \frac{\left( \frac{z^{2n} - z^{-2n}}{2i} \right) \left( \frac{z - z^{-1}}{2i} \right)}{\left( \frac{z + z^{-1}}{2} \right)} \; \frac{dz}{iz} \\
& = \frac{i}{8} \int_{|z|=1} \frac{(z^{4n} - 1) (z^2 - 1)}{z^{2n+1}(z^2 + 1)} \; dz.
\end{align*}$$
The last integrand has poles only at $z = 0$. (Note that the singularities at $z = \pm i$ are cancelled, since the numerator also contains those factors.) Expanding partially,
$$
\begin{align*}
\frac{(z^{4n} - 1) (z^2 - 1)}{z^{2n+1}(z^2 + 1)}
& = \frac{z^{2n-1} (z^2 - 1)}{z^2 + 1} - \frac{z^2 - 1}{z^{2n+1}(z^2 + 1)} \\
& = \frac{z^{2n-1} (z^2 - 1)}{z^2 + 1} - \frac{1}{z^{2n+1}} + \frac{2}{z^{2n+1}(z^2 + 1)} \\
& = \frac{z^{2n-1} (z^2 - 1)}{z^2 + 1} - \frac{1}{z^{2n+1}} + 2 \sum_{k=0}^{\infty} (-1)^{k} z^{2k-2n-1}.
\end{align*}$$
Thus the residue of the integrand at $z = 0$ is $2 (-1)^n$, and therefore
$$I(n) = \frac{i}{8} \cdot 2\pi i \cdot 2 (-1)^{n} = (-1)^{n-1}\frac{\pi}{2}.$$
Method 3. (Advanced Calculus) This method is just a sledgehammer method, but it reveals an interesting fact that even a nice integral with nice value at each integer point can yield a very bizarre answer for non-integral argument.
By the substitution $x \mapsto \frac{\pi}{2} - x$, we have
$$I(n) = (-1)^{n-1} \int_{0}^{\frac{\pi}{2}} \frac{\sin 2nx}{\sin x} \cos x \; dx.$$
Now, from a lengthy calculation, we find that for all $w > 0$,
$$ \int_{0}^{\frac{\pi}{2}} \frac{\sin 2wx}{\sin x} \cos x \; dx = \frac{\pi}{2} + \left[ \log 2 - \psi_0 (1 + w) + \psi_0 \left( 1 + \frac{w}{2}\right) - \frac{1}{2w} \right] \sin \pi w.$$
Thus plugging a positive integer $n$, we obtain
$$ \int_{0}^{\frac{\pi}{2}} \frac{\sin 2nx}{\sin x} \cos x \; dx = \frac{\pi}{2},$$
which immediately yields the formula for $I(n)$.
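All three methods can be cross-checked numerically, e.g. with scipy (a rough sketch; the integrand's apparent singularity at $\pi/2$ is removable since $\sin(n\pi)=0$):

```python
import numpy as np
from scipy.integrate import quad

def I(n):
    f = lambda x: np.sin(2 * n * x) * np.sin(x) / np.cos(x)
    val, _ = quad(f, 0, np.pi / 2, limit=200)
    return val

for n in range(1, 6):
    print(n, I(n), (-1) ** (n - 1) * np.pi / 2)  # the two columns agree
```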
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Let $p$ be a prime. Prove that $p$ divides $ab^p−ba^p$ for all integers $a$ and $b$. Let $p$ be a prime. Prove that $p$ divides $ab^p−ba^p$ for all integers
$a$ and $b$.
| $$ab^p-ba^p = ab(b^{p-1}-a^{p-1})$$
If $p\mid ab$, then $p\mid(ab^p-ba^p)$. If $p \nmid ab$, then $\gcd(p,a)=\gcd(p,b)=1$, so $b^{p-1} \equiv a^{p-1} \equiv 1\pmod{p}$ (by Fermat's little theorem).
This further implies that $\displaystyle{p|(b^{p-1}-a^{p-1}) \Rightarrow p|(ab^p-ba^p)}$.
Q.E.D.
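As a quick sanity check in Python (one small prime; the ranges are arbitrary):

```python
p = 7
print(all((a * b**p - b * a**p) % p == 0
          for a in range(-10, 11) for b in range(-10, 11)))  # True
```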
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/123910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
The Fundamental Theorem of Algebra and Complex Numbers We had a quiz recently in a linear algebra course, and one of the true/false question states that
The Fundamental Theorem of Algebra asserts that addition, subtraction, multiplication and division for real numbers can be carried over to complex numbers as long as division by zero is avoided.
According to our teacher, the above statement is true. When I asked him about the reasoning behind it, he said something about the FTA asserting that the associative, commutative and distributive laws are valid for complex numbers, but I couldn't see this. Can someone explain whether the above statement is true, and why? Thanks.
| The statement is false.
The Fundamental Theorem of Algebra asserts that any non-constant polynomial with complex coefficients has a root in the complex numbers. This does not state anything about the relationship between the complex numbers and the real numbers; and any proof of the FTA will certainly use the associativity and commutativity of addition and multiplication in the complex numbers, as well as multiplication's distributivity over addition, so the FTA can't imply those properties.
The statements
* the associative, commutative and distributive laws are valid for complex numbers
* addition, subtraction, multiplication and division for real numbers can be carried over to complex numbers as long as division by zero is avoided
might be summarized by the statement "the complex numbers form a ring, which is a division algebra over the real numbers".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Sum of two closed sets in $\mathbb R$ is closed? Is there a counterexample for the claim in the question subject, that a sum of two closed sets in $\mathbb R$ is closed? If not, how can we prove it?
(By sum of sets $X+Y$ I mean the set of all sums $x+y$ where $x$ is in $X$ and $y$ is in $Y$)
Thanks!
It's worth mentioning that:
If one set is closed and bounded (hence compact) and the other is closed, then the sum is closed.
Since closedness can be characterized by sequences in $\Bbb{R}^n$: if $(x_n)$ is a convergent sequence in $A+B$, we need to show that its limit still lies in $A+B$. Assume $A$ is compact and $B$ is closed.
Since $x_n= a_n +b_n \to x$, compactness implies sequential compactness, hence $a_{n_k} \to a\in A$ for some subsequence. Now $x_{n_k} \to x$, which means the subsequence $b_{n_k} = x_{n_k} - a_{n_k}$ converges to $x-a$; since $B$ is closed, $x-a \in B$. Hence $x = a+(x-a) \in A+B$, which means the sum is closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61",
"answer_count": 5,
"answer_id": 0
} |
How to evaluate one improper integral Please show me the detailed solution to the question:
Compute the value of
$$\int_{0}^{\infty }\frac{\left( \ln x\right) ^{40021}}{x}dx$$
Thank you a million!
Since this is an exercise on improper integrals, it is natural to replace the
upper and lower limits by $R$ and $\frac{1}{R}$ respectively and define the integral to be the limit as $R \rightarrow \infty$. Then write the integral as the sum of the integral from $\frac{1}{R}$ to $1$ and the integral from $1$ to $R$. In the second integral make the usual transformation replacing $x$ by $\frac{1}{x}$; since $40021$ is odd, $(\ln \frac{1}{x})^{40021} = -(\ln x)^{40021}$, and the two integrals cancel.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
What is the difference between Green's Theorem and Stokes Theorem? I don't quite understand the difference between Green's Theorem and Stokes' Theorem. I know that Green's Theorem is in $\mathbb{R}^2$ and Stokes' Theorem is in $\mathbb{R}^3$, and my lecture notes give Green's Theorem and Stokes' Theorem as
$$\int \!\! \int_{\Omega} curl \, \underline{v} \, \mathrm{d}A
= \int_{\partial \Omega} \underline{v} \, \mathrm{d} \underline{r}$$
and
$$\int \!\! \int_\Omega \nabla \times \underline{v} . \underline{n} \, \mathrm{d}A
= \int_{\partial \Omega} \underline{v} \, \mathrm{d} \underline{r}$$
respectively. So why does being in $\mathbb{R}^3$ require the unit normal $\underline{n}$ to be dotted with the curl?
Thanks!
| Green's Theorem is a special case of Stokes's Theorem. Since your surface is in the plane and oriented counterclockwise, then your normal vector is $n = \hat{k}$, the unit vector pointing straight up.
Similarly, if you compute $\nabla \times v$, where $v\, dr = Mdx + Ndy$, you would get $\left( \frac{\partial N}{\partial x} - \frac{\partial M}{\partial y} \right) \hat{k} = \text{curl}\, v \,\hat{k}$ as a result.
When you dot $(\nabla \times v) \cdot n$, you get the product $\text{curl}\, v$.
So even in $\mathbb{R}^2$ you are dotting the curl with the unit normal vector; but because the unit normal is aligned with the curl vector, the dot product is simply the magnitude of the curl. (In other words, when you look at the dot product formula $a \cdot b = |a||b| \cos \theta$, here $\theta = 0$, $a$ is your curl, and $|b| = 1$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Primes modulo which a given quadratic equation has roots Given a quadratic polynomial $ax^2 + bx + c$, with $a$, $b$ and $c$ being integers, is there a characterization of all primes $p$ for which the equation
$$ax^2 + bx + c \equiv 0 \pmod p$$
has solutions?
I have seen it mentioned that it follows from quadratic reciprocity that the set is precisely the primes in some arithmetic progression, but the statement may require some tweaking. The set of primes modulo which $1 + \lambda = \lambda^2$ has solutions seems to be $$5, 11, 19, 29, 31, 41, 59, 61, 71, 79, 89, 101, 109, 131, 139, 149, 151, 179, 181, 191, 199, \dots$$ which are ($5$ and) the primes that are $1$ or $9$ modulo $10$.
(Can the question also be answered for equations of higher degree?)
| I never noticed this one before.
$$ x^3 - x - 1 \equiv 0 \pmod p $$
has one root for odd primes $p$ with $(-23|p) = -1.$
$$ x^3 - x - 1 \equiv 0 \pmod p $$
has three distinct roots for odd $p$ with $(-23|p) = 1$ and $p = u^2 + 23 v^2 $ in integers.
$$ x^3 - x - 1 \equiv 0 \pmod p $$
has no roots for odd $p$ with $(-23|p) = 1$ and $p = 3u^2 + 2 u v + 8 v^2 $ in integers (not necessarily positive integers).
Here we go, no roots $\pmod 2,$ but a doubled root and a single $\pmod {23},$ as
$$ x^3 - x - 1 \equiv (x - 3)(x-10)^2 \pmod {23}. $$
Strange but true. Easy to confirm by computer for primes up to 1000, say.
The example you can see completely proved in books, Ireland and Rosen for example, is $x^3 - 2,$ often with the phrase "the cubic character of 2" and the topic "cubic reciprocity." $2$ is a cube for primes $p=2,3$ and any prime $p \equiv 2 \pmod 3.$ Also, $2$ is a cube for primes $p \equiv 1 \pmod 3$ and $p = x^2 + 27 y^2$ in integers. However, $2$ is not a cube for primes $p \equiv 1 \pmod 3$ and $p = 4x^2 +2 x y + 7 y^2$ in integers. (Gauss)
$3$ is a cube for primes $p=2,3$ and any prime $p \equiv 2 \pmod 3.$ Also, $3$ is a cube for primes $p \equiv 1 \pmod 3$ and $p = x^2 + x y + 61 y^2$ in integers. However, $3$ is not a cube for primes $p \equiv 1 \pmod 3$ and $p = 7x^2 +3 x y + 9 y^2$ in integers. (Jacobi)
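Following the "easy to confirm by computer" remark, here is a sympy sketch tabulating the Legendre symbol $(-23|p)$ next to the number of roots of $x^3-x-1$ modulo $p$ (brute force; $p=23$ is skipped since the symbol is undefined there):

```python
from sympy import primerange, legendre_symbol

def roots_mod(p):
    return sum(1 for x in range(p) if (x**3 - x - 1) % p == 0)

for p in primerange(3, 100):
    if p == 23:
        continue
    print(p, legendre_symbol(-23, p), roots_mod(p))  # symbol -1 pairs with 1 root
```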
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
About the exchange of $\sum$ and LM Given $f_i,g_i\in k[x_1,\cdots,x_n],1\leq i\leq s$, fix a monomial order on $k[x_1,\cdots,x_n]$, I was wondering whether there is an effective criterion to judge if this holds,$$\text{LM}(\sum_{i=1}^sf_ig_i)=\sum_{i=1}^s\text{LM}(f_ig_i),$$
where LM( ) is the leading monomial with respect to the fixed monomial order defined as follows,
$$\text{LM}(f)=x^{\text{multideg}(f)}.$$
And $\text{multideg}(f)=\text{max}(\alpha\in\mathbb Z_{\geq 0}^{n}:a_{\alpha}\neq0),$ where $f=\sum_{\alpha}a_{\alpha}x^{\alpha}.$
| In characteristic 0, assuming all terms are non-zero so that LM is defined, this only works if $s=1$: taking the sum of coefficients on both sides of the equation you obtain the equation $1=s$. So you can never exchange a non-trivial sum and LM.
Note that only the image of LM needs to be of characteristic 0: this still holds for any $k$ if you view LM as a map from $k[x_1,\dots,x_n]$ to $\mathbb Z[x_1,\dots,x_n]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
There's a real between any two rationals, a rational between any two reals, but more reals than rationals? The following statements are all true:
* Between any two rational numbers, there is a real number (for example, their average).
* Between any two real numbers, there is a rational number (see this proof of that fact, for example).
* There are strictly more real numbers than rational numbers.
While I accept each of these as true, the third statement seems troubling in light of the first two. It seems like there should be some way to find a bijection between reals and rationals given the first two properties.
I understand that in-between each pair of rationals there are infinitely many reals (in fact, I think there's $2^{\aleph_0}$ of them), but given that this is true it seems like there should also be in turn a large number of rationals between all of those reals.
Is there a good conceptual or mathematical justification for why the third statement is true given that the first two are as well?
Thanks! This has been bothering me for quite some time.
| Here's an attempt at a moral justification of this fact. One (informal) way of understanding the difference between a rational number and a real number is that a rational number somehow encodes a finite amount of information, whereas an arbitrary real number may encode a (countably) infinite amount of information. The fact that the algebraic numbers (roots of polynomial equations with integer coefficients) are countable suggests that this perspective is not unreasonable.
Naturally, when your objects are free to encode an infinite amount of information, you can expect more variety, and that is ultimately what causes the cardinality of $\mathbb{R}$ to exceed that of $\mathbb{N}$, as in Cantor's Diagonal Argument. However, because real numbers encode a countable amount of information, any two distinct real numbers disagree after some finite point, and that is why we may introduce a rational in the middle.
All in all, this is seen to boil down to the way we constructed $\mathbb{R}$: as the set of limit points of rational Cauchy sequences. This is because a limiting process is built out of "finite" steps, and so we can approximate the immense complexity of an uncountable set with a countable collection of finite objects.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Constructive proof: need to know the solutions of the equations. Observe the following equations:
$2x^2 + 1 = 3^n$ has two solutions $(1, 1) ~\text{and}~ (2, 2)$
$x^2 + 1 = 2 \cdot 5^n$ has two solutions $(3, 1) ~\text{and}~ (7, 2)$
$7x^2 + 11= 2 \cdot 3^n$ has two solutions $(1, 2) ~\text{and}~ (1169, 14)$
$x^2 + 3 = 4 \cdot 7^n$ has two solutions $(5, 1) ~\text{and}~ (37, 3)$
How can one determine that the number of solutions is exactly two (or three, or four, ...)? It depends on the equation; in particular, each of the above equations has only two solutions. How can we prove there are no other solutions? And is there a particular method or approach for finding the solutions?
| All four of your equations (and many more) are mentioned in Saradha and Srinivasan, Generalized Lebesgue-Ramanujan-Nagell equations, available at http://www.math.tifr.res.in/~saradha/saradharev.pdf. The solutions are attributed to Bugeaud and Shorey, On the number of solutions of the generalized Ramanujan-Nagell equation, J Reine Angew. Math. 539 (2001) 55-74, MR1863854 (2002k:11041).
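A brute-force search recovers the listed solutions within a finite range (illustrative only; the bounds are arbitrary and this proves nothing about larger solutions):

```python
def solutions(f, x_max=2000, n_max=20):
    # all (x, n) in the search box with f(x, n) == 0
    return [(x, n) for x in range(1, x_max) for n in range(1, n_max)
            if f(x, n) == 0]

print(solutions(lambda x, n: 2 * x**2 + 1 - 3**n))       # [(1, 1), (2, 2)]
print(solutions(lambda x, n: 7 * x**2 + 11 - 2 * 3**n))  # [(1, 2), (1169, 14)]
```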
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
finding a minorant to $(\sqrt{k+1} - \sqrt{k})$ Need help finding a minorant to
$(\sqrt{k+1} - \sqrt{k})$ which allows me to show that the series $\sum_{k=1}^\infty (\sqrt{k+1} - \sqrt{k})$ is divergent.
| You should observe that your series telescopes, i.e.:
$$\sum_{k=0}^n (\sqrt{k+1} - \sqrt{k}) = (\sqrt{1} -\sqrt{0}) + (\sqrt{2} -\sqrt{1}) +\cdots + (\sqrt{n}-\sqrt{n-1}) +(\sqrt{n+1}-\sqrt{n}) = \sqrt{n+1}-1\; ,$$
and therefore:
$$\sum_{k=0}^\infty (\sqrt{k+1} - \sqrt{k}) = \lim_{n\to \infty} \sum_{k=0}^n (\sqrt{k+1} - \sqrt{k}) = \lim_{n\to \infty} \sqrt{n+1}-1 = \infty\; .$$
(If you do want a minorant: $\sqrt{k+1}-\sqrt{k} = \frac{1}{\sqrt{k+1}+\sqrt{k}} \geq \frac{1}{2\sqrt{k+1}}$, and the series $\sum_k \frac{1}{2\sqrt{k+1}}$ diverges by comparison with the harmonic series.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
A question on pointwise convergence. The function $f_n(x):[-1,1] \to \mathbb{R}, \, \, \,f_n(x) = x^{2n-1}$ tends pointwise to the function $$f(x) = \left\{\begin{array}{l l}1&\textrm{if} \quad x=1\\0&\textrm{if} \quad -1<x<1\\-1&\textrm{if} \quad x=-1\end{array}\right.$$ but not uniformly (for obvious reasons as $f(x)$ isn't continuous). But then surely in this case $||f_n(x) - f(x)||_\infty \to 0$ as for any $x \in (-1,1)$ and any $\epsilon > 0$ you can make $n$ large enough so that the max distance between $f_n(x)$ and $f(x)$ at that particular $x$ is less than $\epsilon$? What am I doing wrong?
Thanks!
| Your argument
* for any $x \in [-1,1]$ and any $\epsilon > 0$ you can make $n$ large enough so that the max distance between $f_n(x)$ and $f(x)$ at that particular $x$ is less than $\epsilon$
is a great proof that for any $x$, the sequence $f_n(x)$ tends to $f(x)$. In other words, it's a proof that $f_n$ tends to $f$ pointwise.
But the assertion $||f_n(x) - f(x)||_\infty \to 0$ is the assertion that $f_n$ tends to $f$ uniformly on $[-1,1]$, which as you've noted is false.
The difference in the two statements (hence in how you'd need to prove them) is in the order of quantifiers. For pointwise convergence, given any $\epsilon>0$, it's "for all $x$, there exists $n$ such that..." - in other words, $n$ can depend on $x$ as well as $\epsilon$. For uniform convergence, given any $\epsilon>0$, it's "there exists $n$ such that for all $x$, ..." - in other words, $n$ cannot depend on $x$.
If you try reorganizing your proof so that you have to choose $n$ before $x$ is given, then you'll see the proof break down.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Combinatorial Probability-Rolling 12 fair dice My text says, regarding combinatorial probability, "The number of outcomes associated with any problem involving the rolling of n six-sided dice is $6^n$."
I know that in combinatorial probability $P(A)=m/n$ where $m$ is the number of ways $A$ can happen and $n$ is the number of ways to perform the operation in question. But since we must be consistent in numerator and denominator with respect to order, wouldn't the number in the denominator depend on whether the example at hand respects order? I.e., in any problem involving rolling $n$ dice, should I expect $P=X/6^n$ regardless of context?
| That is usually very good advice, at least until you simplify.
So for example, the probability that each of twelve rolled dice shows a prime number (one of the three primes $2$, $3$, $5$, not necessarily distinct) is $\frac{3^{12}}{6^{12}}$, which simplifies to $\frac{1}{2^{12}}$.
But it is possible to devise a problem where this advice does not work. For example, roll a die; then roll a second and keep rolling the second until it is distinct from the first; and then roll a third until it is distinct from each of the first two. What is the probability the three dice show three prime numbers? And if you do this four times to use all $12$ dice? It is $\left(\frac 36 \times \frac 25 \times \frac 14\right)^4 = \frac{1}{20^4}$ and the advice turns out to be wrong in this artificial case.
My guess is that given the advice in the text, you have a reasonable expectation that all questions will be of the first sort. But you should remain aware that the second kind is not impossible.
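To make the first computation concrete, here is a small Python check (my own sketch; `PRIMES` is just a helper name) that enumerates all outcomes for six dice exactly and applies the product rule for twelve:

    from fractions import Fraction
    from itertools import product

    PRIMES = {2, 3, 5}

    # exact enumeration for 6 dice: P(every die shows a prime) = (3/6)^6 = 1/64
    n = 6
    hits = sum(all(d in PRIMES for d in roll)
               for roll in product(range(1, 7), repeat=n))
    print(Fraction(hits, 6**n))          # 1/64

    # product rule for all 12 dice: (3/6)^12 = 1/4096
    print(Fraction(3, 6) ** 12)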
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Are the eigenvalues of $AB$ equal to the eigenvalues of $BA$? First of all, am I being crazy in thinking that if $\lambda$ is an eigenvalue of $AB$, where $A$ and $B$ are both $N \times N$ matrices (not necessarily invertible), then $\lambda$ is also an eigenvalue of $BA$?
If it's not true, then under what conditions is it true or not true?
If it is true, can anyone point me to a citation? I couldn't find it in a quick perusal of Horn & Johnson. I have seen a couple proofs that the characteristic polynomial of $AB$ is equal to the characteristic polynomial of $BA$, but none with any citations.
A trivial proof would be OK, but a citation is better.
| If $v$ is an eigenvector of $AB$ for some nonzero $\lambda$, then $Bv\ne0$
and $$\lambda Bv=B(ABv)=(BA)Bv,$$ so $Bv$ is an eigenvector for $BA$ with the same eigenvalue. If $0$ is an eigenvalue of $AB$ then $0=\det(AB)=\det(A)\det(B)=\det(BA)$ so $0$ is also an eigenvalue of $BA$.
More generally, Jacobson's lemma in operator theory states that for any two bounded operators $A$ and $B$ acting on a Hilbert space $H$ (or more generally, for any two elements of a Banach algebra), the non-zero points of the spectrum of $AB$ coincide with those of the spectrum of $BA$.
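For a quick empirical check of the statement (my own sketch, using NumPy), compare the two spectra for random matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    B = rng.standard_normal((5, 5))

    # sort_complex orders by real part, then imaginary part
    ev_ab = np.sort_complex(np.linalg.eigvals(A @ B))
    ev_ba = np.sort_complex(np.linalg.eigvals(B @ A))
    print(np.allclose(ev_ab, ev_ba))     # True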
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "87",
"answer_count": 4,
"answer_id": 2
} |
Is this out-of-context theorem true? Can someone tell me if the following proposition is true ?
Theorem If $u=g + i h$ is a holomorphic function in $\Omega\subseteq \mathbb{C}$ and $\Omega$ is simply connected, then $v(z)=u(w)+ \int_\gamma \bigl(g_x(z)-ih_y(z)\bigr) \,dz$ is a primitive function of $u$ (where $w\in \Omega$ is fixed and $\gamma$ is some path from $w$ to $z$).
(I have come across (the implicit use of) this proposition by reading about something not really related to complex analysis and since I know very little about it, if wondered if it actually would be true taken out of context like this. I also wouldn't mind a proof, if it is true and someone would have the time.)
| I shall assume that $g$ and $h$ are real-valued. Since $u:=g+ih$ is holomorphic it follows from the CR equations that $h_y=g_x$. Therefore for any curve $\gamma\subset\Omega$ connecting the point $z_0$ with a variable point $z$ one has
$$\begin{align}
\int_\gamma (g_x- i h_y)\ dz &=(1-i)\int_\gamma g_x\ (dx+i dy) =(1-i)\int_\gamma(g_x\ dx + i h_y\ dy)\\
&=(1-i)\Bigl(g(z)-g(z_0)+i\bigl(h(z)-h(z_0)\bigr)\Bigr)=(1-i)\bigl(u(z)-u(z_0)\bigr)\ .
\end{align}$$
It follows that
$$v(z):=u(z_0)+\int_{z_0}^z (g_x- i h_y)\ dz=i u(z_0)+(1-i) u(z)\ ,$$
which shows that your $v$ is more or less the given $u$ again, and not a primitive of $u$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/124994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to determine the limit of this sequence? I was wondering how to determine the limit of $ (n^p - (\frac{n^2}{n+1})^p)_{n\in \mathbb{N}}$ with $p>0$, as $n \to \infty$?
For example, when $p=1$, the sequence is $ (\frac{n}{n+1})_{n\in \mathbb{N}}$, so its limit is $1$.
But I am not sure how to decide it when $p \neq 1$. Thanks in advance!
| Given the Binomial theorem, we have (see Landau notations)
$$\begin{aligned} (n+1)^p-n^p=\Theta(n^{p-1}) & \implies 1-\left(\frac{n}{n+1}\right)^p=\Theta \left( \frac{1}{n} \right) \\ & \implies n^p - \left(\frac{n^2}{n+1}\right)^p=\Theta(n^{p-1}) \end{aligned}$$
Thus the limit is $0$ for $p<1$, is $1$ for $p=1$ (already computed), and blows up to $\infty$ for $p>1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
If a graph of $2n$ vertices contains a Hamiltonian cycle, then can we reach every other vertex in $n$ steps?
Problem: Given a graph $G,$ with $2n$ vertices and at least one triangle. Is it possible to show that you can reach every other vertex in $n$ steps if $G$ contains a Hamilton cycle (HC)?
EDIT: Sorry, I forgot to mention that $G$ is planar and 3-connected. A complete proof for $3$-regular graphs would also be accepted/rewarded.
Does the following work as proof?
Choose a starting vertex $v_0$ and a direction.
*
*If you walk along the HC you'll reach a vertex $v_{n-1}$ with maximal distance from $v_0$ in $n$ steps.
*You'll reach $v_{n-2}$ by doing a round in the triangle and
*$v_{n-3}$ by stepping backwards at the last step.
*By combining these moves, you'll reach half of all $v_k$.
*By choosing the other direction at the beginning you'll reach the other half.
*$v_0$ is free to choose.
Showing or disproving the "only if"-part would also be nice!
| The answer is no.
Question: Let $G$ be a 3-connected, hamiltonian, planar graph with $2n$ vertices and at least one triangle. Is it true that for all vertex pairs $x,y$ there is a walk of exactly $n$ steps from $x$ to $y$?
The following graph and vertex pair is a counterexample
It is clear that the graph is planar and has a triangle. It can be easily verified that the graph is 3-connected. To show that the graph is hamiltonian, I have highlighted a hamiltonian cycle here
Since the graph has 16 vertices, we need to verify that there is no walk of length 8 from $x$ to $y$. Since $n=8$ is even, we cannot reach $y$ without using some of the four vertices on the right. Now it is easy to verify by hand that there is no walk from $x$ to $y$ of length exactly 8.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
What are the minimal conditions on a topological space for its closure to equal the sequential closure? My question is: what are the minimal conditions on a topological space for it have the following property?
$$x\in \bar{A}\iff \exists (x_n)\subset A | x_n \to x$$
| In this paper there is the answer (section 2, on Fréchet spaces, also known as Fréchet-Urysohn spaces): your property defines the notion of a Fréchet space, and the author shows that these spaces are the pseudo-open images of metric spaces. He also defines the weaker concept of a sequential space, and in the follow-up paper he shows that a sequential space is Fréchet iff all of its subspaces are sequential (hereditarily sequential).
A sequential space has the cleaner characterization: a space $X$ is sequential iff there is a metric space $M$ and a surjective quotient map $f: M \rightarrow X$ ($X$ is a quotient image of a metric space).
As said, Fréchet spaces can be similarly characterized, using not quotient maps but pseudo-open maps: $f: X \rightarrow Y$ is pseudo-open iff for every $y \in Y$ and every open neighborhood $U$ of $f^{-1}[\{y\}]$ we have that $y \in \operatorname{int}(f[U])$. Every open or closed surjective map is pseudo-open and all pseudo-open maps are quotient.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to solve $\int_0^\pi{\frac{\cos{nx}}{5 + 4\cos{x}}}dx$? How can I solve the following integral?
$$\int_0^\pi{\frac{\cos{nx}}{5 + 4\cos{x}}}dx, n \in \mathbb{N}$$
| To elaborate on Pantelis Damianou's answer
$$
\newcommand{\cis}{\operatorname{cis}}
\begin{align}
\int_0^\pi\frac{\cos(nx)}{5+4\cos(x)}\mathrm{d}x
&=\frac12\int_{-\pi}^\pi\frac{\cos(nx)}{5+4\cos(x)}\mathrm{d}x\\
&=\frac12\int_{-\pi}^\pi\frac{\cis(nx)}{5+2(\cis(x)+\cis(-x))}\mathrm{d}x\\
&=\frac12\int_{-\pi}^\pi\frac{\cis(x)\cis(nx)}{2\cis^2(x)+5\cis(x)+2}\mathrm{d}x\\
&=\frac{1}{2i}\int_{-\pi}^\pi\frac{\cis(nx)}{2\cis^2(x)+5\cis(x)+2}\mathrm{d}\cis(x)\\
&=\frac{1}{2i}\oint\frac{z^n}{2z^2+5z+2}\mathrm{d}z
\end{align}
$$
where the integral is counterclockwise around the unit circle and $\cis(x)=e^{ix}$.
Factor $2z^2+5z+2$ and use partial fractions. However, I only get a singularity at $z=-\frac12$ (and one at $z=-2$, but that is outside the unit circle, so of no consequence).
Now that a complete solution has been posted, I will finish this using residues:
$$
\begin{align}
\frac{1}{2i}\oint\frac{z^n}{2z^2+5z+2}\mathrm{d}z
&=\frac{1}{6i}\oint\left(\frac{2}{2z+1}-\frac{1}{z+2}\right)\,z^n\,\mathrm{d}z\\
&=\frac{1}{6i}\oint\frac{z^n}{z+1/2}\mathrm{d}z\\
&=\frac{\pi}{3}\left(-\frac12\right)^n
\end{align}
$$
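Anyone wanting to double-check the closed form $\frac{\pi}{3}\left(-\frac12\right)^n$ numerically can run a short quadrature sketch (my own addition):

    import numpy as np
    from scipy.integrate import quad

    for n in range(6):
        val, _ = quad(lambda x, n=n: np.cos(n * x) / (5 + 4 * np.cos(x)), 0, np.pi)
        print(n, val, (np.pi / 3) * (-0.5) ** n)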
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 0
} |
annihilator is the intersection of sets If $W$ is a subspace of a finite dimensional vector space $V$ and $\{g_{1},g_{2},\cdots, g_{r}\}$ is a basis of the annihilator $W^{\circ}=\{f \in V^{\ast}| f(a)=0, \forall a \in W\}$, then $W=\cap_{i=1}^{r} N_{g_{i}}$, where for $f \in V^{\ast}$, $N_{f}=\{a \in V| f(a)=0\}$
How shall I prove this?
| We wish to prove that
$$W = \bigcap_{i=1}^{r} N_{g_{i}}$$
Step $1$: Proving $W \subset \bigcap_{i=1}^{r} N_{g_{i}}$
Let $w \in W$. We know that the annihilator $W^{o}$ is the set of linear functionals that vanish on $W$. If $g_{i}$ is in the basis for $W^{o}$, it is certainly in $W^{o}$. Thus, the $g_{i}$ all vanish on $W$, so that $W \subset N_{g_i}$. We thus see that $w \in N_{g_{i}}$ for all $1 \leq i \leq r$. Hence
$$w \in \bigcap_{i=1}^{r} N_{g_{i}}$$
But $w$ was arbitrary, so
$$W \subset \bigcap_{i=1}^{r} N_{g_{i}}$$
Step $2$: Proving $W \supset \bigcap_{i=1}^{r} N_{g_{i}}$
Let $\{\alpha_{1}, \cdots, \alpha_{s}\}$ be a basis for $W$, and extend it to a basis for $V$, $\{ \alpha_{1} ,\cdots, \alpha_{n}\}$. Likewise, extend $\{g_{1}, \cdots, g_{r}\}$ to a basis for $V^{*}$, $\{g_{1}, \cdots, g_{n}\}$, noting that $\mbox{dim}(V) = \mbox{dim} (V^{*})$. It is easily seen that one can choose these two bases to be dual to each other, as the dual basis to $\{\alpha_{1}, \cdots, \alpha_{s}\}$ is not contained in $W^{o}$, and the dual basis for $V \setminus W$ must be contained in $W^{o}$, just by the nature of the dual basis.
(To make it easier to choose the correct basis for $V \setminus W$, try looking at the double dual $V^{**}$, which is naturally isomorphic to $V$, and looking at the dual basis for $W^{o}$ in $V^{**}$)
We get
$$ g_{i}(\alpha_{j}) = \left\{ \begin{array}{cc}
1 & i= j\\
0 & i \neq j
\end{array}
\right.$$
Let $v \in \bigcap_{i=1}^{r} N_{g_{i}}$, so that $g_{i}(v) = 0$ for all $1 \leq i \leq r$, and write
$$v = c_{1}\alpha_{1} + \cdots + \cdots c_{n}\alpha_{n}$$
Taking $g_{i}$ of both sides, where $i$ ranges from $1$ through $r$,
$$g_{i}(v) = c_{1}g_{i}(\alpha_{1}) + \cdots + c_{n}g_{i}(\alpha_{n}) = 0$$
However, these $\{ g_{i}: 1 \leq i \leq r\}$ form a dual basis to $V \setminus W$. Hence, if $v$ had a nonzero component $c_{k}\alpha_{k}$ in $V \setminus W$, it would not vanish on $g_{k}$, $1 \leq k \leq r$. Then $g_{k}(v) \neq 0$, contrary to what we have shown. Thus $v$ must be contained in $W$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If we define $\sin x$ as series, how can we obtain the geometric meaning of $\sin x$? In Terry Tao's textbook Analysis, he defines $\sin x$ as below:
*
*Define rational numbers
*Define Cauchy sequences of rational numbers, and equivalence of Cauchy sequences
*Define reals as the space of Cauchy sequences of rationals modulo equivalence
*Define limits (and other basic operations) in the reals
*Cover a lot of foundational material including: complex numbers, power series, differentiation, and the complex exponential
*Eventually (Chapter 15!) define the trigonometric functions via the complex exponential. Then show the equivalence to other definitions.
My question is how can we obtain the geometry interpretation of $\sin x$, that is, the ratio of opposite side and hypotenuse.
| In this hint I suggest showing from the power series that if
$$
\sin(x)=\sum_{k=0}^\infty(-1)^k\frac{x^{2k+1}}{(2k+1)!}\tag{1}
$$
and
$$
\cos(x)=\frac{\mathrm{d}}{\mathrm{d}x}\sin(x)=\sum_{k=0}^\infty(-1)^k\frac{x^{2k}}{(2k)!}\tag{2}
$$
that $\frac{\mathrm{d}}{\mathrm{d}x}\cos(x)=-\sin(x)$ and from there that
$$
\sin^2(x)+\cos^2(x)=1\tag{3}
$$
Therefore, $(\cos(x),\sin(x))$ lies on the unit circle.
To see that $(\cos(x),\sin(x))$ moves around the unit circle at unit speed, note that $(3)$ implies
$$
\left|\frac{\mathrm{d}}{\mathrm{d}x}(\cos(x),\sin(x))\right|=\left|(-\sin(x),\cos(x))\right|=1\tag{4}
$$
Thus, $(3)$ and $(4)$ say that $(\cos(x),\sin(x))$ moves around the unit circle at unit speed. Note also that $(-\sin(x),\cos(x))$ is at a right angle counter-clockwise from $(\cos(x),\sin(x))$. Therefore, $(\cos(x),\sin(x))$ moves counter-clockwise around the unit circle at unit speed, starting at $(1,0)$. This should be sufficient to show that $\sin(x)$ and $\cos(x)$ are the standard trigonometric functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 6,
"answer_id": 3
} |
Continuous but not Hölder continuous function on $[0,1]$ Does there exist a continuous function $F$ on $[0,1]$ which is not Hölder continuous of order $\alpha$ at any point $X_{0}$ on $[0,1]$. $0 < \alpha \le 1$.
I am trying to prove that such a function does exist. also I couldn't find a good example.
| ($1$-dimensional) Brownian motion is almost surely continuous and nowhere Hölder continuous of order $\alpha$ if $\alpha > 1/2$. IIRC one can define random Fourier series that will be almost surely continuous but nowhere Hölder continuous for any $\alpha > 0$.
EDIT: OK, here's a construction. Note that $f$ is not Hölder continuous of order $\alpha$ at any point of $I = [0,1]$ if for every $C$ and every $x \in I$ there are $s,t \in I$ with $s \le x \le t$ and
$|f(t)-f(s)|>C(t-s)^\alpha$.
I'll define $f(x) = \sum_{n=1}^\infty n^{-2} \sin(\pi g_n x)$, where $g_n$ is an increasing sequence of integers such that $2 g_n$ divides $g_{n+1}$. This series converges uniformly to a continuous function. Let $f_N(x)$ be the partial sum $\sum_{n=1}^N n^{-2} \sin(\pi g_n x)$. Note that $\max_{x \in [0,1]} |f_N'(x)| \le \sum_{n=1}^N n^{-2} \pi g_n \le B g_N$ for some constant $B$ (independent of $N$).
Now suppose $s = k/g_N$ and $t = (k+1/2)/g_N$
where $k \in \{0,1,\ldots,g_N-1\}$. We have $f(s) = f_{N-1}(s)$ and
$f(t) = f_{N-1}(t) \pm N^{-2}$. Now $|f_{N-1}(t) - f_{N-1}(s)| \le B g_{N-1} (t-s) = B g_{N-1}/(2 g_N)$, so $|f(t) - f(s)| \ge N^{-2} - B g_{N-1}/(2 g_N)$. The same holds for
$s = (k+1/2)/g_N$ and $t = (k+1)/g_N$. So let $g_n$ grow rapidly enough that $g_{n-1}/g_n = o(n^{-2})$ ($g_n = (3n)!$ will do, and also satisfies the requirement that $2g_n$ divides $g_{n+1}$). Then for every $\alpha > 0$, $(t-s)^\alpha = (2g_N)^{-\alpha} = o(N^{-2}) = o(|f(t) - f(s)|)$. Since for each $N$ the intervals $[s,t]$ cover all of $I$, we are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Show $ I = \int_0^{\pi} \frac{\mathrm{d}x}{1+\cos^2 x} = \frac{\pi}{\sqrt 2}$ Show $$ I = \int_0^{\pi} \frac{\mathrm{d}x}{1+\cos^2 x} = \frac{\pi}{\sqrt 2}$$
| In case KV's solution seems a bit magical, it may be reassuring to know that there's a systematic way to integrate rational functions of trigonometric functions, the Weierstraß substitution.
With $\cos x=(1-t^2)/(1+t^2)$ and $\mathrm dx=2/(1+t^2)\mathrm dt$,
$$
\begin{eqnarray}
\int_0^\pi \frac{\mathrm dx}{1+\cos^2 x}
&=&
\int_0^\infty\frac2{1+t^2} \frac1{1+\left(\frac{1-t^2}{1+t^2}\right)^2}\mathrm dt
\\
&=&
\int_0^\infty \frac{2(1+t^2)}{(1+t^2)^2+(1-t^2)^2}\mathrm dt
\\
&=&
\int_0^\infty \frac{1+t^2}{1+t^4}\mathrm dt\;.
\end{eqnarray}
$$
Here's where it gets a bit tedious. The zeros of the denominator are the fourth roots of $-1$, and assembling the conjugate linear factors into quadratic factors yields
$$
\begin{eqnarray}
\int_0^\infty \frac{1+t^2}{1+t^4}\mathrm dt
&=&
\int_0^\infty \frac{1+t^2}{(t^2+\sqrt2t+1)(t^2-\sqrt2t+1)}\mathrm dt
\\
&=&
\frac12\int_0^\infty \frac1{(t^2+\sqrt2t+1)}+\frac1{(t^2-\sqrt2t+1)}\mathrm dt
\\
&=&
\frac12\left[\sqrt2\arctan(1+\sqrt2t)-\sqrt2\arctan(1-\sqrt2t)\right]_0^\infty
\\
&=&
\frac\pi{\sqrt2}\;.
\end{eqnarray}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Example to show the distance between two closed sets can be 0 even if the two sets are disjoint Let $A$ and $B$ be two sets of real numbers. Define the distance from $A$ to $B$ by $$\rho (A,B) = \inf \{ |a-b| : a \in A, b \in B\} \;.$$ Give an example to show that the distance between two closed sets can be $0$ even if the two sets are disjoint.
| Consider the sets $\mathbb N$ and $\mathbb N\pi = \{n\pi : n\in\mathbb N\}$. Then $\mathbb N\cap \mathbb N\pi=\emptyset$ as $\pi$ is irrational, but we have points in $\mathbb N\pi$ which lie arbitrarily close to the integers.
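To see this concretely, one can search for record-close approaches of $n\pi$ to the integers (a small Python sketch of my own; the record-setting $n$ are denominators of the continued-fraction convergents of $\pi$, such as $7$ and $113$):

    import math

    best = float("inf")
    for n in range(1, 200000):
        d = abs(n * math.pi - round(n * math.pi))   # distance to nearest integer
        if d < best:
            best = d
            print(n, round(n * math.pi), d)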
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 2
} |
Counting words with subset restrictions I have an alphabet of N letters {A,B,C,D...N} and would like to count how many L-length words do not contain the pattern AA.
I've been going at this all day, but continue to stumble on the same problem.
My first approach was to count all possible combinations, (N^L) and subtract the words that contain the pattern.
I tried to count the number of ways in which I can place 'AA' in L boxes, but I realized early on that I was double counting, since some words can contain the pattern more than once.
I figured that if I had a defined length for the words and the set, I could do it by inclusion/exclusion, but I would like to arrive at a general answer to the problem.
My gut feeling is that somehow I could overcount, and then find a common factor to weed out the duplicates, but I can't quite see how.
Any help would be appreciated!
| Call the answer $x_L$.
Then $x_L=Nx_{L-1}-y_{L-1}$, where $y_L$ is the number of allowable words of length $L$ ending in $A$.
And $y_L=x_{L-1}-y_{L-1}$.
Putting these together we get $Nx_L-x_{L+1}=x_{L-1}-(Nx_{L-1}-x_L)$, which rearranges to $x_{L+1}=(N-1)x_L+(N-1)x_{L-1}$.
Now: do you know how to solve homogeneous constant coefficient linear recurrences?
EDIT. If all you want is to find the answer for some particular values of $L$ and $N$ then, as leonbloy notes in a comment to your answer, you can use the recurrence to do that. You start with $x_0=1$ (the "empty word") and $x_1=N$ and then you can calculate $x_2,x_3,\dots,x_L$ one at a time from the formula, $x_{L+1}=(N-1)x_L+(N-1)x_{L-1}$.
On the other hand, if what you want is single formula for $x_L$ as a function of $L$ and $N$, it goes like this:
First, consider the quadratic equation $z^2-(N-1)z-(N-1)=0$. Use the quadratic formula to find the two solutions; I will call them $r$ and $s$ because I'm too lazy to write them out.
Now it is known that the formula for $x_L$ is $$x_L=Ar^L+Bs^L$$ for some numbers $A$ and $B$. If we let $L=0$ and then $L=1$ we get the system $$\eqalign{1&=A+B\cr N&=rA+sB\cr}$$ a system of two equations for the two unknowns $A$ and $B$. So you solve that system for $A$ and $B$, and then you have your formula for $x_L$.
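If you just need numbers, a short Python sketch (my own; the function names are hypothetical) cross-checks the recurrence against brute-force enumeration:

    from itertools import product

    def count_recurrence(N, L):
        # x_0 = 1, x_1 = N, x_{L+1} = (N - 1) * (x_L + x_{L-1})
        if L == 0:
            return 1
        x_prev, x_cur = 1, N
        for _ in range(L - 1):
            x_prev, x_cur = x_cur, (N - 1) * (x_cur + x_prev)
        return x_cur

    def count_brute(N, L):
        # letter 0 plays the role of 'A'; forbid the pattern 0,0
        return sum(all(not (w[i] == 0 == w[i + 1]) for i in range(L - 1))
                   for w in product(range(N), repeat=L))

    for N, L in [(3, 5), (4, 6), (5, 4)]:
        print(N, L, count_recurrence(N, L), count_brute(N, L))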
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to get rid of the integral in this equation $\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dx}f(x)\right)^2}dx}$? How to get rid of the integral $\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dx}f(x)\right)^2}dx}$ when $f(x)=x^2$?
| Summarising the comments, you'll get
$$
\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dt}f(t)\right)^2}dt}
=\int\limits_{x_0}^{x}{\sqrt{1+\left(\dfrac{d}{dt}t^2\right)^2}dt}
=\int\limits_{x_0}^{x}{\sqrt{1+4t^2}dt}
$$
To solve the last one substitute $t=\tan(u)/2$ and $dt=\sec^2(u)/2du$. Then $\sqrt{1+4t^2}= \sqrt{\tan^2(u)+1}=\sec(u)$, so we get as antiderviative:
$$
\begin{eqnarray}
\frac{1}{2}\int \sec^3(u) du&=&\frac{1}{4}\tan(u)\sec(u)+\frac{1}{4}\int \sec(u)du+\text{const.}\\
&=&\frac{1}{4}\tan(u)\sec(u)+\frac{1}{4}\log(\tan(u)+\sec(u))+\text{const.} \\
&=& \frac{t}{2}\sqrt{1+4t^2}+\frac{1}{4}\log(2t+\sqrt{1+4t^2})+\text{const.}\\
&=& \frac{1}{4}\left(2t\sqrt{1+4t^2}+\sinh^{-1}(2t) \right)+\text{const.}.
\end{eqnarray}
$$
Plug in your limits and you're done:
$$
\int\limits_{x_0}^{x}{\sqrt{1+4t^2}\,dt}=\left[\frac{1}{4}\left(2t\sqrt{1+4t^2}+\sinh^{-1}(2t) \right) \right]_{x_0}^x
$$
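As a sanity check on the antiderivative, here is a small Python sketch (my own) comparing it with numerical quadrature:

    import math
    from scipy.integrate import quad

    def F(t):
        # antiderivative found above: (2t*sqrt(1 + 4t^2) + asinh(2t)) / 4
        return 0.25 * (2 * t * math.sqrt(1 + 4 * t * t) + math.asinh(2 * t))

    x0, x = 0.0, 2.0
    numeric, _ = quad(lambda t: math.sqrt(1 + 4 * t * t), x0, x)
    print(numeric, F(x) - F(x0))         # the two values agree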
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
The $n^{th}$ root of the geometric mean of binomial coefficients. $\{{C_k^n}\}_{k=0}^n$ are binomial coefficients. $G_n$ is their geometrical mean.
Prove
$$\lim\limits_{n\to\infty}{G_n}^{1/n}=\sqrt{e}$$
| In fact, we have
$$ \lim_{n\to\infty}\left[\prod_{k=0}^{n}\binom{n}{k}\right]^{1/n^2} = \exp\left(1+2\int_{0}^{1}x\log x\; dx\right) = \sqrt{e}.$$
This follows from the identity
$$\frac{1}{n^2}\log \left[\prod_{k=0}^{n}\binom{n}{k}\right] = 2\sum_{j=1}^{n}\frac{j}{n}\log\left(\frac{j}{n}\right)\frac{1}{n} + \left(1+\frac{1}{n}\right)\log n - \left(1+\frac{2}{n}\right)\frac{1}{n}\log (n!),$$
together with the Stirling's formula.
In fact, I tried to write down the detailed derivation of this identity, but soon gave up since it's painstakingly demanding to type $\LaTeX$ formulas on an iPad 2!
But you may begin with the identity
$$\log\binom{n}{k} = \log n! - \log k! - \log (n-k)!$$
and
$$ \log k! = \sum_{j=1}^{k} \log j,$$
and then you can change the order of summation.
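The convergence can also be watched numerically; here is a sketch of my own, using log-gamma to avoid overflow:

    import math

    # ( prod_{k=0}^{n} C(n, k) )^(1/n^2) versus sqrt(e)
    for n in (10, 100, 1000):
        log_prod = sum(math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                       for k in range(n + 1))
        print(n, math.exp(log_prod / n**2), math.sqrt(math.e))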
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 3
} |
Homotopy inverses need not induce inverse homomorphisms Let $f:X \rightarrow Y$ and $g : Y \rightarrow X$ be homotopy inverses, ie. $f \circ g$ and $g\circ f$ are homotopic to the identities on $X$ and $Y$. We know that $f_*$ and $g_*$ are isomorphisms on the fundamental groups of $X$ and $Y$. However, it is my understanding that they need not be inverse isomorphisms. Is there an explicit example where they are not?
| If $f,g$ are pointed maps (which is necessary so that $f_*,g_*$ make sense): No, they induce inverse homomorphisms.
Homotopic maps induce the same maps on homotopy groups, in particular fundamental groups. This means that we have a functor $\pi_1 : \mathrm{hTop}_* \to \mathrm{Grp}$. Every functor maps two inverse isomorphisms to the corresponding two inverse isomorphisms.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/125959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Distance between bounded and compact sets Let $(X,d)$ be a metric space and define for $B\subset X$ bounded, i.e.
$$\operatorname{diam}(B)= \sup \{ d(x,y) \colon x,y\in B \} < \infty,$$
the measure
$$\beta(B) = \inf\{r > 0\colon\text{there exist finitely many balls of radius r which cover } B\},$$
or equivalently,
$$\beta(B)=\inf\big\lbrace r > 0|\exists N=N(r)\in{\bf N} \text{ and } x_1,\ldots x_N\in X\colon B\subset\bigcup_{k=1}^N B(x_k,r)\big\rbrace,$$
where $B(x,r)=\{y\in X\colon d(x,y)<r\}$ denotes the open ball of radius $r$ centered at $x\in X$.
Let ${\bf K}(X)$ denote the collection of (non-empty) compact subsets in $X$.
I would like to prove
$$\beta(B)=d_H\big(B,{\bf K}(X)\big),$$ where $d_H$ is the Hausdorff
distance.
I proved $d_H\big(B,{\bf K}(X)\big)\le\beta(B)$. Is there someone that knows how to prove the other inequality?
| Let $d_0=d_H(B,K(X))$. So for $d>d_0$ we can find compact $K$ with $d_H(B,K)<d$. In particular, $B \subseteq \cup_{k\in K} B(k,d)$.
As $K$ is compact, for any $\epsilon>0$ we can find $k_1,\cdots,k_n\in K$ with $K\subseteq \cup_i B(k_i,\epsilon)$. For $b\in B$, we can find $k\in K$ with $d(b,k)<d$. Then we can find $i$ with $d(k,k_i)<\epsilon$, and so $d(b,k_i)<d+\epsilon$. Thus $\beta(B)\leq d+\epsilon$.
As $\epsilon>0$ and $d>d_0$ were arbitrary, we find that $\beta(B) \leq d_0$ as required.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Divisor/multiple game Two players $A$ and $B$ play the following game:
Start with the set $S$ of the first 25 natural numbers: $S=\{1,2,\ldots,25\}$.
Player $A$ first picks an even number $x_0$ and removes it from $S$: We have $S:=S-\{x_0\}$.
Then they take turns (starting with $B$) picking a number $x_n\in S$ which is either divisible by $x_{n-1}$ or divides $x_{n-1}$ and removing it from $S$.
The player who cannot find a number in $S$ which is a multiple of or is divisible by the previous number loses.
Is there a winning strategy?
| Second player (B) wins.
Consider the following pairing:
$2,14$
$3,15$
$4,16$
$5,25$
$6,12$
$7,21$
$8,24$
$9,18$
$10,20$
$11,22$
The left out numbers are $1,13,17,19,23$.
Now whatever number player one (A) picks, the second player (B) picks the paired number from the above pairings.
Ultimately, player one (A) will be out of numbers, and will have to pick $1$, and then player two (B) picks $23$.
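The pairing is easy to verify mechanically; a tiny Python check of my own:

    pairs = [(2, 14), (3, 15), (4, 16), (5, 25), (6, 12),
             (7, 21), (8, 24), (9, 18), (10, 20), (11, 22)]

    # every pair is a divisor/multiple pair ...
    assert all(b % a == 0 for a, b in pairs)
    # ... and together with {1, 13, 17, 19, 23} they partition {1, ..., 25}
    covered = {n for pair in pairs for n in pair} | {1, 13, 17, 19, 23}
    assert covered == set(range(1, 26))
    print("pairing strategy is well defined")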
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Inequality involving the regularized gamma function Prove that $$Q(x,\ln 2) := \frac{\int_{\ln 2}^{\infty} t^{x-1} e^{-t} dt}{\int_{0}^{\infty} t^{x-1} e^{-t} dt} \geqslant 1 - 2^{-x}$$ for all $x\geqslant 1$.
($Q$ is the regularized gamma function.)
| We have
$$
\frac{\int_{\ln 2}^{\infty} t^{x-1} e^{-t} \,dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt} = \frac{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt - \int_{0}^{\log 2} t^{x-1} e^{-t} \,dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt} = 1 - \frac{\int_{0}^{\log 2} t^{x-1} e^{-t} dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt},
$$
so we need to show that
$$
\frac{\int_{0}^{\log 2} t^{x-1} e^{-t} \,dt}{\int_{0}^{\infty} t^{x-1} e^{-t} \,dt} \leq 2^{-x},
$$
or, equivalently,
$$
2^x \int_{0}^{\log 2} t^{x-1} e^{-t} \,dt \leq \int_{0}^{\infty} t^{x-1} e^{-t} \,dt.
$$
To do this we will show that
$$
2^x \int_{0}^{\log 2} t^{x-1} e^{-t} \,dt \leq \left(\frac{e^a}{e^a-1}\right)^x \int_{0}^{a} t^{x-1} e^{-t} \,dt
\tag{1}
$$
for all $a \geq \log 2$, then let $a \to \infty$. In fact, we will show that the quantity on the right-hand side of the above inequality is nondecreasing in $a$ when $a > 0$ for fixed $x \geq 1$ (and strictly increasing in $a$ when $a > 0$ for fixed $x > 1$).
To start, define
$$
f_x(a) = \left(\frac{e^a}{e^a-1}\right)^x \int_{0}^{a} t^{x-1} e^{-t} \,dt.
$$
Then
$$
\begin{align}
f_x'(a) &= a^{x-1} e^{-a} \left(\frac{e^a}{e^a-1}\right)^x - x \left(\frac{e^a}{e^a-1}\right)^{x-1} \frac{e^a}{(e^a-1)^2} \int_{0}^{a} t^{x-1} e^{-t} \,dt \\
&= e^{ax} \left(e^a-1\right)^{-x-1} \left[a^{x-1} \left(1-e^{-a}\right) - x \int_{0}^{a} t^{x-1} e^{-t} \,dt\right].
\end{align}
$$
Since we're only concerned with the sign of the above expression, define
$$
\begin{align}
g_x(a) &= e^{-ax}(e^a - 1)^{x+1} f_x'(a) \\
&= a^{x-1} \left(1-e^{-a}\right) - x \int_{0}^{a} t^{x-1} e^{-t} \,dt.
\end{align}
$$
If $g_x(a) \geq 0$ for all $a > 0$ then $f_x'(a) \geq 0$ for all $a > 0$, and hence $f_x(a) \geq f_x(\log 2)$ for all $a \geq \log 2$, which is $(1)$.
Well, it will certainly be true that $g_x(a) \geq 0$ for all $a > 0$ if
$$
g_x(0) \geq 0 \hspace{1cm} \text{and} \hspace{1cm} g_x'(a) \geq 0 \,\,\text{ for all }\,\, a \geq 0.
\tag{2}
$$
Indeed, $g_x(0) = 0$, and for $x \geq 1$ we have
$$
g_x'(a) = a^{x-2} e^{-a} (x-1) (e^a - a - 1) \geq 0
$$
since the function $h(a) = e^a - a - 1$ is nondecreasing when $a \geq 0$ and $h(0) = 0$.
By the remarks immediately before $(2)$ this is sufficient to prove $(1)$, from which the result follows.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Calculate the slope of a line passing through the intersection of two lines Let say I have this figure,
I know slope $m_1$, slope $m_2$, $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$. I need to calculate slope $m_3$. Note that the line with slope $m_3$ will always bisect the angle between the line with slope $m_1$ and the line with slope $m_2$.
|
We understand that:
$$m_1=\tan(\alpha)$$
$$m_2=\tan(\beta),$$
Then:
$$
m_3=\tan\left(\frac{\alpha+\beta}2\right).
$$
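Since you asked for something you can code into a function, here is a minimal Python sketch (my own; note that lines, as opposed to rays, have two angle bisectors, and this half-angle formula picks the one matching the figure's configuration):

    import math

    def bisector_slope(m1, m2):
        # half-angle of the two inclination angles; for other orientations
        # the perpendicular bisector (add pi/2 before taking tan) may apply
        return math.tan((math.atan(m1) + math.atan(m2)) / 2)

    print(bisector_slope(0.0, 1.0))      # ~0.4142, i.e. tan(22.5 degrees)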
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Solving $217 x \equiv 1 \quad \text{(mod 221)}$ I am given the problem:
Find an integer $x$ between $0$ and $221$ such that
$$217 x \equiv 1 \quad \text{(mod 221)}$$
How do I solve this? Unfortunately I am lost.
| In this special case, you can multiply the congruence by $-1$ and you'll get
$$4x\equiv 220 \pmod{221}.$$
(Just notice that $-217 \equiv 4 \pmod{221}$ and $-1\equiv220\pmod{221}$.)
This implies that $x\equiv 55 \pmod{221}$ is a solution. (And since $\gcd(4,221)=1$, there is only one solution modulo $221$.)
In general, for questions of this type you can use extended Euclidean algorithm see Wikipedia.
You can find some examples at this site, e.g. here.
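For completeness, a minimal Python sketch of the extended Euclidean algorithm (my own illustration; the function names are hypothetical):

    def extended_gcd(a, b):
        # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def mod_inverse(a, m):
        g, x, _ = extended_gcd(a, m)
        if g != 1:
            raise ValueError("no inverse: gcd(a, m) != 1")
        return x % m

    print(mod_inverse(217, 221))         # 55
    print(217 * 55 % 221)                # 1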
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Help me understand a 3d graph I've just seen this graph and while it's isn't the first 3d graph I've seen, as
a math "noob" I never thought how these graphs are plotted. I can draw 2d graphs on paper by marking the input and output values of a function. It's also easy for me to visualize what the graph I'm seeing says about the function but what about graphs for functions with 2 variables? How do I approach drawing and understanding the visualization?
| Set your function equal to a given constant; this gives you a one-variable equation of the kind you are used to, and varying the height (i.e., what you set your function equal to) gives you the 2d graph of the surface intersected with planes parallel to the xy-plane. It's essentially the same as a contour map of a mountain.
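If you want to experiment, here is a minimal matplotlib sketch (my own; the sample function $f(x,y)=x^2-y^2$ is arbitrary) that draws exactly these level curves:

    import numpy as np
    import matplotlib.pyplot as plt

    # level curves of f(x, y) = x^2 - y^2: each curve is the slice f(x, y) = c
    x = np.linspace(-2, 2, 200)
    y = np.linspace(-2, 2, 200)
    X, Y = np.meshgrid(x, y)
    Z = X**2 - Y**2

    cs = plt.contour(X, Y, Z, levels=10)
    plt.clabel(cs, inline=True, fontsize=8)
    plt.xlabel("x")
    plt.ylabel("y")
    plt.show()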
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is there an abelian category of topological groups? There are lots of reasons why the category of topological abelian groups (i.e. internal abelian groups in $\bf Top$) is not an abelian category. So I'm wondering:
Is there a "suitably well behaved" subcategory of $\bf Top$, say $\bf T$, such that $\bf Ab(T)$ is an abelian category?
My first guess was to look for well behaved topological spaces (locally compact Hausdorff, compactly generated Hausdorff, and so on...) Googling a little shows me that compactly generated topological groups are well known animals, but the web seems to lack of a more categorical point of view.
Any clue? Thanks in advance.
| This was alluded to in the comments and may not be what you're looking for, but it surely deserves mention that you can take $\mathbf{T}$ to be the category of compact Hausdorff spaces. The category $\mathbf{Ab}(\mathbf{T})$ is the category of compact abelian groups, which is equivalent to $\mathbf{Ab}^{op}$ and hence abelian by Pontryagin duality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 2,
"answer_id": 0
} |
re-writing a $\min(X,Y)$ function linearly for LP problem I am trying to formulate an LP problem. In the problem I have a $\min(X,Y)$ that I would like to formulate linearly as a set of constraints. For example, replacing $\min(X,Y)$ with some variable $Z$, and having a set of constraints on Z.
I believe that there are a minimum of two constraints:
subto: $Z \le X$
subto: $Z \le Y$
That will make it take a value that is less than or equal to $\min(X,Y)$. But, I want it to take the minimum value of $X$ or $Y$. I am missing one constraint, that seems to have the following logic: "$Z \ge X$ or $Z \ge Y$" ... so that it isn't just less than the minimum, it IS the minimum. I know I'm missing something basic.
In addition to fabee's response, I also have found this representation to work well which uses either-or constraint representation. Note that M must be large, see 15.1.2.1 in this document.
param a := 0.8;
param b := 0.4;
param M := 100;
var z;
var y binary;
minimize goal: z;
subto min_za:
z <= a;
subto min_zb:
z <= b;
subto min_c1:
-z <= -a + M*y;
subto min_c2:
-z <= -b + M*(1-y);
| You could use $\min(x,y) = \frac{1}{2}(x + y - |x - y|)$ where $|x - y|$ can be replaced by $z_1 + z_2$ with constraints $z_i \ge 0$ for $i=1,2$ and $z_1 - z_2 = x - y$. $z_1$ and $z_2$ are, therefore, the positive and negative parts of $x - y$.
Edit:
For the reformulation to work, you must ensure that either $z_1=0$ or $z_2=0$ at the optimum, because we want
$$z_1 = \begin{cases}
x-y & \mbox{ if }x-y\ge0\\
0 & \mbox{ otherwise}
\end{cases}$$
and
$$z_2 = \begin{cases}
y-x & \mbox{ if }x-y\le0\\
0 & \mbox{ otherwise}
\end{cases}.$$
You can check that the constraints will be active if the objective function can always be increase/decreased by making one of the $z_i$ smaller. That is the reason why your maximization worked, because the objective function could be increased by making one of the $z_i$ smaller.
You could fix your minimization example by requiring that $0\le z_i \le |x-y|$. In that case, the objective function will be smallest if $z_i$ are largest. However, since they still need to be $3$ apart, one of the $z_i$ will be three and the other one will be zero.
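A brute-force check (my own sketch, independent of any LP solver) confirms that for the data in the question the four constraints together with binary $y$ leave $z=\min(a,b)$ as the only feasible value:

    def feasible(z, y, a, b, M=100):
        # the four constraints from the big-M model above
        return (z <= a and z <= b
                and -z <= -a + M * y
                and -z <= -b + M * (1 - y))

    a, b = 0.8, 0.4
    zs = [k / 100 for k in range(101)]
    feas = [z for z in zs for y in (0, 1) if feasible(z, y, a, b)]
    print(feas, min(a, b))               # only z = 0.4 = min(a, b) survives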
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
finding final state of numbers after certain operations There are $N$ children sitting along a circle, numbered $1,2,\dots,n$ clockwise. The $i$-th child has a piece of paper with number $a_i$ written on it. They play the following game:
In the first round, the child numbered $x$ adds to his number the sum of the numbers of his neighbors. In the second round, the child next in clockwise order adds to his number the sum of the numbers of his neighbors, and so on. The game ends after $M$ rounds have been played.
Any idea how to get the value of the $j$-th element when the game is started at the $i$-th position, after $M$ rounds?
Is there any closed form equation?
| In principle there is, but in practice I doubt that there’s anything very useful.
It really suffices to solve the problem when $i=1$, since for any other value of $i$ we can simply relabel the children. If we start at position $1$, we can define $a_{kn+i}$ to be child $i$'s number after $k$ rounds have been played. Then the rules ensure that $$a_{kn+i}=a_{(k-1)n+i}+a_{(k-1)n+i+1}+a_{kn+i-1}\;.\tag{1}$$ That is, child $i$'s number after $k$ rounds is his number after $k-1$ rounds, plus child $(i+1)$'s number after $k-1$ rounds, plus child $(i-1)$'s number after $k$ rounds. (You can check that this works even when $i$ is $1$ or $n$.)
We can simplify $(1)$ to the homogeneous $n$-th order linear recurrence
$$a_m=a_{m-1}+a_{m-n+1}+a_{m-n}\;,\tag{2}$$
where the terms on the righthand side are in the opposite order from those in $(1)$. The general solution to this recurrence involves powers of the solutions of the auxiliary equation $$x^n-x^{n-1}-x-1=0\;,$$ and those solutions aren’t going to be at all nice.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Field extension, primitive element theorem I would like to know if it is true that
$\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i) = \mathbb{Q}(\sqrt{2}-i+2(\sqrt{3}+i))$.
I can prove, that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i) = \mathbb{Q}(\sqrt{2},\sqrt{3},i)$, so the degree of this extension is 8. Would it be enough to show that the minimal polynomial of $\sqrt{2}-i+2(\sqrt{3}+i)$ has also degree 8?
It follows from the proof of the primitive element theorem that only finitely many numbers $\mu$ have the property that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i)\neq \mathbb{Q}(\sqrt{2}-i+\mu(\sqrt{3}+i))$. Obviously $\mu=1$ is one of them, but how to check, whether 2 also has this property?
Thanks in advance,
| Let $\alpha=\sqrt{2}-i+2(\sqrt{3}+i)$.
Since $\alpha\in\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$, it follows that $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$ if and only if their degrees over $\mathbb{Q}$ are equal. The degree $[\mathbb{Q}(\alpha):\mathbb{Q}]$ is equal to the degree of the monic irreducible of $\alpha$ over $\mathbb{Q}$, so you are correct that if you can show that the monic irreducible of $\alpha$ is of degree $8$, then it follows that $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$.
I will note, however, that your interpretation of the Primitive Element Theorem is incorrect. The Theorem itself doesn't really tell you what you claim it tells you. The argument in the proof relies on the fact that there are only finitely many fields between $\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$, and so by the Pigeonhole Principle there are only finitely many rationals $\mu$ such that $\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)\neq\mathbb{Q}(\sqrt{2}-i+\mu(\sqrt{3}+i))$. But this is not a consequence of the Primitive Element Theorem, but rather of the fact that there are only finitely many fields in between.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Cancelling summands in a direct sum decomposition Let $M$ be a Noetherian and Artinian module. Suppose that:
$$\bigoplus_{i=1}^{q} A_{i} \oplus \bigoplus_{i=1}^{t} B_{i} \cong \bigoplus_{i=1}^{q} A_{i} \oplus \bigoplus_{i=1}^{r} C_{i}$$
where all $A_{i},B_{i},C_{i}$ are indecomposable submodules of $M$.
Can we always guarantee that $B_{i} \cong C_{i}$ for all $i \in \{1,2,...,t\}$? That is, can we "cancel" the term $\displaystyle\bigoplus_{i=1}^{q} A_{i}$?
| Cancellation means that for modules $M,N,P$ over a ring $R$ (not assumed commutative) we have the implication
$$M\oplus N\cong M\oplus P \implies N\cong P$$
Cancellation holds for modules that are only assumed artinian (which of course answers your question in the affirmative) thanks to a theorem by Camps and Dicks.
This is quite astonishing, since
Krull-Schmidt does not hold for modules that are only assumed artinian.
And, again astonishingly, a counter-example was found only in 1995.
Finally, let me point out that a very general Krull-Schmidt theorem was proved in a categorical setting by Atiyah. The main application of his results is to coherent sheaves in algebraic geometry.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Computing the best constant in classical Hardy's inequality Classical Hardy's inequality (cfr. Hardy-Littlewood-Polya Inequalities, Theorem 327)
If $p>1$, $f(x) \ge 0$ and $F(x)=\int_0^xf(y)\, dy$ then
$$\tag{H} \int_0^\infty \left(\frac{F(x)}{x}\right)^p\, dx < C\int_0^\infty (f(x))^p\, dx $$
unless $f \equiv 0$. The best possibile constant is $C=\left(\frac{p}{p-1}\right)^p$.
I would like to prove the statement in italic regarding the best constant. As already noted by Will Jagy here, the book suggests stress-testing the inequality with
$$f(x)=\begin{cases} 0 & 0\le x <1 \\ x^{-\alpha} & 1\le x \end{cases}$$
with $1/p< \alpha < 1$, then have $\alpha \to 1/p$. If I do so I get for $C$ the lower bound
$$\operatorname{lim sup}_{\alpha \to 1/p}\frac{\alpha p -1}{(1-\alpha)^p}\int_1^\infty (x^{-\alpha}-x^{-1})^p\, dx\le C$$
but now I find myself in trouble in computing that lim sup. Can someone lend me a hand, please?
UPDATE: A first attempt, based on an idea by Davide Giraudo, unfortunately failed. Davide pointed out that the claim would easily follow from
$$\tag{!!} \left\lvert \int_1^\infty (x^{-\alpha}-x^{-1})^p\, dx - \int_1^\infty x^{-\alpha p }\, dx\right\rvert \to 0\quad \text{as}\ \alpha \to 1/p. $$
But this is false in general: for example if $p=2$ we get
$$\int_1^\infty (x^{-2\alpha} -x^{-2\alpha} + 2x^{-\alpha-1}-x^{-2})\, dx \to \int_1^\infty(2x^{-3/2}-x^{-2})\, dx \ne 0.$$
|
We have the operator $T: L^p(\mathbb{R}^+) \to L^p(\mathbb{R}^+)$ with $p \in (1, \infty)$, defined by$$(Tf)(x) := {1\over x} \int_0^x f(t)\,dt.$$Calculate $\|T\|$.
| For the operator $T$ defined above, the operator norm is $p/(p - 1)$. We also note that this is a bounded operator for $p = \infty$, but not for $p = 1$.
Assume $1 < p < \infty$, and let $q$ be the dual exponent, $1/p + 1/q = 1$. By the theorem often referred to as "converse Hölder,"$$\|Tf\|_p = \sup_{\|g\|_q = 1}\left|\int_0^\infty (Tf)(x)g(x)\,dx\right|.$$So, assume that $\|g\|_q = 1$,\begin{align*} \left| \int_0^\infty (Tf)(x)g(x)\,dx\right| & \le \int_0^\infty |Tf(x)||g(x)|\,dx \le \int_0^\infty \int_0^x {1\over x}|f(t)||g(x)|\,dt\,dx \\ & = \int_0^\infty \int_0^1 |f(ux)||g(x)|\,du\,dx = \int_0^1 \int_0^\infty |f(ux)||g(x)|\,dx\,du \\ & \le \int_0^1 \left(\int_0^\infty |f(ux)|^pdx\right)^{1\over p} \left(\int_0^\infty |g(x)|^q dx\right)^{1\over q}du \\ & = \int_0^1 u^{-{1\over p}}\|f\|_p \|g\|_q du = {p\over{p - 1}}\|f\|_p.\end{align*}So that gives us that the operator norm is at most $p/(p - 1)$.
To see that this bound is sharp, note first that the simple test function $f = \mathbf{1}_{(0,1]}$ does not quite suffice: it has $\|f\|_p = 1$ and$$(Tf)(x) = \begin{cases} 1 & 0 < x \le 1 \\ {1\over x} & x > 1,\end{cases}$$so $\|Tf\|_p^p = 1 + \int_1^\infty x^{-p}\,dx = p/(p-1)$, i.e. $\|Tf\|_p = \left(p/(p-1)\right)^{1/p}$, which is strictly smaller than $p/(p-1)$. Instead, test on $f_A(x) = x^{-1/p}\mathbf{1}_{[1,A]}(x)$ for large $A$. Then $\|f_A\|_p^p = \log A$, while for $1 \le x \le A$,$$(Tf_A)(x) = {1\over x}\int_1^x t^{-1/p}\,dt = {p\over{p-1}}\,x^{-1/p}\left(1 - x^{-(p-1)/p}\right),$$so, using $(1-u)^p \ge 1 - pu$ for $0 \le u \le 1$,$$\|Tf_A\|_p^p \ge \left({p\over{p-1}}\right)^p \int_1^A x^{-1}\left(1 - x^{-(p-1)/p}\right)^p dx \ge \left({p\over{p-1}}\right)^p\left(\log A - C\right)$$for a constant $C$ independent of $A$. Letting $A \to \infty$ shows $\|T\| \ge p/(p-1)$, hence $\|T\| = p/(p-1)$.
The indicator example above also shows that we can have $f \in L^1$ but $Tf \notin L^1$, so we must restrict to $p > 1$. However, it is straightforward to show that $T$ is bounded from $L^\infty \to L^\infty$ with norm $1$. Note that the range of the operator in that case is contained within the bounded continuous functions on $(0, \infty)$.
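A numerical illustration of the sharpness argument (a sketch of my own, testing the family $f_A$ above for $p=2$, where $p/(p-1)=2$; the convergence is only logarithmic in $A$):

    import numpy as np
    from scipy.integrate import quad

    p = 2.0
    q = p / (p - 1)                      # the claimed operator norm
    for A in (1e1, 1e3, 1e6):
        # f_A(x) = x^(-1/p) on [1, A], with ||f_A||_p^p = log(A)
        Tf = lambda x: q * x ** (-1 / p) * (1 - x ** (-(p - 1) / p))
        norm_Tf_p, _ = quad(lambda x: Tf(x) ** p, 1, A, limit=500)
        print(A, (norm_Tf_p / np.log(A)) ** (1 / p))   # creeps up toward 2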
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 5,
"answer_id": 4
} |
Simple Logic Question I've very little understanding in logic, how can I simply show that this is true:
$$((X \wedge \neg Y)\Rightarrow \neg Z) \Leftrightarrow ((X\wedge Z)\Rightarrow Y)$$
Thanks a lot.
| You want to show that
$$((X \wedge \neg Y)\Rightarrow \neg Z) \Leftrightarrow ((X\wedge Z)\Rightarrow Y).$$
It is hard to know without context what "show" might mean. For example, we could be working with a specific set of axioms. Since an axiom system was not specified, I will assume we are looking for a precise but not axiom-based argument.
Truth tables are nicely mechanical, so they are a very good way to verify the assertion. Below we give a "rhetorical" version of the truth table argument. It probably shows that truth tables would have been a better choice! However, it is important to be able to scan a sentence and understand under what conditions that sentence is true.
We want to show that (a) if $(X \wedge \neg Y)\Rightarrow \neg Z$ is true then $(X\wedge Z)\Rightarrow Y$ is true and (b) if $(X\wedge Z)\Rightarrow Y$ is true then $(X \wedge \neg Y)\Rightarrow \neg Z$ is true.
We deal with (a). There are two ways for $(X \wedge \neg Y)\Rightarrow \neg Z$ to be true: (i) if $\neg Z$ is true or (ii) if $X \wedge \neg Y$ is false. In case (i), $Z$ is false, which implies that $X\wedge Z$ is false, which implies that
$(X\wedge Z)\Rightarrow Y$ is true. In case (ii), $X$ is false or $Y$ is true. If $X$ is false, then $X\land Z$ is false, and as in case (i), $(X\wedge Z)\Rightarrow Y$ is true. If $Y$ is true, then automatically $(X\wedge Z)\Rightarrow Y$ is true. We now have completed proving (a).
The proof for the direction (b) is very similar.
Another way: We can also use (Boolean) algebraic manipulation to show that each side is logically equivalent to $(\neg X \lor \neg Z)\lor Y$.
Note that
$(X \wedge \neg Y)\Rightarrow \neg Z$ is equivalent to $\neg(X\wedge \neg Y)\lor \neg Z$, which is equivalent to $(\neg X \lor Y)\lor \neg Z$, which is equivalent to $(\neg X \lor \neg Z)\lor Y$.
Note also that $(X\wedge Z)\Rightarrow Y$ is equivalent to $\neg(X\wedge Z)\lor Y$, which is equivalent to $(\neg X\lor \neg Z)\lor Y$. This completes the argument.
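Since truth tables are mechanical, the equivalence is also easy to verify by machine; here is a small Python sketch of my own:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    for X, Y, Z in product((False, True), repeat=3):
        lhs = implies(X and not Y, not Z)
        rhs = implies(X and Z, Y)
        assert lhs == rhs
    print("equivalent on all 8 rows")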
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/126940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 5
} |
The sum of an Irreducible Representation I was hoping someone could help me with the following question.
Let $\rho$ be an irreducible representation of a finite group $G.$ Prove
\begin{equation*}
\sum_{g \in G} \rho(g) = 0
\end{equation*}
unless $\rho$ is the trivial representation of degree $1$.
I think I have to use Schur's Lemma which states the following. Let $\rho: G \longrightarrow GL(n,\mathbb{C})$ be a representation of G. Then $\rho$ is irreducible if and only if every $n \times n$ matrix $A$ which satisfies
\begin{equation*}
\rho(g)A = A\rho(g) \ \ \ \forall \ g \in G
\end{equation*}
has the form $A = \lambda I_n \, $ with $\lambda \in \mathbb{C}$.
But I am really not sure how the lemma can be applied to this question?
| Let $t=\sum_{g\in G}\rho(g)$, which is a linear endomorphism of $V$. The subset $t(V)$ of $V$ is a $G$-submodule of $V$, as you can easily check. Moreover, $G$ acts trivially on all elements of $t(V)$.
If $V$ is irreducible, then either $t(V)=0$ or $t(V)=V$. In the first case, we have that in fact $t=0$. In the second one, we see that $G$ acts trivially on all of $V$, so $V$ must be of dimension $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Complex Analysis: Liouville's theorem Proof I'm being asked to find an alternate proof for the one commonly given for Liouville's Theorem in complex analysis by evaluating the following given an entire function $f$, and two distinct, arbitrary complex numbers $a$ and $b$:
$$\lim_{R\to\infty}\oint_{|z|=R} {f(z)\over(z-a)(z-b)} dz $$
What I've done so far is I've tried to apply the cauchy integral formula, since there are two singularities in the integrand, which will fall in the contour for $R$ approaches infinity. So I got:
$$2{\pi}i\biggl({f(a)\over a-b}+{f(b)\over b-a}\biggr)$$
Which equals
$$2{\pi}i\biggl({f(a)-f(b)\over a-b}\biggr)$$
and I got stuck here. I don't quite see how to get from this, plus $f(z)$ being bounded and analytic, to the conclusion that $f(z)$ is a constant function. Ugh, the more well-known proof is so much simpler -.-
Any suggestions/hints? Am I at least on the right track?
| You can use the $ML$ inequality (with boundedness of $f$) to show $\displaystyle \lim_{R\rightarrow \infty} \oint_{|z|=R} \frac{f(z)}{(z-a)(z-b)}dz = 0$.
Combining this with your formula using the Cauchy integral formula, you get $$ 0 = 2\pi i\bigg(\frac{f(b)-f(a)}{b-a}\bigg)$$ from which you immediately conclude $f(b) = f(a)$. Since $a$ and $b$ are arbitrary, this means $f$ is constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Integration Problem Proof ($\sin x$) Problem: Integration of $\displaystyle\int_{-1}^1 {\sin x\over 1+x^2} \; dx = 0 $
(according to WolframAlpha Definite Integral Calculator)
But I don't understand how. I tried to prove using integration by parts.
Here's the work:
$$
\int_{-1}^1 {\sin x\over 1+x^2} \; dx = \int_{-1}^1 {\sin x}{1\over 1+x^2} \; dx
$$
Let $u = \sin x,\quad du = \cos x\; dx\;$ and $v = \tan^{-1}x,\quad dv = {1\over 1+x^2}dx\;$.
So
$$
\int_{-1}^1 u dv = \left[uv\right]_{-1}^1 - \int_{-1}^1 v du =\left[ \sin x (\tan^{-1}x)\right]_{-1}^1 - \int_{-1}^1 \tan^{-1}x \cos x\; dx.
$$
Next let $u = \tan^{-1}x, du = {1\over 1+x^2}$ and $dv = \cos x, v = \sin x$...
I stopped here, because I feel like I'm going in a circle with this problem. What direction would I take to solve this because I don't know whether integration by parts is the way to go? Should I use trig substitution?
Thanks.
| You don’t have to do any actual integration. Let $$f(x)=\frac{\sin x}{1+x^2}\;$$ then $$f(-x)=\frac{\sin(-x)}{1+(-x)^2}=\frac{-\sin x}{1+x^2}=-f(x)\;,$$ so $f(x)$ is an odd function. The signed area between $x=-1$ and $x=0$ is therefore just the negative of the signed area from $x=0$ to $x=1$, and the whole thing cancels out.
In more detail, let $$A=\int_0^1 f(x) dx=\int_0^1\frac{\sin x}{1+x^2} dx\;,$$ and let $$B=\int_{-1}^0 f(x) dx=\int_{-1}^0\frac{\sin x}{1+x^2} dx\;.\tag{1}$$ Now substitute $u=-x$ in $(1)$: $f(u)=f(-x)=-f(x)$, $du=-dx=(-1)dx$ so $dx=-du$, and $u$ runs from $1$ to $0$, so
$$B=\int_1^0 -f(x)(-1)dx=\int_1^0f(x)dx=-\int_0^1f(x)dx=-A\;.$$
Thus, $$\int_{-1}^1 f(x)dx=A+B=A-A=0\;.$$
Note that the specific function $f$ didn’t matter: we used only the fact that $f$ is an odd function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127122",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proving an asymptotic lower bound for the integral $\int_{0}^{\infty} \exp\left( - \frac{x^2}{2y^{2r}} - \frac{y^2}{2}\right) \frac{dy}{y^s}$ This is a follow up to the great answer posted to https://math.stackexchange.com/a/125991/7980
Let $ 0 < r < \infty, 0 < s < \infty$ , fix $x > 1$ and consider the integral
$$ I_{1}(x) = \int_{0}^{\infty} \exp\left( - \frac{x^2}{2y^{2r}} - \frac{y^2}{2}\right) \frac{dy}{y^s}$$
Fix a constant $c^* = r^{\frac{1}{2r+2}} $ and let $x^* = x^{\frac{1}{1+r}}$.
Write $f(y) = \frac{x^2}{2y^{2r}} + \frac{y^2}{2}$ and note $c^* x^*$ is a local minimum of $f(y)$ so that it is a global max for $-f(y)$ on $[0, \infty)$.
We are trying to determine if there exist upper and lower bounds of the same order for large x. The coefficients in our bounds can be composed of rational functions in x or even more complicated as long as they do not have exponential growth. The Laplace expansion presented in the answer to the question cited above gives upper bounds.
In particular can we prove a specific lower bound:
Does there exist a positive constant $c_1(r,s)$ and such that for x>1 we have
$$I_1 (x) > \frac{c_1(r,s)}{x} \exp( - f(c^* x^*))$$ (it is ok in the answer if the function $\frac{1}{x}$ in the upper bound is replaced by any rational function or power of $x$)
| I think that if you make the change of variables $y = \lambda z$ with $\lambda = x^{\frac 1 {r+1}}$, i.e. $\frac {x^2} {\lambda^{2r}} = \lambda^2$, you convert it into $\lambda ^{1-s} \int e^{-\lambda^2 \frac 12(z^{-2r} + z^2)} \frac {dz}{z^s}$, which looks like a fairly normal Laplace-type expansion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
two subgroups of $S_{n}$ and $S_{m}$ If $H\subseteq S_{n}$ and $K\subseteq S_{m}$ how can I then show that I can think of $H\times K$ as it was a subgroup of $S_{m+n}$?
| In hopes of getting this off the Unanswered list, here’s a hint expanding on Jyrki’s first comment.
$K$ is a group of permutations of the set $\{1,\dots,m\}$, so each $k\in K$ is a bijection $$k:\{1,\dots,m\}\to\{1,\dots,m\}\;.$$ For each $k\in K$ let $$\hat k:\{n+1,\dots,n+m\}\to\{n+1,\dots,n+m\}:n+i\mapsto n+k(i)\;,$$ and let $\widehat K=\{\hat k:k\in K\}$.
Show that $\widehat K$ is a group of permutations of $\{n+1,\dots,n+m\}$ and is isomorphic to $K$.
For each $\langle h,k\rangle\in H\times K$ let $$g_{\langle h,k\rangle}:\{1,\dots,n+m\}\to\{1,\dots,n+m\}:i\mapsto\begin{cases}
h(i),&\text{if }1\le i\le n\\
\hat k(i),&\text{if }n+1\le i\le n+m\;,
\end{cases}$$
and let $G=\{g_{\langle h,k\rangle}:\langle h,k\rangle\in H\times K\}$.
Show that $G$ is a subgroup of $S_{n+m}$ and is isomorphic to $H\times K$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Root Calculation by Hand Is it possible to calculate and find the solution of $ \; \large{105^{1/5}} \; $ without using a calculator? Could someone show me how to do that, please?
Well, when I use a Casio scientific calculator, I get this answer: $105^{1/5}\approx " 2.536517482 "$. With WolframAlpha, I can get an even more accurate result.
| Another way of doing this would be to use logarithms, just like Euler did:
$$
105^{1/5} = \mathrm{e}^{\tfrac{1}{5} \log (105)} = \mathrm{e}^{\tfrac{1}{5} \log (3)}
\cdot \mathrm{e}^{\tfrac{1}{5} \log (5)} \cdot \mathrm{e}^{\tfrac{1}{5} \log (7)}
$$
Use $$\log(3) = \log\left(\frac{2+1}{2-1}\right) = \log\left(1+\frac{1}{2}\right)-\log\left(1-\frac{1}{2}\right) = \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{2^{2k+1}} \approx 1 + \frac{1}{12} + \frac{1}{80} + \frac{1}{448} = 1 + 0.08333 + 0.0125 + 0.0022 \approx 1.0980$$
$$
\log(5) = \log\frac{4+1}{4-1} + \log(3) = \log(3) + \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{4^{2k+1}} = \log(3) + \frac{1}{2} + \frac{1}{96} +\frac{1}{2560}
$$
$$
\log(7) = \log\frac{8-1}{8+1} + 2 \log(3) = 2 \log(3) - \sum_{k=0}^\infty \frac{2}{2k+1} \cdot \frac{1}{8^{2k+1}} = 2 \cdot \log(3) - \frac{1}{4} - \frac{1}{768}
$$
Thus
$$
\frac{1}{5} \left( \log(3) + \log(5) + \log(7)\right) = \frac{4}{5} \log(3) + \frac{1}{5} \left( \frac{1}{2} - \frac{1}{4} + \frac{1}{96} - \frac{1}{768} + \frac{1}{2560} \right) = \frac{4}{5} \log(3) + \frac{1993}{38400} \approx 0.9303 = 1-0.0697
$$
Now
$$
\exp(0.9303) = \mathrm{e} \cdot \mathrm{e}^{-0.0697} \approx \mathrm{e} \cdot \left( 1 - 0.0697 \right) = 2.71828 \cdot 0.9303 \approx 2.5288
$$
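For comparison, a short Python sketch of the same scheme (the helper name `atanh_series` is ad hoc); it keeps the same truncations for the logarithms but uses the exact exponential instead of the linear approximation $\mathrm{e}^{-x}\approx 1-x$, which is why it lands closer to the true value:
```python
from math import exp

# Truncated series for log((1+x)/(1-x)) = sum 2/(2k+1) * x^(2k+1).
def atanh_series(x, terms):
    return sum(2.0 / (2*k + 1) * x**(2*k + 1) for k in range(terms))

log3 = atanh_series(1/2, 4)            # ~1.09806
log5 = log3 + atanh_series(1/4, 3)     # log(5/3) + log 3
log7 = 2*log3 - atanh_series(1/8, 2)   # log(7/9) + log 9

fifth_root = exp((log3 + log5 + log7) / 5)
print(fifth_root)      # ~2.5354
print(105 ** 0.2)      # true value: 2.53651748...
```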
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92",
"answer_count": 5,
"answer_id": 3
} |
Prove that $||x|-|y||\le |x-y|$ I've seen the full proof of the Triangle Inequality
\begin{equation*}
|x+y|\le|x|+|y|.
\end{equation*}
However, I haven't seen the proof of the reverse triangle inequality:
\begin{equation*}
||x|-|y||\le|x-y|.
\end{equation*}
Would you please prove this using only the Triangle Inequality above?
Thank you very much.
| For all $x,y\in \mathbb{R}$, the triangle inequality gives
\begin{equation}
|x|=|x-y+y| \leq |x-y|+|y|,
\end{equation}
\begin{equation}
|x|-|y|\leq |x-y| \tag{1}.
\end{equation}
Interchaning $x\leftrightarrow y$ gives
\begin{equation}
|y|-|x| \leq |y-x|
\end{equation}
which when rearranged gives
\begin{equation}
-\left(|x|-|y|\right)\leq |x-y|. \tag{2}
\end{equation}
Now combining $(2)$ with $(1)$, gives
\begin{equation}
-|x-y| \leq |x|-|y| \leq |x-y|.
\end{equation}
This gives the desired result
\begin{equation}
\left||x|-|y|\right| \leq |x-y|. \blacksquare
\end{equation}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "118",
"answer_count": 7,
"answer_id": 0
} |
If a holomorphic function $f$ has modulus $1$ on the unit circle, why does $f(z_0)=0$ for some $z_0$ in the disk? I don't understand the final step of an argument I read.
Suppose $f$ is holomorphic in a neighborhood containing the closed unit disk, nonconstant, and $|f(z)|=1$ when $|z|=1$. There is some point $z_0$ in the unit disk such that $f(z_0)=0$.
By the maximum modulus principle, it follows that $|f(z)|<1$ in the open unit disk. Since the closed disk is compact, $|f|$ attains a minimum on the closed disk, necessarily in the interior in this situation.
But why does that imply that $f(z_0)=0$ for some $z_0$? I'm aware of the minimum modulus principle, that the modulus of a holomorphic, nonconstant, nonzero function on a domain does not obtain a minimum in the domain. But I'm not sure if that applies here.
| If not, consider $g(z)=\frac 1{f(z)}$ on the closure of the unit disc. We have $|g(z)|=1$ if $|z|=1$ and $|g(z)|>1$ if $|z|<1$. Since $g$ is holomorphic on the unit disk, the maximum modulus principle yields a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 1,
"answer_id": 0
} |
Is the solution to a driftless SDE with Lipschitz variation a martingale? If $\sigma$ is Lipschitz, with Lipschitz constant $K$, and $(X_t)_{t\geq 0}$ solves
$$dX_t=\sigma(X_t)dB_t,$$ where $B$ is a Brownian motion, then is $X$ a martingale? I'm having difficulty getting past the self-reference here. I tried showing that, for $t\geq 0$, $\mathbb{E}[X]_t$ is finite. Perhaps Gronwall's lemma is needed?
Thank you.
| Yes.
$$[X]_t = \int_0^t\sigma(X_u)^2du,$$
so
$$\mathbb{E}([X]_t) \le \int_0^t \mathbb{E}\left[\left(|\sigma(x_0)| + K\,|X_u-x_0|\right)^2\right]du,$$
using the Lipschitz bound $|\sigma(y)|\le|\sigma(x_0)|+K|y-x_0|$ (here $x_0=X_0$).
$X$ is locally bounded in $L^2$. See, for example, Karatzas and Shreve equation 5.2.15 (p. 289). So it follows easily that $\mathbb{E}([X]_t)<\infty$, for each $t$. Hence $X$ is a martingale.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the radius of the circle? Two circles of equal radii are drawn, without any overlap, in a semicircle of radius 2 cm.
If these are the largest possible circles that the semicircle can accommodate, then
what is the radius of each circle?
Thanks in advance.
| Due to symmetry two circles in a semicircle is the same problem as one in a quarter circle, or four in a full circle. If we look at a quarter circle originating at the origin, with radius $r$ and completely contained in the first quadrant, then the circle has to be centered at a point $(c,c)$, touching the x axis, the y axis and the quarter circle at $(r/\sqrt{2},r/\sqrt{2})$.
To find out how large the circle may be, without crossing the x and y axes, consider the distance from the center of this circle to one axis being equal to the distance to the point $(r/\sqrt{2},r/\sqrt{2})$
$$ c=\sqrt{2\left(\frac{r}{\sqrt{2}}-c\right)^2}=\sqrt{2} \left(\frac{r}{\sqrt{2}}-c\right)$$
$$ \Rightarrow c = \frac{r}{1+\sqrt{2}} $$
For $r=2\,\text{cm}$, $c\approx 0.83\,\text{cm} $.
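A quick numerical sanity check (illustrative only), using the equivalent tangency condition that the center $(c,c)$ lies at distance $r-c$ from the origin:
```python
from math import sqrt, hypot

r = 2.0                    # radius of the semicircle
c = r / (1 + sqrt(2))      # claimed radius of each inscribed circle

# Tangent to both axes at distance c, tangent to the arc from inside
# when the distance from the origin to (c, c) equals r - c.
assert abs(hypot(c, c) - (r - c)) < 1e-12
print(c)                   # 0.8284271...
```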
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving that a set is countable by finding a bijection
$Z$ is the set of non-negative integers including $0$. Show that $Z \times Z \times Z$ is countable by constructing the actual bijection $f: Z\times Z\times Z \to \mathbb{N}$ ($\mathbb{N}$ is the set of all natural numbers). There is no need to prove that it is a bijection.
After searching for clues on how to solve this, I found $(x+y-1)(x+y-2)/2+y$, but that is only two-dimensional and does not include $0$. Any help on how to solve this?
| If you don't need an actual "formula", then you can write
$$
\mathbb{Z}\times\mathbb{Z}\times\mathbb{Z} = \bigcup_{n=0}^\infty \{ (x,y,z)\in \mathbb{Z}^3 : |x|+|y| +|z| = n \}
$$
and then rely on the fact that each term in this union is a finite set.
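To make the enumeration concrete, here is a Python sketch (function names are ad hoc) that walks through $\mathbb{Z}^3$ shell by shell; reading off the shells one after another is precisely a bijection with $\mathbb{N}$:
```python
from itertools import count, islice

def shell(n):
    """All (x, y, z) in Z^3 with |x| + |y| + |z| == n: a finite set."""
    pts = []
    for x in range(-n, n + 1):
        for y in range(-(n - abs(x)), n - abs(x) + 1):
            rem = n - abs(x) - abs(y)          # |z| is forced
            for s in ({0} if rem == 0 else {-1, 1}):
                pts.append((x, y, s * rem))
    return pts

def enumerate_z3():
    """List Z^3 shell by shell: an explicit bijection with N."""
    for n in count(0):
        yield from shell(n)

print(list(islice(enumerate_z3(), 10)))   # the first 10 triples
```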
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
} |
Do countable unital rings with uncountably many distinct right ideals have uncountably many maximal right ideals? Suppose we are given a countable unital ring $R$ with uncountably many distinct right ideals. Does it follow from this that $R$ has uncountably many maximal right ideals?
| No.
Take $R= \mathbb Q[X_0,X_1,...,X_n,...]/\langle X_iX_j\mid i,j\in \mathbb N\rangle=\mathbb Q[x_0,x_1,...,x_n,...]$
The only maximal ideal (actually only prime ideal !) is
$\langle x_0,x_1,...,x_n,...\rangle$ but $R$ has a family of distinct ideals indexed by the uncountably many subsets $P\subset \mathbb N$, namely $$ I_P=\langle x_i\mid i\in P\rangle=\operatorname {vect}_\mathbb Q (x_i\mid i\in P)$$
Edit
I hadn't seen Mariano's answer when I posted mine a few minutes later, but our rings are actually isomorphic : if his $V$ has basis $(v_i)_{i\in \mathbb N}$ over $\mathbb Q$ we have an isomorphism (of $\mathbb Q$-algebras even)
$$ \mathbb Q\oplus V \stackrel {\cong}{\to} \mathbb Q[x_0,x_1,...,x_n,...]:(q,\sum q_iv_i)\mapsto q+\sum q_ix_i$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Use of determinants I have been teaching myself maths (primarily calculus) throughout this and last year, and was stumped with the use of determinants. In the math textbooks I have, they simply show how to compute a determinant and some properties about them (i.e. Cramer's Rule), but not why they are used and how they work.
So my question is, how do they work and why/when would I know to use them to help solve something?
| Here's another example of use of determinants:
Let $F$ be a field, let $K$ be a field containing $F$, and finite-dimensional as a vector space over $F$. Let $\alpha$ be an element of $K$. The map $T:K\to K$ given by $T(x)=\alpha x$ is a linear transformation. Given a basis for $K$ as a vector space over $F$, one can find a matrix $A$ representing $T$. The matrix depends on the basis chosen, but its determinant does not; it only depends on $\alpha$, and it's called the norm of $\alpha$ (strictly speaking, the norm of $\alpha$ from $K$ to $F$). And the norm is a very important concept in Field Theory and Algebraic Number Theory.
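As a concrete illustration (a sketch; the choice $K=\mathbb{Q}(\sqrt{2})$, $F=\mathbb{Q}$ with basis $\{1,\sqrt{2}\}$ is just an example), the determinant of the multiplication-by-$\alpha$ matrix recovers the familiar norm $a^2-2b^2$:
```python
import sympy as sp

a, b = sp.symbols('a b', rational=True)

# Multiplication by alpha = a + b*sqrt(2) in the basis {1, sqrt(2)}:
#   1       -> a*1 + b*sqrt(2)      (first column)
#   sqrt(2) -> 2b*1 + a*sqrt(2)     (second column)
A = sp.Matrix([[a, 2*b],
               [b, a]])

print(A.det())   # a**2 - 2*b**2, the norm of alpha from K to Q
```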
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 3,
"answer_id": 2
} |
What is standard coordinates? What is meant by the word standard in "Euclidean space is special in having a standard set of global coordinates."?
Then "A manifold in general does not have standard coordinates"
This makes me think standard means something else then 'most common used'.
Is $\mathbb{R}^n$ special in any sense, as a manifold?
This is from Loring W. Tu - Introduction to manifolds
| Usually when we write "$\mathbb{R}^n$" we are thinking of an explicit description of it as $n$-tuples of real numbers. This description "is" the standard set of global coordinates, namely the coordinate functions $x_i$. But this description isn't part of $\mathbb{R}^n$ "as a manifold", in that it contains more information than just the diffeomorphism type of this manifold. What I mean by this is that if I give you an abstract manifold which is diffeomorphic to $\mathbb{R}^n$ -- but I don't tell you an explicit diffeomorphism -- then there isn't a "standard" set of coordinates for it. And if you have a manifold which isn't diffeomorphic to any $\mathbb{R}^n$, then there is no set of global coordinates for it (otherwise you could use those coordinates to produce a diffeomorphism to $\mathbb{R}^n$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/127981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the correct terminology for Permutation & Combination formulae that allow repeating elements. Let me explain by example.
Q: Given four possible values, {1,2,3,4}, how many 2 value permutations are there ? | In "permutations", the order matters. In "combinations", the order does not matter.
The basic rules of counting are the Product Rule and the Sum Rule. See here, for example.
*
*Permutations with repetitions allowed:
If you have $n$ objects, and you want to count how many permutations of length $m$ there are: there are $n$ possibilities for the first term, $n$ for the second term, $n$ for the third term, etc. So the total number is $n^m$.
*Permutations without repetitions allowed:
If you have $n$ objects, and you want to count permutations of length $m$ with no repetitions (sometimes called "no replacement"): there are $n$ possibilities for the first term, $n-1$ for the second (you've used up one), $n-2$ for the third, etc. So the total number is
$$P^n_m = n(n-1)(n-2)\cdots(n-m+1) = \frac{n!}{(n-m)!}.$$
*Combinations without repetitions allowed:
In combinations we don't care about the order. If you select $m$ objects from $n$ possibilities and they are all distinct, then there are $P^m_m = m!$ ways of ordering them. So if you count permutations instead, you will count each distinct choice of elements exactly $m!$ times. Therefore, $\binom{n}{m}$ (also denoted $nCm$, both read "$n$ choose $m$") is given by:
$$\binom{n}{m} = \frac{1}{m!}P^n_m = \frac{n!}{m!(n-m)!}.$$
*Combinations with repetitions allowed:
You have $n$ objects, and you want to select $m$ of them, allowing repetitions. To simplify things, let us assume our $n$ objects are precisely the numbers $1$, $2,\ldots,n$. Given a selection of $m$ objects with possible repetitions, $a_1,\ldots,a_m$, order them in ascending order, $a_1\leq a_2\leq\cdots\leq a_m$, and think of them as an $m$-tuple: $(a_1,a_2,\ldots,a_m)$. We want to count the number of such tuples with entries between $1$ and $n$, repetitions allowed.
Let us associate to each such tuple an $m$-tuple with no repetitions and with entries between $1$ and $n+m-1$ as follows: given $(a_1,a_2,\ldots,a_m)$, with $1\leq a_1\leq a_2\leq\cdots\leq a_m\leq n$, we associate the tuple $(a_1,a_2+1,a_3+2,\ldots,a_m+m-1)$. Note that
$$1\leq a_1\lt a_2+1\lt\cdots \lt a_m+m-1\leq n+m-1.$$
Conversely, given a tuple $(b_1,b_2,\ldots,b_m)$ with $1\leq b_1\lt b_2\lt\cdots\lt b_m\leq n+m-1$, we have $i\leq b_i\leq n+i-1$, so $1\leq b_i-(i-1)\leq n$; thus this tuple "comes from" the tuple $(b_1,b_2-1,b_3-2,\ldots,b_m-m+1)$, which is a nondecreasing tuple with values between $1$ and $n$; i.e., it is one of our original $(a_1,\ldots,a_m)$s.
That means that counting the nondecreasing tuples $(a_1,\ldots,a_m)$ with $1\leq a_1\leq\cdots\leq a_m\leq n$ is equivalent to counting the number of strictly increasing tuples $(b_1,b_2,\ldots,b_m)$ with $1\leq b_1\lt b_2\lt\cdots\lt b_m\leq n+m-1$. This is, in turn, equivalent to counting the number of ways of selecting $m$ objects from among $\{1,2,\ldots,n+m-1\}$, with no repetitions allowed. This is just $\binom{n+m-1}{m}$.
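A brute-force check of this last count in Python (illustrative only):
```python
from itertools import combinations_with_replacement
from math import comb

# Multisets of size m drawn from {1, ..., n}, counted two ways.
for n in range(1, 6):
    for m in range(5):
        brute = sum(1 for _ in combinations_with_replacement(range(1, n + 1), m))
        assert brute == comb(n + m - 1, m)
print("brute-force count matches C(n+m-1, m)")
```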
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Where is the highest point of $f(x)=\sqrt[x]{x}$ on the $x$-axis? I mean, the highest point of $f(x)=\sqrt[x]{x}$ occurs at $x=e$.
I'm trying to figure out how to prove that, or how it can be calculated.
| Well, write $$f(x) = e^{\frac{1}{x}\ln(x)}$$ and differentiate and set equal to 0 to get:
$$\dfrac{d}{dx}(e^{\frac{1}{x}\ln(x)})=\bigg(-\frac{1}{x^2}\ln(x)+\frac{1}{x^2}\bigg)e^{\frac{1}{x}\ln(x)}=0$$
Which implies (after dividing by the exponential term) that
$$\frac{1}{x^2}(1-\ln(x))=0$$
Whence $1=\ln(x)$ or $x=e$.
Now you just need to check whether this gives you a local minimum or maximum via the second derivative.
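A quick numerical check that $x=e$ is indeed a maximum (illustrative only):
```python
from math import e

f = lambda x: x ** (1 / x)

# f(e) beats nearby points and the integer candidates on either side.
assert f(e) > f(e - 0.01) and f(e) > f(e + 0.01)
assert f(e) > f(2) and f(e) > f(3)
print(f(e))   # 1.44466786... = e**(1/e)
```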
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Prove that the Lie derivative of a vector field equals the Lie bracket: $\frac{d}{dt} ((\phi_{-t})_* Y)|_{t=0} = [X,Y]$ Let $X$ and $Y$ be vector fields on a smooth manifold $M$, and let $\phi_t$ be the flow of $X$, i.e. $\frac{d}{dt} \phi_t(p) = X_p$. I am trying to prove the following formula:
$\frac{d}{dt} ((\phi_{-t})_* Y)|_{t=0} = [X,Y],$
where $[X,Y]$ is the commutator, defined by $[X,Y] = X\circ Y - Y\circ X$.
This is a question from these online notes: http://www.math.ist.utl.pt/~jnatar/geometria_sem_exercicios.pdf .
| Here is a simple proof which I found in the book "Differentiable Manifolds: A Theoretical Phisics Approach" of G. F. T. del Castillo. Precisely it is proposition 2.20.
We denote $(\mathcal{L}_XY)_x=\frac{d}{dt}(\phi_t^*Y)_x|_{t=0},$ where $(\phi^*_tY)_x=(\phi_{t}^{-1})_{*\phi_t(x)}Y_{\phi_t(x)}.$
Recall also that $(Xf)_x=\frac{d}{dt}(\phi_t^*f)_x|_{t=0}$ where $\phi_t^*f=f\circ\phi_t$ and use that $(\phi^*_tYf)_x=(\phi^*_tY)_x(\phi^*_tf).$
We claim that $(\mathcal{L}_XY)_x=[X,Y]_x.$
Proof:$$(X(Yf))_x=\lim_{t\to 0}\frac{(\phi_t^*Yf)_x-(Yf)_x}{t}=\lim_{t\to 0}\frac{(\phi^*_tY)_x(\phi^*_tf)-(Yf)_x}{t}=\star$$
Now we add and subtract $(\phi^*_tY)_xf.$ Hence
$$\star=\lim_{t\to 0}\frac{(\phi^*_tY)_x(\phi^*_tf)-(\phi^*_tY)_xf+(\phi^*_tY)_xf-Y_xf}{t}=$$
$$=\lim_{t\to 0}(\phi^*_tY)_x\frac{(\phi^*_tf)-f}{t}+\lim_{t\to 0}\frac{(\phi^*_tY)_x-Y_x}{t}f=Y_xXf+(\mathcal{L}_XY)_xf.$$
So we get that $XY=YX+\mathcal{L}_XY.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 5,
"answer_id": 0
} |
Non-principal Ideals in a Complete Lattice Given a complete lattice, is it possible to have order ideals which are not principal? Can one not always just join together every element of the ideal to get its maximal, generating element? What about for frames?
Thanks!
| Short answer is no. In order to get a counterexample, consider the Boolean algebra of subsets of the natural numbers $\mathcal P(\mathbb N)$ and let $FIN$ denote the ideal of finite subsets of $\mathbb N$. Observe that $\mathcal P(\mathbb N)$ is a complete lattice and $FIN$ is a not principal ideal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Ideal not finitely generated Let $R=\{a_0+a_1 X+a_2 X^2 +\cdots + a_n X^n\}$, where $a_0$ is an integer and the rest of the coefficients are rational numbers.
Let $I=\{a_1 X+a_2 X^2+\cdots +a_n X^n\}$ where all of the coefficients are rational numbers.
Prove that $I$ is an ideal of $R$.
Show further that $I$ is not finitely generated as an $R$-module.
I have managed to prove that $I$ is an ideal of $R$, by showing that $I$ is the kernel of the ring homomorphism $R\to\mathbb{Z}$ that maps a polynomial to its constant term. Hence $I$ is an ideal of $R$.
However, I am stuck at showing $I$ is not finitely generated as an $R$-module.
Sincere thanks for any help.
| This ring is an example of a Bézout domain that is not a unique factorization domain (since not all nonzero noninvertible elements decompose into irreducibles in the first place; for instance $X$ does not). The wikipedia page gives a proof of the Bézout property, namely that any finitely generated ideal is in fact a principal ideal. So if $I$ were finitely generated, it would have a single generator. But it cannot, since an element without constant term and with coefficient $c$ of $X$ only generates elements whose coefficient of $X$ is an integer multiple of $c$. (One also sees directly that a finite set of elements of $I$ only generate elements whose coefficient of $X$ is an integer multiple of the gcd of their coefficients of $X$ which cannot be all of $\mathbb Q$, as is indicated in the anwers by Alexander Thumm and Alex Becker.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Help Calculating a certain integral I am studying an article, and I got stuck on the calculation of an integral. Whatever I do, I do not get the result mentioned there. The notation: $u,\tilde u$ are functions defined on $\Omega \subset \Bbb{R}^N$ with values in $\Bbb{R}^n$:
$$ \eta_\varepsilon = \int_\Omega \tilde u(x)dx -\int_\Omega u(x)dx \in \Bbb{R}^n $$
The idea is that we try to correct $\tilde u$ such that it has the same integral as $u$. So pick a ball $B_\varepsilon=B(x_0,\varepsilon^{1/N})$ contained in the interior of $\Omega$. We define the function
$$ v(x)=\begin{cases} \tilde u(x) & x \notin B_\varepsilon \\
\tilde u(x)+h_\varepsilon(1-\varepsilon^{-1/N}|x-x_0|) & x \in B_\varepsilon \end{cases}$$
where $h_\varepsilon \in \Bbb{R}^n$ should be chosen such that $\int_\Omega v=\int_\Omega u$, i.e.
$$ \int_{B_\varepsilon} h_\varepsilon(1-\varepsilon^{-1/N}|x-x_0|)dx=-\eta_\varepsilon $$
Since $h_\varepsilon$ is a constant, it should be enough to find the value of
$$(I) \ \ \ \ \int_{B_\varepsilon} (1-\varepsilon^{-1/N}|x-x_0|)dx $$
which I calculated with the coarea formula and got
$$ |B_\varepsilon|-\varepsilon^{-1/N}\int_0^{\varepsilon^{1/N}}r^N \cdot N \omega_N dr=\frac{ \varepsilon\omega_N}{N+1}$$
where $\omega_N$ is the volume of the unit ball in $\Bbb{R}^N$. This would lead to $$ h_\varepsilon=-\eta_\varepsilon \frac{N+1}{\varepsilon \omega_N}$$
However, in the article the answer is
$$ h_\varepsilon = -\eta_\varepsilon \frac{N}{ \varepsilon^{\frac{N-1}{N}}\omega_{N-1}}$$
This seems wrong to me. The different constants are not a problem but some facts presented in the following of this rely crucially on the fact that the power of $\varepsilon$ in $h_\varepsilon$ is $\frac{1-N}{N}$ and not $-1$ as I got.
Is my calculation of the integral $(I)$ correct, or I missed something?
| Independent of the details of your calculation, the article's answer can't be right since $|B_\epsilon|$ clearly goes as $\epsilon\omega_N$ and not as $\epsilon^{(N-1)/N}\omega_{N-1}$. It looks as if they were calculating the integral over the sphere rather than the ball, but I don't see why they would do that. Are you sure that $\Omega\subset\mathbb R^N$? Sometimes $\Omega$ is used to denote solid angle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to determine the number of directed/undirected graphs? I'm kind of stuck on this homework problem, could anyone give me a springboard for it? If we have $n\in\mathbb{Z}^+$, and we let the set of vertices $V$ be a set of size $n$, how can we determine the number of directed graphs/undirected graphs/graphs with loops etc.? Is there a formula for this? I feel like it can be done using combinatorics but I can't quite figure it out. Any ideas?
Thanks!
| A start: We will show how to count labelled, loopless, undirected graphs. There are $\binom{n}{2}$ ways to choose a set $\{u,v\}$ of two vertices. For every such set, we say yes or no depending on whether we have decided to join $u$ and $v$ by an edge. Alternately, but somewhat less concretely, let $P$ be the set of all (unordered) pairs. This set $P$ has cardinality $\binom{n}{2}$. To specify a loopless undirected graph, we choose a subset of $P$ and connect any unordered pair in that subset by an edge. How many subsets does $P$ have?
To extend to graphs with possible loops (but at most one per vertex) there is a similar yes/no process.
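A small Python sketch of these counts (function names are ad hoc); with loops allowed there is one extra yes/no choice per vertex:
```python
from itertools import combinations

def simple_graphs(n):
    pairs = list(combinations(range(n), 2))   # the set P of unordered pairs
    return 2 ** len(pairs)                    # one graph per subset of P

def graphs_with_loops(n):
    return simple_graphs(n) * 2 ** n          # extra yes/no choice per vertex

for n in range(1, 6):
    print(n, simple_graphs(n), graphs_with_loops(n))
```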
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Prove Algebraic Identity
Possible Duplicate:
Value of $\sum\limits_n x^n$
Given $a\in\mathbb{R}$ and $0<a<1$
let $(X_n)$ be a sequence defined: $X_n=1+a+a^2+...+a^n$, $\forall n\in\mathbb{N}$.
How do I show that $X_n=\frac{1-a^{n+1}}{1-a}$
Thanks.
| $$\begin{align*}
(1-a)(1+a+a^2+\dots+a^n)&=(1+a+\dots+a^n)-a(1+a+\dots+a^n)\\
&=(1+\color{red}{a+\dots+a^n})-(\color{red}{a+a^2+\dots+a^n}+a^{n+1})\\
&=1-a^{n+1}
\end{align*}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Limit and continuity For this question, should I use the differentiation method or the integration method?
$\lim_{x\to \infty} (\frac{x}{x+2})^{x/8}$
This is what I got so far:
Note: $\lim \limits_{n\to\infty} [1 + (a/n)]^n = e^{a} \quad (1)$
$$
L = \lim \left[\frac{x}{x+2}\right]^{x/8} = \lim\left[\frac{1}{\frac{x+2}{x}}\right]^{x/8} =\frac{1}{\left(\lim\, [1 + (2/x)]^x\right)^{1/8}}
$$
but I'm not sure where to go from there.
| You are probably intended to use the fact that
$$\lim_{t\to\infty}\left(1+\frac{1}{t}\right)^t=e.$$
A manipulation close to what you were doing gets us there.
We have
$$\left(1+\frac{2}{x}\right)^{x/8}=\left(\left(1+\frac{2}{x}\right)^{x/2}\right)^{1/4}.$$
Let $t=\frac{x}{2}$. Then $\frac{2}{x}=\frac{1}{t}$. Note that as $x\to\infty$, we have $t\to\infty$. It follows that
$$\lim_{x\to \infty}\left(1+\frac{2}{x}\right)^{x/8}=\lim_{t\to\infty}\left(\left(1+\frac{1}{t}\right)^{t}\right)^{1/4}=e^{1/4}.$$
So our limit is $1/e^{1/4}$, or equivalently $e^{-1/4}$.
If you are allowed to use the fact that in general $\lim_{x\to\infty}\left(1+\frac{a}{x}\right)^x=e^a$, then you can simply take $a=2$, and conclude that
$$\lim \left[\frac{x}{x+2}\right]^{x/8} = \lim\left[\frac{1}{\frac{x+2}{x}}\right]^{x/8} =\frac{1}{\left(\lim\, [1 + (2/x)]^x\right)^{1/8}}=\frac{1}{(e^2)^{1/8}}=e^{-1/4}.$$
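A quick numerical check (illustrative only) that the expression approaches $e^{-1/4}\approx 0.7788$:
```python
from math import exp

f = lambda x: (x / (x + 2)) ** (x / 8)

for x in (1e2, 1e4, 1e6):
    print(x, f(x))                     # values approach the limit
print("e^(-1/4) =", exp(-0.25))        # 0.77880078...
```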
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Does the group of Diffeomorphisms act transitively on the space of Riemannian metrics? Let $M$ be a smooth manifold (maybe compact, if that helps). Denote by $\operatorname{Diff}(M)$ the group of diffeomorphisms $M\to M$ and by $R(M)$ the space of Riemannian metrics on $M$. We obtain a canonical group action
$$ R(M) \times \operatorname{Diff}(M) \to R(M), (g,F) \mapsto F^*g, $$
where $F^*g$ denotes the pullback of $g$ along $F$. Is this action transitive? In other words, is it possible for any two Riemannian metrics $g,h$ on $M$ to find a diffeomorphism $F$ such that $F^*g=h$? Do you know any references for this type of questions?
| This action will not be transitive in general. For example, if $g$ is a metric and $\phi \in \operatorname{Diff}(M)$ then the curvature of $\phi^* g$ is going to be the pullback of the curvature of $g$. So there's no way for a metric with zero curvature to be the pullback of a metric with non-zero curvature. Or for example, if $g$ is Einstein ($\operatorname{Ric} = \lambda g$) then so is $\phi^* g$. So there are many diffeomorphism invariants of a metric.
Indeed, this should make sense because you can think of a diffeomorphism as passive, i.e. as just a change of coordinates. Then all of the natural things about the Riemannian geometry of a manifold should be coordinate ($\Leftrightarrow$ diffeomorphism) invariant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
What matrices preserve the $L_1$ norm for positive, unit norm vectors? It's easy to show that orthogonal/unitary matrices preserve the $L_2$ norm of a vector, but if I want a transformation that preserves the $L_1$ norm, what can I deduce about the matrices that do this? I feel like it should be something like the columns sum to 1, but I can't manage to prove it.
EDIT:
To be more explicit, I'm looking at stochastic transition matrices that act on vectors that represent probability distributions, i.e. vectors whose elements are positive and sum to 1. For instance, the matrix
$$
M = \left(\begin{array}{ccc}1 & 1/4 & 0 \\0 & 1/2 & 0 \\0 & 1/4 & 1\end{array}\right)
$$
acting on
$$
x=\left(\begin{array}{c}0 \\1 \\0\end{array}\right)
$$
gives
$$
M \cdot x = \left(\begin{array}{c}1/4 \\1/2 \\1/4\end{array}\right)\:,
$$
a vector whose elements also sum to 1.
So I suppose the set of vectors whose isometries I care about is more restricted than the completely general case, which is why I was confused about people saying that permutation matrices were what I was after.
Sooo... given the vectors are positive and have entries that sum to 1, can we say anything more exact about the matrices that preserve this property?
| The matrices that preserve the set $P$ of probability vectors are those whose columns are members of $P$. This is obvious since if $x \in P$, $M x$ is a convex combination of the columns of $M$ with coefficients given by the entries of $x$. Each column of $M$ must be in $P$ (take $x$ to be a vector with a single $1$ and all else $0$), and $P$ is a convex set.
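A small NumPy sketch (illustrative only; the random matrix is just an example of a column-stochastic matrix, i.e. one whose columns are in $P$):
```python
import numpy as np

rng = np.random.default_rng(0)

# Column-stochastic: nonnegative entries, each column sums to 1.
M = rng.random((3, 3))
M /= M.sum(axis=0)

x = rng.random(3)
x /= x.sum()                 # a probability vector

y = M @ x
assert np.all(y >= 0) and abs(y.sum() - 1) < 1e-12
print(y, y.sum())            # still a probability vector
```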
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Summing numbers which increase by a fixed amount (arithmetic progression) An auditorium has 21 rows of seats. The first row has 18 seats, and each succeeding row has two more seats than the previous row. How many seats are there in the auditorium?
Now I suppose you could use sigma notation, since this kind of problem reminds me of it, but I have little experience using it so I'm not sure.
| You could write the total using sigma notation as $$\sum_{k=0}^{20}(18+2k)\,$$ among many other ways, but I’m pretty sure that what’s wanted here is the actual total. You can add everything up by hand, which is a bit tedious, or you can use the standard formula for the sum of an arithmetic progression, if you know it, or you can be clever and arrange the sizes of the rows like this:
$$\begin{array}{}
18&20&22&24&26&28&30&32&34&36&38\\
58&56&54&52&50&48&46&44&42&40\\ \hline
76&76&76&76&76&76&76&76&76&76&38
\end{array}$$
The bottom row is the sum of the top two, so adding it up gives you the total number of seats. And that’s easy: it’s $10\cdot 76+38=760+38=798$. This calculation is actually an adaptation to this particular problem of the usual derivation of the formula for the sum of an arithmetic progression, which in this particular case looks like this:
$$\begin{array}{}
18&20&22&24&26&\dots&50&52&54&56&58\\
58&56&54&52&50&\dots&26&24&22&20&18\\ \hline
76&76&76&76&76&\dots&76&76&76&76&76
\end{array}$$
The top row is the original set of row sizes; the second consists of the same numbers in reverse order; and the bottom row is again the sum of the top two. That now counts each seat twice, so the total number of seats is $$\frac12(21\cdot 76)=798\;.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 1
} |
Maximum area of rectangle with fixed perimeter. How can you, with polynomial functions, determine the maximum area of a rectangle with a fixed perimeter?
Here's the exact problem—
You have 28 feet of rabbit-proof fencing to install around your vegetable garden. What are the dimensions of the garden with the largest area?
I've looked around this Stack Exchange and haven't found an answer to this sort of problem (I have, oddly, found a similar one for concave pentagons).
If you can't give me the exact answer, any hints to get the correct answer would be much appreciated.
| The result you need is that for a rectangle with a given perimeter the square has the largest area. So with a perimeter of 28 feet, you can form a square with sides of 7 feet and area of 49 square feet.
This follows since given a positive number $A$ with $xy = A$ the sum $x + y$ is smallest when $x = y = \sqrt{A}$.
You have $2x + 2y = P \implies x + y = P/2$, and you want to find the maximum of the area, $A = xy$.
Since $x + y = P/2 \implies y = P/2 - x$, you substitute to get $A = x(P/2-x) = (P/2)x - x^2$. In your example $P = 28$, so you want to find the maximum of $A = 14x - x^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 2
} |
inequality $(a+c)(a+b+c)<0$, prove $(b-c)^2>4a(a+b+c)$
If $(a+c)(a+b+c)<0,$
prove $$(b-c)^2>4a(a+b+c)$$
I can prove it by a constructive method, but I would like to know whether it can be proven directly.
| Consider the quadratic
$$ f(x) = ax^2 - (b-c)x + (a+b+c) $$
$$f(1)f(0) = 2(a+c)(a+b+c) \lt 0$$
Thus if $a \neq 0$, then this has a real root in $(0,1)$ and so
$$(b-c)^2 \ge 4a(a+b+c)$$
If $(b-c)^2 = 4a(a+b+c)$, then we have a double root in $(0,1)$ in which case, $f(0)$ and $f(1)$ will have the same sign.
Thus $$(b-c)^2 \gt 4a(a+b+c)$$
If $a = 0$, then $c(b+c) \lt 0$, and so we cannot have $b=c$ and thus $(b-c)^2 \gt 0 = 4a(a+b+c)$
And if you want a more "direct" approach, we show that $(p+q+r)r \lt 0 \implies q^2 \gt 4pr$ using the following identity:
$$(p+q+r)r = \left(p\left(1 + \frac{q}{2p}\right)\frac{q}{2p}\right)^2 + \left(r - \frac{q^2}{4p}\right)^2 + p\left(r - \frac{q^2}{4p}\right)\left(\left(1 + \frac{q}{2p}\right)^2 + \left(\frac{q}{2p}\right)^2\right)$$
If $(p+q+r)r \lt 0$, then we must have have $p\left(r - \frac{q^2}{4p}\right) \lt 0$, as all the other terms on the right side are non-negative.
Of course, this was gotten by completing the square in $px^2 + qx + r$ and setting $x=0$ and $x=1$ and multiplying.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Disprove Homeomorphism I have a problem that puzzles me. I need to show that the two sets
$A = \{(x,y) \in \mathbb{R}^2 \, \, \vert \, \, |x| \leq 1 \}$
and
$B = \{(x,y) \in \mathbb{R}^2 \, \, \vert \, \, x \geq 0 \}$
are not homeomorphic; but I'm not able to figure out how to start or what I need to arrive at.
| What about this: recall that a continuous bijection $ f: \, A \rightarrow B $ is a homeomorphism only if $ f^{-1} $ is also continuous, so a natural candidate map can fail. Look at the function $f(x,y) = \left( \tan\left( \frac{(1+x) \pi}{4} \right), y \right)$. Problem at $ x = 1 $ ?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/128952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Evaluating $\int \dfrac {2x} {x^{2} + 6x + 13}dx$ I am having trouble understanding the first step of evaluating
$$\int \dfrac {2x} {x^{2} + 6x + 13}dx$$
When faced with integrals such as the one above, how do you know to manipulate the integral into:
$$\int \dfrac {2x+6} {x^{2} + 6x + 13}dx - 6 \int \dfrac {1} {x^{2} + 6x + 13}dx$$
After this first step, I am fully aware of how to complete the square and evaluate the integral, but I am having difficulties seeing the first step when faced with similar problems. Should you always look at what the $b$ term is in a given $ax^{2} + bx + c$ expression to know how to manipulate the numerator? Are there any other tips and tricks when dealing with inverse trig antiderivatives?
| Just keep in mind which "templates" can be applied. The LHS in your second line is "prepped" for the $\int\frac{du}{u}$ template. Your choices for a rational function with a quadratic denominator are limited to polynomial division and then partial fractions for the remainder, if the denominator factors (which it always will over $\mathbb{C}$). The templates depend on the sign of $a$ and the number of roots. Here are some relevant "templates":
$$
\eqalign{
\int\frac1{ax+b}dx &= \frac1{a}\ln\bigl|ax+b\bigr|+C \\
\int\frac{dx}{(ax+b)^2} &= -\frac1{a}\left(ax+b\right)^{-1}+C \\
\int\frac1{x^2+a^2}dx &= \frac1{a}\arctan\frac{x}{a}+C \\
\int\frac1{x^2-a^2}dx &= \frac1{2a}\ln\left|\frac{x-a}{x+a}\right|+C \\
}
$$
So, in general, to tackle
$$
I = \int\frac{Ax+B}{ax^2+bx+c}dx
$$
you will want to write
$Ax+B$ as $\frac{A}{2a}\left(2ax+b\right)+\left(B-\frac{Ab}{2a}\right)$
to obtain
$$
\eqalign{
I &
= \frac{A}{2a}\int\frac{2ax+b}{ax^2+bx+c}dx
+ \left(B-\frac{Ab}{2a}\right) \int\frac{dx}{ax^2+bx+c}
\\&
= \frac{A}{2a}\ln\left|ax^2+bx+c\right|
+ \left(\frac{B}{a}-\frac{Ab}{2a^2}\right) \int\frac{dx}{x^2+\frac{b}{a}x+\frac{c}{a}}
}
$$
and to tackle the remaining integral, you can find the roots from the quadratic equation or complete the squares using the monic version (which is easier to do substitution with). If $a=0$, use the first "template" above. If you complete the squares and it's a perfect square, or if you get one double root, then use the second. If the roots are complex or there are two distinct real roots, then (after substituting $u=x+\frac{b}{2a}$) use the third or fourth "template".
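As a sanity check on the original integral, here is a short SymPy sketch (the antiderivative below is the one obtained from the split in the question, with the constant of integration dropped):
```python
import sympy as sp

x = sp.symbols('x')
integrand = 2*x / (x**2 + 6*x + 13)

# Split: (2x+6)/(x^2+6x+13) - 6/((x+3)^2 + 4), which integrates to
# log(x^2+6x+13) - 3*atan((x+3)/2).
F = sp.log(x**2 + 6*x + 13) - 3*sp.atan((x + 3)/2)

assert sp.simplify(sp.diff(F, x) - integrand) == 0
print(F)
```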
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/129093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Question about primes in square-free numbers For any prime, what percentage of the square-free numbers has that prime as a prime factor?
| Let $A(n)=\{\mathrm{squarefree~numbers~\le n}\}$ and $B_p(n)=\{x\in A(n); p\mid x\}$.
Then the asymptotic density of $B_p$ in $A$ is $b_p = \lim_{n\rightarrow \infty} |B_p(n)|/|A(n)|$. (It seems from the comments that this is not what @RudyToody is looking for, but I thought it's worth writing up anyway.) Let the density of $A$ in $\mathbb{N}$ be $a = \lim_{n\rightarrow \infty} |A|/n$.
Observe $B_p(pn) = \{px; x\in A(n),p\nmid x\}$, so for $N$ large $b_p$ must satisfy
$$
\begin{align}
b_p a (pN) & \simeq (1-b_p)aN \\
b_p &= \frac{1}{p+1}
\end{align}
$$
as @joriki already noted.
To illustrate, here are some counts for squarefree numbers $<10^7$. $|A(10^7)|=6079291$.
$$
\begin{array}{c|c|c|c}
p & |B_p(10^7)| & |A(10^7)|/(p+1) & \Delta=(p+1)|B|/|A|-1 \\
\hline \\
2 & 2026416 & 2026430.3 & -7\times 10^{-6} \\
3 & 1519813 & 1519822.8 & -6\times 10^{-6} \\
5 & 1013234 & 1013215.2 & 1.9\times 10^{-5} \\
7 & 759911 & 759911.4 & -5\times 10^{-7} \\
71 & 84438 & 84434.6 & 4\times 10^{-5} \\
173 & 34938 & 34938.5 & -1.3 \times 10^{-5}
\end{array}
$$
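These counts are easy to reproduce; here is a Python sketch (sieve-based, with a smaller bound chosen for speed) comparing $|B_p|/|A|$ with $1/(p+1)$:
```python
N = 10**5
squarefree = [True] * (N + 1)
for d in range(2, int(N**0.5) + 1):
    for multiple in range(d*d, N + 1, d*d):   # multiples of a square
        squarefree[multiple] = False

sf = [n for n in range(1, N + 1) if squarefree[n]]
A = len(sf)
for p in (2, 3, 5, 7, 71):
    B = sum(1 for n in sf if n % p == 0)
    print(p, round(B / A, 5), round(1 / (p + 1), 5))
```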
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/129148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Cross product in complex vector spaces When an inner product is defined on a complex vector space, conjugation is performed on one of the vectors. What about the cross product of two complex 3D vectors?
I suppose that one possible generalization is
$A\otimes B \rightarrow \left ( A\times B \right )^*$ where $\times$ denotes the normal cross product. The conjugation here is to ensure that the result of the cross product is orthogonal to both vectors $A$ and $B$. Is that correct ?
| Yes, this is a correct definition. If $v$, $w$ are perpendicular unit vectors in $\Bbb C^3$ (according to the hermitian product) then $v,w,v\times w$ form a matrix in $SU_3$.
We can define complex cross product using octonion multiplication (and vice versa). Let's use Cayley-Dickson formula twice: $$(a+b^\iota)(c+d^\iota)=ac-\bar db+(b\bar c+da)^\iota$$ for quaternions $a,b,c,d$. Next set $a=u\mathbf j, b=v+w\mathbf j, c=x\mathbf j, d=y+z\mathbf j$ for complex numbers $u,v,w,x,y,z$. Then we obtain from above formula $$-u\bar x-v\bar y-\bar w z+(\bar vz-w\bar y)\mathbf j+[w\bar x-\bar uz+(-vx+uy)\mathbf j]^\iota$$
Applying complex conjugation to third complex coordinate we obtain formula for cross product. The first term is hermitian product of the vectors $(u,v,w)$, $(x,y,z)$. $$\begin {bmatrix}u\\ v \\ w \end{bmatrix}\times \begin {bmatrix}x\\ y \\ z \end{bmatrix}=\begin {bmatrix}\overline {vz}-\overline{wy}\\ \overline{wx}-\overline{uz} \\ \overline{uy}-\overline{vx} \end{bmatrix}$$
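A quick NumPy check (illustrative only) that the conjugated cross product displayed above is Hermitian-orthogonal to both of its factors:
```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

# Proposed product: complex conjugate of the ordinary cross product.
cross = np.conj(np.cross(v, w))

# Hermitian product <a, b> = sum a_i * conj(b_i), linear in the first slot.
herm = lambda a, b: np.vdot(b, a)

print(abs(herm(v, cross)), abs(herm(w, cross)))   # both ~0
```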
$SU_3$ is a subgroup of the octonion automorphism group $G_2$. Any automorphism of octonions can be obtained by fixing a unit vector $\mathbf i$ on the imaginary sphere $S^6$. It defines a complex structure on the perpendicular space $R^6$ via multiplication. Now in this complex structure any $SU_3$ element is an octonion automorphism. So $G_2$ is a fiber bundle over $S^6$ with fiber $SU_3$.
Now going to "vice versa". Let's define octonions as pairs $(a,\mathbf v)$ where $a$ is complex number and $\mathbf v$ vector in $\Bbb C^3$. Then octonion multiplication can be defined as $$(a,\mathbf v)(b,\mathbf w)=(ab-\mathbf {v\cdot w},a\mathbf w+b \mathbf v + \mathbf {v \times w})$$
I hope that above argument with double Cayley-Dickson formula can be used to prove it although I have not done myself this calculation. The reader is urged to do it as an exercise :)
We can extend the definition of cross product to quaternions the same way. Extending it to octonions we need to be more careful. Freudenthal has done this using 3x3 matrices over octonions - so called Jordan algebra. Some kind of "cross product" is present in all exceptional Lie groups $F_4$, $E_6$, $E_7$, $E_8$ as these groups are called by Rosenfeld as automorphism groups of 2-dimensional projective planes over $\Bbb {O,C\otimes O, H \otimes O, O \otimes O}$.
Have I flown too far away from the original question?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/129227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
What's the meaning of a set to the power of another set? ${ \mathbb{N} }^{ \left\{ 0,1 \right\} }$ and ${ \left\{ 0,1 \right\} }^{ \mathbb{N} }$ to be more specific, and is there a countable subset in each one of them? How do I find them?
| The syntax $X^Y$ where $X$ and $Y$ are sets means the set of functions from $Y$ to $X$.
Recall for the following that the von Neumann ordinal $2 = \left\{0,1\right\}$.
We often identify the powerset $\mathcal{P}(X)$ with the set of functions $2^X$, since we can think of the latter set as the set of characteristic functions of subsets of $X$. Let $f \in 2^X$ be a function and let $Z \subseteq X$. We can stipulate that if $f(x) = 1$ then $x \in Z$, and if $f(x) = 0$ then $x \not\in Z$.
The exponent notation makes sense because of the following identity from cardinal arithmetic:
$$|X|^{|Y|} = |X^Y|.$$
That is, the cardinality of one cardinal raised to the power of another is just the cardinality of the functions from the latter to the former. The name 'powerset' also makes sense once thought of in these terms, since we can easily construct a bijection between $\mathcal{P}(X)$ and $2^X$ by identifying subsets with their characteristic functions from $X$.
As Rahul Narain points out, the set $X^2$ is naturally identified with the cartesian product $X \times X$. Each function $f \in X^2$ will be of the form $\left\{(0, x), (1, y)\right\}$ for $x, y \in X$. By taking $f(0)$ as the first component and $f(1)$ as the second, we can construct a pair $(f(0) = x, f(1) = y)$. The collection of all such pairs will just be the cartesian product of $X$ with itself.
Let $X$ be an infinite set. We seek to show that $X^2$ and $2^X$ are infinite, by showing that they both have infinite subsets. This is most easily done by constructing a bijection between those subsets and $X$. Firstly, suppose that
$$Y = \left\{ f : f(0) = f(1), f \in X^2 \right\}.$$
Clearly $Y \subseteq X^2$. Now let $G : Y \rightarrow X$, $G(f) = f(0)$. This is an injection, since each $f\in Y$ is determined by $f(0)$ (because $f(1)=f(0)$ by the construction of $Y$).
$$G^{-1}(x) = \left\{ (0, x), (1, x) \right\}$$
which again must be an injection. Note that the identity relation $1_X = \left\{ (x, x) : x \in X \right\}$ bears the same relation to $Y$ as the full cartesian product does to $X^2$.
The proof that $2^X$ is infinite proceeds by the same basic method. Let
$$Z = \left\{ f : \exists{x} ( f(x) = 1 \wedge \forall{y} ( y \in X \wedge y \neq x \rightarrow f(y) = 0 ) ), f \in 2^X \right\}.$$
We will construct a bijection between $Z$ and $X$. Let $H : Z \rightarrow X$, $H(f) = x$ where $f(x) = 1$. Then let $H^{-1} : X \rightarrow Z$, $H^{-1}(x) = f$ for the unique $f$ such that $f(x) = 1$ and $\forall{y} ( y \neq x \rightarrow f(y) = 0)$.
Note that what we've essentially done here is set up a bijection between $X$ and all its subsets which are singletons. More precisely, the bijection is between $X$ and the characteristic functions of those singletons. Since $X$ is infinite, $Z$ must be infinite and thus so is $2^X$.
However, something much stronger also holds: the cardinality of $2^X$ is strictly greater than the cardinality of $X$. This is Cantor's theorem.
Lastly, since $\mathbb{N}$ is infinite, so are $\mathbb{N}^2$ and $2^\mathbb{N}$: just take $X = \mathbb{N}$ in the above proof. $Y$ and $Z$ will be countable since the proof constructs bijections between them and $X$, so they will have the same cardinality as $\mathbb{N}$, namely $\aleph_0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/129301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 2,
"answer_id": 1
} |
Dual of a finite dimensional algebra is a coalgebra (ex. from Sweedler) Let $(A, M, u)$ be a finite dimensional algebra where $M: A\otimes A \rightarrow A$ denotes multiplication and $u: k \rightarrow A$ denotes unit.
I want to prove that $(A^*, \Delta, \varepsilon) $ is a coalgebra where
$\Delta: A^*\rightarrow A^* \otimes A^*$ is a composition:
$$A^* \overset{M^*}{\rightarrow}(A\otimes A)^* \overset{\rho^{-1}}{\rightarrow}A^*\otimes A^*$$
And $\rho: V^*\otimes W^* \rightarrow (V\otimes W)^*$ is given by $\langle\rho(v^*\otimes w^*), v\otimes w\rangle=\langle v^*, v\rangle\langle w^*,w\rangle$.
I have proven that $\rho$ is injective and since $A$ is finite dimensional $\rho$ is also bijective and we can take the inverse $\rho^{-1}$.
But I have problems understanding how does $\Delta$ work.
By definition we have $<M^*(c^*), a\otimes b>=<c^*, M(a\otimes b)>=c^*(ab)$. But I can't understand what is $\rho^{-1}(M^*(c^*))$, or in other words which element of $A^*\otimes A^*$ can act like $M^*(c^*)$ via $\rho$?
P.S. Please correct me if I have grammar mistakes. Thanks!
| Pick a basis $\{e_i\}$ of $A$ with dual basis $\{e_i^*\}$. Given $M^*c^*=:d^* \in (A \otimes A)^*$, we have $\rho^{-1}(d^*)=\sum_{i,j} d^*(e_i \otimes e_j)\, e_i^* \otimes e_j^*$: applying $\rho$ to this sum and evaluating at $e_k \otimes e_l$ returns $d^*(e_k \otimes e_l)$, so it acts as $d^*$ via $\rho$. Note that $\Delta(c^*)$ is in general a sum of simple tensors, not a single one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/129361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Square Root Of A Square Root Of A Square Root Is there some way to determine how many times one must take the square root of a number and its subsequent roots until the result equals the square root of two, or the root of a number less than two?
sqrt(16)=4
sqrt(4)=2
sqrt(2) ... 3
--
sqrt(27)=5.19615...
sqrt(5.19615...)=2.27950...
sqrt(2.27950...)=1.509803...
sqrt(1.509803...) ... 4
--
Also, using the floor function...
sqrt(27)=floor(5.19615...)
sqrt(5)=floor(2.23606...)
sqrt(2) ... 3
| Let's do this by example, and I'll let you generalize.
Say we want to know about $91$, one of my favorite numbers because it is the lowest number that I think most people might, at first thought, say is prime even though it isn't (another way of saying it isn't divisible by the 'easy-to-see' primes).
Well, I note that $16 = 2^4 < 91 < 256 = 2^8$
So if we take 2 square roots, then our number will be bigger than 2, as it's greater than $2^{4/4} = 2$. But if we take 3, since our number is less than $2^{8}$, its 3rd iterated square root will be less than $2^{8/8} = 2$.
So the third square root of $91$ will be between $\sqrt 2$ and $2$. So the 4th will place it below $\sqrt 2$.
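A short Python sketch of this counting procedure (the small tolerance is an ad hoc float safeguard); as the comment notes, for $x>\sqrt{2}$ the count agrees with the closed form $\lceil \log_2(2\log_2 x)\rceil$:
```python
from math import sqrt, log2, ceil

def roots_needed(x):
    """Take square roots repeatedly until the value is sqrt(2) or below."""
    count = 0
    while x > sqrt(2) + 1e-12:
        x = sqrt(x)
        count += 1
    return count

for x in (16, 27, 91):
    # Closed form: ceil(log2(2 * log2(x))) for x > sqrt(2).
    print(x, roots_needed(x), ceil(log2(2 * log2(x))))   # 3, 4, 4
```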
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/129446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |