Q | A | meta
---|---|---|
Are $|X|$ and $\operatorname{sgn}(X)$ independent? Let $X$ be a real valued random variable.
Let $\operatorname{sgn}(x)$ be $1$ when $x>0$, $-1$ when $x<0$ and $0$ when $x=0$.
Why are $|X|$ and $\operatorname{sgn}(X)$ independent, when the density function of $X$ is symmetric with respect to $0$?
Are $|X|$ and $\operatorname{sgn}(X)$ independent, when the density function of $X$ is not necessarily symmetric with respect to $0$?
Thanks!
| If $X$ is a continuous random variable (absolutely continuous with respect to Lebesgue measure) with density $f$, then
$$P(|X| \leq x \mid \operatorname{sgn}(X)=1) = P(|X| \leq x \mid X > 0) = \frac{P(0 < X \leq x)}{P(X > 0)} =
\frac{\int_{0}^{x}{f(t)\,dt}}{\int_{0}^{\infty}f(t)\,dt}$$
$$P(|X| \leq x \mid \operatorname{sgn}(X)=-1) = P(|X| \leq x \mid X < 0) = \frac{P(-x \leq X < 0)}{P(X < 0)} =
\frac{\int_{-x}^{0}{f(t)\,dt}}{\int_{-\infty}^{0}f(t)\,dt}$$
and $P(\operatorname{sgn}(X)=0)=0$. If $f$ is symmetric with respect to $0$, observe that $P(|X| \leq x \mid \operatorname{sgn}(X)=1) = P(|X| \leq x \mid \operatorname{sgn}(X)=-1)$ for any $x \geq 0$. Hence, $|X|$ is independent of $\operatorname{sgn}(X)$.
Now consider $X$ uniform in $(-1,2)$. Observe that $P(\operatorname{sgn}(X)=1)=2/3$ but $P(\operatorname{sgn}(X)=1 \mid |X|>1)=1$. Hence, $|X|$ and $\operatorname{sgn}(X)$ are not independent.
Also observe that it was important for $X$ to be continuous. For example, consider $X$ uniform in $\{-1,0,1\}$. Its mass function is symmetric with respect to $0$, but $P(\operatorname{sgn}(X)=0) = 1/3$ while $P(\operatorname{sgn}(X)=0 \mid |X|=0)=1$; thus $\operatorname{sgn}(X)$ and $|X|$ are not independent.
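The uniform $(-1,2)$ counterexample can be checked exactly with rational arithmetic. A small sketch of our own (the helper `p_interval` is not part of the answer):

```python
from fractions import Fraction as F

def p_interval(a, b):
    # P(a < X < b) for X ~ Uniform(-1, 2): interval length / 3, clipped
    # to the support (-1, 2)
    lo, hi = max(a, F(-1)), min(b, F(2))
    return max(hi - lo, F(0)) / 3

p_sgn_pos = p_interval(F(0), F(2))                              # P(sgn(X) = 1)
p_abs_gt1 = p_interval(F(-2), F(-1)) + p_interval(F(1), F(2))   # P(|X| > 1)
p_both    = p_interval(F(1), F(2))                  # P(sgn(X) = 1 and |X| > 1)

print(p_sgn_pos)               # 2/3
print(p_both / p_abs_gt1)      # 1, so conditioning on |X| > 1 changes P(sgn(X) = 1)
print(p_both == p_sgn_pos * p_abs_gt1)   # False: |X| and sgn(X) are dependent
```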
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164806",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Existence of Consecutive Quadratic residues For any prime $p\gt 5$, prove that there are consecutive quadratic residues of $p$ and consecutive non-residues as well (excluding $0$). I know that there are an equal number of quadratic residues and non-residues (if we exclude $0$), so if there are two consecutive quadratic residues, then certainly there are two consecutive non-residues; therefore, effectively I am seeking a proof only for the existence of consecutive quadratic residues. Thanks in advance.
| The number of $k\in[0,p-1]$ such that $k$ and $k+1$ are both quadratic residues is equal to:
$$ \frac{1}{4}\sum_{k=0}^{p-1}\left(1+\left(\frac{k}{p}\right)\right)\left(1+\left(\frac{k+1}{p}\right)\right)+\frac{3+\left(\frac{-1}{p}\right)}{4}, $$
where the extra term accounts for the terms at $k=-1$ (i.e. $k=p-1$) and $k=0$, compensating for the fact that the Legendre symbol $\left(\frac{0}{p}\right)$ is $0$ although $0$ is a quadratic residue. Since:
$$ \sum_{k=0}^{p-1}\left(\frac{k}{p}\right)=\sum_{k=0}^{p-1}\left(\frac{k+1}{p}\right)=0, $$
the number of consecutive quadratic residues is equal to
$$ \frac{p+3+\left(\frac{-1}{p}\right)}{4}+\frac{1}{4}\sum_{k=0}^{p-1}\left(\frac{k(k+1)}{p}\right). $$
By the multiplicativity of the Legendre symbol, for $k\neq 0$ we have $\left(\frac{k}{p}\right)=\left(\frac{k^{-1}}{p}\right)$, so:
$$ \sum_{k=1}^{p-1}\left(\frac{k(k+1)}{p}\right) = \sum_{k=1}^{p-1}\left(\frac{1+k^{-1}}{p}\right)=\sum_{k=2}^{p}\left(\frac{k}{p}\right)=-1,$$
and we have $\frac{p+3}{4}$ pairs of consecutive quadratic residues if $p\equiv 1\pmod{4}$ and $\frac{p+1}{4}$ such pairs if $p\equiv -1\pmod{4}$.
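As a sanity check, the closed-form count can be compared with a brute-force enumeration over small primes (this sketch, including the helper names, is our own addition):

```python
def consecutive_qr_pairs(p):
    # Count k in [0, p-1] with k and k+1 (mod p) both quadratic residues,
    # counting 0 as a residue, matching the convention in the derivation.
    residues = {(x * x) % p for x in range(p)}
    return sum(k in residues and (k + 1) % p in residues for k in range(p))

def closed_form(p):
    # (p + 3)/4 if p = 1 (mod 4), else (p + 1)/4
    return (p + 3) // 4 if p % 4 == 1 else (p + 1) // 4

for p in [7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]:
    assert consecutive_qr_pairs(p) == closed_form(p), p
print("closed form matches brute force")
```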
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
A book useful to learn lattices (discrete groups) Does anyone know a good book about lattices (as subgroups of a vector space $V$)?
| These notes of mine on geometry of numbers begin with a section on lattices in Euclidean space. However they are a work in progress and certainly not yet fully satisfactory. Of the references I myself have been consulting for this material, the one I have found most helpful with regard to basic material on lattices is C.L. Siegel's Lectures on the Geometry of Numbers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Tensors: Acting on Vectors vs Multilinear Maps I have the feeling like there are two very different definitions for what a tensor product is. I was reading Spivak and some other calculus-like texts, where the tensor product is defined as
$(S \otimes T)(v_1,...v_n,v_{n+1},...,v_{n+m})= S(v_1,...v_n) * T(v_{n+1},...,v_{n+m}) $
The other definition I read in a book on quantum computation; it's defined for vectors and matrices and has several names: "tensor product, Kronecker product, and outer product": http://en.wikipedia.org/wiki/Outer_product#Definition_.28matrix_multiplication.29
I find this really annoying and confusing. In the first definition, we are taking tensor products of multilinear operators (the operators act on vectors) and in the second definition the operation IS ON vectors and matrices. I realize that matrices are operators but matrices aren't multilinear. Is there a connection between these definitions?
| I can't comment yet (or I don't know how to if I can), but echo Thomas' response and want to add one thing.
The tensor product of two vector spaces (or more generally, modules over a ring) is an abstract construction that allows you to "multiply" two vectors in that space. A very readable and motivated introduction is given in Dummit & Foote's book. (I always thought the actual construction seemed very strange before reading D&F -- they manage to make it intuitive, motivated as trying to extend the set of scalars you can multiply with.)
The collection of $k$-multilinear functions on a vector space is itself a vector space -- each multilinear map is a vector of that space. The connection between the two seemingly different definitions is that you're performing the "abstract" tensor product on those spaces of multilinear maps.
It always seemed to me that the tensor product definition in Spivak was a particularly nice, concrete example of the more general, abstract definition.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/164975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 2
} |
${10 \choose 4}+{11 \choose 4}+{12 \choose 4}+\cdots+{20 \choose 4}$ can be simplified as which of the following? ${10 \choose 4}+{11 \choose 4}+{12 \choose 4}+\cdots+{20 \choose 4}$ can be simplified as ?
A. ${21 \choose 5}$
B. ${20 \choose 5}-{11 \choose 4}$
C. ${21 \choose 5}-{10 \choose 5}$
D. ${20 \choose 4}$
Please give me a hint. I'm unable to group the terms.
By brute force, I'm getting ${21 \choose 5}-{10 \choose 5}$
| What is the problem in this solution?
If $S=\binom{10}{4} + \binom{11}{4} + \cdots + \binom{20}{4}$ we have
\begin{align}
S &= \left \{ \binom{4}{4} + \binom{5}{4} + \cdots + \binom{20}{4}\right \} - \left \{\binom{4}{4} + \binom{5}{4} + \cdots + \binom{9}{4}\right \}\\
&= \binom{21}{5} - \binom{10}{5}
\end{align}
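The hockey-stick manipulation behind this solution is easy to double-check numerically (this quick script is our own addition):

```python
from math import comb

s = sum(comb(n, 4) for n in range(10, 21))     # C(10,4) + ... + C(20,4)

# Hockey-stick identity: sum_{n=r}^{m} C(n, r) = C(m+1, r+1)
assert sum(comb(n, 4) for n in range(4, 21)) == comb(21, 5)
assert sum(comb(n, 4) for n in range(4, 10)) == comb(10, 5)

assert s == comb(21, 5) - comb(10, 5)          # option C
print(s, comb(21, 5) - comb(10, 5))
```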
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
How can a Bézier curve be periodic? As I know it, a periodic function is a function that repeats its values in regular intervals or period. However Bézier curves can also be periodic which means closed as opposed to non-periodic which means open. How is this related or possible?
| A curve $C$ parameterised over the interval $[a,b]$ is closed if $C(a) = C(b)$. Or, in simpler terms, a curve is closed if its start point coincides with its end-point. A Bézier curve will be closed if its initial and final control points are the same.
A curve $C$ is periodic if $C(t+p) = C(t)$ for all $t$ ($p \ne 0$ is the period). Bézier curves are described by polynomials, so a Bézier curve can not be periodic. Just making its start and end tangents match (as J. M. suggested) does not make it periodic.
A spline (constructed as a string of Bézier curves) can be periodic.
People in the CAD field are sloppy in this area -- they often say "periodic" when they mean "closed".
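To make the closed-versus-periodic distinction concrete, here is a small De Casteljau evaluator (our own illustration; the control points are arbitrary). Making the first and last control points coincide closes the curve, but the polynomial parameterisation still does not repeat:

```python
def bezier(control_points, t):
    # De Casteljau's algorithm: repeatedly interpolate adjacent points.
    pts = list(control_points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Closed: first control point == last control point.
ctrl = [(0.0, 0.0), (2.0, 1.0), (1.0, 3.0), (-1.0, 1.0), (0.0, 0.0)]
assert bezier(ctrl, 0.0) == bezier(ctrl, 1.0) == (0.0, 0.0)

# Not periodic: outside [0, 1] the polynomial does not repeat its values.
print(bezier(ctrl, 0.5), bezier(ctrl, 1.5))
```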
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Burnside's Lemma I've been trying to understand what Burnside's Lemma is, and how to apply it, but the wiki page is confusing me. The problem I am trying to solve is:
You have 4 red, 4 white, and 4 blue identical dinner plates. In how
many different ways can you set a square table with one plate on each
side if two settings are different only if you cannot rotate the table
to make the settings match?
Could someone explain how to use it for this problem, and if its not too complicated, try to explain to me what exactly it is doing in general?
| There are four possible rotations of (clockwise) 0, 90, 180 and 270 degrees respectively. Let us denote the 90 degree rotation by $A$, so the other rotations are then its powers $A^i,i=0,1,2,3$. The exponent is only relevant modulo 4; in other words, we have a cyclic group of 4 elements. These rotations act on the set of plate arrangements. So if $RWBR$ denotes the arrangement where there is a Red plate on the North side, White on the East side, Blue on the South, and another Red plate on the Western seat, then
$$
A(RWBR)=RRWB,
$$
because rotating the table 90 degrees clockwise moves the North seat to East et cetera.
The idea in using Burnside's lemma is to calculate how many arrangements are fixed under the various rotations (or whichever motions you won't count as resulting in a distinct arrangement). Let's roll.
All $3^4=81$ arrangements are fixed under not doing anything to the table. So $A^0$ has $81$ fixed points.
If an arrangement stays the same upon having a 90 degree rotation act on it, then the plate colors at North and East, East and South, South and West, and West and North must all match; in other words, we use a single color only. Therefore the rotation $A^1$ has only $3$ fixed points: RRRR, WWWW and BBBB.
The same applies to the 270 degree rotation $A^3$. Only 3 fixed points.
The 180 degree rotation $A^2$ is more interesting. This rotation swaps the North/South and East/West pairs of plates. For an arrangement to be fixed under this rotation, it is necessary and sufficient that those pairs of plates had matching colors, but we can use any of the three colors for both N/S and E/W, so altogether there are 9 arrangements stable under the 180 degree rotation.
The Burnside formula then tells that the number of distinguishable table arrangements is
$$
\frac14(81+3+9+3)=\frac{96}4=24.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Separatedness of a composition, where one morphism is surjective and universally closed. I'm stuck with the following problem:
Let $f:X \rightarrow Y$ and $g:Y \rightarrow Z$ be scheme morphisms such that f is surjective and universally closed and such that $g \circ f$ is separated. The claim is then that g is also separated.
I've been trying to use the fact that f is (as any surjective morphism) universally surjective, and somehow deduce that the diagonal $\Delta(Y)$ is closed in $Y \times_Z Y$ but I haven't gotten that far. I would love some hints on how to do this. Full answers are OK, but I would prefer to have hints!
Thank you!
| You know that the image $\Delta_X (X)$ of $X$ under $\Delta_X$ is closed in $X\times_Z X$, because $g\circ f$ is separated.
Take the image $Im=(f\times f) (\Delta_X (X)) \subset Y\times_Z Y$ of this closed set $\Delta_X (X)$ under $f\times f$.
This image $Im$ is then closed in $Y\times_Z Y$ (because $f$ is universally closed) and it coincides with...
Yes, exactly: bingo!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Does a closed form formula for the series ${n \choose n-1} + {n+1 \choose n-2} + {n+2 \choose n-3} + \cdots + {2n - 1 \choose 0}$ exist. $${n \choose n-1} + {n+1 \choose n-2} + {n+2 \choose n-3} + \cdots + {2n - 1 \choose 0}$$
For the above series, does a closed form exist?
| Your series is
$$\sum_{k=0}^{n-1}\binom{n+k}{n-k-1}=\sum_{k=0}^{n-1}\binom{2n-1-k}k\;,$$
which is the special case of the series
$$\sum_{k\ge 0}\binom{m-k}k$$
with $m=2n-1$. It’s well-known (and easy to prove by induction) that
$$\sum_{k\ge 0}\binom{m-k}k=f_{m+1}\;,$$
where $f_m$ is the $m$-th Fibonacci number: $f_0=0,f_1=1$, and $f_n=f_{n-1}+f_{n-2}$ for $n>1$. Thus, the sum of your series is $f_{2n}$.
The Binet formula gives a closed form for $f_{2n}$: $$f_{2n}=\frac{\varphi^{2n}-\hat\varphi^{2n}}{\sqrt5}\;,$$ where $\varphi=\frac12\left(1+\sqrt5\right)$ and $\hat\varphi=\frac12\left(1-\sqrt5\right)$. A computationally more convenient expression is $$f_{2n}=\left\lfloor\frac{\varphi^{2n}}{\sqrt5}+\frac12\right\rfloor\;.$$
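Both identities are easy to verify numerically; this check (helper names ours) compares the diagonal binomial sums, the Fibonacci recurrence, and the rounded Binet form:

```python
from math import comb, sqrt

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def diagonal_sum(m):
    # sum_{k >= 0} C(m - k, k); terms vanish once k > m - k
    return sum(comb(m - k, k) for k in range(m // 2 + 1))

phi = (1 + sqrt(5)) / 2
for n in range(1, 20):
    m = 2 * n - 1
    assert diagonal_sum(m) == fib(2 * n)                       # the series sum
    assert fib(2 * n) == int(phi ** (2 * n) / sqrt(5) + 0.5)   # Binet, rounded
print([diagonal_sum(2 * n - 1) for n in range(1, 7)])   # [1, 3, 8, 21, 55, 144]
```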
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove if a function is bijective? I am having problems being able to formally demonstrate when a function is bijective (and therefore, surjective and injective). Here's an example:
How do I prove that $g(x)$ is bijective?
\begin{align}
f &: \mathbb R \to\mathbb R \\
g &: \mathbb R \to\mathbb R \\
g(x) &= 2f(x) + 3
\end{align}
However, I fear I don't really know how to do such. I realize that the above example implies a composition (which makes things slightly harder?). In any case, I don't understand how to prove such (be it a composition or not).
For injective, I believe I need to prove that different elements of the codomain have different preimages in the domain. Alright, but, well, how?
As for surjective, I think I have to prove that all the elements of the codomain have one, and only one preimage in the domain, right? I don't know how to prove that either!
EDIT
f is a bijection. Sorry I forgot to say that.
| The way to verify something like that is to check the definitions one by one and see if $g(x)$ satisfies the needed properties.
Recall that $F\colon A\to B$ is a bijection if and only if $F$ is:
* injective: $F(x)=F(y)\implies x=y$, and
* surjective: for all $b\in B$ there is some $a\in A$ such that $F(a)=b$.
Assuming that $R$ stands for the real numbers, we check.
Is $g$ injective?
Take $x,y\in R$ and assume that $g(x)=g(y)$. Therefore $2f(x)+3=2f(y)+3$. We can cancel out the $3$ and divide by $2$, then we get $f(x)=f(y)$. Since $f$ is a bijection, then it is injective, and we have that $x=y$.
Is $g$ surjective?
Take some $y\in R$, we want to show that $y=g(x)$ that is, $y=2f(x)+3$. Subtract $3$ and divide by $2$, again we have $\frac{y-3}2=f(x)$. As before, if $f$ was surjective then we are about done, simply denote $w=\frac{y-3}2$, since $f$ is surjective there is some $x$ such that $f(x)=w$. Show now that $g(x)=y$ as wanted.
Alternatively, you can use theorems. What sort of theorems? The composition of bijections is a bijection. If $f$ is a bijection, show that $h_1(x)=2x$ is a bijection, and show that $h_2(x)=x+3$ is also a bijection. Now we have that $g=h_2\circ h_1\circ f$ and is therefore a bijection.
Of course this is again under the assumption that $f$ is a bijection.
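As a concrete sanity check, take $f(x)=x^3$ as a stand-in for the unspecified bijection $f$ (our choice, not part of the question) and invert $g$ step by step:

```python
def f(x):           # a stand-in bijection R -> R
    return x ** 3

def f_inv(y):       # real cube root
    return abs(y) ** (1 / 3) * (1 if y >= 0 else -1)

def g(x):
    return 2 * f(x) + 3

def g_inv(y):       # undo h2(x) = x + 3, then h1(x) = 2x, then f
    return f_inv((y - 3) / 2)

# Surjective: every y is hit; injective: distinct inputs, distinct outputs.
for y in (-10.0, -1.0, 0.0, 2.5, 7.0):
    assert abs(g(g_inv(y)) - y) < 1e-9
xs = (-2.0, -1.0, 0.0, 0.5, 3.0)
assert len({g(x) for x in xs}) == len(xs)
print("g = 2 f + 3 is a bijection when f is")
```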
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 4,
"answer_id": 1
} |
When will these two trains meet each other I can't seem to solve this problem.
A train leaves point A at 5 am and reaches point B at 9 am. Another train leaves point B at 7 am and reaches point A at 10:30 am. When will the two trains meet? (Ans: 56 min)
Here is where i get stuck.
I know that when the two trains meet, the sum of the distances they have travelled will equal the total distance. Here is what I know so far:
Time traveled from A to B by Train 1 = 4 hours
Time traveled from B to A by Train 2 = 7/2 hours
Now if $S$ is the total distance from A to B and $t$ is the time when they meet each other, then
$$\text{Distance}_{\text{Total}}= S =\frac{St}{4} + \frac{2St}{7} $$
Now, is there any way I could get the value of $S$ so that I could use it here?
| We do not need $S$.
The speed of the train starting from $A$ is $S/4$ while the speed of the train starting from $B$ is $S/(7/2) = 2S/7$.
Let the trains meet at time $t$, where $t$ is measured in hours and is the time taken by the train from $B$ when the two trains meet. Note that when train $B$ is about to start, train $A$ will already have covered half its distance, i.e. a distance of $S/2$.
Hence, the distance traveled by train $A$ when they meet is $\dfrac{S}2 + \dfrac{S \times t}4$.
The distance traveled by train $B$ when they meet is $\dfrac{2 \times S \times t}7$.
Hence, we get that $$S = \dfrac{S}2 + \dfrac{S \times t}{4} + \dfrac{S \times 2 \times t}{7}$$ We can cancel the $S$ since $S$ is non-zero to get $$\dfrac12 = \dfrac{t}4 + \dfrac{2t}7$$ Can you solve for $t$ now? (Note that $t$ is in hours. You need to multiply by $60$ to get the answer in minutes.)
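Solving the last equation with exact arithmetic confirms the stated answer of 56 minutes (a quick check of our own):

```python
from fractions import Fraction as F

# 1/2 = t/4 + 2t/7  =>  t = (1/2) / (1/4 + 2/7)
t = F(1, 2) / (F(1, 4) + F(2, 7))
print(t)          # 14/15 of an hour
print(t * 60)     # 56 minutes
```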
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Square units of area in a circle I'm studying for the GRE and came across the practice question quoted below. I'm having a hard time understanding the meaning of the words they're using. Could someone help me parse their language?
"The number of square units in the area of a circle '$X$' is equal to $16$ times the number of units in its circumference. What are the diameters of circles that could fit completely inside circle $X$?"
For reference, the answer is $64$, and the "explanation" is based on $\pi r^2 = 16(2\pi r).$
Thanks!
| Let the diameter be $d$. Then the number of square units in the area of the circle is $(\pi/4)d^2$. This is $16\pi d$. That forces $d=64$.
Remark: Silly problem: it is unreasonable to have a numerical equality between area and circumference. Units don't match, the result has no geometric significance.
"The number of square units in the area of" is a fancy way of saying "the area of."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is this vector derivative correct? I want to comprehend the derivative of the cost function in linear regression involving Ridge regularization, the equation is:
$$L^{\text{Ridge}}(\beta) = \sum_{i=1}^n (y_i - \phi(x_i)^T\beta)^2 + \lambda \sum_{j=1}^k \beta_j^2$$
Where the sum of squares can be rewritten as:
$$L^{\text{Ridge}}(\beta) = ||y-X\beta||^2 + \lambda \sum_{j=1}^k \beta_j^2$$
For finding the optimum its derivative is set to zero, which leads to this solution:
$$\beta^{\text{Ridge}} = (X^TX + \lambda I)^{-1} X^T y$$
Now I would like to understand this and try to derive it myself; here's what I got:
Since $||x||^2 = x^Tx$ and $\frac{\partial}{\partial x} [x^Tx] = 2x^T$ this can be applied by using the chain rule:
\begin{align*}
\frac{\partial}{\partial \beta} L^{\text{Ridge}}(\beta) = 0^T &= -2(y - X \beta)^TX + 2 \lambda I\\
0 &= -2(y - X \beta) X^T + 2 \lambda I\\
0 &= -2X^Ty + 2X^TX\beta + 2 \lambda I\\
0 &= -X^Ty + X^TX\beta + 2 \lambda I\\
&= X^TX\beta + 2 \lambda I\\
(X^TX + \lambda I)^{-1} X^Ty &= \beta
\end{align*}
Where I struggle is the next-to-last equation: I multiply by $(X^TX + \lambda I)^{-1}$, and I don't think that leads to a correct equation.
What have I done wrong?
| You have differentiated $L$ incorrectly, specifically the $\lambda ||\beta||^2$ term. The correct expression is: $\frac{\partial L(\beta)}{\partial \beta} = 2(( X \beta - y)^T X + \lambda \beta^T)$, from which the desired result follows by equating to zero and taking transposes.
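The corrected gradient can be verified numerically: at the closed-form solution it vanishes, and the ridge loss grows in every direction around it. A pure-Python sketch with made-up data (the data, $\lambda$, and helper names are our own):

```python
# Tiny two-feature ridge problem, solved with hand-rolled 2x2 algebra
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (4.0, 3.0), (0.5, -1.0)]
y = [1.0, 2.0, 3.0, 4.0, 0.0]
lam = 0.7

# Build A = X^T X + lam I and c = X^T y
a11 = sum(x1 * x1 for x1, _ in X) + lam
a22 = sum(x2 * x2 for _, x2 in X) + lam
a12 = sum(x1 * x2 for x1, x2 in X)
c1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
c2 = sum(x2 * yi for (_, x2), yi in zip(X, y))

# Solve the 2x2 normal equations A beta = c by Cramer's rule
det = a11 * a22 - a12 * a12
beta = ((a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)

def loss(b):
    return sum((yi - b[0] * x1 - b[1] * x2) ** 2
               for (x1, x2), yi in zip(X, y)) + lam * (b[0] ** 2 + b[1] ** 2)

# First component of the gradient 2((X b - y)^T X + lam b^T) vanishes at beta ...
g1 = 2 * (sum((beta[0] * x1 + beta[1] * x2 - yi) * x1
              for (x1, x2), yi in zip(X, y)) + lam * beta[0])
assert abs(g1) < 1e-9

# ... and nudging beta in any direction increases the loss
for d in [(1e-3, 0.0), (0.0, 1e-3), (-1e-3, 1e-3)]:
    assert loss(beta) < loss((beta[0] + d[0], beta[1] + d[1]))
print("closed form minimizes the ridge loss")
```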
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Your favourite application of the Baire Category Theorem I think I remember reading somewhere that the Baire Category Theorem is supposedly quite powerful. Whether that is true or not, it's my favourite theorem (so far) and I'd love to see some applications that confirm its neatness and/or power.
Here's the theorem (with proof) and two applications:
(Baire) A non-empty complete metric space $X$ is not a countable union of nowhere dense sets.
Proof: Let $X = \bigcup U_i$ where $\mathring{\overline{U_i}} = \varnothing$. We construct a Cauchy sequence as follows: Let $x_1$ be any point in $(\overline{U_1})^c$. We can find such a point because $(\overline{U_1})^c \subset X$ and $X$ contains at least one non-empty open set (if nothing else, itself) but $\mathring{\overline{U_1}} = \varnothing$ which is the same as saying that $\overline{U_1}$ does not contain any open sets hence the open set contained in $X$ is contained in $\overline{U_1}^c$. Hence we can pick $x_1$ and $\varepsilon_1 > 0$ such that $B(x_1, \varepsilon_1) \subset (\overline{U_1})^c \subset U_1^c$.
Next we make a similar observation about $U_2$ so that we can find $x_2$ and $\varepsilon_2 > 0$ such that $B(x_2, \varepsilon_2) \subset \overline{U_2}^c \cap B(x_1, \frac{\varepsilon_1}{2})$. We repeat this process to get a sequence of balls such that $B_{k+1} \subset B_k$ and a sequence $(x_k)$ that is Cauchy. By completeness of $X$, $\lim x_k =: x$ is in $X$. But $x$ is in $B_k$ for every $k$ hence not in any of the $U_i$ and hence not in $\bigcup U_i = X$. Contradiction. $\Box$
Here is one application (taken from here):
Claim: $[0,1]$ contains uncountably many elements.
Proof: Assume that it contains countably many. Then $[0,1] = \bigcup_{x \in [0,1]} \{x\}$ and since the singletons $\{x\}$ are nowhere dense sets, $[0,1]$ is a countable union of nowhere dense sets. But $[0,1]$ is complete, so we have a contradiction. Hence $[0,1]$ has to be uncountable.
And here is another one (taken from here):
Claim: The linear space of all polynomials in one variable is not a Banach space in any norm.
Proof: "The subspace of polynomials of degree $\leq n$ is closed in any norm because it is finite-dimensional. Hence the space of all polynomials can be written as countable union of closed nowhere dense sets. If there were a complete norm this would contradict the Baire Category Theorem."
| One of my favorite (albeit elementary) applications is showing that $\mathbb{Q}$ is not a $G_{\delta}$ set.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "248",
"answer_count": 28,
"answer_id": 3
} |
Combinatorics thinking I saw this question in a book I've been reading: in a group of four mathematicians and five physicians, how many groups of four people can be created if at least two people are mathematicians?
The solution is obtained by ${4 \choose 2}{5 \choose 2} + {4 \choose 3}{5 \choose 1} + {4 \choose 4}{5 \choose 0} = 81$. But I thought of the following (wrong) solution:
Step 1. Choose two mathematicians. It gives ${4 \choose 2}$ different ways of choosing.
Step 2. Choose two people from the seven people left. It gives ${7 \choose 2}$ ways of choosing.
Step 3. Multiply. ${4 \choose 2}{7 \choose 2} = 126 = {9 \choose 4}$. It is equivalent of choosing four people in one step. Clearly wrong.
I really don't know what's wrong in my "solution". It seems like I am counting some cases twice, but I wasn't able to find the error. What am I missing here?
PS. It is not homework.
| You have counted the number of ways to choose two mathematicians leading the group, plus two regular members which can be mathematicians or physicians. A group containing three mathematicians, for instance, is counted $\binom{3}{2}=3$ times, once for each way of designating two of them as the leading pair; that is why your $126$ exceeds the correct count of $81$.
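The overcount can be made explicit by enumeration (our own check, not part of the answer):

```python
from itertools import combinations

people = [("M", i) for i in range(4)] + [("P", i) for i in range(5)]
mathematicians = people[:4]

# Correct count: 4-person groups containing at least two mathematicians
groups = {frozenset(g) for g in combinations(people, 4)
          if sum(kind == "M" for kind, _ in g) >= 2}
print(len(groups))   # 81

# Flawed count: choose a pair of mathematicians, then any two of the rest
choices = [frozenset(pair) | frozenset(rest)
           for pair in combinations(mathematicians, 2)
           for rest in combinations([p for p in people if p not in pair], 2)]
print(len(choices))         # 126 = C(4,2) * C(7,2)
print(len(set(choices)))    # 81 distinct groups, so some were counted repeatedly
```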
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Trying to find the name of this Nim variant Consider this basic example of subtraction-based Nim before I get to my full question:
Let $V$ represent all valid states of a Nim pile (the number of stones remaining):
$V = 0,1,2,3,4,5,6,7,8,9,10$
Let $B$ be the bound on the maximum number of stones I can remove from the Nim pile in a single move (minimum is always at least 1):
$B = 3$
Optimal strategy in a two-player game then is to always ensure that at the end of your turn, the number of stones in the pile is a number found in $V$, and that it is congruent to $0$ modulo $(B+1)$. During my first move I remove 2 stones because $8$ modulo $4$ is $0$.
My opponent removes anywhere from 1-3 stones, but it doesn't matter because I can then reduce the pile to $4$ because $4$ modulo $4$ is $0$. Once my opponent moves, I can take the rest of the pile and win.
This is straightforward, but my question is about a more advanced version of this, specifically when $V$ does not include all the numbers in a range. Some end-states of the pile are not valid, which implies that I cannot find safe positions by applying the modulo $(B+1)$ rule.
Does this particular variant of Nim have a name that I can look up for further research? Is there another way to model the pile?
| These are known as subtraction games; in general, for some set $S=\{s_1, s_2, \ldots s_n\}$ the game $\mathcal{S}(S)$ is the game where each player can subtract any element of $S$ from a pile. (So your simplified case is the game $\mathcal{S}(\{1\ldots B\})$) The nim-values of single-pile positions in these games are known to be ultimately periodic, and there's a pairing phenomenon that shows up between 0-values and 1-values, but generically there isn't much known about these games with $n\gt 2$. Berlekamp, Conway, and Guy's Winning Ways For Your Mathematical Plays has a section on subtraction games; as far as I can tell, it's still the canonical reference for combinatorial game theory. The Games of No Chance collections also have some information on specific subtraction games, and it looks like an article on games with three-element subtraction sets showed up recently on arxiv ( http://arxiv.org/abs/1202.2986 ); that might be another decent starting point for references into the literature.
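The Grundy values of a subtraction game are easy to compute, and the ultimate periodicity is visible even for small sets (a sketch of our own; the subtraction sets below are arbitrary examples):

```python
def grundy_sequence(subtraction_set, n):
    # Sprague-Grundy values for piles of 0..n-1 stones: each move removes
    # some s in subtraction_set; the value of a pile is the minimum
    # excludant (mex) of the values reachable in one move.
    g = []
    for pile in range(n):
        reachable = {g[pile - s] for s in subtraction_set if s <= pile}
        mex = 0
        while mex in reachable:
            mex += 1
        g.append(mex)
    return g

# The bounded game S({1,...,B}) gives the familiar period-(B+1) pattern:
print(grundy_sequence({1, 2, 3}, 12))   # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]

# An irregular set is still ultimately periodic; positions of value 0 are
# the ones to leave for your opponent.
print(grundy_sequence({2, 5, 6}, 16))
```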
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Question About Concave Functions It is easy to prove that no non-constant positive concave function exists on $\Bbb R$ (for example by integrating:
$ u'' \leq 0 \to u' \leq c \to u \leq cx+c_2 $, and since $u>0$, we obviously get a contradiction).
Can this result be generalized to $ \Bbb R^2 $ and the Laplacian? Is there an easy way to see this? (i.e., that no non-constant positive real-valued function with non-positive Laplacian exists)
Thanks!
| Let $u$ be strictly concave and twice differentiable on $\mathbb{R}^{2}$; then $v(x) = u(x,0)$ is strictly concave and twice differentiable on $\mathbb{R}$, and hence assumes negative values.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Images of sections of a sheaf I'm currently reading a paper by X. Caicedo containing an introduction to sheaves.
On page 8 he claims that for every sheaf of sets $p:E\to X$ and every section $\sigma:U\to E$ ($U$ being open in $X$) the image $\sigma(U)$ is open.
This statement is proved by picking a point $e\in\sigma(U)$, an open neighborhood S of e, which satisfies
* $p(S)$ is open in $X$,
* $p\restriction S$ is a homeomorphism
and arriving at an open set $\sigma(U)\supseteq S\cap\sigma(U)=p^{-1}(p(S)\cap U)$.
I think the "$\supseteq$" part of this equation does not hold, if for example E is equipped with the discrete topology and the stalk of $p(e)$ has more than one element.
I have tried to show that $(p\restriction S)^{-1}(p(S)\cap U) = p^{-1}(U)\cap S$ is contained in $\sigma(U)$, but all attempts at that felt quite clumsy, leading me to believe I have missed something important about the structure of a sheaf.
| In order to prove $\sigma(U)$ is open, since $p(S)\cap U$ is open in $X$ and $p|_S$ is a homeomorphism, it suffices to show $(p|_S)(S\cap \sigma(U))=p(S)\cap U$. Clearly, this is true ($p\sigma(u)=u$ for $u\in U$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Average number of times it takes for something to happen given a chance Given a chance between 0% and 100% of getting something to happen, how would you determine the average amount of tries it will take for that something to happen?
I was thinking that $\int_0^\infty \! (1-p)^x \, \mathrm{d} x$ where $p$ is the chance would give the answer, but doesn't that also put in non-integer tries?
| Here's an easy way to see this, on the assumption that the average actually exists (it might otherwise be a divergent sum, for instance). Let $m$ be the average number of trials before the event occurs.
There is a $p$ chance that it occurs on the first try. On the other hand, there is a $1-p$ chance that it doesn't happen, in which case we have just spent one try, and on average it will take $m$ further tries.
Therefore $m = (p)(1) + (1-p)(1+m) = 1 + (1-p)m$, which is easily rearranged to obtain $m = 1/p$.
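The series $\sum_{k\ge 1} k\,p\,(1-p)^{k-1}$ and the recurrence $m = 1 + (1-p)m$ both give $1/p$; a quick numeric check of our own (the values of $p$ are arbitrary):

```python
def expected_tries(p, terms=5000):
    # Truncation of sum_{k>=1} k * p * (1-p)^(k-1); the tail is negligible
    # for the values of p used below.
    return sum(k * p * (1 - p) ** (k - 1) for k in range(1, terms))

for p in (0.5, 0.3, 0.1):
    m = expected_tries(p)
    assert abs(m - 1 / p) < 1e-9               # matches m = 1/p
    assert abs(m - (1 + (1 - p) * m)) < 1e-9   # satisfies m = 1 + (1-p)m
print(expected_tries(0.5))   # ~2.0 tries on average for a fair coin
```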
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/165993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 1
} |
Linear system with positive semidefinite matrix I have a linear system $Ax=b$, where
* $A$ is symmetric, positive semidefinite, and positive. $A$ is a variance-covariance matrix.
* vector $b$ has elements $b_1>0$ and the rest $b_i<0$, for all $i \in \{2, \dots, N\}$.
Prove that the first component of the solution is positive, i.e., $x_1>0$.
Does anybody have any idea?
| I don't think $x_1$ must be positive.
A counterexample might be a positive definite matrix $A = [1 \space -0.2 ; \space -0.2 \space 1]$ with its inverse matrix $A^{-1}$ having $A_{11}, A_{12} > 0$.
-
Edit:
Sorry. A counterexample might be a normalized covariance matrix
$ A= \left( \begin{array}{cccc}
1 & 0.6292 & 0.6747 & 0.7208 \\
0.6292 & 1 & 0.3914 & 0.0315 \\
0.6747 & 0.3914 & 1 & 0.6387 \\
0.7208 & 0.0315 & 0.6387 & 1 \end{array} \right) $.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Determine whether $\sum\limits_{n=1}^\infty \frac{(-3)^{n-1}}{4^n}$ is convergent or divergent. If convergent, find the sum. $$\sum\limits_{n=1}^\infty \frac{(-3)^{n-1}}{4^n}$$
It's geometric, since the common ratio $r$ appears to be $\frac{-3}{4}$, but this is where I get stuck. I think I need to do this: let $f(x) = \frac{(-3)^{x-1}}{4^x}$.
$$\lim\limits_{x \to \infty}\frac{(-3)^{x-1}}{4^x}$$
Is this how I handle this exercise? I still cannot seem to get the answer $\frac{1}{7}$
| If $\,a, ar, ar^2,\ldots\,$ is a geometric series with $\,|r|<1\,$, then
$$\sum_{n=0}^\infty ar^n=\lim_{N\to\infty}\sum_{n=0}^{N} ar^n=\lim_{N\to\infty}\frac{a(1-r^{N+1})}{1-r}=\frac{a}{1-r}$$since $\,r^N\xrightarrow [N\to\infty]{} 0\Longleftrightarrow |r|<1\,$ , and thus
$$\sum_{n=1}^\infty\frac{(-3)^{n-1}}{4^n}=\frac{1}{4}\sum_{n=0}^\infty \left(-\frac{3}{4}\right)^n=\frac{1}{4}\frac{1}{1-\left(-\left(\frac{3}{4}\right)\right)}=\frac{1}{4}\frac{1}{\frac{7}{4}}=\frac{1}{7}$$
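The partial sums can be checked exactly against $\frac17\left(1-\left(-\frac34\right)^N\right)$ (a verification of our own):

```python
from fractions import Fraction as F

def partial_sum(N):
    # sum_{n=1}^{N} (-3)^(n-1) / 4^n, in exact rational arithmetic
    return sum(F(-3) ** (n - 1) * F(1, 4) ** n for n in range(1, N + 1))

# a(1 - r^N)/(1 - r) with a = 1/4, r = -3/4 simplifies to (1/7)(1 - (-3/4)^N)
for N in (1, 2, 5, 20):
    assert partial_sum(N) == F(1, 7) * (1 - F(-3, 4) ** N)
print(float(partial_sum(40)))   # close to 1/7 = 0.142857...
```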
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
$10+10\times 0$ equals $0$ or $10$ I thought $10+10\times 0$ equals $0$ because:
$$10+10 = 20$$
And
$$20\times 0 = 0$$
I know about BEDMAS and came to the conclusion that it should be $0$, not $10$.
But as per this, answer is $10$, are they right?
| To elucidate what you said above in the original post, consider that $20\times0=0$, and consider also that $10+10=20$. If we have two equations like that, with one number $n$ on one side of an equation by itself with $m$ on the other side of the equation, and $n$ also appearing in the middle of a formula elsewhere, we should then have the ability to replace $n$ by $m$ in the middle of the formula elsewhere. The rule of replacement basically says this. In other words, $20\times0=10+10\times0$, since $20=10+10$, and $20\times0=0$, we replace $20$ by $10+10$ in $20\times0=0$ and obtain $10+10\times0=0$, right? But, BEDMAS says that $10+10\times0=10$. Why the difference?
The catch here lies in that the $10+10\times0$ obtained in the first instance does NOT mean the same thing as $10+10\times0$, given by fiat, in the second instance. Technically speaking, neither $20\times0=0$ nor $10+10=20$ ends up as quite correct enough that you can use the rule of replacement as happened above. More precisely, $20\times0=0$ abbreviates $(20\times0)=0$, and $10+10=20$ abbreviates $(10+10)=20$. Keeping that in mind, we can see that using the rule of replacement here leads us to $((10+10)\times0)=0$ or more shortly $(10+10)\times0=0$. BEDMAS says that $10+10\times0$ means $(10+(10\times0))$ or more shortly $10+(10\times0)$, which differs from $(10+10)\times0$. So the problem here arises because the infix notation you've used makes it necessary to keep parentheses in formulas if you wish to use the rule of replacement as mechanically as you did.
If you do express all formulas with complete parentheses in infix notation, then BEDMAS becomes unnecessary. If you wish to drop parentheses and use the rule of replacement mechanically as you did, then you'll need to write formulas in either Polish notation or Reverse Polish notation, or fully parenthesized infix notation, instead of partially parenthesized, "normal" infix notation. If you wish to keep BEDMAS and like conventions around, and write in normal infix notation, then you have to refrain from applying the rule of replacement as mechanically as you did. Conventional mathematicians and authors of our era generally appear to prefer the latter. I don't claim to understand why they appear to prefer a notation that makes such a simple logical rule, in some cases at least, harder to use than needed.
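As an aside I'm adding (not from the original answer): Python's expression grammar encodes the same precedence convention as BEDMAS, so the two readings can be checked directly:

```python
# Multiplication binds tighter than addition, so this parses as 10 + (10 * 0).
print(10 + 10 * 0)    # 10
# Explicit parentheses force the other reading, (10 + 10) * 0.
print((10 + 10) * 0)  # 0
```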
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Proving $\mathrm e <3$ Well I am just asking myself if there's a more elegant way of proving $$2<\exp(1)=\mathrm e<3$$ than doing it by induction and using the fact that $\lim\limits_{n\rightarrow\infty}\left(1+\frac1n\right)^n=\mathrm e$. Is there one (or some) alternative way(s)?
| It's equivalent to show that the natural logarithm of 3 is bigger than 1, but this is
$$
\int_1^3 \frac{dx}{x}.
$$
A right hand sum is guaranteed to underestimate this integral, so you just need to take a right hand sum with enough rectangles to get a value larger than 1.
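Here is a numerical sketch of that argument in Python (my addition): since $1/x$ is decreasing on $[1,3]$, every right-hand sum underestimates the integral, so any right-hand sum exceeding $1$ certifies $\ln 3>1$, i.e. $\mathrm e<3$.

```python
# Right-hand Riemann sum of the integral of 1/x over [1, 3] with n rectangles.
# Because 1/x is decreasing, this sum is a strict underestimate of ln(3).
def right_sum(n):
    width = 2 / n
    return sum(width / (1 + (i + 1) * width) for i in range(n))

print(right_sum(50))  # about 1.085, already bigger than 1
```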
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 7,
"answer_id": 3
} |
De-arrangement in permutation and combination This article talks about de-arrangement in permutation combination.
Funda 1: De-arrangement
If $n$ distinct items are arranged in a row, then the number of ways they can be rearranged such that none of them occupies its original position is,
$$n! \left(\frac{1}{0!} - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots + (-1)^n \frac{1}{n!}\right).$$
Note: De-arrangement of 1 object is not possible.
$\mathrm{Dearr}(2) = 1$; $\mathrm{Dearr}(3) = 2$; $\mathrm{Dearr}(4) = 12 - 4 + 1 = 9$; $\mathrm{Dearr}(5) = 60 - 20 + 5 - 1 = 44$.
I am not able to understand the logic behind the equation. I searched in the internet, but could not find any links to this particular topic.
Can anyone explain the logic behind this equation or point me to some link that does it ?
| A while back, I posted three ways to derive the formula for derangements. Perhaps reading those might provide some insight into the equation above.
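For readers who want a concrete check, here is a small Python script (my addition, not from the linked posts) comparing the formula with brute-force counting:

```python
from itertools import permutations
from math import factorial

def derangements_formula(n):
    # n! * (1/0! - 1/1! + 1/2! - ... + (-1)^n / n!), computed exactly.
    return sum((-1) ** k * (factorial(n) // factorial(k)) for k in range(n + 1))

def derangements_bruteforce(n):
    # Count permutations of {0, ..., n-1} with no fixed point.
    return sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))

for n in range(2, 7):
    print(n, derangements_formula(n), derangements_bruteforce(n))
```

Both columns agree: 1, 2, 9, 44, 265 for $n=2,\dots,6$, matching the values quoted in the question.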
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to solve infinite repeating exponents How do you approach a problem like (solve for $x$):
$$x^{x^{x^{x^{...}}}}=2$$
Also, I have no idea what to tag this as.
Thanks for any help.
| I'm just going to give you a HUGE hint. and you'll get it right way. Let $f(x)$ be the left hand expression. Clearly, we have that the left hand side is equal to $x^{f(x)}$. Now, see what you can do with it.
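Following the hint, one candidate solution is $x=\sqrt 2$; a numerical sketch (my addition) shows the tower at $x=\sqrt2$ converging to $2$:

```python
# Build the tower x^(x^(x^...)) at x = sqrt(2) by iterating t -> x ** t.
x = 2 ** 0.5
t = x
for _ in range(200):
    t = x ** t

print(t)  # converges to 2.0
```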
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
FFT with a real matrix - why storing just half the coefficients? I know that when I perform a real to complex FFT half the frequency domain data is redundant due to symmetry. This is only the case in one axis of a 2D FFT though. I can think of a 2D FFT as two 1D FFT operations, the first operates on all the rows, and for a real valued image this will give you complex row values. In the second stage I apply a 1D FFT to every column, but since the row values are now complex this will be a complex to complex FFT with no redundancy in the output. Hence I only need width / 2 points in the horizontal axis, but you still need height points in the vertical axis. (thanks to Paul R)
My question is: I read that the symmetry is just "every term in the right part is the complex conjugated of the left part"
I have a code that I know for sure that is right that does this:
*
*take a real matrix as input -> FFT -> stores JUST half width (half coefficients, the nonredundant ones) but full height
*perform a pointwise-multiplication (alias circular convolution in time, I know, the matrices are padded) with another matrix
*Return with an IFFT on the half-width and full-height matrix -> to real values. And that's the convolution result.
Why does it work? I mean: the conjugated complex numbers skipped (the negative frequencies) aren't of any use to the multiplication? Why is that?
To ask this as simply as I can: why do I discard half of the complex data from a real FFT to perform calculations? Aren't they important too? They're complex conjugated numbers after all
| Real input, compared to complex input, contains half the information (since the zero-padded part contains no information). The output is in complex form, which means a double-size container for the real input. So from the complex output we can naturally eliminate the duplicate part without any loss. I tried to be as simple as possible.
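To make the symmetry concrete, here is a NumPy sketch (my addition, not part of the original answer): for real input the full spectrum is conjugate-symmetric, `rfft` keeps only the non-redundant half, and that half suffices to recover the signal:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)       # real input, length N = 8

full = np.fft.fft(x)             # all N complex coefficients
half = np.fft.rfft(x)            # only the N//2 + 1 non-redundant ones

# Conjugate symmetry of the spectrum of a real signal: X[N-k] == conj(X[k]).
assert np.allclose(full[1:], np.conj(full[1:][::-1]))
# The half-spectrum is just the leading part of the full spectrum...
assert np.allclose(half, full[: len(x) // 2 + 1])
# ...and it reconstructs the real signal exactly.
assert np.allclose(np.fft.irfft(half, n=len(x)), x)
print("half-spectrum of a real signal carries all the information")
```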
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Are there problems that are optimally solved by guess and check? For example, let's say the problem is: What is the square root of 3 (to x bits of precision)?
One way to solve this is to choose a random real number less than 3 and square it.
1.40245^2 = 1.9668660025
2.69362^2 = 7.2555887044
...
Of course, this is a very slow process. Newton-Raphson gives the solution much more quickly. My question is: Is there a problem for which this process is the optimal way to arrive at its solution?
I should point out that information used in each guess cannot be used in future guesses. In the square root example, the next guess could be biased by the knowledge of whether the square of the number being checked was less than or greater than 3.
| There are certainly problems where a brute force search is quicker than trying to remember (or figure out) a smarter approach. Example: Does 5 have a cube root modulo 11?
An example of a slightly different nature is this recent question where an exhaustive search of the (very small) solution space saves a lot of grief and uncertainty compared to attempting to perfect a "forward" argument.
A third example: NIST is currently running a competition to design a next-generation cryptographic hash function. One among several requirements for such a function is that it should be practically impossible to find two inputs that map to the same output (a "collision"), so a collision found by any method automatically disqualifies a proposal. One of the entries built on cellular automata, and its submitter no doubt thought it would be a good idea because there is no nice known way to run a general cellular automaton backwards. The submission, however, fell within days to (what I think must have been) a simple guess-and-check attack -- it turned out that there were two different one-byte inputs that hashed to the same value. Attempting to construct a complete theory that would allow one to derive a collision in an understanding-based way would have been much more difficult than just seeing where some initial aimless guessing takes you.
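The first example really is a one-liner by exhaustive search (a Python sketch I'm adding):

```python
# Does 5 have a cube root modulo 11?  Brute force over all residues.
cube_roots = [x for x in range(11) if pow(x, 3, 11) == 5]
print(cube_roots)  # [3], since 3**3 = 27 = 2*11 + 5
```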
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Existence of such points in compact and connected topological space $X$ Let $X$ be a topological space which is compact and connected.
$f$ is a continuous function such that;
$f : X \to \mathbb{C}-\{0\}$.
Explain why there exist two points $x_0$ and $x_1$ in $X$ such that $|f(x_0)| \le |f(x)| \le |f(x_1)|$ for all $x$ in $X$.
| Let $g(x)=|f(x)|$, observe that the complex norm is a continuous function from $\mathbb C$ into $\mathbb R$, therefore $g\colon X\to\mathbb R$ is continuous.
Since $X$ is compact and connected the image of $g$ is compact and connected. All connected subsets of $\mathbb R$ are intervals (open, closed, or half-open, half-closed); and all compact subsets of $\mathbb R$ are closed and bounded (Heine-Borel theorem).
Therefore the image of $g$ is an interval of the form $[a,b]$. Let $x_0,x_1\in X$ be such that $g(x_0)=a$ and $g(x_1)=b$.
(Note that the connectedness of $X$ is not really needed, because compact subsets of $\mathbb R$ are closed and bounded, and thus have minimum and maximum.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
How to get the characteristic equation? In my book, this sequence defined by recurrence is presented:
$$U_n=3U_{n-1}-U_{n-3}$$
And it says that the characteristic equation of such is:
$$x^3=3x^2-1$$
Honestly, I don't understand how. How do I get the characteristic equation given a sequence?
| "Guess" that $U(n) = x^n$ is a solution and plug into the recurrence relation:
$$
x^n = 3x^{n-1} - x^{n-3}
$$
Divide both sides by $x^{n-3}$, assuming $x \ne 0$:
$$
x^3 = 3x^2 - 1
$$
Which is the characteristic equation you have.
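A numerical check (my addition): any root of the characteristic equation really does generate a solution $U_n=x^n$ of the recurrence, which is exactly what the "guess and plug in" step asserts.

```python
# Find a root of x^3 - 3x^2 + 1 = 0 by bisection, then check that
# U_n = x^n satisfies U_n = 3*U_{n-1} - U_{n-3}.
def p(x):
    return x ** 3 - 3 * x ** 2 + 1

lo, hi = 0.5, 1.0  # p(0.5) > 0 and p(1.0) < 0, so a root lies in between
for _ in range(80):
    mid = (lo + hi) / 2
    if p(lo) * p(mid) <= 0:
        hi = mid
    else:
        lo = mid
x = (lo + hi) / 2

for n in range(3, 10):
    assert abs(x ** n - (3 * x ** (n - 1) - x ** (n - 3))) < 1e-9
print("root", round(x, 6), "generates a solution of the recurrence")
```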
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 5,
"answer_id": 0
} |
Unitisation of $C^{*}$-algebras via double centralizers In most of the books I read about $C^{*}$-algebras, the author usually embeds the algebra, say, $A$, as an ideal of $B(A)$, the algebra of bounded linear operators on $A$, by identifying $a$ and $M_a$, the left multiplication of $a$.
However, in Murphy's $C^{*}$-algebras and operator theory, $A$ is embedded as an ideal of the space of 'double centralizers'. See p39 of his book.
I do not quite understand why we need this complicated construction since the effect is almost the same as the usual embedding. The author remarked that this construction is useful in certain approaches to K-theory, which further confuses me.
Can somebody give a hint?
Thanks!
| The set of double centralizers of a $C^*$-algebra $A$ is usually also called the multiplier algebra $\mathcal{M}(A)$. It is in some sense the largest unital $C^*$-algebra containing $A$ as an essential ideal. If $A$ is already unital it is equal to $A$. (Whereas in your construction of a unitalisation we have that for unital $A$ the unitalisation is isomorphic as an algebra to $A\oplus \mathbb{C}$.)
Multiplier algebras can also be constructed as the algebra of adjointable operators of the Hilbert module $A$ over itself. Since Hilbert modules are central objects in $KK$-theory, and $K$-theory is a special case of $KK$-theory, this could be one reason why it is good to introduce these multiplier algebras quite early.
But if you want to learn basic theory I think this concept is not that important yet.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Detailed diagram with mathematical fields of study Some time ago, I was searching for a detailed diagram with mathematical fields of study; the nearest one I could find is in this file, on the second page.
I want something that shows information like: "Geometry leads to topic I, Geometry and Algebra lead to topic J", and so on.
Can you help me?
| Saunders Mac Lane's book Mathematics, Form and Function (Springer, 1986) has a number of illuminating diagrams showing linkages between various fields of mathematics (and also to some related areas.)
For example,
p149: Functions & related ideas of image and composition;
p184: Concepts of calculus;
p306: Interconnections of mathematics and mechanics;
p408: Sets, functions and categories;
p416: Ideas arising within mathematics;
p422-3: various ideas and subdivisions;
p425: Interconnections for group theory;
p426: Connections of analysis with classical applied mathematics;
p427: Probability and related ideas;
p428: Foundations.
Mac Lane summarises on p428: "We have thus illustrated many subjects and branches of mathematics, together with diagrams of the partial networks in which they appear. The full network of mathematics is suggested thereby, but it is far too extensive and entangled with connections to be captured on any one page of this book."
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 4,
"answer_id": 0
} |
In set theory, what does the symbol $\mathfrak d$ mean? What's the meaning of the following symbol in set theory, which looks like a $b$?
I know symbols such as $\omega$, $\omega_1$, and so on; however, what does it denote in the lemma?
Thanks for any help:)
| It is the German script $\mathfrak{d}$ given by the LaTeX \mathfrak{d}. It probably represents a cardinal number (sometimes $\mathfrak{c}$ is used to represent the cardinality of the real numbers), but it would definitely depend on the context of what you are reading.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/166995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Show that there exists $n$ such that $i^n = 0$ for all $i$ in the ideal $I$ I'm new to this medium, but I'm quite stuck with an exercise so hopefully someone here can help me.
This is the exercise:
Let $I$ be an ideal in a Noetherian ring $A$, and assume that for every $i\in I$ there exists an $n_i$ such that $i^{n_i} = 0$. Show that there is an $n$ such that $i^n = 0$ for every $i\in I$.
I thought about this:
$A$ is Noetherian so $I$ is finitely generated. That means, there exist $i_1, \ldots, i_m$ such that all of the elements in $I$ are linear combinations of $i_1, \ldots, i_m$.
Now is it possible to take $n = n_1n_2\cdots n_m$?
I was thinking, maybe I have to use something like this: $(a + b)^p = a^p + b^p$. This holds in a field of characteristic $p$. If this is true, then indeed $(a_1i_1 + \ldots + a_mi_m)^n = 0 + \ldots + 0 = 0.$
| Pritam's binomial approach is the easiest way to solve this problem in the commutative setting. In case you're interested though, it's also true in the noncommutative setting.
Levitzky's theorem says that any nil right ideal in a right Noetherian ring is nilpotent. This implies your conclusion (in fact, even more.) However this is not as elementary as the binomial proof in the commutative case :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Extension of morphisms on surfaces Consider two regular integral proper algebraic surfaces $X$ and $Y$ over a DVR $\mathcal O_K$ with residue field $k$. Let $U \subset X$ be an open subset, s.t. $X\setminus U$ consists of finitely many closed points lying in the closed fiber $X_k$. Assume that all points in $X\setminus U$ considered as points in $X_k$ are regular. Consider now an $\mathcal O_K$-morphism $f: U \to Y$.
Is there any extension of $f$ to $X$?
I know that $Y_k$ is proper. Since every $x \in U_k$ is regular, $\mathcal O_{X_k,x}$ is a DVR, therefore by the valuative criterion of properness
$$
Hom_k(U_k,Y_k) \cong Hom_k(X_k,Y_k),
$$
so $f_k$ can be uniquely extended to $X_k$. Thus, set-theoretically an extension of $f$ exists.
On the other hand, if there is an extension of $f$ to $X$, then on the closed fiber it coincides with $f_k$. Unfortunately, I don't see, how to construct such an extension scheme-theoretically.
Motivation
Consider a subset $\mathcal C$ of the set of irreducible components of $Y_k$. Assume that the contraction morphism $g: Y \to X$ of $\mathcal C$ exists, i.e. $X$ is proper over $\mathcal O_K$, $g$ is birational and $g(C)$ is a point if and only if $C \in \mathcal C$. Since $g$ is birational we have a section $f: U \to Y$ of $g$ over an open $U \subset X$. In fact, $X\setminus U = f(\mathcal C)$. If we now assume that all $x \in X\setminus U$ are regular as points in $X_k$, we will obtain the above situation.
| Suppose $f : U\to Y$ is dominant. Then $f$ extends to $X$ if and only if $Y\setminus f(U)$ is finite. In particular, in your situation, $f$ extends to $X$ only when $g$ is an isomorphism (no component is contracted).
One direction (the one that matters for you) is rather easy: suppose $f$ extends to $f' : X\to Y$. As $X$ is proper, $f'(X)$ is closed and dense in $Y$, so $f'(X)=Y$ and $Y\setminus f(U)\subset f(X\setminus U)$ is finite.
For the other direction, consider the graph $\Gamma\subset X\times_{O_K} Y$ of the rational map $f : X \dashrightarrow Y$. Let $p: \Gamma\to X$ be the first projection. Then $\Gamma\setminus p^{-1}(U)$ is contained in the finite set $(X\setminus U)\times (Y\setminus f(U))$. This implies that $p$ is quasi-finite. As $p : \Gamma\to X$ is birational and proper (thus finite), and $X$ is normal, this implies that $p$ is an isomorphism. So $f$ extends to $X$ via $p^{-1}$ and the second projection $\Gamma\to Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can we possibly combine $\int_a^b{g(x)dx}$ plus $\int_c^d{h(x)dx}$ into $\int_e^f{j(x)dx}$? I'm wondering if this is possible for the general case. In other words, I'd like to take $$\int_a^b{g(x)dx} + \int_c^d{h(x)dx} = \int_e^f{j(x)dx}$$ and determine $e$, $f$, and $j(x)$ from the other (known) formulas and integrals. I'm wondering what restrictions, limitations, and problems arise.
If this is not possible in the general case, I'm wondering what specific cases this would be valid for, and also how it could be done. It's a curiosity of mine for now, but I can think of some possible problems and applications to apply it to.
| Here's a method that should allow one a large degree of freedom as well as allowing Riemann integration (instead of Lebesgue integration or some other method):
Let $\tilde{g}$ be such that: $$\int_a^b{g(x)dx} = \int_e^f{\tilde{g}(x)dx}$$
...and $\tilde{h}$ follows similarly. Then they can be both added inside a single integral.
The first method that comes to mind is to let $\tilde{g}(x) = \dot{g}\cdot g(x)$, where $\dot{g}$, a constant, is the ratio of the old and new integrations. A similar method that comes to mind is to actually let $\dot{g}$ be a function.
Another method that I'm exploring, and is somewhat questionable, is to attempt to use $e$ and $f$ as functions, possibly even of $x$, although this may be undefined or just plain wrong.
I'll add ideas to this as I hopefully come up with better methods.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Limit of a sequence of real numbers If $(a_n), (b_n)$ are two sequences of real numbers such that $(a_n)\rightarrow a,\,\,(b_n)\rightarrow b$ with $a, b\in \mathbb{R}^+$, how does one prove that $a_n^{b_n}\rightarrow a^b$?
| Note: The statement doesn't require $b > 0$. We don't assume it here.
If we take the continuity of $\ln$ and $\exp$ for granted, the problem essentially boils down to showing $b_n x_n \to bx$ where $x_n = \ln a_n, x = \ln a$ since $a_n^{b_n} = e^{b_n x_n}$. This is what the work in the first proof below goes toward (the "add-and-subtract $b_n \ln a_n$" trick comes up elsewhere too). But if you can use $b_n \to b \text{ and } x_n \to x \implies b_n x_n \to bx$ then the proof simplifies to the second one below.
Proof: Given $\varepsilon > 0$, there exist $K_1, K_2 \in \mathbb{N}$ such that for $n \in \mathbb{N}$ we have $n > K_1 \implies |\ln a_n - \ln a| < \frac{\varepsilon}{2(|b|+1)}$ (since $a_n \to a \in \mathbb{R}^+ \implies \ln a_n \to \ln a$ by continuity of $\ln$ on $\mathbb{R}^+$) and
$n > K_2 \implies |b_n - b| < \min(\frac{\varepsilon}{2 (|\ln a| + 1)},1)$ (by hypothesis). Let $K = \max(K_1, K_2)$. Then
$$\begin{eqnarray}
n > K \implies |b_n \ln a_n - b \ln a| &=& |b_n \ln a_n - b_n \ln a + b_n \ln a - b \ln a|\\
&\leq& |b_n \ln a_n - b_n \ln a| + |b_n \ln a - b \ln a|\\
&=& |b_n|\,|\ln a_n - \ln a| + |b_n - b|\,|\ln a|\\
&<& (|b| + 1)\,\frac{\varepsilon}{2(|b|+1)} + \frac{\varepsilon}{2 (|\ln a|+1)}\,|\ln a|\\
&<& \varepsilon
\end{eqnarray}$$
so $b_n \ln a_n \to b \ln a \in \mathbb{R}$ by definition. But $x \mapsto e^x$ is continuous on $\mathbb{R}$, hence $a_n^{b_n} = e^{b_n \ln a_n} \to e^{b \ln a} = a^b$.
As mentioned at the top, using the theorem on the limit of a product gives this:
Short proof:
$$\begin{eqnarray}
a^b &=& \exp\left[b \ln a\right]\\
&=& \exp\left[\left(\lim_{n \to \infty} b_n\right)\left(\ln \lim_{n \to \infty} a_n\right)\right] &\text{ assumption}&\\
&=& \exp\left[\left(\lim_{n \to \infty} b_n\right)\left(\lim_{n \to \infty} \ln a_n\right)\right] &\text{ continuity of }\ln\text{ at }\lim_{n \to \infty}a_n = a \in \mathbb{R}^+&\\
&=& \exp\left[\lim_{n \to \infty} b_n \ln a_n\right] &\text{ product of limits }\longrightarrow\text{ limit of product}&\\
&=& \lim_{n \to \infty} e^{b_n \ln a_n} &\text{ continuity of }\exp\text{ at }\lim_{n \to \infty}b_n \ln a_n = b \ln a \in \mathbb{R}&\\
&=& \lim_{n \to \infty} a_n^{b_n}
\end{eqnarray}$$
All the work and $\varepsilon$'s are still there, but now they're hidden in the theorems we used.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Irreducible factors of $X^p-1$ in $(\mathbb{Z}/q \mathbb{Z})[X]$ Is it possible to determine how many irreducible factors $X^p-1$ has in the polynomial ring $(\mathbb{Z}/q \mathbb{Z})[X]$, and maybe even the degrees of the irreducible factors? (Here $p,q$ are primes with $\gcd(p,q)=1$.)
| It has one factor of degree $1$, namely $x-1$. All the remaining factors have the same degree, namely the order of $q$ in the multiplicative group $(\mathbb{Z}/p \mathbb{Z})^*$. To see it: this is the length of every orbit of the action of the Frobenius $a\mapsto a^q$ on the set of the roots of $(x^p-1)/(x-1)$ in the algebraic closure.
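A small computational illustration (my sketch, not part of the answer): for $p=7$ and $q=2$ the order of $2$ modulo $7$ is $3$, so over $\mathbb{F}_2$ the polynomial $x^7-1$ factors as $(x-1)$ times two irreducible cubics.

```python
def order_mod(q, p):
    # Multiplicative order of q in (Z/pZ)^*, assuming gcd(q, p) = 1.
    k, t = 1, q % p
    while t != 1:
        t = (t * q) % p
        k += 1
    return k

p, q = 7, 2
degree = order_mod(q, p)   # degree of each nonlinear irreducible factor
count = (p - 1) // degree  # number of such factors
print(degree, count)  # 3 2: x^7 - 1 = (x - 1) * (cubic) * (cubic) over F_2
```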
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
What does "increases in proportion to" mean? I came across a multiple choice problem where a function $f(x) = \frac{x^2 - 1}{x+1} - x$ is given. One has to choose the statement that is correct about the function. The different statements about the function included:
*
*(1) the function increases in proportion to $x^2$.
*(2) the function increases in proportion to $x$
*(3) the function is constant.
Obviously the function is constant, but my questions is what the "increases in proportion to" means. I haven't come across it before, and was wondering if it is standard for something. I would like so see an example of a function that can be said to satisfy this statement.
| To answer your question about examples where $f(x)$ would be proportional to $x$, and $x^2$, we only need to slightly modify the original function.
$f(x) = \frac {x^2-1}{x+1} $ is interesting between $x=-2$ and $x=2$, but as $x$ becomes really large or really small, the "-1" term and the "+1" term are insignificant and the expression is approximately $\frac {x^2}{x}$ or $x$. This would be proportional to $x$.
If $f(x) = \frac{x^3-1}{x+1} - x$, then as $x$ becomes really large or really small, the function breaks down to $\frac{x^3}{x} -x $ or $x^2$. This would be proportional to $x^2$ for much of $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Normal Field Extension $X^4 -4$ has a root in $\Bbb Q(2^{1/2})$ but does not split in $\Bbb Q(2^{1/2})$ implying that $\Bbb Q(2^{1/2})$ is not a normal extension of $\Bbb Q$ according to most definitions.
But $\Bbb Q(2^{1/2})$ is considered a normal extension of $\Bbb Q$ by everybody. What am I missing here?
| You are missing the fact that $x^4-4$ is not irreducible over $\mathbb{Q}$: $x^4-4 = (x^2-2)(x^2+2)$.
The definition you have in mind says that if $K/F$ is algebraic, then $K$ is normal if and only if every irreducible $f(x)\in F[x]$ that has at least one root in $K$ actually splits in $K$. Your test polynomial is not irreducible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Exercise on compact $G_\delta$ sets I'm having trouble proving an exercise in Folland's book on real analysis.
Problem: Consider a locally compact Hausdorff space $X$. If $K\subset X$ is a compact $G_\delta$ set, then show there exists an $f\in C_c(X, [0,1])$ with $K=f^{-1}(\{1\})$.
We can write $K=\cap_1^\infty U_i$, where the $U_i$ are open.
My thought was to use Urysohn's lemma to find functions $f_i$ which are 1 on $K$ and $0$ outside of $U_i$, but I don't see how to use them to get the desired function. If we take the limit, I think we just get the characteristic function of $K$.
I apologize if this is something simple. It has been a while since I've done point-set topology.
| As you have said, we can use Urysohn's lemma for compact sets to construct a sequence of functions $f_i$ such that $f_i$ equals $1$ in $K$ and $0$ outside $U_i$.
Furthermore, $X$ is locally compact, so there is an open neighbourhood $U$ of $K$ whose closure is compact. We can then assume without loss of generality that $U_i\subseteq U$.
Then we can put $f=\sum_i2^{-i} f_i$. Clearly, $f^{-1}[\{1\}]=K$.
Moreover, $f$ is the uniform limit of continuous functions (because $f_i$ are bounded by $1$), so it is continuous, and its support is contained in $U$, so $f$ is the function you seek.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
proof for $ (\vec{A} \times \vec{B}) \times \vec{C} = (\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}$ This formula just popped up in the textbook I'm reading, without any explanation:
$ (\vec{A} \times \vec{B}) \times \vec{C} = (\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}$
I did some "vector arithmetic" using the determinant method
but I'm not getting an answer that agrees with the above formula.
I'm wondering if anyone has seen this formula before and knows a proof of it?
the final result that i get is
$b_{1}(a_{3}c_{3}+a_{2}c_{2})-a_{1}(b_{2}c_{2}+b_{3}c_{3})i$
$b_{2}(a_{3}c_{3}+a_{1}c_{1})-a_{2}(b_{3}c_{3}+b_{1}c_{1})j$
$b_{3}(a_{2}c_{2}+a_{1}c_{1})-a_{3}(b_{2}c_{2}+b_{1}c_{1})k$
But I failed to see any correlation for $(\vec{A}\cdot\vec{C})$ and $(\vec{B}\cdot\vec{C})$ part...
| The vector $\vec A\times \vec B$ is perpendicular to the plane containing $\vec A$ and $\vec B$. Now, $(\vec A\times \vec B)\times \vec C$ is perpendicular to the plane containing $\vec C$ and $\vec A\times \vec B$, so $(\vec A\times \vec B)\times \vec C$ lies in the plane containing $\vec A$ and $\vec B$, and hence is a linear combination of $\vec A$ and $\vec B$: $(\vec A\times \vec B)\times \vec C=\alpha \vec A + \beta \vec B$. Taking the dot product with $\vec C$ on both sides gives $0$ on the L.H.S. (as $(\vec A\times \vec B)\times \vec C$ is perpendicular to $\vec C$), hence $0=\alpha (\vec A\cdot \vec C)+\beta(\vec B\cdot\vec C)\implies \frac{\beta}{\vec A\cdot \vec C}=\frac{-\alpha}{\vec B\cdot \vec C}=\lambda \implies \alpha=-\lambda(\vec B\cdot \vec C)$ and $\beta=\lambda(\vec A\cdot \vec C) \implies (\vec A\times \vec B)\times \vec C=\lambda((\vec A\cdot \vec C)\vec B-(\vec B\cdot \vec C)\vec A)$. Here, $\lambda$ is independent of the magnitudes of the vectors: if the magnitude of any vector is multiplied by a scalar, that scalar appears explicitly on both sides of the equation. Thus, putting the unit vectors $\vec i,\vec j,\vec i$ into the equation gives $\vec j=\lambda\vec j\implies \lambda=1$, and hence $(\vec A\times \vec B)\times \vec C=(\vec A\cdot \vec C)\vec B-(\vec B\cdot \vec C)\vec A$.
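As a sanity check of the identity on random vectors (a NumPy sketch I'm adding, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    A, B, C = rng.standard_normal((3, 3))  # three random 3-vectors
    lhs = np.cross(np.cross(A, B), C)
    rhs = np.dot(A, C) * B - np.dot(B, C) * A
    assert np.allclose(lhs, rhs)
print("(A x B) x C = (A.C)B - (B.C)A holds on all random samples")
```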
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Idea behind factoring an operator? Suppose we have an operator $\partial_t^2-\partial_x^2$. What does it mean to factorise this operator and write it as $(\partial_t-\partial_x)(\partial_t+\partial_x)$? When does it actually make sense, and why?
| In the abstract sense, the decomposition $x^2-y^2=(x+y)(x-y)$ is true in any ring where $x$ and $y$ commute (in fact, if and only if they commute).
For sufficiently nice (smooth) functions, differentiation is commutative, that is, the result depends on the degrees of differentiation and not the order in which we apply them, so the differential operators on a set of smooth ($C^\infty$) functions (or abstract power series or other such objects) form a commutative ring under composition, so the operation makes perfect sense in that case in a quite general way.
However, we only need $\partial_x\partial_t=\partial_t\partial_x$, and for that we only need $C^2$. Of course, differential operators on $C^2$ do not form a ring (since higher order derivations may not make sense), but the substitution still is correct for the same reasons. You can look at the differentiations as polynomials of degree 2 with variables $\partial_\textrm{something}$.
For some less smooth functions it might not make sense.
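To see the commutativity concretely, here is a small illustrative sketch (the helper names are mine, not from the answer) that stores polynomials in $t$ and $x$ as coefficient dictionaries and checks that the factored operator agrees with $\partial_t^2-\partial_x^2$:

```python
# Polynomials in t, x stored as {(i, j): coeff}, meaning coeff * t^i * x^j.
def d_t(p):
    return {(i - 1, j): c * i for (i, j), c in p.items() if i > 0}

def d_x(p):
    return {(i, j - 1): c * j for (i, j), c in p.items() if j > 0}

def add(p, q, sign=1):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, 0) + sign * c
    return {k: c for k, c in r.items() if c != 0}

def wave_op(p):                      # (d_t^2 - d_x^2) p
    return add(d_t(d_t(p)), d_x(d_x(p)), sign=-1)

def factored_op(p):                  # (d_t - d_x)(d_t + d_x) p
    q = add(d_t(p), d_x(p))          # inner factor applied first
    return add(d_t(q), d_x(q), sign=-1)

# Sample smooth (polynomial) function: u = t^3 x + 2 t x^2 + 5 x^3
u = {(3, 1): 1, (1, 2): 2, (0, 3): 5}
print(wave_op(u) == factored_op(u))  # True: the two operators agree
```

On polynomials mixed partials always commute, so the factorization holds exactly, mirroring the $C^2$ case discussed above.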
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
proving :$\frac{ab}{a^2+3b^2}+\frac{cb}{b^2+3c^2}+\frac{ac}{c^2+3a^2}\le\frac{3}{4}$. Let $a,b,c>0$ how to prove that :
$$\frac{ab}{a^2+3b^2}+\frac{cb}{b^2+3c^2}+\frac{ac}{c^2+3a^2}\le\frac{3}{4}$$
I find that
$$\ \frac{ab}{a^{2}+3b^{2}}=\frac{1}{\frac{a^{2}+3b^{2}}{ab}}=\frac{1}{\frac{a}{b}+\frac{3b}{a}} $$
By AM-GM
$$\ \frac{ab}{a^{2}+3b^{2}} \leq \frac{1}{2 \sqrt{3}}=\frac{\sqrt{3}}{6} $$
$$\ \sum_{cyc} \frac{ab}{a^{2}+3b^{2}} \leq \frac{\sqrt{3}}{2} $$
But this obviously is not working.
| I have a Cauchy-Schwarz proof of it, hope you enjoy. :D
First, multiplying each side by $2$, your inequality can be rewritten as
$$ \sum_{cyc}{\frac{2ab}{a^2+3b^2}}\leq \frac{3}{2}$$
Or
$$ \sum_{cyc}{\frac{(a-b)^2+2b^2}{a^2+3b^2}}\geq \frac{3}{2}$$
Now, using the Cauchy-Schwarz inequality, we have
$$ \sum_{cyc}{\frac{(a-b)^2+2b^2}{a^2+3b^2}}\geq \frac{\left(\sum_{cyc}{\sqrt{(a-b)^2+2b^2}}\right)^2}{4(a^2+b^2+c^2)}$$
Therefore, it suffices to prove
$$\left(\sum_{cyc}{\sqrt{(a-b)^2+2b^2}}\right)^2\geq 6(a^2+b^2+c^2) $$
which, after expanding, is equivalent to
$$ \sum_{cyc}{\sqrt{[(a-b)^2+2b^2][(b-c)^2+2c^2]}}\geq a^2+b^2+c^2+ab+bc+ca $$
Now, using Cauchy-Schwarz again, we notice that
$$ \sqrt{[(a-b)^2+2b^2][(b-c)^2+2c^2]}\geq (b-a)(b-c)+2bc=b^2+ac+bc-ab$$
Summing these up, the result follows.
Hence we are done!
Equality occurs when $a=b=c$
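As a numerical sanity check of the inequality itself (not part of the proof), random sampling never exceeds $3/4$, and $a=b=c$ attains it exactly:

```python
import random

def cyc_sum(a, b, c):
    # The cyclic sum ab/(a^2+3b^2) + bc/(b^2+3c^2) + ca/(c^2+3a^2)
    return a*b/(a*a + 3*b*b) + b*c/(b*b + 3*c*c) + c*a/(c*c + 3*a*a)

print(cyc_sum(1.0, 1.0, 1.0))  # 0.75 exactly: the equality case a = b = c

random.seed(0)
worst = max(cyc_sum(random.uniform(0.01, 10), random.uniform(0.01, 10),
                    random.uniform(0.01, 10)) for _ in range(10000))
print(worst <= 0.75)  # True
```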
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 1
} |
Quick probability question If there is an $80\%$ chance of rain in the next hour, what is the percentage chance of rain in the next half hour?
Thanks.
| You could assume that the occurrence of rain is a point process with constant rate $\lambda$. Then the process is Poisson, the waiting time $T$ until the event "rain" is exponentially distributed, and
$P[T\le t] = 1-e^{-\lambda t}$. Assuming $\lambda$ is the rate per hour, $P(\text{rain in 1 hour})=1-e^{-\lambda}$
and $P(\text{rain in 1/2 hour})=1-e^{-\lambda/2}$. Other models for the point process would give different answers.
In general the longer the time interval greater the chance of rain.
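Making the model concrete: if $P(\text{rain in the next hour})=0.8$, the exponential model gives about $0.55$ for the next half hour, not the $0.4$ one might naively guess. A small illustration:

```python
import math

p_hour = 0.8
lam = -math.log(1 - p_hour)          # rate per hour: solve 1 - exp(-lam) = 0.8
p_half = 1 - math.exp(-lam / 2)      # P(rain in the next half hour)
print(round(p_half, 4))              # 0.5528, i.e. 1 - sqrt(0.2)
```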
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Splitting field and subextension Definition: Let $K/F$ be a field extension and let $p(x)\in F[x]$,
we say that $K$ is splitting field of $p$ over $F$ if $p$ splits
in $K$ and $K$ is generated by $p$'s roots; i.e. if $a_{0},...,a_{n}\in K$
are the roots of $p$ then $K=F(a_{0},...a_{n})$.
What I am trying to understand is this: in my lecture notes it is
written that if $K/E$,$E/F$ are field extensions then $K$ is splitting
field of $p$ over $F$ iff $K$ is splitting field of $p$ over $E$.
If I assume $K$ is splitting field of $p$ over $F$ then
$$\begin{align*}K=F(a_{0,}...,a_{n})\subset E(a_{0,}...,a_{n})\subset K &\implies F(a_{0,}...,a_{n})=E(a_{0,}...,a_{n})\\
&\implies K=E(a_{0,}...,a_{n}).
\end{align*}$$
Can someone please help with the other direction ? help is appreciated!
| This is false. Let $F=\mathbb{Q}$, let $E=\mathbb{Q}(\sqrt{2})$, let $K=\mathbb{Q}(\sqrt{2},\sqrt{3})$, and let $p=x^2-3\in F[x]$. Then the splitting field for $p$ over $E$ is $K$, but the splitting field for $p$ over $F$ is $\mathbb{Q}(\sqrt{3})\subsetneq K$.
Let's say that all fields under discussion live in an algebraically closed field $L$. Letting $M$ be the unique splitting field for $p$ over $F$ inside $L$, then the splitting field for $p$ over $E$ inside $L$ is equal to $M$ if and only if $ME=M$, which is the case if and only if $E\subseteq M$.
In other words, you'll get the same splitting field for $p$ over $F$ and over $E$ if and only if $E$ were already isomorphic to a subfield of the splitting field for $p$ over $F$. When $E$ does not have that property, there is "extra stuff" in $E$ (for example, $\sqrt{2}\in E$ in the example) that will need to also be contained in the splitting field for $p$ over $E$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/167980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Big List of Fun Math Books To be on this list the book must satisfy the following conditions:
*
*It doesn't require an enormous amount of background material to understand.
*It must be a fun book, either in recreational math (or something close to) or in philosophy of math.
Here are my two contributions to the list:
*
*What is Mathematics? Courant and Robbins.
*Proofs that Really Count. Benjamin and Quinn.
| Both of the following books are really interesting and very accessible.
Stewart, Ian - Math Hysteria
This book covers the math behind many famous games and puzzles and requires very little math to be able to grasp. Very light hearted and fun.
Bellos, Alex - Alex's Adventures in Numberland
This book is about maths in society and everyday life, from monkeys doing arithmetic to odds in Las Vegas.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58",
"answer_count": 27,
"answer_id": 16
} |
Ratio of sides in a triangle vs ratio of angles? Given a triangle with the ratio of sides being $X: Y : Z$, is it true that the ratio of angles is also $X: Y: Z$? Could I see a proof of this? Thanks
| No, it is not true. Consider a 45-45-90 right triangle:
(image from Wikipedia)
The sides are in the ratio $1:1:\sqrt{2}$, while the angles are in the ratio $1:1:2$.
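The counterexample can be checked with the law of cosines; the `angles` helper below is my own illustration, not part of the answer:

```python
import math

def angles(a, b, c):
    """Angles (in degrees) opposite sides a, b, c, via the law of cosines."""
    A = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    B = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    C = math.degrees(math.acos((a*a + b*b - c*c) / (2*a*b)))
    return A, B, C

A, B, C = angles(1.0, 1.0, math.sqrt(2.0))
print(A, B, C)  # 45, 45, 90 (up to rounding): ratio 1 : 1 : 2, not 1 : 1 : sqrt(2)
```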
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
In a field $F=\{0,1,x\}$, $x + x = 1$ and $x\cdot x = 1$ Looking for some pointers on how to approach this problem:
Let $F$ be a field consisting of exactly three elements $0$, $1$, $x$. Prove that
$x + x = 1$ and that $x \cdot x = 1$.
| Write down the addition and multiplication tables. Much of them are known immediately from the properties of 0 and 1, and there's only one way to fill in the rest so that addition and multiplication by nonzero are both invertible.
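Following that suggestion, one can realize the field concretely as the integers mod $3$ with $x=2$ (any field with three elements is isomorphic to this one) and read off both identities — an illustrative sketch, not the proof the exercise asks for:

```python
# Realize the three-element field as integers mod 3, with x = 2.
F = [0, 1, 2]
x = 2
add = {(a, b): (a + b) % 3 for a in F for b in F}
mul = {(a, b): (a * b) % 3 for a in F for b in F}

print(add[x, x] == 1)   # True: x + x = 1
print(mul[x, x] == 1)   # True: x * x = 1

# Sanity check: every nonzero element is invertible, as required in a field.
print(all(any(mul[a, b] == 1 for b in F) for a in F if a != 0))  # True
```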
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Given that $x=\dfrac 1y$, show that $∫\frac {dx}{x \sqrt{(x^2-1)}} = -∫\frac {dy}{\sqrt{1-y^2}}$ Given that $x=\dfrac 1y$, show that $\displaystyle \int \frac 1{x\sqrt{x^2-1}}\,dx = -\int \frac 1{\sqrt{1-y^2}}\,dy$
Have no idea how to prove it.
here is a link to wolframalpha showing how to integrate the left side.
| Substitute $x=1/y$ and $dx/dy=-1/y^2$ to get $\int y^2/(\sqrt{1-y^2}) (-1/y^2)dy$
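Spelled out (assuming $x>1$, i.e. $0<y<1$, so that all square roots are of positive quantities):

```latex
x=\frac1y,\quad \frac{dx}{dy}=-\frac1{y^2},\quad
x\sqrt{x^2-1}=\frac1y\cdot\frac{\sqrt{1-y^2}}{y}=\frac{\sqrt{1-y^2}}{y^2},
\qquad\text{so}\qquad
\int \frac{dx}{x\sqrt{x^2-1}}
=\int \frac{y^2}{\sqrt{1-y^2}}\Bigl(-\frac{1}{y^2}\Bigr)dy
=-\int \frac{dy}{\sqrt{1-y^2}}.
```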
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Finding the critical points of $\sin(x)/x$ and $\cosh(x^2)$ Could someone help me solve this:
What are all critical points of $f(x)=\sin(x)/x$ and $f(x)=\cosh(x^2)$?
Mathematica solutions are also accepted.
| Taken from here:
$x=c$ is a critical point of the function $f(x)$ if $f(c)$ exists and if one of the following are true:
*
*$f'(c) = 0$
*$f'(c)$ does not exist
The general strategy for finding critical points is to compute the first derivative of $f(x)$ with respect to $x$ and set that equal to zero.
$$f(x) = \frac{\sin x}{x}$$
Using the quotient rule, we have:
$$f'(x) = \frac{x\cdot \cos x - \sin x \cdot 1}{x^2}$$
$$f'(x) = \frac{x \cos x}{x^2} - \frac{\sin x}{x^2}$$
Dividing through by $x$ for the left terms, we now have:
$$f'(x) = \frac{\cos x}{x} - \frac{\sin x}{x^2}$$
Now set that equal to zero and solve for your critical points. Do the same for $f(x) = \cosh(x^2)$. Don't forget the chain rule!
For $f(x) = \cosh (x^2)$, recall that $\frac{d}{dx} \cosh (x) = \sinh (x)$. So,
$$f'(x) = \sinh(x^2) \cdot \frac{d}{dx} (x^2)$$
$$f'(x) = 2x \sinh(x^2)$$
$$0 = 2x \sinh(x^2) $$
$x = 0$ is your only critical point along the reals.
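For $\sin x/x$, setting the derivative to zero gives the transcendental equation $x\cos x-\sin x=0$, i.e. $\tan x = x$, which has no closed form and must be solved numerically. A hedged sketch using plain bisection (helper names are mine):

```python
import math

def g(x):
    # Numerator of d/dx [sin(x)/x]; critical points solve g(x) = 0, i.e. tan x = x.
    return x * math.cos(x) - math.sin(x)

def bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# First two positive critical points of sin(x)/x, one per period of tan.
roots = [bisect(g, math.pi, 1.5 * math.pi), bisect(g, 2 * math.pi, 2.5 * math.pi)]
print([round(r, 4) for r in roots])  # [4.4934, 7.7253]
```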
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Finding a function that fits the "lowest points" of another one I came up with this problem, which I cannot solve myself.
Consider the function:
$\displaystyle f(x) = x^{\ln(|\pi \cos x ^ 2| + |\pi \tan x ^ 2|)}$, which has singularities at $\sqrt{\pi}\sqrt{n + \dfrac{1}{2}}$, with $n \in \mathbb{Z}$.
Looking at its graph:
we can see it is globally increasing:
I was wondering if there exists a function $g(x)$, such that $f(x) - g(x) \ge 0, \forall x \in \mathbb{R^{+}}$ and that best fits the "lowest points" of $f(x)$.
Sorry for the inaccurate terminology but I really don't know how to express this concept mathematically. Here is, for example, $g(x) = x ^ {1.14}$ (in red):
Actually $g(x)$ is not correct because for small values of $x$ it is greater than $f(x)$.
Is it possible to find such a $g(x)$, given that the nearer $g(x)$ is to $f(x)$'s "lowest points", the better it is? Again, sorry for my terminology, I hope you could point me in the right direction.
Thanks,
| As $a^{\ln b}=\exp(\ln a\cdot\ln b)=b^{\ln a}$ the function $f$ can be written in the following way:
$$f(x)=\bigl(\pi|\cos(x^2)|+\pi|\tan(x^2)|\bigr)^{\ln x}\ .$$
Now the auxiliary function
$$\phi:\quad{\mathbb R}\to[0,\infty],\qquad t\mapsto \pi(|\cos(t)|+|\tan(t)|)$$
is periodic with period $\pi$ and assumes its minimum $\pi$ at the points $t_n=n\pi$. The function $$\psi(x):=\phi(x^2)=\pi|\cos(x^2)|+\pi|\tan(x^2)|$$ assumes the same values as $\phi$; in particular it is $\geq\pi$ for all $x\geq0$ and $=\pi$ at the points $x_n:=\sqrt{n\pi}$ $\ (n\geq0)$.
$$f(x)=\bigl(\psi(x)\bigr)^{\ln x}\geq \pi^{\ln x}=x^{\ln\pi}\qquad(x\geq1)$$
and $=x^{\ln\pi}$ at the $x_n>1$. For $0<x<1$ the inequality is the other way around because $y\mapsto q^y$ is decreasing when $0<q<1$.
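The derived bound $f(x)\geq x^{\ln\pi}$ for $x\geq1$, with equality at the points $x_n=\sqrt{n\pi}$, can be spot-checked numerically; the grid below is chosen to avoid the singularities of $\tan(x^2)$:

```python
import math

def f(x):
    psi = math.pi * (abs(math.cos(x * x)) + abs(math.tan(x * x)))
    return psi ** math.log(x)

def g(x):
    # The proposed lower envelope x^(ln pi)
    return x ** math.log(math.pi)

xs = [1 + 0.1 * k for k in range(1, 31)]            # 1.1, 1.2, ..., 4.0
print(all(f(x) >= g(x) * (1 - 1e-12) for x in xs))  # True

x2 = math.sqrt(2 * math.pi)       # a touching point x_n = sqrt(n*pi)
print(abs(f(x2) - g(x2)) < 1e-9)  # True: equality at the "lowest points"
```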
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
A limit question related to the nth derivative of a function This evening I thought of the following question that isn't related to homework, but it's a question that seems very challenging to me, and I take some interest in it.
Let's consider the following function:
$$ f(x)= \left(\frac{\sin x}{x}\right)^\frac{x}{\sin x}$$
I wonder what is the first derivative (1st, 2nd, 3rd ...) such that $\lim\limits_{x\to0} f^{(n)}(x)$ is different from $0$ or $+\infty$, $-\infty$, where $f^{(n)}(x)$ is the nth derivative of $f(x)$ (if such a case is possible).
I tried to use W|A, but it simply fails to work out such limits. Maybe I need the W|A Pro version.
| The Taylor expansion is
$$f(x) = 1 - \frac{x^2}{6} + O(x^4),$$
so
\begin{eqnarray*}
f(0) &=& 1 \\
f'(0) &=& 0 \\
f''(0) &=& -\frac{1}{3}.
\end{eqnarray*}
$\def\e{\epsilon}$
Addendum:
We use big O notation.
Let
$$\e = \frac{x}{\sin x} - 1 = \frac{x^2}{6} + O(x^4).$$
Then
\begin{eqnarray*}
\frac{1}{f(x)} &=& (1+\e)^{1+\e} \\
&=& (1+\e)(1+\e)^\e \\
&=& (1+\e)(1+O(\e\log(1+\e))) \\
&=& (1+\e)(1+O(\e^2)) \\
&=& 1+\e + O(\e^2),
\end{eqnarray*}
so $f(x) = 1-\e + O(\e^2) = 1-\frac{x^2}{6} + O(x^4)$.
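One can corroborate the expansion numerically: $(f(x)-1)/x^2$ should approach the quadratic Taylor coefficient $-\tfrac16$ as $x\to0$. An illustrative check:

```python
import math

def f(x):
    r = math.sin(x) / x
    return r ** (x / math.sin(x))

# (f(x) - 1) / x^2 should approach -1/6 as x -> 0.
for x in (0.01, 0.001):
    print(round((f(x) - 1) / x**2, 4))  # -0.1667 in both cases
```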
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Show inequality generalization $\sum (x_i-1)(x_i-3)/(x_i^2+3)\ge 0$ Let $f(x)=\dfrac{(x-1)(x-3)}{x^2+3}$.
It seems to be that:
If $x_1,x_2,\ldots,x_n$ are positive real numbers with $\prod_{i=1}^n x_i=1$ then $\sum_{i=1}^n f(x_i)\ge 0$.
For $n>2$ a simple algebraic approach gets messy. This would lead to a generalization of this inequality, but even the calculus solution offered there for $n=3$ went into cases.
I thought about Jensen's inequality, but $f$ is not convex on $x>0$.
Can someone prove or disprove the above claim?
| Unfortunately, this is not true.
Simple counterexample:
My original counterexample had some ugly numbers in it, but fortunately, there is a counterexample with nicer numbers. However, the explanation below might still prove informative.
Note that for $x>0$,
$$
f(x)=\frac{(x-1)(x-3)}{x^2+3}=1-\frac{4x}{x^2+3}\lt1\tag{1}
$$
Next, we compute
$$
f(2)=-\frac17\tag{2}
$$
Let $x_0=\frac{1}{256}$ and $x_k=2$ for $1\le k\le8$.
The product of the $x_k$ is $\frac{1}{256}\cdot2^8=1$, yet by $(1)$ and $(2)$, the sum of the $f(x_k)$ is less than $1-\frac87\lt0$.
Original counterexample:
Let $x_0=e^{-3.85}$ and $x_k=e^{.55}$ for $1\le k\le 7$.
We get $f(x_0)=0.971631300121646$ and $f(x_k)=-0.154700260422285$ for $1\le k\le 7$.
Then,
$$
\prod_{k=0}^7x_k=1
$$
yet
$$
\sum_{k=0}^7f(x_k)=-0.111270522834348
$$
Explanation:
Let me explain how I came up with this example.
$\prod\limits_{k=0}^nx_k=1$ is equivalent to $\sum\limits_{k=0}^n\log(x_k)=0$. Therefore I considered $u_k=\log(x_k)$. Now we want
$$
\sum_{k=0}^nu_k=0
$$
to mean that
$$
\sum_{k=0}^n\frac{(e^{u_k}-1)(e^{u_k}-3)}{e^{2u_k}+3}\ge0
$$
I first looked at the graph of $\large\frac{(e^{u}-1)(e^{u}-3)}{e^{2u}+3}$. If the graph were convex, the result would be true.
$\hspace{2cm}$
Unfortunately, the graph was not convex, but I did note that $f(u)$ dipped below $0$ with a minimum of less than $-\frac17$ near $u=.55$, and that it was less than $1$ everywhere. Thus, if I took $u=.55$ for $7$ points and $u=-3.85$ for the other, the sum of the $u_k$ would be $0$, yet the sum of the $f(e^{u_k})$ would be less than $0$.
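Both counterexamples are easy to verify by machine; here is a check of the one with the nicer numbers:

```python
def f(x):
    return (x - 1) * (x - 3) / (x * x + 3)

xs = [1 / 256] + [2] * 8          # product is (1/256) * 2^8 = 1
prod = 1.0
for x in xs:
    prod *= x

total = sum(f(x) for x in xs)
print(prod)         # 1.0
print(f(2))         # -0.142857... = -1/7
print(total < 0)    # True: the claimed inequality fails
```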
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A problem dealing with even perfect numbers. Question: Show that all even perfect numbers end in 6 or 8.
This is what I have. All even perfect numbers are of the form $n=2^{p-1}(2^p -1)$ where $p$ is prime and so is $(2^p -1)$.
What I did was set $2^{p-1}(2^p -1)\equiv x\pmod {10}$ and proceeded to show that $x=6$ or $8$ were the only solutions.
Now, $2^{p-1}(2^p -1)\equiv x\pmod {10} \implies 2^{p-2}(2^p -1)\equiv \frac{x}{2}\pmod {5}$, furthermore there are only two solutions such that $0 \le \frac{x}{2} < 5$. So I plugged in the first two primes and only primes that satisfy. That is if $p=2$ then $\frac{x}{2}=3$ when $p=3$ then $\frac{x}{2}=4$. These yield $x=6$ and $x=8$ respectively. Furthermore All solutions are $x=6+10r$ or $x=8+10s$.
I would appreciate any comments and or alternate approaches to arrive at a good proof.
| $p$ is prime; if $p=2$ we get $6$, which ends in $6$, so assume $p$ is odd, i.e. 1 or 3 $\bmod 4$.
So, the ending digit of $2^p$ is (respectively) 2 or 8
(The ending digits of powers of 2 are $2,4,8,6,2,4,8,6,2,4,8,6...$
So, the ending digit of $2^{p-1}$ is (respectively) 6 or 4; and
the ending digit of $2^p-1$ is (respectively) 1 or 7.
Hence the ending digit of $2^{p-1}(2^p-1)$ is (respectively) $6\times1$ or $4\times7$ modulo 10, i.e., $6$ or $8$.
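The conclusion can be cross-checked by generating the even perfect numbers $2^{p-1}(2^p-1)$ from the first few Mersenne primes (simple trial division suffices at this size):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Even perfect numbers from Mersenne primes 2^p - 1 with p < 20.
perfect = [2**(p - 1) * (2**p - 1)
           for p in range(2, 20)
           if is_prime(p) and is_prime(2**p - 1)]
print(perfect)  # [6, 28, 496, 8128, 33550336, 8589869056, 137438691328]
print(all(n % 10 in (6, 8) for n in perfect))   # True
```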
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Orthogonal projection to closed, convex subset in a Hilbert space I don't understand one step in the proof of the following lemma (Projektionssatz):
Let $X$ a Hilbert space with scalar product $(\cdot)_X$ and let $A\subset X$ be convex and closed. Then there is a unique map $P:X\rightarrow A$ that satisfies: $\|x-P(x)\| = \inf_{y\in A} \|x- y\|$. This is equivalent to the following statement:
(1) For all $a\in A$ and fixed $x\in X$, $\mbox{Re}\bigl( x-P(x), a-P(x) \bigr)_X \le 0$.
I don't understand the following step in the proof that (1) implies the properties of $P$:
Let $a\in A$. Then
$\|x-P(x)\|^2 + 2\mbox{Re}\bigl( x-P(x), P(x)-a \bigr)_X + \|P(x)-a\|^2 \ge \|x-P(x)\|^2$.
I don't understand the "$\ge$". How do we get rid of the term $\|P(x)-a\|$ on the left hand side?
Thank you very much!
| It is non-negative! Hence, just drop it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Weak*-convergence of regular measures Let $K$ be a compact Hausdorff space. Denote by $ca_r(K)$ the set of all countably additive, signed Borel measures which are regular and of bounded variation. Let $(\mu_n)_{n\in\mathbb{N}}\subset ca_r(K)$ be a bounded sequence satisfying $\mu_n\geq 0$ for all $n\in\mathbb{N}$. Can we conclude that $(\mu_n)$ (or a subsequence) converges in the weak*-topology to some $\mu\in ca_r(K)$ with $\mu\geq 0$?
| We cannot.
Let $K = \beta \mathbb{N}$ be the Stone-Cech compactification of $\mathbb{N}$, and let $\mu_n$ be a point mass at $n \in \mathbb{N} \subset K$. Suppose to the contrary $(\mu_n)$ has a weak-* convergent subsequence $\mu_{n_k}$. Define $f : \mathbb{N} \to \mathbb{R}$ by $f(n_k) = (-1)^k$, $f(n) = 0$ otherwise. Then $f$ has a continuous extension $\tilde{f} : K \to \mathbb{R}$. By weak-* convergence, the sequence $\left(\int \tilde{f} d\mu_{n_k}\right)$ should converge. But in fact $\int \tilde{f} d\mu_{n_k} = \tilde{f}(n_k) = (-1)^k$ which does not converge.
If $C(K)$ is separable, then the weak-* topology on the closed unit ball $B$ of $C(K)^* = ca_r(K)$ is metrizable. In particular it is sequentially compact and so in that case every bounded sequence of regular measures has a weak-* convergent subsequence. As Andy Teich points out, it is sufficient for $K$ to be a compact metrizable space. Also, since there is a natural embedding of $K$ into $B$, if $B$ is metrizable then so is $K$.
One might ask whether it is possible for $B$ to be sequentially compact without being metrizable. I don't know the answer but I suspect it is not possible, i.e. that metrizability of $B$ (and hence $K$) is necessary for sequential compactness.
We do know (by Alaoglu's theorem) that closed balls in $C(K)^*$ are weak-* compact, so what we can conclude in general is that $\{\mu_n\}$ has at least one weak-* limit point. However, as the above example shows, this limit point need not be a subsequential limit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How to calculate $\int_{-a}^{a} \sqrt{a^2-x^2}\ln(\sqrt{a^2-x^2})\mathrm{dx}$ Well, this is a homework problem.
I need to calculate the differential entropy of random variable
$X\sim f(x)=\sqrt{a^2-x^2},\quad -a<x<a$ and $0$ otherwise. Just how to calculate
$$
\int_{-a}^a \sqrt{a^2-x^2}\ln(\sqrt{a^2-x^2})\,\mathrm{d}x
$$
I can get the result with Mathematica, but failed to calculate it by hand. Please give me some ideas.
| [Some ideas]
You can rewrite it as follows:
$$\int_{-a}^a \sqrt{a^2-x^2} f(x) dx$$
where $f(x)$ is the logarithm.
Note that the integral, sans $f(x)$, is simply a semicircle of radius $a$. In other words, we can write,
$$\int_{-a}^a \int_0^{\sqrt{a^2-x^2}} f(x) dy dx=\int_{-a}^a \int_0^{\sqrt{a^2-x^2}} \ln{\sqrt{a^2-x^2}} dy dx$$
Edit: Found a mistake. Thinking it through. :-)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
How to approach integrals as a function? I'm trying to solve the following question involving integrals, and can't quite get what am I supposed to do:
$$f(x) = \int_{2x}^{x^2}\root 3\of{\cos z}~dz$$
$$f'(x) =\ ?$$
How should I approach such integral functions? Am I just over-complicating a simple thing?
| Using the Leibniz rule for differentiation of integrals,
which states that if
\begin{align}
f(x) = \int_{a(x)}^{b(x)} g(y) \ dy,
\end{align}
then
\begin{align}
f^{\prime}(x) = g(b(x)) b^{\prime}(x) - g(a(x)) a^{\prime}(x).
\end{align}
Thus, for your problem $a^{\prime}(x) = 2$ and $b^{\prime}(x) = 2x$ and, therefore,
\begin{align}
f^{\prime}(x) = \sqrt[3]{\cos (x^2)} (2 x) - \sqrt[3]{\cos (2x)} (2).
\end{align}
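The formula can be sanity-checked by comparing it against a central difference of $f$ itself, with $f$ computed by quadrature. A sketch (the real cube root is extended to negative arguments as an odd function, matching $\sqrt[3]{\cos z}$ when $\cos z<0$; the helper names are mine):

```python
import math

def cbrt(c):
    # Real cube root, odd in c (Python's ** would return a complex value for c < 0).
    return math.copysign(abs(c) ** (1 / 3), c)

def f(x, n=20000):
    # f(x) = integral from 2x to x^2 of cbrt(cos z) dz, via the midpoint rule
    a, b = 2 * x, x * x
    h = (b - a) / n
    return h * sum(cbrt(math.cos(a + (k + 0.5) * h)) for k in range(n))

def fprime(x):
    # The Leibniz-rule formula derived above
    return cbrt(math.cos(x * x)) * 2 * x - cbrt(math.cos(2 * x)) * 2

x, h = 1.7, 1e-5
numeric = (f(x + h) - f(x - h)) / (2 * h)
print(abs(numeric - fprime(x)) < 1e-5)   # True
```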
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
How many elements in a ring can be invertible?
If $R$ is a finite ring (with identity) but not a field, let $U(R)$ be its group of units. Is $\frac{|U(R)|}{|R|}$ bounded away from $1$ over all such rings?
It's been a while since I cracked an algebra book (well, other than trying to solve this recently), so if someone can answer this, I'd prefer not to stray too far from first principles within reason.
| $\mathbb{F}_p \times\mathbb{F}_q$ has $(p-1)(q-1)$ invertible elements, so no.
Since $\mathbb{F}_2^n$ has $1$ invertible element, the proportion is also not bounded away from $0$.
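To make the first example concrete: $\mathbb{Z}/pq\mathbb{Z}\cong\mathbb{F}_p\times\mathbb{F}_q$ for distinct primes $p,q$ (Chinese remainder theorem), so counting units of $\mathbb{Z}/n\mathbb{Z}$ directly exhibits ratios approaching $1$. An illustrative computation:

```python
from math import gcd

def unit_ratio(n):
    """|U(Z/nZ)| / n, counting the units of the ring Z/nZ directly."""
    return sum(1 for a in range(n) if gcd(a, n) == 1) / n

# Z/(pq) is isomorphic to F_p x F_q, with (p-1)(q-1) units.
print(round(unit_ratio(3 * 5), 3))      # 0.533 = (2*4)/15
print(round(unit_ratio(101 * 103), 3))  # 0.98 -- approaches 1 for large p, q
```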
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
Order of a product of subgroups. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$. Let $H$, $K$ be subgroups of $G$. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$.
I need this theorem to prove something.
| We know that
$$HK=\bigcup_{h\in H} hK$$
and each $hK$ has the same cardinality $|hK|=|K|$. (See ProofWiki.)
We also know that for any $h,h'\in G$ either $hK\cap h'K=\emptyset$ or $hK=h'K$.
So the only problem is to find out how many of the cosets $hK$, $h\in H$, are distinct.
Since
$$hK=h'K \Leftrightarrow h^{-1}h'\in K$$
(see ProofWiki)
we see that for $h,h'\in H$ the cosets $hK$ and $h'K$ coincide exactly when $k:=h^{-1}h'$ belongs to $K$.
Since $h,h'\in H$, such a $k$ also belongs to $H$, i.e. $k\in H\cap K$.
Since the number of elements that represent the same coset is $|H\cap K|$, we have $|H|/|H\cap K|$ distinct cosets and $\frac{|H||K|}{|H\cap K|}$ elements in the union.
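The counting can be illustrated in a small concrete group; the snippet below (my own illustration) builds $S_3$ as permutation tuples and checks $|HK|=|H||K|/|H\cap K|$ for two subgroups (note that $HK$ here need not itself be a subgroup):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))
e = (0, 1, 2)
H = {e, (1, 0, 2)}                 # subgroup generated by the transposition (0 1)
K = {e, (2, 1, 0)}                 # subgroup generated by the transposition (0 2)

HK = {compose(h, k) for h in H for k in K}
print(len(HK), len(H) * len(K) // len(H & K))   # 4 4
```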
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/168942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48",
"answer_count": 6,
"answer_id": 4
} |
Uniqueness of morphism in definition of category theory product (etc) I'm trying to understand the categorical definition of a product, which describes them in terms of existence of a unique morphism that makes such-and-such a diagram commute. I don't really feel I've totally understood the motivation for this definition: in particular, why must that morphism be unique? What's the consequence of omitting the requirement for uniqueness in, say, Set?
| This is a question which you will be able to answer yourself after some experience ... anyway:
The cartesian product $X \times Y$ of two sets $X,Y$ has the property: Every element of $X \times Y$ has a representation $(x,y)$ with unique elements $x \in X$ and $y \in Y$. This is the important and characteristic property of ordered pairs. In other words, if $*$ denotes the one-point set: For every two morphisms $x : * \to X$ and $y : * \to Y$ there is a unique morphism $(x,y) : * \to X \times Y$ such that $p_X \circ (x,y) = x$ and $p_Y \circ (x,y) = y$. But since we can do everything pointwise, the same holds for arbitrary sets instead of $*$: For every two morphisms $x : T \to X$ and $y : T \to Y$ (which you may think of families of elements in $X$ resp. $Y$, also called $T$-valued points in the setting of functorial algebraic geometry), there is a unique morphism $(x,y) : T \to X \times Y$ such that $p_X \circ (x,y) = x$ and $p_Y \circ (x,y) = y$. Once you have understood this in detail, this motivates the general definition of a product diagram in a category. After all, these appear everywhere in mathematics (products of groups, vector spaces, $\sigma$-algebras, topological spaces, etc.). Of course, the uniqueness statement is essential. Otherwise, the product won't be unique and in the case of sets you will get many more objects instead of the usual cartesian product. In general, an object $T$ satisfies the universal property of $X \times Y$ without the uniqueness requirement iff there is a retraction $T \to X \times Y$. So for example in the category of sets every set larger than the cartesian product will qualify.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Can mathematical definitions of the form "P if Q" be interpreted as "P if and only if Q"?
Possible Duplicate:
Alternative ways to say “if and only if”?
So when I come across mathematical definitions like "A function is continuous if...."A space is compact if....","Two continuous functions are homotopic if.....", etc when is it okay to assume that the definition includes the converse as well?
| Absolutely. The definition will state that we say [something] is $P$ if $Q$. Thus, every time that $Q$ holds, $P$ also holds. The definition would be useless if the other direction didn't hold, though. We want our terms to be consistent, so it is tacitly assumed that we will also say $P$ only if $Q$. Many texts prefer to avoid leaving this as tacit, and simply state it as "if and only if" in their definitions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Differential Inequality Help I have the inequality
$f''(x)x + f'(x) \leq 0$
Also, $f''(x)<0$ and $f'(x)>0$ and $x \in R^+$. And I need to figure out when it is true. I know it is a fairly general question, but I couldn't find any information in several textbooks I have skimmed. Also, I am not sure if integrating would require a sign reversal or not, so I can't go ahead and try to manipulate it myself.
Any help or mention of a helpful source would be much appreciated.
edit: forgot to mention $f(x)\geq 0$ for every $x \in R^+$
| $$0\geq f''(x)x+f'(x)=(f'(x)x)'$$
that is why the function $$f'(x)x$$ is decreasing for positive reals. Its value at $0$ is $0$, so $$f'(x)x\leq 0,\quad x \in R^+,$$ which contradicts $f'(x)>0$ for $x>0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Convergence of $\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$ What theorem should I use to show that $$\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$$ is convergent no matter what value $x$ takes?
| Note that $(-1)^n = (-1)^{n-2}$. Hence, $$\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!} = (x-2) \sum_{n=0}^\infty\frac{(-4)^{n-2}}{(n-2)!} = (x-2) \left(\sum_{n=0}^\infty\frac{(-4)^{n}}{n!} \right)$$
where we have interpreted $\dfrac1{(-1)!} = 0 = \dfrac1{(-2)!}$.
This is a reasonable interpretation since $\dfrac1{\Gamma(0)} = 0 = \dfrac1{\Gamma(-1)}$.
Now recall that $$\sum_{n=0}^\infty\frac{y^{n}}{n!} = \exp(y).$$
Can you now conclude that the series converges no matter what value $x$ takes?
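As a numerical sanity check (under the interpretation $1/(-1)!=1/(-2)!=0$, i.e. starting the sum at $n=2$), the partial sums indeed approach $(x-2)e^{-4}$ for any $x$:

```python
import math

def partial(x, N):
    # Partial sum of the series; the n = 0, 1 terms vanish by convention.
    s = 0.0
    for n in range(2, N):
        s += (-1) ** n * 4 ** (n - 2) * (x - 2) / math.factorial(n - 2)
    return s

for x in (0.0, 10.0, -50.0):
    print(abs(partial(x, 60) - (x - 2) * math.exp(-4)) < 1e-9)  # True for every x
```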
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Combinations - at least and at most There are 20 balls - 6 red, 6 green, 8 purple
We draw five balls and at least one is red, then replace them. We then draw five balls and at most one is green. In how many ways can this be done if the balls are considered distinct?
My guess:
$${4+3-1 \choose 3-1} \cdot {? \choose ?}$$
I don't know how to do the second part...at most one is green?
Thanks for your help.
| Event A
Number of ways to choose 5 balls = $ _{}^{20}\textrm{C}_5 $
Number of ways to choose 5 balls with no red balls = $ _{}^{14}\textrm{C}_5 $
Hence the number of ways to choose 5 balls including at least one red ball =$ _{}^{20}\textrm{C}_5 - _{}^{14}\textrm{C}_5 $
Event B
Number of ways to choose 5 balls with no green balls = $ _{}^{14}\textrm{C}_5 $
Number of ways to choose 5 balls with exactly one green ball = $ _{}^{14}\textrm{C}_4 \times 6 $ (we multiply here by 6 because we have 6 choices for the green ball we choose.)
Since the above two choice processes are mutually exclusive,
the number of ways to choose 5 balls including at most one green ball = $ _{}^{14}\textrm{C}_5 + _{}^{14}\textrm{C}_4 \times 6 $
Events A and B are independent.
Therefore, the total number of ways of doing A and B = $ (_{}^{20}\textrm{C}_5 - _{}^{14}\textrm{C}_5) \times (_{}^{14}\textrm{C}_5 + 6_{}^{14}\textrm{C}_4 )$
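Plugging in the numbers (Python's `math.comb` gives the binomial coefficients):

```python
from math import comb

event_a = comb(20, 5) - comb(14, 5)       # at least one red
event_b = comb(14, 5) + 6 * comb(14, 4)   # at most one green
total = event_a * event_b
print(event_a, event_b, total)            # 13502 8008 108124016
```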
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Fermat's theorem on sums of two squares composite number Suppose that there are natural numbers $a$ and $b$, and let $c = a^2 + b^2$. Suppose that $c$ is even.
Will this $c$ only have one possible pair of $a$ and $b$?
edit: what happens if $c$ is an odd number?
| Not necessarily. For example, note that $50=1^2+7^2=5^2+5^2$, and $130=3^2+11^2=7^2+9^2$. For an even number with more than two representations, try $650$.
We can produce odd numbers with several representations as a sum of two squares by taking a product of several primes of the form $4k+1$. To get even numbers with multiple representations, take an odd number that has multiple representations, and multiply by a power of $2$.
To help you produce your own examples, the following identity, often called the Brahmagupta Identity, is quite useful:
$$(a^2+b^2)(x^2+y^2)=(ax\pm by)^2 +(ay\mp bx)^2.$$
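The representation counts quoted above are easy to confirm by brute force:

```python
def reps(n):
    """Unordered representations n = a^2 + b^2 with 0 <= a <= b."""
    out = []
    a = 0
    while 2 * a * a <= n:
        b2 = n - a * a
        b = int(b2 ** 0.5)
        if b * b == b2:
            out.append((a, b))
        a += 1
    return out

print(reps(50))   # [(1, 7), (5, 5)]
print(reps(130))  # [(3, 11), (7, 9)]
print(reps(650))  # [(5, 25), (11, 23), (17, 19)]
```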
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Please explain this notation equation I am confused by this equation as I rarely use math in my job but need this for a program that I am working on. What exactly does the full expression mean? Note that $m^*_{ij}$ refers to a matrix whose values have already been obtained.
Define the transition matrix $M = \{m_{ij}\}$ as follows: for $i\not=j$ set $m_{ij}$ to $m^*_{ij}/|U|$ and let $m_{ii} = 1-\sum_{j\not=i} m_{ij}$
| To obtain the transition matrix $M$ from the matrix $M^*=(m^*_{ij})$, the rule gives us two steps. First, for all off-diagonal terms $m^*_{ij}$ where $i\neq j$ we simply divide the existing entry by $\lvert U\rvert$ (in this case $\lvert U\rvert =24$), and we temporarily replace the diagonal entries $m^*_{ii}$ by $0.$ Second, to get the $i^{\rm th}$ diagonal entry $m_{ii}$ of $M$ we sum up all entries in the $i^{\rm th}$ row of this intermediate matrix and subtract the resulting sum from $1,$ giving $1-\sum_{j\neq i}m_{ij}.$
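In code, the rule is two passes over the matrix. The sketch below uses hypothetical $M^*$ values and $|U|=24$ as in the answer; provided the entries are nonnegative with off-diagonal row sums at most $|U|$, each row of $M$ sums to $1$, as a transition matrix should:

```python
def transition_matrix(m_star, u_size):
    n = len(m_star)
    # First pass: off-diagonal entries are m*_{ij} / |U|; diagonal left at 0.
    M = [[m_star[i][j] / u_size if i != j else 0.0 for j in range(n)]
         for i in range(n)]
    # Second pass: diagonal is 1 minus the off-diagonal row sum.
    for i in range(n):
        M[i][i] = 1 - sum(M[i][j] for j in range(n) if j != i)
    return M

# Toy example (hypothetical m* values, |U| = 24 as in the answer above):
m_star = [[0, 6, 6],
          [3, 0, 9],
          [12, 0, 0]]
M = transition_matrix(m_star, 24)
for row in M:
    print(row, sum(row))   # every row sums to 1.0
```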
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
When is $(6a + b)(a + 6b)$ a power of two?
Find all positive integers $a$ and $b$ for which the product $(6a + b)(a + 6b)$ is a power of $2$.
I havnt been able to get this one yet, found it online, not homework!
any help is appreciated thanks!
| Assume $(6a + b)(a + 6b) = 2^c$ for some integer $c \geq 0$. Every positive divisor of a power of $2$ is itself a power of $2$, so we may write
$$6a + b = 2^r, \qquad a + 6b = 2^s$$
for some integers $r, s \geq 0$ with $r + s = c$.
Adding the two equations gives
$$(6a + b) + (a + 6b) = 2^r + 2^s, \qquad \text{i.e.} \qquad 7(a + b) = 2^r + 2^s.$$
But a power of $2$ is congruent to $1$, $2$ or $4$ modulo $7$, so $2^r + 2^s$ is congruent modulo $7$ to one of $1+1$, $1+2$, $1+4$, $2+2$, $2+4$, $4+4$, i.e. to one of $2, 3, 5, 4, 6, 1$, and is never divisible by $7$. This contradicts $7(a+b) = 2^r + 2^s$.
A careful observation reveals that there are no positive integers $a$ and $b$ for which the product $(6a + b)(a + 6b)$ is a power of $2$.
(For the record, subtracting instead gives $5(a - b) = 2^r - 2^s$, and solving the pair of linear equations yields $a = (6 \cdot 2^r - 2^s)/35$ and $b = (6 \cdot 2^s - 2^r)/35$.)
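A brute-force sanity check (not a proof, and not part of the answer) over a small range of $a$ and $b$ finds no pair making the product a power of two:

```python
def is_power_of_two(n):
    # a positive integer is a power of two iff it has a single set bit
    return n > 0 and n & (n - 1) == 0

hits = [(a, b) for a in range(1, 501) for b in range(1, 501)
        if is_power_of_two((6 * a + b) * (a + 6 * b))]
print(hits)   # []
```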
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
limit question on Lebesgue functions Let $f\in L^1(\mathbb{R})$. Compute $\lim_{|h|\rightarrow\infty}\int_{-\infty}^\infty |f(x+h)+f(x)|dx$
If $f\in C_c(\mathbb{R})$ I got the limit to be $\int_{-\infty}^\infty |f(x)|dx$. I am not sure if this is right.
| *
*Let $f$ be a continuous function with compact support, say contained in $[-R,R]$. For $h\geq 2R$, the supports of $\tau_hf$ and $f$ are disjoint (they are respectively $[-R-h,R-h]$ and $[-R,R]$), hence
\begin{align*}
\int_{\Bbb R}|f(x+h)+f(x)|dx&=\int_{[-R,R]}|f(x+h)+f(x)|+\int_{[-R-h,R-h]}|f(x+h)+f(x)|\\
&=\int_{[-R,R]}|f(x)|+\int_{[-R-h,R-h]}|f(x+h)|\\
&=2\int_{\Bbb R}|f(x)|dx.
\end{align*}
*If $f\in L^1$, let $\{f_n\}$ be a sequence of continuous functions with compact support which converges to $f$ in $L^1$, for example with $\lVert f-f_n\rVert_{L^1}\leq n^{-1}$. Let $L(f,h):=\int_{\Bbb R}|f(x+h)+f(x)|dx$. We have
\begin{align}
\left|L(f,h)-L(f_n,h)\right|&\leq
\int_{\Bbb R}|f(x+h)-f_n(x+h)+f(x)-f_n(x)|dx\\
&\leq \int_{\Bbb R}(|f(x+h)-f_n(x+h)|+|f(x)-f_n(x)|)dx\\
&\leq 2n^{-1},
\end{align}
and we deduce that
$$|L(f,h)-2\lVert f\rVert_{L^1}|\leq 4n^{-1}+|L(f_n,h)-2\lVert f_n\rVert_{L^1}|.$$
We have for each integer $n$,
$$\limsup_{h\to +\infty}|L(f,h)-2\lVert f\rVert_{L^1}|\leq 4n^{-1}.$$
This gives the wanted result.
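A numerical sanity check of the limit $2\lVert f\rVert_{L^1}$, using a triangle bump of $L^1$ norm $1$ and a trapezoid rule (both the example function and the quadrature are my own choices, not part of the answer):

```python
def f(x):                         # triangle bump supported on [-1, 1]; L1 norm is 1
    return max(0.0, 1.0 - abs(x))

def L(h, lo=-13.0, hi=2.0, n=30000):
    """Trapezoid-rule estimate of the integral of |f(x+h) + f(x)| over [lo, hi]."""
    dx = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        w = 0.5 if i in (0, n) else 1.0
        s += w * abs(f(x + h) + f(x)) * dx
    return s

print(L(10.0))   # ~ 2.0 = 2 * ||f||_1, as the proof predicts
```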
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Trying to prove $\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+(\sin(\frac{\pi}{t}) -\frac{\pi}{t}\cos(\frac{\pi}{t}))^2}dt$ I posted this incorrectly several hours ago and now I'm back! So this time it's correct. I'm trying to show that for $n\geq 1$:
$$\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}dt$$
I checked this numerically for several values of $n$ up through $n=500$ and the bounds are extremely tight.
I've been banging my head against this integral for a while now and I really can see no way to simplify it as is or to shave off a tiny amount to make it more palatable. Hopefully someone can help me. Thanks.
| Potato's answer is what's going on geometrically. If you want it analytically:$$\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2} \geq \sqrt{\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}$$
$$ = \bigg|\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\bigg|$$
The above expression is the absolute value of the derivative of $t\sin(\pi/t)$.
So your integral is greater than
$$\int_{1 \over n + 1}^{1 \over n + {1 \over 2}}|(t\sin({\pi \over t}))'|\,dt + \int_{1 \over n + {1 \over 2}}^{1 \over n}|(t\sin({\pi \over t}))'|\,dt$$
This is at least what you get when you put the absolute values on the outside, or
$$\bigg|\int_{1 \over n + 1}^{1 \over n + {1 \over 2}}(t\sin({\pi \over t}))'\,dt\bigg| + \bigg|\int_{1 \over n + {1 \over 2}}^{1 \over n}(t\sin({\pi \over t}))'\,dt\bigg|$$
Then the fundamental theorem of calculus says this is equal to the following, for $f(t) = t \sin(\pi/t)$:
$$\bigg|f({1 \over n + {1 \over 2}}) - f({1 \over n+1})\bigg| + \bigg|f({1 \over n}) - f({1 \over n + {1 \over 2}})\bigg|$$
Since $f({1 \over n+1}) = f({1 \over n}) = 0$ and $|f({1 \over n + {1 \over 2}})| = {1 \over n + {1 \over 2}}$, this equals
$$ \bigg|{1 \over n + {1 \over 2}} - 0\bigg| + \bigg|0 -{1 \over n + {1 \over 2}}\bigg|$$
$$ = {2 \over n + {1 \over 2}}$$
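The inequality can be checked numerically; the composite Simpson rule below is an illustrative choice, and for small $n$ the bound holds comfortably:

```python
import math

def integrand(t):
    u = math.pi / t
    return math.sqrt(1.0 + (math.sin(u) - u * math.cos(u)) ** 2)

def simpson(g, a, b, n=20000):        # composite Simpson's rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

for n in (1, 2, 3, 10):
    lhs = 2.0 / (n + 0.5)
    rhs = simpson(integrand, 1.0 / (n + 1), 1.0 / n)
    assert lhs <= rhs
    print(n, lhs, rhs)
```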
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Find Elementary Matrics E1 and E2 such that $E_2E_1$A = I I am studying Linear Algebra part-time and would like to know if anyone has advice on solving the following type of questions:
Considering the matrix:
$$A = \begin{bmatrix}1 & 0 & \\-5 & 2\end{bmatrix}$$
Find elementary Matrices $E_1$ and $E_2$ such that $E_2E_1A = I$
Firstly can this be re-written as?
$$E_2E_1 = IA^{-1}$$
and that is the same as?
$$E_2E_1 = A^{-1}$$
So I tried to find $E_1$ and $E_2$ such that $E_2E_1 = A^{-1}$:
My solution:
$$A^{-1} = \begin{bmatrix}1 & 0 & \\{\frac {5}{2}} & {\frac {1}{2}}\end{bmatrix}$$
$$E_2 = \begin{bmatrix}1 & 0 & \\0 & {\frac {5}{2}}\end{bmatrix}$$
$$E_1 = \begin{bmatrix}1 & 0 & \\1 & {\frac {1}{5}}\end{bmatrix}$$
This is the incorrect answer. Any help as to what I did wrong as well as suggestions on how to approach these questions would be appreciated.
Thanks
| Just look at what needs to be done.
First, eliminate the $-5$ using $E_1 = \begin{bmatrix} 1 & 0 \\ 5 & 1 \end{bmatrix}$. This gives
$$ E_1 A = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}.$$
Can you figure out what $E_2$ must be?
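Once you have a candidate $E_2$, you can verify the pair numerically; the matrices below complete the hint's last step (so treat this block as a spoiler):

```python
def matmul(A, B):
    """Multiply two small dense matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A  = [[1, 0], [-5, 2]]
E1 = [[1, 0], [5, 1]]     # row operation: row2 += 5 * row1
E2 = [[1, 0], [0, 0.5]]   # row operation: row2 *= 1/2
result = matmul(E2, matmul(E1, A))
print(result)             # the 2x2 identity matrix
```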
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
When does a morphism preserve the degree of curves? Suppose $X \subset \mathbb{P}_k^n$ is a smooth, projective curve over an algebraically closed field $k$ of degree $d$ . In this case, degree of $X$ is defined as the leading coefficient of $P_X$, where $P_X$ is the Hilbert polynomial of $X$.
I guess Hartshorne uses the following fact without proof:
Under certain conditions, the projection morphism $\phi$ from $O \notin X$ to some lower-dimensional space $\mathbb{P}_k^m$ gives a curve $\phi(X)$, and $\deg X = \deg \phi(X)$. I don't know which condition is needed to make this possible (the condition for $X \cong \phi(X)$ is given in the book), and more importantly, why they have the same degree.
One thing giving me trouble is the definition of degree by Hilbert polynomial. Is it possible to define it more geometrically?
| Chris Dionne already explained the condition for $\phi$ to be an isomorphism from $X$ to $\phi(X)$. Suppose $\phi$ is a projection to $Q=\mathbb P^{n-1}$. Let $H'$ be a hyperplane in $Q$, then the schematic pre-image $\phi^{-1}H'=H\setminus \{ O\}$ where $H$ is a hyperplane of $P:=\mathbb P^n$ passing through $O$. Now
$$O_P(H)|_X =(O_P(H)|_{P\setminus \{ O \}})|_X \simeq (\phi^*O_Q(H'))|_X \simeq (\phi|{}_X)^*(O_Q(H')|_{\phi(X)})$$
hence
$$ \deg X=\deg O_P(H)|_X= \deg O_Q(H')|_{\phi(X)}=\deg \phi(X).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
how to solve system of linear equations of XOR operation? how can i solve this set of equations ? to get values of $x,y,z,w$ ?
$$\begin{aligned} 1=x \oplus y \oplus z \end{aligned}$$
$$\begin{aligned}1=x \oplus y \oplus w \end{aligned}$$
$$\begin{aligned}0=x \oplus w \oplus z \end{aligned}$$
$$\begin{aligned}1=w \oplus y \oplus z \end{aligned}$$
this is not a real example, the variables don't have to make sense, i just want to know the method.
| The other answers are fine, but you can use even more elementary (if somewhat ad hoc) methods, just as you might with an ordinary system of linear equations over $\Bbb R$. You have this system:
$$\left\{\begin{align*}
1&=x\oplus y\oplus z\\
1&=x\oplus y\oplus w\\
0&=x\oplus w\oplus z\\
1&=w\oplus y\oplus z
\end{align*}\right.$$
Add the first two equations:
$$\begin{align*}
(x\oplus y\oplus z)\oplus(x\oplus y\oplus w)&=(z\oplus w)\oplus\Big((x\oplus y)\oplus(x\oplus y)\Big)\\
&=z\oplus w\oplus 0\\
&=z\oplus w\;,
\end{align*}$$
and $1\oplus 1=0$, so you get $z\oplus w=0$. Substitute this into the last two equations to get $0=x\oplus 0=x$ and $1=y\oplus 0=y$. Now you know that $x=0$ and $y=1$ so $x\oplus y=1$. Substituting this into the first two equations, we find that $1=1\oplus z$ and $1=1\oplus w$. Add $1$ to both sides to get $0=z$ and $0=w$. The solution is therefore $x=0,y=1,z=0,w=0$.
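For a system this small, brute force is also viable: there are only $2^4 = 16$ candidate assignments. A minimal sketch (XOR of bits is computed as a sum mod 2):

```python
from itertools import product

eqs = [  # (right-hand constant, coefficient mask over (x, y, z, w))
    (1, (1, 1, 1, 0)),   # 1 = x XOR y XOR z
    (1, (1, 1, 0, 1)),   # 1 = x XOR y XOR w
    (0, (1, 0, 1, 1)),   # 0 = x XOR w XOR z
    (1, (0, 1, 1, 1)),   # 1 = w XOR y XOR z
]

solutions = [v for v in product((0, 1), repeat=4)
             if all(c == sum(m * x for m, x in zip(mask, v)) % 2
                    for c, mask in eqs)]
print(solutions)   # [(0, 1, 0, 0)], i.e. x=0, y=1, z=0, w=0
```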
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 1
} |
Proving that for every real $x$ there exists $y$ with $x+y^2\in\mathbb{Q}$ I'm having trouble with proving this question on my assignment:
For all real numbers $x$, there exists a real number $y$ so that $x + y^2$ is rational.
I'm not sure exactly how to prove or disprove this. I proved earlier that for all real numbers $x$, there exists some $y$ such that $x+y$ is rational by cases and i'm assuming this is similar but I'm having trouble starting.
| Hint $\rm\ x\! +\! y^2 = q\in\mathbb Q\iff y^2 = q\!-\!x\:$ so choose a rational $\rm\:q\ge x\:$ then let $\rm\:y = \sqrt{q-x}.$
Remark $\ $ Note how writing it out equationally makes it clearer what we need to do to solve it. Just as for "word problems", the key is learning how to translate the problem into the correct algebraic (equational) representation - the rest is easy.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/169971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Exponentiation of Center of Lie Algebra Let $G$ be a Lie Group, and $g$ its Lie Algebra. Show that the subgroup generated by exponentiating the center of $g$ generates the connected component of $Z(G)$, the center of $G$.
Source: Fulton-Harris, Exercise 9.1
The difficulty lies in showing that exponentiating the center of $g$ lands in the center of $G$. Since the image of $exp(Z(g))$ is the union of one parameter subgroups that are disjoint, we know it is connected. Also I can show this for the case when $G = Aut(V) $ and $g = End(V)$ since we have $G \subset g $.
EDIT: G not connected
| $G$ is connected and so is generated by elements of the form $\exp Y$ for $Y \in g$. Therefore it is sufficient to show that for $X \in Z(g)$ $\exp X$ and $\exp Y$ commute. Now define $\gamma: \mathbb R \to G$ by $\gamma(t) = \exp(X)\exp(tY)\exp(-X)$. Then
$$
\gamma'(0) = Ad_{\exp(X)} Y = e^{ad_X} Y = (1 + ad_X + \frac{1}{2}ad_X \circ ad_X + \cdots)(Y) = Y.
$$
But it is also easily seen that $\gamma$ is a homomorphism, i.e. $\gamma(t+s) = \gamma(t)\gamma(s)$. This characterizes the exponential map so that $\gamma(t) = \exp(tY)$. Taking $t=1$ shows that $\exp X$ and $\exp Y$ commute.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Kernel of $T$ is closed iff $T$ is continuous I know that for a Banach space $X$ and a linear functional $T:X\rightarrow\mathbb{R}$ in its dual $X'$ the following holds:
\begin{align}T \text{ is continuous } \iff \text{Ker }T \text{ is closed}\end{align}
which probably holds for general operators $T:X\rightarrow Y$ with finite-dimensional Banach space $Y$. I think the argument doesn't work for infinite-dimensional Banach spaces $Y$. Is the statement still correct? I.e. continuity of course still implies the closedness of the kernel for general Banach spaces $X,Y$ but is the converse still true?
| The result is false if $Y$ is infinite dimensional. Consider $X=\ell^2$ and $Y=\ell^1$; they are not isomorphic as Banach spaces (the dual of $\ell^1$ is not separable). However they both have a Hamel basis of size continuum, therefore they are isomorphic as vector spaces. The kernel of the vector space isomorphism is closed (since it is the zero subspace) but it cannot be continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
$\ell_0$ Minimization (Minimizing the support of a vector) I have been looking into the problem $\min:\|x\|_0$ subject to:$Ax=b$. $\|x\|_0$ is not a linear function and can't be solved as a linear (or integer) program in its current form. Most of my time has been spent looking for a representation different from the one above (formed as a linear/integer program). I know there are approximation methods (Basis Pursuit, Matching Pursuit, the $\ell_1$ problem), but I haven't found an exact formulation in any of my searching and sparse representation literature. I have developed a formulation for the problem, but I would love to compare with anything else that is available. Does anyone know of such a formulation?
Thanks in advance, Clark
P.S.
The support of a vector $s=supp(x)$ is a vector $x$ whose zero elements have been removed. The size of the support $|s|=\|x\|_0$ is the number of elements in the vector $s$.
P.P.S.
I'm aware that the $\|x\|_0$ problem is NP-hard, and as such, probably will not yield an exact formulation as an LP (unless P=NP). I was more referring to an exact formulation or an LP relaxation.
| Consider the following two problems
$$ \min:\|x\|_0 \text{ subject to } Ax = b \tag{P0} $$
$$ \min:\|x\|_1 \text{ subject to } Ax = b \tag{P1} $$
The theory of compressed sensing asserts that the optimal solution to the linear program $(P1)$ is an optimal solution to $(P0)$ i.e., the sparsest vector, given the following conditions on $A.$ Let $B = (b_{j,k})$ be an $n \times n$ orthogonal matrix (but not necessarily orthonormal). Let the coherence of $B$ be denoted by $$\mu = \max_{j,k} | b_{j,k} |.$$ Let $A$ be the $m \times n$ matrix formed by taking any random $m$ rows of $B.$ If $m \in O(\mu^2 |s| \log n)$ then $(P1)$ is equivalent to $(P0).$ More in the papers referenced in the Wikipedia article on compressed sensing.
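Since $(P0)$ is NP-hard, the only exact method in general is a search over supports of increasing size. A toy sketch for a made-up $2\times 4$ instance (the matrix, vector, and helper function below are purely illustrative):

```python
from itertools import combinations

# toy instance: b is 2 * (column 1), so the sparsest solution has support {1}
A = [[1.0, 2.0, 0.0, 1.0],
     [0.0, 1.0, 1.0, 2.0]]
b = [4.0, 2.0]

def solve_support(cols):
    """Try to solve A_S x_S = b exactly for a support S of size 1 or 2 (m = 2 rows)."""
    if len(cols) == 1:
        j = cols[0]
        col = [A[0][j], A[1][j]]
        pivot = 0 if col[0] != 0 else (1 if col[1] != 0 else None)
        if pivot is None:
            return None
        t = b[pivot] / col[pivot]
        ok = all(abs(col[i] * t - b[i]) < 1e-9 for i in (0, 1))
        return [t] if ok else None
    j, k = cols                                   # size 2: Cramer's rule
    det = A[0][j] * A[1][k] - A[0][k] * A[1][j]
    if abs(det) < 1e-12:
        return None
    x1 = (b[0] * A[1][k] - A[0][k] * b[1]) / det
    x2 = (A[0][j] * b[1] - b[0] * A[1][j]) / det
    return [x1, x2]

best = None
for size in (1, 2):                               # supports in increasing size
    for cols in combinations(range(4), size):
        xs = solve_support(cols)
        if xs is not None:
            best = (cols, xs)
            break
    if best:
        break
print(best)   # ((1,), [2.0])
```

The first support found at the smallest size is an optimal solution of $(P0)$ for this instance; for realistic dimensions this enumeration is exponential, which is why the $\ell_1$ relaxation matters.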
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Nth derivative of $\tan^m x$ $m$ is positive integer,
$n$ is non-negative integer.
$$f_n(x)=\frac {d^n}{dx^n} (\tan ^m(x))$$
$P_n(x)=f_n(\arctan(x))$
I would like to find the polynomials that are defined as above
$P_0(x)=x^m$
$P_1(x)=mx^{m+1}+mx^{m-1}$
$P_2(x)=m(m+1)x^{m+2}+2m^2x^{m}+m(m-1)x^{m-2}$
$P_3(x)=(m^3+3m^2+2m)x^{m+3}+(3m^3+3m^2+2m)x^{m+1}+(3m^3-3m^2+2m)x^{m-1}+(m^3-3m^2+2m)x^{m-3}$
I wonder how to find general formula of $P_n(x)$?
I also wish to know if any orthogonal relation can be found for that polynomials or not?
Thanks for answers
EDIT:
I proved Robert Isreal's generating function. I would like to share it.
$$ g(x,z) = \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x)
= \tan^m(x+z) $$
$$ \frac {d}{dz} \tan^m(x+z)=m \tan^{m-1}(x+z)+m \tan^{m+1}(x+z)=m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m-1}(x)+m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m+1}(x)= \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \left(m\tan^{m-1}(x)+m\tan^{m+1}(x)\right)=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \left(\dfrac{d}{dx}\tan^{m}(x)\right)=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^{n+1}}{dx^{n+1}} \tan^{m}(x)$$
$$ \frac {d}{dz} \left( \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) \right)= \sum_{n=1}^\infty \dfrac{z^{n-1}}{(n-1)!} \dfrac{d^n}{dx^n} \tan^m(x)=\sum_{k=0}^\infty \dfrac{z^{k}}{k!} \dfrac{d^{k+1}}{dx^{k+1}} \tan^m(x)$$
I also understood that it can be written for any function as shown below .(Thanks a lot to Robert Isreal)
$$ \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} h^m(x)
= h^m(x+z) $$
I also wrote $P_n(x)$ as the closed form shown below by using Robert Israel's answer.
$$P_n(x)=\frac{n!}{2 \pi i}\int_0^{2 \pi i} e^{nz}\left(\dfrac{x+\tan(e^{-z})}{1-x \tan(e^{-z})}\right)^m dz$$
I do not know next step how to find if any orthogonal relation exist between the polynomials or not. Maybe second order differential equation can be found by using the relations above. Thanks for advice.
| The formula used to obtain the exponential generating function in Robert's answer is most easily seen with a little operator calculus. Let $\rm\:D = \frac{d}{dx}.\,$ Then the operator $\rm\,{\it e}^{\ zD} = \sum\, (zD)^k/k!\:$ acts as a linear shift operator $\rm\:x\to x+z\,\:$ on polynomials $\rm\:f(x)\:$ since
$$\rm {\it e}^{\ zD} x^n =\, \sum\, \dfrac{(zD)^k}{k!} x^n =\, \sum\, \dfrac{z^k}{k!} \dfrac{n!}{(n-k)!}\ x^{n-k} =\, \sum\, {n\choose k} z^k x^{n-k} =\, (x+z)^n$$
so by linearity $\rm {\it e}^{\ zD} f(x) = f(x\!+\!z)\:$ for all polynomials $\rm\:f(x),\:$ and also for all formal power series $\rm\,f(x)\,$ such that $\rm\:f(x\!+\!z)\,$ converges, i.e. where $\rm\:ord_x(x\!+\!z)\ge 1,\:$ e.g. for $\rm\: z = tan^{-1} x = x -x^3/3 +\, \ldots$
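The listed polynomials can be sanity-checked numerically; for instance, comparing a central-difference second derivative of $\tan^3 x$ against $P_2(\tan x)$ with $m=3$ (a check I added, not part of the derivation):

```python
import math

m = 3

def g(t):                       # tan^m(t)
    return math.tan(t) ** m

def P2(x):                      # m(m+1)x^{m+2} + 2m^2 x^m + m(m-1)x^{m-2}
    return m * (m + 1) * x ** (m + 2) + 2 * m * m * x ** m + m * (m - 1) * x ** (m - 2)

t, h = 0.5, 1e-4
second = (g(t + h) - 2 * g(t) + g(t - h)) / (h * h)   # central difference for f_2(t)
print(second, P2(math.tan(t)))  # the two values agree to high accuracy
```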
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 0
} |
Distance in vector space Suppose $k≧3$, $x,y \in \mathbb{R}^k$, $|x-y|=d>0$, and $r>0$. Then prove
(i)If $2r > d$, there are infinitely many $z\in \mathbb{R}^k$ such that $|z-x| = |z-y| = r$
(ii)If $2r=d$, there is exactly one such $z$.
(iii)If $2r < d$, there is no such $z$
I have proved the existence of such $z$ for (i) and (ii). The problem is I don't know how to show that there are infinitely many and exactly one such $z$ respectively. Plus I can't derive a contradiction to show that there is no such $z$ for (iii). Please give me some suggestions
| If there's a $z$ satisfying $|z-x|=r=|z-y|$ then by the triangle inequality, $d=|x-y|\le|z-x|+|z-y|=2r$
So if $d>2r$ there would've been no such $z$!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many triangles with integral side lengths are possible, provided their perimeter is $36$ units? How many triangles with integral side lengths are possible, provided their perimeter is $36$ units?
My approach:
Let the side lengths be $a, b, c$; now,
$$a + b + c = 36$$
Now, $1 \leq a, b, c \leq 18$.
Applying multinomial theorem, I'm getting $187$ which is wrong.
Please help.
| The number of triangles with perimeter $n$ and integer side lengths is given by Alcuin's sequence $T(n)$. The generating function for $T(n)$ is $\dfrac{x^3}{(1-x^2)(1-x^3)(1-x^4)}$. Alcuin's sequence can be expressed as
$$T(n)=\begin{cases}\left[\frac{n^2}{48}\right]&n\text{ even}\\\left[\frac{(n+3)^2}{48}\right]&n\text{ odd}\end{cases}$$
where $[x]$ is the nearest integer function, and thus $T(36)=27$. See this article by Krier and Manvel for more details. See also Andrews, Jordan/Walch/Wisner, these two by Hirschhorn, and Bindner/Erickson.
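A brute-force count (not part of the answer) reproduces both $T(36)=27$ and the beginning of Alcuin's sequence:

```python
def T(n):
    """Number of triangles with integer sides a <= b <= c, a + b + c = n, a + b > c."""
    count = 0
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = n - a - b
            if b <= c and a + b > c:   # ordered sides satisfying the triangle inequality
                count += 1
    return count

print(T(36))                          # 27
print([T(k) for k in range(3, 13)])   # [1, 0, 1, 1, 2, 1, 3, 2, 4, 3]
```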
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
} |
Question related to regular pentagon My question is-
ABCDE is a regular pentagon. If AB = 10 then find AC.
Any solution for this question would be greatly appreciated.
Thank you,
Hey all thanks for the solutions using trigonometry....can we also get the solution without using trigonometry? –
| Here's a solution without trig:
[Edit: here is a diagram which should make this more intuitive hopefully.]
First, $\angle ABC=108^{\circ}$; I won't prove this here, but you can do it rather trivially by dividing the pentagon into 3 triangles, e.g. $\triangle{ABC}$, $\triangle{ACD}$, and $\triangle{ADE}$, and summing and rearranging their angles to find the interior angle.
Draw segment $\overline{AC}$. Then $\triangle ABC$ is isosceles, and has 180 degrees, therefore $\angle ACB=\angle BAC=(180^{\circ}-108^{\circ})/2=36^{\circ}$.
Draw segment $\overline{BE}$. By symmetry, $\triangle ABC \cong \triangle ABE$, and so therefore $\angle ABE=36^{\circ}$.
Let $F$ be the point where segment $\overline{AC}$ intersects segment $\overline{BE}$. Then $\angle AFB = 180^{\circ}-\angle BAC - \angle ABE = 180^{\circ}-36^{\circ}-36^{\circ} = 108^{\circ}$.
This means that $\triangle ABC \sim \triangle ABF$, as both triangles have angles of $36^{\circ}$, $36^{\circ}$, and $108^{\circ}$.
Next, we can say $\angle CBE=\angle ABC-\angle ABE = 108^{\circ} - 36^{\circ} = 72^{\circ}$, and from there, $\angle BFC = \angle AFC - \angle AFB = 180^{\circ} - 108^{\circ} = 72^{\circ}$. Therefore $\triangle BCF$ is isosceles, as it has angles of $36^{\circ}$, $72^{\circ}$, and $72^{\circ}$. This means that $\overline{CF}=\overline{BC}=\overline{AB}=10$ (adding in the information given to us in the problem).
The ratios of like sides of similar triangles are equal. Therefore,
$$\frac{\overline{AC}}{\overline{AB}}=\frac{\overline{AB}}{\overline{AF}}$$
We know that $\overline{AC}=\overline{AF}+\overline{CF}=\overline{AF}+10$. Let's define $x:=\overline{AF}$. Substituting everything we know into the previous equation,
$$\frac{x+10}{10}=\frac{10}{x}$$
Cross-multiply and solve for $x$ by completing the square. (Or, if you prefer, you can use the quadratic formula instead.)
$$x(x+10)=100$$
$$x^2+10x=100$$
$$x^2+10x+25=125$$
$$(x+5)^2=125$$
$$x+5=\pm 5\sqrt 5$$
$$x=-5\pm 5\sqrt 5$$
Choose the positive square root, as $x:=\overline{AF}$ can't be negative.
$$\overline{AF}=-5 + 5\sqrt 5$$
Finally, recall that earlier we proved $\overline{AC}=\overline{AF}+10$. Plug in to get the final answer:
$$\boxed{ \overline{AC} = 5 + 5\sqrt 5 = 5(1 + \sqrt 5)}$$
Hope this helps! :)
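As a numerical cross-check of $\overline{AC} = 5(1+\sqrt 5)$, one can place the pentagon's vertices on a circle and measure the diagonal directly (the coordinate construction is my own, separate from the proof above):

```python
import math

s = 10.0                                    # side length AB
pts = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
       for k in range(5)]                   # unit-circle pentagon A, B, C, D, E
scale = s / math.dist(pts[0], pts[1])       # rescale so each side is 10
diag = scale * math.dist(pts[0], pts[2])    # |AC|
print(diag, 5 * (1 + math.sqrt(5)))         # both are about 16.1803
```

Equivalently, the diagonal of a regular pentagon is the golden ratio times the side.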
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How to compute the min-cost joint assignment to a variable set when checking the cost of a single joint assignment is high? I want to compute the min-cost joint assignment to a set of variables. I have 50 variables, and each can take on 5 different values. So, there are $5^{50}$ (a huge number) possible joint assignments. Finding a good one can be hard!
Now, the problem is that computing the cost of any assignment takes about 15-20 minutes. Finding an approximation to the min-cost assignment is also okay, doesn't have to be the global solution. But with this large computational load, what is a logical approach to finding a low-cost joint assignment?
| In general, if the costs of different assignments are completely arbitrary, there may be no better solution than a brute force search through the assignment space. Any improvements on that will have to come from exploiting some statistical structure in the costs that gives us a better-than-random chance of picking low-cost assignments to try.
Assuming that the costs of similar assignments are at least somewhat correlated, I'd give some variant of shotgun hill climbing a try. Alternatively, depending on the shape of the cost landscape, simulated annealing or genetic algorithms might work better, but I'd try hill climbing first just to get a baseline to compare against.
Of course, this all depends on what the cost landscape looks like. Without more detail on how the costs are calculated, you're unlikely to get very specific answers. If you don't even know what the cost landscape looks like yourself, well, then you'll just have to experiment with different search heuristics and see what works best.
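A minimal sketch of random-restart (shotgun) hill climbing for this setup; the cost function below is a toy separable placeholder for the real 15-20 minute evaluation, so with real costs you would budget evaluations far more carefully (caching results, sampling only a few neighbors per step, etc.):

```python
import random

N_VARS, N_VALUES, RESTARTS = 50, 5, 20

def cost(assign):
    # placeholder for the real, expensive evaluation;
    # this toy separable cost is minimized by the all-zeros assignment
    return sum(assign)

def hill_climb(start):
    current, best = list(start), cost(start)
    improved = True
    while improved:
        improved = False
        for i in range(N_VARS):
            for v in range(N_VALUES):
                if v == current[i]:
                    continue
                trial = current[:]
                trial[i] = v
                c = cost(trial)
                if c < best:            # accept any improving single-variable move
                    current, best, improved = trial, c, True
    return current, best

random.seed(0)
runs = [hill_climb([random.randrange(N_VALUES) for _ in range(N_VARS)])
        for _ in range(RESTARTS)]
best_assign, best_cost = min(runs, key=lambda p: p[1])
print(best_cost)   # 0 for the toy cost
```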
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Ergodicity of the First Return Map I was looking for some results on Infinite Ergodic Theory and I found this proposition. Do you guys know how to prove the last item (iii)?
I managed to prove (i) and (ii) but I can't do (iii).
Let $(X,\Sigma,\mu,T)$ be a $\sigma$-finite space with $T$ presearving the measure $\mu$, $Y\in\Sigma$ sweep-out s.t. $0<\mu(Y)<\infty$. Making
$$\varphi(x)= \operatorname{min}\{n\geq1; \ T^n(x)\in Y\}$$
and also
$$T_Y(x) = T^{\varphi(x)}(x)$$
if $T$ is conservative then
(i) $\mu|_{Y\cap\Sigma}$ is invariant under the action of $T_Y$ on $(Y,Y\cap\Sigma,\mu|_{Y\cap\Sigma})$;
(ii) $T_Y$ is conservative;
(iii) If $T$ is ergodic, then $T_Y$ is ergodic on $(Y,Y\cap\Sigma,\mu|_{Y\cap\Sigma})$.
Any ideas?
Thank you guys in advance!!!
| Let $B \subset Y$ be measurable. To prove the invariance of $\mu|_Y$ it is sufficient to prove that
$$\mu(T_Y^{-1}B)=\mu(B)$$
First,
$$\mu(T_Y^{-1}B)=\sum_{n=1}^{\infty}\mu(Y\cap\{\varphi_Y =n \}\cap T^{-n}B)$$
Now,
$$\{ \varphi_Y\leq n\}\cap T^{-n-1}B=T^{-1}( \{\varphi_Y\leq n-1\}\cap T^{-n}B)\cup T^{-1}(Y \cap\{\varphi_Y =n \}\cap T^{-n}B ) $$
This gives by invariance of the measure
$$\mu(Y\cap\{\varphi_Y =n \}\cap T^{-n}B)=\mu(B_n)-\mu(B_{n-1}) $$
where $B_n=\{ \varphi_Y\leq n\}\cap T^{-n-1}B.$ We have $\mu(B_n)\to\mu(B)$ as $n\to \infty$, thus
$$\mu(T_Y^{-1} B)=\lim\mu(B_n)=\mu(B).~~~~ :)$$
Let us assume now the ergodicity of the original system. Let $B\subset Y$ be a measurable
$T_Y$-invariant subset. For any $x \in B$ , the first iterate $T^n x~~~(n\geq 1)$
that belongs to $Y$ also belongs to $B$ , which means that $\varphi_B=\varphi_Y$ on $B.$
But if, $\mu(B)\neq 0$ , Kac's lemma gives that
$$\int_{B}\varphi_B d\mu=1=\int_{Y}\varphi_{Y} d\mu$$ which implies that $\mu(Y \setminus B) = 0$, proving ergodicity. :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Prove that , any primitive root $r$ of $p^n$ is also a primitive root of $p$
For an odd prime $p$, prove that any primitive root $r$ of $p^n$ is also a primitive root of $p$
So I have assumed $r$ have order $k$ modulo $p$ , So $k|p-1$.Then if I am able to show that $p-1|k$ then I am done .But I haven't been able to show that.Can anybody help me this method?Any other type of prove is also welcomed.
| Note that an integer $r$ with $\gcd(r,p)=1$ is a primitive root modulo $p^k$ when the smallest $b$ such that $r^b\equiv1\bmod p^k$ is $b=p^{k-1}(p-1)$.
Suppose that $r$ is not a primitive root modulo $p$, so there is some $b<p-1$ such that $r^b\equiv 1\bmod p$.
In other words, there is some integer $t$ such that $r^b=1+pt$.
Then of course we have that $p^{n-1}b<p^{n-1}(p-1)$, and
$$r^{p^{n-1}b}\equiv 1\bmod p^n$$
because of
the binomial theorem.
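The statement is easy to test computationally for a small case, say $p=7$, $n=2$ (a sanity check, not part of the proof):

```python
def order(a, m):
    """Multiplicative order of a mod m (assumes gcd(a, m) = 1)."""
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

p, n = 7, 2
mod = p ** n
phi = (p - 1) * p ** (n - 1)          # Euler phi of p^n
proots_pn = [r for r in range(1, mod)
             if r % p != 0 and order(r, mod) == phi]
# every primitive root of p^n reduces to a primitive root of p:
assert all(order(r % p, p) == p - 1 for r in proots_pn)
print(proots_pn[:5])   # [3, 5, 10, 12, 17]
```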
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 6,
"answer_id": 2
} |
Find a function that satisfies the following five conditions. My task is to find a function $h:[-1,1] \to \mathbb{R}$ so that
(i) $h(-1) = h(1) = 0$
(ii) $h$ is continuously differentiable on $[-1,1]$
(iii) $h$ is twice differentiable on $(-1,0) \cup (0,1)$
(iv) $|h^{\prime\prime}(x)| < 1$ for all $x \in (-1,0)\cup(0,1)$
(v) $|h(x)| > \frac{1}{2}$ for some $x \in [-1,1]$
The source I have says to use the function
$h(x) = \frac{3}{4}\left(1-x^{4/3}\right)$
which fails to satisfy condition (iv) so it is incorrect. I'm starting to doubt the validity of the problem statement because of this. So my question is does such a function exist? If not, why? Thanks!
| Let $h$ satisfy (i)-(iv) and $x_0$ be a point where the maximum of $|h|$ is attained on $[-1,1]\;$. WLOG we can assume that $x_0\ge0$. If $x_0=1$ then $h\equiv 0$ by (i), so assume $x_0<1$; then $x_0$ is an interior extremum of $h$ (or $h(x_0)=0$ and $h\equiv 0$), so $h'(x_0)=0$ and $h'(y)=\int_{x_0}^y h''(z)\,dz$. Then
$$
|h(x_0)|=|h(x_0)-h(1)|=\left|\int_{x_0}^1h'(y)\,dy\right|=
$$
$$
\left|\int_{x_0}^1\int_{x_0}^yh''(z)\,dz\;dy\right|\le
\sup_{(0,1)}|h''|\int_{x_0}^1\int_{x_0}^y\,dz\,dy=\frac{(1-x_0)^2}2\sup_{(0,1)}|h''|\le \frac12,
$$
which contradicts (v). So no such function exists.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Counting zero-digits between 1 and 1 million I just remembered a problem I read years ago but never found an answer:
Find how many 0-digits exist in natural numbers between 1 and 1 million.
I am a programmer, so a quick brute-force would easily give me the answer, but I am more interested in a pen-and-paper solution.
| Just to show there is more than one way to do it:
How many zero digits are there in all six-digit numbers? The first digit is never zero, but if we pool all of the non-first digits together, no value occurs more often than the others, so exactly one-tenth of them will be zeroes. There are $9\cdot 5\cdot 10^5$ such digits in all, and a tenth of them is $9\cdot 5 \cdot 10^4$.
Repeating that reasoning for each possible length, the number of zero digits we find between $1$ and $999999$ inclusive is
$$\sum_{n=2}^6 9(n-1)10^{n-2} = 9\cdot 54321 = 488889 $$
To that we may (depending on how we interpret "between" in the problem statement) need to add the 6 zeroes from 1,000,000 itself, giving a total of 488,895.
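The pen-and-paper total is easy to confirm with the brute force mentioned in the question:

```python
# count the zero digits in the decimal representations of 1 .. 1,000,000
total = sum(str(i).count("0") for i in range(1, 10**6 + 1))
print(total)   # 488895, i.e. 488889 zeros in 1..999999 plus the six in 1,000,000
```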
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Application of Banach Separation theorem
Let $(\mathcal{H},\langle\cdot,\cdot\rangle)$ be a Hilbert Space, $U\subset \mathcal{H},U\not=\mathcal{H}$ be a closed subspace and $x\in\mathcal{H}\setminus U$. Prove that there exists $\phi\in\mathcal{H}^*$, such that\begin{align}\text{Re } \phi(x)<\inf_{u\in U}\text{Re }\phi(u)
\end{align}
Hint: Observe that $\inf_{u\in U}\text{Re }\phi(u)\leq0$.
This seems like an application of the Banach Separation theorem. But the way I know it is not directly applicable. I know that for two disjoint convex sets $A$ and $B$ of which one is open there exists a functional seperating them. Is there anything special in this problem about $\mathcal{H}$ being Hilbert and not some general Banach space?
| There is a more general result, whose proof you can find in Rudin's Functional Analysis
Let $A$ and $B$ be disjoint convex subsets of topological vector space $X$. If $A$ is compact and $B$ is closed then there exist $\varphi\in X^*$ such that
$$
\sup\limits_{x\in A}\mathrm{Re}(\varphi(x))<\inf\limits_{x\in B}\mathrm{Re}(\varphi(x))
$$
Your result follows if we take $X=\mathcal{H}$, $A=\{x\}$ and $B=U$.
Of course this is a sledgehammer for such a simple problem, because in case of Hilbert space we can explicitly say that functional
$$
\varphi(z)=\langle z, \mathrm{Pr}_U(x)-x\rangle
$$
will fit, where $\mathrm{Pr}_U(x)$ is the unique orthogonal projection of vector $x$ on closed subspace $U$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Looking for a 'second course' in logic and set theory (forcing, large cardinals...) I'm a recent graduate and will likely be out of the maths business for now - but there are a few things that I'd still really like to learn about - forcing and large cardinals being two of them.
My background is what one would probably call a 'first graduate course' in logic and set theory (some intro to ZFC, ordinals, cardinals, and computability theory). Can you recommend any books or online lecture notes which are accessible to someone with my previous knowledge?
Thanks a lot!
| I would recommend the following as excellent graduate level introductions to set theory, including forcing and large cardinals.
*
*Thomas Jech, Set Theory.
*Aki Kanamori, The higher infinite. See the review I wrote of it for Studia Logica.
I typically recommend to my graduate students, who often focus on both forcing and large cardinals, that they should read both Jech and Kunen (mentioned in Francis Adams answer) and play these two books off against one another. For numerous topics, Jech will have a high-level explanation that is informative when trying to understand the underlying idea, and Kunen will have a greater level of notational detail that helps one understand the particulars. Meanwhile, Kanamori's book is a great exploration of the large cardinal hierarchy.
I would also recommend posting (and answering) questions on forcing and large cardinals here and also on mathoverflow. Probably most forcing questions belong on mathoverflow.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/170946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Factorize $f$ as product of irreducible factors in $\mathbb Z_5$ Let $f = 3x^3+2x^2+2x+3$, factorize $f$ as product of irreducible factors in $\mathbb Z_5$.
First thing, I've used the polynomial remainder theorem to make the first factorization:
$$\begin{aligned} f = 3x^3+2x^2+2x+3 = (3x^2-x+3)(x+1)\end{aligned}$$
Obviously then as second step I've taken care of that quadratic polynomial, so:
$x_1,x_2=\frac{-b\pm\sqrt{\Delta}}{2a}=\frac{1\pm\sqrt{1-4(9)}}{6}=\frac{1\pm\sqrt{-35}}{6}$
My question is: as I've done the calculations in $\mathbb Z_5$, was I allowed to do the following:
as $-35 \equiv_5 100 \Rightarrow \sqrt{\Delta}=\sqrt{-35} = \sqrt{100}$
then $x_1= \frac{11}{6} = 1 \text { (mod 5)}$, $x_2= -\frac{3}{2} = 1 \text { (mod 5)}$,
therefore my resulting product would be $f = (x+1)(x+1)(x+1)$.
I think I have done something illegal; that is why, multiplying back $(x+1)(x+1)$, I get $x^2+2x+1 \neq 3x^2-x+3$.
Any ideas on how can I get to the right result?
| If $f(X) = aX^2 + bX + c$ is a quadratic polynomial with roots $x_1$ and $x_2$ then $f(X) = a(X-x_1)(X-x_2)$ (the factor $a$ is necessary to get the right leading coefficient). You found that $3x^2-x+3$ has a double root at $x_1 = x_2 = 1$, so $3x^2-x+3 = 3(x-1)^2$. Your mistakes were
*
*You forgot to multiply by the leading coefficient $3$.
*You concluded that a root at $1$ corresponds to the linear factor $(x+1)$, but this would mean a root at $-1$. The right linear factor is $(x-1)$.
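A quick machine check of the corrected factorization $f = 3(x+1)(x-1)^2$ over $\mathbb Z_5$ (a Python sketch of my own; since two polynomials of degree at most $4$ over $\mathbb Z_5$ that agree at all five field elements must be equal, a pointwise comparison is a complete proof here):

```python
# Compare f = 3x^3 + 2x^2 + 2x + 3 with 3(x+1)(x-1)^2 at every element of Z_5.
def f(x, p=5):
    return (3 * x**3 + 2 * x**2 + 2 * x + 3) % p

def factored(x, p=5):
    return (3 * (x + 1) * (x - 1) ** 2) % p

# Agreement at all 5 points of Z_5 proves equality for degree <= 4.
assert all(f(x) == factored(x) for x in range(5))
```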
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Functional equations of (S-shape?) curves I am looking for the way to "quite easily" express particular curves using functional equations.
What's important (supposing the chart's size is 1x1 - actually it doesn't matter in the final result):
*
*obviously the shape - as shown in the picture;
*there should be exactly three solutions to f(x) = x: x=0, x close to or equal to 0.5, and x=1;
*(0.5,0.5) should be the point of intersection with y = x;
*it would be really nice, if both of the arcs are scalable - as shown in the left example (the lower arc is more significant than the upper one).
I've done some research, but nothing seemed to match my needs; I tried trigonometric and sigmoid functions too, they turned out to be quite close to what I want. I'd be grateful for any hints or even solutions.
P.S. The question was originally asked at stackoverflow and I was suggested to look for help here. Some answers involved splines, cumulative distribution functions or the logistic equation. Is that the way to go?
| Have you tried polynomial interpolation? It seems that for 'well-behaved' graphs like the ones you are looking for (curves for image processing?), it could work just fine.
At the bottom of this page you can find an applet demonstrating it. There is a potential problem with the interpolated degree 4 curve possibly becoming negative though. I'll let you know once I have worked out a better way to do this.
EDIT: The problem associated with those unwanted 'spikes' or oscillations is called Runge's phenomenon. I would guess that image manipulation programs like Photoshop actually use spline interpolation to find the curves.
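Here is a minimal sketch of the polynomial-interpolation idea in pure Python. The five control points are my own illustrative choice, not from the question: they pin the curve at (0,0), (0.5,0.5) and (1,1) as required, and the two extra points at x = 0.25 and x = 0.75 shape the lower and upper arcs (move them to rescale the arcs).

```python
# Lagrange interpolation through a list of (x, y) control points.
def lagrange(points):
    def curve(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return curve

# Control points chosen symmetrically about (0.5, 0.5), giving an S-shape
# that crosses y = x exactly at x = 0, 0.5 and 1.
s_curve = lagrange([(0.0, 0.0), (0.25, 0.1), (0.5, 0.5), (0.75, 0.9), (1.0, 1.0)])
```

With more control points a single polynomial starts to oscillate (Runge's phenomenon), which is where piecewise splines become the better choice.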
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Throwing balls into $b$ buckets: when does some bucket overflow size $s$? Suppose you throw balls one-by-one into $b$ buckets, uniformly at random. At what time does the size of some (any) bucket exceed size $s$?
That is, consider the following random process. At each of times $t=1, 2, 3, \dots$,
*
*Pick up a ball (from some infinite supply of balls that you have).
*Assign it to one of $b$ buckets, uniformly at random, and independent of choices made for previous balls.
For this random process, let $T = T(s,b)$ be the time such that
*
*At time $T-1$ (after the $T-1$th ball was assigned), for each bucket, the number of balls assigned to it was $\le s$.
*At time $T$ (after the $T$th ball was assigned), there is some bucket for which the number of balls assigned to it is $s + 1$.
What can we say about $T$? If we can get the distribution of $T(s,b)$ that would be great, else even knowing its expected value and variance, or even just expected value, would be good.
Beyond the obvious fact that $T \le bs+1$ (and therefore $E[T]$ exists), I don't see anything very helpful. The motivation comes from a real-life computer application involving hashing (the numbers of interest are something like $b = 10000$ and $s = 64$).
| I just wrote some code to find the rough answer (for my particular numbers) by simulation.
$ gcc -lm balls-bins.c -o balls-bins && ./balls-bins 10000 64
...
Mean: 384815.56 Standard deviation: 16893.75 (after 25000 trials)
This (384xxx) is within 2% of the number ~377xxx, specifically
$$ T \approx b \left( (s + \log b) \pm \sqrt{(s + \log b)^2 - s^2} \right) $$
that comes from the asymptotic results (see comments on the question), and I must say I am pleasantly surprised.
I plan to edit this answer later to summarise the result from the paper, unless someone gets to it first. (Feel free!)
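For reference, the simulation itself is short. Here is a Python sketch of the same experiment (the original C program is not shown; this is my own reimplementation of the process exactly as defined in the question):

```python
import random

# Throw balls into b buckets uniformly at random; return the time T at
# which some bucket first reaches s + 1 balls.
def first_overflow(b, s, rng=random):
    counts = [0] * b
    t = 0
    while True:
        t += 1
        bucket = rng.randrange(b)
        counts[bucket] += 1
        if counts[bucket] == s + 1:
            return t

# Averaging first_overflow(10000, 64) over many trials reproduces a mean
# near the 384xxx reported above, though pure Python is much slower than C.
```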
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
If $f:D\to \mathbb{R}$ is continuous and exists $(x_n)\in D$ such as that $x_n\to a\notin D$ and $f(x_n)\to \ell$ then $\lim_{x\to a}f(x)=\ell$? Assertion: If $f:X\setminus\left\{a\right\}\to \mathbb{R}$ is continuous and there exists a sequence $(x_n):\mathbb{N}\to X\setminus\left\{a\right\}$ such as that $x_n\to a$ and $f(x_n)\to \ell$ prove that $\lim_{x\to a}f(x)=\ell$
I have three questions: 1) Is the assertion correct? If not, please provide counter-examples. In that case can the assertion become correct if we require that $f$ is monotonic, differentiable etc.?
2) Is my proof correct? If not, please pinpoint the problem and give a hint towards the right direction. Personally, what makes me doubt it are the choices of $N$ and $\delta$, since they depend on one another
3) If the proof is correct, then is there a way to shorten it?
My Proof:
Let $\epsilon>0$. Since $f(x_n)\to \ell$
\begin{equation}
\exists N_1\in \mathbb{N}:n\ge N_1\Rightarrow \left|f(x_n)-\ell\right|<\frac{\epsilon}{2}\end{equation}
Thus, $\left|f(x_{N_1})-\ell\right|<\frac{\epsilon}{2}$ and by the continuity of $f$ at $x_{N_1}$,
\begin{equation}
\exists \delta_1>0:\left|x-x_{N_1}\right|<\delta_1\Rightarrow \left|f(x)-f(x_{N_1})\right|<\frac{\epsilon}{2}
\end{equation}
Since $x_n\to a$,
\begin{equation}
\exists N_2\in \mathbb{N}:n\ge N_2\Rightarrow \left|x_n-a\right|<\delta_1\end{equation}
Thus, $\left|x_{N_2}-a\right|<\delta_1$ and by letting $N=\max\left\{N_1,N_2\right\}$,
\begin{gather}
0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N+x_N-a\right|<\delta_1\Rightarrow \left|x-x_N\right|-\left|x_N-a\right|<\delta_1\\
0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N\right|<\delta_1+\left|x_N-a\right|
\end{gather}
By the continuity of $f$ at $x_N$,
\begin{equation}
\exists \delta_3>0:0<\left|x-x_N\right|<\delta_3\Rightarrow \left|f(x)-f(x_N)\right|<\frac{\epsilon}{2}
\end{equation}
Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that,
\begin{gather}
0<\left|x-a\right|<\delta\Rightarrow \left|x-x_N\right|<\delta\Rightarrow \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\Rightarrow \left|f(x)-\ell\right|-\left|f(x_N)-\ell\right|<\frac{\epsilon}{2}\\
0<\left|x-a\right|<\delta\Rightarrow\left|f(x)-\ell\right|<\left|f(x_N)-\ell\right|+\frac{\epsilon}{2}<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon
\end{gather}
We conclude that $\lim_{x\to a}f(x)=\ell$
Thank you in advance
EDIT: The proof is false. One of the mistakes is in this part:
"Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that,
\begin{gather}
0<\left|x-a\right|<\delta{\color{Red} \Rightarrow} \left|x-x_N\right|<\delta{\color{Red} \Rightarrow} \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\end{gather}"
| You need to have $f(x_n) \to l$ for all sequences $x_n \to a$, not just one sequence.
For example, let $a=(0,0)$ with $f(x,y) = \frac{x y}{x^2+y^2}$. This is continuous on $\mathbb{R}^2 \setminus \{a\}$, and the sequence $x_n=(\frac{1}{n},0) \to a$, with $f(x_n) \to 0$ (excuse abuse of notation), but $f$ is not continuous at $a$.
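A numerical illustration of this counterexample (my own addition): along $(1/n, 0)$ the function values are identically $0$, while along $(1/n, 1/n)$ they are identically $1/2$, so no single limit at $a=(0,0)$ exists.

```python
# f(x, y) = xy / (x^2 + y^2), continuous away from the origin.
def f(x, y):
    return x * y / (x**2 + y**2)

along_axis = [f(1.0 / n, 0.0) for n in (10, 100, 1000)]          # all 0.0
along_diagonal = [f(1.0 / n, 1.0 / n) for n in (10, 100, 1000)]  # all 0.5
```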
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Preorders, chains, cartesian products, and lexicographical order Definitions:
A preorder on a set $X$ is a binary relation $\leq$ on $X$ which is reflexive and transitive.
A preordered set $(X, \leq)$ is a set equipped with a preorder.... Where confusion cannot result, we refer to the preordered set $X$ or sometimes just the preorder $X$.
If $x \leq y$ and $y \leq x$ then we shall write $x \cong y$ and say that $x$ and $y$ are isomorphic elements.
Write $x < y$ if $x \le y$ and $y \not\le x$ for each $x, y \in X$. This gives a strict partial order on $X$. (See also, this question.)
Given two preordered sets $A$ and $B$, the lexicographical order on the Cartesian product $A \times B$ is defined as $(a,b) \le_{lex} (a',b')$ if and only if $a < a'$ or ($a \cong a'$ and $b \le b'$). The result is a preorder.
A subset $C$ of a preorder $X$ is called a chain if for every $x,y \in C$ we have $x \leq y$ or $y \leq x$.... We shall say that a preorder $X$ is a chain ... if the underlying set $X$ is such.
Exercise:
Let $C$ and $C'$ be chains. Show that the set of pairs $(c, c')$, where $c \in C$ and $c' \in C'$, is also a chain when ordered lexicographically
| Suppose $(c_1, c_1')$ and $(c_2, c_2')$ are pairs with $c_1, c_2 \in C$ and $c_1', c_2' \in C'$. Then $c_1 \le c_2$ or $c_2 \le c_1$ and $c_1' \le c_2'$ or $c_2' \le c_1'$. If $c_1 \le c_2$ but $c_2 \not\le c_1$ then $c_1 < c_2$ and $(c_1, c_1') \le_{lex} (c_2, c_2')$. Similarly, if $c_1 \not\le c_2$ and $c_2 \le c_1$ then $(c_2, c_2') \le_{lex} (c_1, c_1')$. If $c_1 \le c_2$ and $c_2 \le c_1$ then $c_1 \cong c_2$. Now if $c_1' \le c_2'$ then $(c_1, c_1') \le_{lex} (c_2, c_2')$. On the other hand if $c_2' \le c_1'$ then $(c_2, c_2') \le_{lex} (c_1, c_1')$.
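The case analysis above can be checked mechanically on small concrete chains. A sketch, assuming the chains are plain integer intervals with the usual order (so that $x \cong y$ reduces to $x = y$):

```python
from itertools import product

# Lexicographic comparison on pairs, specialized to a partial order
# where isomorphism of elements is just equality.
def le_lex(p, q):
    (a, b), (a2, b2) = p, q
    return a < a2 or (a == a2 and b <= b2)

# C = {0,1,2} and C' = {0,1,2,3} are chains; verify every pair of pairs
# is comparable, i.e. the lexicographic product is again a chain.
pairs = list(product(range(3), range(4)))
assert all(le_lex(p, q) or le_lex(q, p) for p in pairs for q in pairs)
```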
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finding two numbers given their sum and their product
Which two numbers, when added together, yield $16$ and, when multiplied together, yield $55$?
I know the $x$ and $y$ are $5$ and $11$ but I wanted to see if I could algebraically solve it, and found I couldn't.
In $x+y=16$, I know $x=16/y$ but when I plug it back in I get something like $16/y + y = 16$, then I multiply the left side by $16$ to get $2y=256$ and then ultimately $y=128$. Am I doing something wrong?
| Given $$ x + y = 16 \tag{1} $$
and $$ x \cdot y = 55. \tag{2} $$
We can use the identity: $$ ( x-y)^2 = ( x+y)^2 - 4 \cdot x \cdot y \tag{3} $$
to get
$$ x - y = \sqrt{256 - 4 \cdot 55} = \sqrt{36} = \pm \, 6. \tag{4} $$
Finding half sum and half difference of equations (1) and (4) gives us
$$ (x,y) = (11,5) , \, (5,11) \tag{5} $$
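Steps (1)–(5) translate directly into a short routine (my own sketch of the method above, not part of the original answer):

```python
import math

# Recover x and y from their sum s and product p via the identity
# (x - y)^2 = (x + y)^2 - 4xy.
def from_sum_and_product(s, p):
    half_diff = math.sqrt(s * s - 4 * p) / 2   # half of |x - y|
    return s / 2 + half_diff, s / 2 - half_diff

# from_sum_and_product(16, 55) -> (11.0, 5.0)
```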
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 9,
"answer_id": 8
} |
Is there any way to find a angle of a complex number without a calculator? Transforming the complex number $z=-\sqrt{3}+3i$ into polar form will bring me to the problem to solve this two equations to find the angle $\phi$: $\cos{\phi}=\frac{\Re z}{|z|}$ and $\sin{\phi}=\frac{\Im z}{|z|}$.
For $z$ the solutions are $\cos{\phi}=-0,5$ and $\sin{\phi}=-0,5*\sqrt{3}$.
Using Wolfram Alpha or my calculator I can get $\phi=\frac{2\pi}{3}$ as solution. But using a calculator is forbidden in my examination.
Do you know any (cool) ways to get the angle without any other help?
| The direct calculation is
$$\arg(-\sqrt 3+ 3i)=\arctan\frac{3}{-\sqrt{3}}=\arctan (-\sqrt 3)=\arctan \frac{\sqrt 3/2}{-1/2}$$
As the other two answers remark, you must learn by heart the values of at least the sine and cosine at the main angle values between zero and $\,\pi/2\,$ and then, understanding the trigonometric circle, deduce the functions' values anywhere on that circle.
The solution you said you got is incorrectly deduced as you wrote
$$\cos\phi=-0,5\,,\,\sin\phi=-0,5\cdot \sqrt 3$$
which would give you both values of $\,\sin\,,\,\cos\,$ negative, thus putting you in the third quadrant of the trigonometric circle, $\,\{(x,y)\;:\;x,y<0\}\,$, which is wrong as the value indeed is $\,2\pi/3\,$ (sine is positive!), but who knows how you got to it.
In the argument's calculation above, please note that the minus sign is at $\,1/2\,$ in the denominator, since that's what corresponds to the $\,\cos\,$ in the polar form, but the sine is positive, thus putting us in the second quadrant $\,\{(x,y)\;:\;x<0<y\}\,$.
So knowing that
$$\sin x = \sin(\pi - x)\,,\,\cos x=-\cos(\pi -x)\,,\,0\leq x\leq \pi/2$$
and knowing the basic values for the basic angles, gives you now
$$-\frac{1}{2}=-\cos\frac{\pi}{3}\stackrel{\text{go to 2nd quad.}}=\cos\left(\pi-\frac{\pi}{3}\right)=\cos\frac{2\pi}{3}$$
$$\frac{\sqrt 3}{2}=\sin\frac{\pi}{3}\stackrel{\text{go to 2nd quad.}}=\sin\left(\pi-\frac{\pi}{3}\right)=\sin\frac{2\pi}{3}$$
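As a numerical sanity check (my own addition): `math.atan2` takes the signs of the imaginary and real parts into account separately, so it lands in the correct quadrant, unlike a bare arctangent of the ratio.

```python
import math

# arg(-sqrt(3) + 3i) via atan2(Im z, Re z); the result is 2*pi/3,
# in the second quadrant, exactly as derived above.
phi = math.atan2(3, -math.sqrt(3))
```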
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Where are good resources to study combinatorics? I am an undergraduate with basic knowledge of combinatorics, but I want to obtain sound knowledge of this topic. Where can I find good resources/questions to practice on this topic?
I need more than basic things like the direct question 'choosing r balls among n' etc.; I need questions that make you think and challenge you a bit.
| Lots of good suggestions here. Another freely available source is Combinatorics Through Guided Discovery. It starts out very elementary, but also contains some interesting problems. And the book is laid out as almost entirely problem-based, so it useful for self study.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 8,
"answer_id": 2
} |
Solving Recurrence $T_n = T_{n-1}*T_{n-2} + T_{n-3}$ I have a series of numbers called the Foo numbers, where $F_0 = 1, F_1=1, F_2 = 1 $
then the general equation looks like the following:
$$
F_n = F_{n-1}(F_{n-2}) + F_{n-3}
$$
So far I have got the equation to look like this:
$$T_n = T_{n-1}*T_{n-2} + T_{n-3}$$
I just don't know how to solve the recurrence. I tried unfolding, but I don't know if I got the right answer:
$$
T_n = T_{n-i}*T_{n-(i+1)} + T_{n-(i+2)}
$$
Please help, I need to describe the algorithm, which I have done, but analysing the running time is frustrating me.
| Numerical data looks very good for
$$F_n \approx e^{\alpha \tau^n}$$
where $\tau = (1+\sqrt{5})/2 \approx 1.618$ and $\alpha \approx 0.175$. Notice that this makes sense: When $n$ is large, the $F_{n-1} F_{n-2}$ term is much larger than $F_{n-3}$, so
$$\log F_n \approx \log F_{n-1} + \log F_{n-2}.$$
Recursions of the form $a_n = a_{n-1} + a_{n-2}$ always have closed forms $\alpha \tau^n + \beta (-1/\tau)^{n}$, since $\tau$ and $-1/\tau$ are the roots of $x^2 = x + 1$.
Here's a plot of $\log \log F_n$ against $n$; note that it looks very linear. A best fit line (deleting the first five values to clean up end effects) gives that the slope of this line is $0.481267$; the value of $\log \tau$ is $0.481212$.
Your sequence is in Sloane but it doesn't say much of interest; if you have anything to say, you should add it to Sloane.
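The observation is easy to reproduce directly (a sketch of my own; Python's arbitrary-precision integers make computing $F_{20}$, which already has over a thousand digits, painless):

```python
import math

# Compute F_0..F_n for F_n = F_{n-1} * F_{n-2} + F_{n-3}, F_0 = F_1 = F_2 = 1,
# then estimate the slope of log log F_n against n.
def foo_numbers(n):
    f = [1, 1, 1]
    while len(f) <= n:
        f.append(f[-1] * f[-2] + f[-3])
    return f

F = foo_numbers(20)
slope = (math.log(math.log(F[20])) - math.log(math.log(F[15]))) / 5
# slope comes out close to log((1 + sqrt(5)) / 2) ~ 0.4812
```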
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Existence of valuation rings in a finite extension of the field of fractions of a weakly Artinian domain without Axiom of Choice Can we prove the following theorem without Axiom of Choice?
This is a generalization of this problem.
Theorem
Let $A$ be a weakly Artinian domain.
Let $K$ be the field of fractions of $A$.
Let $L$ be a finite extension field of $K$.
Let $B$ be a subring of $L$ containing $A$.
Let $P$ be a prime ideal of $B$.
Then there exists a valuation ring of $L$ dominating $B_P$.
As for why I think this question is interesting, please see (particularly Pete Clark's answer):
Why worry about the axiom of choice?
| Lemma 1
Let $A$ be an integrally closed weakly Artinian domain.
Let $S$ be a multiplicative subset of $A$.
Let $A_S$ be the localization with respect to $S$.
Then $A_S$ is an integrally closed weakly Artinian domain.
Proof:
Let $K$ be the field of fractions of $A$.
Suppose that $x \in K$ is integral over $A_S$.
$x^n + a_{n-1}x^{n-1} + ... + a_1x + a_0 = 0$, where $a_i \in A_S$.
Hence there exists $s \in S$ such that $sx$ is integral over $A$.
Since $A$ is integrally closed, $sx \in A$.
Hence $x \in A_S$.
Hence $A_S$ is integrally closed.
Let $f$ be a non-zero element of $A_S$.
$f = a/s$, where $a \in A, s \in S$.
Then $fA_S = aA_S$.
By this, $aA$ is a product of prime ideals of $A$.
Let $P$ be a non-zero prime ideal of $A$.
Since $P$ is maximal, $A_S/P^nA_S$ is isomorphic to $A/P^n$ or $0$.
Hence $leng_{A_S} A_S/aA_S$ is finite.
QED
Lemma 2
Let $A$ be an integrally closed weakly Artinian domain.
Let $P$ be a non-zero prime ideal of $A$.
Then $A_P$ is a discrete valuation ring.
Proof:
By Lemma 1 and this, every non-zero ideal of $A_P$ has a unique factorization as a product of prime ideals.
Hence $PA_P \neq P^2A_P$.
Let $x \in PA_P - P^2A_P$.
Since $PA_P$ is the only non-zero prime ideal of $A_P$, $xA_P = PA_P$.
Since every non-zero ideal of $A_P$ can be written as $P^nA_P$, $A_P$ is a principal ideal domain.
Hence $A_P$ is a discrete valuation ring.
QED
Proof of the title theorem
We can assume that $P \neq 0$.
Let $C$ be the integral closure of $B$ in $L$.
By this, $C$ is a weakly Artinian $A$-algebra in the sense of Definition 2 of my answer to this.
By Lemma 2 of my answer to this, $C$ is a weakly Artinian ring.
Let $S = B - P$.
Let $C_P$ and $B_P$ be the localizations of $C$ and $B$ with respect to $S$ respectively.
By this, $leng_A C/PC$ is finite. Hence by Lemma 7 of my answer to this, $C/PC$ is finitely generated as an $A$-module. Hence $C/PC$ is finitely generated as a $B$-module.
Hence $C_P/PC_P$ is finitely generated as a $B_P$-module.
Since $PC_P \neq C_P$, by the lemma of the answer by QiL to this, there exists a maximal ideal of $C_P/PC_P$ whose preimage is $PB_P$.
Hence there exists a maximal ideal $Q$ of $C_P$ such that $PB_P = Q \cap B_P$.
Let $Q' = Q \cap C$.
Then $Q'$ is a prime ideal of $C$ lying over $P$.
By Lemma 2, $C_{Q'}$ is a discrete valuation ring and it dominates $B_P$.
QED
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Do equal distance distributions imply equivalence? Let $A$ and $B$ be two binary $(n,M,d)$ codes. We define $a_i = \#\{(w_1,w_2) \in A^2:\:d(w_1,w_2) = i\}$, and same for $b_i$. If $a_i = b_i$ for all $i$, can one deduct that $A$ and $B$ are equivalent, i.e. equal up to permutation of positions and permutation of letters in a fixed position?
| See Example 3.5.4 and the "other examples ... given in Section IV-4.9."
EDIT: Here are two inequivalent binary codes with the same distance enumerator. Code A consists of the codewords $a=00000000000$, $b=11110000000$, $c=11111111100$, $d=11000110011$; code R consists of the codewords $r=0000000000$, $s=1111000000$, $t=0111111111$, $u=1000111100$. Writing $xy$ for the Hamming distance between words $x$ and $y$, we have $ab=rs=4$, $bc=ru=5$, $cd=tu=6$, $ac=rt=9$, and $ad=bd=st=su=7$, so the distance enumerators are identical. But code A has the 7-7-4 triangle $adb$, while code R has the 7-7-6 triangle $tsu$, so there is no distance-preserving bijection between the two codes.
I have also posted this as an answer to the MO question. I'm sure there are shorter examples.
MORE EDIT: a much shorter example:
Code A consists of the codewords $a=000$, $b=100$, $c=011$, $d=111$; code R consists of the codewords $r=0000$, $s=1000$, $t=0100$, $u=0011$. Both codes have two pairs of words at each of the distances 1, 2, and 3; R has a 1-1-2 triangle, A doesn't.
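The short example is easy to verify by machine (my own sketch): both codes have identical distance enumerators, yet only R contains a 1-1-2 triangle, so no distance-preserving bijection between them exists.

```python
from itertools import combinations
from collections import Counter

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

A = ["000", "100", "011", "111"]
R = ["0000", "1000", "0100", "0011"]

# Distance enumerator: multiset of pairwise Hamming distances.
def enumerator(code):
    return Counter(hamming(x, y) for x, y in combinations(code, 2))

# Sorted distance triples of all 3-element subsets ("triangles").
def triangles(code):
    return {tuple(sorted((hamming(x, y), hamming(y, z), hamming(x, z))))
            for x, y, z in combinations(code, 3)}

assert enumerator(A) == enumerator(R)                      # same enumerator
assert (1, 1, 2) in triangles(R) and (1, 1, 2) not in triangles(A)
```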
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A strangely connected subset of $\Bbb R^2$ Let $S\subset{\Bbb R}^2$ (or any metric space, but we'll stick with $\Bbb R^2$) and let $x\in S$. Suppose that all sufficiently small circles centered at $x$ intersect $S$ at exactly $n$ points; if this is the case then say that the valence of $x$ is $n$. For example, if $S=[0,1]\times\{0\}$, every point of $S$ has valence 2, except $\langle0,0\rangle$ and $\langle1,0\rangle$, which have valence 1.
This is a typical pattern, where there is an uncountable number of 2-valent points and a finite, possibly empty set of points with other valences. In another typical pattern, for example ${\Bbb Z}^2$, every point is 0-valent; in another, for example a disc, none of the points has a well-defined valence.
Is there a nonempty subset of $\Bbb R^2$ in which every point is 3-valent? I think yes, one could be constructed using a typical transfinite induction argument, although I have not worked out the details. But what I really want is an example of such a set that can be exhibited concretely.
What is it about $\Bbb R^2$ that everywhere 2-valent sets are well-behaved, but
everywhere 3-valent sets are crazy? Is there some space we could use instead of $\Bbb R^2$ in which the opposite would be true?
| I claim there is a set $S \subseteq {\mathbb R}^2$ that contains exactly three points in every circle.
Well-order all circles by the first ordinal of cardinality $\mathfrak c$ as $C_\alpha, \alpha < \mathfrak c$. By transfinite induction I'll construct sets $S_\alpha$ with
$S_\alpha \subseteq S_\beta$ for $\alpha < \beta$, and take
$S = \bigcup_{\alpha < {\mathfrak c}} S_\alpha$. These will have the following properties:
*
*$S_\alpha$ contains exactly three points on every circle $C_\beta$ for $\beta \le \alpha$.
*$S_\alpha$ does not contain more than three points on any circle.
*$\text{card}(S_\alpha) \le 3\, \text{card}(\alpha)$
We begin with $S_1$ consisting of any three points on $C_1$.
Now given $S_{<\alpha} = \bigcup_{\beta < \alpha} S_\beta$, consider the circle $C_\alpha$.
Let $k$ be the cardinality of $C_\alpha \cap S_{<\alpha}$. By property (2), $k \le 3$. If $k = 3$, take $S_\alpha = S_{<\alpha}$.
Otherwise we need to add in $3-k$ points. Note that there are fewer than ${\mathfrak c}$ circles determined by triples of points in $S_{<\alpha}$, all of which are different from $C_\alpha$, and so there are fewer than $\mathfrak c$ points of $C_\alpha$ that are
on such circles. Since $C_\alpha$ has $\mathfrak c$ points, we can add in a point $a$ of $C_\alpha$ that is not on any of those circles. If $k \le 1$, we need a second point $b$ not to be on the circles determined by triples in $S_{<\alpha} \cup \{a\}$, and if $k=0$ a third point $c$ not on the circles determined by triples in $S_{<\alpha} \cup \{a,b\}$. Again this can be done, and it is easy to see that properties (1,2,3) are satisfied.
Finally, any circle $C_\alpha$ contains exactly three points of $S_\alpha$, and no
more than three points of $S$ (if it contained more than three points of $S$, it would have more than three in some $S_\beta$, contradicting property (2)).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/171751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 2,
"answer_id": 0
} |