Binomial distribution: finding $P$ of at least $x$ successes When calculating $P(\text{at least } x \text{ successes})$, one computes $P(\text{max } (x-1))$ instead, and then takes $1 - P(\text{max }(x-1))$. This works, and I understand it: we use the complement to calculate it, because the calculator supports "at most" directly. But what I do not understand is the following. When calculating a combination of these, $P(\text{max}\,\, x\,\,\, \text{and}\,\,\, \text{min}\,\, y)$, we can just forget about the $1 - P(\text{max}\,\, (x-1))$ part, and use $P(\text{max}\,(x-1))$ directly. For example: $$P(\text{at least 150 sixes and at most 180 sixes}) = P(\text{max}\,\, 180 \,\,\text{sixes}) - P(\text{max}\,\,149\,\,\text{sixes}).$$ And then we don't have to do the $1-{}$ part. Why is this?
| If you threw 1000 dice, you might want to know
$$\Pr(\text{at least 150 sixes and at most 1000 sixes}) = \Pr(\text{at most 1000 sixes}) - \Pr(\text{at most 149 sixes}).$$
But you cannot get more than 1000 sixes from 1000 dice, so $\Pr(\text{at most 1000 sixes}) =1$, and you can rewrite this more briefly as
$$\Pr(\text{at least 150 sixes}) = 1 - \Pr(\text{at most 149 sixes}).$$
In other words, the method in your first case is a particular case of the method in your second case.
Incidentally, by the time you get to 150 sixes you could be using the central limit theorem, in which case you are using "max" because many tables and calculators give the cumulative distribution function of a standard normal $\Phi(x)=\Pr(X \le x)$.
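As a numerical sanity check (my addition, not part of the original answer), the identity can be verified with nothing but the standard library, using exact binomial probabilities for 1000 dice:

```python
import math

def binom_pmf(n, k, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(n, k, p):
    """P(X <= k), i.e. 'at most k successes'."""
    return sum(binom_pmf(n, i, p) for i in range(k + 1))

n, p = 1000, 1/6  # 1000 dice, counting sixes

# P(at least 150 and at most 180) via the "max" (CDF) trick:
via_cdf = binom_cdf(n, 180, p) - binom_cdf(n, 149, p)
direct = sum(binom_pmf(n, i, p) for i in range(150, 181))
assert abs(via_cdf - direct) < 1e-12

# P(at least 150) needs the complement, because "at most 1000" is certain:
assert abs(binom_cdf(n, n, p) - 1) < 1e-9
at_least = 1 - binom_cdf(n, 149, p)
assert abs(at_least - sum(binom_pmf(n, i, p) for i in range(150, n + 1))) < 1e-9
```

The last two assertions are exactly the answer's point: subtracting $\Pr(\text{at most 1000 sixes}) = 1$ is what produces the $1 - {}$ form.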
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
how to determine if two graphs are not isomorphic What are some good ways of determining if two reasonably simple looking graphs are not isomorphic? I know that you can check their cycle or some weird property (for certain graphs), but are there some other tricks to do this?
| With practice one can often quickly tell that graphs are not isomorphic. When graphs G and H are isomorphic they have the same chromatic number, if one has an Eulerian or Hamiltonian circuit so does the other, if G is planar so is H, and if one is connected so is the other. If one has drawings of the two graphs, our visual systems are so attuned to finding patterns that spotting a property one graph has and the other lacks often makes it easy to show the graphs are not isomorphic.
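To make the invariant-checking idea concrete, here is a small Python sketch (my own illustration, not from the answer). A 6-cycle and two disjoint triangles have identical degree sequences and edge counts, but differ on connectivity, so they cannot be isomorphic:

```python
def degree_sequence(adj):
    """Sorted list of vertex degrees; adj maps vertex -> set of neighbors."""
    return sorted(len(nbrs) for nbrs in adj.values())

def is_connected(adj):
    """Depth-first search from an arbitrary start vertex."""
    if not adj:
        return True
    seen, stack = set(), [next(iter(adj))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u])
    return len(seen) == len(adj)

# C6 (a 6-cycle) vs. two disjoint triangles: same degree sequence, not isomorphic
C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
twoK3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}

assert degree_sequence(C6) == degree_sequence(twoK3)   # invariant agrees...
assert is_connected(C6) and not is_connected(twoK3)    # ...but this one differs
```

The example also shows why one needs a whole toolbox of invariants: any single invariant can agree on non-isomorphic graphs.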
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 2
} |
Fractions with radicals in the denominator I'm working my way through the videos on the Khan Academy, and have a hit a road block. I can't understand why the following is true:
$$\frac{6}{\quad\frac{6\sqrt{85}}{85}\quad} = \sqrt{85}$$
| No one seems to have posted the really simple way to do this yet. First simplify the compound fraction: $\dfrac{6}{\;6\sqrt{85}/85\;} = \dfrac{6\cdot 85}{6\sqrt{85}} = \dfrac{85}{\sqrt{85}}$. Then
$$
\frac{85}{\sqrt{85}} = \frac{\sqrt{85}\sqrt{85}}{\sqrt{85}}
$$
and cancel the common factor.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Validity of $\sum\limits_{i=1}^n(a_i^2+b_i^2+c_i^2+d_i^2)\lambda_i\geq\lambda_1+\lambda_2+\lambda_3+\lambda_4$? Suppose that $\lambda_1\leq\lambda_2\leq\dots\leq\lambda_n$ is a sequence of real numbers. Clearly, if $a=(a_1,\dots, a_n)$ is a unit vector, then $\sum\limits_{i=1}^na_i^2\lambda_i\geq \lambda_1$. I want to see if the following generalization is true or not:
If $a=(a_1,\dots, a_n)$, $b=(b_1,\dots, b_n)$, $c=(c_1,\dots, c_n)$, and $d=(d_1,\dots, d_n)$ ($n\geq 4$) form an orthonormal set, I wonder if we have
$\sum\limits_{i=1}^n(a_i^2+b_i^2+c_i^2+d_i^2)\lambda_i\geq\lambda_1+\lambda_2+\lambda_3+\lambda_4$.
| It doesn't hold: if $\lambda_1=x<0$ and $\lambda_i=0,\ i=2,\dots,n$, your inequality becomes $(a_1^2+b_1^2+c_1^2+d_1^2)x\geq x$, which becomes false if we find an orthonormal system $(a,b,c,d)$ such that $a_1^2+b_1^2+c_1^2+d_1^2>1$. For example
$a=(\frac{\sqrt{2}}{2}, -\frac{\sqrt{2}}{2},0,...,0)$,
$b=(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{6},0,...,0)$,
$c=(\frac{\sqrt{6}}{6},\frac{\sqrt{6}}{6},-\frac{\sqrt{3}}{3},-\frac{\sqrt{3}}{3},0...,0)$,
$d=(0,0,0,0,1,...,0)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
} |
Given $N$, count $\{(m,n) \mid 0\leq m,n<N,\ m\perp n\}$ I'm confused at exercise 4.49 on page 149 from the book "Concrete Mathematics: A Foundation for Computer Science":
Let $R(N)$ be the number of pairs of integers $(m,n)$ such that $0\leq m < N$, $0\leq n<N$, and $m\perp n$.
(a) Express $R(N)$ in terms of the $\Phi$ function.
(b) Prove that $$R(N) = \displaystyle\sum_{d\geq 1}\left\lfloor\frac{N}{d}\right\rfloor^2 \mu(d)$$
* $m\perp n$ means $m$ and $n$ are relatively prime
* $\mu$ is the Möbius function
* $\Phi(x)=\sum_{1\leq k\leq x}\phi(k)$
* $\phi$ is the totient function
For question (a), my solution is $R(N) = 2 \cdot \Phi(N-1) + [N>1]$ (where $[\;\;]$ is the Iverson bracket, i.e. [True]=1, [False]=0)
Clearly $R(1)$ has to be zero, because the only possibility of $(m,n)$ for testing is $(0,0)$, which doesn't qualify. This agrees with my answer.
But here is the book's answer:
Either $m<n$ ($\Phi(N-1)$ cases) or $m=n$ (one case) or $m>n$ ($\Phi(N-1)$ again). Hence $R(N) = 2\Phi(N-1) + 1$.
$m=n$ is only counted when $m=n=1$, but how could that case appear when $N=1$?
I thought the book assumed $R$ is only defined for $N\geq 2$. But their answer for question (b) relies on $R(N) = 2\Phi(N-1) + 1$ and proves the proposition also for the case $N=1$. They actually prove $2\Phi(N-1) + 1 = RHS$ for $N\geq 1$. And if my assumption about the $R(1)$ case is true, then the proposition in (b) cannot be valid for $N=1$, for $LHS=0$ and $RHS=1$. But the fact that it's invalid just for one value seems a little fishy to me.
My question is, where am I confused? What is wrong in my understanding about the case $R(1)$?
Thank you very much.
| I did a search and found the 1994-1997 errata for the book.
So, the question was changed to:
Let $R(N)$ be the number of pairs $(m,n)$ such that $1\leq m\leq N$, $1\leq n\leq N$, and $m\perp n$.
This also slightly changes the solution for R(N), and everything makes sense. I don't post the solution to prevent spoilers.
I'm sorry for having wasted everybody's time.
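Under the errata's corrected definition ($1\le m,n\le N$), the identity in (b) is easy to check by brute force. A quick Python sketch (my own addition, with a naive Möbius function):

```python
from math import gcd

def mobius(d):
    """Naive Möbius function via trial division."""
    mu, p = 1, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:
                return 0  # d has a squared prime factor
            mu = -mu
        p += 1
    if d > 1:          # one remaining prime factor
        mu = -mu
    return mu

def R_brute(N):
    """Errata version: pairs with 1 <= m, n <= N and gcd(m, n) = 1."""
    return sum(1 for m in range(1, N + 1)
                 for n in range(1, N + 1) if gcd(m, n) == 1)

def R_formula(N):
    return sum((N // d) ** 2 * mobius(d) for d in range(1, N + 1))

assert all(R_brute(N) == R_formula(N) for N in range(1, 40))
```

In particular $R(1)=1$ (the single pair $(1,1)$), which matches the formula's value and resolves the $N=1$ worry in the question.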
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/47986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Sum of First $n$ Squares Equals $\frac{n(n+1)(2n+1)}{6}$ I am just starting into calculus and I have a question about the following statement I encountered while learning about definite integrals:
$$\sum_{k=1}^n k^2 = \frac{n(n+1)(2n+1)}{6}$$
I really have no idea why this statement is true. Can someone please explain why this is true and if possible show how to arrive at one given the other?
| Notice that $(k+1)^3 - k^3 = 3k^2 + 3k + 1$ and hence
$$(n+1)^3 = \sum_{k=0}^n \left[ (k+1)^3 - k^3\right] = 3\sum_{k=0}^n k^2 + 3\sum_{k=0}^n k + \sum_{k=0}^n 1$$
which gives you
$$\begin{align}
\sum_{k=1}^n k^2
& = \frac{1}{3}(n+1)^3 - \frac{1}{2}n(n+1) - \frac{1}{3}(n+1) \\
& = \frac{1}{6}(n+1) \left[ 2(n+1)^2 - 3n - 2\right] \\
& = \frac{1}{6}(n+1)(2n^2 +n) \\
& = \frac{1}{6}n(n+1)(2n+1)
\end{align}$$
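As a quick check of both the telescoping step and the closed form (my addition, not part of the answer):

```python
# the closed form and the telescoping identity, checked for many n
for n in range(1, 300):
    assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    # the telescoping sum the derivation starts from
    assert (n + 1) ** 3 == sum((k + 1) ** 3 - k ** 3 for k in range(0, n + 1))
```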
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "145",
"answer_count": 32,
"answer_id": 6
} |
How can I complexify the right hand side of this differential equation? I want to get a particular solution to the differential equation
$$
y''+2y'+2y=2e^x \cos(x)
$$
and therefore I would like to 'complexify' the right hand side. This means that I want to write the right hand side as $q(x)e^{\alpha x}$ with $q(x)$ a polynomial. How is this possible?
The solution should be $(1/4)e^x(\sin(x)+\cos(x))$ but I cannot see that.
| The point is that (for real $x$) $2 e^x \cos(x)$ is the real part of $2 e^x e^{ix} = 2 e^{(1+i)x}$. Find a particular solution of $y'' + 2 y' + 2 y = 2 e^{(1+i)x}$, and its real part is a solution of $y'' + 2 y' + 2 y = 2 e^x \cos(x)$.
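The recipe can be checked numerically (my addition, assuming only the standard library). Substituting $y = A e^{(1+i)x}$ into the complexified equation gives $A\big((1+i)^2 + 2(1+i) + 2\big) = 2$, so $A = \frac{1-i}{4}$, and the real part reproduces the stated answer:

```python
import cmath, math

alpha = 1 + 1j
# try y = A e^{(1+i)x} in y'' + 2y' + 2y = 2 e^{(1+i)x}
A = 2 / (alpha**2 + 2 * alpha + 2)
assert abs(A - (1 - 1j) / 4) < 1e-12

def y(x):
    """Real part of the complex particular solution."""
    return (A * cmath.exp(alpha * x)).real

# it matches the stated answer (1/4) e^x (sin x + cos x) ...
for x in (0.0, 0.7, -1.3, 2.5):
    assert abs(y(x) - 0.25 * math.exp(x) * (math.sin(x) + math.cos(x))) < 1e-9

# ... and satisfies the original real ODE (central finite differences)
h = 1e-5
for x in (0.0, 0.7, -1.3):
    yp = (y(x + h) - y(x - h)) / (2 * h)
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    assert abs(ypp + 2 * yp + 2 * y(x) - 2 * math.exp(x) * math.cos(x)) < 1e-4
```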
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Multi-dimensional sequences I was just wondering if it is possible to consider sequences in multiple dimensions? Denote $(x_{t})^{n}$ to be a sequence in dimension $n$. So the "normal" sequences we are used to are denoted by $(x_{t})^{1}$. Likewise, $(x_{t})^{2} = \left(x_{1}(t), x_{2}(t) \right)$, etc..
It seems that for an $n$-dimensional sequence to converge, all of its component sequences must converge. Is there any utility in looking at $n$ dimensional sequences that have a "significant" number of their component sequences converge? More specifically:
Let $$(x_{t})^{n} = \left(x_{1}(t), \dots, x_{n}(t) \right)$$ be an $n$ dimensional sequence. Suppose $p$ of the component sequences converge where $p <n$. What does this tell us about the behavior of $(x_{t})^{n}$?
| Why not look at a simple example? Consider $(0,0,0),(0,0,1),(0,0,2),(0,0,3),\dots$. Two of the three component sequences converge. What would you say about the behavior of this sequence of triples?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find an absorbing set in the table: fast algorithm Consider a $m\times n,m\leq n$ matrix $P = (p_{ij})$ such that $p_{ij}$ is either $0$ or $1$ and for each $i$ there is at least one $j$ such that $p_{ij} =1$. Denote $s_i = \{1\leq j\leq n:p_{ij} = 1\}$ so $s_i$ is always non-empty.
We call a set $A\subseteq [1,m]$ absorbing if $s_i\subseteq A$ holds for all $i\in A$. If I apply my results directly then I will have an algorithm with a complexity of $\mathcal{O}(m^2n)$ which will find the largest absorbing set.
On the other hand I was not focused on developing this algorithm and hence I wonder if you could advise me some algorithms which are faster?
P.S. please retag if my tags are not relevant.
Edited: I reformulate the question (otherwise it was trivial).
I think this problem can be considered as searching for the largest loop in the graph (if we connect $i$ to $j$ iff $p_{ij} = 1$).
| Since you have to look at every entry at least once to find $A_{\max}$ (the largest absorbing set), the time complexity of any algorithm cannot be lower than $\mathcal{O}(m\times n)$. I think the algorithm below achieves that.
Let $A_i$ be the smallest absorbing set containing $i$, or empty if $i$ is not part of an absorbing set. To find $A_i$, the algorithm starts with $s_i$ and joins it with every $A_j$ for $j\in s_i$. It uses caching to avoid calculating $A_j$ twice. $A_{\max}$ should be the union of all $A_i$s.
A_max := empty set
for i from 1 to m
    merge A_max with result from explore(i)

explore(i)
    if i is already explored
        return known result
    else
        for j from m + 1 to n
            if p_ij = 1
                return empty set
        A_i := empty set
        for j from 1 to m
            if p_ij = 1
                add j to A_i
                if i not equal to j
                    A_j = explore(j)
                    if A_j is empty then
                        return empty set
                    else
                        merge A_i with A_j
        return A_i
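For reference, here is an equivalent greatest-fixed-point formulation in runnable Python (my own sketch, using 0-based indices rather than the answer's recursion): start from all rows and repeatedly discard any row whose successor set escapes the current candidate set.

```python
def largest_absorbing(P):
    """P: m x n 0/1 matrix (m <= n), rows 0..m-1, columns 0..n-1.
    A set A of row indices is absorbing when s_i is a subset of A for
    every i in A; the largest such set is the greatest fixed point of
    repeatedly discarding rows whose successor set leaves A."""
    m, n = len(P), len(P[0])
    s = [{j for j in range(n) if P[i][j]} for i in range(m)]
    A = set(range(m))
    changed = True
    while changed:
        changed = False
        for i in list(A):
            if not s[i] <= A:  # s_i leaves A (including any column j >= m)
                A.discard(i)
                changed = True
    return A

# rows 0 and 1 point at each other; row 2 escapes to column 3 (>= m)
P = [[0, 1, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 1]]
assert largest_absorbing(P) == {0, 1}
```

This simple version is worst-case $\mathcal{O}(m^2 n)$; a queue of "rows to re-check" would bring it down toward the $\mathcal{O}(mn)$ bound discussed above.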
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Examples of results failing in higher dimensions A number of economists do not appreciate rigor in their usage of mathematics, and I find it very discouraging.
One example of this rigor-lacking approach is proofs done via graphs or pictures without formalizing the reasoning. I would thus like to come up with a few examples of theorems (or other important results) which may be true in low dimensions (and are pretty intuitive graphically) but fail in higher dimensions.
By the way, these examples are directed towards people who do not have a strong mathematical background (some linear algebra and calculus), so avoiding technical statements would be appreciated.
Jordan-Schoenflies theorem could be such an example (though most economists are unfamiliar with the notion of a homeomorphism). Could you point me to any others?
Thanks.
| Maybe this one:
Every polygon has a triangulation, but not all polyhedra can be tetrahedralized (Schönhardt polyhedron).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46",
"answer_count": 12,
"answer_id": 5
} |
What is the background for $\sum_{k=1}^n|f(x_k)-f(x_{k-1})|$? The question is from the following problem:
If $f$ is the function whose graph is indicated in the figure above, then the least upper bound (supremum) of
$$\big\{\sum_{k=1}^n|f(x_k)-f(x_{k-1})|:0=x_0<x_1<\cdots<x_{n-1}<x_n=12\big\}$$
appears to be
$A. 2\quad B. 7\quad C. 12\quad D. 16\quad E. 21$
I don't know what the set above means. And I am curious about the background of the set in real analysis.
| Total variation sums up how much a function bobs up and down; the displayed set collects the approximating sums over all partitions of $[0,12]$, and its supremum is the total variation of $f$. Yours comes to 16 units. Therefore choose D.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Proof that a function is holomorphic How can I show that the function $$f\colon\mathbb{C}\setminus\{-i\}\rightarrow\mathbb{C}\quad \text{defined by}\quad f(z)= \frac{1+iz}{1-iz}$$ is a holomorphic function?
| One way is by differentiating it. You have $f(z)=\frac{1+iz}{1-iz}=-1+2\cdot\frac{1}{1-iz}$, so when $iz\neq 1$,
$\begin{align*}\lim_{h\to0}\frac{f(z+h)-f(z)}{h}&=\lim_{h\to 0}\frac{2}{h}\left(\frac{1}{1-i(z+h)}-\frac{1}{1-iz}\right)\\
&=\lim_{h\to 0}\frac{2}{h}\cdot\frac{1-iz-(1-i(z+h))}{(1-i(z+h))(1-iz)}\\
&\vdots
\end{align*}$
The next steps involve some cancellation, after which you can safely let $h$ go to $0$.
This is not a very efficient method, but it illustrates that it only takes a bit of algebra to work directly with the definition of the derivative in this case. Simpler would be to apply a widely applicable tool, namely the quotient rule, along with the simpler fact that $1\pm iz$ are holomorphic.
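Carrying the cancellation through gives $f'(z) = \dfrac{2i}{(1-iz)^2}$ by the quotient rule. A quick numerical illustration (my addition): the difference quotient approaches this value no matter from which direction $h$ tends to $0$ in the complex plane, which is the content of complex differentiability.

```python
f = lambda z: (1 + 1j * z) / (1 - 1j * z)
# quotient rule gives f'(z) = 2i / (1 - iz)^2
fprime = lambda z: 2j / (1 - 1j * z) ** 2

h = 1e-6
for z in (0, 1 + 2j, -3 + 0.5j, 0.3 - 0.4j):
    for direction in (1, 1j, (1 + 1j) / abs(1 + 1j)):  # approach 0 from several directions
        dq = (f(z + h * direction) - f(z)) / (h * direction)
        assert abs(dq - fprime(z)) < 1e-4
```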
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Problem finding zeros of complex polynomial I'm trying to solve this problem
$$ z^2 + (\sqrt{3} + i)|z| \bar{z}^2 = 0 $$
So, I know $ |z^2| = |z|^2 = a^2 + b ^2 $ and $ \operatorname{Arg}(z^2) = 2 \operatorname{Arg} (z) - 2k \pi = 2 \arctan (\frac{b}{a} ) - 2 k\pi $ for a $ k \in \mathbb{Z} $. Regarding the other term, I know $ |(\sqrt{3} + i)|z| \bar{z}^2 | = |z|^3 |\sqrt{3} + i| = 2 |z|^3 = 2(a^2 + b^2)^{3/2} $ and because of de Moivre's theorem, I have $ \operatorname{Arg} [(\sqrt{3} + i ) |z|\bar{z}^2] = \frac{\pi}{6} + 2 \operatorname{Arg} (z) - 2Q\pi $.
Using all of this I can rewrite the equation as follows
$$\begin{align*}
&|z|^2 \Bigl[ \cos (2 \operatorname{Arg} (z) - 2k \pi) + i \sin (2 \operatorname{Arg}(z) - 2k \pi)\Bigr]\\
&\qquad \mathop{+} 2|z|^3 \Biggl[\cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right) + i \sin \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right)\Biggr] = 0
\end{align*} $$
Which, assuming $ z \neq 0 $, can be simplified as
$$\begin{align*}
&\cos (2 \operatorname{Arg} (z) - 2k \pi) + i \sin (2 \operatorname{Arg} (z) - 2k \pi) \\
&\qquad\mathop{+} 2 |z|\Biggl[\cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q \pi \right) + i \sin \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right)\Biggr] = 0
\end{align*} $$
Now, from this I'm not sure how to go on. I tried a few things that got me nowhere like trying to solve
$$ \cos (2 \operatorname{Arg}(z) - 2k \pi) = 2 |z| \cos \left(\frac{\pi}{6} + 2 \operatorname{Arg} (z) -2Q\pi\right) $$
I'm really lost here, I don't know how to keep going and I've looked for error but can't find them. Any help would be greatly appreciated.
| Here is an alternative to solving it using polar form.
Let $z=a+bi$, so that $\bar{z}=a-bi$ and $|z|=\sqrt{a^2+b^2}$. Then you want to solve
$$(a+bi)^2+(\sqrt{3}+i)\sqrt{a^2+b^2}(a-bi)^2=0,$$
which expands to
$$(a^2-b^2)+2abi+(\sqrt{3}+i)\sqrt{a^2+b^2}\left((a^2-b^2)-2abi\right)=0$$
Thus, we need both the real part and the imaginary part of the left side to be 0, i.e.
$$(a^2-b^2)+\sqrt{a^2+b^2}\left(\sqrt{3}\cdot (a^2-b^2)+2ab\right)=0$$
and
$$2ab+\sqrt{a^2+b^2}\left(-2ab\sqrt{3}+(a^2-b^2)\right)=0.$$
It should be possible to solve these equations by simple manipulations, though I haven't worked it out myself yet.
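For what it's worth, the polar route in the question does close out (this continuation is mine, not the answer's): writing $z = re^{i\theta}$, the equation becomes $e^{4i\theta} = -(\sqrt{3}+i)r$; taking moduli gives $1 = 2r$, so $r = \tfrac12$, and then $e^{4i\theta} = e^{7\pi i/6}$ gives $\theta = \tfrac{7\pi}{24} + \tfrac{k\pi}{2}$, $k = 0,1,2,3$, together with $z = 0$. A numerical check:

```python
import cmath, math

# candidate nonzero roots: |z| = 1/2, arg z = 7*pi/24 + k*pi/2
roots = [0.5 * cmath.exp(1j * (7 * math.pi / 24 + k * math.pi / 2))
         for k in range(4)]
for z in roots + [0]:
    residual = z**2 + (math.sqrt(3) + 1j) * abs(z) * z.conjugate() ** 2
    assert abs(residual) < 1e-12
```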
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Equality of outcomes in two Poisson events I have a Poisson process with a fixed (large) $\lambda$. If I run the process twice, what is the probability that the two runs have the same outcome?
That is, how can I approximate
$$f(\lambda)=e^{-2\lambda}\sum_{k=0}^\infty\frac{\lambda^{2k}}{k!^2}$$
for $\lambda\gg1$? If there's a simple expression about $+\infty$ that would be best, but I'm open to whatever can be suggested.
| Fourier transforms yield a fully rigorous proof.
First recall that, as explained here, for every integer valued random variable $Z$,
$$
P(Z=0)=\int_{-1/2}^{1/2}E(\mathrm{e}^{2\mathrm{i}\pi tZ})\mathrm{d}t.
$$
Hence, if $X_\lambda$ and $Y_\lambda$ are independent Poisson random variables with parameter $\lambda$,
$$
f(\lambda)=P(X_\lambda=Y_\lambda)=\int_{-1/2}^{1/2}E(\mathrm{e}^{2\mathrm{i}\pi tX_\lambda})E(\mathrm{e}^{-2\mathrm{i}\pi tY_\lambda})\mathrm{d}t.
$$
For Poisson distributions, one knows that $E(s^{X_\lambda})=\mathrm{e}^{-\lambda(1-s)}$ for every complex number $s$. This yields
$$
f(\lambda)=\int_{-1/2}^{1/2}\mathrm{e}^{-2\lambda(1-\cos(2\pi t))}\mathrm{d}t=\int_{-1/2}^{1/2}\mathrm{e}^{-4\lambda\sin^2(\pi t)}\mathrm{d}t.
$$
Consider the change of variable $u=2\pi\sqrt{2\lambda}t$. One gets
$$
f(\lambda)=\frac1{\sqrt{4\pi\lambda}}\int_\mathbb{R} g_\lambda(u)\mathrm{d}u,
$$
with
$$
g_\lambda(u)=\frac1{\sqrt{2\pi}}\mathrm{e}^{-4\lambda\sin^2(u/\sqrt{8\lambda})}\,[|u|\le\pi\sqrt{2\lambda}].
$$
When $\lambda\to+\infty$, $g_\lambda(u)\to g(u)$ where $g$ is the standard Gaussian density, defined by
$$
g(u)=\frac1{\sqrt{2\pi}}\mathrm{e}^{-u^2/2}.
$$
Furthermore, the inequality
$$4\lambda\sin^2(u/\sqrt{8\lambda})\ge2u^2/\pi^2,
$$
valid for every $|u|\le\pi\sqrt{2\lambda}$, shows that the functions $g_\lambda$ are uniformly dominated by an integrable function. Lebesgue dominated convergence theorem and the fact that $g$ is a probability density yield finally
$$
\int_\mathbb{R} g_\lambda(u)\mathrm{d}u\to1,\qquad\text{hence}\ \sqrt{4\pi\lambda}f(\lambda)\to1.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Homology of the loop space Let $X$ be a nice space (manifold, CW-complex, what you prefer). I was wondering if there is a computable relation between the homology of $\Omega X$, the loop space of $X$, and the homology of $X$. I know that, almost by definition, the homotopy groups are the same (but shifted a dimension). Because the relation between homotopy groups and homology groups is very difficult, I expect that the homology of $\Omega X$ is very hard to compute in general. References would be great.
| Adams and Hilton gave a functorial way to describe the homology ring $H_\ast(\Omega X)$ in terms of the homology $H_\ast(X)$, at least when $X$ is a simply-connected CW complex with one $0$-cell and no $1$-cells. You'll find a more modern discussion of their construction here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Hauptmoduls for modular curves If I have a modular curve, how does one in general find a Hauptmodul for this curve?
| As Pete mentions in the comments, computing Hauptmoduls for genus zero modular curves has a long history (results go back to work by Klein and Gierster published in 1879). Since those early works, there have been many papers on Hauptmoduls. For a nice discussion about Hauptmoduls (including references to the aforementioned papers of Klein and Gierster) see
On Rationally Parametrized Modular Equations by Robert Maier, which includes a table of Hauptmoduls for $X_0(N)$ on page 11. Though one can construct Hauptmoduls for $X_0(N)$ using eta-functions, this does not work in general. In general one can instead use Siegel functions (or the closely related Klein forms) (see especially this paper by Sutherland and Zywina which explains a general procedure for constructing Hauptmoduls from Siegel functions starting on page 11). (One can also construct Hauptmoduls from "generalized eta-functions", which are essentially Siegel functions.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finding the real roots of a polynomial Recent posts on polynomials have got me thinking.
I want to find the real roots of a polynomial with real coefficients in one real variable $x$. I know I can use a Sturm Sequence to find the number of roots between two chosen limits $a < x < b$.
Given that $p(x) = \sum_{r=0}^n a_rx^r$ with $a_n = 1$ what are the tightest values for $a$ and $b$ which are simply expressed in terms of the coefficients $a_r$ and which make sure I capture all the real roots?
I can quite easily get some loose bounds and crank up the computer to do the rest, and if I approximate solutions by some algorithm I can get tighter. But I want to be greedy and get max value for min work.
| I actually had to do this for school about a month ago, and the method I came up with was as follows:
* Note that all zeros of a polynomial are between a local minimum and a local maximum (including the limits at infinity). However, not all adjacent pairs of a min and a max have a zero in between, but that is irrelevant.
* Therefore, one can find the mins and maxes and converge on the root in between by using the bisection method (if they're on opposite sides of the x-axis).
* Finding the mins and maxes is accomplished by taking the derivative and finding its zeros.
* Considering that this is a procedure for finding zeros, the previous step can be done recursively.
* The base case for recursion is a line. Here, $y=ax+b$ and the zero is $-\frac{b}{a}$.
This is a very easy and quick way to find all real zeros (to theoretically arbitrary precision). :D
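The recursion above can be sketched in a few dozen lines of Python (my own implementation under the answer's scheme; the outer interval uses the Cauchy root bound $1 + \max_i |a_i|$ for a monic polynomial, and repeated roots are not handled carefully):

```python
def peval(c, x):
    """Horner evaluation; c lists coefficients, highest power first."""
    v = 0.0
    for a in c:
        v = v * x + a
    return v

def deriv(c):
    n = len(c) - 1
    return [a * (n - i) for i, a in enumerate(c[:-1])]

def bisect(c, a, b):
    """Bisection; assumes a sign change on [a, b]."""
    fa = peval(c, a)
    for _ in range(200):
        m = (a + b) / 2
        if fa * peval(c, m) <= 0:
            b = m
        else:
            a, fa = m, peval(c, m)
    return (a + b) / 2

def real_roots(c, tol=1e-9):
    c = [a / c[0] for a in c]                  # make monic
    if len(c) == 2:                            # base case: a line
        return [-c[1]]
    bound = 1 + max(abs(a) for a in c[1:])     # Cauchy root bound
    pts = sorted([-bound] + real_roots(deriv(c), tol) + [bound])
    roots = [p for p in pts if abs(peval(c, p)) < tol]  # critical points that are roots
    for a, b in zip(pts, pts[1:]):
        if peval(c, a) * peval(c, b) < 0:
            roots.append(bisect(c, a, b))
    return sorted(roots)

# (x - 1)(x + 2)(x - 3) = x^3 - 2x^2 - 5x + 6
found = real_roots([1, -2, -5, 6])
assert all(abs(f - r) < 1e-6 for f, r in zip(found, [-2, 1, 3]))
```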
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to determine a number with the same number of odd and even divisors Given a number $N$, how does one determine the first number after $N$ with the same number of odd and even divisors?
For example, if we have $N=1$, then the next number we are searching for is:
2
because divisors:
odd : 1
even : 2
I figured out that this special number can't be odd, and obviously it can't be prime.
I can't find any formula for this — or do I just have to compute numbers one by one and check whether each is this special number? Obviously $1$ and the number itself are divisors of this number.
Cheers
| For a given integer $n$, every divisor larger than $\sqrt{n}$ is paired with a divisor smaller than $\sqrt{n}$. Use this to figure out a general principle.
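Spelling the principle out (my addition, not part of the answer): if $n = 2^a m$ with $m$ odd, each odd divisor $d$ of $m$ pairs with the even divisors $2d, 4d, \dots, 2^a d$, so there are $a$ times as many even divisors as odd ones, and the counts are equal exactly when $a = 1$, i.e. $n \equiv 2 \pmod 4$. A brute-force check:

```python
def equal_parity_divisors(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    odd = sum(1 for d in divs if d % 2)
    return 2 * odd == len(divs)

def next_special(N):
    """First number after N with equally many odd and even divisors."""
    n = N + 1
    while not equal_parity_divisors(n):
        n += 1
    return n

assert [next_special(k) for k in (1, 2, 5, 10)] == [2, 6, 6, 14]
# such numbers are exactly n = 2 * (odd), i.e. n % 4 == 2
assert all(equal_parity_divisors(n) == (n % 4 == 2) for n in range(1, 500))
```

So no search is needed at all: the answer is simply the next number congruent to $2$ modulo $4$.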
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
} |
Deriving the rest of trigonometric identities from the formulas for $\sin(A+B)$, $\sin(A-B)$, $\cos(A+B)$, and $\cos (A-B)$ I am trying to study for a test and the teacher suggest we memorize $\sin(A+B)$, $\sin(A-B)$, $\cos(A+B)$, $\cos (A-B)$, and then be able to derive the rest out of those. I have no idea how to get any of the other ones out of these, it seems almost impossible. I know the $\sin^2\theta + \cos^2\theta = 1$ stuff pretty well though. For example just knowing the above how do I express $\cot(2a)$ in terms of $\cot a$? That is one of my problems and I seem to get stuck half way through.
| Maybe this will help?
$\cot(x) = \dfrac{\cos x}{\sin x}$, so
$\cot(2a) = \dfrac{\cos(a + a)}{\sin(a + a)}$, and then I assume you know these two.
Edit: Had it saved as a tab and didn't see the posted answer, but I still think it would have been best to let you compute the rest by yourself, so that you could learn it by doing instead of reading.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 2
} |
How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$? How to prove $\text{Rank}(AB)\leq \min(\text{Rank}(A), \text{Rank}(B))$?
| I used a way to prove this which I thought may not be the most concise, but it feels very intuitive to me.
The matrix $AB$ consists of linear combinations of the columns of $A$, with the columns of $B$ supplying the coefficients. So it looks like...
$$\boldsymbol{AB}=\begin{bmatrix}
& & & \\
a_1 & a_2 & ... & a_n\\
& & &
\end{bmatrix}
\begin{bmatrix}
& & & \\
b_1 & b_2 & ... & b_n\\
& & &
\end{bmatrix}
=
\begin{bmatrix}
& & & \\
\boldsymbol{A}b_1 & \boldsymbol{A}b_2 & ... & \boldsymbol{A}b_n\\
& & &
\end{bmatrix}$$
Each column $\boldsymbol{A}b_i$ lies in the column space of $\boldsymbol{A}$, so the column space of $\boldsymbol{AB}$ is contained in that of $\boldsymbol{A}$; hence $rank(AB) \leq rank(A)$.
Symmetrically, each row of $\boldsymbol{AB}$ is a linear combination of the rows of $\boldsymbol{B}$, so the row space of $\boldsymbol{AB}$ is contained in that of $\boldsymbol{B}$; hence $rank(AB) \leq rank(B)$.
Put these two ideas together: the rank of $AB$ is capped by the rank of $A$ or $B$, whichever is smaller. Therefore, $rank(AB) \leq min(rank(A), rank(B))$.
Hope this helps you!
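The inequality is easy to spot-check with exact rational arithmetic (a sketch of my own, not part of the answer — the rank is computed by Gauss–Jordan elimination over `fractions.Fraction`):

```python
from fractions import Fraction

def rank(M):
    """Row-reduce over the rationals and count pivots."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

examples = [
    ([[1, 0], [0, 0]], [[0, 0], [0, 1]]),            # rank(AB) = 0 < min = 1
    ([[1, 2], [2, 4]], [[1, 0], [0, 1]]),            # rank(AB) = 1 = min
    ([[1, 2, 3], [4, 5, 6]], [[1, 1], [0, 1], [1, 0]]),
]
for A, B in examples:
    assert rank(matmul(A, B)) <= min(rank(A), rank(B))
```

The first example also shows the inequality can be strict: two rank-1 matrices whose product is zero.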
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/48989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 5,
"answer_id": 2
} |
Using Horner's Method I'm trying to evaluate a polynomial recursively using Horner's method.
It's rather simple when I have every power of $x$ (like: $x+x^2+x^3\ldots$), but what if I'm missing some of those? Example: $-6+20x-10x^2+2x^4-7x^5+6x^7$.
I would also appreciate it if someone could explain the method in more detail, I've used the description listed here but would like some more explanation.
| You can also carry it out in a synthetic division table. Suppose you want to evaluate $f(x) = x^4 - 3x^2 + x - 5$ for $x = 3$. Set up a table like this
1 0 -3 1 -5
3
-------------------------
1
Now multiply the number on the bottom and total as follows.
1 0 -3 1 -5
3 3
-------------------------
1 3
Work your way across in this manner.
1 0 -3 1 -5
3 3 9 18 57
-------------------------
1 3 6 19 52
We have $f(3) = 52$. Let's run a check
$$ f(3) = 81 - 3\cdot 9 + 3 - 5 = 54 - 2 = 52.$$
This is a clean, tabular way to see Horner's method work.
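The answer to the "missing powers" question is simply to put an explicit $0$ coefficient in each gap. A short Python version (my addition), run on both the worked example and the sparse polynomial from the question:

```python
def horner(coeffs, x):
    """coeffs runs from the highest power down to the constant,
    with an explicit 0 for every missing power."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# the worked example: f(x) = x^4 - 3x^2 + x - 5 at x = 3
assert horner([1, 0, -3, 1, -5], 3) == 52

# the sparse polynomial from the question: 6x^7 - 7x^5 + 2x^4 - 10x^2 + 20x - 6
coeffs = [6, 0, -7, 2, 0, -10, 20, -6]
for x in (0, 1, -1, 2):
    assert horner(coeffs, x) == 6*x**7 - 7*x**5 + 2*x**4 - 10*x**2 + 20*x - 6
```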
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
} |
Integral inequalities $$
\left( \int\limits_0^1 f^2(x)\ \text{d}x \right)^{\frac{1}{2}} \ \geqslant \quad \int\limits_0^1 \left| f(x) \right|\ \text{d}x
$$
I can't prove it )=
| $$\int_0^1 |f(x)| \, dx = \int_0^1 |1||f(x)| \, dx \leq \sqrt{\int_0^1 1 \, dx} \sqrt{\int_0^1 |f(x)|^2 \, dx} = \sqrt{\int_0^1 |f(x)|^2 \, dx}$$
By Cauchy-Schwarz.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why $\sqrt{-1 \times -1} \neq \sqrt{-1}^2$? We know $$i^2=-1 $$then why does this happen?
$$
i^2 = \sqrt{-1}\times\sqrt{-1}
$$
$$
=\sqrt{-1\times-1}
$$
$$
=\sqrt{1}
$$
$$
= 1
$$
EDIT: I see this has been dealt with before but at least with this answer I'm not making the fundamental mistake of assuming an incorrect definition of $i^2$.
| Any nonzero number has two distinct square roots. There's an algebraic statement which is always true: "a square root of $a$ times a square root of $b$ equals a square root of $ab$", but this does not tell you which square root of $ab$ you get.
Now if $a$ and $b$ are positive, then the positive square root of $a$ (denoted $\sqrt{a}$) times the positive square root of $b$ (denoted $\sqrt{b}$) is a positive number. Thus, it's the positive square root of $ab$ (denoted $\sqrt{ab}$). Which yields
$$\forall a,b \ge 0, \ \sqrt{a} \sqrt{b} = \sqrt{ab}$$
In your calculation, because $i$ is a square root of $-1$, then $i^2$ is indeed a square root of $1$, but not the positive one.
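One can watch this happen with Python's principal complex square root (my addition): `cmath.sqrt` always returns the principal root, and for two negative inputs the product lands on the *other* square root of $ab$.

```python
import cmath

# with principal square roots, sqrt(a)*sqrt(b) = sqrt(ab) can fail for negatives
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)   # i * i = -1, *a* square root of 1
rhs = cmath.sqrt((-1) * (-1))           # the *principal* square root of 1
assert abs(lhs - (-1)) < 1e-12
assert abs(rhs - 1) < 1e-12
```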
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "43",
"answer_count": 9,
"answer_id": 6
} |
Why can ALL quadratic equations be solved by the quadratic formula? In algebra, all quadratic problems can be solved by using the quadratic formula. I read a couple of books, and they told me only HOW and WHEN to use this formula, but they don't tell me WHY I can use it. I have tried to figure it out by proving these two equations are equal, but I can't.
Why can I use $x = \dfrac{-b\pm \sqrt{b^{2} - 4 ac}}{2a}$ to solve all quadratic equations?
| Most answers are explaining the method of completing the square. Although its the preferred method, I'll take another approach.
Consider an equation $$~~~~~~~~~~~~~~~~~~~~~ax^{2}+bx+c=0~~~~~~~~~~~~~~~~~~~~(1)$$We let the roots be $\alpha$ and $\beta$.
Now, since $x=\alpha$ or $x=\beta$, $$~~~~~~~~~~~~~~~~~~~~~(x-\alpha)(x-\beta) = 0,~~~~~~~~~~~~~~~~~~~~~~~~$$ and hence, for some constant $k$,
$$~~~~~~~~~~~~~~~~~~~~~k(x-\alpha)(x-\beta)=0~~~~~~~~~~~~~~~~~~~~(2)$$
Equating equations (1) and (2),
$$ax^{2}+b{x}+c=k(x-\alpha)(x-\beta)$$
$$ax^{2}+b{x}+c=k(x^{2}-\alpha x-\beta x+\alpha \beta)$$
$$ax^{2}+b{x}+c=kx^{2}-k(\alpha+\beta )x+k\alpha \beta$$
Comparing both sides, we get $$a=k~;~b=-k(\alpha +\beta)~;~c=k\alpha \beta$$
From this, we get $$\alpha + \beta = \frac{-b}{a}~~;~~~\alpha \beta = \frac{c}{a}$$
Now, to get the value of $\alpha$, we proceed as follows:
First we find the value of $\alpha - \beta$, so that we can eliminate one unknown and solve for the other.
$$(\alpha-\beta)^{2} = \alpha ^{2}+ \beta ^{2} - 2 \alpha \beta$$
Now we'll add $4 \alpha \beta $ on both the sides
$$(\alpha-\beta)^{2} +4 \alpha \beta = \alpha ^{2}+ \beta ^{2} + 2 \alpha \beta$$
$$(\alpha-\beta)^{2} +4 \alpha \beta = (\alpha + \beta )^{2} $$
$$(\alpha-\beta)^{2} = (\alpha + \beta )^{2} -4 \alpha \beta $$
$$\alpha-\beta = \pm \sqrt{(\alpha + \beta )^{2} -4 \alpha \beta } $$
Substituting the values of $\alpha + \beta$ and $\alpha \beta$, we get,
$$\alpha-\beta = \pm \sqrt{(\frac{-b}{a} )^{2} -\frac{4c}{a} } $$
$$\alpha-\beta = \pm \sqrt{\frac{b^{2}-4ac}{a^{2}} } $$ or
$$~~~~~~~~~~~~~~~~~~~~~\alpha-\beta = \frac{\pm \sqrt{b^{2}-4ac}}{a} ~~~~~~~~~~~~~~~~~~~~~(3)$$
Adding $\alpha + \beta = \frac{-b}{a}$ and equation $(3)$, we get,
$$2 \alpha = \frac{-b \pm \sqrt{b^{2}-4ac}}{a}$$
$$\alpha = \frac{-b \pm \sqrt{b^{2}-4ac}}{2a}$$
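As a quick sanity check (my addition, not part of the derivation above), the final formula can be verified numerically; `cmath.sqrt` keeps the computation valid even when the discriminant is negative.

```python
import cmath

def quadratic_roots(a, b, c):
    """Return both roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 - 3x + 2 = (x - 1)(x - 2): roots should be 2 and 1
r1, r2 = quadratic_roots(1, -3, 2)
```

Note that the sum and product of the returned roots match $-b/a$ and $c/a$, exactly as the comparison of coefficients above predicts.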
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "294",
"answer_count": 22,
"answer_id": 0
} |
Zero divisors of ${\Bbb Z}_n = $ integers $\!\bmod n$ Consider the following proposition:
A nonzero element $m\in{\bf Z}_n$ is a zero divisor if and only if $m$ and $n$ are not relatively prime.
I don't know if this is a classical textbook result. (I didn't find it in Gallian's book).
For the "only if" part, one may like to use the Euclid's lemma. But I cannot see how can one prove the "if" part:
If $m_1>0$, $(m_1,n)=d>1$, and $n|m_1m_2$, then $n\nmid m_2$.
Edit:
The "if" part, should be:
If $m_1>0$ and $(m_1,n)=d>1$, then there exists $m_2$ such that $n|m_1m_2$, and $n\nmid m_2$.
Does one need any other techniques other than "divisibility"?
Questions:
*
*How to prove the proposition above?
*How many different proofs can one have?
| Hint $\rm\,\ d\mid n,m\,\Rightarrow\,\ mod\ n\!:$ $\rm\displaystyle\:\ 0\equiv n\:\frac{m}d\ =\ \frac{n}d\: m\ $ and $\rm\, \dfrac{n}d\not\equiv 0\,$ if $\rm\,d>1$
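The proposition is also easy to confirm by brute force for small moduli. The following sketch (an illustration I am adding, not part of the hint) compares the set of zero divisors of $\mathbb{Z}_n$ with the set of nonzero elements not coprime to $n$:

```python
from math import gcd

def zero_divisors(n):
    """Nonzero m in Z_n such that m*m2 = 0 (mod n) for some nonzero m2."""
    return {m for m in range(1, n)
            if any(m * m2 % n == 0 for m2 in range(1, n))}

def not_coprime(n):
    """Nonzero m in Z_n with gcd(m, n) > 1."""
    return {m for m in range(1, n) if gcd(m, n) > 1}

# The proposition says these two sets coincide for every n
for n in range(2, 30):
    assert zero_divisors(n) == not_coprime(n)
```

For instance, in $\mathbb{Z}_{12}$ both sets come out as $\{2,3,4,6,8,9,10\}$.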
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
When can two linear operators on a finite-dimensional space be simultaneously Jordanized? IN a comment to Qiaochu's answer here it is mentioned that two commuting matrices can be simultaneously Jordanized (sorry that this sounds less appealing than "diagonalized" :P ), i.e. can be brought to a Jordan normal form by the same similarity transformation. I was wondering about the converse - when can two linear operators acting on a finite-dimensional vector space (over an algebraically closed field) be simultaneously Jordanized? Unlike the case of simultaneous diagonalization, I don't think commutativity is forced on the transformations in this case, and I'm interested in other natural conditions which guarantee that this is possible.
EDIT: as Georges pointed out, the statement that two commuting matrices are simultaneously Jordanizable is in fact wrong. Nevertheless, I am still interested in interesting conditions on a pair of operators which ensure a simultaneous Jordanization (of course, there are some obvious sufficient conditions, i.e. that the two matrices are actually diagonalizable and commute, but this is not very appealing...)
| I am 2 years late, but I would like to leave a comment, because for matrices of order 2 there exists a very simple criterion.
Thm: If $A,B$ are complex matrices of order 2 and not diagonalizable then $A$ and $B$ can be simultaneously Jordanized if and only if $A-B$ is a multiple of the identity.
Proof: Suppose $A-B=aId$.
Since $B$ is not diagonalizable then $B=RJR^{-1}$, where $J=\left(\begin{array}{cc}
b & 1 \\
0 & b\end{array}\right)$
Thus, $A= RJR^{-1}+aId=R(J+aId)R^{-1}=R\left(\begin{array}{cc}
b+a & 1 \\
0 & b+a\end{array}\right)R^{-1}$. Therefore $A$ and $B$ can be simultaneously Jordanized.
For the converse, let us suppose that $A$ and $B$ can be simultaneously Jordanized.
Since $A$ and $B$ are not diagonalizable then $A=RJ_AR^{-1}$ and $B=RJ_BR^{-1}$, where $J_A=\left(\begin{array}{cc}
a & 1 \\
0 & a\end{array}\right)$ and $J_B=\left(\begin{array}{cc}
b & 1 \\
0 & b\end{array}\right)$.
Therefore, $A-B=RJ_AR^{-1}-RJ_BR^{-1}=R(J_A-J_B)R^{-1}=R\left(\begin{array}{cc}
a-b & 0 \\
0 & a-b\end{array}\right)R^{-1}=(a-b)Id$. $\ \square$
Now, we can find many examples of matrices that commute and can not be simultaneously Jordanized.
Example: The matrices $\left(\begin{array}{cc}
a & 1 \\
0 & a\end{array}\right), \left(\begin{array}{cc}
b & -1 \\
0 & b\end{array}\right)$ are not diagonalizable and their difference is not a multiple of the identity, therefore they can not be simultaneously Jordanized. Notice that these matrices commute.
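To make the closing example concrete, here is a check with the illustrative values $a=2$, $b=5$ (these particular numbers are my choice, not the answer's): the two matrices commute, yet their difference is not a scalar multiple of the identity.

```python
def matmul(A, B):
    """2x2 matrix product, with matrices as plain lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 2, 5
A = [[a, 1], [0, a]]
B = [[b, -1], [0, b]]

commute = matmul(A, B) == matmul(B, A)  # both products are [[a*b, b-a], [0, a*b]]
diff = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]
# difference is [[a-b, 2], [0, a-b]]: the off-diagonal 2 is nonzero, so not c*I
scalar_multiple_of_identity = (diff[0][1] == 0 and diff[1][0] == 0
                               and diff[0][0] == diff[1][1])
```

By the theorem above, these two matrices therefore commute but cannot be simultaneously Jordanized.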
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 3,
"answer_id": 2
} |
For which $n$ is $ \int \limits_0^{2\pi} \prod \limits_{k=1}^n \cos(k x)\,dx $ non-zero? I can verify easily that for $n=1$ and $2$ it's $0$, for $3$ and $4$ it's nonzero, for $5$ and $6$ it's $0$ again, etc., but it seems like there must be something deeper here (or at least a trick).
| Write $\cos(kx)=(e^{ikx}+e^{-ikx})/2$. Obtain
$$\begin{array}{ll} \int_0^{2\pi}\prod_{k=1}^n\cos(kx)dx & =\int_0^{2\pi} \prod_{k=1}^n \frac{e^{k i x} + e^{- k i x}}{2} dx \\
& = 2^{-n}\int_0^{2\pi} e^{-(1+2+\cdots+n) \cdot i x} \prod_{k=1}^n \left( 1 + e^{2 k i x} \right) dx \\
& =2^{-n}\int_0^{2\pi}e^{-n(n+1)/2\cdot ix}\sum_{\sigma\in\Sigma} e^{2\sigma ix}dx \\
& =2^{-n}\sum_{\sigma\in\Sigma}\int_0^{2\pi}e^{(2\sigma -n(n+1)/2)\cdot ix}dx\end{array}$$
where $\Sigma$ is the multiset of numbers comprised of the sums of subsets of $\{1,\cdots,n\}$. The integral in the summand is equal to $2\pi$ if $2\sigma=n(n+1)/2$ and $0$ otherwise. Therefore the sum is nonzero if and only if there is a $n(n+1)/4\in\Sigma$, i.e. $n(n+1)/4$ can be written as a sum of numbers taken from the set $\{1,\cdots,n\}$. Firstly $4\mid n(n+1)\Leftrightarrow n\equiv 0,-1$ mod $4$ is necessary, and moreover
Lemma. Any number $0\le S\le n(n+1)/2$ may be written as a sum of numbers in $\{1,\cdots,n\}$.
Proof. $S=0$ corresponds to the empty product. $S=1$ corresponds to the term $1$ itself. Otherwise suppose the claim holds true for $n$ as induction hypothesis, and we seek to prove the claim still holds true for $n+1$. Let $0\le S\le (n+1)(n+2)/2$. If $S\le n(n+1)/2$ then simply take the numbers from $\{1,\cdots,n\}$ via induction hypothesis, otherwise $0\le S-(n+1)\le n(n+1)/2$ and we may invoke the induction hypothesis on $S-(n+1)$, then add $n+1$ to that sum to obtain a sum of elements from $\{1,\cdots,n,n+1\}$ which add up to $S$.
Therefore, $n\equiv 0,-1$ mod $4$ is both necessary and sufficient for the integral to be positive.
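The criterion can be checked mechanically. The following sketch (my addition) compares the subset-sum condition on $n(n+1)/4$ with the residue condition $n \equiv 0, 3 \pmod 4$:

```python
def target_reachable(n):
    """Can n(n+1)/4 be written as a sum of distinct numbers from 1..n?"""
    total = n * (n + 1)
    if total % 4 != 0:
        return False
    target = total // 4
    sums = {0}  # all achievable subset sums so far
    for k in range(1, n + 1):
        sums |= {s + k for s in sums}
    return target in sums

# matches n = 0 or 3 (mod 4), i.e. n = 0 or -1 (mod 4)
for n in range(1, 30):
    assert target_reachable(n) == (n % 4 in (0, 3))
```

So the integral is nonzero exactly for $n = 3, 4, 7, 8, 11, 12, \dots$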
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 1
} |
Express $\int^1_0x^2 e^{-x^2} dx$ in terms of $\int^1_0e^{-x^2} dx$ (Apologies, this was initially incorrectly posted on MathOverflow)
In the MIT 18.01 practice questions for Exam 4 problem 3b (link below), we are asked to express $\int^1_0x^2 e^{-x^2} dx$ in terms of $\int^1_0e^{-x^2} dx$
I understand that this should involve using integration by parts but the given solution doesn't show working and I'm not able to obtain the same answer regardless of how I set up the integration.
Link to the practice exam:
http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/exams/prexam4a.pdf
| You can use this result as well: $$\int e^{x} \bigl[ f(x) + f'(x)\bigr] \ dx = e^{x} f(x) +C$$
So your integral can be rewritten as
\begin{align*}
\int\limits_{0}^{1} x^{2}e^{-x^{2}} \ dx & = -\int\limits_{0}^{1} \Bigl[-x^{2} -2x\Bigr] \cdot e^{-x^{2}} \ dx -\int\limits_{0}^{1} 2x \cdot e^{-x^{2}}\ dx
\end{align*}
The second part of the integral can be easily evaluated by putting $x^{2}=t$.
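For reference, the usual integration-by-parts route with $u = x$, $dv = xe^{-x^2}\,dx$ yields $\int_0^1 x^2 e^{-x^2}\,dx = \tfrac12\int_0^1 e^{-x^2}\,dx - \tfrac{1}{2e}$, which answers the exam question directly. A numerical check of this identity (my addition, using composite Simpson's rule):

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))  # odd nodes
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))            # even interior nodes
    return s * h / 3

lhs = simpson(lambda x: x * x * math.exp(-x * x), 0, 1)
rhs = 0.5 * simpson(lambda x: math.exp(-x * x), 0, 1) - 0.5 / math.e
```

Both sides come out to about $0.18947$.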
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
A construction in the proof of "any local ring is dominated by a DVR"
Let $O$ be a noetherian local domain with maximal ideal $m$. I want to prove: for a suitable choice of generators $x_1,\dots,x_n$ of $m$, the ideal $(x_1)$ in $O'=O[x_2/x_1,\dots,x_n/x_1]$ is not equal to the unit ideal.
This statement originates from Ex.4.11, Chapter 2 of Hartshorne.
| If one is willing to use the results already proved in Hartshorne in the context of the valuative criterion, that is before exercise 4.11, I see the following approach: there exists a valuation ring $O_v$ of the field $K$ (for the moment I ignore the finite extension $L$ that appears in the exercise) that dominates the local ring $O$. In particular we have $v(x_k)>0$ for any set $x_1,\ldots ,x_n$ of generators of the maximal ideal $m$ of $O$. Suppose $v(x_1)$ is minimal among the values $v(x_k)$. Then $O^\prime\subseteq O_v$ and $q:=M_v\cap O^\prime$, $M_v$ the maximal ideal of $O_v$, is a proper prime ideal of $O^\prime$. By definition $x_1\in q$ and thus $x_1O^\prime\neq O^\prime$.
The "suitable choice" is just relabelling the elements $x_k$ if necessary.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Hardy Ramanujan Asymptotic Formula for the Partition Number I need to use the asymptotic formula for the partition number, $p(n)$ (see here for details about partitions).
The asymptotic formula always seems to be written as,
$ p(n) \sim \frac{1}{4n\sqrt{3}}e^{\pi \sqrt{\frac{2n}{3}}}, $
however I need to know the order of the omitted terms, (i.e. I need whatever the little-o of this expression is). Does anybody know what this is, and a reference for it? I haven't been able to find it online, and don't have access to a copy of Andrews 'Theory of Integer Partitions'.
Thank you.
| The original paper addresses this issue on p. 83:
$$
p(n)=\frac{1}{2\pi\sqrt2}\frac{d}{dn}\left(\frac{e^{C\lambda_n}}{\lambda_n}\right) + \frac{(-1)^n}{2\pi}\frac{d}{dn}\left(\frac{e^{C\lambda_n/2}}{\lambda_n}\right) + O\left(e^{(C/3+\varepsilon)\sqrt n}\right)
$$
with
$$
C=\frac{2\pi}{\sqrt6},\ \lambda_n=\sqrt{n-1/24},\ \varepsilon>0.
$$
If I compute correctly, this gives
$$
e^{\pi\sqrt{\frac{2n}{3}}} \left(
\frac{1}{4n\sqrt3}
-\frac{72+\pi^2}{288\pi n\sqrt{2n}}
+\frac{432+\pi^2}{27648n^2\sqrt3}
+O\left(\frac{1}{n^2\sqrt n}\right)
\right)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
How to find maximum $x$ that $k^x$ divides $n!$ Given numbers $k$ and $n$
how can I find the maximum $x$ where:
$n! \equiv\ 0 \pmod{k^x}$?
I tried to compute $n!$
and then make binary search over some range $[0,1000]$ for example
compute $k^{500}$
if $n!$ mod $k^{500}$ is greater than $0$ then I compute $k^{250}$ and so on
but I have to compute the value $n!$ every time (storing it in a bigint and manipulating it every time is a little ridiculous)
And time to compute $n!$ is $O(n)$, so very bad.
Is there any faster, math solution to this problem? Math friends?:)
Cheers Chris
| Computing $n!$ is a very bad idea for large numbers $n$. To find the desired exponent you should develop something similar to the Legendre formula.
You could also search for Legendre in the following document.
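A sketch of the approach the answer points to (function names are mine): Legendre's formula gives the exponent of a prime $p$ in $n!$ as $\sum_{i\ge 1}\lfloor n/p^i\rfloor$, and for composite $k=\prod_j p_j^{e_j}$ the answer is $\min_j \lfloor \nu_{p_j}(n!)/e_j\rfloor$, with no need to ever form $n!$:

```python
def prime_factorization(k):
    """Trial-division factorization: list of (prime, exponent) pairs."""
    factors = []
    d = 2
    while d * d <= k:
        e = 0
        while k % d == 0:
            k //= d
            e += 1
        if e:
            factors.append((d, e))
        d += 1
    if k > 1:
        factors.append((k, 1))
    return factors

def legendre(n, p):
    """Exponent of prime p in n! (Legendre's formula), O(log_p n) time."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

def max_power_dividing_factorial(n, k):
    """Largest x with k**x dividing n!."""
    return min(legendre(n, p) // e for p, e in prime_factorization(k))
```

For example, $10!$ contains $2^8$, the number of trailing zeros of $25!$ is $6$, and $12^4$ is the largest power of $12$ dividing $10!$.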
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Dense and locally compact subset of a Hausdorff space is open Let $X$ be a Hausdorff space and let $D \subseteq X$ be locally compact and dense in $X$. Why is $D$ open?
I can see that $D$ is regular but don't see why $D$ is in fact open.
| Here is a straightforward proof inspired by Theorem 2.70 in Aliprantis and Border's Infinite Dimensional Analysis (3rd ed., p.56). Let $p \in D$. Since $D$ is locally compact, there is a neighborhood $U$ of $p$ in $D$ which is compact in $D$, and a neighborhood $V$ of $p$ in $D$ which is open in $D$ with $V \subset U$.
First, it is easy to see that $U$ is also compact in $X$. Since $X$ is Hausdorff, it implies that $U$ is closed (see, for example, Proposition 4.24 in Folland's Real Analysis (2nd ed.), p.128), and consequently $\overline{U}=U$.
By definition, there is an open set $W$ in the topology of $X$ such that $V = W\cap D$. Since $D$ is dense in $X$, it follows that $$W \subset \overline{W} = \overline{W \cap D} = \overline{V} \subset \overline{U} = U \subset D.$$ Hence for every $p \in D$ there is a neighborhood of $p$ open in $X$ which is included in $D$, i.e., $D$ is open in $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 4,
"answer_id": 3
} |
Reconstructing a ring from a stack of 2D images (radially aligned) I have a stack of images (about 180 of them) and there are 2 black dots on every single image. Hence, the positions $(x,y)$ of the two dots are provided initially. The dimensions of all these images are fixed and constant.
The radial 'distance' between the images is about $1^\circ$, with the origin at the center of every single 2D image. Since the images are radially aligned, the output would be a possible ring shape in 3D.
The dotted red circle and dotted purple circle are there to give a stronger sense of a 3D space and the arrangement of the 2D images (like a fan). It also indicates that each slice is about $1^\circ$ apart, with a legend that'd give you an idea where the z-axis should be.
Now my question is
With the provided $(x,y)$ that appeared in the 2D image, how do you get the corresponding $(x,y,z)$ in the 3D space, knowing that each image is about $1^\circ$ apart?
I know that every point on a sphere can be approximated by the following equations:
x = r sin (theta) cos (phi)
y = r sin (theta) sin (phi)
z = r cos (theta)
However, I don't know how to connect those equations to my problem, as I am rather weak in math, as you can see by now. :(
Thanks!!
| If I understand the question correctly, your $180$ images are all taken in planes that contain one common axis and are rotated around that axis in increments of $1^\circ$. Your axis labeling is somewhat confusing because you use $x$ and $y$ both for the 2D coordinates and for the 3D coordinates, even though these stand in different relations to each other depending on the plane of the image. So I'll use a different, consistent labeling of the axes and I hope you can apply the results to your situation.
Let's say the image planes all contain the $z$ axis, and let's label the axes within the 2D images with $u$ and $v$, where the $v$ axis coincides with the $z$ axis and the $u$ axis is orthogonal to it. Then the orientation of the image plane can be described by the (signed) angle $\phi$ between the $u$ axis and the $x$ axis (which changes in increments of $1^\circ$ from one plane to the next), and the relationship between the 2D coordinates $u,v$ and the 3D coordinates $x,y,z$ is
$$
\begin{eqnarray}
x&=&u\cos\phi\\
y&=&u\sin\phi\\
z&=&v\;.
\end{eqnarray}
$$
This only answers your question (as I understand it) about the relationship between the coordinates. How to reconstruct the ring from the set of points is another question; you could do a least-squares fit for that.
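A direct transcription of these equations into code (my addition; the function name and the one-degree-per-image assumption follow the question):

```python
import math

def image_to_world(u, v, image_index, degrees_per_image=1.0):
    """Map in-plane (u, v) of image number `image_index` to 3D (x, y, z).

    phi is the rotation of that image plane about the shared z axis.
    """
    phi = math.radians(image_index * degrees_per_image)
    return (u * math.cos(phi), u * math.sin(phi), v)
```

Image 0 lies in the $xz$ plane, and image 90 (a quarter turn later) lies in the $yz$ plane, as the formulas predict.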
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Behaviour of a holomorphic function near a pole Apparently, the following statement is true:
*
*"Let $D\subseteq \mathbb{C}$ be open and connected and $f:D\setminus \{a\}\longrightarrow \mathbb{C}$ holomorphic with a pole of arbitrary order at $a\in D$. For any $\epsilon > 0$ with $B_\epsilon(a)\setminus\{a\} \subseteq D$, there exists $r > 0$ so that $\{z \in \mathbb{C}: |z| > r\} \subseteq f(B_\epsilon(a)\setminus\{a\})$."
So far, I have been unsuccessful in proving this. I know that $f(B_\epsilon(a)\setminus\{a\})$ must be open and connected (open mapping theorem), as well as that for any $r > 0$ there exists an $x \in B_\epsilon(a)$ so that $|f(x)| > r$ (because $\lim_{z\rightarrow a}|f(z)| = \infty)$, but I don't see how this would imply the statement in question. Any help would be appreciated.
| Define $g$ on a punctured neighborhood of $a$ by $g(z)=\frac{1}{f(z)}$. Then $\displaystyle{\lim_{z\to a}g(z)=0}$, so the singularity of $g$ at $a$ is removable, and defining $g(a)=0$ gives an analytic function on a neighborhood of $a$. By the open mapping theorem, for each neighborhood $U$ of $a$ in the domain of $g$, there exists $\delta>0$ such that $\{z\in\mathbb{C}:|z|\lt \delta\}\subseteq g(U)$. Now let $r=\frac{1}{\delta}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
A Universal Property Defining Connected Sums I once read (I believe in Ravi Vakil's notes on Algebraic Geometry) that the connected sum of a pair of surfaces can be defined in terms of a universal property. This gives a slick proof that the connected sum is unique up to homeomorphism. Unfortunately, I am unable to find where exactly I read this or remember what exactly the universal property was;
if anyone could help me out in either regard it would be much appreciated.
| As already noted in the comments, there is an obvious universal property (since the connected sum is a special pushout) once the embeddings of the discs have been chosen. For different embeddings, there exists some homeomorphism. There are lots of them, but even abstract nonsense cannot replace the nontrivial proof of existence. But since there is no canonical homeomorphism, I strongly doubt that there is any universal property which does not depend on the embeddings of the discs.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/49986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 1,
"answer_id": 0
} |
Different ways to prove there are infinitely many primes? This is just a curiosity. I have come across multiple proofs of the fact that there are infinitely many primes, some of them were quite trivial, but some others were really, really fancy. I'll show you what proofs I have and I'd like to know more because I think it's cool to see that something can be proved in so many different ways.
Proof 1 : Euclid's. If there are finitely many primes then $p_1 p_2 ... p_n + 1$ is coprime to all of these guys. This is the basic idea in most proofs : generate a number coprime to all previous primes.
Proof 2 : Consider the sequence $a_n = 2^{2^n} + 1$. We have that
$$
2^{2^n}-1 = (2^{2^1} - 1) \prod_{m=1}^{n-1} (2^{2^m}+1),
$$
so that for $m < n$, $(2^{2^m} + 1, 2^{2^n} + 1) \, | \, (2^{2^n}-1, 2^{2^n} +1) = 1$. Since we have an infinite sequence of numbers coprime in pairs, at least one prime number must divide each one of them and they are all distinct primes, thus giving an infinity of them.
Proof 3 : (Note : I particularly like this one.) Define a topology on $\mathbb Z$ in the following way : a set $\mathscr N$ of integers is said to be open if for every $n \in \mathscr N$ there is an arithmetic progression $\mathscr A$ such that $n \in \mathscr A \subseteq \mathscr N$. This can easily be proven to define a topology on $\mathbb Z$. Note that under this topology arithmetic progressions are open and closed. Supposing there are finitely many primes, notice that this means that the set
$$
\mathscr U \,\,\,\, \overset{def}{=} \,\,\, \bigcup_{p} \,\, p \mathbb Z
$$
should be open and closed, but by the fundamental theorem of arithmetic, its complement in $\mathbb Z$ is the set $\{ -1, 1 \}$, which is not open, thus giving a contradiction.
Proof 4 : Let $a,b$ be coprime integers and $c > 0$. There exists $x$ such that $(a+bx, c) = 1$. To see this, choose $x$ such that $a+bx \not\equiv 0 \, \mathrm{mod}$ $p_i$ for all primes $p_i$ dividing $c$. If $a \equiv 0 \, \mathrm{mod}$ $p_i$, since $a$ and $b$ are coprime, $b$ has an inverse mod $p_i$, call it $\overline{b}$. Choosing $x \equiv \overline{b} \, \mathrm{mod}$ $p_i$, you are done. If $a \not\equiv 0 \, \mathrm{mod}$ $p_i$, then choosing $x \equiv 0 \, \mathrm{mod}$ $p_i$ works fine. Find $x$ using the Chinese Remainder Theorem.
Now assuming there are finitely many primes, let $c$ be the product of all of them. Our construction generates an integer coprime to $c$, giving a contradiction to the fundamental theorem of arithmetic.
Proof 5 : Dirichlet's theorem on arithmetic progressions (just so that you don't bring it up as an example...)
Do you have any other nice proofs?
| Maybe you wanna use the sum of reciprocal prime numbers. The argument for the fact that the series diverges may be found in one of Apostol's exercises.
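To see just how slowly the series diverges (the partial sums grow like $\log\log N$), here is a small sketch (my addition) that tabulates $\sum_{p\le N} 1/p$:

```python
def primes_up_to(N):
    """Sieve of Eratosthenes: all primes <= N."""
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(N ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, N + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

def reciprocal_prime_sum(N):
    return sum(1.0 / p for p in primes_up_to(N))
```

For $N = 10$ this is $1/2 + 1/3 + 1/5 + 1/7 \approx 1.176$, and pushing $N$ to $10^4$ only brings it to about $2.2$; the divergence is real but glacial.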
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "119",
"answer_count": 27,
"answer_id": 19
} |
creating smooth curves with $f(0) = 0$ and $f(1) = 1$ I would like to create smooth curves, which have $f(0) = 0$ and $f(1) = 1$.
What I would like to create are curves similar to the gamma curves known from CRT monitors. I don't know any better way to describe it, in computer graphics I used them a lot, but in math I don't know what kind of curves they are. They are defined by the two endpoints and a 3rd point.
What I am looking for is a similar curve, which can be described easily in math. For example with a simple exponential function or power function. Can you tell me what kind of curves these ones are (just by looking at the image below), and how can I create a function which fits a curve using the 2 endpoints and a value in the middle?
So what I am looking for is some equation or algorithm that takes a midpoint value $f(0.5) = x$ and returns $a, b$ and $c$, for example if the curve can be parameterized like this (just ideas):
$a \exp (bt) + c$ or $a b^t + c$
Update: yes, $x^t$ works like this, but it gets really sharp when $t < 0.1$. I would prefer something with a smooth derivative at all points. That's why I had exponential functions in mind. (I use smooth here as "not steep")
| It might be worth doing some research into Finite Element shape functions as the basis of these functions is very similar to the problem you are trying to solve here.
My experience with shape functions is that the equations are usually identified through trial and error although there are approaches that can ease you through the process.
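One concrete family that meets all the stated requirements, offered as a suggestion of my own rather than part of the answer above, is $f(t) = (a^t - 1)/(a - 1)$ with $a > 0$: it has $f(0)=0$, $f(1)=1$, a smooth (never vertical) derivative everywhere, and the midpoint condition $f(1/2) = m$ solves in closed form as $a = (1/m - 1)^2$, assuming $0 < m < 1$:

```python
def make_curve(m):
    """Return f with f(0)=0, f(1)=1, f(0.5)=m, using f(t) = (a^t - 1)/(a - 1).

    Assumes 0 < m < 1; m = 0.5 degenerates to the straight line f(t) = t.
    """
    if abs(m - 0.5) < 1e-12:
        return lambda t: t
    a = (1.0 / m - 1.0) ** 2  # closed-form solution of (a^0.5 - 1)/(a - 1) = m
    return lambda t: (a ** t - 1.0) / (a - 1.0)
```

The closed form comes from $f(1/2) = (\sqrt a - 1)/(a-1) = 1/(\sqrt a + 1)$, so $\sqrt a = 1/m - 1$. For $m < 1/2$ the curve bows below the diagonal like a gamma curve; for $m > 1/2$ it bows above it.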
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Number of fields with characteristic of 3 and less than 10000 elements? it's exam time again over here and I'm currently doing some last preparations for our math exam that is up in two weeks. I previously thought that I was prepared quite well since I've gone through a load of old exams and managed to solve them correctly.
However, I've just found a strange question and I'm completely clueless on how to solve it:
How many finite fields with a charateristic of 3 and less than 10000 elements are there?
I can only think of Z3 (rather trivial), but I'm completely clueless on how to determine the others (that is of course - if this question isn't some kind of joke question and the answer really is "1").
| It's not a joke question. Presumably, the year that was on the exam, the class was shown a theorem completely describing all the finite fields. If they didn't do that theorem this year, you don't have to worry about that question (but you'd better make sure!). It's not the kind of thing you'd be expected to answer on the spot, if it wasn't covered in class.
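For reference, the theorem being alluded to is the classification of finite fields: up to isomorphism there is exactly one field of order $p^k$ for each prime $p$ and each $k \ge 1$, and no others. Granting that theorem, counting the fields of characteristic 3 with fewer than 10000 elements reduces to counting powers of 3 (this sketch is my addition):

```python
# Orders of finite fields of characteristic 3 below 10000: the powers 3^k
orders = []
q = 3
while q < 10000:
    orders.append(q)
    q *= 3
# one field per order 3^k, so the count of such fields is len(orders)
```

Since $3^9 = 19683$ already exceeds 10000, the orders are $3, 9, \dots, 6561$, giving 8 fields.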
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Does Permuting the Rows of a Matrix $A$ Change the Absolute Row Sum of $A^{-1}$? For $A = (a_{ij})$ an $n \times n$ matrix, the absolute row sum of $A$ is
$$
\|A\|_{\infty} = \max_{1 \leq i \leq n} \sum_{j=1}^{n} |a_{ij}|.
$$
Let $A$ be a given $n \times n$ matrix and let $A_0$ be a matrix obtained by permuting the rows of $A$. Do we always have
$$
\|A^{-1}\|_{\infty} = \|A_{0}^{-1}\|_{\infty}?
$$
| Exchanging two rows of $A$ amounts to multiplying $A$ by an elementary matrix on the left, $B=EA$; so the inverse of $B$ is $A^{-1}E^{-1}$, and the inverse of the elementary matrix corresponding to exchanging two rows is itself. Multiplying on the right by $E$ corresponds to permuting two columns of $A^{-1}$. Thus, the inverse of the matrix we get form $A$ by exchanging two rows is the inverse of $A$ with two columns exchanged. Exchanging two columns of a matrix $M$ does not change the value of $\lVert M\rVert_{\infty}$; thus, $\lVert (EA)^{-1}\rVert_{\infty} = \lVert A^{-1}\rVert_{\infty}$.
Since any permutation of the rows of $A$ can be obtained as a sequence of row exchanges, the conclusion follows.
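A small plain-Python illustration of this invariance in the $2\times 2$ case (my addition, not part of the proof):

```python
def inv2(M):
    """Inverse of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def row_sum_norm(M):
    """The absolute row sum norm ||M||_inf."""
    return max(sum(abs(x) for x in row) for row in M)

A = [[2.0, 1.0], [1.0, 3.0]]
A0 = [A[1], A[0]]  # A with its rows swapped

norm_A = row_sum_norm(inv2(A))
norm_A0 = row_sum_norm(inv2(A0))
```

Swapping the rows of $A$ swaps the columns of $A^{-1}$, so each row of the inverse keeps the same multiset of absolute values and the norm is unchanged (both come out to $0.8$ here).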
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Help with solving an integral I am looking for help with finding the integral of a given equation $$ Y_2(t) = (1 - 2t^2)\int \frac{e^{\int-2t\, dt}}{(1-2t^2)^2}\, dt$$
anyone able to help? Thanks in advance!
UPDATE: I got the above from trying to solve the question below.
Solve, using reduction of order, the following: $$y'' - 2ty' + 4y =0,$$ where $f(t) = 1-2t^2$ is a solution
| There is no elementary antiderivative for this function. Neither Maple nor Mathematica can find a formula for it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
union of two independent probabilistic event I have following question:
Suppose we have two independent events whose probabilities are the following: $P(A)=0.4$ and $P(B)=0.7$.
We are asked to find $P(A \cap B)$ from probability theory. I know that $P(A \cup B)=P(A)+P(B)-P(A \cap B)$. But surely the last one is equal to zero, so the result should be $P(A)+P(B)$, but that is more than $1$ (to be exact, it is $1.1$). Please help me see where I am wrong.
| If the events $A$ and $B$ are independent, then $P(A \cap B) = P(A) P(B)$ and not necessarily $0$.
You are confusing independent with mutually exclusive.
For instance, you toss two coins. What is the probability that both show heads? It is $\frac{1}{2} \times \frac{1}{2}$ isn't it? Note that the coin tosses are independent of each other.
Now you toss only one coin, what is the probability that it shows both heads and tails?
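With the corrected formula, the numbers from the question work out consistently; a one-line check (my addition):

```python
p_a, p_b = 0.4, 0.7
p_intersection = p_a * p_b            # independence: 0.28, not 0
p_union = p_a + p_b - p_intersection  # 0.82, safely at most 1
```

The union probability only exceeds 1 if you wrongly set the intersection to 0, which would require the events to be mutually exclusive rather than independent.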
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 3
} |
mapping of cube by itself From an exam textbook I am given the following problem to solve:
In space, how many lines are there such that rotating the cube by $180^\circ$ about them maps the cube to itself? I have thought about this problem many times. I thought it should be the axes of symmetry, for which the answer would be $4$, but $4$ is not among the answers, so I have not found the solution yet. Please help me make it clear.
| Ross correctly enumerated the possible lines.
You should take into account that when you rotate the cube about a body diagonal, you have to rotate an integer multiple of 120 degrees in order to get the cube to map back to itself. So for the purposes of this question the body diagonals don't count. 9 is the correct answer.
Perhaps the easiest way to convince you of this is that there are 3 edges meeting at each corner. Rotation about a body diagonal permutes these 3 edges cyclically, and is therefore of order 3 as a symmetry. Yet another way of seeing this is that if we view the cube as a subset $[0,1]\times[0,1]\times[0,1]\subset\mathbf{R}^3$, then the linear mapping $(x,y,z)\mapsto (y,z,x)$ fixes the opposite corners $(0,0,0)$ and $(1,1,1)$, obviously maps the cube back to itself, and as this mapping is orientation preserving (in $SO(3)$), it must be a rotation about this diagonal. As it is of order 3, the angle of rotation must be 120 degrees.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What's the meaning of algebraic data type? I'm reading a book about Haskell, a programming language, and I came across a construct defined "algebraic data type" that looks like
data WeekDay = Mon | Tue | Wed | Thu | Fri | Sat | Sun
That simply declares what are the possible values for the type WeekDay.
My question is what is the meaning of algebraic data type (for a mathematician) and how that maps to the programming language construct?
| Think of an algebraic data type as a type composed of simpler types, where the allowable composition operators are AND (written $\cdot$, often referred to as product types) and OR (written $+$, referred to as union types or sum types).
We also have the unit type $1$ (representing a null type) and the basic type $X$ (representing a type holding one piece of data - this could be of a primitive type, or another algebraic type).
We also tend to use $2X$ to mean $X+X$ and $X^2$ to mean $X\cdot X$, etc.
For example, the Haskell type
data List a = Nil | Cons a (List a)
tells you that the data type List a (a list of elements of type a) is either Nil, or it is the Cons of a basic type and another lists. Algebraically, we could write
$$L = 1 + X \cdot L$$
This isn't just pretty notation - it encodes useful information. We can rearrange to get
$$L \cdot (1 - X) = 1$$
and hence
$$L = \frac{1}{1-X} = 1 + X + X^2 + X^3 + \cdots$$
which tells us that a list is either empty ($1$), or it contains 1 element ($X$), or it contains 2 elements ($X^2$), or it contains 3 elements, or...
For a more complicated example, consider the binary tree data type:
data Tree a = Nil | Branch a (Tree a) (Tree a)
Here a tree $T$ is either nil, or it is a Branch consisting of a piece of data and two other trees. Algebraically
$$T = 1 + X\cdot T^2$$
which we can rearrange to give
$$T = \frac{1}{2X} \left( 1 - \sqrt{1-4X} \right) = 1 + X + 2X^2 + 5X^3 + 14X^4 + 42X^5 + \cdots$$
where I have chosen the negative square root so that the equation makes sense (i.e. so that there are no negative powers of $X$, which are meaningless in this theory).
This tells us that a binary tree can be nil ($1$), that there is one binary tree with one datum (i.e. the tree which is a branch containing two empty trees), that there are two binary trees with two datums (the second datum is either in the left or the right branch), that there are 5 trees containing three datums (you might like to draw them all) etc.
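The coefficients in that last expansion are the Catalan numbers, and the functional equation $T = 1 + X\cdot T^2$ turns directly into a recurrence for them, since squaring a power series convolves its coefficients. A short sketch (my addition, in Python for brevity) recovering the series above:

```python
def tree_series(n_terms):
    """Coefficients of T = 1 + X*T^2: number of binary trees with n datums."""
    c = [1]  # c[0] = 1: the single Nil tree
    for n in range(1, n_terms):
        # coefficient of X^n in X*T^2 is the self-convolution of c at degree n-1
        c.append(sum(c[i] * c[n - 1 - i] for i in range(n)))
    return c
```

Running it reproduces $1, 1, 2, 5, 14, 42, \dots$, matching the expansion of $\frac{1}{2X}\left(1 - \sqrt{1-4X}\right)$.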
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 3,
"answer_id": 1
} |
Help me formalize this calculation I needed to find the number of five-digit numbers whose digits are distinct, taken from $0,1,2,3,4,5$, and divisible by 3. One of the proper methods can be: since $0+1+2+3+4+5 = 15$, we must leave out either $3$ or $0$ from this set. Leaving out $0$ there are $5!$ numbers, and leaving out $3$ there are $5!$ arrangements, $4!$ of which are 4-digit numbers (leading zero), so the total number is $5!+5!-4! =216$
I tried a rough estimate before the above (correct) solution. I need your help as I think it can be formalized and used as a valid argument.
There are $^6C_5\times5!=720$ total $5$-digit numbers (including the $4$-digit numbers, i.e. those with a leading zero). Roughly a third of them, i.e. $\approx 240$, should be divisible by three. Of these, roughly a tenth, $\approx 24$, should be $4$-digit, and hence the answer should be close to $\approx 216$.
I thought my answer should be close plus or minus some correction as this was very rough. The initial set of numbers has only $2$ of total $6$ numbers that are divisible by $3$ and it is not uniform and does not contain all digits $0$-$9$, but I get an exact number. How do I state this more formally? I need to know this as I use these rough calculations often.
"Formal" would be an argument that would allow me to replace the "approximately equal to" symbols in the third paragraph by equality symbols.
| Brian has already explained that an error in your reasoning happened to lead to the right result. Here's an attempt to fix the mistake and give a derivation of the correct result that has the "probabilistic" flavour of your initial estimate -- though the result could be argued to be closer to the correct solution in the first paragraph than to the initial estimate :-).
In a sense, you argued probabilistically and disregarded the correlation between the two events of the number being divisible by $3$ and the number starting with $0$. These are correlated, since fewer of the numbers that are divisible by $3$ can start with $0$ (since half of them don't contain the $0$) whereas all of the ones that aren't can.
Now what got you the right result was that you estimated, for the wrong reasons, that the probability of the number starting with $0$ was $1$ in $10$. The correct conditional probability, given that the number is divisible by $3$, is indeed
$$\frac12\cdot\frac15+\frac12\cdot0=\frac1{10}\;,$$
where the factors $1/2$ are the probabilities of taking out a $0$ or a $3$, respectively, to get a set of digits with sum divisible by $3$, and $1/5$ and $0$ are the probabilities of a zero being the leading digit in those two cases, respectively.
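A brute-force enumeration (an illustrative Python sketch, not part of the argument) confirms the exact count of $216$:

```python
# Brute-force check: 5-digit numbers (no leading zero) built from five
# *distinct* digits of {0,...,5}, divisible by 3.
from itertools import permutations

count = 0
for p in permutations(range(6), 5):
    if p[0] != 0 and sum(p) % 3 == 0:
        count += 1
print(count)  # 216
```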
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Math without infinity Does math require a concept of infinity?
For instance if I wanted to take the limit of $f(x)$ as $x \rightarrow \infty$, I could use the substitution $x=1/y$ and take the limit as $y\rightarrow 0^+$.
Is there a statement that can be stated without the use of any concept of infinity but which unavoidably requires it to be proved?
| Does math require an $\infty$? This assumes that all of math is somehow governed by a single set of universally agreed upon rules, such as whether infinity is a necessary concept or not. This is not the case.
I might claim that math does not require anything, even though a mathematician requires many things (such as coffee and paper to turn into theorems, etc etc). But this is a sharp (like a sharp inequality) concept, and I don't want to run conversation off a valuable road.
So instead I will claim the following: there are branches of math that rely on infinity, and other branches that do not. But most branches rely on infinity. So in this sense, I think that most of the mathematics that is practiced each day relies on a system of logic and a set of axioms that include infinities in various ways.
Perhaps a different question that is easier to answer is - "Why does math have the concept of infinity?" To this, I have a really quick answer - because $\infty$ is useful. It lets you take more limits, allows more general rules to be set down, and allows greater play for fields like Topology and Analysis.
And by the way - in your question you distinguish between $\lim _{x \to \infty} f(x)$ and $\lim _{y \to 0} f(\frac{1}{y})$. Just because we hide behind a thin curtain, i.e. pretending that $\lim_{y \to 0} \frac{1}{y}$ is just another name for infinity, does not mean that we are actually avoiding a conceptual infinity.
So to conclude, I say that math does not require $\infty$. If somehow, no one imagined how big things get 'over there' or considered questions like How many functions are there from the integers to such and such set, math would still go on. But it's useful, and there's little reason to ignore its existence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 3,
"answer_id": 1
} |
If a sequence of boundaries converges, do the spectrums of the enclosed regions also converge? A planar region will have associated to it a spectrum consisting of Dirichlet eigenvalues, or parameters $\lambda$ for which it is possible to solve the Dirichlet problem for the Laplacian operator,
$$ \begin{cases} \Delta u + \lambda u = 0 \\ u|_{\partial R} = 0 \end{cases}$$
I'm wondering, if we have a sequence of boundaries $\partial R_n$ converging pointwise towards $\partial R$, then will the spectrums also converge? (I make the notion of convergence formal in the following manner: $\cap_{N=1}^\infty l(\cup_{n=N}^\infty\partial R_n)=\partial R$; $\cap_{N=1}^\infty l(\cup_{n=N}^\infty\mathrm{spec}(R_n))=\mathrm{spec}( R)$, where $ l(\cdot)$ denotes the set of accumulation points of a set and $\mathrm{spec}(\cdot)$ denotes the spectrum of a region.)
One motivating pathological example is the sequence of boundaries, indexed by $n$, defined by the polar equations $r=1+\frac{1}{n}\sin(n^2\theta)$. The boundaries converge to the unit circle. However, since the gradient of any eigenfunction must be orthogonal to the region boundary (as it is a level set), the eigenfunctions can't possibly converge to anything (under any meaningful notion) and so it makes me question if it's even possible for the eigenvalues to do so.
If the answer is "no, the spectrum doesn't necessarily converge," a much broader question arises: what are necessary and sufficient conditions for it to converge? Intuitively, I imagine a necessary condition is that the curvature of the boundaries also converge appropriately, but I have no idea if that's sufficient. EDIT: Another interesting question is if the principal eigenvalue (the smallest nonzero one) can grow arbitrarily large.
| There is a domain monotonicity of Dirichlet eigenvalues: if domains $\Omega^1\supset\Omega^2\supset\ldots\supset\Omega^n\supset\ldots\ $ then the corresponding eigenvalues $\lambda_k^1\ge\lambda_k^2\ge\ldots\ge\lambda_k^n\ge...\ $ so convergence of curvatures are not necessary in this case. There are also lots of more general results on spectral stability problems for elliptic differential operators.
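As an illustrative toy model (my own addition, not taken from the literature mentioned above), the one-dimensional analogue makes the monotonicity explicit: the Dirichlet eigenvalues of $-u''=\lambda u$ on $[0,L]$ are $\lambda_k=(k\pi/L)^2$, so nested intervals give ordered eigenvalues and $L_n \to L$ gives convergence of the spectrum:

```python
import math

# 1-D analogue: the Dirichlet eigenvalues of -u'' = lambda u on [0, L]
# are lambda_k = (k*pi/L)^2.  Nested intervals give ordered eigenvalues,
# and L_n -> L gives convergence of the whole spectrum.
def eig(k, L):
    return (k * math.pi / L) ** 2

for L in (2.0, 1.5, 1.25, 1.001):    # shrinking domains approaching [0, 1]
    print(L, eig(1, L))               # first eigenvalue increases toward pi^2
```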
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
On the height of an ideal Which of the following inequalities hold for a ring $R$ and an ideal $I\subset R$?
$\operatorname{height}I\leq\dim R-\dim R/I$
$\operatorname{height}I\geq\dim R-\dim R/I$
| I think I have it: suppose $\mathrm{height}\;I=n$ and $\mathrm{dim}\;R/I=m$; then we have a chain
$\mathfrak{p}_0\subset\ldots\subset\mathfrak{p}_n\subset I\subset\mathfrak{p}_{n+1}\subset\ldots\subset\mathfrak{p}_{n+m}$
but in general $\mathrm{dim}\;R$ would be greater, so
$\mathrm{height}\;I+\mathrm{dim}\;R/I\leq\mathrm{dim}\;R$ holds
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is it possible to solve a separable equation a different way and still arrive at the same answer? I have the following equation
$$(xy^2 + x)dx + (yx^2 + y)dy=0$$ and I am told it is separable, but not knowing how that is, I went ahead and solved it using the Exact method.
Let $M = xy^2 + x $ and $N = yx^2 + y$
$$M_y = 2xy \text{ and } N_x = 2xy $$
$$ \int M\,dx = \int (xy^2 + x)\,dx = \frac{x^2y^2}{2} + \frac{x^2}{2} + g(y)$$
$$ \frac{\partial}{\partial y}\left(\frac{x^2y^2}{2} + \frac{x^2}{2} + g(y)\right) = x^2y + g'(y)$$
$$g'(y) = y$$
$$g(y) = y^2/2$$
the general solution then is
$$C = x^2y^2/2 + x^2/2 + y^2/2$$
Is this solution the same I would get if I had taken the Separate Equations route?
| We can also try it this way,
$$(xy^2 + x)dx + (yx^2 + y)dy=0$$
$$xdx +ydy +xy^2dx+yx^2dy$$
$$\frac{1}{2}(2xdx+2ydy) + \frac{1}{2}(2xy^2dx+2yx^2dy)=0$$
$$\frac{1}{2}d(x^2+y^2) + \frac{1}{2}d(x^2y^2) =0$$
$$x^2+y^2+ x^2y^2 +c=0$$
:)
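As a numerical cross-check (a sketch assuming the initial condition $y(1)=1$, which is my own arbitrary choice, not part of the question), one can integrate $dy/dx = -(xy^2+x)/(yx^2+y)$ with RK4 and verify that $x^2+y^2+x^2y^2$ stays constant along the solution:

```python
# Integrate dy/dx = -(x*y^2 + x)/(y*x^2 + y) from (1, 1) with RK4 and
# check that F(x, y) = x^2 + y^2 + x^2*y^2 stays constant.
def f(x, y):
    return -(x * y**2 + x) / (y * x**2 + y)

def F(x, y):
    return x**2 + y**2 + x**2 * y**2

x, y, h = 1.0, 1.0, 1e-3
F0 = F(x, y)                      # F = 3 at the starting point
while x < 1.5:
    k1 = f(x, y)
    k2 = f(x + h/2, y + h/2 * k1)
    k3 = f(x + h/2, y + h/2 * k2)
    k4 = f(x + h, y + h * k3)
    y += h / 6 * (k1 + 2*k2 + 2*k3 + k4)
    x += h
print(F0, F(x, y))                # both approximately 3
```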
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Volume of Region in 5D Space I need to find the volume of the region defined by
$$\begin{align*}
a^2+b^2+c^2+d^2&\leq1,\\
a^2+b^2+c^2+e^2&\leq1,\\
a^2+b^2+d^2+e^2&\leq1,\\
a^2+c^2+d^2+e^2&\leq1 &\text{ and }\\
b^2+c^2+d^2+e^2&\leq1.
\end{align*}$$
I don't necessarily need a full solution but any starting points would be very useful.
| There's reflection symmetry in each of the coordinates, so the volume is $2^5$ times the volume for positive coordinates. There's also permutation symmetry among the coordinates, so the volume is $5!$ times the volume with the additional constraint $a\le b\le c\le d\le e$. Then it remains to find the integration boundaries and solve the integrals.
The lower bound for $a$ is $0$. The upper bound for $a$, given the above constraints, is attained when $a=b=c=d=e$, and is thus $\sqrt{1/4}=1/2$. The lower bound for $b$ is $a$, and the upper bound for $b$ is again $1/2$. Then it gets slightly more complicated. The lower bound for $c$ is $b$, but for the upper bound for $c$ we have to take $c=d=e$ with $b$ given, which yields $\sqrt{(1-b^2)/3}$. Likewise, the lower bound for $d$ is $c$, and the upper bound for $d$ is attained for $d=e$ with $b$ and $c$ given, which yields $\sqrt{(1-b^2-c^2)/2}$. Finally, the lower bound for $e$ is $d$ and the upper bound for $e$ is $\sqrt{1-b^2-c^2-d^2}$. Putting it all together, the desired volume is
$$V_5=2^55!\int_0^{1/2}\int_a^{1/2}\int_b^{\sqrt{(1-b^2)/3}}\int_c^{\sqrt{(1-b^2-c^2)/2}}\int_d^{\sqrt{1-b^2-c^2-d^2}}\mathrm de\mathrm dd\mathrm dc\mathrm db\mathrm da\;.$$
That's a bit of a nightmare to work out; Wolfram Alpha gives up on even small parts of it, so let's do the corresponding thing in $3$ and $4$ dimensions first. In $3$ dimensions, we have
$$
\begin{eqnarray}
V_3
&=&
2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\int_b^{\sqrt{1-b^2}}\mathrm dc\mathrm db\mathrm da
\\
&=&
2^33!\int_0^{\sqrt{1/2}}\int_a^{\sqrt{1/2}}\left(\sqrt{1-b^2}-b\right)\mathrm db\mathrm da
\\
&=&
2^33!\int_0^{\sqrt{1/2}}\frac12\left(\arcsin\sqrt{\frac12}-\arcsin a-a\sqrt{1-a^2}+a^2\right)\mathrm da
\\
&=&
2^33!\frac16\left(2-\sqrt2\right)
\\
&=&
8\left(2-\sqrt2\right)\;.
\end{eqnarray}$$
I've worked out part of the answer for $4$ dimensions. There are some miraculous cancellations that make me think that a) there must be a better way to do this (perhaps anon's answer, if it can be fixed) and b) this might be workable for $5$ dimensions, too. I have other things to do now, but I'll check back and if there's no correct solution yet I'll try to finish the solution for $4$ dimensions.
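As a sanity check on $V_3 = 8(2-\sqrt2) \approx 4.686$, here is a quick Monte Carlo estimate (an illustrative sketch; the seed and sample size are arbitrary choices) over the cube $[-1,1]^3$:

```python
import random, math

# Monte Carlo check of V_3 = 8*(2 - sqrt(2)): sample the cube [-1, 1]^3
# and count points satisfying all three pairwise constraints.
random.seed(0)
n, hits = 200_000, 0
for _ in range(n):
    a = random.uniform(-1, 1)
    b = random.uniform(-1, 1)
    c = random.uniform(-1, 1)
    if a*a + b*b <= 1 and a*a + c*c <= 1 and b*b + c*c <= 1:
        hits += 1
estimate = 8 * hits / n                   # cube volume is 2^3 = 8
print(estimate, 8 * (2 - math.sqrt(2)))   # both approximately 4.69
```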
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/50953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "48",
"answer_count": 3,
"answer_id": 2
} |
How to show that this series does not converge uniformly on the open unit disc? Given the series $\sum_{k=0}^\infty z^k $, it is easy to see that it converges locally, but how do I go about showing that it does not also converge uniformly on the open unit disc? I know that for it to converge uniformly on the open disc, $\sup_{|z|<1}|g(z) - g_k(z)|$ must tend to zero as $k$ goes to infinity. However, I am finding it difficult to show that this supremum does not go to zero as $k$ goes to infinity.
Edit: Fixed confusing terminology as mentioned in an answer.
| Confine attention to real $x$ in the interval $0<x<1$.
Let
$$s_n(x)=1+x+x^2+\cdots +x^{n-1}.$$
If we use $s_n(x)$ to approximate the sum, the truncation error is $>x^n$.
Choose a positive $\epsilon$, where for convenience $\epsilon<1$. We want to make the truncation error $<\epsilon$, so we need
$$x^n <\epsilon,\qquad \text{or equivalently}\qquad n >\frac{|\ln(\epsilon)|}{|\ln(x)|}.$$
Since $\ln x \to 0$ as $x \to 1^{-}$, the required $n$ grows without bound as $x\to 1^{-}$.
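To see numerically how the required $n$ blows up as $x\to 1^-$, here is a short sketch (the choice $\epsilon = 0.01$ is arbitrary):

```python
import math

# Terms needed so that the truncation error x^n drops below eps:
# n > |ln(eps)| / |ln(x)|, which blows up as x -> 1-.
eps = 0.01
ns = []
for x in (0.5, 0.9, 0.99, 0.999):
    n = math.ceil(math.log(eps) / math.log(x))
    ns.append(n)
    print(x, n)   # n grows without bound as x approaches 1
```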
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
} |
Calculating point on a circle, given an offset? I have what seemed like a very simple issue, but I just cannot figure it out. I have the following circles around a common point:
The Green and Blue circles represent circles that orbit the center point. I have been able to calculate the distance/radius from the point to the individual circles, but I am unable to plot the next point on either circle, given an angle from the center point. Presently, my calculation looks like the following:
The coordinates of one of my circles is:
y1 = 152
x1 = 140.5
And my calculation for the next point, 1 degree from the starting point (140.5,152) is:
distance = SQRT((160-x1)^2 + (240-y1)^2) = 90.13
new x = 160 - (distance x COS(1 degree x (PI / 180)))
new y = 240 - (distance x SIN(1 degree x (PI / 180)))
My new x and y give me crazy results, nothing even close to my circle.
I can't figure out how to calculate the new position, given the offset of 160, 240 being my center, and what I want to rotate around. Where am I going wrong?
Update:
I have implemented what I believe to be the correct formula, but I'm only getting a half circle, e.g.
x1 = starting x coordinate, or updated coordinate
y1 = starting y coordinate, or updated y coordinate
cx = 100 (horizontal center)
cy = 100 (vertical center)
radius = SQRT((cx - x1)^2 + (cy - y1)^2)
arc = ATAN((y1 - cy) / (x1 - cx))
newX = cx + radius * COS(arc - PI - (PI / 180.0))
newY = cy + radius * SIN(arc - PI - (PI / 180.0))
Set the values so next iteration of drawing, x1 and y1 will be the new
base for the calculation.
x1 = newX
y1 = newY
The circle begins to draw at the correct coordinates, but once it hits 180 degrees, it jumps back up to zero degrees. The dot represents the starting point. Also, the coordinates are going counterclockwise, when they need to go clockwise. Any ideas?
| We can modify 6312's suggestion a bit to reduce the trigonometric effort. The key idea is that the trigonometric functions satisfy a recurrence relation when integer multiples of angles are considered.
In particular, we have the relations
$$\cos(\phi-\epsilon)=\cos\,\phi-(\mu\cos\,\phi-\nu\sin\,\phi)$$
$$\sin(\phi-\epsilon)=\sin\,\phi-(\mu\sin\,\phi+\nu\cos\,\phi)$$
where $\mu=2\sin^2\frac{\epsilon}{2}$ and $\nu=\sin\,\epsilon$. (These are easily derived through complex exponentials...)
In any event, since you're moving by constant increments of $1^\circ$; you merely have to cache the values of $\mu=2\sin^2\frac{\pi}{360}\approx 1.523048436087608\times10^{-4}$ and $\nu=\sin\frac{\pi}{180}\approx 1.745240643728351\times 10^{-2}$ and apply the updating formulae I gave, where your starting point is $\cos\,\phi=\frac{140.5-160}{\sqrt{(140.5-160) ^2+(152-240)^2}}\approx-0.2163430618226664$ and $\sin\,\phi=\frac{152-240}{\sqrt{(140.5-160) ^2+(152-240)^2}}\approx-0.9763174071997252$
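Here is a minimal sketch of the updating scheme (Python for illustration; variable names are my own): starting from angle $0$ and stepping $\phi \mapsto \phi - \epsilon$ ninety times should land on $(\cos(-90^\circ), \sin(-90^\circ)) = (0,-1)$:

```python
import math

# Cache mu = 2*sin^2(eps/2) and nu = sin(eps) once, then update the
# direction (cos, sin) with a few multiply-adds per step instead of
# calling cos/sin for every frame.
eps = math.pi / 180                      # one-degree step
mu = 2 * math.sin(eps / 2) ** 2
nu = math.sin(eps)

c, s = 1.0, 0.0                          # cos(0), sin(0)
for _ in range(90):                      # ninety steps of phi -> phi - eps
    c, s = c - (mu * c - nu * s), s - (mu * s + nu * c)
print(c, s)   # approximately (0, -1): cos and sin of -90 degrees
```

Note the simultaneous tuple assignment: both updates must use the old values of `c` and `s`.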
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Take any number and keep appending 1's to the right of it. Are there an infinite number of primes in this sequence? Ignoring sequences that are always factorable such as starting with 11, Can we take any other number such as 42 and continually append 1s (forming the sequence {42, 421, 4211, ...}) to get a sequence that has an infinite number of primes in it?
| Unless prevented by congruence restrictions, a sequence that grows exponentially, such as Mersenne primes or repunits or this variant on repunits, is predicted to have about $c \log(n)$ primes among its first $n$ terms according to "probability" arguments. Proving this prediction for any particular sequence is usually an unsolved problem.
There is more literature (and more algebraic structure) available for the Mersenne case but the principle is the same for other sequences.
http://primes.utm.edu/mersenne/heuristic.html
Bateman, P. T.; Selfridge, J. L.; and Wagstaff, S. S. "The New Mersenne Conjecture." Amer. Math. Monthly 96, 125-128, 1989
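For what it's worth, a quick trial-division check of the first few terms of the sequence starting from $42$ (an illustrative sketch only; it says nothing about the heuristic):

```python
# First few terms of the sequence starting at 42 with 1s appended,
# checked for primality by trial division.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 42
for _ in range(8):
    n = 10 * n + 1
    print(n, is_prime(n))   # 421 and 4211 are prime; 42111 is not
```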
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Yet another sum involving binomial coefficients Let $k,p$ be positive integers. Is there a closed form for the sums
$$\sum_{i=0}^{p} \binom{k}{i} \binom{k+p-i}{p-i}\text{, or}$$
$$\sum_{i=0}^{p} \binom{k-1}{i} \binom{k+p-i}{p-i}\text{?}$$
(where 'closed form' should be interpreted as a representation which is free of sums, binomial coefficients, or any other hypergeometric functions).
| Let's examine the first sum. I can't seem to find a closed form, but there is something very nice with the generating series. They are simple, and symmetrical with respect to the variables $p$ and $k$.
Result:
Your sum is the $k^{th}$ coefficient of $\frac{(1+x)^{p}}{\left(1-x\right)^{p+1}},$ and also the $p^{th}$ coefficient of $\frac{(1+x)^{k}}{\left(1-x\right)^{k+1}}.$
The Generating Series for the variable $p$
Consider
$$F(x)=\sum_{p=0}^{\infty}\sum_{i=0}^{p}\binom{k}{i}\binom{k+p-i}{p-i}x^{p}.$$
Changing the order of summation, this becomes
$$F(x)=\sum_{i=0}^{\infty}\binom{k}{i}\sum_{p=i}^{\infty}\binom{k+p-i}{p-i}x^{p},$$
and then shifting the second sum we have
$$F(x)=\sum_{i=0}^{\infty}\binom{k}{i}x^{i}\sum_{p=0}^{\infty}\binom{k+p}{p}x^{p}.$$ Since the rightmost sum is $\frac{1}{(1-x)^{k+1}}$ we see that the generating series is
$$F(x)=\frac{1}{(1-x)^{k+1}}\sum_{i=0}^{\infty}\binom{k}{i}x^{i}=\frac{\left(1+x\right)^{k}}{(1-x)^{k+1}}$$
by the binomial theorem.
The Generating Series for the variable $k$:
Let's consider the other generating series with respect to the variable $k$. Let
$$G(x)=\sum_{k=0}^{\infty}\sum_{i=0}^{p}\binom{k}{i}\binom{k+p-i}{p-i}x^{k}.$$
Then
$$G(x)=\sum_{i=0}^{p}\sum_{k=i}^{\infty}\binom{k}{i}\binom{k+p-i}{p-i}x^{k}=\sum_{i=0}^{p}x^{i}\sum_{k=0}^{\infty}\binom{k+i}{i}\binom{k+p}{p-i}x^{k}.$$
Splitting up the binomial coefficients into factorials, this is
$$=\sum_{i=0}^{p}x^{i}\sum_{k=0}^{\infty}\frac{(k+i)!}{k!i!}\frac{(k+p)!}{(k+i)!(p-i)!}x^{k}=\sum_{i=0}^{p}\frac{x^{i}p!}{i!\left(p-i\right)!}\sum_{k=0}^{\infty}\frac{\left(k+p\right)!}{k!p!}x^{k}.$$
Consequently,
$$G(x)=\frac{(1+x)^{p}}{\left(1-x\right)^{p+1}}.$$
Comments: I am not sure why the generating series has this symmetry. Perhaps you can use this property to tell you more about the sum/generating series.
Hope that helps,
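A quick numerical check of the symmetry predicted by the two generating series (an illustrative sketch; the values that come out match the Delannoy numbers):

```python
from math import comb

# S(k, p) = sum_i C(k, i) * C(k+p-i, p-i); the generating-series
# symmetry above predicts S(k, p) == S(p, k).
def S(k, p):
    return sum(comb(k, i) * comb(k + p - i, p - i) for i in range(p + 1))

print(S(2, 2))                                                      # 13
print(all(S(k, p) == S(p, k) for k in range(8) for p in range(8)))  # True
```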
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
How to find the least $N$ such that $N \equiv 7 \mod 180$ or $N \equiv 7 \mod 144$ but $N \equiv 1 \mod 7$? How to approach this problem:
N is the least number such that $N \equiv 7 \mod 180$ or $N \equiv 7 \mod 144$ but $N \equiv 1 \mod 7$.Then which of the these is true:
*
*$0 \lt N \lt 1000$
*$1000 \lt N \lt 2000$
*$2000 \lt N \lt 4000$
*$N \gt 4000$
Please explain your idea.
ADDED: The actual problem in my paper uses "or"; the "and" was my mistake, but I think I learned something new owing to that. Thanks all for being patient, and apologies for the inconvenience.
| (1) For the original version of the question $\rm\:mod\ 180 \ $ and $\rm\: mod\ 144\::$
$\rm\: 144,\:180\ |\ N-7\ \Rightarrow\ 720 = lcm(144,180)\ |\ N-7\:.\:$
So, $\rm\: mod\ 7:\ 1\equiv N = 7 + 720\ k\ \equiv -k\:,\:$ so $\rm\:k\equiv -1\equiv 6\:.$
Thus $\rm\: N = 7 + 720\ (6 + 7\ j) =\: 4327 + 5040\ j\:,\:$ so $\rm\ N\ge0\ \Rightarrow\ N \ge 4327\:.$
(2) For the updated simpler version $\rm\:mod\ 180\ $ or $\rm\ mod\ 144\:,\:$ the same method shows that
$\rm\: N = 7 + 180\ (3+ 7\ j)\:$ or $\rm\:N = 7 + 144\ (2 + 7\ j)\:,\:$ so the least$\rm\ N> 0\:$ is $\rm\:7 + 144\cdot 2 = 295\:.$
SIMPLER $\rm\ N = 7+144\ k\equiv 4\ k\ (mod\ 7)\:$ assumes every value $\rm\:mod\ 7\:$ for $\rm\:k = 0,1,2,\:\cdots,6\:,\:$ and all these values satisfy $\rm\:0 < N < 1000\:.\:$ Presumably this is the intended "quick" solution.
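Both cases can be confirmed by direct search (a brute-force sketch for illustration):

```python
# Direct search for both readings of the problem: the "and" version
# forces N = 4327, while the "or" version already admits N = 295.
n_and = next(n for n in range(1, 10**5)
             if n % 180 == 7 and n % 144 == 7 and n % 7 == 1)
n_or = next(n for n in range(1, 10**5)
            if (n % 180 == 7 or n % 144 == 7) and n % 7 == 1)
print(n_and, n_or)   # 4327 295
```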
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Are all numbers real numbers? If I go into the woods and pick up two sticks and measure the ratio of their lengths, it is conceivable that I could only get a rational number, namely if the universe was composed of tiny lego bricks. It's also conceivable that I could get any real number. My question is, can there mathematically exist a universe in which these ratios are not real numbers? How do we know that the real numbers are all the numbers, and that they don't have "gaps" like the rationals?
I want to know if what I (or most people) intuitively think of as the length of an idealized physical object can be a non-real number. Is it possible to have more than a continuum of distinct ordered points on a line of length 1? Why do mathematicians mostly use only $\mathbb{R}$ for calculus etc., if a number doesn't have to be real?
By universe I just mean such a thing as Euclidean geometry, and by exist that it is consistent.
| With regard to the OP's question "Can there mathematically exist a universe in which these ratios are not real numbers?", to provide a meaningful answer, the question needs to be reinterpreted first. Obviously the real numbers, being a mathematical model, do not coincide with anything in "the universe out there". The question is meaningful nonetheless when formulated as follows: What is the most appropriate number system to describe the universe if we want to understand it mathematically?
Put this way, one could argue that the hyperreal number system is more appropriate for the task than the real number system, since it contains infinitesimals which are useful in any mathematical modeling of phenomena requiring the tools of the calculus, which certainly includes a large slice of mathematical physics. For a gentle introduction to the hyperreals see Keisler's freshman textbook Elementary Calculus.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 5
} |
Can a prime in a Dedekind domain be contained in the union of the other prime ideals? Suppose $R$ is a Dedekind domain with a infinite number of prime ideals. Let $P$ be one of the nonzero prime ideals, and let $U$ be the union of all the other prime ideals except $P$. Is it possible for $P\subset U$?
As a remark, if there were only finitely many prime ideals in $R$, the above situation would not be possible by the "Prime Avoidance Lemma", since $P$ would have to then be contained in one of the other prime ideals, leading to a contradiction.
The discussion at the top of pg. 70 in Neukirch's "Algebraic Number Theory" motivates this question.
Many thanks,
John
| If $R$ is the ring of integers $O_K$ of a finite extension $K$ of $\mathbf{Q}$, then I don't think this can happen. The class of the prime ideal $P$ is of finite order in the class group, say $n$. This means that the ideal $P^n$ is principal. Let $\alpha$ be a generator of $P^n$. Then $\alpha$ doesn't belong to any prime ideal other than $P$, because at the level of ideals inclusion implies (reverse) divisibility, and the factorization of ideals is unique.
This argument works for all the rings, where we have a finite class group, but I'm too ignorant to comment, how much ground this covers :-(
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25",
"answer_count": 3,
"answer_id": 1
} |
How to add compound fractions? How to add two compound fractions with fractions in numerator like this one:
$$\frac{\ \frac{1}{x}\ }{2} + \frac{\ \frac{2}{3x}\ }{x}$$
or fractions with fractions in denominator like this one:
$$\frac{x}{\ \frac{2}{x}\ } + \frac{\ \frac{1}{x}\ }{x}$$
| Yet another strategy:
\begin{align}
\frac{\frac1x}2+\frac{\frac2{3x}}x&=\frac1{2x}+\frac2{3x^2}\\
&=\frac{3x}{6x^2}+\frac4{6x^2}=\frac{3x+4}{6x^2}\,.
\end{align}
What did I do? Given is the sum of two fractions, and I multiplied top-and-bottom of the first by $x$, and top-and-bottom of the second by $3x$. Second step, find the minimal common denominator, which is $6x^2$, and on each of your current fractions, multiply top-and-bottom by a suitable quantity to get the denominators equal. Now add.
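A quick numerical spot-check of the result at a sample point (the choice $x=2$ is arbitrary):

```python
# Evaluate both sides of the simplification at x = 2; both should be 5/12.
x = 2.0
lhs = (1 / x) / 2 + (2 / (3 * x)) / x
rhs = (3 * x + 4) / (6 * x**2)
print(lhs, rhs)   # both approximately 5/12
```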
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Neglecting higher order terms in expansion Suppose we have a function $v$ of $x$ with a minimum at $x=0$. We have, for $x$ close to zero, $$v'(x) = v'(0) +xv''(0) +\frac{x^2}{2}v'''(0)+\cdots$$ Then as $v'(0)=0$ $$v'(x)\approx xv''(0)$$ if $$|xv'''(0)|\ll v''(0)$$
Which is fine. I am unable to understand this statement:
Typically each extra derivative will bring with it a factor of $1/L $
where $L$ is the distance over which the function changes by a large
fraction. So $$x\ll L$$
This is extracted from a physics derivation, and I cannot get how they tacked on a factor of $1/L$
If each derivative contributes $\frac{1}{L}$, then $|xv'''| \ll v'' \implies x\left(\frac{1}{L}\right)^3 \ll \left(\frac{1}{L}\right)^2$. Divide both sides by $\left(\frac{1}{L}\right)^3$ and this becomes $x \ll L$.
That $\frac{1}{L}$ term is referring to the change in the function according to the difference method of derivatives (definition via difference quotients) given in Wikipedia. If you calculate out the quotient between the second and third derivatives (or first and second), it should approximate to the result above given the context.
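To make the scaling concrete, take for example $v(x)=\cos(x/L)$ (my own illustrative choice): each derivative brings a factor $1/L$, and the ratio $|xv'''|/|v''(0)|$ is at most $x/L$:

```python
# For v(x) = cos(x/L): |v''(0)| = 1/L^2 and |v'''| <= 1/L^3, so the
# ratio |x v'''| / |v''(0)| is at most x/L -- small exactly when x << L.
L = 10.0
v2 = 1 / L**2            # |v''(0)|
v3 = 1 / L**3            # bound on |v'''|
for x in (0.1, 1.0, 10.0, 100.0):
    print(x, x * v3 / v2)     # ratio = x / L
```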
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
A Math function that draws water droplet shape? I just need a quick reference. What is the function for this kind of shape?
Thanks.
| You may also try Maple to find a kind of water droplet as follows:
[> with(plots):
[> implicitplot3d(x^2+y^2+z^4 = z^2, x = -1 .. 1, y = -1 .. 1, z = -1 .. 0, numpoints = 50000, lightmodel = light2, color = blue, axes = boxed);
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 4
} |
If $f'$ tends to a positive limit as $x$ approaches infinity, then $f$ approaches infinity Some time ago, I asked this here. A restricted form of the second question could be this:
If $f$ is a function with continuous first derivative in $\mathbb{R}$ and such that $$\lim_{x\to \infty} f'(x) =a,$$ with $a\gt 0$, then $$\lim_{x\to\infty}f(x)=\infty.$$
To prove it, I tried this:
There exist $x_0\in \mathbb{R}$ such that for $x\geq x_0$,
$$f'(x)\gt \frac{a}{2}.$$
There exist $\delta_0\gt 0$ such that for $x_0\lt x\leq x_0+ \delta_0$
$$\begin{align*}\frac{f(x)-f(x_0)}{x-x_0}-f'(x_0)&\gt -\frac{a}{4}\\
\frac{f(x)-f(x_0)}{x-x_0}&\gt f'(x_0)-\frac{a}{4}\\
&\gt \frac{a}{2}-\frac{a}{4}=\frac{a}{4}\\
f(x)-f(x_0)&\gt \frac{a}{4}(x-x_0)\end{align*}.$$
We can assume that $\delta_0\geq 1$. If $\delta_0 \lt 1$, then $x_0+2-\delta_0\gt x_0$ and then $$f'(x_0+2-\delta_0)\gt \frac{a}{2}.$$
Now, there exist $\delta\gt 0$ such that for $x_0+2-\delta_0\lt x\leq x_0+2-\delta_0+\delta$ $$f(x)-f(x_0+2-\delta_0)\gt \frac{a}{4}(x-(x_0+2-\delta_0))= \frac{a}{4}(x-x_0-(2-\delta_0))\gt \frac{a}{4}(x-x_0).$$ It is clear that $x\in (x_0,x_0+2-\delta_0+\delta]$ and $2-\delta_0+\delta\geq 1$.
Therefore, we can take $x_1=x_0+1$. Then $f'(x_1)\gt a/2$ and then there exist $\delta_1\geq 1$ such that for $x_1\lt x\leq x_1+\delta_1$ $$f(x)-f(x_1)\gt \frac{a}{4}(x-x_1).$$
Take $x_2=x_1+1$ and so on. If $f$ is bounded, $(f(x_n))_{n\in \mathbb{N}}$ is an increasing bounded sequence and therefore has a convergent subsequence. This implies that the sequence $(x_n)$ given by
$$x_{n+1}=x_n+1$$ has a Cauchy subsequence, which is a contradiction. Therefore $\lim_{x\to \infty} f(x)=\infty$.
I want to know if this is correct, and if there is a simpler way to prove this. Thanks.
| I will try to prove this in a different way, which can be much simpler - using visualization.
Imagine how will a function look if it has a constant, positive slope -
A straight line, with a positive angle with the positive x axis.
Although this can be imagined, I am attaching a simple pic -
(Plot of our imaginative function - $f(x)$ vs $x$)
As per the situation in the question, for $f(x)$ the slope exists (and is finite) at all points, so it means that the function is continuous. Since the slope is also constant at $\infty$, $f$ has to be linear at $\infty$. Thus, the graph of the function should be similar to the above graph.
(assume the value of x to be as large as you can imagine.)
Hence, putting the above situation mathematically, we have,
If $\lim_{x\to \infty}\ f'(x) =a\qquad(with\ a>0)$
then $\lim_{x\to\infty}\ f(x)=\infty.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 4
} |
Moving a rectangular box around a $90^\circ$ corner I have seen quite a few problems like this one presented below. The idea is how to determine if it is possible to move a rectangular 3d box through the corner of a hallway knowing the dimensions of all the objects given.
Consider a hallway with width $1.5$ and height $2.5$ which has a corner of $90^\circ$. Determine if a rectangular box of dimensions $4.3\times 0.2\times 0.07$ can be taken on the hallway around the corner.
I know that intuitively, the principle behind this is similar to the 2 dimensional case (illustrated here), but how can I solve this rigorously?
| Here is an attempt based on my experiences with furniture moving. The long dimension a=4.3 will surely be horizontal. One of the short dimensions, call it b will be vertical, the remaining dimension c will be horizontal. The box must be as "short" as possible during the passage at the corner. So, one end of the box will be lifted:
We calculate the projection L = x1 + x2 of the lifted box onto the horizontal plane. Now we move the shortened box around the corner. Here is an algorithm as a Python program (I hope it is readable):
import math

# hallway dimensions:
height = 2.5
width = 1.5
def box(a, b, c):
# a = long dimension of the box = 4.3, horizontal
# b = short dimension, 0.2 (or 0.07), vertical
# c = the other short dimension, horizontal
d = math.sqrt(a*a + b*b) # diagonal of a x b rectangle
alpha = math.atan(b/a) # angle of the diagonal in axb rectangle
omega = math.asin(height/d) - alpha # lifting angle
x1 = b * math.sin(omega) # projection of b to the floor
x2 = a * math.cos(omega) # projection of a to the floor
L = x1 + x2 # length of the lifted box projected to the floor
sin45 = math.sin(math.pi/4.0)
y1 = c * sin45 # projection of c to the y axis
y2 = L / 2 * sin45 # projection of L/2 to the y axis
w = y1 + y2 # box needs this width w
ok = (w <= width) # box passes if its width w is less than the
# the available hallway width
print "w =", w, ", pass =", ok
return ok
def test():
# 1) try 0.07 as vertical dimension:
box(4.3, 0.07, 0.2) # prints w= 1.407, pass= True
# 2) try 0.2 as vertical dimension:
box(4.3, 0.2, 0.07) # prints w= 1.365, pass= True
test()
So, the box can be transported around the corner either way (either 0.2 or 0.07 vertical).
Adding Latex formulae for the pure mathematician:
$$
\begin{align*}
d= & \sqrt{a^{2}+b^{2}}\\
\alpha= & \arctan(b/a)\\
\omega= & \arcsin(height/d)-\alpha\\
L= & x_{1}+x_{2}=b\sin\omega+a\cos\omega\\
w= & y_{1}+y_{2}=\frac{c}{\sqrt{2}}+\frac{L}{2\sqrt{2}}
\end{align*}
$$
The box can be transported around the corner if $w \le width$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Number of ways a natural number can be written as a sum of smaller natural numbers It is easy to realize that given a natural number N, the number of doublets that sum to N is
$\frac{N+(-1)(N \pmod 2)}{2}$, so I thought I could reach some recursive formula, in the sense that having found the number of doublets I could find the number of triplets and so on. For example:
N=3: the only doublet is 2+1=3 (not said yet, but 2+1 and 1+2 count as one). Then I could count the number of ways the number 2 can be expressed as the indicated sum and get the total number of ways 3 can be written as a sum. But this seems not so efficient, so I was wondering if there is another way to attack the problem, and if there is some reference for it, such as its known uses. Once I read that this has a chaotic behavior, and I also read it was used in probability, but I don't remember where I got that information. So if you know something I would be grateful to be notified; thanks in advance.
| You are asking about integer partitions. This is a well studied topic and you can look at http://en.wikipedia.org/wiki/Integer_partitions for details.
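For anyone who wants to experiment, here is a small sketch (my own, not from the linked article) that computes the partition numbers with the standard dynamic-programming recurrence:

```python
def partition_counts(n):
    """p[m] = number of ways to write m as a sum of positive integers,
    order ignored (the classic coin-change style DP)."""
    p = [0] * (n + 1)
    p[0] = 1                          # the empty sum
    for part in range(1, n + 1):      # allow parts of size `part`
        for m in range(part, n + 1):
            p[m] += p[m - part]
    return p

counts = partition_counts(10)
# counts[3] == 3:  3, 2+1, 1+1+1
```

Note that the question counts sums of *smaller* numbers only, which excludes the one-part partition of $n$ itself, so that count is `counts[n] - 1`.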
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
A good book on Statistical Inference? Anyone can suggest me one or more good books on Statistical Inference (estimators, UMVU estimators, hypothesis testing, UMP test, interval estimators, ANOVA one-way and two-way...) based on rigorous probability/measure theory?
I've checked some classical books on this topic but apparently all start from scratch with an elementary probability theory.
| Dienst's recommendations above are all good, but a classic text you need to check out is S.S. Wilks' Mathematical Statistics: a complete theoretical treatment by one of the subject's founding fathers. It's out of print and quite hard to find, but if you're really interested in this subject, it's well worth hunting down.
Be sure you get the hardcover 1963 Wiley edition; there's a preliminary mimeographed set of Princeton lecture notes from 1944 by the same author and with the same title. It's not the same book; it's much less complete and more elementary. Make sure you get the right one!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 2
} |
Ways of building groups besides direct, semidirect products? Let's say we have a group $G$ containing a normal subgroup $H$. What are the possible relationships we can have between $G$, $H$, and $G/H$? Looking at groups of small order, it seems to always be the case that $G \cong G/H \times H$ or $G \cong (G/H) \ltimes H$. What, if any, other constructions/relations are possible? And why is it the case that there are or aren't any other possible constructions/relations (if this question admits a relatively elementary answer)?
| I don't believe that this question admits elementary answer. The two ways, direct product and semidirect product, give various groups but not all.
As per my experience with small groups, the complexity of constructions of groups lies mainly in $p$-groups. For $p$-groups of order $p^n$, $n>4$ (I think), there are always some groups which cannot be semidirect products of smaller groups.
One method, is using generators and relations.
Write generators and relations of normal subgroup $H$, and quotient $G/H$.
Choose some elements of $G$ whose images are generators of $G/H$; make single choice for each generators.
So this pullback of generators, together with the generators of $H$, gives generators of $G$. We only have to determine relations. Relations of $H$ are also relations of $G$. The other relations are obtained by considering relations of $G/H$ and their pullbacks.
Not every pullback of the relations of $G/H$ gives a group of order $|G|$; the order may come out smaller. Moreover, different choices of relations may give isomorphic groups.
For best elementary examples by this method (generators and relations), see constructions of non-abelian groups of order $8$; (Ref. the excellent book "An Introduction to The Theory of Groups : Joseph Rotman")
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Continuity of this function at $x=0$ The following function is not defined at $x=0$:
$$f(x) = \frac{\log(1+ax) - \log(1-bx)}{x} .$$
What would be the value of $f(0)$ so that it is continuous at $x=0$?
| Do you want to evaluate the limit at $0$? Then you can see that
\begin{align*}
\lim_{x \to 0} \biggl(\frac{1+ax}{1-bx}\biggr)^{1/x} &=\lim_{x \to 0} \biggl(1+ \frac{(b+a)x}{1-bx}\biggr)^{1/x} \\\ &=\lim_{x \to 0} \biggl(1+ \frac{(b+a)x}{1-bx}\biggr)^{\small \frac{1-bx}{(b+a)x} \cdot \frac{(b+a)x}{x \cdot (1-bx)}} \\\ &=e^{\small\displaystyle\tiny\lim_{x \to 0} \frac{(b+a)x}{x\cdot (1-bx)}} = e^{b+a} \qquad \qquad \Bigl[ \because \small \lim_{u \to 0} (1+u)^{1/u} =e \Bigr]
\end{align*}
Since $f(x) = \log \Bigl(\frac{1+ax}{1-bx}\Bigr)^{1/x}$, it follows that $f(x) \to b+a$ as $x \to 0$, so defining $f(0) = a+b$ makes $f$ continuous at $0$.
Please see this post: Solving $\lim\limits_{x \to 0^+} \frac{\ln[\cos(x)]}{x}$ as a similar kind of methodology is used to solve this problem.
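The limit $a+b$ can also be checked numerically (my own sketch; the values $a=2$, $b=3$ are arbitrary):

```python
import math

def f(x, a, b):
    return (math.log(1 + a*x) - math.log(1 - b*x)) / x

a, b = 2.0, 3.0
vals = [f(10.0**-k, a, b) for k in range(1, 7)]
# vals tends to a + b = 5 as x -> 0
```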
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Do addition and multiplication have arity? Many books classify the standard four arithmetical functions of addition, subtraction, multiplication, and division as binary (in terms of arity). But, "sigma" and "product" notation often writes just one symbol at the front, and indexes those symbols which seemingly makes expressions like $+(2, 3, 4)=9$ meaningful. Of course, we can't do something similar for division and subtraction, since they don't associate, but does the $+$ symbol in the above expression qualify as the same type of expression as when someone writes $2+4=6$? Do addition and multiplication qualify as functions which don't necessarily have a fixed arity, or do they actually have a fixed arity, and thus instances of sigma and product notation should get taken as abbreviation of expressions involving binary functions? Or is the above question merely a matter of perspective? Do we get into any logical difficulties if we regard addition and multiplication as $n$-ary functions, or can we only avoid such difficulties if we regard addition and multiplication as binary?
| There are no logical difficulties passing back and forth between binary associative operations and their higher-arity extensions. However, a theorem of Sierpinski (Fund. Math., 33 (1945) 169-73) shows that higher-order operations are not needed: every finitary operation may be expressed as a composition of binary operations. The proof is especially simple for operations on a finite set $\rm\:A\:.\:$ Namely, if $\rm\:|A| = n\:$ then we may encode $\rm\:A\:$ by $\rm\:\mathbb Z/n\:,\:$ the ring of integers $\rm\:mod\ n\:,\:$ allowing us to employ Lagrange interpolation to represent any finitary operation as a finite composition of the binary operations $\rm\: +,\ *\:,\:$ and $\rm\: \delta(a,b) = 1\ if\ a=b\ else\ 0\:,\:$ namely
$$\rm f(x_1,\ldots,x_n)\ = \sum_{(a_1,\ldots,a_n)\ \in\ A^n}\ f(a_1,\ldots,a_n)\ \prod_{i\ =\ 1}^n\ \delta(x_i,a_i) $$
When $\rm\:|A|\:$ is infinite one may instead proceed by employing pairing functions $\rm\:A^2\to A\:.$
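Here is a small sketch (my own) of the displayed interpolation formula at work on $\rm A = \mathbb Z/2$, rebuilding a ternary operation (majority) from the binary operations $+$, $*$ and $\delta$:

```python
from itertools import product

n = 2                                    # the carrier is Z/2

def delta(a, b):                         # the binary operation delta
    return 1 if a == b else 0

def majority(x, y, z):                   # an arbitrary ternary operation
    return 1 if x + y + z >= 2 else 0

def interpolated(x1, x2, x3):
    """Rebuild `majority` from +, * and delta via the displayed formula."""
    total = 0
    for a1, a2, a3 in product(range(n), repeat=3):
        term = majority(a1, a2, a3)
        for xi, ai in ((x1, a1), (x2, a2), (x3, a3)):
            term = term * delta(xi, ai)  # product of delta(x_i, a_i)
        total = (total + term) % n
    return total

agree = all(interpolated(*t) == majority(*t)
            for t in product(range(n), repeat=3))   # True
```

Only the term with $(a_1,a_2,a_3)=(x_1,x_2,x_3)$ survives the product of deltas, which is exactly why the sum reproduces $f$.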
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/51962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 0
} |
Probability of Median from a continuous distribution For a sample of size $n=3$ from a continuous probability distribution, what is $P(X_{(1)}<k<X_{(2)})$ where $k$ is the median of the distribution? What is $P(X_{(1)}<k<X_{(3)})$?
$X_{(i)},i=1,2,3$ are the ordered values of the sample.
I'm having trouble trying to solve this question since the median is for the distribution and not the sample. The only explicit formulas for the median I know of are the median $k$ of any random variable $X$ satisfies $P(X≤k)≥1/2$ and $P(X≥k)≥1/2$, but I don't see how to apply that here.
| I assume $X_1, X_2, X_3$ are taken to be iid. Here's a hint:
$$P(X_{(1)} < k < X_{(2)}) = 3P(X_1 < k \cap X_2 > k \cap X_3 > k)$$ by a simple combinatoric argument. Do you see why? Since the distributions are continuous, $$P(X_1 > k) = P(X_1 \ge k) = P(X_1 < k) = P(X_1 \le k) = \frac 1 2.$$ The second part of the question is similar.
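Following the hint, $P(X_{(1)}<k<X_{(2)}) = 3\cdot(1/2)^3 = 3/8$, and for the second part $P(X_{(1)}<k<X_{(3)}) = 1 - P(\text{all} < k) - P(\text{all} > k) = 3/4$. A Monte Carlo sketch (my own; the Uniform(0,1) distribution with median $k = 1/2$ is just a convenient stand-in for an arbitrary continuous distribution):

```python
import random

random.seed(0)
N = 100_000
k = 0.5                     # median of Uniform(0, 1)
c12 = c13 = 0
for _ in range(N):
    x = sorted(random.random() for _ in range(3))
    if x[0] < k < x[1]:
        c12 += 1
    if x[0] < k < x[2]:
        c13 += 1
p12, p13 = c12 / N, c13 / N    # close to 3/8 and 3/4
```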
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Types of divergence My teacher said there are two main ways a sequence can diverge, it can increase in magnitude without bound, or it can fail to resolve to any one limit.
But maybe that second kind of divergence is too diverse? There is a big difference between the divergent sequence 1, -1, 1, -1 . . .
And the sequence formed by taking a digit from pi, g, then adding up the next g digits of pi and dividing that by g. (6/3, 25/5, 36/5, 11/2, 18/4, . . . )
Yet both of the above are more orderly than a sequence of random numbers, from what little I understand of randomness.
So maybe we should say that we have:
*
*Sequences that increase in magnitude without bound.
*Sequences that can be decomposed into convergent subsequences, or into sequences as in #1
*Sequences based on a rule.
*Random sequences.
Yet a random sequence with an even distribution will have convergent subsequences to every number in its range... suddenly randomness seems orderly.
What do professionals say about types of divergence?
| Every sequence that doesn't increase in magnitude without bound can be decomposed into convergent subsequences.
EDIT: Maybe a useful way of classifying divergent sequences (of real numbers) would be by the set of accumulation points, that is, by the set of limit points of convergent subsequences. One could ask
*
*Is $+\infty$ an accumulation point?
*Is $-\infty$ an accumulation point?
*Are there any finite accumulation points?
*Is there more than one finite accumulation point?
*Are there infinitely many accumulation points?
*Are there uncountably many accumulation points?
*Is every real number an accumulation point?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Introduction to the mathematical theory of regularization I asked the question "Are there books on Regularization at an Introductory level?" at physics.SE.
I was informed that "there is (...) a mathematical theory of regularization (Cesàro, Borel, Ramanujan summations and many others) that is interesting per se".
Question: Can someone advise me on how to study one or more of the above topics and provide some reference?
| In terms of summations of otherwise divergent series (which is what Borel summation and Cesàro summation are about), a decent reference is G.H. Hardy's Divergent Series.
In terms of divergent integrals, you may also be interested in learning about Cauchy principal values, which is related to Hadamard regularisation. (The references in those Wikipedia articles should be good enough; these two concepts are actually quite easily understood.)
Zeta function regularisation has its roots in number theory, which unfortunately I don't know enough about to comment on.
Heat kernel type regularisation techniques are closely related to the study of partial differential equations and harmonic analysis. They are related to Friedrichs mollifiers (an exposition is available in most introductory texts on generalised functions / distribution theory; a slightly more advanced text is volume 1 of Hörmander's The Analysis of Linear Partial Differential Operators). They can also be interpreted as a Fourier-space cut-off (which in physics terminology is probably called an "ultraviolet cutoff" and which can be interpreted in physical space as setting a minimal length scale), so they can be described in terms of, say, Littlewood-Paley theory (another advanced text is Stein's Topics in Harmonic Analysis Related to the Littlewood-Paley Theory) or the FBI transform. Unfortunately I don't know many good introductory texts in these areas, but I hope some of these keywords can aid your search.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Maximize and Minimize a 12" piece of wire into a square and circle
A wire of length 12" can be bent into a circle, a square or cut into 2 pieces and make both a circle and a square. How much wire should be used for the circle if the total area enclosed by the figure(s) is to be:
a) a Maximum
b) a Minimum
What I've got so far is that the formula for the square's area is $A_s=\frac{1}{16}s^2$, the circumference of the circle is $P=12-s$, and its area is $A_c = \pi(\frac{P}{2\pi})^2$, where $s$ is the length of the wire used for the square (so $12-s$ is the length used for the circle).
Now I know I need to differentiate these formulas to then find the max and min they both can be, but what am I differentiating with respect to? The missing variable in each of the formulas?
Also, once, I find the derivitives, what would my next steps be to minimizing and maximizing these?
And did I set the problem up correctly?
Thanks for any help
| Let $s$ be the circumference of the square. Then the circumference of the circle is $12-s$ (because that's what is left from the wire). Now you already computed the formulas $A_{\mathrm{square}}(s) = \frac{1}{16} s^2$ and $A_{\mathrm{circle}}(s) = \frac{1}{4\pi}(12 - s)^2$. The total area is $A(s) = A_{\mathrm{square}}(s) + A_{\mathrm{circle}}(s)$, where $s \in [0,12]$ is the variable. To find the extrema (maximum/minimum) of this function, a necessary condition is $A'(s) = 0$ (differentiate with respect to $s$) when $0 \lt s \lt 12$ and you need also consider $A(0)$ and $A(12)$.
So the task you need to do is to differentiate $A(s)$ with respect to $s$, solve $A'(s) = 0$ for $s$ (there will be only one solution $s_0$). Now the maximum among $A(0)$, $A(12)$ and $A(s_0)$ will be the maximum and the minimum among them will be the minimum of $A(s)$. It may also help if you sketch the graph to convince yourself of the solution.
Here's a small sanity check: The circle is the geometric figure that encloses the largest area among all figures with the same circumference, so the maximum should be achieved for $s = 0$. Since enclosing two figures needs more wire than enclosing a single one, the minimum should be achieved at $s_0$.
Added:
Since the results you mention are a bit off, let me show you what I get:
First $$A(s) = \frac{1}{16}s^2 + \frac{1}{4\pi}(12-s)^2.$$
Differentiating this with respect to $s$ I get
$$A'(s) = \frac{1}{8}s - \frac{1}{2\pi}(12-s)$$
Now solve $A'(s) = 0$ to find $$s_0 = \frac{12}{1+\frac{\pi}{4}} \approx 6.72$$
Plugging this in gives me $A(s_0) \approx 5.04$. (No warranty, I hope I haven't goofed)
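These numbers are easy to double-check in code (a sketch of mine using the formulas above):

```python
import math

def A(s):
    """Total enclosed area when s of the 12 inches go to the square."""
    return s**2 / 16 + (12 - s)**2 / (4 * math.pi)

s0 = 12 / (1 + math.pi / 4)     # the unique critical point of A
# A(0)  ~ 11.46 : all wire to the circle -> the maximum
# A(12) =  9.0  : all wire to the square
# A(s0) ~  5.04 : the minimum
```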
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
The leap to infinite dimensions Extending this question, page 447 of Gilbert Strang's Algebra book says
What does it mean for a vector to have infinitely many components? There are two different answers, both good:
1) The vector becomes $v = (v_1, v_2, v_3 ... )$
2) The vector becomes a function $f(x)$. It could be $\sin(x)$.
I don't quite see in what sense the function is "infinite dimensional". Is it because a function is continuous, and so represents infinitely many points? The best way I can explain it is:
*
*1D space has 1 DOF, so each "vector" takes you on "one trip"
*2D space has 2 DOF, so by following each component in a 2D (x,y) vector you end up going on "two trips"
*...
*$\infty$D space has $\infty$ DOF, so each component in an $\infty$D vector takes you on "$\infty$ trips"
How does it ever end then? 3d space has 3 components to travel (x,y,z) to reach a destination point. If we have infinite components to travel on, how do we ever reach a destination point? We should be resolving components against infinite axes and so never reach a final destination point.
| One thing that might help is thinking about the vector spaces you already know as function spaces instead. Consider $\mathbb{R}^n$. Let $T_{n}=\{1,2,\cdots,n\}$ be a set of size $n$. Then $$\mathbb{R}^{n}\cong\left\{ f:T_{n}\rightarrow\mathbb{R}\right\} $$ where the set on the right hand side is the space of all real valued functions on $T_n$. It has a vector space structure since we can multiply by scalars and add functions. The functions $f_i$ which satisfy $f_i(j)=\delta_{ij}$ will form a basis.
So a finite dimensional vector space is just the space of all functions on a finite set. When we look at the space of functions on an infinite set, we get an infinite dimensional vector space.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
A simple conditional expectation problem $X, Y$ iid uniform random variables on $[0,1]$
$$Z =
\left\{
\begin{aligned}
X+Y \quad&\text{ if } X>\frac{1}{2}
\\
\frac{1}{2} + Y \quad & \text{ if } X\leq\frac{1}{2}
\end{aligned}
\right.$$
The question is $E\{Z|Z\leq 1\}= ?$
I tried $\displaystyle \int_0^1 E\{Z|Z = z\} P\{Z = z\}dz$ and got $5/8$, but I am not so sure about the result since I haven't touched probability for years.
| Your probability space is the unit square in the $(x,y)$-plane with $dP={\rm d}(x,y)$. The payout $Z$ is ${1\over 2}+y$ in the left half $L$ of the square and $x+y$ in the right half $R$. The region where $Z\leq 1$ consists of the lower half of $L$ and a triangle in the lower left of $R$; it has total area $P(Z\leq 1)={3\over8}$.
It follows that the expectation $E:=E[Z\ |\ Z\leq 1]$ is given by
$$E=\left(\int_0^{1/2}\int_0^{1/2}\bigl({1\over2}+y\bigr)dy dx + \int_{1/2}^1\int_0^{1-x}(x+y)dy dx\right)\Bigg/{3\over8} ={{3\over16}+{5\over48}\over{3\over8}}={7\over9}\ .$$
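A Monte Carlo cross-check of the $7/9$ result (my own sketch):

```python
import random

random.seed(1)
N = 200_000
total = count = 0
for _ in range(N):
    x, y = random.random(), random.random()
    z = x + y if x > 0.5 else 0.5 + y
    if z <= 1:
        total += z
        count += 1
estimate = total / count        # close to 7/9 = 0.777...
```

The fraction of samples with $Z \le 1$ also lands near $P(Z\le 1)=3/8$, matching the area computation above.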
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Range of a sum of sine waves Suppose I'm given a function
f(x) = sin(Ax + B) + sin(Cx + D)
is there a simple (or, perhaps, not-so-simple) way to compute the range of this function? My goal is ultimately to construct a function g(x, S, T) that maps f to the range [S, T].
My strategy is to first compute the range of f, then scale it to the range [0,1], then scale that to the range [S, T].
Ideally I would like to be able to do this for an arbitrary number of waves, although to keep things simple I'm willing to be satisfied with 2 if it's the easiest route.
Numerical methods welcome, although an explicit solution would be preferable.
| I'll assume that all variables and parameters range over the reals, with $A,C\neq0$. Let's see how we can get a certain combination of phases $\alpha$, $\gamma$:
$$Ax+B=2\pi m+\alpha\;,$$
$$Cx+D=2\pi n+\gamma\;.$$
Eliminating $x$ yields
$$2\pi(nA-mC)=AD-BC+\alpha C-\gamma A\;.$$
If $A$ and $C$ are incommensurate (i.e. their ratio is irrational), given $\alpha$ we can get arbitrarily close to any value of $\gamma$, so the range in this case is at least $(-2,2)$. If $AD-BC$ happens to be an integer linear combination of $2\pi A$ and $2\pi C$, then we can reach $2$, and the range is $(-2,2]$, whereas if $AD-BC$ happens to be a half-integer linear combination of $2\pi A$ and $2\pi C$ (i.e. an odd-integer linear combination of $\pi A$ and $\pi C$), then we can reach $-2$, and the range is $[-2,2)$. (These cannot both occur if $A$ and $C$ are incommensurate.)
On the other hand, if $A$ and $C$ are commensurate (i.e. their ratio is rational), you can transform $f$ to the form
$$f(u)=\sin mu+ \sin (nu+\phi)$$
by a suitable linear transformation of the variable, so $f$ is periodic. In this case, there are periodically recurring minima and maxima, and in general you'll need to use numerical methods to find them.
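For the commensurate case, here is a rough numerical sketch (my own; the coefficients are arbitrary sample values with rational ratio $A/C = 2/3$, so the period is $2\pi$) that estimates the range by dense sampling over one period:

```python
import math

A_, B_, C_, D_ = 2.0, 0.3, 3.0, 1.1   # A_/C_ = 2/3 rational -> period 2*pi
f = lambda x: math.sin(A_*x + B_) + math.sin(C_*x + D_)

M = 100_000
ys = [f(2 * math.pi * i / M) for i in range(M)]
lo, hi = min(ys), max(ys)             # numeric estimate of the range [lo, hi]
```

For these particular phases the sampled maximum stays strictly below $2$ while the minimum comes very close to $-2$, illustrating that the endpoints are hit only when the phase condition above is satisfied.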
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Should I combine the negative part of the spectrum with the positive one? When filtering sound I currently analyse only the positive part of the spectrum. From the mathematical point of view, will discarding the negative half of the spectrum impact significantly on my analysis?
Please consider only samples that I will actually encounter, not computer-generated signals that are designed to thwart my analysis.
I know this question involves physics, biology and even music theory. But I guess the required understanding of mathematics is deeper than of those other fields of study.
| Sound processing works with real-valued signal samples. For a real signal the FFT/DFT coefficients are conjugate-symmetric: the magnitudes of the positive and negative halves of the spectrum are equal, and the phases are negated.
So, to save us or the machine the burden of saving/analyzing the same information twice, one looks only at the positive side of the FFT/DFT. However, do take notice that when figuring out spectral energy, you must remember to multiply the density by two to account for the missing, equal-magnitude negative part (the DC and Nyquist bins occur only once and are not doubled).
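A small numpy illustration of both points, conjugate symmetry and the factor of two in the energy (my own sketch; a random vector stands in for a sound frame, and numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)              # a real-valued "sound" frame
X = np.fft.fft(x)

# Conjugate symmetry of a real signal: X[N-k] == conj(X[k])
sym_err = np.max(np.abs(X[1:] - np.conj(X[1:][::-1])))

# Energy from the positive half only: double the interior bins,
# count DC (bin 0) and Nyquist (bin N/2) once.
half = np.abs(X[:129])**2
energy_half = (half[0] + half[128] + 2 * half[1:128].sum()) / 256
energy_time = float(np.sum(x**2))         # Parseval: these agree
```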
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$T(a_{0}+a_{1}x+a_{2}x^2)=2a_{0}+a_{2}+(a_{0}+a_{1}+a_{2})x+3a_{2}x^2$- Finding $[T]_{E}^{E}$ Let $T(a_{0}+a_{1}x+a_{2}x^2)=2a_{0}+a_{2}+(a_{0}+a_{1}+a_{2})x+3a_{2}x^2$ be a linear transformation. I need to find the eigenvectors and eigenvalues of $T$.
So, I'm trying to find $[T]_{E}^{E}$ when the base is $E=\{1,x,x^2\}$.
I don't understand how I should use this transformation to do that.
Thanks.
| The columns of the matrix you seek are the coordinates of the images under $T$ of the elements of the basis. So you need only compute $T(1)$, $T(x)$, and $T(x^2)$.
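Carrying out that computation and feeding the resulting matrix to a numerical eigensolver (my own sketch; numpy assumed):

```python
import numpy as np

# Read the images of the basis E = {1, x, x^2} off the formula
# T(a0 + a1 x + a2 x^2) = (2a0 + a2) + (a0 + a1 + a2) x + 3 a2 x^2:
#   T(1)   = 2 + x         -> column (2, 1, 0)
#   T(x)   = x             -> column (0, 1, 0)
#   T(x^2) = 1 + x + 3x^2  -> column (1, 1, 3)
M = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues = np.sort(np.linalg.eigvals(M).real)   # [1., 2., 3.]
```

The characteristic polynomial factors as $(1-\lambda)(2-\lambda)(3-\lambda)$, so the eigenvalues are $1$, $2$, $3$.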
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
A prize of $27,000 is to be divided among three people in the ratio 3:5:7. What is the largest share? This is not homework; I was just reviewing some old math flash cards and I came across this one I couldn't solve. I'm not interested in the solution so much as the reasoning.
Thanks
| You can think of splitting the money in the ratio $3:5:7$ as dividing it into $3+5+7=15$ equal parts and giving $3$ of these parts to one person, $5$ to another, and $7$ to the third. One part, then, must amount to $\frac{27000}{15}=1800$ dollars, and the shares must then be $3 \cdot 1800 = 5400$, $5 \cdot 1800 = 9000$, and $7 \cdot 1800 = 12600$ dollars, respectively. (As a quick check, $5400+9000+12600=27000$, as required.)
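The same reasoning in a couple of lines of code (illustrative only):

```python
prize, ratio = 27000, (3, 5, 7)
part = prize // sum(ratio)            # one of the 15 equal parts: 1800
shares = [r * part for r in ratio]    # [5400, 9000, 12600]
largest = max(shares)                 # 12600
```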
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Problem in skew-symmetric matrix
Let $A$ be a real skew-symmetric matrix. Prove that $I+A$ is non-singular, where $I$ is the identity matrix.
| As $A$ is skew symmetric, if $(A+I)x=0$, we have $0=x^T(A+I)x=x^TAx+\|x\|^2=\|x\|^2$, i.e. $x=0$. Hence $(A+I)$ is invertible.
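A quick numerical illustration of the two facts used here (my sketch; the random skew-symmetric matrix is just an example, and numpy is assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
B = rng.standard_normal((5, 5))
A = B - B.T                              # real skew-symmetric: A.T == -A

q = float(np.ones(5) @ A @ np.ones(5))   # x^T A x = 0 for every x
det = float(np.linalg.det(np.eye(5) + A))
# The eigenvalues of A are purely imaginary (+-it in conjugate pairs),
# so det(I + A) = prod (1 + t^2) >= 1: I + A is always invertible.
```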
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 0
} |
What is the name of the vertical bar in $(x^2+1)\vert_{x = 4}$ or $\left.\left(\frac{x^3}{3}+x+c\right) \right\vert_0^4$? I've always wanted to know what the name of the vertical bar in these examples was:
$f(x)=(x^2+1)\vert_{x = 4}$ (I know this means evaluate $x$ at $4$)
$\int_0^4 (x^2+1) \,dx = \left.\left(\frac{x^3}{3}+x+c\right) \right\vert_0^4$ (and I know this means that you would then evaluate at $x=0$ and $x=4$, then subtract $F(4)-F(0)$ if finding the net signed area)
I know it seems trivial, but it's something I can't really seem to find when I go googling and the question came up in my calc class last night and no one seemed to know.
Also, for bonus internets; What is the name of the horizontal bar in $\frac{x^3}{3}$? Is that called an obelus?
| Jeff Miller calls it "bar notation" in his Earliest Uses of Symbols of Calculus (see below). The bar denotes an evaluation functional, a concept whose importance comes to the fore when one studies duality of vector spaces (e.g. such duality plays a key role in the Umbral Calculus).
The bar notation to indicate evaluation of an antiderivative at the two limits of integration was first used by Pierre Frederic Sarrus (1798-1861) in 1823 in Gergonne’s Annales, Vol. XIV. The notation was used later by Moigno and Cauchy (Cajori vol. 2, page 250).
Below is the cited passage from Cajori
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 4,
"answer_id": 1
} |
Questions about composite numbers Consider the following problem:
Prove or disprove that if $n\in \mathbb{N}$, then $n$ is prime iff $$(n-1)!+n$$ is prime.
If $n$ is composite and greater than $1$, then $n$ has a divisor less than $n-1$, therefore $(n-1)!$ and $n$ have a common factor. Thus "$\Leftarrow$" is true. To proof the other direction we can consider the more general problem:
Let $n\in\mathbb{N}$. Consider the set $$C(n)=\{m\in\mathbb{N}:n+m\text{ is composite}\}.$$ How can we characterize the elements of $C(n)$?
The ideal answer would be to describe all elements in $C(n)$ in terms of only $n$. But, is that possible?
As a first approximation to solve this, we can start by defining for $n,p\in\mathbb{N}$: $$A(n,p)= \{ m\in\mathbb{N}:n+m\equiv 0\pmod{p} \}.$$ After some observations we can prove that
$$A(n,p)=\{(\lceil n/p \rceil + k)p - n:k\in \mathbb{N}\}$$
and then $A(n,p)$ is the range of a function of the form $f_{n,p}:\mathbb{N}\to \mathbb{N}$. From this $$C(n)=\bigcup_{p=2}^\infty A(n,p),$$
But this is still far from a characterization in terms of $n$. What do you think is the best we can do, or the best we can hope for?
| One reason your professors might have smiled at you is that
$$ C(n) = C(0) - n, $$
where $C(0) = \{m \in \mathbb N: m \text{ is composite} \}$. So characterizing $C(n)$ reduces to characterizing $C(0)$, which in turn reduces to characterizing the set of primes $\mathbb N \setminus C(0)$.
(Well, okay, technically $C(n) = (C(0) - n) \cap \mathbb N$ as you've defined it, but cutting off the negative part of the set doesn't make any fundamental difference.)
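The shift identity is easy to confirm by machine (an illustrative sketch of mine, truncating everything to a finite window):

```python
def is_composite(m):
    return m > 1 and any(m % d == 0 for d in range(2, int(m**0.5) + 1))

LIMIT = 500
C0 = {m for m in range(1, LIMIT) if is_composite(m)}       # C(0), truncated

def C(n):
    # C(n) = {m >= 1 : n + m composite}, truncated to the same window
    return {m for m in range(1, LIMIT - n) if is_composite(n + m)}

n = 17
same = C(n) == {m - n for m in C0 if m - n >= 1}           # True
```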
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 2
} |
On sorting in an array-less language This is partly a programming and partly a combinatorics question.
I'm working in a language that unfortunately doesn't support array structures. I've run into a problem where I need to sort my variables in increasing order.
Since the language has functions for the minimum and maximum of two inputs (but the language does not allow me to nest them, e.g. min(a, min(b, c)) is disallowed), I thought this might be one way towards my problem.
If, for instance, I have two variables $a$ and $b$, I only need one temporary variable so that $a$ ends up being less than or equal to $b$:
t = min(a, b);
b = max(a, b);
a = t;
for three variables $a,b,c$, the situation is a little more complicated, but only one temporary variable still suffices so that $a \leq b \leq c$:
t = max(a, b);
a = min(a, b);
c = max(t, c);
t = min(t, c);
b = max(a, t);
a = min(a, t);
Not having a strong combinatorics background, however, I don't know how to generalize the above constructions if I have $n$ variables in general. In particular, is there a way to figure out how many temporary variables I would need to sort out $n$ variables, and to figure out what is the minimum number of assignment statements needed for sorting?
Thanks in advance!
| Many sorting algorithms work by performing a sequence of swaps, so you need only one extra variable to implement them for any fixed $n$. What you're doing is effectively unrolling the entire algorithm loop into a sequence of conditional assignments.
The number of assignments will be three times the number of swaps, and I think the exact number may depend on the sorting algorithm. It'll be on the order of $n \log n$, though.
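To make this concrete, here is a sketch (mine, not part of the original answer) that unrolls a bubble sort into exactly the kind of min/max/temp assignment sequence the question uses, and verifies it on all input orderings:

```python
import itertools

def cmp_swap_sequence(n):
    """Unroll bubble sort on variables v1..vn into min/max assignments
    that use a single temporary t (the pattern from the question)."""
    ops = []
    for i in range(n - 1, 0, -1):
        for j in range(i):
            a, b = f"v{j+1}", f"v{j+2}"
            ops += [f"t = min({a}, {b})",
                    f"{b} = max({a}, {b})",
                    f"{a} = t"]
    return ops

def sorts_correctly(n):
    ops = cmp_swap_sequence(n)
    for perm in itertools.permutations(range(n)):
        env = {f"v{i+1}": v for i, v in enumerate(perm)}
        for op in ops:
            exec(op, {"min": min, "max": max}, env)
        if [env[f"v{i+1}"] for i in range(n)] != sorted(perm):
            return False
    return True

n_assignments = len(cmp_swap_sequence(4))   # 3 * C(4,2) = 18
ok = sorts_correctly(4)                     # True on all 24 orderings
```

This costs $3\binom{n}{2}$ assignments, so for $n=3$ it produces 9 where the question's hand-tuned network needs only 6; better sorting networks exist, but the unrolled form generalizes mechanically.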
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Integer solutions of $3a^2 - 2a - 1 = n^2$ I've got an equation $3a^2 - 2a - 1 = n^2$, where $a,n \in \mathbb{N}$.
I put it in Wolfram Alpha and, besides everything else, it gives integer solutions: see here.
For another equation (say, $3a^2 - 2a - 2 = n^2$, where $a,n \in \mathbb{N}$) Wolfram Alpha does not provide integer solutions: here.
Could you please tell me:
*
*How does Wolfram Alpha determine existence of the integer solutions?
*How does it find them?
*What should I learn to be able to do the same with a pencil and a piece of paper (if possible)?
Thanks in advance!
| I believe Pell's Equation (and variants) would be useful.
The first one can be recast as
$$9a^2 - 6a - 3 = 3n^2$$ i.e.
$$(3a-1)^2 - 4 = 3n^2$$
You are looking for solutions to
$$ x^2 - 3y^2 = 4$$ such that $x = -1 \mod 3$.
There are standard techniques to solve Pell's equation and variants (see the wiki page linked above and mathworld page here: http://mathworld.wolfram.com/PellEquation.html) and I am guessing Wolfram Alpha is using one of them.
For the second I believe we get
$$x^2 - 3y^2 = 7$$
which does not have solutions, considering modulo $4$ (as pointed out by Adrián Barquero).
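A brute-force search over a small window agrees with both conclusions (my own sketch; whether the solution $(a,n)=(1,0)$ counts depends on whether $0 \in \mathbb{N}$):

```python
solutions = [(a, n) for a in range(1, 100) for n in range(0, 200)
             if 3*a*a - 2*a - 1 == n*n]
# -> [(1, 0), (5, 8), (65, 112)] in this window

none = [(a, n) for a in range(1, 100) for n in range(0, 200)
        if 3*a*a - 2*a - 2 == n*n]
# -> [] , matching the mod-4 argument
```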
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/52940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Prove that $(a_1a_2\cdots a_n)^{2} = e$ in a finite Abelian group Let $G$ be a finite abelian group, $G = \{e, a_{1}, a_{2}, ..., a_{n} \}$. Prove that $(a_{1}a_{2}\cdot \cdot \cdot a_{n})^{2} = e$.
I've been stuck on this problem for quite some time. Could someone give me a hint?
Thanks in advance.
| The map $\phi:x\in G\mapsto x^{-1}\in G$ is an automorphism of $G$ (here we use that $G$ is abelian), so in particular it induces a bijection $G\setminus\{e\}\to G\setminus\{e\}$. Hence $\phi(b)=a_1^{-1}a_2^{-1}\cdots a_n^{-1}$ is the product of the same elements $a_1,\dots,a_n$ in some order, so $\phi(b)=b$. But $\phi(b)=b^{-1}$, so $b=b^{-1}$ and therefore $b^2=e$.
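As a concrete sanity check (an illustration I am adding, not part of the proof), one can verify the statement in the unit groups $(\mathbb{Z}/m\mathbb{Z})^\times$ with a few lines of Python:

```python
from math import gcd

def square_of_unit_product(m):
    """((a_1 * ... * a_n)^2) mod m, over the units a_i of Z/mZ."""
    p = 1
    for a in range(1, m):
        if gcd(a, m) == 1:
            p = p * a % m
    return p * p % m

assert all(square_of_unit_product(m) == 1 for m in range(3, 200))
print("verified for all m < 200")
```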
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 3
} |
Proving that $ 30 \mid ab(a^2+b^2)(a^2-b^2)$ How can I prove that $30 \mid ab(a^2+b^2)(a^2-b^2)$ without using $a,b$ congruent modulo $5$ and then
$a,b$ congruent modulo $6$ (for example) to show respectively that $5 \mid ab(a^2+b^2)(a^2-b^2)$ and
$6 \mid ab(a^2+b^2)(a^2-b^2)$?
Indeed this method implies studying numerous congruences and is quite long.
| You need to show $ab(a^2 - b^2)(a^2 + b^2)$ is a multiple of 2,3, and 5 for all $a$ and $b$.
For 2: If neither $a$ nor $b$ are even, they are both odd and $a^2 \equiv b^2 \equiv 1 \pmod 2$, so that 2 divides $a^2 - b^2$.
For 3: If neither $a$ nor $b$ are a multiple of 3, then $a^2 \equiv b^2 \equiv 1 \pmod 3$, so 3 divides $a^2 - b^2$ similar to above.
For 5: If neither $a$ nor $b$ are a multiple of 5, then either $a^2 \equiv 1 \pmod 5$ or $a^2 \equiv -1 \pmod 5$. The same holds for $b$. If $a^2 \equiv b^2 \pmod 5$ then 5 divides $a^2 - b^2$, while if $a^2 \equiv -b^2 \pmod 5$ then 5 divides $a^2 + b^2$.
This does break into cases, but as you can see it's not too bad to do it systematically like this.
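The case analysis can also be verified exhaustively by machine: the product modulo $30$ depends only on $a$ and $b$ modulo $30$, so checking the $30\times 30$ residue pairs proves divisibility for all integers. A small Python sketch:

```python
def check_mod_30():
    """Exhaustively verify 30 | a*b*(a^2+b^2)*(a^2-b^2) over all
    residue pairs (a, b) mod 30; this suffices for all integers."""
    return all(a * b * (a * a + b * b) * (a * a - b * b) % 30 == 0
               for a in range(30) for b in range(30))

print(check_mod_30())  # True
```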
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
The product of all the elements of a finite abelian group I'm trying to prove the following statements.
Let $G$ be a finite abelian group $G = \{a_{1}, a_{2}, ..., a_{n}\}$.
*
*If there is no element $x \neq e$ in $G$ such that $x = x^{-1}$, then $a_{1}a_{2} \cdot \cdot \cdot a_{n} = e$.
Since the only element in $G$ that is an inverse of itself is the identity element $e$, for every other element $k$, it must have an inverse $a_{k}^{-1} = a_{j}$ where $k \neq j$. Thus $a_{1}a_{1}^{-1}a_{2}a_{2}^{-1} \cdot \cdot \cdot a_{n}a_{n}^{-1} = e$.
*
*If there is exactly one $x \neq e$ in $G$ such that $x = x^{-1}$, then $a_{1}a_{2} \cdot \cdot \cdot a_{n} = x$.
This is stating that $x$ is not the identity element but is its own inverse. Then every other element $p$ must also have an inverse $a_{p}^{-1} = a_{w}$ where $p \neq w$. Similarly to the first question, a rearrangement can be done: $a_{1}a_{1}^{-1}a_{2}a_{2}^{-1} \cdot \cdot \cdot xx^{-1} \cdot \cdot \cdot a_{n}a_{n}^{-1} = xx^{-1} = e$. And this is where I am stuck since I proved another statement.
Any comments would be appreciated for both problems.
| For the first answer, you are almost there : if $a_1 a_1^{-1} \cdots a_n a_n^{-1} = e$, since the elements $a_1, \cdots , a_n$ are all distinct, their inverses are also distinct. Since the product written above involves every element of the group, we have $a_1 a_1^{-1} \cdots a_n a_n^{-1} = (a_1 a_2 \cdots a_n) (a_1^{-1} a_2^{-1} \cdots a_n^{-1}) = (a_1 \cdots a_n)^2 = e$, and since no element is its own inverse (by hypothesis) besides $e$, you have to conclude that $a_1 \cdots a_n = e$.
For the second one, when you re-arrange the terms, $x^{-1}$ should not appear in there, since $x = x^{-1}$ and $x$ does not appear twice in the product, so all that's left is $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Factorial decomposition of integers? This question might seem strange, but I had the feeling it's possible to decompose in a unique way a number as follows:
if $x < n!$, then there is a unique way to write x as:
$$x = a_1\cdot 1! + a_2\cdot 2! + a_3\cdot3! + ... + a_{n-1}\cdot(n-1)!$$
where $0 \le a_i \leq i$
I looked at factorial decomposition on google but I cannot find any name for such a decomposition.
example:
If I chose :
(a1,a2) =
*
*1,0 -> 1
*0,1 -> 2
*1,1 -> 3
*0,2 -> 4
*1,2 -> 5
I get all numbers from $1$ to $3!-1$
ideas for a proof:
The number of elements between $1$ and $N!-1$ is equal to $N!-1$ and I have the feeling they are all different, so this decomposition should be right. But I didn't prove it properly.
Are there proofs of this decomposition? Does this decomposition have a name? And above all, is it true?
Thanks in advance
| Your conjecture is correct. There is a straightforward proof by induction that such a decomposition always exists. Suppose that every non-negative integer less than $n!$ can be written in the form $\sum_{k=1}^{n-1} a_k k!$, where $0 \le a_k \le k$, and let $m$ be an integer such that $n! \le m < (n+1)!$. There are unique integers $a_n$ and $r$ such that $m = a_nn! + r$ and $0 \le r < n!$, and since $m < (n+1)! = (n+1)n!$, it’s clear that $a_n \le n$. Since $r < n!$, the induction hypothesis ensures that there are integers $a_1,\dots,a_{n-1}$ with $0 \le a_k \le k$ such that $r = \sum_{k=1}^{n-1} a_k k!$, and hence $m = \sum_{k=1}^n a_k k!$.
We’ve now seen that each of the $(n+1)!$ non-negative integers less than $(n+1)!$ has a representation of the form $\sum_{k=1}^n a_k k!$ with $0 \le a_k \le k$ for each $k$. However, there are only $\prod_{k=1}^n (k+1) = (n+1)!$ distinct representations of that form, so each must represent a different integer, and each integer’s representation is therefore unique.
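This decomposition is known as the factorial number system (sometimes called "factoradic"). A short Python sketch of the standard greedy conversion, with a round-trip check of uniqueness for all $x < 6!$:

```python
import math

def to_factorial_base(x):
    """Digits [a_1, a_2, ...] with x = sum_k a_k * k! and 0 <= a_k <= k."""
    digits, k = [], 2
    while x:
        x, r = divmod(x, k)  # a_1 = x mod 2, then a_2 = (x//2) mod 3, ...
        digits.append(r)
        k += 1
    return digits

def from_factorial_base(digits):
    return sum(a * math.factorial(k) for k, a in enumerate(digits, start=1))

assert to_factorial_base(5) == [1, 2]   # 5 = 1*1! + 2*2!, as in the example
assert all(from_factorial_base(to_factorial_base(x)) == x for x in range(720))
print("round-trip verified for all x < 6!")
```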
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 1
} |
Real world applications of Pythagoras' Theorem I have a school assignment, and it requires me to list a few of the real world applications of Pythagoras Theorem. However, most of the ones I found are rather generic, and not special at all.
What are some of the real world applications of Pythagoras' Theorem?
| Here is a true life application of the Pythagorean theorem (the 3-dimensional version, which is a corollary of the 2-dimensional version).
My wife and I needed to have a long iron rod manufactured for us, to use as a curtain rod.
I measured the length $L$ of the rod we wanted.
But we forgot to take into account that we live on the 24th floor of an apartment building and therefore the only way the rod could get into our apartment was by coming up the elevator.
Would the rod fit in the elevator?
My wife measured the height $H$, the width $W$, and the depth $D$ of the elevator box. She then calculated the diagonal of the elevator box by applying the Pythagorean theorem: $\sqrt{H^2 + W^2 + D^2}$. She compared it to $L$, and thankfully, it was greater than $L$. The rod would fit!
I would like to say that we realized this problem BEFORE we asked them to manufacture the rod, but that would be a lie. However, at least my wife realized it before the manufacturers arrived at our apartment building with the completed curtain rod, and she quickly did the measurements, and the Pythagorean Theorem calculation, and the comparison. So PHEW, we were saved.
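For what it's worth, the elevator check is essentially a one-liner; the measurements below are made up for illustration:

```python
import math

def rod_fits(L, H, W, D):
    """True iff a rigid rod of length L fits along the space diagonal
    of an H x W x D box: L <= sqrt(H^2 + W^2 + D^2)."""
    return L <= math.sqrt(H * H + W * W + D * D)

# hypothetical measurements in metres
print(rod_fits(2.8, 2.3, 1.4, 1.1))  # True: the diagonal is about 2.91 m
```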
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 3
} |
Find the image of a vector by using the standard matrix (for the linear transformation T) Was wondering if anyone can help out with the following problem:
Use the standard matrix for the linear transformation $T$ to find the image of the vector $\mathbf{v}$, where
$$T(x,y) = (x+y,x-y, 2x,2y),\qquad \mathbf{v}=(3,-3).$$
I found out the standard matrix for $T$ to be:
$$\begin{bmatrix}1&1\\1&-1\\2&0\\0&2\end{bmatrix}$$
From here I honestly don't know how to find the "image of the vector $\mathbf{v}$". Does anyone have any suggestions?
| The matrix you've written down is correct. If you have a matrix $M$ and a vector $v$, the image of $v$ means $Mv$.
Something is a bit funny with the notation in your question. Your matrix is $4\times 2$, so it operates on column vectors of height two (equivalently, $2\times 1$ matrices). But the vector given is a row vector. Still, it seems clear that what you need to calculate is the product $Mv$ that Theo wrote down in the comment. Do you know how to do that?
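For completeness, here is that product carried out in plain Python (no library needed); each row of the matrix is dotted with the column vector $\mathbf v=(3,-3)$:

```python
# image of v under T, computed as the matrix-vector product M v
M = [[1,  1],
     [1, -1],
     [2,  0],
     [0,  2]]
v = [3, -3]

image = [sum(row[j] * v[j] for j in range(len(v))) for row in M]
print(image)  # [0, 6, 6, -6], i.e. T(3, -3) = (0, 6, 6, -6)
```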
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Any idea about N-topological spaces? In Bitopological spaces, Proc. London Math. Soc. (3) 13 (1963) 71–89 MR0143169, J.C. Kelly introduced the idea of bitopological spaces. Is there any paper concerning the generalization of this concept, i.e. a space with any number of topologies?
| For $n=3$ Google turns up mention of AL-Fatlawee J.K. On paracompactness in bitopological spaces and tritopological spaces, MSc. Thesis, University of Babylon (2006). Asmahan Flieh Hassan at the University of Kufa, also in Iraq, also seems to be interested in tritopological spaces and has worked with a Luay Al-Sweedy at the Univ. of Babylon. This paper by Philip Kremer makes use of tritopological spaces in a study of bimodal logics, as does this paper by J. van Benthem et al., which Kremer cites. In my admittedly limited experience with the area these are very unusual, in that they make use of a tritopological structure to study something else; virtually every other paper that I’ve seen on bi- or tritopological spaces has studied them for their own sake, usually in an attempt to extend topological notions in some reasonably nice way.
I’ve seen nothing more general than this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why are cluster co-occurrence matrices positive semidefinite? A cluster (aka a partition) co-occurrence matrix $A$ for $N$ points $\{x_1, \dots x_n\}$ is an $N\times N$ matrix that encodes a partitioning of these points into $k$ separate clusters ($k\ge 1$) as follows:
$A(i,j) = 1$ if $x_i$ and $x_j$ belong to the same cluster, otherwise $A(i,j) = 0$
I have seen texts that say that $A$ is positive semidefinite. My intuition tells me that this has something to do with transitive relation encoded in the matrix, i.e.:
If $A(i,j) = 1$, and $A(j,k) = 1$, then $A(i,k) = 1$ $\forall (i,j,k)$
But I don't see how the above can be derived from the definition of positive semidefinite matrices, i.e. $z^T A z \ge 0$ $\forall z\in \mathbb{R}^N$
Any thoughts?
| ....and yet another way to view it: an $n\times n$ matrix whose every entry is 1 is $n$ times the matrix of the orthogonal projection onto the 1-dimensional subspace spanned by a column vector of 1s. Its eigenvalues are therefore $n$, with multiplicity 1, and 0, with multiplicity $n-1$. Now look at $\mathrm{diag}(A,B,C,\ldots)$, where each of $A,B,C,\ldots$ is such a square matrix with each entry equal to 1 (but $A,B,C,\ldots$ are generally of different sizes.
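This block-of-ones picture also gives a direct proof of semidefiniteness: $z^TAz$ collapses to a sum of squared per-cluster sums of the entries of $z$, which is non-negative. A small Python check (with made-up labels and $z$):

```python
def cooccurrence(labels):
    """A(i,j) = 1 iff points i and j share a cluster label."""
    n = len(labels)
    return [[1 if labels[i] == labels[j] else 0 for j in range(n)]
            for i in range(n)]

def quad_form(A, z):
    n = len(A)
    return sum(z[i] * A[i][j] * z[j] for i in range(n) for j in range(n))

labels = [0, 0, 1, 1, 1, 2]
z = [1.0, -2.0, 3.0, 0.5, -1.0, 4.0]
A = cooccurrence(labels)

# z^T A z equals the sum of squared per-cluster sums, hence >= 0
cluster_sums = {}
for zi, c in zip(z, labels):
    cluster_sums[c] = cluster_sums.get(c, 0.0) + zi
assert abs(quad_form(A, z) - sum(s * s for s in cluster_sums.values())) < 1e-12
print(quad_form(A, z))  # 23.25
```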
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Projection of tetrahedron to complex plane It is widely known that:
distinct points $a,b,c$ in the complex plane form equilateral triangle iff $ (a+b+c)^{2}=3(a^{2}+b^{2}+c^{2}). $
New to me is this fact:
let $a,b,c,d$ be the images of vertices of regular tetrahedron projected to complex plane, then $(a+b+c+d)^{2}=4(a^{2}+b^{2}+c^{2}+d^{2}).$
I wonder if somebody would came up with intresting proof, maybe involving previous statement.
What I try is some analytic geometry but things get messy enough for me to quit.
| As I mentioned in my comment, the tetrahedral formula is invariant under translations, so let's focus on regular tetrahedra conveniently centered at the origin.
Let $T$ be the coordinate matrix such a tetrahedron; that is, the matrix whose columns are coordinates in $\mathbb{R}^3$ of the tetrahedron's vertices. The columns of the matrix obviously sum to zero, but there's something less-obvious that we can say about the rows:
Fact: The rows of $T$ form an orthogonal set of vectors of equal magnitude, $m$.
For example (and proof-of-fact), take the tetrahedron that shares vertices with the double-unit cube, for which $m=2$:
$$T = \begin{bmatrix}1&1&-1&-1\\1&-1&1&-1\\1&-1&-1&1\end{bmatrix} \hspace{0.25in}\text{so that}\hspace{0.25in} T T^\top=\begin{bmatrix}4&0&0\\0&4&0\\0&0&4\end{bmatrix}=m^2 I$$
Any other origin-centered regular tetrahedron is similar to this one, so its coordinate matrix has the form $S = k Q T$ for some orthogonal matrix $Q$ and some scale factor $k$. Then
$$SS^\top = (kQT)(kQT)^\top = k^2 Q T T^\top Q^\top = k^2 Q (m^2 I) Q^\top = k^2 m^2 (Q Q^\top) = k^2 m^2 I$$
demonstrating that the rows of $S$ are also orthogonal and of equal magnitude. (Fact proven.)
For the general case, take $T$ as follows
$$T=\begin{bmatrix}a_x&b_x&c_x&d_x\\a_y&b_y&c_y&d_y\\a_z&b_z&c_z&d_z\end{bmatrix}$$
Now, consider the matrix $J := \left[1,i,0\right]$. Left-multiplying $T$ by $J$ gives $P$, the coordinate matrix (in $\mathbb{C}$) of the projection of the tetrahedron into the coordinate plane:
$$P := J T = \left[a_x+i a_y, b_x+ib_y, c_x+i c_y, d_x + i d_y\right] = \left[a, b, c, d\right]$$
where $a+b+c+d=0$. Observe that
$$P P^\top = a^2 + b^2 + c^2 + d^2$$
On the other hand,
$$PP^\top = (JT)(JT)^\top = J T T^\top J^\top = m^2 J J^\top = m^2 (1 + i^2) = 0$$
Therefore,
$$(a+b+c+d)^2=0=4(a^2 + b^2 + c^2 + d^2)$$
Note: It turns out that the Fact applies to all the Platonic solids ... and most Archimedeans ... and a great many other uniforms, including wildly self-intersecting realizations (even in many-dimensional space). The ones for which the Fact fails have slightly-deformed variants for which the Fact succeeds. (The key is that the coordinate matrices of these figures are (right-)eigenmatrices of the vertex adjacency matrix. That is, $TA=\lambda T$. For the regular tetrahedron, $\lambda=-1$; for the cube, $\lambda = 1$; for the great stellated dodecahedron, $\lambda=-\sqrt{5}$; for the small retrosnub icosicosidodecahedron, $\lambda\approx-2.980$ for a pseudo-classical variant whose pentagrammic faces have non-equilateral triangular neighbors.)
The argument of my answer works for all "Fact-compliant" origin-centered polyhedra, so that $(\sum p_i)^2 = 0 = \sum p_i^2$ for projected vertices $p_i$. Throwing in a coefficient --namely $n$, the number of vertices-- that guarantees translation-invariance, and we have
$$\left( \sum p_i \right)^2 = n \sum p_i^2$$
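Here is a quick numerical check of the tetrahedron identity (a sketch using the cube-inscribed tetrahedron from the proof, with an arbitrary composed rotation applied before projecting):

```python
import math

def project_rotated_tetrahedron(theta, phi):
    """Rotate the cube-inscribed regular tetrahedron by Rz(theta), then
    Rx(phi), and orthogonally project each vertex to the complex plane."""
    verts = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
    ct, st = math.cos(theta), math.sin(theta)
    cp, sp = math.cos(phi), math.sin(phi)
    pts = []
    for x, y, z in verts:
        x, y = ct * x - st * y, st * x + ct * y   # rotation about the z-axis
        y, z = cp * y - sp * z, sp * y + cp * z   # rotation about the x-axis
        pts.append(complex(x, y))                 # drop the z-coordinate
    return pts

pts = project_rotated_tetrahedron(0.7, 1.3)  # arbitrary angles
s = sum(pts)
q = sum(p * p for p in pts)
assert abs(s * s - 4 * q) < 1e-12
print("identity holds (up to rounding)")
```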
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 2
} |
A property of the totient function Let $\ m\ge3$, and let $\ a_i$ be the natural numbers less than or equal to $\ m$ that are coprime to $\ m$ put in the following order:
$$\ a_1<a_2<\cdots<a_\frac{\phi(m)}{2}\le \frac{m}{2}\le a_{\frac{\phi(m)}{2}+1}<a_{\frac{\phi(m)}{2}+2}<\cdots<a_{\phi(m)}.$$
If $\ a_{\frac{\phi(m)}{2}}>\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}\ge\frac{m}{2}$ then $\ a_{\frac{\phi(m)}{2}}+a_{\frac{\phi(m)}{2}+1}>m$ which is wrong.
If $\ a_{\frac{\phi(m)}{2}}\le\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}<\frac{m}{2}$ then
$\ a_{\frac{\phi(m)}{2}}+a_{\frac{\phi(m)}{2}+1}<m$ which is wrong.
If $\ a_{\frac{\phi(m)}{2}}>\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}<\frac{m}{2}$ then $\ a_{\frac{\phi(m)}{2}+1}<a_{\frac{\phi(m)}{2}}$ which is wrong.
So $\ a_{\frac{\phi(m)}{2}}>\frac{m}{2}$ or $\ a_{\frac{\phi(m)}{2}+1}<\frac{m}{2}$ is wrong, $\ a_{\frac{\phi(m)}{2}}\le\frac{m}{2}$ and $\ a_{\frac{\phi(m)}{2}+1}\ge\frac{m}{2}$ is true, and it gives the result.
Does this proof work?
| Your proof is correct, but you should clearly indicate where the proof starts and that you are using the result on the sum of two symmetric elements in the proof.
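As an illustration (not a substitute for the proof), the conclusion $a_{\phi(m)/2} \le \frac{m}{2} \le a_{\phi(m)/2+1}$ is easy to confirm by machine for small $m$:

```python
from math import gcd

def middle_pair_brackets_half(m):
    """Check a_{phi(m)/2} <= m/2 <= a_{phi(m)/2 + 1} for the sorted
    totatives of m (phi(m) is even for every m >= 3)."""
    a = [k for k in range(1, m + 1) if gcd(k, m) == 1]
    h = len(a) // 2
    return a[h - 1] <= m / 2 <= a[h]

assert all(middle_pair_brackets_half(m) for m in range(3, 500))
print("verified for 3 <= m < 500")
```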
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Multiplicative inverses for elements in field How to compute multiplicative inverses for elements in any simple (not extended) finite field? I mean an algorithm which can be implemented in software.
| In both cases one may employ the extended Euclidean algorithm to compute inverses. See here for an example. Alternatively, employ repeated squaring to compute $\rm\:a^{-1} = a^{q-2}\:$ for $\rm\:a \in \mathbb F_q^*\:,\:$ which is conveniently recalled by writing the exponent in binary Horner form. A useful reference is Knuth: TAoCP, vol 2: Seminumerical Algorithms.
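For the prime-field case $\mathbb F_p$, both approaches fit in a few lines of Python (for extension fields the same extended-Euclid idea applies to polynomials instead of integers):

```python
def inverse_euclid(a, p):
    """Inverse of a unit a modulo a prime p, via the extended
    Euclidean algorithm (tracks only the coefficient of a)."""
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % p

def inverse_fermat(a, p):
    """Inverse of a modulo a prime p as a^(p-2) mod p; Python's
    three-argument pow does the repeated squaring."""
    return pow(a, p - 2, p)

assert inverse_euclid(3, 7) == inverse_fermat(3, 7) == 5  # 3 * 5 = 15 = 1 mod 7
```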
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Connection Between Automorphism Groups of a Graph and its Line Graph First, the specific case I'm trying to handle is this:
I have the graph $\Gamma = K_{4,4}$.
I understand that its automorphism group is the wreath product of $S_4 \wr S_2$ and thus it is a group of order 24*24*2=1152.
My goal is to find the order of the AUTOMORPHISM GROUP of the Line Graph: $L(\Gamma)$.
That is - $|Aut(L(G))|$
I used GAP and I already know that the answer is 4608, which just happens to be 4*1152.
I guess this isn't a coincidence. Is there some sort of an argument which can give me this result theoretically?
Also, I would use this thread to ask about information of this problem in general (Connection Between Automorphism Groups of a Graph and its Line Graph).
I suppose that there is no general case theorem.
I was told by one of the professors in my department that "for a lot of cases, there is a general rule of thumb that works" although no more details were supplied.
If anyone has an idea what he was referring to, I'd be happy to know.
Thanks in advance,
Lost_DM
| If $G$ is a graph with minimum valency (degree) 4, then $\operatorname{Aut}(G)$ is isomorphic as a group to $\operatorname{Aut}(L(G))$. See Godsil & Royle, Algebraic Graph Theory, exercise 1.15. The proof is not too hard.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/53939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Determine limit of |a+b| This is a simple problem I am having a bit of trouble with. I am not sure where this leads.
Given that $\vec a = \begin{pmatrix}4\\-3\end{pmatrix}$ and $|\vec b|$ = 3, determine the limits between which $|\vec a + \vec b|$ must lie.
Let, $\vec b = \begin{pmatrix}\lambda\\\mu\end{pmatrix}$, such that $\lambda^2 + \mu^2 = 9$
Then,
$$
\begin{align}
\vec a + \vec b &= \begin{pmatrix}4+\lambda\\-3 + \mu\end{pmatrix}\\
|\vec a + \vec b| &= \sqrt{(4+\lambda)^2 + (\mu - 3)^2}\\
&= \sqrt{\lambda^2 + \mu^2 + 8\lambda - 6\mu + 25}\\
&= \sqrt{8\lambda - 6\mu + 34}
\end{align}
$$
Then I assumed $8\lambda - 6\mu + 34 \ge 0$. This is as far I have gotten. I tried solving the inequality, but it doesn't have any real roots? Can you guys give me a hint? Thanks.
| We know that $\|a\|=5$, $\|b\|=3$, and we have two vector formulas
$$ \|a+b\|^2=\|a\|^2+2(a\cdot b)+\|b\|^2,$$
$$ a\cdot b = \|a\| \|b\| \cos\theta.$$
Combining all this, we have
$$\|a+b\|^2 = (5^2+3^2)+2(5)(3)\cos\theta.$$
Cosine's maximum and minimum values are $+1$ and $-1$, so we have
$$\|a+b\|^2 \in [4,64]$$
$$\|a+b\| \in [2,8].$$
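A brute-force sweep of $\vec b$ around the circle of radius $3$ (a numerical sanity check I am adding, not a proof) reproduces the limits $2$ and $8$:

```python
import math

a = (4, -3)                        # |a| = 5
lo, hi = float("inf"), 0.0
steps = 3600
for k in range(steps):
    t = 2 * math.pi * k / steps
    b = (3 * math.cos(t), 3 * math.sin(t))   # |b| = 3
    r = math.hypot(a[0] + b[0], a[1] + b[1])
    lo, hi = min(lo, r), max(hi, r)
print(round(lo, 3), round(hi, 3))  # close to 2.0 and 8.0
```

The extremes occur when $\vec b$ is antiparallel and parallel to $\vec a$, matching $\cos\theta=\pm 1$ above.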
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/54001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
A periodic decimal expansion Let us suppose that $\{\alpha_{n}\}_{n \in \mathbb{N}}$ is a strictly increasing sequence of natural numbers and that the number obtained by concatenating the decimal representations of the elements of $\{\alpha_{n}\}_{n \in \mathbb{N}}$ after the decimal point, i.e.,
$0.\alpha_{1}\alpha_{2}\alpha_{3}\ldots$
has period $s$ (e.g., $0.12 \mathbf{12} \mathrm{121212}...$ has period 2).
If $a_{k}$ denotes the number of elements in $\{\alpha_{n}\}_{n \in \mathbb{N}}$ with exactly $k$ digits in their decimal representation, does the inequality
$a_{k} \leq s$
always hold?
What would be, in your opinion, the right way to approach this question? I've tried a proof by exhaustion without much success. I'd really appreciate any (self-contained) hints you can provide me with.
| If the period is $s$ then there are essentially $s$ starting places in the recurring decimal for a $k$-digit integer - begin at the first digit of the decimal, the second etc - beyond $s$ you get the same numbers coming round again. If you had $a_k > s$ then two of your $\alpha_n$ with $k$ digits would be the same by the pigeonhole principle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/54077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
eigenvalues of certain block matrices This question inquired about the determinant of this matrix:
$$
\begin{bmatrix}
-\lambda &1 &0 &1 &0 &1 \\
1& -\lambda &1 &0 &1 &0 \\
0& 1& -\lambda &1 &0 &1 \\
1& 0& 1& -\lambda &1 &0 \\
0& 1& 0& 1& -\lambda &1 \\
1& 0& 1& 0&1 & -\lambda
\end{bmatrix}
$$
and of other matrices in a sequence to which it belongs. In a comment I mentioned that if we permute the indices 1, 2, 3, 4, 5, 6 to put the odd ones first and then the even ones, thus 1, 3, 5, 2, 4, 6, then we get this:
$$
\begin{bmatrix}
-\lambda & 0 & 0 & 1 & 1 & 1 \\
0 & -\lambda & 0 & 1 & 1 & 1 \\
0 & 0 & -\lambda & 1 & 1 & 1 \\
1 & 1 & 1 & -\lambda & 0 & 0 \\
1 & 1 & 1 & 0 & -\lambda & 0 \\
1 & 1 & 1 & 0 & 0 & -\lambda
\end{bmatrix}
$$
So this is of the form
$$
\begin{bmatrix}
A & B \\ B & A
\end{bmatrix}
$$
where $A$ and $B$ are symmetric matrices whose characteristic polynomials and eigenvalues are easily found, even if we consider not this one case of $6\times 6$ matrices, but arbitrarily large matrices following the same pattern.
Are there simple formulas for determinants, characteristic polynomials, and eigenvalues for matrices of this latter kind?
I thought of the Haynesworth inertia additivity formula because I only vaguely remembered what it said. But apparently it only counts positive, negative, and zero eigenvalues.
| Because the subblocks of the second matrix (let's call it $C$) commute i.e. AB=BA, you can use a lot of small lemmas given, for example here.
You might also consider the following elimination: let $n$ be the size of $A$ (or $B$), and let (say, for $n=4$)
$$
T = \left(\begin{array}{cccccccc}
1 &0 &0 &0 &0 &0 &0 &0\\
0 &0 &0 &0 &1 &0 &0 &0\\
-1 &1 &0 &0 &0 &0 &0 &0\\
-1 &0 &1 &0 &0 &0 &0 &0\\
-1 &0 &0 &1 &0 &0 &0 &0\\
0 &0 &0 &0 &-1 &1 &0 &0\\
0 &0 &0 &0 &-1 &0 &1 &0\\
0 &0 &0 &0 &-1 &0 &0 &1
\end{array} \right)
$$
Then , $TCT^{-1}$ gives
$$
\hat{C} = \begin{pmatrix}-\lambda &n &\mathbf{0} &\mathbf{1} \\n &-\lambda &\mathbf{1} &\mathbf{0}\\ & &-\lambda I &0\\&&0&-\lambda I \end{pmatrix}
$$
from which you can identify the upper triangular block matrix. The bold face numbers indicate the all ones and all zeros rows respectively. $(1,1)$ block is the $2\times 2$ matrix and $(2,2)$ block is simply $-\lambda I$.
EDIT: So the eigenvalues are $(-\lambda-n),(-\lambda+n)$ and $-\lambda$ with multiplicity of $2(n-1)$. Thus the determinant is also easy to compute, via their product.
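These eigenvalue and determinant claims are easy to spot-check; the sketch below rebuilds the original $6\times6$ matrix ($n=3$, the $\{0,1\}$ pattern of $K_{3,3}$ minus $\lambda$ on the diagonal) and compares its determinant against $(\lambda^2-n^2)\lambda^{2(n-1)}$ at a few integer values of $\lambda$:

```python
def det(M):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def C(lam):
    """The original 6x6 matrix: 1 at positions with i+j odd, -lam on the diagonal."""
    return [[(1 if (i + j) % 2 == 1 else 0) - (lam if i == j else 0)
             for j in range(6)] for i in range(6)]

for lam in [-4, -1, 0, 1, 2, 5]:
    assert det(C(lam)) == (lam ** 2 - 9) * lam ** 4
print("determinant formula verified for n = 3")
```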
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/54133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
} |
Is there a geometric meaning of the Frobenius norm? I have a positive definite matrix $A$. I am going to choose its Frobenius norm $\|A\|_F^2$ as a cost function and then minimize $\|A\|_F^2$. But I think I need to find a reason to convince people it is reasonable to choose $\|A\|_F^2$ as a cost function. So I'm wondering if there are some geometric meanings of the Frobenius norm. Thanks.
Edit: here $A$ is a 3 by 3 matrix. In the problem I'm working on, people usually choose $\det A$ as a cost function since $\det A$ has an obvious geometric interpretation: the volume of the parallelepiped determined by $A$. Now I want to choose $\|A\|_F^2$ as a cost function because of the good properties of $\|A\|_F^2$. That's why I am interested in the geometric meaning of $\|A\|_F^2$.
| In three dimensions (easier to visualize) we know that the scalar triple product of three vectors, say $a, b, c$, is the determinant of a matrix with those vectors as columns and the modulus is the volume of the parallelepiped spanned by $a, b$ and $c$.
The squared Frobenius norm is the average squared length of the four space diagonals of the parallelepiped. This can easily be shown. The diagonals are:
$d_1 = a + b + c\\
d_2 = a + b - c\\
d_3 = b + c - a\\
d_4 = c + a - b.$
Calculate and sum their squared lengths as $d_1^T d_1 + d_2^T d_2 + d_3^T d_3 + d_4^T d_4.$ Things cancel nicely and one is left with $ 4 ( a^T a + b^T b + c^T c)$ which is exactly four times the square of the Frobenius norm.
The proof in more dimensions is along the same lines, just more sides and diagonals.
The squared Frobenius norm of the Jacobian of a mapping from $\mathbb{R}^m$ to $\mathbb{R}^n$ is used, when it is desired that reductions in volume under the mapping shall be favoured in a minimization task. Because of its form, it is much easier to differentiate the squared Frobenius norm, than any other measure which quantifies the volume change, such as the modulus of the determinant of the Jacobian (which can only be used if $m=n$).
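The cancellation is easy to confirm with concrete (made-up) column vectors; in exact integer arithmetic the four squared diagonal lengths sum to exactly four times the squared Frobenius norm:

```python
def sum_sq(v):
    return sum(x * x for x in v)

# columns of an arbitrary (not necessarily regular) parallelepiped matrix [a b c]
a, b, c = (1, 2, 3), (-1, 0, 2), (4, 1, -2)

diagonals = [
    [a[i] + b[i] + c[i] for i in range(3)],
    [a[i] + b[i] - c[i] for i in range(3)],
    [b[i] + c[i] - a[i] for i in range(3)],
    [c[i] + a[i] - b[i] for i in range(3)],
]
frob_sq = sum_sq(a) + sum_sq(b) + sum_sq(c)        # ||[a b c]||_F^2
assert sum(sum_sq(d) for d in diagonals) == 4 * frob_sq
print(frob_sq, sum(sum_sq(d) for d in diagonals))  # 40 160
```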
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/54176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 2,
"answer_id": 1
} |
Order of solving definite integrals I've been coming across several definite integrals in my homework where the solving order is flipped, and am unsure why. Currently, I'm working on calculating the area between both intersecting and non-intersecting graphs.
According to the book, the formula for finding the area bounded by two graphs is
$$A=\int_{a}^{b}f(x)-g(x) \mathrm dx$$
For example, given $f(x)=x^3-3x^2+3x$ and $g(x)=x^2$, you can see that the intersections are at $x=0,1,3$ by factoring. So, at first glance, it looks as if the problem is solved via
$$\int_0^1f(x)-g(x)\mathrm dx+\int_1^3f(x)-g(x)\mathrm dx$$
However, when I solved using those integrals, the answer didn't match the book answer, so I took another look at the work. According to the book, the actual integral formulas are
$$\int_0^1f(x)-g(x)\mathrm dx+\int_1^3g(x)-f(x)\mathrm dx$$
I was a little curious about that, so I put the formulas in a grapher and it turns out that $f(x)$ and $g(x)$ flip values at the intersection $x=1.$
So how can I determine which order to place the $f(x)$ and $g(x)$ integration order without using a graphing utility? Is it dependent on the intersection values?
| You are, I hope, not quoting your calculus book correctly.
The correct result is:
Suppose that $f(x) \ge g(x)$ in the interval from $x=a$ to $x=b$. Then the area of the region between the curves $y=f(x)$ and $y=g(x)$, from the line $x=a$ to the line $x=b$, is equal to
$$\int_a^b(f(x)-g(x))\,dx.$$
The condition $f(x)-g(x) \ge 0$ is essential here.
In your example, from $x=0$ to $x=1$ we have $f(x) \ge g(x)$, so the area from $0$ to $1$ is indeed
$\int_0^1 (f(x)-g(x))\, dx$.
However, from $x=1$ to $x=3$, we have $f(x) -g(x) \le 0$, the curve $y=g(x)$ lies above the curve $y=f(x)$. So the area of the region between the two curves, from $x=1$ to $x=3$, is $\int_1^3(g(x)-f(x))\,dx$.
To find the full area, add up.
Comment: When you calculate $\int_a^b h(x)\,dx$, the integral cheerfully "adds up" and does not worry about whether the things it is adding up are positive or negative. This often gives exactly the answer we need. For example, if $h(t)$ is the velocity at time $t$, then $\int_a^bh(t)\,dt$ gives the net displacement (change of position) as time goes from $a$ to $b$. The integral takes account of the fact that when $h(t)<0$, we are going "backwards."
If we wanted the total distance travelled, we would have to treat the parts where $h(t) \le 0$ and the parts where $h(t)\ge 0$ separately, just as we had to in the area case.
For determining where $f(x)-g(x)$ is positive, negative, we can let $h(x)=f(x)-g(x)$, and try to find where $h(x)$ is positive, negative. A continuous function $h(x)$ can only change sign where $h(x)=0$. (It need not change sign there. For example, if $h(x)=(x-1)^2$, then $h(1)=0$, but $h(x)$ does not change sign at $x=1$.)
If the solutions of $h(x)=0$ are easy to find, we can quickly determine all the places where there might be a change of sign, and the rest is straightforward. Otherwise, a numerical procedure has to be used to approximate the roots.
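As a numerical cross-check of the worked example above (intersections at $x=0,1,3$), integrating $|f-g|$ with a simple midpoint rule reproduces the total area, which works out to $37/12$ for these curves:

```python
def area_between(f, g, pts, n=20000):
    """Total area between y=f(x) and y=g(x), where pts lists consecutive
    intersection points; integrates |f - g| with the midpoint rule."""
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        h = (b - a) / n
        total += h * sum(abs(f(a + (k + 0.5) * h) - g(a + (k + 0.5) * h))
                         for k in range(n))
    return total

f = lambda x: x**3 - 3*x**2 + 3*x
g = lambda x: x**2
print(area_between(f, g, [0, 1, 3]))  # close to 37/12 = 5/12 + 8/3
```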
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/54309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
Combinatorial proof Using the notion of the derivative, it follows from Taylor's formula that
$$e^x=\sum_{k=0}^{\infty}\frac{x^k}{k!}$$
Is there any elementary combinatorial proof of this formula?
Here is my proof for $x$ a natural number.
Denote by $P_k^m$ the number of $k$-permutations with unlimited repetitions of elements from an $m$-set; then we can prove that
$$P_k^m=\sum_{r_0+r_1+...+r_{m-1}=k}\frac{k!}{r_0 !...r_{m-1}!}$$
It also holds that
$$P_k^m=m^k$$
Based on the first formula we can derive that
$$\sum_{k=0}^{\infty}P_k^m\frac{x^k}{k!}=\left(\sum_{k=0}^{\infty}\frac{x^k}{k!}\right)^m$$
and from the second formula,
$$\sum_{k=0}^{\infty}P_k^m\frac{x^k}{k!}=\sum_{k=0}^{\infty}\frac{(mx)^k}{k!}$$
Now it is clear that
$$\sum_{k=0}^{\infty}\frac{(mx)^k}{k!}=\left(\sum_{k=0}^{\infty}\frac{x^k}{k!}\right)^m$$
From the last equation, for $x=1$, taking into account that
$$\sum_{k=0}^{\infty}\frac{1}{k!}=e=2,71828...$$
we finally have that, for a natural number $m$, the following formula is valid:
$$e^m=\sum_{k=0}^{\infty}\frac{m^k}{k!}$$
| We will handle $x>0$ here.
If we define $e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n$, then $e^x=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}$. Note that since $0\le nx-\lfloor nx\rfloor<1$,
$$
\begin{align}
e^x&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}\\
&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor} \left(1+\frac{1}{n}\right)^{nx-\lfloor nx\rfloor}\\
&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor} \lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx-\lfloor nx\rfloor}\\
&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor}
\end{align}
$$
Using the binomial theorem,
$$
\begin{align}
\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor}
&=\sum_{k=0}^{\lfloor nx\rfloor} \frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k}\\
&=\sum_{k=0}^\infty \frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k}
\end{align}
$$
Where $P(n,k)=n(n-1)(n-2)...(n-k+1)$ is the number of permutations of $n$ things taken $k$ at a time.
Note that $0\le\frac{P({\lfloor nx\rfloor},k)}{n^k}\le x^k$ and that $\sum_{k=0}^\infty \frac{x^k}{k!}$ converges absolutely. Thus, if we choose an $\epsilon>0$, we can find an $N$ large enough so that, for all $n$,
$$
0\le\sum_{k=N}^\infty \frac{1}{k!}\left(x^k-\frac{P({\lfloor nx\rfloor},k)}{n^k}\right)\le\frac{\epsilon}{2}
$$
Furthermore, note that $\lim_{n\to\infty}\frac{P({\lfloor nx\rfloor},k)}{n^k}=x^k$. Therefore, we can choose an $n$ large enough so that
$$
0\le\sum_{k=0}^{N-1} \frac{1}{k!}\left(x^k-\frac{P({\lfloor nx\rfloor},k)}{n^k}\right)\le\frac{\epsilon}{2}
$$
Thus, for $n$ large enough,
$$
0\le\sum_{k=0}^\infty \frac{1}{k!}\left(x^k-\frac{P({\lfloor nx\rfloor},k)}{n^k}\right)\le\epsilon
$$
Therefore,
$$
\lim_{n\to\infty}\;\sum_{k=0}^\infty\frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k}=\sum_{k=0}^\infty\frac{x^k}{k!}
$$
Summarizing, we have
$$
\begin{align}
e^x&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}\\
&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{\lfloor nx\rfloor}\\
&=\lim_{n\to\infty}\;\sum_{k=0}^\infty \frac{1}{k!}\frac{P({\lfloor nx\rfloor},k)}{n^k}\\
&=\sum_{k=0}^\infty\frac{x^k}{k!}
\end{align}
$$
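As an illustration (my addition, not part of the original proof), here is a small numerical check that $\left(1+\frac1n\right)^{\lfloor nx\rfloor}$ approaches $\sum_{k\ge 0} x^k/k!$ as $n$ grows, for a sample $x>0$:

```python
import math

def series(x, terms=80):
    """Partial sum of sum_k x^k / k!, built term by term."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)
    return total

x = 2.5
target = series(x)  # should agree with e^x
for n in (10**2, 10**4, 10**6):
    approx = (1 + 1 / n) ** math.floor(n * x)
    print(n, abs(approx - target))  # error shrinks roughly like 1/n
```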
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/54448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 3
} |
Is this Batman equation for real? HardOCP has an image with an equation which apparently draws the Batman logo. Is this for real?
Batman Equation in text form:
\begin{align}
&\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\
&\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\
&\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\
&\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\
&\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0
\end{align}
| The 'Batman equation' above relies on an artifact of the plotting software used which blithely ignores the fact that the value $\sqrt{\frac{|x|}{x}}$ is undefined when $x=0$. Indeed, since we’re dealing with real numbers, this value is really only defined when $x>0$. It seems a little ‘sneaky’ to rely on the solver to ignore complex values and also to conveniently ignore undefined values.
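To make the masking trick concrete (this snippet is mine, not from the original post): in plain Python the factor $\sqrt{|x|/x}$ evaluates to $1$ for $x>0$ and simply raises an error otherwise — the plotting software effectively swallows those failures, which is how each piece of the curve gets restricted to its interval.

```python
import math

def mask(x):
    """sqrt(|x|/x): 1.0 for x > 0, undefined for x <= 0."""
    return math.sqrt(abs(x) / x)  # ValueError for x < 0, ZeroDivisionError at 0

print(mask(2.0))  # 1.0 -- the factor is harmless on x > 0
for bad in (-2.0, 0.0):
    try:
        mask(bad)
    except (ValueError, ZeroDivisionError) as err:
        print(f"x = {bad}: undefined ({type(err).__name__})")
```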
A nicer solution would be one that is unequivocally defined everywhere (in the real, as opposed to complex, world). Furthermore, a nice solution would be ‘robust’ in that small variations (such as those arising from, say, roundoff) would perturb the solution slightly (as opposed to eliminating large chunks).
Try the following in Maxima (actually wxmaxima) which is free. The resulting plot is not quite as nice as the plot above (the lines around the head don’t have that nice ‘straight line’ look), but seems more ‘legitimate’ to me (in that any reasonable solver should plot a similar shape). Please excuse the code mess.
/* [wxMaxima batch file version 1] [ DO NOT EDIT BY HAND! ]*/
/* [ Created with wxMaxima version 0.8.5 ] */
/* [wxMaxima: input start ] */
load(draw);
/* [wxMaxima: input end ] */
/* [wxMaxima: input start ] */
f(a,b,x,y):=a*x^2+b*y^2;
/* [wxMaxima: input end ] */
/* [wxMaxima: input start ] */
c1:sqrt(26);
/* [wxMaxima: input end ] */
/* [wxMaxima: input start ] */
draw2d(implicit(
f(1/36,1/9,x,y)
+max(0,2-f(1.5,1,x+3,y+2.7))
+max(0,2-f(1.5,1,x-3,y+2.7))
+max(0,2-f(1.9,1/1.7,(5*(x+1)+(y+3.5))/c1,(-(x+1)+5*(y+3.5))/c1))
+max(0,2-f(1.9,1/1.7,(5*(x-1)-(y+3.5))/c1,((x-1)+5*(y+3.5))/c1))
+max(0,2-((1.1*(x-2))^4-(y-2.1)))
+max(0,2-((1.1*(x+2))^4-(y-2.1)))
+max(0,2-((1.5*x)^8-(y-3.5)))
-1,
x,-6,6,y,-4,4));
/* [wxMaxima: input end ] */
/* Maxima can't load/batch files which end with a comment! */
"Created with wxMaxima"$
The resulting plot (image not reproduced in this text version) shows the Batman silhouette.
(Note that this is, more or less, a copy of the entry I made on http://blog.makezine.com.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/54506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "466",
"answer_count": 10,
"answer_id": 6
} |