How does a Class group measure the failure of Unique factorization? I have been stuck on a difficult problem for the last few days. I had developed some intuition of my own for understanding the class group, but I have lost track of it, and now I am thoroughly confused.
The class group is given by $\rm{Cl}(F)=$ {fractional ideals of $F$} / {principal fractional ideals of $F$} (where $F$ is a quadratic number field), so we are effectively quotienting out the principal fractional ideals (that is what I understood by the quotient group). But how can that class group measure the failure of unique factorization?
For example, a common example that can be found in many textbooks is $\mathbb{Z}[\sqrt{-5}]$,
in which we can factor $6=2\cdot3=(1+\sqrt{-5})(1-\sqrt{-5})$, so unique factorization fails. Now can someone kindly clarify these points?
*
*How can one construct $\rm{Cl}(\sqrt{-5})$ using the quotient construction?
*What are the elements of $\rm{Cl}(\sqrt{-5})$? What do those elements indicate? (I think they must somehow indicate the obstructions that prevent $\mathbb{Z}[\sqrt{-5}]$ from having unique factorization.)
*What does $h(n)$ indicate? (The class number.) When $h(n)=1$, unique factorization holds. But what does the $1$ in $h(n)=1$ indicate? It means that there is one element in the class group, but doesn't that prevent unique factorization?
EDIT:
I am interested in knowing whether there are any polynomial-time algorithms that list all the numbers that fail to factor uniquely within a number field.
I expect that the class group might have something to do with this. By using the class group of a number field, can we extract all such numbers? For example, if we plug in $\mathbb{Z}[\sqrt{-5}]$, then we should get $6$ and the other numbers that do not admit unique factorization.
Please do answer the above points and save me from confusion .
Thank you.
| h=1 means that the size of the class group is 1. That means that the group is the trivial group with only one element, the identity. The identity element of the class group is the equivalence class of principal ideals. Hence h=1 is equivalent to "all fractional ideals are principal" or equivalently "all ideals are principal".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 2
} |
What is the chance of an event happening a set number of times or more after a number of trials? Assuming every trial is independent of all the others and the probability of success is the same on every trial, how can you determine the chance of success occurring a set number of times or more?
For example, you run 20 independent trials and the chance of a "successful" trial each time is 60%. How would you determine the chance of 3 or more "successful" trials?
| If the probability of success on any trial is $p$, then the probability of exactly $k$ successes in $n$ trials is
$$\binom{n}{k}p^k(1-p)^{n-k}.$$
For details, look for the Binomial Distribution on Wikipedia.
So to calculate the probability of $3$ or more successes in your example, let $p=0.60$ and $n=20$. Then calculate the probabilities that $k=3$, $k=4$, and so on up to $k=20$ using the above formula, and add up.
A lot of work! It is much easier in this case to find the probability of $2$ or fewer successes by using the above formula, and subtracting the answer from $1$. So, with $p=0.60$, the probability of $3$ or more successes is
$$1-\left(\binom{20}{0}p^0(1-p)^{20}+\binom{20}{1}p(1-p)^{19}+\binom{20}{2}p^2(1-p)^{18} \right).$$
For the calculations, note that $\binom{n}{k}=\frac{n!}{k!(n-k)!}$. In particular, $\binom{20}{0}=1$, $\binom{20}{1}=20$ and $\binom{20}{2}=\frac{(20)(19)}{2!}=190$.
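The complement calculation can be carried out with a few lines of Python (a sketch; `math.comb` needs Python 3.8+):

```python
from math import comb

def prob_at_least(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p), via the complement P(X <= k-1)."""
    tail = sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k))
    return 1 - tail

# 20 trials, success probability 0.60, at least 3 successes
print(prob_at_least(20, 3, 0.60))  # ~0.99999496
```

With $p=0.60$ the event "3 or more successes" is nearly certain, which matches the three tiny subtracted terms in the formula above.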
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Longest cylinder of specified radius in a given cuboid Find the maximum height (in exact value) of a cylinder of radius $x$ so that it can be completely placed into a $100\ cm \times 60\ cm \times 50\ cm$ cuboid.
This question comes from http://hk.knowledge.yahoo.com/question/question?qid=7012072800395.
I know that this question is equivalent to finding twice the maximum height (in exact value) of a right cone of radius $x$ that can be completely placed into a $50\ cm \times 30\ cm \times 25\ cm$ cuboid with the apex of the cone at a corner of the cuboid, but I still have no idea so far.
| A beginning:
Let $a_i>0$ $\>(1\leq i\leq 3)$ be the dimensions of the box. Then we are looking for a unit vector ${\bf u}=(u_1,u_2,u_3)$ in the first octant and a length $\ell>0$ such that
$$\ell u_i+2 x\sqrt{1-u_i^2}=a_i\qquad(1\leq i\leq 3)\ .$$
When $x$ is small compared to the dimensions of the box one might begin with
$$\ell^{(0)}:=d:=\sqrt{a_1^2+a_2^2+a_3^2}\ ,\qquad u_i^{(0)}:={a_i\over d}\quad(1\leq i\leq3)$$
and do a few Newton iterations in order to obtain an approximate solution.
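As a numerical illustration, here is a sketch in Python. It solves the same system, but with a simple fixed-point iteration in place of Newton's method (rewrite the equations as $v_i = a_i - 2x\sqrt{1-u_i^2}$, then set $\ell=\|v\|$, $u=v/\ell$, and repeat); the box $100\times60\times50$ and radius $x=5$ are sample values:

```python
from math import sqrt

def longest_cylinder(a, x, iters=200):
    """Solve l*u_i + 2*x*sqrt(1 - u_i^2) = a_i with |u| = 1 by fixed-point
    iteration, starting from the main diagonal of the box as suggested above.

    Returns (l, u): the cylinder height l and its unit axis direction u.
    """
    d = sqrt(sum(ai * ai for ai in a))
    u = [ai / d for ai in a]
    l = d
    for _ in range(iters):
        v = [ai - 2 * x * sqrt(1 - ui * ui) for ai, ui in zip(a, u)]
        l = sqrt(sum(vi * vi for vi in v))
        u = [vi / l for vi in v]
    return l, u

l, u = longest_cylinder([100.0, 60.0, 50.0], x=5.0)
print(l, u)  # approximate height and axis direction
```

When $x$ is small relative to the box, the iteration contracts quickly toward a solution of the three equations; for larger $x$ the Newton approach from the answer (or a damped variant) is safer.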
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
When can one use logarithms to multiply matrices If $a,b \in \mathbb{Z}_{+}$, then $\exp(\log(a)+\log(b))=ab$. If $A$ and $B$ are square matrices, when can we multiply $A$ and $B$ using logarithms? If $A \neq B^{-1}$, should $A$ and $B$ be symmetric?
| When they commute, that is, when $AB=BA$ (for example, when they are simultaneously diagonalizable, i.e. have a common basis of eigenvectors).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/178973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If $a^n-b^n$ is an integer for all positive integral values of $n$, then $a$, $b$ must also be integers. If $a^n-b^n$ is an integer for every positive integer $n$, with $a\neq b$, then $a,b$ must also be integers.
Source: Number Theory for Mathematical Contests, Problem 201, Page 34.
Let $a=A+c$ and $b=B+d$, where $A,B$ are integers and $c,d$ are non-negative fractions less than $1$.
Since $a-b$ is an integer, $c=d$.
$a^2-b^2=(A+c)^2-(B+c)^2=A^2-B^2+2(A-B)c=I_2$ (say), where $I_2$ is an integer.
So $c=\frac{I_2-(A^2-B^2)}{2(A-B)}$, i.e., a rational fraction $=\frac{p}{q}$ (say), where $(p,q)=1$.
When I tried to proceed to higher values of $n$, things became too complex to calculate.
| Assuming $a \neq b$:
If $a^n - b^n$ is an integer for all $n$, then it is in particular an integer for $n = 1$ and $n = 2$; note that $a+b=\frac{a^2-b^2}{a-b}$ is then rational, so $a$ and $b$ are rational.
From there, using further values of $n$, you should be able to prove that $a$ is an integer.
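Note that $n=1$ and $n=2$ alone only force $a,b$ to be rational. A quick check with Python's fractions module (an illustrative example of mine, not from the hint) shows a non-integer pair that survives $n=1,2$ but fails at $n=3$, so the larger exponents are really needed:

```python
from fractions import Fraction

a, b = Fraction(3, 2), Fraction(1, 2)  # non-integers with a != b
print(a - b)        # 1     -> integer
print(a**2 - b**2)  # 2     -> integer
print(a**3 - b**3)  # 13/4  -> NOT an integer: n = 3 rules this pair out
```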
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Is this CRC calculation correct? I am currently studying for an exam and trying to check a (binary) message for errors using a polynomial. I would appreciate it if somebody could verify whether my results below are valid.
Thanks.
Message: 11110101 11110101
Polynomial: $x^4 + x^2 + 1$
Divisor (derived from polynomial): 10101
Remainder: 111
Result: there is an error in the above message?
Also, I had a link to an online calculator that would do the division, but I can't relocate it; any links to a calculator would be greatly appreciated.
Thanks.
| 1111010111110101 | 10101
+10101 | 11001011101
10111 |
+10101 |
10011 |
+10101 |
11011 |
+10101 |
11101 |
+10101 |
10000 |
+10101 |
10110 |
+10101 |
111 | <- you are right! there is an error in the message!
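The long division above can be reproduced with a short mod-2 division routine (a sketch; `0b10101` encodes the divisor $x^4+x^2+1$, whose degree is 4):

```python
def crc_remainder(message_bits, divisor, deg):
    """Binary (mod-2) polynomial long division; returns the remainder."""
    r = 0
    for bit in message_bits:
        r = (r << 1) | bit
        if r >> deg:      # leading term reached degree `deg`: XOR off the divisor
            r ^= divisor
    return r

msg = [int(b) for b in "1111010111110101"]
rem = crc_remainder(msg, 0b10101, 4)
print(format(rem, "b"))  # 111 -> nonzero remainder, so the message has an error
```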
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is concave quadratic + linear a concave function? Basic question about convexity/concavity:
Is the difference of a concave quadratic function $f(X)$ of a matrix $X$ and a linear function $l(X)$ a concave function?
That is, is $f(X)-l(X)$ concave?
If so (or if not), what conditions need to be checked?
| A linear function is both concave and convex (so $-l$ is concave), and the sum of two concave functions is concave.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Function written as a sum of two functions having the IVP I heard this problem and I am a bit stuck.
Given a function $f : I \rightarrow \mathbb{R}$, where $I \subset \mathbb{R}$ is an open interval.
Then $f$ can be written as $f=g+h$, where $g,h$ are defined on the same interval and have the Intermediate Value Property. I first tried to construct one function arbitrarily at two points and then to define it in a way that it has the IVP, but I cannot manage to control the other function: as I try to fix one, I destroy the other, and I cannot see how to be certain I have enough points to define both so that each has the IVP.
Any help appreciated! Thank you.
| Edit: In fact, all the information I give below (and more) is provided in another question in a much more organized way. I just found it.
My original post: The Intermediate Value Property is also called the Darboux property. Sierpinski first proved this theorem. The problem is treated, in much more generality, in a blog post by Beni Bogosel, a member of our own community:
http://mathproblems123.files.wordpress.com/2010/07/strange-functions.pdf
It is also proved in (as I found from Wikipedia):
Bruckner, Andrew M: Differentiation of real functions, 2 ed, page 6, American Mathematical Society, 1994
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Canonical Isomorphism Between $\mathbf{V}$ and $(\mathbf{V}^*)^*$ For the finite-dimensional case, we have a canonical isomorphism between $\mathbf{V}$, a vector space with the usual addition and scalar multiplication, and $(\mathbf{V}^*)^*$, the "dual of the dual of $\mathbf{V}$." This canonical isomorphism means that the isomorphism is always the same, independent of additional choices.
We can define a map $I : \mathbf{V} \to (\mathbf{V}^*)^*$ by $$x \mapsto I(x) \in (\mathbf{V}^*)^* \ \text{ where } \ I(x)(f) = f(x) \ \text{for any } \ f \in \mathbf{V}^*$$
My Question: what can go wrong in the infinite-dimensional case? The notes I am studying remark that if $\mathbf{V}$ is finite-dimensional, then $I$ is an isomorphism, but that in the infinite-dimensional case things can go wrong. How?
| There are two things that can go wrong in the infinite-dimensional (normed) case.
First you could try to take the algebraic dual of $V$. Here it turns out that $V^{**}$ is much larger than $V$ for simple cardinality reasons as outlined in the Wikipedia article.
On the other hand, if $V$ is a normed linear space and you take the continuous dual, $V''$, then $V'$ (and thus also $V''$) will always be a Banach space. But! While $I$ as a map from $V$ to $V^{**}$ is obviously well-defined, it is not entirely obvious that $I(x)$ is in fact continuous for all $x$. In fact it is a consequence of the Hahn–Banach theorem which (roughly) states that there are "enough" continuous linear maps from $V$ to the base field in order for $V'$ and $V''$ to be interesting, e.g. the map $I$ is injective.
If $V$ is not a normed linear space, then things are more complicated and better left to a more advanced course in functional analysis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 5,
"answer_id": 1
} |
Enumerate certain special configurations - combinatorics. Consider the vertices of a regular n-gon, numbered 1 through n. (Only the vertices, not the sides).
A "configuration" means some of these vertices are joined by edges.
A "good" configuration is one with the following properties:
1) There is at least one edge.
2) There can be multiple edges from a single vertex.
3) If A and B are joined by an edge, then the degree of A equals the degree of B.
4) No two edges may intersect, except possibly at their endpoints.
5) The degree of each vertex is at most k. (where $0\leq k \leq n$ )
Find f(n, k), the number of good configurations. For example, f(3, 2) = 4 and f(n, 0) = 0.
| We can show that vertex degrees $k \le 2$. Suppose for contradiction that $n$ is the size of a minimal counterexample, a convex $n$-gon with some degree $k \gt 2$. By minimality (discarding any vertices not connected to the given one) all vertices in the $n$-gon have degree $k$.
But it is known that the maximum number of nonintersecting diagonals of an $n$-gon is $n-3$. Add in the outer edges of the polygon itself and we'd have at most $2n-3$ nonintersecting edges.
But for $n$ vertices to each have degree $k$ requires $\frac{nk}{2}$ edges. Now an elementary inequality argument:
$$ \frac{nk}{2} \le 2n-3 $$
$$ k \le 4 - \frac{6}{n} $$
immediately gives us that $k \le 3$.
To rule out $k = 3$ makes essential use of the convexity of the polygon, in that a nonconvex quadrilateral allows nonintersecting edges with three meeting at each vertex. However in a convex polygon once the maximum number of nonintersecting diagonals are added, the result is a dissection of the polygon into triangles. At least two of these triangles consist of two "external" edges and one diagonal, so that the corresponding vertex opposite the diagonal edge has only degree $2$.
Added: Let's flesh out the reduction of $f(n,k)$ to fewer vertices by consideration of cases: vertex $1$ has (a) no edge, (b) one edge, or (c) two edges (if $k=2$ allows).
If vertex $1$ has no edge, we get a contribution of $f(n-1,k)$ from "good configurations" of the remaining $n-1$ vertices.
If vertex $1$ has just one edge, its other endpoint vertex $p$ must also have only that edge. This induces a contribution summing the "good configurations" of the two sets of vertices on either side of vertex $p$, including possibly empty ones there since we've already satisfied the requirement of at least one edge:
$$ \sum_{p=2}^{n} (1 + f(p-2,k)) (1 + f(n-p,k)) $$
If vertex $1$ has two edges (as noted, only possible if $k=2$), the contributions are similar but need more bookkeeping. Vertex $1$ belongs to a cycle of $r \gt 2$ edges, and the remaining $n-r$ vertices are partitioned thereby into varying consecutive subsets, each of which has its own "good configuration" (or an empty one).
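For small $n$ the case analysis can be cross-checked by brute force over all noncrossing edge subsets (a sketch; the function names are mine, and simple edges are assumed):

```python
from itertools import combinations

def crosses(e, f):
    """Two chords of a convex polygon (vertices 0..n-1 in boundary order) cross
    in their interiors iff their endpoints strictly interleave."""
    (a, b), (c, d) = sorted(e), sorted(f)
    return (a < c < b < d) or (c < a < d < b)

def good(n, k):
    """Count 'good' configurations: nonempty, noncrossing edge sets in which
    joined vertices have equal degree and every degree is at most k."""
    edges = list(combinations(range(n), 2))
    count = 0
    for m in range(1, len(edges) + 1):
        for sub in combinations(edges, m):
            if any(crosses(e, f) for e, f in combinations(sub, 2)):
                continue
            deg = [0] * n
            for a, b in sub:
                deg[a] += 1
                deg[b] += 1
            if max(deg) <= k and all(deg[a] == deg[b] for a, b in sub):
                count += 1
    return count

print(good(3, 2))  # 4, matching the stated value f(3, 2) = 4
```

This brute force is exponential in $\binom{n}{2}$, so it is only a sanity check for the recursion sketched above, not a replacement for it.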
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Showing the sum of orthogonal projections with orthogonal ranges is also an orthogonal projection
Show that if $P$ and $Q$ are two orthogonal projections with orthogonal ranges, then $P+Q$ is also an orthogonal projection.
First I need to show $(P+Q)^\ast = P+Q$. I am thinking that since
\begin{align*}
((P+Q)^\ast f , g) & = (f,(P+Q)g) \\
& = (f,Pg) + (f,Qg) \\
& = (P^\ast f,g) + (Q^\ast f,g) \\
& = (Pf,g) + (Qf,g) \\
& = ((P+Q)f,g),
\end{align*}
we get $(P+Q)^\ast=P+Q$.
I am not sure if what I am thinking is right, since I assumed that $(P+Q)f=Pf+Qf$ holds for any bounded linear operators $P$, $Q$.
For $(P+Q)^2=P+Q$, I use
$$(P+Q)^2= P^2 + Q^2 + PQ +QP,$$
but I can't show that $PQ=0$ and $QP=0$.
Anyone can help me? Thanks.
| To complete your proof we need the following observations.
If $\langle f,g\rangle=0$ for all $g\in H$, then $f=0$. Indeed, take $g=f$; then you get $\langle f,f\rangle=0$, and by the definition of the inner product this implies $f=0$.
Since $\mathrm{Im}(P)\perp\mathrm{Im}(Q)$, for all $f,g\in H$ we have $\langle Pf,Qg\rangle=0$, which is equivalent to $\langle Q^*Pf,g\rangle=0$ for all $f,g\in H$. Using the observation above, we see that $Q^*Pf=0$ for all $f\in H$, i.e. $Q^*P=0$. Since $Q^*=Q$ and $P^*=P$, we conclude
$$
QP=Q^*P=0
$$
$$
PQ=P^*Q^*=(QP)^*=0^*=0
$$
In fact $R$ is an orthogonal projection iff $R=R^*=R^2$. In this case we can prove your result almost algebraically
$$
(P+Q)^*=P^*+Q^*=P+Q
$$
$$
(P+Q)^2=P^2+PQ+QP+Q^2=P+0+0+Q=P+Q
$$
I said almost, because the proof of $PQ=QP=0$ requires some machinery involving elements of $H$ and its inner product.
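As a concrete finite-dimensional check of the algebra above, take $P$ and $Q$ to be the orthogonal projections onto two orthogonal coordinate axes of $\mathbb R^3$ (a sketch with plain-Python matrices):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Projections onto the x-axis and the y-axis of R^3: their ranges are orthogonal.
P = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
Q = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]

R = madd(P, Q)
print(matmul(R, R) == R)                 # True: R^2 = R
print(transpose(R) == R)                 # True: R* = R
print(matmul(P, Q) == [[0, 0, 0]] * 3)   # True: PQ = 0, as in the proof
```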
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Trigonometry proof involving sum, difference and product formulas How would I prove the following trig identity?
$$\cos^5x = \frac{1}{16} \left( 10 \cos x + 5 \cos 3x + \cos 5x \right)$$
I know it involves the sum and difference identities, but I am not sure what to do.
| $$\require{cancel}
\frac1{16} [ 5(\cos 3x+\cos x)+\cos 5x+5\cos x ]\\
=\frac1{16}[10\cos x \cos 2x+ \cos 5x +5 \cos x]\\
=\frac1{16} [5\cos x(2\cos 2x+1)+\cos 5x]\\
=\frac1{16} [5\cos x(2(2\cos^2 x-1)+1)+\cos 5x]\\
=\frac1{16} [5\cos x(4\cos^2 x-1)+\cos 5x]\\
=\frac1{16} [20\cos^3 x-5\cos x+\cos 5x]\\
=\frac1{16} [\cancel{20\cos^3 x}\cancel{-5\cos x}+16\cos^5 x\cancel{-20\cos^3 x}+\cancel{5\cos x}]\\
=\frac1{16} (16\cos^5 x)\\=\cos^5 x$$
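A derivation like this is easy to spot-check numerically before writing it up (a quick sketch):

```python
from math import cos

def rhs(x):
    return (10 * cos(x) + 5 * cos(3 * x) + cos(5 * x)) / 16

for x in [0.0, 0.3, 1.0, 2.5, -1.7]:
    assert abs(cos(x) ** 5 - rhs(x)) < 1e-12
print("identity holds at all test points")
```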
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
A Tri-Factorable Positive integer Found this problem in my SAT book the other day and wanted to see if anyone could help me out.
A positive integer is said to be "tri-factorable" if it is the product of three consecutive integers. How many positive integers less than 1,000 are tri-factorable?
| HINT:
$1000 = 10\cdot 10\cdot 10 < 10\cdot 11\cdot 12$, so in the product $n(n+1)(n+2)$, $n$ must be less than $10$.
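The hint can be confirmed by direct enumeration (a sketch):

```python
# Products n(n+1)(n+2) for n = 1..9, per the hint; all land below 1,000.
tri = [n * (n + 1) * (n + 2) for n in range(1, 10)]
print(tri)       # [6, 24, 60, 120, 210, 336, 504, 720, 990]
print(len(tri))  # 9 tri-factorable positive integers less than 1,000
```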
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the domain of $f(x)=\frac{3x+1}{\sqrt{x^2+x-2}}$
Find the domain of $f(x)=\dfrac{3x+1}{\sqrt{x^2+x-2}}$
This is my work so far:
$$\dfrac{3x+1}{\sqrt{x^2+x-2}}\cdot \sqrt{\dfrac{x^2+x-2}{x^2+x-2}}$$
$$\dfrac{(3x+1)(\sqrt{x^2+x-2})}{x^2+x-2}$$
$(3x+1)(\sqrt{x^2+x-2})$ = $\alpha$ (Just because it's too much to type)
$$\dfrac{\alpha}{\left[\dfrac{-1\pm \sqrt{1-4(1)(-2)}}{2}\right]}$$
$$\dfrac{\alpha}{\left[\dfrac{-1\pm \sqrt{9}}{2}\right]}$$
$$\dfrac{\alpha}{\left[\left(\dfrac{-1+3}{2}\right)\left(\dfrac{-1-3}{2}\right)\right]}$$
$$\dfrac{\alpha}{(1)(-2)}$$
Now, I checked on WolframAlpha and the domain is $x\in \mathbb R: x\lt -2$ or $x\gt 1$
But my question is: what do I do with the numerator? Or does it just not matter at all?
| Note that $x^2+x-2=(x-1)(x+2)$. There is a problem only if $(x-1)(x+2)$ is $0$ or negative. (If it is $0$, we have a division by $0$ issue, and if it is negative we have a square root of a negative issue.)
Can you find where $(x-1)(x+2)$ is $0$? Can you find where it is negative? Together, these are the numbers which are not in the domain of $f(x)$.
Or, to view things more positively, the function $f(x)$ is defined precisely for all $x$ such that $(x-1)(x+2)$ is positive.
Remark: Let $g(x)=x^2+x-2$. We want to know where $g(x)$ is positive. By factoring, or by using the Quadratic Formula, we can see that $g(x)=0$ at $x=-2$ and at $x=1$.
It is a useful fact that a nice continuous function can only change sign by going through $0$. This means that in the interval $(-\infty, -2)$, $g(x)$ has constant sign. It also has constant sign in $(-2,1)$, and also in $(1,\infty)$.
We still don't know which signs. But this can be determined by finding $g(x)$ at a test point in each interval. For example, let $x=-100$. Clearly $g(-100)$ is positive, so $g(x)$ is positive for all $x$ in the interval $(-\infty,-2)$.
For the interval $(-2,1)$, $x=0$ is a convenient test point. Note that $g(0) \lt 0$, so $g(x)$ is negative in the whole interval $(-2,1)$. A similar calculation will settle things for the remaining interval $(1,\infty)$.
There are many other ways to handle the problem. For example, you know that the parabola $y=(x+2)(x-1)$ is upward facing, and crosses the $x$-axis at $x=-2$ and $x=1$. So it is below the $x$-axis (negative) only when $-2 \lt x \lt 1$.
Or we can work with pure inequalities. The product $(x+2)(x-1)$ is positive when $x+2$ and $x-1$ are both positive. This happens when $x \gt 1$. The product is also positive when $x+2$ and $x-1$ are both negative. This happens when $x \lt -2$.
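The test-point method translates directly into a few lines of code (a sketch):

```python
def g(x):
    return x * x + x - 2  # = (x - 1) * (x + 2)

# One test point inside each interval cut out by the roots x = -2 and x = 1:
for x in (-100, 0, 2):
    print(x, "g(x) > 0:", g(x) > 0)

# f is defined exactly where g(x) > 0, i.e. on (-inf, -2) union (1, inf).
```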
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Show that if $\kappa$ is an uncountable cardinal, then $\kappa$ is an epsilon number Firstly, I give the definition of the epsilon number:
$\alpha$ is called an epsilon number iff $\omega^\alpha=\alpha$.
Show that if $\kappa$ is an uncountable cardinal, then $\kappa$ is an epsilon number and there are $\kappa$ epsilon numbers below $\kappa$; in particular, the first epsilon number, called $\epsilon_0$, is countable.
I've tried, but I don't have any ideas for this. Could anybody help me?
| The following is intended as a half-outline/half-solution.
We will prove by induction that every uncountable cardinal $\kappa$ is an $\epsilon$-number, and that the family $E_\kappa = \{ \alpha < \kappa : \omega^\alpha = \alpha \}$ has cardinality $\kappa$.
Suppose that $\kappa$ is an uncountable cardinal such that the two facts above are known for every uncountable cardinal $\lambda < \kappa$.
*
*If $\kappa$ is a limit cardinal, note that in particular $\kappa$ is a limit of uncountable cardinals. By normality of ordinal exponentiation it follows that $$\omega^\kappa = \lim_{\lambda < \kappa} \omega^\lambda = \lim_{\lambda < \kappa} \lambda = \kappa,$$ where the limit is taken only over the uncountable cardinals $\lambda < \kappa$.
Also, it follows that $E_\kappa = \bigcup_{\lambda < \kappa} E_\lambda$, and so $| E_\kappa | = \lim_{\lambda < \kappa} | E_\lambda | = \kappa$.
*If $\kappa$ is a successor cardinal, note that $\kappa$ is regular. Note, also, that every uncountable cardinal is an indecomposable ordinal. Therefore $\kappa = \omega^\delta$ for some (unique) ordinal $\delta$. As $\omega^\kappa \geq \kappa$, we know that $\delta \leq \kappa$. It suffices to show that $\omega^\beta < \kappa$ for all $\beta < \kappa$. We do this by induction: assume $\beta < \kappa$ is such that $\omega^\gamma < \kappa$ for all $\gamma < \beta$.
*
*If $\beta = \gamma + 1$, note that $\omega^\beta = \omega^\gamma \cdot \omega = \lim_{n < \omega} \omega^\gamma \cdot n$. By indecomposability it follows that $\omega^\gamma \cdot n < \kappa$ for all $n < \omega$, and by regularity of $\kappa$ we have that $\{ \omega^\gamma \cdot n : n < \omega \}$ is bounded in $\kappa$.
*If $\beta$ is a limit ordinal, then $\omega^\beta = \lim_{\gamma < \beta} \omega^\gamma$. Note by regularity of $\kappa$ that $\{ \omega^\gamma : \gamma < \beta \}$ must be bounded in $\kappa$.
To show that $E_\kappa$ has cardinality $\kappa$, note that by starting with any ordinal $\alpha < \kappa$ and defining the sequence $\langle \alpha_n \rangle_{n < \omega}$ by $\alpha_0 = \alpha$ and $\alpha_{n+1} = \omega^{\alpha_n}$ we have that $\alpha_\omega = \lim_{n < \omega} \alpha_n < \kappa$ is an $\epsilon$-number. Use this fact to construct a strictly increasing $\kappa$-sequence of $\epsilon$-numbers less than $\kappa$.
(There must be an easier way, but I cannot think of it.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Two sums with Fibonacci numbers
*
*Find closed form formula for sum: $\displaystyle\sum_{n=0}^{+\infty}\sum_{k=0}^{n} \frac{F_{2k}F_{n-k}}{10^n}$
*Find closed form formula for sum: $\displaystyle\sum_{k=0}^{n}\frac{F_k}{2^k}$ and its limit with $n\to +\infty$.
First association with both problems: generating functions and convolution. But I have been thinking about a solution for over a week and still can't manage. Can you help me?
| For (2) you have $F_k = \dfrac{\varphi^k}{\sqrt 5}-\dfrac{\psi^k}{\sqrt 5}$ where $\varphi = \frac{1 + \sqrt{5}}{2} $ and $\psi = \frac{1 - \sqrt{5}}{2}$ so the problem becomes the difference between two geometric series.
For (1) I think you can turn this into something like $\displaystyle \sum_{n=0}^{\infty} \frac{F_{2n+1}-F_{n+2}}{2\times 10^n}$ and again make it into a sum of geometric series.
There are probably other ways.
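Both sums can be checked numerically with a short script. A sketch: the target values $2$ and $100/6319$ below are my own computation along the suggested convolution route, from the generating functions $\sum F_n x^n = x/(1-x-x^2)$ and $\sum F_{2k}x^k = x/(1-3x+x^2)$ evaluated at $x=1/2$ and $x=1/10$ respectively:

```python
# Fibonacci numbers F_0..F_259, enough for both tails to be negligible.
F = [0, 1]
while len(F) < 260:
    F.append(F[-1] + F[-2])

# Sum (2): sum_{k>=0} F_k / 2^k = x/(1-x-x^2) at x = 1/2, which is 2.
s2 = sum(F[k] / 2**k for k in range(200))
print(s2)

# Sum (1): the double sum is a convolution, so its value is the product of
# the two generating functions at x = 1/10: (10/71) * (10/89) = 100/6319.
s1 = sum(F[2 * k] * F[n - k] / 10**n for n in range(60) for k in range(n + 1))
print(s1, 100 / 6319)
```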
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Goldbach's conjecture and number of ways in which an even number can be expressed as a sum of two primes Is there a function that counts the number of ways in which an even number can be expressed as a sum of two primes?
| See Goldbach's comet at Wikipedia.
EDIT: To expand on this a little, let $g(n)$ be the number of ways of expressing the even number $n$ as a sum of two primes. Wikipedia gives a heuristic argument for $g(n)$ to be approximately $2n/(\log n)^2$ for large $n$. Then it points out a flaw with the heuristic, and explains how Hardy and Littlewood repaired the flaw to come up with a better conjecture. The better conjecture states that, for large $n$, $g(n)$ is approximately $cn/(\log n)^2$, where $c>0$ depends on the primes dividing $n$. In all cases, $c>1.32$.
I stress that this is all conjectural, as no one has been able to prove even that $g(n)>0$ for all even $n\ge4$.
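For small $n$, the counting function is easy to compute directly (a sketch; here $g(n)$ counts unordered pairs $p \le q$, and conventions vary):

```python
def primes_upto(n):
    """Sieve of Eratosthenes: sieve[i] is True iff i is prime."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sieve

def g(n):
    """Number of ways to write the even number n as p + q with primes p <= q."""
    sieve = primes_upto(n)
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])

print([g(n) for n in range(4, 22, 2)])  # [1, 1, 1, 2, 1, 2, 2, 2, 2]
```

Plotting $g(n)$ against $n$ for larger ranges produces the scatter known as Goldbach's comet.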
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
What is the name of the logical puzzle, where one always lies and another always tells the truth? So i was solving exercises in propositional logic lately and stumbled upon a puzzle, that goes like this:
Each inhabitant of a remote village always tells the truth or always lies. A villager will only give a "Yes" or a "No" response to a question a tourist asks. Suppose you are a tourist visiting this area and come to a fork in the road.
One branch leads to the ruins you want to visit; the other branch leads deep into the jungle. A villager is standing at the fork in the road. What one question can you ask the villager to determine which branch to take?
I intuitively guessed the answer is "If I asked you whether this path leads to the ruins, would you say yes?". So my questions are:
*
*What is the name and/or source of this logical riddle?
*How can I corroborate my answer with mathematical rigor?
| Knights and Knaves?
How: read about it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/179968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Surfaces of constant projected area Generalizing the well-known variety of plane curves of constant width, I'm wondering about three-dimensional surfaces of constant projected area.
Question: If $A$ is a (bounded) subset of $\mathbb R^3$, homeomorphic to a closed ball, such that the orthogonal projection of $A$ onto a plane has the same area for all planes, is $A$ necessarily a sphere? If not, what are some other possibilities?
Wikipedia mentions a concept of convex shapes with constant width, but that's different.
(Inspired by the discussion about spherical cows in comments to this answer -- my question is seeking to understand whether there are other shapes of cows that would work just as well).
| These are called bodies of constant brightness. A convex body that has both constant width and constant brightness is a Euclidean ball. But non-spherical convex bodies of constant brightness do exist; the first was found by Blaschke in 1916. See: Google and related MSE thread.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Approximating $\pi$ with least digits Do you know a digit-efficient way to approximate $\pi$? I mean representing many digits of $\pi$ using only a few numeric digits and some sort of equation. Maybe mathematical operations should also count as a penalty.
For example, the well-known $\frac{355}{113}$ is an approximation, but it gives only 7 correct digits while using 6 digits (113355) in the approximation itself. Can you achieve a better digit ratio?
EDIT: to clarify the "game", let's assume that each mathematical operation (+, sqrt, power, ...) also counts as one digit. Otherwise one could of course build artificial, infinitely nested structures of operations only. And preferably let's stick to basic arithmetic and powers/roots only.
EDIT: True, the logarithm of imaginary numbers provides an easy way; but let's not use complex numbers, since basic arithmetic is what I had in mind, something you can present to non-mathematicians :)
| Let me throw in Clive's suggestion to look at the Wikipedia page. If we allow logarithms (while not using complex numbers), we can get 30 digits of $\pi$ with
$\frac{\ln(640320^3+744)}{\sqrt{163}}$
which is 13 digits and 5 operations, giving a ratio of about 18/30 = 0.6.
EDIT: Here is another one I found on this site:
$\ln(31.8\ln(2)+\ln(3))$
gives 11 digits of $\pi$ using 5 numbers and 4 operations.
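The 30-digit claim for the first formula can be checked with Python's `decimal` module (a sketch; the hard-coded reference digits of $\pi$ are standard):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits
approx = Decimal(640320**3 + 744).ln() / Decimal(163).sqrt()

PI_REF = "3.141592653589793238462643383279502884197169399375"
print(str(approx)[:32])                 # 3.141592653589793238462643383279
print(str(approx)[:32] == PI_REF[:32])  # True: the first 30 decimals agree
```

The near-miss here is the famous $e^{\pi\sqrt{163}} \approx 640320^3 + 744$ coming from the Heegner number 163.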
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 1
} |
Alternating sum of squares of binomial coefficients I know that the sum of squares of binomial coefficients is just ${2n}\choose{n}$ but what is the closed expression for the sum ${n\choose 0}^2 - {n\choose 1}^2 + {n\choose 2}^2 + \cdots + (-1)^n {n\choose n}^2$?
| Here's a combinatorial proof.
Since $\binom{n}{k} = \binom{n}{n-k}$, we can rewrite the sum as $\sum_{k=0}^n \binom{n}{k} \binom{n}{n-k} (-1)^k$. Then $\binom{n}{k} \binom{n}{n-k}$ can be thought of as counting ordered pairs $(A,B)$, each of which is a subset of $\{1, 2, \ldots, n\}$, such that $|A| = k$ and $|B| = n-k$. The sum, then, is taken over all such pairs such that $|A| + |B| = n$.
Given $(A,B)$, let $x$ denote the largest element in the symmetric difference $A \oplus B = (A - B) \cup (B - A)$ (assuming that such an element exists). In other words, $x$ is the largest element that is in exactly one of the two sets. Then define $\phi$ to be the mapping that moves $x$ to the other set. The pairs $(A,B)$ and $\phi(A,B)$ have different signs, and $\phi(\phi(A,B)) = (A,B)$, so $(A,B)$ and $\phi(A,B)$ cancel each other out in the sum. (The function $\phi$ is what is known as a sign-reversing involution.)
So the value of the sum is determined by the number of pairs $(A,B)$ that do not cancel out. These are precisely those for which $\phi$ is not defined; in other words, those for which there is no largest $x$. But there can be no largest $x$ only in the case $A=B$. If $n$ is odd, then the requirement $\left|A\right| + \left|B\right| = n$ means that we cannot have $A=B$, so in the odd case the sum is $0$. If $n$ is even, then the number of pairs is just the number of subsets of $\{1, 2, \ldots, n\}$ of size $n/2$; i.e., $\binom{n}{n/2}$, and the parity is determined by whether $|A| = n/2$ is odd or even.
Thus we get $$\sum_{k=0}^n \binom{n}{k}^2 (-1)^k = \begin{cases} (-1)^{n/2} \binom{n}{n/2}, & n \text{ is even}; \\ 0, & n \text{ is odd}.\end{cases}$$
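As a quick numerical sanity check (not part of the proof), the identity can be verified with Python's `math.comb`:

```python
from math import comb

def alternating_square_sum(n):
    # sum_{k=0}^{n} (-1)^k * C(n, k)^2, computed term by term
    return sum((-1) ** k * comb(n, k) ** 2 for k in range(n + 1))

def closed_form(n):
    # (-1)^(n/2) * C(n, n/2) for even n, and 0 for odd n
    return (-1) ** (n // 2) * comb(n, n // 2) if n % 2 == 0 else 0

# the two expressions agree for small n
assert all(alternating_square_sum(n) == closed_form(n) for n in range(20))
```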
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 3,
"answer_id": 2
} |
Periodic solution of differential equation Consider the ODE $-y''(x)+f(x)y(x)=0$.
If the function $f$ is periodic, i.e. $f(x+T)=f(x)$, does that mean the ODE has only periodic solutions?
If all the solutions are periodic, can they all be determined by Fourier series?
| No, it doesn't mean that. For instance, $f(x)=0$ is periodic with any period, but $y''(x)=0$ has non-periodic solutions $y(x)=ax+b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Transforming an inhomogeneous Markov chain to a homogeneous one I fail to understand Cinlar's transformation of an inhomogeneous Markov chain to a homogeneous one. It appears to me that $\hat{P}$ is not fully specified. Generally speaking, given a $\sigma$-algebra $\mathcal A$, a measure can be specified either explicitly over the entire $\sigma$-algebra, or implicitly by specifying it over a generating ring and appealing to Caratheodory's extension theorem. However, Cinlar specifies $\hat{P}$ over a proper subset of $\hat{\mathcal{E}}$ that is not a ring.
| We are given that $\widehat P$ is a Markov kernel, and we have
$$\widehat P((n,x),\{n+1\}\times E)=P_{n+1}(x,E)=1,$$
hence the measure $\widehat P((n,x),\cdot)$ is concentrated on $\{n+1\}\times E$. Therefore, we have $\widehat P((n,x),I\times A)=0$ for any $A\subset E$ and $I\subset \Bbb N$ which doesn't contain the integer $n+1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving linear system of equations when one variable cancels I have the following linear system of equations with two unknown variables $x$ and $y$. There are two equations and two unknowns. However, when the second equation is solved for $y$ and substituted into the first equation, the $x$ cancels. Is there a way of re-writing this system or re-writing the problem so that I can solve for $x$ and $y$ using linear algebra or another type of numerical method?
$2.6513 = \frac{3}{2}y + \frac{x}{2}$
$1.7675 = y + \frac{x}{3}$
In the two equations above, $x=3$ and $y=0.7675$, but I want to solve for $x$ and $y$, given the system above.
If I subtract the second equation from the first, then:
$2.6513 - 1.7675 = \frac{3}{2}y - y + \frac{x}{2} - \frac{x}{3}$
Can the equation in this alternate form be useful in solving for $x$ and $y$? Is there another procedure that I can use?
In this alternate form, would it be possible to limit $x$ and $y$ in some way so that a solution for $x$ and $y$ can be found by numerical optimization?
| $$\begin{equation*}
\left\{
\begin{array}{c}
2.6513=\frac{3}{2}y+\frac{x}{2} \\
1.7675=y+\frac{x}{3}
\end{array}
\right.
\end{equation*}$$
If we multiply the first equation by $2$ and the second by $3$ we get
$$\begin{equation*}
\left\{
\begin{array}{c}
5.3026=3y+x \\
5.3025=3y+x
\end{array}
\right.
\end{equation*}$$
This system has no solution because
$$\begin{equation*}
5.3026\neq 5.3025
\end{equation*}$$
However if the number $2.6513$ resulted from rounding $2.65125$, then the
same computation yields
$$\begin{equation*}
\left\{
\begin{array}{c}
5.3025=3y+x \\
5.3025=3y+x
\end{array}
\right.
\end{equation*}$$
which is satisfied by all $x,y$.
A system of the form
$$\begin{equation*}
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=
\begin{pmatrix}
b_{1} \\
b_{2}
\end{pmatrix}
\end{equation*}$$
has the solution (Cramer's rule)
$$\begin{equation*}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=\frac{1}{\det
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
}
\begin{pmatrix}
a_{22}b_{1}-a_{12}b_{2} \\
a_{11}b_{2}-a_{21}b_{1}
\end{pmatrix}
=
\begin{pmatrix}
\frac{a_{22}b_{1}-a_{12}b_{2}}{a_{11}a_{22}-a_{21}a_{12}} \\
\frac{a_{11}b_{2}-a_{21}b_{1}}{a_{11}a_{22}-a_{21}a_{12}}
\end{pmatrix}
\end{equation*}$$
if $\det
\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}
\neq 0$.
In the present case, we have
$$\begin{equation*}
\begin{pmatrix}
\frac{1}{2} & \frac{3}{2} \\
\frac{1}{3} & 1
\end{pmatrix}
\begin{pmatrix}
x \\
y
\end{pmatrix}
=
\begin{pmatrix}
2.6513 \\
1.7675
\end{pmatrix}
\end{equation*}$$
and $$\det
\begin{pmatrix}
\frac{1}{2} & \frac{3}{2} \\
\frac{1}{3} & 1
\end{pmatrix}
=0$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Calculate Rotation Matrix to align Vector $A$ to Vector $B$ in $3D$? I have one triangle in $3D$ space that I am tracking in a simulation. Between time steps I have the the previous normal of the triangle and the current normal of the triangle along with both the current and previous $3D$ vertex positions of the triangles.
Using the normals of the triangular plane I would like to determine a rotation matrix that would align the normals of the triangles thereby setting the two triangles parallel to each other. I would then like to use a translation matrix to map the previous onto the current, however this is not my main concern right now.
I have found this website http://forums.cgsociety.org/archive/index.php/t-741227.html
that says I must
*
*determine the cross product of these two vectors (to determine a rotation axis)
*determine the dot product ( to find rotation angle)
*build quaternion (not sure what this means)
*the transformation matrix is the quaternion as a $3 \times 3$ (not sure)
Any help on how I can solve this problem would be appreciated.
| General solution for n dimensions in matlab / octave:
%% Build input data
n = 4;
a = randn(n,1);
b = randn(n,1);
%% Compute Q = rotation matrix
A = a*b';
[V,D] = eig(A'+A);
[~,idx] = min(diag(D));
v = V(:,idx);
Q = eye(n) - 2*(v*v');
%% Validate Q is correct
b_hat = Q'*a*norm(b)/norm(a);
disp(['norm of error = ' num2str(norm(b_hat-b))])
disp(['eigenvalues of Q = ' num2str(eig(Q)')])
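For the 3-D case the question actually asks about, the cross-product/dot-product recipe from the question can be sketched in pure Python via Rodrigues' rotation formula. This is only an illustrative sketch, not part of the original answer: the function names are mine, and the degenerate anti-parallel case (where the rotation axis is not unique) is deliberately left unhandled.

```python
import math

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalize(u):
    n = math.sqrt(dot(u, u))
    return [ui / n for ui in u]

def rotation_between(a, b):
    """Rotation matrix R with R (a/|a|) = b/|b|, via Rodrigues' formula.

    Assumes a and b are nonzero and not anti-parallel (else 1 + c == 0)."""
    a, b = normalize(a), normalize(b)
    v = cross(a, b)                  # rotation axis (unnormalized), |v| = sin(angle)
    c = dot(a, b)                    # cos(angle)
    K = [[0.0, -v[2], v[1]],
         [v[2], 0.0, -v[0]],
         [-v[1], v[0], 0.0]]         # skew matrix: K applied to x gives v cross x
    K2 = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    f = 1.0 / (1.0 + c)              # equals (1 - c) / sin^2(angle)
    return [[(1.0 if i == j else 0.0) + K[i][j] + f * K2[i][j]
             for j in range(3)] for i in range(3)]

# sanity check: rotating the x-axis onto the y-axis gives a 90-degree z-rotation
R = rotation_between([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
assert all(abs(R[i][j] - [[0, -1, 0], [1, 0, 0], [0, 0, 1]][i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

Here $R = I + K + \frac{1-c}{s^2}K^2$, which avoids ever computing the angle or building a quaternion explicitly.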
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "181",
"answer_count": 19,
"answer_id": 11
} |
Logical question problem A boy is half as old as the girl will be when the boy’s age is twice the sum of their ages when the boy was the girl’s age.
How many times older than the girl is the boy at their present age?
This is a logic word problem.
| If $x$ is the boy's age and $y$ is the girl's age, then when the boy was the girl's current age, her age was $2y-x$. So "twice the sum of their ages when the boy was the girl's age" is $2(3y-x)=6y-2x$. The boy will reach this age after a further $6y-3x$ years, at which point the girl will be $7y-3x$. We are told that $x$ is half of this; so $2x=7y-3x$, which means that $x=\frac{7}{5}y$.
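The chain of deductions is easy to mis-parse, so here is a throwaway Python check of the final answer $x=\tfrac{7}{5}y$ against the puzzle's wording (the helper name is mine):

```python
def satisfies_puzzle(x, y):
    """Check the puzzle's condition for boy's age x and girl's age y."""
    girl_then = 2 * y - x              # girl's age when the boy was y (x - y years ago)
    target = 2 * (y + girl_then)       # twice the sum of their ages back then
    girl_future = y + (target - x)     # girl's age when the boy turns `target`
    return 2 * x == girl_future        # "the boy is half as old as the girl will be"

# x = (7/5) y works for every y divisible by 5; nearby values fail
assert satisfies_puzzle(7, 5) and satisfies_puzzle(14, 10)
assert not satisfies_puzzle(6, 5)
```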
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Easy way to find roots of the form $qi$ of a polynomial Let $p$ be a polynomial over $\mathbb{Z}$; we know that there is an easy way to check if $p$ has rational roots (using the rational root theorem).
Is there an easy way to check if $p$ has any roots of the form $qi$, where $q\in\mathbb{Q}$ (or at least $q\in\mathbb{Z}$) and $i\in\mathbb{C}$ is the imaginary unit?
| Hint $\ f(q\,i) = a_0\! -\! a_2 q^2\! +\! a_4 q^4\! +\cdots + i\,q\,(a_1\! -\! a_3 q^2\! +\! a_5 q^4\! +\! \cdots) = g(q) + i\,q\,h(q)$
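Concretely, in the hint's notation $f(qi)=0$ iff $g(q)=0$ and $q\,h(q)=0$, so candidate rational $q$ can be screened with ordinary real-root tests (e.g. the rational root theorem applied to $g$). A small sketch (the function name is mine; it simply scans a supplied candidate list):

```python
def imaginary_roots(coeffs, candidates):
    """Return the q in `candidates` with f(q*i) = 0, where
    coeffs = [a0, a1, ..., an] are the integer coefficients of f.

    Uses f(qi) = g(q) + i*q*h(q) with
        g(q) = a0 - a2 q^2 + a4 q^4 - ...
        h(q) = a1 - a3 q^2 + a5 q^4 - ...
    """
    g = lambda q: sum(a * (-1) ** (k // 2) * q ** k
                      for k, a in enumerate(coeffs) if k % 2 == 0)
    h = lambda q: sum(a * (-1) ** (k // 2) * q ** (k - 1)
                      for k, a in enumerate(coeffs) if k % 2 == 1)
    return [q for q in candidates if g(q) == 0 and q * h(q) == 0]

# x^2 + 4 has roots +-2i; x^2 + x + 1 has no roots of the form qi
assert imaginary_roots([4, 0, 1], range(-5, 6)) == [-2, 2]
assert imaginary_roots([1, 1, 1], range(-5, 6)) == []
```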
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Compute $\int \frac{\sin(x)}{\sin(x)+\cos(x)}\mathrm dx$ I'm having trouble computing the integral:
$$\int \frac{\sin(x)}{\sin(x)+\cos(x)}\mathrm dx.$$
I hope that it can be expressed in terms of elementary functions. I've tried simple substitutions such as $u=\sin(x)$ and $u=\cos(x)$, but it was not very effective.
Any suggestions are welcome. Thanks.
| Hint: $\sqrt{2}\sin(x+\pi/4)=\sin x +\cos x$, then substitute $x+\pi/4=z$
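Following the hint through (with $z=x+\pi/4$) leads to the antiderivative $\tfrac12\bigl(x-\ln\lvert\sin x+\cos x\rvert\bigr)+C$. Here is a crude numerical sanity check, via central differences in pure Python, that its derivative matches the integrand (the check is mine, not part of the hint):

```python
import math

def F(x):
    # candidate antiderivative obtained by following the hint
    return 0.5 * (x - math.log(abs(math.sin(x) + math.cos(x))))

def integrand(x):
    return math.sin(x) / (math.sin(x) + math.cos(x))

# central-difference check of F'(x) == integrand(x), away from sin x + cos x = 0
h = 1e-6
for x in [0.1, 0.5, 1.0, 1.4]:
    numeric = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(numeric - integrand(x)) < 1e-6
```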
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88",
"answer_count": 8,
"answer_id": 0
} |
Comparing speed in stochastic processes generated from simulation? I have an agent-based simulation that generates a time series in its output for my different treatments. I am measuring performance through time, and at each time tick the performance is the mean of 30 runs (30 samples). In all of the treatments the performance starts from near 0 and ends at 100%, but at different speeds. I was wondering if there is any stochastic model or probabilistic way to compare the speed or growth of these time series. I want to find out which one "significantly" grows faster.
Thanks.
| Assuming you're using a pre-canned application, then there will be an underlying distribution generating your time series. I would look in the help file of the application to find this distribution.
Once you know the underlying distribution, then "significance" is determined the usual way, namely pick a confidence level, and test for the difference between two random variables.
The wrinkle is to do with the time element. If your "treatments" are the result of an accumulation of positive outcomes, then each time series is effectively a sum of random variables, and each term in that sum is itself the result of a mean of 30 samples from the underlying distribution.
So, although the formulation is different, the treatment of significance is the same.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the complex number $z=a+bi$ equivalent to the matrix form $\left(\begin{smallmatrix}a &-b\\b&a\end{smallmatrix}\right)$
Possible Duplicate:
Relation of this antisymmetric matrix $r = \!\left(\begin{smallmatrix}0 &1\\-1 & 0\end{smallmatrix}\right)$ to $i$
On Wikipedia, it says that:
Matrix representation of complex numbers
Complex numbers $z=a+ib$ can also be represented by $2\times2$ matrices that have the following form: $$\pmatrix{a&-b\\b&a}$$
I don't understand why they can be represented by these matrices or where these matrices come from.
| Since you put the tag quaternions, let me say a bit more about performing identifications like that:
Recall that the quaternion group $\mathcal{Q}$ is the group consisting of elements $\{\pm1, \pm \hat{i}, \pm \hat{j}, \pm \hat{k}\}$, equipped with a multiplication that satisfies the rules according to the diagram
$$\hat{i} \rightarrow \hat{j} \rightarrow \hat{k}.$$
Now what is more interesting is that you can let $\mathcal{Q}$ become a four dimensional real vector space with basis $\{1,\hat{i},\hat{j},\hat{k}\}$ equipped with an $\Bbb{R}$ - bilinear multiplication map that satisfies the rules above. You can also define the norm of a quaternion $a + b\hat{i} + c\hat{j} + d\hat{k}$ as
$$||a + b\hat{i} + c\hat{j} + d\hat{k}|| = a^2 + b^2 + c^2 + d^2.$$
Now if you consider $\mathcal{Q}^{\times}$, the set of all unit quaternions you can identify $\mathcal{Q}^{\times}$ with $\textrm{SU}(2)$ as a group and as a topological space. How do we do this identification? Well it's not very hard. Recall that
$$\textrm{SU}(2) = \left\{ \left(\begin{array}{cc} a + bi & -c + di \\ c + di & a-bi \end{array}\right) |\hspace{3mm} a,b,c,d \in \Bbb{R}, \hspace{3mm} a^2 + b^2 + c^2 + d^2 = 1 \right\}.$$
So you now make an Ansatz (German for an educated guess) that the identification we are going to make is via the map $f$ that sends a quaternion $a + b\hat{i} + c\hat{j} + d\hat{k}$ to the matrix $$\left(\begin{array}{cc} a + bi & -c + di \\ c + di & a-bi \end{array}\right).$$
It is easy to see that $f$ is a well-defined group isomorphism by an algebra bash and it is also clear that $f$ is a homeomorphism. In summary, the point I wish to make is that these identifications give us a useful way to interpret things. For example, instead of interpreting $\textrm{SU}(2)$ as boring old matrices that you say "meh" to, you now have a geometric understanding of what $\textrm{SU}(2)$ is. You can think about each matrix as being a point on the sphere $S^3$ in 4-space! How rad is that?
On the other hand when you say $\Bbb{R}^4$ has now basis elements consisting of $\{1,\hat{i},\hat{j},\hat{k}\}$, you have given $\Bbb{R}^4$ a multiplication structure and it becomes not just an $\Bbb{R}$ - module but a module over itself.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 8,
"answer_id": 5
} |
Is failing to admit an axiom equivalent to proof when the axiom is false? Often, mathematicians wish to develop proofs without admitting certain axioms (e.g. the axiom of choice).
If a statement can be proven without admitting that axiom, does that mean the statement is also true when the axiom is considered to be false?
I have tried to construct a counter-example, but in every instance I can conceive, the counter-example depends on a definition which necessarily admits an axiom. I feel like the answer to my question is obvious, but maybe I am just out of practice.
| Yes. Let the axiom be P. The proof that didn't make use of P followed all the rules of logic, so it still holds when you adjoin $\neg P$ to the list of axioms. (It could also happen that the other axioms sufficed to prove P, in which case the system that included $\neg P$ would be inconsistent. In an inconsistent theory, every proposition can be proved, so the thing you originally proved is still true, although vacuously. The case where the other axioms prove $\neg P$ is also OK, obviously.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Equivalence of norms on the space of smooth functions Let $E, F$ be Banach spaces, $A$ be an open set in $E$, and $C^2(A,F)$ be the space of all functions $f:A\to F$ which are twice continuously differentiable and bounded together with all their derivatives. The question is when the following two norms on $C^2(A,F)$ are equivalent:
$$
\|f\|_{1}=\sup_{x\in A}\sum^2_{k=0}\|f^{(k)}(x)\|, \ \|f\|_{2}=\sup_{x\in A}(\|f(x)\|+\|f^{(2)}(x)\|).
$$
In the case $A=E$ they are equivalent. One can prove it in the following way. To bound $\|f^{(1)}(x)h\|$ consider a line $g(t)=f(x+th)$ through $x$ in the direction $h$ and use inequality
$$
\sup_{t\in \mathbb{R}}\|g^{(1)}(t)\|\leq\sqrt{2\sup_{t\in \mathbb{R}}\|g(t)\|\sup_{t\in \mathbb{R}}\|g^{(2)}(t)\|}.
$$
The case when $A$ is an open ball is unknown to me. Of course, one can try to consider not lines but segments. But the problem is that the length of the segment can't be bounded from below, and the inequalities I know can't be applied.
| We can reduce to the case $F=\mathbb R$ by considering all compositions $\varphi\circ f$ with $\varphi$ ranging over unit-norm functionals on $F$.
Let $A$ be the open unit ball in $E$. Given $x\in A$ and direction $v\in E$ (a unit vector), we would like to estimate the directional derivative $f_v(x)$ in terms of $M=\sup_A(\|f\|, \|f\,''\|)$. Instead of restricting to a line, let us restrict $f$ to a 2-dimensional subspace $P$ that contains the line (and the origin). The advantage is that $D:=A\cap P$ is a 2-dimensional unit disk: the size of the section does not depend on $x$ or $v$.
The directional derivative $f_v:D\to \mathbb R$ is itself differentiable, and using the 2nd derivative bound we conclude that the oscillation of $f_v$ on $A$ (namely, $\sup_A f_v-\inf_A f_v$) is bounded by $2M$. Suppose $\sup_A f_v>5M$. Then $f_v>3M$ everywhere in $A$. Integrating this inequality along the line through the origin in direction $v$, we find that the oscillation of $f$ on this line is at least $6M$, a contradiction. Therefore, $\sup_A f_v\le 5M$. Same argument shows $\inf_A f_v\ge -5M$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/180980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Worst case analysis of MAX-HEAPIFY procedure. From the CLRS book, for the MAX-HEAPIFY procedure:
The children's subtrees each have size at most 2n/3 - the worst case
occurs when the last row of the tree is exactly half full
I fail to see the intuition for this worst-case scenario. Can someone explain, possibly with a diagram? Thanks.
P.S.: I know Big O notation and also found this answer here, but I still have doubts.
| Start with a heap $H$ with $n$ levels with all levels full. That's $2^{i - 1}$ nodes for each level $i$ for a total of $$|H| = 2^n - 1$$ nodes in the heap. Let $L$ denote the left sub-heap of the root and $R$ denote the right sub-heap. $L$ has a total of $$|L| = 2^{n - 1} - 1$$ nodes, as does $R$. Since a binary heap is a complete binary tree, then new nodes must be added such that after heapification, nodes fill up the last level from left to right. So, let's add nodes to $L$ so that a new level is filled and let's denote this modified sub-heap as $L'$ and the modified whole heap as $H'$. This addition will require $2^{n - 1}$ nodes, bringing the total number of nodes in $L'$ to $$ |L'| = (2^{n - 1} - 1) + 2^{n - 1} = 2\cdot 2^{n - 1} - 1$$ and the total number of nodes in the entire heap to $$ |H'| = (2^{n} - 1) + 2^{n - 1} = 2^n + 2^{n - 1} - 1 $$
The amount of space $L'$ takes up out of the whole heap $H'$ is given by
$$ \frac{|L'|}{|H'|} = \frac{2\cdot 2^{n-1} - 1}{2^n + 2^{n - 1} - 1} =
\frac{2\cdot 2^{n-1} - 1}{2\cdot 2^{n - 1} + 2^{n - 1} - 1} =
\frac{2\cdot 2^{n-1} - 1}{3\cdot 2^{n - 1} - 1} $$
Taking the limit as $n \to \infty$, we get:
$$ \lim_{n\to\infty} { \frac{|L'|}{|H'|} } = \lim_{n\to\infty} { \frac{2\cdot 2^{n-1} - 1}{3\cdot 2^{n - 1} - 1} } = \frac{2}{3} $$
Long story short, $L'$ and $R$ make up effectively the entire heap. $|L'|$ has twice as many elements as $R$ so it makes up $\frac{2}{3}$ of the heap while $R$ makes up the other $\frac{1}{3}$.
This $\frac{2}{3}$ of the heap corresponds to having the last level of the heap half full from left to right. This is the most the heap can get imbalanced; adding another node will either begin to rebalance the heap (by filling out the other, right, half of the last level) or break the heap's shape property of being a complete binary tree.
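The exact ratio $|L'|/|H'|$ can be tabulated for finite $n$, confirming numerically that it increases toward, but never reaches, the limit $2/3$ (a quick companion check using the formulas derived above):

```python
def left_fraction(n):
    # |L'| / |H'| when the n-level full heap gains a half-full extra level
    L_prime = 2 * 2 ** (n - 1) - 1          # nodes in the enlarged left sub-heap
    H_prime = 3 * 2 ** (n - 1) - 1          # nodes in the whole heap
    return L_prime / H_prime

# the ratio increases toward 2/3 but never exceeds it
fractions = [left_fraction(n) for n in range(2, 21)]
assert all(f < 2 / 3 for f in fractions)
assert abs(fractions[-1] - 2 / 3) < 1e-5
```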
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 3
} |
Singular-value inequalities This is my question: Is the following statement true?
Let $H$ be a real or complex Hilbertspace and $R,S:H \to H$ compact operators.
For every $n\in\mathbb{N}$ the following inequality holds:
$$\sum_{j=1}^n s_j(RS) \leq \sum_{j=1}^n s_j(R)s_j(S)$$
Note: $s_j(R)$ denotes the $j$-th singular value of the operator $R$.
The sequence of singular values decreases monotonically to zero.
With best regards,
mat
Edit: I found out that the statement is true for products instead of sums. By that I mean:
Let $H$ be a $\mathbb{K}$-Hilbertspace and $R,S: H \to H$ compact operators.
For every $n\in\mathbb{N}$ we have:
$$\prod_{j=1}^n s_j(RS) \leq \prod_{j=1}^n s_j(R)s_j(S)$$
Is it possible to derive the statement for sums from this?
| The statement is true. It is a special case of a result by Horn (On the singular values of a product of completely continuous operators, Proc. Nat. Acad. Sci. USA 36 (1950) 374-375).
The result is the following. Let $f:[0,\infty)\rightarrow \mathbb{R}$ with $f(0)=0$. If $f$ becomes convex following the substitution $x=e^t$ with $-\infty\leq t<\infty$, then for any linear completely continuous operators $R$ and $S$, $$\sum_{j=1}^n f(s_j(RS))\leq \sum_{j=1}^n f(s_j(R)s_j(S)).$$
The function $f(x)=x$ falls in the scope of the theorem and the proof follows from the theorem you stated about products.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Are there diagonalisable endomorphisms which are not unitarily diagonalisable? I know that normal endomorphisms are unitarily diagonalisable. Now I'm wondering, are there any diagonalisable endomorphisms which are not unitarily diagonalisable?
If so, could you provide an example?
| Another way to look at it, though not really different in essence, is to consider the operator norm on ${\rm M}_{n}(\mathbb{C})$ induced by the Euclidean norm on $\mathbb{C}^{n}$ (thought of as column vectors). Hence $\|A \| = {\rm max}_{ v : \|v \| = 1} \|Av \|.$ Since the unitary transformations are precisely the isometries of $\mathbb{C}^{n},$ we see that conjugation by a unitary matrix does not change the norm of a matrix. If $A$ can be diagonalized by a unitary matrix, then it is clear from this discussion that $\| A \| = {\rm max}_{\lambda} |\lambda |,$ as $\lambda$ runs over the eigenvalues of $A$. Hence as soon as we find a diagonalizable matrix $B$ with $\| B \| \neq {\rm max}_{\lambda} |\lambda|,$ we know that $B$ is not diagonalizable by a unitary matrix. For example, the matrix $$B = \left( \begin{array}{clcr} 3 & 5\\0 & 2 \end{array} \right)$$ has largest eigenvalue $3,$ but $\|B \| > 5$ because
$B \left( \begin{array}{cc} 0 \\1 \end{array} \right) = \left( \begin{array}{cc} 5 \\2 \end{array} \right).$ Also, $B$ is diagonalizable, but we now know that can't be achieved via a unitary matrix.
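The numbers in this example can be checked in pure Python: for a $2\times2$ real matrix, the operator norm is $\sqrt{\lambda_{\max}(B^{T}B)}$, and the eigenvalues of the $2\times2$ Gram matrix come straight from the quadratic formula (a quick numeric companion, not part of the argument):

```python
import math

B = [[3, 5], [0, 2]]

# Gram matrix G = B^T B (symmetric, 2 x 2)
G = [[B[0][0] ** 2 + B[1][0] ** 2, B[0][0] * B[0][1] + B[1][0] * B[1][1]],
     [B[0][0] * B[0][1] + B[1][0] * B[1][1], B[0][1] ** 2 + B[1][1] ** 2]]

# eigenvalues of a symmetric 2x2 matrix from its trace and determinant
tr = G[0][0] + G[1][1]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2
op_norm = math.sqrt(lam_max)        # operator norm of B, about 6.085

spectral_radius = 3                 # eigenvalues of B are 3 and 2
assert op_norm > 5 > spectral_radius
```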
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is it true that $H^1(X,\mathcal{K}_{x_1,x_2})=0$? - The cohomology of a complex curve with coefficients in the sheaf of meromorphic functions... Let $X$ be a complex curve (a complex manifold with $\dim X=1$).
For $x_1,x_2\in X$ we define the sheaf $\mathcal{K}_{x_1,x_2}$ (in the complex topology) of meromorphic functions vanishing at the points $x_1$ and $x_2$.
Is it true that $H^1(X,\mathcal{K}_{x_1,x_2})=0$?
In general, what are sufficient conditions on a sheaf $\mathcal{F}$ for $H^1(X,\mathcal{F})=0$ when $X$ is a curve?
| The answer is yes: for a non-compact Riemann surface, $H^1(X, \mathcal K_{x_1,x_2})=0$.
The key is the exact sequence of sheaves on $X$:$$0\to \mathcal K_{x_1,x_2} \to \mathcal K \xrightarrow {truncate } \mathcal Q_1\oplus \mathcal Q_2\to 0$$ where $\mathcal Q_i$ is the sky-scraper sheaf at $x_i$ with fiber the Laurent tails (locally of the form $\sum_{j=0}^Na_jz^{-j}$).
Taking cohomology we get a long exact sequence $$\cdots \mathcal K(X) \xrightarrow {\text {truncate}} \mathcal Q_1(X) \oplus \mathcal Q_2(X)\to H^1(X, \mathcal K_{x_1,x_2})\to H^1(X, \mathcal K) \to \cdots $$
The vanishing of the cohomology group $H^1(X, \mathcal K_{x_1,x_2})$ then follows from the two facts:
1) $H^1(X, \mathcal K)=0$
2) The morphism $ \mathcal K(X) \xrightarrow {\text {truncate}} \mathcal Q_1(X) \oplus \mathcal Q_2(X)$ is surjective because of the solvability of the Mittag-Leffler problem on a non-compact Riemann surface.
For a compact Riemann surface of genus $\geq1$ the relevant Mittag-Leffler problem is not always solvable, so that we have $H^1(X, \mathcal K_{x_1,x_2})\neq 0$ (however for the Riemann sphere $H^1(\mathbb P^1, \mathcal K_{x_1,x_2})=0$)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
For a Turing machine and input $w$- Is "Does M stop or visit the same configuration twice" a decidable question? I have the following question out of an old exam that I'm solving:
Input: a Turing machine and input w
Question: When running $M$ on $w$, does at least one of the following happen:
-M halts on w
-M visits the same configuration at least twice
First I thought that it's clearly in $RE$ (it is a recursively enumerable problem), since we can simulate the run of $M$ on $w$ and list the configurations, or wait until it stops,
but then I thought to myself: "If it visits the same configuration twice, it must be in an infinite loop", because, as I understand it, once it reaches the same configuration it will repeat the same transitions over and over again. So the problem might be in $R$, i.e. decidable, since it's the same question as "It halts on $w$ or it doesn't"?
What do you think? Thank you!
| We can modify a Turing machine $T$ by replacing every computation step of $T$ by a procedure in which we go to the left end of the (used portion of) the tape, and one more step left, print a special symbol $U$, and then hustle back to do the intended step. So in the modified machine, a configuration never repeats. Thus if we can solve your problem, we can solve the Halting Problem. The conclusion is that your problem is not decidable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181405",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A field without a canonical square root of $-1$ The following is a question I've been pondering for a while. I was reminded of it by a recent dicussion on the question How to tell $i$ from $-i$?
Can you find a field that is abstractly isomorphic to $\mathbb{C}$, but that does not have a canonical choice of square root of $-1$?
I mean canonical in the following sense: if you were to hand your field to one thousand mathematicians with the instructions "Pick out the most obvious square root of -1 in this field. Your goal is to make the same choice as most other mathematicians," there should be be a fairly even division of answers.
| You can model the complex numbers by linear combinations of the $2\times 2$ unit matrix $\mathbb{I}$ and a real $2\times 2$ skew-symmetric matrix with square $-\mathbb I$, of which there are two, $\begin{pmatrix}0 & -1\\1 & 0\end{pmatrix}$ and $\begin{pmatrix}0 & 1\\-1 & 0\end{pmatrix}$. I see no obvious reason to prefer one over the other.
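Neither choice of $J$ is privileged, but either gives a faithful representation; a quick pure-Python check (helper names are mine) that the map $a+bi\mapsto aI+bJ$ turns complex multiplication into matrix multiplication for both signs:

```python
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rep(z, J):
    """Represent a + bi as a*I + b*J for a chosen square root J of -I."""
    a, b = z.real, z.imag
    return [[a * (i == j) + b * J[i][j] for j in range(2)] for i in range(2)]

for J in ([[0, -1], [1, 0]], [[0, 1], [-1, 0]]):
    assert mat_mul(J, J) == [[-1, 0], [0, -1]]      # J^2 = -I
    z, w = 2 + 3j, -1 + 4j
    assert rep(z * w, J) == mat_mul(rep(z, J), rep(w, J))
```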
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 8,
"answer_id": 5
} |
Prove that given any $2$ vertices $v_0,v_1$ of Graph $G$ that is a club, there is a path of length at most $2$ starting in $v_0$ and ending in $v_1$ Definition of a club: Let $G$ be a graph with $n$ vertices where $n > 2$. We call the graph $G$ a club if for all pairs of distinct vertices $u$ and $v$ not connected by an edge, we have $\deg(u)+\deg(v)\ge n$.
Ref: Khoussainov,B. , Khoussainova,N. (2012), Lectures on Discrete Mathematics for Computers Science, World Scientific Pg.(83)
My strategy was to prove this using the proof by cases method.
Case 1: Prove for the case when $v_0$ and $v_1$ are connected by an edge, in which case there is clearly a path from $v_0$ to $v_1$, of length $1$, so this case is proven.
Case 2: Prove for the case when $v_0$ and $v_1$ are not connected by an edge.
Now, for this case, since the hypothesis is assumed to be true and Graph $G$ is a club,
$$\deg(v_0) + \deg(v_1)\ge n\;.$$
That's all I've got so far, and I'm not sure how to proceed.
| The statement is not true. Consider a path of length 4, where $v_0,v_1$ are the endpoints of the path. There is no path of length at most two from $v_0$ to $v_1$, and the graph is a club by your definition.
Edit: After the definition of club changed
With the new definition, the proof can be made as follows:
Let $G$ be a (simple) graph which is a club.
Assume there is no path of length 1 or 2 from $v_0$ to $v_1$. Let $S_0$ denote the set of vertices joined to $v_0$ by an edge, and let $S_1$ be the set of vertices joined to $v_1$ by an edge. If there is a vertex $x$ which is in both $S_0$ and $S_1$ (that is $x \in S_0 \cap S_1$), then there is a path $v_0,x,v_1$ of length 2 from $v_0$ to $v_1$, so assume there is no vertex in $S_0 \cap S_1$.
We know by the definition of $S_0$ and $S_1$ that $|S_0| = \text{deg}(v_0)$ and $|S_1| = \text{deg}(v_1)$, and since $G$ is a club, $|S_0| + |S_1| = \text{deg}(v_0) + \text{deg}(v_1) \geq n$. Since $S_0$ and $S_1$ have no vertices in common, this means that every one of the $n$ vertices is in $S_0$ or $S_1$. But this is a contradiction, since $v_0,v_1$ are in neither: $v_0 \in S_1$ (or $v_1 \in S_0$) would give a direct edge from $v_0$ to $v_1$, and a simple graph has no loops, so $v_0 \notin S_0$ and $v_1 \notin S_1$.
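The statement (with the corrected definition) can also be sanity-checked exhaustively over all graphs on $5$ vertices; of course this brute force is no substitute for the proof:

```python
from itertools import combinations

def is_club(n, edges):
    """Club condition: deg(u) + deg(v) >= n for every non-adjacent pair u, v."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return all(deg[u] + deg[v] >= n
               for u, v in combinations(range(n), 2) if (u, v) not in edges)

def diameter_at_most_2(n, edges):
    """Every pair of vertices is adjacent or has a common neighbour."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return all(v in adj[u] or adj[u] & adj[v]
               for u, v in combinations(range(n), 2))

# exhaustively check all 2^10 graphs on 5 vertices
n = 5
pairs = list(combinations(range(n), 2))
for mask in range(2 ** len(pairs)):
    edges = {p for i, p in enumerate(pairs) if mask >> i & 1}
    if is_club(n, edges):
        assert diameter_at_most_2(n, edges)
```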
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Express Expectation and Variance in other terms. Let $X \sim N(\mu,\sigma^2)$ and
$$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$
where $-\infty < x < \infty$.
Express $\operatorname{E}(aX + b)$ and $\operatorname{Var}(aX +b)$ in terms of $\mu$, $\sigma$, $a$ and $b$, where $a$ and $b$ are real constants.
This is probably an easy question but I'm desperate at Probability! Any help is much appreciated as I'm not even sure where to start.
| Not an answer:
Check out Wikipedia, and then learn them through comprehension and by heart.
*
*Normal Distribution (E, $\sigma$ included)
*What is Variance
*Important Properties of Variance
*Important Properties of Expected Value
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What function $f$ such that $a_1 \oplus\, \cdots\,\oplus a_n = 0$ implies $f(a_1) \oplus\, \cdots\,\oplus f(a_n) \neq 0$ For a certain algorithm, I need a function $f$ on integers such that
$a_1 \oplus a_2 \oplus \, \cdots\,\oplus a_n = 0 \implies f(a_1) \oplus f(a_2) \oplus \, \cdots\,\oplus f(a_n) \neq 0$
(where the $a_i$ are pairwise distinct, non-negative integers and $\oplus$ is the bitwise XOR operation)
The function $f$ should be computable in $O(m)$, where $m$ is the maximum number of digits of the $a_i$. Of course the simpler the function is, the better. Preferrably the output of the function would fit into $m$ digits as well.
Is there something like this? It would also be okay to have a family of finitely many functions $f_n$ such that for one of the functions the result of the above operation will be $\neq 0$.
My own considerations so far were the following:
*
*If we choose the ones' complement as $f$, we can rule out all cases where $n$ is odd.
*If $n$ is even, this means that for every bit, an even number of the $a_i$ has the bit set and the rest has not, therefore taking the ones' complement before XORing doesn't change the result.
So the harder part seems to be the case where $n$ is even.
| The function $f$, if it exists, must have very large outputs.
Call a set of integers "closed" if it is closed under the operation $\oplus$. A good example of a closed set of integers is the set of positive integers smaller than $2^k$ for some $k$.
Let $S$ be a closed set of integers that form the domain of $f$. Take as an example those positive integers with at most $m$ bits. Let $T$ be the codomain of $f$, so that we have $f : S \to T$ being the function of interest. Assume furthermore that $T$ is a closed set of integers.
Big claim: $|T| \ge (2^{|S|}-1)/|S|$.
Proof sketch:
Let $A$ be the set of sequences $a_1 < a_2 < \dots < a_n$ of distinct positive integers in $S$. Let $p : A \to S$ be defined by $p(a_1,a_2,\dots,a_n) = a_1 \oplus \dots \oplus a_n$, and let $q : A \to T$ be defined by $q(a_1,\dots,a_n) = f(a_1) \oplus \dots \oplus f(a_n)$.
Claim: If $p(a_1,\dots,a_n) = p(b_1,\dots,b_l),$ then $q(a_1,\dots,a_n) \ne q(b_1,\dots,b_l)$.
Proof: Interleave the sequences $a$ and $b$, removing duplicates, to obtain a sequence $c$. Then $p(c) = p(a) \oplus p(b) = 0$, so $q(c) \ne 0$; yet $q(c) = q(a) \oplus q(b)$.
Now, note that there are $2^{|S|}-1$ elements of $A$, while $p$ takes at most $|S|$ values, so by the pigeonhole principle at least $(2^{|S|}-1)/|S|$ elements of $A$ share the same value of $p$. By the claim above, their $q$-values are pairwise distinct, so $T$ must contain at least $(2^{|S|}-1)/|S|$ distinct values.
So, if $S$ consists of $m$-bit integers, then $T$ must consist of roughly $(2^m-m)$-bit integers.
EDIT: Incorporating comments: the function $f(a) = 2^a$ has the desired property, and roughly achieves this bound.
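A quick brute-force check (a sketch I added, not part of the original answer) confirms that $f(a) = 2^a$ works: each $2^{a_i}$ sets a single distinct bit, so the images of distinct inputs can never XOR to $0$, whether or not the $a_i$ themselves do.

```python
from functools import reduce
from itertools import combinations

def f(a):
    # the function from the answer's edit: one distinct bit per input
    return 2 ** a

def xor_all(values):
    return reduce(lambda x, y: x ^ y, values, 0)

# For every subset of {0, ..., 9} of size >= 2 whose elements XOR to 0,
# the images under f must not XOR to 0.
for n in range(2, 11):
    for subset in combinations(range(10), n):
        if xor_all(subset) == 0:
            assert xor_all(f(a) for a in subset) != 0
```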
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Count permutations. Hi, I have a combinatorial exercise:
Let $s \in S_n$. Count the permutations such that
$$s(1)=1$$ and
$$|s(i+1)-s(i)|\leq 2 \,\, \mathrm{for} \, \, i\in\{1,2, \ldots , n-1 \}$$
Thank you!
| This is OEIS A038718 at the On-Line Encyclopedia of Integer Sequences. The entry gives the generating function
$$g(x)=\frac{x^2-x+1}{x^4-x^3+x^2-2x+1}$$
and the recurrence $a(n) = a(n-1) + a(n-3) + 1$, where clearly we must have initial values $a(0)=0$ and $a(1)=a(2)=1$.
Added: I got this by calculating $a(1)$ through $a(6)$ by hand and then looking at OEIS. For completeness, here’s a brief justification for the recurrence. Start with any of the $a(n-1)$ permutations of $[n-1]$. Add $1$ to each element of the permutation, and prepend a $1$; the result is an acceptable permutation of $[n]$ beginning $12$, and every such permutation of $[n]$ is uniquely obtained in this way. Now take each of the $a(n-3)$ permutations of $[n-3]$, add $3$ to each entry, and prepend $132$; the result is an acceptable permutation of $[n]$ beginning $132$, and each such permutation is uniquely obtained in this way. The only remaining acceptable permutation of $[n]$ is the unique single-peaked permutation: $13542$ and $135642$ are typical examples for odd and even $n$ respectively.
From here we can easily get the generating function. I’ll write $a_n$ for $a(n)$. Assuming that $a_n=0$ for $n<0$, we have $a_n=a_{n-1}+a_{n-3}+1-[n=0]-[n=2]$ for all $n\ge 0$, where the last two terms are Iverson brackets. Multiply by $x^n$ and sum over $n\ge 0$:
$$\begin{align*}
g(x)&=\sum_{n\ge 0}a_nx^n\\
&=\sum_{n\ge 0}a_{n-1}x^n+\sum_{n\ge 0}a_{n-3}x^n+\sum_{n\ge 0}x^n-1-x^2\\
&=xg(x)+x^3g(x)+\frac1{1-x}-1-x^2\;,
\end{align*}$$
so
$$\begin{align*}g(x)&=\frac{1-(1-x)-x^2(1-x)}{(1-x)(1-x-x^3)}\\
&=\frac{x-x^2+x^3}{1-2x+x^2-x^3+x^4}\;.
\end{align*}$$
This is $x$ times the generating function given in the OEIS entry; that one is offset so that $a(0)=a(1)=1$, and in general its $a(n)$ is the number of acceptable permutations of $[n+1]$; my $a_n$ is the number of acceptable permutations of $[n]$. (I didn’t notice this until I actually worked out the generating function myself.)
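As a sanity check, a short script (my own sketch, not part of the original answer) compares the recurrence against a brute-force count over all permutations:

```python
from itertools import permutations

def count_direct(n):
    """Brute-force count of permutations s of [n] with s(1)=1 and |s(i+1)-s(i)| <= 2."""
    count = 0
    for p in permutations(range(1, n + 1)):
        if p[0] == 1 and all(abs(p[i + 1] - p[i]) <= 2 for i in range(n - 1)):
            count += 1
    return count

def count_recurrence(n):
    """a(n) = a(n-1) + a(n-3) + 1 with a(0) = 0, a(1) = a(2) = 1."""
    a = [0, 1, 1]
    while len(a) <= n:
        a.append(a[-1] + a[-3] + 1)
    return a[n]

# the two counts agree for all small n
for n in range(1, 9):
    assert count_direct(n) == count_recurrence(n)
```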
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving that the number of vertices of odd degree in any graph G is even I'm having a bit of a trouble with the below question
Given $G$ is an undirected graph, the degree of a vertex $v$, denoted by $\mathrm{deg}(v)$, in graph $G$ is the number of neighbors of $v$.
Prove that the number of vertices of odd degree in any graph $G$ is even.
| Simply: the sum of an even number of odd numbers is even (always odd+odd=even, even+odd=odd, and even+even=even). Since the sum of the degrees of all vertices must be an even number, the number of odd-degree vertices must be even — which @Mike has presented very succinctly.
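As a quick empirical check (my own addition, not part of the answer), one can count odd-degree vertices in random simple graphs and verify the count is always even:

```python
import random

def odd_degree_count(n, edges):
    # degree of each vertex = number of incident edges
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1 for d in deg if d % 2 == 1)

random.seed(0)
for _ in range(100):
    n = random.randint(2, 12)
    # random simple graph: include each possible edge with probability 1/2
    edges = [(u, v) for u in range(n) for v in range(u + 1, n)
             if random.random() < 0.5]
    assert odd_degree_count(n, edges) % 2 == 0
```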
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 8,
"answer_id": 7
} |
A tricky but silly doubt regarding the solutions of $x^2/(y-1)^2=1$ Motivation :
I have been confused with some degree 2 equation. I suddenly came across a simple equation and couldn't get the quintessence behind that.
I have an equation $$\dfrac{x^2}{(y-1)^2}=1 \tag{1}$$ and I was looking for its solutions. It was asked to me by some kid (of $9$th standard).
I did some manipulation and finally got $$x^2=(y-1)^2 \tag{2} $$ and one can see that $(0,1)$ satisfies Equation $[2]$ well. I was happy at first, but within a short time
I realized that the same solution set can't satisfy Equation $[1]$: if you substitute $(0,1)$ in $[1]$ you get $\dfrac{0}{0}=1$, which is wrong.
The answer that convinced me finally :
We can see the same equation as $x^2 \cdot \dfrac{1}{(y-1)^2}=1$. We know that the set of integers forms a ring, so the product of two numbers is one if one number is the inverse of the other. The '$1$' present on the R.H.S. is the identity element, so the product of an entity with its inverse always gives us the identity.
So when $x$ is $0$: $0$ doesn't have an inverse in the integers, so this case is to be omitted.
Still persisting questions :
But the thing that makes me surprise is that the Wolfram Alpha gives me this solution .
In the picture you can clearly see that they both intersect at $(0,1)$ . But what is that confusion ?
We omitted that solution, but in fact $(0,1)$ is the intersection of the two lines.
Questions that are to be answered by learned people :
*
*What is the value of term $\dfrac{0}{0}$ ? Isn't it $1$ ?
*Why the solution pair $(0,1)$ satisfies $x^2=(y-1)^2$ but not $\dfrac{x^2}{(y-1)^2}=1$ ? We know that both of them are manifestations of each other in a simple manner.
*If we need to omit that solution, why do the lines intersect at $(0,1)$ ?
Thank you everyone for giving your time.
| The equations $$x^2=(y-1)^2\tag{1}$$ and $$\frac{x^2}{(y-1)^2}=1\tag{2}$$ do not have the same solution set. Every solution of $(2)$ is a solution of $(1)$, but $\langle 0,1\rangle$ is a solution of $(1)$ that is not a solution of $(2)$, because $\frac00$ is undefined.
The reason is that $(1)$ does not imply $(2)$. Note first that $(2)$ does imply $(1)$, because you can multiply both sides of $(2)$ by $(y-1)^2$ to get $(1)$. In order to derive $(2)$ from $(1)$, however, you must divide both sides of $(1)$ by $(y-1)^2$, and this is permissible if and only if $(y-1)^2\ne 0$. Thus, $(1)$ and $(2)$ are equivalent if and only if $(y-1)^2\ne 0$. As long as $(y-1)^2\ne 0$, $(1)$ and $(2)$ have exactly the same solutions, but a solution of $(1)$ with $(y-1)^2=0$ need not be (and in fact isn’t) a solution of $(2)$.
As far as the graphs go, the solution of $(1)$ is the union of the straight lines $y=x+1$ and $y=-x+1$. The solution of $(2)$ consists of every point on these two straight lines except their point of intersection.
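The difference between the two solution sets can be checked mechanically; here is a small sketch (my own addition, in plain Python rather than any particular CAS) that treats $\frac00$ as undefined:

```python
def satisfies_eq1(x, y):
    # x^2 = (y-1)^2
    return x**2 == (y - 1)**2

def satisfies_eq2(x, y):
    # x^2 / (y-1)^2 = 1; the left side is undefined when (y-1)^2 = 0
    if (y - 1)**2 == 0:
        return False
    return x**2 / (y - 1)**2 == 1

# (0, 1) solves (1) but not (2) ...
assert satisfies_eq1(0, 1) and not satisfies_eq2(0, 1)
# ... while every other integer-grid solution of (1) also solves (2)
for x in range(-5, 6):
    for y in range(-5, 6):
        if satisfies_eq1(x, y) and (y - 1) != 0:
            assert satisfies_eq2(x, y)
```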
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Complete course of self-study I am about $16$ years old and I have just started studying some college mathematics. I may never manage to get into a proper or good university (I do not trust fate) but I want to really study mathematics.
I request people to tell me what topics an undergraduate may/must study and the books that you highly recommend (please do not ask me to define an undergraduate).
Background:
*
*Single variable calculus from Apostol's book Calculus;
*I have started I. N. Herstein's Topics in Algebra;
*I have a limited knowledge of linear algebra: I only know what a basis is, a dimension is, a bit of transpose, inverse of a matrix, determinants defined in terms of co-factors, etc., but no more;
*absolutely elementary point set topology. I think open and closed balls, limit points, compactness, the Bolzano–Weierstrass theorem (I may have forgotten this topology bit);
*binomial coefficients, recursions, bijections;
*very elementary number theory: divisibility, modular arithmetic, Fermat's little theorem, Euler's phi function, etc.
I asked a similar question (covering less ground than this one) some time back which received no answers and which I deleted. Even if I do not manage to get into a good university, I wish to self-study mathematics. I thank all those who help me and all those who give me their valuable criticism and advice.
P.S.: Thanks all of you. Time for me to start studying.
| I would suggest some mathematical modeling or other practical application of mathematics. Finite automata and graph theory are also interesting, as they are further away from "pure math" as I see it; they have given me another perspective on math.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/181984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "109",
"answer_count": 23,
"answer_id": 21
} |
Simple Limit Divergence I am working with a definition of a divergent limit as follows:
A sequence $\{a_n\}$ diverges to $-\infty$ if, given any number $M$, there is an $N$ so that $n \ge N$ implies that $a_n \le M$.
The sequence I am considering is $a_n = -n^2$, which I thought would be pretty simple, but I keep running into a snag.
My Work:
For a given $M$, we want to show that $n \ge N \Rightarrow -n^2 \le M$. So $n^2 \ge -M$. But here is where I run into trouble, because I can't take square roots from here. What should I do?
| If $M>0$, the inequality $n^2\geq -M$ always holds, so you can take any $N$ you want.
If $M\leq 0$, then we need $n\geq\sqrt{-M}$, and you can take $N=\lfloor\sqrt{-M}\rfloor +1$.
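A small numeric check of the answer's choice of $N$ (a sketch I added; `math.isqrt` assumes integer $M$):

```python
import math

def threshold(M):
    """The N from the answer: any N works if M > 0, else floor(sqrt(-M)) + 1."""
    if M > 0:
        return 1
    return math.isqrt(-M) + 1

# every n >= N satisfies -n^2 <= M (checked on a window of n values)
for M in [5, 0, -1, -10, -100, -12345]:
    N = threshold(M)
    assert all(-n**2 <= M for n in range(N, N + 50))
```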
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $R$ is a ring s.t. $(R,+)$ is finitely generated and $P$ is a maximal ideal then $R/P$ is a finite field
Let $R$ be a commutative unitary ring and suppose that the abelian
group $(R,+)$ is finitely generated. Let's also $P$ be a maximal ideal
of $R$.
Then $R/P$ is a finite field.
Well, the fact that the quotient is a field is obvious. The problem is that I have to show it is a finite field. I do not know how to start: I think that we have to use some tools from the classification of modules over PID (the hypotesis about the additive group is quite strong).
I found similar questions here and here but I think my question is (much) easier, though I don't manage to prove it.
What do you think about? Have you got any suggestions?
Thanks in advance.
| As abelian groups, both $\,R\,,\,P\,$ are f.g. and thus the abelian group $\,R/P\,$ is f.g. (of course, if an abelian group is f.g. then so is any subgroup). But this is also a field, so if it had an element of infinite additive order then it'd contain an isomorphic copy of $\,\Bbb Z\,$ and thus also of $\,\Bbb Q\,$, which is impossible as the latter is not a f.g. abelian group. Hence every element of $\,R/P\,$ has finite additive order, so $\,(R/P,+)\,$ is a finitely generated torsion abelian group, and therefore finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Frechet Differentiabilty of a Functional defined on some Sobolev Space How can I prove that the following Functional is Frechet Differentiable and that the Frechet derivative is continuous?
$$
I(u)=\int_\Omega |u|^{p+1} dx , \quad 1<p<\frac{n+2}{n-2}
$$
where $\Omega$ is a bounded open subset of $\mathbb{R}^n$ and $I$ is a functional on $H^1_0(\Omega).$
| As was given in the comments, the Gâteaux derivative is
$$
I'(u)\psi = (p+1) \int_\Omega |u|^{p-1}u\psi.
$$
It is clearly linear, and bounded on $L^{p+1}$ since
$$
|I'(u)\psi| \leq (p+1) \|u\|_{p+1}^p\|\psi\|_{p+1},
$$
by the Hölder inequality with the exponents $\frac{p+1}p$ and $p+1$.
Here $\|\cdot\|_{q}$ denotes the $L^q$-norm.
Then the boundedness on $H^1_0(\Omega)$ follows from the continuity of the embedding $H^1_0\subset L^{p+1}$.
Now we will show the continuity of $I':L^{p+1}\to (L^{p+1})'$, with the latter space taken with its norm topology. First, some elementary calculus. For a constant $a>0$, the function $f(x)=|x|^a x$ is continuously differentiable with
$$
f'(x) = (a+1)|x|^a,
$$
implying that
$$
\left||x|^ax-|y|^ay\right| \leq (a+1)\left|\int_{x}^{y} |t|^a\mathrm{d}t \right| \leq (a+1)\max\{|x|^a,|y|^a\}|x-y|.
$$
Using this, we have
$$
|I'(u)\psi-I'(v)\psi|\leq (p+1)\int_\Omega \left||u|^{p-1}u-|v|^{p-1}v\right|\cdot|\psi|
\leq p(p+1) \int_\Omega \left(|u|^{p-1}+|v|^{p-1}\right)|u-v|\cdot|\psi|.
$$
Finally, it follows from the Hölder inequality with the exponents $\frac{p+1}{p-1}$, $p+1$, and $p+1$ that
$$
|I'(u)\psi-I'(v)\psi|
\leq p(p+1) \left(\|u\|_{p+1}^{p-1}+\|v\|_{p+1}^{p-1}\right)\|u-v\|_{p+1}\cdot\|\psi\|_{p+1},
$$
which establishes the claim.
Finally, the continuity of $I':H^1_0\to (H^1_0)'$ follows from
the continuity of the embedding $H^1_0\subset L^{p+1}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
One step subgroup test help
Possible Duplicate:
Basic Subgroup Conditions
could someone please explain how the one step subgroup test works,
I know it's important and everything, but I do not know how to apply it, nor the two-step subgroup test.
If someone could also give some examples with it it would be really helpful.
thank you
| Rather than prove that the "one step subgroup test" and the "two step subgroup test" are equivalent (which the links in the comments do very well), I thought I would "show it in action".
Suppose we want to show that $2\Bbb Z = \{k \in \Bbb Z: k = 2m, \text{for some }m \in \Bbb Z\}$ is a subgroup of $\Bbb Z$ under addition.
A) The "two-step method": first, we show closure - given $k,k' \in 2\Bbb Z$, we have that:
$k = 2m,k' = 2m'$ for some integers $m,m'$, so $k+k' = 2m+2m' = 2(m+m')$. Since $\Bbb Z$ is a group, and closed under addition, $m+m'$ is an integer, so $k+k' \in 2\Bbb Z$.
Next, we show that if $k \in 2\Bbb Z$, $-k \in 2\Bbb Z$: since $k = 2m$, for some integer $m$, we have $-k = -(2m) = 2(-m)$, and since $-m$ is also an integer, $-k \in 2\Bbb Z$.
B) The "one step method": here, we combine both steps into one: given $k,k' \in 2\Bbb Z$, we aim to show that $k + (-k') \in 2\Bbb Z$. As before, we write:
$k + (-k') = k - k' = 2m - 2m' = 2(m -m')$, and since $m - m'$ is an integer, $k + (-k') \in 2\Bbb Z$.
A more sophisticated use of this test, is to show that for any subgroup $H$ of a group $G$, and any element $g \in G$, $gHg^{-1} = \{ghg^{-1}: h \in H\}$ is also a subgroup of $G$. So given any pair of elements $x,y \in gHg^{-1}$, we must show $xy^{-1} \in gHg^{-1}$. Note we can write:
$x = ghg^{-1}$, for some $h \in H$, $y = gh'g^{-1}$, for some $h'\in H$.
Then $y^{-1} = (gh'g^{-1})^{-1} = (g^{-1})^{-1}h'^{-1}g^{-1} = gh'^{-1}g^{-1}$, so:
$xy^{-1} = (ghg^{-1})(gh'^{-1}g^{-1}) = gh(g^{-1}g)h'^{-1}g^{-1} = gh(e)h'^{-1}g^{-1} = g(hh'^{-1})g^{-1}$.
Since $H$ is a subgroup, it contains all inverses, so $h'^{-1}$ is certainly in $H$, and $H$ is also closed under multiplication, so $hh'^{-1} \in H$, thus:
$xy^{-1} = g(hh'^{-1})g^{-1} \in gHg^{-1}$, and we are done.
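To see the test "in action" computationally, here is a small sketch (my own addition) that runs the one-step test on subsets of $S_3$, with permutations represented as tuples:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0,1,2} stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_subgroup_one_step(subset):
    """One-step test: nonempty and closed under (x, y) -> x * y^(-1)."""
    return bool(subset) and all(
        compose(x, inverse(y)) in subset for x in subset for y in subset)

S3 = set(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}           # {identity, the transposition (0 1)}
assert is_subgroup_one_step(H)

# every conjugate g H g^(-1) passes the test, as proved above
for g in S3:
    assert is_subgroup_one_step(
        {compose(compose(g, h), inverse(g)) for h in H})

# a set that is not a subgroup fails: {identity, one 3-cycle}
assert not is_subgroup_one_step({(0, 1, 2), (1, 2, 0)})
```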
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How is a system of axioms different from a system of beliefs? Other ways to put it: Is there any faith required in the adoption of a system of axioms? How is a given system of axioms accepted or rejected if not based on blind faith?
| There are similarities in how people obtain beliefs on different matters. However, it is hardly blind faith.
There are rules that guide mathematicians in choosing axioms. There has always been discussions about whether an axiom is really true or not. For example, not long ago, mathematicians were discussing whether the axiom of choice is reasonable or not. The unexpected consequences of the axiom like the well ordering principle caused many to think it is not true.
The same applies to axioms discussed today among set theorists: set-theoretical statements which are independent of ZFC. There are various views regarding these, but they are not based on blind belief. One nice paper to have a look at is Saharon Shelah's Logical Dreams. (This is only one of the views regarding which axioms we should adopt for mathematics; another interesting point of view is the one held by Gödel, which can be found in his collected works.)
I think a major reason for accepting the consistency of mathematical systems like ZFC is that this statement is refutable (to refute it one just needs to come up with a proof of a contradiction in ZFC), yet no such proof has been found. In a sense, it can be considered similar to physics: as long as the theory describes what we see correctly and doesn't lead to strange things, mathematicians will continue to use it. If at some point we notice that it is not so (this happened in the last century with naive Cantorian set theory; see Russell's Paradox), we will fix the axioms to resolve those issues.
There has been several discussion on the FOM mailing list that you can read if you are interested.
In short, adoption of axioms for mathematics is not based on "blind faith".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "101",
"answer_count": 10,
"answer_id": 3
} |
A non-square matrix with orthonormal columns I know these 2 statements to be true:
1) An $n \times n$ matrix $U$ has orthonormal columns iff $U^TU=I=UU^T$.
2) An $m \times n$ matrix $U$ has orthonormal columns iff $U^TU=I$.
But can (2) be generalised to become "An $m \times n$ matrix $U$ has orthonormal columns iff $U^TU=I=UU^T$"? Why or why not?
Thanks!
| The $(i,j)$ entry of $U^T U$ is the dot product of the $i$'th and $j$'th columns of $U$, so
the matrix has orthonormal columns if and only if $U^T U = I$ (the $n \times n$ identity matrix, that is). If $U$ is $m \times n$, this requires $m \ge n$, because the rank of $U^T U$ is at most $\min(m,n)$. On the other hand, $U U^T$ is $m \times m$, and this again has rank at most $\min(m,n)$, so if $m > n$ it can't be the $m \times m$ identity matrix.
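A concrete instance (added here as a sketch, in plain Python to avoid assuming any linear-algebra library): take the $3 \times 2$ matrix whose columns are the first two standard basis vectors of $\mathbb{R}^3$. Then $U^TU = I_2$, but $UU^T$ has rank $2$ and cannot be $I_3$.

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def is_identity(M):
    n = len(M)
    return all(len(row) == n for row in M) and all(
        M[i][j] == (1 if i == j else 0) for i in range(n) for j in range(n))

# 3x2 matrix with orthonormal columns (m > n)
U = [[1, 0],
     [0, 1],
     [0, 0]]
assert is_identity(matmul(transpose(U), U))      # U^T U = I_2
assert not is_identity(matmul(U, transpose(U)))  # U U^T has rank 2, not I_3
```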
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
Proving that $2^{2^n} + 5$ is always composite by working modulo $3$ By working modulo 3, prove that $2^{2^n} + 5$ is always composite for every positive integer n.
No need for a formal proof by induction, just the basic idea will be great.
| Obviously $2^2 \equiv 1 \pmod 3$.
If you take the above congruence to the power of $k$ you get
$$(2^2)^k=2^{2k} \equiv 1^k=1 \pmod 3$$
which means that $2$ raised to any even power is congruent to $1$ modulo $3$.
What can you say about $2^{2k}+5$ then modulo 3?
It is good to keep in mind that you can take powers of congruences, multiply them and add them together.
If you have finished the above, you have shown that $3\mid 2^{2k}+5$. Does this imply that $2^{2k}+5$ is composite?
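A quick numerical confirmation of the argument (my own sketch): for every positive $n$ the exponent $2^n$ is even, so $2^{2^n} \equiv 1 \pmod 3$ and $3$ divides $2^{2^n}+5$; since the number exceeds $3$, it is composite.

```python
def divisible_by_3(n):
    # 2^2 = 1 (mod 3), hence 2^(even) = 1 (mod 3); pow(..., 3) keeps this cheap
    return (pow(2, 2 ** n, 3) + 5) % 3 == 0

for n in range(1, 20):
    assert divisible_by_3(n)

# the first few cases in full, with the factor of 3 visible
assert 2 ** 2 + 5 == 3 * 3     # n = 1
assert 2 ** 4 + 5 == 3 * 7     # n = 2
assert 2 ** 8 + 5 == 3 * 87    # n = 3
```

Note that $n=0$ gives $2^1+5=7$, which is prime — the statement really does need $n$ positive.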
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
Equivalence of a Lebesgue Integrable function I have the following question:
Let $X$ be a measure space with $\mu(X)<\infty$, and let $f \geq 0$ be measurable on $X$. Prove that $f$ is Lebesgue integrable on $X$ if and only if $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $.
I have the following ideas, but am a little unsure. For the forward direction:
By our hypothesis, we are taking $f$ to be Lebesgue integrable. Assume $\sum_{n=0}^{\infty}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) = \infty $. Then for any n, no matter how large, $\mu(\lbrace x \in X : f(x) \geq 2^n \rbrace)$ has positive measure. Otherwise, the sum will terminate for a certain $N$, giving us $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty $. Thus we have $f$ unbounded on a set of positive measure, which in combination with $f(x) \geq 0$, gives us that $\int_E f(x) d\mu=\infty$. This is a contradiction to $f$ being Lebesgue integrable. So our summation must be finite.
For the reverse direction:
We have that $\sum_{n=0}^{N}2^n \mu(\lbrace x \in X : f(x) \geq 2^n \rbrace) < \infty \\$. Assume that $f$ is not Lebesgue integrable, then we have $\int_E f(x) d\mu=\infty$. Since we are integrating over a finite set $X$, then this means that $f(x)$ must be unbounded on a set of positive measure, which makes our summation infinite, a contradiction.
Any thoughts as to the validity of my proof? I feel as if there is an easier, direct way to do it.
| $$\frac12\sum_{n=0}^{+\infty}2^n\,\mathbf 1_{f\geqslant2^n}\leqslant f\lt1+\sum_{n=0}^{+\infty}2^n\,\mathbf 1_{f\geqslant2^n}$$
(Check both bounds on the set where $2^k\leqslant f\lt 2^{k+1}$, where the sum equals $2^{k+1}-1$, and on the set where $f\lt 1$, where the sum is $0$.) Since $\mu(X)\lt\infty$, integrating these pointwise inequalities shows that $\int_X f\,\mathrm d\mu$ is finite if and only if $\sum_{n=0}^{\infty}2^n\mu(f\geqslant 2^n)$ is.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 0
} |
Prime factorization, Composite integers. Describe how to find a prime factor of 1742399 using at most 441 integer divisions and one square root.
So far I have only taken the square root of 1742399, getting 1319.9996. I have also tried to find a prime number that divides 1742399 exactly; I have tried up to 71 but had no luck. Surely there is an easier way that I am missing without trying numerous prime numbers. Any help would be great, thanks.
| Note that the problem asks you to describe how you would go about factorizing 1742399 in at most 442 operations. You are not being asked to carry out all these operations yourself! I think your method of checking all primes up to the square root is exactly what the problem is looking for, but to be safe you should check that there are no more than 441 primes less than or equal to 1319.
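Carrying the procedure out by machine (a sketch I added, not part of the answer): one square root gives the limit 1319, a sieve lists the primes below it, and trial division finds the factor. Incidentally, since $1320^2 = 1742400$, we have $1742399 = 1320^2 - 1 = 1319 \cdot 1321$.

```python
import math

N = 1742399
limit = math.isqrt(N)            # the "one square root": floor(sqrt(N)) = 1319

# sieve the primes up to the square root
is_prime = [True] * (limit + 1)
is_prime[0] = is_prime[1] = False
for p in range(2, math.isqrt(limit) + 1):
    if is_prime[p]:
        for q in range(p * p, limit + 1, p):
            is_prime[q] = False
primes = [p for p in range(2, limit + 1) if is_prime[p]]
assert len(primes) <= 441        # well within the allowed budget

# trial division: one of these primes must divide N
divisions = 0
factor = None
for p in primes:
    divisions += 1
    if N % p == 0:
        factor = p
        break
assert factor is not None and N % factor == 0
assert divisions <= 441
```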
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
What is it about modern set theory that prevents us from defining the set of all sets which are not members of themselves? We can clearly define a set of sets. I feel intuitively like we ought to define sets which do contain themselves; the set of all sets which contain sets as elements, for instance. Does that set produce a contradiction?
I do not have a very firm grasp on what constitutes a set versus what constitutes a class. I understand that all sets are classes, but that there exist classes which are not sets, and this apparently resolves Russell's paradox, but I don't think I see exactly how it does so. Can classes not contain classes? Can a class contain itself? Can a set?
| crf wrote:
"I understand that all sets are classes, but that there exist classes
which are not sets, and this apparently resolves Russell's
paradox...."
You don't need classes to resolve Russell's paradox. The key is that, for any formula $P$, you cannot automatically assume the existence of $\{x \mid P(x)\}$. If $P(x)$ is $x\notin x$, we arrive at Russell's Paradox. If $P(x)$ is $x\in x$, however, you don't necessarily run into any problems.
So, you can ban the use of certain formulas, hoping that the ban will cover all possibilities that lead to a contradiction. My preference (see http://www.dcproof.com ) is not to assume a priori the existence of any sets, not even the empty set. In such a system, you cannot prove the existence of any sets, problematic or otherwise. You can, of course, postulate the existence of a set in such a system, and construct other sets from it, e.g. subsets, or power sets as permitted.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 9,
"answer_id": 8
} |
About the sequence satisfying $a_n=a_{n-1}a_{n+1}-1$ "Consider sequences of positive real numbers of the form x,2000,y,..., in which every term after the first is 1 less than the product of its two immediate neighbors. For how many different values of x does the term 2001 appear somewhere in the sequence?
(A) 1 (B) 2 (C) 3 (D) 4 (E) More than 4"
Can anyone suggest a systematic way to solve this problem? Thanks!
| (This is basically EuYu's answer with the details of periodicity added; took a while to type up.)
Suppose that $a_0 , a_1 , \ldots$ is a generalised sequence of the type described, so that $a_i = a_{i-1} a_{i+1} - 1$ for all $i > 0$. Note that this condition is equivalent to demanding that $$a_{i+1} = \frac{ a_i + 1 }{a_{i-1}}.$$
Using this we find the following recurrences:
$$ a_2 = \frac{ a_1 + 1}{a_0}; \\
a_3 = \frac{ a_2 + 1}{a_1} = \frac{ \frac{ a_1 + 1}{a_0} }{a_1} = \frac{ a_0 + a_1 + 1 }{ a_0a_1 }; \\
a_4 = \frac{ a_3 + 1 }{a_2} = \frac{\frac{ a_0 + a_1 + 1 }{ a_0a_1 } + 1}{\frac{ a_1 + 1}{a_0}} = \frac{ ( a_0 + 1 )( a_1 + 1) }{ a_1 ( a_1 + 1 ) } = \frac{a_0 + 1}{a_1};\\
a_5 = \frac{ a_4 + 1 }{ a_3 } = \frac{ \frac{a_0 + 1}{a_1} + 1}{\frac{ a_0 + a_1 + 1 }{ a_0a_1 }} = \frac{ \left( \frac{a_0 + a_1 + 1}{a_1} \right) }{ \left( \frac{a_0+a_1+1}{a_0a_1} \right) } = a_0 \\
a_6 = \frac{ a_5 + 1 }{a_4} = \frac{ a_0 + 1}{ \left( \frac{ a_0 + 1 }{a_1} \right) } = a_1.
$$
Thus every such sequence is periodic with period 5, so if 2001 appears, it must appear as one of $a_0, a_1, a_2, a_3, a_4$.
*
*Clearly if $a_0 = 2001$, we're done.
*As we stipulate that $a_1 = 2000$, it is impossible for $a_1 = 2001$.
*If $a_2 = 2001$, then it must be that $2001 = \frac{ 2000 + 1 }{a_0}$ and so $a_0 = 1$.
*If $a_3 = 2001$, then it must be that $2001 = \frac{a_0 + 2000 + 1}{a_0 \cdot 2000}$, and it follows that $a_0 = \frac{2001}{2000 \cdot 2001 - 1}$.
*If $a_4 = 2001$, then it must be that $2001 = \frac{ a_0 + 1 }{2000}$, and so $a_0 = 2001 \cdot 2000 - 1$.
There are thus exactly four values of $a_0$ such that 2001 appears in the sequence.
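The periodicity and the four starting values can be verified with exact rational arithmetic; here is a short sketch (my addition) using `fractions.Fraction`:

```python
from fractions import Fraction as F

def sequence(a0, a1, length=12):
    """a_{i+1} = (a_i + 1) / a_{i-1}, i.e. each term is 1 less than the
    product of its two immediate neighbours."""
    seq = [F(a0), F(a1)]
    while len(seq) < length:
        seq.append((seq[-1] + 1) / seq[-2])
    return seq

# every such sequence has period 5
s = sequence(3, 7)
assert s[:5] == s[5:10]

# the four values of x found above, each making 2001 appear
candidates = [F(2001), F(1),
              F(2001, 2000 * 2001 - 1), F(2001 * 2000 - 1)]
for x in candidates:
    assert F(2001) in sequence(x, 2000, length=6)
```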
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
Find all linearly dependent subsets of this set of vectors I have vectors in the following form:
(1 1 1 0 1 0)
(0 0 1 0 0 0)
(1 0 0 0 0 0)
(0 0 0 1 0 0)
(1 1 0 0 1 0)
(0 0 1 1 0 0)
(1 0 1 1 0 0)
I need to find all linearly dependent subsets over $\mathbb{Z}_2$.
For example 1,2,5 and 3,6,7.
EDIT (after @rschwieb)
The answer for presented vectors:
521
642
763
6541
7432
75431
765321
I did it by brute force: I wrote a program to iterate through all the subsets counted by
$${7 \choose 3} + {7 \choose 4} + {7 \choose 5} + {7 \choose 6} + {7 \choose 7},$$
99 in total.
But I suspected that some standard method exists for this task. For now I'm trying to implement http://en.wikipedia.org/wiki/Quadratic_sieve . The code is incorporated in the whole program; I plan to put it here once I organize it well.
| Let us denote by $M$ the transpose of your matrix,
$$M= \begin{pmatrix}
1 & 0 & 1 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
As rschwieb already noted, a vector $v$ with $Mv=0$ solves your problem.
Using elementary row operation (modulo 2), we can easily bring it on the row echelon form
$$M' = \begin{pmatrix}
1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}.$$
Now, it is easy to see that the vectors
$$v = \begin{pmatrix}
\alpha \\
\alpha +\beta +\gamma \\
\gamma \\
\beta +\gamma \\
\alpha \\
\beta \\
\gamma \\
\end{pmatrix}
$$
parameterized by $\alpha$, $\beta$, $\gamma$ are in the kernel of $M'$ and thus in the kernel of $M$. Setting $\alpha =0,1$, $\beta=0,1$, and $\gamma=0,1$, we obtain the $2^3=8$ solutions. The solution with $\alpha=\beta=\gamma=0$ is trivial, so there are 7 nontrivial solutions.
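Both the brute-force search and the kernel parameterization can be checked with a few lines of Python (my own sketch); the $2^3-1=7$ nontrivial solutions match the subsets listed in the question:

```python
from itertools import combinations

vectors = [
    (1, 1, 1, 0, 1, 0),
    (0, 0, 1, 0, 0, 0),
    (1, 0, 0, 0, 0, 0),
    (0, 0, 0, 1, 0, 0),
    (1, 1, 0, 0, 1, 0),
    (0, 0, 1, 1, 0, 0),
    (1, 0, 1, 1, 0, 0),
]

def add_mod2(u, v):
    return tuple((x + y) % 2 for x, y in zip(u, v))

zero = (0,) * 6
dependent = []
for r in range(2, len(vectors) + 1):
    for idx in combinations(range(len(vectors)), r):
        total = zero
        for i in idx:
            total = add_mod2(total, vectors[i])
        if total == zero:
            dependent.append(tuple(i + 1 for i in idx))   # 1-based labels

# 2^3 - 1 = 7 nontrivial kernel vectors, as derived above
assert len(dependent) == 7
assert (1, 2, 5) in dependent and (3, 6, 7) in dependent
```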
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Intuition on proof of Cauchy Schwarz inequality To prove Cauchy Schwarz inequality for two vectors $x$ and $y$ we take the inner product of $w$ and $w$ where $w=y-kx$ where $k=\frac{(x,y)}{|x|^2}$ ($(x,y)$ is the inner product of $x$ and $y$) and use the fact that $(w,w) \ge0$ . I want to know the intuition behind this selection. I know that if we assume this we will be able to prove the theorem, but the intuition is not clear to me.
| My favorite proof is inspired by Axler and uses the Pythagorean theorem (that $\|v+w\|^2 =\|v\|^2+\|w\|^2$ when $(v,w)=0$). It motivates the choice of $k$ as the component of $y$ in an orthogonal decomposition (i.e., $kx$ is the projection of $y$ onto the space spanned by $x$ using the decomposition $\langle x\rangle\oplus \langle x\rangle^\perp$). For simplicity we will assume a real inner product space (a very small tweak makes it work in both cases).
The idea is that we want to show $$\left|\left(\frac{x}{\|x\|},y\right)\right| = |(\hat{x},y)| \leq \|y\|,$$
where I have divided both sides by $\|x\|$ and let $\hat{x}=x/\|x\|$. If we interpret taking an inner product with a unit vector as computing a component, the above says the length of $y$ is at least the component of $y$ in the $x$ direction (quite plausible).
Following the above comments we will prove this statement by decomposing $y$ into two components: one in the direction of $x$ and the other orthogonal to $x$. Let $y=k\hat{x}+(y-k\hat{x})$ where $k = (\hat{x},y)$. We see that
$$(\hat{x},y-k\hat{x}) = (\hat{x},y) - (\hat{x},k\hat{x}) = k - k(\hat{x},\hat{x})=0,$$
showing $\hat{x}$ and $y-k\hat{x}$ are orthogonal. This allows us to apply the Pythagorean theorem:
$$\|y\|^2 = \|k\hat{x}\|^2+\|y-k\hat{x}\|^2 = |k|^2 + \|y-k\hat{x}\|^2 \geq |k|^2,$$
since norms are non-negative. Taking square roots gives the result.
As a final comment, note that
$$k\hat{x} = (\hat{x},y)\hat{x} = \left(\frac{x}{\|x\|},y\right)\frac{x}{\|x\|} = \frac{(x,y)}{\|x\|^2}x$$
matches your formulation.
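A numeric sanity check of the decomposition (my own sketch, not part of the answer): for random vectors, the residual $y - k\hat{x}$ is orthogonal to $\hat{x}$, the Pythagorean identity holds, and Cauchy–Schwarz follows.

```python
import math
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

random.seed(1)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(4)]
    y = [random.uniform(-5, 5) for _ in range(4)]
    xhat = [a / norm(x) for a in x]
    k = dot(xhat, y)                              # component of y along x
    residual = [a - k * b for a, b in zip(y, xhat)]
    # the residual is orthogonal to xhat ...
    assert abs(dot(xhat, residual)) < 1e-9
    # ... so Pythagoras gives |y|^2 = k^2 + |residual|^2 ...
    assert abs(norm(y) ** 2 - (k ** 2 + norm(residual) ** 2)) < 1e-9
    # ... and Cauchy-Schwarz follows: |(x, y)| <= |x| |y|
    assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-12
```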
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Two problems with prime numbers
Problem 1. Prove that there exists $n\in\mathbb{N}$ such that in interval $(n^2, \ (n+1)^2)$ there are at least $1000$ prime numbers.
Problem 2. Let $s_n=p_1+p_2+...+p_n$ where $p_i$ is the $i$-th prime number. Prove that for every $n$, there exists $k\in\mathbb{N}$ such that $s_n<k^2<s_{n+1}$.
I found these two a while ago and they interested me, but I don't have any ideas.
| Problem 2:
For any positive real $x$, there is a square between $x$ and $x+2\sqrt{x}+2$. Therefore it will suffice to show that $p_{n+1}\geq 2\sqrt{s_n}+2$. We have $s_{n}\leq np_n$ and $p_{n+1}\geq p_n+2$, so we just need to show $p_n\geq 2\sqrt{np_n}$, i.e., $p_n\geq 4n$. That this holds for all sufficiently large $n$ follows either from a Chebyshev-type estimate $\pi(x)\asymp\frac{x}{\log(x)}\,$ (we could also use PNT, but we don't need the full strength of this theorem), or by noting that fewer than $\frac{1}{4}$ of the residue classes mod $210=2\cdot3\cdot5\cdot7$ are coprime to $210$. We can check that statement by hand for small $n$.
There have already been a couple of answers, but here is my take on problem 1:
Suppose the statement is false. It follows that $\pi(x)\leq 1000\sqrt{x}$ for all $x$. This contradicts Chebyshev's estimate $\pi(x)\asymp \frac{x}{\log(x)}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/182889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 0
} |
Attaching a topological space to another I'm self-studying Mendelson's Introduction to Topology. There is an example in the identification topology section that I cannot understand:
Let $X$ and $Y$ be topological spaces and let $A$ be a non-empty closed subset of $X$. Assume that $X$ and $Y$ are disjoint and that a continuous function $f : A \to Y$ is given. Form the set $(X - A) \cup Y$ and define a function $\varphi: X \cup Y \to (X - A) \cup Y$ by $\varphi(x) = f(x)$ for $x \in A$, $\varphi(x) = x$ for $x \in X - A$, and $\varphi(y) = y$ for $y \in Y$. Give $X \cup Y$ the topology in which a set is open (or closed) if and only if its intersections with $X$ and $Y$ are both open (or closed). $\varphi$ is onto. Let $X \cup_f Y$ be the set $(X - A) \cup Y$ with the identification topology defined by $\varphi$.
Let $I^2$ be the unit square in $\mathbb{R}^2$ and let $A$ be the union of its two vertical edges. Let $Y = [0, 1]$ be the unit interval. Define $f : I^2 \to Y$ by $f(x, y) = y$. Then $I^2 \cup_f Y$ is a cylinder formed by identifying the two vertical edges of $I^2$.
I don't understand how $I^2 \cup_f Y$ can be a cylinder. The set is equal to $(I^2 - A) \cup Y$. Which is a union of a subset of $\mathbb{R}^2$ and $[0, 1]$. How can this be a cylinder?
The book has an exercise that constructs a torus in a similar manner. I'm hoping to be able to solve it once I understand this example.
I looked up some examples online. While I understand the definitions and theorems of identification topologies, I have no clue how geometric objects are constructed.
| I think it's good that you ask this question, plus one. Your intuition will eventually develop, don't worry. I had trouble understanding identification topologies too when I first saw them; it just takes some time to get used to. The way I think about it now is as follows:
You have two spaces $X,Y$ and you know how you want to "glue" them together, namely, you take all the points in $A \subset X$ and stick them on $Y$. The map $f$ tells you where on $Y$ you stick them. In pictures it looks something like this:
In the example, this looks like this:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Deducing formula for a linear transformation The question I'm answering is as follows:
Let $ T: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a linear transformation such that $ T(1,1) = (2,1) $ and $ T(0,2) = (2,8) $. Find a formula for $ T(a,b) $ where $ (a,b) \in \mathbb{R}^2 $.
Earlier we proved that $\{(1,1), (0,2)\}$ spans $\mathbb{R}^2$. I used this when trying to find a formula for $T$. My working is:
$T(a(1,1) + b(0,2)) = aT(1,1) + bT(0,2) $
Because $T$ is linear. Thus:
$ T(a(1,1) + b(0,2)) = a(2,1) + b(2,8) = T(a,b)$
Is this correct? It seems a bit too easy and so I'm wondering if I missed anything.
| We have
$$(1,0)=(1,1)-\frac{1}{2}(0,2)\qquad\text{and} \qquad(0,1)=\frac{1}{2}(0,2).\tag{$1$}$$
Note that $(a,b)=a(1,0)+b(0,1)$. So
$$T(a,b)=aT(1,0)+bT(0,1).$$
Now use the values of $T(1,1)$ and $T(0,2)$, and Equations $(1)$, to find $T(1,0)$ and $T(0,1)$, and simplify a bit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Explicitness in numeral system
Prove that for every $a\in\mathbb{N}$ there is one and only one way to express it in the system with base $\mathbb{N}\ni s>1$.
Seems classical, but I don't have any specific argument.
| We can express a natural number $a$ to any base $s$ by writing $k = \lfloor \log_s a \rfloor$, the index of the leading digit (so $k+1$ digits suffice), $a_k=\max(\{n\in \mathbb{N}: ns^k\leq a\}),$ and recursively
$a_{i}=\max(\{n\in \mathbb{N}: ns^i \leq a-\sum_{j=i+1}^k a_j s^j\})$. It's immediate that each of these maxes exists, since $0s^i\leq a$ no matter what and $a s^i\geq a$, and that they're unique by, say, the well-ordering of $\mathbb{N}$.
Then
$$a=\sum_{j=0}^k a_j s^j$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Stabilizer of a point and orbit of a point I really need help with this topic I have an exam tomorrow and am trying to get this stuff in my head. But the book is not explaining me these two topics properly.
It gives me the definition of a stabilizer at a point where $\mathrm {Stab}_G (i) = \{\phi \in G \mid \phi(i) = i\}$, and where $\mathrm{Orb}_G (i) = \{\phi(i) \mid \phi \in G\}$.
I do not know how to calculate the stabilizer nor the orbit for this. I am also given an example
Let $G = \{ (1), (132)(465)(78), (132)(465), (123)(456), (123)(456)(78), (78)\}$ and then
$\mathrm{Orb}_G (1) = \{1, 3, 2\}$,
$\mathrm{Orb}_G (2) = \{2, 1, 3\}$,
$\mathrm{Orb}_G (4) = \{4, 6, 5\}$, and
$\mathrm{Orb}_G (7) = \{7, 8\}$.
also
$\mathrm{Stab}_G (1) = \{(1), (78)\},\\ \mathrm{Stab}_G (2) = \{(1), (78)\},\\ \mathrm{Stab}_G (3) = \{(1), (78)\},\text {and}\\ \mathrm{Stab}_G (7) = \{(1), (132)(465), (123)(456)\}.$
If someone could PLEASE go step by step in how this example was solved it would be really helpful.
Thank you
| In simple terms,
The stabilizer of a point is the set of permutations in the group that leave the given point fixed.
=> Stab(1) = {(1), (78)}, since these are the only permutations of $G$ that do not move $1$.
The orbit of a point (say $1$) is the set of images of that point under the permutations of the group.
=> Orb(1): $(1)$ sends $1$ to $1$; $(132)\ldots$ sends $1$ to $3$; $(123)\ldots$ sends $1$ to $2$; hence Orb(1) $= \{1, 3, 2\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Sum of a stochastic process I have a question regarding the distribution of the sum of a discrete-time stochastic process. That is, if the stochastic process is $(X_1,X_2,X_3,X_4,\ldots)$, what is the distribution of $X_1+X_2+X_3+\ldots$? The $X_i$ could be assumed to take values in a discrete or a continuous set, whichever is easier to calculate with.
I understand that it mainly depends on the distribution of the $X_i$ and on whether the $X_i$ are correlated, right? If they are independent, the computation is probably relatively straightforward, right? For the case of two variables it is the convolution of the probability distributions, and this can probably be generalized to the case of $n$ variables, can't it? But what if they are dependent?
Are there any types of stochastic processes, where the distribution of the sum can be computed numerically or even be given as a closed-form expression?
I really appreciate any hints!
|
Are there any types of stochastic processes, where the distribution of the sum can be computed numerically or even be given as a closed-form expression?
As stated, the problem is quite equivalent to compute the distribution of the sum of an arbritary set of random variables. Little can be said in general, as the fact that the variables ($X_i$) form a stochastic process adds practically nothing.
Let's assume that the stochastic process $X(n)$ is a stationary ARMA$(P,Q)$ process, i.e., it's generated from a white noise process $R(n)$ of zero mean and given distribution that passes through a LTI causal filter with $P$ zeroes and $Q$ poles. Then, the process $Z(n) = \sum_{k=n-M+1}^{n} X(k)$ is obtained by chaining a MA$(M)$ filter, so $Z(n)$ is ARMA$(P+M,Q)$ (apart from cancellation which might occur). Now any finite order invertible causal ARMA filter can be expressed as an infinite order MA filter, so that $Z(n)$ can be expressed as a (infinite) linear combination of the white noise input:
$$Z(n) = \sum_{k=-\infty}^n a_k R(k)$$
Because $R(k)$ is iid, the distribution of the sum can be obtained as a convolution. (Notice, however, that the CLT does not apply here). In terms of the characteristic functions, we'd get
$$F_Z(w)=\prod_{k=-\infty}^n F_R(a_k w)$$
Notice, however, that all this might have little or no practical use. For one thing, ARMA modelling is usually applied only to second-order moment analysis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Multiplicative group of integers modulo n definition issues
It is easy to verify that the set $(\mathbb{Z}/n\mathbb{Z})^\times$ is closed under multiplication in the sense that $a, b ∈ (\mathbb{Z}/n\mathbb{Z})^\times$ implies $ab ∈ (\mathbb{Z}/n\mathbb{Z})^\times$, and is closed under inverses in the sense that $a ∈ (\mathbb{Z}/n\mathbb{Z})^\times$ implies $a^{-1} ∈ (\mathbb{Z}/n\mathbb{Z})^\times$.
The question is the following:
Firstly, are $a$ and $b$ referring to each equivalence class of integers modulo $n$?
Secondly, by $a^{-1}$, what is this referring to? If $a$ is an equivalence class, I cannot see (or I am not sure) how to make sense of its inverse.
| Well $2\cdot 3\equiv 1\; \text{mod}\ 5$, so $2$ and $3$ are multiplicative inverses $\text{ mod } 5$.
How to find the inverse of a number modulo a prime number was the topic of one of my previous answers. Modulo a composite number, inverses don't always exist.
See Calculating the Modular Multiplicative Inverse without all those strange looking symbols for the way to find the inverse of $322$ mod $701$. It turns out to be $455$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 6,
"answer_id": 2
} |
How to find the maximum number of edge-disjoint paths using flow network Given a graph $G=(V,E)$ and $2$ vertices $s,t \in V$, how can I find the maximum number of edge-disjoint paths from $s$ to $t$ using a flow network? $2$ paths are edge disjoint if they don't have any common edge, though they may share some common vertices.
Thank you.
| Hint: if each edge has a capacity of one unit, different units of stuff flowing from $s$ to $t$ must go on edge-disjoint paths.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Finding a basis for the solution space of a system of Diophantine equations Let $m$, $n$, and $q$ be positive integers, with $m \ge n$.
Let $\mathbf{A} \in \mathbb{Z}^{n \times m}_q$ be a matrix.
Consider the following set:
$S = \big\{ \mathbf{y} \in \mathbb{Z}^m \mid \mathbf{Ay} \equiv \mathbf{0} \pmod q \big\}$.
It can be easily shown that $S$ constitutes a lattice, because it is a discrete additive subgroup of $\mathbb{R}^m$.
I want to find the basis of this lattice. In other words, I want to find a matrix $\mathbf{B} \in \mathbb{Z}^{m \times m}$, such that the following holds:
$S = \{\mathbf{Bx} \mid \mathbf{x} \in \mathbb{Z}^m \}$.
Let me give some examples:
*
*$q=2$, and $\mathbf{A} = [1,1]$ $\quad \xrightarrow{\qquad}\quad$
$\mathbf{B} = \left[ \begin{array}{cc} 2&1 \\ 0&1 \end{array} \right]$
*$q=3$, and $\mathbf{A} = \left[ \begin{array}{ccc} 0&1&2 \\ 2&0&1 \end{array} \right]$
$\quad \xrightarrow{\qquad}\quad$
$\mathbf{B} = \left[ \begin{array}{ccc} 3&0&1 \\ 0&3&1 \\ 0&0&1 \end{array} \right]$
*$q=4$, and $\mathbf{A} = \left[ \begin{array}{ccc} 0&2&3 \\ 3&1&2 \end{array} \right]$
$\quad \xrightarrow{\qquad}\quad$
$\mathbf{B} = \left[ \begin{array}{ccc} 4&2&1 \\ 0&2&1 \\ 0&0&2 \end{array} \right]$
Note that in all cases, $\mathbf{AB} =\mathbf{0} \pmod q$. However, $\mathbf{B}$ is not an arbitrary solution to this equivalence, since it must span $S$. For instance, in the example 1 above, matrix $\mathbf{\hat B} = \left[ \begin{array}{cc} 2&0\\ 0&2 \end{array} \right]$ satisfies $\mathbf{A \hat B} =\mathbf{0} \pmod 2$, but generates a sub-lattice of $S$.
Also note that if $\mathbf{B}$ is a basis of $S$, any other basis $\mathbf{\bar B}$ is also a basis of $S$, provided that there exists a unimodular matrix $\mathbf{U}$, for which $\mathbf{\bar B} = \mathbf{BU}$.
My questions:
*
*How to obtain $\mathbf{B}$ from $\mathbf{A}$ and $q$?
*Is it possible that $\mathbf{B}$ is not full rank, i.e. $\text{Rank}(\mathbf{B})< m$?
*Is there any difference between the case where $q$ is prime and the case where it is composite?
Side note: As far as I understood, $S$ is the solution space of a system of linear Diophantine equations. The solution has something to do with Hermite normal forms, but I can't figure out how.
| Even over a field, a fair amount goes into this. Here are two pages from Linear Algebra and Matrix Theory by Evar D. Nering, second edition:
From a row-echelon form for your data matrix $A,$ one can readily find the null space as a certain number of columns by placing $1$'s in certain "free" positions and back-substituting to get the other positions.
Meanwhile, once you have that information, the square matrices you display above are not quite what Buchmann, Lindner, Ruckert, and Schneider want. At the bottom of page 2 they define their HNF for $B$ as being upper triangular as well, where the first several columns have merely a single $q$ on the diagonal element and otherwise $0$'s. So you were close.
Note that, over a field ($q$ prime) there is a well-defined notion of the rank of $A.$ The number of non-trivial columns of $B$ is $m-n,$ as you have illustrated above.
Anyway, I think everything about your problem is in ordinary linear algebra books for fields, then I recommend INTEGRAL MATRICES by Morris Newman.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Can you construct a field with 6 elements?
Possible Duplicate:
Is there anything like GF(6)?
Could someone tell me if you can build a field with 6 elements.
| If such a field $F$ exists, then the multiplicative group $F^\times$ is cyclic of order 5. So let $a$ be a generator for this group and write
$F = \{ 0, 1, a, a^2, a^3, a^4\}$.
From $a(1 + a + a^2 + a^3 + a^4) = 1 + a + a^2 + a^3 + a^4$, it immediately follows that
$1 + a + a^2 + a^3 + a^4 = 0$. Let's call this (*).
Since $0$ is the additive inverse of itself and $F^\times$ has odd number of elements, at least one element of $F^\times$ is its own additive inverse. Since $F$ is a field, this implies $1 = -1$. So, in fact, every element of $F^\times$ is its own additive inverse (**).
Now, note that $1 + a$ is different from $0$, $1$ and $a$. So it is $a^i$, where i = 2, 3 or 4. Then, $1 + a - a^i = 1 + a + a^i = 0$. Hence, by $(*)$ one of $a^2 + a^3$, $a^2 + a^4$ and $a^3 + a^4$ must be $0$, a contradiction with (**).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 5,
"answer_id": 2
} |
can one derive the $n^{th}$ term for the series, $u_{n+1}=2u_{n}+1$,$u_{0}=0$, $n$ is a non-negative integer derive the $n^{th}$ term for the series $0,1,3,7,15,31,63,127,255,\ldots$
observation gives, $t_{n}=2^n-1$, where $n$ is a non-negative integer
$t_{0}=0$
| The following is a semi-formal variant of induction that is particularly useful for recurrences.
Let $x_n=2^n-1$. It is easy to verify that $x_0=0$. It is also easy to verify that
$$x_{n+1}=2x_n+1,$$
since $2^{n+1}-1=2(2^n-1)+1$.
So the sequence $(x_n)$ starts in the same way as your sequence and obeys the same recurrence as your sequence. Thus the two sequences must be the same.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Is $ \ln |f| $ harmonic? I'd like to show that $\ln |f| $ is harmonic, where $f$ is holomorphic defined on a domain of the complex plane and never takes the value 0.
My idea was to use the fact that $\ln |f(z)| = \operatorname{Log} f(z) - i\operatorname{Arg}(f(z))$, but $\operatorname{Log}$ is only holomorphic on some part of the complex plane and $\operatorname{Arg}$ is not holomorphic at all.
Any help is welcome!
| This is a local result; you need to show that given a $z_0$ with $f(z_0) \neq 0$ there is a neighborhood of $z_0$ on which $\ln|f(z)|$ is harmonic.
Fix $z_0$ with $f(z_0) \neq 0$. Let $\log(z)$ denote an analytic branch of the logarithm defined on a neighborhood of $f(z_0)$. Then the real part of $\log(z)$ is $\ln|z|$; any two branches of the logarithm differ by an integer multiple of $2\pi i$. The function $\log(f(z))$, being the composition of analytic functions, is analytic on a neighborhood of $z_0$. The real part of this function is $\ln|f(z)|$, which is therefore harmonic on a neighborhood of $z_0$, which is what you need to show.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 2
} |
Does there exist a nicer form for $\beta(x + a, y + b) / \beta(a, b)$? I have the expression
$$\displaystyle\frac{\beta(x + a, y + b)}{\beta(a, b)}$$
where $\beta(a_1,a_2) = \displaystyle\frac{\Gamma(a_1)\Gamma(a_2)}{\Gamma(a_1+a_2)}$.
I have a feeling this should have a closed-form which is intuitive and makes less heavy use of the Beta function. Can someone describe to me whether this is true?
Here, $x$ and $y$ are integers larger than $0.$
| $$
\beta(1+a,b) = \frac{\Gamma(1+a)\Gamma(b)}{\Gamma(1+a+b)} = \frac{a\Gamma(a)\Gamma(b)}{(a+b)\Gamma(a+b)} = \frac{a}{a+b} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)} = \frac{a}{a+b} \beta(a,b).
$$
If you have, for example $\beta(5+a,8+b)$, just repeat this five times for the first argument and eight for the second:
$$
\frac{(4+a)(3+a)(2+a)(1+a)a\cdot(7+b)(6+b)\cdots (1+b)b}{(12+a+b)(11+a+b)\cdots (1+a+b)(a+b)}\beta(a,b).
$$
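Both the one-step identity and its iterated form for integer shifts can be checked numerically with `math.gamma` (the sample values of $a,b$ are arbitrary):

```python
import math

def beta(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

a, b = 0.7, 1.3

# one step: beta(1+a, b) = a/(a+b) * beta(a, b)
assert math.isclose(beta(1 + a, b), a / (a + b) * beta(a, b), rel_tol=1e-12)

# iterated: beta(x+a, y+b)/beta(a, b) for integers x, y >= 0 is the
# rational function prod(a+i) * prod(b+j) / prod(a+b+m)
def shifted_ratio(x, y, a, b):
    num = math.prod(a + i for i in range(x)) * math.prod(b + j for j in range(y))
    den = math.prod(a + b + m for m in range(x + y))
    return num / den

assert math.isclose(beta(5 + a, 8 + b) / beta(a, b),
                    shifted_ratio(5, 8, a, b), rel_tol=1e-9)
```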
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Find a side of a triangle given other two sides and an angle I have a really simple-looking question, but I have no clue how I can go about solving it?
The question is
Find the exact value of $x$ in the following diagram:
Sorry for the silly/easy question, but I'm quite stuck! To use the cosine or sine rule, I'd need the angle opposite $x$, but I can't find that, cause I don't have anything else to help it along.
Thank You!
| Use the cosine rule with respect to the 60 degree angle. Then you get an equation involving $x$ as a variable; then you solve the equation for $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183864",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
A question about applying Arzelà-Ascoli An example of an application of Arzelà-Ascoli is that we can use it to prove that the following operator is compact:
$$ T: C(X) \to C(Y), f \mapsto \int_X f(x) k(x,y)dx$$
where $f \in C(X), k \in C(X \times Y)$ and $X,Y$ are compact metric spaces.
To prove that $T$ is compact we can show that $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ is bounded and equicontinuous so that by Arzelà-Ascoli we get what we want. It's clear to me that if $TB_{\|\cdot\|_\infty} (0,1)$ is bounded then $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ is bounded too. What is not clear to me is why $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ is equicontinuous if $TB_{\|\cdot\|_\infty} (0, 1)$ is.
I think about it as follows: $TB_{\|\cdot\|_\infty} (0, 1)$ is dense in $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ with respect to $\|\cdot\|_\infty$ hence all $f$ in $\overline{TB_{\|\cdot\|_\infty} (0, 1)}$ are continuous (since they are uniform limits of continuous sequences). Since $Y$ is compact they are uniformly continuous. Now I don't know how to argue why I get equicontinuity from this. Thanks for your help.
| Following tb's comment:
Claim: If $\{f_n\}$ is equicontinuous and $f_n \to f$ uniformly then $\{f\} \cup \{f_n\}$ is equicontinuous.
Proof: Let $\varepsilon > 0$.
(i) Let $\delta^\prime$ be the delta that we get from equicontinuity of $\{f_n\}$ so that $d(x,y) < \delta^\prime$ implies $|f_n(x) - f_n(y)| < \varepsilon$ for all $n$.
(ii) Since $f_n \to f$ uniformly, $f$ is continuous and since $X$ is compact, $f$ is uniformly continuous so there is a $\delta^{\prime \prime}$ such that $d(x,y) < \delta^{\prime \prime}$ implies $|f(x) - f(y)| < \varepsilon$.
Now let $\delta = \min(\delta^\prime, \delta^{\prime \prime})$ then $d(x,y) < \delta$ implies $|g(x) - g(y)| < \varepsilon$ for all $g$ in $\{f\} \cup \{f_n\}$.
Edit
What I wrote above is rubbish and doesn't lead anywhere. As pointed out in Ahriman's comment to the OP, we don't need continuity of $f$. We can bound $f$ as follows (in analogy to the proof of the uniform limit theorem): Let $\delta$ be such that $|f_n(x) - f_n(y)| < \varepsilon / 3$ for all $n$ and all $x,y$ with $d(x,y) < \delta$. Since $f$ is the uniform limit of $f_n$, for $x$ fixed, $f_n(x)$ is a Cauchy sequence converging to $f(x)$. Let $n$ be such that $\|f-f_n\|_\infty < \varepsilon / 3$. Then
$$ |f(x) - f(y)| \leq |f(x) - f_n(x)| + |f_n(x) - f_n(y)| + |f(y)-f_n (y)| < \varepsilon$$
Hence we may choose $\delta$ such that $|f(x) - f(y)| < \varepsilon / 3$ for all $f$ in $TB(0,1)$ to get that $\overline{TB(0,1)}$ is equicontinuous, too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
direct sum of image and kernel in a infinitedimensional space Is it true that in an infinitdimensional Hilbert space the formula $$\text{im} S\oplus \ker S =H$$holds, where $S:H\rightarrow H$ ?
I know it is true for finitely many dimensions but I'm not so sure about infinitely many. Would it be true under some additional assumption, like assuming that the rank of $S$ is finite ?
| Consider $l_2$ and the shift operator $T:~e_n\mapsto e_{n+1}$; it is injective but not surjective, so $\ker T=0$, $\operatorname{im} T\neq l_2$, and $\ker T\oplus \operatorname{im} T\neq l_2$.
If $\operatorname{rank} T$ is finite, I remember having learned in some book that the equality holds (but it's vague to me now, so take care).
You can try to consider the coordinate functions on the finite-dimensional space $\operatorname{im} T$ in this case.
Other kinds of additional assumptions may involve
*
*Projection operator
*Compact operator
*Fredholm operator
Try your best
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/183984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to find $\lim\limits_{x\to0}\frac{e^x-1-x}{x^2}$ without using l'Hopital's rule nor any series expansion? Is it possible to determine the limit
$$\lim_{x\to0}\frac{e^x-1-x}{x^2}$$
without using l'Hopital's rule nor any series expansion?
For example, suppose you are a student that has not studied derivative yet (and so not even Taylor formula and Taylor series).
| Define $f(x)=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n$. One possibility is to take $f(x)$ as the definition of $e^x$. Since the OP has suggested a different definition, I will show they agree.
If $x=\frac{p}{q}$ is rational, then
\begin{eqnarray*}
f(x)&=&\lim_{n\to\infty}\left(1+\frac{p}{qn}\right)^n\\
&=&\lim_{n\to\infty}\left(1+\frac{p}{q(pn)}\right)^{pn}\\
&=&\lim_{n\to\infty}\left(\left(1+\frac{1}{qn}\right)^n\right)^p\\
&=&\lim_{n\to\infty}\left(\left(1+\frac{1}{(qn)}\right)^{(qn)}\right)^{p/q}\\
&=&\lim_{n\to\infty}\left(\left(1+\frac{1}{n}\right)^{n}\right)^{p/q}\\
&=&e^{p/q}
\end{eqnarray*}
Now, $f(x)$ is clearly non-decreasing, so
$$
\sup_{p/q\leq x}e^{p/q}\leq f(x)\leq \inf_{p/q\geq x}e^{p/q}
$$
It follows that $f(x)=e^x$.
Now, we have
\begin{eqnarray*}
\lim_{x\to0}\frac{e^x-1-x}{x^2}&=&\lim_{x\to0}\lim_{n\to\infty}\frac{\left(1+\frac{x}{n}\right)^n-1-x}{x^2}\\
&=&\lim_{x\to0}\lim_{n\to\infty}\frac{n-1}{2n}+\sum_{k=3}^n\frac{{n\choose k}}{n^k}x^{k-2}\\
&=&\frac{1}{2}+\lim_{x\to0}x\lim_{n\to\infty}\sum_{k=3}^n\frac{{n\choose k}}{n^k}x^{k-3}\\
\end{eqnarray*}
We want to show that the limit in the last line is 0. We have $\frac{{n\choose k}}{n^k}\leq\frac{1}{k!}\leq 2^{-(k-3)}$, so we have
\begin{eqnarray*}
\left|\lim_{x\to0}x\lim_{n\to\infty}\sum_{k=3}^n\frac{{n\choose k}}{n^k}x^{k-3}\right|&\leq&\lim_{x\to0}|x|\lim_{n\to\infty}\sum_{k=3}^n \left(\frac{|x|}{2}\right)^{k-3}\\
&=&\lim_{x\to0}|x| \frac{1}{1-\frac{|x|}{2}}\\
&=&0
\end{eqnarray*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 7,
"answer_id": 1
} |
Evaluation of $\sum\limits_{n=0}^\infty \left(\operatorname{Si}(n)-\frac{\pi}{2}\right)$? I would like to evaluate the sum
$$
\sum\limits_{n=0}^\infty \left(\operatorname{Si}(n)-\frac{\pi}{2}\right)
$$
Where $\operatorname{Si}$ is the sine integral, defined as:
$$\operatorname{Si}(x) := \int_0^x \frac{\sin t}{t}\, dt$$
I found that the sum could be also written as
$$
-\sum\limits_{n=0}^\infty \int_n^\infty \frac{\sin t}{t}\, dt
$$
Anyone have any ideas?
| We want (changing the sign and starting with $n=1$) :
$$\tag{1}S(0)= -\sum_{n=1}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)$$
Let's insert a 'regularization parameter' $\epsilon$ (small positive real $\epsilon$ taken at the limit $\to 0^+$ when needed) :
$$\tag{2} S(\epsilon) = \sum_{n=1}^\infty \int_n^\infty \frac {\sin(x)e^{-\epsilon x}}x\,dx$$
$$= \sum_{n=1}^\infty \int_1^\infty \frac {\sin(nt)e^{-\epsilon nt}}t\,dt$$
$$= \int_1^\infty \sum_{n=1}^\infty \Im\left( \frac {e^{int-\epsilon nt}}t\right)\,dt$$
$$= \int_1^\infty \frac {\Im\left( \sum_{n=1}^\infty e^{int(1+i\epsilon )}\right)}t\,dt$$
(these transformations should be justified...)
$$S(\epsilon)= \int_1^\infty \frac {\Im\left(\dfrac {-e^{it(1+i\epsilon)}}{e^{it(1+i\epsilon)}-1}\right)}t\,dt$$
But
$$\Im\left(\dfrac {-e^{it(1+i\epsilon)}}{e^{it(1+i\epsilon)}-1}\right)=\Im\left(\dfrac {i\,e^{it(1+i\epsilon)/2}}2\frac{2i}{e^{it(1+i\epsilon)/2}-e^{-it(1+i\epsilon)/2}}\right)$$
Taking the limit $\epsilon \to 0^+$ we get GEdgar's expression :
$$\frac {\cos(t/2)}{2\sin(t/2)}=\frac {\cot\left(\frac t2\right)}2$$
To make sense of the (multiple poles) integral obtained :
$$\tag{3}S(0)=\int_1^\infty \frac{\cot\left(\frac t2\right)}{2t}\,dt$$
let's use the cot expansion applied to $z=\frac t{2\pi}$ :
$$\frac 1{2t}\cot\left(\frac t2\right)=\frac 1{2\pi t}\left[\frac {2\pi}t-\sum_{k=1}^\infty\frac t{\pi\left(k^2-\left(\frac t{2\pi}\right)^2\right)}\right]$$
$$\frac 1{2t}\cot\left(\frac t2\right)=\frac 1{t^2}-\sum_{k=1}^\infty\frac 2{(2\pi k)^2-t^2}$$
Integrating from $1$ to $\infty$ the term $\frac 1{t^2}$ and all the terms of the series considered as Cauchy Principal values $\ \displaystyle P.V. \int_1^\infty \frac 2{(2\pi k)^2-t^2} dt\ $ we get :
$$\tag{4}S(0)=1+\sum_{k=1}^\infty\frac {\mathrm{atanh}\bigl(\frac 1{2\pi k}\bigr)}{\pi k}$$
and the result :
$$\tag{5}\boxed{\displaystyle\sum_{n=0}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)=-1-\frac{\pi}4-\sum_{k=1}^\infty\frac {\mathrm{atanh}\bigl(\frac 1{2\pi k}\bigr)}{\pi k}}$$$$\approx -1.8692011939218853347728379$$
(and I don't know why the $\frac {\pi}2$ term re-inserted from the case $n=0$ became a $\frac {\pi}4$ i.e. the awaited answer was $-S(0)-\frac{\pi}2$ !)
Let's try to rewrite this result using the expansion of the $\mathrm{atanh}$ :
$$\mathrm{atanh(x)}=\sum_{n=0}^\infty \frac{x^{2n+1}}{2n+1}$$
so that
$$A=\sum_{k=1}^\infty\frac {\mathrm{atanh}\bigl(\frac 1{2\pi k}\bigr)}{\pi k}=\sum_{k=1}^\infty \sum_{n=0}^\infty \frac 1{\pi k(2\pi k)^{2n+1}(2n+1)}$$
$$=\sum_{n=0}^\infty \frac 2{(2n+1)(2\pi)^{2n+2}}\sum_{k=1}^\infty \frac 1{ k^{2n+2}}$$
$$=2\sum_{n=1}^\infty \frac {\zeta(2n)}{2n-1}a^{2n}\quad \text{with}\ \ a=\frac 1{2\pi} $$
$$\tag{6}\boxed{\displaystyle\sum_{n=0}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)=-1-\frac{\pi}4-2\sum_{n=1}^\infty \frac {\zeta(2n)}{(2n-1)(2\pi)^{2n}}}$$
and... we are back to the cotangent function again since it is a generating function for even $\zeta$ constants !
$$1-z\,\cot(z)=2\sum_{n=1}^\infty \zeta(2n)\left(\frac z{\pi}\right)^{2n}$$
Here we see directly that
$$A=\frac 12\int_0^{\frac 12} \frac {1-z\,\cot(z)}{z^2} dz$$
with the integral result :
$$\tag{7}\boxed{\displaystyle\sum_{n=0}^\infty \left(\mathrm{Si}(n)-\frac{\pi}{2}\right)=-1-\frac{\pi}4-\int_0^1 \frac 1{t^2}-\frac {\cot\left(\frac t2\right)}{2t} dt}$$
(this shows that there was probably a more direct way to make (3) converge but all journeys are interesting !)
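Formula $(5)$ is easy to evaluate numerically (plain Python; the terms behave like $1/(2\pi^2 k^2)$, so truncating the sum at $k=10^5$ leaves an error of order $10^{-6}$):

```python
import math

N = 100_000
A = sum(math.atanh(1 / (2 * math.pi * k)) / (math.pi * k) for k in range(1, N + 1))
S = -1 - math.pi / 4 - A     # right-hand side of formula (5)

# matches the decimal value quoted above
assert abs(S - (-1.8692011939218853)) < 1e-5
```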
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29",
"answer_count": 5,
"answer_id": 2
} |
Trigonometry- why we need to relate to circles I'm a trigonometry teaching assistant this semester and have a perhaps basic question about the motivation for using the circle in the study of trigonometry. I certainly understand Pythagorean Theorem and all that (I would hope so if I'm a teaching assistant!) but am looking for more clarification on why we need the circle, not so much that we can use it.
I'll be more specific- to create an even angle incrementation, it seems unfeasible, for example, to use the line $y = -x+1$, divide it into even increments, and then draw segments to the origin to this lattice of points, because otherwise $\tan(\pi/3)$ would equal $(2/3)/(1/3)=2$. But why mathematically can't we do this?
| Suppose that instead of parametrizing the circle by arc length $\theta$, so that $(\cos\theta,\sin\theta)$ is a typical point on the circle, one parametrizes it thus:
$$
t\mapsto \left(\frac{1-t^2}{1+t^2}, \frac{2t}{1+t^2}\right)\text{ for }t\in\mathbb{R}\cup\{\infty\}. \tag{1}
$$
The parameter space is the one-point compactification of the circle, i.e. there's just one $\infty$, which is at both ends of the line $\mathbb{R}$, rather than two, called $\pm\infty$. So $\mathbb{R}\cup\{\infty\}$ is itself topologically a circle, and $\infty$ is mapped to $(-1,0)$.
Now do some geometry: let $t$ be the $y$-coordinate of $(0,t)$, and draw a straight line through $(-1,0)$ and $(0,t)$, and look at the point where that line intersects the circle. That point of intersection is just the point to which $t$ is mapped in $(1)$.
Later edit: an error appears below. I just noticed I did something dumb: the mapping between the circle and the line $y=1-x$ that associates a point on that line with a point on that circle if the line through them goes through $(0,0)$ is not equivalent to the one in $(1)$ because the center of projection is the center of the circle rather than a point on the circle. end of later edit
This mapping is in a sense equivalent to the one you propose: I think you can find an affine mapping from $t$ to your $x$ on the line $y=-x+1$, such that the point on the circle to which $t$ is mapped and the point on the circle to which $x$ is mapped are related by linear-fractional transformations of the $x$- and $y$-coordinates.
The substitution
$$
\begin{align}
(\cos\theta,\sin\theta) & = \left(\frac{1-t^2}{1+t^2}, \frac{2t}{1+t^2}\right) \\[10pt]
d\theta & = \frac{2\,dt}{1+t^2}
\end{align}
$$
is the Weierstrass substitution, which transforms integrals of rational functions of sine and cosine, to integrals of simply rational functions. I'm pretty sure proposed mapping from the $(x,y=-x+1)$ to the circle would accomplish the same thing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
The base of a triangular prism is $ABC$, and $A'B'C'$ is an equilateral triangle with side length $a$; the lateral edges also have length $a$. Let $I$ be the midpoint of $AB$ and suppose $B'I \perp (ABC)$. Find the distance from $B'$ to the plane $(ACC'A')$ in terms of $a$.
| Let $BB'$ be along $x$ axis and $BC$ be along $y$ axis ($B$ being the origin). Given that $B'I$ is perpendicular to $BA$, $\angle{ABC}$ will be $\pi/3$ (as $\Delta BIB'$ is a $(1,\sqrt{3},2)$ right-triangle). The co-ordinates of $A$ will then be of the form $\left(a\cos{\pi/3},a\cos{\pi/3},h\right)$. As the length of $AB$ is $a$, it leads to
$$
\left(a\cos{\pi/3}\right)^{2}+\left(a\cos{\pi/3}\right)^{2}+h^{2}=a^{2} \\
h=\frac{a}{\sqrt{2}}
$$
Due to symmetry the height of $A$ from plane $BCC'B'$ is the same as $B'$ to $ACC'A'$, which is $a/\sqrt{2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Application of Radon Nikodym Theorem on Absolutely Continuous Measures I have the following problem:
Show $\beta \ll \eta$ if and only if for every $\epsilon > 0 $ there exists a $\delta>0$ such that $\eta(E)<\delta$ implies $\beta(E)<\epsilon$.
For the forward direction I had a proof, but it relied on the use of the false statement that "$h$ integrable implies that $h$ is bounded except on a set of measure zero".
I had no problem with the backward direction.
| Assume that $\beta=h\eta$ with $h\geqslant0$ integrable with respect to $\eta$, in particular $\beta$ is a finite measure. Let $\varepsilon\gt0$.
There exists some finite $t_\varepsilon$ such that $\beta(B_\varepsilon)=\int_{B_\varepsilon} h\,\mathrm d\eta\leqslant\varepsilon$ where $B_\varepsilon=[h\geqslant t_\varepsilon]$. Note that, for every measurable $A$, $A\subset B_\varepsilon\cup(A\setminus B_\varepsilon)$, hence $\beta(A)\leqslant\beta(B_\varepsilon)+\beta(A\cap[h\leqslant t_\varepsilon])\leqslant\varepsilon+t_\varepsilon\eta(A)$.
Let $\delta=\varepsilon/t_\varepsilon$. One sees that, for every measurable $A$, if $\eta(A)\leqslant\delta$, then $\beta(A)\leqslant2\varepsilon$, QED.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
using central limit theorem I recently got a tute question which I don't know how to proceed with and I believe that the tutor won't provide solution... The question is
Pick a real number randomly (according to the uniform measure) in the interval $[0, 2]$. Do this one million times and let $S$ be the sum of all the numbers. What, approximately, is the probability that
a) $S\ge1,$
b) $S\ge0.001,$
c) $S\ge0$?
Express as a definite integral of the function $e^\frac{-x^2}{2}$.
Can anyone show me how to do it? It is in fact from a Fourier analysis course but I guess I need some basic result from statistcs which I am not familiar with at all..
| Let's call $S_n$ the sum of the first $n$ terms. Then for $0 \le x \le 1$ it can be shown by induction that $\Pr(S_n \le x) = \dfrac{x^n}{2^n \; n!}$
So the exact answers are
a) $1 - \dfrac{1}{2^{1000000} \times 1000000!}$
b) $1 - \dfrac{1}{2000^{1000000} \times 1000000!}$
c) $1$
The first two are extremely close to 1; the third is 1. The central limit theorem will not produce helpful approximations here, so you may have misquoted the question.
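A Monte Carlo sanity check of the closed form (my own addition), in the case $n=2$, $x=1$, where the formula predicts $1/(2^2\cdot 2!) = 0.125$:

```python
import random

# Estimate Pr(S_2 <= 1) for two uniforms on [0, 2]; the formula
# x^n / (2^n n!) with n = 2, x = 1 predicts 1/8 = 0.125.
random.seed(42)
N = 200_000
hits = sum(random.uniform(0, 2) + random.uniform(0, 2) <= 1 for _ in range(N))
assert abs(hits / N - 0.125) < 0.01
```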
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that for this function the stated is true.
For the function
$$G(w) = \frac{\sqrt2}{2}-\frac{\sqrt2}{2}e^{iw},$$
show that
$$G(w) = -\sqrt2ie^{iw/2} \sin(w/2).$$
Hey everyone, I'm very new to this kind of maths and would really appreciate any help. Hopefully I can get an idea from this and apply it to other similar questions. Thank you.
| Use the definition for the complex sine:
$$ \sin(z)=\frac{ e^{iz}-e^{-iz} } {2i} $$
Thus,
$$-\sqrt{2}ie^{i\frac{w}{2}}\sin\frac{w}{2} =-\sqrt{2}ie^{i\frac{w}{2}}(\frac{1}{2i}(e^{i\frac{w}{2}} - e^{-i\frac{w}{2}})) $$
Now simplify to get your result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Safe use of generalized inverses Suppose I'm given a linear system $$Ax=b,$$ with unknown $x\in\mathbb{R}^n$, and some symmetric $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^n$. Furthermore, it is known that $A$ is not a full-rank matrix, and that its rank is $n-1$; therefore, $A$ is not invertible. However, to compute the "solution" $x$, one may use $x=A^+b$, where $A^+$ is a generalized inverse of $A$, i.e., the Moore-Penrose inverse.
What are the characteristics of such a solution? More precisely, under which conditions will $x=A^+b$ give the exact solution to the system (supposing the exact solution exists)? Could one state that in the above case, with the additional note that $b$ is orthogonal to the null space of $A$, the generalized inverse will yield the exact solution to the system?
| Let $\tilde x = A^+b$. Then obviously $A\tilde x = AA^+b$. But since $AA^+$ is an orthogonal projector, and specifically $I-AA^+$ is the projector to the null space of the Hermitian transpose of $A$, $\tilde x$ is a solution iff $b$ is orthogonal to the null space of $AA^+$, that is, orthogonal to the null space of the Hermitian transpose of $A$.
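A toy illustration of this (my own example, not from the answer), using a rank-one symmetric $2\times 2$ matrix whose Moore-Penrose inverse is easy to write down by hand:

```python
# A = [[1,1],[1,1]] is symmetric with rank 1; its null space is spanned by
# (1, -1), and its Moore-Penrose inverse is A/4 (check the Penrose axioms).
A      = [[1.0, 1.0], [1.0, 1.0]]
A_pinv = [[0.25, 0.25], [0.25, 0.25]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# b orthogonal to the null space: the pseudoinverse gives an exact solution.
b = [1.0, 1.0]
x = matvec(A_pinv, b)
assert matvec(A, x) == b                 # A x = b exactly

# b with a null-space component: A x is only the projection of b onto range(A).
b2 = [1.0, 0.0]
x2 = matvec(A_pinv, b2)
assert matvec(A, x2) == [0.5, 0.5]       # the projection of b2, not b2 itself
```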
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is the last digit of $n^5$ equal to the last digit of $n$? I was wondering why the last digit of $n^5$ is that of $n$? What's the proof and logic behind the statement? I have no idea where to start. Can someone please provide a simple proof or some general ideas about how I can figure out the proof myself? Thanks.
| If $\gcd(a, n) = 1$ then by Euler's theorem,
$$a^{\varphi(n)} \equiv 1 \pmod{n}$$
From the tables and as @SeanEberhard stated, $$ \varphi(10) = \varphi(5\cdot 2) = 10\left( 1 - \frac{1}{5} \right) \cdot \left(1 - \frac{1}{2} \right)$$
$$= 10\left(\frac{4}{5} \right) \cdot \left(\frac{1}{2} \right) = 4$$
Let $n=10$ and thus,
$$a^{\varphi(10)} \equiv 1 \pmod{10} \implies a^{4} \equiv 1 \pmod{10}$$
Multiply both sides by $a$,
$$a^{5} \equiv a \pmod{10}$$
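Since everything only depends on the last digit, the full statement (including the residues not coprime to $10$, which Euler's theorem does not cover directly) can be checked exhaustively:

```python
# The last digit of n^5 depends only on n mod 10, so ten cases suffice.
assert all(n**5 % 10 == n % 10 for n in range(10))
```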
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 7,
"answer_id": 1
} |
divisibility for numbers like 13,17 and 19 - Compartmentalization method For denominators like 13, 17 i often see my professor use a method to test whether a given number is divisible or not. The method is not the following :
Ex for 17 : subtract 5 times the last digit from the original number, the resultant number should be divisible by 17 etc...
The method is similar to divisibility of 11. He calls it as compartmentalization method. Here it goes.
rule For 17 :
take 8 digits at a time (form the sum of the 8-digit groups at odd places minus the sum of the 8-digit groups at even places)
For example: $9876543298765432\ldots$ ($80$ digits). Test whether this is divisible by $17$ or not.
There will be an equal number of groups (of 8 digits taken at a time) at odd and even places; therefore the given number is divisible by $17$. That is the explanation.
The number 8 above differs based on the denominator he is considering.
I am not able to understand the method and logic both. Kindly clarify.
Also for other numbers like $13$ and $19$, what is the number of digits i should take at a time? In case my question is not clear, please let me know.
| Your professor is using the fact that $100000001=10^8+1$ is divisible by $17$. Given for example your $80$-digit number, you can subtract $98765432\cdot 100000001=9876543298765432$, which will leave zeros in the last $16$ places. Slash the zeros, and repeat. After $5$ times you are left with the number $0$, which is divisible by $17$, and hence your $80$-digit number must also be divisible by $17$.
When checking for divisibility by $17$, you can also subtract multiples of $102=6\cdot 17$ in the same way.
For divisibility by $7$, $11$, or $13$, you can subtract any multiple of the number $1001=7\cdot 11\cdot 13$ without affecting divisibility by these three numbers. For example, $6017-6\cdot 1001=11$, so $6017$ is divisible by $11$, but not by $7$ or $13$.
For divisibility by $19$, you can use the number $1000000001=10^9+1=7\cdot 11\cdot 13\cdot 19\cdot 52579$. By subtracting multiples of this number, you will be left with a number of at most $9$ digits, which you can test for divisibility by $19$ by performing the division.
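Both facts used above are easy to confirm by direct computation (my own check):

```python
# 17 divides 10^8 + 1, which is what makes the 8-digit grouping work,
# and the 80-digit example (the block 98765432 repeated ten times) is
# indeed divisible by 17.
assert (10**8 + 1) % 17 == 0
n = int("98765432" * 10)        # the 80-digit test number
assert n % 17 == 0
```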
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Solving a literal equation containing fractions. I know this might seem very simple, but I can't seem to isolate x.
$$\frac{1}{x} = \frac{1}{a} + \frac{1}{b} $$
Please show me the steps to solving it.
| You should combine $\frac1a$ and $\frac1b$ into a single fraction using a common denominator as usual:
$$\begin{eqnarray}
\frac1x& = &\frac1a + \frac1b \\
&=&\frac{b}{ab} + \frac{a}{ab} \\
&=&\frac{b+a}{ab}
\end{eqnarray}$$
So we get: $$x = {ab\over{b+a}}.$$
Okay?
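A quick check on a concrete pair (my own addition), using exact rational arithmetic:

```python
from fractions import Fraction

# With a = 3 and b = 6, the formula x = ab/(a+b) gives x = 2,
# and indeed 1/2 = 1/3 + 1/6.
a, b = Fraction(3), Fraction(6)
x = a * b / (a + b)
assert x == 2
assert 1 / x == 1 / a + 1 / b
```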
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Why Does Finitely Generated Mean A Different Thing For Algebras? I've always wondered why finitely generated modules are of form
$$M=Ra_1+\dots+Ra_n$$
while finitely generated algebras have form
$$R=k[a_1,\dots, a_n]$$
and finite algebras have form
$$R=ka_1+\dots +ka_n$$
It seems to me that this is a flagrant abuse of nomenclature, and it has certainly confused me in the past. Is there some historical reason for this terminology? Any references to the genesis of these terms would be greatly appreciated!
| The terminology is actually very appropriate and precise. Consider that "A is a finitely generated X" means "there exists a finite set G such that A is the smallest X containing G".
Looking at your examples, suppose $M$ is a finitely generated module, generated by $a_1,\dots,a_n$. Then $M$ contains $a_1,\dots,a_n$. Since it is a module, it must contain all elements of the form $Ra_i$ and their sums, so it must contain the module $Ra_1+\dots+Ra_n$. However, since this latter object is in fact a module, $M$ need not contain anything else and is in fact equal to this module.
If $R$ is a finitely generated algebra, we can go through the same procedure as before. However, since algebras have an additional operation (multiplication), we must allow not only sums of elements of the form $ka_n$ but also their products. This gives us that $R$ must contain all polynomial expressions in the elements $a_1,\dots,a_n$, i.e. it must contain the algebra $k[a_1,\dots,a_n]$. Again, since this latter object is in fact an algebra, $R$ need not contain anything else and is equal to this algebra.
A finite algebra seems to be a name for an algebra which is finitely generated as a module. Your example is then consistent with what I wrote above. I do admit that the name seems somewhat misleading.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
relation between integral and summation What is the relation between a summation and an integral? This question is actually based on a previous question of mine here, where I got two answers, one based on summation notation and the other based on integral notation, and I do not know yet which one to accept. So I would like to understand the connection between the two.
| The Riemann or Lebesgue integral is in a sense a continuous sum. The symbol $\int$ is adapted from a letter looking like a somewhat elongated 's' from the word summa. In the definitions of the Riemann and the Lebesgue integrals the ordinary finite summation $\sum _{k=1}^n$ is used, but the relation between the two is deeper than a mere 'one is used in the definition of the other' kind of relation.
The modern approach to viewing integrals is measure theory. Without getting into too many technical details, one defines a measure space as an abstraction of the notion of measuring length or volume. Then real-valued functions on a measure space may be integrated to give a single real number.
For a particular measure space one gets the Lebesgue integral which itself, in a suitable way, subsumes the Riemann integral. On the other hand, given any set $X$ with $n$ elements there is a measure space structure on it such that for any function $f:X\to \mathbb {R}$ the integral of $f$ with respect to that measure is precisely the sum of the values that $f$ attains. In that sense general measure theory subsumes finite sums (of real numbers). More interestingly, given any countable set $X$ there is a measure on it such that the integrals of real-valued functions on $X$ correspond to the infinite sums of their values. (Both of these remarks are trivial.) Thus measure theory subsumes infinite sums. From that point of view, a summation corresponds to integrals on a discrete measure space and the Lebesgue or Riemann integral corresponds to integrals on a continuous measure space.
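The finite case above can be spelled out concretely (a toy example of my own): integrating against the counting measure on a finite set is exactly summation.

```python
# Counting measure on X = {1, 2, 3, 4}: the "integral" of f is the sum
# of the values f attains.
X = [1, 2, 3, 4]
f = lambda k: k * k
integral = sum(f(x) for x in X)   # plays the role of the integral of f
assert integral == 30
```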
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/184979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 5,
"answer_id": 3
} |
A variable for the first 8 integers? I wish to use algebra to (is the term truncate?) the set of positive integers to the first 8 and call it for example 'n'.
In order to define $r_n = 2n$ or similar.
This means:
$$r_0 = 0$$
$$r_1 = 2$$
$$\ldots$$
$$r_7 = 14$$
However there would not be an $r_8$.
edit: Changed "undefined" to "would not be", sorry about this.
| You've tagged this abstract-algebra and group-theory but it's not entirely clear what you mean.
However, by these tags, perhaps you are referring to $\left(\Bbb Z_8, +\right)$?
In such a case, you have $r_1+r_7 = 1+_8 7 = 8 \mod 8 = 0$. So there is no $r_8$ per se; however, the re-definition of the symbols $r_0, r_1, \ldots$ is superfluous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Can this type of series retain the same value? Let $H$ be a Hilbert space and $\sum_k x_k$ a countably infinite sum in it. Let's say we partition the sequence $(x_k)_k$ into a sequence of blocks of finite length and change the order of summation only within those blocks, like this (for brevity, illustrated only for the first two blocks): $$(x_1,\ldots,x_k,x_{k+1},\ldots,x_{k+l},\ldots )$$ becomes
$$(x_{\pi(1)},\ldots,x_{\pi(k)},x_{\gamma(k+1)},\ldots,x_{\gamma(k+l)},\ldots ),$$ where $\pi$ and $\gamma$ are permutations.
If we denote the elements of the second sequence by $x'$, does anyone know what will happen to the series $\sum _k x'_k$ in this case? Can it stay the same? Does staying the same require additional assumptions?
| If both series converge, it doesn't change anything. This can be easily seen by considering partial sums.
Put $k_j$ as the cumulative length of the first $j$ blocks. Then clearly $\sum_{j=1}^{k_n} x_j=\sum_{j=1}^{k_n} x_j'$ for any $n$, so assuming both series converge, we have that
$$\sum_j x_j=\lim_{n\to \infty}\sum_{j=1}^{k_n} x_j=\lim_{n\to \infty}\sum_{j=1}^{k_n} x'_j=\sum_j x_j'$$
On the other hand, we can change a (conditionally) convergent series into a divergent one using this method – if we take the alternating harmonic sequence $x_n=(-1)^n/n$, sorting large enough blocks so that all the positive elements come before all the negative elements, we can get infinitely many arbitrarily large “jumps”, preventing convergence.
On another note, if the length of permuted blocks is bounded, then I think this kind of thing could not happen (again, by considering partial sums).
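The partial-sum argument is easy to see numerically (my own demo): shuffling inside finite blocks leaves every partial sum taken at a block boundary unchanged.

```python
import math
import random

x = [(-1)**n / n for n in range(1, 31)]      # 30 alternating-harmonic terms
blocks = [x[i:i + 5] for i in range(0, 30, 5)]
random.seed(0)
for b in blocks:
    random.shuffle(b)                        # permute inside each block only
xp = [t for b in blocks for t in b]          # the rearranged sequence

for k in range(5, 31, 5):                    # sums at block boundaries agree
    assert math.isclose(sum(x[:k]), sum(xp[:k]))
```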
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Graph $f(x)=e^x$
Graph $f(x)=e^x$
I have no idea how to graph this. I looked on wolframalpha and it is just a curve. But how would I come up with this curve without the use of other resources (i.e. on a test).
| You mention in your question that "I have no idea how to graph this" and "how would I come up with this curve without the use of other resources (i.e. on a test)". I know that you have already accepted one answer, but I thought that I would add a bit.
In my opinion, the best thing is to remember (memorize if you will) certain types of functions and their graphs. So you probably already know how to graph a linear function like $f(x) = 5x -4$. Before a test, you could ask your teacher which types of functions you would be required to be able to sketch by hand without any graphing calculator. Then you could go study these different types.
Now, if you don't have a graphing calculator, but you have a simple calculator, then you could also try to sketch the graph of a function by just "plotting points". So you draw a coordinate system with an $x$-axis and a $y$-axis, and for various values of $x$ you calculate corresponding values of $y$ and then you plot those points. In the end you connect all the dots with lines/curves (so if the dots are all on a straight line, then you can just draw a straight line, but if things seem to curve one way or the other, then you try to "curve with the points").
No matter what, I would recommend trying to sit down with a piece of paper. Draw a coordinate system (maybe using a ruler) and try to sketch graphs of various functions. This will familiarize you with what the graphs of certain functions look like.
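To make the "plotting points" advice concrete for $f(x)=e^x$, here is the kind of value table one would compute with a simple calculator (values rounded to three decimals):

```python
import math

# Tabulate a few (x, e^x) pairs; plot these dots and curve through them.
for x in [-2, -1, 0, 1, 2]:
    print(f"x = {x:>2}   e^x ≈ {math.exp(x):.3f}")
```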
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
definition of the exterior derivative I have a question concerning the definition of $d^*$. It is usually defined to be the (formal) adjoint of $d$. What is the meaning of "formal"? Is it not just the adjoint of $d$? Thanks
| I will briefly answer two questions here. First, what does the phrase "formal adjoint" mean in this context? Second, how is the adjoint $d^*$ actually defined?
Definitions: $\Omega^k(M)$ ($M$ a smooth oriented $n$-manifold with a Riemannian metric) is a pre-Hilbert space with inner product $$\langle \omega,\eta\rangle_{L^2} = \int_M \omega\wedge *\eta.$$
(Here, $*$ is the Hodge operator.)
For the first question, the "formal adjoint" of the operator $d$ is the operator $d^*$ (if it exists, from some function space to some other function space) that has the property
$$\langle d\omega,\eta\rangle_{L^2} = \langle \omega,d^*\eta\rangle_{L^2}.$$
For the second question, the operator $d^*$ is actually defined as a map $\Omega^{k+1}(M)\to\Omega^k(M)$ by setting $$d^* = \pm * d\, *,$$ with a sign that depends on the degree and the dimension.
Adjointness is proven by using integration by parts and Stokes' theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
The primes $p$ of the form $p = -(4a^3 + 27b^2)$ The current question is motivated by this question.
It is known that the number of imaginary quadratic fields of class number 3 is finite.
Assuming the answer to this question is affirmative, I came up with the following question.
Let $f(X) = X^3 + aX + b$ be an irreducible polynomial in $\mathbb{Z}[X]$.
Let $p = -(4a^3 + 27b^2)$ be the discriminant of $f(X)$.
We consider the following conditions.
(1) $p = -(4a^3 + 27b^2)$ is a prime number.
(2) The class number of $\mathbb{Q}(\sqrt{p})$ is 3.
(3) $f(X) \equiv (X - s)^2(X - t)$ (mod $p$), where $s$ and $t$ are distinct rational integers mod $p$.
Question
Are there infinitely many primes $p$ satisfying (1), (2), (3)?
If this is too difficult, is there any such $p$?
I hope that someone would search for such primes using a computer.
| For (229, -4,-1) the polynomial factors as $(x-200)^2(x-58)$
For (1373, -8,-5) the polynomial factors as $(x-860)(x-943)^2$
For (2713, -13,-15) the polynomial factors as $(x-520)^2(x-1673)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Predicting the next vector given a known sequence I have a sequence of unit vectors $\vec{v}_0,\vec{v}_1,\ldots,\vec{v}_k,\ldots$ with the following property: $\lim_{i\rightarrow\infty}\vec{v}_{i} = \vec{\alpha}$, i.e. the sequence converges to a finite unit vector.
As the sequence is generated by a poorly known process, I am interested in modelling $\vec{v}_k$ given previous generated vectors $\vec{v}_0,\vec{v}_1,\ldots,\vec{v}_{k-1}$.
What are the available mathematical tools which allow me to discover a vector function $\vec{f}$ such that $\vec{v}_k\approx \vec{f}(\vec{v}_{k-1},\vec{v}_{k-2},\ldots,\vec{v}_{k-n})$, for a given $n$, in the $L_p$-norm sense?
EDIT: I am looking along the lines of Newton's Forward Difference Formula, which predicts interpolated values between tabulated points, except for two differences for my problem: 1) Newton's Forward Difference is applicable to a scalar sequence, and 2) I am doing extrapolation at one end of the sequence, not interpolation in between given values.
ADDITIONAL INFO: Below are plots of the individual components of an 8-tuple unit vector from a sequence of 200:
| A lot of methods effectively work by fitting a polynomial to your data, and then using that polynomial to guess a new value. The main reason for polynomials is that they are easy to work with.
Given that you know your functions have asymptotes, you may get better success by choosing a form that incorporates that fact, such as a rational function. If nothing else, you can always use a sledgehammer to derive a method -- e.g. use the method of least squares to select the coefficients of your rational functions.
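One way to make the last suggestion concrete (a sketch of my own, not from the answer): fit the rational model $y \approx (a_0 + a_1 x)/(1 + b_1 x)$, which has the horizontal asymptote $a_1/b_1$, by linearizing it as $a_0 + a_1 x - b_1(xy) = y$ and solving the resulting least-squares problem. With exactly three sample points this reduces to a $3\times 3$ linear solve, done here with Cramer's rule to avoid dependencies.

```python
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Synthetic data from y = (2 + 3x)/(1 + 0.5x), whose asymptote is 3/0.5 = 6.
pts = [(1.0, 10 / 3), (2.0, 4.0), (4.0, 14 / 3)]
M   = [[1.0, x, -x * y] for x, y in pts]   # rows of a0 + a1*x - b1*(x*y) = y
rhs = [y for _, y in pts]

D = det3(M)
def col_replaced(j):
    return [[rhs[i] if k == j else M[i][k] for k in range(3)] for i in range(3)]

a0, a1, b1 = (det3(col_replaced(j)) / D for j in range(3))
assert abs(a0 - 2) < 1e-9 and abs(a1 - 3) < 1e-9 and abs(b1 - 0.5) < 1e-9
assert abs(a1 / b1 - 6) < 1e-9             # the recovered asymptote
```

With more than three data points one would solve the same linearized system in the least-squares sense instead of exactly.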
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Rigorous proof of the Taylor expansions of sin $x$ and cos $x$ We learn trigonometric functions in high school, but their treatment is not rigorous.
Then we learn that they can be defined by power series in a college.
I think there is a gap between the two.
I'd like to fill in the gap in the following way.
Consider the upper right quarter of the unit circle $C$ = {$(x, y) \in \mathbb{R}^2$; $x^2 + y^2 = 1$, $0 \leq x \leq 1$, $y \geq 0$}.
Let $\theta$ be the arc length of $C$ from $x = 0$ to $x$, where $0 \leq x \leq 1$.
By the arc length formula, $\theta$ can be expressed by a definite integral of a simple algebraic function from 0 to $x$.
Clearly $\sin \theta = x$ and $\cos\theta = \sqrt{1 - \sin^2 \theta}$.
Then how do we prove that the Taylor expansions of $\sin\theta$ and $\cos\theta$ are the usual ones?
| Since $x^2 + y^2 = 1$, $y = \sqrt{1 - x^2}$,
$y' = \frac{-x}{\sqrt{1 - x^2}}$
By the arc length formula,
$\theta = \int_{0}^{x} \sqrt{1 + y'^2} dx = \int_{0}^{x} \frac{1}{\sqrt{1 - x^2}} dx$
We consider this integral on the interval [$-1, 1$] instead of [$0, 1$].
Then $\theta$ is a monotone strictly increasing function of $x$ on [$-1, 1$].
Hence $\theta$ has the inverse function defined on [$\frac{-\pi}{2}, \frac{\pi}{2}$]. We denote this function also by $\sin\theta$.
We redefine $\cos\theta = \sqrt{1 - \sin^2 \theta}$ on [$\frac{-\pi}{2}, \frac{\pi}{2}$].
Since $\frac{d\theta}{dx} = \frac{1}{\sqrt{1 - x^2}}$,
$(\sin \theta)' = \frac{dx}{d\theta} = \sqrt{1 - x^2} = \cos \theta$.
On the other hand, $(\cos\theta)' = \frac{d\sqrt{1 - x^2}}{d\theta} = \frac{d\sqrt{1 - x^2}}{dx} \frac{dx}{d\theta} = \frac{-x}{\sqrt{1 - x^2}} \sqrt{1 - x^2} = -x = -\sin\theta$
Hence
$(\sin\theta)'' = (\cos\theta)' = -\sin\theta$
$(\cos\theta)'' = -(\sin\theta)' = -\cos\theta$
Hence by the induction on $n$,
$(\sin\theta)^{(2n)} = (-1)^n\sin\theta$
$(\sin\theta)^{(2n+1)} = (-1)^n\cos\theta$
$(\cos\theta)^{(2n)} = (-1)^n\cos\theta$
$(\cos\theta)^{(2n+1)} = (-1)^{n+1}\sin\theta$
Since $\sin 0 = 0, \cos 0 = 1$,
$(\sin\theta)^{(2n)}(0) = 0$
$(\sin\theta)^{(2n+1)}(0) = (-1)^n$
$(\cos\theta)^{(2n)}(0) = (-1)^n$
$(\cos\theta)^{(2n+1)}(0) = 0$
Note that $|\sin\theta| \le 1$, $|\cos\theta| \le 1$.
Hence, by Taylor's theorem,
$\sin\theta = \sum_{n=0}^{\infty} (-1)^n \frac{\theta^{2n+1}}{(2n+1)!}$
$\cos\theta = \sum_{n=0}^{\infty} (-1)^n \frac{\theta^{2n}}{(2n)!}$
QED
Remark:
When you consider the arc length of the lemniscate instead of the circle, you will encounter
$\int_{0}^{x} \frac{1}{\sqrt{1 - x^4}} dx$. You may find interesting functions like we did with $\int_{0}^{x} \frac{1}{\sqrt{1 - x^2}} dx$.
This was young Gauss's approach and he found elliptic functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
The measure of $([0,1]\cap \mathbb{Q})×([0,1]\cap\mathbb{Q})$ We know that $[0,1]\cap \mathbb{Q}$ is a dense subset of $[0,1]$ and has measure zero, but what about $([0,1]\cap \mathbb{Q})\times([0,1]\cap \mathbb{Q})$? Is it also a dense subset of $[0,1]\times[0,1]$ and has measure zero too?
Besides, what about its complement? Is it dense in $[0,1]\times[0,1]$ and has measure zero?
| To give a somewhat comprehensive answer:
*
*the set in question is countable (as a product of countable sets), so it is of measure zero (because any countable set has measure zero with respect to any continuous measure, such as Lebesgue measure).
*it is also dense, because it is a product of dense sets.
*it has measure zero, so its complement has full measure.
*its complement has full measure with respect to Lebesgue measure, so it's dense in $[0,1]^2$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
How to solve for $x$ in $x(x^3+\sin x \cos x)-\sin^2 x =0$?
How do I solve for $x$ in $$x\left(x^3+\sin(x)\cos(x)\right)-\big(\sin(x)\big)^2=0$$
I hate when I find something that looks simple, that I should know how to do, but it holds me up.
I could come up with an approximate answer using Taylor's, but how do I solve this?
(btw, WolframAlpha tells me the answer, but I want to know how it's solved.)
| Using the identity $\cos x=1-2\sin^2(x/2)$ and introducing the function ${\rm sinc}(x):={\sin x\over x}$ we can rewrite the given function $f$ in the following way:
$$f(x)=x^2\left(x^2\left(1-{1\over2}{\rm sinc}(x){\rm sinc}^2(x/2)\right)+{\rm sinc}(x)\bigl(1-{\rm sinc}(x)\bigr)\right)\ .\qquad(*)$$
Now ${\rm sinc}(x)$ is $\geq0$ on $[0,\pi]$ and of absolute value $\leq1$ throughout. By distinguishing the cases $0<x\leq\pi$ and $x>\pi$ it can be verified by inspection that $f(x)>0$ for $x>0$. Since $f$ is even it follows that $x_0=0$ is the only real zero of $f$.
[One arrives at the representation $(*)$ by expanding the simple functions appearing in the given $f$ into a Taylor series around $0$ and grouping terms of the same order skillfully.]
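A numeric cross-check of the representation $(*)$ and of the positivity claim (my own addition):

```python
import math

def f(x):
    return x * (x**3 + math.sin(x) * math.cos(x)) - math.sin(x)**2

def sinc(x):
    return math.sin(x) / x if x else 1.0

def f_star(x):   # the right-hand side of (*)
    return x * x * (x * x * (1 - 0.5 * sinc(x) * sinc(x / 2)**2)
                    + sinc(x) * (1 - sinc(x)))

for x in [0.1, 0.5, 1.0, 3.0, 10.0]:
    assert abs(f(x) - f_star(x)) < 1e-9 * max(1.0, abs(f(x)))
    assert f(x) > 0
```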
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
What is the order type of monotone functions, $f:\mathbb{N}\rightarrow\mathbb{N}$ modulo asymptotic equivalence? What about computable functions? I was reading the blog "who can name the bigger number" ( http://www.scottaaronson.com/writings/bignumbers.html ), and it made me curious. Let $f,g:\mathbb{N}\rightarrow\mathbb{N}$ be two monotonically increasing strictly positive functions. We say that these two functions are asymptotically equivalent if $$\lim_{n \to \infty} \frac{f(n)}{g(n)}= \alpha\in(0,\infty)$$ We will say that $f>g$ if $$\lim_{n \to \infty} \frac{f(n)}{g(n)}=\infty$$ It is quite clear that this is a partial order.
What, if anything, can be said about this order type?
| If you want useful classes of "orders of growth" that are totally ordered, perhaps you should learn about things like Hardy fields. And even in this case, of course, Asaf's comment applies, and it should not resemble a well-order at all.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Equivalent definitions of linear function We say a transform is linear if $cf(x)=f(cx)$ and $f(x+y)=f(x)+f(y)$. I wonder if there is another definition.
If it's relevant, I'm looking for sufficient but possibly not necessary conditions.
As motivation, there are various ways of evaluating income inequality. Say the vector $w_1,\dots,w_n$ is the income of persons $1,\dots,n$. We might have some $f(w)$ telling us how "good" the income distribution is. It's reasonable to claim that $cf(w)=f(cw)$ but it's not obvious that $f(x+y)=f(x)+f(y)$. Nonetheless, there are some interesting results if $f$ is linear. So I wonder if we could find an alternative motivation for wanting $f$ to be linear.
| Assume that we are working over the reals. Then the condition $f(x+y)=f(x)+f(y)$, together with continuity of $f$ (or even just measurability of $f$) is enough. This can be useful, since on occasion $f(x+y)=f(x)+f(y)$ is easy to verify, and $f(cx)=cf(x)$ is not.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Ellipse with non-orthogonal minor and major axes? If there's an ellipse with non-orthogonal minor and major axes, what do we call it?
For example, is the following curve an ellipse?
$x = \cos(\theta)$
$y = \sin(\theta) + \cos(\theta) $
curve $C=(1,1)\cos(\theta) + (0,1)\sin(\theta)$
The two directions here, $(1,1)$ and $(0,1)$, are not orthogonal.
Is it still an ellipse?
Suppose I have a point $P(p_1,p_2)$ can I find a point Q on this curve that has shortest euclidean distance from P?
| Hint: From $y=\sin\theta+\cos\theta$, we get $y-x=\sin\theta$, and therefore $(y-x)^2=\sin^2\theta=1-x^2$. After simplifying and completing the square, can you recognize the curve?
The major and minor axes do turn out to be orthogonal.
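Carrying out the hint (my own addition): $(y-x)^2 = 1-x^2$ expands to $2x^2 - 2xy + y^2 = 1$, whose discriminant $(-2)^2 - 4\cdot 2\cdot 1 = -4$ is negative, so the curve is an ellipse. A numeric check against the parametrization:

```python
import math

# Every point (cos t, sin t + cos t) satisfies 2x^2 - 2xy + y^2 = 1.
for theta in [0.0, 0.7, 1.9, 3.5, 5.0]:
    x = math.cos(theta)
    y = math.sin(theta) + math.cos(theta)
    assert abs(2 * x * x - 2 * x * y + y * y - 1) < 1e-12
```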
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
What are affine spaces for? I'm studying affine spaces but I can't understand what they are for.
Could you explain them to me? Why are they important, and when are they used? Thanks a lot.
| The first spaces we are introduced to in our lives are Euclidean spaces, the classical starting point of geometry. In these spaces there is a natural way to move between points, namely translations: you can move naturally from a point $p$ to a point $q$ along the vector $\overrightarrow{pq}$ that joins them.
In this way, vectors represent translations in Euclidean space. Vector spaces are therefore the natural generalization of translations, but translations of which spaces? This is where affine spaces become important, because they recover the concept of the points that the "arrows" (vectors) of a vector space move.
In conclusion, an affine space is a mathematical model of a space of points whose main feature is that there is a set of preferred motions (called translations) that lets one go from any point to any other point in a unique way, and these motions are modeled by the concept of a vector space. In other words, affine spaces represent the points that the arrows of a vector space move.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/185768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 5,
"answer_id": 3
} |