Integer partition with fixed number of summands but without order For a fixed $n$ and $M$, I am interested in the number of unordered non-negative integer solutions to $$\sum_{i = 1}^n a_i = M$$ In other words, I am interested in the number of solutions that are distinct up to reordering. For $n = 2$ and $M = 5$, I would consider the solutions $(1,4)$ and $(4,1)$ equivalent, and choose the solution with $a_1 \ge a_2 \ge \dots \ge a_n \ge 0$ as the representative of its equivalence class. I know how to obtain the total number of ordered solutions with the "stars and bars" method. But unfortunately, I cannot just divide that result by $n!$, since that would only work if all the $a_i$ were distinct.
Let the number of partitions be $P_n(M)$. By looking at the smallest number in the partition, $a_n$, we get a recurrence for $P_n(M)$: $$ P_n(M) = P_{n-1}(M) + P_{n-1}(M-n) + P_{n-1}(M-2n) + P_{n-1}(M-3n) + \dots $$ where $P_n(0)=1$ and $P_n(M)=0$ for $M<0$. The first term in the sum above comes from letting $a_n=0$, the second term from $a_n=1$, the third from $a_n=2$, etc. Now let's look at $g_n$, the generating function for $P_n$: $$ g_n(x) = \sum_{M=0}^\infty P_n(M)\;x^M $$ Plugging the recurrence above into this sum yields a recurrence for $g_n$: \begin{align} g_n(x)&=\sum_{M=0}^\infty P_n(M)\;x^M\\ &=\sum_{M=0}^\infty (P_{n-1}(M) + P_{n-1}(M-n) + P_{n-1}(M-2n) + P_{n-1}(M-3n) + \dots)\;x^M\\ &=\left(\sum_{M=0}^\infty P_{n-1}(M)\;x^M\right)(1+x^n+x^{2n}+x^{3n}+\dots)\\ &=g_{n-1}(x)/(1-x^n) \end{align} Note that $P_0(0)=1$ and $P_0(M)=0$ for $M>0$. This means that $g_0(x)=1$. Combined with the recurrence for $g_n$, we get that $$ g_n(x)=\frac{1}{(1-x)(1-x^2)(1-x^3)\cdots(1-x^n)} $$ For example, if $n=1$, we get $$ g_1(x) = 1/(1-x) = 1+x+x^2+x^3+\dots $$ Thus, $P_1(M) = 1$ for all $M$. If $n=2$, we get \begin{align} g_2(x) &= \frac{1}{(1-x)(1-x^2)}\\ &= 1+x+2x^2+2x^3+3x^4+3x^5+4x^6+4x^7+5x^8+5x^9+6x^{10}+\dots \end{align} Thus, $P_2(7)=4$, $P_2(10)=6$, etc.
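The recurrence translates directly into a short memoized function; a sketch (the function name is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, M):
    """P_n(M): unordered non-negative integer solutions of a_1 + ... + a_n = M."""
    if M < 0:
        return 0
    if n == 0:
        return 1 if M == 0 else 0
    # smallest part a_n = j contributes P_{n-1}(M - j*n)
    return sum(P(n - 1, M - j * n) for j in range(M // n + 1))
```

This reproduces the coefficients read off $g_2(x)$ above, e.g. `P(2, 7) == 4` and `P(2, 10) == 6`.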
{ "language": "en", "url": "https://math.stackexchange.com/questions/54635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Bipartite graph cover problem Let $G$ be bipartite with bipartition $A$, $B$. Suppose that $C$ and $C'$ are both covers of $G$. Prove that $C^{\wedge}$ = $(A \cap C \cap C') \cup (B \cap (C \cup C'))$ is also a cover of $G$. Does anyone know which theorem is useful for proving this statement?
This statement is fairly easy to prove without appealing to any special theorems. It might be useful to rewrite the statement Every edge in $G$ has an endvertex in $C''=(A\cap C\cap C')\cup (B\cap(C\cup C'))$, which you are trying to prove, as the equivalent statement If an edge $e\in E(G)$ has no endvertex in $B\cap (C\cup C')$, then it has an endvertex in $A\cap C\cap C'$. Hint: every edge $e$ has an endvertex in $A$, an endvertex in $B$, and at least one endvertex in each of $C$ and $C'$. Hope this helps!
{ "language": "en", "url": "https://math.stackexchange.com/questions/54688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Notation/name for the number of times a number can be exactly divided by $2$ (or a prime $p$) I am using this simple snippet of code, variants of which I have seen in many places: for(int k = 0 ; n % 2 == 0 ; k++) n = n / 2; This code repeatedly divides n by 2 until it is odd, and on completion k contains the number of divisions performed. I am wondering what the appropriate way to write this using mathematical notation is? Does this correspond to some named concept? Of course, $\lg n$ gives the appropriate $k$ when $n$ is a power of 2, but not for anything else. For example, $k = 1$ when $n = 6$ and $k = 0$ when $n$ is odd. So it looks like it should be specified using a piecewise function, but there may be some mathematical concept or nomenclature here that I am not aware of...
You might call it the "highest power of $2$ dividing $n$," but I'm not aware of any snazzy standalone term for such a thing. However, I have seen it notated succinctly as $2^k\|n$, which means $2^k$ divides into $n$ but $2^{k+1}$ does not.
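For what it's worth, the loop from the question generalizes directly to any prime $p$, computing the exponent $k$ with $p^k\|n$; a minimal sketch (the function name is mine):

```python
def valuation(n, p=2):
    """Largest k such that p**k divides n (undefined for n == 0)."""
    if n == 0:
        raise ValueError("undefined for n = 0")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k
```

For example, `valuation(6)` is `1` and `valuation(7)` is `0`, matching the examples in the question.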
{ "language": "en", "url": "https://math.stackexchange.com/questions/54965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
non time constructible functions A function $T: \mathbb N \rightarrow \mathbb N$ is time constructible if $T(n) \geq n$ and there is a $TM$ $M$ that computes the function $x \mapsto \llcorner T(\vert x\vert) \lrcorner$ in time $T(n)$. ($\llcorner T(\vert x\vert) \lrcorner$ denotes the binary representation of the number $T(\vert x\vert)$.) Examples of time-constructible functions are $n$, $n\log n$, $n^2$, and $2^n$. Time bounds that are not time constructible can lead to anomalous results. --Arora, Barak. This is the definition of time-constructible functions in Computational Complexity - A Modern Approach by Sanjeev Arora and Boaz Barak. It is hard to find valid examples of non-time-constructible functions. $f(n)=c$ is an example of a non-time-constructible function. What more (sophisticated) examples are out there?
You can use the time hierarchy theorem to create a counterexample. You know by the THT that DTIME($n$)$\subsetneq$DTIME($n^2$), so pick a language which is in DTIME($n^2$)$\backslash$DTIME($n$). For this language you have a Turing machine $A$ which decides membership in time $O(n^2)$, while no machine decides it in time $O(n)$. You can now define the following function $$f(n) = 2\cdot n+A(n)$$ where $A(n)\in\{0,1\}$ denotes the output of $A$ on (an encoding of) $n$. If $f$ were time constructible, we could recover $A(n)$ from $f(n)$ in time $O(f(n)) = O(n)$, contradicting the THT.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Chord dividing circle , function Two chords $PA$ and $PB$ divide a circle into three parts. The angle $PAB$ is a root of $f(x)=0$. Find $f(x)$. Clearly, "$PA$ and $PB$ divide the circle into three parts" means they divide it into $3$ parts of equal area. How can I find $f(x)$ then? Thanks
You may assume your circle to be the unit circle in the $(x,y)$-plane and $P=(1,0)$. If the three parts have to have equal area then $A=\bigl(\cos(2\phi),\sin(2\phi)\bigr)$ and $B=\bigl(\cos(2\phi),-\sin(2\phi)\bigr)$ for some $\phi\in\ ]0,{\pi\over2}[\ $. Calculating the area over the segment $PA$ gives the condition $$2\Bigl({\phi\over2}-{1\over2}\cos\phi\sin\phi\Bigr)={\pi\over3}\ ,$$ or $f(\phi):=\phi-\cos\phi\sin\phi-{\pi\over 3}=0$. This equation has to be solved numerically. One finds $\phi\doteq1.30266$.
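The numerical value can be reproduced with simple bisection, since $f'(\phi)=1-\cos 2\phi\ge 0$ makes $f$ monotone on $(0,\pi/2)$; a sketch:

```python
import math

def f(phi):
    # f(phi) = phi - cos(phi)*sin(phi) - pi/3, from the equal-area condition
    return phi - math.cos(phi) * math.sin(phi) - math.pi / 3

# bisection on (0, pi/2): f(0) = -pi/3 < 0 and f(pi/2) = pi/6 > 0
lo, hi = 0.0, math.pi / 2
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
phi = (lo + hi) / 2  # approx 1.30266
```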
{ "language": "en", "url": "https://math.stackexchange.com/questions/55147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
(k+1)th, (k+1)st, k-th+1, or k+1? (Inspired by a question already at english.SE) This is more of a terminological question than a purely mathematical one, but can possibly be justified mathematically or simply by just what common practice it. The question is: When pronouncing ordinals that involve variables, how does one deal with 'one', is it pronounced 'one-th' or 'first'? For example, how do you pronounce the ordinal corresponding to $k+1$? There is no such term in mathematics 'infinityeth' (one uses $\omega$, with no affix), but if there were, the successor would be pronounced 'infinity plus oneth'. Which is also 'not a word'. So then how does one pronounce '$\omega + 1$' which is an ordinal? I think it is simply 'omega plus one' (no suffix, and not 'omega plus oneth' nor 'omega plus first'. So how ist pronounced, the ordinal corresponding to $k+1$? * *'kay plus oneth' *'kay plus first' *'kay-th plus one' *'kay plus one' or something else?
If you want a whole lot of non-expert opinions, you can read the comments here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 0 }
How to solve a symbolic non-linear vector equation? I'm trying to find a solution to this symbolic non-linear vector equation: P = a*(V0*t+P0) + b*(V1*t+P1) + (1-a-b)*(V2*t+P2) for a, b and t, where P, V0, V1, V2, P0, P1, P2 are known 3d vectors. The interesting bit is that this equation has a simple geometrical interpretation. If you imagine that points P0-P2 are vertices of a triangle, V0-V2 are roughly vertex normals* and point P lies above the triangle, then the equation is satisfied for a triangle containing point P with its three vertices on the three rays (Vx*t+Px), sharing the same parameter t value. a, b and (1-a-b) become the barycentric coordinates of the point P. In other words, for a given P we want to find t such that P is a linear combination of (Vx*t+Px). So if the case is not degenerate, there should be only one well-defined solution for t. *) For my needs we can assume these are roughly vertex normals of a convex tri mesh and of course lie in the half space above the triangle. I posted this question to stackoverflow, but no one was able to help me there. Both MatLab and Maxima time out while trying to solve the equation.
Let's rewrite: $$a(P_0 - P_2 + t(V_0-V_2)) + b(P_1 - P_2 + t(V_1 - V_2)) = P - P_2 - t V_2$$ which is linear in $a$ and $b$. If we let $A=P_0-P_2$ and $A'=V_0-V_2$ and $B=P_1-P_2$ and $B'=V_1-V_2$ and $C=P-P_2$ and $C'=-V_2$ then you have $$a(A + tA') + b(B + tB') = C + tC'$$ This can be written as a matrix equation: $$ \begin{bmatrix} A_1 + t A'_1 & B_1 + t B'_1 & C_1 + tC'_1 \\ A_2 + t A'_2 & B_2 + t B'_2 & C_2 + tC'_2 \\ A_3 + t A'_3 & B_3 + t B'_3 & C_3 + tC'_3 \end{bmatrix} \begin{bmatrix} a \\ b \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0\end{bmatrix}$$ or $Ax=0$, with suitable definitions for $A$ and $x$, and with both $A$ and $x$ unknown. Now you know that this only has a nonzero solution if $\det A = 0$, which gives you a cubic in $t$. Solving for $t$ using your favorite cubic solver then gives you $Ax=0$ with only $x$ unknown, and $x$ is precisely an eigenvector of $A$ with eigenvalue $0$ (a null vector of $A$). The fact that the third component of $x$ is $-1$ forces the values of $a$ and $b$, and you are done.
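A numerical sketch of this recipe in NumPy (all names are mine, as is the choice to recover the cubic by sampling the determinant at four points and interpolating; a real root is selected by checking the residual of the original equation):

```python
import numpy as np

def solve(P, P0, P1, P2, V0, V1, V2):
    """Find (a, b, t) with a*(A + t*A') + b*(B + t*B') = C + t*C'."""
    A, Ap = P0 - P2, V0 - V2
    B, Bp = P1 - P2, V1 - V2
    C, Cp = P - P2, -V2

    def M(t):
        return np.column_stack([A + t * Ap, B + t * Bp, C + t * Cp])

    # det M(t) is (at most) a cubic in t: recover it exactly from 4 samples
    ts = np.array([0.0, 1.0, 2.0, 3.0])
    coeffs = np.polyfit(ts, [np.linalg.det(M(t)) for t in ts], 3)

    best, best_err = None, np.inf
    for t in np.roots(coeffs):
        if abs(t.imag) > 1e-8:
            continue
        t = t.real
        # null vector of M(t), scaled so its last component is -1
        x = np.linalg.svd(M(t))[2][-1]
        if abs(x[2]) < 1e-12:
            continue
        a, b = x[0] / -x[2], x[1] / -x[2]
        resid = np.linalg.norm(a * (A + t * Ap) + b * (B + t * Bp) - (C + t * Cp))
        if resid < best_err:
            best, best_err = (a, b, t), resid
    return best
```

Constructing a test instance from known barycentric coordinates and a known $t$, then running `solve`, recovers them.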
{ "language": "en", "url": "https://math.stackexchange.com/questions/55295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Perfect square sequence In base 10, the sequence 49,4489,444889,... consists entirely of perfect squares. Is this true in any other bases (greater than 10, of course)?
$\left(\frac{2\cdot 19^5+1}{3}\right)^2=2724919437289,\ \ $ which converted to base $19$ is $88888GGGGH$. It doesn't work in base $13$ or $16$. In base $28$ it gives $CCCCCOOOOP$, where those are capital oh's (worth $24$). This is because if we express $\frac{1}{9}$ in base $9a+1$, it is $0.aaaa\ldots$. So $\left (\frac{2(9a+1)^5+1}{3}\right)^2=\frac{4(9a+1)^{10}+4(9a+1)^5+1}{9}=$ $ (4a)(4a)(4a)(4a)(4a)(8a)(8a)(8a)(8a)(8a+1)_{9a+1}$ where the parentheses represent a single digit, and changing the exponent from $5$ changes the length of the strings in the obvious way.
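The base-19 claim is easy to verify mechanically; a sketch (helper name is mine):

```python
def digits(n, base):
    """Digits of n in the given base, most significant first."""
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(r)
    return out[::-1]

n = ((2 * 19**5 + 1) // 3) ** 2
assert n == 2724919437289
# 88888GGGGH in base 19, with G = 16 and H = 17
assert digits(n, 19) == [8, 8, 8, 8, 8, 16, 16, 16, 16, 17]
```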
{ "language": "en", "url": "https://math.stackexchange.com/questions/55394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Find the perimeter of any triangle given the three altitude lengths The altitude lengths are 12, 15 and 20. I would like a process rather than just a single solution.
First, $\begin{aligned}\operatorname{Area}(\triangle ABC)=\frac{ah_a}2=\frac{bh_b}2=\frac{ch_c}2\implies \frac1{h_a}&=\frac{a}{2\operatorname{Area}(\triangle ABC)}\\\frac1{h_b}&=\frac{b}{2\operatorname{Area}(\triangle ABC)}\\\frac1{h_c}&=\frac{c}{2\operatorname{Area}(\triangle ABC)} \end{aligned}$ By Heron's formula, $$\operatorname{Area}(\triangle ABC)=\sqrt{s(s-a)(s-b)(s-c)}=\sqrt{\frac{a+b+c}2\cdot\frac{b+c-a}2\cdot\frac{a+c-b}2\cdot\frac{a+b-c}2}.$$ Dividing both sides by $\operatorname{Area}^2(\triangle ABC)$ gives $\begin{aligned}\frac1{\operatorname{Area}(\triangle ABC)}&=\sqrt{\frac{a+b+c}{2\operatorname{Area}(\triangle ABC)}\cdot\frac{b+c-a}{2\operatorname{Area}(\triangle ABC)}\cdot\frac{a+c-b}{2\operatorname{Area}(\triangle ABC)}\cdot\frac{a+b-c}{2\operatorname{Area}(\triangle ABC)}}\\&=\sqrt{\left(\frac1{h_a}+\frac1{h_b}+\frac1{h_c}\right)\left(\frac1{h_b}+\frac1{h_c}-\frac1{h_a}\right)\left(\frac1{h_a}+\frac1{h_c}-\frac1{h_b}\right)\left(\frac1{h_a}+\frac1{h_b}-\frac1{h_c}\right)}\end{aligned}$ $$\implies\operatorname{Area}(\triangle ABC)=\frac1{\sqrt{\left(\frac1{h_a}+\frac1{h_b}+\frac1{h_c}\right)\left(\frac1{h_b}+\frac1{h_c}-\frac1{h_a}\right)\left(\frac1{h_a}+\frac1{h_c}-\frac1{h_b}\right)\left(\frac1{h_a}+\frac1{h_b}-\frac1{h_c}\right)}}$$ Then plug the values $h_a=12$, $h_b=15$, $h_c=20$ into the formula.
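Plugging the given altitudes into the formula (a sketch of the computation; the triangle turns out to be the 15-20-25 right triangle):

```python
from math import sqrt

ha, hb, hc = 12, 15, 20
u, v, w = 1 / ha, 1 / hb, 1 / hc

# area from the reciprocal-Heron formula derived above
area = 1 / sqrt((u + v + w) * (v + w - u) * (u + w - v) * (u + v - w))

# recover the sides from a = 2*Area/h_a etc., then the perimeter
a, b, c = 2 * area / ha, 2 * area / hb, 2 * area / hc
perimeter = a + b + c  # 25 + 20 + 15 = 60
```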
{ "language": "en", "url": "https://math.stackexchange.com/questions/55440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
Uniformly continuous $f$ in $L^p([0,\infty))$ Assume that $1\leq p < \infty $, $f \in L^p([0,\infty))$, and $f$ is uniformly continuous. Prove that $\lim_{x \to \infty} f(x) = 0$ .
Hints: Suppose for a contradiction that $f(x) \not \to 0$. Together with the definition of uniform continuity, conclude that there exist constants $\varepsilon > 0$ and $\delta = \delta(\varepsilon) > 0$ such that for any $M > 0$ there exists $x > M$ for which $$ \int_{(x,x + \delta )} {|f(y)|^p \,dy} \ge \bigg(\frac{\varepsilon }{2}\bigg)^p \delta $$ holds. However, this would imply $\int_{[0,\infty )} {|f(x)|^p \,dx} = \infty$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quasi convexity and strong duality Is there a possibility to prove the strong duality (Lagrangian duality) if the primal problem is quasiconvex? Or is there a technique that proves that the primal problem is convex instead of quasiconvex?
I'm not sure I understand the question clearly enough to give a precise answer, but here is a kind of meta-answer about why you should not expect such a thing to be true. Generally speaking duality results come from taking some given objective function and adding variable multiples of other functions (somehow connected to constraints) to it. Probably the simplest case is adding various linear functions. Proving strong duality for a certain type of setup involves understanding the class of functions thus generated, usually using some sort of convexity property. For example, if our original objective function $f$ is convex and we think of adding arbitrary linear functions $L$ to it, the result $f+L$ will always be convex, so we have a good understanding of functions generated in this way, in particular what their optima look like. Quasiconvexity does not behave nearly so well with respect to the operation of adding linear functions. One way to express this is the following. Let $f:\mathbb{R}^n\to\mathbb{R}$ be a function. Then $f+L$ is quasiconvex for all linear maps $L:\mathbb{R}^n\to\mathbb{R}$ if and only if $f$ is convex. Therefore the class of functions for which the benefits of quasiconvexity (local minima are global, etc.) would help prove strong duality is in some sense just the convex functions. This is not to say strong duality will never hold for quasiconvex objectives, but just that some strong additional conditions are required beyond the usual constraint qualifications used for convex functions: quasiconvexity alone does not buy you much duality-wise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Use of a null option in a combination with repetition problem Ok, so I am working on a combinatorics problem involving combination with repetition. The problem comes off a past test that was put up for study. Here it is: An ice cream parlor sells six flavors of ice cream: vanilla, chocolate, strawberry, cookies and cream, mint chocolate chip, and chocolate chip cookie dough. How many combinations of fewer than 20 scoops are there? (Note: two combinations count as distinct if they differ in the number of scoops of at least one flavor of ice cream.) Now I get the correct answer $\binom{25}{6}$, but the way they arrive at the answer is different and apparently important. I just plug in 20 combinations of 6 flavors into $\binom{n+r-1}{r}=\binom{n+r-1}{n-1}$. The answer given makes use of a "null flavor" to be used in the calculation. I can't figure out for the life of me why, could someone explain this to me? Answer: This is a slight variation on the standard combinations with repetition problem. The difference here is that we are not trying to buy exactly 19 scoops of ice cream, but 19 or fewer scoops. We can solve this problem by introducing a 7 th flavor, called “noflavor” ice cream. Now, imagine trying to buy exactly 19 scoops of ice cream from the 7 possible flavors (the six listed an “no-flavor”). Any combination with only 10 real scoops would be assigned 9 “no-flavor” scoops, for example. There is a one-to-one correspondence between each possible combination with 19 or fewer scoops from 6 flavors as there are to 19 “scoops” from 7 flavors. Thus, using the formula for combination with repetition with 19 items from 7 types, we find the number of ways to buy the scoops is $\binom{19+7-1}{19}=\binom{25}{19}=\binom{25}{6}$. (Grading – 4 pts for mentioning the idea of an extra flavor, 4 pts for attempting to apply the correct formula, 2 pts for getting the correct answer. If a sum is given instead of a closed form, give 6 points out of 10.) 
Any assistance would be greatly appreciated!
Suppose that the problem had asked how many combinations there are with exactly $19$ scoops. That would be a bog-standard problem involving combinations with possible repetition (sometimes called a stars-and-bars or marbles-and-boxes problem). Then you’d have been trying to distribute $19$ scoops of ice cream amongst $6$ flavors, and you’d have known that the answer was $\binom{19+6-1}{6-1}=\binom{24}{5}$, either by the usual stars-and-bars analysis or simply by having learnt the formula. Unfortunately, the problem actually asks for the number of combinations with at most $19$ scoops. There are at least two ways to proceed. The straightforward but slightly ugly one is to add up the number of combinations with $19$ scoops, $18$ scoops, $17$ scoops, and so on down to no scoops at all. You know that the number of combinations with $n$ scoops is $\binom{n+6-1}{6-1}=\binom{n+5}{5}$, so the answer to the problem is $\sum\limits_{n=0}^{19}\binom{n+5}{5}$. This can be rewritten as $\sum\limits_{k=5}^{24}\binom{k}{5}$, and you can then use a standard identity (sometimes called a hockey stick identity) to reduce this to $\binom{25}{6}$. The slicker alternative is the one used in the solution that you quoted. Pretend that there are actually seven flavors: let’s imagine that in addition to vanilla, chocolate, strawberry, cookies and cream, mint chocolate chip, and chocolate chip cookie dough there is banana. (The quoted solution uses ‘no-flavor’ instead of banana.) Given a combination of exactly $19$ scoops chosen from these seven flavors, you can throw away all of the banana scoops; what remains will be a combination of at most $19$ scoops from the original six flavors. Conversely, if you start with any combination of at most $19$ scoops of the original six flavors, you can ‘pad it out’ to exactly $19$ scoops by adding enough scoops of banana ice cream to make up the difference. 
(E.g., if you start with $10$ scoops of vanilla and $5$ of strawberry, you’re $4$ scoops short of $19$, so you add $4$ scoops of banana.) This establishes a one-to-one correspondence (bijection) between (1) the set of combinations of exactly $19$ scoops chosen from the seven flavors and (2) the set of combinations of at most $19$ scoops chosen from the original six flavors. Thus, these two sets of combinations are the same size, and counting one is as good as counting the other. But counting the number of combinations of exactly $19$ scoops chosen from seven flavors is easy: that’s just the standard combinations with repetitions problem, and the answer is $\binom{19+7-1}{7-1}=\binom{25}{6}$. Both solutions are perfectly fine; the second is a little slicker and requires less computation, but the first is also pretty straightforward if you know your basic binomial identities.
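Both counting arguments can be checked mechanically; a sketch (the brute-force cross-check uses a smaller instance, at most 5 scoops of 3 flavors, so the enumeration stays fast):

```python
from math import comb
from itertools import product

# hockey-stick sum vs. the extra-flavor closed form: <= 19 scoops, 6 flavors
assert sum(comb(n + 5, 5) for n in range(20)) == comb(25, 6) == 177100

# brute-force sanity check on a smaller instance: <= 5 scoops, 3 flavors
brute = sum(1 for t in product(range(6), repeat=3) if sum(t) <= 5)
assert brute == comb(5 + 3, 3)  # 56, by the same extra-flavor argument
```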
{ "language": "en", "url": "https://math.stackexchange.com/questions/55637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A compactness problem for model theory I'm working on the following problem: Assume that every model of a sentence $\varphi$ satisfies a sentence from $\Sigma$. Show that there is a finite $\Delta \subseteq \Sigma$ such that every model of $\varphi$ satisfies a sentence in $\Delta$. The quantifiers in this problem are throwing me off; besides some kind of compactness application I'm not sure where to go with it (hence the very poor title). Any hint?
Cute, in a twisted sort of way. You are right, the quantifier structure is the main hurdle to solving the problem. We can assume that $\varphi$ has a model, else the result is trivially true. Suppose that there is no finite $\Delta\subseteq \Sigma$ with the desired property. Then for every finite $\Delta \subseteq \Sigma$, the set $\{\varphi\}\cup \Delta'$ has a model. (For any set $\Gamma$ of sentences, $\Gamma'$ will denote the set of negations of sentences in $\Gamma$.) Every finite subset of $\{\varphi\}\cup \Sigma'$ is contained in some such $\{\varphi\}\cup \Delta'$, so by the Compactness Theorem we conclude that $\{\varphi\}\cup \Sigma'$ has a model $M$. This model $M$ is a model of $\varphi$ in which no sentence in $\Sigma$ is true, contradicting the fact that every model of $\varphi$ satisfies a sentence from $\Sigma$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Simple problem with pattern matching in Mathematica I'm having trouble setting a pattern for simplifying a complex expression. I've distilled the question down to the simplest case where Mathematica seems to fail. I set up a simple rule based on a pattern: simpRule = a b v___ - c d v___ -> e v which works on the direct case a b - c d /. simpRule e but fails if I simply add a minus sign. -a b + c d /. simpRule -a b + c d How do I go about writing a more robust rule? Or perhaps there's a better way to go about performing simplifications of this sort? Thanks, Keith
You need to be aware of the FullForm of the expressions you are working with to correctly use replacement rules. Consider the FullForm of the two expressions you use ReplaceAll (/.) on: a b - c d // FullForm -a b + c d // FullForm (* Out *) Plus[Times[a, b], Times[-1, c, d]] (* Out *) Plus[Times[-1, a, b], Times[c, d]] From this you can see how the negation is internally represented as multiplication by -1, and therefore, why your rule matches differently. One way to allow for the alternative pattern is this: rule = (-1 a b v___ | a b v___) + (-1 c d v___ | c d v___) -> e v; This makes use of Alternatives. Another, somewhat more opaque way is: rule = (-1 ...) a b v___ + (-1 ...) c d v___ -> e v; Which makes use of RepeatedNull.
{ "language": "en", "url": "https://math.stackexchange.com/questions/55741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Numerical computation of the Rayleigh-Lamb curves The Rayleigh-Lamb equations: $$\frac{\tan (pd)}{\tan (qd)}=-\left[\frac{4k^2pq}{\left(k^2-q^2\right)^2}\right]^{\pm 1}$$ (two equations, one with the +1 exponent and the other with the -1 exponent) where $$p^2=\frac{\omega ^2}{c_L^2}-k^2$$ and $$q^2=\frac{\omega ^2}{c_T^2}-k^2$$ show up in physical considerations of the elastic oscillations of solid plates. Here, $c_L$, $c_T$ and $d$ are positive constants. These equations determine for each positive value of $\omega$ a discrete set of real "eigenvalues" for $k$. My problem is the numerical computation of these eigenvalues and, in particular, to obtain curves displaying these eigenvalues. What sort of numerical method can I use with this problem? Thanks. Edit: Using the numerical values $d=1$, $c_L=1.98$, $c_T=1$, the plots should look something like this (black curves correspond to the -1 exponent, blue curves to the +1 exponent; the horizontal axis is $\omega$ and the vertical axis is $k$):
The Rayleigh-Lamb equations: $$\frac{\tan (pd)}{\tan (qd)}=-\left[\frac{4k^2pq}{\left(k^2-q^2\right)^2}\right]^{\pm 1}$$ are equivalent to the equations (as Robert Israel pointed out in a comment above) $$\left(k^2-q^2\right)^2\sin pd \cos qd+4k^2pq \cos pd \sin qd=0$$ when the exponent is +1, and $$\left(k^2-q^2\right)^2\cos pd \sin qd+4k^2pq \sin pd \cos qd=0$$ when the exponent is -1. Mathematica had trouble with the plots because $p$ or $q$ became imaginary. The trick is to divide by $p$ or $q$ as convenient, since $\sin(z)/z$ and $\cos(z)$ are even and therefore stay real for imaginary $z$. Using the numerical values $d=1$, $c_L=1.98$, $c_T=1$, we divide the equation for the +1 exponent by $p$ and the equation for the -1 exponent by $q$. Supplying these to the ContourPlot command in Mathematica, I obtained the curves for the +1 exponent, and for the -1 exponent.
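For readers without Mathematica, the same trick can be sketched in Python (names are mine; only the symmetric, +1-exponent branch is shown, the antisymmetric one being analogous, and individual dispersion-curve points are found by bracketing sign changes in $k$ for a fixed $\omega$ rather than by contour plotting):

```python
import cmath

d, cL, cT = 1.0, 1.98, 1.0  # values from the question

def sinc(z):
    return cmath.sin(z) / z if abs(z) > 1e-12 else 1.0

def F_sym(w, k):
    """Symmetric (+1 exponent) relation, divided through by p.

    Written via the even functions sinc and cos, so the value is real
    even where p or q is imaginary (q*sin(qd) = q2*d*sinc(qd))."""
    p2 = (w / cL) ** 2 - k ** 2
    q2 = (w / cT) ** 2 - k ** 2
    p, q = cmath.sqrt(p2), cmath.sqrt(q2)
    val = (k**2 - q2) ** 2 * sinc(p * d) * cmath.cos(q * d) \
        + 4 * k**2 * q2 * sinc(q * d) * cmath.cos(p * d)
    return val.real

def roots_for(w, kmax=10.0, n=2000):
    """Bracket sign changes of F_sym(w, .) on (0, kmax] and bisect each."""
    ks = [kmax * (i + 1) / n for i in range(n)]
    out = []
    for k0, k1 in zip(ks, ks[1:]):
        if F_sym(w, k0) * F_sym(w, k1) < 0:
            lo, hi = k0, k1
            for _ in range(60):
                mid = (lo + hi) / 2
                if F_sym(w, lo) * F_sym(w, mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            out.append((lo + hi) / 2)
    return out
```

Calling `roots_for(w)` over a grid of $\omega$ values traces out the black/blue branches of the plots described in the question.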
{ "language": "en", "url": "https://math.stackexchange.com/questions/55872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Showing that $l^p(\mathbb{N})^* \cong l^q(\mathbb{N})$ I'm reading functional analysis in the summer, and have come to this exercise, asking to show that the two spaces $l^p(\mathbb{N})^*,l^q(\mathbb{N})$ are isomorphic, that is, by showing that every $l \in l^p(\mathbb{N})^*$ can be written as $l_y(x)=\sum y_nx_n$ for some $y$ in $l^q(\mathbb N)$. The exercise has a hint. Paraphrased: "To see $y \in l^q(\mathbb N)$ consider $x^N$ defined such that $x_ny_n=|y_n|^q$ for $n \leq N$ and $x_n=0$ for $n > N$. Now look at $|l(x^N)| \leq ||l|| ||x^N||_p$." I can't say I understand the first part of the hint. To prove the statement I need to find a $y$ such that $l=l_y$ for some $y$. How then can I define $x$ in terms of $y$ when it is $y$ I'm supposed to find. Isn't there something circular going on? The exercise is found on page 68 in Gerald Teschls notes at http://www.mat.univie.ac.at/~gerald/ftp/book-fa/index.html Thanks for all answers.
We know that $(e_n)$, the sequence with a $1$ at the $n$-th position and $0$s elsewhere, is a Schauder basis for $\ell^p$ (a notion with some nice alternative equivalent definitions; I recommend Topics in Banach Space Theory by Albiac and Kalton as a nice reference about this). So, every $x = (y_k) \in \ell^p$ has a unique representation $$x = \sum_{k = 1}^\infty y_k e_k.$$ Now consider $l \in (\ell^p)^*$. Because $l$ is bounded we also have that $$l(x) = \sum_{k = 1}^\infty y_k l(e_k).$$ Now set $z_k = l(e_k)$. Consider the following $x_n = (y_k^{(n)})$ where $$y_k^{(n)} = \begin{cases} \frac{|z_k|^q}{z_k} &\text{when $k \leq n$ and $z_k \neq 0$,}\\ 0 &\text{otherwise.} \end{cases}$$ Since $|y_k^{(n)}|^p = |z_k|^{p(q-1)} = |z_k|^q$ for the nonzero terms, we have that $$\begin{align}l(x_n) &= \sum_{k = 1}^\infty y_k^{(n)} z_k\\ &= \sum_{k = 1}^n |z_k|^q\\ &\leq \|l\|\|x_n\|_p\\ &= \|l\| \left ( \sum_{k = 1}^n |y_k^{(n)}|^p \right )^{\frac1p}\\ &= \|l\| \left ( \sum_{k = 1}^n |z_k|^q \right )^{\frac1p}. \end{align}$$ Hence we have that $$\sum_{k = 1}^n |z_k|^q \leq \|l\| \left (\sum_{k = 1}^n |z_k|^q \right )^{\frac1p}.$$ Now we divide, using $1 - \frac1p = \frac1q$, and get $$\left ( \sum_{k = 1}^n |z_k|^q \right )^{\frac1q} \leq \|l\|.$$ Take the limit $n \to \infty$ to obtain $$\left ( \sum_{k \geq 1} |z_k|^q \right )^{\frac1q} \leq \|l\|.$$ We conclude that $(z_k) \in \ell^q$. So, now you could try doing the same for $L^p(\mathbf R^d)$ with a $\sigma$-finite measure. A small hint: using the $\sigma$-finiteness you can reduce to the finite measure case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
On the growth order of an entire function $\sum \frac{z^n}{(n!)^a}$ Here $a$ is a real positive number. The result is that $f(z)=\sum_{n=1}^{+\infty} \frac{z^n}{(n!)^a}$ has a growth order $1/a$ (i.e. $\exists A,B\in \mathbb{R}$ such that $|f(z)|\leq A\exp{(B|z|^{1/a})},\forall z\in \mathbb{C}$). It is Problem 3* from Chapter 5 of E.M. Stein's book, Complex Analysis, page 157. Yet I don't know how to get this. Will someone give me some hints on it? Thank you very much.
There is a formula expressing the growth order of an entire function $f(z)=\sum_{n=0}^\infty c_nz^n\ $ in terms of its Taylor coefficients: $$ \rho=\limsup_{n\to\infty}\frac{\log n}{\log\frac{1}{\sqrt[n]{|c_n|}}}. $$
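Applied to $c_n = (n!)^{-a}$ from the question, this can be sanity-checked numerically using $\log n! = \operatorname{lgamma}(n+1)$; a sketch (names are mine; convergence to $1/a$ is only logarithmic, so the estimate is rough even at large $n$):

```python
from math import log, lgamma

def order_estimate(a, n):
    # c_n = (n!)^(-a), so log(1/|c_n|^(1/n)) = (a/n) * log(n!)
    return log(n) / ((a / n) * lgamma(n + 1))

# by Stirling the denominator is ~ a*(log n - 1), so the ratio tends to 1/a
est = order_estimate(2.0, 10**8)  # should approach 1/2
```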
{ "language": "en", "url": "https://math.stackexchange.com/questions/56068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
planar curve if and only if torsion Again I have a question, it's about a proof, if the torsion of a curve is zero, we have that $$ B(s) = v_0,$$ a constant vector (where $B$ is the binormal), the proof ends concluding that the curve $$ \alpha \left( t \right) $$ is such that $$ \alpha(t)\cdot v_0 = k $$ and then the book says, "then the curve is contained in a plane orthogonal to $v_0$." It's a not so important detail but .... that angle might not be $0$, could be not perpendicular to it, anyway, geometrically I see it that $ V_0 $ "cuts" that plane with some angle. My stupid question is why this constant $k$ must be $0$. Or just I can choose some $v_0 $ to get that "$k$"?
The constant $k$ need not be $0$; that would be the case where $\alpha$ lies in a plane through the origin. You have $k=\alpha(0)\cdot v_0$, so for all $t$, $(\alpha(t)-\alpha(0))\cdot v_0=0$. This means that $\alpha(t)-\alpha(0)$ lies in the plane through the origin perpendicular to $v_0$, so $\alpha(t)$ lies in the plane through $\alpha(0)$ perpendicular to $v_0$. (If $0$ is not in the domain, then $0$ could be replaced with any point in the domain of $\alpha$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/56197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Regular curve whose tangent lines pass through a fixed point How to prove that if a regular parametrized curve has the property that all its tangent lines pass through a fixed point, then its trace is a segment of a straight line? Thanks
In his answer, user14242 used a cross product of two vectors, while I believe that operation is only defined in the 3-dimensional case. If you are not talking only about the 3-dimensional problem, then just write the equation of the tangent line explicitly: $$ r(t)+\dot{r}(t)\tau(t) = a $$ where $a$ is the fixed point and $\tau(t)$ denotes the value of the parameter at which the tangent line crosses $a$. Let's assume that $t$ is a natural (arc-length) parametrization of the curve. Taking the derivative w.r.t. $t$ you have $$ \dot{r}(t)(1+\dot{\tau}(t))+n(t)\tau(t) = 0 $$ where $n(t) = \ddot{r}(t)$ is a normal vector. Also $n\cdot\dot{r} = 0$, so taking the dot product with $n(t)$ gives $$ \tau(t)\|n(t)\|^2 = 0. $$ You have $\tau(t) = 0$ iff $r(t) = a$, and for all other points $n(t) = 0$, which gives us $\dot{r}(t) = \text{const}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proving $\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}\ge\sqrt{n}}$ with induction I am just starting out learning mathematical induction and I got this homework question to prove with induction, but I am not managing. $$\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}\ge\sqrt{n}}$$ Perhaps someone can help me out. I don't understand how to move forward from here: $$\sum\limits_{k=1}^{n}{\frac{1}{\sqrt{k}}}+\frac{1}{\sqrt{n+1}}\ge \sqrt{n+1}$$ A proof and explanation would greatly be appreciated :) Thanks :) EDIT sorry, meant GE not =, fixed :)
For those who strive for non-induction proofs... Since $\frac 1{\sqrt k} \ge \frac 1{\sqrt n}$ for $1 \le k \le n$, we actually have $$ \sum_{k=1}^n \frac 1{\sqrt k} \ge \sum_{k=1}^n \frac 1{\sqrt n} = \frac n{\sqrt n} = \sqrt n. $$
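For what it's worth, a brute-force check of the inequality for the first thousand values of $n$ (the helper name is mine):

```python
import math

def partial_sum(n):
    """sum_{k=1}^{n} 1/sqrt(k)"""
    return sum(1 / math.sqrt(k) for k in range(1, n + 1))

# the inequality holds for every n checked
for n in range(1, 1001):
    assert partial_sum(n) >= math.sqrt(n)
```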
{ "language": "en", "url": "https://math.stackexchange.com/questions/56335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 2 }
Do finite algebraically closed fields exist? Let $K$ be an algebraically closed field ($\operatorname{char}K=p$). Denote $${\mathbb F}_{p^n}=\{x\in K\mid x^{p^n}-x=0\}.$$ It's easy to prove that ${\mathbb F}_{p^n}$ consists of exactly $p^n$ elements. But if $|K|<p^n$, we have collision with previous statement (because ${\mathbb F}_{p^n}$ is subfield of $K$). So, are there any finite algebraically closed fields? And if they exist, where have I made a mistake? Thanks.
No, there do not exist any finite algebraically closed fields. For suppose $K$ is a finite field; then the polynomial $$f(x)=1+\prod_{\alpha\in K}(x-\alpha)\in K[x]$$ cannot have any roots in $K$ (because $f(\alpha)=1$ for any $\alpha\in K$), so $K$ cannot be algebraically closed. Note that for $K=\mathbb{F}_{p^n}$, the polynomial is $$f(x)=1+\prod_{\alpha\in K}(x-\alpha)=1+(x^{p^n}-x).$$
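A small computational illustration of this argument for the prime fields $K=\mathbb{F}_p$ (the function name is mine): $f$ takes the value $1$ everywhere on $K$, so it has no root there.

```python
def values_of_f(p):
    """Evaluate f(x) = 1 + prod_{a in F_p} (x - a) at every x in F_p, mod p."""
    def f(x):
        prod = 1
        for a in range(p):
            prod = (prod * (x - a)) % p   # the factor x - a vanishes when a = x
        return (1 + prod) % p
    return [f(x) for x in range(p)]

for p in (2, 3, 5, 7, 11, 13):
    assert values_of_f(p) == [1] * p      # f(alpha) = 1 for every alpha in K
```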
{ "language": "en", "url": "https://math.stackexchange.com/questions/56397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 4, "answer_id": 0 }
What is the analogue of spherical coordinates in $n$-dimensions? What's the analogue to spherical coordinates in $n$-dimensions? For example, for $n=2$ the analogue are polar coordinates $r,\theta$, which are related to the Cartesian coordinates $x_1,x_2$ by $$x_1=r \cos \theta$$ $$x_2=r \sin \theta$$ For $n=3$, the analogue would be the ordinary spherical coordinates $r,\theta ,\varphi$, related to the Cartesian coordinates $x_1,x_2,x_3$ by $$x_1=r \sin \theta \cos \varphi$$ $$x_2=r \sin \theta \sin \varphi$$ $$x_3=r \cos \theta$$ So these are my questions: Is there an analogue, or several, to spherical coordinates in $n$-dimensions for $n>3$? If there are such analogues, what are they and how are they related to the Cartesian coordinates? Thanks.
I was trying to answer exercise 9 of $I.5.$ from Einstein Gravity in a Nutshell by A. Zee when I saw this question, so what I am going to say is from this question. It is said that the $d$-dimensional unit sphere $S^d$ is embedded into $E^{d+1}$ by the usual Pythagorean relation $$(X^1)^2+(X^2)^2+\cdots+(X^{d+1})^2=1.$$ Thus $S^1$ is the circle and $S^2$ the sphere. A. Zee says we can generalize what we know about polar and spherical coordinates to higher dimensions by defining $X^1=\cos\theta_1,\quad X^2=\sin\theta_1 \cos\theta_2,\ldots $ $X^d=\sin\theta_1 \cdots \sin\theta_{d-1} \cos\theta_d,$ $X^{d+1}=\sin\theta_1 \cdots \sin\theta_{d-1} \sin\theta_d,$ where $0\leq\theta_{i}\lt \pi \,$ for $ 1\leq i\lt d $ but $ 0\leq \theta_d \lt 2\pi $. So for $S^1$ we just have ($\theta_1$): $X^1=\cos\theta_1,\quad X^2=\sin\theta_1$. $S^1$ is embedded into $E^2$ and for the metric on $S^1$ we have: $$ds_1^2=\sum_{i=1}^2(dX^i)^2=d\theta_1^2$$ For $S^2$ we have ($\theta_1, \theta_2$), so for Cartesian coordinates we have: $X^1=\cos\theta_1,\quad X^2=\sin\theta_1\cos\theta_2,$ $\quad X^3=\sin\theta_1\sin\theta_2$ and for its metric: $$ds_2^2=\sum_{i=1}^3(dX^i)^2=d\theta_1^2+\sin^2\theta_1\, d\theta_2^2$$ For $S^3$, which is embedded into $E^4$, we have ($ \theta_1,\theta_2,\theta_3 $): $X^1=\cos\theta_1,\quad X^2=\sin\theta_1\cos\theta_2,$ $\quad X^3=\sin\theta_1\sin\theta_2\cos\theta_3,$ $\quad X^4=\sin\theta_1\sin\theta_2\sin\theta_3 $ $$ds_3^2=\sum_{i=1}^4(dX^i)^2=d\theta_1^2+\sin^2\theta_1\, d\theta_2^2+\sin^2\theta_1\sin^2\theta_2\,d\theta_3^2$$ Finally, it is not difficult to show the metric on $S^d$ will be: $$ds_d^2=\sum_{i=1}^{d+1}(dX^i)^2=d\theta_1^2+\sin^2\theta_1\, d\theta_2^2+\sin^2\theta_1\sin^2\theta_2\,d\theta_3^2+\cdots+\sin^2\theta_1\cdots\sin^2\theta_{d-1}\,d\theta_d^2$$
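A quick check (the function name is mine) that any choice of angles lands on the unit sphere, i.e. that the squares of these $X^i$ telescope to $1$:

```python
import math

def hyperspherical_point(thetas):
    """Cartesian coordinates of a point on S^d, where d = len(thetas):
    X^i = sin(t_1)...sin(t_{i-1}) cos(t_i), and X^{d+1} = sin(t_1)...sin(t_d)."""
    coords = []
    s = 1.0                          # running product of sines
    for t in thetas:
        coords.append(s * math.cos(t))
        s *= math.sin(t)
    coords.append(s)
    return coords

pt = hyperspherical_point([0.3, 1.1, 2.0, 0.7])   # a point on S^4 inside E^5
assert len(pt) == 5
assert abs(sum(x * x for x in pt) - 1.0) < 1e-12
```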
{ "language": "en", "url": "https://math.stackexchange.com/questions/56582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 3, "answer_id": 1 }
Asymptotics for prime sums of three consecutive primes We consider the sequence $R_n=p_n+p_{n+1}+p_{n+2}$, where $\{p_i\}$ is the prime number sequence, with $p_0=2$, $p_1=3$, $p_2=5$, etc.. The first few values of $R_n$ for $n=0,1,2,\dots $ are: $10, 15, 23, 31, 41, 49, 59, 71, 83, 97, 109, 121, 131, 143, 159, 173, 187, 199, $ $211,223,235,251,269,287,301,311,319,329,349,371,395,407,425,439, 457$ $\dots \dots \dots$ Now, we define $R(n)$ to be the number of prime numbers in the set $\{R_0, R_1 , \dots , R_n\}$. What I have found (without justification) is that $R(n) \approx \frac{2n}{\ln (n)}$ My lack of programming skills, however, prevents me from checking further numerical examples. I was wondering if anyone here had any ideas as to how to prove this assertion. As a parting statement, I bring up a quote from Gauss, which I feel describes many conjectures regarding prime numbers: "I confess that Fermat's Theorem as an isolated proposition has very little interest for me, because I could easily lay down a multitude of such propositions, which one could neither prove nor dispose of."
What this says (to me) is that there are twice as many primes in R(n) as in the natural numbers. But, since each R(n) is the sum of three primes, all of which are odd (except for R(1)), then each R(n) is odd. This eliminates half of the possible values (the evens), all of which are, of course, composite. So, if the R(n) values are as random as the primes (which, as Kac famously said, "play a game of chance"), then they should be twice as likely as the primes to be primes since they can never be even. As to proving this, haa!
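Since the question mentions not being able to check numerically, here is a short, self-contained script (all names and the 600000 cutoff are my own choices) that generates the $R_n$, counts the primes among them with the same sieve, and prints the count next to $2n/\ln n$:

```python
import math

def sieve(limit):
    """Primality flags for 0..limit via the sieve of Eratosthenes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            flags[i * i::i] = bytearray(len(flags[i * i::i]))
    return flags

LIMIT = 600_000
flags = sieve(LIMIT)
primes = [i for i in range(LIMIT + 1) if flags[i]]

# R_n = p_n + p_{n+1} + p_{n+2}; keep the terms the sieve still covers
R = []
for n in range(len(primes) - 2):
    v = primes[n] + primes[n + 1] + primes[n + 2]
    if v > LIMIT:
        break
    R.append(v)

def R_count(n):
    """Number of primes among R_0, ..., R_n."""
    return sum(1 for v in R[:n + 1] if flags[v])

for n in (100, 1000, 10000):
    print(n, R_count(n), round(2 * n / math.log(n), 1))
```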
{ "language": "en", "url": "https://math.stackexchange.com/questions/56643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Solving $-u''(x) = \delta(x)$ A question asks us to solve the differential equation $-u''(x) = \delta(x)$ with boundary conditions $u(-2) = 0$ and $u(3) = 0$ where $\delta(x)$ is the Dirac delta function. But inside the same question, the teacher gives the solution in two pieces as $u = A(x+2)$ for $x\le0$ and $u = B(x-3)$ for $x \ge 0$. I understand when we integrate the delta function twice the result is the ramp function $R(x)$. However elsewhere in his lecture the teacher had given the general solution of that DE as $u(x) = -R(x) + C + Dx$ So I don't understand how he was able to jump from this solution to the two pieces. Are these the only two pieces possible, using the boundary conditions given, or can there be other solutions? Full solution is here (section 1.2 answer #2) http://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/assignments/pset1.pdf
Both describe the same type of function. The ramp function is nothing but \begin{align} R(x) = \begin{cases}0 & x \leq 0 \\ x & x \geq 0\end{cases} \end{align} If you use the general solution and plug in the same boundary conditions, \begin{align}u(-2) &= -R(-2) + C -2D = C - 2D = 0 \\ u(3) &= -R(3) + C + 3D = -3 + C +3D = 0\end{align} with the solution $C=6/5$, $D=3/5$, and then split it at $x=0$ to get rid of the ramp function, you obtain \begin{align}u(x) = \begin{cases} \frac65 + \frac35 x & x \leq 0\\ -x + \frac65 + \frac35 x = \frac65 - \frac25 x &x \geq 0\end{cases}\end{align} which is exactly the same expression you got by splitting the function earlier (practically, there is no difference, but it's shorter when written with the ramp function).
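As a sanity check (a sketch with exact rationals; names mine), one can verify that this $u$ meets both boundary conditions and that its slope drops by exactly $1$ across $x=0$, which is what $-u''=\delta(x)$ requires:

```python
from fractions import Fraction as F

C, D = F(6, 5), F(3, 5)

def u(x):
    """u(x) = -R(x) + C + D*x with the constants found above."""
    x = F(x)
    ramp = x if x >= 0 else F(0)
    return -ramp + C + D * x

assert u(-2) == 0 and u(3) == 0         # boundary conditions
left_slope, right_slope = D, D - 1      # slopes of the two linear pieces
assert right_slope - left_slope == -1   # u' jumps by -1 at x = 0
```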
{ "language": "en", "url": "https://math.stackexchange.com/questions/56681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 0 }
Please help me to understand why $\lim\limits_{n\to\infty } {nx(1-x^2)^n}=0$ This is the exercise: $$f_{n}(x) = nx(1-x^2)^n,\qquad n \in {N}, f_{n}:[0,1] \to {R}.$$ Find ${f(x)=\lim\limits_{n\to\infty } {nx(1-x^2)^n}}$. I know that $\forall x\in (0,1]$ $\Rightarrow (1-x^2) \in [0, 1) $ but I still don't know how to calculate the limit. $\lim\limits_{n\to\infty } {(1-x^2)^n}=0$ because $(1-x^2) \in [0, 1) $ and that means I have $\infty\cdot0$. I tried transformation to $\frac{0}{0} $ and here is where I got stuck. I hope someone could help me.
Please see $\textbf{9.10}$ at this given link: http://www.csie.ntu.edu.tw/~b89089/book/Apostol/ch9.pdf The solution is actually worked out for a more general case and I think that should help.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Generate a random direction within a cone I have a normalized $3D$ vector giving a direction and an angle that forms a cone around it, something like this: I'd like to generate a random, uniformly distributed normalized vector for a direction within that cone. I would also like to support angles greater than $\pi$ (but lower than or equal to $2\pi$), at which point the shape becomes more like a sphere from which a cone was removed. How can I proceed? I thought about the following steps, but my implementation did not seem to work:

* Find a vector normal to the cone axis vector (by crossing the cone axis vector with the cardinal axis that corresponds with the cone axis vector component nearest to zero, ex: $[1\ 0\ 0]$ for $[-1\ 5\ -10]$)
* Find a second normal vector using a cross product
* Generate a random angle between $[-\pi, \pi]$
* Use the two normal vectors as a $2D$ coordinate system to create a new vector at the angle previously generated
* Generate a random displacement value between $[0, \tan(\theta)]$ and square root it (to normalize distribution like for points in a circle)
* Normalize the sum of the cone axis vector with the random normal vector times the displacement value to get the final direction vector

[edit] After further thinking, I'm not sure that method would work with theta angles greater or equal to $\pi$. Alternative methods are very much welcome.
So you want to uniformly sample on a spherical cap. With the notations of this link, here is some pseudo-code which performs the sampling on the cap displayed in the above link (then it remains to perform a rotation):

    stopifnot(h > 0 && h < 2 * R)
    xy = uniform_point_in_unit_circle()
    k = h * dotproduct(xy, xy)
    s = sqrt(h * (2 * R - k))
    x = xy[1]
    y = xy[2]
    return (s * x, s * y, R - k)

I have this code in my archives; unfortunately I don't remember how I got it, but it can be justified by Archimedes' hat-box theorem: for a uniform point on a sphere of radius $R$, the $z$-coordinate is uniformly distributed, and here $k = h\,\|xy\|^2$ is uniform on $[0,h]$, so $z = R-k$ is uniform on $[R-h, R]$; the factor $s$ then rescales $(x,y)$ so that the point lands on the sphere. I put the image of the link here, in case it changes:
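Here is a runnable Python translation of that pseudo-code (a sketch; I read dotproduct(xy) as the squared norm $xy\cdot xy$, and draw the unit-disk point by rejection). Every sample lies on the sphere of radius $R$ with $z \ge R-h$:

```python
import math, random

def sample_cap(R, h):
    """Uniform point on the cap {x^2 + y^2 + z^2 = R^2, z >= R - h},
    centered on the +z axis (rotate afterwards for other axes)."""
    assert 0 < h < 2 * R
    while True:                          # uniform point in the unit disk
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            break
    k = h * (x * x + y * y)              # uniform on [0, h]
    s = math.sqrt(h * (2 * R - k))
    return (s * x, s * y, R - k)

random.seed(0)
for _ in range(1000):
    x, y, z = sample_cap(2.0, 0.5)
    assert abs(math.sqrt(x * x + y * y + z * z) - 2.0) < 1e-9
    assert z >= 1.5 - 1e-9
```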
{ "language": "en", "url": "https://math.stackexchange.com/questions/56784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 8, "answer_id": 5 }
Is the language $L = \{0^m1^n: m \neq n - 1 \}$ context free? Consider the language: $L = \{0^m1^n : m \neq n - 1 \}$ where $m, n \geq 0$ I tried for hours and hours but couldn't find its context free grammar. I was stuck with a rule which can check the condition $m \neq n - 1$. Could anyone help me out? Thanks in advance.
The trick is that you need extra "clock" letters $a,b, \dots$ in the language with non-reversible transformations between them, to define the different phases of the algorithm that builds the string. When the first phase is done, transform $a \to b$, then $b \to c$ after phase 2, etc, then finally remove the extra letters to leave a 0-1 string. The natural place for these extra markers is between the 0's and the 1's, or before the 0's, or after the 1's. The string can be built by first deciding whether $m - (n-1)$ will be positive or negative, then building a chain of 0's (resp. 1's) of some length, then inserting 01's in the "middle" of the string several times. Each of these steps can be encoded by production rules using extra letters in addition to 0 and 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Semi-partition or pre-partition For a given space $X$ the partition is usually defined as a collection of sets $E_i$ such that $E_i\cap E_j = \emptyset$ for $j\neq i$ and $X = \bigcup\limits_i E_i$. Has anybody met a name for a collection of sets $F_i$ such that $F_i\cap F_j = \emptyset$ for $j\neq i$ but * *$X = \overline{\bigcup\limits_i F_i}$ if $X$ is a topological space, or *$\mu\left(X\setminus\bigcup\limits_i F_i\right) = 0$ if $X$ is a measure space. I guess that semipartition or a pre-partition should be the right term, but I've never met it in the literature.
Brian M. Scott wrote: In the topological case I'd simply call it a (pairwise) disjoint family whose union is dense in $X$; I've not seen any special term for it. In fact, I can remember seeing it only once: such a family figures in the proof that almost countable paracompactness, a property once studied at some length by M.K. Singal, is a property of every space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$A\in M_{n}(C)$ and $A^*=-A$ and $A^4=I$ Let $A\in M_{n}(\mathbb{C})$ be a matrix such that $A^*=-A$ and $A^4=I$. I need to prove that the eigenvalues of $A$ are $i$ or $-i$ and that $A^2+I=0$. I didn't get to any smart conclusion. Thanks
Do you recall that hermitian matrices ($A^*=A$) must have real eigenvalues? Similiarly, skew-hermitian matrices, i.e. $A^*=-A$, must have pure imaginary eigenvalues. (see Why are all nonzero eigenvalues of the skew-symmetric matrices pure imaginary?) Also, since $A$ is skew-hermitian, then $A$ is normal too, i.e. $A^*A=AA^*$, so we can apply the spectral theorem: there exists a unitary matrix $U$ such that $A=UDU^{-1}$, where $D$ is a diagonal matrix, whose diagonal entries are the eigenvalues of $A$. Thus we know that $A^4=(UDU^{-1})^4=UD^4U^{-1}=I$, so $D^4=I$, so all the eigenvalues are 4th roots of unity, i.e. $1,-1,i,\text{ or} -i$. But we already know the eigenvalues are pure imaginary, so all the eigenvalues are $i$ or $-i$. So $D^2=-I$. Finally, we have $A^2=(UDU^{-1})^2=UD^2U^{-1}=U(-I)U^{-1}=-I$, i.e. $A^2+I=0$.
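A concrete instance of the statement (the example matrix is mine): the real matrix $A=\begin{pmatrix}0&1\\-1&0\end{pmatrix}$ satisfies $A^*=-A$ and $A^4=I$, and indeed $A^2=-I$ with characteristic polynomial $t^2+1$, whose roots are $\pm i$:

```python
def matmul(P, Q):
    """Multiply two square matrices given as lists of rows."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1], [-1, 0]]                 # A^T = -A, so A^* = -A for a real matrix
A2 = matmul(A, A)
A4 = matmul(A2, A2)
assert A2 == [[-1, 0], [0, -1]]       # A^2 = -I
assert A4 == [[1, 0], [0, 1]]         # A^4 = I
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert (trace, det) == (0, 1)         # char. poly t^2 - trace*t + det = t^2 + 1
```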
{ "language": "en", "url": "https://math.stackexchange.com/questions/56934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Set Theory: Proving Statements About Sets Let $A, B,$ and $C$ be arbitrary sets taken from the positive integers. I have to prove or disprove that: $$ \text{If }A ∩ B ∩ C = ∅, \text{then } (A ⊆ \sim B) \text{ or } (A ⊆ \sim C)$$ Here is my disproof using a counterexample: If $A = \{ \}$ the empty set, $B = \{2, 3\}$, $C = \{4, 5\}$. With these sets defined for $A, B,$ and $C$, the intersection includes the disjoint set, and then that would lead to $A$ being a subset of $B$ or $A$ being a subset of $C$ which counteracts that if $A ∩ B ∩ C = ∅$, then $(A ⊆ \sim B)$ or $(A ⊆ \sim C)$. Is this a sufficient proof?
As another counterexample, let $A=\mathbb{N}$, and $B$ and $C$ be disjoint sets with at least one element each. Elaborating: On the one hand, since $B$ and $C$ are disjoint, $$ A \cap B \cap C = \mathbb{N} \cap B \cap C = B \cap C = \emptyset. $$ On the other hand, $A = \mathbb{N}$ is not contained in the complement of $B$, since $B$ is not empty, and the same for $C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/56988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Convergence of a double sum Let $(a_i)_{i=1}^\infty$ be a sequence of positive numbers such that $\sum_1^\infty a_i < \infty$. What can we say about the double series $$\sum_{i, j=1}^\infty a_{i+ j}^p\ ?$$ Can we find some values of $p$ for which it converges? I'm especially interested in $p=2$. Intuitively I'm inclined to think that the series converges for $p \ge 2$. This intuition comes from the continuum analog $f(x)= x^a, \quad x>1$: if $a<-1$ we have $$\int_1^\infty f(x)\ dx < \infty$$ and $F(x, y)=f(x+y)$ is $p$-integrable on $(1, \infty) \times (1, \infty)$ for $p \ge 2$.
To sum up, the result is false in general (see first part of the post below), trivially false for nonincreasing sequences $(a_n)$ if $p<2$ (consider $a_n=n^{-2/p}$) and true for nonincreasing sequences $(a_n)$ if $p\ge2$ (see second part of the post below). Rearranging terms, one sees that the double series converges if and only if the simple series $\sum\limits_{n=1}^{+\infty}na_n^p$ does. But this does not need to be the case. To be specific, choose a positive real number $r$ and let $a_n=0$ for every $n$ not the $p$th power of an integer (see notes) and $a_{i^p}=i^{-(1+r)}$ for every positive $i$. Then $\sum\limits_{n=1}^{+\infty}a_n$ converges because $\sum\limits_{i=1}^{+\infty}i^{-(1+r)}$ does but $na_{n}^p=i^{-pr}$ for $n=i^p$ hence $\sum\limits_{n=1}^{+\infty}na_n^p$ diverges for small enough $r$. Notes: (1) If $p$ is not an integer, read $\lfloor i^p\rfloor$ instead of $i^p$. (2) If the fact that some $a_n$ are zero is a problem, replace these by positive values which do not change the convergence/divergence of the series considered, for example add $2^{-n}$ to every $a_n$. To deal with the specific case when $(a_n)$ is nonincreasing, assume without loss of generality that $a_n\le1$ for every $n$ and introduce the integer valued sequence $(t_i)$ defined by $$ a_n\ge2^{-i} \iff n\le t_i. $$ In other words, $$ t_i=\sup\{n\mid a_n\ge2^{-i}\}. $$ Then $\sum\limits_{n=1}^{+\infty}a_n\ge u$ and $\sum\limits_{n=1}^{+\infty}na_n^p\le v$ with $$ u=\sum\limits_{i=0}^{+\infty}2^{-i}(t_i-t_{i-1}),\quad v=\sum\limits_{i=0}^{+\infty}2^{-ip-1}(t_{i+1}^2-t_i^2). $$ Now, $u$ is finite if and only if $\sum\limits_{i=0}^{+\infty}2^{-i}t_i$ converges and $v$ is finite if and only if $\sum\limits_{i=0}^{+\infty}2^{-ip}t_i^2$ does. For every $p\ge2$, one sees that $2^{-ip}t_i^2\le(2^{-i}t_i)^2$, and $\ell^1\subset\ell^2$, hence $u$ finite implies $v$ finite.
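The "rearranging terms" step can be checked mechanically: the antidiagonal $i+j=n$ contains exactly $n-1$ pairs, so $\sum_{i,j\ge1} a_{i+j}^p = \sum_{n\ge2}(n-1)a_n^p$. Here is a finite truncation of this identity, verified exactly (the names and the sample sequence are mine; any positive values would do):

```python
from fractions import Fraction as F

p, N = 2, 40
a = {n: F(1, n * n) for n in range(2, N + 1)}  # sample positive sequence

# all pairs (i, j) with i + j <= N, versus counting each antidiagonal once
double_sum = sum(a[i + j] ** p
                 for i in range(1, N) for j in range(1, N) if i + j <= N)
single_sum = sum((n - 1) * a[n] ** p for n in range(2, N + 1))
assert double_sum == single_sum
```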
{ "language": "en", "url": "https://math.stackexchange.com/questions/57045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Determining which values to use in place of x in functions When solving partial fractions for integrations, solving x for two terms usually isn't all that difficult, but I've been running into problems with three term integration. For example, given $$\int\frac{x^2+3x-4}{x^3-4x^2+4x}\,dx$$ The denominator factored out to $x(x-2)^2$, which resulted in the following formulas $$ \begin{align*} \frac{x^2+3x-4}{x(x-2)^2}&=\frac{A}{x}+\frac{B}{x-2}+\frac{C}{(x-2)^2}\\ x^2+3x-4&= A(x-2)^2+Bx(x-2)+Cx \end{align*} $$ so that when $x=0$, $A=-1$ and when $x=2$, $C=3$. This is where I get stuck, since nothing immediately pops out at me for values that would solve A and C for zero and leave some value for B. How do I find the x-value for a constant that is not immediately apparent?
To find the constants in the rational fraction $$\frac{x^2+3x-4}{x^3-4x^2+4x}=\frac{A}{x}+\frac{B}{x-2}+\frac{C}{(x-2)^2},$$ you may use any set of 3 values of $x$, provided that the denominator $x^3-4x^2+4x\ne 0$. The "standard" method is to compare the coefficients of $$\frac{x^2+3x-4}{x^3-4x^2+4x}=\frac{A}{x}+\frac{B}{x-2}+\frac{C}{(x-2)^2},$$ after multiplying this rational fraction by the denominator $x^3-4x^2+4x=x(x-2)^2$, and solve the resulting linear system in $A,B,C$. Since $$\begin{eqnarray*} \frac{x^{2}+3x-4}{x\left( x-2\right) ^{2}} &=&\frac{A}{x}+\frac{B}{x-2}+\frac{C}{\left( x-2\right) ^{2}} \\ &=&\frac{A\left( x-2\right) ^{2}}{x\left( x-2\right) ^{2}}+\frac{Bx\left( x-2\right) }{x\left( x-2\right) ^{2}}+\frac{Cx}{x\left( x-2\right) ^{2}} \\ &=&\frac{A\left( x-2\right) ^{2}+Bx\left( x-2\right) +Cx}{x\left( x-2\right) ^{2}} \\ &=&\frac{\left( A+B\right) x^{2}+\left( -4A-2B+C\right) x+4A}{x\left( x-2\right) ^{2}}, \end{eqnarray*}$$ if we equate the coefficients of the polynomials $$x^{2}+3x-4\equiv\left( A+B\right) x^{2}+\left( -4A-2B+C\right) x+4A,$$ we have the system $$\begin{eqnarray*} A+B &=&1 \\ -4A-2B+C &=&3 \\ 4A &=&-4, \end{eqnarray*}$$ whose solution is $$\begin{eqnarray*} A &=&-1 \\ B &=&2 \\ C &=&3. \end{eqnarray*}$$ Alternatively you could use the method indicated in parts A and B, as an example. A. We can multiply $f(x)$ by $x=x-0$ and $\left( x-2\right) ^{2}$ and let $x\rightarrow 0$ and $x\rightarrow 2$.
Since $$\begin{eqnarray*} f(x) &=&\frac{P(x)}{Q(x)}=\frac{x^{2}+3x-4}{x^{3}-4x^{2}+4x} \\ &=&\frac{x^{2}+3x-4}{x\left( x-2\right) ^{2}} \\ &=&\frac{A}{x}+\frac{B}{x-2}+\frac{C}{\left( x-2\right) ^{2}},\qquad (\ast ) \end{eqnarray*}$$ if we multiply $f(x)$ by $x$ and let $x\rightarrow 0$, we find $A$: $$A=\lim_{x\rightarrow 0}xf(x)=\lim_{x\rightarrow 0}\frac{x^{2}+3x-4}{\left( x-2\right) ^{2}}=\frac{-4}{4}=-1.$$ And we find $C$, if we multiply $f(x)$ by $\left( x-2\right) ^{2}$ and let $x\rightarrow 2$: $$C=\lim_{x\rightarrow 2}\left( x-2\right) ^{2}f(x)=\lim_{x\rightarrow 2}\frac{x^{2}+3x-4}{x}=\frac{2^{2}+6-4}{2}=3.$$ B. Now observing that $$P(x)=x^{2}+3x-4=\left( x+4\right) \left( x-1\right),$$ we can find $B$ by making $x=1$ and evaluating $f(1)$ on both sides of $(\ast)$, with $A=-1,C=3$: $$f(1)=0=2-B.$$ So $B=2$. Or we could make $x=-4$ in $(\ast)$: $$f(-4)=0=\frac{1}{3}-\frac{1}{6}B.$$ We do obtain $B=2$. Thus $$\frac{x^{2}+3x-4}{x\left( x-2\right) ^{2}}=-\frac{1}{x}+\frac{2}{x-2}+\frac{3}{\left( x-2\right) ^{2}}\qquad (\ast \ast )$$ Remark: If the denominator has complex roots, then an expansion as above is not possible. For instance $$\frac{x+2}{x^{3}-1}=\frac{x+2}{(x-1)(x^{2}+x+1)}=\frac{A}{x-1}+\frac{Bx+C}{x^{2}+x+1}.$$ You should find $A=1,B=C=-1$: $$\frac{x+2}{x^{3}-1}=\frac{1}{x-1}-\frac{x+1}{x^{2}+x+1}.$$
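A quick exact-arithmetic confirmation of $(\ast\ast)$, including the values $f(1)=0$ and $f(-4)=0$ used above to find $B$ (the helper names are mine):

```python
from fractions import Fraction as F

def lhs(x):
    x = F(x)
    return (x ** 2 + 3 * x - 4) / (x * (x - 2) ** 2)

def rhs(x):
    x = F(x)
    return -1 / x + 2 / (x - 2) + 3 / (x - 2) ** 2

# agreement at the roots of the numerator and at arbitrary sample points
for x in (-4, -3, -1, 1, 3, 5, F(1, 2), F(7, 3)):
    assert lhs(x) == rhs(x)
assert lhs(1) == 0 and lhs(-4) == 0
```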
{ "language": "en", "url": "https://math.stackexchange.com/questions/57114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Optimum solution to a Linear programming problem If we have a feasible space for a given LPP (linear programming problem), how is it that its optimum solution lies on one of the corner points of the graphical solution? (I am here concerned only with those LPP's which have a graphical solution with more than one corner/end point.) I was asked to take this as a lemma in the class, but got curious about the proof. Any help is sincerely appreciated.
It's a special instance of the following general theorem: The maximum of a convex function $f$ on a compact convex set $S$ is attained in an extreme point of $S$. An extreme point of a convex set $S$ is a point in $S$ which does not lie in any open line segment joining two points of $S$. In your case, the "corner points of the graphical solution" are the only extreme points of the feasible region. It's easy to see that the feasible region of a LPP is convex. It's not always compact, and some LPP indeed have no solution despite having a nonempty feasible region. The linear objective function is clearly convex. If it is minimized instead of maximized, this can be reformulated as maximizing the negative objective function. I quite like the cited theorem, because it highlights that optimization can lead to problematic results for a large class of situations (because a solution at the boundary of the feasible region will become infeasible under perturbations, so it is not a robust solution). It's also similar to the bang-bang principle in optimal control.
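A toy numerical illustration (not a proof; the polygon and the objective are my own choices): for a small 2-D LP, random feasible points never beat the best corner point:

```python
import random

# feasible region: x >= 0, y >= 0, x + 2y <= 4, 3x + y <= 6
def feasible(x, y):
    return x >= 0 and y >= 0 and x + 2 * y <= 4 and 3 * x + y <= 6

vertices = [(0, 0), (2, 0), (0, 2), (8 / 5, 6 / 5)]  # corner points

def obj(x, y):                     # linear objective to maximize
    return x + y

best_vertex = max(obj(x, y) for x, y in vertices)

random.seed(0)
for _ in range(10000):
    x, y = random.uniform(0, 2), random.uniform(0, 2)
    if feasible(x, y):
        assert obj(x, y) <= best_vertex + 1e-9
```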
{ "language": "en", "url": "https://math.stackexchange.com/questions/57173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 1 }
Stalk of coherent sheaf vanishing Is the following true: if I have a coherent sheaf $F$ on a noetherian scheme $X$ with a point $x$, and the stalk $F_x$ is zero, then is there a neighborhood $U$ of $x$ such that the restriction of $F$ to $U$ is zero? Thank you
Yes. Since the problem is local, you can assume that $X$ is affine: $X = \mathrm{Spec} A$ and $F = \tilde{M}$, where $A$ is a noetherian ring and $M$ is a finite $A$-module. Let $P$ be a prime such that $M_P = 0$. Let $\{ x_1, \dots, x_n \}$ be a set of generators of $M$ as an $A$-module. Then there exist $s_i \in A \setminus P$ such that $s_i x_i = 0$. Pick $s = s_1 \cdots s_n$; then $F \vert_{D(s)} = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/57233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question about all the homomorphisms from $\mathbb{Z}$ to $\mathbb{Z}$ An exercise in "A first course in Abstract Algebra" asked the following: Describe all ring homomorphisms from the ring $\mathbb{Z},+,\cdot$ to itself. I observed that for any such ring homomorphism the following has to hold: $$\varphi(1) = \varphi(1\cdot 1) = \varphi(1) \cdot \varphi(1)$$ In $\mathbb{Z}$ only two numbers exist whose square equals itself: 0 and 1. When $\varphi(1) = 0$ then $\varphi = 0$, since $\forall n \in \mathbb{Z}$: $\varphi(n) = \varphi(n \cdot 1) = \varphi(n) \cdot \varphi(1) = \varphi(n) \cdot 0 = 0$. Now, when $\varphi(1) = 1$ I showed that $\varphi(n) = n$ using induction. Base case: $n = 1$, which is true by our assumption. Induction hypothesis: $\varphi(m) = m$ for $m < n$. Induction step: $\varphi(n) = \varphi((n-1) + 1) = \varphi(n-1) + \varphi(1) = n-1 + 1 = n$. Now I wonder whether you could show that $\varphi(n) = n$ when $\varphi(1) = 1$ without using induction, which seems overkill for this exercise. EDIT: Forgot about the negative n's. Since $\varphi$ is also a group homomorphism under $\mathbb{Z},+$, we know that $\varphi(-n) = -\varphi(n)$. Thus, $$\varphi(-n) = -\varphi(n) = -n$$
I assume you are talking about Fraleigh's book. If so, he does not require that a ring homomorphism maps the multiplicative identity to itself. Follow his hint by concentrating on the possible values for $f(1)$. If $f$ is a (group) homomorphism for the group $(\mathbb{Z},+)$ and $f(1)=a$, then $f$ will reduce to multiplication by $a$. For what values of $a$ will you get a ring homomorphism? You will need to have $(mn)a=(ma)(na)$ for all pairs $(m,n)$ of integers. What can you conclude about the value of $a$? You still won't have a lot of homomorphisms.
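Following the hint, a brute-force search over a finite window (so only suggestive, not a proof; names mine) for the values $a = f(1)$ that make multiplication by $a$ respect products:

```python
# need (m*n)*a == (m*a)*(n*a) for all integers m, n; test a finite window
candidates = [a for a in range(-50, 51)
              if all((m * n) * a == (m * a) * (n * a)
                     for m in range(-10, 11)
                     for n in range(-10, 11))]
assert candidates == [0, 1]   # only the zero map and the identity survive
```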
{ "language": "en", "url": "https://math.stackexchange.com/questions/57279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Definition of manifold From Wikipedia: The broadest common definition of manifold is a topological space locally homeomorphic to a topological vector space over the reals. A topological manifold is a topological space locally homeomorphic to a Euclidean space. In both concepts, a topological space is homeomorphic to another topological space with richer structure than just topology. On the other hand, the homeomorphic mapping is only in the sense of topology without referring to the richer structure. I was wondering what purpose it is to map from a set to another with richer structure, while the mapping preserves the less rich structure shared by both domain and codomain? How is the extra structure on the codomain going to be used? Is it to induce the extra structure from the codomain to the domain via the inverse of the mapping? How is the induction like for a manifold and for a topological manifold? Thanks!
The reason to use topological vector spaces as model spaces (for differential manifolds, that is) is that you can define the differential of a curve in a topological vector space. And you can use this to define the differential of curves in your manifold, i.e. do differential geometry. For more details see my answer here. Every finite-dimensional topological vector space of dimension $n$ is isomorphic to $\mathbb{R}^n$ with its canonical topology, so there is not much choice. But in infinite dimensions things get really interesting :-)
{ "language": "en", "url": "https://math.stackexchange.com/questions/57333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Homology of $\mathbb{R}^3 - S^1$ I've been looking for a space on the internet for which I cannot write down the homology groups off the top of my head so I came across this: Compute the homology of $X : = \mathbb{R}^3 - S^1$. I thought that if I stretch $S^1$ to make it very large then it looks like a line, so $\mathbb{R}^3 - S^1 \simeq (\mathbb{R}^2 - (0,0)) \times \mathbb{R}$. Then squishing down this space and retracting it a bit will make it look like a circle, so $(\mathbb{R}^2 - (0,0)) \times \mathbb{R} \simeq S^1$. Then I compute $ H_0(X) = \mathbb{Z}$ $ H_1( X) = \mathbb{Z}$ $ H_n(X) = 0 (n > 1)$ Now I suspect something is wrong here because if you follow the link you will see that the OP computes $H_2(X,A) = \mathbb{Z}$. I'm not sure why he computes the relative homologies but if the space is "nice" then the relative homologies should be the same as the absolute ones, so I guess my reasoning above is flawed. Maybe someone can point out to me what and then also explain to me when $H(X,A) = H(X)$. Thanks for your help! Edit $\simeq$ here means homotopy equivalent.
Consider $X = \mathbb{R}^3 \setminus (S^1 \times \{ 0 \}) \subseteq \mathbb{R}^3$, $U = X \setminus z \text{-axis}$ and $V = B(0,1/2) \times \mathbb{R}$, where $B(0, 1/2)$ is the open ball with radius $1/2$ and center in the origin of $\mathbb{R}^2$. It is clear that $\{ U, V \}$ is an open cover of $X$. Now let's compute the homotopy type of $U$. Consider a deformation $f$ of $((0, +\infty) \times \mathbb{R}) \setminus \{(1,0) \}$ onto the circle of center $(1,0)$ and radius $1/2$. Now revolve $f$ around the $z$-axis to obtain a deformation of $U$ onto the doughnut ($2$-torus) of radii $R=1$ and $r = 1/2$. Since $V$ is contractible and $U \cap V$ is homotopically equivalent to $S^1$, the Mayer-Vietoris sequence gives the solution: $H_0(X) = H_1(X) = H_2(X) = \mathbb{Z}$ and $H_i(X) = 0$ for $i \geq 3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/57380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
prove cardinality rule $|A-B|=|B-A|\rightarrow|A|=|B|$ I need to prove this $|A-B|=|B-A|\rightarrow|A|=|B|$ I managed to come up with this: let $f:A-B\to B-A$ where $f$ is bijective. Then define $g\colon A\to B$ as follows: $$g(x)=\begin{cases} f(x)& x\in (A-B) \\ x& \text{otherwise} \\ \end{cases}$$ but I'm not managing to prove this function is surjective. Is it not? Or am I on the right path? If so, how do I prove it? Thanks
Note that $$\begin{align} |A| = |A \cap B| + |A \cap B^c| = |B \cap A| + |B \cap A^c| = |B|. \end{align}$$ Here $E^c$ denotes the complement of the set $E$ in the universal space $X$.
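A finite-set sanity check of both the identity and the implication (random subsets of a small universe; names mine):

```python
import random

random.seed(0)
U = range(20)
for _ in range(500):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    # the identity |A| = |A ∩ B| + |A \ B| always holds
    assert len(A) == len(A & B) + len(A - B)
    # and the implication |A \ B| = |B \ A|  =>  |A| = |B|
    if len(A - B) == len(B - A):
        assert len(A) == len(B)
```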
{ "language": "en", "url": "https://math.stackexchange.com/questions/57441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Incorrect manipulation of limits Here's my manipulation of a particular limit: $\displaystyle \lim\limits_{h\rightarrow 0}\Big[\frac{f(x+h)g(x) - f(x)g(x+h)}{h}\Big]$ Using the properties of limits: $\displaystyle \begin{align*} &=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)g(x) - f(x)g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[f(x)g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{\lim\limits_{h\rightarrow 0}\Big[f(x+h)\Big]\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[f(x)\Big]\lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{f(x)\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - f(x)\lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{f(x)\Big(\lim\limits_{h\rightarrow 0}\Big[g(x)\Big] - \lim\limits_{h\rightarrow 0}\Big[g(x+h)\Big]\Big)}{\lim\limits_{h\rightarrow 0}h}\\ &=\frac{f(x)\Big(\lim\limits_{h\rightarrow 0}\Big[g(x) - g(x+h)\Big]\Big)}{\lim\limits_{h\rightarrow 0}h}\\ &=f(x)\lim\limits_{h\rightarrow 0}\Big(\frac{g(x) - g(x+h)}{h}\Big)\\ &=-f(x)g'(x)\end{align*}$ I'm pretty sure that my end result is incorrect, as I've used arbitrary functions for $f(x)$ and $g(x)$ and it didn't support my conclusion. I think that the factoring of $f(x)$ might be what is incorrect in my manipulation, but I'm not 100% sure. Could someone explain to me what I did wrong and why it is wrong? Which one of the limit "axioms" did I use incorrectly? Thank you.
The answers already posted fully answer your question. So what follows is not an answer to your question, but it may be helpful. Let us assume that $f$ and $g$ are differentiable at $x$. Note that $$f(x+h)g(x) - f(x)g(x+h)= f(x+h)g(x)+(f(x)g(x)-f(x)g(x))-f(x)g(x+h).$$ We have added $0$ in the middle, which is harmless. A trick that looks very similar was undoubtedly used in your book or notes to prove the product rule for differentiation. Rearranging a bit, and with some algebra, we find that $$f(x+h)g(x) - f(x)g(x+h)=(f(x+h)-f(x))g(x)-f(x)(g(x+h) -g(x)),$$ and therefore $$\frac{f(x+h)g(x) - f(x)g(x+h)}{h}=\frac{(f(x+h)-f(x))g(x)}{h}-\frac{f(x)(g(x+h) -g(x))}{h}.$$ The rest is up to you. Added stuff, for the intuition: The following calculation is way too informal, but will tell you more about what's really going on than the mysterious trick. When $h$ is close to $0$, $$f(x+h) \approx f(x)+hf'(x)$$ with the approximation error going to $0$ faster than $h$. Similarly, $$g(x+h) \approx g(x)+hg'(x).$$ Substitute these approximations into the top. Simplify. Something very pretty happens!
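To see numerically that the correct value of the limit is $f'(x)g(x)-f(x)g'(x)$ rather than $-f(x)g'(x)$, here is a quick check with arbitrarily chosen $f$, $g$ and a small $h$:

```python
def limit_quotient(f, g, x, h):
    """The difference quotient from the question, before letting h -> 0."""
    return (f(x + h) * g(x) - f(x) * g(x + h)) / h

# With f(x) = x^2 and g(x) = x^3 at x = 2, the true limit is
# f'(2)g(2) - f(2)g'(2) = 4*8 - 4*12 = -16, while -f(2)g'(2) = -48.
f = lambda x: x ** 2
g = lambda x: x ** 3
approx = limit_quotient(f, g, 2.0, 1e-7)
assert abs(approx - (-16.0)) < 1e-3   # matches f'(x)g(x) - f(x)g'(x)
assert abs(approx - (-48.0)) > 1.0    # does not match -f(x)g'(x)
```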
{ "language": "en", "url": "https://math.stackexchange.com/questions/57504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Proof for divisibility rule for palindromic integers of even length I am studying for a test and came across this in my practice materials. I can prove it simply for some individual cases, but I don't know where to start to prove the full statement. Prove that every palindromic integer in base $k$ with an even number of digits is divisible by $k+1$.
HINT $\rm\ \ mod\ \ x+1:\ \ f(x) + x^{n+1}\:(x^n\ f(1/x))\ \equiv\ f(-1) - f(-1)\:\equiv\: 0$ Remark $\ \ $ It is simple to verify that $\rm\ x^n\ f(1/x)\ $ is the reversal of a polynomial $\rm\:f\:$ of degree $\rm\:n\:,\:$ therefore the above is the general palindromic polynomial with even number of coefficients. See also the closely related question.
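The divisibility claim is easy to test empirically; a brute-force check over a few arbitrarily chosen bases:

```python
# Empirical check: every base-k palindrome with an even number of digits
# is divisible by k + 1.  The bases and the range are arbitrary choices.
def digits(n, k):
    d = []
    while n:
        d.append(n % k)
        n //= k
    return d[::-1]

for k in (2, 3, 10, 16):
    for n in range(1, 20000):
        d = digits(n, k)
        if len(d) % 2 == 0 and d == d[::-1]:
            assert n % (k + 1) == 0, (n, k)
```

For base $10$ this is the familiar fact that even-length palindromes like $1221$ are divisible by $11$.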
{ "language": "en", "url": "https://math.stackexchange.com/questions/57551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Integrate $\int\limits_{0}^{1} \frac{\log(x^{2}-2x \cos a+1)}{x} dx$ How do I solve this: $$\int\limits_{0}^{1} \frac{\log(x^{2}-2x \cos{a}+1)}{x} \ dx$$ Integration by parts is the only thing which I could think of, clearly that seems cumbersome. Substitution also doesn't work.
Please see $\textbf{Problem 4.30}$ in the following book ($\text{solutions are also given at the end}$):

* *The Math Problems Notebook*, by Valentin Boju and Louis Funar.
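In case the book is not at hand, a standard route (not necessarily the book's) is the expansion $\log(1-2x\cos a+x^2)=-2\sum_{n\ge1}\frac{\cos(na)}{n}x^n$, which after term-by-term integration gives $-2\sum_{n\ge1}\frac{\cos(na)}{n^2}$; using the Fourier series $\sum_{n\ge1}\frac{\cos(na)}{n^2}=\frac{\pi^2}{6}-\frac{\pi a}{2}+\frac{a^2}{4}$ (valid for $0\le a\le 2\pi$), the integral evaluates to $\pi a-\frac{a^2}{2}-\frac{\pi^2}{3}$. A numerical cross-check of that closed form:

```python
import math

def integrand(x, a):
    return math.log(x * x - 2 * x * math.cos(a) + 1) / x

def numeric_integral(a, n=100000):
    # midpoint rule on (0, 1]; the integrand extends continuously to x = 0
    h = 1.0 / n
    return sum(integrand((i + 0.5) * h, a) for i in range(n)) * h

def closed_form(a):
    # pi*a - a^2/2 - pi^2/3, valid for 0 <= a <= 2*pi
    return math.pi * a - a * a / 2 - math.pi ** 2 / 3

for a in (0.5, math.pi / 2, 2.0):
    assert abs(numeric_integral(a) - closed_form(a)) < 1e-6
```

As a sanity check, $a=\pi/2$ gives $\int_0^1 \frac{\log(1+x^2)}{x}\,dx=\frac{\pi^2}{24}$, which matches the closed form.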
{ "language": "en", "url": "https://math.stackexchange.com/questions/57607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Transitive graph such that the stabilizer of a point has three orbits I am looking for an example of a finite graph such that its automorphism group is transitive on the set of vertices, but the stabilizer of a point has exactly three orbits on the set of vertices. I can't find such an example. Anyone has a suggestion?
Consider the Petersen graph. Its automorphism group is $S_5$, which acts on the $2$-subsets of $\{1,\ldots,5\}$ (these can be seen as the vertices of the graph); this action is clearly transitive. The stabilizer of the vertex $\{1,2\}$ preserves the size of a vertex's intersection with $\{1,2\}$, which can be $2$, $1$ or $0$, and it acts transitively on each of these three classes, so it has exactly three orbits.
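A small enumeration makes the orbit count concrete (the transitivity of the stabilizer $S_2\times S_3$ on each class is taken for granted here):

```python
from itertools import combinations

# vertices of the Petersen graph = 2-subsets of {1,...,5};
# the stabilizer of {1,2} preserves the intersection size with {1,2}
vertices = [frozenset(c) for c in combinations(range(1, 6), 2)]
base = frozenset({1, 2})

orbits = {}
for v in vertices:
    orbits.setdefault(len(v & base), []).append(v)

assert len(vertices) == 10
# intersection sizes 2, 1, 0 give classes of sizes 1, 6, 3: the vertex
# itself, the six vertices meeting {1,2}, and its three neighbours
assert sorted(len(o) for o in orbits.values()) == [1, 3, 6]
```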
{ "language": "en", "url": "https://math.stackexchange.com/questions/57659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Convergence of sequence points to the point of accumulation Wikipedia says: Every finite interval or bounded interval that contains an infinite number of points must have at least one point of accumulation. If a bounded interval contains an infinite number of points and only one point of accumulation, then the sequence of points converge to the point of accumulation. [1] How could i imagine a bounded interval of infinite points having a single limit point and that sequence of interval's points converge to that limit point. Any examples? [1] http://en.wikipedia.org/wiki/Limit_point Thank you.
What about: $$\left \{\frac{1}{n}\Bigg| n\in\mathbb Z^+ \right \}\cup\{0\}$$ This set (or sequence $a_0=0,a_n=\frac{1}{n}$) is bounded in $[0,1]$ and has only one limit point, namely $0$. A slightly more complicated example would be: $$\left \{\frac{(-1)^n}{n}\Bigg| n\in\mathbb Z^+ \right \}\cup\{0\}$$ in the interval $[-1,1]$. Again the only limit point is $0$, but this time we're converging to it from "both sides". I believe that your confusion arises from misreading the text. The claim is that if within a bounded interval (in these examples, we can take $[-1,1]$) our set (or sequence) has infinitely many points and only one accumulation point, then it is the limit of the sequence. There is no claim that the interval itself has only this sequence, or only one limit/accumulation point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/57699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Feller continuity of the stochastic kernel Given a metric space $X$ with a Borel sigma-algebra, the stochastic kernel $K(x,B)$ is such that $x\mapsto K(x,B)$ is a measurable function and a $B\mapsto K(x,B)$ is a probability measure on $X$ for each $x$ Let $f:X\to \mathbb R$. We say that $f\in \mathcal C(B)$ if $f$ is continuous and bounded on $B$. Weak Feller continuity of $K$ means that if $f\in\mathcal C(X)$ then $F\in\mathcal C(X)$ where $$ F(x):=\int\limits_X f(y)K(x,dy). $$ I wonder if it implies that if $g\in \mathcal C(B)$ then $$ G(x):=\int\limits_Bg(y)K(x,dy) $$ also belongs to $\mathcal C(B)$?
To make it clearer, you can re-write $ G(x) = \int\limits_X g(y)\mathbf{1}_B(y) K(x,{\rm d} y)$. Now in general $g{\mathbf 1}_B$ is not continuous anymore so as Nate pointed out you should not expect $G$ to be such. However, if you take a continuous $g $ with $\overline{{\rm support}(g)} \subsetneq B$ then (I let you :) show this) $g{\mathbf 1}_B$ is still continuous and so is $G$ then.
{ "language": "en", "url": "https://math.stackexchange.com/questions/57816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Improper Double Integral Counterexample Let $f: \mathbf{R}^2\to \mathbf{R}$. I want to integrate $f$ over the entire first quadrant, call $D$. Then by definition we have $$\int \int_D f(x,y) dA =\lim_{R\to[0, \infty]\times[0, \infty]}\int \int_R f(x,y) dA$$ where $R$ is a rectangle. I remember vaguely that the above is true if $f$ is positive. In other words, if $f$ is positive, then the shape of the rectangle does not matter. So this brings me to my question: give a function $f$ such that the shape of the rectangles DO MATTER when evaluating the improper double integral.
Let $f$ be $1$ below the diagonal $y=x$ and $-1$ above it.
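To see the shape dependence concretely: for this $f$, the integral over $[0,s]\times[0,t]$ with $t\le s$ equals $st-t^2$ by elementary geometry, so square windows give $0$ while $2{:}1$ rectangles blow up. A numerical sketch (midpoint rule, with ties on the diagonal counted as $0$):

```python
def box_integral(s, t, n=400):
    """Midpoint-rule integral over [0, s] x [0, t] of the function that is
    +1 below the diagonal y = x, -1 above it (and 0 on it)."""
    total = 0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) * s / n
            y = (j + 0.5) * t / n
            total += (x > y) - (x < y)
    return total * (s * t) / (n * n)

# square windows: always 0; 2:1 windows: 2t*t - t^2 = t^2, which -> infinity
for t in (1.0, 2.0, 4.0):
    assert abs(box_integral(t, t)) < 1e-9
    assert abs(box_integral(2 * t, t) - t * t) < 1e-6 * t * t
```

So as the rectangles grow, squares give limit $0$ while $2{:}1$ rectangles diverge: the improper integral depends on the shape.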
{ "language": "en", "url": "https://math.stackexchange.com/questions/57940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Determining if a quadratic polynomial is always positive Is there a quick and systematic method to find out if a quadratic polynomial is always positive or may have positive and negative or always negative for all values of its variables? Say, for the quadratic inequality $$3x^{2}+8xy+5xz+2yz+7y^{2}+2z^{2}>0$$ without drawing a graph to look at its shape, how can I find out if this form is always greater than zero or has negative results or it is always negative for all non-zero values of the variables? I tried randomly substituting values into the variables but I could never be sure if I had covered all cases. Thanks for any help.
This is what Sylvester's criterion is for. Write your quadratic as $v^T A v$ where $v$ is a vector of variables $(x_1\ x_2\ \cdots\ x_n)$ and $A$ is a matrix of constants. For example, in your case, you are interested in $$\begin{pmatrix} x & y & z \end{pmatrix} \begin{pmatrix} 3 & 4 & 5/2 \\ 4 & 7 & 1 \\ 5/2 & 1 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}$$ Observe that the off diagonal entries are half the coefficients of the quadratic. The standard terminology is that $A$ is "positive definite" if this quantity is positive for all nonzero $v$. Sylvester's criterion says that $A$ is positive definite if and only if the determinants of the top-left $k \times k$ submatrix are positive for $k=1$, $2$, ..., $n$. In our case, we need to test $$\det \begin{pmatrix} 3 \end{pmatrix} =3 \quad \det \begin{pmatrix}3 & 4 \\ 4 & 7\end{pmatrix} = 5 \quad \det \begin{pmatrix} 3 & 4 & 5/2 \\ 4 & 7 & 1 \\ 5/2 & 1 & 2 \end{pmatrix} = -67/4.$$ Since the last quantity is negative, Sylvester's criterion tells us that this quadratic is NOT positive definite.
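Here is the same computation done numerically (NumPy's determinant is used for the leading principal minors; the eigenvector at the end just exhibits a vector on which the form is negative):

```python
import numpy as np

# the matrix of the quadratic form 3x^2 + 8xy + 5xz + 2yz + 7y^2 + 2z^2
A = np.array([[3.0, 4.0, 2.5],
              [4.0, 7.0, 1.0],
              [2.5, 1.0, 2.0]])

# leading principal minors, as in Sylvester's criterion
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]
assert abs(minors[0] - 3) < 1e-9
assert abs(minors[1] - 5) < 1e-9
assert abs(minors[2] - (-67 / 4)) < 1e-9
assert not all(m > 0 for m in minors)      # so A is not positive definite

# exhibit a vector on which the form is negative: an eigenvector of the
# smallest eigenvalue, which must be negative since det(A) < 0, tr(A) > 0
vals, vecs = np.linalg.eigh(A)
v = vecs[:, 0]
assert v @ A @ v < 0
```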
{ "language": "en", "url": "https://math.stackexchange.com/questions/58010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 1 }
ODE problem shooting Please help me spot my mistake: I have an equation $$(u(x)^{-2} + 4u'(x)^2)^{\frac{1}{2}} - u'(x)\frac{d}{du'}(u(x)^{-2} + 4u'(x)^2)^{\frac{1}{2}} = k$$ where $k$ is a constant. I am quite sure that if I take $u(x) = \sqrt{y(x)}$ I would have the brachistochrone equation, hence I am expecting a cycloid equation if I let $u(x) = \sqrt{y(x)}$ in the result, but I don't seem to get it :( My workings are as follows: $$u(x)^{-2} + 4u'(x)^2- 4u'(x)^2 = k \times (u(x)^{-2} + 4u'(x)^2)^{\frac{1}{2}}$$ $$\implies u(x)^{-4} = k^2 \times (u(x)^{-2} + 4u'(x)^2)$$ $$\implies u'(x)= \frac{1}{2k}\sqrt{u(x)^{-4} - k^2u(x)^{-2}}$$ $$\implies \int \frac{1}{u \sqrt{u^2 - k^2}} du = \int \frac{1}{2k} dx$$ Change variable: let $v = \frac{u}{k}$ $$\implies \int \frac{1}{v \sqrt{v^2 - 1}} dv = \frac{x+a}{2}$$, where $a$ is a constant $$\implies \operatorname{arcsec}(v) = \frac{x+a}{2} $$ $$\implies \operatorname{arcsec}\left(\frac{\sqrt{y}}{k}\right) = \frac{x+a}{2}$$ which does not seem to describe a cycloid... Help would be very much appreciated! Thanks.
In the line after $$ 2k u' = \sqrt{ u^{-4} - k^2 u^{-2}} $$ (the fourth line of your computations), the division went wrong: since $\sqrt{u^{-4}-k^2u^{-2}}=\frac{1}{u^2}\sqrt{1-k^2u^2}$, the integrand on the LHS should be $$ \int \frac{u^2}{\sqrt{1-k^2 u^2}} \mathrm{d}u $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/58067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solve this Inequality I am not sure how to solve this equation. Any ideas $$(1+n) + 1+(n-1) + 1+(n-2) + 1+(n-3) + 1+(n-4) + \cdots + 1+(n-n) \ge 1000$$ Assuming $1+n = a$ The equation can be made to looks like $$a+(a-1)+(a-2)+(a-3)+(a-4)+\cdots+(a-n) \ge 1000$$ How to proceed ahead, or is there another approach to solve this?
Here is one (probably not the most elegant) way to solve this. Either write the left side as $(n+1)a-\frac{n(n+1)}{2}$ and then substitute $a=n+1$, or observe directly after replacing $a$ by $n+1$ in your second inequality that the left side is $\sum_{i=1}^{n+1}i=\frac{(n+1)(n+2)}{2}$. Your inequality is then $$(n+1)^2+(n+1)\geq 2000.$$ You can, for example, consider the polynomial $P(X)=X^2+X-2000$ and study its sign, which should not be a problem.
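A quick brute-force confirmation of where the sign of $P$ changes (using the bound $1000$ on the right-hand side of the original inequality):

```python
# smallest n with (n+1)(n+2)/2 >= 1000, equivalently (n+1)^2 + (n+1) >= 2000
n = 0
while (n + 1) * (n + 2) // 2 < 1000:
    n += 1
assert n == 44
assert 45 * 46 // 2 == 1035    # n = 44 works
assert 44 * 45 // 2 == 990     # n = 43 does not
```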
{ "language": "en", "url": "https://math.stackexchange.com/questions/58106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Normal subgroups vs characteristic subgroups It is well known that characteristic subgroups of a group $G$ are normal. Is the converse true?
I think the simplest example is the Klein four-group, which you can think of as a direct sum of two cyclic groups of order two, say $A\oplus B$. Because it is abelian, all of its subgroups are normal. However, there is an automorphism which interchanges the two direct summands $A$ and $B$, which shows that $A$ (and $B$) are normal, but not characteristic. (In fact, the other non-trivial proper subgroup, generated by the product of the generators of $A$ and $B$ also works.)
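This can be verified by brute force: enumerate all bijections of the Klein four-group fixing the identity and keep the homomorphisms. All six permutations of the three involutions turn out to be automorphisms, including one that swaps the generators of $A$ and $B$:

```python
from itertools import permutations

# Klein four-group as Z/2 x Z/2 with componentwise XOR
elems = [(0, 0), (1, 0), (0, 1), (1, 1)]
add = lambda u, v: (u[0] ^ v[0], u[1] ^ v[1])

autos = []
for perm in permutations(elems[1:]):           # identity must map to itself
    f = dict(zip(elems[1:], perm))
    f[(0, 0)] = (0, 0)
    if all(f[add(u, v)] == add(f[u], f[v]) for u in elems for v in elems):
        autos.append(f)

assert len(autos) == 6                         # Aut(V4) is S3, of order 6
swap = [f for f in autos if f[(1, 0)] == (0, 1) and f[(0, 1)] == (1, 0)]
assert swap                                    # an automorphism exchanging A and B
```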
{ "language": "en", "url": "https://math.stackexchange.com/questions/58173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How I can use the mean value theorem in this problem? Use the Mean Value Theorem to estimate the value of $\sqrt{80}$. and how should we take $f(x)$? Thanks in advance. Regards
You want to estimate a value of $f(x) = \sqrt{x}$, so that's a decent place to start. The mean value theorem says that there's an $a \in (80, 81)$ such that $$ \frac{f(81) - f(80)}{81 - 80} = f'(a). $$ I don't know what $a$ is, but you know $f(81)$ and you hopefully know how to write down $f'$. How small can $f'(a)$ be?
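Putting numbers to it (the choice $f'(81)=\frac{1}{18}$ for the estimate is the natural one, since $81$ is the nearby perfect square):

```python
import math

estimate = 9 - 1 / 18            # using f'(a) ≈ f'(81) = 1/18
actual = math.sqrt(80)
assert abs(estimate - actual) < 1e-3

# the MVT bounds: f'(81) < f'(a) < f'(80),
# i.e. 1/18 < 9 - sqrt(80) < 1/(2*sqrt(80))
assert 1 / 18 < 9 - actual < 1 / (2 * actual)
```

So $\sqrt{80}\approx 9-\frac{1}{18}\approx 8.9444$, accurate to about $2\cdot10^{-4}$.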
{ "language": "en", "url": "https://math.stackexchange.com/questions/58224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Some questions about codimension 1 (1) "For an affine variety $V$ in $\mathbb{A}^{n}$ whose coordinate ring is a UFD, every closed subvariety of $V$ of codimension 1 is cut out by a single equation." I looked at the proof of this statement, but I cannot see where the UFD hypothesis is used. (2) I would like to see a proof of the following statement: "Any closed subvariety of codimension 1 of a normal affine variety is cut out by a single equation."
As for (1): a closed subvariety of $V$ is defined by a prime ideal $p$ of the coordinate ring $k[V]$ of height $1$. If $k[V]$ is a UFD such a prime ideal is principal. As for (2): the statement is false. The closed subvariety is cut out by a single equation locally but not globally as in the case of a UFD.
{ "language": "en", "url": "https://math.stackexchange.com/questions/58298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Example of infinite field of characteristic $p\neq 0$ Can you give me an example of infinite field of characteristic $p\neq0$? Thanks.
Another construction, using a tool from the formal logic: the ultraproduct. The cartesian product of fields $$P = {\Bbb F}_p\times{\Bbb F}_{p^2}\times{\Bbb F}_{p^3}\times\cdots$$ isn't a field ("isn't a model of..." ) because has zero divisors: $$(0,1,0,1,\cdots)(1,0,1,0\cdots)=(0,0,0,0,\cdots).$$ The solution is taking a quotient: let $\mathcal U$ be a nonprincipal ultrafilter on $\Bbb N$. Define $$(a_1,a_2,\cdots)\sim(b_1,b_2,\cdots)$$ when $$\{n\in\Bbb N\,\vert\, a_n=b_n\}\in\mathcal U.$$ The quotient $F=P/\sim$ will be a infinite field of characteristic $p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/58424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "153", "answer_count": 2, "answer_id": 0 }
Prove convexity/concavity of a complicated function Can anyone help me to prove the convexity/concavity of following complicated function...? I have tried a lot of methods (definition, 1st derivative etc.), but this function is so complicated, and I finally couldn't prove... however, I plotted with many different parameters, it's always concave to $\rho$... $$ f\left( \rho \right) = \frac{1}{\lambda }(M\lambda \phi - \rho (\phi - \Phi )\ln (\rho + M\lambda ) + \frac{1}{{{e^{(\rho + M\lambda )t}}\rho + M\lambda }}\cdot( - (\rho + M\lambda )({e^{(\rho + M\lambda )t}}{\rho ^2}t(\phi - \Phi ) ) $$ $$+ M\lambda (\phi + \rho t\phi - \rho t\Phi )) + \rho ({e^{(\rho + M\lambda )t}}\rho + M\rho )(\phi - \Phi )\ln ({e^{(\rho + M\lambda )t}}\rho + M\lambda ))$$ Note that $\rho > 0$ is the variable, and $M>0, \lambda>0, t>0, \phi>0, \Phi>0 $ are constants with any possible positive values...
I am a newcomer so would prefer to just leave a comment, but alas I see no "comment" button, so I will leave my suggestion here in the answer box. I have often used a Nelder-Mead "derivative free" algorithm (fminsearch in matlab) to minimize long and convoluted equations like this one. If you can substitute the constraint equation $g(\rho)$ into $f(\rho)$ somehow, then you can input this as the objective function into the algorithm and get the minimum, or at least a local minimum. You could also try heuristic methods like simulated annealing or great deluge. Heuristic methods would give you a better chance of getting the global minimum if the solution space has multiple local minima. In spite of their scary name, heuristic methods are actually quite simple algorithms. As for proving the concavity I don't see the problem. You mention in your other post that both $g(\rho)$ and $f(\rho)$ have first and second derivatives, so it should be straightforward, right?
{ "language": "en", "url": "https://math.stackexchange.com/questions/58474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When can a random variable be written as a sum of two iid random variables? Suppose $X$ is a random variable; when do there exist two random variables $X',X''$, independent and identically distributed, such that $$X = X' + X''$$ My natural impulse here is to use Bochner's theorem but that did not seem to lead anywhere. Specifically, the characteristic function of $X$, which I will call $\phi(t)$, must have the property that we can a find a square root of it (e.g., some $\psi(t)$ with $\psi^2=\phi$) which is positive definite. This is as far as I got, and its pretty unenlightening - I am not sure when this can and can't be done. I am hoping there is a better answer that allows one to answer this question merely by glancing at the distribution of $X$.
I think your characteristic function approach is the reasonable one. You take the square root of the characteristic function (the one that is $+1$ at 0), take its Fourier transform, and check that this is a positive measure. In the case where $X$ takes only finitely many values, the characteristic function is a finite sum of exponentials with positive coefficients, and the criterion becomes quite manageable: you need the square root to be a finite sum of exponentials with positive coefficients. More general cases can be more difficult.
{ "language": "en", "url": "https://math.stackexchange.com/questions/58525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 0 }
Elementary central binomial coefficient estimates * *How to prove that $\quad\displaystyle\frac{4^{n}}{\sqrt{4n}}<\binom{2n}{n}<\frac{4^{n}}{\sqrt{3n+1}}\quad$ for all $n$ > 1 ? *Does anyone know any better elementary estimates? Attempt. We have $$\frac1{2^n}\binom{2n}{n}=\prod_{k=0}^{n-1}\frac{2n-k}{2(n-k)}=\prod_{k=0}^{n-1}\left(1+\frac{k}{2(n-k)}\right).$$ Then we have $$\left(1+\frac{k}{2(n-k)}\right)>\sqrt{1+\frac{k}{n-k}}=\frac{\sqrt{n}}{\sqrt{n-k}}.$$ So maybe, for the lower bound, we have $$\frac{n^{\frac{n}{2}}}{\sqrt{n!}}=\prod_{k=0}^{n-1}\frac{\sqrt{n}}{\sqrt{n-k}}>\frac{2^n}{\sqrt{4n}}.$$ By Stirling, $n!\approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n$, so the lhs becomes $$\frac{e^{\frac{n}{2}}}{(2\pi n)^{\frac14}},$$ but this isn't $>\frac{2^n}{\sqrt{4n}}$.
Here is a better estimate of the quantity. I am not going to prove it here, but I will give a reference: $$\frac{1}{2\sqrt{n}} \le {2n \choose n}2^{-2n} \le \frac{1}{\sqrt{2n}}$$ Refer to page 590 of the textbook "Boolean Function Complexity" by Stasys Jukna.
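Both pairs of bounds are easy to verify exactly for small $n$ by clearing the square roots (e.g. $\binom{2n}{n}>4^n/\sqrt{4n}$ iff $4n\binom{2n}{n}^2>16^n$), which avoids any floating-point issues:

```python
from math import comb

# squared/cleared versions of the estimates, checked exactly in integers
for n in range(2, 801):
    c = comb(2 * n, n)
    assert 4 * n * c * c > 16 ** n          # C(2n,n) > 4^n / sqrt(4n)
    assert (3 * n + 1) * c * c < 16 ** n    # C(2n,n) < 4^n / sqrt(3n+1)
    assert 2 * n * c * c <= 16 ** n         # Jukna's upper bound
    # Jukna's lower bound 4^n/(2 sqrt n) clears to the first line (non-strict)
```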
{ "language": "en", "url": "https://math.stackexchange.com/questions/58560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 7, "answer_id": 6 }
The smallest subobject $\sum{A_i}$ containing a family of subobjects {$A_i$} In an Abelian category $\mathcal{A}$, let {$A_i$} be a family of subobjects of an object $A$. How to show that if $\mathcal{A}$ is cocomplete(i.e. the coproduct always exists in $\mathcal{A}$), then there is a smallest subobject $\sum{A_i}$ of $A$ containing all of $A_i$? Surely this $\sum{A_i}$ cannot be the coproduct of {$A_i$}, but I have no clue what it should be.
You are quite right that it can't be the coproduct, since that is in general not a subobject of $A$. Here are two ways of constructing the desired subobject: * *As Pierre-Yves suggested in the comments, the easiest way is to take the image of the canonical map $\bigoplus_i A_i \to A$. This works in any cocomplete category with unique epi-mono factorisation. *Alternatively, the subobject $\sum A_i$ can be constructed by taking the colimit over the semilattice of the $A_i$ and their intersections. This construction can be carried out in any bicomplete category, but is not guaranteed to give a subobject of $A$ unless the category is nice enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/58624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove this number fact Prove that $x \neq 0,y \neq 0 \Rightarrow xy \neq 0$. Suppose $xy = 0$. Then $\frac{xy}{xy} = 1$. Can we say that $\frac{xy}{xy} = 0$ and hence $1 = 0$ which is a contradiction? I thought $\frac{0}{0}$ was undefined.
If $xy=0$, then $x=0$ or $y=0$. So, by contraposition, if it is not the case that "$x=0$ or $y=0$", then it is not the case that $xy=0$. By De Morgan's laws, this becomes: if it is not the case that $x=0$ and it is not the case that $y=0$, then it is not the case that $xy=0$. Moving the negations around, we have: "if $x$ is not equal to $0$ and $y$ is not equal to $0$, then $xy$ is not equal to $0$." As for your argument: if $xy=0$, we can't say that $xy/xy=1$. Division is not defined for every pair of rationals, only when the denominator is non-zero, so $xy/xy=1$ holds only on the condition that "/" is defined. Hence the step "Suppose $xy=0$. Then $xy/xy=1$" isn't correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/58667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 2 }
Is $\mathbb{Q}[2^{1/3}]$ a field? Is $\mathbb{Q}[2^{1/3}]=\{a+b2^{1/3}+c2^{2/3};a,b,c \in \mathbb{Q}\}$ a field? I have checked that $b2^{1/3}$ and $c2^{2/3}$ both have inverses, $\frac{2^{2/3}}{2b}$ and $\frac{2^{1/3}}{2c}$, respectively. There are some elements with $a,b,c \neq 0$ that have inverses, as $1+1*2^{1/3}+1*2^{2/3}$, whose inverse is $2^{1/3}-1$. My problem is that is that I can't seem to find a formula for the inverse, but I also can't seem to find someone who doesn't have an inverse. Thanks for your time.
A neat way to confirm that it is a field: Since $\mathbb{Q}$ is a field, $\mathbb{Q}[x]$ is a PID. $\mathbb{Q}[2^{1/3}] \cong \mathbb{Q}[x] / (x^3 - 2)$. Now, $x^3 - 2$ is irreducible over $\mathbb{Q}$, since if it weren't, there would be a rational root to $x^3 - 2$. Because $\mathbb{Q}[x]$ is a PID and the polynomial is irreducible over $\mathbb{Q}$, $(x^3 - 2)$ is a maximal ideal in $\mathbb{Q}[x]$. By the Correspondence Theorem of Ideals, we see that as $(x^3 - 2)$ is maximal, $\mathbb{Q}[x] / (x^3 - 2)$ must be a field.
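For the concrete inverse mentioned in the question, one can compute in $\mathbb{Q}[x]/(x^3-2)$ directly; the multiplication rule below just reduces $t^3$ to $2$ (with $t=2^{1/3}$):

```python
from fractions import Fraction

def mul(u, v):
    """Multiply a + b*t + c*t^2 and d + e*t + f*t^2, reducing t^3 = 2."""
    a, b, c = u
    d, e, f = v
    return (a * d + 2 * (b * f + c * e),
            a * e + b * d + 2 * c * f,
            a * f + b * e + c * d)

one = (Fraction(1), Fraction(0), Fraction(0))
x = (Fraction(1), Fraction(1), Fraction(1))    # 1 + 2^(1/3) + 2^(2/3)
y = (Fraction(-1), Fraction(1), Fraction(0))   # 2^(1/3) - 1
assert mul(x, y) == one                        # the inverse from the question

# numeric double-check: (t - 1)(t^2 + t + 1) = t^3 - 1 = 1 for t = 2^(1/3)
t = 2 ** (1 / 3)
assert abs((t - 1) * (1 + t + t * t) - 1) < 1e-12
```

In general, inverses can be found by the extended Euclidean algorithm applied to the representing polynomial and $x^3-2$, which is exactly what the field property of $\mathbb{Q}[x]/(x^3-2)$ guarantees.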
{ "language": "en", "url": "https://math.stackexchange.com/questions/58735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
$f_A(x)=(x+2)^4x^4$, $m_A(x)=(x+2)^2x^2$- What can I know about $A$? $A$ is matrix under $R$ which I know the following information about it: $f_A(x)=(x+2)^4x^4$- Characteristic polynomial $m_A(x)=(x+2)^2x^2$- Minimal polynomial. I'm trying to find out (i) $A$'s rank (ii) $\dim$ $\ker(A+2I)^2$ (iii) $\dim$ $\ker (A+2I)^4$ (iv) the characteristic polynomial of $B=A^2-4A+3I$. I believe that I don't have enough information to determine none of the above. By the power of $x$ in the minimal polynomial I know that the biggest Jordan block of eigenvalue 0 is of size 2, so there can be two options of Jordan form for this eigenvalue: $(J_2(0),J_2(0))$ or $(J_2(0),J_1(0),J_1(0))$, therefore $A$'s rank can be $2$ or $3$. I'm wrong, please correct me. How can I compute the rest? Thanks for the answers.
If the Jordan form of A is C then let P be invertible such that $A=PCP^{-1}$ then $$(A+2I)^2=P(C+2I)^2P^{-1}\;\rightarrow\; \dim\ker((A+2I)^2)=\dim\ker((C+2I)^2)$$ and you know exactly how $(C+2I)^2$ looks like (well, at least the part of the kernel). The same operation should help you solve the rest of the problems
{ "language": "en", "url": "https://math.stackexchange.com/questions/58790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Radical ideal of $(x,y^2)$ How does one show that the radical of $(x,y^2)$ is $(x,y)$ over $\mathbb{Q}[x,y]$? I have no idea how to do, please help me.
Recall that the radical of an ideal $\mathfrak{a}$ is equal to the intersection of the primes containing $\mathfrak{a}$. Here, let $\mathfrak{p}$ be a prime containing $(x, y^2)$. Then $y^2 \in \mathfrak{p}$ implies $y \in \mathfrak{p}$. Can you finish it off from there?
{ "language": "en", "url": "https://math.stackexchange.com/questions/58845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Ratio of circumference to radius I know that the ratio of the circumference to the diameter is Pi - what about the ratio of the circumference to the radius? Does it have any practical purpose when we have Pi? Is it called something (other than 2 Pi)?
That $\pi$ and $2 \pi$ have a very simple relationship to each other sharply limits the extent to which one can be more useful or more fundamental than the other. However, there are probably more formulas that are simpler when expressed using $2\pi$ instead of $\pi$, than the other way around. For example, there is often an algebraic expression involving something proportional to $(2\pi)^n$ and if expressed using powers of $\pi$ this would introduce factors of $2^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/58907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Stalks of the graph of a morphism I am interested in the graph $\Gamma_f$ of a morphism $f: X\rightarrow Y$ between two sufficiently nice schemes $X,Y$. One knows that it is a closed subset of $X\times Y$ (when the schemes are nice, say varieties over a field). I would like to know the following: if you endow it with the reduced structure, what are the stalks of it's structure sheaf in a point $(x,f(x))$ ? Thanks you very much!
Let $f:X\to Y$ be a morphism of $S$-schemes. The graph morphism $\gamma_f:X\to X\times_S Y$ is the pull-back of the diagonal morphism $\delta: Y\to Y\times_S Y$ along $f\times id_Y: X\times_S Y \to Y\times_S Y$. This implies that if $\delta$ is a closed embedding (i.e. $Y$ is separated over $S$) so is $\gamma_f$. So $\gamma_f$ induces an isomorphism between $X$ and its image $\Gamma_f \subset X\times Y$. In particular, with the reduced structure on $\Gamma_f$, the stalk of its structure sheaf at a point $(x,f(x))$ is isomorphic to the local ring $\mathcal{O}_{X,x}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/58961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$A\in M_n(\mathbb C)$ invertible and non-diagonalizable matrix. Prove $A^{2005}$ is not diagonalizable $A\in M_n(\mathbb C)$ invertible and non-diagonalizable matrix. I need to prove that $A^{2005}$ is not diagonalizable as well. I am asked as well if Is it true also for $A\in M_n(\mathbb R)$. (clearly a question from 2005). This is what I did: If $A\in M_n(\mathbb C)$ is invertible so $0$ is not an eigenvalue, We can look on its Jordan form, Since we under $\mathbb C$, and it is nilpotent for sure since $0$ is not an eigenvalue, and it has at least one 1 in it's semi-diagonal. Let $P$ be the matrix with Jordan base, so $P^{-1}AP=J$ and $P^{-1}A^{2005}P$ but it leads me nowhere. I tried to suppose that $A^{2005}$ is diagonalizable and than we have this $P^{-1}A^{2005}P=D$ When D is diagonal and we can take 2005th root out of each eigenvalue, but how can I show that this is what A after being diagonalizable suppose to look like, for as contradiction? Thanks
As luck would have it, the implication: $A^n$ diagonalizable and invertible $\Rightarrow A$ diagonalizable, was discussed in XKCD forum recently. See my answer there as well as further dicussion in another thread.
{ "language": "en", "url": "https://math.stackexchange.com/questions/59016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
A sufficient condition for linearity? If $f$ is a linear function (defined on $\mathbb{R}$), then for each $x$, $f(x) – xf’(x) = f(0)$. Is the converse true? That is, is it true that if $f$ is a differentiable function defined on $\mathbb{R}$ such that for each $x$, $f(x) – xf’(x) = f(0)$, then $f$ is linear?
If $f\in C^2$ (it is twice differentiable, and $f''$ is continuous), then the answer is yes; I don't know if it's necessarily true without this hypothesis. If $f(x)-xf'(x)=f(0)$ for all $x$, then $$f(x)=xf'(x)+f(0),$$ so that $$f'(x)=f'(x)+xf''(x)$$ $$0=xf''(x)$$ This shows that $f''(x)=0$ for all $x\neq0$, but because $f''$ is continuous this forces $f''(x)=0$ everywhere. Thus $f'$ must be constant, and thus $f$ must be linear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/59078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
Prove that all even integers $n \neq 2^k$ are expressible as a sum of consecutive positive integers How do I prove that any even integer $n \neq 2^k$ is expressible as a sum of positive consecutive integers (more than 2 positive consecutive integer)? For example: 14 = 2 + 3 + 4 + 5 84 = 9 + 10 + ... + 15 n = sum (k + k+1 + k+2 + ...) n ≠ 2^k
(The following is merely a simplification of @Arturo Magidin's proof. So, please do not upvote my post.) Suppose $S=k+(k+1)+\ldots+\left(k+(n-1)\right)$ for some $k\ge1$ and $n\ge2$. Then $$ S = nk + \sum_{j=0}^{n-1} j = nk+\frac{n(n-1)}{2} = \frac{n(n+2k-1)}{2}. $$ Hence $S\in\mathbb{N}$ can be written as a consecutive sum of integers if and only if $2S = nN$ with $N\ (=n+2k-1) >n\ge2$ and $N, n$ are of different parities. If $S$ is a power of $2$, so is $2S$. Hence the above factorization cannot be done. If $S$ is not a power of $2$ (whether it is even is immaterial), we may write $2S=ab$, where $a\ge3$ is odd and $b\ge2$ is a power of $2$. Therefore we can put $N=\max(a,b)$ and $n=\min(a,b)$.
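The proof is constructive, so it translates directly into code: split $2S$ into its odd part and its $2$-part, take $n=\min$ and $N=\max$ of the two factors, and read off $k$. (As noted above, evenness of $S$ is immaterial; only powers of $2$ fail.)

```python
def consecutive_sum(S):
    """Return (k, n) with S = k + (k+1) + ... + (k+n-1), n >= 2, k >= 1,
    or None when S is a power of two."""
    a, b = 2 * S, 1                 # split 2S = a * b, a odd, b a power of two
    while a % 2 == 0:
        a //= 2
        b *= 2
    if a == 1:                      # 2S is a power of two, i.e. S is one too
        return None
    n, N = min(a, b), max(a, b)     # n terms, N = n + 2k - 1
    k = (N - n + 1) // 2            # N, n have different parities, so this is exact
    return k, n

for S in range(3, 5000):
    r = consecutive_sum(S)
    if S & (S - 1) == 0:            # S is a power of two
        assert r is None
    else:
        k, n = r
        assert n >= 2 and k >= 1 and sum(range(k, k + n)) == S

assert consecutive_sum(14) == (2, 4)   # 14 = 2 + 3 + 4 + 5, as in the question
```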
{ "language": "en", "url": "https://math.stackexchange.com/questions/59131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 2 }
Formula for $1^2+2^2+3^2+...+n^2$ In example to get formula for $1^2+2^2+3^2+...+n^2$ they express $f(n)$ as: $$f(n)=an^3+bn^2+cn+d$$ also known that $f(0)=0$, $f(1)=1$, $f(2)=5$ and $f(3)=14$ Then this values are inserted into function, we get system of equations solve them and get a,b,c,d coefficients and we get that $$f(n)=\frac{n}{6}(2n+1)(n+1)$$ Then it's proven with mathematical induction that it's true for any n. And question is, why they take 4 coefficients at the beginning, why not $f(n)=an^2+bn+c$ or even more? How they know that 4 will be enough to get correct formula?
There are several ways to see this: * *As Rasmus pointed one out in a comment, you can estimate the sum by an integral. *Imagine the numbers being added as cross sections of a quadratic pyramid. Its volume is cubic in its linear dimensions. *Apply the difference operator $\Delta g(n)=g(n+1)-g(n)$ to $f$ repeatedly. Then apply it to a polynomial and compare the results. [Edit in response to the comment:] An integral can be thought of as a limit of a sum. If you sum over $k^2$, you can look at this as adding up the areas of rectangles with width $1$ and height $k^2$, where each rectangle extends from $k-1$ to $k$ in the $x$ direction. (If that's not clear from the words, try drawing it.) Now if you connect the points $(k,k^2)$ by the continuous graph of the function $f(x)=x^2$, the area under that graph is an approximation of the area of the rectangles (and vice versa). So we have $$1^2+\dotso+n^2\approx\int_0^nk^2\mathrm dk=\frac13n^3\;.$$
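Both the closed form and the reason a cubic ansatz suffices (the third finite difference of the sums is constant) can be checked quickly:

```python
def f(n):
    # the closed form n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

for n in range(200):
    assert f(n) == sum(k * k for k in range(1, n + 1))

# why degree 3 is the right guess: the third finite difference is constant
s = [sum(k * k for k in range(1, n + 1)) for n in range(10)]
d1 = [s[i + 1] - s[i] for i in range(9)]    # the squares 1, 4, 9, ...
d2 = [d1[i + 1] - d1[i] for i in range(8)]  # 3, 5, 7, ...
d3 = [d2[i + 1] - d2[i] for i in range(7)]  # constant 2 = 3! * (leading coeff 1/3)
assert set(d3) == {2}
```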
{ "language": "en", "url": "https://math.stackexchange.com/questions/59175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Unique ways to assign P boxes to C bags? How many ways can I arrange $C$ unlabeled bags into $P$ labeled boxes such that each box receives at least $S$ bags (where $C > S$ and $C > P$)? Assume that I can combine bags to fit a box. I have just learnt that there are $\binom{C-1}{P-1}$ unique ways to place $C$ bags into $P$ boxes with at least one bag per box. I am unable to get the answer to the above question from this explanation. For example, if there are 2 boxes, 3 bags and each box should get 1 bag, then there are two ways: (2,1) and (1,2). Could you please help me to get this? Thank you.
First reduce the number of bags by subtracting the required minimum number in each box. Using your notation: $C' = C-SP$. Now you freely place the remaining $C'$ items into $P$ boxes, which can be done in $(C'+P-1)!/C'!(P-1)!$ ways. Take your example: 4 boxes, 24 bags and each box should get 6 bags. Then $C'= 24-6\cdot4 = 0$, and $(0+4-1)!/0!(4-1)!=1$. There is only one way! In the referenced link they use $S=1$, which makes $C'=C-P$, hence the formula $\binom{C-1}{P-1}$. There is a nice visualization for this, so you don't have to remember the formula. Let's solve the case of 3 identical items in 3 bags (after reducing by the required minimum), where items are not distinguishable. Assume we are placing 2 |'s and 3 x's in 5 slots. Some cases are (do all as an exercise): |xxx| x|x|x xxx|| ... Since there are 5 things (| and x) there are $5!$ ways of distributing them. However, we don't differentiate between individual |'s or between individual x's, so each distinct arrangement is counted $2!\,3!$ times. The total number of "unique ways" is $5!/(2!3!) = 10$. The \$1M visualization trick is thinking of the |'s as the bag (usually stated as box) boundaries; the leftmost and rightmost bags are one-sided. Note that the number of bags is one more than the number of boundaries. The formula you'll derive is $$\binom{\text{bags}+\text{items}-1}{\text{items}}$$
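A brute-force cross-check of the reduction plus the stars-and-bars count; illustrative Python, where `formula` follows the reasoning above:

```python
from itertools import product
from math import comb

def count_distributions(C, P, S):
    """Count ways to put C identical bags into P labeled boxes, >= S per box."""
    return sum(1 for a in product(range(C + 1), repeat=P)
               if sum(a) == C and min(a) >= S)

def formula(C, P, S):
    """Reduce by the forced minimum, then apply stars and bars."""
    Cp = C - S * P
    return comb(Cp + P - 1, Cp) if Cp >= 0 else 0

assert count_distributions(3, 2, 1) == formula(3, 2, 1) == 2
assert all(count_distributions(C, P, S) == formula(C, P, S)
           for C in range(8) for P in range(1, 4) for S in range(3))
```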
{ "language": "en", "url": "https://math.stackexchange.com/questions/59234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Maximizing the sum of two numbers, the sum of whose squares is constant How can we prove that if the sum of the squares of two numbers is a constant, then the sum of the numbers has its maximum value when the numbers are equal? This result is also true for more than two numbers. I tested the result by taking various values, a brute-force-ish approach. However, I am interested in a formal proof of the same. This is probably a mild extension of this problem. I encountered this result while searching for an easy solution to the same.
Here's the pedestrian, Calculus I method: Let $x$ and $y$ be the two numbers. The condition "the sum of the squares is a constant" means that there is a fixed number $c$ such that $x^2+y^2=c$. That means that if you know one of the two numbers, say, $x$, then you can figure out the absolute value of the other one: $y^2 = c-x^2$, so $|y|=\sqrt{c-x^2}$. Now, since you want to find the maximum of the sum, then you can restrict yourself to the positive values of $x$ and $y$ (the condition on the sum of squares doesn't restrict the sign). So we may assume $x\geq 0$, $y\geq 0$, and $y=\sqrt{c-x^2}$. And now you want to find the maximum of $x+y = x+\sqrt{c-x^2}$. Thus, this reduces to finding the maximum value of $S(x) = x+\sqrt{c-x^2}$ on the interval $[0,\sqrt{c}]$. By the Extreme Value Theorem, the maximum will be achieved either at an endpoint or at a critical point of $S(x)$. At $0$ we get $S(0) = \sqrt{c}$; at $\sqrt{c}$ we get $S(\sqrt{c}) = \sqrt{c}$. As for critical points, $$S'(x) = 1 - \frac{x}{\sqrt{c-x^2}}.$$ The critical points are $x=\sqrt{c}$ (where $S'(x)$ is not defined), and the point where $x=\sqrt{c-x^2}$, or $2x^2=c$; hence $x^2=\frac{c}{2}$ (which means $y^2=\frac{c}{2}$ as well). At $x=\sqrt{\frac{c}{2}}$, $S(x) = 2\sqrt{\frac{c}{2}} = \sqrt{2c}$. This is clearly the maximum. For the problem with $k$ variables, $k\gt 2$, the "pedestrian method" involves Multivariable calculus and Lagrange multipliers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/59292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 0 }
How to calculate number of lumps of a 1D discrete point distribution? I would like to calculate the number of lumps of a given set of points, defining "number of lumps" as "the number of groups of points at distance 1". Suppose we have a discrete 1D space in this segment. For example N=15 . . . . . . . . . . . . . . . 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Then we have a set of M "marks" distributed. For example M=8 Distributed all left: x x x x x x x x . . . . . . . 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 1 (minimum) Distributed divided by two: x x x x . . . . . . . x x x x 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 2 Equi-distributed: x . x . x . x . x . x . x . x 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 8 (maximum) (perhaps another answer here could be "zero lumps"?) Other distribution, etc.: x x . . x x . . x x . . x x . 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Groups with points at minimal distance = 4 It's quite obvious algorithmically: just walk the segment and count every "rising edge", the number of times it passes from empty to a point. But I would like to solve it more "mathematically", to think about the problem in an abstract way; having a 1D math solution would perhaps help to scale the concept to higher dimensions, where distance is more complex (the "walking the segment" trick won't work anymore), not to mention a discrete metric space. How can I put that into an equation, a weighted sum or something like that? Thanks for any help
If you are looking for a general formula that, given the points, returns the number of clusters (I think that "cluster" is a more common name than "lump" in this context), I'm afraid you won't find it. The problem is quite complicated, and there are many algorithms (google for hierarchical clustering, percolation). For your particular case (discrete grid, and a threshold distance from nearest neighbours as the criterion for clusters) this Hoshen-Kopelman Algorithm seems appropriate.
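For the 1D case the asker describes, the "rising edge" count can itself be written as a sum of indicator terms, $\sum_i \mathrm{occ}(i)\,(1-\mathrm{occ}(i-1))$, which is perhaps the "equation" being asked for; a small illustrative Python sketch reproducing the question's examples:

```python
def cluster_count(marks, n):
    """Number of maximal runs of adjacent marks on sites 1..n."""
    occupied = [i in marks for i in range(1, n + 1)]
    # a cluster starts at every "rising edge": an occupied site whose
    # left neighbour is empty (or the left boundary)
    return sum(occupied[i] and (i == 0 or not occupied[i - 1])
               for i in range(n))

assert cluster_count({1, 2, 3, 4, 5, 6, 7, 8}, 15) == 1
assert cluster_count({1, 2, 3, 4, 12, 13, 14, 15}, 15) == 2
assert cluster_count({1, 3, 5, 7, 9, 11, 13, 15}, 15) == 8
assert cluster_count({1, 2, 5, 6, 9, 10, 13, 14}, 15) == 4
```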
{ "language": "en", "url": "https://math.stackexchange.com/questions/59342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do Equally Sized Spheres Fit into Space? How much volume do spheres take up when filling a rectangular prism of a shape? I assume it's somewhere in between $\frac{3}{4}\pi r^3$ and $r^3$, but I don't know where. This might be better if I broke it into two questions: First: How many spheres can fit into a given space? Like, packed optimally. Second: Given a random packing of spheres into space, how much volume does each sphere account for? I think that's just about as clear as I can make the question, sorry for anything confusing.
You can read a lot about these questions on Wikipedia. Concerning the random version, there are some links at Density of randomly packing a box. The accepted answer links to a paper that "focuses on spheres".
{ "language": "en", "url": "https://math.stackexchange.com/questions/59401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Throw a die $N$ times, observe results are a monotonic sequence. What is probability that all 6 numbers occur in the sequence? I throw a die $N$ times and the results are observed to be a monotonic sequence. What is probability that all 6 numbers occur in the sequence? I'm having trouble with this. There are two cases: when the first number is 1, and when the first number is 6. By symmetry, we can just consider one of them and double the answer at the end. I've looked at individual cases of $N$, and have that For $ N = 6 $, the probability is $ \left(\frac{1}{6}\right)^2 \frac{1}{5!} $. For $ N = 7 $, the probability is $ \left(\frac{1}{6}\right)^2 \frac{1}{5!}\left(\frac{1}{6} + \frac{1}{5} + \frac{1}{4} + \frac{1}{3} + \frac{1}{2} + 1\right) $. I'm not sure if the above are correct. When it comes to $ N = 8 $, there are many more cases to consider. I'm worried I may be approaching this the wrong way. I've also thought about calculating the probability that a number doesn't occur in the sequence, but that doesn't look to be any easier. Any hints/corrections would be greatly appreciated. Thanks
I have a slightly different answer to the above, comments are very welcome :) The number of monotonic sequences we can observe when we throw a die $N$ times is $2\binom{N+5}{5}-\binom{6}{1}$, since the six sequences which consist of the same number repeatedly are counted as both increasing and decreasing (i.e. we have counted them twice, so we need to subtract 6 to take account of this). The number of nondecreasing sequences involving all six numbers is $\binom{N-1}{5}$ (as has already been explained). Similarly the number of nonincreasing sequences involving all six numbers is also $\binom{N-1}{5}$. Since every sequence of throws has the same probability $(1/6)^N$, conditioning on a monotonic outcome makes all monotonic sequences equally likely. Therefore I believe that the probability of seeing all six numbers given a monotonic sequence is $2\binom{N-1}{5}$ divided by $2\binom{N+5}{5}-\binom{6}{1}$. This is only slightly different to the above answers, but if anyone has any comments as to whether you agree or disagree with my logic, or if you require further explanation, I'd be interested to hear from you.
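The counts (and the resulting probability) can be confirmed by direct enumeration for small $N$; an illustrative Python check:

```python
from itertools import product
from math import comb

def check(N):
    mono = [s for s in product(range(1, 7), repeat=N)
            if s == tuple(sorted(s)) or s == tuple(sorted(s, reverse=True))]
    full = [s for s in mono if len(set(s)) == 6]
    # the two closed-form counts from the argument above
    assert len(mono) == 2 * comb(N + 5, 5) - 6
    assert len(full) == 2 * comb(N - 1, 5)
    return len(full) / len(mono)

print(check(6))  # 2/918
print(check(7))  # 12/1578
```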
{ "language": "en", "url": "https://math.stackexchange.com/questions/59442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Inserting numbers to create a geometric progression Place three numbers in between 15, 31, 104 in such way, that they would be successive members of geometric progression. PS! I am studying for a test and need help. I would never ask anyone at math.stackexchange.com to do my homework for me. Any help at all would be strongly appreciated. Thank you!
There is no geometric progression that contains all of $15$, $31$, and $104$, let alone also the hypothetical "extra" numbers. For suppose that $15=kr^a$, $31=kr^b$, and $104=kr^c$ where $a$, $b$, $c$ are integers ($r$ need not be an integer, and $a$, $b$, $c$ need not be consecutive). Then $31=kr^ar^{b-a}=15r^{b-a}$. Similarly, $104=31r^{c-b}$. Without loss of generality, we may assume that $r>1$. So $b-a$ and $c-b$ are positive integers. Let $b-a=m$ and $c-b=n$. Then $$\frac{31}{15}=r^m \qquad \text{and}\qquad \frac{104}{31}=r^n.$$ Take the $n$-th power of $31/15$, and the $m$-th power of $104/31$. Each is $r^{mn}$. It follows that $$\left(\frac{31}{15}\right)^n=\left(\frac{104}{31}\right)^m.$$ From this we conclude that $$31^{m+n}=15^n \cdot 104^m.$$ This is impossible, since $5$ divides the right-hand side, but $5$ does not divide the left-hand side. Comment: Have we really answered the question? It asks us to place $3$ numbers between $15$, $31$, $104$ "in such a way that they would be successive members of a geometric progression." Who does "they" refer to? Certainly not all $6$ numbers, since already as we have seen, $15$, $31$, and $104$ cannot all be members of a (single) geometric progression of any kind. But maybe "they" refers to the interpolated numbers! Then there are uncountably many solutions, and even several solutions where the interpolated numbers are all integers. For example, we can use $16$, $32$, $64$. The numbers $15$ and $31$ could be a heavy-handed hint pointing to this answer. Or else we can use $16$, $24$, $36$. Or else $16$, $40$, $100$. Then there is $18$, $24$, $32$, or $18$, $30$, $50$, and so on.
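The interpolated triples listed in the comment are easy to verify programmatically (three numbers $a,b,c$ form a geometric progression iff $b^2=ac$); illustrative Python:

```python
triples = [(16, 32, 64), (16, 24, 36), (16, 40, 100),
           (18, 24, 32), (18, 30, 50)]
for a, b, c in triples:
    assert b * b == a * c            # three-term geometric progression
    assert 15 < a < b < c < 104      # strictly between the given endpoints
print("all triples check out")
```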
{ "language": "en", "url": "https://math.stackexchange.com/questions/59511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Algebraic proof that collection of all subsets of a set (power set) of $N$ elements has $2^N$ elements In other words, is there an algebraic proof showing that $\sum_{k=0}^{N} {N\choose k} = 2^N$? I've been trying to do it some some time now, but I can't seem to figure it out.
I don't know what you mean by "algebraic". Notice that if $N$ is $0$, we have the empty set, which has exactly one subset, namely itself. That's a basis for a proof by mathematical induction. For the inductive step, suppose a set with $N$ elements has $2^N$ subsets, and consider a set of $N+1$ elements that results from adding one additional element called $x$ to the set. All of the $2^N$ subsets of our original set of $N$ elements are also subsets of our newly enlarged set that contains $x$. In addition, for each such set $S$, the set $S\cup\{x\}$ is a subset of our enlarged set. So we have our original $2^N$ subsets plus $2^N$ new subsets---the ones that contain $x$. The number of subsets of the enlarged set is thus $2^N + 2^N$. Now for an "algebraic" part of the argument: $2^N + 2^N = 2^{N+1}$.
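The induction can be mirrored computationally; a small Python sketch using `itertools` (illustrative only):

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, built by choosing each possible size."""
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for N in range(6):
    assert len(subsets(set(range(N)))) == 2 ** N

# the inductive step: subsets of S | {x} = subsets of S, plus each with x added
S, x = {0, 1, 2}, 3
with_x = [t | {x} for t in subsets(S)]
assert len(subsets(S | {x})) == len(subsets(S)) + len(with_x)
```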
{ "language": "en", "url": "https://math.stackexchange.com/questions/59554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Minimal area of a surface that splits a cube into two equal-size pieces I read the following problem and its solution from one source; the problem was the following: You want to cut a unit cube into two pieces each with volume 1/2. What dividing surface, which might be curved, has the smallest surface area? The author gave his first solution this way: When bisecting the equilateral triangle, an arc of a circle centered at a vertex had the shortest path. Similarly for this problem, the octant (one-eighth) of a sphere should be the bisecting surface with the lowest area. If the cube is a unit cube, then the octant has volume 1/2, so its radius is given by $$\frac{1}{8}\left(\frac{4}{3} \pi r^3\right)=\frac{1}{2}$$ So the radius is $\displaystyle \left( \frac{3}{\pi} \right)^{\frac{1}{3}}$ and the surface area of the octant is $$\text{surface area}=\frac{4 \pi r^2}{8}\approx1.52$$ But after this the author said that he made a mistake; the answer was wrong and the correct one is the simplest surface, a horizontal plane through the center of the cube, which has surface area 1, less than the surface area of the octant. But he has not given reasons why the horizontal plane is the best solution, and I need a formula or proof of this. Can you help me?
We know from the isoperimetric inequality that locally the surface must be a sphere (where we can include the plane as the limiting case of a sphere with infinite radius). Also, the surface must be orthogonal to the cube where they meet; if they're not, you can deform the surface locally to reduce its area. A sphere orthogonal to a cube face must have its centre on that face. You can easily show that it can't contain half the volume if it intersects only one or two faces. Thus, it must either intersect at least three adjacent faces, in which case its centre has to be at the vertex where they meet, or it has to intersect at least two opposite faces, in which case it has to be a plane.
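Plugging in the numbers from the question confirms that the sphere octant, despite enclosing the right volume, loses to the flat plane (area $1$); a quick Python check:

```python
from math import pi

r = (3 / pi) ** (1 / 3)                  # octant radius enclosing volume 1/2
octant_volume = (4 / 3) * pi * r**3 / 8
octant_area = 4 * pi * r**2 / 8
print(octant_volume)   # 0.5, as required
print(octant_area)     # ~1.523 > 1, the area of the horizontal plane
```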
{ "language": "en", "url": "https://math.stackexchange.com/questions/59609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equation of the tangents What is the equation of the tangent to $y=x^3-6x^2+12x+2$ that is parallel to the line $y=3x$? I have no idea how to solve this; no example is given in the book! I appreciate your help!
Look at a Taylor series (ref 1, 2) of your function about a point $x_c$, of order 1 (linear). Then find which $x_c$ produces a line parallel to $y=3x$, i.e. one with a slope of $3$. Hint! There are actually two solutions.
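Concretely, the hint amounts to solving $y'(x_c)=3$; a small Python sketch using the plain quadratic formula (no symbolic libraries assumed):

```python
# slope of y = x^3 - 6x^2 + 12x + 2 is y' = 3x^2 - 12x + 12;
# setting y' = 3 gives 3x^2 - 12x + 9 = 0, i.e. x^2 - 4x + 3 = 0
from math import sqrt

a, b, c = 1.0, -4.0, 3.0
disc = sqrt(b * b - 4 * a * c)
roots = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
print(roots)  # [1.0, 3.0] -- the two tangency points

f = lambda x: x**3 - 6 * x**2 + 12 * x + 2
tangents = [(3, f(x) - 3 * x) for x in roots]  # (slope, intercept) pairs
print(tangents)  # [(3, 6.0), (3, 2.0)] -> y = 3x + 6 and y = 3x + 2
```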
{ "language": "en", "url": "https://math.stackexchange.com/questions/59716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is the set of commutators not a subgroup? I was surprised to see that one talks about the subgroup generated by the commutators, because I thought the commutators would form a subgroup. Some research told me that it's because commutators are not necessarily closed under product (books by Rotman and Mac Lane popped up in a google search telling me). However, I couldn't find an actual example of this. What is one? The books on google books made it seem like an actual example is hard to explain. Wikipedia did mention that the product $[a,b][c,d]$ on the free group on $a,b,c,d$ is an example. But why? I know this product is $aba^{-1}b^{-1}cdc^{-1}d^{-1}$, but why is that not a commutator in this group?
See Exercise 2.43 in the book "An Introduction to the Theory of Groups" by Joseph Rotman (4th ed.). He also makes a nice remark: the first finite group in which a product of two commutators is not a commutator has order 96.
{ "language": "en", "url": "https://math.stackexchange.com/questions/59816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 5, "answer_id": 3 }
QQ plot explanation The figure shows the Q-Q plot of a theoretical and an empirical standardized normal distribution, generated with the $qqnorm()$ function of the R statistical tool. How can I describe the right tail (top right) that does not follow the red reference line? What does it mean when there is a "trend" that runs away from the line? Thank you
It means that in the right tail your data do not fit the normal well; specifically, there are far fewer values there than there would be in a normal sample. If the black curve bent upward, there would be more than in a typical normal sample. You can think of the black curve as the graph of a function that, if applied to your data, would make them look like a normal sample. In the following image, the random sample was generated by applying Ilmari Karonen's function to a normal sample.
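The thread uses R's `qqnorm()`; the same effect can be sketched in plain Python with `statistics.NormalDist` (illustrative only: a sample whose right tail is artificially cut off, so its top quantiles fall below the reference line, just like in the figure):

```python
from statistics import NormalDist
import random

random.seed(0)
nd = NormalDist()
# a sample whose right tail is "thinner" than normal: cap large values
sample = sorted(min(random.gauss(0, 1), 1.5) for _ in range(2000))
# theoretical normal quantiles for a Q-Q plot
theo = [nd.inv_cdf((i + 0.5) / len(sample)) for i in range(len(sample))]
# in the right tail the empirical quantiles fall below the line y = x
print(sample[-1], theo[-1])  # capped near 1.5 vs a theoretical quantile near 3.5
```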
{ "language": "en", "url": "https://math.stackexchange.com/questions/59873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finite differences of function composition I'm trying to express the following in finite differences: $$\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx} \right].$$ Let $h$ be the step size and $x_{i-1} = x_i - h$ and $x_{i+ 1} = x_i + h$ If I take centered differences evaluated in $x_i$, I get: $\begin{align*}\left\{\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx}\right]\right\}_i &= \frac{\left[A(x)\frac{d\, u(x)}{dx}\right]_{i+1/2} - \left[A(x)\frac{d\, u(x)}{dx}\right]_{i-1/2}}{h} \\ &= \frac{A_{i+1/2}\left[\frac{u_{i+1}-u_{i}}{h}\right] - A_{i-1/2}\left[\frac{u_{i}-u_{i-1}}{h}\right]}{h} \end{align*}$ So, if I use centered differences I would have to have values for $A$ at $i + \frac 12$ and $A$ at $i - \frac 12$; however those nodes don't exist (in my stencil I only have $i \pm$ integer nodes); is that correct? If I use forward or backward differences I need A values at $i$, $i + 1$, $i + 2$ and at $i$, $i -1$, $i -2$ respectively. Am I on the correct path? I would really appreciate any hint. Thanks in advance, Federico
While the approach of robjohn is certainly possible, it is often better to take the approach suggested by the original poster: $$ \left\{\frac{d}{dx}\left[ A(x)\frac{d\, u(x)}{dx}\right]\right\}_i = \frac{A_{i+1/2}\left[\frac{u_{i+1}-u_{i}}{h}\right] - A_{i-1/2}\left[\frac{u_{i}-u_{i-1}}{h}\right]}{h} $$ As he noted, $A$ is evaluated on half grid point. In many cases, that's not much of a problem. For instance, if you start from the diffusion equation with variable diffusivity, $$ \frac{\partial u}{\partial t} = \frac{\partial}{\partial x} \left( A(x) \frac{\partial u}{\partial x} \right), $$ with $A(x)$ given, then it does not matter whether you sample $A$ on grid points or half grid points. It is quite okay to use a different grid for $A$ and $u$ (that's sometimes called a staggered grid). In other cases, you can approximate the values at half grid points by the average: $A_{i+1/2} \approx \frac12 ( A_{i+1} + A_i )$. That gives you almost the same method as the one derived by robjohn.
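When $A$ is an explicitly given function (so it can be evaluated anywhere, including half grid points), the scheme from the question can be coded directly; an illustrative Python check against a case with a known derivative (here the scheme even happens to be exact, since the flux $A\,u'$ is quadratic):

```python
# d/dx( A(x) u'(x) ) with A sampled at half grid points (staggered grid)
def div_A_grad(A, u, xs, h):
    return [(A(x + h / 2) * (u(x + h) - u(x)) / h
             - A(x - h / 2) * (u(x) - u(x - h)) / h) / h for x in xs]

A = lambda x: 1 + x
u = lambda x: x * x
exact = lambda x: 2 + 4 * x      # d/dx((1 + x) * 2x) = 2 + 4x

h = 0.01
xs = [0.1 * k for k in range(1, 10)]
err = max(abs(num - exact(x)) for num, x in zip(div_A_grad(A, u, xs, h), xs))
print(err)  # essentially zero here; in general the scheme is O(h^2)
```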
{ "language": "en", "url": "https://math.stackexchange.com/questions/59923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to obtain the Standard Deviation of a ratio of independent binomial random variables? X and Y are 2 independent binomial random variables with parameters (n,p) and (m,q) respectively. (trials, probability parameter)
There probably isn't a closed-form formula for this. But $X$ has mean $np$ and standard deviation $\sqrt{np(1-p)}$, and $Y$ has mean $mq$ and standard deviation $\sqrt{mq(1-q)}$. Now you need a simple fact: if $X$ has mean $\mu$ and standard deviation $\sigma$, then $\log X$ has mean approximately $\log \mu$ and standard deviation approximately $\sigma/\mu$. This can be derived by Taylor expansion. Intuitively, $X$ "usually" falls in $[\mu-\sigma, \mu+\sigma]$ and so $\log X$ "usually" falls in $[\log (\mu-\sigma), \log (\mu+\sigma)]$. But we have $$ \log (\mu \pm \sigma) = \log \Big(\mu(1 \pm \sigma/\mu)\Big) = \log \mu + \log(1 \pm \sigma/\mu) \approx \log \mu \pm \sigma/\mu $$ where the approximation is the first-order Taylor expansion of $\log (1+x)$ for $x$ close to zero. Therefore $\log X$ has mean approximately $\log np$ and standard deviation approximately $\sqrt{(1-p)/np}$. Note that for the Taylor expansion above to be sufficient, $\sigma/\mu=\sqrt{(1-p)/np}$ must be close to zero. Similarly $\log Y$ has mean approximately $\log mq$ and standard deviation approximately $\sqrt{(1-q)/mq}$. So $\log X - \log Y = \log X/Y$ has mean approximately $\log(np/mq)$ and standard deviation approximately $$ \sqrt{{(1-p) \over np} + {(1-q) \over mq}}. $$ But you asked about $X/Y$. Inverting the earlier fact, if $Z$ has mean $\mu$ and standard deviation $\sigma$, then $e^Z$ has mean approximately $e^{\mu}$ and standard deviation approximately $\sigma e^\mu$. Therefore $X/Y$ has mean approximately $np/mq$ (not surprising!) and standard deviation approximately $$ \left( \sqrt{{(1-p) \over np} + {(1-q) \over mq}} \right) {np \over mq}. $$ This approximation works well if $p,q$ and/or $m,n$ are not too small (see the Taylor expansion explanation in the middle of this answer).
{ "language": "en", "url": "https://math.stackexchange.com/questions/60032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Checking whether a point lies on a wide line segment I know how to tell whether a point lies on a segment, but here the segment is wide, i.e. the line has a width. I have $x_1$, $y_1$ and $x_2$, $y_2$ (the endpoints of the segment), the width, and the point(s) $x_3$, $y_3$ (and $x_4$, $y_4$) that need to be checked. Perhaps someone can help; a function in C# would be welcome.
Trying to understand your question, perhaps this picture might help. You seem to be asking how to find out whether the point $C$ is inside the thick line $AB$. You should drop a perpendicular from $C$ to $AB$, meeting at the point $D$. If the (absolute) length of $CD$ is more than half the width of the thick line then $C$ is outside the thick line (as shown in this particular case). If the thick line is in fact a thick segment, then you also have to consider whether $D$ is between $A$ and $B$ (or perhaps slightly beyond one of them, if the thickness extends further).
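A sketch of that test in code (Python rather than the C# the asker mentions; the names are my own). It clamps the foot $D$ of the perpendicular to the segment, which gives the thick segment rounded end caps; drop the clamping if the thickness should extend past the endpoints:

```python
from math import hypot

def in_thick_segment(ax, ay, bx, by, width, px, py):
    """Is (px, py) within width/2 of the segment from (ax, ay) to (bx, by)?"""
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    seg2 = vx * vx + vy * vy
    # parameter of the foot D of the perpendicular, clamped to the segment
    t = max(0.0, min(1.0, (wx * vx + wy * vy) / seg2)) if seg2 else 0.0
    dx, dy = px - (ax + t * vx), py - (ay + t * vy)
    return hypot(dx, dy) <= width / 2

assert in_thick_segment(0, 0, 10, 0, 2, 5, 0.9)       # inside
assert not in_thick_segment(0, 0, 10, 0, 2, 5, 1.1)   # too far sideways
assert not in_thick_segment(0, 0, 10, 0, 2, 12, 0)    # beyond the endpoint
```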
{ "language": "en", "url": "https://math.stackexchange.com/questions/60070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to compute the unit digit of $\frac{(7^{6002} − 1)}{2}$? The mother problem is: find the unit digit of the LCM of $7^{3001} − 1$ and $7^{3001} + 1$. This problem comes with four options to choose the correct answer from. My approach: as the two numbers are consecutive even numbers, the required LCM is $$\frac{(7^{3001} − 1)(7^{3001} + 1)}{2}$$ Using algebra, that expression becomes $\frac{(7^{6002} − 1)}{2}$; now it is not hard to see that the unit digit of $7^{6002} − 1$ is $8$. So the possible unit digit is either $4$ or $9$. As there was no $9$ among the options, I selected $4$ as the unit digit, which is correct, but as this last part is a kind of fluke, I am not sure if my approach is right. How can I be sure that the unit digit of $\frac{(7^{6002} − 1)}{2}$ is $4$?
We look directly at the mother problem. Exactly as in your approach, we observe that we need to evaluate $$\frac{(7^{3001}-1)(7^{3001}+1)}{2}$$ modulo $10$. Let our expression above be $x$. Then $2x= (7^{3001}-1)(7^{3001}+1)$. We will evaluate $2x$ modulo $20$. Note that $7^{3000}$ is congruent to $1$ modulo $4$ and modulo $5$. Thus $7^{3001} \equiv 7\pmod{20}$, and therefore $$2x\equiv (6)(8)\equiv 8 \pmod{20}.$$ It follows that $x\equiv 4\pmod{10}$.
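Python's exact big-integer arithmetic makes both the direct computation and the modular shortcut easy to confirm:

```python
x = (7**6002 - 1) // 2          # exact big-integer arithmetic
print(x % 10)                   # 4

# the modular route from the answer: 7^3001 = 7 (mod 20), since 7^4 = 1 (mod 20)
assert pow(7, 3001, 20) == 7
assert (6 * 8) % 20 == 8        # 2x = (7-1)(7+1) = 8 (mod 20)
```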
{ "language": "en", "url": "https://math.stackexchange.com/questions/60138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Determinantal formula for reproducing integral kernel How do I prove the following? $$\int\det\left(K(x_{i},x_{j})\right)_{1\leq i,j\leq n}dx_{1} \cdots dx_{N}=\underset{i=1}{\overset{n}{\prod}}\left(\int K(x_{i},x_{i})\;dx_{i}-(i-1)\right)$$ where $$K(x,y)=\sum_{l=1}^n \psi_l(x)\overline{\psi_l}(y)$$ and $$\{\psi_l(x)\}_{l=1}^n$$ is an ON-sequence in $L^2$. One may note that $$\int K(x_i,x_j)K(x_j,x_i) \; d\mu(x_i)=K(x_j,x_j)$$ and also that $$\int K(x_a,x_b)K(x_b,x_c)d\mu(x_b)=K(x_a,x_c).$$
(This is too long to fit into a comment.) Note that the product in the integrand of the right side foils out as $$\sum_{A\subseteq [n]} (-1)^{n-|A|}\left(\prod_{i\not\in A} (i-1)\right)\prod_{j\in A}K(x_j,x_j).$$ (Here $[n]=\{1,2,\dots,n\}$.) On the other hand, we can use the Leibniz formula to expand the determinant in the left hand side to obtain $$\iint\cdots\int \sum_{\sigma\in S_n} (-1)^{\sigma}\left(\prod_{i=1}^nK(x_i,x_{\sigma(i)}) \right)dx_1\cdots dx_n$$ Interchange the order of summation and integration. Observe that $K(x_1,x_{\sigma(1)})\cdots K(x_n,x_{\sigma(n)})$ can be reordered into a product of factors of the form $K(x_a,x_b)K(x_b,x_c)\cdots K(x_d,x_a)$ (this follows from the cycle decomposition of the permutation $\sigma$), and the integration of this over $x_b,x_c,\dots,x_d$ is, by induction on the two properties at the bottom of the OP, simply $K(x_a,x_a)$. From here we can once again switch the order of summation and integration. I think the last ingredient needed is something from the representation theory of the symmetric group. In other words, we need to know that the number of ways a permutation $\sigma$ can be decomposed into cycles with representatives $i_1,i_2,\dots\in A\subseteq [n]$, weighted by sign, is the coefficient of $\prod_{j\in A}K(x_j,x_j)$ in the integrand's polynomial on the right hand side (or something roughly to this effect).
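For $n=2$ the claimed identity reads $\int\!\int \det = (\int K - 0)(\int K - 1)$ with $\int K(x,x)\,dx = n$, which in a discrete model reduces to $(\operatorname{tr}K)^2-\operatorname{tr}(K^2)=n(n-1)$ for the rank-$n$ projection kernel. A quick numerical sanity check (Python, hand-rolled Gram-Schmidt; illustrative only):

```python
import random
random.seed(1)

def gram_schmidt(vs):
    """Orthonormalize a list of vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vs:
        for b in basis:
            d = sum(x * y for x, y in zip(v, b))
            v = [x - d * y for x, y in zip(v, b)]
        norm = sum(x * x for x in v) ** 0.5
        basis.append([x / norm for x in v])
    return basis

n, N = 2, 6   # rank-2 projection kernel on a 6-point discrete space
psi = gram_schmidt([[random.random() for _ in range(N)] for _ in range(n)])
K = [[sum(psi[l][i] * psi[l][j] for l in range(n)) for j in range(N)]
     for i in range(N)]

trace = sum(K[i][i] for i in range(N))
lhs = sum(K[i][i] * K[j][j] - K[i][j] * K[j][i]    # sum of all 2x2 dets
          for i in range(N) for j in range(N))
print(lhs, trace * (trace - 1))  # both 2 = n(n-1), up to rounding
```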
{ "language": "en", "url": "https://math.stackexchange.com/questions/60192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Finding limit of a quotient with two square roots: $\lim_{t\to 0}\frac{\sqrt{1+t}-\sqrt{1-t}}t$ Find $$ \lim_{t\to 0}\frac{\sqrt{1+t}-\sqrt{1-t}}{t}. $$ I can't think of how to start this or what to do at all. Anything I try just doesn't change the function.
HINT $\ $ Use the same method in your prior question, i.e. rationalize the numerator by multiplying both the numerator and denominator by the numerator's conjugate $\rm\:\sqrt{1+t}+\sqrt{1-t}\:.$ Then the numerator becomes $\rm\:(1+t)-(1-t) = 2\:t,\:$ which cancels with the denominator $\rm\:t\:,\:$ so $\rm\:\ldots$ More generally, using the same notation and method as in your prior question, if $\rm\:f_0 = g_0\:$ then $$\rm \lim_{x\:\to\: 0}\ \dfrac{\sqrt{f(x)}-\sqrt{g(x)}}{x}\ = \ \lim_{x\:\to\: 0}\ \dfrac{f(x)-g(x)}{x\ (\sqrt{f(x)}+\sqrt{g(x)})}\ =\ \dfrac{f_1-g_1}{\sqrt{f_0}+\sqrt{g_0}}$$ In your case $\rm\: f_0 = 1 = g_0,\ \ f_1 = 1,\ g_1 = -1\:,\ $ so the limit $\: =\: (1- (-1))/(1+1)\: =\: 1\:.$ Note again, as in your prior questions, rationalizing the numerator permits us to cancel the common factor at the heart of the indeterminacy - thus removing the apparent singularity.
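A quick numerical confirmation of the limit (Python; the values approach $1$ like $1+t^2/8$):

```python
from math import sqrt

f = lambda t: (sqrt(1 + t) - sqrt(1 - t)) / t
for t in (0.1, 0.01, 1e-4):
    print(t, f(t))   # 1.00125..., 1.0000125..., then indistinguishable from 1
```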
{ "language": "en", "url": "https://math.stackexchange.com/questions/60243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Calculating points on a plane In the example picture below, I know the points $A$, $B$, $C$ & $D$. How would I go about calculating $x$, $y$, $z$ & $w$ and $O$, but as points on the actual plane itself (e.g. treating $D$ as $(0, 0)$, $A$ as $(0, 1)$, $C$ as $(1, 0)$ and $B$ as $(1, 1)$)? Ultimately I need to be able to calculate any arbitrary point on the plane, so I'm unsure whether this would be possible through linear interpolation of the results above or whether I would actually have to do this via some form of matrix calculation. I don't really know matrix math at all! Just looking for something I can implement in JavaScript (in an environment that does support matrices).
This should be done in terms of plane projective geometry. This means you have to introduce homogeneous coordinates. The given points $A=(a_1,a_2)$, $\ldots$, and $D=(d_1,d_2)$ have "old" homogeneous coordinates $(a_1,a_2, 1)$, $\ldots$, and $(d_1,d_2,1)$ and should get "new" homogeneous coordinates $\alpha(0,1,1)$, $\beta(1,1,1)$, $\gamma(1,0,1)$, and $\delta(0,0,1)$. There is a certain $(3\times 3)$-matrix $P:=[p_{ik}]$ (determined up to an overall factor) that transforms the old coordinates into the new ones. To find this matrix you have twelve linear equations in thirteen variables which is just right for our purpose. (The values of $\alpha$, $\ldots$, $\delta$ are not needed in the sequel.) After the matrix $P$ has been determined the new affine coordinates $(\bar x, \bar y)$ of any point $(x,y)$ in the drawing plane are obtained by applying $P$ to the column vector $(x,y,1)$. This results in a triple $(x',y',z')$, whereupon one has $$\bar x={x'\over z'}\ ,\quad \bar y={y'\over z'}\ .$$
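One concrete way to carry this out (hedged as a possible implementation, not the only one): fix $p_{33}=1$, which is valid whenever the true $p_{33}\neq0$, turning the twelve homogeneous equations into eight inhomogeneous linear ones, and solve with ordinary Gaussian elimination. Pure Python sketch; the function names are mine:

```python
def solve(M, v):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def homography(src, dst):
    """3x3 projective map (normalized so p33 = 1) sending src[i] to dst[i]."""
    M, v = [], []
    for (x, y), (u, w) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); v.append(u)
        M.append([0, 0, 0, x, y, 1, -w * x, -w * y]); v.append(w)
    p = solve(M, v) + [1.0]
    return [p[0:3], p[3:6], p[6:9]]

def apply_map(P, pt):
    x, y = pt
    X, Y, Z = (row[0] * x + row[1] * y + row[2] for row in P)
    return (X / Z, Y / Z)  # divide out the homogeneous coordinate

# send a skewed quadrilateral D, A, B, C to the unit square of the question
src = [(0.1, 0.2), (0.0, 1.1), (1.3, 1.0), (1.2, -0.1)]   # D, A, B, C
dst = [(0, 0), (0, 1), (1, 1), (1, 0)]
P = homography(src, dst)
print(apply_map(P, src[0]))   # ~(0.0, 0.0)
```

Once $P$ is known, `apply_map` gives the new coordinates of any point in the drawing plane, exactly as described above.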
{ "language": "en", "url": "https://math.stackexchange.com/questions/60294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Fibonacci divisibilty properties $ F_n\mid F_{kn},\,$ $\, \gcd(F_n,F_m) = F_{\gcd(n,m)}$ Can any one give a generalization of the following properties in a single proof? I have checked the results, which I have given below by trial and error method. I am looking for a general proof, which will cover the all my results below: * *Every third Fibonacci number is even. *3 divides every 4th Fibonacci number. *5 divides every 5th Fibonacci number. *4 divides every 6th Fibonacci number. *13 divides every 7th Fibonacci number. *7 divides every 8th Fibonacci number. *17 divides every 9th Fibonacci number. *11 divides every 10th Fibonacci number. *6, 9, 12 and 16 divides every 12th Fibonacci number. *29 divides every 14th Fibonacci number. *10 and 61 divides every 15th Fibonacci number. *15 divides every 20th Fibonacci number.
The general proof of this is that the Fibonacci numbers arise from the expression $$F_n \sqrt{5} = \left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n$$ Since this is an example of the general $a^n-b^n$, which $a^m-b^m$ divides if $m \mid n$, it follows that there is a unique factor, generally coprime with the rest, for each index. Some of the smaller ones will be $1$. The exception is that if $f_n$ is this unique factor, such that $F_n = \prod_{m \mid n} f_m$, then $f_n$ and $f_{np^x}$ share a common divisor $p$, if $p$ divides either. So for example, $f_8=7$ and $f_{56}=7\cdot14503$ share a common divisor of $7$. This means that working modulo $49$ must evidently work too. So $f_{12} = 6$, which shares a common divisor with both $f_4=3$ and $f_6=4$, is unique in connecting to two different primes. Gauss's law of quadratic reciprocity applies to the Fibonacci numbers, but it's a little more complex than for regular bases. Relative to the Fibonacci series, reduce modulo 20, to 'upper' vs 'lower' and 'long' vs 'short'. For this section, 2 behaves as 7, and 5 as 1, modulo 20. Primes that reduce to 3, 7, 13 and 17 are 'upper' primes, which means that their period divides $p+1$. Primes ending in 1, 9, 11, 19 are 'lower' primes, meaning that their periods divide $p-1$. The primes in 1, 9, 13, 17 are 'short', which means that the period divides the maximum allowed an even number of times. For 3, 7, 11, 19, it divides the period an odd number of times. This means that all odd-indexed Fibonacci numbers can be expressed as the sum of two squares, such as $233 = 8^2 + 13^2$, or generally $F_{2n+1} = F^2_n + F^2_{n+1}$. So a prime like $107$, which reduces to $7$, would have an indicated period dividing $108$ an odd number of times. Its actual period is $36$. A prime like $109$ has a period dividing $108$ an even number of times, so its period is a divisor of $54$. Its actual period is $27$.
A prime like $113$ is indicated to be upper and short, which means that its period divides $114$ an even number of times. It actually has a period of $19$. Artin's constant applies here as well, which means that these rules correctly find some $3/4$ of all of the periods exactly. The next prime in this progression, $127$, actually has the maximum indicated period for an upper long: $128$. So does $131$ (lower long), and $137$ (upper short, at $69$). Likewise $101$ (lower short) and $103$ (upper long) show the maximum periods indicated. No prime under $20\cdot 120^4$ exists for which, if $p$ divides some $F_n$, so does $p^2$. This does not preclude the existence of such a number.
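The divisibility patterns listed in the question all follow from the identity $\gcd(F_n,F_m)=F_{\gcd(n,m)}$, which in particular gives $F_n \mid F_{kn}$. Here is a quick brute-force check in Python (a sanity check, of course, not a proof):

```python
from math import gcd

N = 60
F = [0, 1]
for _ in range(N):
    F.append(F[-1] + F[-2])

# gcd(F_n, F_m) = F_gcd(n, m); taking m = k*n gives F_n | F_kn
for n in range(1, N):
    for m in range(1, N):
        assert gcd(F[n], F[m]) == F[gcd(n, m)]

# a few of the listed claims: 13 divides every 7th, 7 divides every 8th
assert all(F[7 * k] % 13 == 0 for k in range(1, 8))
assert all(F[8 * k] % 7 == 0 for k in range(1, 7))
```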
{ "language": "en", "url": "https://math.stackexchange.com/questions/60340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 5, "answer_id": 2 }
Method to reverse a Kronecker product Let's say I have two simple vectors: $[0, 1]$ and $[1, 0]$. Their Kronecker product would be $[0, 0, 1, 0]$. Let's say I have only the Kronecker product. How can I recover the two initial vectors? If my two vectors are written as $[a, b]$ and $[c, d]$, the (given) Kronecker product is: $$[ac, ad, bc, bd] = [k_0, k_1, k_2, k_3]$$ So I have a system of four nonlinear equations that I wish to solve: $$\begin{align*} ac &= k_0\\ ad&= k_1\\ bc&= k_2\\ bd &=k_3. \end{align*}$$ I am looking for a general way to solve this problem for any number of initial vectors in $\mathbb{C}^2$ (leading my number of variables to $2n$ and my equations to $2^n$ if I have $n$ vectors). So here are a few specific questions: What is the common name of this problem? If a general solution is known, what is its complexity class? Does the fact that I have more and more equations than variables as $n$ goes up help? (Note: I really didn't know what to put as a tag.)
I prefer to say the Kronecker product of two vectors is reversible, but only up to a scale factor. In fact, Niel de Beaudrap has given the right answer; here I attempt to present it in a concise way. Let $a\in\mathbb{R}^N$ and $b\in\mathbb{R}^M$ be two column vectors. The OP's question is: given $a\otimes b$, how to determine $a$ and $b$? Note $a\otimes b$ is nothing but $\mathrm{vec}(ba^T)$. The $\mathrm{vec}$ operator reshapes a matrix to a column vector by stacking the columns of the matrix, and $\mathrm{vec}$ is reversible. Therefore, given $c=a\otimes b$, first reshape it to an $M\times N$ matrix $C=ba^T$. Then $C$ is a rank-one matrix, and any rank-one decomposition of $C$ recovers $kb$ and $a/k$ for some nonzero scalar $k$.
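To make this concrete, here is a small NumPy sketch of the reshape-then-rank-one-decompose procedure; the function name `unkron` and the use of the SVD for the rank-one factorization are choices made here, not part of the answer:

```python
import numpy as np

def unkron(c, N, M):
    """Recover a (length N) and b (length M), up to a scale factor, from
    c = np.kron(a, b), by reshaping c into a rank-one matrix and taking
    its dominant singular pair."""
    C = c.reshape(N, M)                 # C[i, j] = a_i * b_j, i.e. C = a b^T
    U, s, Vt = np.linalg.svd(C)
    a_hat = U[:, 0] * np.sqrt(s[0])     # proportional to a
    b_hat = Vt[0] * np.sqrt(s[0])       # proportional to b
    return a_hat, b_hat

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(4)
a2, b2 = unkron(np.kron(a, b), 3, 4)
# the factors are only determined up to scale, but their product is exact
assert np.allclose(np.kron(a2, b2), np.kron(a, b))
```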
{ "language": "en", "url": "https://math.stackexchange.com/questions/60399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 3, "answer_id": 0 }
Fourier transform solution of three-dimensional wave equation One of the PDE books I'm studying says that the 3D wave equation can be solved via the Fourier transform, but doesn't give any details. I'd like to try to work the details out for myself, but I'm having trouble getting started - in particular, what variable should I make the transformation with respect to? I have one time variable and three space variables, and I can't use the time variable because the Fourier transform won't damp it out. If I make the transformation with respect to one of the spatial variables, the differentiations with respect to time and the other two spatial variables become parameters and get pulled outside the transform. But it looks like then I'm still left with a PDE, but reduced by one independent variable. Where do I go from here? Thanks.
You use the Fourier transform in all three space variables. The wave equation $\frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} + \frac{\partial^2 u}{\partial x_3^2}\right)$ becomes $\frac{\partial^2 U}{\partial t^2} = - c^2 (p_1^2 + p_2^2 + p_3^2) U$. For each fixed frequency vector $p=(p_1,p_2,p_3)$ this is an ordinary differential equation in $t$, with solution $U(p,t) = A(p)\cos(c|p|t) + B(p)\sin(c|p|t)$, where $|p|^2 = p_1^2+p_2^2+p_3^2$; the coefficients $A$ and $B$ are determined by the transformed initial data, and inverting the transform recovers $u$.
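The per-frequency solution can be turned into a numerical solver on a periodic box using the FFT. Here is a sketch (the discretization, grid choice, and function name are mine; on a periodic grid the method is exact for band-limited initial data):

```python
import numpy as np

def wave_evolve(u0, v0, c, L, t):
    """Evolve u_tt = c^2 * Laplacian(u) on a periodic cube of side L, using
    the per-mode solution U(p,t) = U0 cos(c|p|t) + V0 sin(c|p|t)/(c|p|)."""
    n = u0.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    p = np.sqrt(kx**2 + ky**2 + kz**2)
    U0, V0 = np.fft.fftn(u0), np.fft.fftn(v0)
    p_safe = np.where(p > 0, p, 1.0)                      # avoid 0/0 at the DC mode
    sin_term = np.where(p > 0, np.sin(c * p * t) / (c * p_safe), t)
    U = U0 * np.cos(c * p * t) + V0 * sin_term
    return np.real(np.fft.ifftn(U))

# standing-wave check: u(x,0) = cos(x), u_t(x,0) = 0  ->  u = cos(x) cos(ct)
n, L, c, t = 16, 2 * np.pi, 1.0, 0.3
x = (L * np.arange(n) / n)[:, None, None] + np.zeros((n, n, n))
u = wave_evolve(np.cos(x), np.zeros((n, n, n)), c, L, t)
assert np.max(np.abs(u - np.cos(x) * np.cos(c * t))) < 1e-10
```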
{ "language": "en", "url": "https://math.stackexchange.com/questions/60461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the term for a factorial type operation, but with summation instead of products? (Pardon if this seems a bit beginner, this is my first post in math - trying to improve my knowledge while tackling Project Euler problems) I'm aware of Sigma notation, but is there a function/name for e.g. $$ 4 + 3 + 2 + 1 \longrightarrow 10 ,$$ similar to $$4! = 4 \cdot 3 \cdot 2 \cdot 1 ,$$ which uses multiplication? Edit: I found what I was looking for, but is there a name for this type of summation?
The name for $$ T_n= \sum_{k=1}^n k = 1+2+3+ \dotsb +(n-1)+n = \frac{n(n+1)}{2} = \frac{n^2+n}{2} = {n+1 \choose 2} $$ is the $n$th triangular number. The name comes from picturing $T_n$ as a triangular arrangement of dots (image omitted): $$T_1=1\qquad T_2=3\qquad T_3=6\qquad T_4=10\qquad T_5=15\qquad T_6=21$$
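Since the question came up in a Project Euler context, a tiny Python helper using the closed form, cross-checked against the naive sum and the binomial-coefficient form:

```python
import math

def triangular(n):
    # closed form n(n+1)/2; n(n+1) is always even, so integer division is exact
    return n * (n + 1) // 2

assert triangular(4) == 10
assert all(triangular(n) == sum(range(1, n + 1)) for n in range(200))
assert all(triangular(n) == math.comb(n + 1, 2) for n in range(200))
```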
{ "language": "en", "url": "https://math.stackexchange.com/questions/60578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "99", "answer_count": 4, "answer_id": 3 }
There exists $C\neq0$ with $CA=BC$ iff $A$ and $B$ have a common eigenvalue Question: Suppose $V$ and $W$ are finite dimensional vector spaces over $\mathbb{C}$. $A$ is a linear transformation on $V$, $B$ is a linear transformation on $W$. Then there exists a non-zero linear map $C:V\to W$ s.t. $CA=BC$ iff $A$ and $B$ have a common eigenvalue. ===========This is incorrect========== Clearly, if $CA=BC$, suppose $Ax=\lambda x$, then $B(Cx)=CAx=C(\lambda x)=\lambda (Cx)$, so $A$ and $B$ have the common eigenvalue $\lambda$. On the other hand, if $A$ and $B$ have a common eigenvalue $\lambda$, suppose $Ax=\lambda x, By=\lambda y$. Define $C:V\to W$ s.t. $Cx=y$, then $BCx=By=\lambda y=C\lambda x=CAx$. But I don't know how to make $BC=CA$ for all $x\in V$. ======================================
Here is a simple solution for the if condition. Let $\lambda$ be the common eigenvalue. Let $u$ be a right eigenvector for $B$, that is $$Bu= \lambda u$$ and $v$ be a left eigenvector for $A$, that is $$v^TA= \lambda v^T$$ Then $C =uv^T$ is a non-zero linear map from $V$ to $W$ which works: $$CA = u v^T A = \lambda u v^T =\lambda C$$ $$BC= B u v^T= \lambda u v^T= \lambda C$$ so $CA=BC$.
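A quick NumPy sanity check of this construction; the specific matrices below are made up for the example, built to share the eigenvalue $\lambda=2$:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
DA = np.diag([lam, 3.0, 5.0])
DB = np.diag([lam, 7.0])
P = rng.standard_normal((3, 3))
Q = rng.standard_normal((2, 2))
A = P @ DA @ np.linalg.inv(P)       # acts on V, dim 3
B = Q @ DB @ np.linalg.inv(Q)       # acts on W, dim 2

v = np.linalg.inv(P)[0]             # left eigenvector of A:  v^T A = lam v^T
u = Q[:, 0]                         # right eigenvector of B: B u = lam u
C = np.outer(u, v)                  # 2 x 3 matrix, maps V to W

assert np.linalg.norm(C) > 0
assert np.allclose(C @ A, B @ C)
```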
{ "language": "en", "url": "https://math.stackexchange.com/questions/60641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Can someone explain consensus theorem for boolean algebra In boolean algebra, below is the consensus theorem $$X⋅Y + X'⋅Z + Y⋅Z = X⋅Y + X'⋅Z$$ $$(X+Y)⋅(X'+Z)⋅(Y+Z) = (X+Y)⋅(X'+Z)$$ I don't really understand it? Can I simplify it to $$X'⋅Z + Y⋅Z = X' \cdot Z$$ I don't suppose so. Anyways, why can $Y \cdot Z$ be removed?
Something like the following: $$X \cdot Y + X' \cdot Z + Y \cdot Z$$ $$= X \cdot Y + X' \cdot Z + (X + X') \cdot Y \cdot Z$$ $$= X \cdot Y + X \cdot Y \cdot Z + X' \cdot Z + X' \cdot Y \cdot Z$$ $$= X \cdot (Y + Y \cdot Z) + X' \cdot (Z + Y \cdot Z)$$ $$= X \cdot Y + X' \cdot Z$$ The first step uses $X + X' = 1$, and the last step uses the absorption law $Y + Y \cdot Z = Y$ (and likewise $Z + Y \cdot Z = Z$).
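Since there are only three variables, the consensus theorem (and its dual) can also be verified by brute force over all eight assignments, for instance:

```python
from itertools import product

for x, y, z in product([False, True], repeat=3):
    lhs = (x and y) or ((not x) and z) or (y and z)
    rhs = (x and y) or ((not x) and z)
    assert lhs == rhs
    # dual form: (X+Y)(X'+Z)(Y+Z) = (X+Y)(X'+Z)
    dual_lhs = (x or y) and ((not x) or z) and (y or z)
    dual_rhs = (x or y) and ((not x) or z)
    assert dual_lhs == dual_rhs
```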
{ "language": "en", "url": "https://math.stackexchange.com/questions/60713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Orthogonal mapping $f$ which preserves angle between $x$ and $f(x)$ Let $f: \mathbf{R}^n \rightarrow \mathbf{R}^n$ be a linear orthogonal mapping such that $\displaystyle\frac{\langle f(x), x\rangle}{\|f(x)\| \|x\|}=\cos \phi$, where $\phi \in [0, 2 \pi)$. Are there such mappings besides $id$, $-id$ in the case when $n$ is odd? Is it true that if $n=2k$ then there exists an orthogonal basis $e_1,\ldots,e_{2k}$ in $\mathbf{R}^{2k}$ such that the matrix $F$ of $f$ in that basis is of the form $$ F=\left [ \begin{array}{rrrrrrrr} A_1 & & &\\ & A_2 & &\\ & & \ddots \\ & & & A_{k}\\ \end{array} \right ], $$ where $$ A_1=A_2=\cdots=A_{k}=\left [ \begin{array}{rr} \cos \phi & -\sin \phi \\ \sin \phi & \cos \phi \\ \end{array} \right ] ? $$ Thanks.
Your orthogonal transformation $f$ can be "complexified" to a unitary transformation $U$ on ${\mathbb C}^n$ such that $U(x + i y) = f(x) + i f(y)$ for $x, y \in {\mathbb R}^n$. Being normal, $U$ can be diagonalized, and its eigenvalues have absolute value 1. The only possible real eigenvalues are $\pm 1$, in which case the eigenvectors can be chosen to be real as well; this corresponds to $\cos \phi = \pm 1$, and your cases $id$ and $-id$. If $v = x + i y$ is an eigenvector for a non-real eigenvalue $\lambda = a + b i$, then $\overline{v} = x - i y$ is an eigenvector for $\overline{\lambda} = a - b i$. Thus the eigenspaces for complex eigenvalues pair up. Now $Uv = \lambda v = (a x - b y) + i (a y + b x)$, so $f(x) = a x - b y$ and $f(y) = a y + b x$. Then $x^T f(x) = a \|x\|^2 - b x^T y = (\cos \phi) \|x\|^2$ and $y^T f(y) = a \|y\|^2 + b x^T y = (\cos \phi) \|y\|^2$. Therefore $b x^T y = (a - \cos \phi) \|x\|^2 = -(a - \cos \phi) \|y\|^2$. Since $b \ne 0$, $\|x\|>0$ and $\|y\|>0$, we must have $x^T y = 0$ and $a = \cos \phi$. Since $|\lambda|=\sqrt{a^2 + b^2} = 1$, $b = \pm \sin \phi$. This gives you your $2 \times 2$ block $\pmatrix{\cos \phi & -\sin \phi \cr \sin \phi & \cos \phi\cr}$ or $\pmatrix{\cos \phi & \sin \phi \cr -\sin \phi & \cos \phi\cr}$ (but you don't need both of them).
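One can check numerically that a block-diagonal matrix built from identical rotation blocks does make the angle between $x$ and $f(x)$ the same for every $x$; a quick NumPy check (with a sample angle $\phi = 0.7$ and $k=3$ blocks):

```python
import numpy as np

phi = 0.7
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
F = np.kron(np.eye(3), R)          # block-diagonal: three identical 2x2 rotations

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(6)
    # <Fx, x> = cos(phi) |x|^2 and |Fx| = |x|, so the ratio is always cos(phi)
    cosang = (F @ x) @ x / (np.linalg.norm(F @ x) * np.linalg.norm(x))
    assert abs(cosang - np.cos(phi)) < 1e-12
```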
{ "language": "en", "url": "https://math.stackexchange.com/questions/60756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}$ show that $x=-c/b$ when $a=0$ OK, this one has me stumped. Given that the solution for $ax^2+bx+c =0$ is $$x=\frac{-b\pm \sqrt{b^2-4ac}}{2a}\qquad(*)$$ how would you show using $(*)$ that $x=-c/b$ when $a=0$? (Please don't use $a=0$, hence $bx+c=0$.)
One of the two solutions approaches $-\frac{c}{b}$ in the limit as $a\rightarrow 0$. Assuming $b$ is positive, apply L'Hospital's rule (differentiating with respect to $a$) to $$x=\frac{-b+\sqrt{b^2-4ac}}{2a}:$$ the numerator has derivative $\frac{-2c}{\sqrt{b^2-4ac}}$ and the denominator has derivative $2$, so the limit is $\frac{-2c/b}{2}=-\frac{c}{b}$. If $b$ is negative, work with the other solution. (And as $a\rightarrow 0$, the second solution approaches $\pm\infty$.)
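An alternative to L'Hospital that avoids the indeterminate form is to rationalize: multiplying numerator and denominator by the conjugate gives $x = \frac{-2c}{b+\sqrt{b^2-4ac}}$, which plainly tends to $-c/b$ as $a \to 0$. A quick numerical check with arbitrary sample values $b=3$, $c=5$:

```python
import math

b, c = 3.0, 5.0
for a in [1e-2, 1e-4, 1e-6]:
    # rationalized form of the '+' root, stable for small a
    x = -2 * c / (b + math.sqrt(b * b - 4 * a * c))
    assert abs(x - (-c / b)) < 10 * a      # approaches -c/b as a -> 0
```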
{ "language": "en", "url": "https://math.stackexchange.com/questions/60792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
One last question on the concept of limits I read through this post on the notion of limits Approaching to zero, but not equal to zero, then why do the points get overlapped? But there's one last question I have. [Trust me, it'll be my last ever question on limits!] It clearly says as you get closer and closer to a certain point you are eventually getting closer to the limiting point. I.e., as $\Delta x$ approches $0$, you get to the limit. Let me give you an example which supports the above statement. Let's say you want to evaluate the limit $$ \lim_{x \to 2} \frac{x^2 - 4}{x-2} .$$ Sooner or later, you have to plug in $2$ and you get the answer $4$ and you say that the limit of that function as $x \to 2$ is $4$. But why should I plug in $2$ and not a number close to $2$? $x$ is certainly not equal to $2$, right?
An important point to consider is that you cannot actually "plug in" $2$ and get $4$. When $x = 2$, you will have a division by zero, since $x - 2 = 0$. When calculating $\frac{x^2 - 4}{x - 2}$ on a computer with $x$ set to a value very close to $2$, you may get exactly $4$ due to rounding errors. This is due to the very fact that the function approaches $4$ as $x$ approaches $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/60966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
Nested Square Roots $5^0+\sqrt{5^1+\sqrt{5^2+\sqrt{5^4+\sqrt\dots}}}$ How would one go about computing the value of $X$, where $X=5^0+ \sqrt{5^1+\sqrt{5^2+\sqrt{5^4+\sqrt{5^8+\sqrt{5^{16}+\sqrt{5^{32}+\dots}}}}}}$ I have tried the standard way of squaring and then trying some trick, but nothing is working. I have also looked at some previous nested-radical results, but none seem to be of the variety of this problem. Can anyone come up with the answer? Thanks.
The trick is to pull out a $\sqrt{5}$ factor from the second term: $$ \frac{\sqrt{5^1+ \sqrt{5^2 + \sqrt{5^4 + \sqrt{5^8 + \cdots}}}}}{\sqrt{5}} = \sqrt{1 + \sqrt{1 + \sqrt{1 + \sqrt{1 + \cdots}}}}, $$ which I call $Y$ for convenience. To see why this is true, observe that $$ \begin{align*} \frac{\sqrt{5^1+x}}{\sqrt{5}} = \sqrt{1+\frac{x}{5^1}} \\ \frac{\sqrt{5^2+x}}{5^1} = \sqrt{1+\frac{x}{5^2}} \\ \frac{\sqrt{5^4+x}}{5^2} = \sqrt{1+\frac{x}{5^4}} \end{align*} $$ and so on. Applying these repeatedly inside the nested radicals gives the claim. (The wording of this explanation has been inspired mostly by a comment of @J.M. below.) Now, it only remains to compute $Y$ and $X$. By squaring, we get $$ Y^2 = 1 + Y \ \ \ \ \implies Y = \frac{\sqrt{5}+1}{2}, $$ discarding the negative root. Plugging this value in the definition of $X$, we get: $$ X = 1 + \sqrt{5} Y = 1 + \frac{5+\sqrt{5}}{2} = \frac{7+\sqrt{5}}{2} . $$ Note on the convergence issues. As @GEdgar points out, to complete the proof, I also need to demonstrate that both sides of the first equation do converge to some finite limits. For our expressions, convergence follows from @Bill Dubuque's answer to my question on the defining the convergence of such an expression. I believe that with some work, one can also give a direct proof by showing that this sequence is bounded from above (which I hope will also end up showing the theorem Bill quotes), but I will not pursue this further. Added: See @Aryabhata's answer to a related question for a hands-on proof.
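As a numerical sanity check, truncating the nested radical after a few levels already agrees with $\frac{7+\sqrt 5}{2} \approx 4.618$; the exponents grow as $2^k$, so only a handful of levels fit in floating point, but that is plenty:

```python
import math

def truncated(depth):
    # evaluate 1 + sqrt(5^1 + sqrt(5^2 + sqrt(5^4 + ...))) with `depth` levels,
    # working from the innermost radical outward
    val = 0.0
    for k in reversed(range(depth)):
        val = math.sqrt(5.0 ** (2 ** k) + val)
    return 1.0 + val

assert abs(truncated(9) - (7 + math.sqrt(5)) / 2) < 1e-3
```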
{ "language": "en", "url": "https://math.stackexchange.com/questions/61012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 1, "answer_id": 0 }
Canonical to Parametric, Ellipse Equation I've done some algebra tricks in this derivation and I'm not sure if it's okay to do those things. $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$$ $$\frac{x^2}{a^2} + \frac{y^2}{b^2} = \cos^2\theta + \sin^2\theta$$ Can I really do this next step? $$\frac{x^2}{a^2} = \cos^2\theta\quad\text{and}\quad\frac{y^2}{b^2} = \sin^2\theta$$ $$x^2 = a^2\cos^2\theta\quad\text{and}\quad y^2 = b^2\sin^2\theta$$ Ignoring the negative numbers: $$x = a\cos\theta\quad\text{and}\quad y = b\sin\theta$$
The idea behind your argument is absolutely fine. Any two non-negative numbers $u$ and $v$ such that $u+v=1$ can be expressed as $u=\cos^2\theta$, $v=\sin^2\theta$ for some $\theta$. This is so obvious that it probably does not require proof. Set $u=\cos^2\theta$. Then $v=1-\cos^2\theta=\sin^2\theta$. The second displayed formula muddies things somewhat. You intended to say that if $x^2/a^2+y^2/b^2=1$, then there exists a $\theta$ such that $x^2/a^2=\cos^2\theta$ and $y^2/b^2=\sin^2\theta$. You did not mean that for any $\theta$, if $x^2/a^2+y^2/b^2=1$ then $x^2/a^2=\cos^2\theta$! But the transition from the second displayed equation to the third could be interpreted as asserting what you clearly did not intend to say. It would be better to do exactly what you did, but to use more geometric language, as follows. $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1 \quad\text{iff}\quad \left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2=1.$$ But the equation on the right holds iff the point $(x/a, y/b)$ lies on the unit circle. The points on the unit circle are parametrized by $(\cos \theta,\sin\theta)$, with $\theta$ ranging over $[0,2\pi)$, so the points on our ellipse are given by $x=a\cos\theta$, $y=b\sin\theta$.
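A quick numerical confirmation that the parametrization satisfies the canonical equation, with sample semi-axes $a=3$, $b=2$:

```python
import math
import random

a, b = 3.0, 2.0
random.seed(0)
for _ in range(1000):
    t = random.uniform(0, 2 * math.pi)
    x, y = a * math.cos(t), b * math.sin(t)
    # cos^2 + sin^2 = 1, so every parametrized point lies on the ellipse
    assert abs(x * x / (a * a) + y * y / (b * b) - 1) < 1e-12
```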
{ "language": "en", "url": "https://math.stackexchange.com/questions/61071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Series used in proof of irrationality of $ e^y $ for every rational $ y $ Following from the book "An Introduction to the Theory of Numbers" - Hardy & Wright I am having trouble with this proof. The book uses a familiar proof for the irrationality of e and continues into some generalizations that lose me. In the following statement where is the series coming from or how is the statement derived? $ f = f(x) = \frac{x^n(1 - x)^n}{n!} = \frac{1}{n!} \displaystyle\sum\limits_{m=n}^{2n} c_mx^m $ I understand that given $ 0 < x < 1 $ results in $ 0 < f(x) < \frac{1}{n!} $ but I become confused on . . . Again $f(0)=0$ and $f^{(m)}(0)=0$ if $m < n$ or $m > 2n.$ But, if $n \leq m \leq 2n $, $ f^{(m)}(0)=\frac{m!}{n!}c_m $ an integer. Hence $f(x)$ and all its derivatives take integral values at $x=0.$ Since $f(1-x)=f(x),$ the same is true at $x=1.$ All wording kept intact! The proof that follows actually makes sense when I take for granted the above. I can't however take it for granted as these are, for me, the more important details. So . . .
For your first question, this follows from the binomial theorem $$x^n(1-x)^n=x^n\sum_{m=0}^{n}{n\choose m}(-1)^mx^m=\sum_{m=0}^{n}{n\choose m}(-1)^{m}x^{m+n}=\sum_{m=n}^{2n}{n\choose m-n}(-1)^{m-n}x^m$$ where the last equality is from reindexing the sum. Then let $c_m={n\choose m-n}(-1)^{m-n}$, which is notably an integer. I'm not quite clear what your next question is.
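A small check of both the expansion and the integrality claim from the book, for a sample $n$ (pure Python, no symbolic algebra needed):

```python
from math import comb, factorial

n = 4
# the coefficients c_m of x^n (1-x)^n, for n <= m <= 2n
c = {m: (-1) ** (m - n) * comb(n, m - n) for m in range(n, 2 * n + 1)}

# the expansion reproduces x^n (1-x)^n at sample points
for x in [0.3, 0.7]:
    assert abs(x**n * (1 - x) ** n - sum(c[m] * x**m for m in c)) < 1e-12

# f^(m)(0) = m! c_m / n! is an integer for n <= m <= 2n
for m in c:
    assert (factorial(m) * c[m]) % factorial(n) == 0
```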
{ "language": "en", "url": "https://math.stackexchange.com/questions/61116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof for $\max (a_i + b_i) \leq \max a_i + \max b_i$ for $i=1..n$ I know this question is almost trivial because the truth of this statement is completely intuitive, but I'm looking for a nice and as formal as possible proof for $$\max (a_i + b_i) \leq \max a_i + \max b_i$$ with $i=1,\ldots,n$ Thanks in advance, Federico
For any choice of $j$ in $1,2,\ldots,n$, we have that $$a_j\leq\max\{a_i\}\quad\text{and}\quad b_j\leq\max\{b_i\}$$ by the very definition of "$\max$". Then, by additivity of inequalities, we have that for each $j$, $$a_j+b_j\leq\max\{a_i\}+\max\{b_i\}.$$ But because this is true for all $j$, it is true in particular for the largest out of the $a_j+b_j$'s; that is, $$\max\{a_i + b_i\} \leq \max\{a_i\} + \max\{b_i\}$$
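The inequality is also easy to stress-test on random data, for instance:

```python
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    a = [random.randint(-50, 50) for _ in range(n)]
    b = [random.randint(-50, 50) for _ in range(n)]
    # the max of the sums never exceeds the sum of the maxes
    assert max(x + y for x, y in zip(a, b)) <= max(a) + max(b)
```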
{ "language": "en", "url": "https://math.stackexchange.com/questions/61241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
standard symbol for "to prove" or "need to show" in proofs? Is there a standard symbol used as shorthand for "to prove" or "need to show" in a proof? I've seen "N.T.S." but was wondering if there is anything more abstract — not bound to English.
I routinely use WTS for "Want to Show" - and most teachers and professors that I have come across immediately understood what it meant. I do not know if this is because they were already familiar with it, or if it was obvious to them. But I still use it all the time. I got this from a few grad students at my undergrad, although a very funny internet commentator (Sean Plott, if you happen to know him) once mentioned that he uses it as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/61287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Does the Schur complement preserve the partial order? Let $$\begin{bmatrix} A_{1} &B_1 \\ B_1' &C_1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} A_2 &B_2 \\ B_2' &C_2 \end{bmatrix}$$ be symmetric positive definite and conformably partitioned matrices. If $$\begin{bmatrix} A_{1} &B_1 \\ B_1' &C_1 \end{bmatrix}-\begin{bmatrix} A_2 &B_2 \\ B_2' &C_2 \end{bmatrix}$$ is positive semidefinite, is it true $$(A_1-B_1C^{-1}_1B_1')-(A_2-B_2C^{-1}_2B_2')$$ also positive semidefinite? Here, $X'$ means the transpose of $X$.
For a general block matrix $X=\begin{pmatrix}A&B\\C&D\end{pmatrix}$, the Schur complement $S$ to the block $D$ satisfies $$ \begin{pmatrix}A&B\\C&D\end{pmatrix} =\begin{pmatrix}I&BD^{-1}\\&I\end{pmatrix} \begin{pmatrix}S\\&D\end{pmatrix} \begin{pmatrix}I\\D^{-1}C&I\end{pmatrix}. $$ So, when $X$ is Hermitian, $$ \begin{pmatrix}A&B\\B^\ast&D\end{pmatrix} =\begin{pmatrix}I&Y^\ast\\&I\end{pmatrix} \begin{pmatrix}S\\&D\end{pmatrix} \begin{pmatrix}I\\Y&I\end{pmatrix}\ \textrm{ for some } Y. $$ Hence $$ \begin{eqnarray} &&\begin{pmatrix}A_1&B_1\\B_1^\ast&D_1\end{pmatrix} \ge\begin{pmatrix}A_2&B_2\\B_2^\ast&D_2\end{pmatrix} \\ &\Rightarrow& \begin{pmatrix}S_1\\&D_1\end{pmatrix} \ge \begin{pmatrix}I&Z^\ast\\&I\end{pmatrix} \begin{pmatrix}S_2\\&D_2\end{pmatrix} \begin{pmatrix}I\\Z&I\end{pmatrix}\ \textrm{ for some } Z\\ &\Rightarrow& (x^\ast,0)\begin{pmatrix}S_1\\&D_1\end{pmatrix}\begin{pmatrix}x\\0\end{pmatrix} \ge (x^\ast,\ x^\ast Z^\ast) \begin{pmatrix}S_2\\&D_2\end{pmatrix} \begin{pmatrix}x\\Zx\end{pmatrix},\ \forall x\\ &\Rightarrow& x^\ast S_1 x \ \ge\ x^\ast S_2 x + (Zx)^\ast D_2 (Zx) \ \ge\ x^\ast S_2 x,\ \forall x\\ &\Rightarrow& S_1\ge S_2. \end{eqnarray} $$ Edit: In hindsight, this is essentially identical to alex's proof.
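A numerical spot-check of the monotonicity with random positive definite matrices (NumPy; constructing $X_1 = X_2 + PP^T$ guarantees the hypothesis $X_1 - X_2 \succeq 0$):

```python
import numpy as np

rng = np.random.default_rng(0)

def schur_complement(X, k):
    # Schur complement of the trailing block D in X = [[A, B], [C, D]]
    A, B = X[:k, :k], X[:k, k:]
    C, D = X[k:, :k], X[k:, k:]
    return A - B @ np.linalg.solve(D, C)

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

n, k = 6, 3
for _ in range(100):
    X2 = random_spd(n)
    P = rng.standard_normal((n, n))
    X1 = X2 + P @ P.T                      # X1 - X2 is positive semidefinite
    diff = schur_complement(X1, k) - schur_complement(X2, k)
    # the difference of Schur complements should be PSD (up to roundoff)
    assert np.linalg.eigvalsh((diff + diff.T) / 2).min() > -1e-8
```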
{ "language": "en", "url": "https://math.stackexchange.com/questions/61417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Motivation of the Gaussian Integral I read on Wikipedia that Laplace was the first to evaluate $$\int\nolimits_{-\infty}^\infty e^{-x^2} \, \mathrm dx$$ Does anybody know what he was doing that lead him to that integral? Even better, can someone pose a natural problem that would lead to this integral? Edit: Many of the answers make a connection to the normal distribution, but then the question now becomes: Where does the density function of the normal distribution come from? Mike Spivey's answer is in the spirit of what I am looking for: an explanation that a calculus student might understand.
The integral you gave, when taken as a definite integral $\int^{x_2}_{x_1} e^{-x^2}\,dx$ and scaled by $\frac{1}{\sqrt{\pi}}$, describes the probability density of a normally-distributed random variable $X$ with mean $0$ and standard deviation $\frac{1}{\sqrt{2}}$. This means that the numerical value of this integral gives you the probability of the event $x_1 \leq X \leq x_2$. When this integral is scaled by the right factor, it describes a family of normal distributions with mean $\mu$ and standard deviation $\sigma$. You can show it integrates to the constant $\sqrt{\pi}$ (so that when you divide by it, the value of the integral is $1$, which is what makes it into a density function) by using this trick (here for the case mean $=0$): set $I=\int e^{-x^2}\,dx$, consider $\int e^{-y^2}\,dy$, and compute their product as a double integral (using the fact that $x^2$ is a constant when considered as a function of $y$, and vice versa for $x$): $$I^2=\iint e^{-(x^2+y^2)}\,dx\,dy,$$ then use a polar change of variable $x^2+y^2=r^2$ (and, of course, a corresponding change of the region of integration). The integral is based on non-mathematical assumptions too: http://www.stat.tamu.edu/~genton/2007.AG.Bernoulli.pdf
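The normalizing constant $\sqrt\pi$ is easy to confirm numerically; a plain equally-spaced sum over $[-10,10]$ already matches to high precision, because the integrand decays so fast that both the truncated tails and the discretization error are negligible:

```python
import math

h = 0.01
total = h * sum(math.exp(-(k * h) ** 2) for k in range(-1000, 1001))
assert abs(total - math.sqrt(math.pi)) < 1e-9
# so (1/sqrt(pi)) * exp(-x^2) integrates to 1, i.e. it is a probability density
assert abs(total / math.sqrt(math.pi) - 1.0) < 1e-9
```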
{ "language": "en", "url": "https://math.stackexchange.com/questions/61480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 6, "answer_id": 4 }