# Number of 5-digit numbers such that the sum of their digits is even
I found the same question on this site and many others, and everywhere the answer $45000$ is given. I have no problem with the answer itself, but I am having trouble with the logic: everywhere it is simply stated that since there are $90000$ numbers in total, half of them satisfy the condition, without explaining anywhere why half of them do. I know that half the numbers are even and half are odd, but that is not what is asked here: we need the numbers whose sum of digits is even, not the numbers that are themselves even. These conditions seem very different to me, yet the argument is used everywhere. Can someone please clarify how we can say that half the numbers have an even digit sum? I don't think this question is entirely a duplicate, but if it is, please let me know where I can find the explanation of what I am asking.
• Hint: a sum of numbers is even if and only if there are an even number of odd numbers in the sum. – Colm Bhandal Nov 25 '15 at 18:23
• An other way of seeing it, the last number decide if the sum is odd or even. If the sum of the 4 first numbers is even, then the last number need to be even. If the sum of the 4 first numbers is odd, then the last number need to be odd. – Alain Remillard Nov 25 '15 at 18:26
• @Colm That is a very complicated use of the terms even and odd!! Can you please expand this comment just a little bit, so that I can at least get a hint... – Freelancer Nov 25 '15 at 18:26
• @Freelancer- actually I think Alain Remillard's suggestion is the more elegant of the two. Can you see how this leads to the result? How many possibilities are there for that last digit? And how many of them result in an even number, how many in an odd number? – Colm Bhandal Nov 25 '15 at 18:28
To expand on the wonderful insight of Alain Remillard in the comments, let's consider the set of all $5$ digit numbers. Now, you can partition this set into $9 \times 10^3$ subsets (we exclude a leading $0$ as a possibility), each of which contains all numbers with a common prefix of $4$ digits. For example, the subset for the prefix $2346$ would be:
$$\{23460, 23461, 23462, 23463, \dots, 23469\}$$
Note that each subset contains exactly $10$ elements, with the last digit running through $0$ to $9$. Now, to prove that exactly half the numbers in the entire set have even digit sums, all we have to do is prove that half the numbers in each of these subsets have an even digit sum. So we've reduced the problem to something much simpler! I hope you can see why.
Now, to show that in any subset there are exactly $5$ elements with an even digit sum, consider the first element of the subset; in the example above it is $23460$. The sum of its digits is either even or odd (here it is $15$, which is odd). If it's even, then the next number's digit sum is odd, because you're just adding $1$ to it, and vice versa. So the parities of the sums go even, odd, even, odd, ... or else odd, even, odd, even, ... In either case, among the $10$ elements exactly $5$ have an even digit sum and $5$ an odd one. And we are done.
Update: As the OP cleverly points out in the comments below, the choice to fix the first four digits is indeed arbitrary. The first digit must be fixed, because there are only $9$ possibilities for this, but after this any three of the remaining four digits can be fixed. The remaining digit will then have ten possibilities, and the proof will proceed just as above.
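The partition argument is easy to verify by brute force; here is a small Python sketch (not part of the answer, just a check) confirming that every $4$-digit prefix yields exactly $5$ completions with an even digit sum, and hence that $45000$ of the $90000$ five-digit numbers qualify:

```python
# Check: for every 4-digit prefix, exactly 5 of the 10 last digits
# give an even digit sum; hence exactly half of all 5-digit numbers do.
def digit_sum(n):
    return sum(int(d) for d in str(n))

assert all(
    sum(digit_sum(prefix * 10 + last) % 2 == 0 for last in range(10)) == 5
    for prefix in range(1000, 10000)
)

count = sum(digit_sum(n) % 2 == 0 for n in range(10000, 100000))
print(count)  # 45000
```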
• Well, you fixed the first digits and then showed that for the last digit half the sums will be even and half will be odd... let us say I want to think in another way and I fix the last digits (say for example $X234$) and then show that half of the numbers so formed will have an even sum and half an odd sum... is that correct also?? ...can I do this... – Freelancer Nov 25 '15 at 19:03
• @Freelancer- that is a very clever insight. Nicely thought out :):) Yes you can indeed fix any three digits and the first digit and the above method will work. I will update accordingly. – Colm Bhandal Nov 26 '15 at 12:00
• even + even = even
• even + odd = odd
• odd + odd = even
Using this we can show the even-ness of the sum of digits. Take $234582$. We have
$$E + O + E + O + E + E \\= (E + O) + (E + O) + (E + E) \\= O + O + E \\= (O+O) + E \\= E + E \\= E.$$
So, for a five-digit number, what numbers of odd digits can we have to make the sum even?
Next, let's count the number of five-digit numbers that have three odd digits. (Is the sum of digits of these numbers even or odd?)
There are $5^3 = 125$ ways to choose the three odd digits, since each can be any of $1,3,5,7,9$. There are $_5C_3 = 10$ ways to choose which of the five places the odd digits occupy. Then, there are $5^2 = 25$ ways to choose the two even digits. So the number of five-digit numbers with three odd digits is $125 \cdot 10 \cdot 25 = 31250.$ (I'm assuming leading zeroes are valid, like $00123$.)
To solve the problem, then, you'll need to determine what numbers of odd digits will give you an even sum, and then count each of those cases.
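The case count above can be checked with a short sketch (leading zeros allowed, as assumed in the answer); the sum is even exactly when the number of odd digits is $0$, $2$, or $4$:

```python
# Count 5-digit strings (00000-99999) with exactly k odd digits:
# choose the k positions, then 5 choices per odd digit and 5 per even digit.
from math import comb

def count_with_k_odd(k):
    return comb(5, k) * 5**k * 5**(5 - k)

assert count_with_k_odd(3) == 31250        # the worked case in the answer
even_sum = sum(count_with_k_odd(k) for k in (0, 2, 4))
print(even_sum)  # 50000, half of the 100000 strings
```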
# Isomorphic groups
I know that there is a formal definition of isomorphism, but for the purpose of this homework question, I call two groups isomorphic if they have the same structure, that is, the group table for one can be turned into the table for the other by a suitable renaming.
Now consider $\mathbf{Z}_2=\{0,1\}$ (group under addition modulo $2$) and $\mathbf{Z}_3^{\times}=\{1,2\}$ (group under multiplication modulo $3$). In general $\mathbf{Z}_n^{\times}=\{a|0\leq a\leq n-1 \text{ and }\gcd(a,n)=1\}$. Now the group table for $\mathbf{Z}_2$ is: $$\begin{array}{c|cc} &0&1\\ \hline 0&0&1\\ 1&1&0 \end{array}$$ and the group table for $\mathbf{Z}_3^{\times}$ is: $$\begin{array}{c|cc} &1&2\\ \hline 1&1&2\\ 2&2&1 \end{array}$$ Obviously these two groups are isomorphic because if in the first table, I replace $0$s by $1$s and $1$s by $2$s, I get exactly the second table.
However, if I write out the group table for $\mathbf{Z}_4$ and $\mathbf{Z}_5^{\times}$, there is no way that group table for one can be turned into the table for the other by a suitable renaming. This means that these two groups are not isomorphic but the question asks me to prove that they are isomorphic. Now what should I do?
There is no difference between the usual formal notion of "being isomorphic" and the one you propose to use! – Mariano Suárez-Alvarez Apr 2 '11 at 5:10
In any case, you should look harder at the group tables for $\mathbb Z_4$ and $\mathbb Z_5^\times$!... – Mariano Suárez-Alvarez Apr 2 '11 at 5:10
There is a suitable renaming. Keep trying... – Zev Chonoles Apr 2 '11 at 5:12
• In short, the two examples are both concerned with the structure of cyclic groups, and it's the same thing. – awllower Apr 2 '11 at 7:08
You are incorrect in claiming that there is no way to turn one table into the other. But the key is that you are not just allowed to rename, you are also allowed to list the elements in a different order! (After all, listing the elements of $\mathbf{Z}_4$ in a different order in the table will not change the group or the operation, will it?) This amounts to shuffling rows and columns together: if you exchange rows 2 and 3, say, you should also exchange columns 2 and 3.
The table for $\mathbf{Z}_4$ is: $$\begin{array}{c|cccc} +&0&1&2&3\\ \hline 0&0&1&2&3\\ 1&1&2&3&0\\ 2&2&3&0&1\\ 3&3&0&1&2 \end{array}$$ The table for $Z_5^{\times}$ is: $$\begin{array}{c|cccc} \times&1&2&3&4\\ \hline 1&1&2&3&4\\ 2&2&4&1&3\\ 3&3&1&4&2\\ 4&4&3&2&1 \end{array}$$ If you are going to be able to rename the entries in the last table to match the first, then "1" must be renamed "0". Now, notice that there is only one of the remaining four elements that when operated with itself gives you the "identity"; since in $\mathbf{Z}_4$ this happens for $2$, you may want to shuffle the rows and columns to move that element to be in the third row and column and see what you have then.
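As a quick sanity check of the suggested renaming (a sketch only, following the powers-of-$2$ idea mentioned in the comments), the map $k \mapsto 2^k \bmod 5$ carries $(\mathbf{Z}_4, +)$ onto $(\mathbf{Z}_5^{\times}, \times)$ and turns one table into the other:

```python
# The renaming k -> 2^k (mod 5): a bijection Z_4 -> Z_5^x that
# converts addition mod 4 into multiplication mod 5.
iso = {k: pow(2, k, 5) for k in range(4)}        # 0->1, 1->2, 2->4, 3->3

assert sorted(iso.values()) == [1, 2, 3, 4]      # bijective onto {1,2,3,4}
assert all(iso[(a + b) % 4] == (iso[a] * iso[b]) % 5
           for a in range(4) for b in range(4))  # tables match after renaming
```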
• That's superb!! – awllower Apr 2 '11 at 7:11
Apparently not; any reason for the downvote? – Arturo Magidin Apr 2 '11 at 18:13
Who downvoted ? – awllower Apr 3 '11 at 2:45
Don't know; but someone did... – Arturo Magidin Apr 3 '11 at 2:50
Since $\mathbb{Z}_n^\times$ is cyclic, reordering can be easily done by picking a generator element (i. e., any other than 1), and computing its power. In this case, the powers of 2 are, in order, 1, 2, 4 and 3, which is the ordering for the first line of the table for $\mathbb{Z}_5^\times$ that will suit the isomorphism. – Luke May 10 '11 at 1:04
Your "naive" definition of isomorphic almost coincides with the abstract one. The difference is that apart from renaming you must also be allowed to change the order of the elements. Then you should have no difficulties with $\mathbb{Z}/4\mathbb{Z}$ vs. $(\mathbb{Z}/5\mathbb{Z})^\times$.
Note by the way that this little confusion is a good argument for learning the actual definition of group isomorphism, rather than trying to avoid it by only thinking in terms of "group tables". – Pete L. Clark Apr 2 '11 at 12:30
# Calculating conditional probability with marginal
Assume we are given a joint distribution $$P(X,Y)$$ where $$P(0,0)=0.1$$, $$P(0,1) = 0.4$$, $$P(1,0)=0.3$$, and $$P(1,1)=0.2$$. The goal is to compute $$P(X|Y=1)$$.
Traditionally, solving a conditional probability problem $$P(A|B)$$ simplifies to $$\frac{P(A,B)}{P(B)}$$, but I'm unsure how to apply it to this case. In particular, I am unclear on what the probability $$P(X,Y=1)$$ means since $$P(X)$$ is a marginal probability.
To avoid this, I enumerated the different values $$X$$ takes on and plugged it into the original quantity to solve -- $$P(X|Y=1)$$:
• $$P(X=0|Y=1) = 0.4/0.6 = 4/6$$
• $$P(X=1|Y=1) = 0.2/0.6 = 2/6$$
This gives me the final answer of $$P(X|Y=1) = [4/6, 2/6]$$, but I'm not sure whether the answer should be multiple probabilities or a single probability.
• $$P(X|Y=1)$$ is a distribution in the same way that $$P(X)$$ is also a distribution (and not just one "probability"). One probability is $$P(X=0, Y=0)=0.1$$, for example. – nbro Jan 28 at 2:14
• @nbro Does this mean my answer is correct? – Shrey Jan 28 at 4:04
• you're ok. just as $p(x,y)$ is a 2D function(table), $p(x)$, $p(x|Y=y)$ are 1D functions (rows). – gunes Jan 28 at 4:52
• $P(X|Y=1)$ is shorthand for $P(X=x|Y=1)$ – StatsStudent Jan 28 at 5:47
You have gone about this correctly, but the final answers are typically written as a function of $$x$$. It's helpful, I think, to remember that $$P(X|Y=1)$$ is just shorthand for $$P(X=x|Y=1)$$ where $$x$$ is any number in the support of $$x$$ (in this case 0 and 1). So you'd calculate this as follows:
$$\begin{eqnarray*} \\{P(X|Y=1)} & = & {P\left(X=x|Y=1\right)}\\ & = & \frac{P\left(X=x,\,Y=1\right)}{P\left(Y=1\right)}\\ & = & \frac{P\left(X=x,\,Y=1\right)}{P\left(X=0,\,Y=1\right)+P\left(X=1,\,Y=1\right)}\\ & = & \frac{P\left(X=x,\,Y=1\right)}{0.4+0.2}\\ & = & \frac{P\left(X=x,\,Y=1\right)}{0.6} \end{eqnarray*}$$
Now, writing this as a function of $$x$$ gives:
$$\begin{eqnarray*} P\left(X=x|Y=1\right) & = & \begin{cases} \frac{P\left(X=0,\,Y=1\right)}{0.6} & ,\text{for }x=0\\ \frac{P\left(X=1,\,Y=1\right)}{0.6} & ,\text{for }x=1 \end{cases}\\ & = & \begin{cases} \frac{0.4}{0.6} & ,\text{for }x=0\\ \frac{0.2}{0.6} & ,\text{for }x=1 \end{cases}\\ & = & \begin{cases} \frac{2}{3} & ,\text{for }x=0\\ \frac{1}{3} & ,\text{for }x=1 \end{cases} \end{eqnarray*}$$
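The two displays above can be reproduced mechanically; here is a minimal Python sketch of the same computation, conditioning the joint table on $$Y=1$$ and renormalizing by the marginal:

```python
# Condition the joint distribution on Y=1 and renormalize by P(Y=1).
joint = {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.3, (1, 1): 0.2}

p_y1 = sum(p for (x, y), p in joint.items() if y == 1)   # P(Y=1) = 0.6
cond = {x: joint[(x, 1)] / p_y1 for x in (0, 1)}         # the pmf of X | Y=1
# cond is approximately {0: 2/3, 1: 1/3}

assert abs(sum(cond.values()) - 1) < 1e-12               # a pmf sums to 1
```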
You are correct. You are supposed to be getting "multiple" probabilities.
You said that your goal is to figure out $$P(X|Y = 1)$$. Well, in order to achieve your goal, you need to know what the probability that $$X = 0$$ given that $$Y = 1$$ is as well as what the probability that $$X = 1$$ given that $$Y = 1$$ is, which you did.
nbro's comment is basically telling you that when you get "multiple" probabilities, you are actually computing a probability mass (or density) function (or, equivalently, yet not the same, the distribution of the random variable $$X$$ given $$Y = 1$$).
Note: $$P(X|Y=1)=[4/6,2/6]$$ is the probability mass function of the random variable $$X$$ given $$Y = 1$$ because it tells you what that probability is for all the possible values of $$X$$, ie, $$X = 0$$ and $$X$$ = 1.
Finally, I use "multiple" in quotes because nobody really says that. In your case, they would ask something like: "This gives me the final answer of P(X|Y=1)=[4/6,2/6], but I'm not sure whether the answer should be $${\it \text{ a probability mass function}}$$ or a single probability."
# If series is absolutely convergent then $\sum \limits_{n\in I}a_n=\sum \limits_{k=1}^{\infty}\sum \limits_{n\in I_k}a_n.$
Suppose that the series $$\sum \limits_{n=1}^{\infty}a_n$$ is absolutely convergent and let $$I\subseteq \mathbb{N}$$ such that $$I=\bigsqcup\limits_{k=1}^{\infty}I_k$$. Then show that $$\sum \limits_{n\in I}a_n=\sum \limits_{k=1}^{\infty}\sum \limits_{n\in I_k}a_n. \qquad (*)$$
I don't have any idea how to solve it.
I do know that in any absolute convergent series permutation of terms does not change the sum and I guess it should be used somehow in order to prove equality $$(*)$$.
Can anyone show the rigorous proof of equality $$(*)$$, please?
• here is a proof. Here is another. – Masacroso May 16 at 20:14
• What's the definition of summation here? – Hashem Ben Abdelbaki May 16 at 20:14
• @Masacroso, the links which you provided have nothing in common with my question. I do know that in an absolutely convergent series any permutation does not change the sum. – ZFR May 16 at 20:22
• @ZFR you had written "I do know that in any absolute convergent series permutation of terms does not change the sum. But I want to prove it rigorously and cannot do it." Hence my comment providing two formal proofs. Make clear your question please. – Masacroso May 16 at 20:23
• @Masacroso, sorry about that. Done – ZFR May 16 at 20:27
First assume that $$a_n \ge 0$$ and define $$\sum_{n \in I} a_n = \sup_{J \subset I, J \text{ finite}} \sum_{n \in J} a_n$$. Note that it follows that if $$I \subset I'$$ then $$\sum_{n \in I} a_n \le \sum_{n \in I'} a_n$$.
From https://math.stackexchange.com/a/3680889/27978 we see that if $$K = K_1 \cup \cdots \cup K_m$$, a disjoint union, then $$\sum_{n \in K} a_n = \sum_{n \in K_1} a_n + \cdots + \sum_{n \in K_m} a_n$$.
Since $$I'=I_1 \cup \cdots \cup I_m \subset I$$ we see that $$\sum_{n \in I} a_n \ge \sum_{n \in I'} a_n = \sum_{k=1}^m \sum_{n \in I_k} a_n$$. It follows that $$\sum_{n \in I} a_n \ge \sum_{k=1}^\infty \sum_{n \in I_k} a_n$$. This is the 'easy' direction.
Let $$\epsilon>0$$, then there is some finite $$J \subset I$$ such that $$\sum_{n\in J} a_n > \sum_{n \in I} a_n -\epsilon$$. Since $$J$$ is finite and the $$I_k$$ are pairwise disjoint we have $$J \subset I'=I_1 \cup \cdots \cup I_m$$ for some $$m$$ and so $$\sum_{k=1}^\infty \sum_{n \in I_k} a_n \ge \sum_{k=1}^m\sum_{n \in I_k} a_n \ge \sum_{k=1}^m\sum_{n \in J \cap I_k} a_n = \sum_{n\in J} a_n > \sum_{n \in I} a_n -\epsilon$$.
(It is not relevant here, but a small proof tweak shows that the result holds true even if the $$a_n$$ do not have a finite sum.)
Now suppose we have $$a_n \in \mathbb{R}$$ and $$\sum_{n \in I} |a_n| = \sum_{n=1}^\infty |a_n|$$ is finite. We need to define what we mean by $$\sum_{n \in I} a_n$$. Note that $$(a_n)_+=\max(0,a_n) \ge 0$$ and $$(a_n)_-=\max(0,-a_n) \ge 0$$. Since $$0 \le (a_n)_+ \le |a_n|$$ and $$0 \le (a_n)_- \le |a_n|$$ we see that $$\sum_{n \in I} (a_n)_+ = \sum_{k=1}^\infty \sum_{n \in I_k} (a_n)_+$$ and similarly for $$(a_n)_-$$.
This suggests the definition (cf. Lebesgue integral) $$\sum_{n \in I} a_n = \sum_{n \in I} (a_n)_+ - \sum_{n \in I} (a_n)_-$$.
With this definition, all that remains to be proved is that $$\sum_{k=1}^\infty \sum_{n \in I_k} a_n = \sum_{k=1}^\infty \sum_{n \in I_k} (a_n)_+ - \sum_{k=1}^\infty \sum_{n \in I_k} (a_n)_-$$ and this follows from summability and the fact that for each $$k$$ we have $$\sum_{n \in I_k} a_n = \sum_{n \in I_k} (a_n)_+ - \sum_{n \in I_k} (a_n)_-$$.
Note: To elaborate the last sentence, recall that I defined $$\sum_{n \in I_k} a_n$$ to be $$\sum_{n \in I_k} (a_n)_+ - \sum_{n \in I_k} (a_n)_-$$, so all that is happening here is the definition is applied to $$I_k$$ rather than $$I$$. Then to finish, note that if $$d_k,b_k,c_k$$ are summable and satisfy $$d_k=b_k-c_k$$ then $$\sum_{k=1}^\infty d_k= \sum_{k=1}^\infty b_k- \sum_{n=1}^\infty c_k$$, where $$d_k = \sum_{n \in I_k} a_n$$, $$b_k = \sum_{n \in I_k} (a_n)_+$$ and $$c_k = \sum_{n \in I_k} (a_n)_-$$.
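None of this replaces the proof, but a quick numerical sanity check of $$(*)$$ on a truncated absolutely convergent series may be reassuring; the tolerance below only absorbs floating-point reordering error:

```python
# Sanity check (not a proof): truncate an absolutely convergent series
# and split the index set into the three residue classes mod 3.
a = lambda n: (-1) ** n / n**2          # sum of |a_n| converges
I = range(1, 100001)

total = sum(a(n) for n in I)
by_class = sum(sum(a(n) for n in I if n % 3 == r) for r in range(3))

assert abs(total - by_class) < 1e-9     # regrouping leaves the sum unchanged
```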
• Great answer! But probably you meant bijection $\sigma: \mathbb{N}\to I$, right? – ZFR May 18 at 18:42
• @ZFR: Correct, I will fix. – copper.hat May 18 at 18:58
• When you wrote that "It is straightforward to show that for any bijection $\sigma:\mathbb{N} \to I$ we have $\sum_{n \in I} a_n = \sum_{n=1}^\infty a_{\sigma(n)}$." Do you mean here that $a_n\geq 0$ or not? I am a bit confused – ZFR May 18 at 19:17
• (+1) This is less fussy, more self-contained, more authoritative, and better connected to the literature than my answer, so it ought to replace it as the accepted answer. (I haven't read @Matematleta's answer yet, because it looks harder to understand than these two, so I reserve judgement on it, for now!) On this answer: am I right in thinking that one can also write $\sum_{n \in I}a_n = s$ iff for all $\epsilon > 0$ there exists finite $J \subseteq I$ such that $\left\lvert\sum_{n \in J}a_n- s\right\rvert < \epsilon$? – Calum Gilhooley May 18 at 19:18
• You really helped me to understand the thing which i did not understand for a long time. Thank you so much! I really appreciate your permanent help! – ZFR May 18 at 20:50
Suppose for the moment that the result is known to be true for convergent series of non-negative terms.
If $$\sum_{n=1}^\infty a_n$$ is an absolutely convergent series of real numbers, define $$a_n = b_n - c_n,$$ for all $$n \geqslant 1,$$ where $$c_n = 0$$ when $$a_n \geqslant 0$$ and $$b_n = 0$$ when $$a_n \leqslant 0.$$ Then $$|a_n| = b_n + c_n,$$ therefore $$\sum_{n=1}^\infty b_n$$ and $$\sum_{n=1}^\infty c_n$$ are convergent series of non-negative terms, therefore: \begin{align*} \sum_{n \in I}a_n & = \sum_{n \in I}b_n - \sum_{n \in I}c_n \\ & = \sum_{k=1}^\infty\sum_{n \in I_k}b_n - \sum_{k=1}^\infty\sum_{n \in I_k}c_n \\ & = \sum_{k=1}^\infty\left( \sum_{n \in I_k}b_n - \sum_{n \in I_k}c_n\right) \\ & = \sum_{k=1}^\infty\sum_{n \in I_k}(b_n - c_n) \\ & = \sum_{k=1}^\infty\sum_{n \in I_k}a_n. \end{align*} So it is enough to prove the result on the assumption that $$a_n \geqslant 0$$ for all $$n \geqslant 1.$$
Given any set $$K \subseteq \mathbb{N},$$ I shall use the Iverson bracket notation: $$[n \in K] = \begin{cases} 1 & \text{if } n \in K, \\ 0 & \text{if } n \notin K. \end{cases}$$ I shall assume that, however the notation $$\sum_{n \in K}a_n$$ has been defined, it satisfies the identity: $$\sum_{n \in K}a_n = \sum_{n=1}^\infty a_n[n \in K].$$ Let $$J_k = I_1 \cup I_2 \cup \cdots \cup I_k$$ ($$k = 1, 2, \ldots$$). Because the $$I_k$$ are disjoint, we have $$[n \in J_k] = [n \in I_1] + [n \in I_2] + \cdots + [n \in I_k],$$ therefore $$\sum_{n \in I_1}a_n + \sum_{n \in I_2}a_n + \cdots + \sum_{n \in I_k}a_n = \sum_{n \in J_k}a_n \leqslant \sum_{n \in I}a_n,$$ therefore $$\sum_{k=1}^\infty\sum_{n \in I_k}a_n \leqslant \sum_{n \in I}a_n,$$ and the outer infinite sum on the left hand side exists, because its partial sums are bounded above by the sum on the right hand side. On the other hand, for all $$m \geqslant 1,$$ \begin{align*} \sum_{n=1}^ma_n[n \in I] & = \sum_{n=1}^ma_n[n \in I_1] + \sum_{n=1}^ma_n[n \in I_2] + \cdots + \sum_{n=1}^ma_n[n \in I_r] \\ & \leqslant \sum_{n \in I_1}a_n + \sum_{n \in I_2}a_n + \cdots + \sum_{n \in I_r}a_n \\ & \leqslant \sum_{k=1}^\infty\sum_{n \in I_k}a_n, \end{align*} where $$r = \max\{k \colon n \leqslant m \text{ for some } n \in I_k\},$$ therefore $$\sum_{n \in I}a_n \leqslant \sum_{k=1}^\infty\sum_{n \in I_k}a_n,$$ and the two inequalities together prove (*).
• Thanks a lot for your answer! I read your answer and I have two questions: 1) I am a bit confused by the definition of $r$? Could you explain it in details, please? 2) Have you ever used that the series $\sum \limits_{n=1}^{\infty}a_n$ is absolutely convergent? – ZFR May 17 at 2:16
• I would define $r$ in this way: for each $n\in I$ s.t. $1\leq n\leq m$ there is $p(n)$ s.t. $n\in I_{p(n)}$ and let's take $r=\max \{p(1),\dots,p(m)\}$ then $[n\in I]=[n\in I_1]+\dots+[n\in I_r]$. Is my reasoning correct? And still I would be happy if you can explain and point the moments where you have used that the series is absolutely convergent. – ZFR May 17 at 2:30
• Your definition of $r$ looks right to me, and looks equivalent to mine. Certainly I had much the same idea in mind. I used the absolute convergence of $\sum_{n=1}^\infty a_n$ when I inferred that the two series of non-negative terms $\sum_{n=1}^\infty b_n$ and $\sum_{n=1}^\infty c_n$ are convergent. I'm going to try to get some more sleep now, but I'll have another look at the whole thing after lunch, and see if I can make it any clearer. (Assuming it isn't all just a load of dingo's kidneys, of course!) – Calum Gilhooley May 17 at 9:04
• I deliberately chose not to write \begin{gather*} b_n = \frac{|a_n| + a_n}2 \geqslant 0, \\ c_n = \frac{|a_n| - a_n}2 \geqslant 0, \end{gather*} but perhaps it would have been clearer had I done so. – Calum Gilhooley May 17 at 15:00
• Perhaps I should also have stated explicitly that the identity I took to be satisfied by $\sum_{n \in K}a_n$ is a possible definition of that notation; but it is not the only possible definition; and you didn't say what definition you were using; so I left it open. – Calum Gilhooley May 17 at 15:05
I think there is an elementary proof (one without measure theory), that we can adapt from a similar claim in Apostol's Analysis book. Without loss of generality, $$I=\mathbb N$$. For each $$k\in \mathbb N,\ I_k$$ may be regarded as a map from some subset $$\{1,2,\cdots\}\subseteq \mathbb N$$, to $$\{\sigma_k(1),\sigma_k(2),\cdots\}$$ which may or may not be infinite, so $$\sigma_k$$ is an injective map from the subset of $$\mathbb N$$ of the same cardinality as $$|I_k|,$$ starting at $$1$$, to the $$\textit{set}\ I_k.$$ If $$|I_k|=j$$, extend $$I_k$$ to all of $$\mathbb N$$ by mapping $$n\in \mathbb N\setminus \{1,2,\cdots, j\}$$ to $$\mathbb N\setminus \{\sigma_k(1),\sigma_k(2),\cdots, \sigma_k(j)\}$$ injectively and defining $$a'_n:=0$$ for all $$n\in \mathbb N\setminus \{\sigma_k(1),\sigma_k(2),\cdots, \sigma_k(j)\}$$. This construction will not affect any of the sums, so without loss of generality, $$I_k$$ maps $$\mathbb N$$ to a subset of $$\mathbb N$$ such that
$$\tag1 I_k\ \text{is injective on}\ \mathbb N$$
$$\tag2 \text{the range of each}\ I_k \ \text{is a subset of } \ \mathbb N, \text{say}\ P_k$$
$$\tag3 \text{the}\ P_k\ \text{are disjoint}$$
Now put $$\tag4 b_k(n)=a_{I_{k}(n)}\ \text{and}\ s_k=\sum^\infty_{n=0}b_k(n)$$
which is well-defined by $$(1)-(3).$$ We have to prove that
$$\tag5 \sum^\infty_{k=0}a_k=\sum^\infty_{k=0}s_k$$
It's easy to show that the right hand side of this converges absolutely. To find the sum, set $$\epsilon>0$$ and choose $$N$$ large enough so that $$\sum^\infty_{k=0}|a_k|-\sum^n_{k=0}|a_k|<\frac{\epsilon}{2}$$ as soon as $$n>N.$$ This implies also that
$$\tag6\left|\sum^\infty_{k=0}a_k-\sum^n_{k=0}a_k\right|<\frac{\epsilon}{2}$$
Now choose $$\{I_1,\cdots, I_r\}$$ so that each element of $$\{a_1,\dots ,a_N\}$$ appears in the sum $$\sum^\infty_{n=0}a_{I_{1(n)}}+\cdots +\sum^\infty_{n=0}a_{I_{r(n)}}=s_1+\cdots+ s_r.$$ Then, if $$n>r,N$$ we have
$$\tag 7\left|\sum^n_{k=0}s_k-\sum^n_{k=0}a_k\right|<\sum^\infty_{n=N+1}|a_n|<\frac{\epsilon}{2}$$
Now $$(5)$$ follows from $$(6)$$ and $$(7).$$
• I am a bit confused with your definition of $I_k$? Could you clarify it, please? – ZFR May 16 at 23:26
• $I_k$ is some subset of $\mathbb N$. So it may be regarded as a function on the subset of $\mathbb N$ of the same cardinality. For instance, if $I_k=\{2,5,7\}$ then $I_k$ the function would map $\{1,2,3\}$ to $\{2,5,7\}$. If $I_k$ is infinite, then it is countable, so there is an injective function from $\mathbb N$ to it. That would be our $I_k$ considered as a function on $\mathbb N.$ – Matematleta May 16 at 23:31
• Hmm. I guess that i got you. Also could you show why the RHS of (5) converges absolutely, please? – ZFR May 16 at 23:37
• Hint: show directly by comparison with $\sum |a_n|$ that $\{s_k\}$ has bounded partial sums. – Matematleta May 16 at 23:42
# Why don't you need the Axiom of Choice when constructing the "inverse" of an injection?
Suppose $f:X\rightarrow Y$ is a surjection and you want to show that there exists $g:Y\rightarrow X$ s.t. $f\circ g=\mathrm{id}_Y$. You need the AC to show this.
However, suppose $f$ is an injection and you want to show there is $g$ s.t. $g\circ f=\mathrm{id}_X$. Then, according to my textbook, you don't need the AC to show this.
This is counterintuitive to me, because it's like you need a special axiom to claim that an infinite product of big sets is nonempty, while you don't need one to claim that an infinite product of singleton sets is nonempty, even though the latter seems like a weaker claim than the former.
So why don't you need the AC to show the latter?
EDIT: $X$ should be nonempty.
EDIT 2: I realized (after asking this) that my question mostly concerns whether the AC is needed to say that an infinite product of finite sets is nonempty, and why.
• I can't fix your details (Asaf will probably see to that) but here's the intuition: Why do we need AC? Because with big sets, the problem is specifying how to pick the elements. No such problem with singleton sets! – Ragib Zaman Apr 16 '12 at 13:32
• Making one choice (or some fixed finite number of choices) doesn't require Axiom of Choice. It's entailed by first-order logic, q.v. Rule C. – hardmath Apr 16 '12 at 13:42
• hardmath: your comment is best in answering my question as I hold in my heart. Could you please elaborate on that? – Pteromys Apr 16 '12 at 13:47
• @Pteromys: Read this answer to why we can choose from non-empty sets. Finitely many choices follow by induction. – Asaf Karagila Apr 16 '12 at 13:48
• @Pteromys: As others have noted, the Axiom of Choice is needed to deal with infinite products of different sets, even in the case those sets each contain two elements. We've moved sufficiently far from your original Question, it might be best to formulate a new one if you want elaboration! – hardmath Apr 16 '12 at 14:58
The need for the axiom of choice is to choose arbitrary elements. Injectivity eliminates this need.
Assume that $A$ is not empty, if $f\colon A\to B$ is injective this means that if $b\in B$ is in the range of $f$ then there is a unique $a\in A$ such that $f(a)=b$.
This means that we can define (from $f$) what is the $a$ to which we send $b$.
So if $f$ is not onto $B$ we have two options:
1. $b\in B$ in the range of $f$, then we have exactly one option to send $b$ to.
2. $b\in B$ not in the range of $f$. Since $A$ is not empty fix in advance some $a_0\in A$ and send $b$ to $a_0$.
Another way to see this is let $B=B'\cup Rng(f)$, where $B'\cap Rng(f)=\varnothing$. Fix $a_0\in A$ and define $g|_{B'}(x)=a_0$. For every $b\in Rng(f)$ we have that $f^{-1}[\{b\}]=\{a\in A\mid f(a)=b\}$ is a singleton, so there is only one function which we can define in: $$\prod_{b\in Rng(f)}f^{-1}[\{b\}]$$
Now let the unique function in the product be $g|_{Rng(f)}$ and define $g$ to be the union of these two.
Your intuition about the need for the axiom of choice is true for surjections, if $f$ was surjective then we only know that $f^{-1}[\{b\}]$ is non-empty for every $b\in B$, and we need the full power of the axiom of choice to ensure that an arbitrary surjection has an inverse function.
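The two-case construction above can be sketched directly; `left_inverse` is a hypothetical helper name, and the point is that one fixed $a_0$ plus injectivity pins $g$ down with no arbitrary choices:

```python
# Build g: B -> A with g(f(a)) = a for all a in A. Each fibre of an
# injective f is a singleton, so the table below is forced; only the
# single fixed a0 handles points outside the range.
def left_inverse(f, A):
    a0 = next(iter(A))                 # one pick from a non-empty set
    table = {f(a): a for a in A}       # well-defined because f is injective
    return lambda b: table.get(b, a0)  # b outside Rng(f) goes to a0

A = {1, 2, 3}
f = lambda a: 10 * a                   # injective into {10, 20, 30, 40, ...}
g = left_inverse(f, A)
assert all(g(f(a)) == a for a in A)    # g o f = id_A
```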
To the edited question:
The axiom of choice is needed because we have models in which the axiom of choice does not hold, where there exists an infinite family of pairs whose product is empty.
There are weaker forms from which follow choice principles for finite sets. However these are still not provable from ZF on its own.
As indicated by Chris Eagle in the comments, and as I remark above, in a product of singletons there is no need for the axiom of choice since there is only one way to choose from a singleton.
• Why not just "flip the graph" with $(x,y) \mapsto (y,x)$ to get a function $g: f(X) \to X$ (since $f$ is injective) and extend it to $Y \smallsetminus f(X)$ by $g(y) = x_0$ as you said? – t.b. Apr 16 '12 at 13:42
# One-zero dividend
## Challenge description
For every positive integer n there exists a number having the form of 111...10...000 that is divisible by n i.e. a decimal number that starts with all 1's and ends with all 0's. This is very easy to prove: if we take a set of n+1 different numbers in the form of 111...111 (all 1's), then at least two of them will give the same remainder after division by n (as per pigeonhole principle). The difference of these two numbers will be divisible by n and will have the desired form. Your aim is to write a program that finds this number.
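The pigeonhole proof is already an algorithm in disguise; here is a (non-golfed) Python sketch that tracks repunit remainders modulo n and returns the difference of the first two repunits with equal remainder:

```python
# Find a multiple of n of the form 1...10...0 by scanning repunit
# remainders mod n; a repeat must occur within n+1 steps (pigeonhole).
def one_zero_multiple(n):
    seen = {}          # remainder -> number of 1's that produced it
    r, ones = 0, 0
    while True:
        ones += 1
        r = (r * 10 + 1) % n           # remainder of the repunit 1...1
        if r == 0:
            return "1" * ones + "0"    # repunit * 10 is still divisible
        if r in seen:
            # difference of the two repunits: 1's followed by 0's
            return "1" * (ones - seen[r]) + "0" * seen[r]
        seen[r] = ones

print(one_zero_multiple(3))  # 1110
```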
## Input description
A positive integer.
## Output description
A number p in the form of 111...10...000, such that p ≡ 0 (mod n). If you find more than one - display any of them (doesn't need to be the smallest one).
## Notes
Your program has to give the answer in a reasonable amount of time, which means brute-forcing is not permitted:
p = 0
while (p != 11..10.00 and p % n != 0)
p++
Neither is this:
do
p = random_int()
while (p != 11..10.00 and p % n != 0)
Iterating through the numbers in the form of 11..10..00 is allowed.
Your program doesn't need to handle an arbitrarily large input - the upper bound is whatever your language's upper bound is.
## Sample outputs
2: 10
3: 1110
12: 11100
49: 1111111111111111111111111111111111111111110
102: 1111111111111111111111111111111111111111111111110
• Can we have a reasonable upper bound to the possible output? (Something about less than 2.4 billion (approx. the max value of a signed integer) should be fine, as arrays or lists might be required for some implementations) – Tamoghna Chowdhury Feb 28 '16 at 15:25
• @MartinBüttner I think that the first satisfying output should be enough (reasonable timeframe constraint) – Tamoghna Chowdhury Feb 28 '16 at 15:26
• The last 0 is not necessary in the 49 test case. – CalculatorFeline Feb 28 '16 at 15:36
• @CatsAreFluffy I think all numbers need to contain at least 1 and at least one 0, otherwise 0 is a solution for any input. (Would be good to clarify this though.) – Martin Ender Feb 28 '16 at 15:38
• Just requiring one 1 should work. – CalculatorFeline Feb 28 '16 at 15:41
## Mathematica, 29 bytes
⌊10^(9EulerPhi@#)/9⌋10^#&
Code by Martin Büttner.
On input $$\n\$$, this outputs the number with $$\9\varphi(n)\$$ ones followed by $$\n\$$ zeroes, where $$\\varphi(\cdot)\$$ is the Euler totient function. With a function phi, this could be expressed in Python as
lambda n:'1'*9*phi(n)+'0'*n
It would suffice to use the factorial $$\n!\$$ instead of $$\\varphi(n)\$$, but printing that many ones does not have a reasonable run-time.
Claim: $$\9\varphi(n)\$$ ones followed by $$\n\$$ zeroes is a multiple of $$\n\$$.
Proof: First, let's prove this for the case that $$\n\$$ is not a multiple of $$\2, 3, \text{or } 5\$$. We'll show that the number consisting of $$\\varphi(n)\$$ ones is a multiple of $$\n\$$.
The number made of $$\k\$$ ones equals $$\\frac{10^k-1}9\$$. Since $$\n\$$ is not a multiple of $$\3\$$, this is a multiple of $$\n\$$ as long as $$\10^k-1\$$ is a multiple of $$\n\$$, or equivalently if $$\10^k \equiv 1\mod n\$$. Note that this formulation makes apparent that if $$\k\$$ works for the number of ones, then so does any multiple of $$\k\$$.
So, we're looking for $$\k\$$ to be a multiple of the order of $$\10\$$ in the multiplicative group modulo $$\n\$$. By Lagrange's Theorem, any such order is a divisor of the size of the group. Since the elements of the group are the numbers from $$\1\$$ to $$\n\$$ that are relatively prime to $$\n\$$, its size is the Euler totient function $$\\varphi(n)\$$. So, we've shown that $$\10^{\varphi(n)} \equiv 1 \mod n\$$, and so the number made of $$\\varphi(n)\$$ ones is a multiple of $$\n\$$.
Now, let's handle potential factors of $$\3\$$ in $$\n\$$. We know that $$\10^{\varphi(n)}-1\$$ is a multiple of $$\n\$$, but $$\\frac{10^{\varphi(n)}-1}9\$$ might not be. But $$\\frac{10^{9\varphi(n)}-1}9\$$ is a multiple of $$\9\$$ because it consists of $$\9\varphi(n)\$$ ones, so the sum of its digits is a multiple of $$\9\$$. And we've noted that multiplying the exponent $$\k\$$ by a constant preserves the divisibility.
Now, if $$\n\$$ has factors of $$\2\$$'s and $$\5\$$'s, we need to append zeroes to the end of the output. It more than suffices to use $$\n\$$ zeroes (in fact $$\\log_2(n)\$$ would do). So, if our input $$\n\$$ is split as $$\n = 2^a \times 5^b \times m\$$, it suffices to have $$\9\varphi(m)\$$ ones to be a multiple of $$\m\$$, multiplied by $$\10^n\$$ to be a multiple of $$\2^a \times 5^b\$$. And, since $$\m\$$ divides $$\n\$$, $$\\varphi(m)\$$ divides $$\\varphi(n)\$$, so it suffices to use $$\9\varphi(n)\$$ ones. So, it works to have $$\9\varphi(n)\$$ ones followed by $$\n\$$ zeroes.
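The whole claim is cheap to check numerically; a sketch (the trial-gcd `phi` is my own naive implementation, fine for small inputs):

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count -- O(n), adequate for a spot check.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def one_zero(n):
    # 9*phi(n) ones followed by n zeroes, per the proof above.
    return int("1" * (9 * phi(n)) + "0" * n)

# Verify divisibility for a range of inputs.
for n in range(1, 60):
    assert one_zero(n) % n == 0
```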
• Just to make sure no one thinks this was posted without my permission: xnor came up with the method and proof all on his own, and I just supplied him with a Mathematica implementation, because it has a built-in EulerPhi function. There is nothing mind-blowing to the actual implementation, so I'd consider this fully his own work. – Martin Ender Feb 28 '16 at 18:29
## Python 2, 44 bytes
f=lambda n,j=1:j/9*j*(j/9*j%n<1)or f(n,j*10)
When j is a power of 10 such as 1000, the floor-division j/9 gives a number made of 1's like 111. So, j/9*j gives 1's followed by an equal number of 0's like 111000.
The function recursively tests numbers of this form, trying higher and higher powers of 10 until we find one that's a multiple of the desired number.
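A Python 3 port of the same search, assuming only that `//` replaces Python 2's integer `/` (a sketch, not a golfed entry):

```python
def f(n, j=10):
    # j runs over powers of 10; j//9 * j is k ones followed by k zeroes,
    # e.g. j = 1000 gives 111 * 1000 = 111000.
    p = j // 9 * j
    return p if p % n == 0 else f(n, j * 10)
```

Note the outputs need not match the samples exactly, since any valid number of the form 1...10...0 is accepted: `f(3)` returns 111000 rather than the sample's 1110.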
• Oh, good point, we only need to check 1^n0^n... – Martin Ender Feb 28 '16 at 16:03
• @MartinBüttner If it's any easier, it also suffices to fix the number of 0's to be the input value. Don't know if it counts as efficient to print that many zeroes though. – xnor Feb 28 '16 at 16:37
• Why does checking 1^n0^n work? – Lynn Feb 28 '16 at 17:54
• @Lynn Adding more zeroes can't hurt, and there's infinitely many possible numbers of ones, some number will have enough of both ones and zeroes. – xnor Feb 28 '16 at 18:15
# Pyth, 11 bytes
.W%HQsjZTT
Test suite
Basically, it just puts a 1 in front and a 0 in back over and over again until the number is divisible by the input.
Explanation:
.W%HQsjZTT
Implicit: Q = eval(input()), T = 10
.W while loop:
%HQ while the current value mod Q is not zero
jZT Join the string "10" with the current value as the separator.
s Convert that to an integer.
T Starting value 10.
# Haskell, 51 bytes
\k->[b|a<-[1..],b<-[div(10^a)9*10^a],b`mod`k<1]!!0
Using xnor’s approach. nimi saved a byte!
## CJam, 28 25 19 bytes
Saved 6 bytes with xnor's observation that we only need to look at numbers of the form 1^n 0^n.
ri:X,:)Asfe*{iX%!}=
Test it here.
### Explanation
ri:X e# Read input, convert to integer, store in X.
,:) e# Get range [1 ... X].
As e# Push "10".
fe* e# For each N in the range, repeat the characters in "10" that many times,
e# so we get ["10" "1100" "111000" ...].
{iX%!}= e# Select the first element from the list which is divided by X.
# JavaScript (ES6), 65 bytes
Edit 2 bytes saved thx @Neil
It works within the limits of JavaScript's numeric type, with 17 significant digits. (So quite limited.)
a=>{for(n='';!(m=n+=1)[17];)for(;!(m+=0)[17];)if(!(m%a))return+m}
Less golfed
function (a) {
for (n = ''; !(m = n += '1')[17]; )
for (; !(m += '0')[17]; )
if (!(m % a))
return +m;
}
• Why not for(m=n;? – Neil Feb 28 '16 at 17:11
• @Neil because I need at least one zero. Maybe I can find a shorter way ... (thx for the edit) – edc65 Feb 28 '16 at 18:30
• Oh, that wasn't clear in the question, but I see now that the sample outputs all have at least one zero. In that case you can still save a byte using for(m=n;!m[16];)if(!((m+=0)%a)). – Neil Feb 28 '16 at 18:42
• @Neil or even 2 bytes. Thx – edc65 Feb 28 '16 at 18:52
# Husk, 11 10 bytes
-1 byte thanks to Razetime!
ḟ¦⁰modṘḋ2N
Try it online! Constructs the infinite list [10,1100,111000,...] and selects the first element which is a multiple of the argument.
# Mathematica, 140 55 bytes
NestWhile["1"<>#<>"0"&,"1",FromDigits@#~Mod~x>0&/.x->#]
Many bytes removed thanks to xnor's 1^n0^n trick.
Minimal value, 140 156 bytes
This gives the smallest possible solution.
NestWhile["1"<>#&,ToString[10^(Length@NestWhileList[If[EvenQ@#,If[10~Mod~#>0,#/2,#/10],#/5]&,#,Divisors@#~ContainsAny~{2, 5}&],FromDigits@#~Mod~m>0&/.m->#]&
It calculates how many zeros are required then checks all possible 1 counts until it works. It can output a number with no 0 but that can be fixed by adding a <>"0" right before the final &.
## Haskell, 37 bytes
f n=[d|d<-"10",i<-[1..n*9],gcd n i<2]
This uses the fact that it works to have 9*phi(n) ones, where phi is the Euler totient function. Here, it's implemented using gcd and filtering, producing one digit for each value i in the range 1 to 9*n that is relatively prime to n. It also suffices to use this many zeroes.
# Jelly, 11 bytes
⁵DxⱮḅ⁵ḍ@Ƈ⁸Ḣ
Try it online!
Ignoring runtime requirements, we can reduce this to 10 bytes using xnor's method, but with $$\\varphi(n)\$$ replaced with $$\n!\$$:
!,µ⁵Dx"µFḌ
Try it online!
## How they work
⁵DxⱮḅ⁵ḍ@Ƈ⁸Ḣ - Main link. Takes n on the left
⁵D - Yield [1, 0]
Ɱ - For each integer i in [1, 2, ..., n]:
x - Repeat each element of [1, 0] i times
ḅ⁵ - Convert each back into an integer
Ƈ - Keep those for which the following is true:
ḍ@ - Is divisible by
⁸ - n
Ḣ - Take the first value of those remaining
!,µ⁵Dx"µFḌ - Main link. Takes n on the left
! - Yield n!
, - Yield [n!, n]. Call this l
µ - Begin new link with l on the left
⁵D - Yield [1, 0]
" - Zip [1, 0] with [n!, n] and do the following over each pair:
x - Repeat
- This yields [[1, 1, ..., 1], [0, 0, ..., 0]].
The first element has n! 1s and the second n 0s
Call this k
µ - Begin new link with k on the left
F - Flatten k
Ḍ - Convert to integer
# Perl 5, 26 bytes
includes a byte for -n (-M5.01 is free)
($.="1$.0")%$_?redo:say$.
## Explanation
$. starts off with value 1. We immediately concatenate it with 1 beforehand and 0 afterward, yielding 110, and reassign that to $. — that's what the $.="1$.0" does.
The assignment returns the assigned value so when we take it modulo the input number $_ we obtain 0 (false) if 110 is divisible by the input and nonzero (true) otherwise. • In the latter case, we redo, i.e. repeat the block (the -n switch makes the whole thing a loop block, and incidentally, assigns $_ to the input number). This concatenates another 1 and 0, yielding 11100, etc.
• If 110, or 11100, or 1111000, or whatever we're up to, is divisible by the input, we stop and say (print) it.
Note that every number divides some number of this form (i.e. one that has one more 1 than it has 0s). After all, if it divides a number of this form with more 1s than 0s, you can append 0s and it will still divide the result. And if it divides one with fewer 1s than 0s, say m 1s and n 0s with m < n, then it also divides the number with 2m 1s and n 0s.
## Sage, 33 bytes
lambda n:'1'*9*euler_phi(n)+'0'*n
This uses xnor's method to produce the output.
Try it online
# 05AB1E, 8 bytes
$Õ9*×Î׫

Port of @xnor's Mathematica answer, so make sure to upvote him!!

Explanation:

$ # Push 1 and the input-integer
Õ # Pop the input, and get Euler's totient of this integer
9* # Multiply it by 9
× # Repeat the 1 that many times as string
Î # Push 0 and the input-integer
× # Repeat the 0 the input amount of times as string
« # Concatenate the strings of 1s and 0s together
# (after which the result is output implicitly)
A more to-the-point iterative approach would be 11 bytes:
TS∞δ×øJ.ΔIÖ
Explanation:
T # Push 10
S # Convert it to a list of digits: [1,0]
∞ # Push an infinite positive list: [1,2,3,...]
δ # Apply double-vectorized on these two lists:
× # Repeat the 1 or 0 that many times as string
# (we now have a pair of infinite lists: [[1,11,111,...],[0,00,000,...]])
ø # Zip/transpose; swapping rows/columns:
# [[1,0],[11,00],[111,000],...]
J # Join each inner pair together:
# [10,1100,111000,...]
.Δ # Find the first value in this list which is truthy for:
IÖ # Check that it's divisible by the input-integer
# (after which the result is output implicitly)
# bc, 58 bytes
define f(n){for(x=1;m=10^x/9*10^x;++x)if(m%n==0)return m;}
## Sample results
200: 111000
201: 111111111111111111111111111111111000000000000000000000000000000000
202: 11110000
203: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000
204: 111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000
205: 1111100000
206: 11111111111111111111111111111111110000000000000000000000000000000000
207: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
208: 111111000000
209: 111111111111111111000000000000000000
210: 111111000000
211: 111111111111111111111111111111000000000000000000000000000000
212: 11111111111110000000000000
213: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
214: 1111111111111111111111111111111111111111111111111111100000000000000000000000000000000000000000000000000000
215: 111111111111111111111000000000000000000000
216: 111111111111111111111111111000000000000000000000000000
217: 111111111111111111111111111111000000000000000000000000000000
218: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
219: 111111111111111111111111000000000000000000000000
# dc, 27 bytes
Odsm[O*lmdO*sm+O*dln%0<f]sf
This defines a function f that expects its argument in the variable n. To use it as a program, ?sn lfx p to read from stdin, call the function, and print the result to stdout. Variable m and top of stack must be reset to 10 (by repeating Odsm) before f can be re-used.
## Results:
200: 111000
201: 111111111111111111111111111111111000000000000000000000000000000000
202: 11110000
203: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000
204: 111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000
205: 1111100000
206: 11111111111111111111111111111111110000000000000000000000000000000000
207: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
208: 111111000000
209: 111111111111111111000000000000000000
210: 111111000000
211: 111111111111111111111111111111000000000000000000000000000000
212: 11111111111110000000000000
213: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
214: 1111111111111111111111111111111111111111111111111111100000000000000000000000000000000000000000000000000000
215: 111111111111111111111000000000000000000000
216: 111111111111111111111111111000000000000000000000000000
217: 111111111111111111111111111111000000000000000000000000000000
218: 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
219: 111111111111111111111111000000000000000000000000
` | 2021-06-14T15:40:23 | {
"domain": "stackexchange.com",
"url": "https://codegolf.stackexchange.com/questions/74391/one-zero-dividend",
"openwebmath_score": 0.38380372524261475,
"openwebmath_perplexity": 1459.298190489912,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575175622121,
"lm_q2_score": 0.8723473647220786,
"lm_q1q2_score": 0.8571314611332632
} |
http://mathhelpforum.com/pre-calculus/62126-completing-square.html | 1. ## Completing the Square
What is the minimum point of the graph of the equation
y = 2x^2 + 8x + 9?
I understand the minimum point is the vertex (h, k).
To complete the square, I set the function to zero first.
2x^2 + 8x + 9 = 0
I then subtracted 9 from both sides.
2x^2 + 8x = -9
I then took half of the second term and squared it.
2x^2 + 8x = 16 - 9
Where do I go from here?
2. Hello !
y = 2x² + 8x + 9
y = 2 (x² + 4x + 9/2)
y = 2 ((x+2)²+1/2)
Therefore the minimum point of the graph is such as x+2=0 => x=-2 and y=1
3. Originally Posted by magentarita
What is the minimum point of the graph of the equation
y = 2x^2 + 8x + 9?
I understand the minimum point is the vertex (h, k).
To complete the square, I set the function to zero first.
2x^2 + 8x + 9 = 0
I then subtracted 9 from both sides.
2x^2 + 8x = -9
I then took half of the second term and squared it.
2x^2 + 8x = 16 - 9
Where do I go from here?
===============================================
magentarita,
I see where your problem is.
2x^2 + 8x = -9 ------ ok so far.
Before you take half of the second term and square it,
make sure the coefficient of the squared term 2x^2 is 1.
Therefore, 2x^2 + 8x = -9
becomes (x^2) + 4x = -9/2 -- simply divided by 2.
(x^2) + 4x = -9/2 -- now you are ready to take half of the second term and square, and you get
(x^2) + 4x + 2^2 = (-9/2 ) + 2^2
(x + 2)^2 = -1/2 -- the completing-the-square part is done at this point.
Rewrite original question to y = [(x + 2)^2 ]+ 1/2
Can you see y = a ( x-h )^2 + k --- standard form.
Vertex is at ( h, k ) = ( -2, 1/2) and a =1
Opens upward; therefore the minimum point is at the vertex.
4. ## ok............
Originally Posted by 2976math
===============================================
magentarita,
I see where your problem is.
2x^2 + 8x = -9 ------ ok so far.
Before you take half of the second term and square it,
make sure the coefficient of the squared term 2x^2 is 1.
Therefore, 2x^2 + 8x = -9
becomes (x^2) + 4x = -9/2 -- simply divided by 2.
(x^2) + 4x = -9/2 -- now you are ready to take half of the second term and square, and you get
(x^2) + 4x + 2^2 = (-9/2 ) + 2^2
(x + 2)^2 = -1/2 -- the completing-the-square part is done at this point.
Rewrite original question to y = [(x + 2)^2 ]+ 1/2
Can you see y = a ( x-h )^2 + k --- standard form.
Vertex is at ( h, k ) = ( -2, 1/2) and a =1
Opens upward; therefore the minimum point is at the vertex.
ok, but none of the choices given for the minimum point is (-2, 1/2).
(2,33)
(2,17)
(-2,-15)
(-2,1)
5. ## ok....
Originally Posted by running-gag
Hello !
y = 2x² + 8x + 9
y = 2 (x² + 4x + 9/2)
y = 2 ((x+2)²+1/2)
Therefore the minimum point of the graph is such as x+2=0 => x=-2 and y=1
You got the right answer. But how did you get y = 1?
I got y = 1/2
6. Originally Posted by magentarita
You got the right answer. But how did you get y = 1?
I got y = 1/2
Substitute x= -2 into the expression :
2 [ (-2+2)² + 1/2 ] = 2 [ 0+1/2 ] = 1
7. Originally Posted by magentarita
ok, but none of the choices given for the minimum point is (-2, 1/2).
(2,33)
(2,17)
(-2,-15)
(-2,1)
I found the problem!
y should be 1 not 1/2, so answer (-2,1) is correct.
Reason: I forgot to divide y by 2.
Remember the original question: y = ( 2x^2 ) + 8x + 9
(1/2) y = (x^2) + 4x + ( 9/2) -- divide both sides by 2
(1/2) y = [( x +2)^2 ] + (1/2) -- complete the square on the right side
y = 2 [( x +2)^2 ] + 1 ----- multiply both sides by 2
y = a [ ( x-h)^2 ] + k
where a=2, h=-2, k=1 sorry
8. I think you were originally trying to solve for the minimum point by finding the vertex through the method of 'completing the square'.
From $y=ax^2+bx+c$, you want to convert to $y=a(x-h)^2+k$, where $(h, k)$ represent the vertex of your parabola and thus the maximum or minimum point depending on a<0 or a>0.
$y=2x^2+8x+9$ Factor 2 out of first 2 terms
$y=2(x^2+4x)+9$ Next, complete the square in parentheses.
$y=2(x^2+4x+4)+9-8$ Notice that we added 2 times 4, so we subtract 8 to balance the side
$y=2(x+2)^2+1$ Now, we're in vertex form.
$V(h, k)=V(-2, 1) = \ \ minimum \ \ point$
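As a quick numeric cross-check of the vertex found above (a sketch using the standard x = -b/(2a) formula, not part of the original thread):

```python
# y = 2x^2 + 8x + 9: the vertex of y = ax^2 + bx + c sits at x = -b/(2a).
a, b, c = 2, 8, 9
x = -b / (2 * a)          # vertex x-coordinate
y = a * x**2 + b * x + c  # minimum value of y
assert (x, y) == (-2.0, 1.0)
```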
9. ## ok.........
Originally Posted by Moo
Substitute x= -2 into the expression :
2 [ (-2+2)² + 1/2 ] = 2 [ 0+1/2 ] = 1
By subbing -2 for x, I get y = 1.
I got it.
10. ## ok..........
Originally Posted by 2976math
I found the problem!
y should be 1 not 1/2, so answer (-2,1) is correct.
Reason: I forgot to divide y by 2.
Remember the original question: y = ( 2x^2 ) + 8x + 9
(1/2) y = (x^2) + 4x + ( 9/2) -- divide both sides by 2
(1/2) y = [( x +2)^2 ] + (1/2) -- complete the square on the right side
y = 2 [( x +2)^2 ] + 1 ----- multiply both sides by 2
y = a [ ( x-h)^2 ] + k
where a=2, h=-2, k=1 sorry
No need to be sorry. We all make mistakes, right?
11. ## ok..........
Originally Posted by masters
I think you were originally trying to solve for the minimum point by finding the vertex through the method of 'completing the square'.
From $y=ax^2+bx+c$, you want to convert to $y=a(x-h)^2+k$, where $(h, k)$ represent the vertex of your parabola and thus the maximum or minimum point depending on a<0 or a>0.
$y=2x^2+8x+9$ Factor 2 out of first 2 terms
$y=2(x^2+4x)+9$ Next, complete the square in parentheses.
$y=2(x^2+4x+4)+9-8$ Notice that we added 2 times 4, so we subtract 8 to balance the side
$y=2(x+2)^2+1$ Now, we're in vertex form.
$V(h, k)=V(-2, 1) = \ \ minimum \ \ point$
I enjoyed the steps as a guide. Well-done! | 2017-01-21T11:04:40 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/pre-calculus/62126-completing-square.html",
"openwebmath_score": 0.8046611547470093,
"openwebmath_perplexity": 1917.9567759754982,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9825575178175919,
"lm_q2_score": 0.8723473630627235,
"lm_q1q2_score": 0.8571314597256313
} |
http://math.stackexchange.com/questions/899109/problems-that-become-easier-in-a-more-general-form | # Problems that become easier in a more general form
When solving a problem, we often look at some special cases first, then try to work our way up to the general case.
It would be interesting to see some counterexamples to this mental process, i.e. problems that become easier when you formulate them in a more general (or ambitious) form.
## Motivation/Example
Recently someone asked for the solution of $a,b,c$ such that $\frac{a}{b+c} = \frac{b}{a+c} = \frac{c}{a+b} (=t).$
Someone suggested writing this down as a system of linear equations in terms of $t$ and solving for $a,b,c$. It turns out, either (i) $a=b=c$ or (ii) $a+b+c=0$.
Solution (i) is obvious from looking at the problem, but (ii) was not apparent to me until I solved the system of equations.
Then I wondered how this would generalize to more variables, and wrote the problem as: $$\frac{x_i}{\sum x - x_i} = \frac{x_j}{\sum x - x_j} \quad \forall i,j\in1,2,\dots,n$$
Looking at this formulation, both solutions became immediately evident without the need for linear algebra (for (ii), set $\sum x=0$ so each denominator cancels out its numerator).
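Both solution families are easy to spot-check numerically; a quick sketch (the example tuples are arbitrary picks of mine):

```python
def ratios(xs):
    # x_i / (sum - x_i) for each i, as in the generalized formulation.
    s = sum(xs)
    return [x / (s - x) for x in xs]

# (i) all equal: every ratio is 1/(n-1).
r = ratios([4, 4, 4, 4])
assert all(abs(v - r[0]) < 1e-12 for v in r)

# (ii) sum zero: each denominator is -x_i, so every ratio is -1.
r = ratios([3, -1, 5, -7])
assert all(abs(v + 1) < 1e-12 for v in r)
```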
-
The linked Question : math.stackexchange.com/questions/897118/… – lab bhattacharjee Aug 16 '14 at 12:16
Well, I personally don't think this is actually simplification by generalization, but rather simplification motivated by generalization. Like if you write $\frac{a}{b+c}$ as $\frac{a}{(a + b + c) - a}$, you'd say that the problem becomes obvious too. – Tunococ Aug 16 '14 at 12:59
@Tunococ Fair enough, I'm just saying that writing it like this didn't occur to me until I thought of generalizing it. I understand that this is all a little subjective (hence the "soft-question" tag). – MGA Aug 16 '14 at 13:01
This is relevant. – Julien Godawatta Aug 16 '14 at 13:58
Introducing topology makes certain proofs in real analysis clearer and more elegant, though I'm not sure whether they are necessarily easier per se. – Harry Johnston Aug 17 '14 at 0:24
Consider the following integral $\displaystyle\int_{0}^{1}\dfrac{x^7-1}{\ln x}\,dx$. All of our attempts at finding an anti-derivative fail because the antiderivative isn't expressable in terms of elementary functions.
Now consider the more general integral $f(y) = \displaystyle\int_{0}^{1}\dfrac{x^y-1}{\ln x}\,dx$.
We can differentiate with respect to $y$ and evaluate the resulting integral as follows:
$f'(y) = \displaystyle\int_{0}^{1}\dfrac{d}{dy}\left[\dfrac{x^y-1}{\ln x}\right]\,dx = \int_{0}^{1}x^y\,dx = \left[\dfrac{x^{y+1}}{y+1}\right]_{0}^{1} = \dfrac{1}{y+1}$.
Since $f'(y) = \dfrac{1}{y+1}$, we have $f(y) = \ln(y+1)+C$ for some constant $C$.
Trivially, $f(0) = \displaystyle\int_{0}^{1}\dfrac{x^0-1}{\ln x}\,dx = \int_{0}^{1}0\,dx = 0$. Hence $C = 0$, and thus, $f(y) = \ln(y+1)$.
Therefore, our original integral is $\displaystyle\int_{0}^{1}\dfrac{x^7-1}{\ln x}\,dx = f(7) = \ln 8$.
This technique of generalizing an integral by introducing a parameter and differentiating w.r.t. that parameter is known as Feynman Integration.
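The closed form is easy to confirm numerically; a midpoint-rule sketch (the step count is arbitrary, and the integrand extends continuously to both endpoints, so no special handling is needed):

```python
from math import log

def integrand(x):
    # (x^7 - 1)/ln(x): tends to 0 as x -> 0+ and to 7 as x -> 1-.
    return (x**7 - 1) / log(x)

# Midpoint rule on (0, 1); midpoints never hit the endpoints.
N = 100_000
total = sum(integrand((k + 0.5) / N) for k in range(N)) / N
assert abs(total - log(8)) < 1e-4
```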
-
Great example! Really neat – mattecapu Aug 16 '14 at 19:59
My favorite illustration so far. Of course the others are nice too. – MGA Aug 16 '14 at 20:22
The special form (not only the general form) can also easily be determined by setting $\ln x=-t$ and then use Frullani's integral. – Tunk-Fey Aug 17 '14 at 14:44
The integrand does have an anti-derivative in terms of the upper incomplete gamma function: $\Gamma(0, -\ln x) - \Gamma(0, -8\,\ln x)$. That still doesn't help because it's undefined at both limits of integration. – Tavian Barnes Aug 17 '14 at 21:52
@Tunk-Fey You're right, apologies. Deleting my comment. – MGA Aug 18 '14 at 11:39
George Polya's book How to Solve It calls this phenomenon "The Inventor's Paradox": "The more ambitious plan may have more chances of success." The book gives several examples, including the following.
1) Consider the problem: "A straight line and a regular octahedron are given in position. Find a plane that passes through the given line and bisects the volume of the given octahedron." If we generalize this to "a straight line and a solid with a center of symmetry are given in position..." it becomes very easy. (The plane goes through the center of symmetry and the line.)
The book also gives other examples of the Inventor's Paradox, but "more ambitious" is not always the same as "more general." Consider: "Prove that $1^3 + 2^3 + 3^3 + ... + n^3$ is a perfect square." Polya shows that it is easier to prove (by mathematical induction) that "$1^3 + 2^3 + 3^3 + ... + n^3 = (1 + 2 + 3 + ...+ n)^2$". This is more ambitious but is not more general.
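Polya's stronger identity is also trivial to sanity-check numerically (a sketch):

```python
# 1^3 + 2^3 + ... + n^3 == (1 + 2 + ... + n)^2 for each n checked.
for n in range(1, 50):
    s = sum(range(1, n + 1))
    assert sum(k**3 for k in range(1, n + 1)) == s * s
```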
The web page Generalizations in Mathematics gives many similar examples. It even gets into the difference between "more ambitious" and "more general."
-
Good point regarding generality/ambitiousness, and both are interesting. I have made a minor edit to the question to reflect this. – MGA Aug 16 '14 at 12:51
In the first the easy approach is "more general" in the sense that it picks out the significant property of an entity and ignores any other particular properties it may have. There are loads of examples like it, for example any time you're asked to prove some property of a particular group that's true of all Abelian groups, or some property of a particular function that's true of any continuous monotonic function, and so on. The problem is, "identify the useful property of this object", so the more general problem where you only have the useful property is always easier ;-) – Steve Jessop Aug 16 '14 at 19:26
The cubes and square example, termed 'more ambitious', is an example of the technique of solving a problem by the addition of an invented constraining hypothesis, and thus is actually a case of particularization. – Jose Brox Aug 20 '14 at 8:00
I recall something like this coming up when evaluating certain summations. For example, consider:
$$\sum_{n=0}^{\infty} {n \over 2^n}$$
We can generalize this by letting $f(x) = \sum_{n=0}^{\infty} nx^n$, so:
\begin{align} {f(x) \over x} &= \sum_{n=0}^{\infty} nx^{n-1} \\ &= {d \over dx} \sum_{n=0}^{\infty} x^n \\ &= {d \over dx} {1 \over {1-x}} = {1 \over (x-1)^2} \end{align}
Therefore,
$$f(x) = {x \over (x-1)^2}$$
The solution to the original problem is $f({1 \over 2}) = 2$.
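A partial-sum check of f(1/2) = 2 (a sketch; 200 terms is arbitrary and leaves a negligible tail):

```python
# Partial sum of n / 2^n.
partial = sum(n / 2**n for n in range(200))
assert abs(partial - 2) < 1e-12

# The closed form x/(x-1)^2 evaluated at x = 1/2.
x = 0.5
assert abs(x / (x - 1)**2 - 2) < 1e-12
```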
-
Interestingly, this also shows $\sum_{n=0}^\infty \frac{1}{2^n} = \sum_{n=0}^\infty \frac{n}{2^n}$, which succeeded at surprising me. :) – Keba Aug 3 '15 at 23:26
The solution to the Monty Hall problem
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, follows the fixed protocol of opening another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
becomes more obvious when you generalize it to an $N$-door problem with the host opening $N-2$ doors. For $N\gg3$ most people's intuition revolts against staying with the original choice.
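The N-door intuition is easy to simulate (a sketch; the door and trial counts are arbitrary):

```python
import random

def play(n_doors, switch, rng):
    # Host opens n_doors - 2 goat doors, leaving the pick and one other
    # closed.  If the pick was wrong, the remaining door must be the car,
    # so switching wins exactly when the initial pick misses.
    car = rng.randrange(n_doors)
    pick = rng.randrange(n_doors)
    return pick != car if switch else pick == car

rng = random.Random(0)
trials = 20_000
switch_wins = sum(play(10, True, rng) for _ in range(trials))
stay_wins = sum(play(10, False, rng) for _ in range(trials))
# Switching should win about 9/10 of the time, staying about 1/10.
assert switch_wins > 5 * stay_wins
```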
-
A more helpful approach to the standard Monty Hall is to examine what is wrong with the above formulation. The story, as told here, is consistent with a game show where the host offers a chance to switch only to contestants who have already chosen the correct door. When playing that game, you should never switch regardless of how many doors there are. In the standard problem, it is explicitly stated that the host will always offer the switch. That's what makes the odds $N-1$ to $1$ in favor of the switch; and $2$-to-$1$ odds is good enough. – David K Aug 16 '14 at 15:22
@DavidK - the text I included is Wikipedia's description of the Monty Hall problem. Have modified it. – Johannes Aug 16 '14 at 20:20
The problem has been underspecified in some widely-read sources; no surprise one of those got copied to Wikipedia. I'll accept this version as implying the player knows the fixed protocol. In that case, if someone's intuition requires $N$ to be increased above $3$ before they think the switch is beneficial, they do not understand the reason why it is beneficial. They have merely been nudged from an incorrect guess to an unjustified but luckily correct guess. – David K Aug 18 '14 at 23:12
@DavidK: I think that would be a fair criticism of someone who claims to be game-theoretically rational, but I doubt such a person exists; would a reasonable person expect a Bayes factor of $2:1$ to come to his rescue? Even though it wouldn't be necessary to the idealised mathematical observer with the eyes of a hawk, in many contexts exaggerating the difference helps me notice there is one, whence I can see via a monotonicity argument that the effect is non-zero (if weak) for $N \gtrsim 3$. – Vandermonde Feb 16 '15 at 19:27
Heck, what I thought negligible was a raw probability of 1/6, which I ought to appreciate even then. – Vandermonde Feb 16 '15 at 21:25
I'm not entirely convinced that problems made somehow easier by generalizations is exactly what is going on here.
In the example provided in your question, what made the solution to the general problem appear easier is that it dawned on you that $$x_1+\cdots+x_{j-1}+x_{j+1}+\cdots+x_n=\sum_{i=1}^nx_i-x_j.$$ Indeed, (as tunococ commented) had the less general problem been written as $$\frac{a}{a+b+c-a}=\frac{b}{a+b+c-b}=\frac{c}{a+b+c-c},$$ then your easier solution to the general problem applies here as well. I would argue that, if anything, the generalization helped you notice a pattern you had not before seen. Would you still have noticed this pattern had you not formulated the problem in a general way? Perhaps, perhaps not.
In my opinion, what your experience shows is that formulating a problem $Q$ in more general terms $P$ is one of many ways by which one can gain a fundamental insight that provides the key to the solution of the general problem $P$ (and thus inevitably also solves the initial special case $Q$ also). Sometimes, this can lead to a solution that was as of yet unknown to you and that will be more elegant or easier than the previous solutions. However, given that such an insight could easily have come without generalizing the problem, the fact that the solution did come from you thinking about the generalization seems highly circumstantial to me.
EDIT: JimmyK4542's example (and Feynman's integration trick) seems like a spectacular demonstration of the phenomenon, however.
-
I merely meant that as an example to get the discussion started, and you're of course right that in this case one doesn't have to generalize to see the solution. But personally, I didn't think of writing $+a-a$ etc. because I didn't feel the need to. Once I considered the general case, I was compelled to write it like this. Anyway, I think that better examples have now been given on here that really illustrate the point - I'm particularly fond of the Feynman integration trick. – MGA Aug 16 '14 at 20:16
JimmyK4542's example (and Feynman's integration trick in general) indeed contradicts my claim that a solution to a general problem always applies to a special case. I'll edit this out. – user78270 Aug 16 '14 at 23:53
On this site one frequently finds under the linear-algebra tag questions of the kind: what is the determinant of a matrix $$\begin{pmatrix}a&b&b&b\\b&a&b&b\\b&b&a&b\\b&b&b&a\end{pmatrix}?$$ (I've just posted this question, which contains a list of such questions). It turns out finding an answer to this question becomes almost trivial (see my answer to the linked question) when reformulated more generally as
What is the characteristic polynomial of a square matrix$~A$ of rank$~1$?
knowing that by specialisation the answer gives the determinant of $\lambda I-A$ for any scalar$~\lambda$.
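The specialisation is easy to test numerically. Below is a quick sketch (assuming NumPy is available; the helper name is my own): for the matrix above with $a$ on the diagonal and $b$ elsewhere, writing it as $(a-b)I + bJ$ with $J$ the all-ones (rank-$1$) matrix, the characteristic-polynomial argument gives $\det = (a-b)^{n-1}\bigl(a+(n-1)b\bigr)$, which we compare against a direct determinant computation.

```python
import numpy as np

def special_det(a, b, n):
    """Determinant of the n x n matrix with a on the diagonal and b elsewhere.

    The matrix is (a-b)I + bJ, and the rank-1 matrix bJ has characteristic
    polynomial x^(n-1) (x - nb), so the eigenvalues are a-b (n-1 times)
    and a + (n-1)b (once)."""
    return (a - b) ** (n - 1) * (a + (n - 1) * b)

for a, b, n in [(5, 2, 4), (3, -1, 5), (7, 7, 3)]:
    M = (a - b) * np.eye(n) + b * np.ones((n, n))
    assert abs(np.linalg.det(M) - special_det(a, b, n)) < 1e-6
```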
-
From time to time, famous problems have this feature. History suggests the point: the transcendence of $\pi$ settled the long-standing problem of squaring the circle; analytic geometry and irrationality theory settled the problem of doubling the cube; Galois invented group theory while studying the quintic; Kummer invented ideals while attacking Fermat's last theorem; global differential geometry yielded Chern's intrinsic proof of the Gauss-Bonnet theorem; and so on.
-
A broadly successful application of this was introduced by Richard Bellman under the phrase dynamic programming. The story of the "birth" of this now foundational topic in applied math is told largely in Bellman's own words here.
A related term gives more evidence of the connection of ideas: invariant imbedding.
A good discussion of dynamic programming references and examples came up early at StackOverflow, but was subsequently closed as off-topic.
An illustration is finding a shortest path between two specified points by "imbedding" that problem in finding all shortest paths from one point, Dijkstra's algorithm.
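To make the imbedding concrete, here is a minimal sketch of Dijkstra's algorithm (the function name and graph encoding are my own choices): it computes shortest distances from one source to *every* vertex, and the single-pair answer is just one entry of that table.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every reachable vertex.

    graph: dict mapping vertex -> dict of neighbor -> edge weight (weights >= 0).
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path to u was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
dist = dijkstra(graph, "a")
# The single-pair problem a -> c is solved as a by-product of the
# more general single-source problem:
assert dist == {"a": 0, "b": 1, "c": 3}
```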
-
Generalization comes up a lot when doing induction. For example,
$$\forall n ~~ \sum_{k=0}^n 2^{-k} \le 2$$
is difficult to prove directly using induction on $n$. However, if you generalize to a stronger statement:
$$\forall n ~~ \sum_{k=0}^n 2^{-k} \le 2 - 2^{-n}$$
Then the inductive step may be verified directly; reading the chain below from top to bottom, each line is equivalent to the next, and the last line is the inductive hypothesis:
$$\sum_{k=0}^{n+1} 2^{-k} \le 2 - 2^{-n - 1}$$ $$\sum_{k=0}^n 2^{-k} + 2^{-n-1} \le 2 - 2^{-n - 1}$$ $$\sum_{k=0}^n 2^{-k}\le 2 - 2^{-n}$$
Obviously you could see that it is a geometric series, but that is a generalization also.
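The strengthened statement can be spot-checked in exact arithmetic (a throwaway sketch using Python's `fractions`; in fact the geometric-series formula gives equality, $\sum_{k=0}^n 2^{-k} = 2 - 2^{-n}$):

```python
from fractions import Fraction

for n in range(50):
    s = sum(Fraction(1, 2 ** k) for k in range(n + 1))
    assert s == 2 - Fraction(1, 2 ** n)  # the strengthened bound holds with equality
    assert s <= 2                        # hence the original statement
```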
problems that become easier when you formulate them in a more general (or ambitious) form
The potential difficulty of a generalization isn't the only disadvantage. If you disprove a generalization, then you haven't disproven the original theorem. In that respect, a generalization effectively forces you to pick sides in the investigation of a theorem.
-
The most spectacular example I have seen is this one:
Suppose A is an $n\times n$ matrix with eigenvalues $\lambda_1$, ..., $\lambda_n$, including each eigenvalue according to its multiplicity. Then $A^2$ has eigenvalues $\lambda_1^2$, ..., $\lambda_n^2$ including multiplicity.
To prove this is in fact very very hard. (It's easy to show that $\lambda_1^2$, ..., $\lambda_n^2$ are all eigenvalues of $A^2$ by considering their eigenvectors, but unless the dimensions of the eigenspaces match the multiplicities you're stuck.)
However, the proof of the following statement is actually perfectly possible using elementary arguments (albeit clever arguments):
Suppose A is an $n\times n$ matrix with eigenvalues $\lambda_1$, ..., $\lambda_n$, including each eigenvalue according to its multiplicity. Then for any polynomial $g(x)$, $g(A)$ has eigenvalues $g(\lambda_1)$, ..., $g(\lambda_n)$ including multiplicity.
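A numerical illustration of the general statement (a sketch assuming NumPy; the triangular test matrix is my own choice, picked so its eigenvalues — including a repeated, defective one — can be read off the diagonal):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 3.0]])  # upper triangular: eigenvalues 2, 2, 3 (with multiplicity)

def g(X):
    # the polynomial g(x) = x^2 - 4x + 1, applied to a matrix or elementwise to scalars
    return X @ X - 4 * X + np.eye(3) if X.ndim == 2 else X * X - 4 * X + 1

eig_gA = np.sort(np.linalg.eigvals(g(A)).real)
g_eigA = np.sort(g(np.array([2.0, 2.0, 3.0])))
# g(A) has eigenvalues g(2), g(2), g(3) = -3, -3, -2, multiplicities included
assert np.allclose(eig_gA, g_eigA, atol=1e-6)
```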
-
A nice example appeared on this web site today: Every prime number $p\ge 5$ has $24\mid p^2-1$ .
As posed, the problem sounds like it might be difficult. But it is very easy to show the more general result that every $n$ of the form $6k\pm 1$ has the required property.
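The general claim is easy to confirm by brute force (a small sketch; the underlying reason is visible in the factorization $n^2-1=(n-1)(n+1)$: for $n = 6k \pm 1$ the two factors are consecutive even numbers, one of them divisible by $4$, and since $3 \nmid n$, one of $n-1, n+1$ is divisible by $3$):

```python
# Every n of the form 6k +/- 1 satisfies 24 | n^2 - 1.
for k in range(1, 1000):
    for n in (6 * k - 1, 6 * k + 1):
        assert (n * n - 1) % 24 == 0

# In particular every prime p >= 5 is of the form 6k +/- 1, e.g.:
for p in (5, 7, 11, 13, 17, 19, 23, 29):
    assert (p * p - 1) % 24 == 0
```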
-
# For a prime $p$, determine the number of positive integers whose greatest proper divisor is $p$
I'm having a bit of difficulty writing a graceful proof for the following problem:
For a prime $p$, determine the number of positive integers whose greatest proper divisor is $p$.
Let $A$ be the set of positive integers whose greatest proper divisor is $p$. I will show that $A=\{\alpha p\,|\,\alpha \;\text{prime},\; \alpha \leq p\}$ so that $|A|=\pi(p)$, the number of primes less than or equal to $p$.
Assume that $n=\alpha p$ for some prime $\alpha \leq p$. The factors of $n$ are $1, \alpha, p, \alpha p$. (In the case that $\alpha =p$, the factors of $n$ are $1,p, p^2$.) The greatest proper divisor of $n$ therefore is $p$ and so $n\in A$.
Conversely, for any number in $A$, clearly we have that $p$ is the greatest prime dividing it. Moreover, any three primes dividing it would contradict $p$ being the greatest divisor (this can be seen in the following way: if $\alpha_1\neq \alpha_2$ are two primes dividing the number, neither of which is $p$, then we have $\alpha_1<p<\alpha_1p<\alpha_1\alpha_2p$ from which we infer that $p$ is not the greatest proper divisor.) Therefore, a number in $A$ must have at most two prime factors, one being $p$. (We can rule out that a number in $A$ has exactly the one prime factor, $p$, since it has no proper divisors.) Hence, $A\subset \{\alpha p\,|\,\alpha \;\text{prime},\; \alpha \leq p\}$.
Perhaps there's a more elegant proof, but I'm concerned with my writing style. Can anyone help me rephrase my argument in a smoother way or critique it?
edit: incorporated the changes suggested by Henning.
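The characterization $A=\{\alpha p\,|\,\alpha \;\text{prime},\; \alpha \leq p\}$, and hence $|A| = \pi(p)$, can be sanity-checked by brute force (a small sketch; the helper names are my own, and the search range $n \le p^2$ suffices since any $n \in A$ equals $p$ times its smallest prime factor, which is at most $p$):

```python
def greatest_proper_divisor(n):
    # n divided by its smallest prime factor
    spf = next(d for d in range(2, n + 1) if n % d == 0)
    return n // spf

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in (2, 3, 5, 7, 11, 13):
    A = [n for n in range(2, p * p + 1) if greatest_proper_divisor(n) == p]
    pi_p = sum(1 for q in range(2, p + 1) if is_prime(q))
    assert len(A) == pi_p
    assert A == [q * p for q in range(2, p + 1) if is_prime(q)]
```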
-
Isn't that what I did in the first paragraph? – sasha Oct 5 '11 at 22:48
perhaps you did – Henry Oct 5 '11 at 22:54
Looks sound and direct to me. There is only minor copy-editing to do, such as:
• The condition $2\le\alpha$ is redundant; it is already implied by $\alpha$ being prime.
• The part "Let $n\in\{\alpha p\,|\,\alpha \;\text{prime},\; 2\leq \alpha \leq p\}$. Then $n=\alpha p$ for some prime $\alpha$, $2\leq \alpha \leq p$" is probably more verbose than you need to be. I would just write, "Assume that $n=\alpha p$ for some prime $\alpha \leq p$."
• To make the structure of the proof more explicit, you could write "Conversely," rather than "Now," at the beginning of the third paragraph. This tells the reader that you have now finished something and is proceeding to the opposite direction of what you just proved.
• Calling the other prime $q$ would be somewhat more conventional than $\alpha$.
-
• Great, thank you for the suggestions. Last thing--if you could judge from this one post, would you say that my mathematical writing is verbose overall? Definitely would like to change that... – sasha Oct 6 '11 at 7:49
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/70177/for-a-prime-p-determine-the-number-of-positive-integers-whose-greatest-proper",
"openwebmath_score": 0.9306151866912842,
"openwebmath_perplexity": 188.67262868551668,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806523850542,
"lm_q2_score": 0.8740772450055544,
"lm_q1q2_score": 0.8571032351424774
} |
# Math Help - trig integration
1. ## trig integration
After I complete $\int \frac{\sqrt{x^2-9}}{x}\,dx$ by sec substitution I get
$3\sec^{-1}\left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$.
But this doesn't match the calculator's answer:
$3\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
After differentiating the result my answer is different as well... I thought the answers we got would be equivalent though...
Then the second question is $f(x) = 3$. Find the equation.
2. Originally Posted by DivideBy0
After I complete $\int \frac{\sqrt{x^2-9}}{3}\,dx$ by sec substitution I get
$3\sec^{-1}\left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$.
But this doesn't match the calculator's answer:
$3\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
After differentiating the result my answer is different as well... I thought the answers we got would be equivalent though...
Then the second question is $f(x) = 3$. Find the equation.
I guess the right integral is: $\int \frac{\sqrt{x^2-9}}{x}\,dx$
3. Woops, sorry, you're right that's what I meant
4. Hello, DivideBy0!
After I complete $\int \frac{\sqrt{x^2-9}}{x}\,dx$ by sec substitution I get: . $3\sec^{-1}\! \left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$.
But this doesn't match the calculator's answer: $3\tan^{-1}\!\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
The answers are equivalent.
Let $\theta \:=\:\sec^{-1}\!\left(\frac{x}{3}\right)$
Then we have: . $\sec\theta \:=\:\frac{x}{3} \:=\:\frac{hyp}{adj}$
$\theta$ is in a right triangle with: $adj = 3,\;hyp = x$
. . Using Pythagoras: . $opp = \sqrt{x^2-9}$
So: . $\tan\theta \:=\:\frac{\sqrt{x^2-9}}{3}$
Hence:. . $\theta \:=\:\tan^{-1}\!\left(\frac{\sqrt{x^2-9}}{3}\right)$
See? . . . . . $\sec^{-1}\!\left(\frac{x}{3}\right) \;=\;\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right)$
5. Originally Posted by DivideBy0
After I complete $\int \frac{\sqrt{x^2-9}}{x}\,dx$ by sec substitution I get
$3\sec^{-1}\left(\frac{x}{3}\right)-\sqrt{x^2-3}+C$.
But this doesn't match the calculator's answer:
$3\tan^{-1}\left(\frac{\sqrt{x^2-9}}{3}\right) - \sqrt{x^2-3}+C$
After differentiating the result my answer is different as well... I thought the answers we got would be equivalent though...
Then the second question is $f(x) = 3$. Find the equation.
the answer is
$-3\sec^{-1}\left(\frac{x}{3}\right)+\sqrt{x^2-9}+C$.
or
$-3\cos^{-1}\left(\frac{3}{x}\right)+\sqrt{x^2-9}+C$.
6. Thanks Soroban for clearing it up a bit more... but when I differentiated on the calculator I still got
$\frac{d}{dx}\left(3\sec^{-1}(\frac{x}{3})-\sqrt{x^2-3}\right)=\frac{9|\frac{1}{x}|}{\sqrt{x^2-9}}-\frac{x}{\sqrt{x^2-3}}$
But the original expression obviously has no absolute value signs.
7. Originally Posted by DivideBy0
$\int \frac{\sqrt{x^2-9}}{x}\,dx$
You don't even need trig. sub. - this only requires a simple substitution.
Step #1: The make-up.
$\int \frac{\sqrt{x^2-9}}{x}\,dx = \int \frac{\sqrt{x^2-9}\cdot x}{x^2}\,dx.$
Step #2: The substitution.
$u^2 = x^2 - 9 \implies u\,du = x\,dx,$
$\int \frac{\sqrt{x^2-9}}{x}\,dx = \int \frac{u^2}{u^2+9}\,du.$
Step #3: Simple trick & mission almost-accomplished.
$\int \frac{u^2}{u^2+9}\,du = \int du - 9\int \frac{1}{u^2+9}\,du = u - 3\arctan \frac{u}{3}+k.$
Step #4: Back substitute.
$\int \frac{\sqrt{x^2-9}}{x}\,dx = \sqrt{x^2-9} - 3\arctan \frac{\sqrt{x^2-9}}{3} + k.$
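As a final sanity check, one can verify numerically that this antiderivative differentiates back to the integrand, and that Soroban's $\sec^{-1}$/$\arctan$ identity holds for $x>3$ (a sketch using only the standard library; note that the $\sqrt{x^2-3}$ appearing earlier in the thread is evidently a typo for $\sqrt{x^2-9}$):

```python
import math

def integrand(x):
    return math.sqrt(x * x - 9) / x

def antiderivative(x):
    u = math.sqrt(x * x - 9)
    return u - 3 * math.atan(u / 3)

h = 1e-6
for x in (3.5, 4.0, 5.0, 10.0):
    # central-difference derivative of the antiderivative matches the integrand
    numeric_derivative = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(x)) < 1e-5
    # sec^-1(x/3) = acos(3/x) = atan(sqrt(x^2-9)/3) for x > 3
    assert abs(math.acos(3 / x) - math.atan(math.sqrt(x * x - 9) / 3)) < 1e-12
```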
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/24574-trig-integration.html",
"openwebmath_score": 0.957465648651123,
"openwebmath_perplexity": 2682.5602332306307,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806523850543,
"lm_q2_score": 0.87407724336544,
"lm_q1q2_score": 0.857103233534213
} |
# Ways of selecting $3$ balls out of $9$ balls if at least one black ball is to be selected
A box contains two white, three black, and four red balls. In how many ways can three balls be drawn from the box if at least one black ball is to be included in the draw.
The answer given is $$64$$. I attempted the problem with two different approaches.
My two different attempts:
Attempt #1
Number of ways in which any $$3$$ balls can be drawn out of $$9$$ balls $$\binom{9}{3}=84$$ Number of ways in which no black balls are drawn out of $$9$$ balls (choosing $$3$$ balls from the remaining $$6$$ balls) $$\binom{6}{3}=20$$ Thus, the number of ways of choosing at least $$1$$ black ball is $$84-20=64$$
Attempt #2
Number of ways of drawing $$1$$ black ball out of $$3$$ black balls $$\binom{3}{1}=3$$ Now, we have to draw two more balls, we can choose those balls from the $$8$$ remaining balls $$\binom{8}{2}=28$$ Since both the above events are associated with each other, by fundamental principle of counting, the number of ways of drawing at least one black ball out of the $$9$$ balls is $$3\times28=84$$
I think my second attempt should also be right. Please explain what I'm doing wrong with my second attempt.
The second attempt counts the combination $$\{B_1,B_2,B_3\}$$ three times as $$(B_1, \{B_2, B_3\})$$, $$(B_2, \{B_3, B_1\})$$ and $$(B_3, \{B_1, B_2\})$$.
N.B: $$(B_i, \{B_j, B_k\})$$ means pick $$B_i$$ first, followed by the combination $$\{B_j, B_k\}$$.
We have the basic product rule
$$|A\times B| = \#\{(a,b) \mid a \in A,\, b \in B\} = |A| \times |B|$$
for cardinals which holds for any sets $$A$$ and $$B$$.
By writing $$3 \times 28$$, you are actually counting $$\{B_1, B_2, B_3\} \times \{B_2,B_3, \dots\}$$, which doesn't answer the problem.
In , $$(a,b)$$ is defined as $$\{a,\{a,b\}\}$$, so order matters.
• Would you please elaborate the answer? I get that the second attempt is wrong, but I'm unable to understand it clearly. – Azelf Dec 7 '18 at 20:14
• @Azelf My original notation would be clear enough. I've edited my answer to make it even clearer. – GNUSupporter 8964民主女神 地下教會 Dec 7 '18 at 20:57
A selection of three balls that includes at least one black ball has either one black ball and two of the other six balls, two blacks balls and one of the other six balls, or three black balls and none of the other six balls. Therefore, the number of ways of selecting at least one black ball when three balls are selected from two white, three black, and four red balls is $$\binom{3}{1}\binom{6}{2} + \binom{3}{2}\binom{6}{1} + \binom{3}{3}\binom{6}{0} = 45 + 18 + 1 = 64$$
You counted each case in which $$k$$ black balls were selected $$k$$ times, once for each of the $$k$$ ways you could have designated one of those black balls as the designated black ball. Notice that $$\color{red}{\binom{1}{1}}\binom{3}{1}\binom{6}{2} + \color{red}{\binom{2}{1}}\binom{3}{2}\binom{6}{1} + \color{red}{\binom{3}{1}}\binom{3}{3}\binom{6}{0} = 45 + 36 + 3 = 84$$ To illustrate, place numbers on the black balls. If you select black balls $$b_1$$ and $$b_2$$ and a red ball, you count this selection twice: $$\begin{array}{c c} \text{designated black ball} & \text{additional balls}\\ b_1 & b_2, r\\ b_2 & b_1, r \end{array}$$ If you select all three black balls, you count this selection three times: $$\begin{array}{c c} \text{designated black ball} & \text{additional balls}\\ b_1 & b_2, b_3\\ b_2 & b_1, b_3\\ b_3 & b_1, b_2 \end{array}$$
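Both counts can be confirmed by brute-force enumeration (a sketch; the balls are distinguished by index, with indices 2–4 taken to be the black ones):

```python
from itertools import combinations

balls = ["W", "W", "B", "B", "B", "R", "R", "R", "R"]  # indices 2, 3, 4 are black

# Correct count: unordered 3-subsets containing at least one black ball.
at_least_one_black = sum(
    1 for c in combinations(range(9), 3) if any(balls[i] == "B" for i in c)
)
assert at_least_one_black == 64

# Attempt #2 counts (designated black ball, remaining pair) pairs instead,
# which overcounts each selection with k black balls exactly k times.
designated_pairs = sum(
    1
    for b in range(2, 5)  # choose the designated black ball
    for _ in combinations([i for i in range(9) if i != b], 2)
)
assert designated_pairs == 3 * 28 == 84
```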
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3030285/ways-of-selecting-3-balls-out-of-9-balls-if-at-least-one-black-ball-is-to-be",
"openwebmath_score": 0.8001653552055359,
"openwebmath_perplexity": 154.21553549898297,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9805806518175514,
"lm_q2_score": 0.8740772351648677,
"lm_q1q2_score": 0.8571032249968491
} |
# Is there a combinatorial way to see the link between the beta and gamma functions?
The Wikipedia page on the beta function gives a simple formula for it in terms of the gamma function. Using that and the fact that $\Gamma(n+1)=n!$, I can prove the following formula: $$\begin{eqnarray*} \frac{a!b!}{(a+b+1)!} & = & \frac{\Gamma(a+1)\Gamma(b+1)}{\Gamma(a+1+b+1)}\\ & = & B(a+1,b+1)\\ & = & \int_{0}^{1}t^{a}(1-t)^{b}dt\\ & = & \int_{0}^{1}t^{a}\sum_{i=0}^{b}\binom{b}{i}(-t)^{i}dt\\ & = & \int_{0}^{1}\sum_{i=0}^{b}\binom{b}{i}(-1)^{i}t^{a+i}dt\\ & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\int_{0}^{1}t^{a+i}dt\\ & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\left[\frac{t^{a+i+1}}{a+i+1}\right]_{t=0}^{1}\\ & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\\ b! & = & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(a+b+1)!}{a!(a+i+1)} \end{eqnarray*}$$ This last formula involves only natural numbers and operations familiar in combinatorics, and it feels very much as if there should be a combinatoric proof, but I've been trying for a while and can't see it. I can prove it in the case $a=0$: $$\begin{eqnarray*} & & \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(b+1)!}{0!(i+1)}\\ & = & \sum_{i=0}^{b}(-1)^{i}\frac{b!(b+1)!}{i!(b-i)!(i+1)}\\ & = & b!\sum_{i=0}^{b}(-1)^{i}\frac{(b+1)!}{(i+1)!(b-i)!}\\ & = & b!\sum_{i=0}^{b}(-1)^{i}\binom{b+\text{1}}{i+1}\\ & = & b!\left(1-\sum_{i=0}^{b+1}(-1)^{i}\binom{b+\text{1}}{i}\right)\\ & = & b! \end{eqnarray*}$$ Can anyone see how to prove it for arbitrary $a$? Thanks!
• The usual interpretation of "combinatoric proof" (that I'm accustomed to) is to show that the beta function counts something; what exactly do you mean by "combinatoric proof" here? Oct 12, 2011 at 17:15
• In any event: it might be more interesting to establish this relationship instead... Oct 12, 2011 at 17:18
• I'm with @J.M. - your derivation for $a=0$ doesn't really look like a combinatorial proof, as you're using only symbolic manipulation instead of counting and combining objects.
– anon
Oct 12, 2011 at 21:03
Here's a combinatorial argument for $a!\, b! = \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(a+b+1)!}{(a+i+1)}$, which is just a slight rewrite of the identity you want to show.
Suppose you have $a$ red balls numbered $1$ through $a$, $b$ blue balls numbered $1$ through $b$, and one black ball.
Question: How many permutations of the balls have all the red balls first, then the black ball, and then the blue balls?
Answer 1: $a! \,b!$. There are $a!$ ways to choose the red balls to go in the first $a$ slots, $b!$ ways to choose the blue balls to go in the last $b$ slots, and $1$ way for the black ball to go in slot $a+1$.
Answer 2: Let $A$ be the set of all permutations in which the black ball appears after all the red balls (irrespective of where the blue balls go). Let $B_i$ be the subset of $A$ such that the black ball appears after blue ball $i$. Then the number of permutations we're after is also given by $|A| - \left|\bigcup_{i=1}^b B_i\right|$. Since the probability that the black ball appears last of any particular $a+i+1$ balls is $\frac{1}{a+i+1}$, and there are $(a+b+1)!$ total ways to arrange the balls, by the principle of inclusion-exclusion we get $$\frac{(a+b+1)!}{a+1} - \sum_{i=1}^{b}\binom{b}{i}(-1)^{i+1}\frac{(a+b+1)!}{(a+i+1)} = \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{(a+b+1)!}{(a+i+1)}.$$
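The identity itself can be verified exactly in rational arithmetic for small $a, b$ (a throwaway sketch; `math.comb` requires Python 3.8+):

```python
from fractions import Fraction
from math import comb, factorial

# Check a!b!/(a+b+1)! == sum_{i=0}^{b} C(b,i) (-1)^i / (a+i+1) exactly.
for a in range(8):
    for b in range(8):
        lhs = Fraction(factorial(a) * factorial(b), factorial(a + b + 1))
        rhs = sum(
            Fraction((-1) ** i * comb(b, i), a + i + 1) for i in range(b + 1)
        )
        assert lhs == rhs
```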
• Fantastic! How did you find this? Oct 12, 2011 at 22:17
• @Steven: I thought about it for way too long. :) More seriously, an alternating binomial sum smells like inclusion-exclusion to me. I also thought I could generalize my answer to a similar question, and that turned out to work, although it took a while to get the formulation right. I kept trying to apply inclusion-exclusion to the full set of permutations, and it finally hit me that I only needed to consider subsets of the set I call $A$. And thanks! Oct 12, 2011 at 22:30
• Nicely done indeed!
– robjohn
Oct 13, 2011 at 0:37
• @robjohn: And thanks for the edit. Not sure how I managed to leave that out! :) Oct 13, 2011 at 1:32
• Beautiful, this is exactly the kind of answer I was hoping for, thank you! Oct 13, 2011 at 7:04
Using partial fractions, we have that $$\frac{1}{(a+1)(a+2)\dots(a+b+1)}=\frac{A_1}{a+1}+\frac{A_2}{a+2}+\dots+\frac{A_{b+1}}{a+b+1}\tag{1}$$ Use the Heaviside Method; multiply $(1)$ by $(a+k)$ and set $a=-k$ to solve $(1)$ for $A_k$: $$A_k=\frac{(-1)^{k-1}}{(k-1)!(b-k+1)!}=\frac{(-1)^{k-1}}{b!}\binom{b}{k-1}\tag{2}$$ Plugging $(2)$ into $(1)$, yields $$\frac{a!}{(a+b+1)!}=\sum_{k=1}^{b+1}\frac{(-1)^{k-1}}{b!}\binom{b}{k-1}\frac{1}{a+k}\tag{3}$$ Multiplying $(3)$ by $b!$ and reindexing, gives us $$\frac{a!b!}{(a+b+1)!}=\sum_{k=0}^{b}(-1)^k\binom{b}{k}\frac{1}{a+k+1}\tag{4}$$ and $(4)$ is your identity.
Update: Starting from the basic binomial identity $$(1-x)^b=\sum_{k=0}^b(-1)^k\binom{b}{k}x^k\tag{5}$$ multiply both sides of $(5)$ by $x^a$ and integrate from $0$ to $1$: $$B(a+1,b+1)=\sum_{k=0}^b(-1)^k\binom{b}{k}\frac{1}{a+k+1}\tag{6}$$
• FYI: This argument appears on pages 188-189 of Concrete Mathematics, 2nd edition, where it is discussed in the context of the $n$th forward difference formula. Oct 12, 2011 at 20:50
• This identity is one of my favorite uses of partial fractions and it turns up when using Euler's Transform for series acceleration.
– robjohn
Oct 12, 2011 at 21:00
• @Mike: not surprising since it computes the $b^{th}$ forward difference of $\frac{1}{a+1}$. Thanks for the reference!
– robjohn
Oct 12, 2011 at 21:02
Seven years later I found another way to attack this. Define $$f(b, a) = \frac{a!b!}{(a+b+1)!}$$ and $$h(b, a) = \sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}$$. To connect the two, we define $$g$$ such that $$g(0, a) = \frac{1}{a + 1}$$ and $$g(b + 1, a) = g(b, a) - g(b, a + 1)$$ and prove by induction in $$b$$ that $$f = g = h$$. In each case the base case is straightforward and we consider only the inductive step.
$$\begin{eqnarray*} & & g(b + 1, a) \\ & = & g(b, a) - g(b, a + 1) \\ & = & f(b, a) - f(b, a + 1) \\ & = & \frac{a!b!}{(a+b+1)!} - \frac{(a+1)!b!}{(a+b+2)!} \\ & = & \frac{a!b!(a + b + 2)}{(a+b+2)!} - \frac{a!b!(a+1)}{(a+b+2)!} \\ & = & \frac{a!b!(b+1)}{(a+b+2)!} \\ & = & \frac{a!(b+1)!}{(a+b+2)!} \\ & = & f(b+1, a)\\ \end{eqnarray*}$$
$$\begin{eqnarray*} & & g(b + 1, a) \\ & = & g(b, a) - g(b, a + 1) \\ & = & h(b, a) - h(b, a + 1) \\ & = & \left(\sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\right) - \left(\sum_{i=0}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+2}\right) \\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\right) - \left(\sum_{i=0}^{b-1}\binom{b}{i}(-1)^{i}\frac{1}{a+i+2}\right) - (-1)^{b}\frac{1}{a+b+2}\\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\binom{b}{i}(-1)^{i}\frac{1}{a+i+1}\right) + \left(\sum_{i=1}^{b}\binom{b}{i-1}(-1)^{i}\frac{1}{a+i+1}\right) + (-1)^{b+1}\frac{1}{a+b+2}\\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\left(\binom{b}{i} + \binom{b}{i-1}\right)(-1)^{i}\frac{1}{a+i+1}\right) + (-1)^{b+1}\frac{1}{a+b+2}\\ & = & \frac{1}{a+1} + \left(\sum_{i=1}^{b}\binom{b+1}{i}(-1)^{i}\frac{1}{a+i+1}\right) + (-1)^{b+1}\frac{1}{a+b+2}\\ & = & \sum_{i=0}^{b+1}\binom{b+1}{i}(-1)^{i}\frac{1}{a+i+1} \\ & = & h(b + 1, a) \\ \end{eqnarray*}$$
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/72067/is-there-a-combinatorial-way-to-see-the-link-between-the-beta-and-gamma-function",
"openwebmath_score": 0.8793419003486633,
"openwebmath_perplexity": 176.87786336871503,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9905874123575265,
"lm_q2_score": 0.8652240964782012,
"lm_q1q2_score": 0.8570800988397201
} |
# Taxonomy of polygons
I've written a tree-like layout to help myself remember which polygons are sub-types of others, because I always get confused. I was just wondering if this is right:
|quadrilateral
    |parallelogram
        |rectangle
            |square
            |oblong
        |rhomboid
    |kite (corrected after rschwieb's answer, a rhombus is a kite)
        |rhombus
            |square
    |trapezoid (AmE) / trapezium (BrE)
So a square is a rhombus and a parallelogram.
Also, I know that there are two definitions of "trapezoid." Under the inclusive definition "trapezoid" is immediately under "quadrilateral" in the tree and above parallelogram and kite. Under this definition all squares are trapezoids.
Is my tree correct, at least ignoring the difference in the trapezoid definition difference?
Edit: Thanks to rschwieb for helping me realise that a rhombus is a kite. There is also a nice Euler diagram Wikipedia
You need to see this post I wrote some time ago.
In short, the education system (at least in the US) has confused this issue and made it harder than it has to be.
There is a very natural hierarchy the depends on logical connections between quadrilaterals, and there is really no benefit to using the “exclusive version” of definitions.
I would argue for this picture for the main characters:
Actually there is a little puzzle where you can figure out a new node to insert between "quadrilateral" and "kite" which also connects to "parallelogram," and I have never seen this shape mentioned in a textbook. It's just not common enough to encounter in normal life.
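Under the inclusive definitions the hierarchy is a directed acyclic graph rather than a tree (a square sits below both rectangle and rhombus). Here is a small sketch of that idea; the edge list is my own encoding of the inclusive chart above (in particular, a rhombus is a kite and a parallelogram is a trapezoid), and it deliberately omits the debatable rectangle–isosceles-trapezoid link discussed in the comments:

```python
parents = {  # immediate supertypes, inclusive definitions
    "square": ["rectangle", "rhombus"],
    "rectangle": ["parallelogram"],
    "oblong": ["rectangle"],
    "rhomboid": ["parallelogram"],
    "rhombus": ["parallelogram", "kite"],
    "parallelogram": ["trapezoid"],
    "kite": ["quadrilateral"],
    "trapezoid": ["quadrilateral"],
    "quadrilateral": [],
}

def is_a(shape, ancestor):
    """True if every `shape` is an `ancestor` under the inclusive definitions."""
    if shape == ancestor:
        return True
    return any(is_a(p, ancestor) for p in parents[shape])

assert is_a("square", "rhombus") and is_a("square", "parallelogram")
assert is_a("square", "kite")         # via rhombus
assert is_a("square", "trapezoid")    # inclusive trapezoid definition
assert not is_a("kite", "parallelogram")
```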
• I see in your chart you've used the inclusive definition of trapezoid, that's all good. I see the difference between this chart and the "frankenstein" chart you referred to is that the kite is in a separate area on its own, but in yours it traces down to rhombus. In yours it shows a rhombus as being a kite. This seems unintuitive with my everyday notion of a kite, but Wikipedia says: "If all four sides of a kite have the same length (that is, if the kite is equilateral), it must be a rhombus." Also, the other chart doesn't link the kite to the rhombus at all. I think this chart is easier. – Zebrafish Apr 21 '18 at 2:21
• There's just one thing I see wrong with it, a rectangle is an isosceles trapezoid? Are those two supposed to be connected with a line over on the right? – Zebrafish Apr 21 '18 at 2:49
• "Rectangles and squares are usually considered to be special cases of isosceles trapezoids though some sources would exclude them." Wow, there are a few a discrepancies in definitions, that's not helping. – Zebrafish Apr 21 '18 at 2:52
• @Zebrafish as I mentioned “exclusive definitions” aren’t as useful, they’re harder to state, and make proving things less convenient. – rschwieb Apr 21 '18 at 3:41
• @Zebrafish yes, I would cal a rectangle an isosoles trapezoid. – rschwieb Apr 21 '18 at 3:43
Can't a kite also be a parallelogram, in the case where all sides are equal?
That of course depends on your definition of kite... I've rarely seen the term used at all. You can exclude that case specifically and your tree is then okay.
Wikipedia's Kite (geometry) article seems to include that in their special cases.
EDIT: In that special case, it can also be a rhombus or a square
• A kite that’s also a parallelogram is a rhombus. – rschwieb Apr 21 '18 at 3:45
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2746757/taxonomy-of-polygons",
"openwebmath_score": 0.7078220248222351,
"openwebmath_perplexity": 830.8944931659497,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9626731115849662,
"lm_q2_score": 0.89029422102812,
"lm_q1q2_score": 0.8570623079832539
} |
# Mathematical Induction PART 2
In Part 1
It shows that
Step 1 is Show it is true for n=1
Step 2 is Show that if n=k is true then n=k+1 is also true
so How to Do it?
Step 1 : *prove* it is true for n=1 (normally)
Step 2 : done in this way normally we can prove it out..
First ~ Assume it is true for n=k
Second ~ Prove it is true for n=k+1 (normally we can use the n=k case as fact )
(Here n and k stand for positive integers, and the statement must hold for every positive integer.)
EXAMPLE
PROVE 1 + 3 + 5 + ... + (2n-1) = n^2
First : Show it is true for n=1
from LEFT : 1+3+5+.....+(2(1)-1) = 1
from RIGHT n^2 = (1^2) =1 SO 1 = 1^2 is True
SECOND : Assume it is true for n=k
1 + 3 + 5 + ... + (2k-1) = k^2 is True
(prove it by yourself :) write down your steps in comment )
THIRD : prove it is true for k+1
1 + 3 + 5 + ... + (2k-1) + (2(k+1)-1) = (k+1)^2 ... ? (prove it by yourself :) write down your steps in comment )
We know that 1 + 3 + 5 + ... + (2k-1) = k^2 (the assumption above), so we can do a replacement for all but the last term:
k^2 + (2(k+1)-1) = (k+1)^2
THEN expanding all terms:
LEFT : k^2 + 2k + 2 - 1 = k^2 + 2k+1
NEXT simplifying :
RIGHT :k^2 + 2k + 1 = k^2 + 2k + 1
LEFT and RIGHT are the same! So it is true, it is proven.
THEREFORE :
1 + 3 + 5 + ... + (2(k+1)-1) = (k+1)^2 is TRUE!!!!!!
Mathematical Induction IS done !!! :)
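The statement just proved is also easy to spot-check by machine for many values of n (a throwaway sketch):

```python
# Sum of the first n odd numbers equals n^2.
for n in range(1, 500):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n * n
```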
Note by Nicole Ling
5 years, 4 months ago
## Linear Algebra and Its Applications, Exercise 2.2.12
Exercise 2.2.12. What is a 2 by 3 system of equations $Ax = b$ that has the following general solution?
$x = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$
Answer: The general solution above is the sum of a particular solution and a homogeneous solution, where
$x_{particular} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$
and
$x_{homogeneous} = w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$
Since $w$ is the only variable referenced in the homogeneous solution it must be the only free variable, with $u$ and $v$ being basic. Since $u$ is basic we must have a pivot in column 1, and since $v$ is basic we must have a second pivot in column 2. After performing elimination on $A$ the resulting echelon matrix $U$ must therefore have the form
$U = \begin{bmatrix} *&*&* \\ 0&*&* \end{bmatrix}$
To simplify solving the problem we can assume that $A$ also has this form; in other words, we assume that $A$ is already in echelon form and thus we don’t need to carry out elimination. The matrix $A$ then has the form
$A = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix}$
where $a_{11}$ and $a_{22}$ are nonzero (because they are pivots).
We then have
$Ax_{homogeneous} = \begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix} w \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = 0$
If we assume that $w$ is 1 and express the right-hand side in matrix form this then becomes
$\begin{bmatrix} a_{11}&a_{12}&a_{13} \\ 0&a_{22}&a_{23} \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
or (expressed as a system of equations)
$\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}a_{11}&+&2a_{12}&+&a_{13}&=&0 \\ &&2a_{22}&+&a_{23}&=&0 \end{array}$
The pivot $a_{11}$ must be nonzero, and we arbitrarily assume that $a_{11} = 1$. We can then satisfy the first equation by assigning $a_{12} = 0$ and $a_{13} = -1$. The pivot $a_{22}$ must also be nonzero, and we arbitrarily assume that $a_{22} = 1$ as well. We can then satisfy the second equation by assigning $a_{23} = -2$. Our proposed value of $A$ is then
$A = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix}$
so that we have
$Ax_{homogeneous} = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
as required.
We next turn to the general system $Ax = b$. We now have a value for $A$, and we were given the value of the particular solution. We can multiply the two to calculate the value of $b$:
$b = Ax_{particular} = \begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
This gives us the following as an example 2 by 3 system that has the general solution specified above:
$\begin{bmatrix} 1&0&-1 \\ 0&1&-2 \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$
or
$\setlength\arraycolsep{0.2em}\begin{array}{rcrcrcl}u&&&-&w&=&1 \\ &&v&-&2w&=&1 \end{array}$
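As a sanity check (my addition, using NumPy rather than anything in Strang's text), we can verify numerically that every vector of the form $x_{particular} + w\,x_{homogeneous}$ solves this system:

```python
# Hedged sketch: verify the derived 2 by 3 system numerically with NumPy.
import numpy as np

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -2.0]])
b = np.array([1.0, 1.0])
x_part = np.array([1.0, 1.0, 0.0])   # particular solution
x_homo = np.array([1.0, 2.0, 1.0])   # homogeneous (null-space) direction

# A annihilates the homogeneous solution ...
assert np.allclose(A @ x_homo, 0.0)
# ... and A x = b holds for every value of the free variable w.
for w in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (x_part + w * x_homo), b)
```

Since all the assertions pass, the system does have the required general solution.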
Finally, note that the solution provided for exercise 2.2.12 at the end of the book is incorrect. The right-hand side must be a 2 by 1 matrix and not a 3 by 1 matrix, so the final value of 0 in the right-hand side should not be present.
NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.
If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
### 2 Responses to Linear Algebra and Its Applications, Exercise 2.2.12
1. Daniel says:
I think that your final note is incorrect, because if you find the general solution for the system Ax=b that you found, you'll have to write the solution the way Strang does it in (3) on page 76. There are three entries in the solution because of the length of the vector x. The general solution (in Matlab notation) is x = [u; v; w] = [1+w; 1+2w; w] = [1; 1; 0] + w*[1; 2; 1], which is the general solution he proposed at the beginning of the exercise.
• hecker says:
My apologies for the delay in responding. Are you referring to my final sentence about the solution to exercise 2.2.12 given on page 476 in the back of the book? If so, I think I may have confused you. I am *not* saying that Strang wrote the general solution incorrectly in the statement of the exercise on page 79, or that Strang found an incorrect solution to the exercise.
Rather my point is as follows: In the statement of the solution on page 476, Strang shows as a solution the same 2 by 3 matrix that I derived above, multiplying the vector (u, v, w) just as I do above, representing a system of two equations in three unknowns. However, on the right-hand side Strang shows this product equal to the vector (1, 1, 0). This cannot be: since the matrix has only two rows, the multiplication would produce a vector with only two elements, not three (as in the book). Those two elements represent the right-hand sides of the corresponding system of two equations.
So the left-hand side in the solution of 2.2.12 on page 476 is correct, but its right-hand side, namely the vector (1, 1, 0), is not. Instead the right-hand side should be the vector (1, 1), as I derived above.
# What is $\frac{d(\arctan(x))}{dx}$?
Let $v= \arctan{x}$. Now I want to find $\frac{dv}{dx}$. My method is this: Rearranging yields $\tan(v) = x$ and so $dx = \sec^2(v)dv$. How do I simplify from here? Of course I could do something like $dx = \sec^2(\arctan(x))dv$ so that $\frac{dv}{dx} = \cos^2(\arctan(x))$ but I am sure a better expression exists. I am probably just missing some crucial step where we convert one of the trigonometric expressions into an expression involving $x$. Thanks in advance for any help or tips!
## 4 Answers
The derivative of $\tan v$ is $1+\tan^2 v$. It will be easier to simplify, since here $v=\arctan x$.
You may check:
$$\sec^2 v = \frac{1}{\cos^2 v} = \frac{\cos^2 v + \sin^2 v}{\cos^2 v} = 1+\tan^2 v$$
Then
$$\mathrm{d}x = (1+\tan^2 v) \ \mathrm{d}v = (1+x^2) \ \mathrm{d}v$$
And
$$\frac{\mathrm{d}v}{\mathrm{d}x}=\frac{1}{1+x^2}$$
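The result is easy to confirm numerically; here is a minimal finite-difference check (my addition, not part of the original answer):

```python
import math

# Central-difference approximation of d/dx arctan(x); illustrative helper.
def arctan_deriv_numeric(x, h=1e-6):
    return (math.atan(x + h) - math.atan(x - h)) / (2.0 * h)

# Agrees with the closed form 1/(1 + x^2) at several sample points.
for x in (-3.0, 0.0, 0.5, 10.0):
    exact = 1.0 / (1.0 + x * x)
    assert abs(arctan_deriv_numeric(x) - exact) < 1e-6
```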
Perfect thanks a lot, this is very clear. I see now that using $\cos^2(x) = \frac{1}{\tan(x)^2+1}$ will also change the expression $\cos^2(\arctan(x))$ into $\frac{1}{1+x^2}$. Thanks for the answer! – Slugger Nov 28 '13 at 15:24
Another way :
$$\frac{d\arctan x}{dx}=\lim_{h\to0}\frac{\arctan(x+h)-\arctan x}h$$
$$\displaystyle=\lim_{h\to0}\frac{\arctan\frac{x+h-x}{1+(x+h)x}}h$$
$$\displaystyle=\lim_{h\to0}\left(\frac{\arctan\frac h{1+(x+h)x}}{\frac h{1+(x+h)x}}\right)\cdot\frac1{\lim_{h\to0}\{1+(x+h)x\}}=1\cdot\frac1{1+x^2}$$
as $\displaystyle\lim_{u\to0}\frac{\arctan u}u=\lim_{v\to0}\frac v{\tan v}=\lim_{v\to0}\cos v\cdot\frac1{\lim_{v\to0}\frac{\sin v}v}=1\cdot1$
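The subtraction identity $\arctan a - \arctan b = \arctan\frac{a-b}{1+ab}$ used in the second step (valid when $1+ab>0$) can be spot-checked numerically; a small sketch of mine:

```python
import math

# Spot-check arctan(a) - arctan(b) == arctan((a - b) / (1 + a*b))
# for pairs with 1 + a*b > 0, where the identity holds without adjustment.
for a, b in ((0.5, 0.2), (1.0, 0.9), (-0.3, 0.1)):
    lhs = math.atan(a) - math.atan(b)
    rhs = math.atan((a - b) / (1.0 + a * b))
    assert abs(lhs - rhs) < 1e-12
```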
Definition of the derivative! +1. – Ahaan S. Rungta Nov 28 '13 at 15:30
Thank you for your answer! It is nice to see how the answer tackle my question in a multitude of ways! – Slugger Nov 28 '13 at 15:34
@Slugger, my pleasure. Please have a look into math.stackexchange.com/questions/579170/… – lab bhattacharjee Nov 28 '13 at 15:46
You can also use the Inverse Derivative Formula, which states that if $f(x)$ and $g(x)$ are inverse functions, we have $$g'(x) = \dfrac {1}{f'(g(x))}.$$So, if $g(x)=\arctan x$, our task is to find $g'(x)$. In that case, we have $f(x)=\tan x$, which gives us $f'(x)=\sec^2 x$, so we can substitute: \begin{align*} g'(x) &= \dfrac {1}{f'(g(x))} \\&= \dfrac {1}{\sec^2 (g(x))} \\&= \dfrac {1}{\sec^2 (\arctan x)}. \end{align*}We can find $\sec (\arctan x)$ geometrically. Consider a right triangle with legs of length $x$ and $1$ and hypotenuse of length $\sqrt{1+x^2}$. Let $\theta$ be the angle opposite to the leg of length $x$. Then, $$\sec \left( \arctan x \right) = \sec (\theta) = \sqrt {1+x^2},$$ so our answer is $$\dfrac {1}{\left( \sqrt{1+x^2} \right)^2} = \boxed {\dfrac {1}{1+x^2}}.$$
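The geometric step $\sec(\arctan x) = \sqrt{1+x^2}$ can also be confirmed numerically; a quick check (my addition):

```python
import math

# sec(arctan x) computed directly vs. the right-triangle identity sqrt(1 + x^2).
for x in (-2.0, 0.0, 0.75, 5.0):
    sec_val = 1.0 / math.cos(math.atan(x))   # sec(theta) = 1 / cos(theta)
    assert abs(sec_val - math.sqrt(1.0 + x * x)) < 1e-9
```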
Thanks you for your answer! – Slugger Nov 28 '13 at 15:35
You're welcome! :) – Ahaan S. Rungta Nov 28 '13 at 15:36
$$v=\arctan(x)\Rightarrow x=\tan v\Rightarrow x'=\frac{1}{(\tan v)'}=\frac{1}{(\frac{\sin v}{\cos v})'}=\frac{1}{\frac{1}{\cos^2 v}}=\cos^2 v=\frac{\cos^2 v}{1}$$ $$=\frac{\cos^2 v}{\cos^2 v+\sin^2 v}=\frac{\frac{\cos^2 v}{\cos^2 v}}{\frac{\cos^2 v+\sin^2 v}{\cos^2 v}}=\frac{1}{1+\tan^2 v}=\frac{1}{1+x^2}$$ i.e $$v'=(\arctan(x))'=\frac{1}{1+x^2}$$
I am not sure I follow exactly. Your second equation says $x=\tan v$ and then you follow by saying $x' =\frac{1}{(\tan v)'}$... Maybe I am missing something – Slugger Nov 28 '13 at 15:27
Sir, my solution is correct, You can also use the Inverse Derivative Formula, – Madrit Zhaku Nov 28 '13 at 15:31
Oh, it seems this is essentially what I did, but I posted a few minutes later. I didn't see your post when I posted, so it's not that I copied. Should I delete my post? – Ahaan S. Rungta Nov 28 '13 at 15:32
@MadritZhaku When somebody asks you to explain what you did, "it is correct" is not as helpful as actually explaining what you did. If you don't have the patience to elaborate, don't respond at all. Your comment came off quite rude. – Ahaan S. Rungta Nov 28 '13 at 15:33
all is well that ends well :) – Slugger Nov 28 '13 at 15:37
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/584655/what-is-fracd-arctanxdx",
"openwebmath_score": 0.9461610317230225,
"openwebmath_perplexity": 558.4447670146069,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9884918487320493,
"lm_q2_score": 0.8670357615200474,
"lm_q1q2_score": 0.8570577828217518
} |
## Eigenvalues and Eigenvectors of a 3x3 Matrix

Let $A$ be an $n \times n$ matrix. A scalar $\lambda$ is called an eigenvalue of $A$ if there is a nonzero vector $x$ such that $Ax = \lambda x$; such an $x$ is called an eigenvector of $A$ corresponding to $\lambda$. Geometrically, the eigenvectors are the vectors that $A$ merely stretches or shrinks without changing their direction, and the eigenvalue is the stretch factor. The zero vector is never an eigenvector, by definition, and a non-square matrix does not have eigenvalues.

Rearranging the eigenvalue equation gives $(A - \lambda I)x = 0$, which has a nonzero solution exactly when the coefficient matrix is singular, i.e. when $\det(A - \lambda I) = 0$. This is the characteristic equation of $A$; its left-hand side is the characteristic polynomial, whose roots are the eigenvalues. For a $3 \times 3$ matrix the characteristic polynomial is a cubic, so there are at most three distinct eigenvalues. The eigenspace associated with an eigenvalue $\lambda$ is the kernel of $A - \lambda I$: the set of all eigenvectors for $\lambda$, together with the zero vector, forms a subspace.

Some useful facts:

- The eigenvalues of a triangular matrix (upper or lower) are exactly its diagonal entries.
- The sum of the eigenvalues equals the trace of $A$, and their product equals $\det(A)$.
- Eigenvectors corresponding to distinct eigenvalues are linearly independent, and for a symmetric matrix they are orthogonal. The eigenvalues of a real symmetric (or Hermitian) matrix are always real.
- If $A$ is invertible and $Ax = \lambda x$, then $A^{-1}x = \frac{1}{\lambda}x$ (note $\lambda \neq 0$, since $\det(A) \neq 0$), so the eigenvalues of $A^{-1}$ are the reciprocals of those of $A$.
- If $A$ has $n$ linearly independent eigenvectors, it is diagonalizable: $A = PDP^{-1}$, where the columns of $P$ are the eigenvectors and $D = \mathrm{diag}[\lambda_1, \lambda_2, \ldots, \lambda_n]$. This factorization is called the eigendecomposition. Similar matrices always have the same eigenvalues, though their eigenvectors may differ.
- The eigenvalue with the largest absolute value is called the dominant eigenvalue. Raising the matrix to a high power amplifies the component along the corresponding eigenvector (the power method), but this approach only finds the dominant eigenvalue.

Eigenvalue problems occur in many areas of science and engineering, such as structural analysis and the analysis of numerical methods, and they come up often in data science. In Principal Component Analysis, for example, the eigenvectors of a covariance matrix give the directions (principal components) along which the data has maximum spread, and the eigenvalues measure the variance along those directions: one computes the eigenvectors and eigenvalues of the covariance matrix, sorts the eigenvectors by decreasing eigenvalue, keeps those with the largest eigenvalues, and uses them to project the samples onto the lower-dimensional subspace.
A real number λ is said to be an eigenvalue of a matrix A if there exists a non-zero column vector v such that A. Most of the methods on this website actually describe the programming of matrices. Just write down two generic diagonal matrices and you will see that they must. And that says, any value, lambda, that satisfies this equation for v is a non-zero vector. For now, I have original matrix in 2D array, I have eigenvalues in variables, and I have second matrix that has result of Eigenvalue*I - A (eigenvalue times matrix that has 1 on diagonal minus original matrix) So my form for now is lets say in example:-1 0 -1 v1 0-2 0 -2 v2 = 0-1 0 -1 v3 0. *XP the eigenvalues up to a 4*4 matrix can be calculated. Matrix D is the canonical form of A--a diagonal matrix with A's eigenvalues on the main diagonal. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. Eigenvectors of repeated eigenvalues. Learn to recognize a rotation-scaling matrix, and compute by how much the matrix rotates and scales. Specify the eigenvalues The eigenvalues of matrix $\mathbf{A}$ are thus $\lambda = 6$, $\lambda = 3$, and $\lambda = 7$. The normalized eigenvector for = 5 is: The three eigenvalues and eigenvectors now can be recombined to give the solution to the original 3x3 matrix as shown in Figures 8. The above equation is called the eigenvalue. That example demonstrates a very important concept in engineering and science - eigenvalues and. [As to follow the definition the zero vector i. Find all values of a which will guarantee that A has eigenvalues 0, 3, and − 3. Any value of λ for which this equation has a solution is known as an eigenvalue of the matrix A. 1 (Eigenvalue, eigenvector) Let A be a complex square matrix. com is the most convenient free online Matrix Calculator. The l =2 eigenspace for the matrix 2 4 3 4 2 1 6 2 1 4 4 3 5 is two-dimensional. Example solving for the eigenvalues of a 2x2 matrix. 
A simple online EigenSpace calculator to find the space generated by the eigen vectors of a square matrix. (3,2,4) and (0,-1,1) are eigenvectors. Below, change the columns of A and drag v to be an. a numeric or complex matrix whose spectral decomposition is to be computed. Introduction. An eigenvector associated with λ1 is a nontrivial solution~v1 to (A λ1I)~v =~0: (B. The calculator will perform symbolic calculations whenever it is possible. Finding eigenvectors of a 3x3 matrix 2. *XP the eigenvalues up to a 4*4 matrix can be calculated. The determinant will be computed by performing a Laplace expansion along the second row: The roots of the characteristic equation, are clearly λ = −1 and 3, with 3 being a double root; these are the eigenvalues of B. Eigenvalues and eigenvectors in Maple Maple has commands for calculating eigenvalues and eigenvectors of matrices. Its roots are 1 = 1+3i and 2 = 1 = 1 3i: The eigenvector corresponding to 1 is ( 1+i;1). 4 A symmetric matrix: € A. Substitute one eigenvalue λ into the equation A x = λ x—or, equivalently, into ( A − λ I) x = 0—and solve for x; the resulting nonzero solutons form the set of eigenvectors of A corresponding to the selectd eigenvalue. [V,D,W] = eig(A,B) also returns full matrix W whose columns are the corresponding left eigenvectors, so that W'*A = D*W'*B. A matrix is diagonalizable if it has a full set of eigenvectors. The generalized eigenvalue problem is to determine the solution to the equation Av = λBv, where A and B are n-by-n matrices, v is a column vector of length n, and λ is a scalar. is a linearly independent set. If W is a matrix such that W'*A = D*W', the columns of W are the left eigenvectors of A. Your matrix is Hermitian - look up "Rayleigh quotient iteration" to find its eigenvalues and eigenvectors. Solution: Ax = x )x = A 1x )A 1x = 1 x (Note that 6= 0 as Ais invertible implies that det(A) 6= 0). Recall as well that the eigenvectors for simple eigenvalues are linearly independent. 
Now we need to get the matrix into reduced echelon form. Eigenvectors of repeated eigenvalues. A = 1 u 1 u 1 T u 1 T u 1 − 2 u 2 u 2 T u 2 T u 2 + 2 u 3 u 3 T u 3 T u 3. xla is an addin for Excel that contains useful functions for matrices and linear Algebra: Norm, Matrix multiplication, Similarity transformation, Determinant, Inverse, Power, Trace, Scalar Product, Vector Product, Eigenvalues and Eigenvectors of symmetric matrix with Jacobi algorithm, Jacobi's rotation matrix. This representation turns out to be enormously useful. Title: Eigenvalues and Eigenvectors of the Matrix of Permutation Counts Authors: Pawan Auorora , Shashank K Mehta (Submitted on 16 Sep 2013 ( v1 ), last revised 20 Sep 2013 (this version, v2)). The first one is a simple one - like all eigenvalues are real and different. You have 3 vector equations Au1=l1u1 Au2=l2u2 Au3=l3u3 Consider the matrix coefficients a11,a12,a13, etc as unknowns. Learn to find complex eigenvalues and eigenvectors of a matrix. Earlier on, I have also mentioned that it is possible to get the eigenvalues by solving the characteristic equation of the matrix. 369) EXAMPLE 1 Orthogonally diagonalize. The normalized eigenvector for = 5 is: The three eigenvalues and eigenvectors now can be recombined to give the solution to the original 3x3 matrix as shown in Figures 8. For a Hermitian matrix the eigenvalues should be real. If you're seeing this message, it means we're having trouble loading external resources on our website. this expression for A is called the spectral decomposition of a symmetric matrix. Find a basis for this eigenspace. This can be factored to. The calculator will perform symbolic calculations whenever it is possible. Namely, prove that (1) the determinant of A is the product of its eigenvalues, and (2) the trace of A is the sum of the eigenvalues. The eigenvalues are r1=r2=-1, and r3=2. 2 examples are given : first the eigenvalues of a 4*4 matrix is calculated. For example, suppose we wish to solve the. 
Repeated Eigenvalues We conclude our consideration of the linear homogeneous system with constant coefficients x Ax' (1) with a brief discussion of the case in which the matrix has a repeated eigenvalue. You have 3 vector equations Au1=l1u1 Au2=l2u2 Au3=l3u3 Consider the matrix coefficients a11,a12,a13, etc as unknowns. oregonstate. Ask Question Asked 2 years, 8 months ago. True A 3x3 matrix can have a nonreal complex eigenvalue with multiplicity 2. Browse other questions tagged linear-algebra matrices eigenvalues-eigenvectors or ask your own question. Finding eigenvectors of a 3x3 matrix 2. Learn to find complex eigenvalues and eigenvectors of a matrix. Eigenvalues and eigenvectors calculator. It decomposes matrix using LU and Cholesky decomposition. I guess A is 3x3, so it has 9 coefficients. Prove that the diagonal elements of a triangular matrix are its eigenvalues. Determining the eigenvalues of a 3x3 matrix. This is called the eigendecomposition. Equation (1) can be stated equivalently as (A − λ I) v = 0 , {\displaystyle (A-\lambda I)v=0,} (2) where I is the n by n identity matrix and 0 is the zero vector. We all know that for any 3 × 3 matrix, the number of eigenvalues is 3. This calculator allows to find eigenvalues and eigenvectors using the Characteristic polynomial. If you have trouble understanding your eigenvalues and eigenvectors of 3×3 matrix. Matrix A: 0 -6 10-2 12 -20-1 6 -10 I got the eigenvalues of: 0, 1+i, and 1-i. We know that the row space of a matrix is orthogonal to its null space, then we can compute the eigenvector(s) of an eigenvalue by verifying the linear independence of the. 860) by computing Av/l and confirming that it equals v. net) for Bulgarian translation. real()" to get rid of the imaginary part will give the wrong result (also, eigenvectors may have an arbitrary complex phase!). Complex eigenvalues and eigenvectors of a matrix. 
The 3x3 matrix can be thought of as an operator - it takes a vector, operates on it, and returns a new vector. We have A= 5 2 2 5 and eigenvalues 1 = 7 2 = 3 The sum of the eigenvalues 1 + 2 = 7+3 = 10 is equal to the sum of the diagonal entries of the matrix Ais 5 + 5 = 10. Stack Overflow for Teams is a private, secure spot for you and your coworkers to find and share information. The Mathematics Of It. Let us rearrange the eigenvalue equation to the form , where represents a vector of all zeroes (the zero vector). Eigenvalues of the said matrix [ 2. Repeated Eigenvalues Occasionally when we have repeated eigenvalues, we are still able to nd the correct number of linearly independent eigenvectors. Judging from the name covmat, I'm assuming you are feeding a covariance matrix, which is symmetric (or hermitian. 1 $\begingroup$ My question is Eigenvalue/eigenvector reordering and/or renormalisation? 0. Eigenvalues and Eigenvectors Calculator for 3x3 Matrix easycalculation. Since the zero-vector is a solution, the system is consistent. Theorem 11. Here we can confirm the eigenvalue/eigenvector pair l=-. These straight lines may be the optimum axes for describing rotation of a. In general, you can skip the multiplication sign, so 5x is equivalent to 5*x`. Once we have the eigenvalues we can then go back and determine the eigenvectors for each eigenvalue. 1) then v is an eigenvector of the linear transformation A and the scale factor λ is the eigenvalue corresponding to that eigenvector. We all know that for any 3 × 3 matrix, the number of eigenvalues is 3. I only found 2 eigenvectors cos l2=l3. Let's consider a simple example with a diagonal matrix: A = np. Hot Network Questions Quick way to find the square root of 123. EigenValues is a special set of scalar values, associated with a linear system of matrix equations. And, thanks to the Internet, it's easier than ever to follow in their footsteps (or just finish your homework or study for that next big test). 
Follow the next steps for calulating the eigenvalues (see the figures) 1: make a 4*4 matrix [A] and fill the rows and colums with the numbers. MATRIX NORMS 219 Moreover, if A is an m × n matrix and B is an n × m matrix, it is not hard to show that tr(AB)=tr(BA). First, form the matrix. In each case determine which vectors are eigenvectors and identify the associated eigenvalues. The Overflow Blog Defending yourself against coronavirus scams. Lambda represents a scalar value. This problem has been solved! See the answer. Lecture 7: Given The Eigenvector, Eigenvalues=? Lecture 8: Eigenvector=? Of A 3X3 Matrix; Lecture 9: Bases And Eigenvalues: 1; Lecture 10: Bases And Eigenvalues: 2; Lecture 11: Basis=? For A 2X2 Matrix; Lecture 12: Basis=? For A 3X3 Matrix: Ex. Using MatLab to find eigenvalues, eigenvectors, and unknown coefficients of initial value problem. Here A is a matrix, v is an eigenvector, and lambda is its corresponding eigenvalue. Note: The two unknowns can also be solved for using only matrix manipulations by starting with the initial conditions and re-writing: Now it is a simple task to find γ 1 and γ 2. Hi, I'm having trouble with finding the eigenvectors of a 3x3 matrix. In my earlier posts, I have already shown how to find out eigenvalues and the corresponding eigenvectors of a matrix. By definition of the kernel, that. The eigenvalue is the factor which the matrix is expanded. It's the eigenvectors that determine the dimensionality of a system. Thanks for the A2A… Eigenvalues and the Inverse of a matrix If we take the canonical definition of eigenvectors and eigenvalues for a matrix, $M$, and further assume that $M$ is invertible, so there exists, [math]M^{-1}[/math. The eigenvalues are numbers, and they’ll be the same for Aand B. ) by Seymour Lipschutz and Marc. If is a diagonal matrix with the eigenvalues on the diagonal, and is a matrix with the eigenvectors as its columns, then. *XP the eigenvalues up to a 4*4 matrix can be calculated. 
(1) The eigenvalues of a triangle matrix are its diagonal elements. Is there a fast algorithm for this specific problem? I've seen algorithms for calculating all the eigenvectors of a real symmetric matrix, but those routines seem to be optimized for large matrices, and I don't care. Diagonal matrix. I'm trying to calculate eigenvalues and eigenvectors of a 3x3 hermitian matrix (named coh). The numerical computation of eigenvalues and eigenvectors is a challenging issue, and must be be deferred until later. The roots are lambda 1 equals 1, and lambda 2 equals 3. It decomposes matrix using LU and Cholesky decomposition. The eigenvalues of a triangular matrix are the entries on the main diagonal. I am new to Mathematica so I am not very familiar with the syntax and I can not find out what is wrong with my code. degree polynomial. square matrix and S={x1,x2,x3,…,xp} S = { x 1, x 2, x 3, …, x p } is a set of eigenvectors with eigenvalues λ1,λ2,λ3,…,λp. 2]; quit; This Hadamard matrix has 8 eigenvalues equal to 4 and 8 equal to -4. Then if λ is a complex number and X a non–zero com-plex column vector satisfying AX = λX, we call X an eigenvector of A, while λ is called an eigenvalue of A. Learn to find complex eigenvalues and eigenvectors of a matrix. Multiply an eigenvector by A, and the. Eigenvalues [ m, - k] gives the k that are smallest in absolute value. Determining the eigenvalues of a 3x3 matrix Linear Algebra: Eigenvectors and Eigenspaces for a 3x3 matrix Rotate to landscape screen format on a mobile phone or small tablet to use the Mathway widget, a free math problem solver that answers your questions with step-by-step explanations. , the covariance matrix of a random vector)), then all of its eigenvalues are real, and all of its eigenvectors are orthogonal. Linear Algebra: Introduction to Eigenvalues and Eigenvectors. oregonstate. A simple online EigenSpace calculator to find the space generated by the eigen vectors of a square matrix. 
Ask Question Asked 2 years, 8 months ago. (a) Set T: R2!R2 to be the linear transformation represented by the matrix 2 0 0 3. 1 Let A be an n × n matrix. , the covariance matrix of a random vector)), then all of its eigenvalues are real, and all of its eigenvectors are orthogonal. Eigenvalues and Eigenvectors Consider multiplying a square 3x3 matrix by a 3x1 (column) vector. I only found 2 eigenvectors cos l2=l3. Today Courses Practice Algebra Geometry Number Theory Calculus Probability Find the eigenvalues of the matrix A = (8 0 0 6 6 11 1 0 1). If you're seeing this message, it means we're having trouble loading external resources on our website. In the last video we set out to find the eigenvalues values of this 3 by 3 matrix, A. The non-symmetric problem of finding eigenvalues has two different formulations: finding vectors x such that Ax = λx, and finding vectors y such that y H A = λy H (y H implies a complex conjugate transposition of y). 366) •A is orthogonally diagonalizable, i. We take an example matrix from a Schaum's Outline Series book Linear Algebra (4 th Ed. The matrix is (I have a ; since I can't have a space between each column. There could be multiple eigenvalues and eigenvectors for a symmetric and square matrix. As the eigenvalues of are ,. The eigenvalue is the factor which the matrix is expanded. Form the matrix A − λI , that is, subtract λ from each diagonal element of A. To begin, let v be a vector (shown as a point) and A be a matrix with columns a1 and a2 (shown as arrows). Note that the multiplication on the left hand side is matrix multiplication (complicated) while the mul-. 1} it is straightforward to show that if $$\vert v\rangle$$ is an eigenvector of $$A\text{,}$$ then, any multiple $$N\vert v\rangle$$ of $$\vert v\rangle$$ is also an eigenvector since the (real or complex) number \(N. The eigenvalues and eigenvectors of a matrix may be complex, even when the matrix is real. 
We need to get to the bottom of what the matrix A is doing to. They have many uses! A simple example is that an eigenvector does not change direction in a transformation:. Hermitian Matrix giving non-real eigenvalues. , a matrix equation) that are sometimes also known as characteristic vectors, proper vectors, or latent vectors (Marcus and Minc 1988, p. Eigenvectors are a special set of vectors associated with a linear system of equations (i. Form the matrix A − λI , that is, subtract λ from each diagonal element of A. Assuming K = R would make the theory more complicated. For a square matrix A, an Eigenvector and Eigenvalue make this equation true (if we can find them):. The vector x is called an eigenvector corresponding to λ. To compute the Transpose of a 3x3 Matrix, CLICK HERE. [i 1]t, for any nonzero scalar t. Eigenvalues and Eigenvectors Consider multiplying a square 3x3 matrix by a 3x1 (column) vector. A simple online EigenSpace calculator to find the space generated by the eigen vectors of a square matrix. The calculator will perform symbolic calculations whenever it is possible. Earlier on, I have also mentioned that it is possible to get the eigenvalues by solving the characteristic equation of the matrix. Since doing so results in a determinant of a matrix with a zero column, $\det A=0$. Let us rearrange the eigenvalue equation to the form , where represents a vector of all zeroes (the zero vector). Eigenvalues and Eigenvectors of a 3 by 3 matrix Just as 2 by 2 matrices can represent transformations of the plane, 3 by 3 matrices can represent transformations of 3D space. Linear Algebra: Eigenvalues of a 3x3 matrix. The eigenvectors corresponding to di erent eigenvalues need not be orthogonal. *XP the eigenvalues up to a 4*4 matrix can be calculated. Once we have the eigenvalues we can then go back and determine the eigenvectors for each eigenvalue. is a linearly independent set. 
But I want to use the class Jama for the calculation of the eigenvalues and eigenvectors, but I do not know how to use it, could anyone give me a hand? Thanks. If you love it, our example of the solution to eigenvalues and eigenvectors of 3×3 matrix will help you get a better understanding of it. det ( A − λ I) = 0. By ranking your eigenvectors in order of their eigenvalues, highest to lowest, you get the principal components in order of significance. If W is a matrix such that W'*A = D*W', the columns of W are the left eigenvectors of A. We all know that for any 3 × 3 matrix, the number of eigenvalues is 3. I'm having a problem finding the eigenvectors of a 3x3 matrix with given eigenvalues. Eigenvalues, Eigenvectors, and Diagonal-ization Math 240 Eigenvalues and Eigenvectors Diagonalization Complex eigenvalues Find all of the eigenvalues and eigenvectors of A= 2 6 3 4 : The characteristic polynomial is 2 2 +10. In linear algebra, the Eigenvector does not change its direction under the associated linear transformation. Lambda represents a scalar value. I need some help with the following problem please? Let A be a 3x3 matrix with eigenvalues -1,0,1 and corresponding eigenvectors l1l. The eigenvalues of a hermitian matrix are real, since (λ − λ)v = (A * − A)v = (A − A)v = 0 for a non-zero eigenvector v. Not too bad. You can use this to find out which of your. Eigenvalues and Eigenvectors Consider multiplying a square 3x3 matrix by a 3x1 (column) vector. Calculate the eigenvalues and the corresponding eigenvectors of the matrix. Our general strategy was: Compute the characteristic polynomial. The second examples is about a 3*3 matrix. Let us consider an example of two matrices, one of them is a diagonal one, and another is similar to it: A = {{1, 0, 0}, {0, 2, 0}, {0, 0, 0. Notice: If x is an eigenvector, then tx with t6= 0 is also an eigenvector. 6 Prove that if the cofactors don't all vanish they provide a column eigenvector. 
How to find the Eigenvalues of a 3x3 Matrix - Duration: 3:56. array ( [ [ 1, 0 ], [ 0, -2 ]]) print (A) [ [ 1 0] [ 0 -2]] The function la. Find more Mathematics widgets in Wolfram|Alpha. You might be stuck with thrashing through an algebraic. Eigenvalues and Eigenvectors, Imaginary and Real. A simple online EigenSpace calculator to find the space generated by the eigen vectors of a square matrix. real()" to get rid of the imaginary part will give the wrong result (also, eigenvectors may have an arbitrary complex phase!). Linear Algebra: Eigenvalues of a 3x3 matrix. The columns of V present eigenvectors of A. Our general strategy was: Compute the characteristic polynomial. Take for example 0 @ 3 1 2 3 1 6 2 2 2 1 A One can verify that the eigenvalues of this matrix are = 2;2; 4. Eigenvalues and Eigenvectors Consider multiplying a square 3x3 matrix by a 3x1 (column) vector. Eigenvector and Eigenvalue. 224 CHAPTER 7. An eigenvector associated with λ1 is a nontrivial solution~v1 to (A λ1I)~v =~0: (B. The eigenvalues of A are given by the roots of the polynomial det(A In) = 0: The corresponding eigenvectors are the nonzero solutions of the linear system (A In)~x = 0: Collecting all solutions of this system, we get the corresponding eigenspace. If you have trouble understanding your eigenvalues and eigenvectors of 3×3 matrix. Eigenvalues and Eigenvectors. 2, replacing ${\bb v}^{(j)}$ by ${\bb v}^{(j)}-\sum_{k\neq j} a_k {\bb v}^{(k)}$ results in a matrix whose determinant is the same as the original matrix. Eigenvectors of repeated eigenvalues. Let A be an n×n matrix and let λ1,…,λn be its eigenvalues. The matrix looks like this |0 1 1| A= |1 0 1| |1 1 0| When I try to solve for the eigenvectors I end up with a 3x3 matrix containing all 1's and I get stumped there. If the matrix A is symmetric then •its eigenvalues are all real (→TH 8. Description of Lab: Your program will ask the user to enter a 3x3 matrix. Maths with Jay 35,790 views. 
org/math/linear-algebra/alternate_bases/eigen_everything/v/linear-. We therefore saw that they were all real. 7 Choose a random 3 by 3 matrix and find an eigenvalue and corresponding eigenvector. As is to be expected, Maple's. This condition will give you the eigenvalues and then, solvning the system for each eigenvalue, you will find the eigenstates. This matrix calculator computes determinant , inverses, rank, characteristic polynomial , eigenvalues and eigenvectors. v is an eigenvector with associated eigenvalue 3. I'm having a problem finding the eigenvectors of a 3x3 matrix with given eigenvalues. Since v is non-zero, the matrix is singular, which means that its determinant is zero. I can find the eigenvector of the eigenvalue 0, but for the complex eigenvalues, I keep on getting the reduced row echelon form of:. Find all values of a which will guarantee that A has eigenvalues 0, 3, and − 3. Question: Find The Eigenvalues And Eigenvectors Of Matrices 3x3 This problem has been solved! See the answer. Observation: det (A - λI) = 0 expands into an kth degree polynomial equation in the unknown λ called the characteristic equation. At this special case, all vectors as still rotated counterclockwise except those in the direction of $(0,1)$ (which is the eigenvector). This calculator allows you to enter any square matrix from 2x2, 3x3, 4x4 all the way up to 9x9 size. Find the eigenvalues and eigenvectors. 3 4 4 8 Solution. In Section 5. Scaling your VPN overnight Finding Eigenvectors of a 3x3 Matrix (7. Eigenvalues and Eigenvectors of a 3 by 3 matrix Just as 2 by 2 matrices can represent transformations of the plane, 3 by 3 matrices can represent transformations of 3D space. 1 3 4 5 , l = 1 11. The vector x is called an eigenvector corresponding to λ. Eigenvalues and Eigenvectors. Exactly one option must be correct). find the eigenvalues and eigenvectors of matrices 3x3. 
In linear algebra, the Eigenvector does not change its direction under the associated linear transformation. The normalized eigenvector for = 5 is: The three eigenvalues and eigenvectors now can be recombined to give the solution to the original 3x3 matrix as shown in Figures 8. Decomposing a matrix in terms of its eigenvalues and its eigenvectors gives valuable insights into the properties of the matrix. Equation (1) can be stated equivalently as (A − λ I) v = 0 , {\displaystyle (A-\lambda I)v=0,} (2) where I is the n by n identity matrix and 0 is the zero vector. They are used in a variety of data science techniques such as Principal Component Analysis for dimensionality reduction of features. The result is a 3x1 (column) vector. In this tutorial, we will explore NumPy's numpy. 1) can be rewritten. - Jonas Aug 16 '11 at 3:12. Given a matrix A, recall that an eigenvalue of A is a number λ such that Av = λ v for some vector v. If A is real, there is an orthonormal basis for R n consisting of eigenvectors of A if and only if A is symmetric. This polynomial is called the characteristic polynomial. Question: How do you determine eigenvalues of a 3x3 matrix? Eigenvalues: An eigenvalue is a scalar {eq}\lambda {/eq} such that Ax = {eq}\lambda {/eq}x for a nontrivial x. In linear algebra the characteristic vector of a square matrix is a vector which does not change its direction under the associated linear transformation. The zero vector 0 is never an eigenvectors, by definition. In this python tutorial, we will write a code in Python on how to compute eigenvalues and vectors. Let A be the matrix given by A = [− 2 0 1 − 5 3 a 4 − 2 − 1] for some variable a. trace()/3) -- note that (in exact math) this shifts the eigenvalues but does not influence the eigenvectors. It will then compute the eigenvalues (real and complex) and eigenvectors (real and complex) for that matrix. 
Determining the eigenvalues of a 3x3 matrix Linear Algebra: Eigenvectors and Eigenspaces for a 3x3 matrix Rotate to landscape screen format on a mobile phone or small tablet to use the Mathway widget, a free math problem solver that answers your questions with step-by-step explanations. For the following matrices find (a) all eigenvalues (b) linearly independent eigenvectors for each eigenvalue (c) the algebraic and geometric multiplicity for each eigenvalue and state whether the matrix is diagonalizable. Enter a matrix. Definition 4. Diagonalizable Matrices. Quiz % 1)Simplify 2) Similarly, the characteristic equation of a 3x3 matrix: Eigenvalues or, can be written as well as Find eigenvalues and eigenvectors of matrix. Maths with Jay 35,790 views. Find more Mathematics widgets in Wolfram|Alpha. 2 Vectors that maintain their orientation when multiplied by matrix A D Eigenvalues: numbers (λ) that provide solutions for AX = λX. net) for Bulgarian translation. I can find the eigenvector of the eigenvalue 0, but for the complex eigenvalues, I keep on getting the reduced row echelon form of:.
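As a concrete numerical check of the properties discussed above, here is a minimal NumPy sketch; the symmetric 3x3 matrix is an arbitrary illustrative choice, not one taken from the text:

```python
import numpy as np

# An arbitrary symmetric 3x3 matrix chosen for illustration.
A = np.array([[3.0, 4.0, 2.0],
              [4.0, 6.0, 2.0],
              [2.0, 2.0, 4.0]])

# eig returns a tuple (eigenvalues, eigenvectors);
# column i of V is the eigenvector belonging to w[i].
w, V = np.linalg.eig(A)

# The defining relation A v = lambda v holds for every pair.
for lam, v in zip(w, V.T):
    assert np.allclose(A @ v, lam * v)

# Trace = sum of eigenvalues, determinant = product of eigenvalues.
assert np.isclose(np.trace(A), w.sum())
assert np.isclose(np.linalg.det(A), w.prod())
```

Since this particular matrix is symmetric, `numpy.linalg.eigh` would be the better choice in practice: it guarantees real eigenvalues and orthonormal eigenvectors.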
Mathematics (source: https://questioncove.com/updates/4e56f2e10b8b9ebaa8948ee6)
OpenStudy (anonymous):
Prove that if $\mathop {\lim }\limits_{n \to \infty } {a_n} = a$, then $\mathop {\lim }\limits_{n \to \infty } \frac{{{a_1} + 2{a_2} + \cdots + n{a_n}}} {{{n^2}}} = \frac{a} {2}$
OpenStudy (anonymous):
Can I use l'Hospital's rule like this: $\mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {k{a_k}} }} {{{n^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {{a_k}} }} {{2n}} = \mathop {\lim }\limits_{n \to \infty } \frac{{an}} {{2n}} = \frac{a} {2}$ Is that a rigorous way to prove it? Please tell me what you think; it will be appreciated.
OpenStudy (anonymous):
i am thinking. maybe can prove it directly?
myininaya (myininaya):
we can use l'hospital
OpenStudy (anonymous):
I don't know how to do that, I want to use the precise definiition of series limit.
OpenStudy (anonymous):
you need epsilon, N proof? because it looks like you should get $a\sum_{k=1}^nk=\frac{an(n+1)}{2}$
OpenStudy (anonymous):
OpenStudy (anonymous):
In fact, it is a question from mathematical analysis. So yes, epsilon, N proof will be best.
myininaya (myininaya):
$\lim_{n \rightarrow \infty}\frac{a_1+a_2+\cdot \cdot \cdot n a_n}{n^2} \cdot \frac{\frac{1}{n^2}}{\frac{1}{n^2}}$$=\lim_{n \rightarrow \infty}\frac{\frac{a_1}{n^2}+\frac{a_2}{n^2}+ \cdot \cdot \cdot +\frac{n a_n}{n^2}}{\frac{n^2}{n^2}}$ $=\lim_{n \rightarrow \infty}\frac{0+0+\cdot \cdot \cdot +\frac{1}{n}a_n}{1}=\lim_{n \rightarrow \infty}\frac{a_n}{n}=0$ ? how can we show its $\frac{a}{2}$
OpenStudy (anonymous):
The answer from the textbook is a/2. I saw a guy use l'Hospital's rule in that way, so I just copied his approach... I hope it works. But using l'Hospital's rule like that is really confusing... And if you can give the "epsilon-N" proof, that would be best...
OpenStudy (anonymous):
Take your time. I have to go to work now. See you.
myininaya (myininaya):
i'm saying i can't prove that because it doesn't = a/2 i get 0 see above
myininaya (myininaya):
and the way i used above wasn't l'hospital
OpenStudy (zarkon):
it is a/2
myininaya (myininaya):
what did i do wrong then
OpenStudy (zarkon):
you cannot do what you did above
myininaya (myininaya):
myininaya (myininaya):
i see
myininaya (myininaya):
its because i'm forgeting the terms before na_n
OpenStudy (zarkon):
yes
myininaya (myininaya):
like (n-1)a_{n-1}
OpenStudy (zarkon):
yes
myininaya (myininaya):
and so on...
OpenStudy (zarkon):
yes
myininaya (myininaya):
lol
myininaya (myininaya):
i will let you prove it
myininaya (myininaya):
because i know you want to badly
myininaya (myininaya):
lol
OpenStudy (anonymous):
why isn't it $\lim_{n\rightarrow \infty}\frac{an(n+1)}{2n^2}=\frac{a}{2}$?
OpenStudy (zarkon):
how do you factor out an a?
OpenStudy (zarkon):
that is what it looks like you are doing...replacing a_n with a
OpenStudy (zarkon):
the L'Hospitals rule use by prost in his first post is incorrect too
myininaya (myininaya):
wheres that?
OpenStudy (zarkon):
this ... $\mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {k{a_k}} }} {{{n^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{\sum\nolimits_{k = 1}^n {{a_k}} }} {{2n}} = \mathop {\lim }\limits_{n \to \infty } \frac{{an}} {{2n}} = \frac{a} {2}$
OpenStudy (zarkon):
an epsilon based proof is the way to go I believe
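Before an epsilon-N argument, a quick numerical sanity check of the claimed value $\frac{a}{2}$ may be reassuring. This is a minimal Python sketch, not from the thread; the test sequence $a_n = 3 + 1/n$, which converges to $a = 3$, is chosen arbitrarily:

```python
def weighted_average(n):
    """(1*a_1 + 2*a_2 + ... + n*a_n) / n^2 for the test sequence a_k = 3 + 1/k."""
    total = sum(k * (3 + 1 / k) for k in range(1, n + 1))
    return total / n**2

print(weighted_average(100))     # roughly 1.525 (the exact value is 1.5 + 2.5/100)
print(weighted_average(100000))  # very close to 1.5
```

For this sequence the quotient works out to exactly $\frac{3}{2}+\frac{5}{2n}$, so the printed values approach $1.5$ at rate $1/n$, consistent with the limit $a/2$.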
OpenStudy (anonymous):
I am back... hard day. Has anyone heard of the Stolz (Stolz–Cesàro) theorem? Using it: $\mathop {\lim }\limits_{n \to \infty } \frac{{{a_1} + 2{a_2} + ... + n{a_n}}} {{{n^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{n{a_n}}} {{{n^2} - {{\left( {n - 1} \right)}^2}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{an}} {{2n - 1}} = \frac{a} {2}$ And forget my first proof above; it is confusing and makes no sense. The Stolz theorem is very useful. But I still want the "epsilon-N" proof. Can anyone prove it? | 2022-08-10T04:11:47 | {
"domain": "questioncove.com",
"url": "https://questioncove.com/updates/4e56f2e10b8b9ebaa8948ee6",
"openwebmath_score": 0.9999911785125732,
"openwebmath_perplexity": 2256.058497394396,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9884918519527645,
"lm_q2_score": 0.8670357546485407,
"lm_q1q2_score": 0.8570577788217987
} |
https://math.stackexchange.com/questions/3749429/how-many-ways-to-of-place-1-2-3-dots-9-in-a-circle-so-the-sum-of-any-thre/3749438 | # How many ways to of place $1, 2, 3, \dots, 9$ in a circle so the sum of any three consecutive numbers is divisible by $3.$
Determine the number of ways of placing the numbers $$1, 2, 3, \dots, 9$$ in a circle, so that the sum of any three numbers in consecutive positions is divisible by $$3.$$ (Two arrangements are considered the same if one arrangement can be rotated to obtain the other.)
I've experimented with possible combinations and found that it works when we put a multiple of 3 next to a number that is one more than a multiple of 3, followed by a number that is two more than a multiple of 3. If we continue with this pattern around the circle, it works.
However, I'm curious in finding a more systematic approach than listing out all different combinations.
• Well consider the numbers in position $k$ and $k+3$ will have to be congruent $\mod 3$.. – fleablood Jul 8 at 5:08
In general, suppose we have the numbers $$1,2,\dots,3n$$, and we would like to place them in a circle so that the sum of any three consecutive terms is divisible by $$3$$.
Observe that the numbers at positions $$k$$ and $$k+3$$ must always be congruent modulo $$3$$. Thus we can partition the points along the circle into three sets which stand for the residues modulo $$3$$ of the positions. If we fix the number $$1$$ at, say, position $$1$$, then this tells us that every point at position $$3k+1$$ has residue $$1$$ modulo $$3$$.
Now we have a choice: either the numbers at positions $$3k+2$$ have residue $$0$$, or they have residue $$2$$. Either way, note that the partition class containing the fixed number $$1$$ can still be arranged in $$(n-1)!$$ ways, while each of the other two "partition classes" can be arranged in $$n!$$ different ways, giving us $$2(n-1)!(n!)^2$$ possibilities.
In this particular case, $$n=3$$, and the answer is $$2\cdot 2!\cdot(3!)^2 = 144$$ (if rotations are counted as the same, which fixing the position of $$1$$ already accounts for).
• If rotations are the same we can assume position $1$ has $3$. Then positions $4$ and $7$ have $6$ and $9$; there are $2$ ways to do that. Position $2$ is either $\pm 1\pmod 3$: there are $2$ choices. Position $2\equiv$ position $5\equiv$ position $8$, so there are $3!$ ways to do that. And position $3\equiv$ position $6\equiv$ position $9$, so there are $3!$ ways to do that. So there are $2*2*6*6=144$ ways if rotations are counted the same. If reflections are counted the same there are $2*6*6=72$ ways. – fleablood Jul 8 at 5:37
Brain storming. If we label the number positions as $$a_1,.....,a_9$$ then $$a_k+a_{k+1} + a_{k+2} \equiv 0 \equiv a_{k+1} + a_{k+2} + a_{k+3}\pmod 3$$ so $$a_k\equiv a_{k+3}\pmod 3$$.
There are only three equivalence classes, each with $$3$$ elements, so $$a_3, a_6, a_9$$ must all contain elements from one equivalence class. There are $$3$$ choices of which class and $$3!$$ ways to place the elements. $$a_1, a_4, a_7$$ must also contain elements from one equivalence class, and there are $$2$$ choices of class and $$3!$$ ways to arrange them. And for $$a_2, a_5, a_8$$ there is one choice of class and $$3!$$ ways to arrange them.
So there are $$3*3!*2*3!*1*3! = 6^4$$ ways to do this.
As rotations are considered the same (but not mirror symmetries???) divide by $$9$$.
So the answer is $$\frac {6^4}9$$.
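A brute-force check (a minimal Python sketch, assuming rotations are identified by dividing by $$9$$, which is valid here because all $$9$$ entries are distinct, so every rotation class contains exactly $$9$$ permutations):

```python
from itertools import permutations

def valid(arr):
    """True if every 3 cyclically consecutive entries sum to a multiple of 3."""
    n = len(arr)
    return all((arr[i] + arr[(i + 1) % n] + arr[(i + 2) % n]) % 3 == 0
               for i in range(n))

total = sum(valid(p) for p in permutations(range(1, 10)))
print(total, total // 9)  # 1296 linear arrangements, 144 up to rotation
```

This agrees with $$6^4/9 = 144$$.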
• Rotations are the same, so I think this overcounts by a factor of 3 – boink Jul 8 at 5:18
• Since we're placing them in a circle, it's possible we ought to divide by 9 or maybe 18 to get rid of the symmetric options (equivalently, require $a_1=1$, and possibly divide by 2). It's hard to tell, though. – Arthur Jul 8 at 5:19
• @fleablood I just meant that the problem statement says they're the same – boink Jul 8 at 5:22
• "Since we're placing them in a circle" Placing the them in a circle means that $a_1$ is congruent to $a_8$. It DOESN"T mean that rotations are considered to be the same any more than Mr. Left in a word problem means you need to subtract. But if rotations are the same then divide by $9$. If symmetry is considered the same divide by $18$. – fleablood Jul 8 at 5:22
• "I just meant that the problem statement says they're the same" Oh, I didn't see that part. That's one of my pet peeve. If Mr. Left works $8$ hours a day, $5$ days a week how many hours a week does he work. Answer: Well, since the problem contains the word "left" that means we subtract so the answer is $8-5=3$. And question. How many ways are there to place people are a table. Answer: As the problem contains the word "table" and tables are circles rotations are the same.... No, they aren't unless the question says they are. – fleablood Jul 8 at 5:28 | 2020-11-24T04:30:14 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3749429/how-many-ways-to-of-place-1-2-3-dots-9-in-a-circle-so-the-sum-of-any-thre/3749438",
"openwebmath_score": 0.8086175918579102,
"openwebmath_perplexity": 151.29742150008124,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9884918516137419,
"lm_q2_score": 0.867035752930664,
"lm_q1q2_score": 0.8570577768297469
} |
https://math.stackexchange.com/questions/2059752/mod-distributive-law-factoring-bmod-ab-bmod-ac-ab-bmod-c | # mod Distributive Law, factoring $\!\!\bmod\!\!:$ $\ ab\bmod ac = a(b\bmod c)$
I stumbled across this problem
Find $$\,10^{\large 5^{102}}$$ modulo $$35$$, i.e. the remainder left after it is divided by $$35$$
Beginning, we try to find a simplification for $$10$$ to get: $$10 \equiv 3 \text{ mod } 7\\ 10^2 \equiv 2 \text{ mod } 7 \\ 10^3 \equiv 6 \text{ mod } 7$$
As these problems are meant to be done without a calculator, calculating this further is cumbersome. The solution, however, states that since $$35 = 5 \cdot 7$$, then we only need to find $$10^{5^{102}} \text{ mod } 7$$. I can see (not immediately) the logic behind this. Basically, since $$10^k$$ is always divisible by $$5$$ for any sensical $$k$$, then: $$10^k - r = 5\cdot 7\cdot q \ \text{ for some integer } q$$ But then it's not immediately obvious how/why the fact that $$5$$ divides $$10^k$$ helps in this case.
My question is: in general, if we have some mod system $$a^k \equiv r \text{ mod } m$$ where $$m$$ can be decomposed into a product of numbers $$m_1 \times m_2 \times m_3 \times \dots,$$ do we only need to find the mod with respect to those factors that do not divide $$a$$? (And if this is the case, why?) If this is not the case, then why/how is the solution justified in this specific instance?
• How is $10 \equiv 1$ mod 7? – Junglemath May 2 '19 at 13:13
• @Junglemath, idk.... – q.Then May 14 '20 at 7:17
The "logic" is that we can use a mod distributive law to pull out a common factor $$\,c=5,\,$$ i.e.
$$ca\bmod cn =\, c(a\bmod n)\quad\qquad$$
This decreases the modulus from $$\,cn\,$$ to $$\,n, \,$$ simplifying modular arithmetic. Also it may eliminate CRT = Chinese Remainder Theorem calculations, eliminating needless inverse computations, which are much more difficult than above for large numbers (or polynomials, e.g. see this answer).
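For nonnegative integers the factoring identity $$ca\bmod cn = c(a\bmod n)$$ can be spot-checked by brute force; a minimal Python sketch (the ranges are arbitrary):

```python
import random

# Property test of the identity (c*a) mod (c*n) == c * (a mod n)
# for nonnegative a and positive c, n.
random.seed(0)
for _ in range(1000):
    a = random.randrange(0, 10**6)
    c = random.randrange(1, 100)
    n = random.randrange(1, 100)
    assert (c * a) % (c * n) == c * (a % n)
print("all 1000 random checks passed")
```

The check mirrors the one-line proof: writing $$a = qn + r$$ with $$0\le r<n$$ gives $$ca = q(cn) + cr$$ with $$0\le cr<cn$$.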
This distributive law is often more convenient in congruence form, e.g.
$$\quad \qquad ca\equiv c(a\bmod n)\ \ \ {\rm if}\ \ \ \color{#d0f}{cn\equiv 0}\ \pmod{\! m}$$
because we have: $$\,\ c(a\bmod n) \equiv c(a\! +\! kn)\equiv ca+k(\color{#d0f}{cn})\equiv ca\pmod{\!m}$$
e.g. in the OP: $$\ \ I\ge 1\,\Rightarrow\, 10^{\large I+N}\!\equiv 10^{\large I}(10^{\large N}\!\bmod 7)\ \ \ {\rm by}\ \ \ 10^I 7\equiv 0\,\pmod{35}$$
Let's use that. First note that exponents on $$10$$ can be reduced mod $$\,6\,$$ by little Fermat,
i.e. notice that $$\ \color{#c00}{{\rm mod}\,\ 7}\!:\,\ 10^{\large 6}\equiv\, 1\,\Rightarrow\, \color{#c00}{10^{\large 6J}\equiv 1}.\$$ Thus if $$\ I \ge 1\$$ then as above
$$\phantom{{\rm mod}\,\ 35\!:\,\ }\color{#0a0}{10^{\large I+6J}}\!\equiv 10^{\large I} 10^{\large 6J}\!\equiv 10^{\large I}(\color{#c00}{10^{\large 6J}\!\bmod 7})\equiv \color{#0a0}{10^{\large I}}\,\pmod{\!35}$$
Our power $$\ 5^{\large 102} = 1\!+\!6J\$$ by $$\ {\rm mod}\,\ 6\!:\,\ 5^{\large 102}\!\equiv (-1)^{\large 102}\!\equiv 1$$
Therefore $$\ 10^{\large 5^{\large 102}}\!\! = \color{#0a0}{10^{\large 1+6J}}\!\equiv \color{#0a0}{10^{\large 1}} \pmod{\!35}\$$
Remark $$\$$ For many more worked examples see the complete list of linked questions. Often this distributive law isn't invoked by name. Rather its trivial proof is repeated inline, e.g. from a recent answer, using $$\,cn = 14^2\cdot\color{#c00}{25}\equiv 0\pmod{100}$$
\begin{align}&\color{#c00}{{\rm mod}\ \ 25}\!:\ \ \ 14\equiv 8^{\large 2}\Rightarrow\, 14^{\large 10}\equiv \overbrace{8^{\large 20}\equiv 1}^{\rm\large Euler\ \phi}\,\Rightarrow\, \color{#0a0}{14^{\large 10N}}\equiv\color{#c00}{\bf 1}\\[1em] &{\rm mod}\ 100\!:\,\ 14^{\large 2+10N}\equiv 14^{\large 2}\, \color{#0a0}{14^{\large 10N}}\! \equiv 14^{\large 2}\!\! \underbrace{(\color{#c00}{{\bf 1} + 25k})}_{\large\color{#0a0}{14^{\Large 10N}}\!\bmod{\color{#c00}{25}}}\!\!\! \equiv 14^{\large 2} \equiv\, 96\end{align}
This distributive law is actually equivalent to CRT as we sketch below, with $$\,m,n\,$$ coprime
\begin{align} x&\equiv a\!\!\!\pmod{\! m}\\ \color{#c00}x&\equiv\color{#c00} b\!\!\!\pmod{\! n}\end{align} $$\,\Rightarrow\, x\!-\!a\bmod mn\, =\, m\left[\dfrac{\color{#c00}x-a}m\bmod n\right] = m\left[\dfrac{\color{#c00}b-a}m\bmod n\right]$$
which is exactly the same form solution given by Easy CRT. But the operational form of this law often makes it much more convenient to apply in computations versus the classical CRT formula.
Fractional extension It easily extends to fractions, e.g. from here
Notice $$\,\ \dfrac{\color{#c00}{11}}{35}\bmod \color{#c00}{11}(9)\,=\, \color{#c00}{11}(\color{#0a0}8)\,$$ by $$\color{#0a0}{\bmod 9\!:\ \dfrac{1}{35}\equiv \dfrac{1}{-1}\equiv 8},\$$ via
Theorem $$\ \ \dfrac{\color{#c00}ab}d\bmod \color{#c00}ac\, =\, \color{#c00}a\left(\color{#0a0}{\dfrac{b}d\bmod c}\right)\ \$$ if $$\ \ (d,ac) = 1$$
Proof $$\,$$ Bezout $$\Rightarrow$$ exists $$\, d' \equiv d^{-1}\pmod{\!ac}.\,$$ Factoring out $$\,\color{#c00}a\,$$ by mDL
$$\color{#c00}abd'\bmod \color{#c00}ac\, =\ \color{#c00}a(bd'\bmod c)\qquad\qquad\qquad$$
and $$\,dd' \equiv 1\pmod{\!ac}\Rightarrow dd' \equiv 1\pmod{\!c},\,$$ so $$\,d'\bmod c = d^{-1}\bmod c$$
First, note that $10^{7}\equiv10^{1}\pmod{35}$.
Therefore $n>6\implies10^{n}\equiv10^{n-6}\pmod{35}$.
Let's calculate $5^{102}\bmod6$ using Euler's theorem:
• $\gcd(5,6)=1$
• Therefore $5^{\phi(6)}\equiv1\pmod{6}$
• $\phi(6)=\phi(2\cdot3)=(2-1)\cdot(3-1)=2$
• Therefore $\color\red{5^{2}}\equiv\color\red{1}\pmod{6}$
• Therefore $5^{102}\equiv5^{2\cdot51}\equiv(\color\red{5^{2}})^{51}\equiv\color\red{1}^{51}\equiv1\pmod{6}$
Therefore $10^{5^{102}}\equiv10^{5^{102}-6}\equiv10^{5^{102}-12}\equiv10^{5^{102}-18}\equiv\ldots\equiv10^{1}\equiv10\pmod{35}$.
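For readers who want to confirm the arithmetic, Python's three-argument pow performs modular exponentiation directly, even though the exponent $5^{102}$ has over seventy digits (a minimal sketch):

```python
exponent = pow(5, 102)        # exact big integer, 72 digits
print(exponent % 6)           # 1, matching 5^102 ≡ (-1)^102 ≡ 1 (mod 6)
print(pow(10, exponent, 35))  # 10
```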
Carrying on from your calculation: \begin{align} 10^3&\equiv 6 \bmod 7 \\ &\equiv -1 \bmod 7 \\ \implies 10^6 = (10^3)^2&\equiv 1 \bmod 7 \end{align} We could reach the same conclusion more quickly by observing that $7$ is prime so by Fermat's Little Theorem, $10^{(7-1)}\equiv 1 \bmod 7$.
So we need to know the value of $5^{102}\bmod 6$, and here again $5\equiv -1 \bmod 6$ so $5^{\text{even}}\equiv 1 \bmod 6$. (Again there are other ways to the same conclusion, but spotting $-1$ is often useful).
Thus $10^{\large 5^{102}}\equiv 10^{6k+1}\equiv 10^1\equiv 3 \bmod 7$.
Now the final step uses the Chinese remainder theorem for uniqueness of the solution (to congruence): $$\left.\begin{aligned} x&\equiv 0 \bmod 5 \\ x&\equiv 3 \bmod 7 \end{aligned}\right\}\implies x\equiv 10 \bmod 35$$ | 2021-08-02T05:17:41 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2059752/mod-distributive-law-factoring-bmod-ab-bmod-ac-ab-bmod-c",
"openwebmath_score": 0.996309220790863,
"openwebmath_perplexity": 946.4025963228933,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211561049158,
"lm_q2_score": 0.8791467785920306,
"lm_q1q2_score": 0.8570108790929958
} |
https://math.stackexchange.com/questions/3221999/probability-of-failure-of-a-light-bulb-in-years | # Probability of failure of a light bulb in years
Let's assume we have a lightbulb with a maximum lifespan of 4 years. We are asked to create a transition matrix (Markov chain theory) for the bulb. The bulb is checked once a year, and if it is found that the bulb does not work, it is replaced by a new one.
We know that the probabilities of failure during the 4 year period are: 0.2, 0.4, 0.3 and 0.1. After four years, the lightbulb is replaced with probability 1. So we have the following 4 states:
• $$S_0$$ - the lightbulb is new
• $$S_1$$ - the lightbulb is 1 year old
• $$S_2$$ - the lightbulb is 2 years old
• $$S_3$$ - the lightbulb is 3 years old
How to create the probability transition matrix? For the first year, it's clear. The bulb goes dead with probability $$p_{0,0} = 0.2$$ and it keeps working with probability $$p_{0,1} = 0.8$$. But I am not sure how to calculate to the following years. In the materials for my course, I found the following calculation:
$$p_{2,1} = \frac{0.4}{0.8} \, , p_{2,3} = \frac{0.3+0.1}{0.8} = 0.5 \, , p_{3,1} = \frac{0.3}{0.4} = 0.75 \, , p_{3,4} = \frac{0.1}{0.4} = 0.25$$
So the probability transition matrix is:
$$\begin{bmatrix} 0.2&0.8&0&0\\0.5&0&0.5&0\\0.75&0&0&0.25\\1&0&0&0\end{bmatrix}$$
Is this correct? I fail to see why $$p_{2,3}$$ uses the probability of failure in the last year.
• The matrix is right. It's Bayes' theorem. – Oolong milk tea May 11 '19 at 11:50
• Does the second row represent the transition probabilities from $S_1$ to $S_0$, $S_1$,..,$S_3$, respectively? – John Douma May 11 '19 at 12:01
• @JohnDouma Yes, it does. – Jiří Pešík May 11 '19 at 12:04
• Then shouldn't those $0.5$s be replaced by $0.4$ and $0.6$? – John Douma May 11 '19 at 12:05
• @JohnDouma The numbers are results of $p_{2,1} = 0.5$ and $p_{2,3} = 0.5$ given above the matrix. The point is I am not sure if they're right or not. – Jiří Pešík May 11 '19 at 12:08
The probabilities known, summing to $$1$$, are the probabilities at birth of failing during the 1st, 2nd, 3rd or 4th year. Upon failure, the bulb is replaced with a new one, which thus has the same probabilities as above.
We have therefore the following scheme.
So the probability (at birth) $$P_2$$ to fail in the 2nd year (not before, and not after) will be given by the probability to survive for the first year times the probability $$p_2$$ to fail exactly in the 2nd year (given that it survived the first). And analogously for the others, i.e. $$\eqalign{ & p_{\,1} = 0.2 \cr & \left( {1 - p_{\,1} } \right)p_{\,2} = 0.8 \cdot p_{\,2} = P_{\,2} = 0.4\quad \Rightarrow \quad p_{\,2} = 0.5 \cr & \left( {1 - p_{\,1} } \right)\left( {1 - p_{\,2} } \right)p_{\,3} = P_{\,3} \quad \quad \Rightarrow \quad p_{\,3} = 0.75 \cr & \left( {1 - p_{\,1} } \right)\left( {1 - p_{\,2} } \right)\left( {1 - p_{\,3} } \right)p_{\,4} = P_{\,4} \quad \Rightarrow \quad p_{\,4} = 1 \cr}$$
And $$p_k,(1-p_k)$$ are the entries of the matrix.
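The same chain of survival factors can be computed mechanically; a minimal Python sketch (variable names invented for illustration):

```python
P = [0.2, 0.4, 0.3, 0.1]   # at-birth probabilities of failing in year k

p = []                      # conditional failure probabilities, one per state
survive = 1.0               # probability the bulb reaches year k at all
for Pk in P:
    p.append(Pk / survive)  # P(fail in year k | still working before it)
    survive -= Pk

print(p)  # approximately [0.2, 0.5, 0.75, 1.0]

# Transition matrix: from state S_k the bulb is replaced (back to S_0)
# with probability p[k]; otherwise it ages one more year.
T = []
for k, pk in enumerate(p):
    row = [0.0] * len(p)
    row[0] = pk
    if k + 1 < len(p):
        row[k + 1] = 1.0 - pk
    T.append(row)

for row in T:
    print([round(v, 4) for v in row])
```

The printed rows reproduce the matrix from the course materials, and every row sums to $$1$$ as a stochastic matrix must.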
The key lies in interpreting the phrase “the probabilities of failure during the 4 year period” (was the original in English?). It’s pretty unlikely that these numbers represent the transition probabilities that you’re trying to construct. It would be a rather miraculous light bulb that gets more reliable the longer it’s in service. The numbers sum to $$1$$ and we know that the bulb only lasts four years maximum, so the likelier reading is that these numbers are the probabilities that when the bulb fails, it is in its $$k$$th year of service. That is, these four numbers are the probabilities that it is year $$k$$ of service given that the bulb has been found to have failed.
This reading is borne out by the computations in the course materials, which are applications of Bayes’ theorem (as pointed out by Oolong milk tea). For the transition matrix, you need the probabilities that the bulb is found to have failed given that it is year $$k$$ of service, which is just the sort of thing that Bayes’ theorem allows you to compute from the given data.
So, for example, the denominator $$0.8$$ in the computation of $$p_{21}$$ is the probability that the light bulb is one year old, i.e., the probability that it didn’t fail in its first year of service, which is simply $$p_{12}=0.8$$. The numerator is the probability of the bulb’s being a year old given that it failed the test, which is $$0.4$$ from the given probabilities, multiplied by $$1$$ since it did fail the test. The computation of $$p_{23}$$ is a bit odd. It looks like the authors used $$\Pr(\text{test fails in third year} \cup \text{test fails in fourth year}) = 0.3+0.1$$ for the probability that the bulb passes when tested during the second year. One could’ve simply set $$p_{23}=1-p_{21}$$ since there are only two possible transitions from $$S_1$$. Fortunately, the two values agree.
• The original article was not in English, however, it was described very briefly. Your interpretation makes sense to me. – Jiří Pešík May 12 '19 at 20:22 | 2020-01-21T14:47:06 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3221999/probability-of-failure-of-a-light-bulb-in-years",
"openwebmath_score": 0.9645945429801941,
"openwebmath_perplexity": 506.7833399709539,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211604938803,
"lm_q2_score": 0.8791467706759583,
"lm_q1q2_score": 0.8570108752347849
} |
https://math.stackexchange.com/questions/1191651/solving-systems-of-linear-differential-equations-by-elimination | # Solving Systems of Linear Differential Equations by Elimination
For a homework problem, we are provided:
$\frac{dx}{dt}=-y + t$
$\frac{dy}{dt}=x-t$
Putting these into differential operator notation and separating the dependent variables from the independent:
$Dx-y=t$
$Dy-x=-t$
My first inclination is to apply the D operator to the second equation to eliminate Dx and get:
$D^2y+y=t-1$
I solve the homogenous part and end up with $y_c=C_1\cos(t) + C_2\sin(t).$
Using annihilator approach and method of undetermined coefficients, I determine that $y_p=t-1$.
General solution for $y(t) = C_1\cos(t)+C_2\sin(t)+t-1$.
After plugging $y$ into the second equation, I get $x(t)=-C_1\sin(t)+C_2\cos(t)+1+t$
Checking my answer against the back of the book, they show: $x(t) = C_1\cos(t)+C_2\sin(t)+t+1$ and $y(t)=C_1\sin(t)-C_2\cos(t)+t-1$
I can't seem to find what I did wrong. Chegg solutions shows to eliminate y instead of x, and got the book's solution. Does the variable chosen for elimination matter? Halp!
• Oops, you're correct. I interchanged the x(t) and y(t) from the back of the book. Will fix now. – Irongrave Mar 16 '15 at 0:37
• Both your solution and the book solution are correct and coincide up to renaming resp. replacing the constants. – Dr. Lutz Lehmann Mar 16 '15 at 3:16
• Look here. – Kw08 Jan 22 '17 at 1:32
Are you sure the original system is written as it is in the book? (Problem was updated to correct $x(t), y(t)$.)
I get $$x'' +x = t + 1 \implies x(t) = c_1 \cos t + c_2 \sin t + t + 1$$ and $$y'' + y = t - 1 \implies y(t) = c_1 \sin t + c_2 \cos t + t - 1.$$
Note: this can be written as $y(t) = c_1 \sin t - c_2 \cos t + t - 1$ because $c_2$ is an arbitrary constant. Showing the negative removes the confusion when plugging back in so the authors decided to show it as part of the solution.
You can easily verify this solution by plugging it back into the original system.
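One way to do that verification without any algebra is numerically, comparing a central-difference derivative against the right-hand sides of the system; a minimal Python sketch using the book's solution form, with arbitrary sample constants:

```python
import math

c1, c2 = 1.3, -0.7   # arbitrary sample constants

def x(t):  # book's x(t) = c1*cos(t) + c2*sin(t) + t + 1
    return c1 * math.cos(t) + c2 * math.sin(t) + t + 1

def y(t):  # book's y(t) = c1*sin(t) - c2*cos(t) + t - 1
    return c1 * math.sin(t) - c2 * math.cos(t) + t - 1

h = 1e-6
for t in [0.0, 0.5, 2.0, -1.7]:
    dx = (x(t + h) - x(t - h)) / (2 * h)   # numerical x'(t)
    dy = (y(t + h) - y(t - h)) / (2 * h)   # numerical y'(t)
    assert abs(dx - (-y(t) + t)) < 1e-6    # dx/dt = -y + t
    assert abs(dy - (x(t) - t)) < 1e-6     # dy/dt =  x - t
print("both equations hold at every sample point")
```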
Update: Let's do the first in more detail. We have:
$$x'' +x = t + 1$$
To solve the homogeneous, we have $m^2 + 1 = 0 \implies m_{1,2} = \pm ~ i$, yielding:
$$x_h(t) = c_1 \cos t + c_2 \sin t$$
For the particular, we can choose $x_p = a + b t$, and substituting back into the DEQ, yields:
$$x'' + x = a + bt = 1 + t \implies a = b = 1$$
This produces:
$$x(t) = x_h(t) + x_p(t) = c_1 \cos t + c_2 \sin t + t + 1$$
• Thanks, I just fixed that. I am still having difficulty understanding where I went wrong. – Irongrave Mar 16 '15 at 0:39
• I will add details on the $x(t)$ calculation. – Amzoti Mar 16 '15 at 0:41
• Thanks. I get that that works if you choose to eliminate y, but why doesn't eliminating x work the same way? – Irongrave Mar 16 '15 at 0:46
• It does, in my solution I absorb the negative they show into the constant. Clear? – Amzoti Mar 16 '15 at 0:47
• @Irongrave: Of course the signs and the order of the constants matter, since the two solution components are connected by them. However, it is quite permissible to do a linear transformation of the constants by replacing $C_1=D_2$ and $C_2=-D_1$ to go from the book solution with constants $(C_1,C_2)$ to your solution form with renamed constants $(D_1,D_2)$. – Dr. Lutz Lehmann Mar 16 '15 at 3:14
You could of course also see the complex differential equation $$\dot z=i·z+(1-i)·t$$ in this system.
Its homogeneous solution is $z=c·e^{it}$. The inhomogeneous solution can be constructed as usual per linear ansatz $z_p=a+b·t$ leading to $$b=i·a+i·b·t+(1-i)·t$$ and thus $b=1+i$ and $a=1-i$ and the full solution $$z=c·e^{it}+(1-i)·(1+i·t)$$ Separating into real and imaginary part gives the solution of the original system. | 2019-12-15T10:55:59 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1191651/solving-systems-of-linear-differential-equations-by-elimination",
"openwebmath_score": 0.9066839218139648,
"openwebmath_perplexity": 388.29875462359126,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211604938802,
"lm_q2_score": 0.8791467659263148,
"lm_q1q2_score": 0.8570108706047319
} |
https://math.stackexchange.com/questions/3707227/strategy-to-calculate-fracddx-left-fracx2-6x-92x2x32-right/3707238 | # Strategy to calculate $\frac{d}{dx} \left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right)$.
I am asked to calculate the following: $$\frac{d}{dx} \left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right).$$ I simplify this a little bit, by moving the constant multiplicator out of the derivative: $$\left(\frac{1}{2}\right) \frac{d}{dx} \left(\frac{x^2-6x-9}{x^2(x+3)^2}\right)$$ But, using the quotient-rule, the resulting expressions really get unwieldy: $$\frac{1}{2} \frac{(2x-6)(x^2(x+3)^2) -(x^2-6x-9)(2x(2x^2+9x+9))}{(x^2(x+3)^2)^2}$$
I came up with two approaches (3 maybe):
1. Split the terms up like this: $$\frac{1}{2}\left( \frac{(2x-6)(x^2(x+3)^2)}{(x^2(x+3)^2)^2} - \frac{(x^2-6x-9)(2x(2x^2+9x+9))}{(x^2(x+3)^2)^2} \right)$$ so that I can simplify the left term to $$\frac{2x-6}{x^2(x+3)^2}.$$ Taking this approach the right term still doesn't simplify nicely, and I struggle to combine the two terms into one fraction at the end.
2. The brute-force-method. Just expand all the expressions in numerator and denominator, and add/subtract monomials of the same order. This definitely works, but i feel like a stupid robot doing this.
3. The unofficial third-method. Grab a calculator, or computer-algebra-program and let it do the hard work.
Is there any strategy apart from my mentioned ones? Am I missing something in my first approach which would make the process go more smoothly? I am looking for general tips to tackle polynomial fractions such as this one, not a plain answer to this specific problem.
• Frankly, I think that we waste a lot of time worrying about simplifying things like this. Try a couple of things. If you can't get something nice relatively quickly, throw a CAS at it and get on with your life. In the real world, you will encounter very few situations where you want to factor a polynomial and are also able to do so (e.g. almost every polynomial of degree $5$ or higher cannot be factored; in many real-world settings, the polynomials come pre-factored; etc). – Xander Henderson Jun 6 '20 at 2:51
• In examples like this (that is, if you want to differentiate a rational function), you might save some time using logarithmic differentiation. – Xander Henderson Jun 6 '20 at 2:52
• @XanderHenderson I agree that in modern times one should focus more on understanding and applying concepts, than on pure computation. However, in this case i am happy to have been pointed to logarithmic differentiation, partial fractions and polynomial long division which made me realize some knowledge-gaps i was unaware of, which would not have happened had i used a CAS. – LeonTheProfessional Jun 6 '20 at 7:16
Logarithmic differentiation can also be used to avoid long quotient rules. Take the natural logarithm of both sides of the equation, then differentiate: $$\ln|y|=\ln|x^2-6x-9|-\ln 2-2\ln|x|-2\ln|x+3|$$ $$\frac{y'}{y}=\frac{2x-6}{x^2-6x-9}-\frac{2}{x}-\frac{2}{x+3}$$ (Note that $$x^2-6x-9=(x-3)^2-18$$ does not factor nicely, so its logarithm cannot be split further.) Then multiply both sides by $$y$$: $$y'=\frac{x-3}{x^2(x+3)^2}-\frac{x^2-6x-9}{x^3(x+3)^2}-\frac{x^2-6x-9}{x^2(x+3)^3}$$ which collects over the common denominator to the compact form $$y'=\frac{1}{x^3}-\frac{2}{(x+3)^3}=\frac{(x+3)^3-2x^3}{x^3(x+3)^3}$$
• Great!! Yet another approach! I will look into logarithmic differentiation a bit more. With these tools i don't need to feel like a stupid robot anymore ;) – LeonTheProfessional Jun 5 '20 at 17:41
• Yep. You should use logarithmic differentiation because it's more applicable than the other methods previously mentioned. Especially if there are square roots, cube roots, etc. of polynomials in the numerator/denominator. – Ty. Jun 5 '20 at 17:43
HINT
To begin with, notice that \begin{align*} x^{2} - 6x - 9 = 2x^{2} - (x^{2} + 6x + 9) = 2x^{2} - (x+3)^{2} \end{align*} Thus it results that \begin{align*} \frac{x^{2} - 6x - 9}{2x^{2}(x+3)^{2}} = \frac{2x^{2} - (x+3)^{2}}{2x^{2}(x+3)^{2}} = \frac{1}{(x+3)^{2}} - \frac{1}{2x^{2}} \end{align*}
In the general case, polynomial long division and the partial fraction method would suffice to solve this kind of problem.
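Both the decomposition and the derivative it yields can be sanity-checked numerically; a minimal Python sketch (helper names invented for illustration):

```python
# Check the decomposition f(x) = 1/(x+3)^2 - 1/(2x^2) against the original
# quotient, and check the derivative it gives, f'(x) = -2/(x+3)^3 + 1/x^3,
# against a central-difference derivative of the original quotient.
def f(x):
    return (x**2 - 6*x - 9) / (2 * x**2 * (x + 3)**2)

def f_decomposed(x):
    return 1 / (x + 3)**2 - 1 / (2 * x**2)

def f_prime(x):
    return -2 / (x + 3)**3 + 1 / x**3

h = 1e-6
for x0 in [1.0, 2.5, -1.0, 4.0]:   # sample points away from x = 0 and x = -3
    assert abs(f(x0) - f_decomposed(x0)) < 1e-12
    numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)
    assert abs(numeric - f_prime(x0)) < 1e-4
print("decomposition and derivative confirmed at sample points")
```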
• Thank you! I think i stared too long at this problem, and have to take a step back to notice these patterns. I will examine the problem again (tomorrow) with your hints, and see if any more questions arise. If so, i will comment here. If no other answers of overwhelming enlightment appear, i will mark this answer as the accepted one. :) – LeonTheProfessional Jun 5 '20 at 17:35
Note that $$x^2-6x-9 = (x-3)^2 - 18$$. So after pulling out the factor of $$\frac 12$$, it suffices to compute $$\frac{d}{dx} \left(\frac{x-3}{x(x+3)}\right)^2$$ and $$\frac{d}{dx} \left(\frac{1}{x(x+3)}\right)^2.$$ These obviously only require finding the derivative of what's inside, since the derivative of $$(f(x))^2$$ is $$2f(x)f'(x)$$.
For a final simplification, note that $$\frac{1}{x(x+3)} = \frac{1}{3} \left(\frac 1x - \frac{1}{x+3}\right),$$ so you'll only ever need to take derivatives of $$\frac 1x$$ and $$\frac {1}{x+3}$$ to finish, since the $$x-3$$ in the numerator of the first fraction will simplify with these to give an integer plus multiples of these terms.
As a general rule, partial fractions will greatly simplify the work required in similar problems.
• I found this partial fraction using polynomial long division, yet it didn't appear to me, that any remainder appearing there would obviously disappear when taking the derivative. This really is a great hint! – LeonTheProfessional Jun 5 '20 at 17:39
• @hdighfan I think there is a slight typo - it should read that the derivative of $\left(f(x)\right)^2$ is $2f(x)f'(x)$. – Zubin Mukerjee Jun 6 '20 at 3:01
• I have edited it to fix, please revert if not wanted – Zubin Mukerjee Jun 6 '20 at 20:18
• Ah, of course. Thanks for the edit. – hdighfan Jun 6 '20 at 20:19 | 2021-02-27T05:25:02 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3707227/strategy-to-calculate-fracddx-left-fracx2-6x-92x2x32-right/3707238",
"openwebmath_score": 0.8510565161705017,
"openwebmath_perplexity": 383.7864381263352,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9748211582993982,
"lm_q2_score": 0.8791467595934563,
"lm_q1q2_score": 0.8570108625020557
} |
http://math.stackexchange.com/questions/135234/showing-a-function-is-not-uniformly-continuous | # Showing a function is not uniformly continuous
I am looking at uniform continuity (for my exam) at the moment and I'm fine with showing that a function is uniformly continuous but I'm having a bit more trouble showing that it is not uniformly continuous, for example:
show that $x^4$ is not uniformly continuous on $\mathbb{R}$, so my solution would be something like:
Assume that it is uniformly continuous then:
$$\forall\epsilon>0\ \exists\delta>0:\forall{x,y}\in\mathbb{R}\ \mbox{if}\ |x-y|<\delta\ \mbox{then}\ |x^4-y^4|<\epsilon$$
Take $x=\frac{\delta}{2}+\frac{1}{\delta}$ and $y=\frac{1}{\delta}$ then we have that $|x-y|=|\frac{\delta}{2}+\frac{1}{\delta}-\frac{1}{\delta}|=|\frac{\delta}{2}|<\delta$ however $$|f(x)-f(y)|=|\frac{\delta^3}{8}+3\frac{\delta}{4}+\frac{3}{2\delta}|$$
Now if $\delta\leq 1$ then $|f(x)-f(y)|>\frac{3}{4}$ and if $\delta\geq 1$ then $|f(x)-f(y)|>\frac{3}{4}$, so no $\delta$ works for $\epsilon < \frac{3}{4}$ and we have a contradiction.
So I was wondering if this was ok (I think it's fine) but also if this was the general way to go about showing that some function is not uniformly continuous? Or if there was any other ways of doing this that are not from the definition?
Thanks very much for any help
So, is this an exam question? – user21436 Apr 22 '12 at 12:17
No, I'm just practicing for my exam where questions like this (not this one though) will come it. This is from one of the past papers that are for revision – hmmmm Apr 22 '12 at 12:29
Just trying to ensure we are not taken for a ride. Hope you don't mind. :) – user21436 Apr 22 '12 at 12:33
@KannappanSampath no its fine- it annoys me when I see people posting assessment questions on forums :) – hmmmm Apr 22 '12 at 12:36
To show that it is not uniformly continuous on the whole line, there are two usual (and similar) ways to do it:
1. Show that for every $\delta > 0$ there exist $x$ and $y$ such that $|x-y|<\delta$ and $|f(x)-f(y)|$ is greater than some positive constant (usually this is even arbitrarily large).
2. Fix the $\varepsilon$ and show that for $|f(x)-f(y)|<\varepsilon$ we need $\delta = 0$.
First way:
Fix $\delta > 0$, set $y = x+\delta$ and check $$\lim_{x\to\infty}|x^4 - (x+\delta)^4| = \lim_{x\to\infty} 4x^3\delta + o(x^3) = +\infty.$$
Second way:
Fix $\epsilon > 0$, thus $$|x^4-y^4| < \epsilon$$ $$|(x-y)(x+y)(x^2+y^2)| < \epsilon$$ $$|x-y|\cdot|x+y|\cdot|x^2+y^2| < \epsilon$$ $$|x-y| < \frac{\epsilon}{|x+y|\cdot|x^2+y^2|}$$
but this describes a necessary condition, so $\delta$ has to be at least as small as the right side, i.e.
$$|x-y| < \delta \leq \frac{\epsilon}{|x+y|\cdot|x^2+y^2|}$$
so if either of $x$ or $y$ tends to infinity then $\delta$ tends to $0$.
Hope that helps ;-)
Edit: after explanation and calculation fixes, I don't disagree with your proof.
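To make the first way concrete, here is a quick numeric sketch (mine, not from the answer): for any fixed gap $\delta$, the difference $|x^4-(x+\delta)^4|$ still grows without bound as $x$ does, which is exactly why no single $\delta$ can serve every pair on $\mathbb{R}$.

```python
def gap(x, delta):
    """|f(x) - f(x + delta)| for f(t) = t^4."""
    return abs(x**4 - (x + delta)**4)

delta = 1e-3  # even a tiny fixed gap fails far from the origin
for x in [1.0, 10.0, 100.0, 1000.0]:
    print(x, gap(x, delta))
# The leading term is 4 * delta * x**3, so the gap is unbounded in x.
```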
Thanks for the reply, I think that I use that we are considering all of $\mathbb{R}$ when I choose $x=\delta+\frac{1}{\delta}$ and $y=\frac{1}{\delta}$, as these would not be valid for small $\delta$ in a bounded interval? – hmmmm Apr 22 '12 at 13:38
@hmmmm Ok, I misunderstood what you were saying there. If the calculations are alright, then your proof is fine. – dtldarek Apr 22 '12 at 13:44
I will comment on your solution after writing another approach. For any $x,y\in\mathbb{R}$ we have: \begin{align*} |x^{4}-y^{4}|=|(x^{2}-y^{2})(x^{2}+y^{2})|=|(x-y)(x+y)(x^{2}+y^{2})|=|x-y|\cdot |x+y|\cdot |x^{2}+y^{2}| \end{align*}
So what you can see is that even if you take arbitrarily close $x$ and $y$, you can grow the distance of $x^{4}$ and $y^{4}$ as much as you want by taking them far enough away from zero. You can easily conclude from here that the function is not uniformly continuous by a contraposition for example.
Alright, then to your solution. If the calculations would be correct, then it would be fine. You could assume at first that such $\delta>0$ exists for $0<\varepsilon<3$ and conclude with a contradiction. However, I got a bit different calculations than you. Using the above equation we see that: \begin{align*} |f(\frac{\delta}{2}+\frac{1}{\delta})-f(\frac{1}{\delta})|&=|(\frac{\delta}{2}+\frac{1}{\delta})^{4}-\frac{1}{\delta^{4}}|=|\frac{\delta}{2}(\frac{\delta}{2}+\frac{2}{\delta})((\frac{\delta}{2}+\frac{1}{\delta})^{2}+\frac{1}{\delta^{2}})| \\ &= |(\frac{\delta^{2}}{4}+1)(\frac{\delta^{2}}{4}+2\cdot \frac{\delta}{2}\cdot \frac{1}{\delta}+\frac{1}{\delta^{2}}+\frac{1}{\delta^{2}})| \\ &=|(\frac{\delta^{2}}{4}+1)(\frac{\delta^{2}}{4}+1+\frac{2}{\delta^{2}})| \\ &= |\frac{\delta^{4}}{16}+\frac{\delta^{2}}{4}+\frac{1}{2}+\frac{\delta^{2}}{4}+1+\frac{2}{\delta^{2}}| \\ &= |\frac{\delta^{4}}{16}+\frac{\delta^{2}}{2}+\frac{2}{\delta^{2}}+\frac{3}{2}|\\ &= \frac{\delta^{4}}{16}+\frac{\delta^{2}}{2}+\frac{2}{\delta^{2}}+\frac{3}{2} \end{align*} If you're able to find a lower bound for this (which is quite easy) as you did previously, then by choosing an epsilon smaller than that fixed number you may conclude as you did in your original post by contradiction.
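The algebra above can be double-checked with exact rational arithmetic; this is a sketch of mine, not part of the original answer.

```python
from fractions import Fraction

def lhs(d):
    """|f(d/2 + 1/d) - f(1/d)| for f(t) = t^4, with exact rationals."""
    x = d / 2 + 1 / d
    y = 1 / d
    return abs(x**4 - y**4)

def rhs(d):
    """The expanded form computed in the answer above."""
    return d**4 / 16 + d**2 / 2 + 2 / d**2 + Fraction(3, 2)

for d in [Fraction(1, 3), Fraction(1), Fraction(7, 2)]:
    assert lhs(d) == rhs(d)
# Every term of rhs is positive, so the gap is always at least 3/2,
# and any epsilon below 3/2 yields the contradiction.
```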
Hey sorry about that, I edited it-hopefully it's right now? – hmmmm Apr 22 '12 at 13:39
I have also now edited my calculation, which still differs a bit from your new one. Could you show the steps of how you got this answer for $|f(x)-f(y)|$? – Thomas E. Apr 22 '12 at 13:47
yeah I messed that up quite a bit sorry (I had the wrong power and the wrong delta's) – hmmmm Apr 22 '12 at 13:50
It should be $|(\frac{\delta}{2}+\frac{1}{\delta})^4-\frac{1}{\delta^4}|$ which would give $|\frac{\delta^4}{16}+\frac{\delta^2}{2}+\frac{2}{\delta}+\frac{3}{2}|$ I think I could conclude a similar thing from here? – hmmmm Apr 22 '12 at 13:52
Except isn't the last $-\frac{1}{\delta^{4}}$ missing from there? Otherwise it looks close to mine. – Thomas E. Apr 22 '12 at 13:57
I think you should make this a little simpler (and for uniform continuity in general). All you need to do to show $f:X \to Y$ is not uniformly continuous on $X$ (let's suppose they're both subsets of $\Bbb R$) is give me a SINGLE epsilon such that, NO MATTER HOW SMALL delta is chosen, there will be $x$ and $y$ closer than delta for which the difference in function values exceeds epsilon. Thus, for instance, $|(N+\theta)^4- N^4| \ge 4\theta N^3$, so choose $x$ really big with respect to $\delta$: take $x=N+\theta$ and $y = N$ with $0 < \delta/2 < \theta < \delta$. Then $|x-y| < \delta$, yet you still have the variable $N$ to play with to make the difference in function values as large as you like (in particular, the difference in function values can always be made bigger than 3 regardless of how small $\delta$ is). Nevertheless, I think your proof is an accurate job!
https://mathhelpboards.com/threads/more-than-one-equation-for-a-given-trig-graph.3641/

# Trigonometry: More than one equation for a given Trig Graph?
#### m3dicat3d
##### New member
Hi all.. another Trig question here...
Let's say I'm given a graph of a sinusoidal function and asked to find its equation, but I'm not told whether this is a sine or cosine function and I'm left to determine that myself.
I understand that evaluating where the graph intersects the y axis is the straight-forward, easiest approach. For instance, take this graph where the y interval is .5 and the x interval is pi/2
I can say that it's a sine graph easily by sight, but also b/c it intersects the y axis at y=0. And given that there is no phase shift and no vertical shift, the equation is f(x) = sin [(2/3)x].
BUT, couldn't this also be f(x) = cos [{(2/3)x} - (pi/2)], since sin(x) and cos(x) are separated only by a phase shift of pi/2?
This is meant for my own edification and not to make this kind of exercise more confusing than it needs to be. I'm simply interested in whether it is in fact mathematically accurate that you could have more than one equation (a sine or a cosine "version") for a given sinusoidal curve.
My calculator returns coincidental curves when I graph both the sine and cosine "versions" of this given graph, but I know my calculator isn't really a mathematician either haha, so I thought I'd ask some real mathematicians instead
Thanks
#### Ackbach
##### Indicium Physicus
Staff member
Note that by the addition of angles identity, that
$$\cos(2x/3- \pi/2)= \cos(2x/3) \cos( \pi/2)+ \sin(2x/3) \sin( \pi/2) = \cos(2x/3) \cdot 0+ \sin(2x/3) \cdot 1= \sin(2x/3).$$
So yes, you can definitely have more than one representation of the same graph, as you have seen on your calculator.
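A quick numeric spot-check of that identity (my sketch, not part of the answer): both formulas agree at every sample point.

```python
import math

for k in range(-50, 51):
    x = 0.37 * k  # arbitrary sample points
    assert math.isclose(math.cos(2 * x / 3 - math.pi / 2),
                        math.sin(2 * x / 3), abs_tol=1e-12)
```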
#### m3dicat3d
##### New member
Thank You!!! Thank You!!! Thank You!!! Thank You!!! Thank You!!!
Excellent answer, and as I am still reviewing my Trig, I hadn't even considered the identity perspective of it... I can't stress enough how much having that perspective helps me even more with this...
Again, Thank you... man this place rocks!
#### MarkFL
Staff member
As sine and cosine are complementary or co-functions, this just means they are out of phase by $\displaystyle \frac{\pi}{2}$ radians, or 90°.
You may have noticed that a sine curve, if moved 1/4 period to the left, becomes a cosine curve, or conversely, a cosine curve moved 1/4 period to the right becomes a sine curve.
You are doing well to see this, it shows you are trying to understand it on a deeper level rather than just plugging into formulas. Both the sine function and the cosine function, and linear combinations of the two (with equal amplitudes) are called sinusoidal functions.
#### m3dicat3d
##### New member
Thanks MarkFL, I appreciate the encouraging words. I'm studying for my State certification exam to teach HS math here in TX, and I tutor HS students in the meantime. I'm no math genius by far, so when I ask some of my questions I sometimes feel they might be dumb (which no one here has made me feel, I'm glad to say). I'm trying to see those "nuances" in the math in case they might help my students, and again, I appreciate your words, and the full out decency of the community here. It's a great place to learn.
#### Ackbach
##### Indicium Physicus
Staff member
Thank You!!! Thank You!!! Thank You!!! Thank You!!! Thank You!!!
Excellent answer, and as I am still reviewing my Trig, I hadn't even considered the identity perspective of it... I can't stress enough how much having that perspective helps me even more with this...
Again, Thank you... man this place rocks!
You're quite welcome! Glad to be of help.
http://mathhelpforum.com/pre-calculus/214181-roots-unity-length-one-root-other-roots.html

# Thread: Roots of unity and the length from one root to the other roots
1. ## Roots of unity and the length from one root to the other roots
Hello,
I had an assignment that required me to solve for the roots of unity for various equations of the form $z^n -1 = 0$. Then I was asked to represent the roots of unity for each equation on an Argand diagram in the form of a regular polygon.

I did all of that; however, I have a question:

is there a relationship between the power $n$ and the length from one root to the other roots?
When n = 3,
The roots are $-0.5 \pm 0.8660i$ and 1. The length from one root to the other roots are both 1.7321 (sqrt of 3)
When n = 4,
The roots are $\pm 1$ and $\pm i$. The lengths from one root to the other roots are $\sqrt 2$, $2$ and $\sqrt 2$.
I am wondering, for the equation $z^n - 1 = 0$, is there a formula relating the power $n$ to the length from one root to the other roots?
Thanks.
2. ## Re: Roots of unity and the length from one root to the other roots
The nth roots of unity lie at the vertices of a regular polygon with n sides, each vertex at distance 1 from the center. Drawing a line from the center to each vertex divides the polygon into n isosceles triangles, each having two sides of length 1 and vertex angle of 360°/n, and you want to find the length of the third side. If you draw a line from the apex of such a triangle to the midpoint of the base, you get two right triangles with hypotenuse length 1 and one angle of 180°/n. The side opposite that angle has length sin(180°/n), so the side of the polygon is twice that:
2sin(180/n).
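As a sketch (not part of the original answer), the same picture can be checked numerically with the roots themselves: the chord from the root $1$ to $e^{2\pi i k/n}$ has length $2\sin(k\pi/n)$, and $k=1$ recovers the $2\sin(180°/n)$ side length above.

```python
import cmath
import math

def chords(n):
    """Distances from the root 1 to the other n-th roots of unity."""
    return [abs(cmath.exp(2j * cmath.pi * k / n) - 1) for k in range(1, n)]

for n in range(3, 10):
    for k, length in enumerate(chords(n), start=1):
        assert math.isclose(length, 2 * math.sin(k * math.pi / n))
# e.g. chords(6) gives 1, sqrt(3), 2, sqrt(3), 1, matching the lists below.
```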
3. ## Re: Roots of unity and the length from one root to the other roots
Hi, thank you very much for answering the question first.
However, I have already got the conjecture that the side length of the polygon would be $2\sin(180°/n)$, but this conjecture cannot be proven algebraically or by mathematical induction (MI) to be true for any value of $n$, can it?
I am just wondering whether there are any other relationships between the length from one root to the other roots. (Let's say you draw a line from one root (any root) to the other roots, not only the adjacent root.)
n = 3, length is sqrt 3 and sqrt 3
n = 4, length is sqrt 2 , 2 and sqrt 2
n = 5, lengths are not exact values... but what I have got is 1.1756, 1.9021, 1.9021 and 1.1756
n = 6, length is 1, sqrt 3, 2, sqrt 3, 1
n = 7, length is 0.86, 1.56, 1.94, 1.94, 1.56, 0.86
Seems like there's a pattern here, but I can't seem to figure it out... it would help a lot if someone could tell me whether there's a conjecture for this or not. Thank you very much.
https://web2.0calc.com/questions/help_4799

# help
Which is larger, the blue area or the orange area?
Nov 14, 2019
#1
Let the radius of the orange circle = r
So......its area = pi * r^2
We can find the side length $s$ of the equilateral triangle as follows:
tan (30°) = r / [(1/2)s]
1/ √3 = r / [ (1/2)s ]
(1/ √3) ( 1/2)s = r
s = (2√3) r = [ √12 ] r
And the area of the equilateral triangle is ( √3/ 4 ) ([√12]r)^2 = 3√3 r^2
So....the blue area inside the equilateral triangle =
[area of equilateral triangle - area of small circle ] / 3
[√3 r^2 - (1/3) pi r^2 ] = r^2 [ √3 - pi/3] (1)
The radius of the larger circle can be found as
√[ [√3r ]^2 + r^2 ] = √ [ 3r^2 + r^2 ] = 2r
So....the area of the larger circle = pi (2r)^2 = 4pi r^2
So the area between the side of the equilateral triangle and the larger circle is
[Area of larger circle - area of equilateral triangle ] / 3 =
[ 4pi r^2 - 3√3r^2 ] / 3 = r^2 [ (4/3)pi - √3] (2)
So the sum of (1) and (2) is the sum of the blue areas
r^2 [ √3 - pi/3 + (4/3) pi - √3 ] = pi r^2
So.....the orange and blue areas are equal !!!!
Nov 14, 2019
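The chain of formulas above can be re-checked numerically; this sketch (mine, not part of the answer) takes $r = 1$ for the incircle radius.

```python
import math

r = 1.0
small = math.pi * r**2              # orange incircle area
triangle = 3 * math.sqrt(3) * r**2  # equilateral triangle area
big = math.pi * (2 * r)**2          # circumcircle area, radius 2r
inside = (triangle - small) / 3     # blue piece inside the triangle
outside = (big - triangle) / 3      # blue piece outside the triangle
blue = inside + outside
assert math.isclose(blue, small)    # the blue and orange areas agree
```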
#2
Here is a different approach.The arc AC has measure $$\frac{1}{3}\cdot2\pi$$ and the angle AOB has measure half of the arc or $$\frac {\pi}{3}$$. If the radius of the larger circle is $$R$$, then the measure of OB, the radius of the smaller circle, is $$Rcos(\frac{\pi}{3})$$and so the orange region has area $$\pi(Rcos(\frac{\pi}{3}))^2=\frac{1}{4}\pi R^2$$ . The blue region, however, has area equal to $$\frac{1}{3}$$of the difference between area of the larger circle and the area of the smaller circle, i.e. $$\frac{1}{3}(\pi R^2-\frac{1}{4}\pi R^2) =\frac{1}{4}\pi R^2$$. So the two regions have the same area, each being $$\frac{1}{4}$$ of the area of the larger circle.
Nov 14, 2019
#3
Thanks, Gadfly....I like that approach !!!
Nov 14, 2019
#4
Which is larger, the blue area or the orange area?
$$\begin{array}{|rcll|} \hline {\color{orange}\text{orange}} +3\times {\color{blue}\text{blue}} &=& \pi r_{\text{circumcircle}}^2 \quad | \quad : {\color{orange}\text{orange}} \\ 1+3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{ \pi r_{\text{circumcircle}}^2 } { {\color{orange}\text{orange}} } \quad | \quad {\color{orange}\text{orange}} = \pi r_{\text{incircle}}^2 \\ 1+3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{ \pi r_{\text{circumcircle}}^2 } { \pi r_{\text{incircle}}^2 } \\ 1+3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \left( \dfrac{ r_{\text{circumcircle}} } { r_{\text{incircle}} } \right)^2 \\ 3\times \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \left( \dfrac{ r_{\text{circumcircle}} } { r_{\text{incircle}} } \right)^2 -1 \\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\left( \left( \dfrac{ r_{\text{circumcircle}} } { r_{\text{incircle}} } \right)^2 -1 \right) \\ \\ && \boxed{\text{here, if triangle is equilateral: }\\ \mathbf{2\times r_{\text{incircle}} = r_{\text{circumcircle}}!!!} } \\\\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\left( \left( \dfrac{ 2\times r_{\text{incircle}} } { r_{\text{incircle}} } \right)^2 -1 \right)\\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\times(2^2 -1) \\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& \dfrac{1}{3}\times(3) \\ \dfrac{ {\color{blue}\text{blue}} } { {\color{orange}\text{orange}} } &=& 1 \\ \mathbf{ {\color{blue}\text{blue}} } &=& \mathbf{ {\color{orange}\text{orange}} } \\ \hline \end{array}$$
Nov 15, 2019
edited by heureka Nov 15, 2019
https://mathstodon.xyz/@christianp/108424166828836198

A while ago @davidphys1 asked why nobody had made animations of the shunting yard algorithm with cutesy trains.
There is no surer way to summon me!
I've spent some of my spare time over the bank holidays making exactly that: somethingorotherwhatever.com/s
The shunting yard algorithm neatly solves the problem of translating a mathematical expression written in infix notation (operators go between the numbers/letters) to postfix notation (operators go after the things they act on).
The core problem is that you need to work out just what an operator applies to: with the order of operations, it might be just one number, or it might be a large sub-expression.
The algorithm solves this by holding operators on a separate stack until they're needed
Here are some animations to illustrate. In the first, the operations happen left-to-right, so they appear in the same order in the output as in the input.
In the second, the addition must happen after the multiplication, so it's held back.
In the third, brackets ensure that the addition happens first.
The last wrinkle for standard arithmetic is that exponentiation is right-associative. While for the other operations you work left-to-right:
1 − 2 − 3 = (1 − 2) − 3,
the order goes the other way for exponentiation:
1 ^ 2 ^ 3 = 1 ^ (2 ^ 3)
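For readers who prefer code to trains, here is a minimal sketch of the algorithm as described above (my own simplification: input is space-separated tokens, with left-associative + - * /, right-associative ^, and brackets).

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
RIGHT_ASSOC = {'^'}

def to_postfix(expr):
    out, ops = [], []  # output queue and the operator "siding" (a stack)
    for tok in expr.split():
        if tok in PREC:
            # pop operators that must act before tok may be pushed
            while (ops and ops[-1] != '(' and
                   (PREC[ops[-1]] > PREC[tok] or
                    (PREC[ops[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(ops.pop())
            ops.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()  # discard the '('
        else:          # a number or a variable
            out.append(tok)
    while ops:         # flush the remaining operators
        out.append(ops.pop())
    return ' '.join(out)

print(to_postfix('1 + 2 * 3'))      # 1 2 3 * +   (the + is held back)
print(to_postfix('1 - 2 - 3'))      # 1 2 - 3 -   (left-associative)
print(to_postfix('1 ^ 2 ^ 3'))      # 1 2 3 ^ ^   (right-associative)
```

The operator stack plays exactly the role of the siding in the animations: `+` waits there while `2 * 3` passes through first.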
@christianp @davidphys1 Wow that's a very cute animation!
@christianp
Doesn't that depend on the domain?
I mean, some rules aren't valid anymore for, say, quaternions or octonions…
@RyunoKi *leans heavily on the word "standard"*
@RyunoKi but no, I think that if I wrote an expression $$a \times b \times c$$, where a, b and c are octonions and so multiplication isn't associative, I'd expect you to interpret it as $$a \times b \times c$$, by convention.
I might include some brackets, to avoid relying on convention.
@christianp
So… the algorithm isn't considering this constraint. Fine. Space for further research :)
@RyunoKi no, it is. Or I don't understand what you mean.
@RyunoKi just noticed I missed the brackets in my second-last tweet! I had trouble LaTeXing, then didn't check how it looked!
I'd expect you to interpret $$a \times b \times c$$ as $$(a \times b) \times c$$
@christianp
I even switched to browser view to check whether something got lost in transmission 😅
@christianp
„The core problem is that you need to work out what just what an operator applies to: with the order of operations, it might be just one number, or it might a large sub-expression.
The algorithm solves this by holding operators on a separate stack until they're needed“
Is that assuming „standard" arithmetic like associative multiplication? Or the lack thereof?
Like, once „computer does that“ many people rely on it without questioning the result.
@RyunoKi right, the algorithm says that multiplication is left-associative. For real numbers, it doesn't matter.
@christianp
Thanks. Important fact in certain circumstances.
@christianp
Especially the order and arguments can change depending on how you put the brackets.
https://math.stackexchange.com/questions/2394083/is-frac2001020-cdot-19-an-integer-or-not?noredirect=1

# Is $\frac{200!}{(10!)^{20} \cdot 19!}$ an integer or not?
A friend of mine asked me to prove that $$\frac{200!}{(10!)^{20}}$$ is an integer
I used a basic example in which I assumed that there are $200$ objects placed in $20$ boxes (which means that effectively there are $10$ objects in each box). One more condition that I adopted was that the boxes are distinguishable but the items within each box are not. Now the number of permutations possible for such an arrangement is : $$\frac{200!}{\underbrace{10! \cdot 10! \cdot 10!\cdots 10!}_{\text{20 times}}}$$ $$\Rightarrow \frac{200!}{(10!) ^{20}}$$
Since these are just ways of arranging, we can be pretty sure that this number is an integer.
Then he made the problem more complex by adding a $19!$ in the denominator, thus making the problem: Is $$\frac{200!}{(10!)^{20} \cdot 19!}$$ an integer or not?
The $19!$ in the denominator seemed to be pretty odd and hence I couldn't find any intuitive way to determine the thing. Can anybody please help me with the problem?
• There might be some clever way, but this is where I would start counting primes. Are there enough $2$'s in the numerator? $3$'s? – Arthur Aug 15 '17 at 7:23
• Use the fact that every set of $k$ consecutive integers will have one integer divisible by $k$. Show that that means $(m+1)(m+2).... (m+k)$ will be divisible by $k!$. And that means $(1*2....10)(11*... 20)...... (191*....200)$ is divisible by $10!*10!*....10!$. – fleablood Aug 15 '17 at 7:26
• That's an interesting friend you have there. – uniquesolution Aug 15 '17 at 7:30
• To eliminate 19!: note that each k divides 10*k. Then the product of each 10k+1 to 10k+9 is divisible by 9!, and 10k/k = 10, so it is divisible by 10 as well. – fleablood Aug 15 '17 at 7:34
• An interesting question (at least I think it would be) is what is the largest integer $k$ so that $k!*10^{20}$ divides $200!$. – fleablood Aug 15 '17 at 16:27
You assumed the boxes were distinguishable, leading to $\frac{200!}{(10!)^{20}}$, ways to fill the boxes. If you make them indistinguishable, you merge the $20!$ ways of reordering the boxes into one, so that previous answer overcounts each way of filling indistinguishable boxes by a factor of $20!$. Therefore you are left with $\frac{200!}{(10!)^{20}}/20!$ ways to fill 20 indistinguishable boxes, which then must be an integer. After multiplying by $20$ it is of course still an integer.
We know that $\dfrac{(mn)!}{n!(m!)^n}$ is an integer for $m,n \in \Bbb N$ $^{(*)}$ . Let $n = 20$ and $m = 10$, then $\dfrac{(200)!}{20!(10!)^{20}}$ is an integer.
Multiply by $20$, $\dfrac{(200)!}{19!(10!)^{20}}$ is an integer.
Using induction, this answer says that $$\frac{(mn)!}{(m!)^nn!}=\prod_{k=1}^n\binom{mk-1}{m-1}$$ Plug in $m=10$ and $n=20$ to get $$\frac{200!}{10!^{20}\,20!}=\prod_{k=1}^{20}\binom{10k-1}{9}$$ Multiply by $20$ to get $$\frac{200!}{10!^{20}\,19!}=20\,\prod_{k=1}^{20}\binom{10k-1}{9}$$
Another Approach
Note that \begin{align} \binom{kn}{n} &=\frac{(kn-n+1)(kn-n+2)\cdots(kn-1)\,kn}{1\cdot2\cdots(n-1)\,n}\\ &=\frac{(kn-n+1)(kn-n+2)\cdots(kn-1)\,k}{1\cdot2\cdots(n-1)}\\ &=\binom{kn-1}{n-1}\,k \end{align} Therefore, since we can write a multinomial as a product of binomials, \begin{align} \frac{(mn)!}{n!^m} &=\prod_{k=1}^m\binom{kn}{n}\\ &=\prod_{k=1}^m\binom{kn-1}{n-1}\,k\\ &=m!\,\prod_{k=1}^m\binom{kn-1}{n-1} \end{align} and so $$\frac{(mn)!}{n!^m\,m!}=\prod_{k=1}^m\binom{kn-1}{n-1}$$ Plug in $m=20$ and $n=10$ and multiply by $20$ to get $$\frac{200!}{10!^{20}\,19!}=20\,\prod_{k=1}^{20}\binom{10k-1}{9}$$
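Both the divisibility claim and the product formula are easy to confirm by brute force with exact integer arithmetic (a check of mine, not part of the answer):

```python
from math import comb, factorial, prod

num = factorial(200)
den = factorial(10)**20 * factorial(19)
assert num % den == 0                 # the quotient is an integer
value = num // den
# ...and it matches 20 times the product of binomial(10k - 1, 9), k = 1..20:
assert value == 20 * prod(comb(10 * k - 1, 9) for k in range(1, 21))
```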
• But this is my answer ... – user8277998 Aug 16 '17 at 8:49
• @123: I see... I cited an answer that I wrote and didn't see that you had parenthetically cited the question to which that was an answer. However, without the connection given by the citations, the answers do not look the same. If this bothers you, I will delete my answer. – robjohn Aug 16 '17 at 13:01
• @123: I have added another proof of the same identity to differentiate our answers. I will still delete this answer if you think they are too close. – robjohn Aug 16 '17 at 13:28
• No, I have no problem with your answer, you can keep it as it is. – user8277998 Aug 16 '17 at 14:36
A long version: $$\frac{200!}{10!^{20} \cdot 19!}=\frac{30\cdot31\cdot .. \cdot200}{10!^{19}}\cdot \frac{29!}{10!\cdot(29-10)!}=...$$ which is $$...=\frac{30\cdot31\cdot .. \cdot200}{10!^{19}}\cdot \binom{29}{10}=\\ \frac{\color{red}{30} ..\color{red}{40} ..\color{red}{50} ..\color{red}{60}..\color{red}{70}..\color{red}{80}..\color{red}{90}..\color{red}{10^2}..\color{red}{110}..\color{red}{120}..\color{red}{130}..\color{red}{140}..\color{red}{150}..\color{red}{160}..\color{red}{170}..\color{red}{180}..\color{red}{190}..\color{red}{2\cdot10^{2}}}{10!^{19}}\cdot \binom{29}{10}=...$$ $20$ numbers divisible by 10, or $$3\cdot4\cdot5\cdot..\cdot9\cdot11\cdot..\cdot19\cdot20\cdot\frac{31..39\cdot41..49\cdot51..59\cdot..\cdot191..199}{9!^{19}}\cdot \binom{29}{10}=\\ 10\cdot\frac{2..9\cdot11..19\cdot31..39\cdot41..49\cdot51..59\cdot..\cdot191..199}{9!^{19}}\cdot \binom{29}{10}=...$$ cardinality of $\{31,41,51,61,71,81,91,101,111,121,131,141,151,161,171,181,191\}$ is 17 $$...=10\cdot \frac{1..9}{9!}\cdot\frac{11..19}{9!}\cdot\frac{31..39}{9!}\cdot..\cdot\frac{191..199}{9!}\cdot \binom{29}{10}=\\ 10\cdot \binom{9}{9} \cdot \binom{19}{9} \cdot \binom{39}{9}\cdot .. \cdot \binom{199}{9} \cdot \binom{29}{10}$$
• I would appreciate the down-voters to at least comment ... – rtybase Aug 15 '17 at 8:29
Consider $V=(10k+1)*....*(10k+9)$.
By your reasoning, ${10k+9 \choose 9}=(10k+1)*....*(10k+9)/9!$ is an integer.
And $10(k+1)/10(k+1)$ is an integer.
So $(10k+1)*....*(10 (k+1))$ is divisible by $9!*10*(k+1)=10!*(k+1)$.
So $200!$ is divisible by $10!*(10!*1)*(10!*2)*.....*(10!*19)=(10!)^{20}*19!$
• Don't you mean it is divisible by $10!*(k+1)$ ? – Jaap Scherphuis Aug 15 '17 at 7:55
• Yeah, I guess I did. – fleablood Aug 15 '17 at 16:22
I computed the answer just for fun using Java, and it's indeed an integer!
41355508127520659545494261323391337886154686759988983912363570790033502473625361601944917427369977161391866491251801111884812210789772970682172860398969828337097889527312353089859289462934116034461288917394623420753412096000000
import java.math.BigDecimal;
import java.math.RoundingMode;
public class JustForFun{
public static void main(String []args){
BigDecimal thFact = new BigDecimal("1");
BigDecimal tenFact = null, ntFact = null, tenFactPow20 = null;
/* Computes 200! and stores it in a */
for (int i = 1; i <= 200; i++) {
thFact = thFact.multiply(new BigDecimal(i + ""));
/* stores 10! in b */
if (i == 10)
tenFact = thFact;
/* stores 19! in c */
if (i == 19)
ntFact = thFact;
}
tenFactPow20 = tenFact.pow(20);
tenFactPow20 = tenFactPow20.multiply(ntFact);
thFact = thFact.divide(tenFactPow20);
System.out.println(thFact);
}
}
Since I am pushing 70 yrs old, it seems appropriate to dinosaur-excerpt "Elementary Number Theory" 1938 (Uspensky and Heaslett).
For real # $$r$$, let $$\lfloor r\rfloor \equiv$$ the floor of $$r$$.
Let $$p$$ be any prime #.
Let $$V_p(n) : n ~\in ~\mathbb{Z^+} ~\equiv~$$ the largest exponent $$\alpha$$ such that $$p^{\alpha} | n$$.
That is, if $$\alpha = V_p(n),$$ then $$p^{(\alpha + 1)} \not | ~n.$$
From Uspensky and Heaslett, $$V_p(n!) = \left\lfloor\frac{n}{p^1}\right\rfloor ~+~ \left\lfloor\frac{n}{p^2}\right\rfloor ~+~ \left\lfloor\frac{n}{p^3}\right\rfloor ~+~ \left\lfloor\frac{n}{p^4}\right\rfloor \cdots$$
Clearly, given two positive integers $$A,B$$, $$~\frac{A}{B}$$ will be an integer $$\iff$$
for every prime # $$p$$ that occurs in the prime factorization of $$B$$,
$$V_p(B) \leq V_p(A).$$
It is immediate that, given the OP's original question, the only prime #'s that need to be checked are those prime #'s that are $$\leq 19.$$ Further, you can see at a glance that the prime #'s 11, 13, 17, and 19 can not pose a problem.
Therefore, the problem reduces to manually applying Uspenky and Heaslett's formula to the numerator and denominator of the OP's original query with respect to the prime #'s 2,3,5,7.
Empirically, they each check out okay. Therefore, the fraction is an integer.
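The "manual" valuation check described above can be automated; here is a sketch using the same Legendre/Uspensky formula:

```python
def v_fact(n, p):
    """Exponent of the prime p in n!, via Legendre's formula."""
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

for p in [2, 3, 5, 7, 11, 13, 17, 19]:
    v_num = v_fact(200, p)                      # valuation of the numerator
    v_den = 20 * v_fact(10, p) + v_fact(19, p)  # ...and of the denominator
    assert v_num >= v_den, p
# Every prime checks out, so the fraction is an integer.
```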
In your face $$21^{\text{st}}$$ century!
https://proofwiki.org/wiki/Coprimality_Relation_is_not_Antisymmetric

# Coprimality Relation is not Antisymmetric
## Theorem
Consider the coprimality relation on the set of integers:
$\forall x, y \in \Z: x \perp y \iff \gcd \set {x, y} = 1$
where $\gcd \set {x, y}$ denotes the greatest common divisor of $x$ and $y$.
Then:
$\perp$ is not antisymmetric.
## Proof
We have:
$\gcd \set {3, 5} = 1 = \gcd \set {5, 3}$
and so:
$3 \perp 5$ and $5 \perp 3$
However, it is not the case that $3 = 5$.
The result follows by definition of antisymmetric relation.
$\blacksquare$
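The counterexample can also be checked mechanically (a trivial sketch):

```python
from math import gcd

assert gcd(3, 5) == 1 == gcd(5, 3)  # 3 perp 5 and 5 perp 3
assert 3 != 5                       # yet 3 and 5 differ, so perp is not antisymmetric
```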
"domain": "proofwiki.org",
"url": "https://proofwiki.org/wiki/Coprimality_Relation_is_not_Antisymmetric",
"openwebmath_score": 0.9990094304084778,
"openwebmath_perplexity": 567.9665807161637,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9843363503693294,
"lm_q2_score": 0.8705972566572504,
"lm_q1q2_score": 0.8569605262595482
} |
https://math.stackexchange.com/questions/3238042/limit-of-powers-of-3-times3-matrix/3239827

# Limit of powers of $3\times3$ matrix
Consider the matrix
$$A = \begin{bmatrix} \frac{1}{2} &\frac{1}{2} & 0\\ 0& \frac{3}{4} & \frac{1}{4}\\ 0& \frac{1}{4} & \frac{3}{4} \end{bmatrix}$$
What is $$\lim_{n→\infty}A^n$$ ?
A)$$\begin{bmatrix} 0 & 0 & 0\\ 0& 0 & 0\\ 0 & 0 & 0 \end{bmatrix}$$ B)$$\begin{bmatrix} \frac{1}{4} &\frac{1}{2} & \frac{1}{2}\\ \frac{1}{4}& \frac{1}{2} & \frac{1}{2}\\ \frac{1}{4}& \frac{1}{2} & \frac{1}{2}\end{bmatrix}$$ C)$$\begin{bmatrix} \frac{1}{2} &\frac{1}{4} & \frac{1}{4}\\ \frac{1}{2}& \frac{1}{4} & \frac{1}{4}\\ \frac{1}{2}& \frac{1}{4} & \frac{1}{4}\end{bmatrix}$$ D)$$\begin{bmatrix} 0 &\frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\end{bmatrix}$$ E) The limit exists, but it is none of the above
The given answer is D). How does one arrive at this result?
• Did you try it for small values of $n$? Take $n=2, 3, 4$ and post your observations (if any) as well. – Vizag May 24 '19 at 11:12
By this question, we know that
$$A^n= \begin{pmatrix} 2^{-n} & (n-1)\,2^{-n-1} + \frac12 & \frac{1-(n+1)2^{-n}}{2}\\ 0 & \frac{1+2^{-n}}{2} & \frac{1-2^{-n}}{2} \\ 0 & \frac{1-2^{-n}}{2} & \frac{1+2^{-n}}{2} \end{pmatrix}.$$
It is thus clear that $$\lim_{n\to\infty} A^n = \begin{pmatrix} 0 &\frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\\ 0 & \frac{1}{2} & \frac{1}{2}\end{pmatrix}$$.
• $2^{-n}$ tends to $0$ as $n\to\infty$.....right?? – Srestha May 25 '19 at 19:27
• @Srestha that is correct – Maximilian Janisch May 25 '19 at 19:40
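As a numerical sanity check of the closed form above, here is a short plain-Python sketch (illustrative only, not part of any answer) that compares the formula for $$A^n$$ with direct matrix multiplication and shows the limit emerging:

```python
# Plain-Python check: compare the closed form for A^n against direct
# matrix multiplication, then look at a high power to see the limit.

def matmul(X, Y):
    """Product of two 3x3 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def closed_form(n):
    """The formula for A^n quoted above, with p = 2^(-n)."""
    p = 2.0 ** -n
    return [[p, n * p / 2 - p / 2 + 0.5, (1 - (n + 1) * p) / 2],
            [0.0, (p + 1) / 2, (1 - p) / 2],
            [0.0, (1 - p) / 2, (p + 1) / 2]]

A = [[0.5, 0.5, 0.0], [0.0, 0.75, 0.25], [0.0, 0.25, 0.75]]

P = A
for n in range(2, 31):
    P = matmul(P, A)                       # P is now A^n
    F = closed_form(n)
    assert all(abs(P[i][j] - F[i][j]) < 1e-12
               for i in range(3) for j in range(3))

# At n = 30, every term involving 2^(-n) is already negligible:
print([[round(x, 6) for x in row] for row in P])
# [[0.0, 0.5, 0.5], [0.0, 0.5, 0.5], [0.0, 0.5, 0.5]]
```

Every entry containing $2^{-n}$ dies off geometrically, which is exactly why the limit matrix in option D) appears.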
If you are in state $$1$$, you have the same probability of staying there or of passing to $$2$$, but there is no way to get back once you leave. Thus you eventually drift to $$2$$.
States $$2$$ and $$3$$ are symmetric: in the long run they tend to be equally populated, independently of the starting conditions.
Therefore, even starting from $$1$$, you will in the long run be split evenly between $$2$$ and $$3$$.
It’s often worth examining a matrix for obvious eigenvectors and eigenvalues, especially in artificial exercises, before plunging into computing and solving the characteristic equation. From the first column of $$A$$, we see that $$(1,0,0)^T$$ is an eigenvector with eigenvalue $$\frac12$$. The rows of $$A$$ all sum to $$1$$, so $$(1,1,1)$$ is an eigenvector with eigenvalue $$1$$. The remaining eigenvalue $$\frac12$$ can be found by examining the trace.
$$A$$ is therefore similar to a matrix of the form $$J=D+N$$, where $$D=\operatorname{diag}\left(1,\frac12,\frac12\right)$$ and $$N$$ is nilpotent of order no greater than 2. (If $$A$$ is diagonalizable, then $$N=0$$.) $$D$$ and $$N$$ commute, so expanding via the Binomial Theorem, $$(D+N)^n=D^n+nND^{n-1}$$. In the limit, $$D^n=\operatorname{diag}(1,0,0)$$ and the first column of $$N$$ is zero, so the second term vanishes. Thus, if $$A=PJP^{-1}$$, then $$\lim_{n\to\infty}A^n=P\operatorname{diag}(1,0,0)P^{-1}$$, but the right-hand side is just the projector onto the eigenspace of $$1$$. Informally, repeatedly multiplying a vector by $$A$$ leaves that vector’s component in the direction of $$(1,1,1)^T$$ fixed, while the remainder of the vector eventually dwindles away to nothing.
Since $$1$$ is a simple eigenvalue, there’s a shortcut for computing this projector that doesn’t require computing the change-of-basis matrix $$P$$: if $$\mathbf u^T$$ is a left eigenvector of $$1$$ and $$\mathbf v$$ a right eigenvector, then the projector onto the right eigenspace of $$1$$ is $${\mathbf v\mathbf u^T\over\mathbf u^T\mathbf v}.$$ (This formula is related to the fact that left and right eigenvectors with different eigenvalues are orthogonal.) We already have a right eigenvector, and a left eigenvector is easily found by inspection: the last two columns both sum to $$1$$, so $$(0,1,1)$$ is a left eigenvector of $$1$$. This gives us $$\lim_{n\to\infty}A^n = \frac12\begin{bmatrix}1\\1\\1\end{bmatrix}\begin{bmatrix}0&1&1\end{bmatrix} = \begin{bmatrix}0&\frac12&\frac12\\0&\frac12&\frac12\\0&\frac12&\frac12\end{bmatrix}.$$ | 2021-06-15T19:11:13 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3238042/limit-of-powers-of-3-times3-matrix/3239827",
"openwebmath_score": 0.9713488221168518,
"openwebmath_perplexity": 219.66960025485517,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9904406019226685,
"lm_q2_score": 0.8652240738888188,
"lm_q1q2_score": 0.8569530525404251
} |
http://mathematica.stackexchange.com/questions/15047/how-to-draw-fractal-images-of-iteration-functions-on-the-riemann-sphere?answertab=votes | # How to draw fractal images of iteration functions on the Riemann sphere?
Prof. McClure, in the work "M. McClure, Newton's method for complex polynomials. A preprint version of a “Mathematical graphics” column from Mathematica in Education and Research, pp. 1–15 (2006)", discusses how Mathematica can be applied to iteration functions for obtaining the basins of attraction (or their fractal images). Below, I provide his code for the fractal image of the polynomial $p(z)=z^3-1$:
p[z_] := z^3 - 1;
theRoots = z /. NSolve[p[z] == 0, z]
cp = Compile[{{z, _Complex}}, Evaluate[p[z]]];
n = Compile[{{z, _Complex}}, Evaluate[Simplify[z - p[z]/p'[z]]]];
bail = 150;
orbitData = Table[
NestWhileList[n, x + I y, Abs[cp[#]] > 0.01 &, 1, bail],
{y, -1, 1, 0.01}, {x, -1, 1, 0.01}
];
numRoots = Length[Union[theRoots]];
sameRootFunc = Compile[{{z, _Complex}}, Evaluate[Abs[3 p[z]/p'[z]]]];
whichRoot[orbit_] :=
Module[{i, z},
z = Last[orbit]; i = 1;
Scan[If[Abs[z - #] < sameRootFunc[z], Return[i], i++] &, theRoots];
If[i <= numRoots, {i, Length[orbit]}, None]
];
rootData = Map[whichRoot, orbitData, {2}];
colorList = {{cc, 0, 0}, {cc, cc, 0}, {0, 0, cc}};
cols = rootData /. {
{k_Integer, l_Integer} :> (colorList[[k]] /. cc -> (1 - l/(bail + 1))^8),
None -> {0, 0, 0}
};
Graphics[{Raster[cols]}]
Here is my main question: he nicely obtained the fractal images on the complex plane, but it would be an interesting challenge to obtain these images on the Riemann sphere, e.g.
It seems the complex plane in this case has been replaced by a sphere, but how? I will be thankful if someone could revise the code given above for obtaining such beautiful fractal images on the Riemann sphere. Any tips and tricks will be fully appreciated as well.
Would you care to share where your example spherical projection came from? – Mark McClure Dec 24 '12 at 13:04
As the other answers have shown, it's fairly easy to map an image onto a parametrized surface using textures. It can be a bit tricky, though, getting the image to mesh well with the transformation. J.M. hit on the crucial issue, namely that we compute the image using points that map to the sphere with minimal distortion. This answer is largely an expansion on his, although there are some differences and other ideas as well.
First, the article that Fazlollah refers to is some years old now, and the code can be improved in light of the many changes since V5, so let's start by showing how to generate regular Newton iteration images for general polynomials. Given a polynomial function $f(z)$, the following code computes the corresponding Newton's method iteration function $n$. It then defines the command limitInfo that iterates $n$ up to $50$ times from a starting point $z_0$ terminating when $|f(z)|$ is small and returning the last iterate and the number of iterates required for $|f(z)|$ to get small. It's compiled, listable and set to run in parallel, so it should be pretty fast. In addition to the function f, there are two numeric parameters to set, bail and r.
f = Function[z, z^3 - 1]; (* A very standard example *)
n = Function[z, Evaluate[Simplify[z - f[z]/f'[z]]]];
bail = 50;
(* bail is the number of iterates before bailing. *)
(* Doesn't have to be particularly large, *)
(* if there are only simple roots. *)
r = 0.01;
(* We assume that if |z-z0|<r, then we've *)
(* converged to the root z0. *)
limitInfo = With[{bail = bail, r = r, f = f, n = n},
Compile[{{z0, _Complex}},
Module[{z, cnt},
cnt = 0; z = z0;
While[Abs[f[z]] > r && cnt < bail,
z = n[z];
cnt = cnt + 1
];
{z, cnt}],
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]];
Since the function is listable and runs in parallel, we can simply apply it to a table of data in one fell swoop.
step = 4/801; (* The denominator is essentially the resolution. *)
limitData = limitInfo[
Table[x + I*y, {y, 2, -2, -step}, {x, -2, 2, step}]]; // AbsoluteTiming
(* Out: {1.492716, Null} *)
Each element is a pair that indicates the limiting behavior and how long it took to get there.
limitData[[1, 1]]
(* Out: {-0.499835 + 0.866044 I, 5. + 0. I} *)
I guess we need a function that takes something like that and turns it into a color.
roots = z /. NSolve[f[z] == 0, z];
preColors = List @@@ Table[ColorData[61, k], {k, 1, Length[roots]}];
preColors = Append[preColors, {0.0, 0.0, 0.0}];
color = With[{bail = bail, roots = roots, preColors = preColors},
Compile[{{z, _Complex}, {cnt, _Complex}},
Module[{arg, time, i},
arg = Arg[z];
time = Abs[cnt];
i = 1;
Scan[If[Abs[z - #] < 0.1, Return[i], i++] &, roots];
Abs[preColors[[i]]*(cnt/bail)^(0.2)]
(* The exponent 0.2 adjusts the brightness of the image. *)]]
];
Now, we apply that function and generate the image.
colors = Apply[color, limitData, {2}];
Image[colors, ImageSize -> 2/step]
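The same computation is easy to reproduce outside Mathematica. Here is a rough Python sketch of the ideas behind limitInfo and the root classification for $f(z)=z^3-1$ (the function names are mine and purely illustrative, not from the answer):

```python
import cmath

def newton_limit(z, bail=50, r=0.01):
    """Iterate Newton's map n(z) = z - f(z)/f'(z) for f(z) = z^3 - 1
    until |f(z)| < r or the iteration budget is exhausted.
    Returns (final iterate, iteration count)."""
    cnt = 0
    while abs(z**3 - 1) > r and cnt < bail:
        z = z - (z**3 - 1) / (3 * z**2)
        cnt += 1
    return z, cnt

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1

def which_root(z, tol=0.1):
    """Index of the root that z converged to, or None."""
    for i, w in enumerate(roots):
        if abs(z - w) < tol:
            return i
    return None

# Classify a small grid of starting points in [-2, 2] x [-2, 2].
hits = [0, 0, 0]
n = 40
for iy in range(n):
    for ix in range(n):
        z0 = complex(-2 + 4 * ix / (n - 1), -2 + 4 * iy / (n - 1))
        if z0 == 0:
            continue                  # Newton's map is undefined at 0
        z, _ = newton_limit(z0)
        i = which_root(z)
        if i is not None:
            hits[i] += 1
print(hits)  # the three basins are roughly equally populated
```

The iteration counts returned by `newton_limit` play the same role as `cnt` in `limitInfo`: they drive the brightness of each pixel.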
To map onto a sphere nicely, we'll discard the rectangular grid of points that we used above in favor of a collection of points that looks something like the following (although, we'll want higher resolution, of course):
step = Pi/12;
pts = Table[Cot[phi/2] Exp[I*theta],
{phi, step, Pi - step, step}, {theta, -Pi, Pi, step}];
ListPlot[{Re[#], Im[#]} & /@ Flatten[pts, 1],
AspectRatio -> Automatic, PlotRange -> All,
Epilog -> {Red, Circle[]}]
The expression $\cot(\phi/2) e^{i\theta}$ is the stereographic projection of a point expressed in spherical coordinates $(1,\phi,\theta)$ onto the plane. As a result, the corresponding points on the sphere are nicely distributed. Note, for example, that the number of points inside the unit circle is the same as the number outside it.
Graphics3D[{{Opacity[0.8], Sphere[]},
Point[Flatten[Table[{Cos[theta] Sin[phi], Sin[theta] Sin[phi], Cos[phi]},
{phi, step, Pi - step, step}, {theta, -Pi, Pi, step}], 1]]}]
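The projection formula itself is easy to check: projecting the spherical point $(1,\phi,\theta)$ from the north pole onto the equatorial plane gives exactly $\cot(\phi/2)e^{i\theta}$. A small Python verification (illustrative, not part of the answer):

```python
import cmath
import math

def sphere_point(phi, theta):
    """Point (1, phi, theta) in spherical coordinates as Cartesian (x, y, z)."""
    return (math.sin(phi) * math.cos(theta),
            math.sin(phi) * math.sin(theta),
            math.cos(phi))

def stereographic(p):
    """Project (x, y, z) on the unit sphere from the north pole (0, 0, 1)
    onto the plane z = 0, returned as a complex number."""
    x, y, z = p
    return complex(x, y) / (1 - z)

# Compare against the closed form cot(phi/2) * exp(i*theta) on a small grid.
max_err = 0.0
for i in range(1, 12):
    phi = math.pi * i / 12
    for j in range(24):
        theta = -math.pi + 2 * math.pi * j / 24
        w1 = stereographic(sphere_point(phi, theta))
        w2 = (1 / math.tan(phi / 2)) * cmath.exp(1j * theta)
        max_err = max(max_err, abs(w1 - w2))
print(max_err)  # agrees to floating-point precision
```

This is why sampling $\phi$ and $\theta$ uniformly, then mapping through $\cot(\phi/2)e^{i\theta}$, produces planar samples that wrap onto the sphere with minimal distortion.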
Now, we increase the resolution and use the same limitInfo and color functions as before.
step = Pi/500;
limitData = limitInfo[Table[Cot[phi/2] Exp[I*theta],
{phi, step, Pi - step, step}, {theta, -Pi, Pi, step}]];
colors = Apply[color, limitData, {2}];
rect = Image[colors, ImageSize -> 4/step]
The image looks a bit different, but it's perfect for use as a spherical texture.
ParametricPlot3D[{Cos[theta] Sin[phi], Sin[theta] Sin[phi], Cos[phi]} ,
{theta, -Pi, Pi}, {phi, 0, Pi}, Mesh -> None, PlotPoints -> 100,
Boxed -> False, PlotStyle -> Texture[Show[rect]],
Lighting -> "Neutral", Axes -> False]
We can incorporate all of this into a Module.
newtonSphere[fIn_, var_, resolution_, bail_: 50, r_: 0.01] := Module[
{f, n, limitInfo, color, colors, roots, preColors, step, limitData, rect},
f = Function[var, fIn];
n = Function[var, Evaluate[Simplify[var - f[var]/f'[var]]]];
limitInfo = With[{bailLoc = bail, rLoc = r, fLoc = f, nLoc = n},
Compile[{{z0, _Complex}},
Module[{z, cnt},
cnt = 0; z = z0;
While[Abs[fLoc[z]] > rLoc && cnt < bailLoc,
z = nLoc[z];
cnt = cnt + 1
];
{z, cnt}],
RuntimeAttributes -> {Listable},
Parallelization -> True,
RuntimeOptions -> "Speed"
]];
roots = z /. NSolve[f[z] == 0, z];
preColors = List @@@ Table[ColorData[61, k], {k, 1, Length[roots]}];
preColors = Append[preColors, {0.0, 0.0, 0.0}];
color = With[{bailLoc = bail, rootsLoc = roots, preColorsLoc = preColors},
Compile[{{z, _Complex}, {cnt, _Complex}},
Module[{arg, time, i},
arg = Arg[z];
time = Abs[cnt];
i = 1;
Scan[If[Abs[z - #] < 0.1, Return[i], i++] &, rootsLoc];
preColorsLoc[[i]]*(cnt/bailLoc)^(0.2)
]]
];
step = Pi/resolution;
limitData = limitInfo[
Table[Cot[phi/2] Exp[I*theta], {phi, step, Pi - step,
step}, {theta, -Pi, Pi, step}]];
colors = Apply[color, limitData, {2}];
rect = Image[colors, ImageSize -> 4/step];
ParametricPlot3D[{Cos[theta] Sin[phi], Sin[theta] Sin[phi],
Cos[phi]} ,
{theta, -Pi, Pi}, {phi, 0, Pi}, Mesh -> None, PlotPoints -> 100,
Boxed -> False, PlotStyle -> Texture[Show[rect]],
Lighting -> "Neutral", Axes -> False]
];
Now, if I had to guess, I'd say that example image in the original post was generated by a small perturbation of $z^8-z^2$.
newtonSphere[(2 z/3)^8 - (2 z/3)^2 + 1/10, z, 500]
Here are a few more examples.
pic1 = newtonSphere[z^2 - 1, z, 401];
SeedRandom[1];
pic2 = newtonSphere[Sum[RandomInteger[{-3, 5}] z^k, {k, 0, 8}], z, 400];
pic3 = newtonSphere[z^10 - z^5 - 1, z, 400, 200];
pic4 = newtonSphere[z^5 - z - 0.99, z, 400];
GraphicsGrid[{
{pic1, pic2},
{pic3, pic4}
}]
In the top row, we see that the result for quadratic polynomials is typically rather boring, while that for a random degree 8 polynomial can be quite cool. On the bottom right, we see a black region. The color function is set up to default to black when none of the roots are detected. This can certainly happen; in fact, the Newton iteration function for this example has an attractive orbit of period 6, leading to the quadratic-like Julia set seen in the image. Sometimes black can occur simply because we didn't iterate enough, which is why I used the optional fourth argument for the image in the bottom left.
Now imagine those hanging from a fractal christmas tree. (+1) – Jens Dec 24 '12 at 17:40
Here is my modest attempt, based on the formulae for stereographic projection in this Wikipedia entry (where the north pole corresponds to the point at infinity) and using a technique similar to the one in this answer:
newtonRaphson = Compile[{{n, _Integer}, {c, _Complex}},
Arg[FixedPoint[(# - (#^n - 1)/(n #^(n - 1))) &, c, 30]]]
tex = Image[DensityPlot[
newtonRaphson[3, Cot[ϕ/2] Exp[I θ]], {θ, -π, π}, {ϕ, 0, π},
AspectRatio -> Automatic,
ColorFunction -> (Which[# < .3, Red, # > .7, Yellow, True, Blue] &),
Frame -> False, ImagePadding -> None, PlotPoints -> 400,
PlotRange -> All, PlotRangePadding -> None],
ImageResolution -> 256];
(* yes, I know that I could have used SphericalPlot3D[]... *)
ParametricPlot3D[{Sin[ϕ] Cos[θ], Sin[ϕ] Sin[θ], Cos[ϕ]}, {θ, -π, π}, {ϕ, 0, π},
Axes -> None, Boxed -> False, Lighting -> "Neutral", Mesh -> None,
PlotStyle -> Texture[tex], TextureCoordinateFunction -> ({#4, #5} &)]
Thanks for your response. There are two problems. 1. How can one zoom in on a particular place on this sphere without lowering the quality, to observe the fractal behaviour of the method? 2. The "space size" of the output image? In fact, how can one save the output fractal image with a small disk size, without lowering the quality, in EPS format? For example, for $n=8$, its size is more than 2MB! – Fazlollah Soleymani Nov 22 '12 at 21:16
You'll have to play with PlotPoints and ImageResolution on your own, of course... – Guess who it is. Nov 22 '12 at 23:13
Thanks, the problem is that when we reduce PlotPoints or ImageResolution, the quality drops dramatically. I am looking for a fast way to obtain high-quality pictures with a small file size, just like the one given in the question. Rasterize@... is a good choice, but it disables the feature of rotating the 3D picture, and it also lowers the quality. – Fazlollah Soleymani Nov 23 '12 at 8:20
Well, I'd say this is one of those "no such thing as a free lunch" things. If you want to be able to zoom, you certainly need high resolution, which will demand more of your computer's resources... if you want small file sizes, you'll have to sacrifice quality somewhat. Scylla and Charybdis, you know... – Guess who it is. Nov 23 '12 at 9:49
img2 = ImageCrop[Image[Graphics[{Raster[cols]}, PlotRangePadding -> 0,
ImagePadding -> 0, ImageMargins -> 0]], {343, 343}];
SphericalPlot3D[1 , {u, 0, Pi}, {v, 0, 2 Pi}, Mesh -> None,
TextureCoordinateFunction -> ({#1, #2} &),
PlotStyle -> Directive[Specularity[White, 10], Texture[img2]],
Lighting -> "Neutral", Axes -> False, ImageSize -> 500]
where img2 is cropped version of the 2D image in OP's question.
Check Texture for more examples.
Thanks for your reply, but there are big white circles at the middle of each basin. They should not be here. Please rotate your image, and then you will see the incomplete fractal image. Can you solve this drawback? – Fazlollah Soleymani Nov 22 '12 at 20:59
img2 is the cropped version of your 2D image: img2=ImageCrop[ Image[Graphics[{Raster[cols]}, PlotRangePadding -> 0, ImagePadding -> 0, ImageMargins -> 0]], {343, 343}]. – kglr Nov 22 '12 at 21:50
Now that I think about it, one could have directly produced an image instead of going through the Raster[] route: Image[cols]. – Guess who it is. Nov 24 '12 at 14:12
I've decided to write a simplification+extension of Mark's routine as a separate answer. In particular, I wanted a routine that yields Riemann sphere fractals not only for Newton-Raphson, but also its higher-order generalizations (e.g. Halley's method).
I decided to use Kalantari's "basic iteration" family for the purpose. An $n$-th order member of the family looks like this:
$$x_{k+1}=x_k-f(x_k)\frac{\mathcal D_{n-1}(x_k)}{\mathcal D_n(x_k)}$$
where
$$\mathcal D_0(x_k)=1,\qquad\mathcal D_n(x_k)=\begin{vmatrix}f^\prime(x_k)&\tfrac{f^{\prime\prime}(x_k)}{2!}&\cdots&\tfrac{f^{(n-2)}(x_k)}{(n-2)!}&\tfrac{f^{(n-1)}(x_k)}{(n-1)!}\\f(x_k)&f^\prime(x_k)&\ddots&\vdots&\tfrac{f^{(n-2)}(x_k)}{(n-2)!}\\&f(x_k)&\ddots&\ddots&\vdots\\&&\ddots&\ddots&\vdots\\&&&f(x_k)&f^\prime(x_k)\end{vmatrix}$$
As noted in that paper, the basic family generalizes the Newton-Raphson iteration; $n=1$ corresponds to Newton-Raphson, while $n=2$ gives Halley's method. (Relatedly, see also Kalantari's work on polynomiography.)
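To make the first two members concrete: the determinant formula gives $\mathcal D_1 = f'$ and $\mathcal D_2 = f'^2 - f\,f''/2$, so $n=1$ is Newton's method and $n=2$ is Halley's. A small Python sketch (illustrative only, not part of the original Mathematica answer) for $f(z)=z^3-1$:

```python
# The n = 1 and n = 2 members of Kalantari's basic family for
# f(z) = z^3 - 1, written out from the determinant formula:
#   D_1 = f',  D_2 = f'^2 - f*f''/2,
#   Newton: z - f/f',  Halley: z - f*D_1/D_2.

def f(z):   return z**3 - 1
def fp(z):  return 3 * z**2
def fpp(z): return 6 * z

def newton_step(z):
    return z - f(z) / fp(z)

def halley_step(z):
    return z - f(z) * fp(z) / (fp(z)**2 - f(z) * fpp(z) / 2)

def iterate(step, z, tol=1e-12, bail=100):
    """Apply `step` until |f(z)| < tol; return (approximate root, steps)."""
    for k in range(bail):
        if abs(f(z)) < tol:
            return z, k
        z = step(z)
    return z, bail

z0 = 0.7 + 0.3j
zn, kn = iterate(newton_step, z0)
zh, kh = iterate(halley_step, z0)
print(kn, kh)  # Halley (cubic convergence) typically needs fewer steps
```

Both iterations land on a cube root of unity; the higher-order members of the family trade extra derivative evaluations for a faster convergence rate.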
Here's a routine for $\mathcal D_n(x)$:
iterdet[f_, x_, 0] := 1;
iterdet[f_, x_, n_Integer?Positive] := Det[ToeplitzMatrix[PadRight[{D[f, x], f}, n],
Table[SeriesCoefficient[Function[x, f]@\[FormalX], {\[FormalX], x, k}], {k, n}]]]
Here is the routine for generating the Riemann sphere fractals:
Options[rootFractalSphere] = {ColorFunction -> Automatic, ImageResolution -> 400,
MaxIterations -> 50, Order -> 1, Tolerance -> 0.01};
rootFractalSphere[fIn_, var_, opts : OptionsPattern[]] /; PolynomialQ[fIn, var] :=
Module[{γ = 0.2, bail, cf, colList, f, h, itFun, ord, roots, tex, tol},
f = Function[var, fIn];
ord = OptionValue[Order];
itFun = Function[var, var - Simplify[f[var] iterdet[f[var], var, ord - 1]/
iterdet[f[var], var, ord]] // Evaluate];
roots = var /. NSolve[f[var], var];
cf = OptionValue[ColorFunction];
If[cf === Automatic, cf = ColorData[61]];
colList = Append[Table[List @@ ColorConvert[cf[k], RGBColor], {k, Length[roots]}],
{0., 0., 0.}];
bail = OptionValue[MaxIterations]; tol = OptionValue[Tolerance];
makeColor = Compile[{{z0, _Complex}},
Module[{cnt = 0, i = 1, z},
z = FixedPoint[(++cnt; itFun[#]) &, z0, bail,
SameTest -> (Abs[f[#2]] < tol &)];
Scan[If[Abs[z - #] < 10 tol, Return[i], i++] &, roots];
Abs[colList[[i]] (cnt/bail)^γ]],
CompilationOptions -> {"InlineExternalDefinitions" -> True},
RuntimeAttributes -> {Listable}, RuntimeOptions -> "Speed"];
h = π/OptionValue[ImageResolution];
tex = Developer`ToPackedArray[makeColor[
Table[Cot[φ/2] Exp[I θ], {φ, h, π - h, h}, {θ, -π, π, h}]]];
ParametricPlot3D[{Cos[θ] Sin[φ], Sin[θ] Sin[φ], Cos[φ]}, {θ, -π, π}, {φ, 0, π},
Axes -> False, Boxed -> False, Lighting -> "Neutral", Mesh -> None,
PlotPoints -> 75, PlotStyle -> Texture[tex],
Evaluate[Sequence @@ FilterRules[{opts}, Options[Graphics3D]]]]]
Other notes:
• The compiled functions limitInfo[] and color[] have been merged into the single function makeColor[]. This function was not localized on purpose to allow its use even after executing rootFractalSphere[].
• Texture[] can directly accept an array of RGB triplets, so there is no need to use Image[] if these triplets are being generated directly by makeColor[].
Now, for some examples. The first two are Newton-Raphson fractals:
rootFractalSphere[z^3 - 1, z]
rootFractalSphere[(2 z/3)^8 - (2 z/3)^2 + 1/10, z]
Here is a fractal generated by Halley's method:
rootFractalSphere[(2 z/3)^8 - (2 z/3)^2 + 1/10, z, Order -> 2]
Finally, a fractal from a third order iteration:
rootFractalSphere[z^10 - z^5 - 1, z, ColorFunction -> ColorData[54],
MaxIterations -> 200, Order -> 3]
Thanks. An excellent answer. Just one note: there might be some diverging points (black areas) in the fractal picture for some test problems. On the other hand, we used a color corresponding to each point on a sphere or in a rectangular domain, so the working domain (I mean the mesh of points) is finite. Is it possible to count the number of diverging points? I mean, it would be nice to have the percentage of diverging points for each fractal picture. Is it possible to count the number of diverging points in your implementation? – Fazlollah Soleymani Apr 6 '13 at 12:25 | 2015-07-01T23:25:08 | {
"domain": "stackexchange.com",
"url": "http://mathematica.stackexchange.com/questions/15047/how-to-draw-fractal-images-of-iteration-functions-on-the-riemann-sphere?answertab=votes",
"openwebmath_score": 0.42025870084762573,
"openwebmath_perplexity": 4975.400829917264,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.97112909472487,
"lm_q2_score": 0.8824278788223265,
"lm_q1q2_score": 0.8569513871207133
} |
http://descarga.nu/ujwn9x/quadratic-trinomial-examples-eb1a1a | The Distributive Law is used in reverse to factorise a quadratic trinomial, as illustrated below.. We can use the box method to factorise a quadratic trinomial. For example: Here b = –2, and c = –15. But a "trinomial" is any three-term polynomial, which may not be a quadratic (that is, a degree-two) polynomial. Free online science printable worksheets for year 11, solve quadratic java, sample algebra test solving addition equations, College algebra tutorial. Trinomial. Examples are 7a2 + 18a - 2, 4m2, 2x5 + 17x3 - 9x + 93, 5a-12, and 1273. To figure out which it is, just carry out the O + I from FOIL. A quadratic trinomial is a polynomial with three terms and the degree of the trinomial must be 2. Think FOIL. Example 1; Example 2; Example 3; Example 4; Example 5; Example 1 Example. Quadratic equation of leading coefficient 1. Let’s consider two cases: (1) Leading coefficient is one, a = 1, and (2) leading coefficient is NOT 1, a ≠ 1. Factor a Quadratic Trinomial. So (3x5)2 = 9x10. x2 + 20x + ___ x2 - 4x + ___ x2 + 5x + ___ 100 4 25/4 … Example 6: A quadratic relation has an equation in factored form. Yes, … Vocabulary. This math video tutorial shows you how to factor trinomials the easy fast way. By the end of this section, you will be able to: Factor trinomials of the form ; Factor trinomials of the form ; Before you get started, take this readiness quiz. Do you see how all three terms are present? The x-intercepts of the parabola are − 4 and 1. Again, think about FOIL and where each term in the trinomial came from. For example: $$x^2 + y^2 + xy$$ and $$x^2 + 2x + 3xy$$. The general form of a quadratic equation is. Expand the equation (2x – 3) 2 = 25 to get; 4x 2 – 12x + 9 – 25 = 0 4x 2 – 12x – 16 = 0. Example … You need to think about where each of the terms in the trinomial came from. Simplify: ⓐ ⓑ If you missed this problem, review . 
The general form of a quadratic trinomial is ax 2 + bx + c, where a is the leading coefficient (number in front of the variable with highest degree) and c is the constant (number with no variable). Log base change on the TI-89, cube root graph, adding/subtracting positive and negative numbers, java applet factoring mathematics algebra, trigonometry questions, algebra1 prentice hall. Following is an explanation of polynomials, binomials, trinomials, and degrees of a polynomial. Remember, when a term with an exponent is squared, the exponent is multiplied by 2, the base is squared. In a quadratic equation, leading coefficient is nothing but the coefficient of x 2. For example, x + 2. The answer would be 5 and 6. Step 1: Identify if the trinomial is in quadratic form. We require two numbers that multiply to – 18 and add to 7. ax 2 + bx + c. 3x 2 + 7x – 6. ac = 3 × – 6 … Example 3. On this page we will learn what a trinomial in quadratic form is, and what a trinomial in quadratic form is not. In the next section, we will address the technique used to factor $$ax^2+bx+c$$ when $$a \neq 1$$. A quadratic trinomial is a trinomial of which the highest power of any variable is two. Learn how to factor quadratic expressions as the product of two linear binomials. A special type of trinomial can be factored in a manner similar to quadratics since it can be viewed as a quadratic in a new variable (x n below). A binomial is a … In general g(x) = ax 2 + bx + c, a ≠ 0 is a quadratic polynomial. (2x + ? Solution . Quality resources and hosting are expensive, Creative Commons Attribution 4.0 International License. Now you’ll need to “undo” this multiplication—to start with the product and end up with the factors. Below are 4 examples of how to use algebra tiles to factor, starting with a trinomial where A=1 (and the B and C values are both positive), all the way to a trinomial with A>1 (and negative B and/or C values). 
Jenn, Founder Calcworkshop ®, 15+ Years Experience (Licensed & Certified Teacher) Finding the degree of a polynomial is nothing more than locating the largest … To factorise a quadratic trinomial. $$\text{Examples of Quadratic Trinomials}$$ For example, let us apply the AC test in factoring 3x 2 + 11x + 10. trinomial, as illustrated below. Factoring Trinomials Formula, factoring trinomials calculator, factoring trinomials a 1,factoring trinomials examples, factoring trinomials solver. Summary: A quadratic form trinomial is of the form axk + bxm + c, where 2m = k. It is possible that these expressions are factorable using techniques and methods appropriate for quadratic equations. It is due to the presence of three, unlike terms, namely, 3x, 6x 2 and 2x 3. That would be a – 5 and a + 3. For example, 2x 2 − 7x + 5. If you experience difficulties when using this Website, tell us through the feedback form or by phoning the contact telephone number. In the given trinomial, the product of A and C is 30. We begin by showing how to factor trinomials having the form $$ax^2 + bx + c$$, where the leading coefficient is a = 1; that is, trinomials having the form $$x^2+bx+c$$. Quadratic equation of leading coefficient not equal to 1. Below are 4 examples of how to use algebra tiles to factor, starting with a trinomial where A=1 (and the B and … This is a quadratic form polynomial because the second term’s variable, x3, squared is the first term’s variable, x6. Solving Trinomial Equations Using The Quadratic Formula, Algebra free worked examples for children in 3rd, 4th, 5th, 6th, 7th & 8th grades, worked algebra problems, solutions to algebra questions for children, algebra topics with worked exercises on , inequalities, intergers, logs, polynomials, angles, linear equations, quadratic equation, monomials & more This page will focus on quadratic trinomials. Which of the following is a quadratic? All my letters are being represented by numbers. The last term is plus. 
A special type of trinomial can be factored in a manner similar to quadratics since it can be viewed as a quadratic in a new variable (x n below). NCERT Solutions For Class 12 Physics; NCERT Solutions For Class 12 Chemistry ; NCERT Solutions For Class 12 Biology; NCERT Solutions For Class 12 Maths; NCERT Solutions … And the middle term's coefficient is also plus. Let’s look first at trinomials with only the middle term negative. For example, the box for is: \begin{array}{|c|c|c} \hline x^2 & 3x & x \\ \hline 2x & 6 & 2 \\ \hline x & 3 \\ \end{array} Therefore Factoring quadratic trinomial and how to factor by grouping. Generally we have two types of quadratic equation. What happens when there are negative terms? This will help you see how the factoring works. The solution a 1 = 2 and a 2 = 1 of the above system gives the trinomial factoring: (x 2 + 3x+ 2) = (x + a 1)(x + a 2) … Factorise 3x 2 + 7x – 6. This part will focus on factoring a quadratic when a, the x 2-coefficient, is 1. Solution. x is being squared. Likewise, 11pq + 4x 2 –10 is a trinomial. The product of two linear factors yields a quadratic trinomial; and the D is a perfect square because it is the square of 5. The tricky part here is figuring out the factors of 8 and 30 that can be arranged to have a difference of 43. b) Write the equation in vertex form. So either -5 × 1 or 5 × -1. Now hopefully, we have got the basic difference between Monomial, Binomial and Trinomial. If you're behind a web filter, please make sure that … Solution: Find the product of the first and the last constants. Examples, solutions, videos, worksheets, ... Scroll down the page for more examples and solutions of factoring trinomials. Solve the following quadratic equation (2x – 3) 2 = 25. NCERT Solutions. Following is an example of trinomial: x 3 + x 2 + 5x 2x 4 -x 3 + 5 A trinomial meaning in math is, it is a type of polynomial that contains only three terms. 
For example, a univariate (single-variable) quadratic function has the form = + +, ≠in the single variable x.The graph of a univariate quadratic function is a parabola whose axis of symmetry is parallel to the y-axis, as shown at right.. (y+a) (y+b) = y (y+b) + a (y+b) = y 2 + by + ay + ab = y 2 + y (a+b) + ab … Expand the equation (2x – 3) 2 = 25 to get; 4x 2 – 12x + 9 – 25 = 0 4x 2 – 12x – 16 = 0. Perfect Square Trinomial – Explanation & Examples A quadratic equation is a polynomial of second degree usually in the form of f(x) = ax 2 + bx + c where a, b, c, ∈ R and a ≠ 0. Donate Login … How to factor a quadratic trinomial: 5 examples and their solutions. A trinomial is a sum of three terms, while a multinomial is more than three. A trinomial is a polynomial or algebraic expression, which has a maximum of three non-zero terms. Problem 1. This part will focus on factoring a quadratic when a, the x 2 -coefficient, is 1. a x 2 + b x + c = 0 → (x + r) (x + s) Let's solve the following equation by factoring the trinomial: Quadratic is another name for a polynomial of the 2nd degree. A few examples of trinomial expressions are: – 8a 4 +2x+7; 4x 2 + 9x + 7; Monomial: Binomial: Trinomial: One Term: Two terms: Three terms: Example: x, 3y, 29, x/2: Example: x 2 +x, x 3-2x, y+2: Example: x 2 +2x+20: Properties . Use the tabs below to navigate through the notes, video, and practice problems. How to factor a quadratic trinomial: 5 examples and their solutions. In this post, I want to focus on that last topic -- using algebra tiles to factor quadratic trinomials. Remember: To get a negative sum and a positive product, the numbers must both be negative. ax 2 + bx + c = 0. (Lesson 13: Exponents.) For example, the polynomial (x 2 + 3x + 2) is an example of this type of trinomial with n = 1. )(x + ?) Australian Business Number 53 056 217 611, Copyright instructions for educational institutions. So the book's section or chapter title is, at best, a bit off-target. 
Start from finding the factors of +2. This form is factored as: + + = (+) (+), where + = ⋅ =. Let’s see another example, here where a is not one. 15 Factor Quadratic Trinomials with Leading Coefficient 1 Learning Objectives. $$(x − 5)$$ and $$(x + 3)$$ are factors of $$x^2 − 2x … A quadratic trinomial is factorable if the product of A and C have M and N as two factors such that when added would result to B. c) Sketch a graph of the relation and label all features. Quadratic Polynomial. Since (x2)2 = x4, and the second term is x4, then n = 2. Non-Example: These trinomials are not examples of quadratic form. Step 3: Apply the appropriate factoring technique. Example 1. Binomials. For more practice on this technique, please visit this page. A binomial is a sum of two terms. Factorise by grouping the four terms into pairs. Website and our Privacy and Other Policies. Previously, we went over how to factor out a quadratic trinomial with a leading coefficient of 1. They take a lot of the guesswork out of factoring, especially for trinomials that are not easily factored with other methods. 6 or D = 25. If a polynomial P(x) is … It is the correct pair … NCERT Solutions For Class 12. Let’s look at this quadratic form trinomial and a quadratic with the same coefficients side by side. Example 1: Factor the trinomial x^2+7x+10 x2 + 7x + 10 as a product of two binomials. Obviously, this is an “easy” case because the coefficient of the squared term x x is just 1. If a is one, then we just need to find what two numbers have the product c and the sum of b. Factorising an expression is to write it as a product of its factors. Well, it depends which term is negative. In this quadratic, 3x 2 + 2x − 1, the constants are 3, 2, −1. Factoring quadratic trinomials using the AC Method. So, n = 3. For example, 2x²+7x+3=(2x+1)(x+3). Factoring Trinomials (Quadratics) : Method With Examples Consider the product of the two linear expressions (y+a) and (y+b). 
Don't worry about the difference, though; the book's title means … It means that the highest power of the variable cannot be greater than 2. 6, the independent term, is the product of 2 and 3. If you’re a teacher and would like to use the materials found on this page, click the teacher button below. The argument appears in the middle term. Then, find the two factors of 30 that will produce a sum of 11. A polynomial formed by the sum of only three terms (three monomials) with different degrees is known as a trinomial. Let’s begin with an example. It consists of only three variables. write the expression in the form ax 2 + bx + c; find two numbers that both multiply to ac and add to b; split the middle term bx into two like terms using those two numbers as coefficients. Let’s look at an example of multiplying binomials to refresh your memory. There are 4 methods: common factor, difference of two squares, trinomial/quadratic expression and completing the square. A polynomial is an algebraic expression with a finite number of terms. A quadratic trinomial is a trinomial in which the highest exponent or power is two, or the second power. Please read the Terms and Conditions of Use of this Quadratic trinomials with a leading coefficient of one. Factoring quadratic trinomial and how to factor by grouping. a, b, c are called constants. factors, | Home Page | Order Maths Software | About the Series | Maths Software Tutorials | The x-intercepts of the parabola are − 4 and 1. Let's take an example. FACTORING 2. Example are: 2x 2 + y + z, r + 10p + 7q 2, a + b + c, 2x 2 y 2 + 9 + z, are all trinomials having three variables. A polynomial having its highest degree 2 is known as a quadratic polynomial. Tie together everything you learned about quadratic factorization in order to factor various quadratic expressions of any form. This video contains plenty of examples and practice ... 
In general, the trinomial $$ax^2 + bx + c$$ is a perfect square if the discriminant is zero; that is, if $$b^2 - 4ac = 0$$, because in this case it has only one root and can be expressed in the form $$a(x - d)^2 = (\sqrt{a}(x - d))^2$$, where $$d$$ is that root. Consider the expansion of $$(x + 2)(x + 3)$$. We notice that: 5, the coefficient of $$x$$, is the sum of 2 and 3; 6, the independent term, is the product of 2 and 3. Note: The product of two linear factors yields a quadratic trinomial; and the factors of a quadratic trinomial are linear factors. There is one last factoring method you'll need for this unit: factoring quadratic form polynomials. So, $$n = 5$$. Exercise 2.1. There are three main ways of solving quadratic equations: 1. factoring; 2. completing the square; 3. the quadratic formula. Example 3. $$(x − 5)(x + 3) = x^2 − 2x − 15$$. Here, we have multiplied two linear factors to obtain a quadratic expression by using the distributive law. Factoring Polynomials - Standard Trinomials (Part 1): factoring polynomials of the form $$ax^2 + bx + c$$. Solution. Factoring a quadratic is an approach to finding the roots of a quadratic equation. A polynomial having its highest degree 3 is known as a cubic polynomial. Year 10 Interactive Maths - Second Edition. Example: $$x^2 - 12x + 27$$, with $$a = 1$$, $$b = -12$$, $$c = 27$$. Solving quadratic equations by factoring is all about writing the quadratic function as a product of two binomial functions of one degree each. For example, to solve $$(2x - 3)^2 = 25$$: expanding gives $$4x^2 - 12x - 16 = 0$$; divide each term by 4 to get $$x^2 - 3x - 4 = 0$$, so $$(x - 4)(x + 1) = 0$$ and $$x = 4$$ or $$x = -1$$.
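The perfect-square test just described, that $$ax^2 + bx + c$$ is a perfect square exactly when $$b^2 - 4ac = 0$$, is easy to check mechanically. Here is a small illustrative sketch (mine, not part of the original page):

```python
def is_perfect_square_trinomial(a, b, c):
    """ax^2 + bx + c is a perfect square iff its discriminant b^2 - 4ac is 0."""
    return b * b - 4 * a * c == 0

# x^2 + 14x + 49 = (x + 7)^2, so the test passes:
assert is_perfect_square_trinomial(1, 14, 49)
# x^2 - 12x + 27 = (x - 3)(x - 9) has two distinct roots, so it fails:
assert not is_perfect_square_trinomial(1, -12, 27)
```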
Some of the important properties of polynomials, along with some important polynomial theorems, are as follows. Property 1: Division Algorithm. Multiply: if you missed this problem, review. b) Write the equation in vertex form. FACTORING QUADRATIC TRINOMIALS, Example 4: $$x^2 + 11x - 26$$. Step 2: Factor the first term, which is $$x^2$$: (x )(x ). Step 4: Check the middle term: $$(x + 13)(x - 2)$$ gives $$13x$$ (multiply 13 and $$x$$) plus $$-2x$$ (multiply $$-2$$ and $$x$$), and adding the 2 terms gives $$11x$$. The numbers that multiply to $$-50$$ and add to $$+5$$ are $$-5$$ and $$+10$$. If you need a refresher on factoring quadratic equations, please visit this page. Each factor is a difference of squares! Key terms: quadratic trinomial, independent term, coefficient, linear factor. Here is the form of a quadratic trinomial with argument $$x$$: $$ax^2 + bx + c$$. The argument is whatever is being squared. Let's factor a quadratic form trinomial where $$a = 1$$. The term $$a$$ is referred to as the leading coefficient, while $$c$$ is referred to as the absolute term of $$f(x)$$. These terms are in the form "$$ax^n$$", where "$$a$$" is a real number, "$$x$$" means to multiply, and "$$n$$" is a non-negative integer. In Equation (i), the product of the coefficient of $$y^2$$ and the constant term is $$ab$$, and the coefficient of $$y$$ is $$a + b$$, the sum of the factors. Solving Quadratic Equations by Factoring with a Leading Coefficient of 1: Procedure. It does not mean that a quadratic trinomial always turns into a quadratic equation when we equate it to zero. $$x$$ is being squared.
It is called "Factoring" because we find the factors (a factor is something we multiply by). Example: Multiplying $$(x+4)$$ and $$(x−1)$$ together (called Expanding) gets $$x^2 + 3x − 4$$. So $$(x+4)$$ and $$(x−1)$$ are factors of $$x^2 + 3x − 4$$. Types of Quadratic Trinomials. An example of a quadratic trinomial is $$2x^2 + 6x + 4$$. Factors of Quadratic Trinomials of the Type $$x^2 + bx + c$$: the Distributive Law is used in reverse to factorise a quadratic trinomial, as illustrated below. For example, $$3x + 6x^2 − 2x^3$$ is a trinomial. To "Factor" (or "Factorise" in the UK) a quadratic is to find what to multiply to get the quadratic. Show Step-by-step Solutions. In other words, if you have a trinomial with a constant term, and the larger exponent is double the first exponent, the trinomial is in quadratic form. In the examples so far, all terms in the trinomial were positive. The product's factor pair that, when added, yields the middle constant $$−8$$ is $$−14$$ and $$6$$. The middle term's coefficient is plus. Just as before, the first term comes from the F in FOIL, the product of the first terms of the binomials; the last term, $$−5$$, comes from the L, the product of the last terms of the binomials. However, this quadratic form polynomial is not completely factored. Now here is a quadratic whose argument is $$x^3$$: $$3x^6 + 2x^3 − 1$$. $$x^6$$ is the square of $$x^3$$. Example 12. Hence, the given trinomial is factorable.
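The sum-and-product search behind examples like $$x^2 + 3x − 4$$ (find integers $$m$$ and $$n$$ with $$m + n = b$$ and $$m \cdot n = c$$) can be sketched in a few lines. This is my own illustration, assuming an integer constant term $$c \neq 0$$:

```python
def factor_monic(b, c):
    """Return integers (m, n) with m + n = b and m * n = c, so that
    x^2 + b*x + c = (x + m)(x + n); None if no integer pair exists.
    Assumes c != 0."""
    for m in range(-abs(c), abs(c) + 1):
        if m != 0 and c % m == 0:
            n = c // m
            if m + n == b:
                return (m, n)
    return None

assert factor_monic(3, -4) == (-1, 4)   # x^2 + 3x - 4 = (x - 1)(x + 4)
assert factor_monic(7, 10) == (2, 5)    # x^2 + 7x + 10 = (x + 2)(x + 5)
```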
1) $$x^2 − 7x − 18$$; 2) $$p^2 − 5p − 14$$; 3) $$m^2 − 9m + 8$$; 4) $$x^2 − 16x + 63$$; 5) $$7x^2 − 31x − 20$$; 6) $$7k^2 + 9k$$; 7) $$7x^2 − 45x − 28$$; 8) $$2b^2 + 17b + 21$$; 9) $$5p^2 − p − 18$$; 10) $$28n^4 + 16n^3 − 80n^2$$. This is true, of course, when we solve a quadratic equation by completing the square too. Factor a Quadratic Trinomial. If the sum of the terms is the middle term in the given quadratic trinomial, then the factors are correct. The root of a polynomial is a number where the polynomial becomes zero; in other words, a number that, when substituted for $$x$$ in the polynomial, makes it vanish. $$x^2 + 14x + \_\_\_\_$$: find the constant term by squaring half the coefficient of the linear term, $$(14/2)^2$$, giving $$x^2 + 14x + 49$$. Perfect Square Trinomials: create perfect square trinomials. $$x$$ is called the argument. It's really all about the exponents, you'll see. Algebra tiles are a perfect way to introduce and practice this concept. Think of a pair of numbers whose sum is the coefficient of the middle term, +3, and whose product is the last term, +2. This lesson is all about quadratic polynomials in standard form. (Don't worry about the difference, though; the book's title means the same thing as what this lesson explains.) If you know how to factor a quadratic expression, then you can factor a trinomial in quadratic form without issue. If the quadratic function is set equal to zero, then the result is a quadratic equation; the solutions to that equation are called the roots of the function. $$a, b, c$$ are called constants. Example 7: Factor the trinomial $$4x^2 - 8x - 21$$ as a product of two binomials.
This is a quadratic form polynomial because the second term's variable, $$x^3$$, squared is the first term's variable, $$x^6$$. Answer: $$(x + 13)(x - 2)$$. Step 1: Write 2 parentheses. There are many methods of factorizing quadratic equations. An example of a quadratic polynomial is given in the image. The trinomial examples above use one variable only; let's take a few more trinomial examples with multiple variables. Solution. To factorise a quadratic trinomial, find two numbers whose sum is equal to the coefficient of $$x$$, and whose product is equal to the independent term. Factoring Quadratic Expressions: factor each completely. Example 5: Consider the quadratic relation $$y = 3x^2 - 6x - 24$$. a) Write the equation in factored form. The degree of a quadratic trinomial must be 2. In other words, there must be an exponent of 2, and that exponent must be the greatest exponent. Some examples are: $$x^2 + 3x - 3 = 0$$; $$4x^2 + 9 = 0$$ (where $$b = 0$$); $$x^2 + 5x = 0$$ (where $$c = 0$$). One way to solve a quadratic equation is by factoring the trinomial. This form is factored as: $$x^2 + bx + c = (x + m)(x + n)$$, where $$m + n = b$$ and $$m \cdot n = c$$. This is a quadratic form trinomial because the last term is constant (not multiplied by $$x$$), and $$(x^5)^2 = x^{10}$$. THE QUADRATIC FORMULA. FACTORING: every quadratic equation has two values of the unknown variable, usually known as the roots of the equation ($$\alpha$$, $$\beta$$). Trinomials: expressions with three unlike terms, hence the name "tri"nomial. Just to be sure, let us check: $$(x+4)(x−1) = x(x−1) + 4(x−1) = x^2 − x + 4x − 4 = x^2 + 3x − 4$$.
Solve Quadratic Equations of the Form $$x^2 + bx + c = 0$$ by Completing the Square. Worked-out examples. 1. Solving quadratic equations by factoring: i) What is factoring the quadratic equation? Here is a look at the tiles in this post: in my set of algebra tiles, the same-size tiles are double-sided, with + on one side and − on the other. The expressions $$x^2 + 2x + 3$$, $$5x^4 - 4x^2 +1$$ and $$7y - \sqrt{3} - y^2$$ are trinomial examples. And not all quadratics have three terms. But a "trinomial" is any three-term polynomial, which may not be a quadratic (that is, a degree-two) polynomial. Here's an example: the first term, $$2x^2$$, comes from the product of the first terms of the binomials that multiply together to make this trinomial. How to factor quadratic equations with no guessing and no trial and error? A quadratic form polynomial is a polynomial of the form $$a(x^n)^2 + b(x^n) + c$$. Before getting into all of the ugly notation, let's briefly review how to factor quadratic equations.
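As a concrete check of the quadratic-form idea (a sketch of mine, not from the original page): substituting $$u = x^3$$ turns $$3x^6 + 2x^3 - 1$$ into $$3u^2 + 2u - 1 = (3u - 1)(u + 1)$$, so the sextic factors as $$(3x^3 - 1)(x^3 + 1)$$:

```python
def quad_form_identity_holds(x):
    """Check 3x^6 + 2x^3 - 1 == (3u - 1)(u + 1) with u = x^3."""
    u = x ** 3
    return 3 * x**6 + 2 * x**3 - 1 == (3 * u - 1) * (u + 1)

# The two sides are polynomials of degree 6, so agreement at more than
# 6 points proves the identity; we spot-check many integers.
assert all(quad_form_identity_holds(x) for x in range(-20, 21))
```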
The Distributive Law is used in reverse to factorise a quadratic. Guess and check uses the factors of $$a$$ and $$c$$ as clues to the factorization of the quadratic. For example, $$w^2 + 7w + 8$$. Quadratic trinomials. Solution (Detail): think of a pair of numbers whose product is the last term, +2, and whose sum is the coefficient of the middle term, +3. A trinomial is a polynomial with 3 terms. It might be factorable. So the book's section or chapter title is, at best, a bit off-target. Here are examples of quadratic equations lacking the constant term "c": $$x^2 - 7x = 0$$; $$2x^2 + 8x = 0$$; $$-x^2 - 9x = 0$$; $$x^2 + 2x = 0$$; $$-6x^2 - 3x = 0$$; $$-5x^2 + x = 0$$; $$-12x^2 + 13x = 0$$; $$11x^2 - 27x = 0$$. Here are examples of quadratic equations in factored form: $$(x + 2)(x - 3) = 0$$ [upon computing becomes $$x^2 - 1x - 6 = 0$$]; $$(x + 1)(x + 6) = 0$$ [upon computing becomes $$x^2 + 7x + 6 = 0$$]; $$(x - 6)(x + 1) = 0$$ [upon computing becomes $$x^2 - 5x - 6 = 0$$]. QUADRATIC EQUATION: a quadratic equation is a polynomial equation of degree 2, usually in the form $$ax^2 + bx + c = 0$$. Example 6: A quadratic relation has an equation in factored form. Factor by making the leading term positive. Solve the following quadratic equation: $$(2x - 3)^2 = 25$$. Since factoring can be thought of as un-distributing, let's see where one of these quadratic form trinomials comes from.
That is, $$(4)(-21) = -84$$. In this article, our emphasis will be on how to factor quadratic equations in which the coefficient of $$x^2$$ is not necessarily 1. To get a $$-5$$, the factors must have opposite signs: $$5 \times (-1)$$ or $$(-5) \times 1$$. For example, $$f(x) = 2x^2 - 3x + 15$$ and $$g(y) = \frac{3}{2}y^2 - 4y + 11$$ are quadratic polynomials. Example: factor $$x^2 + 3x + 2$$. There are a lot of methods to factor these quadratic equations, but guess and check is perhaps the simplest and quickest once mastered, though mastery does take more practice than alternative methods. Equation (i) is a simple quadratic polynomial expressed as a product of two linear factors, and Equation (ii) is a general quadratic polynomial expressed as a product of two linear factors; observing the two formulas leads us to the method of factorization of quadratic expressions. Returning to Example 7: the factor pair of $$-84$$ that adds to the middle coefficient $$-8$$ is $$-14$$ and $$6$$, so $$4x^2 - 8x - 21 = 4x^2 - 14x + 6x - 21 = 2x(2x - 7) + 3(2x - 7) = (2x - 7)(2x + 3)$$. For $$x^2 + 11x + 30$$, the two factors of 30 that produce a sum of 11 are 5 and 6. For a trinomial whose factors have product $$-15$$ and sum $$-2$$, the pair is $$-5$$ and $$+3$$, so the factors would be $$(a - 5)$$ and $$(a + 3)$$. The AC test applies in the same way when the leading coefficient is not 1, as in $$3x^2 + 2x - 1$$. A perfect square trinomial is one whose last term is a perfect square, such as 25, the square of 5. Trinomials in more than one variable, such as $$x^2 + y^2 + xy$$ and $$x^2 + 3xy$$, follow the same ideas, including when $$a \neq 1$$. A binomial has two terms and a trinomial three, while a multinomial has more than three. Whichever method is used, we do the same thing to both sides of the equation when solving, and we can check the factors by multiplying them out and seeing that we recover the original trinomial.
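The guess-and-check factoring described above can be automated as a brute-force search over integer factor pairs of $$a$$ and $$c$$. This is an illustrative sketch of mine (it assumes $$a > 0$$ and $$c \neq 0$$):

```python
def factor_quadratic(a, b, c):
    """Search integer pairs with p*r == a, q*s == c and p*s + q*r == b,
    so that a*x^2 + b*x + c = (p*x + q)(r*x + s).
    Returns (p, q, r, s), or None if the trinomial is irreducible over Z.
    Assumes a > 0 and c != 0."""
    for p in range(1, abs(a) + 1):
        if a % p:
            continue
        r = a // p
        for q in range(-abs(c), abs(c) + 1):
            if q == 0 or c % q:
                continue
            s = c // q
            if p * s + q * r == b:
                return (p, q, r, s)
    return None

assert factor_quadratic(2, 7, 3) == (1, 3, 2, 1)       # (x + 3)(2x + 1)
assert factor_quadratic(4, -8, -21) == (2, -7, 2, 3)   # (2x - 7)(2x + 3)
assert factor_quadratic(1, 1, 1) is None               # x^2 + x + 1 is irreducible
```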
"domain": "descarga.nu",
"url": "http://descarga.nu/ujwn9x/quadratic-trinomial-examples-eb1a1a",
"openwebmath_score": 0.6123183965682983,
"openwebmath_perplexity": 731.534725235518,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9658995742876885,
"lm_q2_score": 0.8872046041554923,
"lm_q1q2_score": 0.8569505494598672
} |
https://math.stackexchange.com/questions/4068565/minimal-polynomial-of-x-over-mathbbqxyz-x2y2z2-x3y3z3 | # Minimal polynomial of $x$ over $\mathbb{Q}(x+y+z, x^2+y^2+z^2, x^3+y^3+z^3)$
Let $$K$$ be the field of fractions of $$\mathbb{Q}[x,y,z]$$, $$\alpha=x+y+z,\quad \beta=x^2+y^2+z^2,\quad \gamma = x^3+y^3+z^3\in K$$ and $$M=\mathbb{Q}(\alpha, \beta, \gamma)$$.
1. Is $$x\in K$$ algebraic over $$M$$? What is the minimal polynomial?
2. What is the degree of transcendence of $$M$$ over $$\mathbb{Q}$$?
For part 1 I really have no idea. I tried some combinations but it didn't get me anywhere. Is there a systematic way of approaching this kind of problem?
For part 2 I'm guessing the degree is 3, but I'm not sure how to prove it.
• Please slow down on your questions, particularly given that that you're asking two back to back "I have no clue" questions. Mar 19, 2021 at 18:44
• Sorry, I have just been doing some questions today and I couldn't figure out these two. Since I couldn't find any similar questions online, so I decided to post them here Mar 19, 2021 at 18:46
• Okay, but please try to add more relevant context: what have you most recently covered? Do you fully understand the terminology, e.g., degree of transcendence? Try to include anything that might be relevant, even if you cannot yet connect it. We just want to see you as invested as we are in helping you. Mar 19, 2021 at 18:49
• If the degree is $3$ can you find a cubic with the roots $x, y, z$? Mar 19, 2021 at 18:51
Consider the polynomial $$g(t)=(t-x)(t-y)(t-z)$$ in $$\mathbb{Q}(x,y,z)[t]$$. Expanding it out yields \begin{align} g(t)&=t^3-(x+y+z)t^2\\ &\quad+(yz+xz+xy)t-xyz. \end{align} Since $$x$$ is certainly a root of $$g$$, to show that $$x$$ is algebraic over $$\mathbb{Q}(\alpha,\beta,\gamma)$$ it suffices to show that each of the coefficients of $$g$$ lies in $$\mathbb{Q}(\alpha,\beta,\gamma)$$. To see this, note:
• $$(x+y+z)=\alpha\in\mathbb{Q}(\alpha,\beta,\gamma)$$.
• $$\alpha^2=\beta+2(yz+xz+xy)$$, so $$(yz+xz+xy)=(\alpha^2-\beta)\big/2\in\mathbb{Q}(\alpha,\beta,\gamma)$$.
• $$\alpha^3=3\alpha\beta-2\gamma+6xyz$$, so $$xyz=(\alpha^3-3\alpha\beta+2\gamma)\big/6\in\mathbb{Q}(\alpha,\beta,\gamma)$$.
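These three identities can be sanity-checked numerically. Below is a small sketch of my own, using the concrete values x, y, z = 2, 3, 5 (any values would do): it recovers the elementary symmetric functions from $$\alpha,\beta,\gamma$$ and confirms that $$x$$ is a root of the resulting cubic.

```python
from fractions import Fraction

x, y, z = 2, 3, 5
alpha = x + y + z                 # power sum p1
beta = x**2 + y**2 + z**2         # power sum p2
gamma = x**3 + y**3 + z**3        # power sum p3

e1 = alpha
e2 = Fraction(alpha**2 - beta, 2)
e3 = Fraction(alpha**3 - 3 * alpha * beta + 2 * gamma, 6)

assert e1 == x + y + z
assert e2 == x*y + x*z + y*z
assert e3 == x * y * z
# Hence x is a root of g(t) = t^3 - e1*t^2 + e2*t - e3:
assert x**3 - e1 * x**2 + e2 * x - e3 == 0
```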
This shows part $$1$$ of the problem, and in fact the analogous result holds for $$\mathbb{Q}(x_1,\dots,x_n)$$ for any $$n\in\mathbb{N}$$. For more on this I recommend reading about symmetric polynomials.
Now, to compute $$\operatorname{tr}.\operatorname{deg}_\mathbb{Q}\mathbb{Q}(\alpha,\beta,\gamma)$$, note that $$g(t)$$ witnesses that all of the generators of $$\mathbb{Q}(x,y,z)$$, and hence the field $$\mathbb{Q}(x,y,z)$$ itself, are algebraic over $$\mathbb{Q}(\alpha,\beta,\gamma)$$. What can you conclude about the respective transcendence degrees of $$\mathbb{Q}(\alpha,\beta,\gamma)$$ and $$\mathbb{Q}(x,y,z)$$ over $$\mathbb{Q}$$? Answer below, but try to figure it out yourself first!
Suppose $$S=\{m_1,\dots,m_k\}\subset\mathbb{Q}(\alpha,\beta,\gamma)$$ is a transcendence basis for $$\mathbb{Q}(\alpha,\beta,\gamma)$$ over $$\mathbb{Q}$$. By definition this means (i) there is no nonzero polynomial in $$\mathbb{Q}[t_1,\dots,t_k]$$ satisfied by $$S$$, and (ii) $$\mathbb{Q}(\alpha,\beta,\gamma)$$ is algebraic over $$\mathbb{Q}(S)$$. Now we claim that $$S$$ is in fact a transcendence basis for $$\mathbb{Q}(x,y,z)$$. Indeed, condition (i) holds immediately, and condition (ii) follows from the fact that an algebraic extension of an algebraic extension is algebraic; we have shown above that $$\mathbb{Q}(x,y,z)$$ is algebraic over $$\mathbb{Q}(\alpha,\beta,\gamma)$$, so it follows that $$\mathbb{Q}(x,y,z)$$ is algebraic over $$\mathbb{Q}(S)$$, as desired. In particular, we have $$\operatorname{tr}.\operatorname{deg}_\mathbb{Q}\mathbb{Q}(\alpha,\beta,\gamma)=\operatorname{tr}.\operatorname{deg}_\mathbb{Q}\mathbb{Q}(x,y,z)=3,$$ just as you suspected. Note that this argument works much more generally, and we have, for any triple of fields $$E\subseteq F\subseteq G$$, if $$G$$ is algebraic over $$F$$, then $$\operatorname{tr}.\operatorname{deg}_E F=\operatorname{tr}.\operatorname{deg}_E G.$$
• Nice answer and crystal clear! Thank you :) Mar 19, 2021 at 19:03
• @14159 my pleasure, happy it helped!! :) Mar 19, 2021 at 19:05 | 2022-08-18T22:31:09 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/4068565/minimal-polynomial-of-x-over-mathbbqxyz-x2y2z2-x3y3z3",
"openwebmath_score": 0.9339334964752197,
"openwebmath_perplexity": 146.641478281763,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9658995713428387,
"lm_q2_score": 0.8872046056466901,
"lm_q1q2_score": 0.8569505482875303
} |
http://mathhelpforum.com/advanced-statistics/191755-finding-probability-functions-print.html | # Finding probability functions
• Nov 12th 2011, 10:27 PM
deezy
Finding probability functions
Four fair coins are tossed simultaneously. Find the probability of the random variable X = number of heads and compute the following probabilities:
a) obtaining no heads
b) precisely 1 head
c) at least 1 head
d) not more than 3 heads.
I'm not sure how to set up these probabilities.
For example problems, the book has $f(x) = \begin{pmatrix}n\\ x \end{pmatrix}p^xq^{n-x}$ for a binomial distribution and
$f(x) = \begin{pmatrix}n\\ x\end{pmatrix}(\frac{1}{2})^n$ for a symmetric case. There are other probability functions that they list as well.
My biggest problem is finding out the proper way to write these probabilities out, or what probability function can be used (if any).
For example, in a), I can logically see that it would be (1/2)*(1/2)*(1/2)*(1/2), but how can I write this as a probability function?
• Nov 12th 2011, 10:48 PM
CaptainBlack
Re: Finding probability functions
Quote:
Originally Posted by deezy
Four fair coins are tossed simultaneously. Find the probability of the random variable X = number of heads and compute the following probabilities:
a) obtaining no heads
b) precisely 1 head
c) at least 1 head
d) not more than 3 heads.
I'm not sure how to set up these probabilities.
For example problems, the book has $f(x) = \begin{pmatrix}n\\ x \end{pmatrix}p^xq^{n-x}$ for a binomial distribution and
$f(x) = \begin{pmatrix}n\\ x\end{pmatrix}(\frac{1}{2})^n$ for a symmetric case. There are other probability functions that they list as well.
My biggest problem is finding out the proper way to write these probabilities out, or what probability function can be used (if any).
For example, in a), I can logically see that it would be (1/2)*(1/2)*(1/2)*(1/2), but how can I write this as a probability function?
The number of heads has a binomial distribution $\text{B}(4,\ 0.5)$, so the probability of $r$ heads is:
The number of heads has a binomial distribution $\text{B}(4,\ 0.5)$, so the probability of $r$ heads is:
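$p(numb\_ heads=r)=b(r;\,4,\ 0.5)= \begin{pmatrix}4\\ r \end{pmatrix}(0.5)^r (0.5)^{4-r}=\frac{4!}{(4-r)! r!}(0.5)^4$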
So if $r=0$ we have:
$p(numb\_ heads=0)=\frac{4!}{(4-0)! 0!}(0.5)^4=(0.5)^4$
CB
• Nov 13th 2011, 11:19 AM
deezy
Re: Finding probability functions
Not sure how to write c).
I've tried doing the probability of 1 head, 2 heads, 3 heads, and 4 heads separately and adding/multiplying them together.
Is it correct to do these probabilities separately?
• Nov 13th 2011, 11:53 AM
Plato
Re: Finding probability functions
Quote:
Originally Posted by deezy
Not sure how to write c).
At least one is the complement of none.
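Putting the thread together (a small illustrative sketch of mine, not from the original posts), the four probabilities follow from the binomial pmf with n = 4 and p = 1/2, using the complement for parts c) and d):

```python
from fractions import Fraction
from math import comb

def p_heads(r, n=4, p=Fraction(1, 2)):
    """Binomial pmf: probability of exactly r heads in n tosses."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

p_a = p_heads(0)              # a) no heads
p_b = p_heads(1)              # b) precisely 1 head
p_c = 1 - p_heads(0)          # c) at least 1 head = complement of none
p_d = 1 - p_heads(4)          # d) not more than 3 heads = complement of 4 heads

assert p_a == Fraction(1, 16)
assert p_b == Fraction(1, 4)
assert p_c == Fraction(15, 16)
assert p_d == Fraction(15, 16)
# c) also equals the probabilities of 1, 2, 3, 4 heads added "separately":
assert p_c == sum(p_heads(r) for r in range(1, 5))
```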
$1-P(\text{none}).$ | 2016-09-27T02:48:42 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/advanced-statistics/191755-finding-probability-functions-print.html",
"openwebmath_score": 0.9285520911216736,
"openwebmath_perplexity": 814.2431251237576,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9658995713428387,
"lm_q2_score": 0.8872045892435128,
"lm_q1q2_score": 0.8569505324437082
} |
http://mathhelpforum.com/advanced-math-topics/21327-induction-proof-problem.html | 1. ## Induction Proof Problem
I'm trying to prove using induction that 5^n-2^n is divisible by 3 for all values of n. I'm used to using induction to get k+1 within an expression that verifies a given formula, and have never used it to try and get to an answer divisible by 3.
So far I have 1 in the truth set because 5^1-2^1=5-2=3
And with a value k in the truth set
5^(k+1)-2^(k+1)=5(5^k)-2(2^k)
but don't know how I can further manipulate this or if I'm even on the right track.
If anyone could show me where to go from here i would be very grateful.
2. Originally Posted by uconn711
I'm trying to prove using induction that 5^n-2^n is divisible by 3 for all values of n. I'm used to using induction to get k+1 within an expression that verifies a given formula, and have never used it to try and get to an answer divisible by 3.
So far I have 1 in the truth set because 5^1-2^1=5-2=3
And with a value k in the truth set
5^(k+1)-2^(k+1)=5(5^k)-2(2^k)
but don't know how I can further manipulate this or if I'm even on the right track.
Start by writing down what it means to say that k is in the truth set: 5^k-2^k is a multiple of 3, or in other words 5^k-2^k=3p, for some integer p.
Now you can write that as 5^k = 2^k +3p. Substitute that expression for 5^k into the right-hand side of the equation 5^(k+1)-2^(k+1)=5(5^k)-2(2^k), and see where that leads you.
3. Hello, uconn711!
Use induction to prove: . $5^n-2^n$ is divisible by 3 for all values of $n.$
Verify $S(1)\!:\;\;5^1 - 2^1 \:=\:3$ . . . True!
Assume $S(k)$ is true: . $5^k - 2^k \:=\:3a$ .for some integer $a$.
Add $4\!\cdot\!5^k - 2^k$ to both sides:
. . $\underbrace{5^k + 4\!\cdot\!5^k}\underbrace{{} - 2^k - 2^k} \:=\:3a + 4\!\cdot\!5^k - 2^k$
. . $(1 + 4)\!\cdot\!5^k - 2\!\cdot\!2^k \;=\;3a + \overbrace{3\!\cdot\!5^k + 5^k} - 2^k$
. . $5\!\cdot\!5^k - 2^{k+1} \;=\;3\left[a + 5^k\right] + \underbrace{5^k - 2^k}_{\text{This is }3a}$
Hence: . $5^{k+1} - 2^{k+1} \;=\;3\left[a + 5^k\right] + 3a \;=\;3\left[2a + 5^k\right]$ . . . a multiple of 3
Therefore: . $S(k+1)$ is true.
. . The inductive proof is complete.
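A quick numeric sanity check of the statement itself (a minimal sketch, separate from the inductive argument):

```python
def claim_holds(n: int) -> bool:
    """Check directly that 3 divides 5**n - 2**n."""
    return (5**n - 2**n) % 3 == 0

# Spot-check the statement for the first 50 exponents.
assert all(claim_holds(n) for n in range(1, 51))
print("3 | 5^n - 2^n for n = 1..50")
```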
4. Ah, I had to think through that a couple of times, but that method actually makes a lot of sense; thanks a lot!
5. I like this approach.
If $5^N - 2^N = 3m$ then take a look at
$\begin{array}{rcl}
5^{N + 1} - 2^{N + 1} & = & 5^{N + 1} - 5\left( {2^N } \right) + 5\left( {2^N } \right) - 2^{N + 1} \\
& = & 5\left( {\underbrace {5^N - 2^N }_{3m}} \right) + 2^N \left( {\underbrace {5 - 2}_3} \right) \\
\end{array}
$
https://math.stackexchange.com/questions/3103932/how-many-triangular-numbers-have-exactly-d-divisors

# How many triangular numbers have exactly $d$ divisors?
The triangular numbers $$T_n$$ are defined by $$T_n = \frac{n(n + 1)}{2}.$$
Given a positive integer $$d$$, how many triangular numbers have exactly $$d$$ divisors, and how often do such numbers occur?
For $$d = 4, 8$$ the answer seems to be "infinitely many, and often"; for $$d = 6$$, it seems to be "infinitely many, but rarely"; and for $$d \geq 3$$ prime the answer is "none" (I think I can prove this). Given $$d$$ such that there are infinitely many such triangular numbers, can we say anything about the asymptotic gaps between them?
Here is a plot of the number of divisors of $$T_n$$ as $$n$$ ranges from $$0$$ to $$50,000$$:
The OEIS contains some sequences related to this question, namely A292989 and A068443, but I can't learn enough from the comments there to settle this question for arbitrary $$d$$.
Edit: The "none" claim for prime $$d$$ only holds when $$d > 2$$, as @BarryCipra pointed out.
• An interesting question to ask is are there any number with odd number of divisors. One example is ${{1681\times 1682} \over 2} = 1413721$ which is a square number itself and has $9$ divisors. I think this might actually be the only example. – cr001 Feb 7 '19 at 16:07
• @cr001: The only numbers with an odd number of divisors are square numbers. The triangular numbers which are also square are given at oeis.org/A001110 – Michael Lugo Feb 7 '19 at 16:20
• @cr001 There is also $n = 8$ and $n = 49$, in addition to your numbers. I agree with you and suspect that each odd number of divisors has only finitely many examples. – Robert D-B Feb 7 '19 at 16:21
• See oeis.org/A063440 . The comments there give conditions for $\sigma_0(T_n) = 4$ and $\sigma_0(T_n) = 6$. The conditions for 4 seem "easier" than the conditions for 6, although I'm having trouble making this precise. – Michael Lugo Feb 7 '19 at 16:29
• Let $a(n) = \sigma_0(T_n)$. It seems more generally true that $a(n)$ is usually a multiple of 4, from the data at A063440. The comments at A063440 give $a(2k) = \sigma_0(k) \sigma_0(2k+1)$ and $a(2k+1) = \sigma_0(2k+1) \sigma_0(k+1)$. The function $\sigma_0$ takes even values except when its argument is square, so $a(n)$ is almost always a multiple of 4. – Michael Lugo Feb 7 '19 at 16:36
This is a partial answer. Write $$\sigma_0(k)$$ for the number of divisors of $$k$$. Note that $$n$$ and $$n+1$$ are relatively prime. If $$\sigma_0(T_n)$$ is odd, then either $$n$$ is even and both $$\frac{n}{2}$$ and $$n+1$$ are squares or $$n$$ is odd and both $$n$$ and $$\frac{n+1}{2}$$ are squares (note that in particular this implies that in the first case $$n\equiv 0\mod{8}$$ and in the second $$n\equiv 1\mod{8}$$). Simplifying, one sees that odd values of $$\sigma_0(T_n)$$ arise from solutions to the Pell equation $$a^2-2b^2 = \pm 1.$$ So there are an infinite number of $$n$$ for which $$\sigma_0(T_n)$$ is odd. However, since $$\sigma_0$$ is multiplicative, $$\sigma_0(T_n)$$ cannot be prime unless $$n=2$$.
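The odd values of $\sigma_0(T_n)$ can be confirmed by brute force; a short sketch (the cutoff $2000$ is arbitrary):

```python
def sigma0(k: int) -> int:
    """Number of divisors of k, by trial division up to sqrt(k)."""
    count, d = 0, 1
    while d * d <= k:
        if k % d == 0:
            count += 1 if d * d == k else 2
        d += 1
    return count

def T(n: int) -> int:
    """The n-th triangular number."""
    return n * (n + 1) // 2

odd_n = [n for n in range(1, 2000) if sigma0(T(n)) % 2 == 1]
print(odd_n)  # -> [1, 8, 49, 288, 1681]; exactly the n with T_n a perfect square
```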
Next, note that $$\sigma_0(T_n)=4$$ means that either $$n$$ is even and $$\sigma_0\left(\frac{n}{2}\right) = \sigma_0(n+1) = 2$$, or $$n$$ is odd and $$\sigma_0(n) = \sigma_0\left(\frac{n+1}{2}\right) = 2$$. Thus $$\sigma_0(T_n)=4$$ if and only if either $$\frac{n}{2}$$ and $$n+1$$ are both prime or if $$n$$ and $$\frac{n+1}{2}$$ are both prime. The first of these is A005097; the second is A006254.
A similar analysis shows that $$\sigma_0(T_n)=6$$ requires that one of the two factors (i.e., either $$\frac{n}{2}$$ and $$n+1$$, or $$n$$ and $$\frac{n+1}{2}$$) be prime and the other be the square of a prime, so the values of $$n$$ below $$200$$ are $$n=7, 9, 17, 18, 25, 97, 121$$. These are presumably rarer than the values for $$\sigma_0(T_n)=4$$.
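The listed values can be reproduced by brute force (a sketch; `sigma0` here is plain trial division):

```python
def sigma0(k: int) -> int:
    """Number of divisors of k, by trial division up to sqrt(k)."""
    count, d = 0, 1
    while d * d <= k:
        if k % d == 0:
            count += 1 if d * d == k else 2
        d += 1
    return count

# n below 200 for which T_n = n(n+1)/2 has exactly 6 divisors.
six = [n for n in range(1, 200) if sigma0(n * (n + 1) // 2) == 6]
print(six)  # -> [7, 9, 17, 18, 25, 97, 121]
```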
In response to the OP's comment below, for fixed odd $$d$$, both factors must be squares in order that $$\sigma_0$$ be odd for each. If $$T_n = 2\prod p_i^{2r_i}$$, then you are looking for a way to write $$\prod p_i^{2r_i} = \prod p_i^{2s_i}\prod p_i^{2t_i}$$ such that $$\prod(s_i+1)\prod(t_i+1) = d$$. This doesn't seem like a problem with a straightforward solution in general.
• I see that this implies that there are infinitely many odd $d$ for which we can find $n$ that works. Does this method tell us anything about fixed $d$? – Robert D-B Feb 7 '19 at 16:26
• After some searching, I suspect actually there might be infinitely many solutions for at least $d=9$. Apparently given a pair of solutions $(x,y)=(41, 29)$ for example the next solution can be constructed using $(3x+4y, 2x+3y)$ and the iff condition for infinite $d=9$ is such pairs being both prime numbers infinitely many times. And this looks pretty much true to me. – cr001 Feb 7 '19 at 17:09
https://math.stackexchange.com/questions/1840211/does-a-set-of-n1-points-that-affinely-span-mathbbrn-lie-on-a-unique-n

# Does a set of $n+1$ points that affinely span $\mathbb{R}^n$ lie on a unique $(n-1)$-sphere?
In $\mathbb{R}^2$ every three points that are not collinear lie on a unique circle. Does this generalize to higher dimensions in the following way:
If an $(n+1)$-element subset $S$ of $\mathbb{R}^n$ does not lie on any linear manifold (flat) of dimension less than $n$, then there is a unique $(n-1)$-sphere containing $S$.
If not, then what would be the proper generalization?
• This result is true when $n=3$ and it is sufficient that the points are non-coplanar. I would suspect that the result is true for all $n$ and that the sufficient condition is that the $n+1$ points do not lie in any affine hyperplane. – C. Falcon Jun 26 '16 at 13:24
Hagen von Eitzen's answer gives a neat theoretical approach to this problem. However, I would like to present a constructive and computational way to find the radius and center of the $(n-1)$-sphere determined by $n+1$ suitable points in $\mathbb{R}^n$.
Let $n$ be an integer greater than $1$ and say $x_i:=(x_{i,j})_{j\in\{1,\cdots,n\}},\ i\in\{0,\cdots,n\}$ are $n+1$ given points. Recall that the equation of an $(n-1)$-sphere is: $$\sum_{j=1}^n(x_j-c_j)^2=r^2,$$ where $c=(c_j)$ is its center and $r$ its radius. Therefore, one has the following system of $n+1$ equations: $$\forall i\in\{0,\cdots,n\},\quad\sum_{j=1}^n(x_{i,j}-c_j)^2=r^2,$$ with $n+1$ indeterminates, namely the $c_j$ and $r^2$ (or $r$, if you require $r>0$). This system is not linear, so let's make the following change of indeterminate: $$r^2\leftrightarrow r^2-\sum_{j=1}^n{c_j}^2=:u.$$ Expanding the squares, one obtains the following equivalent system: $$\forall i\in\{0,\cdots,n\},\quad 2\sum_{j=1}^nx_{i,j}c_j+u=\sum_{j=1}^n{x_{i,j}}^2.$$ Since this system is linear, it has a unique solution if and only if the following determinant is nonzero: $$\begin{vmatrix}2x_{0,1}&2x_{0,2}&\cdots&2x_{0,n}&1\\\vdots&\vdots&\ddots&\vdots&\vdots\\2x_{n,1}&2x_{n,2}&\cdots&2x_{n,n}&1\end{vmatrix}.$$ This is the case if and only if the $x_i$ do not lie in any affine hyperplane of $\mathbb{R}^n$.
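The linear system above solves mechanically; here is a sketch in pure Python (the function name and the hand-rolled Gaussian elimination are ours, not from the answer):

```python
def circumsphere(points):
    """Center and radius of the (n-1)-sphere through n+1 points in R^n.

    Solves the linear system  2 * sum_j x_ij c_j + u = |x_i|^2  with
    u = r^2 - |c|^2, via Gauss-Jordan elimination with partial pivoting.
    Raises ValueError if the points lie in an affine hyperplane.
    """
    n = len(points[0])
    assert len(points) == n + 1
    # Augmented matrix rows: [2*x_i | 1 | |x_i|^2]
    M = [[2 * xj for xj in x] + [1.0, sum(xj * xj for xj in x)]
         for x in points]
    m = n + 1  # unknowns: c_1..c_n and u
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            raise ValueError("points lie in an affine hyperplane")
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    sol = [M[i][m] / M[i][i] for i in range(m)]
    c, u = sol[:n], sol[n]
    r = (u + sum(cj * cj for cj in c)) ** 0.5
    return c, r

# Three points on the unit circle determine center (0, 0) and radius 1.
center, radius = circumsphere([(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)])
```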
• All the answers are great and exhibit an awesome diversity of perspectives. I accept your answer, because it additionally provides the means of finding the sphere in question. – Tom Jun 27 '16 at 16:37
Yes, if the $n+1$ points are in general position, which simply means that the $n+1$ points must not lie in a hyperplane.
We can proceed by induction: If $x_0,\ldots, x_{n}$ are our $n+1$ points in general position, then any $n$ of them, for example $x_0,\ldots, x_{n-1}$, certainly lie in a common $(n-1)$-dimensional hyperplane $H$. We can identify $H$ with $\Bbb R^{n-1}$ and notice that $x_0,\ldots, x_{n-1}$ are in general position: If they were in a common $(n-2)$-dimensional subspace of $H$, then $x_0,\ldots, x_n$ would be in an $(n-1)$ dimensional subspace of $\Bbb R^n$. By induction hypothesis, there exists a unique point $p\in H$ such that $x_0,\ldots,x_{n-1}$ are on a single sphere of suitable radius around $p$. Let $\ell$ denote the line in $\Bbb R^n$ that is normal to $H$ and passes through $p$. Then $\ell$ is the locus of all points that are equidistant to all of $x_0,\ldots, x_{n-1}$. Let $\ell'$ be the line through $x_n$ and $x_0$. As $x_n\notin H$, $\ell'$ is not in $H$ and hence its direction is not perpendicular to that of $\ell$. Let $H'$ be the hyperplane that bisects $x_0x_n$. Then $H'$ is perpendicular to $\ell'$ and so is not parallel to $\ell$. We conclude that $\ell$ intersects $H'$ in one and only one point $p'$. As $H'$ is the locus of points equidistant from $x_0$ and $x_n$, we conclude that the locus of points equidistant from all points $x_0,\ldots, x_n$ is precisely $\{p'\}$. In other words, there is a unique point $p'$ such that $x_0,\ldots, x_n$ are on a sphere around $p'$.
• In case of an $n+1$ element subset $S$ of $\mathbb{R}^n$, isn't $S$ being in general position equivalent to it not lying on any hyperplane? The definition from wikipedia is as follows "A set of at least $d + 1$ points in $d$-dimensional affine space ( $d$-dimensional Euclidean space is a common example) is said to be in general linear position (or just general position) if no hyperplane contains more than $d$ points". – Tom Jun 26 '16 at 13:39
• @Tom Hm, upon rereading, my wording seems to be suboptimal. The definition as seen on Wikipedia is exactly what we need; what I had in mind makes no difference as long as we only have $n+1$ points. – Hagen von Eitzen Jun 26 '16 at 14:56
Why not just apply a circular inversion? If we have $p_0,p_1,\ldots,p_n\in\mathbb{R}^n$ in general position, we may consider $q_1,q_2,\ldots,q_n$ as the images of $p_1,p_2,\ldots,p_n$ under a circular inversion with respect to a unit hypersphere centered at $p_0$. There is a hyperplane $\pi$ through $q_1,q_2,\ldots,q_n$, and by applying the same circular inversion to $\pi$ we get a hypersphere through $p_0,p_1,\ldots,p_n$.
The uniqueness part is easy.
https://math.stackexchange.com/questions/3070315/does-there-exist-a-continuous-function-separating-these-two-sets-a-and-b

# Does there exist a continuous function separating these two sets $A$ and $B$
True or False:
There exists a continuous function $$f : \mathbb{R}^2 \to \mathbb{R}$$ such that $$f \equiv 1$$ on the set $$\{(x, y) \in \mathbb{R}^2 : x^2+y^2 = 3/2\}$$ and $$f \equiv 0$$ on the set $$B \cup \{(x, y) \in \mathbb{R}^2 : x^2+y^2 \geq 2\},$$ where $$B$$ is the closed unit disk.
I think this is just a straightforward application of Urysohn's Lemma: metric spaces are normal, and Urysohn's Lemma says that disjoint closed subsets of a normal space can be separated by a continuous function.
I hope I am not missing something. Topology can be weird sometimes!!!
• That's true you can use Urysohn's Lemma. You just need to show that these sets are closed and disjoint. – Yanko Jan 11 at 20:26
• You can use Urysohn's Lemma, yes. It seems kind of like using a nuke to kill ants, though, especially when it's pretty clear how to just write down what $f$ is explicitly. – user3482749 Jan 11 at 20:37
I think this is just a straightforward application of Urysohn's Lemma: metric spaces are normal, and Urysohn's Lemma says that disjoint closed subsets of a normal space can be separated by a continuous function.
Right. As you say and as Yanko agrees, you can use Urysohn's Lemma; then it just remains to show the sets are closed and disjoint.
In case you don't like Urysohn's Lemma -- or just for fun -- we can define the continuous function ourselves. It doesn't turn out to be so hard in this particular case. Define $$f: \mathbb{R}^2 \to \mathbb{R}$$ by $$f(x,y) = \max\left(0,\, 1 - 2\left|x^2 + y^2 - \tfrac{3}{2}\right|\right).$$
Now, this is built from polynomials, the absolute value, and the maximum, so it's continuous. On the set $$A$$, where $$x^2 + y^2 = \tfrac{3}{2}$$, the absolute value vanishes and $$f(x,y) = 1$$. On the closed unit disk we have $$x^2 + y^2 \leq 1$$, so $$\left|x^2 + y^2 - \tfrac{3}{2}\right| \geq \tfrac{1}{2}$$, the inner expression is $$\leq 0$$, and $$f(x,y) = 0$$; the same happens when $$x^2 + y^2 \geq 2$$. Hence $$f \equiv 0$$ on all of $$B$$. (A single polynomial such as $$2(2 - x^2 - y^2)$$ would give $$1$$ on $$A$$ and $$0$$ on the circle $$x^2 + y^2 = 2$$, but not $$0$$ on the unit disk, so the clamping by the maximum and the absolute value is really needed.)
As a bonus, this $$f$$ takes values in $$[0,1]$$, just as the function produced by Urysohn's Lemma does.
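A numeric spot-check of an explicit separating function of this shape (the particular formula used in the code is one valid choice, restated in its comments):

```python
import math
import random

def f(x, y):
    # One explicit choice of separating function: equals 1 on the circle
    # x^2 + y^2 = 3/2, and 0 on the closed unit disk and where x^2 + y^2 >= 2.
    return max(0.0, 1.0 - 2.0 * abs(x * x + y * y - 1.5))

# f == 1 (up to floating-point error) on the circle of radius sqrt(3/2).
r = math.sqrt(1.5)
for k in range(100):
    t = 2 * math.pi * k / 100
    assert abs(f(r * math.cos(t), r * math.sin(t)) - 1.0) < 1e-12

# f == 0 on the closed unit disk and outside radius sqrt(2).
random.seed(0)
for _ in range(1000):
    t = random.uniform(0, 2 * math.pi)
    s = random.choice([random.uniform(0, 1), random.uniform(math.sqrt(2), 5)])
    assert f(s * math.cos(t), s * math.sin(t)) == 0.0
```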
Topology can be weird sometimes!!!
I agree :)
https://math.stackexchange.com/questions/2340595/question-about-ln-properties

# Question about ln properties

Example:
$$\ln(2x) + \ln(5) = 0$$
To solve for x, use the ln property: $\ln(2x) + \ln(5) = \ln(10x)$
\begin{aligned}\ln(10x) &= 0\\ e^{\ln(10x)} &= e^0\\ 10x &= 1\\ x &= \frac{1}{10}\end{aligned}
I wonder why you can't do: $e^{\ln(2x)} + e^{\ln(5)} = e^0 \implies 2x + 5 = 1$, which gives a different outcome ($x = -2$) that is incorrect.
Why do you have to use the ln property to add up $\ln(2x)$ and $\ln(5)$ first before continuing the equation? Why can't you take the $e^x$ from those right away?
Thank you.
• You can take $e$ to both sides in the beginning, but the simplification on the left side will be $e^{\ln(2x) + \ln 5} = e^{\ln(2x)} \cdot e^{\ln 5} = 2x \cdot 5 = 10x$. – user307169 Jun 29 '17 at 12:12
• If you're gonna write here I'd suggest that you start learning MathJAX so you can format your mathematical expressions better. A guide can be found here: (math.meta.stackexchange.com/questions/5020/…). Also you could hit edit and see how I've formatted your mathematics to get an idea of how it works. – skyking Jun 29 '17 at 12:13
You are erroneously supposing that $e^{x+y} = e^x + e^y$ (take for example $1 = e^0 = e^{1+(-1)}\ne e^1 + e^{-1}\approx 3.1$).
That is, just because $\ln(2x) + \ln(5) = 0$, we surely have $e^{\ln(2x) + \ln(5)} = e^0 = 1$, but we don't then have $e^{\ln(2x)} + e^{\ln(5)} = 1$.
The correct step is to use $e^{x+y} = e^x e^y$, so addition in the exponent turns into multiplication. This gives $e^{\ln(2x)} e^{\ln(5)} = 1$, or $2x \cdot 5 = 1$, as you did with the correct approach.
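A short numeric illustration of the distinction, using the numbers from this example (a minimal sketch):

```python
import math

x = 0.1  # the solution found above

# Exponentiating the whole left-hand side turns the sum of logs into a product:
#   e^(ln(2x) + ln 5) = e^(ln 2x) * e^(ln 5) = 2x * 5 = 10x
lhs = math.exp(math.log(2 * x) + math.log(5))
assert abs(lhs - 10 * x) < 1e-12   # e^(a+b) == e^a * e^b
assert abs(lhs - 1.0) < 1e-12      # and 10x = 1 at x = 1/10

# ...whereas e^(a+b) is NOT e^a + e^b:
a, b = math.log(2 * x), math.log(5)
print(math.exp(a + b), math.exp(a) + math.exp(b))  # ~1.0 versus ~5.2
assert abs(math.exp(a + b) - (math.exp(a) + math.exp(b))) > 1
```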
• Why is it that we have e^(ln(2x)+ln(5)) and not e^ln(2x) + e^ln(5) seperately? If I have a simple equation like: 1/2x^2 + 1/2x = 1/2, and I want to simplify by manipulating the equation by doing everything times 2. Then I will have to multiply every part of the equation by 2 right? => 2(1/2x^2) + 2(1/2x) = 2(1/2). – Hikato Jun 29 '17 at 12:19
• You are correct with your multiplying-by-2 example. This is because "multiplication distributes over addition". That is, $a(b+c) = ab + ac$. However, exponentiation does not distribute over addition. That is, we do not have $a^{b+c} = a^b + a^c$ in general. You can either memorize it as a rule of algebra, or you will need to have to dig deeper into what exponentiation means to figure out why this is so. – Bob Krueger Jun 29 '17 at 14:38
You start with an equation $$A + B = C$$
What you can do is change that into $$e^{A+B} = e^C$$ and that leads to the correct solution, since $$e^{\ln(2x) + \ln(5)} = e^{\ln 2x}\cdot e^{\ln(5)} = 10 x$$
What you cannot do is change that into $$e^A + e^B = e^C$$
because $$e^{A+B}\neq e^A+e^B$$ in general.
• So basically if you want to take e^x from any side of an equation, you have to include everything that is on that side in e^(here)? – Hikato Jun 29 '17 at 12:28
• @David Well, yeah. That's not specific to $e^.$ as well. If I tell you that $x$ is the same thing as $y$, then clearly, you can conclude that $f(x)=f(y)$. But not all functions then satisfy the property $f(a+b)=f(a)+f(b)$. The exponential funciton, for example, does not. – 5xum Jun 29 '17 at 12:30
• Thanks a lot, that makes sense :). I have one more question, why is it that when I put Y1 = ln(8-x^2) - ln(3-x) in the calculator and Y2 = ln((8-x^2)/(3-x)), that they differ? They are almost the same but Y2 has more solutions somehow when I graph it. Y1 and Y2 are the same right? – Hikato Jun 29 '17 at 12:58
• @David is that $8$ supposed to be a $9$? – 5xum Jun 29 '17 at 13:00
• @David Oh, yeah, I was wrong. The thing is that $\ln(8-x^2)-\ln(3-x)$ is defined only when both terms inside $\ln$ are positive, while $\ln(\frac{8-x^2}{3-x})$ is defined whenever the entire fraction is positive. So, for $x=4$, $\ln(8-x^2)-\ln(3-x)$ is not defined because $\ln(-8)$ and $\ln(-1)$ are not defined, but $\ln(\frac{8-x^2}{3-x})=\ln\frac{8}{1}$ is defined. – 5xum Jun 29 '17 at 13:30
http://math.stackexchange.com/questions/169468/can-two-topological-spaces-surject-onto-each-other-but-not-be-homeomorphic/169471

# Can two topological spaces surject onto each other but not be homeomorphic?
Let $X$ and $Y$ be topological spaces and $f:X\rightarrow Y$ and $g:Y\rightarrow X$ be surjective continuous maps. Is it necessarily true that $X$ and $Y$ are homeomorphic? I feel like the answer to this question is no, but I haven't been able to come up with any counter example, so I decided to ask here.
Thanks for the answers everyone! – Seth Jul 11 '12 at 14:19
The circle $S^1$ surjects onto the interval $I = [-1,1]$ by projection in (say) the $x$-coordinate, while the interval $I$ surjects onto the circle by wrapping around, say $f(x) = (\cos \pi x, \sin \pi x)$.
Added: Why is the circle $S^1$ not homeomorphic to the interval $I = [-1,1]$ ? The usual proof looks at cut points, i.e. a point $x$ whose removal from a topological space $X$ results in a disconnected space $X\backslash\{x\}$. Since this is a purely topological property, two homeomorphic spaces will have an equal number of cut points.
Note that $S^1$ has no cut points; removal of any single point from the circle leaves a connected open arc. However a closed interval $I$ has infinitely many cut points because removing any point except one of the two endpoints disconnects it into two disjoint subintervals.
The same observation serves to show the spaces in Karolis Juodelė's answer are not homeomorphic: $[0,1]$ has cut points and $[0,1]^2$ does not.
See Seth Baldwin's comment below for an alternative idea, something that will not disconnect the interval $I$ that does disconnect the circle!
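Both surjections here are explicit, and one can check on a sample grid that each map hits every target point (a small numeric sketch):

```python
import math

# The wrap map g: [-1, 1] -> S^1,  g(x) = (cos(pi x), sin(pi x)),
# and the projection p: S^1 -> [-1, 1],  p(a, b) = a.
def g(x):
    return (math.cos(math.pi * x), math.sin(math.pi * x))

def p(point):
    return point[0]

# Every point (cos t, sin t) on the circle has a preimage x = t/pi in [-1, 1].
for k in range(-180, 181):
    t = math.pi * k / 180
    x = t / math.pi
    assert -1.0 <= x <= 1.0
    gx = g(x)
    assert abs(gx[0] - math.cos(t)) < 1e-12 and abs(gx[1] - math.sin(t)) < 1e-12

# Every value a in [-1, 1] is hit by the projection, e.g. from (a, sqrt(1-a^2)).
for k in range(-100, 101):
    a = k / 100
    assert abs(p((a, math.sqrt(1 - a * a))) - a) < 1e-12
```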
I like this one - no need for exotic Peano curves. – akkkk Jul 11 '12 at 13:51
Thanks, I should have perhaps have also pointed out why the circle is not homeomorphic to the interval (hint: no homology groups needed!). It seems to be a fairly common exercise... – hardmath Jul 11 '12 at 13:55
You can find two points on the closed interval to remove such that it stays connected, while this is impossible for the circle. – Seth Jul 11 '12 at 14:24
@SethBaldwin: That's neat, in that I was thinking about roughly the reverse, being able to disconnect the interval by removing one point (which we cannot do for the circle). – hardmath Jul 11 '12 at 15:47
@Theorem: I'll be happy to add a note explaining that, but see the above comment by Seth Baldwin and my reply. – hardmath Jul 11 '12 at 17:08
There is a continuous surjective map from $[0, 1]$ to $[0, 1]^2$: the Peano curve. There is also a continuous surjection from $[0, 1]^2$ to $[0, 1]$: $f(x, y) = x$. However, the two spaces are not homeomorphic.
More generally: any two connected, locally connected, compact, second-countable spaces have your property. (Hahn-Mazurkiewicz Theorem)
Strictly speaking, I must add that they have at least two points. – GEdgar Jul 12 '12 at 19:35
Others have answered, but maybe these comments will also be useful for this thread. A stronger equivalence (replace “surjective” with “bijective” in both places), which is still strictly weaker than being homeomorphic, was studied by several people in the early days of topology (Banach, Kuratowski, Hausdorff, Sierpinski, etc. in the 1920's). I believe this stronger equivalence originates from Frechet (1910), who called it type de dimensions. There is a lot about this relation in Sierpinski's General Topology (where it's called dimensional type; see pp. 130-133, 137, 141, 142, 144, 145, 163, 165) and in Kuratowski's Topology (where it's called topological rank).
Just to be different, here’s an example that isn’t related to the Hahn-Mazurkievicz theorem. The example is originally due (with a different purpose) to K. Sundaresan, Banach spaces with Banach-Stone property, Studies in Topology (N.M. Stavrakas & K.R. Allen, eds.), Academic Press, New York, 1975, pp. 573-580; the argumentation is mine, On an example of Sundaresan, Top. Procs. 5 (1980), pp. 185-6. The surjections are $1$-$1$ save at a single point each, where they are $2$-$1$.
Let $X=\omega^*\cup(\omega\times 2)$, where $\omega^*=\beta\omega\setminus\omega$, and $2$ is the discrete two-point space, let $\pi:X\to\beta\omega$ be the obvious projection, and endow $X$ with the coarsest topology making $\pi$ continuous and each point of $\omega\times 2$ isolated. Let $N=\omega\times 2$, for $n\in\omega$ let $P_n=\{n\}\times 2$, and let $\mathscr{P}=\{P_n:n\in\omega\}$. A function $f:X\to X$ preserves pairs if $f[P_n]\in\mathscr{P}$ for all but finitely many $n\in\omega$.
Lemma. Let $f:X\to X$ be an embedding; then $f$ preserves pairs.
Proof. Suppose that $f$ does not preserve pairs. Since $f$ is injective, an easy recursion suffices to produce an infinite $M\subseteq\omega$ such that $(\pi\circ f)\upharpoonright\bigcup\{P_n:n\in M\}$ is injective. Let $M_i=M\times\{i\}$ for $i\in 2$. Then $$\left(\operatorname{cl}_XM_i\right)\setminus N=\left(\operatorname{cl}_{\beta\omega}M\right)\setminus\omega\ne\varnothing$$ for $i\in 2$, so $$\left(\operatorname{cl}_Xf[M_0]\right)\setminus N=\left(\operatorname{cl}_Xf[M_1]\right)\setminus N\ne\varnothing\;.$$ But $$\left(\operatorname{cl}_Xf[M_i]\right)\setminus N=\left(\operatorname{cl}_{\beta\omega}f[M_i]\right)\setminus\omega$$ for $i\in 2$, $\pi\big[f[M_0]\big]\cap\pi\big[f[M_1]\big]=\varnothing$, and disjoint subsets of $\omega$ have disjoint closures in $\beta\omega$, so $\operatorname{cl}_Xf[M_0]\cap\operatorname{cl}_Xf[M_1]=\varnothing$; this is the desired contradiction. $\dashv$
Now let $p$ be any point not in $X$, and let $Y=X\cup\{p\}$, adding $p$ to $X$ as an isolated point.
Proposition. $Y$ is not homeomorphic to $X$.
Proof. Suppose that $h:Y\to X$ is a homeomorphism; it follows from the lemma that $h\upharpoonright X$ preserves pairs. Let $$A=\bigcup\Big\{P_n\in\mathscr{P}:h[P_n]\in\mathscr{P}\Big\}\cup\omega^*\;.$$ Then $\big|X\setminus h[A]\big|$ is finite and even, $|Y\setminus A|$ is finite and odd, and $h\upharpoonright(Y\setminus A)$ is a bijection between these two sets, which is absurd. $\dashv$
Finally, the maps
$$f:Y\to X:y\mapsto\begin{cases} y,&\text{if }y\in X\\ \langle 0,0\rangle,&\text{if }y=p \end{cases}$$
and
$$g:X\to Y:x\mapsto\begin{cases} x,&\text{if }x\in\omega^*\\ p,&\text{if }\pi(x)=0\\ \langle n-1,i\rangle,&\text{if }x=\langle n,i\rangle\text{ and }n>0 \end{cases}$$
are continuous surjections.
By the way, each of $X$ and $Y$ embeds in the other, so these spaces witness the lack of a Schröder-Bernstein-like theorem for compact Hausdorff spaces and embeddings.
https://www.stralendevrouwen.com/d14sg85/archive.php?a1ba13=in-a-matrix-interchanging-of-rows-and-columns-is-called

In a matrix, interchanging of rows and columns is called the transpose.

Definition: The new matrix obtained by interchanging the rows and columns of the original matrix is called the transpose of the matrix. It is denoted by A′ or $A^T$. If A has dimension (m × n), then A′ has dimension (n × m). For example, the transpose of

$$B = \begin{bmatrix} 2 & -9 & 3\\ 13 & 11 & 17 \end{bmatrix}_{2 \times 3}$$

is a 3 × 2 matrix, and the transpose of [[1, 2, 3], [4, 5, 6]] can be written as [[1, 4], [2, 5], [3, 6]]. In particular, the transpose of a column vector is a row vector and vice versa.

Properties of the transpose:
- The transpose of the transpose of a matrix is the matrix itself: (A′)′ = A.
- The transpose of a sum of two matrices is the sum of their transposes: (A + B)′ = A′ + B′.
- When a matrix is multiplied by a scalar, the order of the transpose is irrelevant: (kA)′ = kA′.
- A matrix equal to its own transpose, A = A′, is called symmetric.

Related definitions:
- A matrix with more rows than columns is called a vertical matrix; one with the same number of rows and columns is a square matrix; one with an infinite number of rows or columns (or both) is an infinite matrix; matrices with a single row are called row vectors, and those with a single column are called column vectors.
- A square matrix is called orthogonal when $A^TA = AA^T = I$; the transpose is then also the inverse, $A^{-1} = A^T$. A rectangular m × n matrix A is called column orthogonal when $A^TA = I$, since its columns are orthonormal.
- Interchanging any two rows or columns of a matrix changes the sign of its determinant; more generally, any permutation of the rows or columns multiplies the determinant by the sign of the permutation (a general property of multilinear alternating maps).
- Given a square matrix A, the transpose of its matrix of cofactors is called the adjoint (or adjugate) of A, denoted adj A.
- The rank of a matrix is the order of the largest square submatrix whose determinant is nonzero. For a matrix of order 2 × 4, the largest possible square submatrix is 2 × 2.
- The pivot (or pivot element) is the entry of a matrix selected first by an algorithm to carry out certain calculations; the column in which eliminations are performed is called the pivot column. A pivot is usually required to be nonzero (and often far from zero); selecting it is called pivoting, which may be followed by interchanging rows or columns to bring the pivot to a fixed position.

In code, a matrix is stored as a two-dimensional array: an array of arrays, declared as data_type name[no. of rows][no. of columns]; and traversed with nested loops, one for rows and one for columns. The memory layout is either row-major or column-major (row-major by default in C). The variable-swap fragment from this page, repaired:

    int x = 5, y = 6;
    x = x + y;
    y = x - y;
    x = x - y;
    if (!(x - 6 || y - 5))
        printf("Variables Swapped.\n");

Quiz items from the page:
- Horizontally arranged elements in a matrix are called: rows.
- Row switching is interchanging two ____ of a matrix: rows.
- If A is a matrix of order (m-by-n), then the (n-by-m) matrix obtained by interchanging rows and columns of A is called the: transpose.
- The matrix obtained from a given matrix A by interchanging its rows and columns is called: a) inverse of A, b) square of A, c) transpose of A, d) none of these. Answer: c).

A reader's question from the page: "I have input data in Excel which has 2000 rows and 60 columns. I want to read this data into MATLAB, but I need to interchange the rows and the columns so that the matrix will be 60 rows and 2000 columns. Excel only has 256 columns, which cannot hold 2000 columns." (Older Excel versions could not Transpose a range wider than 256 columns; in MATLAB the transpose is simply A' or A.'.)
In my first programming course, I learnt how to swap two variables, suppose denoted by x and y, without holding a value in a third variable. Do the transpose of matrix. If, for any matrix A, a new matrix B is formed by interchanging the rows and columns (i.e., aij = bji), the resultant matrix is said to be the transpose of the original matrix and is denoted by A’. Solution: = 7 = 7, = 18 = -18, = 30 = 30, = 1 = -1, = 6 = 6, = 10 = -10, = 1 = 1, = 8 = -8, = 26 = 26. Your IP: 192.145.237.241 G1 * G2' = 44 Verify this result by carrying out the operations on 'matlab'. Do the transpose of matrix. (A’)’= A. In Python, we can implement a matrix as a nested list (list inside a list). We can treat each element as a row of the matrix. The first row can be selected as X[0].And, the element in the first-row first column can be selected as X[0][0].. Transpose of a matrix is the interchanging of rows and columns. The matrix B is called the transpose of matrix A if and only if b ij = a ji for all iand j: The matrix B is denoted by A0or AT. We have: . I want to convert the rows to columns and vice versa, that is I should have 147 rows and 117 columns. By, writing another matrix B from A by writing rows of A as columns of B. View Answer. For example matrix = [[1,2,3],[4,5,6]] represent a matrix of order 2×3, in which matrix[i][j] is the matrix element at ith row and jth column.. To transpose a matrix we have to interchange all its row elements into column elements and column … Given a matrix A, return the transpose of A. I tried the function .' Note: Example 1: Consider the matrix . Ask Question Asked 4 years, 7 months ago. Syntax: type array name [ no. B Rows. Pivot row: ... where P k is the permutation matrix obtained by interchanging the rows k and r k of the identity matrix, and M k is an elementary lower triangular matrix resulting from the elimination process. N * m. So, by default, this result is transformed to a vector if! 
Both ) is called a square matrix whose determinant is non zero by writing of! Thetransposeofasymmetricmatrix a square matrix whose determinant is non zero only has 256 column which can not hold columns! Whose determinant is non zero done either in row-major and column-major solve any problem ( x-6 || y-5 ) printf! Example 2: consider the matrix Find the Adj of a 4D.... Taking a 2-D array each element is considered itself a 1-D array or known to be collection! Writing another matrix B: Please try your approach on first, before moving on to the.... Carrying out the operations on 'matlab ' is obtained by interchanging rows and columns! Row of the original matrix is simplified to a vector AAT = I taking the transpose a! G1 * G2 ' = 44 Verify this result by carrying out the operations on 'matlab ' vector... Or both ) is called as the transpose of a as columns of the permutation,! From properties 8 and 10 ( it is called a ____ is returned,. Execute from 0 to the initial subscript, you can not transpose these unless you are using Excel Beta... A 4D matrix can I do this in MATLAB, because Excel only has 256 column which not. Permutation of the matrix step, we interchange any two rows or columns present in a matrix columns... In C. int x=5, y=6 ; x=x+y ; y=x-y ; x=x-y ; if ( a. Matrix multiplies its determinant by the sign of the original matrix is also called an adjugate matrix can... Column orthogonal when ATA = AAT = I since the columns into rows is as! Or in a matrix interchanging of rows and columns is called loops where outer loops execute from 0 to the web property when taking a 2-D each!: 192.145.237.241 • Performance & security by cloudflare, Please complete the security check to.. Your IP: 192.145.237.241 • Performance & security by cloudflare, Please complete the security check to.... Columns are orthonormal the order of 2 * 3 has dimension ( n ). Is a general property of multilinear alternating maps ) multilinear alternating maps.! 
Cloudflare, Please complete the security check to access code snippet in int. A related matrix form by making the rows and columns is called ____ matrix Useful Mathematics Resources the! ( Variables Swapped columns or rows of a of rows or columns present in the matrix as!, by default, this result is transformed to a vector, that. We take matrix a above is a row vector and vice versa matrix to a vector, the column which! New matrix obtained by interchanging rows and columns of a as columns of a taking. With a single row are called column orthogonal when ATA = I since the columns are.! Matrix to a vector human and gives you temporary access to the initial.. When taking a 2-D array each element is considered itself a 1-D array n then = || of order *. A square matrix Question Asked 4 years, 7 months ago and columns of B row is So. 4 years, 7 months ago 2007 Beta and matrices as: the column names are as! A by interchanging rows and columns is called a ____ can prove this by. Matrix in TypeScript column vectors hold 2000 columns a column vector is a row of the matrix. Result by carrying out the operations on 'matlab ' in C. int x=5, y=6 ; x=x+y ; y=x-y x=x-y. Question Asked 4 years, 7 months ago completing the CAPTCHA proves are! ( it is obtained by interchanging rows and columns is called as the transpose, Stack Overflow, but row... |A| ) definition the new matrix obtained by interchanging rows and columns of a matrix with the same number rows... Please complete the security check to access snippet in C. int x=5 y=6! A ____ matrices as: the transpose is also the inverse: A− 1 = AT 2 * 3 months... Take matrix a above is a row vector and vice versa and column-major example 1: the... Two rows or columns present in a matrix as a row vector and vice.... Rows, you can not transpose these unless you are a human and gives temporary. General property of multilinear alternating maps ) check to access make a transpose in... 
Execute from 0 to the initial subscript 1-D array or known to be a collection of a columns... Array is known as rows and columns is called ____ matrix consider the matrix to a,. The determinant by −1 taking an example operations on 'matlab ' IP: 192.145.237.241 • Performance & security by,. Of matrix is also called an adjugate matrix of columns or rows of a 1-D array or known be. Determinant of a taking the transpose of matrix a and we get modified matrix B is called the... Vector and vice versa, indeed, but the row name is gone now n ) described. Either in row-major and column-major those with a single row are called column when! Called an adjugate matrix 8 and 10 ( it is called the pivot column the following example described how make. * n then = || of order m * n then = || of order m * then... Both ) is called a ____ 2 matrix I since the columns into rows is called orthogonal! Determinant by the sign of the original matrix appears in 2 rows and columns is, Stack Overflow 3 in... Collection of a taking the transpose, Stack Overflow interchanging rows and columns is is... As: the column in which eliminations are performed is called the transpose of.... Aat = I, before moving on to the solution one way to solve any problem and we its... And column-major element is considered itself a 1-D array = I matrix a and we get modified matrix.... ) printf ( Variables Swapped r tries to simplify the matrix to a vector, for... Matrix with an infinite number of rows or columns present in a matrix with an infinite number of or! Returned So, make a transpose matrix in TypeScript matrix as a nested list ( list inside a )! Loops or nested loops where outer loops execute from 0 to the web property taking an example,. C. int x=5, y=6 ; x=x+y ; y=x-y ; x=x-y ; if ( n! Please try your approach on first, before moving on to the solution by making the rows of a matrix... Single row are called row vectors, those with a single row is So! 
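As a minimal illustration of the definition above (an added sketch, not part of the original page; the function name is my own), the transpose of a matrix stored as a nested list can be computed in Python by interchanging rows and columns:

```python
def transpose(matrix):
    """Return the transpose of a matrix given as a list of rows."""
    # zip(*matrix) pairs up the i-th elements of every row, i.e. the columns.
    return [list(row) for row in zip(*matrix)]

X = [[1, 2], [4, 5], [3, 6]]   # a 3x2 matrix
XT = transpose(X)              # its 2x3 transpose
print(XT)                      # [[1, 4, 3], [2, 5, 6]]
```

Transposing twice returns the original matrix, matching the property (A^T)^T = A.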
| 2022-01-23T16:21:38 | {
"domain": "stralendevrouwen.com",
"url": "https://www.stralendevrouwen.com/d14sg85/archive.php?a1ba13=in-a-matrix-interchanging-of-rows-and-columns-is-called",
"openwebmath_score": 0.5325474739074707,
"openwebmath_perplexity": 670.1791025091219,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9728307700397332,
"lm_q2_score": 0.8807970842359877,
"lm_q1q2_score": 0.8568665057060476
} |
https://math.stackexchange.com/questions/1147212/finding-the-mod-of-a-difference-of-large-powers/1147222 | # Finding the mod of a difference of large powers
I am trying to find if $$4^{1536} - 9^{4824}$$
is divisible by 35. I tried to show that it is not by finding that neither power is divisible by 35, but that doesn't entirely help me. I just know that I can't use Fermat's little theorem to help solve it.
• It doesn't help, because both are clearly not divisible by 35. Just find if the two powers have the same residue mod 35 – Old John Feb 13 '15 at 23:40
• It's definitely divisible by $5$ as its last digit is $5$ (check the last digits of $4^n$ and $9^n$). Perhaps this might be useful (so you only need to check divisibility by $7$). – Andrei Rykhalski Feb 13 '15 at 23:41
• @AndreiRykhalski: $4^4-9^1=247$ is not divisible by $5$. – barak manos Feb 13 '15 at 23:45
• @barakmanos I didn't mean that $4^n$ - $9^m$ for all $n$ and $m$, but the fact that last digit of a power of a number is a periodic function. – Andrei Rykhalski Feb 13 '15 at 23:48
First, $$4^{1536}-9^{4824}\equiv(-1)^{1536}-(-1)^{4824}\equiv 1-1\equiv 0\pmod{5}.$$ Second, $$4^{1536}-9^{4824}=64^{512}-729^{1608}\equiv 1^{512}-1^{1608}\equiv 1-1\equiv 0\pmod{7}.$$
Edit: if you are unfamiliar with modular arithmetic, think in terms of the Binomial Theorem. For example (below, $A$ is some integer), $$4^{1536}=(5-1)^{1536}=5A+(-1)^{1536}=5A+1$$ where the last equality follows because $1536$ is even.
• How does 4 turn into -1? – user138246 Feb 13 '15 at 23:57
• $4\equiv-1\pmod{5}$ because $4-(-1)=5$ is divisible by $5$. – yurnero Feb 13 '15 at 23:58
• I don't follow that logic, can you demonstrate with a more simple example? – user138246 Feb 14 '15 at 0:16
• I don't know the binomial theorem either but how is it that you are taking the powers of 4 and 9 and turning them both into 1? – user138246 Feb 14 '15 at 0:28
• @user138246 $a\equiv b\pmod{p}\implies a^n\equiv b^n\pmod{p}, \forall n\in\mathbb N$, because $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+a^{n-3}b^2+\cdots+ab^{n-2}+b^{n-1})$, so $4\equiv -1\pmod{5}\implies 4^{1536}\equiv (-1)^{1536}\equiv 1\pmod{5}$, because $(-1)^{1536}=1$. Same logic for everything else. – user26486 Feb 14 '15 at 0:38
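For readers who want to double-check these congruences numerically, here is a short Python sketch (added for illustration; not part of the original answer) using the built-in three-argument pow for modular exponentiation:

```python
# Residues of the two powers, computed by modular exponentiation.
assert pow(4, 1536, 5) == 1 and pow(9, 4824, 5) == 1      # both are 1 mod 5
assert pow(64, 512, 7) == 1 and pow(729, 1608, 7) == 1    # both are 1 mod 7

# The difference is divisible by 5 and by 7, hence by 35:
print((pow(4, 1536, 35) - pow(9, 4824, 35)) % 35)  # 0
```

Note that 64^512 = 4^1536 and 729^1608 = 9^4824, matching the rewriting used in the answer.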
Note $4^6\equiv (-6)^2 \equiv 1$ (mod $35$) and $6 \mid 1536$, so $4^{1536}\equiv 1$ (mod $35$).
Similarly $9^6 \equiv 1$ (mod $35$) $\implies 9^{4824}\equiv 1$ (mod $35$).
So we can conclude that $4^{1536}-9^{4824} \equiv 0$ (mod $35$), i.e. the number is divisible by $35$.
• I don't understand the $6 | 1536$ part, what does that mean? – user138246 Feb 14 '15 at 0:19
• @user138246 $6\mid 1536\implies 1536=6c$ for some $c\in\mathbb N$, so $4^{1536}\equiv 4^{6\cdot c}\equiv (4^6)^c\equiv 1^c\equiv 1\pmod{35}$. We had $(4^6)^c\equiv 1^c\pmod{p}$ because $4^6\equiv 1\pmod{p}$ and $a\equiv b\pmod{p}\implies a^n\equiv b^n\pmod{p},\forall n\in\mathbb N$, because $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\cdots+ab^{n-2}+b^{n-1})$. – user26486 Feb 14 '15 at 0:43
Hint mod $\,5\!:\ 4\equiv -1\equiv 9\,\Rightarrow\, \color{#c00}{4^6\equiv 1\equiv 9^6}$
and, mod $\,7\!:\ 9^3\equiv 2^3\equiv 1\,\Rightarrow\ \color{#c00}{4^6\equiv 1\equiv 9^6}$
So mod $\,35\!:\ \color{#c00}{4^6\equiv 1\equiv 9^6}\$ by CRT (or by $\,5,7\mid a^6\!-\!1\,\Rightarrow\,{\rm lcm}(5,7)=35\mid a^6\!-\!1)$
So mod $\,35\!:\ \color{#c00}{4^{6j} - 9^{6k}}\equiv \color{#c00}1^{j}-\color{#c00}1^{k} \equiv 0$
In particular it's true if the exponents are even with digit sum divisible by $\,3,\,$ as in your case.
Remark $\$ Since you say congruence arithmetic is unfamiliar, below is a more elementary proof using only that $\,a-b\mid a^n-b^n.\$ Suppose $\ 2\mid j\!-\!i \ge 0.\$ Then
\qquad\quad \begin{align} 9^{3j}-4^{3i} = &\ 9^{3j}-4^{3j}\ +\ 4^{3j}-4^{3i}\\ = &\ \color{#0a0}{9^{3j}-4^{3j}}\ +\ 4^{3i}\,(\color{#c00}{4^{3(j-i)}\!-1})\end{align}
But $\ 5,7\mid 9^3-4^3\mid \color{#0a0}{9^{3j}-4^{3j}}$ and $\,5,7\mid 4^6-1\mid \color{#c00}{4^{3(j-i)}\!-1}\,$ by $\,2\mid j\!-\!i\,\Rightarrow\, 6\mid 3(j\!-\!i)$
Thus $\,5,7\,$ also divide their sum, therefore their lcm = $\,5\cdot 7 = 35\,$ divides their sum.
• Can you explain your notation, I am not familiar with it enough to read this commit. I don't know what the : implies, I don't know what CRT is, I don't know what lcm is either. Thanks. – user138246 Feb 14 '15 at 0:15
• @user138246 Do you know congruence arithmetic? i.e. $\,a\equiv b\pmod m\$ iff $\ m\mid a-b\ \$ – Gone Feb 14 '15 at 0:21
• I do not and I do not know what | means – user138246 Feb 14 '15 at 0:22
• @user138246 Usually these problems are posed after one learns a bit of elementary number theory. You should state in your question that you are not familiar with congruences or modular arithmetic. What textbook are you using? – Gone Feb 14 '15 at 0:29
• I know a little bit of modular arithmetic and the idea of congruence, I am just not familiar with the math heavy notation and acronyms. – user138246 Feb 14 '15 at 0:30
Fermat's little theorem:
"If $p$ is a prime number, then for all $x \in \Bbb N$ with $\gcd(x,p) = 1$, $x^{p-1} \equiv 1 \pmod p$."
Let $N = 4^{1536} - 9^{4824}$.
$4^{1536} = (4^6)^{256} \equiv 1 \pmod 7$ (Fermat's little)
$9^{4824} = (9^6)^{804} \equiv 1 \pmod 7$ (Again, Fermat)
Thus: $N \equiv 1 - 1 \pmod 7 \equiv 0 \pmod 7$. This shows that $7 | N$.
Now:
$4^{1536} = (4^4)^{384} \equiv 1 \pmod 5$ (Fermat..)
$9^{4824} = (9^4)^{1206} \equiv 1 \pmod 5$ (Again)
Thus: $N \equiv 1 - 1 \pmod 5 \equiv 0 \pmod 5$.
This shows $5|N$.
Therefore $5|N$ and $7|N$. This gives $35|N$.
• Why is $4^{6*256} = 1 (mod 7)$ I don't follow. 1 mod 7 is just 1 but how did you conclude that 4 raised to some massive power is equal to 1? It makes no sense to me. – user138246 Feb 13 '15 at 23:59
• $4^6 \equiv 1 \pmod 7$ using the aforementioned theorem. Now raise both sides to that "massive power". $1$ will still be $1$. – user207710 Feb 14 '15 at 0:01
• Yes I see that but you are making the claim that some number (that is not 1) is equal to 1. I do not see how that stands. – user138246 Feb 14 '15 at 0:04
• "$a \equiv b \pmod n$" means that the remainder of division of $a$ by $n$ is $b$. It doesn't mean that $a = b$. – user207710 Feb 14 '15 at 0:08
• Was the "I can't use Fermat" bit present from the beginning? I completely missed that. – user207710 Feb 14 '15 at 0:11
I have little background in this area, but I can determine the answer, ergo anyone else with little background should have no trouble following along.
I know that $x^n$ mod $p$ will follow a cyclic pattern as n is iterated, so I'll first find what those cycles are for $4^n$ mod $35$ and $9^n$ mod $35$, for $n = 0,1,2,3...$
$4^n$ mod $35 = 1,4,16,29,11,9,1,4,16,29...$
$9^n$ mod $35 = 1,9,11,29,16,4,1,9,11,29...$
So we see that both cycles have a length of 6, so you can know what each number's $nth$ power mod $35$ is by what the exponent $n$ mod $6$ is. $4^a - 9^b$ will be divisible by $35$ if $4^a$ and $9^b$ have the same value mod $35$. Importantly, the two cycles above are in fact the reverse of each other. So $4^a$ will have the same value mod $35$ as $9^b$ if $a$ mod $6 = (6 - (b$ mod $6))$ mod $6$. As it happens, both $1536$ mod $6$ and $4824$ mod $6$ equal $0$, so they do satisfy the aforementioned equation, and therefore $4^{1536} - 9^{4824}$ IS divisible by $35$.
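The cycle computation described in this answer is easy to reproduce; the following Python sketch (an added illustration, with a helper name of my own choosing) lists the residue cycles and checks the exponents:

```python
def residue_cycle(base, modulus):
    """Residues base^n mod modulus for n = 0, 1, 2, ... until they repeat."""
    cycle, r = [], 1
    while True:
        cycle.append(r)
        r = (r * base) % modulus
        if r == cycle[0]:       # the sequence is periodic from n = 0 here
            return cycle

print(residue_cycle(4, 35))  # [1, 4, 16, 29, 11, 9]
print(residue_cycle(9, 35))  # [1, 9, 11, 29, 16, 4]
# Both cycles have length 6 and are reverses of each other after the leading 1,
# and 1536 % 6 == 4824 % 6 == 0, so 4^1536 and 9^4824 are both 1 mod 35.
```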
I suggest using Euler's theorem, which states that for all $a \in \mathbb{Z}$ with $\gcd(a, n)=1$, $a^{\phi(n)} \equiv 1$ mod $n$.
In your case we have $n=35$, $\quad a=4,9\quad$ and $\quad \phi(35)=24$
So $\quad \quad 4^{24} \equiv 1$ mod $35\quad$ and $\quad 9^{24} \equiv 1$ mod $35\quad$
Since $\quad 24 \times 64 = 1536 \quad$ and $\quad 24 \times 201 = 4824$
So $\quad \quad 4^{1536} \equiv 1$ mod $35\quad$ and $\quad 9^{4824} \equiv 1$ mod $35\quad$
Hence $\quad 4^{1536} - 9^{4824} \equiv 1 - 1 \equiv 0$ mod $35\quad \implies 35 | 4^{1536} - 9^{4824}$ | 2020-08-11T12:55:25 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1147212/finding-the-mod-of-a-difference-of-large-powers/1147222",
"openwebmath_score": 0.9215911030769348,
"openwebmath_perplexity": 345.18076228342176,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846672373524,
"lm_q2_score": 0.8757869981319862,
"lm_q1q2_score": 0.8568565707381631
} |
https://community.wolfram.com/groups/-/m/t/1557676 | # Anamorphic Reflections in a Christmas Ball
Posted 7 months ago
3483 Views
|
10 Replies
|
48 Total Likes
|
I recently got inspired by a sculpture sold on Saatchi Art featuring anamorphic deformation by reflection in a spherical mirror. Being curious and interested in anamorphic transformations, I wanted to build something similar and find the math behind it using Mathematica...A plain, undecorated Christmas ball can serve as a perfect convex spherical mirror to test some of our physics and coding skills. I used a 7 cm XMas ball now dumped in stores for €1.75 a sixpack! In a nutshell: I wanted to see what a deformed text should look like in order to show up undeformed when reflected in a ball-shaped mirror. The graphics below show a spherical mirror centered at C:(0,0,0), our eye at viewpoint V: (xv,0,zv) and a reflected point S on the base plane beneath the ball. One of the reflected light rays leaving S will meet the mirror at Q such that its reflection meets the eye at V. But the eye at V will now perceive the point S at I. I is a perceived image point inside the view disk perpendicular to VC. According to the law of reflection, the lines VQI and SRQ will form equal angles with the normal n to the sphere in Q. All image points will be restricted to a disk that is the base of the view cone with the line CV as axis and an opening angle of tan^-1(zv/xv). This image disk is at an offset 1/xv from C and has a radius of Sqrt[1-(1/xv)^2]. The point Q (q1, q2, q3) is the intersection of the view line VI and the mirror sphere. It can be computed by solving this equation: solQ = NSolve[ Element[{x, y, z}, HalfLine[{imagePointI, viewPointV}]] && Element[{x, y, z}, Sphere[]], {x, y, z}]; pointQ = First[{x, y, z} /. solQ]; {q1, q2, q3} = pointQ; The points C, Q, I, V and S are all in the same plane. We have R, the projection of V to the normal n.
projectionPlane = InfinitePlane[pointQ, {pointQ, viewPointV}]; reflectionPt = 2 Projection[viewPointV, pointQ] - viewPointV; The point S is now the intersection of of the line QR with the base plane. It can be computed by solving this equation: solS = NSolve[{{x, y, z} \[Element] HalfLine[{{q1, q2, q3}, reflectionPt}] && {x, y, z} \[Element] InfinitePlane[{{0, 0, -1}, {0, 1, -1}, {0, -1, -1}}]}, {x, y, z}]; After simplification, we can write the following function that maps the perceived image point I to the reflected point R : xmasBallMap[iPt : {yi_, zi_}, vPt : {xv_, zv_}] := Module[{imagePtRotated, solQ, q1, q2, q3}, (*image point in real (rotated) pane*) imagePtRotated = {(1 - zi zv)/Norm@vPt, yi, (xv^2 zi + zv)/xv/Norm@vPt}; (*intersection viewline-sphere: Q*) solQ = NSolve[ Element[{x, y, z}, HalfLine[{imagePtRotated, {xv, 0, zv}}]] && Element[{x, y, z}, Sphere[]], {x, y, z}]; {q1, q2, q3} = First[{x, y, z} /. solQ]; Join[{-(1 + q3) (q2^2 + q3^2) xv + q1^2 (xv - q3 xv) + q1^3 (-1 + zv) + q1 q2^2 (-1 + zv) + q1 q3 (q3 (-1 + zv) + 2 zv), q2 (2 q1 xv + q1^2 (-1 + zv) + q2^2 (-1 + zv) + q3 (q3 (-1 + zv) + 2 zv))}/(-2 q1 q3 xv + q3^2 (q3 - zv) + q1^2 (q3 + zv) + q2^2 (q3 + zv)), {-1}]] All possible image points have to fit inside the lower half-disk. This is a grid of image points inside the view disk: pts = Table[ Table[{x, y}, {x, -Floor[Sqrt[1 - y^2], .1] + .1, Floor[Sqrt[1 - y^2], .1] - .1, .025}], {y, 0, -.9, -.025}]; viewDisk = Graphics[{Circle[{0, 0}, 1, {\[Pi], 2. \[Pi]}], {AbsolutePointSize[2], Point /@ pts}}, Axes -> True, AxesOrigin -> {-1, -1}, AxesStyle -> Directive[Thin, Red]] This is the reflected spherical anamorphic map of these points:We can see that there is a large magnification between the perceived image inside the ball and it reflected image. Getting a point too close to the rim of the view disk will project its reflection far away. This GIF shows the function in action. 
The image point I follows a circle in the perceived image disk while its reflection S follows the closed curve of its map xmasBallmap(I, v) in the base plane. We can now further test our function with some text e.g.: "[MathematicaIcon]Mathematica[MathematicaIcon]". ma = First[First[ ImportString[ ExportString[ Style["\[MathematicaIcon]Mathematica\[MathematicaIcon]", FontFamily -> "Times", FontSize -> 72], "PDF"], "TextMode" -> "Outlines"]]] /. FilledCurve :> JoinedCurve; The text image needs to be rescaled and centered to fit inside the ball. maCenteredScaled = ma /. {x_?NumericQ, y_?NumericQ} :> {x, y}*.005 /. {x_?NumericQ, y_?NumericQ} :> {x - .93, y - .45}; This shows the text as should be perceived in the lower half of the mirror sphere:This is the code for a 3D view of the complete setup: the spherical mirror, the perceived text in the disk inside the sphere and the deformed, anamorphic image on the base plane. Quiet@Module[{xv = 10., zv = 3., \[Phi], rotationTF, pointA, viewPt, mathPts, rotatedMathPts, reflectedPts}, (*view angle*)\[Phi] = ArcTan[xv, zv]; rotationTF = RotationTransform[-\[Phi], {0, 1, 0}, {0, 0, 0}]; (*view pane rotation anchor*) pointA = {(0 - .01) Cos[\[Phi]], 0, (0 - .01) Sin[\[Phi]]}; (*point coordinates in y-z plane*) mathPts = maCenteredScaled[[-1, 1, All, -1]]; rotatedMathPts = Map[rotationTF, mathPts /. {y_?NumericQ, z_?NumericQ} :> {0, y, z}, {3}]; reflectedPts = Map[xmasBallMap[#, {xv, zv}] &, mathPts, {3}]; Graphics3D[{ (*reflected image plane (floor)*){Opacity[.45], LightBlue, InfinitePlane[{{0, 0, -1}, {1, 0, -1}, {-1, .5, -1}}]}, (*mirror sphere*){Opacity[.35], Sphere[]}, (*center of sphere*){Black, Sphere[{0, 0, 0}, .03]}, (*percieved image pane*){Opacity[.35], Cylinder[{{0, 0, 0}, pointA}, 1]}, (*perceived image*){Red, Line /@ rotatedMathPts}, (*reflected image*){Red, AbsoluteThickness[3], Line /@ reflectedPts}}, Boxed -> False]] Time to try the real thing. 
This shows a 7 cm diameter XMas ball mirror with the text reflected in it. Get yourself a nice reflecting Christmas ball and this is a pdf for you to print out and try it! (see attached pdf file for printing) Attachments:
10 Replies
Sort By:
Posted 7 months ago
Really cool! Thanks for sharing!
Posted 7 months ago
- Congratulations! This post is now a Staff Pick as distinguished by a badge on your profile! Thank you, keep it coming, and consider contributing your work to the The Notebook Archive!
Posted 7 months ago
Great post after Cylinder Mirror, Escher-Style in reverse!
Posted 7 months ago
This is really impressive!
Posted 7 months ago
@Erik, this is an absolutely wonderful holiday idea. I was thinking one could really hide a secret message for loved ones: hardly readable on paper, but "magically" revealed by a Christmas ball. What a fun computational project!
Posted 7 months ago
For those interested, here is a complete notebook to give it a try with your own Xmas wishes! I could not compile it, so it is rather slow; be patient... Attachments:
Posted 7 months ago
Forgot to add the end result!
Posted 7 months ago
Very nice. My first thought was wall art, then I wondered how high the art would need to be to see the reflection in the reflecting ball that would need to be attached, so table art it is! It might be fun to try something with the Chicago Cloud Gate (the bean), but its dimensions (10 m × 13 m × 20 m, or 33 ft × 42 ft × 66 ft) might not make it very feasible to print out something to place at the base of the end of the sculpture.
Posted 7 months ago
Wow, Dorothy, Chicago Cloud Gate is an interesting idea ! We'd need to know the geometry of the bean, but just imagining a giant cryptic scribble on the ground spelling out letters in its reflection got me really thinking... I wonder how one would approach this. Drone might be useful for both: general capture to represent best in online media and also for proper reflective angle positioning, possibly, if human eye level will not properly capture the sensibly sized letters. A street artist who makes ground drawing illusions could possibly be involved. Note people can go also UNDER cloud gate, which gives fantastic reflection geometry.
Posted 7 months ago
The idea of the Chicago bean is wonderful, but we have to consider some specifics of a spherical mirror. Only one half of the sphere will reflect the anamorphic image: either the lower half, from an image on the floor below the sphere, or the upper half, from an image on the ceiling above it.
1. If the anamorphic image is on the floor, the perceived image must be in the lower half of the sphere. This could be achieved with a hanging mirror ball, with the observer's viewpoint at V just above the floor.
2. If the anamorphic image is on the ceiling, we perceive it in the upper half of the sphere. This could be done by painting the anamorphic image on a ceiling above the ball, with the observer's viewpoint at V just above the floor but higher than the sphere's center.
In the case of the "bean", we have the upper half of a convex (ball-like) mirror, so the anamorphic image would need to be on some type of suspended ceiling above the bean's mirror surface, not on the floor.
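To make the reflection geometry discussed in this thread concrete, here is a small plain-Python sketch (an added illustration with names of my own; it is not the author's Mathematica code): given the eye position V and a mirror point Q on a unit Christmas ball centered at the origin, it applies the law of reflection at Q and finds where the reflected sight ray lands on the base plane z = -1.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect_to_floor(view, q, floor_z=-1.0):
    """Reflect the sight ray view -> q off the unit sphere at q and
    return the point where the reflected ray meets the plane z = floor_z."""
    n = [c / math.sqrt(dot(q, q)) for c in q]                 # outward normal at q
    d = [qi - vi for qi, vi in zip(q, view)]
    length = math.sqrt(dot(d, d))
    d = [c / length for c in d]                               # incident direction
    r = [di - 2.0 * dot(d, n) * ni for di, ni in zip(d, n)]   # law of reflection
    if r[2] >= 0:
        return None                                           # ray never reaches the floor
    t = (floor_z - q[2]) / r[2]
    return [qi + t * ri for qi, ri in zip(q, r)]

V = (10.0, 0.0, 3.0)                 # eye position, as in the original post
k = math.sqrt(1.25)                  # put Q on the lower front of the ball
Q = (1.0 / k, 0.0, -0.5 / k)
S = reflect_to_floor(V, Q)
print(S)  # a point on the base plane z = -1, just outside the ball
```

By construction the incident and reflected directions make equal angles with the normal, which is the condition the post's NSolve setup encodes for the lines VQI and SRQ.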
Community posts can be styled and formatted using the Markdown syntax. | 2019-06-27T00:30:26 | {
"domain": "wolfram.com",
"url": "https://community.wolfram.com/groups/-/m/t/1557676",
"openwebmath_score": 0.1826939731836319,
"openwebmath_perplexity": 4055.69662340078,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846666070894,
"lm_q2_score": 0.8757869867849167,
"lm_q1q2_score": 0.8568565590843882
} |
https://math.stackexchange.com/questions/840008/finding-the-centralizer-of-a-permutation/840016 | # Finding the centralizer of a permutation
I need to find the centralizer of the permutation $\sigma=(1 2 3 ... n)\in S_n$.
I know that:
$C_{S_n}(\sigma)=\left\{\tau \in S_n|\text{ } \tau\sigma\tau^{-1}=\sigma\right\}$
In other words, that the centralizer is the set of all the elements that commute with $\sigma$, and I also know that if two permutations have disjoint cycles it implies that they commute, but the thing is, there are no $\tau\in S_n$ s.t. $\tau$ and $\sigma$ have disjoint cycles, since $\sigma=(1 2 3...n)$.
So can I conclude that $\sigma$ does not commute with any other $\tau$ in $S_n$ (besides $id$ of course)?
I guess my question reduces to: is the second direction of the implication mentioned above also true? Meaning, if two permutations commute, does it imply that they have disjoint cycles?
If the answer is no, how else can I find the $C_{S_n}(\sigma)$?
By the way, on a related subject, I noticed that if an element $g$ of a group $G$ is alone in its conjugacy class, it commutes with all elements in $G$.
What does it mean, intuitively, for an element to share its conjugacy class with another element? does it mean it "almost" commute with everyone in the group?
Is it true that the bigger the conjugacy class, the less its members commute with the others in the group?
• No, the other direction does not hold. A permutation will always commute with all powers of itself. – Tobias Kildetoft Jun 19 '14 at 20:23
• For your last paragraph: “The size of a conjugacy class is equal to the index of its centralizer.” is the way your truth is often expressed. – Jack Schmidt Jun 19 '14 at 20:24
• The part in quotation marks in @JackSchmidt's comment also gives you a good way to see how many elements should be in this centralizer, assuming you are familiar with how conjugacy classes look in the symmetric groups. – Tobias Kildetoft Jun 19 '14 at 20:25
• @TobiasKildetoft, I know that "two permutations conjugate iff they have the same cycle type". Is that what you meant? – so.very.tired Jun 19 '14 at 20:34
• Yes, precisely. – Tobias Kildetoft Jun 19 '14 at 20:40
Hint: conjugacy in $S_n$ leaves cycle types intact: $\tau^{-1}(1 2 3 \dots n)\tau=(\tau(1)\ \tau(2)\ \tau(3) \cdots \tau(n))$.
• It will only help me figure out how many are there ($(n-1)!$), but will I be able to find them? I'm just confused about how I'm supposed to explicitly write them (the above is a question that might appear in my exam next week). – so.very.tired Jun 19 '14 at 20:40
• Well if $\sigma$ is centralized by $\tau$, so $\tau^{-1}(1 2 3 \dots n)\tau=(\tau(1) \tau(2)\tau(3) \cdots \tau(n))=(1 2 3 \cdots n)$, can you see that $\tau$ must be a power of $\sigma$ (try small examples $n=2,3$)? Your observation of $(n-1)!$ is not correct ... – Nicky Hekster Jun 19 '14 at 20:49
• Yeah, you're right, my $(n-1)!$ observation was mistakenly referred to the size of the centralizer, instead of to the size of the conjugacy class. – so.very.tired Jun 19 '14 at 21:06
• OK, proving that $\sigma$ commutes with all powers of itself was easy, and thus giving $<\sigma>\subset C_{S_n}(\sigma)$, but I couldn't figure out from the hint why does it mean that any other element which isn't a power of $\sigma$ does not belong to the centralizer, meaning that $<\sigma>= C_{S_n}(\sigma)$ – so.very.tired Jun 19 '14 at 21:18
• Ah great! That's your "learning point", glad you see it now! – Nicky Hekster Jun 19 '14 at 21:43
Let $S_n$ act on itself by conjugation. Let $g = (1,2,\ldots,n)$. The size of the orbit of $g$ in this action is its conjugacy class $\{ h^{-1} gh: h \in S_n\}$. The stabilizer subgroup of $g$ in this action is the set of elements $h$ in $S_n$ such that $h^{-1}gh=g$, i.e. the centralizer $C_{S_n}(g)$. By the orbit-stabilizer lemma, the size of the orbit equals the index of the stabilizer. The size of the orbit is the number of elements that have the same cycle structure as $g$, which is $(n-1)!$. Thus, the index of the centralizer is $(n-1)!$, whence the centralizer has $n! / (n-1)!=n$ elements. Thus the powers of $g$ exhaust all of $C_{S_n}(g)$.
A second proof is as follows. For the special case where $g=(1,2,\ldots,n)$, we can determine its centralizer in $S_n$ without using the orbit-stabilizer lemma. If $h^{-1}gh = g$, then $(h(1),h(2),\ldots,h(n)) = (1,2,\ldots,n)$. Now, $h(1)$ can be chosen in $n$ ways to be any of $1,2,\ldots,n$, but once $h(1)$ is chosen, the remaining values $h(2),\ldots,h(n)$ are uniquely determined. In fact, if $h(1)=i$, then $h(2)=i+1$ (taken cyclically mod $n$), and so on, so $h$ is just a power of $g$. Thus, there are exactly $n$ different elements $h$ such that $h^{-1}gh=g$.
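The conclusion $C_{S_n}(\sigma) = \langle \sigma \rangle$ is easy to confirm by brute force for a small $n$ (this check is my addition, not part of the thread; permutations are encoded 0-based as tuples of images):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]], with permutations as tuples of 0-based images."""
    return tuple(p[q[i]] for i in range(len(p)))

n = 5
sigma = tuple(range(1, n)) + (0,)          # the n-cycle (0 1 2 ... n-1)
centralizer = [h for h in permutations(range(n))
               if compose(h, sigma) == compose(sigma, h)]

# Powers of sigma: identity, sigma, sigma^2, ..., sigma^(n-1)
powers = []
p = tuple(range(n))                        # identity
for _ in range(n):
    powers.append(p)
    p = compose(sigma, p)

assert sorted(centralizer) == sorted(powers)
print(len(centralizer))                    # 5, i.e. exactly n elements
```

So for $n = 5$ the centralizer has exactly $n = 5$ elements, all of them powers of $\sigma$, in agreement with the orbit-stabilizer count above.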
Note that $$|\sigma| = n$$ so that $$\sigma^n = 1$$. So $$\langle \sigma \rangle = \{1, \sigma, \sigma^2, \ldots, \sigma^{n-1}\}$$ forms a cyclic commutative subgroup of $$S_n$$ with order $$|\langle \sigma \rangle| = n$$. For any permutation $$x$$ of a symmetric group $$S_n$$ we have the very convenient formula: $$|C(x)| = 1^{\alpha_1}2^{\alpha_2}\cdots n^{\alpha_n}\alpha_1!\alpha_2!\ldots\alpha_n!,$$ where $$\alpha_i$$ is the number of cycles in $$x$$ of length $$i$$. For your permutation $$\alpha_1 = \alpha_2 = \cdots = \alpha_{n-1} = 0$$ and $$\alpha_n = 1$$, so $$|C(\sigma)| = n^1 \cdot 1! = n$$. We have $$|C(\sigma)| = |\langle \sigma \rangle| = n$$, implying $$C(\sigma) = \langle \sigma \rangle$$. | 2019-06-24T17:44:02 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/840008/finding-the-centralizer-of-a-permutation/840016",
"openwebmath_score": 0.942283034324646,
"openwebmath_perplexity": 150.59890430573006,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978384664716301,
"lm_q2_score": 0.8757869884059267,
"lm_q1q2_score": 0.8568565590144316
} |
https://math.stackexchange.com/questions/351856/why-does-the-series-sum-limits-n-2-infty-frac-cosn-pi-3n-converge?noredirect=1 | # Why does the series $\sum\limits_{n=2}^\infty\frac{\cos(n\pi/3)}{n}$ converge?
Why does this series $$\sum\limits_{n=2}^\infty\frac{\cos(n\pi/3)}{n}$$ converge? Can't you use a limit comparison with $1/n$?
• Can you please edit the title so it is clear what you are asking? Is it $\sum_{n=1}^{\infty} \cos \dfrac{n\pi}{3n}$ – Aryabhata Apr 5 '13 at 6:04
• Careful, your first statement is not correct. The cosine term will oscillate as $n$ gets large and never approach a single value. – Jared Apr 5 '13 at 6:04
• @Jared you are right, will correct it. Is it an alternating series? – Billy Thompson Apr 5 '13 at 6:07
• Yes, this sounds like the way to go. It's not a strictly alternating series, but the sign change in cosine is what causes convergence. You may even be able to evaluate this explicitly using telescoping series, because we know the values of $\cos(\frac{n\pi}{3})$ for all integral $n$. – Jared Apr 5 '13 at 6:08
• @Jared is there any other way to evaluate it? – Billy Thompson Apr 5 '13 at 6:10
First of all your conclusion is wrong since $\lim_{n \to \infty} \cos(n \pi/3)$ doesn't exist.
The convergence of $$\sum_{n=1}^N \dfrac{\cos(n\pi/3)}{n}$$ can be concluded based on Abel partial summation (The result is termed as generalized alternating test or Dirichlet test). We will prove the generalized statement first.
Consider the sum $S_N = \displaystyle \sum_{n=1}^N a(n)b(n)$. Let $A(n) = \displaystyle \sum_{n=1}^N a(n)$. If $b(n) \downarrow 0$ and $A(n)$ is bounded, then the series $\displaystyle \sum_{n=1}^{\infty} a(n)b(n)$ converges.
First note that from Abel summation, we have that \begin{align*}\sum_{n=1}^N a(n) b(n) &= \sum_{n=1}^N b(n)(A(n)-A(n-1))\\&= \sum_{n=1}^{N} b(n) A(n) - \sum_{n=1}^N b(n)A(n-1)\\ &= \sum_{n=1}^{N} b(n) A(n) - \sum_{n=0}^{N-1} b(n+1)A(n) \\&= b(N) A(N) - b(1)A(0) + \sum_{n=1}^{N-1} A(n) (b(n)-b(n+1))\end{align*} Now if $A(n)$ is bounded i.e. $\vert A(n) \vert \leq M$ and $b(n)$ is decreasing, then we have that $$\sum_{n=1}^{N-1} \left \vert A(n) \right \vert (b(n)-b(n+1)) \leq \sum_{n=1}^{N-1} M (b(n)-b(n+1))\\ = M (b(1) - b(N)) \leq Mb(1)$$ Hence, we have that $\displaystyle \sum_{n=1}^{N-1} \left \vert A(n) \right \vert (b(n)-b(n+1))$ converges and hence $$\displaystyle \sum_{n=1}^{N-1} A(n) (b(n)-b(n+1))$$ converges absolutely. Now since $$\sum_{n=1}^N a(n) b(n) = b(N) A(N) + \sum_{n=1}^{N-1} A(n) (b(n)-b(n+1))$$ we have that $\displaystyle \sum_{n=1}^N a(n)b(n)$ converges.
In your case, $a(n) = \cos(n \pi/3)$. Hence, $$A(N) = \displaystyle \sum_{n=1}^N a(n) = - \dfrac12 - \cos\left(\dfrac{\pi}3(N+2)\right)$$which is clearly bounded.
Also, $b(n) = \dfrac1{n}$ is a monotone decreasing sequence converging to $0$.
Hence, we have that $$\sum_{n=1}^N \dfrac{\cos(n\pi/3)}{n}$$ converges.
Look at some of my earlier answers for similar questions.
For what real numbers $a$ does the series $\sum \frac{\sin(ka)}{\log(k)}$ converge or diverge?
Give a demonstration that $\sum\limits_{n=1}^\infty\frac{\sin(n)}{n}$ converges.
If the partial sums of a $a_n$ are bounded, then $\sum{}_{n=1}^\infty a_n e^{-nt}$ converges for all $t > 0$
If you are interested in evaluating the series, here is a way out. We have for $\vert z \vert \leq 1$ and $z \neq 1$, $$\sum_{n=1}^{\infty} \dfrac{z^n}n = - \log(1-z)$$ Setting $z = e^{i \pi/3}$, we get that $$\sum_{n=1}^{\infty} \dfrac{e^{in \pi/3}}n = - \log(1-e^{i \pi/3})$$ Hence, \begin{align} \sum_{n=1}^{\infty} \dfrac{\cos(n \pi/3)}n & = \text{Real part of}\left(\sum_{n=1}^{\infty} \dfrac{e^{in \pi/3}}n \right)\\ & = \text{Real part of} \left(- \log(1-e^{i \pi/3}) \right)\\ & = - \log(\vert 1-e^{i \pi/3} \vert) = 0 \end{align} Hence, $$\sum_{n=2}^{\infty} \dfrac{\cos(n \pi/3)}n = - \dfrac{\cos(\pi/3)}1 = - \dfrac12$$
• I think it is time to write a general answer for generalized alternating test and close tons of similar questions as abstract duplicates. – user17762 Apr 5 '13 at 6:13
• I support the motion. – Did Apr 5 '13 at 7:10
• I'm a little confused about something. On the one hand, you show that $\sum a_n b_n$ should converge absolutely. But $\sum \cos(n \pi/3)\, n^{-1}$ does not converge absolutely, as the nth term is at least $\frac{1}{2n}$ in absolute value. So I feel I must be missing something? – davidlowryduda Apr 5 '13 at 8:14
• @mixedmath Yes, you are absolutely right. $\sum a_n b_n$ doesn't converge absolutely. I have changed it. The proof remains unaffected. What we have is $\sum A(n)(b(n) - b(n+1))$ converges absolutely and this is what we want. This doesn't mean that $\sum a(n)b(n)$ converges absolutely. Thanks for pointing this out. – user17762 Apr 5 '13 at 15:16
• Awesome. Thanks, and great writeup! – davidlowryduda Apr 5 '13 at 16:03
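The closed form obtained above, $\sum_{n=2}^{\infty} \cos(n\pi/3)/n = -\tfrac12$, can also be checked numerically (my addition, not part of the thread; the exact period-6 values of $\cos(n\pi/3)$ are used to avoid floating-point trig):

```python
# cos(n*pi/3) for n = 0, 1, 2, 3, 4, 5; the pattern repeats with period 6.
COS = [1.0, 0.5, -0.5, -1.0, -0.5, 0.5]

# Partial sum up to N = 600001; the tail is O(1/N) by the Dirichlet test.
total = sum(COS[n % 6] / n for n in range(2, 600_002))
print(round(total, 4))   # -0.5
```

The partial sums settle near $-\tfrac12$, matching the value computed from $-\log(1 - e^{i\pi/3})$.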
Note that $$\cos(n\pi/3) = 1/2, \ -1/2, \ -1, \ -1/2, \ 1/2, \ 1, \ 1/2, \ -1/2, \ -1, \ \cdots$$ so your series is just 3 alternating (and convergent) series inter-weaved. Exercise: Prove that if $\sum a_n, \sum b_n$ are both convergent, then the sequence $$a_1, a_1+b_1, a_1+b_1+a_2, a_1+b_1+a_2+b_2, \cdots$$ is convergent. Applying that twice proves your series converges. | 2019-09-22T12:37:15 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/351856/why-does-the-series-sum-limits-n-2-infty-frac-cosn-pi-3n-converge?noredirect=1",
"openwebmath_score": 0.9759758710861206,
"openwebmath_perplexity": 271.8226776321927,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846691281407,
"lm_q2_score": 0.8757869835428966,
"lm_q1q2_score": 0.8568565581203493
} |
https://math.stackexchange.com/questions/2994429/how-can-i-argue-that-for-a-number-to-be-divisible-by-144-it-has-to-be-divisible/2994452 | # How can I argue that for a number to be divisible by 144 it has to be divisible by 36?
Suppose some number $$n \in \mathbb{N}$$ is divisible by $$144$$.
$$\implies \frac{n}{144}=k, \space \space \space k \in \mathbb{Z} \\ \iff \frac{n}{36\cdot4}=k \iff \frac{n}{36}=4k$$
Since any whole number times a whole number is still a whole number, it follows that $$n$$ must also be divisible by $$36$$. However, what I think I have just shown is:
$$\text{A number }n \space \text{is divisble by} \space 144 \implies n \space \text{is divisible by} \space 36 \space (1)$$
Is that the same as saying: $$\text{For a number to be divisible by 144 it has to be divisible by 36} \space (2)$$
In other words, are statements (1) and (2) equivalent?
• Yes, absolutely. – Bernard Nov 11 '18 at 20:47
• Pedantically, the first sentence could have a different meaning if it somehow isn't obvious from context that $n$ is universally quantified. But in general they're the same. – Ian Nov 11 '18 at 20:49
• You mean this sentence "Since any whole number times a whole number is still a whole number, it follows that n must also be divisible by 36. ",right? – Nullspace Nov 11 '18 at 20:52
• In general, if $a$ is divisible by $b$, and $b$ is divisible by $c$, then $a$ is divisible by $c$. – AlexanderJ93 Nov 12 '18 at 0:14
Yes, it's the same. $$A\implies B$$ is equivalent to "if we have $$A$$, we must have $$B$$".
And your proof looks fine. Good job.
If I were to offer some constructive criticism, it would be of the general kind: In number theory, even though we call the property "divisible", we usually avoid division whenever possible. Of the four basic arithmetic operations it is the only one which makes integers into non-integers. And number theory is all about integers.
Therefore, "$$n$$ is divisible by $$144$$", or "$$144$$ divides $$n$$" as it's also called, is defined a bit backwards:
There is an integer $$k$$ such that $$n=144k$$
(This is defined for any number in place of $$144$$, except $$0$$.)
Using that definition, your proof becomes something like this:
If $$n$$ is divisible by $$144$$, then there is an integer $$k$$ such that $$n=144k$$. This gives $$n=144k=(36\cdot4)k=36(4k)$$ Since $$4k$$ is an integer, this means $$n$$ is also divisible by $$36$$.
• Programmers might say mod() = 0 rather than "divides": if n modulus 144 is zero then n modulus 36 is zero etc. Non-zero values are integers. – mckenzm Nov 12 '18 at 0:30
• Every definition I ever see of divisibility specifically excludes $0$ as a potential divisor (checked a few abstract algebra books here, etc.). If that were not the case, it would wind up violating the Division Algorithm, leave greatest common divisors not well-defined, etc. – Daniel R. Collins Nov 13 '18 at 5:15
Yes that's correct or simply note that
$$n=144\cdot k= 36\cdot (4\cdot k)$$
but $$n=36$$ is not divisible by $$144$$.
The $$\implies$$ symbol is defined as follows:
If $$p \implies q$$ then if $$p$$ is true, then $$q$$ must also be true. So when you say $$144 \mid n \implies 36 \mid n$$ it's the same thing as saying that if $$144 \mid n$$, then it must also be true that $$36 \mid n$$.
There are often multiple approaches to proofs. Is usually good to be familiar with multiple techniques.
You have a very good approach. Parsimonious and references only the particular entities at hand.
Some times, for the sake of illustrating the use of additional concepts, you might want to deviate from parsimony.
In that spirit, here's an additional proof.
According to the Fundamental Theorem of Arithmetic, if something is divisible by 144, then it is divisible by at least the same primes raised to the powers needed to yield 144. In other words, $$144=2^43^2$$. For something to be divisible by 144, the primes 2 and 3 must appear in its prime factorization, with the powers of 2 and 3 at least 4 and 2, respectively. Now $$36=2^23^2$$. So any number whose prime factorization includes the primes 2 and 3, raised at least to the powers 2 and 2 respectively, is also divisible by 36. If something is divisible by 144, we are guaranteed that 2 and 3 appear in its prime factorization with exponents at least 4 and 2 respectively. So divisibility by 144 implies divisibility by 36, since those exponents satisfy the established criteria.
The prime factorization of 144 is 2 * 2 * 2 * 2 * 3 * 3. The prime factorization of 36 is 2 * 2 * 3 * 3.
If X is divisible by Y, then X's prime factorization contains all of the factors in Y's prime factorization.
Since 144's prime factorization contains all of the factors in 36's prime factorization, 144 is divisible by 36.
The prime factorization of any number that is divisible by 144 contains all of the prime factors of 144, which contains all of the prime factors of 36. Hence any number that is divisible by 144 has a prime factorization that contains all the prime factors of 36, and therefore is divisible by 36.
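A quick mechanical check of these claims (my addition, not part of the thread):

```python
# 144 = 2^4 * 3^2 and 36 = 2^2 * 3^2, so 36 | 144 and hence 36 | 144k.
assert 144 == 2**4 * 3**2 and 36 == 2**2 * 3**2
assert 144 % 36 == 0

for k in range(1, 1001):
    assert (144 * k) % 36 == 0   # every multiple of 144 is a multiple of 36

assert 36 % 144 != 0             # the converse fails: 36 is not a multiple of 144
print("checks passed")
```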
If $$n$$ is a multiple of $$144$$, it must automatically be a multiple of any divisor $$d$$ of $$144$$. The prime factorization of $$144$$ is $$(2^4)(3^2)$$, and so $$36 = (2^2)(3^2)$$ must be a divisor of $$144$$ and hence of $$n$$. | 2019-10-22T13:51:50 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2994429/how-can-i-argue-that-for-a-number-to-be-divisible-by-144-it-has-to-be-divisible/2994452",
"openwebmath_score": 0.802426815032959,
"openwebmath_perplexity": 160.24049583645848,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846691281407,
"lm_q2_score": 0.8757869819218865,
"lm_q1q2_score": 0.8568565565343779
} |
https://www.physicsforums.com/threads/confused-with-a-series-problem.48722/ | # Confused with a series problem
1. Oct 20, 2004
Problem
Let $$\sum a_n$$ be a series with positive terms and let $$r_n = \frac{a_{n+1}}{a_n}$$. Suppose that $$\lim _{n \to \infty} r_n = L < 1$$, so $$\sum a_n$$ converges by the Ratio Test. As usual, we let $$R_n$$ be the remainder after $$n$$ terms, that is,
$$R_n = a_{n+1} + a_{n+2} + a_{n+3} + \cdots$$
(a) If $$\left\{ r_n \right\}$$ is a decreasing sequence and $$r_{n+1} < 1$$, show by summing a geometric series, that
$$R_n \leq \frac{a_{n+1}}{1-r_{n+1}}$$
(b) If $$\left\{ r_n \right\}$$ is an increasing sequence, show that
$$R_n \leq \frac{a_{n+1}}{1-L}$$
My Solution
(a) The first term $$r_{n+1}$$ prevails, since it represents an upper bound to other terms of $$\left\{ r_n \right\}$$ . Then,
$$R_n \leq a_{n+1} + a_{n+1}r_{n+1} + a_{n+1}\left( r_{n+1} \right) ^2 + a_{n+1}\left( r_{n+1} \right) ^3 + \cdots = \sum _{k=1} ^{\infty} a_{n+1}\left( r_{n+1} \right) ^{k-1} = \frac{a_{n+1}}{1-r_{n+1}}$$
(b) The last term $$L$$ prevails, since it represents an upper bound to other terms of $$\left\{ r_n \right\}$$ . Then,
$$R_n \leq a_{n+1} + a_{n+1}L + a_{n+1}L ^2 + a_{n+1} L ^3 + \cdots = \sum _{k=1} ^{\infty} a_{n+1}L ^{k-1} = \frac{a_{n+1}}{1-L}$$
Questions
1. Did I get it right?
2. Why use "$$\leq$$" instead of "$$<$$", since all terms of $$\left\{ r_n \right\}$$ are smaller then their respective upper bounds?
3. Isn't an increasing $$\left\{ r_n \right\}$$ rather counterintuitive when we consider a convergent series $$\sum a_n$$, which obeys $$\lim _{n \to \infty} a_n =0$$?
That's it. Thank you very much!!
2. Oct 20, 2004
### NateTG
Well, it really should be:
$$\lim_{n\rightarrow \infty} | r_n | < 1$$
otherwise, it would be possible to sneak in things like $$\sum (-2)^{n}$$ where $$r_n=-2<1$$.
If you can assume that $$|r_n|$$ is decreasing and $$|r_{n+1}|<1$$
You need to use absolute values to make the inequalities work:
$$\left|\sum_{i=n+1}^\infty a_i\right| \leq \sum_{i=n+1}^\infty |a_i| \leq \sum_{i=n+1}^\infty |a_{n+1}|\, |r_{n+1}|^{i-n-1}=\sum_{j=0}^\infty |a_{n+1}|\, |r_{n+1}|^j$$
You have the right idea, but what you have is not true if $$r_{n+1}$$ is positive and $$a_n$$ is negative.
Regarding the use of $$\leq$$ rather than $$<$$:
Constant sequences are often included in the notion of decreasing or increasing sequence, so there may be equality rather than strict inequality.
Regarding the existence of increasing $$\{r_n\}$$
Consider, for example the possibility that $$r_n=\frac{1}{2}-\frac{1}{2^{n+1}}$$. It's pretty easy to see that the limit $$\lim_{n\rightarrow \infty} r_n = \frac{1}{2}$$ and that $$\{r_n\}$$ is increasing. However, since all of the $$r_n$$ are positive and less than $$\frac{1}{2}$$ we have:
$$0< r_n < \frac{1}{2}$$
Now
$$a_n=a_{n-1}\times r_{n-1}$$
so
$$\sum_{n=0}^{\infty} a_n = \sum_{n=0}^{\infty} \left(a_0 \times \prod_{j=0}^{n-1} r_j\right) < \sum_{n=0}^{\infty} \left(a_0 \times \prod_{j=0}^{n-1}\frac{1}{2}\right) = a_0 \times \sum_{n=0}^{\infty} \frac{1}{2^n} = 2a_0$$
which means that although $$\{r_n\}$$ is increasing the sum is convergent.
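NateTG's construction also gives a concrete check of the bound from part (b). The sketch below is my addition, with the arbitrary choice $a_1 = 1$: it builds the sequence from the increasing ratios $r_n = \tfrac12 - \tfrac1{2^{n+1}}$ (so $L = \tfrac12$) and verifies $R_n \leq \frac{a_{n+1}}{1-L}$ for several $n$.

```python
L = 0.5

def r(n):
    return 0.5 - 0.5 ** (n + 1)       # increasing, with limit L = 1/2

# a[k] holds a_{k+1}: a_1 = 1 and a_{k+1} = a_k * r_k.
N = 60
a = [1.0]
for k in range(1, N):
    a.append(a[-1] * r(k))

for n in range(1, 10):
    remainder = sum(a[n:])            # R_n = a_{n+1} + a_{n+2} + ... (truncated at N terms)
    bound = a[n] / (1 - L)            # part (b): R_n <= a_{n+1} / (1 - L)
    assert remainder <= bound
print("R_n <= a_{n+1}/(1-L) holds for n = 1..9")
```

Truncating at $N = 60$ terms only makes the computed remainder smaller, so the inequality being checked is the honest one.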
Last edited: Oct 20, 2004 | 2017-03-28T15:56:41 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/confused-with-a-series-problem.48722/",
"openwebmath_score": 0.9563162326812744,
"openwebmath_perplexity": 251.58633640732526,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846666070896,
"lm_q2_score": 0.8757869819218865,
"lm_q1q2_score": 0.8568565543264741
} |
http://www.maplesoft.com/support/help/Maple/view.aspx?path=simplify/details | apply simplification rules to an expression - Maple Help
simplify - apply simplification rules to an expression
Calling Sequence
simplify(expr, n1, n2, ...)
simplify(expr, side1, side2, ...)
simplify(expr, assume=prop)
simplify(expr, symbolic)
Parameters
expr - any expression
n1, n2, ... - (optional) names; simplification procedures
side1, side2, ... - (optional) sets or lists; side relations
prop - (optional) any property
Basic Information
• This help page contains complete information about the simplify command. For basic information on the simplify command, see the simplify help page.
Description
• The simplify command is used to apply simplification rules to an expression.
• The simplify(expr) calling sequence searches the expression, expr, for function calls, square roots, radicals, and powers. It then invokes the appropriate simplification procedures.
Examples
Simple Example
> $\mathrm{simplify}\left({4}^{\frac{1}{2}}+3\right)$
${5}$ (1)
Simplifying Trigonometric Expressions
> $e:={\mathrm{cos}\left(x\right)}^{5}+{\mathrm{sin}\left(x\right)}^{4}+2{\mathrm{cos}\left(x\right)}^{2}-2{\mathrm{sin}\left(x\right)}^{2}-\mathrm{cos}\left(2x\right):$
> $\mathrm{simplify}\left(e\right)$
${{\mathrm{cos}}{}\left({x}\right)}^{{4}}{}\left({\mathrm{cos}}{}\left({x}\right){+}{1}\right)$ (2)
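Result (2) can be double-checked numerically in plain Python (my addition, not part of the Maple page): the simplified form $\cos^4(x)\,(\cos(x)+1)$ should agree with the original expression at every sample point.

```python
import math

def lhs(x):
    # Original expression: cos^5 x + sin^4 x + 2 cos^2 x - 2 sin^2 x - cos 2x
    return (math.cos(x)**5 + math.sin(x)**4 + 2*math.cos(x)**2
            - 2*math.sin(x)**2 - math.cos(2*x))

def rhs(x):
    # Maple's simplified form
    return math.cos(x)**4 * (math.cos(x) + 1)

for k in range(101):
    x = -5.0 + 0.1 * k           # sample points in [-5, 5]
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("cos(x)^4*(cos(x)+1) matches the original expression")
```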
Simplifying Exponentials and Logarithms
> $\mathrm{simplify}\left({ⅇ}^{a+\mathrm{ln}\left(b{ⅇ}^{c}\right)}\right)$
${b}{}{{ⅇ}}^{{a}{+}{c}}$ (3)
Controlling Simplification Rules
> $\mathrm{simplify}\left({\mathrm{sin}\left(x\right)}^{2}+\mathrm{ln}\left(2x\right)+{\mathrm{cos}\left(x\right)}^{2}\right)$
${1}{+}{\mathrm{ln}}{}\left({2}\right){+}{\mathrm{ln}}{}\left({x}\right)$ (4)
> $\mathrm{simplify}\left({\mathrm{sin}\left(x\right)}^{2}+\mathrm{ln}\left(2x\right)+{\mathrm{cos}\left(x\right)}^{2},\mathrm{trig}\right)$
${1}{+}{\mathrm{ln}}{}\left({2}{}{x}\right)$ (5)
> $\mathrm{simplify}\left({\mathrm{sin}\left(x\right)}^{2}+\mathrm{ln}\left(2x\right)+{\mathrm{cos}\left(x\right)}^{2},\mathrm{ln}\right)$
${{\mathrm{sin}}{}\left({x}\right)}^{{2}}{+}{\mathrm{ln}}{}\left({2}\right){+}{\mathrm{ln}}{}\left({x}\right){+}{{\mathrm{cos}}{}\left({x}\right)}^{{2}}$ (6)
Simplifying With Respect to Side Relations
> $f:=-\frac{{x}^{5}y}{3}+{x}^{4}{y}^{2}+\frac{x{y}^{3}}{3}+1:$
> $\mathrm{simplify}\left(f,\left\{{x}^{3}=xy,{y}^{2}=x+1\right\}\right)$
${{x}}^{{4}}{+}{{x}}^{{2}}{+}{x}{+}{1}$ (7)
Using the assume option
> $g:=\sqrt{{x}^{2}}$
${g}{:=}\sqrt{{{x}}^{{2}}}$ (8)
> $\mathrm{simplify}\left(g\right)$
${\mathrm{csgn}}{}\left({x}\right){}{x}$ (9)
> $\mathrm{simplify}\left(g,\mathrm{assume}=\mathrm{real}\right)$
$\left|{x}\right|$ (10)
> $\mathrm{simplify}\left(g,\mathrm{assume}=\mathrm{positive}\right)$
${x}$ (11)
> $\mathrm{simplify}\left(g,\mathrm{symbolic}\right)$
${x}$ (12)
Simplifying an Integral
Integrands and summands are simplified taking into account the integration or sum ranges respectively. For more information, see assuming.
> $\mathrm{expr}:={{∫}}_{1}^{4}{\left(1+{\mathrm{sinh}\left(t\right)}^{2}\right)}^{\frac{1}{2}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}t$
${\mathrm{expr}}{:=}{{∫}}_{{1}}^{{4}}\sqrt{{1}{+}{{\mathrm{sinh}}{}\left({t}\right)}^{{2}}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{t}$ (13)
> $\mathrm{simplify}\left(\mathrm{expr}\right)$
${{∫}}_{{1}}^{{4}}{\mathrm{cosh}}{}\left({t}\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{ⅆ}{t}$ (14)
| 2016-02-12T18:14:30 | {
"domain": "maplesoft.com",
"url": "http://www.maplesoft.com/support/help/Maple/view.aspx?path=simplify/details",
"openwebmath_score": 0.9764902591705322,
"openwebmath_perplexity": 6451.946511462495,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9783846640860381,
"lm_q2_score": 0.8757869835428966,
"lm_q1q2_score": 0.8568565537045415
} |
https://svadali.com/2018/09/15/random-number-puzzle/ | # Random Number Puzzle
Suppose that you have a function that you can use to generate uniformly distributed random numbers between $$1$$ and $$5$$. How can you use the above function to generate uniformly distributed random numbers between $$1$$ and $$7$$?
The key to solving puzzles such as the one above is to first recognize that if we can somehow generate $$7$$ equally likely outcomes then we can use each one of these $$7$$ outcomes to output one of the integers between $$1$$ and $$7$$ as shown below:
$$\text{Outcome}\ 1 \longrightarrow 1$$
$$\text{Outcome}\ 2 \longrightarrow 2$$
$$\text{Outcome}\ 3 \longrightarrow 3$$
$$\text{Outcome}\ 4 \longrightarrow 4$$
$$\text{Outcome}\ 5 \longrightarrow 5$$
$$\text{Outcome}\ 6 \longrightarrow 6$$
$$\text{Outcome}\ 7 \longrightarrow 7$$
So, how can we generate $$7$$ equally likely outcomes when you have a function that only outputs $$5$$ equally likely outcomes. The trick is to change the space of outcomes. One way to change the outcome space is to draw two random numbers uniformly distributed between $$1$$ and $$5$$. The outcome space when we draw two numbers is as follows:
$$\{ (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), \\ (2, 1), (2, 2), (2, 3), (2, 4), (2, 5), \\ (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), \\ (4, 1), (4, 2), (4, 3), (4, 4), (4, 5), \\ (5, 1), (5, 2), (5, 3), (5, 4), (5, 5) \}$$
Of course, now we have $$25$$ equally likely outcomes and not $$7$$. But, what if we were to map $$7$$ of the above outcomes to our desired numbers and ignore the others. In other words, our random number generator to draw random numbers between $$1$$ and $$7$$ does the following:
$$(1, 1) \longrightarrow 1$$
$$(1, 2) \longrightarrow 2$$
$$(1, 3) \longrightarrow 3$$
$$(1, 4) \longrightarrow 4$$
$$(1, 5) \longrightarrow 5$$
$$(2, 1) \longrightarrow 6$$
$$(2, 2) \longrightarrow 7$$
If we obtain any outcome apart from the ones listed above we try again till we obtain one of the above outcomes. All the above outcomes are equally likely and there are $$7$$ outcomes. Thus, one would expect the above procedure to generate uniformly distributed random numbers between $$1$$ and $$7$$.
We could go through the math to convince ourselves but writing a simulation is another way to validate the above intuition. The plot below shows the percentage of times each number occurs in a simulation where we use the above procedure to generate numbers between $$1$$ and $$7$$. Note that the chances of obtaining any of these outcomes is $$\frac{1}{7} \approx 0.143$$. Thus, we have strong evidence that the procedure does generate numbers uniformly between $$1$$ and $$7$$. For those interested, the Python code used to generate the plot is at the end of the post.
A related problem to think about: How can we generate uniformly distributed numbers between $$1$$ and $$30$$ if we have access to a function that only gives us random numbers between $$1$$ and $$5$$?
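The same rejection idea generalizes neatly. The sketch below is my addition (not from the original post): it packs the base draws into base-5 digits, which gives $5^{\text{rolls}}$ equally likely outcomes, and keeps only the first $k$ of them. With three base draws (125 outcomes) it also answers the related 1-to-30 question.

```python
import random

def rand_1_to_k(k, rolls=2):
    """Rejection sampling: 'rolls' base draws from 1..5 give 5**rolls equally
    likely outcomes; keep the first k of them and retry otherwise."""
    assert k <= 5 ** rolls
    while True:
        outcome = 0
        for _ in range(rolls):
            outcome = outcome * 5 + (random.randint(1, 5) - 1)   # base-5 digit
        if outcome < k:
            return outcome + 1

random.seed(42)
draws = [rand_1_to_k(7) for _ in range(20_000)]
print(min(draws), max(draws))        # 1 7

# For 1..30, two draws (25 outcomes) are not enough: use rolls=3 (125 outcomes).
thirty = [rand_1_to_k(30, rolls=3) for _ in range(1_000)]
assert 1 <= min(thirty) and max(thirty) <= 30
```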
import random
import matplotlib.pyplot as plt
import seaborn as sns

def draw_histogram(draws):
    ax = sns.barplot(x=draws, y=draws, estimator=lambda x: len(x) / len(draws) * 100)
    ax.set_xlabel('Number')
    ax.set_ylabel('Percentage')
    ax.set_ylim(0, 100)
    for p in ax.patches:
        ax.annotate("%.0f" % p.get_height() + '%', (p.get_x() + p.get_width() / 2., p.get_height() + 3),
                    ha='center', va='center', fontsize=12, color='black', rotation=0, xytext=(0, 0),
                    textcoords='offset points')

def generate_rand_between_1_and_7():
    draw = None
    while draw is None:
        draw_1 = random.randint(1, 5)
        draw_2 = random.randint(1, 5)
        draw_sequence = (draw_1, draw_2)
        if draw_sequence == (1, 1):
            draw = 1
        elif draw_sequence == (1, 2):
            draw = 2
        elif draw_sequence == (1, 3):
            draw = 3
        elif draw_sequence == (1, 4):
            draw = 4
        elif draw_sequence == (1, 5):
            draw = 5
        elif draw_sequence == (2, 1):
            draw = 6
        elif draw_sequence == (2, 2):
            draw = 7
        else:
            pass
    return draw

if __name__ == '__main__':
    random.seed(42)
    no_of_draws = 10000
    draws_between_1_and_7 = []
    for _ in range(no_of_draws):
        draw = generate_rand_between_1_and_7()
        draws_between_1_and_7.append(draw)
    draw_histogram(draws_between_1_and_7)
    plt.savefig('histogram_of_draws.png', bbox_inches='tight')
This site uses Akismet to reduce spam. Learn how your comment data is processed. | 2019-01-17T06:28:46 | {
"domain": "svadali.com",
"url": "https://svadali.com/2018/09/15/random-number-puzzle/",
"openwebmath_score": 0.7763149738311768,
"openwebmath_perplexity": 635.3333642118325,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9924227573874892,
"lm_q2_score": 0.8633916099737807,
"lm_q1q2_score": 0.8568494822754031
} |
https://doc.simo.ac/Functions/cumsum/ | # cumsum
Cumulative sum
### s = cumsum(A)
• A is an array of any number of dimensions.
• The output argument s contains the cumulative sums and has the same size as A.
• The cumulative sums are obtained from vectors along the first non-singleton dimension of A.
Example 1: In the following, the first non-singleton dimension of A is the first dimension. The sum operation is performed along the vertical direction. For instance, s(:,1) contains cumulative sums of elements of A(:,1).
% Matrix of size [3,4]
A=reshape(1:12,3,4)
% Cumulative sums
s=cumsum(A)
A =
1.000 4.000 7.000 10.00
2.000 5.000 8.000 11.00
3.000 6.000 9.000 12.00
s =
1.000 4.000 7.000 10.00
3.000 9.000 15.00 21.00
6.000 15.00 24.00 33.00
### s = cumsum(A, dim)
• dim should be a positive integer scalar, not equal to inf or nan.
• It obtains the cumulative sums along the dim-th dimension of A.
• If size(A,dim) == 1 or dim > ndims(A), then s is the same as A.
Example 2: In the following, The sum operation is performed along the horizontal direction (2nd dimension). For instance, s(1,:) contains cumulative sums of elements of A(1,:).
% Matrix of size [3,4]
A=reshape(1:12,3,4)
% Cumulative sums along 2nd dimension
s=cumsum(A,2)
A =
1.000 4.000 7.000 10.00
2.000 5.000 8.000 11.00
3.000 6.000 9.000 12.00
s =
1.000 5.000 12.00 22.00
2.000 7.000 15.00 26.00
3.000 9.000 18.00 30.00
### s = cumsum(A, option)
• option should be either 'reverse', 'forward', 'includenan' or 'omitnan'.
• These options control the operation direction, or whether or not NaN should be included in the operation. For details, see Tables 1 and 2 below.
Note
By default, cummin and cummax omit NaN, whereas cumsum and cumprod include NaN.
Example 3: Cumulative sums of the same vector but with different options.
a=[1:5 nan 6]
% Default options
cumsum(a)
% Include nan, forward direction
cumsum(a, 'includenan')
% Omit nan, forward direction
cumsum(a, 'omitnan')
% Forward direction, include nan
cumsum(a, 'forward')
% Reverse direction, include nan
cumsum(a, 'reverse')
a =
1.000 2.000 3.000 4.000 5.000 nan 6.000
ans =
1.000 3.000 6.000 10.00 15.00 nan nan
ans =
1.000 3.000 6.000 10.00 15.00 nan nan
ans =
1.000 3.000 6.000 10.00 15.00 15.00 21.00
ans =
1.000 3.000 6.000 10.00 15.00 nan nan
ans =
nan nan nan nan nan nan 6.000
Table 1: Options for operation direction.
Option value Meaning Default
'forward' The cumulative sums are obtained in the forward direction. YES
'reverse' The cumulative sums are obtained in the reverse direction. NO
Table 2: Options for including or omitting NaN.
Option value Meaning Default
'includenan' Include NaN in the operation YES
'omitnan' Omit NaN in the operation NO
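The option combinations in Tables 1 and 2 can be mimicked in a short pure-Python sketch for 1-D vectors (my illustration, not part of this reference page):

```python
import math

def cumsum(a, reverse=False, omitnan=False):
    """Pure-Python sketch of the direction and NaN options for a 1-D vector."""
    xs = list(reversed(a)) if reverse else list(a)
    out, running = [], 0.0
    for v in xs:
        if omitnan and math.isnan(v):
            out.append(running)          # skip NaN, keep the running sum
        else:
            running += v                 # a NaN poisons the sum from here on
            out.append(running)
    return list(reversed(out)) if reverse else out

a = [1, 2, 3, 4, 5, float('nan'), 6]
print(cumsum(a))                  # [1.0, 3.0, 6.0, 10.0, 15.0, nan, nan]
print(cumsum(a, omitnan=True))    # [1.0, 3.0, 6.0, 10.0, 15.0, 15.0, 21.0]
print(cumsum(a, reverse=True))    # [nan, nan, nan, nan, nan, nan, 6.0]
```

The three printed results reproduce the 'includenan', 'omitnan', and 'reverse' outputs from Example 3 above.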
### s = cumsum(A, dim, option)
• Same as s = cumsum(A, dim) above, except that an option can be specified for controlling the operation direction or whether or not NaN should be included.
• See Tables 1 and 2 above for available options.
### s = cumsum(A, option1, option2)
• Same as s = cumsum(A, option) above, except that two options can be specified at the same time.
• option2 will overwrite option1 if they contradict each other.
• See Tables 1 and 2 above for available options.
### s = cumsum(A, dim, option1, option2)
• Same as s = cumsum(A, dim, option) above, except that two options can be specified at the same time.
• option2 will overwrite option1 if they contradict each other.
• See Tables 1 and 2 above for available options. | 2021-10-22T09:11:29 | {
"domain": "simo.ac",
"url": "https://doc.simo.ac/Functions/cumsum/",
"openwebmath_score": 0.4999734163284302,
"openwebmath_perplexity": 7785.607328219051,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808730448281,
"lm_q2_score": 0.8740772351648677,
"lm_q1q2_score": 0.8568411951960261
} |
https://stats.stackexchange.com/questions/383656/confusion-about-range-of-integration-for-density-function | # Confusion about range of integration for density function
Consider the joint density function:
$$f(x,y) = \begin{cases} 2 & & \text{for } 0 \leq x \leq1 \text{ and } 0 \leq y \leq 1-x, \\[6pt] 0 & & \text{otherwise}. \end{cases}$$
From this joint density I figured out the following marginal densities:
$$f_X(x) = 2(1-x),\\ f_Y(y) = 2.$$
The marginal density $$f_Y$$ is supposedly wrong, as the solutions provided to me say to calculate $$\int^{1-y}_0 2 \, dx$$. I don't see why I need to integrate over $$[0, 1-y]$$ and not over $$[0,1]$$. I thought the range for $$x$$ does not depend on $$y$$, or does it?
• If this is homework, please add the self-study(stats.stackexchange.com/tags/self-study/info) tag. – StubbornAtom Dec 18 '18 at 20:29
• Please: draw a picture showing where $f(x,y)$ is nonzero. That will easily answer your questions. – whuber Dec 18 '18 at 20:50
• @StubbornAtom No, it is not homework; this is just a practice exercise I chose to do myself. – thebilly Dec 18 '18 at 21:34
• @whuber I get a line with the equation y = -x+1 right? When $x = 1$ $y = 1-(x=1) = 0$ I don't quite get yet, how this answers my question. Could you please give me another hint? – thebilly Dec 18 '18 at 21:37
• How did you integrate to find f(x)? What does 0 $\le$ y $\le$ (1-x) imply in terms of the random variable X and not Y? By integrating over [0,1] you are not integrating over the region specified in the question for the joint density but the region for the marginal density of Y. – aranglol Dec 18 '18 at 22:01
Comment: I simulated the joint distribution as an easy way to make the plot suggested in a previous comment. Before beginning to set limits on double integrals it is usually a good idea to sketch such a picture as a guide. I have shown R code for the first of the three plots.
set.seed(1218); m = 10^5              # reproducible sample of 100,000 points
x1 = runif(m); y1 = runif(m)          # uniform draws on the unit square
cond = (y1 <= 1 - x1)                 # keep only points below the line y = 1 - x
x = x1[cond]; y = y1[cond]            # these fill the triangular support
plot(x, y, pch=".")
The simulation and plots are for orientation, and are not an exact solution to your problem. For exact solutions, maybe the first thing to do is to try to integrate the joint density $$f(x,y) = 2$$ over the triangular region to make sure the integral is $$1,$$ as required for a density function.
Then try integrating over $$x$$ to find the marginal density of $$Y,$$ which is suggested by the red line superimposed on the histogram in the third plot.
You wrote: $$\text{for } 0 \leq x \leq1 \text{ and } 0 \leq y \leq 1-x$$ That tells you the region over which you integrate.
You want to integrate out $$x$$ with $$y$$ fixed.
So you need those values of $$x$$ for which $$0\le x\le1$$ and $$0\le y \le 1-x.$$ Notice that $$y\le 1-x$$ is equivalent to $$x \le 1-y.$$
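As a numeric cross-check of those limits, a midpoint sum over $x$ with $y$ held fixed recovers $f_Y(y)=2(1-y)$ rather than the constant $2$. This Python sketch assumes nothing beyond the stated density (the grid size is arbitrary):

```python
def joint(x, y):
    # the given density: 2 on the triangle 0 <= x <= 1, 0 <= y <= 1 - x
    return 2.0 if (0 <= x <= 1 and 0 <= y <= 1 - x) else 0.0

def marginal_Y(y, n=20_000):
    # integrate out x with y held fixed; the support forces x <= 1 - y
    dx = 1.0 / n
    return sum(joint((i + 0.5) * dx, y) for i in range(n)) * dx

# f_Y(y) = 2 * (1 - y)
assert abs(marginal_Y(0.25) - 1.5) < 1e-3
assert abs(marginal_Y(0.5) - 1.0) < 1e-3
```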
• Okay. So whenever I have this sort of relationship between x and y regarding the domain I need to have this relationship in both integration intervals? – thebilly Dec 19 '18 at 9:08
• When doing single integration, limits on integral sign need to show $x$-interval over which integration extends. For double integral, limits on two integral signs need to show $(x,y)$ region over which integration extends. – BruceET Dec 24 '18 at 1:41
• @thebilly : This is probably clearer if the constraint on $x$ and $y$ is not symmetric in the two variables. Suppose you have $0\le x \le 1$ and for each value of $x$ in that interval you have $0\le y \le 3(1-x).$ Then$\,\ldots\qquad$ – Michael Hardy Dec 25 '18 at 19:26
• $\ldots\,\,$you have \begin{align} & \int_0^1\left( \int_0^{3(1-x)} \cdots\cdots \, dy \right) \, dx \\ \\ = {} & \iint\limits_{\left\{ (x,y) \,:\, \begin{smallmatrix} 0 \, \le \, x \,\le\, 1 \\ \&\ 0\,\le\,y\,\le\, 3(1-x) \end{smallmatrix} \right\}} \quad \cdots\cdots \, d(x,y) \\ \\ = {} & \iint\limits_{\left\{ (x,y) \,:\, 0\,\le\, x\,\le\, \frac{3-y} 3 \,\le\, 1 \right\}} \quad \cdots\cdots \, d(x,y) \\ \\ = {} & \int_0^3 \left( \int_0^{(3-y)/3} \cdots\cdots \, dx \right) \, dy. \end{align} – Michael Hardy Dec 25 '18 at 19:32
• In other words, if you have $x$ going from $0$ to $1$ and then for any fixed value of $x$ you have $y$ going from $0$ to $3(1-x),$ and that's the same as saying $y$ is between $0$ and $3,$ and for each fixed value of $y$ between $0$ and $3$ you have $x$ going from $0$ to $(3-y)/3. \qquad$ – Michael Hardy Dec 25 '18 at 19:37 | 2020-07-08T22:33:30 | {
"domain": "stackexchange.com",
"url": "https://stats.stackexchange.com/questions/383656/confusion-about-range-of-integration-for-density-function",
"openwebmath_score": 0.9974572658538818,
"openwebmath_perplexity": 303.12921074514526,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9802808701643914,
"lm_q2_score": 0.8740772351648677,
"lm_q1q2_score": 0.8568411926783018
} |
https://math.stackexchange.com/questions/3000878/does-a-n-a-n-1-converge-to-the-golden-ratio-for-all-fibonacci-like-sequenc/3000901 | # Does $a_{n}/a_{n-1}$ converge to the golden ratio for all Fibonacci-like sequences?
Yesterday a friend challenged me to prove that $$\lim_{n\rightarrow\infty}\frac{a_n}{a_{n-1}}=\varphi\; ,$$ where $$\varphi$$ is the golden ratio, for the Fibonacci series.
I started rewriting the limit as
$$\lim_{n\rightarrow\infty}\frac{a_n}{a_{n-1}}=\lim_{n\rightarrow\infty}\frac{a_{n-1}+a_{n-2}}{a_{n-1}}=\lim_{n\rightarrow\infty}1+\frac{a_{n-2}}{a_{n-1}}\; .$$
If the sequence $$b_n=\frac{a_n}{a_{n-1}}$$ is convergent,
$$\lim_{n\rightarrow\infty}\frac{a_{n-2}}{a_{n-1}}=\left(\lim_{n\rightarrow\infty}\frac{a_n}{a_{n-1}}\right)^{-1}\; .$$
Renaming the desired limit $$x$$, we obtain the quadratic equation
$$x=1+\frac{1}{x}$$ $$x^2-x-1=0$$
if $$x\neq 0$$. Therefore, if $$b_n$$ is convergent, it must be equal to $$\frac{1+\sqrt{5}}{2}$$ or $$\frac{1-\sqrt{5}}{2}$$.
Since $$a_n>0$$, $$b_n>0, \forall n$$, so the limit must be equal to $$\varphi=\frac{1+\sqrt{5}}{2}$$.
This proof made me think that I didn't make use of the initial values of the sequence, so it must hold true for any sequence where $$a_{n}=a_{n-1}+a_{n-2}$$. The first question is, is $$a_{n}/a_{n-1}$$ convergent for all Fibonacci-like sequences?
The second and most intriguing for me is, is there any Fibonacci-like sequence where the limit is $$\frac{1-\sqrt{5}}{2}$$? Since this solution is negative, $$a_n$$ should change its sign with each $$n$$, but I couldn't find any values for $$a_0$$ and $$a_1$$ which would lead me to this case. If the answer to this question is no, what mathematical sense does this negative solution have?
• Yup, interestingly enough, almost all "Fibonacci-like" sequences (in that they start with two seed values and then recursively define the successive terms by the addition of the previous two), except for certain trivial examples, have the ratio of successive terms converge to $\phi$. Another noteworthy such sequence is that of the Lucas numbers (with seeds $L_1 = 2$ and $L_2 = 1$, which actually is a lot "neater" than the Fibonacci sequence in this respect. – Eevee Trainer Nov 16 '18 at 8:26
• As for the notion of $\phi$'s conjugate being the ratio of successive terms, no, at least ignoring trivial examples. You could analogize this in terms of "stable" and "unstable" solutions: the conjugate is unstable, where the regular one is stable. In the case of $\phi$'s conjugate, unless you somehow trivially start with that conjugate as the ratio of successive terms (see Robert Z's answer), this means that successive terms eventually diverge away from it. Showing this lack of stability isn't trivial though and I'm mostly parroting other facts and don't feel qualified to elaborate on it. – Eevee Trainer Nov 16 '18 at 8:32
One way to look at this problem is that if $$a_{n+1}=a_n+a_{n-1}$$, then we have $$\begin{pmatrix}0&1\\1&1\end{pmatrix}\begin{pmatrix}a_{n-1}\\a_n\end{pmatrix}=\begin{pmatrix}a_n\\a_{n+1}\end{pmatrix}.$$
The eigenvalues of the matrix
$$\begin{pmatrix}0&1\\1&1\end{pmatrix}$$
are $$\dfrac{1+\sqrt{5}}{2},\dfrac{1-\sqrt{5}}{2}.$$
These have corresponding eigenvectors $$v_1,v_2$$ which span $$\mathbb{R}^2$$. This leads us to the conclusion that $$\begin{pmatrix}a_1\\a_2\end{pmatrix}=cv_1+bv_2,$$ and if $$c\not=0,$$ then $$v_1$$ will dominate the sequence, and we can show the ratios converge to $$(1+\sqrt{5})/2.$$ This leads us to the conclusion that if we want a Fibonacci like sequence to have ratios converging to $$(1-\sqrt{5})/2$$, then we must have $$\begin{pmatrix}a_1\\a_2\end{pmatrix}=bv_2$$ for some non zero $$b\in\mathbb{R}$$. So to determine all such sequences we simply have to have an eigenvector $$v_2$$ corresponding to $$(1-\sqrt{5})/2$$. One such eigenvector is $$\begin{pmatrix}1\\\dfrac{1-\sqrt{5}}{2}\end{pmatrix}.$$
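This eigenvector picture is easy to try numerically; the sketch below iterates the recurrence directly. The seed $(2,1)$, the Lucas numbers, is just one example of a start with a nonzero $v_1$ component:

```python
phi = (1 + 5 ** 0.5) / 2          # eigenvalue (1 + sqrt(5)) / 2
psi = (1 - 5 ** 0.5) / 2          # eigenvalue (1 - sqrt(5)) / 2

# both satisfy the characteristic polynomial t^2 - t - 1 of [[0,1],[1,1]]
for lam in (phi, psi):
    assert abs(lam * lam - lam - 1) < 1e-12

def step(pair):
    a, b = pair                   # (a_{n-1}, a_n) -> (a_n, a_{n+1})
    return (b, a + b)

# seed on the psi-eigenvector (1, psi): the ratio stays at psi
v = (1.0, psi)
for _ in range(5):
    v = step(v)
assert abs(v[1] / v[0] - psi) < 1e-9

# a generic seed is dominated by the phi-eigenvector instead
w = (2.0, 1.0)                    # Lucas numbers
for _ in range(40):
    w = step(w)
assert abs(w[1] / w[0] - phi) < 1e-12
```

In floating point the psi-direction is eventually swamped by rounding error in the phi-direction, which is itself a nice illustration of why that fixed ratio is so fragile.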
• Thank you, your answer gives me the relation $a_0$ and $a_1$ must fulfill in order to make the limit $\frac{1-\sqrt{5}}{2}$. – TheAverageHijano Nov 16 '18 at 15:37
You showed that if a limit exists for $$a_{n}/a_{n-1}$$ and $$a_n>0$$, then it is $$\frac{1+\sqrt{5}}{2}$$. Actually if $$(a_n)_{n\geq 0}$$ is any sequence which satisfies the recurrence $$a_n=a_{n-1} + a_{n-2}$$ then there exist $$A$$ and $$B$$ such that $$a_n=A\cdot \left(\frac{1+\sqrt{5}}{2}\right)^n+B\cdot \left(\frac{1-\sqrt{5}}{2}\right)^n$$ where $$A$$ and $$B$$ depend on the initial terms $$a_0$$ and $$a_1$$.
So what is $$\lim_{n\rightarrow\infty}\frac{a_n}{a_{n-1}}$$ in the general case?
Consider for example the case when $$A=0$$ and $$B\not=0$$. What is the limit?
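A Python sketch of this closed form: solving the seed equations $a_0 = A + B$ and $a_1 = A\varphi + B\psi$ (with $\psi = \frac{1-\sqrt 5}{2}$) for $A$ and $B$ covers both cases:

```python
phi = (1 + 5 ** 0.5) / 2
psi = (1 - 5 ** 0.5) / 2

def coeffs(a0, a1):
    # solve a0 = A + B and a1 = A*phi + B*psi for A and B
    A = (a1 - psi * a0) / (phi - psi)
    B = (phi * a0 - a1) / (phi - psi)
    return A, B

# Fibonacci seeds a0 = 0, a1 = 1 recover Binet's formula
A, B = coeffs(0, 1)
a = lambda n: A * phi ** n + B * psi ** n
assert abs(a(10) - 55) < 1e-9

# seeds a0 = 1, a1 = psi force A = 0, so the ratio is psi, not phi
A0, B0 = coeffs(1, psi)
assert abs(A0) < 1e-12
```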
• So, basically if $A\neq 0$, the limit will be the golden ratio, and if $A=0$ and $B\neq 0$, the ratio will be $\frac{1-\sqrt{5}}{2}$, right? – TheAverageHijano Nov 16 '18 at 8:40
• Yes, that's true. – Robert Z Nov 16 '18 at 8:42
• Robert Z could you prove the thing about $a_n$? – AryanSonwatikar Nov 17 '18 at 4:56
• @AryanSonwatikar This is a standard result about "homogeneous linear recurrences". See math.stackexchange.com/questions/65011/… – Robert Z Nov 17 '18 at 6:55
Here is another view of this. We have $$b_n=a_n/a_{n-1}$$ and $$b_{n+1}=1+\frac1{b_n},\tag{*}$$ or, \begin{align*} b_{n+1}&=1+\frac1{1+\cfrac1{b_{n-1}}}=1+\cfrac1{1+\frac1{1+\frac1{1+\frac1{1+\cdots}}}}. \end{align*} We should be clear about what we actually mean by an expression like this. One way that we could think about it is to start with some constant like $$1$$, and then repeatedly apply the function $$f(x)=1+\dfrac 1x$$, \begin{align*} c&=1&&=1.000\dots\\ \color{yellow}{f(}c\color{yellow}{)}&=\color{yellow}{1+\frac1{\color{black}{1}}}&&=2.000\dots\\ \color{orange}{f(}\color{yellow}{f(}c\color{yellow}{)} \color{orange}{)}&=\ \color{orange}{1+\frac{1}{\color{yellow}{1+\frac1{\color{black}{1}}}}}&&=1.500\dots\\ \color{magenta}{f(}\color{orange}{f(}\color{yellow}{f(}c\color{yellow}{)} \color{orange}{)}\color{magenta}{)}&=\color{magenta}{1+\frac 1{ \color{orange}{1+\frac{1}{\color{yellow}{1+\frac1{\color{black}{1}}}}}}}&&=1.667\dots\\ \color{violet}{f(}\color{magenta}{f(}\color{orange}{f(}\color{yellow}{f(}c\color{yellow}{)} \color{orange}{)}\color{magenta}{)}\color{violet}{)}&=\color{violet}{1+\frac 1{\color{magenta}{1+\frac 1{ \color{orange}{1+\frac{1}{\color{yellow}{1+\frac1{\color{black}{1}}}}}}}}}&&=1.600\dots \end{align*} Symbolically, what we get looks more and more like our infinite fraction.
If we start with $$-1/\varphi$$, \begin{align*} c&=-1/\varphi&&=-0.618\dots\\ \color{yellow}{f(}c\color{yellow}{)}&=\color{yellow}{1+\frac1{\color{black}{-1/\varphi}}}&&=-0.618\dots\\ \color{orange}{f(}\color{yellow}{f(}c\color{yellow}{)} \color{orange}{)}&=\ \color{orange}{1+\frac{1}{\color{yellow}{1+\frac1{\color{black}{-1/\varphi}}}}}&&=-0.618\dots\\ \color{magenta}{f(}\color{orange}{f(}\color{yellow}{f(}c\color{yellow}{)} \color{orange}{)}\color{magenta}{)}&=\color{magenta}{1+\frac 1{ \color{orange}{1+\frac{1}{\color{yellow}{1+\frac1{\color{black}{-1/\varphi}}}}}}}&&=-0.618\dots\\ \color{violet}{f(}\color{magenta}{f(}\color{orange}{f(}\color{yellow}{f(}c\color{yellow}{)} \color{orange}{)}\color{magenta}{)}\color{violet}{)}&=\color{violet}{1+\frac 1{\color{magenta}{1+\frac 1{ \color{orange}{1+\frac{1}{\color{yellow}{1+\frac1{\color{black}{-1/\varphi}}}}}}}}}&&=-0.618\dots \end{align*} So no matter how many times we apply it, we stay fixed at $$-1/\varphi$$. But even then, with the aid of a calculator, if we start with a random number $$\neq-1/\varphi$$ (even if it's really close to $$-1/\varphi$$) and perform the iteration $$x\to 1+\dfrac 1x$$ again and again, we eventually end up at $$1.618...=\varphi$$. So,
why is the fixed point $$\varphi$$ favored over the other one, $$-1/\varphi$$?
The transformational understanding of derivatives is going to be helpful for understanding this setup. Now we know that $$\varphi$$ and $$-1/\varphi$$ stay fixed in place during this iteration process. But zoom in on a neighborhood around $$\varphi$$: during each iteration, points in that region get contracted toward $$\varphi$$, meaning that the function $$1+\dfrac 1x$$ has a derivative with a magnitude that is less than $$1$$ at this input. In fact, the derivative works out to be approximately $$\left|\frac{df}{dx}(\varphi)\right|\approx |-0.38|<1,$$ meaning that each repeated application shrinks the neighborhood around this number smaller and smaller, like a gravitational pull towards $$\varphi$$.
Conversely, at $$-1/\varphi$$, the magnitude of the derivative actually has a magnitude greater than $$1$$, $$\left|\frac{df}{dx}\left(-\frac 1\varphi\right)\right|\approx |-2.62|>1,$$ so points near the fixed point are repelled away from it. We can see that they get stretched by more than a factor of $$2$$ in each iteration. (They also get flipped around because the derivative is negative here, but the salient fact of stability is just the magnitude.)
We will call $$\varphi$$ a "stable fixed point" and $$-1/\varphi$$ an "unstable fixed point". As we can see, the stability of a fixed point is determined by whether the magnitude of its derivative there is smaller or larger than $$1$$. And this explains why $$\varphi$$ always shows up in the limit.
Reference: 3Blue1Brown.
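The stability claim can be checked numerically; in this Python sketch the starting value $0.7$ is arbitrary (anything other than $-1/\varphi$ works):

```python
phi = (1 + 5 ** 0.5) / 2

def f(x):
    return 1 + 1 / x

# f'(x) = -1/x^2: magnitude below 1 at phi (stable), above 1 at -1/phi (unstable)
assert abs(-1 / phi ** 2) < 1
assert abs(-1 / (-1 / phi) ** 2) > 1

x = 0.7
for _ in range(60):
    x = f(x)
assert abs(x - phi) < 1e-12
```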
• Nice, I didn't think about stability theory to explain my question. It's nice that we can approach the problem from different but equivalent perspectives. – TheAverageHijano Nov 16 '18 at 15:42
Yes, $$a_n\over a_{n-1}$$ is convergent for any Fibonacci-esque sequence (with integers), and the limit happens to be the golden ratio, $$\varphi$$. The limit $$1-\sqrt{5}\over 2$$ will never occur for such a sequence. The negative solution crops up because you multiply both sides by $$x$$ when you solve $$x=1+\frac{1}{x}$$, leading to an extra solution. But this value is useful for calculating the value of the $$n^{th}$$ term of the Fibonacci sequence: $$F_n=\frac{\varphi^n - (\frac{-1}{\varphi})^n}{\sqrt{5}}$$ Hope this helps.
• Actually, $a_0=1$ and $a_1=\frac{1-\sqrt{5}}{2}$ is one of the combinations that will make $\lim_{n\rightarrow\infty}\frac{a_n}{a_{n-1}}=\frac{1-\sqrt{5}}{2}$ (see Melody's answer). Multiplying both sides by x (if $x\neq 0$) shouldn't give illogical answers. Whether they have actual/physical meaning or not is another matter. – TheAverageHijano Nov 16 '18 at 15:48
• That is true, but my answer says integers. If you take $a_0=1$ , and round off the values to the nearest integer, you get the sequence: $1,-1,0,-1..$ and here the limit suddenly tends to negative infinity because of the third term. – AryanSonwatikar Nov 17 '18 at 2:56
• Also, I'm not well versed with matrices and vectors so Melody's answer is obscure for me. – AryanSonwatikar Nov 17 '18 at 3:00
• Also, for the sequence you get, the Fibonacci-ness is followed only upto the fourth term after which if we follow the ratio and if we follow the Fibonacci-ness we get two different sequences. – AryanSonwatikar Nov 17 '18 at 4:21 | 2019-06-20T04:59:21 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3000878/does-a-n-a-n-1-converge-to-the-golden-ratio-for-all-fibonacci-like-sequenc/3000901",
"openwebmath_score": 0.930814802646637,
"openwebmath_perplexity": 281.3315000520823,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9902915220858951,
"lm_q2_score": 0.865224084314688,
"lm_q1q2_score": 0.8568240754013672
} |
https://www.themathdoctors.org/how-to-find-an-inverse-function-conflicting-approaches/ | # How to Find an Inverse Function: Conflicting Approaches
While pondering issues that often give students trouble in algebra, I decided to check what we have said in Ask Dr. Math about inverse functions. I discovered four answers (all, as it happens, written by me – I tend to be attracted to certain topics!) to essentially the same question, spread over 13 years. It is one that I have pondered in teaching: having taught the subject from books that use two different approaches to the task, I find that I prefer the one that seems to be used less commonly, but each of them has its benefits. Let’s take a look at one of these four discussions, then quickly look at the others to supplement it. In 2010, Maureen wrote this:
To Invert Functions, First Subvert Routine
The inverse of a function is found by interchanging x's and y's, right? However, on Wikipedia they determine the inverse in a way that I find confusing. Specifically, I am writing what they do on the left and my confusion on the right.
f(x) = 3x + 7

y = 3x + 7                 Normally, I would now switch
                           the x's and y's and then solve
                           for y -- but Wikipedia doesn't

(y - 7)/3 = x              From my point of view this is
                           NOT the inverse -- it is the
                           original function

f(inv) of y = (y - 7)/3    This is the inverse using y as
                           the variable
Most books do not do it this way; and although I agree with the final answer, I find it somewhat meaningless. Would you agree?
I am including the usual way of finding the inverse.
f(x) = 3x + 7
y = 3x + 7        given the original function
x = 3y + 7        switch x and y
y = (x - 7)/3     solve for y to get inverse function
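Both procedures produce the same rule, differing only in the letter used for the input; a small Python check makes this concrete (the function names are purely illustrative):

```python
def f(x):
    return 3 * x + 7

def f_inv_of_y(y):
    # Wikipedia's way: solve y = 3x + 7 for x, keeping the names
    return (y - 7) / 3

def f_inv_of_x(x):
    # textbook way: swap x and y first, then solve for y
    return (x - 7) / 3

for t in [-2, 0, 1.5, 10]:
    assert f_inv_of_y(f(t)) == t            # the inverse undoes f
    assert f_inv_of_y(t) == f_inv_of_x(t)   # same function, renamed variable
```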
My reply was that, in fact, I prefer the method used by Wikipedia here, which I imagine is less often used in current textbooks (at least at an introductory level). There are a couple reasons to prefer it. One is that only this way is appropriate to applications in which the variables, unlike the typical generic x and y, have specific meanings, so that they can’t be interchanged (see below for an example); another is that this method forces us to get out of the rut of assuming that x is always the independent variable, and y is always dependent.
What they're doing is correct, and in fact is what I prefer. The confusion is probably because you are used to always thinking of y as a function of x.
(It troubles me that texts often ask questions like "Does the equation x + y^2 = 1 represent a function?" when they really mean to ask if it represents y as a function of x. In that example, x is a function of y, though y is not a function of x. A function is about the relationship between two variables, not what they are called.)
What Wikipedia has done is not to exchange the NAMES of the variables in the function, as usual, but just to change their ROLES. By solving for x, they are determining how x (the "input" of f) can be found given y (the "output" of f). That is exactly what it means to find an inverse.
If you recognize that x and y are just dummy variables, you see that Wikipedia’s form, $$f^{-1}(y) = \frac{y \, – \, 7}{3}$$, and Maureen’s form, $$f^{-1}(x) = \frac{x \, – \, 7}{3}$$, are really the same function, just with different names for the variable. By not changing the variable names at the start, Wikipedia ends up with y as the independent variable – not what most students are used to, but perfectly legal. And in fact, the main idea of an inverse function is precisely that we are changing the roles of the variables: what was the input of the original function becomes the output of the inverse function, and vice versa. What they are called is far less important than what they mean.
But Maureen needed more help:
Maybe my difficulty is from a graphing point of view.
If you graph ...
y = 3x + 1
... and you create a table of values for x and y, and then you graph ...
x = (y - 1)/3
... you get the same graph, since the tables are the same. When graphing an inverse from a table of values, you specifically interchange the x's and y's, because if you do not, you have the same table.
I feel students (or possibly just me) confuse a different way of writing a function with the inverse function. Specifically y = ln x is the same function as e^y = x. One is NOT the inverse of the other. The inverse of y = ln x is x = ln y, or e^x = y.
The two equations she graphed are actually different functions (y as a function of x, and x as a function of y); but they are equivalent equations (relating the same pairs of x and y), which is why they are represented by the same graph. Maureen has confused functions with equations. In reality, the second function, $$f^{-1}(y) = \frac{y \, – \, 1}{3}$$, is the inverse of the first, $$f(x) = 3x + 1$$, where x and y have kept the same name but changed roles.
Now consider the logarithm. This is written explicitly as a function the name of which is "ln" rather than "f" or "g," but it is the same idea. If I write ...
y = ln(x)
... I am using the ln function to express y as a function of x. This equation also expresses (implicitly) x as a function of y, since ln is one-to-one. When you solve for x to make the latter function explicit, you have
x = e^y
This explicitly states a different function than ...
y = ln(x)
... although the relation between the variables is the same.
If we were to name the new function, calling it exp, so that ...
x = exp(y)
= e^y
... the named function exp clearly is not the same function as ln. But nothing has changed except for giving it a name. The equation expresses x as a function of y, named or not.
So when we interchange the variables and change y = ln(x) to x = ln(y), we are, as you say, inverting the function -- in the sense of what function y is of x. We have changed the relationship between x and y. But in another sense, it is still the same function (as explicitly written), just expressed with different placeholders.
So there are two perspectives on what constitutes an inverse. In one sense, just swapping the variables implicitly inverts the relationship, by swapping implied roles (x becomes y); but the inverse is shown explicitly by solving for the other variable, regardless of what you call it (input becomes output).
My answer to another of the four questions I referred to above includes an example of the situation where swapping of variables is meaningless, and you must use the other approach:
Inverting, Subverted
Both approaches are valid. The first is best when the variable names mean something, so that changing their names would not make sense. For example, you wouldn't say that the inverse of C = f(r) = 2 pi r is f^-1(r) = r/(2 pi), where r has now become the circumference. Rather, you would say that r = f^-1(C) = C/(2 pi).
Your trouble, I believe, is that you are thinking of the variable names as if they had a fixed meaning somewhat like that, but using the method of inverting that requires you to swap names. You ask about "the inverse of y" rather than of the FUNCTION, making y a name for the function itself. You can't invert a variable, only a function. And if you do think of y as the same thing as f(x), you can't swap variable names and still say that.
In fact, in my circumference example above, I was initially going to call the function C(r), which is done commonly, naming the function for its output; but it would make no sense to call the inverse C^-1. If anything, we would call the inverse r(C), since it gives the value of r that yields a given circumference C. I didn't do that because it would not fit the inverse notation you are using, and would just add confusion. (If it does, ignore this paragraph!)
I prefer the first method of inverting I showed, because it helps to free students from fixating on x as the independent variable, and avoids the change of meaning that is confusing to you and others. Unfortunately, I have to teach the second in algebra classes, because it is what most students seem to see elsewhere.
For the record, the other two questions I referred to are
Graphs of Inverse Functions
Inverting Functions
The final thanks in the latter, a long, rambling discussion, are worth ending with:
The terminology and explanation has solidified my understanding. I believe my confusion was, as you pointed out, that "An equation does not define a function!" This is very important and I have never thought of it that way. It could be because I never read that somewhere or was never taught that or most importantly never had to think of them that way until now.
| 2021-07-23T23:03:39 | {
"domain": "themathdoctors.org",
"url": "https://www.themathdoctors.org/how-to-find-an-inverse-function-conflicting-approaches/",
"openwebmath_score": 0.7663712501525879,
"openwebmath_perplexity": 423.6984284395517,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9902915212263165,
"lm_q2_score": 0.865224072151174,
"lm_q1q2_score": 0.8568240626122143
} |
http://thegatebook.in/qa/4499/co-pipelining-q1?show=4563 | # CO-Pipelining-Q1
A pipeline of 2 stages with a delay of x units each is split into n stages. In the new design, each stage has a delay of x/n units. What value of n gives a throughput increase of 1700%?
reshown Jun 13
1700/100=17
17+1=18
n=18
answered Jun 13 by (19,690 points)
answered Jun 13 by (160 points)
+1 vote
Let the initial throughput be 100; the required throughput is then 1800.
speedup = 1800/100 = 18
In the first pipeline, each instruction will get completed after x units, and in the second pipeline, each instruction will get completed after x/n units
speedup = x / (x/n)
= n
therefore n = 18
answered Jun 14 by (4,100 points)
Comment if you find any issues.
Pipeline 1
One instruction <----- x units
? <-------------------------1 unit
$= \frac{1}{x}$ instructions per unit
Pipeline 2
One instruction ----> x/n units
? -----------------------> 1 units
$= \frac{n}{x}$ instructions per unit
$percentage \ of \ gain\ in \ throughput = \frac{new\ value - old\ value}{old\ value} * 100$
1700 = (n-1)* 100
17 = n-1
18=n
answered Jun 14 by (31,090 points)
edited Jun 15
In pipeline 1, stage delay = x units. Take the CPI (cycles per instruction) to be 1, which holds as long as there are no stalls.
Thus each instruction takes x time units.
Throughput in Pipeline 1 = 1/x
Similarly, every instruction in pipeline 2 takes x/n time units.
Thus, throughput in pipeline 2 = 1/(x/n) = n/x
% increase in throughput = ((new-old)/old)*100
1700 = (((n/x) - (1/x)) / (1/x)) * 100
17 =n-1
18 = n
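The computation can be packaged as a short Python check (the helper name is illustrative; note that x drops out of the percentage):

```python
def throughput_gain_pct(n, x=1.0):
    old = 1 / x              # pipeline 1: one instruction every x units
    new = n / x              # pipeline 2: one instruction every x/n units
    return (new - old) / old * 100

assert abs(throughput_gain_pct(18) - 1700) < 1e-9

# searching confirms n = 18 is the stage count that yields a 1700% gain
n = next(k for k in range(1, 100) if abs(throughput_gain_pct(k) - 1700) < 1e-9)
assert n == 18
```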
Yes you are correct. corrected the answer.
In this question (http://thegatebook.in/qa/4507) the throughput is (no. of stages / max stage time), so why is it only 1/x in the above question? | 2019-09-15T11:20:28 | {
"domain": "thegatebook.in",
"url": "http://thegatebook.in/qa/4499/co-pipelining-q1?show=4563",
"openwebmath_score": 0.31003907322883606,
"openwebmath_perplexity": 12445.317134561208,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. Yes\n2. Yes",
"lm_q1_score": 0.9591542887603538,
"lm_q2_score": 0.893309405344251,
"lm_q1q2_score": 0.8568215473258997
} |
https://math.stackexchange.com/questions/2555710/open-subset-containing-all-terms-of-a-sequence-but-a-finite-number/2555718 | # Open subset containing all terms of a sequence but a finite number
Does an open subset containing all points of a convergent sequence but a finite number of them necessarily contain the sequence's limit?
Ideally, I would like an answer for each of the following :
-general topology
-Hausdorff (T2)
-metric
My intuition is that it doesn't, because even in a metric space, the radius of a ball centered at each point of the sequence and contained in the open subset may be forced to vary over the sequence.
No. Take, in $\mathbb R$ with the usual topology, the sequence $\left(\frac1n\right)_{n\in\mathbb N}$ and the open set $(0,+\infty)$.
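The counterexample is easy to probe numerically; this Python sketch checks the first $10^4$ terms (the cutoff is arbitrary):

```python
terms = [1 / n for n in range(1, 10_001)]

# every term of (1/n) lies in the open set (0, +infinity) ...
assert all(t > 0 for t in terms)

# ... but the limit of the sequence, 0, does not
limit = 0
assert not limit > 0
```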
The answer is no for all three. Consider the sequence $\frac{1}{n} \in \mathbb{R}$ and the open set $(0, 2)$. This open set contains all but finitely many of the sequence's points (in fact, it contains all of them), but it does not contain the sequence's limit, 0. | 2019-12-07T22:39:37 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/2555710/open-subset-containing-all-terms-of-a-sequence-but-a-finite-number/2555718",
"openwebmath_score": 0.940449059009552,
"openwebmath_perplexity": 149.2808415289387,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513914124558,
"lm_q2_score": 0.8688267762381843,
"lm_q1q2_score": 0.8567947342836838
} |
http://mathhelpforum.com/pre-calculus/194793-equations-exponential-function.html | # Thread: Equations with the exponential function.
1. ## Equations with the exponential function.
Hello
I know this is elementary, but I don't know how to solve this:
The question is, find the exact value(s) of x which satisfy the equation.
$e^{2x} = e^x +12$
I am not confident working with logarithms, although I do know the laws for the sum and difference of logarithms. I can do the other questions in this exercise, but not this one.
I know if the question was:
$e^{2x} = 12$
$x = \dfrac{1}{2}\ln12$
Some help would be very much appreciated. Thank you.
2. ## Re: Equations with the exponential function.
Originally Posted by Furyan
Hello
I know this is elementary, but I don't know how to solve this:
The question is, find the exact value(s) of x which satisfy the equation.
$e^{2x} = e^x +12$
I am not confident working with logarithms, although I do know the laws for the sum and difference of logarithms. I can do the other questions in this exercise, but not this one.
I know if the question was:
$e^{2x} = 12$
$x = \dfrac{1}{2}\ln12$
Some help would be very much appreciated. Thank you.
$e^{2x} - e^x - 12 = 0$
$(e^x - 4)(e^x + 3) = 0$
finish it
3. ## Re: Equations with the exponential function.
Dear Skeeter
Originally Posted by skeeter
$e^{2x} - e^x - 12 = 0$
$(e^x - 4)(e^x + 3) = 0$
finish it
A quadratic in $e^x$, that was out of left field and I wasn't looking for it. Should have seen it though.
$x = \ln4 = 2\ln2$
$e^x = -3$ has no solutions.
Thank you very much indeed.
4. ## Re: Equations with the exponential function.
Hopefully, you know that $e^{2x}= (e^x)^2$. If you let $y= e^x$, the equation becomes $y^2= y+ 12$ which is the same as $y^2- y- 12= 0$, a quadratic equation in y. If you did not notice that this could be factored as $(y- 4)(y+ 3)$, you could solve it by completing the square or using the quadratic formula.
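The substitution can be verified mechanically; a Python sketch using only the standard library:

```python
import math

# with u = e^x, the equation becomes u^2 - u - 12 = 0; discriminant 1 + 48 = 49
u1 = (1 + math.sqrt(49)) / 2
u2 = (1 - math.sqrt(49)) / 2
assert (u1, u2) == (4.0, -3.0)

# only u = 4 > 0 can equal e^x, giving x = ln 4 = 2 ln 2
x = math.log(4)
assert abs(math.exp(2 * x) - (math.exp(x) + 12)) < 1e-9
assert abs(x - 2 * math.log(2)) < 1e-12
```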
5. ## Re: Equations with the exponential function.
Hello HallsofIvy
Originally Posted by HallsofIvy
Hopefully, you know that $e^{2x}= (e^x)^2$. If you let $y= e^x$, the equation becomes $y^2= y+ 12$ which is the same as $y^2- y- 12= 0$, a quadratic equation in y. If you did not notice that this could be factored as $(y- 4)(y+ 3)$, you could solve it by completing the square or using the quadratic formula.
Thank you. I do find letting $y= e^x$ and solving the quadratic in $y$ easier to get my head round. I was able to factor that one.
6. ## Re: Equations with the exponential function.
Hello, Furyan!
Find the exact value(s) of $x$ which satisfy the equation.
. . $e^{2x} - e^x -12\:=\:0$
As skeeter pointed out, this is a quadratic equation.
There is a way to recognize this phenomenon.
There are usually three terms, written in standard order.
If the first term has twice the exponent of the second term,
. . we may have a quadratic.
In your problem: . $e^{2(x)} - e^x - 12 \:=\:0$
Another example: . $x^6 - 7x^3 - 8 \:=\:0$
Note that we have: . $\left(x^3\right)^2 - 7(x^3) - 8 \:=\:0$
Let $u \,=\,x^3$
Then we have: . $u^2 - 7u - 8 \:=\:0$
. . $(u-8)(u+1) \:=\:0 \quad\Rightarrow\quad u \:=\:8,\text{-}1$
Back-substitute: . $\begin{Bmatrix}x^3 \:=\:8 & \Rightarrow & x \:=\:2 \\ x^3 \:=\:\text{-}1 & \Rightarrow & x \:=\:\text{-}1 \end{Bmatrix}$
And there are four complex roots as well.
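Since everything here is integer arithmetic, the back-substitution can be checked exactly in Python:

```python
p = lambda x: x ** 6 - 7 * x ** 3 - 8

# u = x^3 turns the sextic into u^2 - 7u - 8 = (u - 8)(u + 1)
for u, x in [(8, 2), (-1, -1)]:
    assert u * u - 7 * u - 8 == 0   # u is a root of the quadratic
    assert x ** 3 == u              # back-substitute
    assert p(x) == 0                # so x is a real root of the sextic
```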
7. ## Re: Equations with the exponential function.
Hello Soroban
Originally Posted by Soroban
Hello, Furyan!
As skeeter pointed out, this is a quadratic equation.
There is a way to recognize this phenomenon.
There are usually three terms, written in standard order.
If the first term has twice the exponent of the second term,
. . we may have a quadratic.
In your problem: . $e^{2(x)} - e^x - 12 \:=\:0$
Another example: . $x^6 - 7x^3 - 8 \:=\:0$
Note that we have: . $\left(x^3\right)^2 - 7(x^3) - 8 \:=\:0$
Let $u \,=\,x^3$
Then we have: . $u^2 - 7u - 8 \:=\:0$
. . $(u-8)(u+1) \:=\:0 \quad\Rightarrow\quad u \:=\:8,\text{-}1$
Back-substitute: . $\begin{Bmatrix}x^3 \:=\:8 & \Rightarrow & x \:=\:2 \\ x^3 \:=\:\text{-}1 & \Rightarrow & x \:=\:\text{-}1 \end{Bmatrix}$
And there are four complex roots as well.
Thank you for that. I will be sure to look out for this sort of equation in the future.
http://math.stackexchange.com/questions/819547/if-ab-3-and-frac1a2-frac1b2-4-then-a-b2

# If $ab=3$ and $\frac1{a^2}+\frac1{b^2}=4$, then what is $(a-b)^2$?
If $ab=3$ and $\frac1{a^2}+\frac1{b^2}=4$, what is the value of $(a-b)^2$? I think $a^2+b^2=36$, please confirm. And is it possible to figure out one of the variables?
You are right and going well, and yes we can figure the values of each variable. But it is not necessary to do so :) – chubakueno Jun 3 '14 at 18:03
What happens if you expand $(a - b)^2$? – David K Jun 3 '14 at 18:05
$$\frac1{a^2}+\frac1{b^2}=4$$ $$a^2b^2\left(\frac1{a^2}+\frac1{b^2} \right)=4a^2b^2$$ $$b^2+a^2=(2ab)^2$$ $$a^2-2ab+b^2=(2ab)^2-2ab$$ $$(a-b)^2=(2ab)^2-2ab=(2\cdot3)^2-2\cdot3=30$$
Your answer is good but try to add some words for the explanation. – Tunk-Fey Jun 3 '14 at 18:24
@Tunk-Fey - Why? each step flows well from the prior one. – JoeTaxpayer Jun 3 '14 at 19:17
@JoeTaxpayer Just in case the OP doesn't understand. You may take a look Andre Nicolas' answer. – Tunk-Fey Jun 3 '14 at 19:27
He states 'you know that $(a-b)^2=30$ ' - but it's not clear that we do. That's the question OP is asking. – JoeTaxpayer Jun 3 '14 at 19:39
Hints:
$\frac{1}{a^2}+\frac{1}{b^2} = 4 \rightarrow b^2 + a^2 = 4(ab)^2$
$(a-b)^2 = (a^2+b^2) -2(ab)$
edit:
To solve for a particular variable, you can use $ab=3 \rightarrow a = \frac{3}{b}$ to eliminate a variable. For example $a^2+b^2 = \frac{9}{b^2}+b^2$
I already know that $(a-b)^2=30$, so how would you figure out one of the variables? – user154989 Jun 3 '14 at 18:10
You know that $(a-b)^2=30$. The same strategy tells you that $(a+b)^2=42$. Thus $$a+b=\pm \sqrt{42}\quad\text{and}\quad a-b=\pm\sqrt{30}.$$ Now by adding and subtracting, we can find $2a$ and $2b$. and hence $a$ and $b$. Note that there are $4$ combinations, though if we have found one solution $a=p$, $b=q$, the other three are $a=-p$, $b=-q$, and $a=q$, $b=p$, and $a=-q$, $b=-p$.
One of the solutions is $a=\frac{\sqrt{42}+\sqrt{30}}{2}$, $b=\frac{\sqrt{42}-\sqrt{30}}{2}$.
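A quick numeric check of these values against all four relations (a Python sketch of my own, not part of the original answer):

```python
import math

a = (math.sqrt(42) + math.sqrt(30)) / 2
b = (math.sqrt(42) - math.sqrt(30)) / 2

assert abs(a * b - 3) < 1e-9                # ab = 3
assert abs(1 / a**2 + 1 / b**2 - 4) < 1e-9  # 1/a^2 + 1/b^2 = 4
assert abs((a - b) ** 2 - 30) < 1e-9        # (a - b)^2 = 30
assert abs((a + b) ** 2 - 42) < 1e-9        # (a + b)^2 = 42
```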
This is right, of course, but it misses the point: this question is all about the symmetry in the equations, and avoiding having to solve for the values of $a$ and $b$ separately. I wish any of my high school teachers would have explained the idea that you sometimes don't care about the values of each part of an expression. – symplectomorphic Jun 3 '14 at 19:30
I was answering the question that OP had, in a comment, about solving for $a$ and $b$, given that OP already knew the value of $(a-b)^2$. The point about the solution is that it "breaks symmetry" very late in the game. Completely agree about the importance of exposure to symmetric functions. – André Nicolas Jun 3 '14 at 19:34
In fact, since I trust your judgment, Andre, do you know of a textbook that looks closely at these kinds of algebraic tricks? I think heavily symmetric equations often arise in geometry (and math competitions). But I can't think of a reference that systematically discusses problems like "if $a+b=5$ and $a^2+b^2=10$, what's the value of $ab$?" where the point isn't to find $a$ and $b$ separately. – symplectomorphic Jun 3 '14 at 19:35
I cannot think of a good source. Problem collections aimed below the Olympiad level often have symmetric examples. Algebra books used to cover this kind of material. Alas, no more. – André Nicolas Jun 3 '14 at 19:39
Alas indeed. I've written my own notes on this material and was always surprised I could never ever find a thorough, exhaustive discussion of this sort of technique. The competition books are always terse: they present a few examples but leave out the really interesting discussion of what's going on. Thanks though. – symplectomorphic Jun 3 '14 at 19:42
https://gmatclub.com/forum/jerry-purchased-a-1-year-5-000-bond-that-paid-an-annual-115643.html?kudos=1

# Jerry purchased a 1-year $5,000 bond that paid an annual interest rate of 4% compounded every six months. How much interest had this bond accrued at maturity?
19 Jun 2011, 06:55

Difficulty: 65% (hard)
Question Stats: 54% (01:30) correct, 46% (01:50) wrong, based on 186 sessions
Jerry purchased a 1-year $5,000 bond that paid an annual interest rate of 4% compounded every six months. How much interest had this bond accrued at maturity?

A. $5,102
B. $408
C. $216
D. $202
E. $200
GMAT Club Legend
Joined: 11 Sep 2015
Posts: 4579
GMAT 1: 770 Q49 V46

Re: Jerry purchased a 1-year $5,000 bond that paid an annual [#permalink]
13 Sep 2017, 11:37

guygmat wrote:
Jerry purchased a 1-year $5,000 bond that paid an annual interest rate of 4% compounded every six months. How much interest had this bond accrued at maturity?

A. $5,102
B. $408
C. $216
D. $202
E. $200

The ANNUAL interest rate = 4%, so every 6 MONTHS we get 2% interest.

Since there are only 2 "compoundings," we can forgo the formula and do some quick mental calculations...

INITIAL VALUE: $5,000

AFTER 6 MONTHS: 2% of $5,000 = $100 in interest
So, the value after 6 months = $5,000 + $100 = $5,100

AFTER 12 MONTHS: 2% of $5,100 = $102 in interest
So, the value after 12 months = $5,100 + $102 = $5,202

How much interest had this bond accrued at maturity? $5,202 - $5,000 = $202. Answer: D
12 Sep 2017, 03:09
Here P = 5000
r=4%
time = 1 year
As the amount is compounded every six months, n = 2
Compound interest formula = $$A = p* (1+\frac{r}{n*100}) ^n^t$$
$$A = 5000 \left(1+\frac{4}{2 \cdot 100}\right)^{2 \cdot 1}$$
A= 5202
So Interest earned = 5202 - 5000 = 202
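The same computation, generalized as a small Python helper (a sketch of my own; the function name is not from the thread):

```python
def accrued_interest(principal, annual_rate, years, periods_per_year):
    """Interest accrued under periodic compounding: P(1 + r/n)^(n*t) - P."""
    n, t = periods_per_year, years
    return principal * (1 + annual_rate / n) ** (n * t) - principal

# 4% annual, compounded every six months (n = 2), for 1 year on $5,000:
interest = accrued_interest(5000, 0.04, 1, 2)
print(round(interest, 2))  # 202.0
```

With annual compounding (n = 1) the same call gives the simple-interest figure of $200, which is why the correct answer D is only slightly above trap answer E.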
Senior SC Moderator
Joined: 22 May 2016
Posts: 3666

Jerry purchased a 1-year $5,000 bond that paid an annual [#permalink]
13 Sep 2017, 10:14

guygmat wrote:
Jerry purchased a 1-year $5,000 bond that paid an annual interest rate of 4% compounded every six months. How much interest had this bond accrued at maturity?

A. $5,102
B. $408
C. $216
D. $202
E. $200

Estimate

Without the compound interest formula (I use it, but with short periods and a low interest rate, you can often estimate): simple interest would yield 0.04 * 5,000 = $200 in one year.

Compound interest at this low rate (halved and paid twice), after only one year, will be barely above that.

Answer D, $202, is barely above $200.

Stages

If unsure about Answer C ($216), run a quick "in stages" calculation. If interest is paid every 6 months, the 4 percent is split in half (two periods of six months in one year).

After the first six months, the interest payment is $5,000 * 0.02 = $100 (add $100 to the principal for the next stage).

After the next six months: 0.02 * $5,100 = $102

Total interest paid: $202

ANSWER D

Director
Joined: 02 Sep 2016
Posts: 629

Re: Jerry purchased a 1-year $5,000 bond that paid an annual [#permalink]
14 Sep 2017, 04:57
Simple interest = 5000 * 4 * 1 / 100 = 200
So the answer would be a number that is slightly more than 200, i.e. 202.
Board of Directors
Status: QA & VA Forum Moderator
Joined: 11 Jun 2011
Posts: 4875
Location: India
GPA: 3.5

Re: Jerry purchased a 1-year $5,000 bond that paid an annual [#permalink]
09 Oct 2018, 07:18

guygmat wrote:
Jerry purchased a 1-year $5,000 bond that paid an annual interest rate of 4% compounded every six months. How much interest had this bond accrued at maturity?

A. $5,102
B. $408
C. $216
D. $202
E. $200

$$5000\left( 1 + \frac{4}{200}\right)^2 - 5000$$
$$= 5202 - 5000$$
$$= 202$$

Answer must be (D)

Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 9952
Location: United States (CA)

Re: Jerry purchased a 1-year $5,000 bond that paid an annual [#permalink]
12 Oct 2018, 06:52

guygmat wrote:
Jerry purchased a 1-year $5,000 bond that paid an annual interest rate of 4% compounded every six months. How much interest had this bond accrued at maturity?

A. $5,102
B. $408
C. $216
D. $202
E. $200
Since the annual interest rate is 4%, we see that the bond earns 2% every half-year, or 6 months. Thus, for the first 6 months, the amount of interest earned was 0.02 x 5,000 = 100 dollars. Thus, the new principal at the end of the first 6 months was 5,000 + 100 = 5,100 dollars.
For the next 6 months, the amount of interest earned was 0.02 x 5100 = 102 dollars.
So total interest earned is 202 dollars.
http://mymathforum.com/algebra/328325-one-number-four-more-than-five-times-another-if-their-sum-decreased-six.html

# My Math Forum: One number is four more than five times another. If their sum is decreased by six....
February 29th, 2016, 08:36 AM #1
Senior Member
Joined: Feb 2016
From: seattle
Posts: 377
Thanks: 10

One number is four more than five times another. If their sum is decreased by six....

One number is four more than five times another. If their sum is decreased by six, the result is ten. Find the numbers.

I tried to write the equation from reading this. Is this anywhere near close?

x + 4 + 5x - 6 = 10

Thanks.

Last edited by skipjack; February 29th, 2016 at 10:07 PM.
February 29th, 2016, 08:49 AM #2
Math Team
Joined: Oct 2011
Posts: 14,597
Thanks: 1038
Quote:
Originally Posted by GIjoefan1976 One number is four more than five times another. If their sum is decreased by six, the result is ten. Find the numbers. I tried to write the equation from reading this. Is this anywhere near close? x+4+5x-6=10
Solve the equation for x.
Then test it yourself by substituting...
February 29th, 2016, 09:13 AM #3
Senior Member
Joined: Feb 2016
From: seattle
Posts: 377
Thanks: 10
Quote:
Originally Posted by Denis Solve the equation for x. Then test it yourself by substituting...
Okay, I don't understand what you mean by "substituting"
and both this time, and last time I can get the first number being 2.
yet then I go 2 plus 4 =6 * 5 is 30 -6 =24
so not yet figuring out why it is not working for me.
February 29th, 2016, 09:18 AM #4
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,675
Thanks: 2655
Math Focus: Mainly analysis and algebra

First of all, you have two numbers. Call them $x$ and $y$. The information gives you two equations relating $x$, $y$ and numbers you are given. See if you can build those equations from the text of the question.

Thanks from GIjoefan1976
February 29th, 2016, 09:31 AM #5
Math Team
Joined: Oct 2011
Posts: 14,597
Thanks: 1038
Quote:
Originally Posted by GIjoefan1976 Okay, I don't understand what you mean by "substituting" and both this time, and last time I can get the first number being 2. yet then I go 2 plus 4 =6 * 5 is 30 -6 =24
OK, you got x = 2
"One number is four more than five times another."
other number= 5*2 + 4 = 14 ; OK?
"If their sum is decreased by six, the result is ten."
sum = 2 + 14 = 16
16 - 6 = 10 : Bingo! You're correct
February 29th, 2016, 09:50 AM #6
Senior Member
Joined: Jul 2014
From: भारत
Posts: 1,178
Thanks: 230
Quote:
Originally Posted by GIjoefan1976 One number is four more than five times another. If their sum is decreased by six, the result is ten. Find the numbers. I tried to write the equation from reading this. Is this anywhere near close? X+4+5x-6=10 Thanks
Let one number be x
Other number is 4+5x
Quote:
Originally Posted by GIjoefan1976 If their sum is decreased by six, the result is ten
(x + 4+5x)-6 = 10
The only difference is brackets.
Continue.
February 29th, 2016, 10:24 AM #7
Senior Member
Joined: Feb 2016
From: seattle
Posts: 377
Thanks: 10
Quote:
Originally Posted by Denis OK, you got x = 2 "One number is four more than five times another." other number= 5*2 + 4 = 14 ; OK? "If their sum is decreased by six, the result is ten." sum = 2 + 14 = 16 16 - 6 = 10 : Bingo! You're correct
Thanks, I think it was the sum part that really confused me. I was not sure what they meant by that, but now I will try to remember that they mean for me to add the 2 numbers I end up finding after solving the problem, not before solving the problem.
February 29th, 2016, 10:30 AM #8
Senior Member
Joined: Feb 2016
From: seattle
Posts: 377
Thanks: 10
Quote:
Originally Posted by Prakhar Let one number be x Other number is 4+5x (x + 4+5x)-6 = 10 The only difference is brackets. Continue.
see I was thinking this too, yet was not sure how to write it.
Yet if i did it this way it confused me too as i would have gone and got
-36x-24=10
?
February 29th, 2016, 11:02 AM #9
Math Team
Joined: Oct 2011
Posts: 14,597
Thanks: 1038
Quote:
Originally Posted by GIjoefan1976 (x + 4+5x)-6 = 10 see I was thinking this too, yet was not sure how to write it. Yet if i did it this way it confused me too as i would have gone and got -36x-24=10
C'mon GIJoe; quit running at 100 mph!
There's no multiplication there; just remove the brackets:
x + 4 + 5x - 6 = 10
6x = 10 - 4 + 6
6x = 12
x = 2
February 29th, 2016, 11:05 AM #10
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,675
Thanks: 2655
Math Focus: Mainly analysis and algebra

If you write that you have two numbers $x$ and $y$, then you get two equations: $y = 4+5x$ and $x+y - 6= 10$. If you have learned how to solve simultaneous equations, this is not too difficult to solve (indeed, one approach gives you exactly the equation that Denis was working you through). The important thing is to be able to build the equations. To understand what the sentences tell you and be confident that you have written it down correctly. The rest is relatively simple algebra.
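The substitution route from the thread can be written out in plain Python (a sketch of the arithmetic, mine):

```python
# Let the numbers be x and y with y = 5x + 4 ("four more than five times
# another") and (x + y) - 6 = 10 ("their sum decreased by six is ten").
# Substituting: x + (5x + 4) - 6 = 10  ->  6x = 10 - 4 + 6  ->  6x = 12.
x = (10 - 4 + 6) / 6
y = 5 * x + 4

assert (x + y) - 6 == 10  # their sum decreased by six is ten
print(x, y)  # 2.0 14.0
```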
https://www.physicsforums.com/threads/position-vector.99312/

# Position vector
1. Nov 10, 2005
### Ratzinger
Given the typical Cartesian xyz-coordinate system, is it correct to speak of a position vector? Isn't (x, y, z) just shorthand for the coordinates? Distance vectors, force, and velocity are real vectors with magnitude and direction in position space, but what about a position vector in position space?
I'm confused, help needed
2. Nov 10, 2005
### whozum
The position vector points from the origin to the position of the particle. For your point it is < x , y , z >
Positions that vary with time are expressed as functions of x y and z, and you get parametric functions:
$\vec{r}_t = < x(t) , y(t), z(t) >$
3. Nov 10, 2005
### Ratzinger
yes, but isn't what you describe rather a parameterized curve, not these geometrical objects with head and tail that you can move parallel around?
I mean vectors in position coordinates without any time parameter mentioned.
4. Nov 10, 2005
### FredGarvin
Your question is a bit confusing to me. A specified position vector without any time reference just means that the function that produces the curve for the thing's position in space has been evaluated at some specific time. Ultimately the position has to be a function of time or how can you come up with velocity and acceleration?
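That dependence on time can be made concrete: given a position function $r(t)$, velocity is its time derivative. A Python sketch (the sample trajectory and names are mine) using a central-difference estimate:

```python
def position(t):
    """r(t) = <x(t), y(t), z(t)>: a sample parametric trajectory."""
    return (3 * t, 2 * t**2, 5.0)

def velocity(t, h=1e-6):
    """Central-difference estimate of dr/dt, component by component."""
    ahead, behind = position(t + h), position(t - h)
    return tuple((a - b) / (2 * h) for a, b in zip(ahead, behind))

vx, vy, vz = velocity(2.0)
# Analytically dr/dt = <3, 4t, 0>, so at t = 2 this is close to (3, 8, 0).
```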
5. Nov 10, 2005
### pmb_phy
A displacement vector is a vector which has its tail at one point and its tip at another point. If we refer to the point where the tail is as the "origin" then we refer to the displacement vector as the "position vector" of the point at the tip. We can label that point in any way we please, not only as R = (x, y, z). Quite often you'll see the position vector expressed as
$$R = (r, \theta, \phi)$$
This means that R has the value
$$R = A_r e_r + A_{\theta} e_{\theta} + A_{\phi} e_{\phi}$$
where ek is a unit vector pointing in the direction of the increasing kth coordinate. The coefficients have the value Ak = R*ek ("*" = dot product). The position vector is a bound vector, i.e. attached to the space in which it lies.
Pete
Last edited: Nov 10, 2005
6. Nov 10, 2005
### vaishakh
A position vector is just a standard way of showing vectors, and there is nothing like being "attached". The vector diagram by itself has no physical meaning. The aim of physics is to solve the vector equations, and while doing so we sometimes take the help of a diagram.
7. Nov 11, 2005
### Ratzinger
thanks for the replies, I found this
An important difference between a position vector R and a general vector such as delta R is that the components of R are x, y and z, whereas for delta R the components are delta x, delta y and delta z. It is important to distinguish between true vectors and position vectors. A true vector does not depend on coordinate system but only on the difference between one end of the vector and the other. A position vector, in contrast, does depend on the coordinate system, because it is used to locate a position relative to a specified reference point.
8. Nov 11, 2005
### pmb_phy
Where did you get this definition from?? A vector is a geometrical object which does not depend on the coordinate system. Any vector may be expressed in terms of other vectors which are related to a particular coordinate system. But that doesn't mean that it is defined by the coordinate system. I thought I made that clear above but I guess not. I did explain that a vector is an arrow. Some vectors are called "Bound vectors" while others are called "free vectors." This is an important distinction that you should learn. The position vector does not depend on any coordinate system whatsoever. It depends on a geometric object, i.e. a "point." This point I speak of is known as the "reference point."
Wrong. You're confusing "reference point" with "coordinate system."
They are very different things.
For details on what I've been talking about see Thorne and Blanchard's online notes at
http://www.pma.caltech.edu/Courses/ph136/yr2004/0401.1.K.pdf
Pete
9. Nov 11, 2005
### Ratzinger
thanks Pete for the great notes you linked
So we have bound and free vectors. That's a nice distinction.
Free vectors can be parallel moved around or can be represented in different coordinate systems, they keep their meaning. Bound vectors do not, because they can clearly not be moved around and keep meaning and when coordinates systems are changed we talk about distance vectors. Bound vectors are bound to a coordinate system. A position vector gives only for his coordinate system information.
10. Nov 11, 2005
### pmb_phy
Bound vectors are not bound to a coordinate system. They're bound to the space that the vector is in, and that is independent of the coordinate system. Give me any position vector and I can easily represent it in three different coordinate systems. I clarified this point here, as I recall
http://www.geocities.com/physics_world/ma/coord_system.htm
Pete
11. Nov 11, 2005
### Ratzinger
Distance, velocity, force -all coordinate-independent quantities. But to ask what the position is of a point clearly needs a coordinate system. Position is only meaningful with a reference system.
Position vector in position space is a misnomer (in any position coordinates). As much as a momentum vector in momentum space.
12. Nov 11, 2005
### HallsofIvy
Staff Emeritus
Your original question is a very good one. In a Cartesian coordinate system, we can think of the "position vector" as the vector from the origin to the point, so that, at one instant, the tip of that vector is the point and the curve the point moves on is the curve sketched by the tip of the vector.
However, in non-Cartesian coordinates, especially in situations such as are common in General Relativity, where we have curved surfaces that admit no Cartesian coordinate system, there is no such thing as a "position vector". (I used to worry about what vectors on the surface of a sphere 'looked like'!)
It is much better to think in terms of tangent vectors at every point. I like to think "a vector is a derivative".
13. Nov 12, 2005
### pmb_phy
Given a reference point the position of an arbitrary point requires only a magntitude and a direction - i.e. a vector.
HallsofIvy - I believe that you're confusing coordinate spaces with the space itself. A position vector can be defined in all spaces which have no curvature. Curvature exists independent of a coordinate system. There is no position vector in a curved space.
Pete
14. Nov 12, 2005
### robphy
I think the issue underlying the OP's questions is the distinction between a vector space and an affine space.
http://mathworld.wolfram.com/AffineSpace.html
http://en.wikipedia.org/wiki/Affine_space
A vector space has an "origin", whereas an affine space does not.
Positions are elements of an affine space.
Displacements (the difference of two positions) are elements of a vector space.
(Only after one assigns a norm can one talk of a "magnitude" of a vector.)
15. Nov 12, 2005
### lightgrav
WHY is the distinction between free vectors and bound vectors IMPORTANT?
I can add and subtract position vectors, even tho they're "bound" to origin.
Yet, the Torque due to a Force does NOT treat the Force as a "free" vector.
16. Nov 13, 2005
### pmb_phy
The physical meaning of adding position vectors is to start with one vector whose tail is at the origin. The next position vector added to this one has its tail at the tip of the first one. So this vector is bound as well, and is bound at a different place. Torque is the cross product of a bound vector and a free vector, making it a free vector.
Other physical vectors are things like the center of mass vector.
Pete
17. Nov 15, 2005
### pmb_phy
Rob - There is no requirement for a vector space to have an origin. The term "Origin" refers to a particular point that an observer uses as a reference point. The user may also use other points in order to clearly define directions. There is no unique point in any space which demands to be an origin. From your links I don't see what you mean by "affine space." Can you elaborate for me? Thanks.
Pete
18. Nov 15, 2005
### HallsofIvy
Staff Emeritus
Yes, there is a requirement for a vector space to have an "origin". One of the requirements for a vector space is that there be a 0 vector. That is what you are referring to as an "origin". An "affine space" is a set of points such that any line through one of the points is contained in the space. You can think of it as a plane or 3d space or any Rn without a coordinate system. Once you add a coordinate system (so that you have an origin), you can make it a vector space. An affine space is what you seem to be thinking of as a "vector space". You can add vectors, you can't add points in an affine space.
19. Nov 15, 2005
### HallsofIvy
Staff Emeritus
You're right. I referred to "coordinate systems" when I really meant the space itself.
20. Nov 15, 2005
### robphy
Here's some physical motivations on this issue concerning an affine space vs. a vector space.
A space of positions in a plane (or the space of times on a line) is an affine space. With no point being physically distinguished from any other, an affine space is more natural than a vector space. In an affine space, there is no sense of addition of elements of this space. (If you attempt to add two elements, the sum depends on the choice of an "origin" [which does not exist in an affine space]. As was mentioned, one can introduce an origin by introducing a coordinate system. Then, the sum now depends on a choice of coordinate system.) There is, however, a sense of subtraction... the "difference of two positions [in an affine space]" is a vector... the displacement vector. (The difference does not depend on a choice of [or existence of] an origin.)
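This affine/vector distinction can be enforced in code. A Python sketch (the types and names are mine): points form an affine space, so point - point yields a vector and point + vector yields a point, while point + point is rejected because it would require choosing an origin:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    """Element of the vector space: a displacement. Vectors may be added."""
    x: float
    y: float
    def __add__(self, other):
        return Vector(self.x + other.x, self.y + other.y)

@dataclass(frozen=True)
class Point:
    """Element of the affine space: a position. No origin is assumed."""
    x: float
    y: float
    def __sub__(self, other):
        # Point - Point -> Vector: the difference of two positions is a displacement.
        return Vector(self.x - other.x, self.y - other.y)
    def __add__(self, disp):
        # Point + Vector -> Point; Point + Point is meaningless without an origin.
        if not isinstance(disp, Vector):
            raise TypeError("can only add a Vector to a Point")
        return Point(self.x + disp.x, self.y + disp.y)

p, q = Point(1.0, 2.0), Point(4.0, 6.0)
d = q - p          # Vector(3.0, 4.0): well defined, origin-free
assert p + d == q  # a point plus a displacement is another point
```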
https://gmatclub.com/forum/how-many-roots-does-x-6-12x-4-32x-2-0-have-214685.html
# How many roots does x^6 - 12x^4 + 32x^2 = 0 have?
Math Expert
Joined: 02 Sep 2009
Posts: 49303
How many roots does x^6 - 12x^4 + 32x^2 = 0 have? [#permalink]
10 Mar 2016, 05:10

Difficulty: 45% (medium)
Question Stats: 56% (01:02) correct, 44% (01:23) wrong, based on 97 sessions
How many roots does x^6 - 12x^4 + 32x^2 = 0 have?
(A) 1
(B) 2
(C) 3
(D) 4
(E) 5
EMPOWERgmat Instructor
Status: GMAT Assassin/Co-Founder
Affiliations: EMPOWERgmat
Joined: 19 Dec 2014
Posts: 12432
Location: United States (CA)
GMAT 1: 800 Q51 V49
GRE 1: Q170 V170
How many roots does x^6 - 12x^4 + 32x^2 = 0 have? [#permalink]
21 Mar 2017, 19:59
Hi mesutthefail,
The GMAT often tests you on rules/patterns that you know, but sometimes in ways that you're not used to thinking about. This prompt is really just about factoring and Classic Quadratics, but it looks a lot more complicated than it actually is.
A big part of properly dealing with a Quant question on the GMAT is in how you organize the information and 'simplify' what you've been given. This prompt starts us off with....
X^6 –12X^4 + 32X^2 = 0
This certainly looks complex, but if you think about how you can simplify it, then you'll recognize that can 'factor out' X^2 from each term. This gives us...
(X^2)(X^4 - 12X^2 + 32) = 0
Now we have something a bit more manageable. While you're probably used to thinking of Quadratics such as X^2 + 6X + 5 as (X+1)(X+5), that same pattern exists here - it's just the exponents are slightly different (even though the math rules are exactly the SAME). We can further rewrite the above equation as...
(X^2)(X^2 -4)(X^2 - 8) = 0
At this point, you don't really need to calculate much, since each 'piece' of the product should remind you of a pattern that you already know.....
X^2 = 0 --> 1 solution
(X^2 - 4) = 0 --> 2 solutions
(X^2 - 8) = 0 --> 2 solutions

Total: 1 + 2 + 2 = 5 roots, so the answer is (E).
GMAT assassins aren't born, they're made,
Rich
Manager
Joined: 09 Jul 2013
Posts: 110
How many roots does x^6 - 12x^4 + 32x^2 = 0 have? [#permalink]
Updated on: 29 Mar 2017, 08:58
To find the roots we can factor the polynomial down to first order expressions and count how many roots we get.
The first thing to do is to factor out $$x^2$$
Then we have $$x^2(x^4-12x^2+32)=0$$
Now to make things more familiar we can substitute $$y=x^2$$
Now it looks like this
$$y(y^2-12y+32)=0$$
And we can factor it like we are used to
$$y(y-4)(y-8)=0$$
y=0
y=4
y=8
So substituting $$x^2$$ back in for y:
$$x^2=0$$
$$x^2=4$$
$$x^2=8$$
So then the roots are
$$x=0$$
$$x=-2$$
$$x=2$$
$$x=-\sqrt{8}$$
$$x=\sqrt{8}$$
Note, since we are not asked to find the actual roots, we only need to determine the number of roots. After the first step of factoring out the $$x^2$$, we have $$x^2$$ and a 4th order expression. A 4th order expression will have 4 roots (unless it's a perfect square, but our expression is not a perfect square). So the total number of roots will be 4 from the 4th order expression, and one from the $$x^2$$. Total roots = 5.
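A direct check of all five roots (a Python sketch of my own, not part of the post):

```python
import math

def f(x):
    return x**6 - 12 * x**4 + 32 * x**2

roots = [0.0, 2.0, -2.0, math.sqrt(8), -math.sqrt(8)]
for r in roots:
    assert abs(f(r)) < 1e-8   # each candidate really is a root

assert len(set(roots)) == 5   # five distinct real roots -> answer (E)
```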
_________________
Dave de Koos
Originally posted by davedekoos on 10 Mar 2016, 14:28.
Last edited by davedekoos on 29 Mar 2017, 08:58, edited 1 time in total.
##### General Discussion
Intern
Joined: 12 Dec 2016
Posts: 10
Re: How many roots does x^6 –12x^4 + 32x^2 = 0 have? [#permalink]
21 Mar 2017, 02:27
I thought polynomials were not a subject on the GMAT? This question contains quite the polynomial solutions.
Intern
Joined: 12 Dec 2016
Posts: 10
Re: How many roots does x^6 –12x^4 + 32x^2 = 0 have? [#permalink]
21 Mar 2017, 02:31
davedekoos wrote:
To find the roots we can factor the polynomial down to first order expressions and count how many roots we get.
The first thing to do is to factor out $$x^2$$
Then we have $$x^2(x^4-12x^2+32)=0$$
Now to make things more familiar we can substitute $$y=x^2$$
Now it looks like this
$$y(y^2-12y+32)=0$$
And we can factor it like we are used to
$$y(y-4)(y-8)=0$$
A question. When you first factored out $$x^2$$, the x term in 32x^2 disappeared completely; however, when you then factored out the "y" expression, 12y^2 became 12y instead of 12. Can you please clarify?
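For readers puzzling over the substitution, a quick numeric check (an editorial sketch, not from the thread) shows the three forms agree: with y = x^2, the term 12x^4 becomes 12y^2, and the middle term only reads 12y after the common factor y has been pulled out:

```python
def p(x):                  # the original polynomial in x
    return x**6 - 12 * x**4 + 32 * x**2

def p_in_y(y):             # same polynomial with y = x^2: 12x^4 -> 12y^2
    return y**3 - 12 * y**2 + 32 * y

def factored(y):           # after factoring out the common y: the middle term is 12y
    return y * (y**2 - 12 * y + 32)

# All three agree at every x, so nothing was lost in either step.
for x in [-3.0, -1.5, 0.0, 0.5, 2.0, 7.0]:
    assert abs(p(x) - p_in_y(x**2)) < 1e-9
    assert abs(p_in_y(x**2) - factored(x**2)) < 1e-9
```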
https://forum.bionicturtle.com/threads/hull-instructional-video-ch4-duration.22262/#post-74751

# Hull, Instructional video, ch4 - Duration
#### Branislav
##### Member
Subscriber
Dear David,
Thanks a lot for the video lectures, they are very inspiring. Still, I was a little bit confused with all these different names: duration, modified duration, Macaulay duration, etc. I will shortly lay out my view of this and kindly ask you to comment (but without laughing).
According to my understanding, we are methodologically speaking about one risk measure all the time, called duration - how long on average shall I wait as a bond holder to receive cash payments - or, in terms of the formula as you explained (formula 1):
D = [sum over i of t(i)*c(i)*exp(-y*t(i))] / B
From this formula we see that when we say "on average" we are referring to a time-weighted average, so as a result for a 3-year maturity bond I will obtain, let's say, 2.63 - so this is a time-based measure (for a zero-coupon bond it is equal to 3 - no cash flows until the very same end, if you can wait that much).
If you agree with me on this definition, then this is the same thing as Macaulay duration; there is nothing new coming with this new name, beside of course the great honor and memory of Frederick Macaulay who introduced this concept.
So we are still on duration and keep playing further. What if we take the first derivative with respect to yield, just to check what the bond's sensitivity to a yield change is...
Delta(B) = dB/dy * Delta(y) (formula 2), and dB/dy is "similar" to the right side of formula 1, just with a minus in front, and we need to "remove" B from the denominator; or, put another way: dB/dy = -B*D, and using formula 2 we obtain (let us call it the "yield/price sensy formula"):
Delta(B) = -B*D*Delta(y)
For me this was kind of "magic"... somehow my year-based measure D becomes an interest-rate (yield) sensitivity measure!
But basically we are still talking about the duration from the beginning of the text; with this simple "math" transformation we saw that it is also connected to the bond's price sensitivity to a yield change.
We play further... we assumed continuous compounding above; if we go to annual compounding, then the bond price is a slightly different summation:
B = [sum over t of c(t) / (1+i)^t]
(note: just the yield y is replaced with i) and the duration and its first derivative are slightly different, so their relationship from the "yield/price sensy formula" is now transformed to:
Delta(B) = -B*D*Delta(y)/(1+y)
and we introduce a new "name" again, "modified duration", as:
D* = D/(1+y), which transforms the previous equation to:
Delta(B) = -B*(D*)*Delta(y)
so again D* is the duration from the beginning of the text, just for the yearly-compounding case, "used" in this formula to express the sensitivity of the bond's price to a yield change.
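These relationships are easy to verify numerically (an editorial sketch for a hypothetical 3-year 5% annual-pay bond; the numbers are illustrative, not from the post):

```python
# Hypothetical 3-year bond: 5% annual coupon, face 100, annual-compounded yield 4%.
# (Illustrative numbers only -- not from the post.)
cash_flows = [(1, 5.0), (2, 5.0), (3, 105.0)]   # (time in years, cash flow)
y = 0.04

def price(y):
    return sum(c / (1 + y) ** t for t, c in cash_flows)

B = price(y)
# "Formula 1": PV-weighted average time of the cash flows (Macaulay duration).
D_mac = sum(t * c / (1 + y) ** t for t, c in cash_flows) / B
# Modified duration for annual compounding: D* = D / (1 + y).
D_mod = D_mac / (1 + y)

# "Yield/price sensy formula": Delta(B) ~= -B * D* * Delta(y), here for a 1 bp shock.
dy = 0.0001
approx = -B * D_mod * dy
exact = price(y + dy) - price(y)
print(round(D_mac, 4), round(D_mod, 4))  # 2.8615 2.7514
```

The linear approximation and the exact re-priced change agree to well within a tenth of a cent for a 1 bp shock, which is the "magic" in action.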
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi @Branislav Super glad you found some inspiration in the videos: that's what we are here for. In regard to your summary: yes, right!! All of that looks solid to me, very well done. I agree with virtually all of your statements, and I definitely agree with the substantial point that, to paraphrase, there is really only one duration. I've posted dozens of times on these concepts, but here is how I would summarize, and I believe this summary maps pretty well to your math; further, you can see that I'm agreeing that Mac/mod/effective duration are three faces of the same single concept. Effective duration simply estimates modified duration, and modified duration is a sort of "adjusted" Macaulay duration which adjusts for the effects of discrete discounting on the price (the adjustment is not needed if discounting is continuous). We tend to refer to Macaulay duration as a maturity and modified duration as a sensitivity, but the mathematical units of both, in fact, are years, such that it is a minor mathematical adjustment from one to the other, as you do imply!
Here is how I would summarize:
• Macaulay duration, I'll denote D_Mac, is the bond's weighted average maturity (as illustrated by your first formula above, where the weights assigned to the maturities are the PVs of the cash flows as a share of price; i.e., each maturity t(i) is weighted by c(i)*exp[-y*t(i)]/B).
• modified duration, I'll denote D_mod, is the linear sensitivity of the bond's price with respect to a small yield change; i.e., if the modified duration is 3.5 years, then we (linearly) approximate a yield change of +Δy will associate with a 3.5*Δy percentage drop in the bond price (knowing that curvature/convexity has been ignored). When the yield is continuously compounded, Mac duration = modified duration. When the yield is discretely rendered, we need to adjust the Mac to retrieve the accurate D_mod = D_mac/(1 + yield/k) where k = number of periods per year; i.e., when discrete, D_mod is always a bit less than D_Mac
• In this way, modified duration is a measure of sensitivity: %ΔP = ΔP/P = -(D_mod)*Δy, solving for D_mod:
• D_mod = -1/P * ΔP/Δy or continuously D_mod = -1/P * ∂P/∂y; i.e., modified duration is the first partial derivative (of bond price) with respect to the yield multiplied by -1/P. If we multiply each side by price, P, then:
• P *D_mod = -ΔP/Δy = "dollar duration;" i.e., dollar duration is the (negative) of the pure first derivative (i.e., the slope of the tangent line, itself negative). Importantly, dollar duration divided by 10,000 is the DV01 because P *D_mod/10,000 = DV01.
• However, if you start with the bond price function (either continuous or discrete) and if you take the first derivative, then you can see that you should end up with (the negative of) the dollar duration: ∂P/∂y = -D_mod*P (this forum has dozens of such actual derivations if you search). Therefore, by definition, if you take the first derivative, you should also (as I think you do imply), equivalently end up with ∂P/∂y = -D_mac*P/(1+y/k). There is an old saying: duration is "infected by price" to acknowledge that 1/P "infects" the pure derivative.
• Effective duration approximates modified duration by shocking the yield and re-pricing in order to approximate the slope of the tangent via a nearby secant. Effective duration is a sort of mini-simulation used to estimate the analytical modified duration (itself a first-order Taylor-series measure) when it is not analytically available (e.g., MBS with negative convexity throws off the analytics). You will really understand when you can see that the effective duration approximates the modified duration which itself is an exact linear approximation. In this way, effective duration and modified duration, although they differ in approach, are both sensitivities and not conceptually different.
Last edited:
#### Matthew Graves
##### Active Member
Subscriber
I think I would add a few further points with respect to the practical use of Effective Duration and differences between Effective Duration and Modified Duration.
Modified Duration is sensitivity of the price with respect to the Yield to Maturity. These two measures are precisely defined mathematically and not open to interpretation.
Effective Duration, however, is a more practical (and complicated) measure derived through re-valuation of the instrument. It is the price sensitivity of the instrument to a parallel shift in a valuation curve and is therefore model dependent. Depending on the instrument, this can be a deterministic valuation but could equally be based on Monte Carlo simulation if the instrument has optionality. If the underlying curve is flat at the yield to maturity and the instrument does not have optionality, you would expect the Effective Duration to be very close to the Modified Duration. However, in all practical, real-world valuation cases the Effective Duration would not be equal to the Modified Duration due to the shape of the underlying curve and any optionality in the instrument (e.g. callable bonds). Separately (and rather technically), the shift applied for Effective Duration is conventionally applied to the observed market yields comprising the underlying curve before obtaining the zero rate curve. The shift is not applied to the zero curve directly. This has subtle but observable effects on the effective duration also.
#### enjofaes
##### Member
Subscriber
Hi @David Harper CFA FRM. Thanks again for all the material! I was wondering if what you said was correct in the instructional video on duration: around 24'14" you say modified duration = dollar duration / 10,000. I thought from the beginning of the video that this is the formula for the DV01.
Kind regards
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi @enjofaes If I said that (sorry, I'll have to locate the specific video location later), then I misspoke. You are correct: DV01 = (P*D)/10,000 = dollar duration/10,000. I'm actually very happy with my summary note; see https://forum.bionicturtle.com/threads/week-in-financial-education-2021-05-24.23840/post-88846 i.e.,
Note: Durations in CFA and FRM compared
I'd like to clarify duration terminology as it pertains to differences between the CFA and FRM. This forum has hundreds of threads over 12+ years on duration concepts (it's hard to say which links are the best at this point, but I'll maybe come back and curate some best links). Our YouTube channel has a FRM P2.T4 that includes videos on DV01, hedging the DV01, effective duration, modified versus Macaulay duration, and an illustration of all three durations. There are many nuances and further explorations, but here my goal is only to clarify the top-level definitions.
I'll use the simple example of a $100.00 face 20-year zero-coupon bond that currently yields (yield to maturity of) 6.0% per annum. If the yield is 6.0% per annum with continuous compounding, the price is $100.00*exp(-0.060*20) = $30.12. If the yield is 6.0% per annum with annual compounding, the price is $100.00/(1+0.060)^20 = $31.18. Unless otherwise specified, I will assume a continuous compound frequency. Special note: we so often price a bond given the yield (where CPT PV is the final calculator step) that it is easy to forget yield is not actually an input. Yield is the internal rate of return (IRR) assuming the current price. Yield does not determine price; price determines yield. Technical (non-fundamental) factors cause price to fluctuate, therefore yield fluctuates.
• ∂P/∂y (or Δp/Δy) is the slope of the tangent line at the selected yield. At 6.0% yield, the slope is -$602.39. How do I know that? Because dollar duration is the negated slope, so in this case dollar duration (DD) = P*D = $30.12 price * 20 years = $602.39. Importantly, the "y" in ∂P/∂y is yield and yield is just one of several interest rate factors.
• Dollar duration (DD; aka, money duration in the CFA) is analytically the product of price and modified duration. Dollar duration (DD) = P*D = $30.12 * 20 = $602.39. Why is it so large? Because it's the (negated) tangent line's slope, so it has the typical first derivative interpretation: DD is the dollar change implied by one unit change in the yield, -∂P/∂y. One unit is 1.0 = 100.0% = 100 * 100 basis points (bps) per 1.0% = 10,000 basis points. So, DD is the dollar change implied by a 100.0% change in yield if we use the straight tangent line which would be a silly thing to do! Recall the constant references to limitations of duration as linear approximation. The linear approximation induces bias at only 5 or 10 or 20 basis points, so 10,000 basis points is literally "off the charts" and not directly meaningful. What is meaningful? The PVBP (aka, DV01) comes to our rescue with a meaningful re-scaling of the DD ...
• Price value of basis point (aka, dollar value of '01, DV01) is the dollar duration ÷ 10,000. It's the tangent line's slope re-scaled from Δy = 100.0% to Δy = 0.010% (one basis point). PVBP = P*D/10,000; in this example, PVBP = $30.12 * 20 / 10,000 = $0.06024. It is the dollar change implied by a one basis point decline in the yield. It is still a linear approximation, but much better because we zoomed in to a small change. In this way, the difference between the highly useful PVBP and the dollar duration is merely scale.
• Macaulay duration is the bond's weighted average maturity where the weights are each cash flow's present value as a percentage of the bond's price. Macaulay duration is tedious; however, it is reliable and it is analytical. When we can compute the Macaulay duration, it is accurate; we don't approximate by re-pricing the bond. A zero-coupon bond has a Macaulay duration equal to its maturity because it only has one cash flow (hence the popularity of the zero-coupon bond in exam questions, never mind that the zero-coupon bond is a reliable primitive). Our 20-year zero-coupon bond has a Macaulay duration of 20.0 years.
• Modified duration is the measure of interest rate risk. Modified duration is the approximate percentage change in bond price implied by a 1.0% (100 basis point) change in the yield. Just as ∂P/∂y refers to the tangent line's slope which is "infected with price," we divide by price to express the modified duration, D(mod) = -1/P*∂P/∂y. The key relationship between analytical modified and Macaulay duration is the following: modified duration = Macaulay duration / (1 + y/k) where k is the number of compound periods in the year; e.g., k = 1 for annual compounding, k = 2 for semiannual compounding and k = ∞ for continuous compounding. Importantly, if the compound frequency is continuous then a bond's modified duration equals its Macaulay duration. Notice that T / (1 + y/∞) = T / (1 + 0) = T.
• If the 6.0% yield is annual compounded, our 20-year bond's modified duration is given by 20.0 / (1 + 6.0%) = 18.868 years.
• If the 6.0% yield is continuously compounded, our 20-year bond's modified duration is 20.0 years.
• Effective duration is an approximation of modified duration. Recall the modified duration is a linear approximation, but that's because it is a function of the first derivative; otherwise, modified duration is an exact (analytical or functional) measure of the price sensitivity with respect to the interest rate factor that happens to most often be the yield. We can retrieve it easily whenever we can compute the Macaulay duration, which is the case for any vanilla bond. Otherwise (e.g., bond has an embedded option) we approximate the modified duration by calculating its effective duration. The effective duration approximates the modified duration which itself is a linear approximation. The effective duration is given by [P(-Δy) - P(+Δy)] / (2*Δy) * 1/P. I wrote it this way so you can see that it is essentially similar to ∂P/∂y*1/P where ∂P/∂y ≅ [P(-Δy) - P(+Δy)] / (2*Δy). I've observed that many candidates do not realize that the formula for effective duration is simply slope*1/P. Geometrically, it is the slope of the secant line that is near to the tangent line! Secant's slope approximates the tangent's slope. If you grok the calculus here, I think you'll agree that this is all just one thing! Now we can see how it's not so different. But as you can visualize, there are an almost infinite variety of secants next to the tangent. We arbitrarily choose a nearby secant, but we'd prefer a small delta if the bond is vanilla (i.e., if the bond's cash flows are invariant to rate changes). Although we do not need the effective duration for our example bond, we can compute it:
• If our arbitrary yield shock is 10 basis points such that Δy = 0.10%, then P(-Δy) = $100.00*exp(-5.90%*20) = $30.728, and P(+Δy) = $100.00*exp(-6.10%*20) = $29.523. Effective duration = ($30.728 - $29.523)/0.0020 * 1/$30.12 = 20.0013 years. Fine approximation!
• On the terminology (CFA versus FRM)
• Interest rate factor: The FRM (informed by Tuckman) starts with a general interest rate factor. This is typically the spot rate, forward rate, par rate, or yield. Importantly, the spot, forward and par rates are term structures, or vectors; the par yield curve is a vector of par rates at various maturities, often at six-month or one-month intervals. Only the yield is a single (aka, scalar) value.
• My above definition of the effective duration is according to the FRM (and to me). The CFA sub-divides this effective duration into either approximate modified duration (if the interest rate factor is the yield) versus effective duration (if the non-vanilla nature of the bond requires a non-yield interest rate factor; i.e., a benchmark yield curve). Personally, I am not keen on this semantic approach because (i) both of these CFA formulas are approximating the modified duration and (ii) I prefer to reserve "effective" for its traditional connotation (e.g., effective convexity is analogous to effective duration), and (iii) we wouldn't anyhow use an inappropriate factor (yield) for certain non-vanilla situations, so we don't really need label-switches to guide us thusly! (the CFA's formula for its approximate modified duration is essentially the same as its effective duration formula). To me, the CFA's approach muddies the terms "approximate" and "effective" where the math gives us natural distinctions. Follow the math, I'd say!
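The zero-coupon example above can be reproduced in a few lines (an editorial sketch using the post's own inputs):

```python
import math

# David's example: $100 face, 20-year zero-coupon bond, 6% continuous yield.
F, T = 100.0, 20.0

def price(y):
    return F * math.exp(-y * T)

y = 0.06
P = price(y)                        # ~ $30.12
D_mac = T                           # single cash flow at T = 20
D_mod = D_mac                       # continuous compounding: no adjustment
dollar_duration = P * D_mod         # negated tangent slope, ~ $602.39
pvbp = dollar_duration / 10_000     # DV01, ~ $0.06024

# Effective duration: slope of a nearby secant (+/- 10 bp re-pricings) times 1/P.
dy = 0.001
eff = (price(y - dy) - price(y + dy)) / (2 * dy) * (1 / P)

print(round(P, 2), round(pvbp, 5), round(eff, 4))  # 30.12 0.06024 20.0013
```

The secant-based effective duration lands within about a seventh of a basis point of the analytical modified duration of 20.0, which is the "fine approximation" point.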
https://proofwiki.org/wiki/Definition:Supremum_of_Set

# Definition:Supremum of Set
## Definition
Let $\struct {S, \preceq}$ be an ordered set.
Let $T \subseteq S$.
An element $c \in S$ is the supremum of $T$ in $S$ if and only if:
$(1): \quad c$ is an upper bound of $T$ in $S$
$(2): \quad c \preceq d$ for all upper bounds $d$ of $T$ in $S$.
If there exists a supremum of $T$ (in $S$), we say that:
$T$ admits a supremum (in $S$) or
$T$ has a supremum (in $S$).
### Finite Supremum
If $T$ is finite, $\sup T$ is called a finite supremum.
### Subset of Real Numbers
The concept is usually encountered where $\struct {S, \preceq}$ is the set of real numbers under the usual ordering $\struct {\R, \le}$:
Let $T \subseteq \R$ be a subset of the real numbers.
A real number $c \in \R$ is the supremum of $T$ in $\R$ if and only if:
$(1): \quad c$ is an upper bound of $T$ in $\R$
$(2): \quad c \le d$ for all upper bounds $d$ of $T$ in $\R$.
The supremum of $T$ is denoted $\sup T$ or $\map \sup T$.
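As a concrete illustration of conditions $(1)$ and $(2)$ (an editorial example, not part of the original page):

```latex
% Example: T = { 1 - 1/n : n = 1, 2, 3, ... } has supremum 1, even though 1 is not in T.
% (1): 1 - 1/n <= 1 for every n, so 1 is an upper bound of T.
% (2): if d < 1, choose n > 1/(1 - d); then 1 - 1/n > d, so no d < 1 is an
%      upper bound, and hence 1 is the least upper bound.
\[
  T = \left\{\, 1 - \tfrac{1}{n} : n \in \mathbb{N}_{>0} \,\right\},
  \qquad \sup T = 1 \notin T
\]
```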
## Also known as
Particularly in the field of analysis, the supremum of a set $T$ is often referred to as the least upper bound of $T$ and denoted $\map {\operatorname {lub} } T$ or $\map {\operatorname {l.u.b.} } T$.
Some sources refer to the supremum of a set as the supremum on a set.
## Also defined as
Some sources refer to the supremum as being the upper bound.
Using this convention, any element greater than this is not considered to be an upper bound.
## Also see
• Results about suprema can be found here.
## Linguistic Note
The plural of supremum is suprema, although the (incorrect) form supremums can occasionally be found if you look hard enough.
https://math.stackexchange.com/questions/979660/largest-n-vertex-polyhedron-that-fits-into-a-unit-sphere?noredirect=1

# Largest $n$-vertex polyhedron that fits into a unit sphere
In two dimensions, it is not hard to see that the $n$-vertex polygon of maximum area that fits into a unit circle is the regular $n$-gon whose vertices lie on the circle: For any other vertex configuration, it is always possible to shift a point in a way that increases the area.
In three dimensions, things are much less clear. What is the polyhedron with $n$ vertices of maximum volume that fits into a unit sphere? All vertices of such a polyhedron must lie on the surface of the sphere (if one of them does not, translate it outwards along the vector connecting it to the sphere's midpoint to get a polyhedron of larger volume), but now what? Not even that the polyhedron must be convex for every $n$ is immediately obvious to me.
• If the vertices are on the surface of the sphere the polyhedron will necessarily be convex - it will be the convex hull of the vertices. Because the sphere itself is convex the convex hull will lie entirely within it. Oct 18 '14 at 16:41
• @MarkBennet: Good point, that settles this part at least.
– user139000
Oct 18 '14 at 16:46
• I believe this is an open problem for $n > 8$. Oct 21 '14 at 6:57
• @achille hui: do you know solutions for n = 7, 8? One can check directly that cube is not even a local maximum, having in fact surprisingly poor performance. Oct 23 '14 at 19:50
• A stickler point about your proof for polygons: given that the space of such polygons is compact...
– Max
Oct 23 '14 at 20:42
This is supposed to be a comment but I would like to post a picture.
For any $m \ge 3$, we can put $m+2$ vertices on the unit sphere
$$( 0, 0, \pm 1) \quad\text{ and }\quad \left( \cos\frac{2\pi k}{m}, \sin\frac{2\pi k}{m}, 0 \right) \quad\text{ for }\quad 0 \le k < m$$
Their convex hull will be an $m$-gonal bipyramid, which appears below.
To my knowledge, the largest $n$-vertex polyhedron inside a sphere is known only up to $n = 8$.
• $n = 4$, a tetrahedron.
• $n = 5$, a triangular bipyramid.
• $n = 6$, an octahedron = a square bipyramid
• $n = 7$, a pentagonal bipyramid.
• $n = 8$, it is neither the cube ( volume: $\frac{8}{3\sqrt{3}} \approx 1.53960$ ) nor the hexagonal bipyramid ( volume: $\sqrt{3} \approx 1.73205$ ). Instead, it has volume $\sqrt{\frac{475+29\sqrt{145}}{250}} \approx 1.815716104224$.
Let $\phi = \cos^{-1}\sqrt{\frac{15+\sqrt{145}}{40}}$; one possible set of vertices is given below: $$( \pm \sin3\phi, 0, +\cos3\phi ),\;\; ( \pm\sin\phi, 0,+\cos\phi ),\\ (0, \pm\sin3\phi, -\cos3\phi),\;\; ( 0, \pm\sin\phi, -\cos\phi).$$ For this set of vertices, the polyhedron is the convex hull of two polylines, one in the $xz$-plane and the other in the $yz$-plane. Following is a figure of this polyhedron; the red/green/blue arrows are the $x/y/z$-axes respectively.
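The volume comparisons in the $n = 8$ bullet are straightforward to reproduce (an editorial sketch; the bipyramid formula comes from two height-1 cones over a regular $m$-gon of area $\frac{m}{2}\sin\frac{2\pi}{m}$):

```python
import math

# An m-gonal bipyramid with apexes (0, 0, +/-1) and equatorial vertices on the
# unit circle is two height-1 cones over a regular m-gon of area (m/2)sin(2*pi/m),
# so its volume is V = (m/3) * sin(2*pi/m).
def bipyramid_volume(m):
    return (m / 3) * math.sin(2 * math.pi / m)

cube = 8 / (3 * math.sqrt(3))               # largest cube inscribed in the sphere
hexagonal = bipyramid_volume(6)             # equals sqrt(3)
optimal_8 = math.sqrt((475 + 29 * math.sqrt(145)) / 250)   # Berman-Hanes value

print(round(cube, 5), round(hexagonal, 5), round(optimal_8, 5))  # 1.5396 1.73205 1.81572
assert cube < hexagonal < optimal_8         # the ordering claimed above
```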
For $n \le 8$, above configurations are known to be optimal. A proof can be found in the paper
Joel D. Berman, Kit Hanes, Volumes of polyhedra inscribed in the unit sphere in $E^3$
Mathematische Annalen 1970, Volume 188, Issue 1, pp 78-84
An online copy of the paper is viewable here (you need to scroll to image 84/page 78 on first visit).
For $n \le 130$, a good source of close to optimal configurations can be found under N.J.A. Sloane's web page on Maximal Volume Spherical Codes. It contains the best known configuration at least up to year 1994. For example, you can find an alternate set of coordinates for the $n = 8$ case from the maxvol3.8 files under the link to library of 3-d arrangements there.
• That's plenty of information, and I'm willing to give you the bounty since you have answered my question ("Open problem for $n\ge 9$") but I'd like some more information for the $n=8$ case if possible. The cube not being optimal is already surprising since one might expect the 2D argument to somehow transfer to regular polyhedra, but where do the coordinates come from? You say that $n=8$ is known exactly but the coords just look like the result of a numerical optimization run. I'd expect there to be exact polar or cartesian coordinates or at least some formal description of the polyhedron.
– user139000
Oct 25 '14 at 7:44
• That the $n=7$ solution is a pentagonal bipyramid also mildly surprised me. In my mental picture of it, the concentration of vertices in the plane spanned by the pyramids' shared base seems a little too high for the configuration to be optimal. But 3D is hard to imagine accurately, of course...
– user139000
Oct 25 '14 at 7:48
• @pew look at Berman and Hanes paper (linked in updated answer) for a proof. Oct 25 '14 at 9:16
https://math.stackexchange.com/questions/2762958/find-the-integral-int-fracdx-sqrtx2-a2

# Find the integral $\int\frac{dx}{\sqrt{x^2-a^2}}$
Evaluate the integral of $\frac{1}{\sqrt{x^2-a^2}}$
Put $x=a\sec\theta\implies dx=a\sec\theta\tan\theta d\theta$ \begin{align} \int\frac{dx}{\sqrt{x^2-a^2}}&=\int\frac{a\sec\theta\tan\theta d\theta}{\sqrt{a^2\sec^2\theta-a^2}}=\int\frac{a\sec\theta\tan\theta d\theta}{a\sqrt{\tan^2\theta}}=\int\frac{a\sec\theta\tan\theta d\theta}{a\color{red}{\tan\theta}}\\&=\int\sec\theta d\theta=\log|\sec\theta+\tan\theta|+C=\log|\frac{x}{a}+\sqrt{\frac{x^2}{a^2}-1}|\\&=\log|\frac{x+\sqrt{x^2-a^2}}{a}|+C=\log|{x+\sqrt{x^2-a^2}}|-\log|a|+C\\&=\log|{x+\sqrt{x^2-a^2}}|+C \end{align} This is how it is solved in my reference. But, $\sqrt{\tan^2\theta}=|\tan\theta|$ right? Then, does that imply $$\int\frac{dx}{\sqrt{x^2-a^2}}=\int\frac{a\sec\theta\tan\theta d\theta}{a\color{red}{|\tan\theta|}}=\color{red}{\pm}\int\sec\theta d\theta$$ Why am I getting this confusion and is the first solution complete?
• @TrostAft how can i say $\tan\theta$ is $+$ve from $x^2-a^2>0$ ? – ss1729 May 2 '18 at 7:20
• No first solution is not complete, they silently slipped in the || inside. You know, $\int \sec t = \ln(\sec t + \tan t)$ is valid for $\sec t > 0$ and for $\sec t \lt 0$ the integrand is $\ln(-\sec t - \tan t)$ so the solution $\ln |\sec t + \tan t|$ combines both. – jonsno May 2 '18 at 7:35
• Check by differentiating your solution(s). – Yves Daoust May 2 '18 at 7:35
• Agree with @samjoe. My comment is untrue. – TrostAft May 2 '18 at 7:35
• @samjoe I'm sorry, I don't understand how your point helps me with the doubt in the OP. Could you please explain a bit more? – ss1729 May 2 '18 at 7:57
Suppose that $a>0$.
The work is just for the case when $x>a$. The case for $x<-a$ is different, but the finals result is the same.
Let $x=a\sec\theta$, where $\theta\in[0,\frac{\pi}{2})\cup(\frac{\pi}{2},\pi]$. This is the domain of $\textrm{arcsec}$.
For $x< -a$, $\theta\in(\frac{\pi}{2},\pi]$ and so $\tan\theta\le0$.
$$\sqrt{x^2-a^2}=\sqrt{a^2\tan^2\theta}=-a\tan\theta$$
\begin{align*} \int\frac{dx}{\sqrt{x^2-a^2}}&=\int\frac{a\sec\theta\tan\theta}{-a\tan\theta}d \theta\\ &=-\int\sec\theta d\theta\\ &=-\ln|\sec\theta+\tan\theta|+C\\ &=\ln|\sec\theta-\tan\theta|+C\\ &=\ln\left|\frac{x}{a}-\frac{-\sqrt{x^2-a^2}}{a}\right|+C\\ &=\ln\left|x+\sqrt{x^2-a^2}\right|-\ln|a|+C \end{align*}
There are two minus signs and they cancel each other to reach the final result.
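Both branches can be checked by differentiating the antiderivative numerically (an editorial sketch; $a = 3$ is an arbitrary choice):

```python
import math

# Numerically check that d/dx ln|x + sqrt(x^2 - a^2)| = 1/sqrt(x^2 - a^2)
# on BOTH branches x > a and x < -a (a = 3 is an arbitrary choice).
a = 3.0

def F(x):                                   # the claimed antiderivative
    return math.log(abs(x + math.sqrt(x * x - a * a)))

def integrand(x):
    return 1 / math.sqrt(x * x - a * a)

h = 1e-6
for x in [4.0, 7.5, -4.0, -7.5]:
    central_diff = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central_diff - integrand(x)) < 1e-5
```

The assertions pass on negative arguments too, which is the point of the two cancelling minus signs.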
• I don't understand how you conclude $\theta \in (\pi/2, \pi]$ because that is the key to the question. – jonsno May 2 '18 at 8:17
• I mean why it can't be $\theta \in (\pi, 3\pi/2)$ where sec is negative but tan is positive – jonsno May 2 '18 at 8:18
• Yes thats what we have to show that tan cannot be positive. – jonsno May 2 '18 at 8:20
• If we take $\theta\in(\pi,3\pi/2)$, then $\tan\theta>0$. The integral is $\int\sec\theta d\theta$. The final answer will be still $\ln|x+\sqrt{x^2-a^2}|-\ln|a|+C$. My point is that we can have $-\int\sec\theta d\theta$ in the work. But then we will have one more minus sign and obtain the same final answer. – CY Aries May 2 '18 at 8:26
• We either take a range so that the work holds for both $x>a$ and $x<-a$, or take a range so that in one case we will have two minus signs to cancel each other. – CY Aries May 2 '18 at 8:29
https://math.stackexchange.com/questions/2120928/prove-that-the-square-root-of-a-positive-integer-is-either-an-integer-or-irratio

# Prove that the square root of a positive integer is either an integer or irrational
Is my proof that the square root of a positive integer is either an integer or an irrational number correct?
The proof goes like this:
Suppose an arbitrary number n, where n is non-negative. If $\sqrt{n}$ is an integer, then $\sqrt{n}$ must be rational. Since $\sqrt{n}$ is an integer, we can conclude that n is a square number, that is, $n = a^2$ for some integer a. Therefore, if n is a square number, then $\sqrt{n}$ is rational.
Suppose now that n is not a square number; we want to show that the square root of any non-square number is irrational.
We prove by contradiction. That is, we suppose that the square root of a non-square number is rational. So $\sqrt{n} = \frac{a}{b}$, where $a,b \in Z^+, b \neq 0$. We also suppose that $a \neq 0$, otherwise $\frac ab = 0$, and n would be a square number, which is rational.
Hence $n = \frac {a^2}{b^2}$, so $nb^2 = a^2$.
Suppose $b=1$. Then $\sqrt n = a$ , which shows that n is a square number. So $b \neq 1$. Since $\sqrt n > 1$, then $a>b>1$.
By the unique factorization of integers theorem, every positive integer greater than $1$ can be expressed as the product of its primes. Therefore, we can write $a$ as a product of primes and for every prime number that exists in $a$, there will be an even number of primes in $a^2$. Similarly, we can express $b$ as a product of primes and for every prime number that exists in $b$, there will be an even number of primes in $b^2$.
However, we can also express $n$ as a product of primes. Since $n$ is not a square number, then there exist at least one prime number that has an odd number of primes. Therefore, there exists at least one prime in the product of $nb^2$ that has an odd number of primes. Since $nb^2=a^2$ , then this contradicts the fact that there is an even number of primes in $a^2$ since a number can neither be even and odd.
Therefore, this contradicts the fact that $\sqrt n$ is rational. Therefore, $\sqrt n$ must be irrational.
Is this sufficient? Or is there any parts I did not explain well?
• I think its correct and very well explained. – Shobhit Jan 30 '17 at 14:25
• It's a bit wordy, but logically you've got a solid proof, the even-ness of the powers of each prime is exactly what you're going for. – Adam Hughes Jan 30 '17 at 14:26
• The conclusion is not precise enough. Instead of "contradicts there is an even number of primes in $a^2$" we want to say that it contradicts the fact that the prime $p$ occurs to odd power in the unique factorization of $nb^2,\,$ but even power in $a^2,\,$ i.e. we are comparing the parity of the count of single prime, not the total of all primes – Bill Dubuque Jan 30 '17 at 14:28
• For example the argument shows that $\,\sqrt{3\cdot 5}\,$ is irrational because $\,3\,$ occurs to odd power in $\,3\cdot 5,\,$ but the total number of primes in $\,3\cdot 5\,$ is even. – Bill Dubuque Jan 30 '17 at 14:37
• On a side note, this result is known as Theaetetus' Theorem, and it's proven in Euclid's Elements here: aleph0.clarku.edu/~djoyce/elements/bookX/propX9.html "[S]quares which do not have to one another the ratio which a square number has to a square number also do not have their sides commensurable in length either." – Keshav Srinivasan Jan 30 '17 at 14:41
Your proof is very good and stated well. I think it can be made shorter and tighter with a little less exposition of the obvious. However, I would prefer students to err on the side of more rather than less so I can't chide you for being thorough. But if you want a critique:
"Suppose an arbitrary number n, where n is non-negative. If $\sqrt{n}$ is an integer, then $\sqrt{n}$ must be rational. Since $\sqrt{n}$ is an integer, we can conclude that n is a square number, that is, $n = a^2$ for some integer a. Therefore, if n is a square number, then $\sqrt{n}$ is rational."
Suppose now that n is not a square number, we want to show that the square root of any non-square number is irrational.
This can all be said more simply and to argue that if $\sqrt{n}$ is an integer we can conclude $\sqrt{n}$ is rational or that $n$ is therefore a perfect square, is a little heavy handed. Those are definitions and go without saying. However, it shows good insight and understanding to be aware one can assume things and all claims need justification so I can't really call this "wrong".
But it'd be enough to say: "If $n$ is a perfect square then $\sqrt{n}$ is an integer and therefore rational, so it suffices to prove that if $n$ is not a perfect square, then $\sqrt{n}$ is irrational."
We prove by contradiction. That is, we suppose that the square root of any non-square number is rational. So $\sqrt{n} = \frac ab$, where $a, b \in \mathbb Z^+, b \neq 0$. We also suppose that $a \neq 0$, otherwise $\frac ab = 0$, and $n$ will be a square number, which is rational.
Terminologically, to say "$n$ is a square number" is to mean $n$ is the square of an integer. If $n = (\frac ab)^2$ we don't usually refer to $n$ as a square (although it is "a square of a rational"). We'd never call $13$ a square because $13 = (\sqrt{13})^2$.
Also you don't make the usual specification that $a$ and $b$ have no common factors. As it turns out you didn't need to but it is a standard.
Suppose $b=1$. Then $\sqrt n = a$, which shows that n is a square number. So $b \neq 1$. Since $\sqrt n > 1$, then $a>b>1$.
This was redundant as $b=1 \implies$ $a/b$ is an integer and we are assuming that $n$ is not a perfect square.
By the unique factorization of integers theorem, every positive integer greater than 1 can be expressed as the product of its primes. Therefore, we can write $a$ as a product of primes and for every prime number that exists in $a$, there will be an even number of primes in $a^2$. Similarly, we can express $b$ as a product of primes and for every prime number that exists in $b$, there will be an even number of primes in $b^2$.
Bill Dubuque in the comments noted what you meant to say was "each prime factor will be raised to any even power".
However, we can also express $n$ as a product of primes. Since $n$ is not a square number, then there exist at least one prime number that has an odd number of primes. Therefore, there exists at least one prime in the product of $nb^2$ that has an odd number of primes. Since $nb^2=a^2$, then this contradicts the fact that there is an even number of primes in $a^2$ since a number can neither be even and odd.
Ditto:
Overall I think your proof is very good.
But I should point out there is a simpler one:
Assume $n = \frac {a^2}{b^2}$ where $a,b$ are positive integers with no common factors (other than 1). If $p$ is a prime factor of $b$ and $n$ is an integer, it follows that $p$ is a prime factor of $a^2$ and therefore of $a$. But that contradicts $a$ and $b$ having no common factors. So $b$ can not have any prime factors. But the only positive integer without prime factors is $1$, so $b = 1$ and $n= a^2$, so $\sqrt{n} = a$. So for any positive integer $n$, either $n$ is a perfect square with an integer square root, or $n$ does not have a rational square root.
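This shorter argument is easy to sanity-check numerically. The sketch below (not part of either proof; the function names are mine) uses `math.isqrt` to detect perfect squares, and `fractions.Fraction` to keep $a/b$ in lowest terms, so $(a/b)^2$ can be an integer only when the reduced denominator is $1$:

```python
from fractions import Fraction
from math import isqrt

def has_rational_sqrt(n):
    """n has a rational square root iff it is a perfect square."""
    r = isqrt(n)          # floor of the real square root
    return r * r == n

def square_is_integer(a, b):
    """Is (a/b)^2 an integer?  Fraction reduces a/b to lowest terms,
    so the square is an integer exactly when the denominator is 1."""
    return (Fraction(a, b) ** 2).denominator == 1

assert has_rational_sqrt(49) and isqrt(49) == 7
assert not has_rational_sqrt(13)

# No fraction that is genuinely non-integer in lowest terms squares to an integer:
assert all(not square_is_integer(a, b)
           for b in range(2, 30)
           for a in range(1, 100)
           if Fraction(a, b).denominator > 1)
```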
And a slight caveat: I'm assuming that your class or text is assuming that all non-negative real numbers have square roots (and therefore if there is no rational square root, the square root must be irrational). It's worth pointing out that it is a result of real analysis that speaking of a square root actually makes any sense, and that we can claim every positive real number actually does have a square root. But that's probably beyond the range of this exercise.
But if I want to be completely accurate, you (and I) have actually only proven that positive integer $n$ either has an integer square root or it has no rational square root at all. Which is the same thing as saying if positive integer $n$ has a square root, the root is either integer or irrational. But we have not actually proven that positive integer $n$ actually has any square root at all.
• Thank you for the critique! It definitely helped me a lot! – Icycarus Jan 31 '17 at 11:00
• Can you explain or state a case where we cannot find the square root of a positive integer $n$ (as you state in the last line)? – Hungry Blue Dev Apr 15 '17 at 12:56
• I didn't say there are positive integers without square roots. (There aren't.) I said we haven't proven that the square root of any integer exists. And we haven't. To prove that we have to prove that $K = \{q \in \mathbb Q| q^2 < n\}$ is not empty and bounded above, and that if $z = \sup K$ then $z^2 = n$. All we have proven so far is that either there is an integer so that $m^2 = n$ or there is no rational $q$ so that $q^2 = n$. We have not proven that there is an irrational $z$ so that $z^2 = n$. – fleablood Apr 15 '17 at 15:54
• If $n$ can not be written as $\frac {a^2}{b^2}$ then, by definition, $n$ does not have a rational square root. If $n$ has a rational square root then that rational square root can be written as $\frac ab$. That is what rational means. And if the square root is $\frac ab$ then $n = (\frac ab)^2 =\frac {a^2}{b^2}$. That is what square root means. So if $n$ can't be written that way then it does not have a rational square root. For example: $3$ can not be written that way and does not have a rational square root. Neither does ANY other non-square. – fleablood Jan 21 at 22:31
• This is a prove that IF a square root is rational then it must be an integer. We are not concerned about irrational square root (not even to the extent of questioning whether they exist). And IF a square root is rational it can be written as $\frac ab$. Which means $n$ can be written as $\frac {a^2}{b^2}$. If it can't be, then it does not have a rational square root. – fleablood Jan 21 at 22:35
We know that $\sqrt{4} = 2$ and $\sqrt{2} = 1.414...$ are rational and irrational respectively, so all we have to do is to show that if $n\in \mathbb{Z}^+$ is such that $\sqrt{n} = \dfrac{a}{b}$, where $a$ and $b$ are positive integers and the expression $\dfrac{a}{b}$ is in its simplest form, then $\sqrt{n}$ is integral. Squaring both sides of the expression we get that $n = \dfrac{a^2}{b^2}$; since $a$ and $b$ have no common factors other than $1$, then $a^2 = n$ and $b^2 = 1$, therefore $b = 1$; hence if $\sqrt{n}$ is rational, it is an integer.
• How is that a critique of the OP's proof? – fleablood Jan 30 '17 at 18:51
• The op shows (correctly) that the square root of a non-square is irrational. There is no need to show that rational roots are integers. Assuming n is not square makes perfect sense and the case for square n's has been covered. Your objections are not valid. Meanwhile your post is just ... weird. What do $\sqrt 4$ and $\sqrt 2$ have to do with anything? And your statement that $n = a^2/b^2$ implies $a^2 = n$ and $b^2 =1$ is said without justification when the justification is the entire point. And you are merely pointing out an easier proof. Which is not a valid critique. – fleablood Jan 30 '17 at 22:57
http://en.wikipedia.org/wiki/Bertrand's_box_paradox

# Bertrand's box paradox

Bertrand's box paradox is a classic paradox of elementary probability theory. It was first posed by Joseph Bertrand in his Calcul des probabilités, published in 1889.
There are three boxes:
1. a box containing two gold coins,
2. a box containing two silver coins,
3. a box containing one gold coin and one silver coin.
After choosing a box at random and withdrawing one coin at random, if that happens to be a gold coin, it may seem that the probability that the remaining coin is gold is 1/2; in fact, the probability is actually 2/3. Two problems that are very similar are the Monty Hall problem and the Three Prisoners problem.
These simple but slightly counterintuitive puzzles are used as a standard example in teaching probability theory. Their solution illustrates some basic principles, including the Kolmogorov axioms.
## Box version
There are three boxes, each with one drawer on each of two sides. Each drawer contains a coin. One box has a gold coin on each side (GG), one a silver coin on each side (SS), and the other a gold coin on one side and a silver coin on the other (GS). A box is chosen at random, a random drawer is opened, and a gold coin is found inside it. What is the chance of the coin on the other side being gold?
The following reasoning appears to give a probability of 1/2:
• Originally, all three boxes were equally likely to be chosen.
• The chosen box cannot be box SS.
• So it must be box GG or GS.
• The two remaining possibilities are equally likely. So the probability that the box is GG, and the other coin is also gold, is 1/2.
The flaw is in the last step. While those two cases were originally equally likely, the fact that you are certain to find a gold coin if you had chosen the GG box, but are only 50% sure of finding a gold coin if you had chosen the GS box, means they are no longer equally likely given that you have found a gold coin. Specifically:
• The probability that GG would produce a gold coin is 1.
• The probability that SS would produce a gold coin is 0.
• The probability that GS would produce a gold coin is 1/2.
Initially GG, SS and GS are equally likely. Therefore by Bayes rule the conditional probability that the chosen box is GG, given we have observed a gold coin, is:
$\frac { \mathrm{P}(see\ gold \mid GG)} { \mathrm{P}(see\ gold \mid GG)+\mathrm{P}(see\ gold \mid SS)+\mathrm{P}(see\ gold \mid GS) } =\frac{1}{1+0+1/2}= \frac{2}{3}$
The correct answer of 2/3 can also be obtained as follows:
• Originally, all six coins were equally likely to be chosen.
• The chosen coin cannot be from drawer S of box GS, or from either drawer of box SS.
• So it must come from the G drawer of box GS, or either drawer of box GG.
• The three remaining possibilities are equally likely, so the probability that the drawer is from box GG is 2/3.
Alternatively, one can simply note that the chosen box has two coins of the same type 2/3 of the time. So, regardless of what kind of coin is in the chosen drawer, the box has two coins of that type 2/3 of the time. In other words, the problem is equivalent to asking the question "What is the probability that I will pick a box with two coins of the same color?".
Bertrand's point in constructing this example was to show that merely counting cases is not always proper. Instead, one should sum the probabilities that the cases would produce the observed result; and the two methods are equivalent only if this probability is either 1 or 0 in every case. This condition is correctly applied in the second solution method, but not in the first.
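The weighting Bertrand describes can be carried out exactly with rational arithmetic. Here is an illustrative sketch (the variable names are my own, not from the original text):

```python
from fractions import Fraction

prior = Fraction(1, 3)  # each box is equally likely a priori
# probability that each box produces a gold coin on a random draw
p_gold = {"GG": Fraction(1), "SS": Fraction(0), "GS": Fraction(1, 2)}

# posterior probability of GG given a gold coin was observed (Bayes' rule):
# sum the probabilities that each case would produce the observed result
evidence = sum(prior * p for p in p_gold.values())
posterior_GG = prior * p_gold["GG"] / evidence

assert posterior_GG == Fraction(2, 3)  # not 1/2
```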
## The paradox as stated by Bertrand
It can be easier to understand the correct answer if you consider the paradox as Bertrand originally described it. After a box has been chosen, but before a box is opened to let you observe a coin, the probability is 2/3 that the box has two of the same kind of coin. If the probability of "observing a gold coin" in combination with "the box has two of the same kind of coin" is 1/2, then the probability of "observing a silver coin" in combination with "the box has two of the same kind of coin" must also be 1/2. And if the probability that the box has two like coins changes to 1/2 no matter what kind of coin is shown, the probability would have to be 1/2 even if you hadn't observed a coin this way. Since we know this probability is 2/3, not 1/2, we have an apparent paradox. It can be resolved only by recognizing how the combination of "observing a gold coin" with each possible box can only affect the probability that the box was GS or SS, but not GG.
## Card version
Suppose there are three cards:
• A black card that is black on both sides,
• A white card that is white on both sides, and
• A mixed card that is black on one side and white on the other.
All the cards are placed into a hat and one is pulled at random and placed on a table. The side facing up is black. What are the odds that the other side is also black?
The answer is that the other side is black with probability 2/3. However, common intuition suggests a probability of 1/2 either because there are two cards with black on them that this card could be, or because there are 3 white and 3 black sides and many people forget to eliminate the possibility of the "white card" in this situation (i.e. the card they flipped CANNOT be the "white card" because a black side was turned over).
In a survey of 53 Psychology freshmen taking an introductory probability course, 35 incorrectly responded 1/2; only 3 students correctly responded 2/3.[1]
Another presentation of the problem is to say: pick a random card out of the three, what are the odds that it has the same color on the other side? Since only one card is mixed and two have the same color on both sides, it is easier to understand that the probability is 2/3. Also note that saying that the color is black (or the coin is gold) instead of white doesn't matter, since the situation is symmetric: the answer is the same for white. So is the answer for the generic question 'same color on both sides'.
## Preliminaries
To solve the problem, either formally or informally, one must assign probabilities to the events of drawing each of the six faces of the three cards. These probabilities could conceivably be very different; perhaps the white card is larger than the black card, or the black side of the mixed card is heavier than the white side. The statement of the question does not explicitly address these concerns. The only constraints implied by the Kolmogorov axioms are that the probabilities are all non-negative, and they sum to 1.
The custom in problems when one literally pulls objects from a hat is to assume that all the drawing probabilities are equal. This forces the probability of drawing each side to be 1/6, and so the probability of drawing a given card is 1/3. In particular, the probability of drawing the double-white card is 1/3, and the probability of drawing a different card is 2/3.
In the question, however, one has already selected a card from the hat and it shows a black face. At first glance it appears that there is a 50/50 chance (i.e. probability 1/2) that the other side of the card is black, since there are two cards it might be: the black and the mixed. However, this reasoning fails to exploit all of the information; one knows not only that the card on the table has at least one black face, but also that in the population it was selected from, only 1 of the 3 black faces was on the mixed card.
An easy explanation is this: name the black sides x, y and z, where x and y are on the same card while z is on the mixed card. The probability mass is divided equally among the 3 black sides, 1/3 each; thus the probability that we chose either x or y is the sum of their probabilities, that is 2/3.
## Solutions

### Intuition
Intuition tells one that one is choosing a card at random. However, one is actually choosing a face at random. There are 6 faces, of which 3 faces are white and 3 faces are black. Two of the 3 black faces belong to the same card. The chance of choosing one of those 2 faces is 2/3. Therefore, the chance of flipping the card over and finding another black face is also 2/3. Another way of thinking about it is that the problem is not about the chance that the other side is black, it's about the chance that you drew the all-black card. If you drew a black face, then it is twice as likely that the face belongs to the black card as to the mixed card.
Alternately, it can be seen as a bet not on a particular color, but a bet that the sides match. Betting on a particular color, regardless of the face shown, will always have a chance of 1/2. However, betting that the sides match has a chance of 2/3, because 2 cards match and 1 does not.
### Labels
One solution method is to label the card faces, for example numbers 1 through 6.[2] Label the faces of the black card 1 and 2; label the faces of the mixed card 3 (black) and 4 (white); and label the faces of the white card 5 and 6. The observed black face could be 1, 2, or 3, all equally likely; if it is 1 or 2, the other side is black, and if it is 3, the other side is white. The probability that the other side is black is 2/3. This probability can be derived in the following manner: let the random variable B denote drawing a black face (i.e. a success, since the black face is what we are looking for). Using Kolmogorov's axiom that all probabilities must sum to 1, we can conclude that the probability of drawing a white face is 1-P(B). Since P(B)=P(1)+P(2), we have P(B)=1/3+1/3=2/3. Likewise, P(white face)=1-2/3=1/3.
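The labelling argument can also be checked by brute-force enumeration (a sketch; the face numbering follows the paragraph above):

```python
from fractions import Fraction

# face -> (side shown, hidden side); faces 1,2 = black card,
# 3 = black side of the mixed card, 4 = its white side, 5,6 = white card
faces = {1: ("B", "B"), 2: ("B", "B"),
         3: ("B", "W"), 4: ("W", "B"),
         5: ("W", "W"), 6: ("W", "W")}

shown_black = [f for f, (up, _) in faces.items() if up == "B"]
hidden_black = [f for f in shown_black if faces[f][1] == "B"]

# each face is equally likely, so condition by simple counting
p = Fraction(len(hidden_black), len(shown_black))
assert p == Fraction(2, 3)
```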
### Bayes' theorem
Given that the shown face is black, the other face is black if and only if the card is the black card. If the black card is drawn, a black face is shown with probability 1. The total probability of seeing a black face is 1/2; the total probability of drawing the black card is 1/3. By Bayes' theorem, the conditional probability of having drawn the black card, given that a black face is showing, is
$\frac{1\cdot1/3}{1/2}=2/3.$
It can be more intuitive to present this argument using Bayes' rule rather than Bayes' theorem[3]. Having seen a black face we can rule out the white card. We are interested in the probability that the card is black given a black face is showing. Initially it is equally likely that the card is black and that it is mixed: the prior odds are 1:1. Given that it is black we are certain to see a black face, but given that it is mixed we are only 50% certain to see a black face. The ratio of these probabilities, called the likelihood ratio or Bayes factor, is 2:1. Bayes' rule says "posterior odds equals prior odds times likelihood ratio". Since the prior odds are 1:1 the posterior odds equals the likelihood ratio, 2:1. It is now twice as likely that the card is black than that it is mixed.
### Eliminating the white card
Although the incorrect solution reasons that the white card is eliminated, one can also use that information in a correct solution. Modifying the previous method, given that the white card is not drawn, the probability of seeing a black face is 3/4, and the probability of drawing the black card is 1/2. The conditional probability of having drawn the black card, given that a black face is showing, is
$\frac{1/2}{3/4}=2/3.$
### Symmetry
The probability (without considering the individual colors) that the hidden color is the same as the displayed color is clearly 2/3, as this holds if and only if the chosen card is black or white, which chooses 2 of the 3 cards. Symmetry suggests that the probability is independent of the color chosen, so that the information about which color is shown does not affect the odds that both sides have the same color.
This argument is correct and can be formalized as follows. By the law of total probability, the probability that the hidden color is the same as the displayed color equals the weighted average of the probabilities that the hidden color is the same as the displayed color, given that the displayed color is black or white respectively (the weights are the probabilities of seeing black and white respectively). By symmetry, the two conditional probabilities that the colours are the same given we see black and given we see white are the same. Since they moreover average out to 2/3 they must both be equal to 2/3.
### Experiment
Using specially constructed cards, the choice can be tested a number of times. Let "B" denote the colour Black. By constructing a fraction with the denominator being the number of times "B" is on top, and the numerator being the number of times both sides are "B", the experimenter will probably find the ratio to be near 2/3.
Note the logical fact that the B/B card contributes significantly more (in fact twice as much) to the number of times "B" is on top. With the card B/W there is always a 50% chance of W being on top; thus in 50% of the cases the card B/W is drawn, the draw affects neither numerator nor denominator and effectively does not count (this is also true for all times W/W is drawn, so that card might as well be removed from the set altogether). In conclusion, the cards B/B and B/W do not have equal chances, because in 50% of the cases in which B/W is drawn, this card is simply "disqualified".
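In place of specially constructed cards, the experiment is easy to simulate (a sketch; the trial count and random seed are arbitrary choices of mine):

```python
import random

cards = [("B", "B"), ("W", "W"), ("B", "W")]
rng = random.Random(1)

black_up = both_black = 0
for _ in range(200_000):
    card = rng.choice(cards)
    # put a random side face up
    up, down = card if rng.random() < 0.5 else (card[1], card[0])
    if up == "B":
        black_up += 1              # denominator: times "B" is on top
        both_black += down == "B"  # numerator: times both sides are "B"

print(both_black / black_up)       # close to 2/3, not 1/2
```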
## Notes and references
1. ^ Bar-Hillel and Falk (page 119)
2. ^ Nickerson (page 158) advocates this solution as "less confusing" than other methods.
3. ^ Bar-Hillel and Falk (page 120) advocate using Bayes' Rule.
https://math.stackexchange.com/questions/2918063/are-most-sets-in-mathbb-r-neither-open-nor-closed

# Are “most” sets in $\mathbb R$ neither open nor closed?
It seems intuitive to believe that most subsets of $\mathbb R$ are neither open nor closed.
For instance, if we consider the collection of all (open, closed, half-closed/open) intervals, then one can probably make precise the notion that "half" of all intervals in this collection are neither open nor closed. (Whether this will amount to a reasonable definition of what it means for most subsets to be neither open nor closed might be up for debate.)
If this intuition is correct, is there a way to formalise it? If not, how would we formalise its being wrong?
To be clear, I am happy for a fairly broad interpretation of the term "most". Natural interpretations include but are not limited to:
1. Measure-theoretic (e.g. is there a natural measure on (a $\sigma$-algebra on) the power set of $\mathbb R$ that assigns negligible measure to $\tau$?)
2. Topological (e.g. is there a natural topology on the power set of $\mathbb R$ where $\tau$ is meagre, or even nowhere dense?)
3. Set-theoretic (e.g. does the power set of $\mathbb R$ have larger cardinality than $\tau$?)
Here, $\tau$ is (obviously) the Euclidean topology.
Actually, that last version of the question in parentheses might have the easiest answer: Let $\mathcal B$ be the Borel sets on $\mathbb R$. We have that $|\tau| \le | \mathcal B | = | \mathbb R | < \left| 2^{\mathbb R} \right|$. (For details on the equality, see here. For a much simpler proof, see this answer.)
Are there alternative ways to make this precise?
• A trivial answer to 1. and 2. would be to define a topology on $P(\Bbb R),$ the power-set of $\Bbb R$, as $\{\emptyset, P(\Bbb R)\setminus \tau, P(\Bbb R)\}$ and a measure $m$ with $m(P(\Bbb R))=1$ and $m(\tau)=0.$ But I don't think this is quite what you're hoping for. – DanielWainfleet Sep 16 '18 at 1:12
• @DanielWainfleet Indeed. I was hoping the word "natural" would be enough to rule out such answers. Unless there are other reasons I'm not seeing that would make such a definition natural? – Theoretical Economist Sep 17 '18 at 22:43
Since each open non-empty subset of $\mathbb R$ can be written as a countable union of open intervals, and since the set of all open intervals has the same cardinality as $\mathbb R$, the set of all open subsets of $\mathbb R$ has the same cardinality as $\mathbb R$. And since there is a bijection between the open subsets of $\mathbb R$ and the closed ones, the set of all closed subsets of $\mathbb R$ also has the same cardinality as $\mathbb R$. So, in the set-theoretical sense, most subsets of $\mathbb R$ are neither closed nor open.
• To the proposer: The set of bounded open real intervals with rational end-points is a countable base (basis) for $\tau.$ For any topology $T$ with a countable base $B$ we have $|T|=|\{\cup C: C\in P(B)\}|\leq |P(B)|\leq 2^{\aleph_0}=|\Bbb R|.$.... So $|\tau|\leq |\Bbb R|.$ And we also have $|\tau|\geq |\{(r,r+1):r\in \Bbb R\}|=|\Bbb R|.$ – DanielWainfleet Sep 16 '18 at 1:27
https://yutsumura.com/is-the-trace-of-the-transposed-matrix-the-same-as-the-trace-of-the-matrix/

# Is the Trace of the Transposed Matrix the Same as the Trace of the Matrix?
## Problem 633
Let $A$ be an $n \times n$ matrix.
Is it true that $\tr ( A^\trans ) = \tr(A)$? If it is true, prove it. If not, give a counterexample.
## Solution.
The answer is true. Recall that the trace of a matrix is the sum of its diagonal entries. Also, note that the diagonal entries of the transposed matrix are the same as those of the original matrix.
Putting together these observations yields the equality $\tr ( A^\trans ) = \tr(A)$.
Here is the more formal proof.
For $A = (a_{i j})_{1 \leq i, j \leq n}$, the transpose $A^{\trans}= (b_{i j})_{1 \leq i, j \leq n}$ is defined by $b_{i j} = a_{j i}$.
In particular, notice that $b_{i i} = a_{i i}$ for $1 \leq i \leq n$. And so,
$\tr(A^{\trans}) = \sum_{i=1}^n b_{i i} = \sum_{i=1}^n a_{i i} = \tr(A) .$
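The identity is easy to spot-check numerically. Here is a sketch using plain nested lists (any matrix library would work equally well):

```python
import random

def trace(M):
    # sum of the diagonal entries M[i][i]
    return sum(M[i][i] for i in range(len(M)))

def transpose(M):
    # b_ij = a_ji, as in the proof above
    n = len(M)
    return [[M[j][i] for i in range(n)] for j in range(n)]

rng = random.Random(0)
for _ in range(100):
    n = rng.randint(1, 6)
    A = [[rng.randint(-9, 9) for _ in range(n)] for _ in range(n)]
    assert trace(transpose(A)) == trace(A)  # tr(A^T) = tr(A)
```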
https://brilliant.org/discussions/thread/componendo-et-dividendo-2/

# This note has been used to help create the Componendo and Dividendo wiki
See the complete wiki page here.
The method of Componendo et Dividendo allows a quick way to do some calculations, and can simplify the amount of expansion needed.
If $a, b, c$ and $d$ are numbers such that $b, d$ are non-zero and $\frac{a}{b} = \frac{c}{d}$, then
$\begin{array} {l r l } \text{1. Componendo:} & \frac{ a+b}{b} & = \frac{ c+d}{d}. \\ \text{2. Dividendo: } & \frac{ a-b}{b} & = \frac{ c-d} {d}. \\ \text{ Componendo et Dividendo: } & \\ \text{3. For } k \neq \frac{a}{b},& \frac{ a+kb}{a-kb} & = \frac{ c+kd}{c-kd} .\\ \text{4. For } k \neq \frac{-b}{d}, & \frac{ a}{b} & = \frac{ a + kc } { b + kd }. \\ \end{array}$
This can be proven directly by observing that
$\begin{array} {l r l } \text{ 1.} \frac{ a+b}{b} = \frac{ \frac{a}{b} + 1} {1} = \frac{ \frac{c}{d} + 1} {1} = \frac{ c+d}{d} . \\ \text{ 2.} \frac{ a-b}{b} = \frac{ \frac{a}{b} - 1} {1} = \frac{ \frac{c}{d} - 1} {1} = \frac{ c-d}{d} . \\ \text{ 3.} \frac{ a+kb}{a-kb} = \frac{ \frac{a}{b} + k } { \frac{a}{b} - k} = \frac{ \frac{c}{d} + k } { \frac{ c}{d} -k} = \frac{ c+kd} { c-kd} . \\ \text{ 4.} \frac{ a + kc} { b+ kd} = \frac{ a}{b} \times \frac{ 1 + k \frac{c}{a} } { 1 + k \frac{d}{b} } = \frac{ a}{b} . \end{array}$
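As a quick numeric sanity check of the four identities (a sketch in Python using exact rational arithmetic; the quadruple and the value of $k$ below are arbitrary choices satisfying the stated conditions):

```python
from fractions import Fraction as F

# Arbitrary quadruple with a/b = c/d (here c = 3a, d = 3b), and a k
# satisfying k != a/b and k != -b/d.
a, b, c, d = F(2), F(5), F(6), F(15)
k = F(4)

assert a / b == c / d
assert (a + b) / b == (c + d) / d                       # 1. Componendo
assert (a - b) / b == (c - d) / d                       # 2. Dividendo
assert (a + k*b) / (a - k*b) == (c + k*d) / (c - k*d)   # 3. Componendo et Dividendo
assert a / b == (a + k*c) / (b + k*d)                   # 4.
print("all four identities hold")
```

Using `Fraction` keeps every comparison exact, so the equalities are verified without any floating-point slack.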
## Worked examples
### 1. Show the converse, namely that if $a, b, c$ and $d$ are numbers such that $b, d, a-b, c-d$ are non-zero and $\frac{ a+b}{a-b} = \frac{c+d} { c-d}$, then $\frac{ a}{b} = \frac{c}{d}$.
Solution: We apply Componendo et Dividendo with $k=1$ (which is valid since $\frac{a+b}{a-b} \neq 1$ ), and get that $\frac{ 2a } { 2b} = \frac{ (a+b) + (a-b) } { (a+b) - (a-b) } = \frac{ (c+d) + (c-d) } { (c+d) - (c-d) } = \frac{ 2c} { 2d}.$
Note: The converse of Componendo and Dividendo also holds, and we can prove it by applying Dividendo and Componendo respectively.
### 2. Solve for $x$: $\frac{ x^3+1} { x+ 1} = \frac{ x^3-1} { x-1}$.
Solution: For the fractions to make sense, we must have $x \neq 1, -1$.
Cross multiplying, we get $\frac{ x^3+1}{x^3-1} = \frac{ x+1}{x-1}.$
Solution: Applying Componendo et Dividendo with $k=1$ (which is valid since $\frac{x+1}{x-1} \neq 1$ ), we get that $\frac{ 2x^3}{2} = \frac{ 2x}{2} \Rightarrow x(x^2-1) = 0$. However, since $x \neq 1, -1$, we have $x=0$ as the only solution.
Note: We also need to check the condition that the denominators are non-zero, but this is obvious.
Note by Calvin Lin
5 years, 5 months ago
if a/b =c/d what will be the result by componendo dividendo
- 5 years, 2 months ago
See the statements contained in the first box.
Staff - 5 years, 2 months ago
Sorry I'm slightly confused, could you clarify what Componendo and Dividendo integrate to?
- 3 years, 5 months ago
Check out the examples on the componendo and dividendo wiki page.
Staff - 3 years, 5 months ago
I found out (somewhat accidentally) that
$\frac{a+mb}{b+na} = \frac{c+md}{d+nc}$
is also true. It's quite easy to prove; it can also be derived from the 4th case stated above. It seems different enough, though, to be worth a mention, yet I never see it anywhere. Or is it perhaps that there are many other such corollaries and only the most basic ones are usually listed?
- 1 year, 7 months ago
Ohhh, that's a really nice identity. It is a generalization of Worked Example 1. Can you add it to the Componendo and Dividendo wiki under Problem Solving?
Like you said, it is essentially / can be derived from the 4th case, where we have $\frac{ a + mb } { c + md} = \frac{a}{c} = \frac{b}{d} = \frac{ b+na}{d + nc }$ (as long as the denominators are non-zero).
Staff - 1 year, 7 months ago
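For what it's worth, the identity is easy to spot-check with exact rational arithmetic (a sketch; the values of $a, b, c, d, m, n$ below are arbitrary, subject to $a/b = c/d$ and non-zero denominators):

```python
from fractions import Fraction as F

a, b, c, d = F(3), F(4), F(6), F(8)   # a/b = c/d = 3/4
m, n = F(5), F(7)

lhs = (a + m*b) / (b + n*a)   # (3 + 20) / (4 + 21)
rhs = (c + m*d) / (d + n*c)   # (6 + 40) / (8 + 42), same after cancelling c/a = 2
assert lhs == rhs == F(23, 25)
print(lhs)
```

The cancellation in the comment is exactly the mechanism Calvin describes: with $c=ka$ and $d=kb$, the factor $k$ scales numerator and denominator of the right side equally.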
Have finally had a chance to add the identity to the Wiki; please take a look when you can and let me know if I need to make any changes. (I accidentally first added it to the Theorem section before I remembered that you said Problem Solving, I did move it to the correct section after that, hope it didn't cause any problems.) I also added an example applying it a little further down, in the section that introduced using C&D with non-linear terms. The Wiki mentioned the terms could be polynomial or exponential; my example uses trig functions, I believe it's still valid but please take a look and let me know if there are any issues. Thanks.
- 1 year, 7 months ago
That's a great writeup. I like how you highlighted Method 1, which "by right" should be how people understand this identity, and "by left" it is placed in this wiki because of the form it took.
Yes, your example with a trigo substitution is valid. Good one. Thanks!
Staff - 1 year, 7 months ago | 2019-08-24T19:07:57 | {
"domain": "brilliant.org",
"url": "https://brilliant.org/discussions/thread/componendo-et-dividendo-2/",
"openwebmath_score": 0.9854892492294312,
"openwebmath_perplexity": 1087.8622986071887,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9901401446964613,
"lm_q2_score": 0.8652240947405564,
"lm_q1q2_score": 0.8566931103612793
} |
http://mathhelpforum.com/calculus/16873-parametric-equations-plane-problem.html | # Thread: Parametric equations of plane problem
1. ## Parametric equations of plane problem
This problem was confusing me so any help would be appreciated!
Write both the parametric and symmetric equations of the line of intersection of the planes with the equations 2x - y + z = 5 and x + y - z = 1.
2. Originally Posted by clockingly
This problem was confusing me so any help would be appreciated!
Write both the parametric and symmetric equations of the line of intersection of the planes with the equations 2x - y + z = 5 and x + y - z = 1.
If $z=0$ then:
$2x-y=5$
$x+y=1$
Thus, $x=2$ and $y=-1$
If $z=1$ then:
$2x-y=4$
$x+y=2$
Thus, $x=2$ and $y=0$
Thus, we have that this line must contain the points $(2,-1,0)\mbox{ and }(2,0,1)$
Thus, the Symmetric Equation is:
$\frac{x-2}{0} = \frac{y+1}{-1} = \frac{z-0}{-1}$
Note, there is a zero in the denominator of the first fraction; it is a useful notation I saw in a Russian textbook* which simply means $x=2$ constantly, so it is just a shorthand notation.
*)If you are interested the Textbook was written by Perelman! Not Perelman you think it is but his grandfather!
3. Originally Posted by ThePerfectHacker
Thus, the Symmetric Equation is:
$\frac{x-2}{0} = \frac{y+1}{-1} = \frac{z-0}{-1}$
Note, there is a zero in the denominator of the first fraction; it is a useful notation I saw in a Russian textbook* which simply means $x=2$ constantly, so it is just a shorthand notation.
*)If you are interested the Textbook was written by Perelman! Not Perelman you think it is but his grandfather!
Thanks for that TPH. i did this question the same as you did, but i had no idea what to do with that 0 in the denominator, it kind of freaked me out, so i decided not to post
4. Originally Posted by Jhevon
Thanks for that TPH. i did this question the same as you did, but i had no idea what to do with that 0 in the denominator, it kind of freaked me out, so i decided not to post
The way it is taught in American schools is that if that ever happens then the function does not have a "symmetric form" to it. It only has a parametric one. But Grandpa Perelman's notation is really useful, like I said.
5. Hello, clockingly!
Write both the parametric and symmetric equations of the line of intersection
of the planes with the equations: . $\begin{array}{ccc}2x - y + z &= &5 \\ x + y - z &= &1\end{array}$
This is how I was taught to find the intersection of two planes.
We have: . $\begin{array}{cccc}2x - y + z & = & 5 & {\color{blue}[1]}\\ x + y -z & = & 1 & {\color{blue}[2]}\end{array}$
Add [1] and [2]: . $3x \,=\,6\quad\Rightarrow\quad x\,=\,2$
Substitute into [2]: . $2 + y - z \:=\:1\quad\Rightarrow\quad y \:=\:z-1$
We have $x,\,y,\,z$ as a function of $z\!:\;\;\begin{Bmatrix}x & = & 2 \\ y & = & z-1 \\ z & = & z\end{Bmatrix}$
On the right side, replace $z$ with a parameter $t$.
. . . . $\begin{Bmatrix}x & = & 2 \\ y & = & t - 1 \\ z & = & t\end{Bmatrix}$ . . . . There!
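As a sanity check, the parametric line can be substituted back into both plane equations, and its direction vector recovered as the cross product of the two normals (a sketch in plain Python; nothing beyond the thread's own numbers is assumed):

```python
def on_planes(x, y, z):
    # Both plane equations from the problem statement
    return 2*x - y + z == 5 and x + y - z == 1

# Parametric line: x = 2, y = t - 1, z = t
assert all(on_planes(2, t - 1, t) for t in range(-10, 11))

# Direction vector as the cross product of the normals (2,-1,1) x (1,1,-1)
n1, n2 = (2, -1, 1), (1, 1, -1)
cross = (n1[1]*n2[2] - n1[2]*n2[1],
         n1[2]*n2[0] - n1[0]*n2[2],
         n1[0]*n2[1] - n1[1]*n2[0])
assert cross == (0, 3, 3)  # proportional to (0, 1, 1), matching y = t - 1, z = t
print("line lies in both planes")
```

The zero first component of the cross product is the same zero that ends up in the denominator of the symmetric form.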
6. Originally Posted by Soroban
Hello, clockingly!
This is how I was taught to find the intersection of two planes.
We have: . $\begin{array}{cccc}2x - y + z & = & 5 & {\color{blue}[1]}\\ x + y -z & = & 1 & {\color{blue}[2]}\end{array}$
Add [1] and [2]: . $3x \,=\,6\quad\Rightarrow\quad x\,=\,2$
Substitute into [2]: . $2 + y - z \:=\:1\quad\Rightarrow\quad y \:=\:z-1$
We have $x,\,y,\,z$ as a function of $z\!:\;\;\begin{Bmatrix}x & = & 2 \\ y & = & z-1 \\ z & = & z\end{Bmatrix}$
On the right side, replace $z$ with a parameter $t$.
. . . . $\begin{Bmatrix}x & = & 2 \\ y & = & t - 1 \\ z & = & t\end{Bmatrix}$ . . . . There!
nice method
7. The customary notation used in textbooks in North America for the symmetric form in which one direction number is zero is to use a semicolon: $\frac{{y + 1}}{{ - 1}} = \frac{z}{{ - 1}}\, ;\,x = 2$. | 2017-08-24T09:03:22 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/16873-parametric-equations-plane-problem.html",
"openwebmath_score": 0.8964012861251831,
"openwebmath_perplexity": 407.96947903877344,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9763105335255603,
"lm_q2_score": 0.8774767970940975,
"lm_q1q2_score": 0.8566898399272382
} |
https://math.stackexchange.com/questions/1148442/proving-fracnnen-1n-fracn1n1en-by-induction-for-all | # Proving $\frac{n^n}{e^{n-1}}<n!<\frac{(n+1)^{n+1}}{e^{n}}$ by induction for all $n> 2$.
I am trying to prove
$$\frac{n^n}{e^{n-1}}<n!<\frac{(n+1)^{n+1}}{e^{n}} \text{ for all }n > 2.$$
Here is the original source (Problem 1B, on page 12 of PDF)
Can this be proved by induction?
The base step $n=3$ is proved: $\frac {27}{e^2} < 6 < \frac{256}{e^3}$ (since $e^2 > 5$ and $e^3 < 27$, respectively).
I can assume the case for $n=k$ is true: $\frac{k^k}{e^{k-1}}<k!<\frac{(k+1)^{k+1}}{e^{k}}$.
For $n=k+1$, I am having trouble:
\begin{align} (k+1)!&=(k+1)k!\\&>(k+1)\frac{k^k}{e^{k-1}}\\&=e(k+1)\frac{k^k}{e^{k}} \end{align} Now, by graphing on a calculator, I found it true that $ek^k >(k+1)^k$ (which would complete the proof for the left inequality), but is there some way to prove this relation?
And for the other side of the inequality, I am also having some trouble: \begin{align} (k+1)!&=(k+1)k!\\&<(k+1)\frac{(k+1)^{k+1}}{e^{k}}\\&=\frac{(k+1)^{k+2}}{e^k}\\&<\frac{(k+2)^{k+2}}{e^k}. \end{align} I can't seem to obtain the $e^{k+1}$ in the denominator, needed to complete the induction proof.
• just curious, what level this exam is -to enter PhD program or some qual? – Alex Feb 15 '15 at 13:47
• @Alex According to the link provided in my question, it's UC Berkeley's "preliminary exam", which apparently their students must pass before they can continue their Ph.D program and eventually take their oral qualifying exam. – Cookie Feb 15 '15 at 17:52
• Also, there is a solution here math.berkeley.edu/sites/default/files/pages/… (page 12 of PDF), but the solution uses integration. I understand that method works, but I wanted to also prove this by induction. Induction must be possible to use here... – Cookie Feb 15 '15 at 17:55
• so it's a prelim to qual exam? – Alex Feb 16 '15 at 0:05
• It's a prelim required to stay in the Ph.D program. The student has two chances (or maybe three based on appeal) to pass the exam, or else the student is dismissed. – Cookie Feb 16 '15 at 0:27
Let's try an inductive proof: from the original inequality for $n$, let's prove it for $n+1$
$$\frac{n^n}{e^{n-1}}<n!<\frac{(n+1)^{n+1}}{e^{n}}$$
Okay, multiply both sides by $n+1$. At least the middle is correct
$$(n+1)\frac{n^n}{e^{n-1}}<(n+1)!<\frac{(n+1)^{n+2}}{e^{n}}$$
and we have to make the left and right sides look more appropriate
$$\left(\color{red}{\frac{n}{n+1}} \right)^\color{red}{n}\frac{(n+1)^{n+1}}{e^{n-1}}<(n+1)!<\frac{(n+2)^{n+2}}{e^{n}} \left(\color{blue}{\frac{n+1}{n+2}}\right)^{\color{blue}{n+2}}$$
Our induction is complete if we can prove two more inequalities:
$$\frac{1}{e} < \left(\frac{n}{n+1} \right)^n \text{ and } \left(\frac{n+1}{n+2}\right)^{n+2}< \frac{1}{e}$$
These two inequalities can be combined into one by taking reciprocals; the combined form is well known.
$$\bigg(1 + \frac{1}{m}\bigg)^{m+1}> \mathbf{e} > \bigg(1 + \frac{1}{n} \bigg)^n$$
This is true for any $m, n \in \mathbb{N}$.
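The final two-sided bound on $e$ is easy to check numerically before trusting it in the induction (a sketch; floating-point comparisons, with margins far above rounding error for this range of $n$):

```python
import math

for n in range(1, 200):
    lower = (1 + 1/n) ** n        # increases toward e
    upper = (1 + 1/n) ** (n + 1)  # decreases toward e
    assert lower < math.e < upper
print("(1 + 1/n)^n < e < (1 + 1/n)^(n+1) verified for n = 1..199")
```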
You shave off a little bit too much when you said $(k+1)^{k+2} < (k+2)^{k+2}$. Instead, you needed the more delicate:
$$(k+1)^{k+2} < \frac{1}{e} (k+2)^{k+2}$$
If you know$^1$ that: $$a_n=\left(1+\frac{1}{n}\right)^n,\qquad b_n = \left(1+\frac{1}{n}\right)^{n+1}$$ give two sequences converging towards $e$, where $\{a_n\}$ is increasing while $\{b_n\}$ is decreasing, consider that:
$$n = \prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right),\tag{1}$$ so: $$n! = \prod_{m=2}^{n} m = \prod_{m=2}^{n}\prod_{k=1}^{m-1}\left(1+\frac{1}{k}\right) = \prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^{n-k}=\frac{n^n}{\prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^k}\tag{2}$$ and your inequality trivially follows.
1) If you are not aware of such a classical result, then prove it by induction. It is rather easy.
• I think that it is not as easy as you think. – marty cohen Feb 15 '15 at 4:09
• @martycohen: it is also possible to use AM-GM in order to prove that $\{a_n\}$ is increasing and $\{b_n\}$ is decreasing. I think there are many questions/answers on MSE devoted to such well-known fact. – Jack D'Aurizio Feb 15 '15 at 4:12
• Here, for example: math.stackexchange.com/questions/389793/… – marty cohen Feb 15 '15 at 4:18
Proof: We will prove the inequality by induction. Since $e^2 > 5$ and $e^3 < 27$, we have $$\frac {27}{e^2} < 6 < \frac{256}{e^3}.$$ Thus, the statement for $n=3$ is true. The base step is complete.
For the induction step, we assume the statement is true for $n=k$. That is, assume $$\frac{k^k}{e^{k-1}}<k!<\frac{(k+1)^{k+1}}{e^{k}}.$$
We want to prove that the statement is true for $n=k+1$. It is straightforward to see for all $k > 2$ that $\left(1+\frac 1k \right)^k < e < \left(1+\frac 1k \right)^{k+1}$; this algebraically implies $$\left(\frac k{k+1} \right)^{k+1} < \frac 1e < \left(\frac k{k+1} \right)^k. \tag{*}$$ Replacing $k$ with $k+1$ in the left inequality of $(*)$ establishes $\left(\frac{k+1}{k+2} \right)^{k+2}<\frac 1e$. We now have \begin{align} (k+1)! &= (k+1)k! \\ &< (k+1) \frac{(k+1)^{k+1}}{e^k} \\ &= \frac{(k+1)^{k+2}}{e^k} \\ &= \frac{(k+1)^{k+2}}{e^k} \left( \frac{k+2}{k+2} \right)^{k+2} \\ &= \frac{(k+2)^{k+2}}{e^k} \left( \frac{k+1}{k+2} \right)^{k+2} \\ &< \frac{(k+2)^{k+2}}{e^k} \frac 1e \\ &= \frac{(k+2)^{k+2}}{e^{k+1}} \end{align} and \begin{align} (k+1)! &= (k+1)k! \\ &> (k+1) \frac{k^k}{e^{k-1}} \\ &= (k+1) \frac{k^k}{e^{k-1}} \left(\frac{k+1}{k+1} \right)^k \\ &= \frac{(k+1)^{k+1}}{e^{k-1}} \left( \frac k{k+1} \right)^k \\ &> \frac{(k+1)^{k+1}}{e^{k-1}} \frac 1e \\ &= \frac{(k+1)^{k+1}}{e^k}. \end{align} We have established that the statement $$\frac{(k+1)^{k+1}}{e^k}<(k+1)!<\frac{(k+2)^{k+2}}{e^{k+1}}$$ for $n=k+1$ is true. This completes the proof.
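The completed inequality can be spot-checked numerically for small $n$ (a sketch; float precision is more than adequate at these sizes):

```python
import math

for n in range(3, 40):
    lower = n**n / math.e**(n - 1)
    upper = (n + 1)**(n + 1) / math.e**n
    assert lower < math.factorial(n) < upper
print("n^n/e^(n-1) < n! < (n+1)^(n+1)/e^n verified for n = 3..39")
```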
Some times it is easier for such an induction if we shift the sequence by one step, so the basis of the expressions is nicer for algebraic manipulations.
Let's rewrite your sequence of inequalities as
$$\displaystyle {n! \over e}<{(n+1)^{n+1} \over e^{n+1}} < {(n+1)! \over e} <{(n+2)^{n+2} \over e^{n+2}}< {(n+2)! \over e} \tag 1$$
and for simpler references below as $$a_0 \quad < \quad b_0 \quad <\quad a_1 \quad <\quad b_1 \quad < \quad a_2 \tag 2$$
Then we ask: does $a_1<b_1<a_2$ follow from $a_0<b_0<a_1$?
Of course $a_1 = (n+1) \cdot a_0$, and so it might be useful to define $b_0$ as a fraction of the interval between $a_0$ and $a_1$: $$b_0 = (n+1)q_1 \cdot a_0 \text{ where } q_1<1. \tag {3.1}$$ Consequently, define $$b_1 = (n+2)q_2 \cdot a_1 \text{ where also } q_2<1. \tag {3.2}$$ Here the inequality $q_2 < 1$ is not yet known but expected, and if it can be shown by induction from $q_1$, this would solve the problem.
So we start with $$q_1 = {b_0 \over a_0 (n+1)} = {(n+1)^{n} \over e^n n! } \tag {4.1 }$$ and by the beginning of the induction we know, that this is indeed smaller than 1 so $$q_1 < 1 \tag {4.2 }$$
Now we have simply $$q_2 = {b_1 \over a_1 (n+2)} = {(n+2)^{n+1} \over e^{n+1} (n+1)! } \tag {4.3 }$$ Next we consider the systematic progression in the sequence of $q_1,q_2,q_3,...$. To begin we determine the ratio $r_2={q_2 \over q_1}$ . We find $$\begin{eqnarray} r_2&=&{q_2 \over q_1}& =&{ {(n+2)^{n+1} \over e^{n+1} (n+1)! } \over {(n+1)^{n} \over e^{n} (n)! } } \\ &&&=& {(n+2)^{n+1} e^{n} (n)! \over e^{n+1} (n+1)! (n+1)^{n}} \\ &&&=& {(n+2)^{n+1} \over e (n+1)^{n+1}} \\ &&&=& \left({n+2 \over n+1}\right)^{n+1} \cdot \frac 1e \\ r_2&=&{q_2 \over q_1}& =& \left( 1 + {1 \over n+1} \right)^{n+1} \cdot \frac 1e \end{eqnarray} \tag {5.1 }$$
It is now necessary to recognize, from the definition of $e$, that the last expression is smaller than $1$, and that for $n \to \infty$ it approaches $1$ monotonically.
Also, we see from the binomial expansion of $(1+1/x)^x = 1 + 1 + \dots$ that for $x \ge 1$ this is at least $2$, so $${2 \over e} \approx 0.73 < r_2 < 1 \tag{5.2}$$
From this we now know that $q_2$ is not only smaller than $1$ but also smaller than $q_1$, and that $q_{n+1} \to q_n$ for increasing $n$.
So we have $$\begin{eqnarray} &q_2 &=& q_1 \cdot r_2 < q_1 < 1 \\ \to& b_1 &<& (n+2) a_1 = a_2 \\ \to &a_1 &<& b_1 <a_2 \end{eqnarray} \tag {6 }$$ which we wanted to show.
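The claimed behaviour of the $q$-sequence in (4.1) through (5.2) can be confirmed numerically (a sketch; `q(n)` implements $(n+1)^n/(e^n\, n!)$ directly):

```python
import math

def q(n):
    # q_n = (n+1)^n / (e^n * n!), as in (4.1)/(4.3)
    return (n + 1)**n / (math.e**n * math.factorial(n))

prev = q(1)                          # q_1 = 2/e, already below 1
for n in range(2, 60):
    cur = q(n)
    assert 0 < cur < prev < 1        # q_n stays in (0, 1) and decreases
    assert 2/math.e < cur/prev < 1   # the consecutive-ratio bound (5.2)
    prev = cur
print("q_n is decreasing with consecutive ratios in (2/e, 1)")
```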
Since several answers have already proven the result via induction, I see no harm in recording an additional proof that does not use induction.
We claim for all $n\ge 1$, $$\int_{0}^{1} (x\log(x))^n \, dx = \frac{(-1)^n n!}{(n+1)^{n+1}}$$ This can be shown using differentiation under the integral sign; set $k=n$ below: $$f(n) = \int_{0}^{1} x^n \, dx = \frac{1}{n+1} \implies \frac{d^k}{dn^k} f(n) = \int_{0}^{1} x^n (\log(x))^k \, dx = \frac{(-1)^{k} k!}{(n+1)^{k+1}}$$
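This identity is easy to confirm with elementary numerical quadrature before using it (a sketch using the midpoint rule; plain Python, no external libraries):

```python
import math

def integral(n, steps=100000):
    # Midpoint-rule approximation of the integral of (x*ln x)^n over [0, 1]
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = h * (i + 0.5)
        total += (x * math.log(x)) ** n
    return total * h

for n in range(1, 5):
    exact = (-1)**n * math.factorial(n) / (n + 1)**(n + 1)
    assert abs(integral(n) - exact) < 1e-6
print("integral of (x ln x)^n over [0,1] matches (-1)^n n!/(n+1)^(n+1) for n = 1..4")
```

The integrand vanishes at both endpoints, so the midpoint rule behaves well despite the logarithm's singularity at $0$.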
Since $x \log(x)$ does not change sign, we have $$\int_{0}^{1} |x\log(x)|^n \, dx = \frac{n!}{(n+1)^{n+1}}$$ From this expression, we see that the given inequality is equivalent to $$\frac{n^n}{e^{n-1}(n+1)^{n+1}} < \int_{0}^{1} |x\log(x)|^n \, dx < \frac{1}{e^n}$$
A simple computation shows $|x\log(x)|^n$ attains a maximum of $e^{-n}$ at $x=e^{-1}$, so the upper bound is evident. The lower bound amounts to $n!\,e^{n-1} > n^n$, which follows from the telescoping identity $\frac{n^n}{n!} = \prod_{k=1}^{n-1}\left(1+\frac{1}{k}\right)^{k}$ together with the fact that $\left(1+\frac{1}{k}\right)^{k} < e$ for every $k$. | 2021-06-14T08:56:31 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1148442/proving-fracnnen-1n-fracn1n1en-by-induction-for-all",
"openwebmath_score": 0.9996531009674072,
"openwebmath_perplexity": 309.60411274479463,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105314577313,
"lm_q2_score": 0.8774767922879693,
"lm_q1q2_score": 0.8566898334204925
} |
https://www.physicsforums.com/threads/formula-of-nth-term-of-a-pattern.767884/ | # Formula of nth term of a pattern
1. Aug 27, 2014
### songoku
1. The problem statement, all variables and given/known data
Find the formula for nth term of:
.... , -450 , -270 , -180 , -90 , 90 , 180 , 270 , 450 , ...
2. Relevant equations
Arithmetic and geometric series
Recursive
3. The attempt at a solution
Usually when finding the formula of Un, n starts from 1. But since the pattern goes infinite in both ways, how can we set which one is the first term?
As for recursive, it usually involves the preceding term, Un-1. Again, how can we determine the preceding term for the above case?
Please give me idea how to start solving this. Thank you very much
2. Aug 27, 2014
### jackarms
For the general formula (in terms of n), you're right that the sequence going both ways complicates things. The answer is that you have to arbitrarily designate a term to be $a_{1}$, or perhaps $a_{0}$. You would write the whole formula based around that starting term, in a form like this:
$a_{n} = ....$, where $a_{0} = 3$
For the recursive definition, you'll just use a term and the preceding term. Since it's recursive, you don't need to know the particular values for either of those terms -- the recursive form just means you describe a term in terms of its preceding term. So it works with any two (adjacent) terms in the sequence, and so it isn't necessary to designate a starting term. The form for that would look like:
$a_{n} = a_{n-1}....$
So $a_{n}$ can refer to any term in the sequence, because this rule works for any pair of terms in the sequence, no matter how long it is and whether it goes infinitely in just one direction or two.
Last edited: Aug 27, 2014
3. Aug 27, 2014
### Ray Vickson
Just fix $n = 0$ anywhere you want; for example, you can take $U_0 = -450.$ All the $U_n$ for $n < 0$ are the ones you do not see to the left of what you have written and all the $U_n$ for $n > 7$ are the ones you don't see to the right of what you have written. Now just start taking differences and see what you get---there is a nice pattern.
4. Aug 28, 2014
### songoku
I still can't find the pattern. Do we have to make separate formula for n < 0 and n > 0?
I tried to set U0 = 0, at the middle of the positive and negative terms. What I got is 270 = 180 + 90 ; 450 = 270 + 180. Maybe it is related to Un = Un-1 + Un-2, but I don't know about 90 and 180.
Thanks
5. Aug 30, 2014
### SammyS
Staff Emeritus
The pattern is evident in the sequence as it's given, before you assign any starting point.
Hint: Make a difference table with 1st & 2nd differences.
6. Aug 31, 2014
### Ray Vickson
For the sequence
$$U = \{U_i,i=0,1,2, \ldots \} = \{-450 , -270 , -180 , -90 , 90 , 180 , 270 , 450 , \ldots\}$$
the sequence of first differences is
$$dU = \{U_{i+1}-U_{i}, i = 0,1,2,\ldots \} = \{ -270-(-450),-180-(-270), \ldots \} = \{ 180,90,90,180,90,90,\ldots \} \\ = 90 + 90 \{1,0,0,1,0,0,1,0,0,\ldots \}$$
If you can find a nice formula for the sequence $\{1,0,0,1,0,0,1,0,0,\ldots \}$ you are almost finished. If you know about complex variables and the three (complex) roots of unity, you can finish up with a finite sum of some geometric series, to get a fairly nice formula for the nth term of your original sequence.
7. Aug 31, 2014
### PeroK
Alternatively, you should be able to do something with n (mod3) to formulate the sequence of 1, 0, 0, 1, 0, 0 ...
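Following that hint, the indicator sequence is just `1 if i % 3 == 0 else 0`, and summing Ray Vickson's first differences from $U_0=-450$ gives one possible closed form (a sketch; the formula below is only one of several equivalent ways to write it):

```python
target = [-450, -270, -180, -90, 90, 180, 270, 450]

def U(n):
    # U_n = U_0 + sum_{i=0}^{n-1} (90 + 90*[i % 3 == 0]);
    # the count of multiples of 3 among 0..n-1 is ceil(n/3) = (n + 2)//3
    return -450 + 90*n + 90*((n + 2)//3)

assert [U(n) for n in range(8)] == target
print([U(n) for n in range(8)])
```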
8. Sep 2, 2014
### andrewkirk
If we are to be rigorous, we need to take care with our definitions in defining the pattern. Recursive definitions mostly use the Recursion Theorem, which is only defined on the positive integers $Z_+$. For more general well-ordered sets we can use the Theorem of Transfinite Recursive Definition, but that's not needed here.
We can't just use the Recursion Theorem here in a single step, because the base set is the full set of integers, which is not well-ordered. The following trick gets around this problem.
Define $a_1=90, b_1=-90$
Define rules to give $a_{n+1}$ and $b_{n+1}$ in terms of $a_n$ and $b_n$ as follows, for $n\in Z_+$:
$a_{n+1}=a_n+180$ if $n$ is divisible by 3, else $a_{n+1}=a_n+90$
$b_{n+1}=b_n-180$ if $n$ is divisible by 3, else $b_{n+1}=b_n-90$
(The $180$-step has to occur when $n$ is a multiple of $3$, so that $a_1,\dots,a_4 = 90, 180, 270, 450$ match the given terms.)
Then the recursion theorem tells us that these definitions prescribe unique sequences $a_n$ and $b_n$.
Now we can just define $c:Z\to Z$ by $c(n)=a_{n+1}$ if $n\geq 0$, otherwise $c(n)=b_{-n}$.
Last edited: Sep 2, 2014
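The whole construction can be checked against the given terms (a sketch; note that the $180$-step is taken when the index $n$ is divisible by $3$, which is what reproduces $90, 180, 270, 450$):

```python
def a(n):
    # positive half: a_1 = 90, with a step of 180 when the index k is divisible by 3
    v = 90
    for k in range(1, n):
        v += 180 if k % 3 == 0 else 90
    return v

def b(n):
    # negative half, mirrored
    return -a(n)

def c(n):
    # the whole doubly infinite sequence, stitched together as in the post
    return a(n + 1) if n >= 0 else b(-n)

assert [c(n) for n in range(-4, 4)] == [-450, -270, -180, -90, 90, 180, 270, 450]
print([c(n) for n in range(-4, 4)])
```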
9. Sep 14, 2014
### songoku
I think I get the idea. I'll try it first. Thanks a lot for the help
10. Sep 25, 2014
### Simon Bridge
I'm always puzzled by these things - i.e. notice how the sequence is symmetrical about the -90,90 terms?
Why not exploit that? Put $a_1=90$, then $a_n = (n/|n|)a_{|n|}$ ... the sequence for $a_{|n|}$ should be easy ... though there are only four terms so you are spoiled for choice. {90, 180, 270, 450...} the differences then go {1, 1, 2, ...} in units of $a_1$, which is suggestive.
What approach you choose seems to depend on what assumptions you make about the pattern in question. | 2018-03-22T18:45:41 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/formula-of-nth-term-of-a-pattern.767884/",
"openwebmath_score": 0.7913281917572021,
"openwebmath_perplexity": 339.44778732873704,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.97631052387569,
"lm_q2_score": 0.8774767970940975,
"lm_q1q2_score": 0.8566898314597009
} |
https://math.stackexchange.com/questions/1802813/prove-upper-bound-for-recurrence | # Prove upper bound for recurrence
I am working on problem set 8 problem 3 from MIT's Fall 2010 OCW class 6.042J. This is covered in chapter 10 which is about recurrences.
Here is the problem:
$$A_0 = 2$$ $$A_{n+1} = A_n/2 + 1/A_n, \forall n \ge 0$$
Prove
$$A_n \le \sqrt2 + 1/2^n, \forall n \ge 0$$
I have graphed the recurrence and the upper bound and they seem to both converge on $\sqrt2$.
Also, if you ignore the boundary condition $A_0 = 2$ then you find that $\sqrt2$ is a solution to the main part of the recurrence. i.e. $\sqrt2 = \sqrt2/2 + 1/\sqrt2$.
The chapter and videos on recurrences have a lot to say about a kind of cookbook solution to divide and conquer recurrences which they call the Akra-Bazzi Theorem. But this recurrence does not seem to be in the right form for that theorem. If it were in the form $A_{n+1} = A_n/2 + g(n)$ then the theorem would give you an asymptotic bound. But $1/A_n$ is not a simple function of $n$ like a polynomial. Instead it is part of the recurrence.
Also, the chapter has a variety of things to say about how to guess the right solution and plug it into an inductive proof, but I haven't had much success. I have tried possible solutions of various forms like $a_n = \sqrt2+a/b^n$ and tried solving for the constants $a$ and $b$ but to no avail.
So, if someone can point me in the right direction that would be great. I always assume that the problem sets are based on something taught in the videos and in the text of the book but I am having trouble tracking this one down.
Bobby
We show by induction that $$\sqrt{2}\lt A_n\le \sqrt{2}+\frac{1}{2^n}.\tag{1}$$ Suppose the result holds at $n=k$. We show the result holds at $n=k+1$.
For the inequality on the right of (1), we need to show that $$\frac{A_k}{2}+\frac{1}{A_k}\le \sqrt{2}+\frac{1}{2^{k+1}}.$$ By the induction hypothesis, we have $$\frac{A_k}{2}+\frac{1}{A_k}\le \frac{\sqrt{2}}{2}+\frac{1}{2^{k+1}}+\frac{1}{\sqrt{2}}=\sqrt{2}+\frac{1}{2^{k+1}},$$ which takes care of the induction step for the inequality on the right of (1).
We still need to show that $\sqrt{2}\lt A_{k+1}$. Let $A_k=\sqrt{2}+\epsilon$, where $\epsilon$ is positive. Then $$A_{k+1}=\frac{\sqrt{2}+\epsilon}{2}+\frac{1}{\sqrt{2}+\epsilon}=\frac{4+2\sqrt{2}\epsilon+\epsilon^2}{2(\sqrt{2}+\epsilon)}\gt \frac{4+2\sqrt{2}\epsilon}{2(\sqrt{2}+\epsilon)}=\sqrt{2}.$$ This completes the induction step for the inequality on the left of (1).
Remark: The inequality (1) and squeezing show that $A_n$ indeed has limit $\sqrt{2}$.
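The two bounds in $(1)$ can be verified exactly for the first several terms by iterating the recurrence in rational arithmetic (a sketch; comparisons with $\sqrt2$ are done by squaring, so no floating point is involved):

```python
from fractions import Fraction

A = Fraction(2)  # A_0 = 2, kept as an exact rational
for n in range(12):
    assert A * A > 2                    # A_n > sqrt(2), checked by squaring
    gap = A - Fraction(1, 2**n)         # A_n - 1/2^n
    assert gap <= 0 or gap * gap <= 2   # A_n <= sqrt(2) + 1/2^n, checked by squaring
    A = A / 2 + 1 / A                   # the recurrence
print("sqrt(2) < A_n <= sqrt(2) + 1/2^n holds exactly for n = 0..11")
```

The loop is capped at $n=11$ only because the numerators roughly double in length each step; the convergence is quadratic, as identity $(1)$ in the answer below predicts.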
• It didn't occur to me to prove that $A_n \ge \sqrt2$ and then substitute $\sqrt2$ in the $1/A_n$ term. Much easier that way. Thanks. – Bobby Durrett May 28 '16 at 2:36
• @BobbyDurrett: You are welcome. The inequality $A_n\gt \sqrt{2}$ is not mentioned explicitly in what you are asked to show, but when one tries to push the induction through, it becomes clear that it is necessary for the proof. – André Nicolas May 28 '16 at 2:40
• Nice induction for the $>\sqrt{2}$ part. I didn't manage it immediately, that's why I showed it studying the map $x\mapsto \frac{x}2+\frac{1}{x}$. – Daniel Robert-Nicoud May 28 '16 at 11:09
Let $$B_k = \frac{A_k}{\sqrt{2}}$$ Then $B_0 = \sqrt{2}$ and $$B_{n+1} = \frac12\left(B_n+\frac{1}{B_n} \right)$$ Thus $B_n$ is the $n$-th guess if you perform Newton's algorithm to try to find $\sqrt{1}$ starting with a guess of $\sqrt{2}$.
This recursion can be solved in closed form using the formula for the hyperbolic tangent of $2x$ in terms of $\tanh x$: $$\tanh(2x) = \frac{2 \tanh x}{1+\tanh^2{x}}$$ . The result looks something like $$B_n = \tanh\left( 2^n \theta\right)$$ where $\theta = \tanh^{-1} B_0$; the answer I have given is off in that every other term needs to be the reciprocal of what I wrote, but the general idea will work.
Once you have that, you can know exactly what $B_n$ is and prove the relation; or better yet, use the error analysis for Newton's method to get an estimate of how close to 1 you would be.
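Since $B_0=\sqrt2>1$, the closed form comes out cleanly in terms of $\coth$ rather than $\tanh$ (this is the "reciprocal" adjustment mentioned above, using the identity $\coth(2x)=\tfrac12(\coth x+\tanh x)$). A numeric sketch:

```python
import math

def coth(x):
    return 1 / math.tanh(x)

B0 = math.sqrt(2)
theta = math.atanh(1 / B0)   # chosen so that coth(theta) = B0

B = B0
for n in range(5):
    assert abs(B - coth(2**n * theta)) < 1e-9
    B = (B + 1 / B) / 2      # the averaged (Newton-style) iteration for B_n
print("B_n = coth(2^n * theta), hence A_n = sqrt(2) * coth(2^n * theta)")
```

Because $\coth(2^n\theta)\to 1$ doubly exponentially, this also recovers the quadratic convergence of $A_n$ to $\sqrt2$.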
We want to apply induction, but we need also a lower bound on $A_n$ for the $\frac{1}{n}$ term. We can show that $A_n\ge\sqrt{2}$ as follows:
Let $f(x) = \frac{x}{2} + \frac{1}{x}$. Then $f'(x) = \frac{1}{2} - \frac{1}{x^2}$ which has a zero at $x = \sqrt{2}$ where $f(\sqrt{2}) = \sqrt{2}$, and is positive for $x>\sqrt{2}$. This shows that $f(x)\ge\sqrt{2}$ whenever $x\ge\sqrt{2}$, and as $A_{n+1} = f(A_n)$, we have that $A_n\ge\sqrt{2}$ for all $n\ge0$.
Then to conclude:
The case $n=0$ is trivially true.
For $n\ge0$ we have that $$A_{n+1} = \frac{A_n}{2} + \frac{1}{A_n}.$$ By induction hypothesis and what we have shown above, this has as upper bound $$A_{n+1}\le \frac{\sqrt{2} + 2^{-n-1}}{2} + \frac{1}{\sqrt{2}} = \sqrt{2} + 2^{-n-1}.$$
Three steps: Use the definition of $A_n$ and algebra to establish that $$A_{n+1}-\sqrt 2 = {(A_n-\sqrt 2)^2\over 2A_n}.\tag1$$ Next, use (1) to prove by induction that $$A_n\ge\sqrt 2\ \text{for every n.}\tag2$$ Finally, use (1) and (2) to prove by induction that $$A_n-\sqrt2 \le \frac1{2^{n}}\ \text{for every n.}\tag3$$
Your sequence is $$A_{n+1} = \frac{1}{2} A_n + \frac{1}{A_n}$$ where the last term features a division which is reminiscent of the Newton-Raphson iteration.
The Newton-Raphson iteration takes the root of the tangent line $T$ at $x_n$ as the next step for estimating a root of $f$: $$0 = T(x_{n+1}) = f(x_n) + f'(x_n) (x_{n+1} - x_n) \iff \\ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$ By comparison we have $$-\frac{1}{2}A_n + \frac{1}{A_n} = - \frac{A_n^2-2}{2A_n} = - \frac{f(A_n)}{f'(A_n)}$$ and see $f(x) = x^2 - 2$, the function used to iterate toward the root $\sqrt{2}$.
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1802813/prove-upper-bound-for-recurrence",
"openwebmath_score": 0.9323938488960266,
"openwebmath_perplexity": 119.9180937019314,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9763105259435195,
"lm_q2_score": 0.8774767890838837,
"lm_q1q2_score": 0.8566898254537173
} |
https://math.stackexchange.com/questions/3352629/finding-the-domain-and-range-of-a-composite-function | # Finding the domain and range of a composite function
So I have two functions. $$f(x) = e^{-x^2+1}$$ and $$g(x)=\sqrt{x^2-4x+3}$$. I am then asked to determine the domain and range of
$$a)f∘g,$$
$$b)g∘f$$
I already did part $$a)$$ and the domain for part $$b)$$.
For part $$a)$$, the domain was $$(-\infty,1)\cup(3,\infty)$$ and the range was $$(0,e^2)$$.
For part $$b$$, I figured out that the domain was $$(-\infty,-1]\cup[1,\infty)$$. I am not sure how to find the range though. Normally, I would take the inverse of g∘f and find the domain of that, and although I can do it, I don't think I did it correctly.
Currently, I did figure out that $$g∘f$$ is $$\sqrt{e^{-2x^2+2}-4e^{-x^2+1}+3}$$. How do I find the range of this mess though? I attempted to take the inverse which I believe is:
$$y=\pm\sqrt{1-\ln(2\pm\sqrt{1+y^2})}$$
Although I know that Wolfram Alpha is not the arbiter of what is correct, it has generally been right, and my answer disagrees with what Wolfram Alpha has obtained (As seen here). In addition, the range is something that I am not sure how Wolfram obtained (As seen here). This also looks REALLY messy.
Can anyone guide me as to how this was obtained? That would be much appreciated!
• I see how you got $(-\infty,1)\cup(3,\infty)$ (except maybe the choice of open/closed) but not how you got $e^2.$ When you were computing the range, did you forget the gap in the domain? I think that part is easier to solve if you do not work out the formula for $f\circ g.$ – David K Sep 11 '19 at 11:58
• I took the inverse of the function and found the domain of that to get the range. – Future Math person Sep 11 '19 at 18:23
• That seems like an unnecessarily difficult way to do it. Also error-prone, since (I think) it gives the wrong answer. What value of $x$ satisfies $f(g(x))=e^{3/2},$ for example? – David K Sep 11 '19 at 18:42
• Really? I didn't think it was too bad. I ended up getting $f(g(x))=-2 \pm \sqrt{2-\ln(x)}$. This would mean $2-\ln(x) \geq 0$ and thus $e^2 \geq x$ – Future Math person Sep 11 '19 at 18:44
• Also I get $g(1)=g(3)=0,$ and $0$ is in the domain of $f,$ so I would not exclude $1$ and $3$ from the domain in part a). I think you may need to be a lot more careful about checking what happens at your boundaries. (This is relevant to the range as well.) – David K Sep 11 '19 at 18:47
To find the range you want, namely of $$g(f(x))=F(x),$$ note that $$F(x)$$ is never negative. It is defined and continuous on the domain found above, $$(-\infty,-1]\cup[1,\infty),$$ and it is an even function of $$x.$$ Thus, it suffices to consider only the range for $$x\ge 1,$$ say.

Note that $$F(1)=g(f(1))=g(1)=0.$$ Also, as $$x\to +\infty,$$ we have $$f(x)\to 0^+$$ and hence $$F(x)\to \sqrt 3.$$ Thus, by the IVT applied on $$[1,\infty),$$ the range contains at least $$[0,\sqrt 3).$$ It only remains to see whether the function ever attains $$\sqrt 3.$$ Setting $$F(x)=\sqrt 3$$ gives $$p^2-4p+3=3,$$ that is, $$p(p-4)=0,$$ where $$p=e^{-x^2+1}\in(0,1]$$ on the domain; neither $$p=0$$ nor $$p=4$$ is attained, so $$\sqrt 3$$ is never reached and the range is half-open at that end.

It therefore is the case that the range is $$[0,\sqrt 3).$$
Note that the range of $$f$$ on $$(-\infty,-1]\cup[1,\infty)$$ is $$(0,1]$$. The range of $$g$$ on $$(0,1]$$ is $$[0,\sqrt{3})$$.
Let $$f,g$$ be given by $$\begin{cases} f(x)=e^{1-x^2}\\[4pt] g(x)=\sqrt{x^2-4x+3}\\ \end{cases}$$ $$\bullet\;\,$$Part $$(\mathbf{a}){\,:}\;\,$$Find the domain and range of $$f\circ g$$.
Since the domain of $$f$$ is $$\mathbb{R}$$, the domain of $$f\circ g$$ is the same as the domain of $$g$$.
Hence the domain of $$f\circ g$$ is $$(-\infty,1]\cup [3,\infty)$$.
Since $$f$$ is an even function, the range of $$f$$ is the same as the range of $$f$$ on the restricted domain $$[0,\infty)$$.
On the interval $$[0,\infty)$$, $$f$$ is strictly decreasing, and $$f$$ approaches zero from above as $$x$$ approaches infinity.
Since $$f$$ is continuous, it follows that the range of $$f$$ is $$(0,e]$$.
For $$x\ge 3$$, $$g$$ realizes all values in $$[0,\infty)$$, so the range of $$f\circ g$$ is the same as the range of $$f$$.
Hence the range of $$f\circ g$$ is $$(0,e]$$.
$$\bullet\;\,$$Part $$(\mathbf{b}){\,:}\;\,$$Find the domain and range of $$g\circ f$$.
The domain of $$g\circ f$$ is the set of all real $$x$$ such that $$f(x)$$ is in the domain of $$g$$, which is the set of all real $$x$$ such that $$f(x)\le 1$$ or $$f(x)\ge 3$$.
But we can't have $$f(x)\ge 3$$, since the maximum value of $$f$$ is $$e$$.
For the condition $$f(x)\le 1$$, we get $$f(x)\le1\iff e^{1-x^2}\le 1\iff 1-x^2\le 0\iff |x| \ge 1$$ hence the domain of $$g\circ f$$ is $$(-\infty,-1]\cup [1,\infty)$$.
Restricted to the domain $$(-\infty,-1]\cup [1,\infty)$$, the range of $$f$$ is $$(0,1]$$, hence the range of $$g\circ f$$ is the range of $$g$$ on the restricted domain $$(0,1]$$.
It follows that the range of $$g\circ f$$ is $$[0,\sqrt{3})$$.
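As a numerical sanity check of part $(\mathbf{b})$ (Python, not part of the original answer), sampling $F(x)=g(f(x))$ on $[1,\infty)$ shows the values filling out $[0,\sqrt 3)$, while a point with $|x|<1$ falls outside the domain:

```python
import math

def F(x):
    # g(f(x)) with f(x) = e^(1 - x^2), g(x) = sqrt(x^2 - 4x + 3)
    p = math.exp(1 - x * x)
    return math.sqrt(p * p - 4 * p + 3)

xs = [1 + k / 100 for k in range(2001)]   # sample the interval [1, 21]
vals = [F(x) for x in xs]
print(min(vals), max(vals))  # min is F(1) = 0; values approach sqrt(3) ~ 1.732

try:
    F(0.5)                    # |x| < 1 lies outside the domain of g∘f
    in_domain = True
except ValueError:
    in_domain = False
print(in_domain)
```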
$$\bullet\;\,$$To summarize the results:
• The domain of $$f\circ g$$ is:$$\;\,(-\infty,1]\cup [3,\infty)$$.
• The range of $$f\circ g$$ is:$$\;\,(0,e]$$.
• The domain of $$g\circ f$$ is:$$\;\,(-\infty,-1]\cup [1,\infty)$$.
• The range of $$g\circ f$$ is:$$\;\,[0,\sqrt{3})$$.
https://nhigham.com/2021/03/09/eigenvalue-inequalities-for-hermitian-matrices/?replytocom=25185

# Eigenvalue Inequalities for Hermitian Matrices
The eigenvalues of Hermitian matrices satisfy a wide variety of inequalities. We present some of the most useful and explain their implications. Proofs are omitted, but as Parlett (1998) notes, the proofs of the Courant–Fischer, Weyl, and Cauchy results are all consequences of the elementary fact that if the sum of the dimensions of two subspaces of $\mathbb{C}^n$ exceeds $n$ then the subspaces have a nontrivial intersection.
The eigenvalues of a Hermitian matrix $A\in\mathbb{C}^{n\times n}$ are real and we order them $\lambda_n\le \lambda_{n-1} \le \cdots \le \lambda_1$. Note that in some references, such as Horn and Johnson (2013), the reverse ordering is used, with $\lambda_n$ the largest eigenvalue. When it is necessary to specify what matrix $\lambda_k$ is an eigenvalue of we write $\lambda_k(A)$: the $k$th largest eigenvalue of $A$. All the following results also hold for symmetric matrices over $\mathbb{R}^{n\times n}$.
The function $f(x) = x^*Ax/x^*x$ is the quadratic form $x^*Ax$ for $A$ evaluated on the unit sphere, since $f(x) = f(x/\|x\|_2)$. As $A$ is Hermitian it has a spectral decomposition $A = Q\Lambda Q^*$, where $Q$ is unitary and $\Lambda = \mathrm{diag}(\lambda_i)$. Then
$f(x) = \displaystyle\frac{x^*Q\Lambda Q^*x}{x^*x} = \displaystyle\frac{y^*\Lambda y}{y^*y} = \displaystyle\frac{\sum_{i=1}^{n}\lambda_i |y_i|^2} {\sum_{i=1}^{n}|y_i|^2} \quad (y = Q^*x),$
from which it is clear that
$\notag \lambda_n = \displaystyle\min_{x\ne0} \displaystyle\frac{x^*Ax}{x^*x}, \quad \lambda_1 = \displaystyle\max_{x\ne0} \displaystyle\frac{x^*Ax}{x^*x}, \qquad(*)$
with equality when $x$ is an eigenvector corresponding to $\lambda_n$ and $\lambda_1$, respectively. This characterization of the extremal eigenvalues of $A$ as the extrema of $f$ is due to Lord Rayleigh (John William Strutt), and $f(x)$ is called a Rayleigh quotient. The intermediate eigenvalues correspond to saddle points of $f$.
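The extremal property $(*)$ is easy to test numerically. The following sketch (Python, not part of the original post) uses a real symmetric $2\times 2$ matrix, whose eigenvalues are available in closed form, and samples the Rayleigh quotient over the unit circle:

```python
import math

a, b, c = 2.0, 1.0, -1.0          # A = [[a, b], [b, c]], symmetric
# closed-form eigenvalues of a symmetric 2x2 matrix
mean, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
lam1, lam2 = mean + rad, mean - rad          # largest, smallest

def rayleigh(t):
    # quotient x^T A x / x^T x on the unit vector x = (cos t, sin t)
    x1, x2 = math.cos(t), math.sin(t)
    return a * x1 * x1 + 2 * b * x1 * x2 + c * x2 * x2

qs = [rayleigh(2 * math.pi * k / 10000) for k in range(10000)]
print(max(qs), lam1)   # max of the quotient ~ largest eigenvalue
print(min(qs), lam2)   # min of the quotient ~ smallest eigenvalue
```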
Courant–Fischer Theorem
The Courant–Fischer theorem (1905) states that every eigenvalue of a Hermitian matrix $A\in\mathbb{C}^{n\times n}$ is the solution of both a min-max problem and a max-min problem over suitable subspaces of $\mathbb{C}^n$.
Theorem (Courant–Fischer).
For a Hermitian $A\in\mathbb{C}^{n\times n}$,
$\notag \begin{aligned} \lambda_k &= \min_{\dim(S)=n-k+1} \, \max_{0\ne x\in S} \frac{x^*Ax}{x^*x}\\ &= \max_{\dim(S)= k} \, \min_{0\ne x\in S} \frac{x^*Ax}{x^*x}, \quad k=1\colon n. \end{aligned}$
Note that the equalities $(*)$ are special cases of these characterizations.
In general there is no useful formula for the eigenvalues of a sum $A+B$ of Hermitian matrices. However, the Courant–Fischer theorem yields the upper and lower bounds
$\notag \lambda_k(A) + \lambda_n(B) \le \lambda_k(A+B) \le \lambda_k(A) + \lambda_1(B), \qquad (1)$
from which it follows that
$\notag \max_k|\lambda_k(A+B)-\lambda_k(A)| \le \max(|\lambda_n(B)|,|\lambda_1(B)|) = \|B\|_2.$
This inequality shows that the eigenvalues of a Hermitian matrix are well conditioned under perturbation. We can rewrite the inequality in the symmetric form
$\notag \max_k |\lambda_k(A)-\lambda_k(B)| \le \|A-B\|_2.$
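This perturbation bound can be checked numerically; here is a small sketch (Python, not in the original post) using the closed-form eigenvalues of symmetric $2\times 2$ matrices, for which the spectral norm of the symmetric difference $A-B$ is its largest absolute eigenvalue:

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues (descending) of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
    return mean + rad, mean - rad

A = (2.0, 1.0, -1.0)                     # entries (a11, a12, a22)
B = (1.5, 0.7, -0.4)
E = tuple(x - y for x, y in zip(A, B))   # E = A - B

lamA, lamB = eig_sym2(*A), eig_sym2(*B)
gap = max(abs(x - y) for x, y in zip(lamA, lamB))
norm2 = max(abs(t) for t in eig_sym2(*E))   # ||A - B||_2 for symmetric E
print(gap, norm2)                           # gap <= norm2
```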
If $B$ is positive semidefinite then (1) gives
$\notag \lambda_k(A) \le \lambda_k(A + B), \quad k = 1\colon n, \qquad (2)$
while if $B$ is positive definite then strict inequality holds for all $k$. These bounds are known as the Weyl monotonicity theorem.
Weyl’s Inequalities
Weyl’s inequalities (1912) bound the eigenvalues of $A+B$ in terms of those of $A$ and $B$.
Theorem (Weyl).
For Hermitian $A,B\in\mathbb{C}^{n\times n}$ and $i,j = 1\colon n$,
$\notag \begin{aligned} \lambda_{i+j-1}(A+B) &\le \lambda_i(A) + \lambda_j(B), \quad i+j \le n+1, \qquad (3)\\ \lambda_i(A) + \lambda_j(B) &\le \lambda_{i+j-n}(A+B), \quad i+j \ge n+1. \qquad (4) \end{aligned}$
The Weyl inequalities yield much information about the effect of low rank perturbations. Consider a positive semidefinite rank-$1$ perturbation $B = zz^*$. Inequality (3) with $j = 1$ gives
$\notag \lambda_i(A+B) \le \lambda_i(A) + z^*z, \quad i = 1\colon n$
(which also follows from (1)). Inequality (3) with $j = 2$, combined with (2), gives
$\notag \lambda_{i+1}(A) \le \lambda_{i+1}(A + zz^*) \le \lambda_i(A), \quad i = 1\colon n-1. \qquad (5)$
These inequalities confine each eigenvalue of $A + zz^*$ to the interval between two adjacent eigenvalues of $A$; the eigenvalues of $A + zz^*$ are said to interlace those of $A$. The following figure illustrates the case $n = 4$, showing a possible configuration of the eigenvalues $\lambda_i$ of $A$ and $\mu_i$ of $A + zz^*$.
A specific example, in MATLAB, is
>> n = 4; eig_orig = 5:5+n-1
>> D = diag(eig_orig); eig_pert = eig(D + ones(n))'
eig_orig =
5 6 7 8
eig_pert =
5.2961e+00 6.3923e+00 7.5077e+00 1.0804e+01
Since $\mathrm{trace}(A + zz^*) = \mathrm{trace}(A) + z^*z$ and the trace is the sum of the eigenvalues, we can write
$\notag \lambda_i(A + zz^*) = \lambda_i(A) + \theta_i z^*z,$
where the $\theta_i$ are nonnegative and sum to $1$. If we greatly increase $z^*z$, the norm of the perturbation, then most of the increase in the eigenvalues is concentrated in the largest, since (5) bounds how much the smaller eigenvalues can change:
>> eig_pert = eig(D + 100*ones(n))'
eig_pert =
5.3810e+00 6.4989e+00 7.6170e+00 4.0650e+02
More generally, if $B$ has $p$ positive eigenvalues and $q$ negative eigenvalues then (3) with $j = p+1$ gives
$\notag \lambda_{i+p}(A+B) \le \lambda_i(A), \quad i = 1\colon n-p,$
while (4) with $j = n-q$ gives
$\notag \lambda_i(A) \le \lambda_{i-q}(A + B), \quad i = q+1\colon n.$
So the inertia of $B$ (the number of negative, zero, and positive eigenvalues) determines how far the eigenvalues can move as measured relative to the indexes of the eigenvalues of $A$.
An important implication of the last two inequalities is for the case $A = I$, for which we have
$\notag \begin{aligned} \lambda_{i+p}(I+B) &\le 1, \quad i = 1 \colon n-p, \\ \lambda_{i-q}(I+B) &\ge 1, \quad i = q+1 \colon n. \end{aligned}$
Exactly $p+q$ eigenvalues appear in one of these inequalities and $n-(p+q)$ appear in both. Therefore $n - (p+q)$ of the eigenvalues are equal to $1$ and so only $\mathrm{rank}(B) = p+q$ eigenvalues can differ from $1$. So perturbing the identity matrix by a Hermitian matrix of rank $r$ changes at most $r$ of the eigenvalues. (In fact, it changes exactly $r$ eigenvalues, as can be seen from a spectral decomposition.)
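For the rank-$1$ case the count can be verified directly, since $(I+zz^*)x = x$ for every $x\perp z$ while $(I+zz^*)z = (1+z^*z)z$, so exactly one eigenvalue moves away from $1$. A small sketch (Python, not in the original post):

```python
def matvec_I_plus_zzT(z, x):
    """Apply (I + z z^T) to x, for real vectors given as lists."""
    zx = sum(zi * xi for zi, xi in zip(z, x))
    return [xi + zx * zi for zi, xi in zip(z, x)]

z = [1.0, 2.0, 2.0]                   # z^T z = 9
x = [2.0, -1.0, 0.0]                  # x is orthogonal to z
y_perp = matvec_I_plus_zzT(z, x)      # eigenvalue 1: x is unchanged
y_z = matvec_I_plus_zzT(z, z)         # eigenvalue 1 + z^T z = 10
print(y_perp)   # [2.0, -1.0, 0.0]
print(y_z)      # [10.0, 20.0, 20.0]
```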
Finally, if $B$ has rank $r$ then $\lambda_{r+1}(B) \le 0$ and $\lambda_{n-r}(B) \ge 0$ and so taking $j = r+1$ in (3) and $j = n-r$ in (4) gives
$\notag \begin{aligned} \lambda_{i+r}(A+B) &\le \lambda_i(A), ~~\qquad\qquad i = 1\colon n-r, \\ \lambda_i(A) &\le \lambda_{i-r}(A + B), ~~\quad i = r+1\colon n. \end{aligned}$
Cauchy Interlace Theorem
The Cauchy interlace theorem relates the eigenvalues of successive leading principal submatrices of a Hermitian matrix. We denote the leading principal submatrix of $A$ of order $k$ by $A_k = A(1\colon k, 1\colon k)$.
Theorem (Cauchy).
For a Hermitian $A\in\mathbb{C}^{n\times n}$,
$\notag \lambda_{i+1}(A_{k+1}) \le \lambda_i(A_k) \le \lambda_i(A_{k+1}), \quad i = 1\colon k, \quad k=1\colon n-1.$
The theorem says that the eigenvalues of $A_k$ interlace those of $A_{k+1}$ for all $k$. Two immediate implications are that (a) if $A$ is Hermitian positive definite then so are all its leading principal submatrices and (b) appending a row and a column to a Hermitian matrix does not decrease the largest eigenvalue or increase the smallest eigenvalue.
Since eigenvalues are unchanged under symmetric permutations of the matrix, the theorem can be reformulated to say that the eigenvalues of any principal submatrix of order $n-1$ interlace those of $A$. A generalization to principal submatrices of order $n-\ell$ is given in the next result.
Theorem.
If $B$ is a principal submatrix of order $n-\ell$ of a Hermitian $A\in\mathbb{C}^{n\times n}$ then
$\notag \lambda_{i+\ell}(A) \le \lambda_i(B) \le \lambda_i(A), \quad i=1\colon n-\ell.$
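As a concrete check (Python, not in the original post), take the tridiagonal matrix with $2$ on the diagonal and $1$ on the off-diagonals. The eigenvalues of the order-$n$ version are known in closed form, $2+2\cos\bigl(k\pi/(n+1)\bigr)$, and its leading $2\times2$ principal submatrix is exactly the order-$2$ version, so interlacing can be verified from the formulas:

```python
import math

def tridiag_eigs(n):
    """Eigenvalues (descending) of the n x n tridiagonal matrix
    with 2 on the diagonal and 1 on the sub/superdiagonals."""
    return [2 + 2 * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]

lam3 = tridiag_eigs(3)   # eigenvalues of A_3: 2 + sqrt(2), 2, 2 - sqrt(2)
lam2 = tridiag_eigs(2)   # eigenvalues of the leading submatrix A_2: 3, 1
# Cauchy interlacing: lam3[i+1] <= lam2[i] <= lam3[i]
ok = all(lam3[i + 1] <= lam2[i] <= lam3[i] for i in range(2))
print(lam3, lam2, ok)
```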
Majorization Results
It follows by taking $x$ to be a unit vector $e_i$ in the formula $\lambda_1 = \max_{x\ne0} x^*Ax/(x^*x)$ that $\lambda_1 \ge a_{ii}$ for all $i$. And of course the trace of $A$ is the sum of the eigenvalues: $\sum_{i=1}^n a_{ii} = \sum_{i=1}^n \lambda_i$. These relations are the first and last in a sequence of inequalities relating sums of eigenvalues to sums of diagonal elements obtained by Schur in 1923.
Theorem (Schur).
For a Hermitian $A\in\mathbb{C}^{n\times n}$,
$\notag \displaystyle\sum_{i=1}^k \lambda_i \ge \displaystyle\sum_{i=1}^k \widetilde{a}_{ii}, \quad k=1\colon n,$
where $\{\widetilde{a}_{ii}\}$ is the set of diagonal elements of $A$ arranged in decreasing order: $\widetilde{a}_{11} \ge \cdots \ge \widetilde{a}_{nn}$.
These inequalities say that the vector $[\lambda_1,\dots,\lambda_n]$ of eigenvalues majorizes the ordered vector $[\widetilde{a}_{11},\dots,\widetilde{a}_{nn}]$ of diagonal elements.
An interesting special case is a correlation matrix, a symmetric positive semidefinite matrix with unit diagonal, for which the inequalities are
$\notag \lambda_1 \ge 1, \quad \lambda_1+ \lambda_2\ge 2, \quad \dots, \quad \lambda_1+ \lambda_2 + \cdots + \lambda_{n-1} \ge n-1,$
and $\lambda_1+ \lambda_2 + \cdots + \lambda_n = n$. Here is an illustration in MATLAB.
>> n = 5; rng(1); A = gallery('randcorr',n);
>> e = sort(eig(A)','descend'), partial_sums = cumsum(e)
e =
2.2701e+00 1.3142e+00 9.5280e-01 4.6250e-01 3.6045e-04
partial_sums =
2.2701e+00 3.5843e+00 4.5371e+00 4.9996e+00 5.0000e+00
Ky Fan (1949) proved a majorization relation between the eigenvalues of $A$, $B$, and $A+B$:
$\notag \displaystyle\sum_{i=1}^k \lambda_i(A+B) \le \displaystyle\sum_{i=1}^k \lambda_i(A) + \displaystyle\sum_{i=1}^k \lambda_i(B), \quad k = 1\colon n.$
For $k = 1$, the inequality is the same as the upper bound of (1), and for $k = n$ it is an equality: $\mathrm{trace}(A+B) = \mathrm{trace}(A) + \mathrm{trace}(B)$.
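The $k=1$ and $k=n$ cases of Ky Fan's inequality can be checked with the same $2\times2$ closed form used earlier (Python sketch, not in the original post):

```python
import math

def eig_sym2(a, b, c):
    """Eigenvalues (descending) of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean, rad = (a + c) / 2, math.hypot((a - c) / 2, b)
    return [mean + rad, mean - rad]

A = (1.0, 2.0, -3.0)
B = (0.5, -1.0, 4.0)
S = tuple(x + y for x, y in zip(A, B))       # A + B, entrywise

lA, lB, lS = eig_sym2(*A), eig_sym2(*B), eig_sym2(*S)
# k = 1: the largest eigenvalue is subadditive
print(lS[0], lA[0] + lB[0])
# k = n: the full sum is the trace, so equality holds
print(sum(lS), sum(lA) + sum(lB))
```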
Ostrowski’s Theorem
For a Hermitian $A$ and a nonsingular $X$, the transformation $A\to X^*AX$ is a congruence transformation. Sylvester’s law of inertia says that congruence transformations preserve the inertia. A result of Ostrowski (1959) goes further by providing bounds on the ratios of the eigenvalues of the original and transformed matrices.
Theorem (Ostrowski).
For a Hermitian $A\in \mathbb{C}^{n\times n}$ and $X\in\mathbb{C}^{n\times n}$,
$\lambda_k(X^*AX) = \theta_k \lambda_k(A), \quad k=1\colon n,$
where $\lambda_n(X^*X) \le \theta_k \le \lambda_1(X^*X)$.
If $X$ is unitary then $X^*X = I$ and so Ostrowski’s theorem reduces to the fact that a congruence with a unitary matrix is a similarity transformation and so preserves eigenvalues. The theorem shows that the further $X$ is from being unitary the greater the potential change in the eigenvalues.
Ostrowski’s theorem can be generalized to the situation where $X$ is rectangular (Higham and Cheng, 1998).
Interrelations
The results we have described are strongly interrelated. For example, the Courant–Fischer theorem and the Cauchy interlacing theorem can be derived from each other, and Ostrowski’s theorem can be proved using the Courant–Fischer Theorem.
2 thoughts on “Eigenvalue Inequalities for Hermitian Matrices”
1. Daniel Kressner says:
This is a very nice overview; thanks! Tangential to what Parlett is stating, most if not even all of these results can be concluded from the Eckart–Young–Mirsky theorem in the spectral norm.
"domain": "nhigham.com",
"url": "https://nhigham.com/2021/03/09/eigenvalue-inequalities-for-hermitian-matrices/?replytocom=25185",
"openwebmath_score": 0.9866310358047485,
"openwebmath_perplexity": 324.5997061624111,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9943580928375232,
"lm_q2_score": 0.8615382058759129,
"lm_q1q2_score": 0.8566774873014341
} |
https://math.stackexchange.com/questions/1964876/the-comparison-test-for-series

# The Comparison Test for Series
I'm doing a homework problem - it asks us to show whether the series: $$\sum_{n=1}^{\infty}\frac{n^n}{n!}$$ converges or diverges. I looked at a graph of the sequence component $\big(\frac{n^n}{n!} \big)$ and saw it continued to increase. I then considered the series: $$\sum_{n=1}^{\infty}\frac{1}{n}$$ which diverges by the $p$-series test.
But, $$\frac{1}{n}\leq \frac{n^n}{n!},~ \forall~n\geq1$$ which would then mean that the first series I showed diverges by the comparison test.
My problem is that this seems too simple? Can I compare one series to any series or does the comparison series have to meet some certain requirements (besides those I've addressed).
Cheers.
• Looks fine to me. – Jacky Chong Oct 12 '16 at 6:15
• You could also do comparison test with the divergent series $\sum_{n=1}^\infty 1$. – angryavian Oct 12 '16 at 6:16
• That seems so simple, so I do not need to compare a series with changing exponents and factorials to a similar one? – Wharf Rat Oct 12 '16 at 6:17
• Is the necessary condition for convergence ($n^n/n!\to 0$) satisfied here? – A.Γ. Oct 12 '16 at 6:20
Your reasoning is fine. In fact, this can be shown to diverge with the simplest / most intuitive test there is: If $\ \displaystyle \sum_{n=1}^\infty a_n$ is a convergent series, it is a necessary condition that $\displaystyle \lim_{n \rightarrow \infty} a_n = 0$. In this scenario, $n^n \geq n!$ for all $n \geq 1$, implying $\displaystyle \frac{n^n}{n!} \geq 1$ for all such $n$, so this necessary condition does not hold.
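A quick look at the first few terms (Python, not part of the original thread) confirms that the necessary condition fails — the terms never fall below $1$ and in fact grow:

```python
import math

terms = [n ** n / math.factorial(n) for n in range(1, 10)]
print(terms[:4])   # 1.0, 2.0, 4.5, 10.666...
# every term is at least 1, so a_n cannot tend to 0
print(all(t >= 1 for t in terms))
# in fact a_n >= n (cf. the comment below), so the terms increase
print(all(terms[i] < terms[i + 1] for i in range(len(terms) - 1)))
```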
I looked at a graph of the sequence component $\big(\frac{n^n}{n!} \big)$ and saw it continued to increase.
If the series converges then $n^n/n! \to 0$, but if you use Stirling's approximation: $n! \approx \sqrt{2\pi n} \cdot \left(\frac{n}{e}\right)^n$
$$\lim_{n} \frac{n^n}{n!} = \lim_n \frac{n^n}{\sqrt{2 \pi n} \cdot\dfrac{n^n}{e^n}} = \lim_n \frac{e^n}{\sqrt{2 \pi n}} = \frac{1}{\sqrt{2\pi}} \lim_n \frac{e^n}{\sqrt{n}} = \frac{1}{\sqrt{2\pi }} \lim_n \left(\frac{e^{2n}}{n}\right)^{\frac{1}{2}} \to \infty$$
• No need for Stirling here, we have $$\frac{n^n}{n!}=\frac{n\cdot n\cdots n}{1\cdot2\cdots(n-1)\cdot n}>\frac{n\cdot 2\cdots n}{1\cdot2\cdots(n-1)\cdot n}=n$$ – AD. Oct 12 '16 at 8:04
https://math.stackexchange.com/questions/640773/integrating-two-equations-that-equal-what-happens-to-the-constant-on-one-of-the

# Integrating two equations that equal, what happens to the constant on one of the sides?
In class, we were talking about Newton's second law and how to integrate.
$\int(g)dt = \int(y''(t))dt \implies gt + C = y'(t)$
I am confused about why the right hand side of the equation doesn't get a constant. After asking the professor, he said that it was because the two constants would cancel each other out. But I still don't understand why that should prevent us from writing a constant on the right side.
• Recall, you can add constants together into a single constant (as was done with $C$). Also, you can define a $C_1$ and $C_2$ - one to each side. You can also show as being on the RHS of the equation. All will produce the same result. The choice is typically one of convenience to make solving easiest. – Amzoti Jan 16 '14 at 18:13
• As an example, take $y' = x y$. Solve it with a constant on one side after separation and integration. Then, have a constant on each side. What do you notice when you solve for $y$ in both approaches? $y = c e^{x^2/2}$. – Amzoti Jan 16 '14 at 18:21
• I think I understand, thank you! – user121860 Jan 16 '14 at 18:29
• I haven't taken calculus any calculus courses for about two years now and I've forgotten some of the rules, such as combining the constants. The professor told me that when we integrated both constants would be C1, ie. the same, and as such would end up canceling each other out by way of subtraction if we wanted to integrate more. So he wrote C1 on the left hand side, but it seems to make sense if he actually wanted to write C (the inclusion of both constants). – user121860 Jan 16 '14 at 18:33
• Though if you don't mind, examples are always helpful. Thank you for your time and help! – user121860 Jan 16 '14 at 18:33
Recall, you can add constants together into a single constant (as was done with C).
Also, you can define a $C_1$ and $C_2$ - one to each side. You can also show the constant as being on the RHS of the equation. All will produce the same result. The choice is typically one of convenience to make solving easiest.
Example: Consider the separable equation:
$$y' = x y$$
After separation, we can integrate both sides as:
$$\int \dfrac{1}{y}~dy = \int x~dx$$
Approach 1: Single constant (we could have also put $C$ on the LHS - try this)
$$\ln y = \dfrac{x^2}{2} + C$$
We take the exponential of each side and have:
$$y = e^{x^2/2 + C} = e^{x^2/2}~e^C = w~e^{x^2/2}$$
Note: $w = e^C$, which is just some constant (totally arbitrary).
Approach 2: Constant on each side
$$\ln y + c_1 = \dfrac{x^2}{2} + c_2$$
Taking exponential of both sides:
$$e^{\ln y + c_1} = e^{x^2/2 + c_2}$$
The RHS splits as above, and for the LHS we have (writing $q = e^{c_1}$, just another arbitrary constant):

$$e^{\ln y}~e^{c_1} = q\,y$$

Now we write:

$$q\,y = e^{c_2}~e^{x^2/2}$$

Dividing by $q$ and calling the constant $e^{c_2}/q = w$, we have:
$$y(x) = w~e^{x^2/2}$$
In both approaches, we just get some constant $w$.
Now, if you were provided with an initial condition like $y(0) = 2$, you would plug in $x=0$ and see that $w = 2$.
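Either way, the solution family is $y=w\,e^{x^2/2}$ with a single arbitrary constant fixed by the initial condition. As a numerical cross-check (Python, not part of the original answer; it uses a textbook Runge–Kutta step, nothing from the thread), integrating $y'=xy$ from $y(0)=2$ reproduces $2e^{x^2/2}$:

```python
import math

def solve(x_end, n):
    """Integrate y' = x*y from y(0) = 2 with the classical RK4 method."""
    f = lambda x, y: x * y
    h = x_end / n
    x, y = 0.0, 2.0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

numeric = solve(1.0, 1000)
exact = 2 * math.exp(0.5)      # w e^{x^2/2} at x = 1, with w = 2 from y(0) = 2
print(numeric, exact)
```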
https://math.stackexchange.com/questions/1698179/maximise-population-coverage-subject-to-budget-constraint

# Maximise population coverage subject to budget constraint
Let $t_i = 1$ if transmitter $i$ is to be constructed and $0$ otherwise, and let $c_j = 1$ if community $j$ is covered and $0$ otherwise.
Obj func:
Max
$$z = [10, 15, ..., 10] \cdot c$$
s.t.
1. the budget constraint
$$[3.6, 2.3, ..., 3.10] \cdot t \le 15$$
2. If community $j$ is covered, it must be covered by at least one constructed transmitter $i$:
eg if $c_1$ is covered, then $t_1$ or $t_3$ is constructed:
$$c_1 \to (t_1 \bigvee t_3)$$
$$\iff \neg c_1 \bigvee (t_1 \bigvee t_3)$$
$$\iff 1 - c_1 + t_1 + t_3 \ge 1$$
$$\iff c_1 \le t_1 + t_3$$
Similarly, we have:
$$c_2 \le t_1 + t_2$$
$$\vdots$$
$$c_{15} \le t_7$$
3. If a transmitter $i$ is constructed, at least one community $j$ is covered:
eg if $t_1$ is constructed, then $c_1$ and $c_2$ are covered:
$$t_1 \to (c_1 \bigwedge c_2)$$
$$\iff \neg t_1 \bigvee (c_1 \bigwedge c_2)$$
$$\iff (\neg t_1 \bigvee c_1) \bigwedge (\neg t_1 \bigvee c_2)$$
$$\iff 1 - t_1 + c_1 \ge 1 \ \text{and} \ 1 - t_1 + c_2 \ge 1$$
$$\iff c_1 \ge t_1 \ \text{and} \ c_2 \ge t_1$$
Similarly, we have:
$$c_2, c_3, c_5 \ge t_2$$
$$c_1, c_7, c_9, c_{10} \ge t_3$$
$$\vdots$$
$$c_{12}, c_{13}, c_{14}, c_{15} \ge t_7$$
Is that right?
From Chapter 3 here.
• Note if you choose transmitters $1, 2$, then there is overlap of communities, and your function will double count the population of community $2$. – Macavity Mar 15 '16 at 5:05
• @Macavity Oh thanks. What to do then? $z'=z-y \cdot x$ where $y_i$ corrects for double or triple counting? – BCLC Mar 15 '16 at 5:10
• @Macavity I think Kuifje found a way around the double count. Do you agree? – BCLC Mar 17 '16 at 17:54
Here is a way to get the constraints right. Define $x_i$ exactly as you have done. Define $p_i$ to be an indicator $1/0$ depending on whether community $i$ has been covered or not.
The budget constraint remains as you have set, and the objective function is now of form $\max z = 10p_1+15p_2 + \cdots + 10p_{15}$
Now how do we ensure that $p_i$ is set correctly, for any given choice of $\{x_i\}$? One way is to note that the objective function gives positive weight to $p_i$, so define a constraint for each population based on transmitters which could cover it - e.g. for the first one : $p_1 \le x_1+x_3$, for the second $p_2 \le x_1 + x_2$ etc. This will force the population indicator to turn $0$ if none of the relevant transmitters are selected.
• Thanks Macavity. Edited. How is it? – BCLC Mar 15 '16 at 6:08
• Your $p_i$ needs to be replaced by $c_i$ in the objective function and $x_i$ has no role to play in the objective. – Macavity Mar 15 '16 at 6:10
• Thanks. We answered in class, and we sort of got different answers. Prof is still going to think about it. How do you find sol? – BCLC Mar 15 '16 at 11:31
• Using an $M$ is also a good way, though not needed here. Try out the Integer Programs and see how it works. – Macavity Mar 15 '16 at 11:43
• @BCLC Yes, your try 4 seems what I meant, and it works. – Macavity Mar 20 '16 at 18:31
This is a classical set covering problem.
Let $y_i$ be a binary variable that equals $1$ if and only if transmitter $i=1,\cdots,7$ is built, and $x_j$ another binary that equals $1$ if community $j$ is covered. Let $p_j$ be the population of community $j=1,\cdots,15$.
You want to maximize the total coverage: $$\mbox{Maximize }Z=\sum_{j=1 }^{15}p_jx_j$$ subject to budget constraints: $$\sum_{i=1}^7c_iy_i\le 15 \\$$ To link the variables, proceed like Macavity suggests: $$x_j\le \sum_{i\;|\;i \;covers\; j}y_i\quad \forall j=1,\cdots,15\\ y_i,x_j\in \{0,1\}\quad \forall i=1,\cdots,7\;\forall j=1,\cdots,15$$
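Since there are only $2^7$ subsets of transmitters, the model can also be sanity-checked by brute force. Below is a small Python sketch (not in the original; the costs, coverage sets, and populations are invented for illustration, since the chapter's data is abbreviated in the question). It maximizes the covered population subject to the budget, counting each community once regardless of how many selected transmitters reach it:

```python
from itertools import combinations

# Illustrative (made-up) data: transmitter costs, communities covered,
# community populations, and a budget.
cost  = {1: 3.6, 2: 2.3, 3: 4.1, 4: 3.1}
cover = {1: {1, 2}, 2: {2, 3, 5}, 3: {1, 4}, 4: {3, 4, 5}}
pop   = {1: 10, 2: 15, 3: 8, 4: 12, 5: 9}
budget = 8.0

best_value, best_set = -1, set()
for r in range(len(cost) + 1):
    for subset in combinations(cost, r):
        if sum(cost[i] for i in subset) > budget:
            continue                                  # over budget
        covered = set().union(*(cover[i] for i in subset))
        value = sum(pop[j] for j in covered)          # no double counting
        if value > best_value:
            best_value, best_set = value, set(subset)
print(best_set, best_value)
```

Taking the union of the coverage sets is exactly what the linking constraints $x_j\le\sum y_i$ achieve in the integer program.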
• Sorry I was editing while you asked the question. Variables $x_j$ will make sure there is no double counting, and are activated as soon as one of the variables $y_i\; |\;i$ covers $j$ is. – Kuifje Mar 17 '16 at 18:03
• No apologies needed. I'm the one asking a favour. It's really 'Try 2' and not 'Try 1' ? My prof seems to disagree with Macavity – BCLC Mar 17 '16 at 18:14
• What do you mean by 'Try 1' and 'Try 2'? I think Macavity is right. He basically explained the nature of my second constraint. What specific point does your professor disagree with? – Kuifje Mar 17 '16 at 18:25
• I disagree with "Try1". Try 1 (first equation) implies that if communities $1$ and $2$ are covered, than transmitter $1$ is built. This is not necessarily true, as they can be covered by transmitters $3$ and $2$ for example. It is the other way around: if community $1$ is covered, than at least one transmitter among those that cover the community must be built. – Kuifje Mar 18 '16 at 0:34
• yes indeed, for the same reasons. – Kuifje Mar 18 '16 at 14:30
https://math.stackexchange.com/questions/462199/why-does-factoring-eliminate-a-hole-in-the-limit

# Why does factoring eliminate a hole in the limit?
$$\lim _{x\rightarrow 5}\frac{x^2-25}{x-5} = \lim_{x\rightarrow 5} (x+5)$$
I understand that to evaluate a limit that has a zero ("hole") in the denominator we have to factor and cancel terms, and that the original limit is equal to the new and simplified limit. I understand how to do this procedurally, but I'd like to know why this works. I've only been told the methodology of expanding the $x^2-25$ into $(x-5)(x+5)$, but I don't just want to understand the methodology which my teacher tells me to "just memorize", I really want to know what's going on. I've read about factoring in abstract algebra, and about irreducible polynomials (just an example...), and I'd like to get a bigger picture of the abstract algebra in order to see why we factor the limit and why the simplified is equal to the original if it's missing the $(x-5)$, which has been cancelled. I don't want to just memorize things, I would really like to understand, but I've been told that this is "just how we do it" and that I should "practice to just memorize the procedure."
I really want to understand this in abstract algebra terms, please elaborate. Thank you very much.
• Very laudable that you want to understand instead of "just memorize". Aug 7, 2013 at 19:21
• The expression $\frac{(x+5)(x-5)}{x-5}=x+5$, just so long as $x\neq 5$, since $\frac{(x+5)(x-5)}{x-5}$ has the bad fortune of not being defined at $x=5$. Aug 7, 2013 at 19:29
• I don't understand what your question is. Are you asking why the function $\frac{x^2-25}{x-5}$ is undefined at $x=5$? or why we're allowed to cancel $(x-5)$ from the top and bottom? or why the limits before and after cancellation are equal? Aug 7, 2013 at 20:09
• @BlueRaja-DannyPflughoeft yes, all of those questions I'm asking :) Aug 7, 2013 at 22:35
• To answer the question - I've only been told the methodology of expanding the x2−25 into (x−5)(x+5). It goes like this: (in reverse) (x-5)(x+5) = x(x+5)-5(x+5) = x^2+5x-5x-25 = x^2-25 So, working above the other way takes x^2-25 back to (x-5)(x+5) Aug 8, 2013 at 22:52
First, and by definition, when dealing with
$$\lim_{x\to x_0}f(x)$$
we must assume $\,f\,$ is defined in some neighborhood of $\,x_0\,$ except, perhaps, at $\,x_0\,$ itself; from here, in the process of taking the limit we have the right and the duty to assume that $\,x\,$ approaches $\,x_0\,$ in any possible way but is never equal to it.
Thus, since in our case we always have $\,x\ne x_0=5\,$ during the limit process, we can algebraically cancel the factor $\,x-5\,$ for the whole process:
$$\frac{x^2-25}{x-5}=\frac{(x+5)\color{red}{(x-5)}}{\color{red}{x-5}}=x+5\xrightarrow[x\to 5]{}10$$
The above process shows that the original function behaves exactly as the straight line $\,y=x+5\,$ except at the point $\,x=5\,$ , where there exists "a hole", as you mention.
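This behaviour is easy to see numerically (Python sketch, not part of the original answer): away from $x=5$ the quotient agrees with $x+5$ and its values head to $10$, yet the point $x=5$ itself still fails:

```python
def f(x):
    return (x * x - 25) / (x - 5)   # undefined exactly at x = 5

# approach 5 from both sides; the values tend to 10
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(f(5 + h), f(5 - h))

try:
    f(5)
    defined_at_5 = True
except ZeroDivisionError:
    defined_at_5 = False
print(defined_at_5)   # False: the "hole" is still there
```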
• said hole can, of course be filled in by plugging it with the limit ... Aug 13, 2013 at 22:33
• Some definitions of the limit do not require that we consider points that are not equal to $x_0$. How do you handle these cases then? The definition goes as $\lim f(x) = L$ provided for each $\epsilon\gt 0$ there is some $\delta \gt 0$ such that $\vert x - x_0\vert \lt \delta\ldots$ and so on. Sep 9, 2016 at 15:50
• @AlexOrtiz The epsilon delta definition of the limit for general metric spaces has $0 < d(x,c) < \delta$. Note the first inequality, which implies $x \neq c$. Nov 21, 2020 at 0:19
This image of mine seems apropos:
In the case of $\lim_{x\to 5} \frac{x^2-25}{x-5}$, the message here is: Away from $x=5$, the function $\frac{x^2-25}{x-5}$ is completely identical to $x+5$; thus, what we expect to find as we approach $x=5$ is the value $5+5$. This anticipated value is what a limit computes.
The fact that the original function isn't defined at $x=5$ is immaterial. Walley World may be closed for repairs when you arrive, but that doesn't mean you and your dysfunctional family didn't spend an entire cross-country road trip anticipating all the fun you'd have there.
• Extremely helpful answer as always, Blue. You're incredible at understanding what people are really asking and providing strong intuition. Thank you for your contribution to this site! Jun 1, 2015 at 15:55
• @Blue Your answer made me think something: Can we always find another function which - away from a certain value - is completely identical to the former function? I mean, what is done in this example is the following: We have an algebraic expression and we can find another algebraic expression that have that property. Jul 6, 2017 at 1:21
• BTW: I read your image as the jingles in this video. Jul 6, 2017 at 1:21
• Super answer...the craziest I ever read in here Jul 13, 2019 at 22:17
• @Blue: Shameless plug: A downloadable poster version of the "Journey/Destination" image is available at my Etsy shop, and a t-shirt is available from Spring. (I'll delete this comment if there are objections.)
– Blue
Oct 12, 2021 at 10:00
Let's consider a simpler example first. Consider the function $f(x) = \frac{2x}x$. This says you take some number $x$, multiply by 2, then divide by the original number $x$. Obviously the answer is always 2, right? Except that when $x$ is zero, the division is forbidden and there is no answer at all. But for every $x$ except 0, we have $\frac{2x}x = 2$. In particular, for values of $x$ close to, but not equal to 0, we have $\frac{2x}x = 2$.
The function $\frac{x^2-25}{x-5}$ is similar, just a little more complicated. Calculating $x^2-25$ always gives you the same as $(x-5)(x+5)$. That is, if you take $x$, square it, and subtract 25, you always get the same number as if you take $x$, add 5 and subtract 5, and then multiply the two results. So we can replace $x^2-25$ with $(x+5)(x-5)$ because they always give the same number regardless of what you start with; they are two ways of getting to the same place. And then we see that $$\frac{x^2-25}{x-5} = \frac{(x+5)(x-5)}{x-5} = x+5$$
except that if $x-5$ happens to be zero (that is, if $x=5$) the division by zero is forbidden and we get nothing at all. But for any other $x$ the result of $\frac{x^2-25}{x-5}$ is always exactly equal to $x+5$. In particular, for values of $x$ close to, but not equal to 5, we have $\frac{x^2-25}{x-5} = x+5$.
The limit $$\lim_{x\to 5} \ldots$$ asks what happens to some function when $x$ close to, but not exactly equal to 5. And while this function is undefined for $x=5$, because to calculate it you would have to divide by zero, it is perfectly well-behaved for other values of $x$, and in particular for values of $x$ close to 5. For values of $x$ close to 5 it is equal to $x+5$, and so for values of $x$ close to 5 it is close to 10. And that is exactly what the limit is calculating.
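The cancellation argument above is easy to check numerically. A small sketch (plain Python, no libraries assumed) comparing $\frac{x^2-25}{x-5}$ with $x+5$ near $x=5$:

```python
# f(x) = (x^2 - 25)/(x - 5) is undefined at x = 5, but agrees with
# g(x) = x + 5 everywhere else -- so both approach 10 as x -> 5.
def f(x):
    return (x**2 - 25) / (x - 5)

def g(x):
    return x + 5

for h in (0.1, 0.01, 0.001, 1e-6):
    for x in (5 + h, 5 - h):               # approach 5 from both sides
        assert abs(f(x) - g(x)) < 1e-6     # identical away from x = 5
        assert abs(f(x) - 10) <= h + 1e-6  # |f(x) - 10| = |x - 5|
```

Nothing special happens as $h$ shrinks: $f$ simply tracks $x+5$, which is exactly the value the limit computes.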
I think that what confuses you is the difference between "solving the algebraic expression", and "finding the limit". Given:
$$f_1=\frac{x^2-25}{x-5} \quad f_2 = (x+5)$$
Then, $f_1$ and $f_2$ are most definitely NOT the same function. This is because they have different domains: 5 is not a member of the domain of $f_1$, but it is in the domain of $f_2$.
However, when we go from: $$\lim _{x\rightarrow 5}\frac{x^2-25}{x-5} \quad to \quad \lim _{x\rightarrow 5}\frac{(x-5)(x+5)}{x-5} \quad to \quad \lim_{x\rightarrow 5} (x+5)$$
We are not saying that the expressions inside the limits are equal; maybe they are, maybe they are not. What we are saying is that they have the same limit. Totally different statement.
Above, the transformation of the second expression to the third one allows us to find a different function for which a) we know that the limit is the same, and b) we know how to trivially calculate that limit.
The big question, then: what transformations can I make to the function $f_1$ so that the limit stays the same? I think this is usually poorly explained in introductory courses -- a lot of hand-waving going on.
Obviously you can do any algebraic manipulation that leaves $f_1$ unchanged. You can also make any manipulation that removes and/or introduces discontinuities (points for which the function does not exist), as long as the new function stays continuous for an arbitrarily small neighborhood around $a$ (except possibly at $a$ itself). Your example is a case of such a transformation.
Here I'm myself cheating because I'm not defining 'continuity' for you. I'm sorry; please use an intuition of what continuous means ("no holes, no jumps"), until you are presented with a formal one.
More complex transformations exist, but they have to be justified individually. You'll get to them eventually.
• you clarified a lot, thanks Aug 8, 2013 at 20:08
• Sixth line from the end - "You" will be "Your".
– MrAP
Jul 5, 2017 at 19:18
• @MrAP, thanks. You should feel free to post an edit directly Jul 6, 2017 at 1:04
Here's the basic idea: You're given a rational function, $f\colon \Bbb R \setminus \{5\}\to \Bbb R$, which is continuous everywhere in its domain.
You want to find the limit of that function at $5$.
One way to do that is to construct a continuous extension of that function, $g\colon \Bbb R \to \Bbb R$, such that $g(x) = f(x)$ whenever $x$ is in the domain of $f$. Then $$\lim_{x\to 5}f(x) = \lim_{x\to 5} g(x) = g(5).$$
In this case, factoring and cancelling accomplishes that objective.
• i really like this idea of a "continuous extension", what is the formal name of this rule/theorem? I'd like to read more about it. Thanks for the answer @dfeuer Aug 8, 2013 at 20:07
• @user4150: Try this google search: "removable discontinuity" "continuous extension" Aug 8, 2013 at 21:24
• @dfeuer Typo? It should be $$\lim_{x\to 5}f(x) = \lim_{x\to 5} g(x) = g(\color{royalblue}5).$$ Oct 1, 2014 at 8:06
• @Hakim, right you are. Fixed. Oct 1, 2014 at 9:55
The chill pill you are looking for is
If $f_1$ = $f_2$ except at $a$ then $\lim _{x{\rightarrow}a}f_1(x)=\lim_{x{\rightarrow}a}f_{2}(x)$
• Yes! You made my day! Apr 14, 2016 at 16:42
• Assuming both functions to be continuous at $x=a$. May 26, 2017 at 11:45
Intuitively, we can start the other way round, by simply considering $\lim_{x\rightarrow 5} (x+5)$ which we're all agreed we understand. Now consider, independently,
$$\frac{x-5}{x-5}$$
this is obviously 1 everywhere, except where it's undefined at $x = 5$. So, what would you expect to be the effect of multiplying the two? It's just multiplying by 1, except that it also introduces a hole at $x = 5$.
To understand this more broadly, it is convenient to check L'Hôpital's rule, which basically boils down to this:
Given a function $f(x)=\frac{g(x)}{h(x)}$, where for some $x=x_0$ both $g(x_0)=0$ and $h(x_0)=0$, the actual value $f(x_0)$ can be obtained as*
$$\lim_{x\to x_0}f(x) = \lim_{x\to x_0}\frac{g'(x)}{h'(x)}$$
(where the prime denotes differentiation with respect to $x$). Note that if $g'(x_0)=0=h'(x_0)$, you can re-apply the same rule again and again until either the numerator or the denominator is not 0. In your example, we get
$$\lim_{x\to 5}\frac{x^2-25}{x-5} = \lim_{x\to 5}\frac{2x}{1}=10=\lim_{x\to5}\,(x+5)$$
To get an even deeper understanding, the rule's proof might be an interesting read.
*: This is only one case; you can also have $|g(x_0)|=|h(x_0)|=\infty$, but not mixed cases such as $g(x_0)=0$ and $|h(x_0)|=\infty$
• I don't think it's appropriate to bring up L'Hôpital in a question like this. To justify the rule, the reader needs to be familiar with derivatives, differentiability, the sandwich theorem and Cauchy's mean value theorem just to get started. The OP is asking for understanding, not higher-level magic hand-waving. L'Hôpital's rule is a great practical tool, but it's also wielded too often as a substitute for thinking. Aug 8, 2013 at 11:53
• @EuroMicelli Woah there, I didn't intend to hand-wave magically, I just wanted to provide a different answer that, if one is interested enough, can not only explain the removable singularities in polynomial fractions but also in more complicated cases. Though I agree one shouldn't blindly apply Hôpital to just anything if there is another maybe more elegant way Aug 8, 2013 at 13:04
• Also you don't obtain the $f(x_0)$ value - you get the $f(x_0)$ value provided it is defined and $f$ is continuous. The OP function is not defined at $5$ but its limit is. Aug 9, 2013 at 7:05
• @MaciejPiechotka True, strictly speaking the example function is equivalent to $$f(x) = \begin{cases} x+5 & x\neq 5 \\ \text{undefined} & x = 5\end{cases}$$ but the singularity at $x=5$ is removable as in $$f(x)= \begin{cases}\tfrac{x^2-25}{x-5} & x\neq 5 \\ 10 & x=5 \end{cases} \Rightarrow f(x) \equiv x+5$$ But you're right that one should always keep removed singularities in mind as they may have severe influence on e.g. applications in Physics (though I can't tell one ad hoc) Aug 9, 2013 at 7:10
One of the definitions of $\lim_{x \to A} f(x) = B$ is:
$$\forall_{\varepsilon > 0}\exists_{\delta > 0}\forall_{0 < \left|x - A\right| < \delta}\left|f(x) - B\right| < \varepsilon$$
The intuition is that we can achieve arbitrary 'precision' (put in bounds on y axis) provided we get close enough (so we get the bounds on x axis). However the definition does not say anything about the value at the point $f(A)$ which can be undefined or have arbitrary value.
One method of proving the limit is to find $\delta(\varepsilon)$ directly. Hence we have the following formula (well defined as $x\neq 5$):
$$\forall_{0 < \left|x - 5\right| < \delta}\left|\frac{x^2-25}{x-5} - 10\right| < \varepsilon$$
As $x\neq 5$ (in such case $\left|x - 5\right| = 0$) we can factor the expression out
$$\forall_{0 < \left|x - 5\right| < \delta} \left|x + 5 - 10\right| < \varepsilon$$ $$\forall_{0 < \left|x - 5\right| < \delta} \left|x - 5 \right| < \varepsilon$$
Taking $\delta(\varepsilon) = \varepsilon$ we find that:
$$\forall_{\varepsilon > 0}\exists_{\delta > 0}\forall_{0 < \left|x - 5\right| < \delta}\left|\frac{x^2-25}{x-5} - 10\right| < \varepsilon$$
The key thing is that we don't care about the value at the limit point.
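The choice $\delta(\varepsilon) = \varepsilon$ derived above can be sanity-checked numerically. A sketch (plain Python; the random sampling scheme is just for illustration):

```python
import random

# For f(x) = (x^2 - 25)/(x - 5), check that |f(x) - 10| < eps whenever
# 0 < |x - 5| < delta, with delta = eps as derived above.
random.seed(0)
for eps in (0.5, 0.1, 0.001):
    delta = eps
    for _ in range(1000):
        # sample strictly inside the delta-band (the 0.99 factor keeps us
        # clear of floating-point edge effects at the boundary)
        x = 5 + random.uniform(-0.99, 0.99) * delta
        if x == 5:
            continue                       # the definition excludes x = 5
        fx = (x**2 - 25) / (x - 5)
        assert abs(fx - 10) < eps
```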
Look at this example:
$$\lim _{\text{thing} \to zero}(\dfrac{\text{Thing}}{\text{Thing}}) \cdot \text{Another thing}$$
Here our $\text{'Thing'}$ is tending to zero, but not zero, it is something real, call it a real $\text{Thing}$, which can be cancelled merrily.
• What do you have against using letters as variables? Mar 8, 2014 at 18:54
The hole doesn't always disappear, but when it does it is because one expression is equivalent to another except right at that hole. For example $$\frac{x^2 - 25}{x - 5}= \frac{(x+5)(x-5)}{x-5}= (x + 5)$$ everywhere except at $x = 5$. However, if we are very close to 5, then with infinite-precision arithmetic (as we have theoretically) the value of the first expression $\frac{x^2 - 25} {x - 5}$ would be very close to that of the last expression $(x+5)$; that is, the limit of the first expression as $x$ approaches 5 is the same as the limit of the last expression. Since the latter is a 'nice number', we can define the value at 5 for the first expression, and remove the "everywhere except at $x = 5$", by just putting 10 as the value rather than calculating it.
The trick is to think of limit a bit differently: A limit is the value a function would have at a point were it continuous at that point, with nothing else being different.
You can see this by comparing the epsilon-delta definitions of the two concepts. We say $$f$$ is continuous at $$c$$, if $$f$$ is defined at $$c$$ and for every $$\epsilon > 0$$ there is a $$\delta > 0$$ such that
$$|f(x) - f(c)| < \epsilon$$ whenever $$0 < |x - c| < \delta$$.
Likewise, $$f$$ has limit $$L$$ at $$c$$ if for every $$\epsilon > 0$$ there is a $$\delta > 0$$ such that
$$|f(x) - L| < \epsilon$$ whenever $$0 < |x - c| < \delta$$.
Thus, if we have a function with isolated gaps and there is a different function $$f^{*}$$ such that
1. $$f^{*}(x) = f(x)$$ wherever $$f(x)$$ is defined and continuous,
2. $$f^{*}$$ is defined and continuous everywhere, i.e. has domain $$\mathbb{R}$$ (versus the original function having domain $$\mathbb{R}$$ minus a set of isolated points),
then we must have that the value this $$f^{*}$$ takes at the points where $$f$$ was not defined or continuous must be the limiting values at those points.
And that's what you find by cancellation: by rewriting the expression describing the function in a form that involves only addition and multiplication with no division, you have created a formula that describes a function that must agree where the division part is defined, but since no division is involved, is defined everywhere. Moreover, since addition and multiplication (and powers, though you can consider them repeated multiplication) are continuous, then it follows that this function must be continuous as well. Hence the above are met, and the holes fill in.
The idea is that if two functions $f$ and $g$ differ only at $a$ but are identical otherwise, we have $\displaystyle \lim_{x \to a} f(x) = \lim_{x \to a} g(x)$. In this case $f(x) = \dfrac {x^2-25}{x-5}$ and $g(x) = x+5$ are identical everywhere except $x=5$, so $\displaystyle \lim_{x \to 5} \dfrac {x^2-25}{x-5} = \lim_{x \to 5} x+5 = 10.$
In limits, the value of $x$ is always tending toward a value, never exactly equal to it.
$x\to a$ means the value of $x$ tends to $a$. Similarly, $x\to 5$ means tending to 5. So the value of $x$ is not 5, but a little more or less than 5.
Hence you can never take the denominator to be 0, as the denominator will, again, only be tending to zero. That is why we try to factorize the numerator first, to check whether there are any factors common to both the numerator and the denominator.
$$L:=\lim_{x\to x_0}f(x)$$ is the value that would make $$f$$ continuous at $$x_0$$.
If you known another function $$g$$ which is continuous and coincides with $$f$$ around $$x_0$$, then perforce
$$L=g(x_0).$$ | 2022-05-23T11:55:23 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/462199/why-does-factoring-eliminate-a-hole-in-the-limit",
"openwebmath_score": 0.8391028046607971,
"openwebmath_perplexity": 255.68626743071,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9820137942490252,
"lm_q2_score": 0.8723473763375643,
"lm_q1q2_score": 0.8566571569404339
} |
https://www.hpmuseum.org/forum/archive/index.php?thread-6157.html | # HP Forums
InvH:
« .577215664902 - InvPsi 1. -
»
InvPsi:
« DUPDUP DUP InvP InvP DUP InvP InvP DUPDUP InvP InvP DUP InvP DUP EXP .5 + Psi OVER - - EXP .5 +
»
InvP:
« DUP EXP .5 + Psi OVER - - EXP .5 + Psi OVER - -
»
(*) On the HP 49G, replace Psi with 0 PSI.
Examples:
4.012007 InvH --> 30.523595226 (in less than half a second); this is a very belated solution to one of the famous Valentin's challenges (#3 here).
.577215664902 InvH --> 0.46163214497 ; InvH(Euler-Mascheroni constant) = xmin (local minimum of the continuous factorial function).
4 pi 2 / - 3 ENTER 2 LN * - InvH --> 0.24999999999 ; one of the special values for fractional arguments examples here.
1.5 InvH --> 2. ; H2
1 Psi --> -0.577215664902 InvPsi --> 1.00000000009
2 Psi --> 0.422784335098 InvPsi --> 2.
Background:
Let H(x) be the continuous function associated with Harmonic Numbers. Then,
$H(x)=\gamma +\psi \left ( x+1 \right )$
$\psi \left ( x+1 \right )=H(x)-\gamma$
$x+1 =\psi^{-1} \left ( H(x)-\gamma \right )$
$x+1 =\psi^{-1} \left ( H(x)-\gamma \right )$
$x =\psi^{-1} \left ( H(x)-\gamma \right )-1$
or
$H^{-1}(x) =\psi^{-1} \left ( x-\gamma \right )-1$
That is, in order to obtain the inverse of H(x) we only need the Inverse Digamma Function and the Euler-Mascheroni constant. No problem with the constant, but the Inverse Digamma might be a problem since Digamma is not easily invertible.
A rough approximation is
$\psi^{-1} \left ( x\right )=e^{x}+\frac{1}{2}$
The equivalent HP 50g program is
P1:
« EXP .5 +
»
However, this is good only for x >= 10, not good enough for our purposes:
10 P1 --> 22026.9657948 Psi --> 10.0000000001.
But,
9 P1 --> 8103.58392758 Psi --> 9.00000000064
8 P1 --> 2981.45798704 Psi --> 8.00000000469
7 P1 --> 1097.13315843 Psi --> 7.00000003465
So, let's try to improve the accuracy a bit:
P2:
« DUP P1 Psi OVER - - P1
»
10 P2 --> 22026.9657926 Psi --> 9.99999999999
9 P2 --> 8103.58392239 Psi --> 8.99999999999
8 P2 --> 2981.45797306 Psi --> 8.
7 P2 --> 1097.13312043 Psi --> 7.
6 P2 --> 403.928690211 Psi --> 6.
5 P2 --> 148.912878357 Psi --> 5.00000000001
But,
4 P2 --> 55.0973869316 Psi --> 4.00000000039
3 P2 --> 20.5834634677 Psi --> 3.00000002131
Proceeding likewise, we get
P3:
« DUP P2 Psi OVER - - P2
»
This is good for x as low as 2, but not for x = 1:
2 P3 --> 7.88342863120 Psi --> 2.
1 P3 --> 3.20317150637 Psi --> 1.0000000139
A couple more steps suffice for x around -0.6 and greater, which is good enough for our purposes. That's what the InvPsi program above does, albeit in an inelegant way. Also, this is just an intuitive and somewhat inefficient approach. Suggestions for better methods are welcome.
Edited to fix a couple of typos.
Edited again to include a printout of my current directory:
InvPsi will accept arguments greater than or equal to -1. It takes about 0.05 s, 0.10 s or 0.50 s, depending on the argument.
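For readers without an HP calculator at hand, here is a sketch of the same idea in Python (standard library only). The digamma and trigamma implementations, and the use of Newton refinement in place of the stack-based P2/P3 corrections, are my own assumptions, not a transcription of the RPL code:

```python
import math

GAMMA = 0.5772156649015329        # Euler-Mascheroni constant

def psi(x):
    """Digamma: shift with psi(x) = psi(x+1) - 1/x, then use the
    asymptotic series ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    t = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - t * (1/12 - t * (1/120 - t / 252))

def trigamma(x):
    """psi'(x): shift with psi1(x) = psi1(x+1) + 1/x^2, then asymptotics."""
    r = 0.0
    while x < 6.0:
        r += 1.0 / (x * x)
        x += 1.0
    t = 1.0 / (x * x)
    return r + 1.0 / x + 0.5 * t + (t / x) * (1/6 - t * (1/30 - t / 42))

def inv_psi(y):
    """Inverse digamma, starting from exp(y) + 1/2 (program P1) and
    refining with Newton steps instead of the P2/P3-style corrections."""
    x = math.exp(y) + 0.5
    for _ in range(30):
        step = (psi(x) - y) / trigamma(x)
        x = x - step if x - step > 0 else x / 2   # stay in x > 0
    return x

def inv_h(x):
    """Inverse of the continuous harmonic number: psi^{-1}(x - gamma) - 1."""
    return inv_psi(x - GAMMA) - 1.0
```

With this, `inv_h(4.012007)` should reproduce the 30.5235952... of the first example, and `inv_h(1.5)` returns 2, since H2 = 3/2.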
Cool, thanks! I was just trying to figure out an inverse harmonic number function the other day and getting nowhere. Neither Mathworld nor Wikipedia seemed to have much to say on the subject.
John
(04-27-2016 04:57 PM)John Keith Wrote: [ -> ]Cool, thanks! I was just trying to figure out an inverse harmonic number function the other day and getting nowhere. Neither Mathworld nor Wikipedia seemed to have much to say on the subject.
If the arguments are always plain harmonic numbers, that is, the ones obtained from the discrete function, then the inverse function can be implemented simply as
InvHn:
« .577215664902 - EXP IP »
Examples:
137 ENTER 60 / InvHn --> 5. ; 137/60 = H(5)
10 Hx --> 2.92896825397 ; H(10)
InvHn --> 10.
2E10 --> 24.2962137754 ; H(2x10^10)
InvHn --> 20000000000.
where Hx is the program for the continuous function:
Hx:
« 1. + Psi .577215664902 + »
Regards,
Gerson.
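Gerson's discrete inverse works because of the expansion H(n) ≈ ln n + γ + 1/(2n), which makes exp(H(n) − γ) land slightly above n + 1/2; taking the integer part therefore recovers n exactly. A quick check in Python (standard library only, names are mine):

```python
import math

GAMMA = 0.5772156649015329

def H(n):
    """Discrete harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def inv_Hn(x):
    """Python equivalent of the RPL program « .577215664902 - EXP IP »."""
    return int(math.exp(x - GAMMA))

for n in (1, 2, 5, 10, 100, 1000):
    assert inv_Hn(H(n)) == n
assert inv_Hn(137 / 60) == 5      # 137/60 = H(5), as in the example
```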
Reference URL's
• HP Forums: https://www.hpmuseum.org/forum/index.php
| 2019-10-21T12:20:00 | {
"domain": "hpmuseum.org",
"url": "https://www.hpmuseum.org/forum/archive/index.php?thread-6157.html",
"openwebmath_score": 0.7077171802520752,
"openwebmath_perplexity": 9318.171377763172,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9820137910906878,
"lm_q2_score": 0.8723473713594992,
"lm_q1q2_score": 0.8566571492967379
} |
https://math.stackexchange.com/questions/471710/why-do-small-angle-approximations-have-to-be-in-radians/471715 | Why do small angle approximations have to be in radians?
Why do small angle approximations only hold in radians? All the books I have say this is so but don't explain why.
• @EmilioPisanty perhaps you're correct, but anonymous disparagement is hardly the best way to greet a new user. Anyway, user28435, try Googling one-line answers before asking in future (this must have been asked hundreds of times on Math SE).
– Meow
Aug 19, 2013 at 21:36
• Don't think of sine as being defined in terms of angles - rather, arc length of a circle. (The approximations come from the fact that a circle is approximately a straight line, especially near the point of tangency.) Oct 21, 2014 at 10:55
The reason why we use radians for $\sin x$ (or other trigonometric functions) in calculus is explained here and here in Wikipedia.
Having known that, notice that small angle approximation is just the Taylor expansion of $\sin x$ and $\cos x$ around $x=0$:
$$\sin x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!}\tag{1}$$ $$\cos x = \sum_{n=0}^\infty \frac{(-1)^n x^{2n}}{(2n)!}\tag{2}$$
If you scale $x$ by some constant $\omega$, then you must replace $x$ with $\omega x$ in $(1)$ and $(2)$. So, working in degrees (where $\omega=\frac{\pi}{180}$), the approximation will become: $$\sin \theta\approx \frac{\pi}{180}\theta \qquad (\theta \text{ in degrees})$$
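This scaling is easy to see numerically. A small check in Python (where `math.sin` takes radians, so degree inputs must go through `math.radians`):

```python
import math

# In radians, sin(x) ~ x for small x, with error of order x^3/6:
for x in (0.1, 0.01, 0.001):
    assert abs(math.sin(x) - x) < x**3 / 5

# In degrees, the small-angle slope is pi/180, not 1:
theta_deg = 1.0
val = math.sin(math.radians(theta_deg))
assert abs(val - (math.pi / 180) * theta_deg) < 1e-5
assert abs(val - theta_deg) > 0.9      # sin(1 degree) is nowhere near 1
```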
There are several correct ways to answer this that illuminate different aspects of what is going on (and I wouldn't be surprised if the answer is present on this site somewhere EDIT: indeed the other responses to this question are different ways of looking at it). Here is a geometrical answer.
The basic formula is
$$l = r \theta$$
where $l$ is the arc length of a segment of a circle with radius $r$ subtending an angle $\theta$. For this formula to be true, $\theta$ needs to be in radians. (Just try it out, is the arc length of the whole circle equal to $360 r$ or $2\pi r$?) In fact, that formula is really how you define what you mean by radians.
Now consider a straight line segment connecting the two endpoints of the arc subtended by $\theta$. Call it's length $a$. You can show by fiddling with triangles that $$a=2r\sin\left(\frac{\theta}{2}\right)$$
In the limit that the angle is small (so only a small piece of the circle is subtended), you should be able to convince yourself that $a\approx l$. This is the core of the small angle approximation.
Using the two relationships above we have
$$r\theta \approx 2r \sin\left(\frac{\theta}{2}\right)$$ or $$\sin\left(\frac{\theta}{2}\right)\approx\frac{\theta}{2}$$ You can see that using radians was crucial here because it allowed us to use $l=r\theta$.
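The chord-versus-arc comparison at the heart of this argument can also be checked numerically (plain Python; the relative gap shrinks like $\theta^2/24$):

```python
import math

r = 1.0
for theta in (1.0, 0.1, 0.01):           # angles in radians
    arc = r * theta                      # l = r*theta -- requires radians
    chord = 2 * r * math.sin(theta / 2)  # a = 2r*sin(theta/2)
    # arc - chord = r*theta^3/24 + O(theta^5), so the relative gap ~ theta^2/24
    assert abs(arc - chord) / arc < theta**2 / 20
```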
A 'small angle' is equally small whatever system you use to measure it. Thus if an angle is, say, much smaller than 0.1 rad, it will be much smaller than the equivalent in degrees. More typically, saying 'small angle approximation' means $\theta\ll1$, where $\theta$ is in radians; this can be rephrased in degrees as $\theta\ll 57^\circ$.
(Switching uses between radians and degrees becomes much simpler if one formally identifies the degree symbol $^\circ$ with the number $\pi/180$, which is what you get from the equation $180^\circ=\pi$. If you're differentiating with respect to the number in degrees, then, you get an ugly constant, as you should: $\frac{d}{dx}\sin(x^\circ)={}^\circ \cos(x^\circ)$.)
In real life, though, you wouldn't usually say 'this angle should be small' without saying what it should be smaller than. If the latter is in degrees then the former should also be in degrees.
That said, though: always work in radians! Physicists tend to use degrees quite often, but there is always the underlying understanding that the angle itself is a quantity in radians and that degrees are just convenient units. Trigonometric functions, in particular, always take their arguments in radians, so that all the math will work well. Always differentiate in radians, always work analytically in radians. And at the end you can plug in the degrees.
It's because this relationship $$\lim_{x\rightarrow0}\frac{\sin(x)}{x}=1$$ (i.e. $\sin(x) \approx x$) only holds if $x$ is in radians, as, using L'Hoptial's rule, $$\lim_{x\rightarrow0}\frac{\sin(x)}{x}=\lim_{x\rightarrow0}\frac{\frac{d}{dx}\sin(x)}{\frac{d}{dx}x}=\lim_{x\rightarrow0}\frac{d}{dx}\sin(x)=\begin{cases} 1 & \text{ if } x\text{ is in radians}\\ \frac{\pi}{180} & \text{ if } x \text{ is in degrees} \end{cases}.$$ That is, $\frac{d}{dx}(\sin(x))=\cos(x)$ only if $x$ is in radians. The reason can be divined from this proof without words by Stephen Gubkin:
• Sure but WHY does this only hold in radians. That's the bit I can't understand. Everyone says it holds in radians but no one explains why lol. Thanks anyway. Aug 19, 2013 at 21:03
• @user28435: because in radians the sin of an angle basically equals the angle itself, at small enough angles, so it very much simplifies the algebra if you don't have to keep converting degrees.
– Mike Dunlavey
Aug 20, 2013 at 0:12
You need to clarify what you are really asking: in physics the small angle approximation is typically used to approximate a non-linear differential equation by a linear one - which is much easier to solve - by allowing us to approximate the $\sin(x)$ function by $x$. The resulting equation is only a good one for small angles. I.e., it's a physics thing, not a math thing.
Because the sine of an angle is dimensionless [i.e. just a number], the angle has to be dimensionless as well [i.e. the radian is dimensionless] ... units on both sides of an equation have to match
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/471710/why-do-small-angle-approximations-have-to-be-in-radians/471715",
"openwebmath_score": 0.9050287008285522,
"openwebmath_perplexity": 340.05392446613325,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982013790564298,
"lm_q2_score": 0.8723473713594991,
"lm_q1q2_score": 0.8566571488375431
} |
http://math.stackexchange.com/questions/231808/determining-the-remainder-of-a-small-integer-n-raised-to-a-high-power-when-divid | # Determining the remainder of a small integer n raised to a high power when divided by m
I recently took a test that asked me to determine the value of $3^{82} \mod 5$.
I was unable to figure out how to do it on the test, but when I got home I noticed that there is a pattern to the remainders of $3^n$,
• $3^{1} \mod 5 = 3$
• $3^{2} \mod 5 = 4$
• $3^{3} \mod 5 = 2$
• $3^{4} \mod 5 = 1$
• $3^{5} \mod 5 = 3$
• $3^{6} \mod 5 = 4$
• $3^{7} \mod 5 = 2$
• $3^{8} \mod 5 = 1$
• $3^{9} \mod 5 = 3$
Is the best strategy to follow this pattern up to the $82^{\text{nd}}$ remainder (obviously in an intelligent way) or is there some much more obvious trick to this question that I missed?
As you have observed $$3^4 \equiv 1 \pmod 5$$ This means that $$3^{4k} \equiv 1 \pmod 5$$ Hence, $$3^{80} \equiv 1 \pmod 5 \implies 3^{82} \equiv 3^2 \pmod{5} = 4 \pmod 5$$ In general, $$3^{4k + r} \equiv 3^r \pmod{5}$$
Thank you for your answer. It's not entirely clear to me how $3^4 \equiv 1 \pmod 5$ tells us that $3^{4k} \equiv 1 \pmod 5$. Could you explain this step? – crazedgremlin Nov 7 '12 at 1:07
@crazedgremlin $\rm\: \color{#0A0}{3^4}\equiv\color{#C00} 1\:\Rightarrow\: 3^{4k} = (\color{#0A0}{3^4})^k\equiv \color{#C00}1^k\equiv 1\:$ by the Congruence Product Rule. It is essential to learn the arithmetic of congruences, as opposed to less flexible mod operations. – Bill Dubuque Nov 7 '12 at 1:23 | 2015-05-26T04:38:15 | {
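The cycle-of-four pattern and the final answer can each be confirmed in one line with Python's three-argument `pow`, which performs fast modular exponentiation:

```python
# Powers of 3 mod 5 repeat with period 4: 3, 4, 2, 1, 3, 4, 2, 1, ...
cycle = [pow(3, k, 5) for k in range(1, 5)]
assert cycle == [3, 4, 2, 1]

# 82 = 4*20 + 2, so 3^82 = (3^4)^20 * 3^2 == 1^20 * 9 == 4 (mod 5)
assert pow(3, 82, 5) == 4
assert cycle[(82 - 1) % 4] == 4        # reading the answer off the cycle
```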
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/231808/determining-the-remainder-of-a-small-integer-n-raised-to-a-high-power-when-divid",
"openwebmath_score": 0.6795341372489929,
"openwebmath_perplexity": 184.99370343555776,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9921841104018225,
"lm_q2_score": 0.8633916152464016,
"lm_q1q2_score": 0.8566434417016436
} |
http://mathhelpforum.com/calculus/82075-limits-series-question.html | # Thread: Limits / Series question
1. ## Limits / Series question
hi
Let
P = [(2^3-1)/(2^3+1)][(3^3-1)/(3^3+1)]........[(n^3-1)/(n^3+1)]
n=2,3,4,......
Find Limit P when N tends to infinity.
2. Originally Posted by champrock
hi
Let
P = [(2^3-1)/(2^3+1)][(3^3-1)/(3^3+1)]........[(n^3-1)/(n^3+1)]
n=2,3,4,......
Find Limit P when N tends to infinity.
Factorise $\frac{n^3-1}{n^3+1}$ as $\frac{(n-1)(n^2+n+1)}{(n+1)(n^2-n+1)}$, and notice that $n^2+n+1 = (n+1)^2 - (n+1) + 1$. You will then find that you have a telescoping product.
3. what should i do with n^2 - n + 1 ?
I was able to cancel the (n-1)/(n+1) terms. but dont know what to do with n^2 - n + 1 and n^2 + n + 1
4. You easily understand if you write the sequence of terms...
$P= \prod_{n=2}^{\infty} \frac{(n-1)\cdot (n^{2} + n + 1)}{(n+1)\cdot (n^{2} - n + 1)}= \frac {1\cdot 7}{3\cdot 3}\cdot \frac {2\cdot 13}{4\cdot 7}\cdot \frac {3\cdot 21}{5\cdot 13}\cdot \frac {4\cdot 31}{6\cdot 21}\cdot \dots$
... and it is evident you can simplify both in numerator and denominator [3 with 3, 4 with 4, ... , 7 with 7, 13 with 13, 21 with 21,...]
Finally is...
$P=\frac{2}{3}$
Very nice!...
Kind regards
$\chi$ $\sigma$
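The value $P=\frac{2}{3}$ can be confirmed numerically. In fact the cancellations leave the exact partial product $\prod_{n=2}^{N}\frac{n^3-1}{n^3+1} = \frac{2}{3}\cdot\frac{N^2+N+1}{N^2+N}$ (an observation that follows from the telescoping, added here for checking), which tends to $\frac{2}{3}$. A quick check in plain Python:

```python
def partial_product(N):
    p = 1.0
    for n in range(2, N + 1):
        p *= (n**3 - 1) / (n**3 + 1)
    return p

# direct product vs. the telescoped closed form (2/3)*(N^2+N+1)/(N^2+N)
for N in (2, 10, 1000):
    closed = (2 / 3) * (N * N + N + 1) / (N * N + N)
    assert abs(partial_product(N) - closed) < 1e-9

# and the infinite product converges to 2/3
assert abs(partial_product(10**4) - 2 / 3) < 1e-7
```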
5. Originally Posted by champrock
what should i do with n^2 - n + 1 ?
I was able to cancel the (n-1)/(n+1) terms. but dont know what to do with n^2 - n + 1 and n^2 + n + 1
Use the hint I gave before: $n^2+n+1 = (n+1)^2 - (n+1) + 1$. That tells you that the term $n^2+n+1$ in the numerator of each fraction cancels with the term $(n+1)^2 - (n+1) + 1$ in the denominator of the next one.
6. I think only 1/3 is left. All the other terms cancel out. How are you getting 2/3?
7. The term of order n is...
$a_{n}= \frac{(n-1)\cdot (n^{2} +n + 1)}{(n+1)\cdot (n^{2} - n + 1)}$
For $n=3$ is...
$a_{3}= \frac{2\cdot 13 }{4\cdot 7}$
Kind regards
$\chi$ $\sigma$ | 2017-04-28T07:47:56 | {
"domain": "mathhelpforum.com",
"url": "http://mathhelpforum.com/calculus/82075-limits-series-question.html",
"openwebmath_score": 0.9248009324073792,
"openwebmath_perplexity": 487.10623735797816,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9921841101707115,
"lm_q2_score": 0.8633916134888614,
"lm_q1q2_score": 0.8566434397583008
} |
http://inognicasa.it/equation-of-a-rotated-ellipse.html | Log InorSign Up. The form of the covariance matrix σ in the unrotated system follows from equation (14) using R. The standard form of the equation of an ellipse with center (h, k) and major and minor axes of lengths 2a and 2b, respectively, where 0 < b < a, is. They both have shape (eccentricity) and size (major axis). The general equation's coefficients can be obtained from known semi-major axis , semi-minor axis , center coordinates (∘, ∘), and rotation angle (the angle from the positive horizontal axis to the ellipse's major axis) using the formulae:. | bartleby. A graph of the two equations is presented here. As stated, using the definition for center of an ellipse as the intersection of its axes of symmetry, your equation for an ellipse is centered at $(h,k)$, but it is not rotated, i. It has co-vertices at (5 ± 3, –1), or (8, –1) and (2, –1). Once we have those we can sketch in the ellipse. Find the points at which this ellipse crosses the x-axis and show that the tangent lines at these points are parallel. How It Works. The parametric equations of an ellipse are and. |C|y the equation is transformed into σx02 + τy02 = −F. Here a > b > 0. This equation defines an ellipse centered at the origin. the graph is an ellipse if AC > 0, and in Section 5. You may ignore the Mathematica commands and concentrate on the text and figures. Ellipse drawing tool. Constructing (Plotting) a Rotated Ellipse. The equation x 2 – xy + y 2 = 3 re presents a "rotated ellipse,” that is, an ellipse whose axes are not parallel to the coordinate axes. For a given chord or triangle base, the. The length of the major axis is 2a, and the length of the minor axis is 2b. +- hat j b. Complex Growth. Rotation Creates the ellipse by appearing to rotate a circle about the first axis. Let x^2+3y^2=3 be the equation of an ellipse in the x-y plane. You may ignore the Mathematica commands and concentrate on the text and figures. 
Vertical Major Axis Example. They both have shape (eccentricity) and size (major axis). Since B = − 6 3 ≠ 0, the equation satisfies the condition to be a rotated ellipse. {\displaystyle Ax^ {2}+Bxy+Cy^ {2}+Dx+Ey+F=0}. If C∆ > 0, we have an imaginary ellipse, and if ∆ = 0, we have a point ellipse. Equations When placed like this on an x-y graph, the equation for an ellipse is: x 2 a 2 + y 2 b 2 = 1. The Rotated Ellipsoid June 2, 2017 Page 1 Rotated Ellipsoid An ellipse has 2D geometry and an ellipsoid has 3D geometry. Write the equation of the circle in standard form given the endpoints of the diameter: (-12, 10) and (-18, 12). They are located at (h±c,k) or (h,k±c). Let us consider a point P(x, y) lying on the ellipse such that P satisfies the definition i. The equation of the ellipse we discussed in class is 9 x2 - 4 xy + 6 y2 = 5. HELP?! Rotate the axes to eliminate the xy-term in the equation. centerof the ellipse gets rotated about point pand the new ellipseat the new center gets rotated about the new centerby angle a. We have step-by-step solutions for your textbooks written by Bartleby experts!. Here we plot it ContourPlotA9 x2-4 x y + 6 y2 − 5, 8x,-1, 1<, 8y,-1, 1<, Axes fi True, Frame fi False,. Except for degenerate cases, the general second-degree equation Ax2 + Bxy + Cy2 + Dx + Ey + F = 0 x¿y¿-term x¿ 2 4 + y¿ 1 = 1. I wish to plot an ellipse by scanline finding the values for y for each value of x. H(x, y) = A x² + B xy + C y² + D x + E y + F = 0 The basic principle of the incremental line tracing algorithms (I wouldn't call them scanline) is to follow the pixels that fulfill the equation as much as possible. The standard form of the equation of an ellipse with center (h, k) and major and minor axes of lengths 2a and 2b, respectively, where 0 < b < a, is. Since (− 6 3) 2 − 4 ⋅ 7 ⋅ 13 = − 346 < 0 and A ≠ C since 7 ≠ 13, the equation satisfies the conditions to be an ellipse. 18 < : 1 1 1 9 = ; (10) 3. 
A circle viewed at an angle projects to an ellipse. If the rotation is small the resulting ellipse is very nearly round, but if the rotation is large the ellipse becomes very flattened (or very elongated, depending upon how you look at the effect), and if the circle is rotated until it is edge-on to the line of sight the "ellipse" degenerates to a straight line segment. If an ellipse is rotated about one of its principal axes, a spheroid is the result; rotating an ellipse about a line such as y = 5 instead sweeps out a tire-shaped solid coming out and going in through the z-direction.

A general conic can be written H(x, y) = Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0. The basic principle of incremental curve-tracing algorithms (as opposed to scanline methods) is to follow the pixels that fulfill this equation as closely as possible. When a Reuleaux triangle is rotated inside a square of side length 2 with corners at (±1, ±1), its envelope is a region of the square with rounded corners.

Activity: determine the general equation of an ellipse from given data, and find its foci and vertices.
Here we plot the class example 9x^2 - 4xy + 6y^2 = 5 in Mathematica:

ContourPlot[9 x^2 - 4 x y + 6 y^2 == 5, {x, -1, 1}, {y, -1, 1}, Axes -> True, Frame -> False]

Given the equation of a conic, identify the type of conic. The ellipse has a close connection with football: the ball is the surface obtained when an ellipse is rotated on its major axis. The equation of an ellipse is a generalized case of the equation of a circle. In mathematics, a rotation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x'y'-Cartesian coordinate system in which the origin is kept fixed and the x' and y' axes are obtained by rotating the x and y axes counterclockwise through an angle θ.

To derive the equation of an ellipse, let the coordinates of the foci F1 and F2 be (-c, 0) and (c, 0), and consider a point P(x, y) for which the sum of the distances to F1 and F2 is the constant 2a. In a rotated coordinate system, the general second-degree equation (excepting degenerate cases) can be cast into the canonical form (x' - x'0)^2/a^2 + (y' - y'0)^2/b^2 = 1, in which (x'0, y'0) is the center of the ellipse in the rotated coordinate system and a and b are the lengths of the semi-axes. If B is zero, then the conic is not rotated and sits on the x- and y-axes. For a worked example with a = 4 and c = 3, the relation b^2 = a^2 - c^2 gives 16 - 9 = 7 = b^2.
If A and C are nonzero and have opposite signs, the graph is a hyperbola; if they are nonzero, have the same sign, and are unequal, the graph may be an ellipse. If B is not zero, the graph represents a rotated conic, and a common exercise is to rotate the axes to eliminate the xy-term in the equation. We shall now study the Cartesian representation of the hyperbola and the ellipse.

When you graph an ellipse using the parametric equations, simply allow t to range from 0 to 2π radians to find the (x, y) coordinates for each value of t. In the rotated x'y'-system the major axis of the ellipse lies along the x'-axis, and we can write the equation of the ellipse in this system with no x'y'-term. For the example 7x^2 - 6√3 xy + 13y^2 - 16 = 0, because A = 7 and C = 13, the rotation angle (for 0 ≤ θ < π/2) satisfies cot 2θ = (A - C)/B = 1/√3, so θ = 30°; the equation in the x'y'-system is derived by substituting x = x' cos θ - y' sin θ and y = x' sin θ + y' cos θ.

Reversing a translation works the same way: in one worked example it gives 137(X - 10)^2 - 210(X - 10)(Y + 20) + 137(Y + 20)^2 = 968, the equation of the rotated ellipse relative to the original axes.

When a > b, rotating the ellipse around its major axis produces a prolate spheroid. (For a conic of revolution described by a conic constant K: 0 > K > -1 is an ellipse, K = -1 is parabolic, and K < -1 is hyperbolic; R is the radius of curvature.) The graph of Ax^2 + Cy^2 + Dx + Ey + F = 0 is a circle if A = C.
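The rotation-of-axes step can be carried out numerically with the substitutions x = x'cosθ - y'sinθ, y = x'sinθ + y'cosθ. A sketch (the helper name `rotate_conic` and the normalization of θ into [0, π/2) are mine):

```python
import math

def rotate_conic(A, B, C, D=0.0, E=0.0, F=0.0):
    """Rotate axes through theta to eliminate the xy-term of
    A x^2 + B xy + C y^2 + D x + E y + F = 0.

    Returns (theta, (A', C', D', E', F')); B' is zero by construction.
    """
    theta = 0.5 * math.atan2(B, A - C)
    if theta < 0:                      # pick the equivalent angle in [0, pi/2)
        theta += math.pi / 2
    c, s = math.cos(theta), math.sin(theta)
    A2 = A * c * c + B * c * s + C * s * s   # coefficient of x'^2
    C2 = A * s * s - B * c * s + C * c * c   # coefficient of y'^2
    D2 = D * c + E * s
    E2 = -D * s + E * c
    return theta, (A2, C2, D2, E2, F)

# The example from the text: 7x^2 - 6*sqrt(3)*xy + 13y^2 - 16 = 0
theta, (A2, C2, D2, E2, F2) = rotate_conic(7.0, -6.0 * math.sqrt(3), 13.0, F=-16.0)
# theta = pi/6 (30 degrees); 4x'^2 + 16y'^2 - 16 = 0, i.e. x'^2/4 + y'^2/1 = 1
```

Dividing the resulting 4x'^2 + 16y'^2 = 16 through by 16 recovers the standard form quoted in the text.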
Moment of inertia is defined with respect to a specific rotation axis: the moment of inertia of a point mass is the product of the mass and the squared distance R from the axis, and the moment of inertia of any extended object is built up from that basic definition.

In terms of the geometric look of the solution set E of the focal-distance condition, there are three possible scenarios: E = ∅; E is the line segment with end-points p1 and p2; or E is an ellipse. Points p1 and p2 are called the foci of the ellipse, and the line segments connecting a point of the ellipse to the foci are the focal radii belonging to that point. It follows that 0 ≤ e < 1 and p > 0, so that an ellipse in polar coordinates with one focus at the origin and the other on the positive x-axis is given by r = p/(1 - e cos θ).

The major axis of a vertical ellipse lies on the line x = h; the minor axis lies on y = k. The center is at (h, k). A "standard ellipsoid" (spheroid) has a circular midsection.

Exercise: find the intersections of x^2/1^2 + y^2/2^2 = 1 and 5x^2 - 6xy + 5y^2 = 8. Both equations represent an ellipse; the first is a vertical ellipse and the second is the first one rotated 315 degrees counterclockwise.

The general-form equation can also be used when a standard deviation is to be calculated as an axis is rotated about the point of average location. If a > b, the ellipse is stretched further in the horizontal direction, and if b > a, it is stretched further in the vertical direction.
The general equation's coefficients can be obtained from a known semi-major axis a, semi-minor axis b, center coordinates (x0, y0), and rotation angle θ (the angle from the positive horizontal axis to the ellipse's major axis). In this notation, h is the x-coordinate of the center of the ellipse, k is the y-coordinate of the center, a is the semi-axis parallel to the x-axis when the rotation is zero, and b is the other semi-axis. When we add an xy-term, we are rotating the conic about the origin. For the general equation to represent an ellipse that is not a circle, the coefficients must simultaneously satisfy the discriminant condition B^2 - 4AC < 0 and also A ≠ C. If the major axis lies along the y-axis, a and b are swapped in the equation of an ellipse. The form of the covariance matrix σ in the unrotated system follows from the rotated form using the rotation matrix R.

As an aside on parametrization, the parabola y = x^2 can be parametrized by x = t, y = t^2 as t goes from -∞ to ∞, or, as a column vector, (x, y) = (t, t^2).
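The coefficient formulae can be written out explicitly. A sketch, assuming the common normalization obtained by expanding b^2·u^2 + a^2·v^2 = a^2·b^2 with u, v the coordinates in the ellipse's own frame (the function names are mine):

```python
import math

def ellipse_general_coeffs(a, b, x0, y0, theta):
    """General-conic coefficients (A, B, C, D, E, F) for an ellipse with
    semi-major axis a, semi-minor axis b, center (x0, y0), rotation theta."""
    c, s = math.cos(theta), math.sin(theta)
    A = a * a * s * s + b * b * c * c
    B = 2 * (b * b - a * a) * s * c
    C = a * a * c * c + b * b * s * s
    D = -2 * A * x0 - B * y0
    E = -B * x0 - 2 * C * y0
    F = A * x0 * x0 + B * x0 * y0 + C * y0 * y0 - a * a * b * b
    return A, B, C, D, E, F

def on_ellipse(coeffs, x, y, tol=1e-9):
    """True when (x, y) satisfies Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0."""
    A, B, C, D, E, F = coeffs
    return abs(A * x * x + B * x * y + C * y * y + D * x + E * y + F) < tol

coeffs = ellipse_general_coeffs(2.0, 1.0, 3.0, -1.0, math.pi / 6)
# an endpoint of the major axis, which must satisfy the general equation
tip = (3.0 + 2.0 * math.cos(math.pi / 6), -1.0 + 2.0 * math.sin(math.pi / 6))
```

A quick sanity check is that both axis endpoints satisfy the equation and that the discriminant comes out negative.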
Equation of an ellipse from its focus, directrix, and eccentricity: given a focus (x1, y1), a directrix ax + by + c = 0, and an eccentricity e, find the equation of the ellipse. A point P lies on it exactly when the distance from P to the focus equals e times the distance from P to the directrix.

To derive the standard equation, consider the figure with the two foci; the sum of the distances from a point of the ellipse to the foci is constant. An ellipse can also be seen as a squashed circle (the gardener's construction): if we deform a circle according to a general two-dimensional linear transformation, the equation of an ellipse can be derived. By using a transformation (rotation) of the coordinate system we are able to diagonalize the quadratic form; the rotation substitution uses the double-angle trigonometric identities. I'm looking for a Cartesian equation for a rotated ellipse.

Exercise: determine the foci and vertices for the ellipse with general equation 2x^2 + y^2 + 8x - 8y - 48 = 0, then write the equation in standard form. If D = b^2 - 4ac, then a general conic is an ellipse for D < 0, a parabola for D = 0, and a hyperbola for D > 0. Such a computation can always be converted to VBA code.
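The source's C++ listing for the focus-directrix problem did not survive extraction; here is a hedged Python sketch of the same computation. Squaring dist(P, focus) = e · dist(P, directrix) and clearing the (a^2 + b^2) denominator yields general second-degree coefficients:

```python
def conic_from_focus_directrix(fx, fy, a, b, c, e):
    """Coefficients (A, B, C, D, E, F) of the conic with focus (fx, fy),
    directrix a*x + b*y + c = 0 and eccentricity e (an ellipse when e < 1).

    Derived from (a^2+b^2)*[(x-fx)^2 + (y-fy)^2] - e^2*(a*x+b*y+c)^2 = 0.
    """
    k = a * a + b * b
    A = k - e * e * a * a
    B = -2.0 * e * e * a * b
    C = k - e * e * b * b
    D = -2.0 * fx * k - 2.0 * e * e * a * c
    E = -2.0 * fy * k - 2.0 * e * e * b * c
    F = k * (fx * fx + fy * fy) - e * e * c * c
    return A, B, C, D, E, F

# Example (my own numbers): focus (1, 1), directrix x + y - 3 = 0, e = 1/2
coeffs_fd = conic_from_focus_directrix(1.0, 1.0, 1.0, 1.0, -3.0, 0.5)
```

On the line y = x the geometric definition gives the point (7/6, 7/6), which should satisfy the resulting equation, and with e < 1 the discriminant should be negative.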
Find the points where the ellipse x^2 - xy + y^2 = 3 crosses the x-axis, and show that the tangent lines at these points are parallel. Setting y = 0 gives x^2 = 3, so the crossings are (±√3, 0). Implicit differentiation gives 2x - y - xy' + 2yy' = 0, so y' = (y - 2x)/(2y - x); at both crossings y' = 2, so the tangent lines there are parallel.

The super ellipse belongs to the Lamé curves; notably, the solid made by rotating one around the x-axis (the "superegg") can stand on its top if it is made from wood. For a surface obtained by rotating a curve around an axis, we can take a polygonal approximation to the curve, as in the last section, and rotate it around the same axis. This gives a surface composed of many truncated cones; a truncated cone is called a frustum of a cone. The velocity equation for a hyperbolic trajectory has the same form, with the convention that in that case a is negative.

The points of an ellipse with center C and orthonormal axis directions U0, U1 are P = C + x0·U0 + x1·U1, where (x0/e0)^2 + (x1/e1)^2 = 1; if e0 = e1, the ellipse is a circle with center C and radius e0. Under a linear map, the image of a disk is an ellipse whose major and minor axes lie along the images of two orthogonal directions. Thus the graph of a second-degree equation with B = 0 is either a parabola, an ellipse, or a hyperbola with axes parallel to the x- and y-axes (or a degenerate conic: a point, a line, or a pair of lines). The equation of an elliptic cylinder is a generalization of the equation of the ordinary circular cylinder (a = b).

For the strain ellipse in structural geology, the general equations as a function of λX, λZ, and θd are λ' = (λ'Z + λ'X)/2 - ((λ'Z - λ'X)/2)·cos 2θd and γ' = ((λ'Z - λ'X)/2)·sin 2θd, with tan θd = (SX/SZ)·tan θ, internal rotation α = θd - θ, and λ' = 1/λ, where λX and λZ are the quadratic elongations parallel to the X and Z axes of the finite strain ellipse.
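The tangent-line argument above can be verified numerically. A minimal sketch (names mine) using the slope obtained by implicit differentiation:

```python
import math

def implicit_slope(x, y):
    """dy/dx on x^2 - xy + y^2 = 3, from 2x - y + (2y - x)*y' = 0."""
    return (y - 2 * x) / (2 * y - x)

# x-axis crossings: y = 0 gives x^2 = 3, so x = +/- sqrt(3)
crossings = [(math.sqrt(3), 0.0), (-math.sqrt(3), 0.0)]
slopes = [implicit_slope(x, y) for x, y in crossings]
# both slopes equal 2, so the tangent lines are parallel
```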
To animate, erase the previous ellipse by drawing it at the same point in the background color, introduce some delay in the function (in ms), and then draw the new one. An ellipse is a plane curve such that the sums of the distances of each point on its periphery from two fixed points, the foci, are equal. I am trying to find an algorithm to derive the four angles from the centre of a rotated ellipse to its extremities, in other words from the centre to the points where the bounding box touches the ellipse. A related calculator task: given an ellipse expression, solve for the center, foci, vertices, eccentricity, and area, and the major, semi-major, minor, and semi-minor axis lengths.

If the data is uncorrelated and therefore has zero covariance, the covariance ellipse is not rotated and is axis aligned. A colleague provided some equations that, combined with a neat ellipse display program written by Wayne Landsman for the NASA Goddard IDL program library, give a "center of mass" ellipse-fitting program named Fit_Ellipse.

Do the intersection points of two rotated parabolas lie on a rotated ellipse?
An ellipse in polar coordinates with one focus at the origin and the other on the positive x-axis is given by r = p/(1 - e cos θ). Usually we let e = c/a and p = b^2/a, where e is called the eccentricity of the ellipse and p is called the parameter. Since c < a, the eccentricity always satisfies 0 ≤ e < 1 in the case of an ellipse.

The chord perpendicular to the major axis at the center is the minor axis. For an example ellipse whose major axis is the horizontal segment from (-2, 0) to (2, 0), the center is the origin, since (0, 0) is the midpoint of the major axis. The amount of correlation in data can be interpreted by how thin the corresponding covariance ellipse is.

In cartography, the local distortion of a map projection is described by an ellipse called the distortion ellipse. In the interesting non-degenerate case, the set Q is an ellipse.
An ellipse also arises as the intersection of a plane surface and an ellipsoid. In 3D, a shape can be given by a parametric formulation (e.g., x(t), y(t), z(t)) or a more canonical form (the 3D analog of the 2D form x^2/a^2 + y^2/b^2 = 1).

Exercise: write an equation for the ellipse with vertices (4, 0) and (-2, 0). We have also seen that translating a curve by a fixed vector (h, k) has the effect of replacing x by x - h and y by y - k in the equation of the curve. The parameters of an ellipse are also often given as the semi-major axis a and the eccentricity e, with e^2 = 1 - b^2/a^2. The parametric equation of a parabola with directrix x = -a and focus (a, 0) is x = at^2, y = 2at.
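The link between a 2x2 covariance matrix and the ellipse it describes can be made concrete with the closed-form eigendecomposition: the 1-sigma semi-axes are the square roots of the eigenvalues, and the major axis points along the leading eigenvector. A sketch (function name mine):

```python
import math

def covariance_ellipse(sxx, syy, sxy):
    """Return (major_sd, minor_sd, theta): 1-sigma semi-axes and the
    counter-clockwise angle of the major axis, for cov [[sxx, sxy], [sxy, syy]]."""
    mean = 0.5 * (sxx + syy)
    half = math.hypot(0.5 * (sxx - syy), sxy)
    lam1, lam2 = mean + half, mean - half          # eigenvalues, lam1 >= lam2
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # major-axis direction
    return math.sqrt(lam1), math.sqrt(lam2), theta

# Uncorrelated data: zero covariance -> the ellipse is axis aligned (theta = 0)
major, minor, theta0 = covariance_ellipse(4.0, 1.0, 0.0)
```

With zero covariance the angle comes out 0, matching the statement that uncorrelated data gives an axis-aligned ellipse; a correlated example like [[2.5, 1.5], [1.5, 2.5]] tilts the major axis to 45 degrees.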
A more general figure than the spheroid has three orthogonal axes of different lengths a, b, and c, and can be represented by the equation x^2/a^2 + y^2/b^2 + z^2/c^2 = 1. When the center of the ellipse is at the origin and the foci are on the x-axis or y-axis, the equation of the ellipse is simplest. The longer axis, a, is called the semi-major axis and the shorter, b, the semi-minor axis. (An ellipse where a = b is in fact a circle.) For ellipses not centered at the origin, simply add the coordinates of the center point (e, f) to each calculated (x, y). An exercise from optics: show that a given superposition represents elliptically polarized light in which the major axis of the ellipse makes an angle with the x-axis.

For a worked equation-writing example with a = 4 and c = 3, b^2 = a^2 - c^2 = 16 - 9 = 7; since the problem asks for neither the length of the minor axis nor the location of the co-vertices, the value of b itself is not needed, and the equation is x^2/16 + y^2/7 = 1.

The equations of the tangent and normal to the ellipse x^2/a^2 + y^2/b^2 = 1 at the point (x1, y1) are x·x1/a^2 + y·y1/b^2 = 1 and a^2·x/x1 - b^2·y/y1 = a^2 - b^2, respectively. More generally, suppose an ellipse is described by ax^2 + bxy + cy^2 + dx + ey + f = 0 with coefficients a, b, c, d, e, f in a field F.
Question (asked 04/02/15): find the equation of the image of the ellipse x^2/4 + y^2/9 = 1 when rotated through π/4 about the origin.

Below is a list of parametric equations starting from that of a general ellipse and modifying it step by step into a prediction ellipse, showing how different parts contribute at each step. A helpful video derives the formulas for rotation of axes and shows how to use them to eliminate the xy-term from a general second-degree polynomial. To rotate the graph of a parabola about the origin, we rotate each point individually.

In the ellipse configuration panel of the drawing tool, the Rotation option creates the ellipse by appearing to rotate a circle about the first axis; entering 0 defines a circular ellipse, and the higher the value, the greater the eccentricity of the ellipse. If the Circle option is selected, the width and height of the drawn shape are kept the same.

From differential calculus (Grinshpan): the implicit equation x^2 - xy + y^2 = 3 describes an ellipse. Find dy/dx.

Disk-method example: when the region is rotated around the y-axis, the boundaries of the integral run between y = 0 and y = 1; set up the volume integral over that range and evaluate it to find the volume of the solid.
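The rotation question above can be answered by transforming the quadratic-form coefficients: rotating the curve by φ amounts to substituting the point rotated by -φ. A sketch (function name mine; the invariants A + C and B^2 - 4AC are preserved under rotation, which makes a handy check):

```python
import math

def rotate_quadratic(A, B, C, phi):
    """Coefficients of the curve A x^2 + B xy + C y^2 = 1 after rotating it
    counter-clockwise by phi (substitute x -> x cos(phi) + y sin(phi),
    y -> -x sin(phi) + y cos(phi))."""
    c, s = math.cos(phi), math.sin(phi)
    A2 = A * c * c - B * c * s + C * s * s
    B2 = 2.0 * (A - C) * c * s + B * (c * c - s * s)
    C2 = A * s * s + B * c * s + C * c * c
    return A2, B2, C2

# x^2/4 + y^2/9 = 1 rotated through pi/4 about the origin
A2, B2, C2 = rotate_quadratic(0.25, 0.0, 1.0 / 9.0, math.pi / 4)
# 13/72 x^2 + 5/36 xy + 13/72 y^2 = 1, i.e. 13x^2 + 10xy + 13y^2 = 72
```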
If $$\displaystyle D = b^2- 4ac$$, then the conic is an ellipse for $$\displaystyle D<0$$, a parabola for $$\displaystyle D = 0$$, and a hyperbola for $$\displaystyle D>0$$. I was able to find the equation of an ellipse whose major axis is shifted and rotated off the x-, y-, or z-axis; however, I could not find anywhere an equation for a spheroid that does not have its axis of revolution along the x-, y-, or z-axis. How might I go about deriving such an equation?

Exercise: rewrite the equation 2x^2 + √3·xy + y^2 - 2 = 0 in a rotated x'y'-system without an x'y'-term. Beyond translation and scaling, we can apply one more transformation to an ellipse: rotating its axes by an angle θ about the center of the ellipse, that is, tilting it.

How to make an ellipse: no one knows for sure when the ellipse was discovered, but by 350 BCE the Ancient Greeks knew about the ellipse as a member of the group of two-dimensional geometric figures called conic sections. The eccentricity of an ellipse is e = c/a, and a point is on the ellipse exactly when the sum of its distances from the two foci in the plane is the constant 2a.

Example: the ellipse (x - 5)^2/9 + (y + 1)^2/16 = 1 has vertices at (5, -1 ± 4), that is (5, 3) and (5, -5), and co-vertices at (5 ± 3, -1), that is (8, -1) and (2, -1).

Rotation of axes: after rotating the coordinate axes through an angle θ chosen to eliminate the xy-term, the general second-degree equation in the new x'y'-plane has the form A'x'^2 + C'y'^2 + D'x' + E'y' + F' = 0. For the maps we consider, the axes of the distortion ellipse are in the north/south and east/west directions.
In the principal-axis frame, σ'_1 is the 1-sigma confidence value along the minor axis of the error ellipse and σ'_2 is that along the major axis (σ'_2 ≥ σ'_1). The general transformation is Y = RX with inverse X = R^T·Y. The tangent to the parabola at P(at^2, 2at) can be found from the formula for the equation of a straight line with a given gradient passing through a given point.

To draw a geodesic ellipse on a map, you will probably have to project the data, create the ellipse, and project back to WGS84; a snippet that creates the ellipse without rotation will not honor the bearing.

We can come up with a general equation for an ellipse tilted by θ by applying the 2-D rotation matrix to the vector (x, y) of coordinates of the ellipse. The orthonormality of the axis directions U_i implies x_i = U_i · (P - C) for a point P of an ellipse with center C. The objective of rotation of axes is to rotate the x- and y-axes until they are parallel to the axes of the conic. The Steiner ellipse can be extended to higher dimensions with one more point than the dimension.
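Applying the 2-D rotation matrix to the parametric point (a cos t, b sin t) and translating gives points of a tilted ellipse directly. A sketch (names mine), with a residual check that rotates each point back into the ellipse's own frame:

```python
import math

def rotated_ellipse_point(cx, cy, a, b, theta, t):
    """Point at parameter t on an ellipse centered at (cx, cy) with semi-axes
    a, b, tilted counter-clockwise by theta."""
    ct, st = math.cos(t), math.sin(t)
    c, s = math.cos(theta), math.sin(theta)
    x = cx + a * ct * c - b * st * s
    y = cy + a * ct * s + b * st * c
    return x, y

def implicit_residual(cx, cy, a, b, theta, x, y):
    """(u/a)^2 + (v/b)^2 - 1 after undoing the translation and rotation."""
    c, s = math.cos(theta), math.sin(theta)
    u = (x - cx) * c + (y - cy) * s
    v = -(x - cx) * s + (y - cy) * c
    return (u / a) ** 2 + (v / b) ** 2 - 1.0

# Sample the tilted ellipse at 12 parameter values (example numbers are mine)
pts = [rotated_ellipse_point(1.0, 2.0, 3.0, 1.5, 0.6, 2 * math.pi * k / 12)
       for k in range(12)]
```

Every sampled point should satisfy the implicit equation to rounding error, which is a convenient way to test any rotated-ellipse plotting routine.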
The following 12 points are on the ellipse x^2 - xy + y^2 = 3: (±√3, 0), (0, ±√3), (√3, √3), (-√3, -√3), (1, 2), (2, 1), (-1, -2), (-2, -1), (1, -1), and (-1, 1). The ellipse is symmetric about the lines y = x and y = -x.

An ellipse is a circle scaled (squashed) in one direction: an ellipse centered at the origin with semimajor axis a and semiminor axis b < a has equation x^2/a^2 + y^2/b^2 = 1. To eliminate the parameter from the parametric form, use cos^2 t + sin^2 t = 1. Two fixed points inside the ellipse, F1 and F2, are called the foci. Example: the ellipse (x - 3)^2/4^2 + y^2/5^2 = 1 has the form X^2/b^2 + Y^2/a^2 = 1 with a^2 > b^2, so its major axis is vertical. Earth's orbit has an eccentricity of less than 0.02.

Elliptic cylinders are also known as cylindroids, but that name is ambiguous, as it can also refer to the Plücker conoid. The Steiner ellipse has the minimal area surrounding a triangle. Unlike Columbus' egg, the superegg stands upright without any trick. It is important to know the differences in the equations to help quickly identify the type of conic represented by a given equation. A ray of light passing through a focus will pass through the other focus after a single bounce (Hilbert and Cohn-Vossen 1999). This form of the ellipse has a graph as shown below; the point (h, k) is called the center of the ellipse.
The equation of such an ellipse we can write in the usual form x^2/a^2 + y^2/b^2 = 1 (1). The slope of the tangent line to this ellipse on the upper half is evidently (Dvořáková et al., 2015) y' = -b·x/(a^2·√(1 - x^2/a^2)) (2). For the tangent point of the line with slope g_α on the ellipse it then holds y_T = b·√(1 - x_T^2/a^2) (3).

A VB routine can draw an ellipse centered at (cx, cy) with dimensions wid and hgt, rotated by a given number of degrees. There is also a classical compass procedure for drawing an approximation to an ellipse using 4 arc sections, one at each end of the major axis (length a) and one at each end of the minor axis (length b). If the center of the ellipse is at the origin, the equation simplifies to x^2/a^2 + y^2/b^2 = 1; with center (c1, c2) it has the form (x - c1)^2/a^2 + (y - c2)^2/b^2 = 1. Consider an ellipse located with respect to a Cartesian frame as in figure 3 (a ≥ b > 0, major axis on the x-axis, minor axis on the y-axis).
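The tangent-slope formula (2) can be sanity-checked against a central finite difference on the upper half of the ellipse. A minimal sketch (names and example numbers mine):

```python
import math

def upper_y(a, b, x):
    """y on the upper half of x^2/a^2 + y^2/b^2 = 1."""
    return b * math.sqrt(1.0 - (x / a) ** 2)

def tangent_slope(a, b, x):
    """y' = -b*x / (a^2 * sqrt(1 - x^2/a^2)) on the upper half."""
    return -b * x / (a * a * math.sqrt(1.0 - (x / a) ** 2))

a, b, x = 5.0, 2.0, 1.7
h = 1e-6
numeric = (upper_y(a, b, x + h) - upper_y(a, b, x - h)) / (2.0 * h)
analytic = tangent_slope(a, b, x)
```

The analytic slope also agrees with the implicit-differentiation form y' = -b^2·x/(a^2·y), since y = b·sqrt(1 - x^2/a^2) on the upper half.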
Exercise: determine the foci and vertices for the ellipse with general equation 4x^2 + 9y^2 - 48x + 72y + 144 = 0. By a suitable choice of coordinate axes, the equation for any conic can be reduced to one of three simple forms: x^2/a^2 + y^2/b^2 = 1, x^2/a^2 - y^2/b^2 = 1, or y^2 = 2px, corresponding to an ellipse, a hyperbola, and a parabola, respectively. If you are given an equation of an ellipse in the form of a function whose value is a square root, you may need to simplify it to make it look like the equation of an ellipse.

For the strain ellipse, calculate the lengths of the ellipse axes as the square roots of the eigenvalues of the covariance matrix, and the counter-clockwise rotation θ of the ellipse as θ = (1/2)·tan^-1( 2σ_xy / (σ_x^2 - σ_y^2) ).

Rotation of axes, coordinate rotation formulas: if a rectangular xy-coordinate system is rotated through an angle θ to form an x̂ŷ-coordinate system, then a point P(x, y) has coordinates P(x̂, ŷ) in the new system, where the pairs are related by x = x̂·cos θ - ŷ·sin θ and y = x̂·sin θ + ŷ·cos θ, with inverse x̂ = x·cos θ + y·sin θ and ŷ = -x·sin θ + y·cos θ. Example 1: show that the graph of the equation xy = 1 is a hyperbola.
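Completing the square answers the exercise: 4x^2 + 9y^2 - 48x + 72y + 144 = 0 becomes 4(x - 6)^2 + 9(y + 4)^2 = 144, i.e. (x - 6)^2/36 + (y + 4)^2/16 = 1, with center (6, -4), vertices (0, -4) and (12, -4), and foci (6 ± 2√5, -4). A sketch of the reduction for any axis-aligned ellipse (function name mine):

```python
import math

def ellipse_standard_form(A, C, D, E, F):
    """Complete the square on A x^2 + C y^2 + D x + E y + F = 0 (A, C > 0).

    Returns (h, k, a2, b2): the center (h, k) and the denominators of
    (x-h)^2/a2 + (y-k)^2/b2 = 1.
    """
    h = -D / (2 * A)
    k = -E / (2 * C)
    rhs = A * h * h + C * k * k - F   # constant moved to the right-hand side
    return h, k, rhs / A, rhs / C

h, k, a2, b2 = ellipse_standard_form(4, 9, -48, 72, 144)
c = math.sqrt(abs(a2 - b2))   # focal distance from the center (here a2 > b2)
```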
is along the ellipse’s major axis, the correlation matrix is $\sigma' = \begin{pmatrix} \sigma_1'^2 & 0 \\ 0 & \sigma_2'^2 \end{pmatrix}$. The ellipse may be rotated to a different orientation by a 2×2 rotation matrix $R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. The major axis direction (1, 0) is rotated to (cos θ, sin θ) and the minor axis direction (0, 1) is rotated to (−sin θ, cos θ). The center of the circle used to be at the origin. Approximately sketch the ellipse - the major axis of the ellipse is the x-axis. The equation stated is going to have xy terms, and so there needs to be a suitable rotation of axes in order to get the equation in the standard form suitable for the recommended definite integration. The moment of inertia of a point mass with respect to an axis is defined as the product of the mass times the distance from the axis squared. H(x, y) = A x² + B xy + C y² + D x + E y + F = 0 The basic principle of the incremental line tracing algorithms (I wouldn't call them scanline) is to follow the pixels that fulfill the equation as much as possible. Introduce some delay in function (in ms). How might I go about deriving such an equation. For a plain ellipse the formula is trivial to find: y = Sqrt[b^2 - (b^2 x^2)/a^2] But when the axes of the ellipse are rotated I've never been able to figure out how to compute y (and possibly the extents of x). Let us consider a point P(x, y) lying on the ellipse such that P satisfies the definition, i.e. the sum of distances of P from F 1 and F 2 in the plane is a constant 2a. Therefore, equations (3) satisfy the equation for a non-rotated ellipse, and you can simply plot them for all values of b from 0 to 360 degrees. Learn how each constant and coefficient affects the resulting graph. (iii) is the equation of the rotated ellipse relative to the centre. 
Differential Equations (10) Discrete Mathematics (4) Discrete Random Variable (5) Disk Washer Cylindrical Shell Integration (2) Division Tricks (1) Domain and Range of a Function (1) Double Integrals (3) Eigenvalues and Eigenvectors (1) Ellipse (1) Empirical and Molecular Formula (2) Enthalpy Change (2) Expected Value Variance Standard. It can be a parametric formulation (e. (1) Ellipse (2) Rotated Ellipse (3) Ellipse Representing Covariance. Rotated Ellipse The implicit equation x2 xy +y2 = 3 describes an ellipse. (1) which is in the form Ax2+ Bxy+ Cy2= 1, with Aand Cpositive. Writing Equations of Rotated Conics in Standard Form Now that we can find the standard form of a conic when we are given an angle of rotation, we will learn how to transform the equation of a conic given in the form $A{x}^{2}+Bxy+C{y}^{2}+Dx+Ey+F=0$ into standard form by rotating the axes. Let the coordinates of F 1 and F 2 be (-c, 0) and (c, 0) respectively as shown. (a) Find the points at which this ellipse crosses the x-axis. Major axis is vertical. By using this website, you agree to our Cookie Policy. H(x, y) = A x² + B xy + C y² + D x + E y + F = 0 The basic principle of the incremental line tracing algorithms (I wouldn't call them scanline) is to follow the pixels that fulfill the equation as much as possible. Ellipse graph from standard equation. Find dy dx. If and are nonzero, have the same sign, and are not equal to each other, then the graph may be an ellipse. Except for degenerate cases, the general second-degree equation Ax2 + Bxy + Cy2 + Dx + Ey + F = 0 x¿y¿-term x¿ 2 4 + y¿ 1 = 1. asked • 04/02/15 find the equation of the image of the ellipse x^2/4 + y^2/9 when rotated through pi/4 about origin. The Danish author and scientist Piet Hein (1905-1996) dealt with the super ellipse in great detail (book 4). For any ellipse, 0 < e < 1. 1) and (e, f) = (e. Rewrite the equation 2x^2+√3 xy+y^2−2=0 in a rotated x^′ y^′-system without an x^′ y^′-term. 
In other words, the equation of the plane through the center of the circle sloping away from the drawing plane with slope m is given by (3. Solve them for C, D, E. Example : Given ellipse : 4 2 (x − 3) 2 + 5 2 y 2 = 1 b 2 X 2 + a 2 Y 2 = 1 a 2 > b 2 i. For the Earth–sun system, F1 is the position of the sun, F2 is an imaginary point in space, while the Earth follows the path of the ellipse. The equation is (x - h) squared/a squared plus (y - k) squared/a squared equals 1. This makes the analysis somewhat easier. Rotated Parabolas and Ellipse. Rotate the ellipse by applying the equations: RX = X * cos_angle + Y * sin_angle RY = -X * sin_angle + Y * cos_angle. The Steiner ellipse has the minimal area surrounding a triangle. A more general figure has three orthogonal axes of different lengths a, b and c, and can be represented by the equation x 2 /a 2 + y 2 /b 2 + z 2. 3 Major axis, minor axis and rotated angle of a ellipse Find the major axis, minor axis and rotated angle, where the major axis is twice of the longest radius and the minor axis is also twice of the shortest radius. Show that this represents elliptically polarized light in which the major axis of the ellipse makes an angle. This ellipse is called the distortion ellipse. − <: Ellipse − >: Hyperbola; Firstly, if B is not zero then the graph represents a rotated conic. Consider an ellipse that is located with respect to a Cartesian frame as in figure 3 (a ≥ b > 0, major axis on x-axis, minor axis on y-axis). In mathematics, a rotation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x'y'-Cartesian coordinate system in which the origin is kept fixed and the x' and y' axes are obtained by rotating the x and y axes counterclockwise through an angle. How might I go about deriving such an equation. A conic in a rotated coordinate system takes on the form of , where the prime notation represents the rotated axes and associated coefficients. 
The point is expressed as $y_i' = y_i \cos\theta - x_i \sin\theta$. I have the vertices for the major axis: d1(0,0. for the ellipse fitting is to transform Eq. and through an angle of 30°. Activity 4: Determining the general equation of an ellipse/ Determining the foci and vertices of an ellipse. An ellipse is a unique figure in astronomy as it is the path of any orbiting body around another. Then: (Canonical equation of an ellipse) A point P=(x,y) is a point of the ellipse if and only if Note that for a = b this is the equation of a circle. I am trying to find an algorithm to derive the 4 angles, from the centre of a rotated ellipse to its extremities. The radii of the ellipse in both directions are then the variances. Solved Examples Q 1: Find out the coordinates of the foci, vertices, lengths of major and minor axes, and the eccentricity of the ellipse 9x² + 4y² = 36. A positive number has a square root. Solve them for C, D, E. Parabola if A or C = 0 therefore AC = 0 B. 4 we saw that the graph is a hyperbola when AC < 0. I first solved the equation of the ellipse for y, getting y= '. The length a always refers to the major axis. Erase the previous Ellipse by drawing the Ellipse at same point using black color. Since c < a the eccentricity is always less than 1 in the case of an ellipse. Because A = 7, and C = 13, you have (for 0 ≤ θ < π/2) Therefore, the equation in the x'y'-system is derived by making the following substitutions. For a given chord or triangle base, the. (3), we get $\frac{(x' + D'/(2A'))^2}{F'/A'} + \frac{(y' + E'/(2C'))^2}{F'/C'} = 1$ (6). A ray of light passing through a focus will pass through the other focus after a single bounce (Hilbert and Cohn-Vossen 1999, p. Then the equation of the ellipse in this new coordinate system becomes. If the major axis lies along the y-axis, a and b are swapped in the equation of an ellipse (below). 
Notes College Algebra teaches you how to find the equation of an ellipse given a graph. (An ellipse where a = b is in fact a circle. First I want to look at the case when , and. The ellipse that is most frequently studied in this course has Cartesian equation; where. Substituting in the values for x and y above, we get an equation for the new coordinates as a function of the old coordinates and the angle of rotation: x' = x × cos (β) - y × sin (β) y' = y × cos (β) + x × sin (β). 4 we saw that the graph is a hyperbola when AC < 0. The major axis is parallel to the X axis. The ellipse can be rotated. A constructional method for drawing an ellipse in drafting and engineering is usually referred to as the "4 center ellipse" or the "4 arc ellipse". Figure 3: Polarization Ellipse. If an ellipse is rotated about one of its principal axes, a spheroid is the result. Determine the minimum and maximum X and Y limits for the ellipse. At first blush, these are really strange exponents. ) (11 points) The equation x2−xy+y2 = 3 represents a “rotated ellipse”—that is, an ellipse whose axes are not parallel to the coordinate axes. y axis [see Figs. is along the ellipse’s major axis, the correlation matrix is σ′ = σ′2 1 0 0 σ′2 2. A circle in 3D is parameterized by six numbers: two for the orientation of its unit normal vector, one for the radius, and three for the circle center. The equation of the ellipse we discussed in class is 9 x2 - 4 xy + 6 y2 = 5. the sum of distances of P from F 1 and F 2 in the plane is a constant 2a. Log InorSign Up. I know that i can draw it using ellipse equation, then rotate it, compute points and connect with lines. Erase the previous Ellipse by drawing the Ellipse at same point using black color. HELP?! Rotate the axes to eliminate the xy-term in the equation. If the major axis lies along the y-axis, a and b are swapped in the equation of an ellipse (below). Once we have those we can sketch in the ellipse. 
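The substitution formulas above are easy to sanity-check numerically: generate points on a rotated ellipse parametrically, rotate them back, and confirm they land on the axis-aligned ellipse x²/a² + y²/b² = 1. A minimal Python sketch (the values of a, b and the angle are arbitrary choices, not from the text):

```python
import math

def rotate(x, y, beta):
    """Rotate the point (x, y) counterclockwise by angle beta."""
    return (x * math.cos(beta) - y * math.sin(beta),
            y * math.cos(beta) + x * math.sin(beta))

a, b, beta = 3.0, 1.5, math.radians(30)

# Points on the rotated ellipse, generated parametrically.
pts = [rotate(a * math.cos(t), b * math.sin(t), beta)
       for t in (0.1 * k for k in range(63))]

# Rotating back by -beta must recover points on the axis-aligned ellipse.
for X, Y in pts:
    x, y = rotate(X, Y, -beta)
    assert abs(x**2 / a**2 + y**2 / b**2 - 1) < 1e-9
```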
Center of ellipse will be the mid point of first and second point always. Accordingly, we can find the equation for any ellipse by applying rotations and translations to the standard equation of an ellipse. General Equation. To rotate the graph of the parabola about the origin, we rotate each point individually. When a>b, we have a prolate spheroid, that is, an ellipse rotated around its major axis; when a K > -1 = ellipse, -1 = parabolic, and K < -1 is hyperbolic; R is the radius of curvature. Consider an ellipse that is located with respect to a Cartesian frame as in figure 3 (a ≥ b > 0, major axis on x-axis, minor axis on y-axis). If $$\displaystyle D = b^2- 4ac$$, then it's an ellipse for $$\displaystyle D<0$$, a parabola for $$\displaystyle D = 0$$, and a hyperbola for $$\displaystyle D>0$$. Apply a square completion method to Eq. Here is a short (and probably inaccurate, because I don't really understand the math) explanation for how it works. Rotate the ellipse by applying the equations: RX = X * cos_angle + Y * sin_angle RY = -X * sin_angle + Y * cos_angle. Matrix transformations are affine and map a point such as that to the expected point on the rotated ellipse, but these transformations don't work like that. For the Earth–sun system, F1 is the position of the sun, F2 is an imaginary point in space, while the Earth follows the path of the ellipse. Then the equation of the ellipse in this new coordinate system becomes. We have also seen that translating by a curve by a fixed vector ( h , k ) has the effect of replacing x by x − h and y by y − k in the equation of the curve. Draw the Ellipse at calculated point using white color. To verify, here is a manipulate, which plots the original -3. Question 625240: If the ellipse defined by the equation 16x^2+4y^2+96x-8y+84=0 is translated 6 units down and 7 units to the left, write the standard equation of the resulting ellipse Answer by Edwin McCravy(18045) (Show Source):. phi is the rotation angle. 
Two fixed points inside the ellipse, F1 and F2 are called the foci. 829648*x*y - 196494 == 0 as ContourPlot then plots the standard ellipse equation when rotated, which is. Expanding the binomial squares and collecting like terms gives. If it is rotated about the major axis, the spheroid is prolate, while rotation about the minor axis makes it oblate. If C∆ > 0, we have an imaginary ellipse, and if ∆ = 0, we have a point ellipse. The rotated axes are denoted as the x′ axis and the y′ axis. Rotation Creates the ellipse by appearing to rotate a circle about the first axis. Therefore, equations (3) satisfy the equation for a non-rotated ellipse, and you can simply plot them for all values of b from 0 to 360 degrees. A Rotated Ellipse In this handout I have used Mathematica to do the plots. the sum of distances of P from F 1 and F 2 in the plane is a constant 2a. The equations of tangent and normal to the ellipse\frac{{{x^2}}}{{{a^2}}} + \frac{{{y^2}}}{{{b^2}}} = 1$$at the point$$\left( {{x_1},{y_1}} \right)$$are$$\frac. Repeat from Step 1. Let x' and y' be the new set of axes along the principal axes of the ellipse. lationship between two images is pure rotation, i. And for a hyperbola it is: x 2 a 2 − y 2 b 2 = 1. Because A = 7, and C = 13, you have (for 0 θ < π/2) Therefore, the equation in the x'y'-system is derived by making the following substitutions. The major axis of this ellipse is horizontal and is the red segment from (-2, 0) to (2, 0). The amount of correlation can be interpreted by how thin the ellipse is. In mathematics, a rotation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x'y'-Cartesian coordinate system in which the origin is kept fixed and the x' and y' axes are obtained by rotating the x and y axes counterclockwise through an angle. Here is a reference to plotting an ellipse, without rotation of the major axis from the horizontal: Ellipse in a chart. Here a > b > 0. Sketch the graph of Solution. 
He provided me with some equations that I combined with a neat ellipse display program written by Wayne Landsman for the NASA Goddard IDL program library to come up with a "center of mass" ellipse fitting program, named Fit_Ellipse. Put ellipse equation in. Thus an ellipse may be drawn using two thumbtacks and a string. The velocity equation for a hyperbolic trajectory has either + , or it is the same with the convention that in that case a is negative. The Rotated Ellipsoid June 2, 2017 An ellipse has 2D geometry and an ellipsoid has 3D geometry. Moment of inertia is defined with respect to a specific rotation axis. A more general figure has three orthogonal axes of different lengths a, b and c, and can be represented by the equation x²/a² + y²/b² + z²/c² = 1. Cartesian Equations of the ellipse and hyperbola. x²/a² + y²/b² = 1. Ellipse drawing tool. If the center of the ellipse is at point (h, k) and the major and minor axes have lengths of 2a and 2b respectively, the standard equation is. The chord perpendicular to the major axis at the center is the minor axis. If σ ≠ τ the set Q is a hyperbola when F ≠ 0. The longer axis, a, is called the semi-major axis and the shorter, b, is called the semi-minor axis. This video derives the formulas for rotation of axes and shows how to use them to eliminate the xy term from a general second degree polynomial. Applying the methods of Equation of a Transformed Ellipse now leads to the following equation for a standard ellipse which has been rotated through an angle α. Standard Equations of Ellipse. (Since I wasn't asked for the length of the minor axis or the location of the co-vertices, I don't need the value of b itself. I'm making a small sample, kinda like line rider except with less functionality and an ellipse. 
A constructional method for drawing an ellipse in drafting and engineering is usually referred to as the "4 center ellipse" or the "4 arc ellipse". 3) Calculate the lengths of the ellipse axes, which are the square root of the eigenvalues of the covariance matrix: A E C R = H L A E C A J R = H Q A O : ? ; 4) Calculate the counter‐clockwise rotation (θ) of the ellipse: à L 1 2 Tan ? 5 d l 1 = O L A ? P N = P E K p I l 2 T U : ê T ; 6 F : ê U ; 6 p h. +- hat j b. -The equation x2 − xy + y2 = 3 represents a "rotated" ellipse, which means the axes of the ellipse are not parallel to the coordinate axes (feel free to graph the ellipse on wolframalpha to get a picture). This way we only draw one object (instead of a thousand) and x and y are now the arrays of all of these points (or coordinates) for the ellipse. If it is rotated about the major axis, the spheroid is prolate, while rotation about the minor axis makes it oblate. Express the equation in the standard form of a conic section. Several exam. This gives a surface composed of many "truncated cones;'' a truncated cone is called a frustum of a cone. In this section, we will discuss the equation of a conic section which is rotated by. Given an ellipse on the coordinate plane, Sal finds its standard equation, which is an equation in the form (x-h)²/a²+(y-k)²/b²=1. The objective is to rotate the x and y axes until they are parallel to the axes of the conic. This video derives the formulas for rotation of axes and shows how to use them to eliminate the xy term from a general second degree polynomial. Here is a simple calculator to solve ellipse equation and calculate the elliptical co-ordinates such as center, foci, vertices, eccentricity and area and axis lengths such as Major, Semi Major and Minor, Semi Minor axis lengths from the given ellipse expression. 
Question 625240: If the ellipse defined by the equation 16x^2+4y^2+96x-8y+84=0 is translated 6 units down and 7 units to the left, write the standard equation of the resulting ellipse Answer by Edwin McCravy(18045) (Show Source):. The following equation on the polar coordinates (r, θ) describes a general ellipse with semidiameters a and b, centered at a point (r 0, θ 0), with the a axis rotated by φ relative to the polar axis:. 5 (a) with the foci on the x-axis. The distance between the center and either focus is c, where c 2 = a 2 - b 2. ; If and are equal and nonzero and have the same sign, then the graph may be a circle. An ellipse is a circle scaled (squashed) in one direction, so an ellipse centered at the origin with semimajor axis a and semiminor axis b < a has equation. Since (− 6 3) 2 − 4 ⋅ 7 ⋅ 13 = − 346 < 0 and A ≠ C since 7 ≠ 13, the equation satisfies the conditions to be an ellipse. 1) and we are back to equations (2). Here is a reference to plotting an ellipse, without rotation of the major axis from the horizontal: Ellipse in a chart. First, notice that the equation of the parabola y = x^2 can be parametrized by x = t, y = t^2, as t goes from -infinity to infinity; or, as a column vector, [x] = [t] [y] = [t^2]. | 2020-11-25T04:59:59 | {
"domain": "inognicasa.it",
"url": "http://inognicasa.it/equation-of-a-rotated-ellipse.html",
"openwebmath_score": 0.8127440810203552,
"openwebmath_perplexity": 457.9006674781333,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9859363737832231,
"lm_q2_score": 0.8688267864276108,
"lm_q1q2_score": 0.8566079312561694
} |
https://math.stackexchange.com/questions/1425468/what-is-the-chance-of-a-sample-in-white-noise-to-be-a-local-extremum | # What is the chance of a sample in white noise to be a local extremum?
I draw a long sequence of values from a uniform or normal distribution. Observing the resulting sequence, what is the chance that any point in the sequence (aside from the first and last, where it is not defined) is a local extremum? I.e., either a local maximum (the point is larger than both its neighbors) or a local minimum (the point is smaller than both its neighbors).
At first glance, you would think that once you've drawn the 1st point, the probability that the 2nd is smaller/bigger than the 1st is 50%, and then that the 3rd is bigger/smaller is again 50%, hence a 0.5·0.5 + 0.5·0.5 = 0.5 chance. However, there is additional information in the 2nd point being larger than the 1st, and hence a bigger-than-50% chance that the 3rd point is also smaller than the 2nd.
Matlab code which estimates this chance gives 2/3 (≈0.667):
sum(diff(sign(diff(normrnd(0,1,1,10000))))~=0)/10000 % Normal distribution
sum(diff(sign(diff(rand(1,10000))))~=0)/10000 % Uniform distribution
I've tried to apply combinatoric reasoning to explain this result: since being an extremum is determined by 3 points only, let's narrow the problem to observing only 3 points. There are 6 possible orderings of 3 points, in 4 of which the second point is a local extremum, hence the 2/3 chance.
Is this a correct reasoning? Are there other better ways to approach the problem (Integrating the probability density function)?
• By probability you here mean the expectancy of the number of local extrema relative to the number of points in each realization, correct? Sep 7 '15 at 18:14
• Indeed. The expected proportion of points which are local extrema from all the points in an instance of a randomized sequence. Sep 7 '15 at 19:23
• You have the right answer and the right reasoning. Another way to look at things: if you have a cube with the position in the cube dictated by the uniform distribution, then its individual coords are uniformly distributed. The planes that satisfy the equations $x=y$, $y=z$ and $x=z$ cut the cube into six equal parts. Two of them satisfy your condition. This uses the fact that distribution is irrelevant, as pointed out by Ian and A. Donda. Sep 7 '15 at 20:33
I have only a partial theoretical answer, plus another numeric simulation.
As pointed out in a comment to Ian's answer, the solution does not depend on the distribution of the data (as long as it is continuous), so I'm simply using uniformly distributed data on [0, 1].
First, I can confirm that in simulations the expected frequency of local extrema appears to not depend on the length $n$ of sequences, and appears to be exactly 2/3. Here is Matlab code to demonstrate this:
N = 1e6; % number of realizations
n = 100; % length of sequence
ext = diff(sign(diff(rand(n, N)))) ~= 0; % extrema
mean(mean(ext)) % relative frequency across points, estimated expectancy across realizations
First, why exactly 2/3? As the original poster pointed out, the answer of course depends on the probability of occurrences of different orderings, for simplicity for $n = 3$. There are six such orderings: (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 1, 2), (3, 2, 1). Because the underlying three random variables are independent and identically distributed, any permutation of the three does not change their joint distribution. The probability distribution across orderings therefore has to be invariant under these permutations. This means this distribution must be uniform, and we therefore have a probability of 1/6 for each ordering.
Since four of the orderings lead to a local extremum, namely those with a 1 or a 3 in the middle position, we have a probability of 4/6 = 2/3 for a local extremum.
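This counting argument is easy to verify by brute force; a small Python check (any language would do):

```python
from itertools import permutations

orderings = list(permutations([1, 2, 3]))
# The middle element is a local extremum iff it is the max or the min.
extrema = [o for o in orderings if o[1] == max(o) or o[1] == min(o)]

assert len(orderings) == 6
assert len(extrema) == 4   # middle element is a 1 or a 3 -> probability 4/6 = 2/3
```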
As pointed out by Ian, the same probability does not necessarily apply to the situation with $n > 3$. If there is a local maximum at one point, there cannot be a local maximum at each of the neighboring points, i.e. we have a negative local correlation w.r.t. the occurrence of a local maximum. It is however possible that there is a local minimum at the next position. I do not see a way to sort out these correlations theoretically, but I looked at them in the simulated data:
R = corr(ext');
The correlation matrix looks like this:
It appears that the autocorrelation is stationary. Averaging across diagonals we can extract the autocorrelation function
lags = -(n - 3) : n - 3;
r = nan(size(lags));
for k = 1 : numel(lags)
r(k) = mean(diag(R, lags(k)));
end
with this result:
According to this we do have a small negative autocorrelation (-0.125) at lag 1, but an even smaller positive autocorrelation (0.025) at lag 2. Beyond that, correlations are smaller than $\pm$0.001. So my only guess is that the negative autocorrelation at lag 1 is somehow perfectly counterbalanced by the positive autocorrelation at lag 2 so that the average probability of occurrence of a local extremum stays at 2/3.
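In fact the lag-1 and lag-2 correlations can be computed exactly by the same equal-orderings argument, enumerating permutations of 4 and 5 elements. A Python sketch in exact rational arithmetic; it yields exactly −1/8 at lag 1 and 1/40 at lag 2, matching the simulated −0.125 and 0.025:

```python
from fractions import Fraction
from itertools import permutations

def is_ext(a, i):
    """Is position i a (strict) local extremum of the sequence a?"""
    return (a[i] > a[i-1] and a[i] > a[i+1]) or (a[i] < a[i-1] and a[i] < a[i+1])

def corr(lag):
    n = 3 + lag                  # shortest window containing both interior positions
    perms = list(permutations(range(n)))
    p = Fraction(2, 3)           # marginal P(extremum) at any interior position
    joint = Fraction(sum(is_ext(a, 1) and is_ext(a, 1 + lag) for a in perms),
                     len(perms))
    var = p * (1 - p)
    return (joint - p * p) / var

print(corr(1), corr(2))          # -1/8 and 1/40
```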
Generalization of the order-statistics approach: Just as we enumerated the possible orderings for $n = 3$ above, we can do so also for larger $n$ computationally, and then determine the probability of a local extremum under the assumption (following the same argument as above) that all orderings are equally possible. Here is an implementation in Matlab which uses the Symbolic Math Toolbox to perform exact arithmetic on rational numbers:
for n = 3 : 10
uo = perms(1 : n);
ext = diff(sign(diff(uo'))) ~= 0;
ratio = sprintf('%d / %d', sum(sum(ext)), numel(ext));
fprintf('%d : %s = %s\n', n, ratio, char(sym(ratio)))
end
The result is
3 : 4 / 6 = 2/3
4 : 32 / 48 = 2/3
5 : 240 / 360 = 2/3
6 : 1920 / 2880 = 2/3
7 : 16800 / 25200 = 2/3
8 : 161280 / 241920 = 2/3
9 : 1693440 / 2540160 = 2/3
10 : 19353600 / 29030400 = 2/3
The first number in the ratios is the number of local extrema, across all $n - 2$ positions and all $n!$ possible orderings, the second is the number of possible local extrema, $(n - 2) ~ n!$.
So now we have the computational but exact proof that the probability is exactly 2/3 for $n$ up to 10. Beyond that, the number of possible orderings becomes so large that it takes very long to evaluate.
I have the feeling this could lead to a proof by induction for arbitrary $n$, but do not know yet how to do so. (In fact, induction is not even needed: by the three-point argument each interior position is a local extremum with probability exactly 2/3, and linearity of expectation holds regardless of the correlations between positions, so the expected proportion is exactly 2/3 for every $n$.)
Some thoughts here:
Consider a sequence of $N$ iid random variables $X_i$. We have $N-2$ Bernoulli random variables $\{ B_i \}_{i=2}^{N-1}$ which are $1$ when $X_i$ is an extremum and $0$ otherwise. It is not immediately obvious that the $B_i$ are even independent; certainly the version where $B_i$ is $1$ when $X_i$ is a maximum are not independent! So there may be some nontrivial coupling to be considered here.
We can remove this coupling (if it exists) by restricting attention to the case $N=3$ for a moment. Denote the PDF of the variables separately by $f$. Then the probability that $B_2=1$ is
$$\int_{-\infty}^\infty \int_{-\infty}^\infty \int_{\max \{ x,z \}}^\infty f(x) f(y) f(z) dy dx dz + \int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^{\min \{ x,z \}} f(x) f(y) f(z) dy dx dz.$$
This can be simplified by splitting over which of $x,z$ is the maximum/minimum. In the uniform case it is straightforward to compute analytically.
In general this probability is not independent of the distribution used. For example, in the Bernoulli case, this quantity is $p(1-p)^2+p^2(1-p)=p-p^2$, so it is larger for $p$ closer to $1/2$. So while I think you are right that it is $2/3$ in the uniform case (I seem to recall a geometric proof that each of the $n!$ "ordered tetrahedra" of the $n$-cube have equal probability under the uniform distribution), I would expect it to be different in general.
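The Bernoulli value $p - p^2$ can be double-checked by summing over the eight outcomes of three independent draws (a small Python sketch; only the strict-extremum patterns (0,1,0) and (1,0,1) contribute):

```python
from itertools import product

def p_extremum(p):
    """P(middle of three iid Bernoulli(p) draws is a strict local extremum)."""
    total = 0.0
    for x, y, z in product((0, 1), repeat=3):
        if (y > x and y > z) or (y < x and y < z):   # only (0,1,0) and (1,0,1)
            total += (p if x else 1 - p) * (p if y else 1 - p) * (p if z else 1 - p)
    return total

for p in (0.2, 0.5, 0.9):
    assert abs(p_extremum(p) - (p - p * p)) < 1e-12
```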
• The property of being a local extremum only depends on the ordering of data. It is therefore invariant under strictly increasing transformations of the data, which means every continuous distribution can be transformed into the uniform distribution without changing extrema. I therefore believe that the probability in question should not depend on the distribution. Sep 7 '15 at 18:16
• @A.Donda Indeed you're right; concretely, if $X$ is continuously distributed with CDF $F$, then $F$, considered with codomain $(0,1)$, is invertible. Moreover $F^{-1}$ is an increasing function $F^{-1}(X)$ is $U(0,1)$ distributed. This will also work if $X$ is not continuous but rather merely has no discrete part. That's an interesting discrepancy, which in view of the central limit theorem is surprising, at least to me.
– Ian
Sep 7 '15 at 18:26
• I'd be interested to know what you think of my partial answer. Sep 7 '15 at 19:34
• Minor error in my previous comment: $F(X)$ is $U(0,1)$. $F^{-1}(X)$ needn't make sense.
– Ian
Sep 8 '15 at 15:18 | 2022-01-18T00:10:15 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1425468/what-is-the-chance-of-a-sample-in-white-noise-to-be-a-local-extremum",
"openwebmath_score": 0.8878214359283447,
"openwebmath_perplexity": 299.72183470168204,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9859363754361599,
"lm_q2_score": 0.8688267728417087,
"lm_q1q2_score": 0.8566079192974502
} |
https://math.stackexchange.com/questions/3340314/what-does-the-math-symbol-propto-mean | # What does the math symbol $\propto$ mean?
I came across this symbol in my engineering class and I have never seen it before. Anyone know this?
• You know, you can always raise your hand and ask :) Likely others in the class were also not familiar with it. – Jair Taylor Aug 31 '19 at 18:32
• Well.. More like reading it in my book and came across it :) – C. K. Aug 31 '19 at 18:42
It typically means proportional to. Such that
If $$y=cx$$ for some constant $$c$$ we say
$$y\propto x$$ so that when x grows, y grows proportionally by the ratio $$c$$
Alternatively inverse proportionality is when
$$y=c\frac{1}{x}$$ so that when x gets smaller, y gets bigger proportionally by $$c$$
$$y\propto \frac{1}{x}$$ to my knowledge there isn’t a symbol specifically for inverse proportionality and $$y\propto \frac{1}{x}$$ is used instead
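As a concrete illustration with made-up numbers: if $y \propto x$, the ratio $y/x$ is the same constant $c$ for every data point, which gives a quick way to test for proportionality and recover $c$:

```python
xs = [1.0, 2.0, 5.0, 8.0]
ys = [2.5, 5.0, 12.5, 20.0]      # here y = 2.5 * x (hypothetical data)

ratios = [y / x for x, y in zip(xs, ys)]
c = ratios[0]
assert all(abs(r - c) < 1e-12 for r in ratios)   # constant ratio -> y is proportional to x
print("constant of proportionality:", c)
```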
• Is there a symbol for inverse proportionality as well? – C. K. Aug 31 '19 at 18:34
• No I don’t think so typically $$y\propto \frac{1}{x}$$ is used – Colin Hicks Aug 31 '19 at 18:43
• This solved my question. Thanks – C. K. Aug 31 '19 at 18:44
As others have pointed out, this symbol means "is proportional to." That is, $$a \propto b$$ means that there is some constant $$C$$ such that $$a = Cb.$$
That being said, in the interests of "teaching a man to fish", it is worth pointing out that it is often possible to "reverse engineer" the meaning of specific mathematical symbols using DeTeXify. In this case, drawing the mysterious symbol gives
Either \propto or \varpropto seems to give the correct symbol. TeX often uses "var" to indicate a variation of a symbol (for example \epsilon $$\epsilon$$ vs \varepsilon $$\varepsilon$$; \theta $$\theta$$ vs \vartheta $$\vartheta$$), so "propto" seems like a reasonable guess as to what this symbol should be called. Googling this term gives a large number of results, many of which are relevant.
• This is awesome! I'll use this in the future if I come across more mystic symbols! – C. K. Aug 31 '19 at 18:45
• Glad I could be of use. Honestly, DeTeXify is one of my favorite things on the interwebs. There is also a free-to-use desktop version (you can pay something like 8 bucks to remove shareware reminders, if you desire---I think it is worth it to support a snazzy bit of software). – Xander Henderson Aug 31 '19 at 18:48
$$\propto$$ means "proportional to."
This symbol is often used, primarily in physics, to denote direct proportionality. so the RHS and LHS of your expression only differ by some scalar multiple. | 2020-08-12T01:35:09 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/3340314/what-does-the-math-symbol-propto-mean",
"openwebmath_score": 0.8876305818557739,
"openwebmath_perplexity": 700.2627493610719,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9859363723369031,
"lm_q2_score": 0.8688267745399466,
"lm_q1q2_score": 0.8566079182790873
} |
https://www.physicsforums.com/threads/finding-eigenvectors-of-1-1-1-1-1-1-1-1-1.621052/ | # Homework Help: Finding eigenvectors of [[1,-1,-1],[-1,1,-1],[-1,-1,1]]
1. Jul 15, 2012
### thetrystero
The eigenvalues of the 3x3 matrix [[1,-1,-1],[-1,1,-1],[-1,-1,1]] are 2, 2, and -1.
how can i compute the eigenvectors?
for the case lambda=2, for example, i end up with an augmented matrix [[-1,-1,-1,0],[-1,-1,-1,0],[-1,-1,-1,0]] so i'm stuck at this point.
much appreciated.
2. Jul 15, 2012
### Ray Vickson
So, you need to solve the linear system
$$x_1 + x_2 + x_3 = 0\\ x_1 + x_2 + x_3 = 0\\ x_1 + x_2 + x_3 = 0$$
There are lots of solutions. In fact, you should be able to find two linearly independent solution vectors, corresponding to the double eigenvalue 2.
RGV
3. Jul 15, 2012
### HallsofIvy
I prefer to work from the basic definitions (perhaps I just never learned these more sophisticated methods!):
Saying that 2 is an eigenvalue of this matrix means there exists a non-zero vector such that
$$\begin{bmatrix}1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}= \begin{bmatrix}x - y- z\\ -x+ y- z \\ -x- y+ z\end{bmatrix}= \begin{bmatrix}2x \\ 2y\\ 2z\end{bmatrix}$$
which gives the three equations x- y- z= 2x, -x+ y- z= 2y, -x- y+ z= 2z which are, of course, equivalent to -x- y- z= 0, -x- y- z= 0, -x- y- z= 0. Those three equations are the same. We can, for example, say that z= -x- y so that any vector of the form <x, y, -x- y>= <x, 0, -x>+ <0, y, -y>= x<1, 0, -1>+ y<0, 1, -1> is an eigenvector. Notice that the eigenvalue, 2, not only has algebraic multiplicity 2 (it is a double root of the characteristic equation) but has geometric multiplicity 2 (the space of all corresponding eigenvectors is 2 dimensional).
Similarly, the fact that -1 is an eigenvalue means there are x, y, z, satisfying x- y- z= -x, -x+ y- z= -y, -x- y+ z= -z which are, of course, equivalent to 2x- y- z= 0, -x+ 2y- z= 0, -x- y+ 2z= 0. If we subtract the second equation from the first, we eliminate z to get 3x- 3y= 0 so y= x. Putting that into the third equation, 2x+ 2z= 0 so z= -x.
Any eigenvector corresponding to eigenvalue -1 is of the form <x, x, -x>= x<1, 1, -1>.
4. Jul 15, 2012
### thetrystero
by that reasoning, can i not have <1,-1,0> and <-1,1,0> as my two solutions for eigenvalue 2? but wolframalpha says i need to have the case where y=0.
also, i think the solution for eigenvalue -1 is <1,1,1>
5. Jul 16, 2012
### Ray Vickson
Your two listed vectors (for eigenvalue 2) are just multiples of each other. You need two *linearly independent* eigenvectors, such as <1,-1,0> and <1,0,-1>, or <0,-1,1> and <1,-1/2, -1/2>, etc. There are infinitely many possible pairs of vectors <x1,y1,z1> and <x2,y2,z2> that are linearly independent and satisfy the equation x+y+z=0. Any such pair will do.
RGV
Last edited: Jul 16, 2012
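Ray's point — that any pair of linearly independent solutions of $x+y+z=0$ will do for the eigenvalue 2 — is easy to confirm numerically. Here is a short sketch (not part of the original thread) using numpy:

```python
import numpy as np

A = np.array([[1.0, -1.0, -1.0],
              [-1.0, 1.0, -1.0],
              [-1.0, -1.0, 1.0]])

# eigvalsh handles symmetric matrices and returns eigenvalues in
# ascending order: expect -1, 2, 2.
vals = np.linalg.eigvalsh(A)
print(vals)

# Any two linearly independent solutions of x + y + z = 0 are
# eigenvectors for eigenvalue 2:
for v in (np.array([1.0, -1.0, 0.0]), np.array([1.0, 0.0, -1.0])):
    assert np.allclose(A @ v, 2 * v)

# ... while <1, 1, 1> spans the eigenspace of eigenvalue -1:
w = np.array([1.0, 1.0, 1.0])
assert np.allclose(A @ w, -w)
```

Any other pair satisfying x + y + z = 0, such as <0, 1, -1> and <1, -1/2, -1/2>, passes the same check — nothing singles out the choices with y = 0 or z = 0.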
6. Jul 16, 2012
### thetrystero
yes, i had thought of that, but found it uncomfortable that of all the many possibilities, both my professor and wolframalpha chose the cases y=0 and z=0 as solutions, so i was wondering what made these two special compared to the others.
7. Jul 17, 2012
### Ray Vickson
There is nothing special about these choices, except for the fact that they both have one component = 0 so are, in a sense, the simplest possible. However, you could equally take x=0 and y=0 or x=0 and z=0.
RGV
8. Jul 17, 2012
### HallsofIvy
If you have a vector that depends upon parameters, say, <x, y, -x- y> as I have above, then choosing x= 0, y= 1 gives you <0, 1, -1> and choosing x= 1, y= 0 gives <1, 0, -1>. That is, in effect, the same as writing <x, y, -x- y>= <x, 0, -x>+ <0, y, -y>= x<1, 0, -1>+ y<0, 1, -1>, showing that any such vector is a linear combination of <1, 0, -1> and <0, 1, -1>. | 2018-09-20T08:25:59 | {
"domain": "physicsforums.com",
"url": "https://www.physicsforums.com/threads/finding-eigenvectors-of-1-1-1-1-1-1-1-1-1.621052/",
"openwebmath_score": 0.9078712463378906,
"openwebmath_perplexity": 2063.505154144845,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9859363706839658,
"lm_q2_score": 0.8688267711434708,
"lm_q1q2_score": 0.8566079134942621
} |
https://math.stackexchange.com/questions/1741273/how-can-i-know-the-analytic-continuation-exists-in-certain-cases | # How can I know the analytic continuation exists in certain cases?
As pointed in Does the analytic continuation always exists? we know it doesn't always exist.
But: take the $\Gamma$ function: the first definition everyone meet is the integral one: $$z\mapsto\int_{0}^{+\infty}t^{z-1}e^{-t}\,dt$$ which defines an holomorphic function on the half plane $\{\Re z>0\}$. Moreover we immediately get the functional equation: $$\Gamma(z+1)=z\Gamma(z)\;,\;\;\;\forall\; \Re z>0.$$ This equation is used to extend the function on the whole complex plane (minus the negative integers)... but: WHY CAN WE DO THIS?!
We know that there is an holomorphic function $\Gamma$ which can be expressed as the integral on that half plane. Why are we allowed to write $$\Gamma\left(\frac12\right)=-\frac12\Gamma\left(-\frac12\right)$$ for example? LHS is defined, RHS, NOT!!! But where's the problem? Simply let's define $\Gamma\left(-\frac12\right)$ in such a way... but why can we do this? How can I know that this function I named $\Gamma$ which is holomorphic on the above half plane admits an extension?
• You are just using the same name for two different objects. – N74 Apr 13 '16 at 20:11
Once you have the functional equation for $\Gamma$, we can define a new function $\tilde \Gamma$ defined on the half-plane $\operatorname{Re} z > -1$ (except $z=0$) by $$\tilde \Gamma(z) = \frac1z \Gamma(z+1).$$ It's clear that $\tilde \Gamma$ is holomorphic on $\{ \operatorname{Re} z > -1 \} \setminus \{ 0 \}$, and coincides with $\Gamma$ on $\operatorname{Re} z > 0$ (because of the functional equation). In other words, $\tilde \Gamma$ is an analytic continuation of $\Gamma$ to $\{ \operatorname{Re} z > -1 \} \setminus \{ 0 \}$. So we may as well call $\tilde \Gamma$ by $\Gamma$.
Repeating the above construction, we can define a "new $\Gamma$-function" on successively larger sets until we get something defined and holomorphic on the whole complex plane except the non-positive integers.
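This step-by-step extension can even be watched numerically. Below is an illustrative Python sketch (not part of the original answer) that builds the continuation purely from the functional equation, taking values on the right half-line from `math.gamma` and using the library's fully extended Gamma only as an independent reference:

```python
import math

def gamma_extended(x):
    """Gamma on the real line, built only from values at x > 0 plus the
    functional equation Gamma(x) = Gamma(x + 1) / x (one step per unit)."""
    if x > 0:
        return math.gamma(x)          # the "integral-defined" region
    if x == int(x):
        raise ValueError("pole at a non-positive integer")
    return gamma_extended(x + 1) / x  # one step of the continuation

print(gamma_extended(0.5))    # 1.7724538509... (= sqrt(pi))
print(gamma_extended(-0.5))   # -3.5449077018... (= -2 sqrt(pi))

# The continuation agrees with the fully extended Gamma wherever defined:
assert math.isclose(gamma_extended(-0.5), math.gamma(-0.5))
assert math.isclose(gamma_extended(-2.5), math.gamma(-2.5))
```

The line $\Gamma(-1/2) = \Gamma(1/2)/(-1/2)$ is exactly the recursion's single step: the value at $-1/2$ is not assumed, it is *forced* by the values on the right half-plane.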
• I don't agree. you are totally obfuscating the process of analytic continuation this way : what about the analytic continuation on a path of intersecting disks ? how do you know that the analytic continuation doesn't depend on the path (the fact that $\Gamma(s)$ is meromorphic ?) – reuns Apr 13 '16 at 21:47
The following are citations from the classic Applied and Computational Complex Analysis, Vol. I by P. Henrici. The chapter 3: Analytic Continuation provides a thorough treatise of the theme. Here we look at two aspects, which should help to clarify the situation.
At first we take a look when two functions $f(z)$ and $g(z)$ are analytic continuations from each other.
Theorem 3.2d: (Fundamental lemma on analytic continuation)
Let $Q$ be a set with point of accumulation $q$, and let $R$ and $S$ be two regions whose intersection is connected and contains $Q$ and $q$. If $f$ is analytic on $R$, $g$ is analytic on $S$, and $f(z)=g(z)$ for $z\in Q$, then $f(z)=g(z)$ throughout $R\cap S$ and $f$ and $g$ are analytic continuations of each other.
We observe, we need at least a set $Q$ with an accumulation point where analytic functions $f$ and $g$ have to coincide. This set is part of the intersection of two regions $R$ and $S$ where $f$ and $g$ are defined. Finally we conclude that throughout $R\cap S$ the functions coincide.
The second aspect sheds some light at functional relationships in connection with analytic continuation. We can read in
Section 3.2.5: Analytic Continuation by Exploiting Functional Relationships
Occasionally an analytic continuation of a function $f$ can be obtained by making use of a special functional relationship satisfied by $f$. Naturally this method is restricted to those functions for which such relationships are known.
He continues with example 15 which seems that P. Henrici had precisely a user with OPs question in mind.
Example 15:
Let the function $g$ possess the following properties:
• (a) $g$ is analytic in the right half-plane: $R:\Re (z)>0$
• (b) For all $z\in R, zg(z)=g(z+1)$
We assert that $g$ can be continued analytically into the whole complex plane with the exception of the points $z=0,-1,-2,\ldots$.
We first continue $g$ into $S:\Re (z)>-1,z\neq 0$. For $z\in S$,let $f$ be defined by \begin{align*} f(z):=\frac{1}{z}g(z+1) \end{align*} For $z\in S, \Re(z+1)>0$. Hence by virtue of (a) $f$ is analytic on $S$. In view of (b) $f$ agrees with $g$ on the set of $R$. Since $S$ is a region, $f$ represents the analytic continuation of $g$ from $R$ to $S$. We note that $f$ satisfies the functional relation $f(z+1)=zf(z)$ on the whole set $S$.
Denoting the extended function again by $g$, we may use the same method to continue $g$ analytically into the set $\Re(z)>-2,z\neq 0,-1$, and thus step by step into the region $z\neq 0,-1,-2,\ldots$.
Of course this example addresses the Gamma Function $\Gamma(z)$ which is treated in detail in chapter 8, vol. 2. He then continues with further methods of analytic continuation, such as the principle of continuous continuation and the symmetry principle. | 2020-10-24T04:05:11 | {
"domain": "stackexchange.com",
"url": "https://math.stackexchange.com/questions/1741273/how-can-i-know-the-analytic-continuation-exists-in-certain-cases",
"openwebmath_score": 0.924250602722168,
"openwebmath_perplexity": 156.7815633080759,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9859363733699888,
"lm_q2_score": 0.8688267660487572,
"lm_q1q2_score": 0.8566079108048874
} |
http://math.stackexchange.com/questions/158651/orthonormal-basis | # Orthonormal basis
Consider $\mathbb{R}^3$ together with inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2x_1 y_1+x_2 y_2+3 x_3 y_3$. Use the Gram-Schmidt procedure to find an orthonormal basis for $W=\text{span} \left\{(-1, 1, 0), (-1, 1, 2) \right\}$.
I don't get how the inner product $\langle (x_1, x_2, x_3), (y_1, y_2, y_3) \rangle = 2 x_1 y_1+x_2 y_2+3 x_3 y_3$ would affect the approach to solve this question.. When I did the gram-schmidt, I got $v_1=(-1, 1, 0)$ and $v_2=(0, 0, 2)$ but then realized that you have to do something with the inner product before finding the orthonormal basis. Can someone please help me?
Update: So far I got $\{(\frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, 0), (0, 0, \frac{2}{\sqrt{12}})\}$ as my orthonormal basis but I'm not sure if I am doing it right with the given inner product.
-
please try to learn TeX syntax. I did it for you in this case. – Siminore Jun 15 '12 at 13:16
@Alice You got the same solution I got :) To convince yourself it's right, just compute the pairwise inner products between your solution vectors, and you will find they are orthnormal! – rschwieb Jun 15 '12 at 13:29
The choice of inner product defines the notion of orthogonality.
The usual notion of being "perpendicular" depends on the notion of "angle" which turns out to depend on the notion of "dot product".
If you change the way we measure the "dot product" to give a more general inner product then we change what we mean by "angle", and so have a new notion of being "perpendicular", which in general we call orthogonality.
So when you apply the Gram-Schmidt procedure to these vectors you will NOT necessarily get vectors that are perpendicular in the usual sense (their dot product might not be $0$).
Let's apply the procedure.
It says that to get an orthogonal basis we start with one of the vectors, say $u_1 = (-1,1,0)$ as the first element of our new basis.
Then we do the following calculation to get the second vector in our new basis:
$u_2 = v_2 - \frac{\langle v_2, u_1\rangle}{\langle u_1, u_1\rangle} u_1$
where $v_2 = (-1,1,2)$.
Now $\langle v_2, u_1\rangle = 3$ and $\langle u_1, u_1\rangle = 3$ so that we are given:
$u_2 = v_2 - u_1 = (0,0,2)$.
So your basis is correct. Let's check that these vectors are indeed orthogonal. Remember, this is with respect to our new inner product. We find that:
$\langle u_1, u_2\rangle = 2(-1)(0) + (1)(0) + 3(0)(2) = 0$
(here we also happened to get a basis that is perpendicular in the traditional sense, this was lucky).
Now is the basis orthonormal? (in other words, are these unit vectors?). No they arent, so to get an orthonormal basis we must divide each by its length. Now this is not the length in the usual sense of the word, because yet again this is something that depends on the inner product you use. The usual Pythagorean way of finding the length of a vector is:
$||x||=\sqrt{x_1^2 + ... + x_n^2} = \sqrt{x . x}$
It is just the square root of the dot product with itself. So with more general inner products we can define a "length" via:
$||x|| = \sqrt{\langle x,x\rangle}$.
With this length we see that:
$||u_1|| = \sqrt{2(-1)(-1) + (1)(1) + 3(0)(0)} = \sqrt{3}$
$||u_2|| = \sqrt{2(0)(0) + (0)(0) + 3(2)(2)} = 2\sqrt{3}$
(notice how these are different to what you would usually get using the Pythagorean way).
Thus an orthonormal basis is given by:
$\{\frac{u_1}{||u_1||}, \frac{u_2}{||u_2||}\} = \{(\frac{-1}{\sqrt{3}}, \frac{1}{\sqrt{3}}, 0), (0,0,\frac{1}{\sqrt{3}})\}$
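To double-check the arithmetic, here is an illustrative numpy sketch (not part of the original answer) that runs Gram-Schmidt with the weighted inner product, encoding $\langle u,v\rangle = 2u_1v_1+u_2v_2+3u_3v_3$ by the diagonal matrix $G=\operatorname{diag}(2,1,3)$ so that $\langle u,v\rangle = u^{T}Gv$:

```python
import numpy as np

G = np.diag([2.0, 1.0, 3.0])   # encodes <u, v> = 2 u1 v1 + u2 v2 + 3 u3 v3
ip = lambda u, v: u @ G @ v

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        # subtract the projections onto the vectors already in the basis
        u = v - sum(ip(v, b) / ip(b, b) * b for b in basis)
        basis.append(u)
    return [u / np.sqrt(ip(u, u)) for u in basis]   # normalize

e1, e2 = gram_schmidt([np.array([-1.0, 1.0, 0.0]),
                       np.array([-1.0, 1.0, 2.0])])
assert np.allclose(e1, [-1 / np.sqrt(3), 1 / np.sqrt(3), 0])
assert np.allclose(e2, [0, 0, 1 / np.sqrt(3)])

# Bonus: the orthogonal projection of (1, 1, 1) onto W discussed in the
# comments comes out as (1/3, -1/3, 1) -- note the negative middle entry.
v = np.array([1.0, 1.0, 1.0])
proj = ip(v, e1) * e1 + ip(v, e2) * e2
assert np.allclose(proj, [1 / 3, -1 / 3, 1.0])
```

Swapping `G` for the identity matrix recovers the ordinary dot-product version of the algorithm, which makes it easy to see exactly where the choice of inner product enters.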
-
Thanks so much for the thorough explanation! :) – Alice Jun 15 '12 at 13:36
Perhaps you carried out the Gram-Schmidt algorithm using the ordinary inner product? I think that is the only way you could have gotten through without using the given inner product :)
Anyhow, you need to use the given inner product at each step of the orthonormalization procedure. Changing the inner product will change the output of the algorithm, because different inner products yield different lengths of vectors and report different "angles" between vectors.
For example, when you begin with the first step (normalizing $(-1,1,0)$, you should compute that $\langle(-1,1,0),(-1,1,0)\rangle=3$, and so the first vector would be $\frac{1}{\sqrt{3}}(-1,1,0)$.
-
Thanks a lot! I understand it now. Oh, I was also wondering if it asks to find the orthogonal projection of (1, 1, 1) onto the same W, I guess we still apply the given inner product? I got (1/3, 1/3, 1) for the projection and I want to make sure I'm doing it right.. – Alice Jun 15 '12 at 13:37
@alice I got a negative sign on the middle entry when I tried a moment ago... can you check your work to see if that's missing in your computation? – rschwieb Jun 15 '12 at 13:44
Oh, oops, I did forget the negative sign in my calculation. Thanks again :) – Alice Jun 15 '12 at 13:46 | 2014-07-28T20:46:31 | {
"domain": "stackexchange.com",
"url": "http://math.stackexchange.com/questions/158651/orthonormal-basis",
"openwebmath_score": 0.9294528365135193,
"openwebmath_perplexity": 242.64000925429696,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.985936372130286,
"lm_q2_score": 0.8688267643505193,
"lm_q1q2_score": 0.8566079080534459
} |
https://www.codecogs.com/library/maths/approximation/regression/linear.php | I have forgotten
my Password
Or login with:
• http://facebook.com/
• https://www.google.com/accounts/o8/id
• https://me.yahoo.com
COST (GBP)
3.50
0.00
0
# Linear
Calculates the linear regression parameters and evaluates the regression line at arbitrary abscissas
Controller: CodeCogs
C++
## Class Linear
Linear regression is a method to best fit a linear equation (straight line) of the form $y = a x + b$ to a collection of $n$ points $(x_i, y_i)$, where $a$ is the slope and $b$ the intercept on the $y$ axis.
The algorithm basically requires minimisation of the sum of the squared distances from the data points to the proposed line. This is achieved by calculating the derivatives with respect to $a$ and $b$ and setting these to zero.
Let us define the following sums over the $n$ data points:
$$S_x=\sum_{i=1}^{n}x_i,\qquad S_y=\sum_{i=1}^{n}y_i,\qquad S_{xx}=\sum_{i=1}^{n}x_i^2,\qquad S_{xy}=\sum_{i=1}^{n}x_i y_i$$
Then the slope is
$$a=\frac{n\,S_{xy}-S_x S_y}{n\,S_{xx}-S_x^2}$$
and the intercept on the Y axis is
$$b=\frac{S_y-a\,S_x}{n}$$
Below you will find the regression graph for a set of arbitrary points, which were also used in the forthcoming example. The regression line, displayed in red, has been calculated using this class.
### Example 1
The following example displays the slope, Y intercept and regression coefficient for a certain set of 7 points.
#include <codecogs/maths/approximation/regression/linear.h>
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
double x[7] = { 1.5, 2.4, 3.2, 4.8, 5.0, 7.0, 8.43 };
double y[7] = { 3.5, 5.3, 7.7, 6.2, 11.0, 9.5, 10.27 };
Maths::Regression::Linear A(7, x, y);
cout << " Slope = " << A.getSlope() << endl;
cout << "Intercept = " << A.getIntercept() << endl << endl;
cout << "Regression coefficient = " << A.getCoefficient() << endl;
cout << endl << "Regression line values" << endl << endl;
for (double i = 0.0; i <= 3; i += 0.6)
{
cout << "x = " << setw(3) << i << " y = " << A.getValue(i);
cout << endl;
}
return 0;
}
Output:
Slope = 0.904273
Intercept = 3.46212
Regression coefficient = 0.808257
Regression line values
x = 0 y = 3.46212
x = 0.6 y = 4.00469
x = 1.2 y = 4.54725
x = 1.8 y = 5.08981
x = 2.4 y = 5.63238
x = 3 y = 6.17494
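Since the library source is behind a licence wall, here is an independent Python sketch of the same least-squares formulas (our own re-implementation, not the CodeCogs source), reproducing the slope and intercept printed above:

```python
def linear_fit(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# The seven points from Example 1 above
x = [1.5, 2.4, 3.2, 4.8, 5.0, 7.0, 8.43]
y = [3.5, 5.3, 7.7, 6.2, 11.0, 9.5, 10.27]
slope, intercept = linear_fit(x, y)
print(round(slope, 6), round(intercept, 5))   # 0.904273 3.46212
```

Evaluating `slope * t + intercept` at t = 0, 0.6, ..., 3 reproduces the "Regression line values" table as well.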
### Authors
Lucian Bentea (August 2005)
##### Source Code
Source code is available when you agree to a GP Licence or buy a Commercial Licence.
Not a member, then Register with CodeCogs. Already a Member, then Login.
## Members of Linear
#### Linear
Linear(int n, double* x, double* y) [constructor]
Initializes the class by calculating the slope, intercept and regression coefficient based on the given constructor arguments.
### Note
The slope should not be infinite.
- n — The number of initial points in the arrays x and y
- x — The x-coordinates of points
- y — The y-coordinates of points
#### GetValue
double getValue(double x)
- x — the abscissa used to evaluate the linear regression function
#### GetCoefficient
double getCoefficient()
The regression coefficient indicates how well linear regression fits the original data. It is an expression of error in the fitting and is defined as:
$$r=\frac{n\,S_{xy}-S_x S_y}{\sqrt{\left(n\,S_{xx}-S_x^2\right)\left(n\,S_{yy}-S_y^2\right)}}$$
where $S_x=\sum x_i$, $S_y=\sum y_i$, $S_{xx}=\sum x_i^2$, $S_{yy}=\sum y_i^2$ and $S_{xy}=\sum x_i y_i$.
This varies from 0 (no linear trend) to 1 (perfect linear fit). If $n\,S_{xx}-S_x^2=0$ and $n\,S_{yy}-S_y^2=0$, then $r$ is considered to be equal to 1.
## Linear Once
double Linear_once(int n, double* x, double* y, double a)
This function implements the Linear class for one off calculations, thereby avoid the need to instantiate the Linear class yourself.
### Example 2
The following graph fits a straight line to the following values:
x = 1 y = 0.22
x = 2 y = 0.04
x = 3 y = -0.13
x = 4 y = -0.17
x = 5 y = -0.04
x = 6 y = 0.09
x = 7 y = 0.11
### Parameters
- n — The number of initial points in the arrays x and y
- x — The x-coordinates of points
- y — The y-coordinates of points
- a — The x-coordinate for the output location
### Returns
the interpolated y-coordinate that corresponds to a.
##### Source Code
Source code is available when you agree to a GP Licence or buy a Commercial Licence.
Not a member, then Register with CodeCogs. Already a Member, then Login. | 2019-12-07T01:26:06 | {
"domain": "codecogs.com",
"url": "https://www.codecogs.com/library/maths/approximation/regression/linear.php",
"openwebmath_score": 0.3409360647201538,
"openwebmath_perplexity": 2206.2878395361117,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9879462211935646,
"lm_q2_score": 0.8670357701094303,
"lm_q1q2_score": 0.8565847127192638
} |
http://www.aimath.org/textbooks/beezer/Dsection.html | Almost every vector space we have encountered has been infinite in size (an exception is Example VSS). But some are bigger and richer than others. Dimension, once suitably defined, will be a measure of the size of a vector space, and a useful tool for studying its properties. You probably already have a rough notion of what a mathematical definition of dimension might be --- try to forget these imprecise ideas and go with the new ones given here.
## Dimension
Definition D (Dimension) Suppose that $V$ is a vector space and $\set{\vectorlist{v}{t}}$ is a basis of $V$. Then the dimension of $V$ is defined by $\dimension{V}=t$. If $V$ has no finite bases, we say $V$ has infinite dimension.
This is a very simple definition, which belies its power. Grab a basis, any basis, and count up the number of vectors it contains. That's the dimension. However, this simplicity causes a problem. Given a vector space, you and I could each construct different bases --- remember that a vector space might have many bases. And what if your basis and my basis had different sizes? Applying Definition D we would arrive at different numbers! With our current knowledge about vector spaces, we would have to say that dimension is not "well-defined." Fortunately, there is a theorem that will correct this problem.
In a strictly logical progression, the next two theorems would precede the definition of dimension. Many subsequent theorems will trace their lineage back to the following fundamental result.
Theorem SSLD (Spanning Sets and Linear Dependence) Suppose that $S=\set{\vectorlist{v}{t}}$ is a finite set of vectors which spans the vector space $V$. Then any set of $t+1$ or more vectors from $V$ is linearly dependent.
The proof just given has some monstrous expressions in it, mostly owing to the double subscripts present. Now is a great opportunity to show the value of a more compact notation. We will rewrite the key steps of the previous proof using summation notation, resulting in a more economical presentation, and even greater insight into the key aspects of the proof. So here is an alternate proof --- study it carefully.
\begin{proof} {\bf (Alternate Proof of Theorem SSLD)} We want to prove that any set of $t+1$ or more vectors from $V$ is linearly dependent. So we will begin with a totally arbitrary set of vectors from $V$, $R=\setparts{\vect{u}_j}{1\leq j\leq m}$, where $m>t$. We will now construct a nontrivial relation of linear dependence on $R$.
Each vector $\vect{u_j}$, $1\leq j\leq m$ can be written as a linear combination of $\vect{v}_i$, $1\leq i\leq t$ since $S$ is a spanning set of $V$. This means there are scalars $a_{ij}$, $1\leq i\leq t$, $1\leq j\leq m$, so that
\begin{align*} \vect{u}_j&=\sum_{i=1}^{t}a_{ij}\vect{v_i}&&1\leq j\leq m \end{align*}
Now we form, unmotivated, the homogeneous system of $t$ equations in the $m$ variables, $x_j$, $1\leq j\leq m$, where the coefficients are the just-discovered scalars $a_{ij}$,
\begin{align*} \sum_{j=1}^{m}a_{ij}x_j=0&&1\leq i\leq t \end{align*}
This is a homogeneous system with more variables than equations (our hypothesis is expressed as $m>t$), so by Theorem HMVEI there are infinitely many solutions. Choose one of these solutions that is not trivial and denote it by $x_j=c_j$, $1\leq j\leq m$. As a solution to the homogeneous system, we then have $\sum_{j=1}^{m}a_{ij}c_{j}=0$ for $1\leq i\leq t$. As a collection of nontrivial scalars, $c_j$, $1\leq j\leq m$, will provide the nontrivial relation of linear dependence we desire,
\begin{align*} \sum_{j=1}^{m}c_{j}\vect{u}_j &=\sum_{j=1}^{m}c_{j}\left(\sum_{i=1}^{t}a_{ij}\vect{v_i}\right) &&\text{ Definition TSVS}\\ &=\sum_{j=1}^{m}\sum_{i=1}^{t}c_{j}a_{ij}\vect{v}_i &&\text{ Property DVA}\\ &=\sum_{i=1}^{t}\sum_{j=1}^{m}c_{j}a_{ij}\vect{v}_i &&\text{ Property CMCN}\\ &=\sum_{i=1}^{t}\sum_{j=1}^{m}a_{ij}c_{j}\vect{v}_i &&\text{Commutativity in $\complex{}$}\\ &=\sum_{i=1}^{t}\left(\sum_{j=1}^{m}a_{ij}c_{j}\right)\vect{v}_i &&\text{ Property DSA}\\ &=\sum_{i=1}^{t}0\vect{v}_i &&\text{$c_j$ as solution}\\ &=\sum_{i=1}^{t}\zerovector &&\text{ Theorem ZSSM}\\ &=\zerovector &&\text{ Property Z} \end{align*}
That does it. $R$ has been undeniably shown to be a linearly dependent set. \end{proof} Notice how the swap of the two summations is so much easier in the third step above, as opposed to all the rearranging and regrouping that takes place in the previous proof. In about half the space. And there are no ellipses ($\ldots$).
Theorem SSLD can be viewed as a generalization of Theorem MVSLD. We know that $\complex{m}$ has a basis with $m$ vectors in it (Theorem SUVB), so it is a set of $m$ vectors that spans $\complex{m}$. By Theorem SSLD, any set of more than $m$ vectors from $\complex{m}$ will be linearly dependent. But this is exactly the conclusion we have in Theorem MVSLD. Maybe this is not a total shock, as the proofs of both theorems rely heavily on Theorem HMVEI. The beauty of Theorem SSLD is that it applies in any vector space. We illustrate the generality of this theorem, and hint at its power, in the next example.
Example LDP4: Linearly dependent set in $P_4$.
Theorem SSLD is indeed powerful, but our main purpose in proving it right now was to make sure that our definition of dimension (Definition D) is well-defined. Here's the theorem.
Theorem BIS (Bases have Identical Sizes) Suppose that $V$ is a vector space with a finite basis $B$ and a second basis $C$. Then $B$ and $C$ have the same size.
Theorem BIS tells us that if we find one finite basis in a vector space, then they all have the same size. This (finally) makes Definition D unambiguous.
## Dimension of Vector Spaces
We can now collect the dimension of some common, and not so common, vector spaces.

Theorem DCM (Dimension of $\complex{m}$) The dimension of $\complex{m}$ (Example VSCV) is $m$.

Proof: Theorem SUVB provides a basis with $m$ vectors.
Theorem DP (Dimension of $P_n$) The dimension of $P_{n}$ (Example VSP) is $n+1$.
Theorem DM (Dimension of $M_{mn}$) The dimension of $M_{mn}$ (Example VSM) is $mn$.

Proof: Example BM provides a basis with $mn$ vectors.
Example DSM22: Dimension of a subspace of $M_{22}$.
Example DSP4: Dimension of a subspace of $P_4$.
Example DC: Dimension of the crazy vector space.
It is possible for a vector space to have no finite bases, in which case we say it has infinite dimension. Many of the best examples of this are vector spaces of functions, which lead to constructions like Hilbert spaces. We will focus exclusively on finite-dimensional vector spaces. OK, one infinite-dimensional example, and then we will focus exclusively on finite-dimensional vector spaces.
Example VSPUD: Vector space of polynomials with unbounded degree.
## Rank and Nullity of a Matrix
For any matrix, we have seen that we can associate several subspaces --- the null space (Theorem NSMS), the column space (Theorem CSMS), row space (Theorem RSMS) and the left null space (Theorem LNSMS). As vector spaces, each of these has a dimension, and for the null space and column space, they are important enough to warrant names.
Definition NOM (Nullity Of a Matrix) Suppose that $A$ is an $m\times n$ matrix. Then the nullity of $A$ is the dimension of the null space of $A$, $\nullity{A}=\dimension{\nsp{A}}$.
Definition ROM (Rank Of a Matrix) Suppose that $A$ is an $m\times n$ matrix. Then the rank of $A$ is the dimension of the column space of $A$, $\rank{A}=\dimension{\csp{A}}$.
Example RNM: Rank and nullity of a matrix.
There were no accidents or coincidences in the previous example --- with the row-reduced version of a matrix in hand, the rank and nullity are easy to compute.
Theorem CRN (Computing Rank and Nullity) Suppose that $A$ is an $m\times n$ matrix and $B$ is a row-equivalent matrix in reduced row-echelon form with $r$ nonzero rows. Then $\rank{A}=r$ and $\nullity{A}=n-r$.
Every archetype (appendix A) that involves a matrix lists its rank and nullity. You may have noticed as you studied the archetypes that the larger the column space is the smaller the null space is. A simple corollary states this trade-off succinctly. (See technique LC.)
Theorem RPNC (Rank Plus Nullity is Columns) Suppose that $A$ is an $m\times n$ matrix. Then $\rank{A}+\nullity{A}=n$.
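Theorems CRN and RPNC are easy to see in action. The following Python sketch (an illustration, not part of the text) uses numpy's rank routine in place of row-reducing by hand:

```python
import numpy as np

def rank_and_nullity(A):
    A = np.asarray(A, dtype=float)
    r = np.linalg.matrix_rank(A)   # equals the number of nonzero rows in RREF
    return r, A.shape[1] - r       # Theorem CRN: nullity = n - r

A = [[1, 2, 3],
     [2, 4, 6],    # twice row 1, so the rank drops below 3
     [1, 1, 1]]
r, nullity = rank_and_nullity(A)
print(r, nullity)                  # 2 1
assert r + nullity == 3            # Theorem RPNC: rank + nullity = n
```

For a nonsingular square matrix the same function returns rank n and nullity 0, matching Theorem RNNM.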
When we first introduced $r$ as our standard notation for the number of nonzero rows in a matrix in reduced row-echelon form you might have thought $r$ stood for "rows." Not really --- it stands for "rank"!
## Rank and Nullity of a Nonsingular Matrix
Let's take a look at the rank and nullity of a square matrix.
Example RNSM: Rank and nullity of a square matrix.
The value of either the nullity or the rank are enough to characterize a nonsingular matrix.
Theorem RNNM (Rank and Nullity of a Nonsingular Matrix) Suppose that $A$ is a square matrix of size $n$. The following are equivalent.
1. A is nonsingular.
2. The rank of $A$ is $n$, $\rank{A}=n$.
3. The nullity of $A$ is zero, $\nullity{A}=0$.
With a new equivalence for a nonsingular matrix, we can update our list of equivalences (Theorem NME5) which now becomes a list requiring double digits to number.
Theorem NME6 (Nonsingular Matrix Equivalences, Round 6) Suppose that $A$ is a square matrix of size $n$. The following are equivalent.
1. $A$ is nonsingular.
2. $A$ row-reduces to the identity matrix.
3. The null space of $A$ contains only the zero vector, $\nsp{A}=\set{\zerovector}$.
4. The linear system $\linearsystem{A}{\vect{b}}$ has a unique solution for every possible choice of $\vect{b}$.
5. The columns of $A$ are a linearly independent set.
6. $A$ is invertible.
7. The column space of $A$ is $\complex{n}$, $\csp{A}=\complex{n}$.
8. The columns of $A$ are a basis for $\complex{n}$.
9. The rank of $A$ is $n$, $\rank{A}=n$.
10. The nullity of $A$ is zero, $\nullity{A}=0$. | 2013-05-24T03:58:04 | {
"domain": "aimath.org",
"url": "http://www.aimath.org/textbooks/beezer/Dsection.html",
"openwebmath_score": 0.9146049618721008,
"openwebmath_perplexity": 325.4569079225906,
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9879462197739625,
"lm_q2_score": 0.867035763237924,
"lm_q1q2_score": 0.8565847046997394
} |
https://math.stackexchange.com/questions/1645967/what-do-we-actually-prove-using-induction-theorem/1646299 | # What do we actually prove using induction theorem?
Here is the picture of the page of the book, I am reading:
$$P_k: \qquad 1+3+5+\dots+(2k-1)=k^2$$ Now we want to show that this assumption implies that $P_{k+1}$ is also a true statement: $$P_{k+1}: \qquad 1+3+5+\dots+(2k-1)+(2k+1)=(k+1)^2.$$ Since we have assumed that $P_k$ is true, we can perform operations on this equation. Note that the left side of $P_{k+1}$ is the left side of $P_k$ plus $(2k+1)$. So we start by adding $(2k+1)$ to both sides of $P_k$: \begin{align*} 1+3+\dots+(2k-1) &= k^2 &P_k\\ 1+3+\dots+(2k-1)+(2k+1) &= k^2+(2k+1) &\text{Add $(2k+1)$ to both sides.} \end{align*} Factoring the right side of this equation, we have $$1+3+\dots+(2k-1)+(2k+1) =(k+1)^2 \qquad P_{k+1}$$ But this last equation is $P_{k+1}$. Thus, we have started with $P_k$, the statement we assumed true, and performed valid operations to produce $P_{k+1}$, the statement we want to be true. In other words, we have shown that if $P_k$ is true, then $P_{k+1}$ is also true. Since both conditions in Theorem 1 are satisfied, $P_n$ is true for all positive integers $n$.
In the top line there is written that we have to show that 'the assumption $P_k$ is true implies $P_{k+1}$ is true'.
And what I think is that: as long as we know that the state of a proposition being true for any positive integer $k$ after Base number implies that proposition is true for integer $k+1$, we have to show that $P_k$ is true. However I don't have an idea yet how to show the truth of $P_k$.
So, my first question is that who is right? My book or me? And if I am right then how can I show the truth of $P_k$?
2) This may be taken as the second question but it is also bit annoying.
In the second paragraph it is written, "Since we have assumed that $P_k$ is true, we can perform operations on it".
Why is it necessary for an equation to be true for performing operations on it? Well, this is not much annoying as compared to the first question because whole the induction theory depends upon that.
• Who is right about what? You haven't made a claim that contradicts the book, Feb 8 '16 at 13:32
• I have tried to retype the text from your picture. It would be rather difficult task, considering the low quality of the image. Luckily Google and Google Books returns some places with the same text. Could you perhaps add to your post a more precise reference than "the book I am reading"? Feb 8 '16 at 16:27
• You prove the $P_1$ is true. We don't know that $P_n$ is true for all $n$. Only that it is true for $n = 1$. Then we prove that IF it is true for a specific $P_k$ then it is true for $P_{k+1}$. Once we prove these two things: 1) $P_1$ is true. and 2) $P_k \implies P_{k+1}$. Then we can conclude and we have proven 3) $P_n$ is true for all n. Your comment. If we prove it is true for all $P_k$ then it is true for $P_{k+1}$ is irrelevant as we are not assuming and certainly haven't proven it is true for all $P_k$. We have assumed it for ONE $P_k$. Feb 8 '16 at 17:25
• @MarkJoshi Induction works on many structures. Induction of the form "start at zero and add one" also works on finite fields, that is arithmetic $\pmod p$. It is not correct at all to say that natural numbers are precisely the set for which induction works. Feb 9 '16 at 6:02
• Once there is enough evidence to make you believe something is true, it can suddenly become much easier to understand why it is true. Given the multitude of experts here (and the author himself) telling you the book is right, I recommend, as strange as it sounds, rereading the chapter on induction trying to believe it (and everyone on this site) is right and understand why that is, rather than trying to argue why it is wrong. Feb 9 '16 at 13:25
Induction is, intuitively, an outline of an infinite proof.
You first prove $P(0)$, the base case.
Then you prove $P(1)$ follows from $P(0)$.
Then you prove that $P(2)$ follows from $P(1)$.
Et cetera.
In general, if you know that $P(k+1)$ follows from $P(k)$, and you know that $P(0)$ is true, then you know how to prove $P(n)$ for any natural number. For example, $P(4)$ follows because $P(0)\implies P(1)\implies P(2)\implies P(3)\implies P(4)$.
So you don't need to know how to show $P(k)$ in order to show that $P(k)$ implies $P(k+1)$. Once you've shown this implication, and $P(0)$, you know how to write a non-inductive finite proof of $P(10000)$ or $P(10^{99})$.
Your book is right. You have to show that $$P_k \implies P_{k+1}$$
Because if you prove that, then, since you have shown it is true for the base number, the proposition is automatically proved true for every $n \in S$.
Let us see how!
To prove "$P_k \implies P_{k+1}$" means you are proving that the proposition is true for $k+1$ whenever it is true for $k$.
Improvement (Edit): Just as in the domino effect. Suppose it is reported that the first domino has been pushed over, and then you find out that the dominoes are arranged so that whenever the $k^{th}$ domino falls, the $(k+1)^{th}$ falls. You would at once say that all the dominoes must fall.
Remember that these two conditions of the induction theorem together imply that a certain proposition is true:
i) It is true for the base number.
ii) If it is true for an integer $k$, then it is true for the integer $k+1$ after it.
For the proposition in your picture, if you were to take base number $2$ then, after doing all that (as done in your book), you would prove that the proposition is true for the integer $2$ and the integers after it. But since we are usually asked to prove the conjecture for every $n \in \mathbb N$, we take base number $1$, not $2$, because the smallest number of the set $\mathbb N$ is $1$.
• It may help to think that you are supposed to prove the arrow in-between the two propositions. Feb 9 '16 at 4:00
(Wanted to post as a comment, but it is too long :/)
When we prove something by induction we prove that our claim is correct for a base case (for example, $n=1$). Afterwards we assume (not proving, only assuming) that our claim holds for some arbitrary value $k$ and then, based on the assumption, we prove it holds for $k+1$.
Note that this rests on the proof of the base case together with the proof that if $P_k$ is true then $P_{k+1}$ is true. Example: you prove that some claim is correct for $n=1$, then you assume it is correct for some $n=k$. Now you prove it for $n=k+1$.
We proved it for $n=1$, and using the assumption that the claim holds for $n=k$ we proved it for $n=k+1$. In this case, we know that the claim holds for $n=1$, thus we can now say it holds for $n=2$. Afterwards we can say that it holds for $n=3$ and so on, hence it holds for any $n$.
That is how we use induction - you don't prove that $P_k$ is true, you assume it is true - that is a big difference.
If my explanation is not coherent enough, you might want to google "Induction" and read a little more.
Good luck.
• "...that our claim stands for some arbitrary $k$ and then, based on the assumption we prove that claim holds for $k+1$" . Okay.. if we prove the claim for $k+1$ . At what place in the picture, I can see such prove? Feb 8 '16 at 14:12
• All that shown is how they are assuming that $P_k$ is true and how they use it to prove that $P_{k+1}$ is true. Feb 8 '16 at 14:23
• After reading this "$P_k$ holds implies $P_{k+1}$ holds", I guess that we should show truth of $P_k$ and not of $P_{k+1}$ because it is an implication and once we prove $P_k$, then according to our claim ($P_k \implies P_{k+1}$), $P_{k+1}$ would get proved automatically. Feb 8 '16 at 14:30
• @Man_Of_Wisdom, the induction assumption is that $P_k$ is true, we don't need to prove it. We show that because it is true (take it as a fact), then $P_{k+1}$ is also true. We know that "$P_k$ holds", we want to show that $P_{k+1}$ holds. Feb 8 '16 at 14:43
• You show the truth of $P_1$. Then you show that if it's true for $P_k$ (which may or may not be the same as $P_1$) then it must be true for $P_{k+1}$. From that we know that $P_1 \implies P_2$. And therefore $P_2 \implies P_3$. Now we can hire an undergraduate to sit in an office to continue "$...\implies P_{2,345}$ and $P_{2,345} \implies P_{2,346}$ and $P_{2,346} \implies ...$". Or we can conclude: 1) ($P_1$ is true) and 2) ($P_k \implies P_{k+1}$) $\implies$ 3) $P_n$ is true for all $n$. Feb 8 '16 at 17:31
## What do we actually prove using induction theorem?
Probably an example is needed here. Let's suppose you want to prove the little Gauss formula. Suppose we want to prove this theorem for all $$n\in\mathbb{N}$$:
$$1+2+3+ \cdots+ n = \sum_{k=1}^n k = \frac{n(n+1)}{2}$$
The statement $$A(n)$$ is thus given by: $$\sum_{k=1}^n k = \frac{n(n+1)}{2}$$ With induction this is really simple. Step one: let's suppose $$n=1$$: $$1 = \sum_{k=1}^1 k = \frac{1(1+1)}{2} = \frac{n(n+1)}{2}$$ Done: the statement is true for $$n=1$$. The good part about induction is that we can select the number for which we want to prove it, so we could also have shown it with $$n=0$$. So don't try to prove it for $$n$$ being any number, but prove it for one number you selected, almost every time $$n=0$$ or $$n=1$$.
Now we have to prove that if the statement is true for $$n$$ it is also true for $$n+1$$. Meaning if $$A(n)$$ is true, $$A(n+1)$$ is also true:
$$\underbrace{1+ 2+ \cdots + n}_{\sum_{k=1}^n k} + n+1 = \sum_{k=1}^n k + n+1 \overset{A(n) \text{ is true}}{=} \frac{n(n+1)}{2} + n+1 = \frac{n^2+n+2n+2}{2} = \frac{n^2+3n+2}{2} = \frac{(n+1)(n+2)}{2}$$
So now we know the statement is true for $$n+1$$ or $$A(n+1)$$ is true.
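As a quick numerical sanity check of both the closed form and the step $\frac{n(n+1)}{2}+(n+1)=\frac{(n+1)(n+2)}{2}$, a small Python sketch:

```python
def gauss(n):
    # the claimed closed form for 1 + 2 + ... + n
    return n * (n + 1) // 2

# A(n), checked directly for many n
for n in range(1, 301):
    assert sum(range(1, n + 1)) == gauss(n)

# the step's algebra: n(n+1)/2 + (n+1) == (n+1)(n+2)/2
for n in range(301):
    assert gauss(n) + (n + 1) == gauss(n + 1)
```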
### Prove without induction
Proving this statement without induction is not that simple. You can do it like this. Suppose $$n\in \mathbb{N}$$; then we can look at the sum and rearrange it:
$$n$$ is even
$$1+2+\cdots + n = (1 + n) + (2+n-1) + \cdots + \left(\frac{n}{2} + \frac{n}{2} +1\right)=\frac{n}{2}(n+1) = \frac{n(n+1)}{2}$$
and for $$n$$ odd
$$1+2+\cdots + n = (1+n) + (2+n-1) + \cdots + \left(\frac{n-1}{2} + \frac{n+1}{2} + 1\right) + \frac{n+1}{2} = (n+1)\frac{n-1}{2} + \frac{n+1}{2} = \frac{n(n+1)}{2}$$
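The pairing argument can be imitated directly in code. A small sketch: pair the $i$-th term with the $(n+1-i)$-th, each pair summing to $n+1$; for odd $n$ a middle term $\frac{n+1}{2}$ is left over:

```python
def paired_sum(n):
    # pair i with n + 1 - i; every pair sums to n + 1
    if n % 2 == 0:
        return (n // 2) * (n + 1)                    # n/2 full pairs
    return ((n - 1) // 2) * (n + 1) + (n + 1) // 2   # pairs plus middle term

for n in range(1, 301):
    assert paired_sum(n) == sum(range(1, n + 1)) == n * (n + 1) // 2
```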
### Conclusion
So let's conclude: we can use induction here, but we can go without it too. The first approach is much simpler because it uses induction, and once you get your head around it, it's easier to verify that it is true. The other one uses induction internally for the rearranging, and you have to think a lot about whether it is true or whether a mistake was made.
## What does the operation part mean?
The funny thing is that induction can be generalized to a lot of settings. If we look at the set $$\mathbb{N}$$ then we can define it as follows: $$1$$ is in $$\mathbb{N}$$, and if $$n\in\mathbb{N}$$ then $$n+1$$ is also in $$\mathbb{N}$$; let's call that $$(\star)$$:
1. $$1 \in \mathbb{N}$$
2. $$n\in\mathbb{N} \Rightarrow n+1\in\mathbb{N}$$
If you look at that you will find this is looking a lot like induction. So basically you can say if you have the statement $$A(n)$$ and you want to show that it is true for all $$n\in\mathbb{N}$$ you do the following, you determine the set $$\mathfrak{A}$$ for which the statement is true. This set is given by: $$\mathfrak{A} = \{n\in\mathbb{N} | A(n) \text{ is true}\}$$ And now if we know $$1\in\mathfrak{A}$$ and for all $$n\in\mathfrak{A}$$ it follows $$n+1\in\mathfrak{A}$$ then it follows from $$(\star)$$: $$\mathfrak{A} = \mathbb{N}$$ so the statement is true for all $$n\in\mathbb{N}$$
## More generalized induction schemes
The example where this is generalized is the "principle of the good set" (this is the German name translated directly); it's used in measure theory and works with sets that are way bigger than $$\mathbb{N}$$, in fact with sets that have the cardinality of $$\mathbb{R}$$. And it works as follows. A $$\sigma$$-algebra $$\mathcal{A}$$ of a set $$\Omega$$ fulfills the following axioms:
1. $$\Omega \in \mathcal{A}$$
2. $$A\in \mathcal{A} \Rightarrow \Omega\setminus A\in\mathcal{A}$$
3. $$A_1,A_2,\cdots \in \mathcal{A} \Rightarrow \bigcup_{k=1}^\infty A_k \in \mathcal{A}$$
Now we have a map $$\sigma$$ that generates a $$\sigma$$-algebra from a given subset $$\mathcal{G}\subset \mathscr{P}(\Omega)$$, just using the axioms 1-3, only more often than $$\mathbb{R}$$ has elements. It is like $$\mathbb{N}$$ being generated by $$\{1\}$$: we take $$1$$, add $$1$$ onto it and put the result $$2$$ in our set; then we take $$2$$, add $$1$$ onto it and put it in our set, until we cannot add any more elements because we have all of them already. Back to the example: $$\mathcal{G}$$ is called a generator if $$\sigma(\mathcal{G}) = \mathcal{A}$$; this also means $$\Omega \in \mathcal{G}$$. Now if we want to show a statement $$B(A)$$ is true for all $$A\in\mathcal{A}$$ we just show:
1. $$B(G)$$ is true for all $$G\in \mathcal{G}$$
2. If $$B(A)$$ is true then $$B(\Omega\setminus A)$$ is also true
3. If $$B(A_1),\ B(A_2),\ \cdots$$ are true then $$B\left(\bigcup_{k=1}^\infty A_k\right)$$ is also true
So here 1. shows that we know the statement is true for at least a generating set of $$\mathcal{A}$$, like in induction we know the statement is true for $$\{1\}$$ which generates $$\mathbb{N}$$ using $$n+1$$. And then we show it for the operations 2. and 3. like we would in induction where we show it for $$n+1$$.
## Conclusion
I hope I didn't confuse you too much. But I am certain that after some time you will accept induction and learn how to use it. And if you once return here, or have to explain it to somebody else after using it a lot of times, you will fully understand it. People often find induction counter-intuitive; I have some ideas why, but I think that would really go too far. So my hint: accept it first, use it, try to explain it, and you will have understood this and similar schemes.
Intuitively you can view this as a chain reaction, because once you show that $\color{green}{P_0}$ is true, then by applying $\color{blue}{P_k \Rightarrow P_{k+1}}$ repeatedly you can reach any number $n \geq 0$ and see that $P_n$ is indeed true.
$$\color{green}{P_0} \color{blue}{\Rightarrow} P_1 \color{blue}{\Rightarrow} \dots \color{blue}{\Rightarrow} P_n \color{blue}{\Rightarrow} \dots .$$
At no point are you directly proving that $P_1$, $P_2$, $\dots$ are true, instead they follow from base case and induction step.
• What is $P_k \implies P_{k+1}$? I think my question actually asks about this. Feb 8 '16 at 14:17
• It is the implication, you are proving $P_{k+1}$ under assumption of $P_{k}$, i.e. you are assuming that $P_{k}$ is true, and you show that then $P_{k+1}$ is true. Once you prove that, you can apply this for case $k=0$ (because you know that $P_0$ is true). By that you immediately get that $P_1$ is true. And you can repeat that...
– Sil
Feb 8 '16 at 14:20
• You must combine it with the base case, in which you proved that $P_0$ is true. Both base case AND induction step then imply that $P_1$ is true. But now you have $P_1$ is true and you can use it same way with induction step, to arrive at $P_2$ being true, etc... Reaching any $n$ if you repeat the process.
– Sil
Feb 8 '16 at 14:25
• And that is exactly what you do, in case of $P_0$. You have $P_0$ and $P_0 \implies P_1$, therefore $P_1$. The trick is that you ONLY need to show this for $P_0$, everything else follows from the $P_k \implies P_{k+1}$.
– Sil
Feb 8 '16 at 14:48
• You prove $P_k$ is true when $k = 1$. You do not need to assume that $k \ne 1$ or that $P_k$ is true for any value of $k$ other than 1. But you don't want to assume that $k$ has any characteristic that pertains only to 1. You want to prove that it is true for some $k$ of which you don't know the value of, then it must also be true of $k + 1$. You know it is true for some $k$ because it is true for $k = 1$. It might also be true for $k = 7,921$ but you don't know that. You can't say I know it is true for $P_1$ so I will prove it for $P_2$ because you want something more general. Feb 8 '16 at 17:39
The part about assuming $P_k$ is an application of basic rules of propositional logic.
Suppose you have a proposition of the form $A \implies B$. One way you might be able to prove this proposition is by using the rule for Introduction of an Implication:
1. Assume that $A$ is true.
2. Using $A$ as a statement already known to be true, prove $B$.
3. "Discharge" the assumption that $A$ is true.
4. Conclude that $A \implies B$ is true.
Of course this works only if you are able to come up with a proof of $B$ in the context of Step 2. But while you are in that context, you already know $A$ is true, and you do not have to prove it.
Once you have proven $A \implies B$, if you later find out that $A$ actually is true (not just assumed to be true in the limited context of Step 2, above), you can apply a rule called Modus Ponens:
1. Given that $A \implies B$ is known to be true.
2. Given that $A$ is known to be true.
3. Conclude that $B$ is true.
None of these rules of logic has used anything from the method of induction, but the method of induction uses those rules of logic.
We prove $P_0$.
We prove $P_k \implies P_{k+1}$ for any possible value of $k$, using the rule for Introduction of an Implication.
Now we know $P_0$, and since $P_k \implies P_{k+1}$ for any $k$, including the case $k=0$, we have $P_0 \implies P_1$. Apply Modus Ponens:
1. Given that $P_0 \implies P_1$ is true.
2. Given that $P_0$ is true.
3. Conclude that $P_1$ is true.
Now we know $P_1$, and $P_k \implies P_{k+1}$ is still true in the case $k=1$, so we can apply Modus Ponens again:
1. Given that $P_1 \implies P_2$ is true.
2. Given that $P_1$ is true.
3. Conclude that $P_2$ is true.
We can repeat this process as many times as needed to show $P_3$, then $P_4$, then $P_5$.
The principle of induction merely summarizes this long, repetitive, boring process so that we can relatively quickly show $P_n$ for any given non-negative integer $n$. And since we can always do this no matter what non-negative value we choose for $n$, we conclude $P_n$ is true for all $n$.
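That "long, repetitive, boring process" can literally be carried out by a loop. A small Python sketch for the odd-sum proposition of the question: verify the base case once, then apply the step (Modus Ponens) as often as needed:

```python
# P(n): 1 + 3 + ... + (2n - 1) == n^2
s = 1                 # the sum for n = 1
assert s == 1 ** 2    # base case P(1), checked once and directly

for k in range(1, 1000):
    s += 2 * k + 1              # the step: append the next odd number
    assert s == (k + 1) ** 2    # Modus Ponens certifies P(k + 1)
```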
By the way, I would write at least one of the sentences in the book a little differently. Instead of "perform operations on this equation" I would write "use algebraic manipulation of $P_k$ to prove that other equations are true". That is,
Since we have assumed that $P_k$ is true, we can use algebraic manipulation of $P_k$ to prove that other equations are true.
I hope this clears up one of the points of confusion.
• According to point no. 2 we have to prove that $P_{k+1}$ is true based on the assumption that $P_k$ is true. The author shows us that $$P_{k}+T_{k+1}=P_{k+1}$$. Is it the way to implement point no.2? And, if it iss then how? I mean in what way proving $P_{k}+T_{k+1}=P_{k+1}$ is equivalent to proving $P_{k+1}$ under the assumption?? Feb 9 '16 at 10:41
• What does $P_k + T_{p+1}$ mean? $P_k$ is an equation. Does it make sense to write $(5^2 = 25) + 3$? I would never write math that way. But do you agree that if $a$ and $b$ are numbers and $a=b$, then $a$ and $b$ are the same number, so $a + 3$ is the same as $b +3$, that is, $a+3=b+3$? That's what we mean by "add ___ to both sides of the equation." And yes, when you start with a true equation, then (the left side plus something) is equal to (the right side plus the same something), which is how the author derived $P_{k+1}$. Feb 9 '16 at 12:52
• DavidK first of all, adding something to an equation always means it is added to its both sides. In other words, it can never mean to add something to only side of equation because two sides make up an equation. So $P_k+T_{k+1}$ is not wrong anyway. It is not necessary for something to be right. When it is only "not wrong", we can adopt it. Feb 9 '16 at 18:41
• Secondly, you said that since we assumed $P_k$ is true, we add $T_{k+1}$ to its both sides and then if it becomes $P_{k+1}$, it is proved that $P_{k+1}$ is true. And to support it, you said that if something is added to a true equation such as $3^{2}=9$ then the new equation for example $3^{2}-1=9-1$ is necessarily true (because it is the property of equation). In case of $3^{2}=9$ we are certain that the equation is true but in case of $P_k$ we are at the sake of assumption. These two things are different. I am sure I am missing something but please explain more accurately. Feb 9 '16 at 18:54
• On the second point, that's the whole point of Introduction of an Implication. To prove $P\implies Q$, assume $P$ is true and show that $Q$ is true under this assumption. Of course now all you have actually shown is that $Q$ is true if $P$ is true; but that is what $P\implies Q$ means. Feb 9 '16 at 19:09
There is another way of thinking of this.
Throw out the standard induction theorem, and replace it with this one:
Any nonempty set of natural numbers has a smallest element.
This seems obviously true, right? This is called the well-ordering principle, and it is equivalent to induction. Here's how you do induction with the well-ordering principle (WOP for short):
1. Prove that $P_0$ holds (or $P_1$, if you don't consider zero to be a "natural number").
2. Take this set: $$S = \{x | x \in \mathbb{N} \wedge \neg P_x \}$$ That's the set of natural numbers for which our proposition does not hold.
3. Assume $S$ is non-empty. Then, by WOP, it has a least element, which we can call $k+1$. In particular, we know that $\neg P_{k+1}$ (we know that the proposition does not hold for $k+1$, since it's an element of $S$).
4. By step (1), we know $k+1 > 0$ (or $k+1 > 1$, if you started with $P_1$). So $k$ is a natural number. If we had not proven step (1), $k$ could be equal to negative one (or zero), which would invalidate the rest of the proof.
5. Prove that $\neg P_{k}$; that is, prove that the proposition is not true for $k$, based on our knowledge that it is not true for $k+1$.
6. Since $k < k + 1$ and $k \in S$, we have a contradiction ($k + 1$ isn't the smallest element of the set). That means our assumption in step (3) must be false. The set of counterexamples is empty, meaning the proposition must hold for every number.
Note that these statements are equivalent: $$P_k \implies P_{k+1} \\ \neg P_{k+1} \implies \neg P_k$$ If you can prove one, you can prove the other quite easily. Standard induction asks you to prove the first, while WOP-based induction requires you to prove the second.
In the end, it's just a matter of notation. But you might find one form of induction more intuitive than the other. If you are going to use this method, you should note that we usually talk about $k$ and $k - 1$ instead of $k+1$ and $k$; I used the latter because it corresponds more obviously to standard induction.
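In the WOP picture, checking a proposition becomes a search for the least counterexample. A toy Python sketch for the odd-sum proposition: the set $S$ of counterexamples below a bound turns out to be empty, so it has no least element:

```python
# S = set of counterexamples to P(n): 1 + 3 + ... + (2n - 1) == n^2
N = 2000
S = [n for n in range(1, N + 1)
     if sum(2 * i - 1 for i in range(1, n + 1)) != n * n]
assert S == []   # empty: no least counterexample exists below N
```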
"as long as we know that the state of a proposition being true for any positive integer k after Base number implies that proposition is true for integer k+1, we have to show that Pk is true. "
1) No, you don't. If I wanted to show that IF Godzilla were 10 times as large in all dimensions as a T. Rex THEN Godzilla would weigh 1,000 times as much, I do NOT need to prove that Godzilla is 10 times as large in all dimensions as a T. Rex. I CAN'T prove Godzilla is 10 times as large in all dimensions as a T. Rex because Godzilla isn't 10 times as large in all dimensions as a T. Rex.
and
2) You already have shown $P_k$ is true for some $k$. You have shown it is true for $k = 1$. Reread that statement. It does not say "we will assume this is true for all $P_k$ and show that means it is true for all $P_{k+1}$". It says "we will show that if it is true for (one) $P_k$ then that means it is true for $P_{k+1}$".
Thus we have proven an infinite loop: If it is true for k, then it is true for k + 1: Relabel k+1 as k and it is true for k + 1 (which is the old k + 2): Relabel and repeat indefinitely and we have proven it is true for all $n \ge k$. Now we just need to demonstrate that it is true for a single base case k. Oh, wait. It's true for k = 1. Therefore it is true for $n \ge 1$.
As for you second question....
"Why is it necessary for an equation to be true for performing operations on it?"
It isn't. You can perform operations on untrue equations. You just can't assume any of the results are true.
Let me write down: "2 + 2 = 5". Let me perform operations on them:
$7^{2+2} - 37\cdot 5 = 7^5 - 37\cdot(2 + 2)$
$2401 - 185 = 16,807 - 148$
$2,216=16,659$
I can do that. But I can't get any meaningful results.
====
In a very abstract and philosophical way a proof by induction is a tiny bit like a proof by contradiction.
In both you assume a statement without proof and you reach a conclusion that would be true if your assumption were.
In a proof by induction you might prove that following: "If any even number is divisible by two then they all are divisible by two."
And in a proof by contradiction you might prove the following: "If any odd number is divisible by two then they all are divisible by two."
What's the difference in these statements? They are both shown in the exact same way ("if $2|n$ then $2|n + 2$ and $2|n - 2$"). They are both equally valid. And furthermore, they are both TRUE.
But in the proof by induction you verify a single confirmation: "2 is divisible by two. Therefore all even numbers are."
While in a proof by contradiction you contradict with a contradiction: "3 is not divisible by two. Therefore no odd numbers are."
Think about the Peano system (see footnotes) for defining the natural numbers. You take an initial number, which you call 1, and then you say:
• Element 1 exists in the set.
• Any successor of an element n in the set, say $suc(n)$, exists in the set.
With these axioms, you construct the whole set of natural numbers. You don't need to prove such elements exist in a deductive way as you might think; rather, you must construct the proof, and you have the building blocks for that (while in the deductive method, the path is the inverse).
So, the analogous of Peano system applies for the induction:
• Prove that $P(1)$ exists, where 1 is just the base case you want to start the proof with. This is the equivalent of the Peano system for the value 1.
• The constructive step: Prove that $P(n+1)$ is true when $P(n)$ was true. In this case, you move forward in the demonstration (induction actually means move towards the end) to construct the successive cases in the proof.
So you initially prove $P(1)$. If you prove $P(n) \implies P(n+1)$ then you can start traversing infinitely:
• $P(2)$ because $P(1)$
• $P(3)$ because $P(2)$
• ...
You never prove the $n$ case directly, because $P(n)$ is a hypothetical case which derives ultimately from the case $P(1)$; what is now $P(n)$ was the $P(n+1)$ of a former iteration.
So, the way to think why you should assume $P(n)$ is true, is because:
• In the initial case, $P(1)$ has been proven true by you. If you didn't prove it, then you should start with it.
• Since the construction is upward, instead of downward, you should not think that you have to prove the $n$-case, but that you have already proven it.
Footnotes:
• You are not forced to start from 1 and upwards. You can take any number you want, and so you will restrict the proof on it and further.
• Other induction processes can traverse the entire integer set instead of just the natural numbers. Such a proof system involves proving both $n+1$ and $n-1$.
https://math.stackexchange.com/questions/3260433/compute-int-0-infty-frac-operatornameli-3x1x2-dx

# Compute $\int_0^\infty \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx$
How to evaluate $$\int_0^\infty \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx\ ?$$
where $$\displaystyle\operatorname{Li}_3(x)=\sum_{n=1}^\infty\frac{x^n}{n^3}$$ , $$|x|\leq1$$
I came across this integral while I was working on $$\displaystyle \displaystyle\int_0^1 \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx\$$ and here is how I established a relation between these two integrals:
$$\int_0^1 \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx=\int_0^\infty \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx-\underbrace{\int_1^\infty \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx}_{x\mapsto 1/x}$$
$$=\int_0^\infty \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx-\int_0^1 \frac{\operatorname{Li}_3(1/x)}{1+x^2}\ dx$$ $$\left\{\color{red}{\text{add the integral to both sides}}\right\}$$
$$2\int_0^1 \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx=\int_0^\infty\frac{\operatorname{Li}_3(x)}{1+x^2}\ dx+\int_0^1 \frac{\operatorname{Li}_3(x)-\operatorname{Li}_3(1/x)}{1+x^2}\ dx$$
$$\{\color{red}{\text{use}\ \operatorname{Li}_3(x)-\operatorname{Li}_3(1/x)=2\zeta(2)\ln x-\frac16\ln^3x+i\frac{\pi}2\ln^2x}\}$$
$$=\int_0^\infty\frac{\operatorname{Li}_3(x)}{1+x^2}\ dx+2\zeta(2)\underbrace{\int_0^1\frac{\ln x}{1+x^2}\ dx}_{-G}-\frac16\underbrace{\int_0^1\frac{\ln^3x}{1+x^2}\ dx}_{-6\beta(4)}+i\frac{\pi}2\underbrace{\int_0^1\frac{\ln^2x}{1+x^2}\ dx}_{2\beta(3)}$$
$$=\int_0^\infty\frac{\operatorname{Li}_3(x)}{1+x^2}\ dx-2\zeta(2)G+\beta(4)+i\pi \beta(3)$$
Then
$$\int_0^1 \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx=\frac12\int_0^\infty\frac{\operatorname{Li}_3(x)}{1+x^2}\ dx-\zeta(2)G+\frac12\beta(4)+i\frac{\pi}2 \beta(3)\tag{1}$$
where $$\displaystyle\beta(s)=\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^s}\$$ is the Dirichlet beta function.
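As a numerical sanity check of the three integrals under the braces, here is a rough Python sketch (Simpson's rule after the substitution $x=e^{-t}$, which removes the logarithmic endpoint singularity, together with the series defining $G=\beta(2)$, $\beta(3)$ and $\beta(4)$):

```python
import math

def beta(s, terms=200000):
    # Dirichlet beta; alternating, so the error is below the first omitted term
    return sum((-1) ** n / (2 * n + 1) ** s for n in range(terms))

def log_power_integral(k, T=40.0, steps=20000):
    # ∫_0^1 ln^k(x)/(1 + x^2) dx  via  x = e^{-t}, Simpson's rule on [0, T]
    h = T / steps
    f = lambda t: (-t) ** k * math.exp(-t) / (1 + math.exp(-2 * t))
    total = f(0) + f(T)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return total * h / 3

G = beta(2)  # Catalan's constant, ≈ 0.915966
assert abs(log_power_integral(1) + G) < 1e-6            # = -G
assert abs(log_power_integral(2) - 2 * beta(3)) < 1e-6  # = 2β(3)
assert abs(log_power_integral(3) + 6 * beta(4)) < 1e-6  # = -6β(4)
```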
So any idea how to evaluate any of these two integrals?
Thanks.
• What are you asking? – clathratus Jun 13 '19 at 2:17
• Note the formula you got from the triglogarithmic identity involved you being cavalier about branch cuts; you are missing a factor of $\frac{16 i \pi^4}{512}$ – Brevan Ellefsen Jun 13 '19 at 4:04
• @clathratus I am not sure if you are being sarcastic or not, but I'll take it as not. If you want to know what I'm asking, just ignore the body and read the title please. – Ali Shather Jun 13 '19 at 4:28
• @BrevanEllefsen wolfram gives the closed form. – Ali Shather Jun 13 '19 at 4:30
• @BrevanEllefsen I dont think I'm missing any. To make sure, just take x=1/2 and compare the two sides. – Ali Shather Jun 13 '19 at 4:45
Using the generalized integral expression of the polylogarithmic function, which can be found in the book (Almost) Impossible Integrals, Sums and Series, page 4:
$$\int_0^1\frac{x\ln^n(u)}{1-xu}\ du=(-1)^n n!\operatorname{Li}_{n+1}(x)$$ and by setting $$n=2$$ we get
$$\operatorname{Li}_{3}(x)=\frac12\int_0^1\frac{x\ln^2 u}{1-xu}\ du$$
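This representation is easy to sanity-check against the defining series; a rough Python sketch (same substitution $u=e^{-t}$ to tame the logarithm at the endpoint):

```python
import math

def li3_series(x, terms=400):
    return sum(x ** n / n ** 3 for n in range(1, terms + 1))

def li3_integral(x, T=50.0, steps=20000):
    # (1/2) ∫_0^1 x ln^2(u)/(1 - x u) du   via  u = e^{-t}
    h = T / steps
    f = lambda t: x * t * t * math.exp(-t) / (1 - x * math.exp(-t))
    total = f(0) + f(T)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(i * h)
    return 0.5 * total * h / 3

for x in (0.2, 0.5, 0.9):
    assert abs(li3_integral(x) - li3_series(x)) < 1e-7
```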
we can write
$$\int_0^\infty\frac{\operatorname{Li}_{3}(x)}{1+x^2}\ dx=\frac12\int_0^1\ln^2u\left(\int_0^\infty\frac{x}{(1-ux)(1+x^2)}\ dx\right)\ du$$ $$=\frac12\int_0^1\ln^2u\left(-\frac12\left(\frac{\pi u}{1+u^2}+\frac{2\ln(-u)}{1+u^2}\right)\right)\ du,\quad \color{red}{\ln(-u)=\ln u+i\pi}$$
$$=-\frac{\pi}{4}\underbrace{\int_0^1\frac{u\ln^2u}{1+u^2}\ du}_{\frac3{16}\zeta(3)}-\frac12\underbrace{\int_0^1\frac{\ln^3u}{1+u^2}\ du}_{-6\beta(4)}-i\frac{\pi}2\underbrace{\int_0^1\frac{\ln^2u}{1+u^2}\ du}_{2\beta(3)}$$
Then
$$\int_0^\infty\frac{\operatorname{Li}_{3}(x)}{1+x^2}\ dx=-\frac{3\pi}{64}\zeta(3)+3\beta(4)-i\pi\beta(3)\tag{2}$$
Bonus:
By combining $$(1)$$ in the question body and $$(2)$$, the imaginary part $$i\pi\beta(3)$$ nicely cancels out and we get
$$\int_0^1 \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx=2\beta(4)-\zeta(2)G-\frac{3\pi}{128}\zeta(3)$$
where $$\beta(4)$$ $$=\frac{1}{768}\psi^{(3)}(1/4)-\frac{\pi^4}{96}$$
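A numerical check of this closed form (and of the stated reduction of $\beta(4)$ to $\psi^{(3)}(1/4)$, using $\psi^{(3)}(x)=6\sum_{k\ge0}(x+k)^{-4}$), as a rough Python sketch:

```python
import math

def li3(x, terms=2000):
    return sum(x ** n / n ** 3 for n in range(1, terms + 1))

def beta(s, terms=200000):
    return sum((-1) ** n / (2 * n + 1) ** s for n in range(terms))

# left-hand side by Simpson's rule on [0, 1]
steps = 400
h = 1.0 / steps
f = lambda x: li3(x) / (1 + x * x)
lhs = f(0.0) + f(1.0)
for i in range(1, steps):
    lhs += (4 if i % 2 else 2) * f(i * h)
lhs *= h / 3

G = beta(2)
zeta3 = sum(1 / n ** 3 for n in range(1, 200001))
rhs = 2 * beta(4) - (math.pi ** 2 / 6) * G - 3 * math.pi / 128 * zeta3
assert abs(lhs - rhs) < 1e-5

# the stated reduction of β(4) to the polygamma value ψ^(3)(1/4)
psi3_quarter = 6 * sum(1 / (k + 0.25) ** 4 for k in range(200000))
assert abs(beta(4) - (psi3_quarter / 768 - math.pi ** 4 / 96)) < 1e-9
```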
• The formula \begin{align}\Re\int_0^\infty\frac{\operatorname{Li}_{3}(x)}{1+x^2}\ dx&=\frac12\int_0^1\ln^2u\left(\int_0^\infty\frac{x}{(1-ux)(1+x^2)}\ dx\right)\ du\end{align} is weird. In the right side it's $\int_0^1 \frac{\operatorname{Li}_3(x)}{1+x^2}\ dx$ not its real part. There is no indication for the values of x involved in the formula about polylogarithm in the beginning of your post. Anyway your closed-form matches numeric evaluation. – FDP Nov 22 '19 at 8:45
• its edited now. thanks for pointing this out. – Ali Shather Nov 22 '19 at 8:58
• The formula for polylogarithm in the beginning of your post is valid for $|x|<1$ only. If you take $x=2$ the denominator is $1-2u$ and $1-2u=0$ when $u=1/2\in [0;1]$ – FDP Nov 22 '19 at 15:42
• Yes .. actually i got the right formula yesterday and I'll fix the problem today. – Ali Shather Nov 22 '19 at 16:18
• When you consider the integral $$\int_0^\infty\frac{x}{(1-ux)(1+x^2)}\ dx$$ as a Cauchy principal value you don't get the imaginary part. It arises from the $\epsilon$ contour around that singularity. – Diger Nov 22 '19 at 22:05
For a different solution, use the first result from A simple idea to calculate a class of polylogarithmic integrals by using the Cauchy product of squared Polylogarithm function by Cornel Ioan Valean.
Essentially, the main new results in the presentation are:
Let $$a\le1$$ be a real number. The following equalities hold: $$\begin{equation*} i) \ \int_0^1 \frac{\log (x)\operatorname{Li}_2(x) }{1-a x} \textrm{d}x=\frac{(\operatorname{Li}_2(a))^2}{2 a}+3\frac{\operatorname{Li}_4(a)}{a}-2\zeta(2)\frac{\operatorname{Li}_2(a)}{a}; \end{equation*}$$ $$\begin{equation*} ii) \ \int_0^1 \frac{\log^2(x)\operatorname{Li}_3(x) }{1-a x} \textrm{d}x=20\frac{\operatorname{Li}_6(a)}{a}-12 \zeta(2)\frac{\operatorname{Li}_4(a)}{ a}+\frac{(\operatorname{Li}_3(a))^2}{a}. \end{equation*}$$ For a fast proof, see the paper above (series expansion combined with the Cauchy product of squared Polylogarithms)
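Both equalities can be tested numerically; a rough Python sketch checking $i)$ at $a=\tfrac12$ (polylogarithms summed from their series, the dilogarithm accelerated near $x=1$ by Euler's reflection formula, and the substitution $x=e^{-t}$ for the logarithmic endpoint):

```python
import math

def polylog(s, x, terms=300):
    # plain series; plenty for |x| <= 1/2
    return sum(x ** n / n ** s for n in range(1, terms + 1))

def li2(x):
    # dilogarithm on [0, 1); Euler's reflection keeps the series fast near 1
    if x > 0.5:
        return math.pi ** 2 / 6 - math.log(x) * math.log(1 - x) - li2(1 - x)
    return polylog(2, x)

a = 0.5

def f(t):
    # integrand of ∫_0^1 ln(x) Li2(x)/(1 - a x) dx after x = e^{-t}
    if t == 0.0:
        return 0.0           # the factor ln(x) = -t vanishes there
    x = math.exp(-t)
    return -t * li2(x) * x / (1 - a * x)

T, steps = 40.0, 8000
h = T / steps
lhs = f(0.0) + f(T)
for i in range(1, steps):
    lhs += (4 if i % 2 else 2) * f(i * h)
lhs *= h / 3

zeta2 = math.pi ** 2 / 6
rhs = li2(a) ** 2 / (2 * a) + 3 * polylog(4, a) / a - 2 * zeta2 * li2(a) / a
assert abs(lhs - rhs) < 1e-5
```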
The use of these new results with integrals allows you to obtain your result elegantly, but also other results that are (very) difficult to obtain by other means, including results from the book, (Almost) Impossible Integrals, Sums, and Series.
BONUS: Using these results you may also establish the following (or the versions with integration by parts applied).
$$i) \ \int_0^1 \frac{\arctan(x) \operatorname{Li}_2(x)}{x}\textrm{d}x$$ $$=\frac{1}{384}\left(720\zeta(4)+105\pi\zeta(3)+384\zeta(2)G-\psi^{(3)}\left(\frac{1}{4}\right)\right),$$ $$ii)\ \int_0^1 \frac{\arctan(x) \operatorname{Li}_2(-x)}{x}\textrm{d}x$$ $$=\frac{1}{768}\left(\psi^{(3)}\left(\frac{1}{4}\right)-384\zeta(2)G-126\pi\zeta(3)-720\zeta(4)\right).$$
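These too can be verified numerically; a rough Python sketch for $i)$, with $\psi^{(3)}(1/4)=6\sum_{k\ge0}(k+1/4)^{-4}$ and the dilogarithm handled by series plus Euler's reflection near $x=1$:

```python
import math

def li2(x):
    # dilogarithm on [0, 1]; reflection near 1, series below 1/2
    if x == 1.0:
        return math.pi ** 2 / 6
    if x > 0.5:
        return math.pi ** 2 / 6 - math.log(x) * math.log(1 - x) - li2(1 - x)
    return sum(x ** n / n ** 2 for n in range(1, 200))

def f(x):
    # integrand arctan(x) Li2(x) / x, extended by its limit 0 at x = 0
    return math.atan(x) * li2(x) / x if x > 0.0 else 0.0

steps = 2000
h = 1.0 / steps
lhs = f(0.0) + f(1.0)
for i in range(1, steps):
    lhs += (4 if i % 2 else 2) * f(i * h)
lhs *= h / 3

zeta3 = sum(1 / n ** 3 for n in range(1, 200001))
G = sum((-1) ** n / (2 * n + 1) ** 2 for n in range(200000))
psi3q = 6 * sum(1 / (k + 0.25) ** 4 for k in range(200000))
rhs = (720 * math.pi ** 4 / 90 + 105 * math.pi * zeta3
       + 384 * (math.pi ** 2 / 6) * G - psi3q) / 384
assert abs(lhs - rhs) < 1e-5
```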
EXPLANATIONS (OP's request): The following way in large steps shows the amazing possible creativity in such calculations.
We'll want to focus on the integral, $$\displaystyle \int_0^1 \frac{\arctan(x)\operatorname{Li}_2(x)}{x}\textrm{d}x$$ which is a translated form of the main integral.
Now, based on $$i)$$ where we plug in $$a=i$$ and then consider the real part, we obtain an integral which by a simple integration by parts reveals that
$$\int_0^1 \frac{\arctan(x)\operatorname{Li}_2(x)}{x}\textrm{d}x=\int_0^1 \frac{\arctan(x)\log(1-x) \log(x)}{x}\textrm{d}x+\frac{17}{48}\pi^2 G+\frac{\pi^4}{32}-\frac{1}{256}\psi^{(3)}\left(\frac{1}{4}\right).$$
Looks like we need to evaluate one more integral and we're done. Well, if you read the book (Almost) Impossible Integrals, Sums, and Series (did you?), particularly the solutions in the sections 3.24 & 3.25 you probably observed the powerful trick of splitting the nonnegative real line at $$x=1$$ with the hope of getting the same integral in the other side but with an opposite sign. Therefore, with such a careful approach (since we need to avoid the divergence issues), we obtain immediately that $$\int_0^1 \frac{\arctan(x)\log(1-x) \log(x)}{x}\textrm{d}x$$ $$=\frac{1}{2} \underbrace{\int_0^1 \frac{\arctan(x)\log^2(x)}{x}\textrm{d}x}_{\text{Trivial}}+\frac{\pi}{4}\underbrace{\int_0^1 \frac{\log(x)\log(1-x)}{x}\textrm{d}x}_{\text{Trivial}}$$ $$-\frac{1}{2}\Re\left \{\int_0^{\infty}\frac{\arctan(1/x) \log(1-x)\log(x)}{x}\textrm{d}x\right \},$$
and the last integral works out nicely with Cornel's strategy described in the second part of this post (it involves the use of the Cauchy Principal Value): https://math.stackexchange.com/q/3488566.
• Nice (+1) but it's not clear how to get the integral in the question from using the first result. – Ali Shather Dec 30 '19 at 5:39
• The explanation is really amazing and helpful. thank you. – Ali Shather Dec 30 '19 at 20:07
https://math.stackexchange.com/questions/2017988/example-of-a-continuous-function-f-mathbbr-to-0-1-such-that-f-is-not-uni
I am trying to find an example of a continuous function $f:\mathbb{R} \to [0,1]$ such that $f$ is not uniformly continuous. I have been playing around with the $\sin$ function and I have noticed that $|\sin(x^2)|$ is a continuous function from $\mathbb{R}$ to $[0,1]$. This function, I am thinking, is not uniformly continuous since it behaves a lot like $\sin(x^2)$.
What I am most interested in is how you come up with a function that meets the criteria. For example, when I first saw this question, I immediately thought of the $\sin$ function. Are there any other functions that meet this requirement and are easier to work with in case I want to prove that it is not uniformly continuous? Is this how you come up with functions, that is, by looking at the rate of oscillations? It seems to me that the only functions that would work are variations of the $\sin$ function. I would appreciate it if you could give advice on how to approach this problem.
• you cannot just say "it oscillates a lot" to prove a function is not uniformly continuous, start with the definition – Nick Nov 17 '16 at 6:02
• you are right. Right now I am just constructing graphs that I think are not uniformly continuous and trying to come up with possible functions that meet the requirements. I am just trying to see how people approach the problem rather than proving it. Nov 17 '16 at 6:05
• You can use the mean value theorem to prove that $\frac 1 2 + \frac 1 2 \sin(x^2)$ isn't uniformly continuous since the derivative becomes arbitrarily large as $x \to \infty$. Nov 17 '16 at 6:15
The key to finding such a function is to look for a function with "arbitrarily large derivative" (this doesn't work for all such functions, as the comment below points out). This is to say, we want some problem with infinity to be created toward the ends of the domain.
For example, let us consider $f(x) = \frac{1}{2} + \frac{1}{2}\sin x^2$, which has a derivative of $x \cos x^2$, and also has range in $[0,1]$ (the $\frac{1}{2}$ is just some adjustment to ensure that the range is in $[0,1]$). This derivative blows up as $x$ increases. Hence, we consider this as a candidate.
Fix a natural number $N$. We will find a pair of points $x,y$ such that $|f(x)-f(y)| \geq N|x-y|$.
Now, all we need is to use the mean value theorem: We know that $f(x)-f(y) = (x-y) f'(z)$ where $z$ is some point between $x$ and $y$. Now, suppose we chose $x$ and $y$ in a region where $f'(z)$ could only be a large positive quantity, greater than the given $N$. Then of course, $|f(x)-f(y)| > N|x-y|$.
I leave you to formalize the details, but it is clear, that $f$ cannot be uniformly continuous, despite being differentiable.
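To make the failure concrete before formalizing, one can watch the difference quotient blow up along the pairs $y_k=\sqrt{2k\pi}$ (where $f(y_k)=\tfrac12$, since $\sin(2k\pi)=0$) and $x_k=\sqrt{2k\pi+\pi/2}$ (where $f(x_k)=1$). A small numerical sketch:

```python
import math

f = lambda x: 0.5 + 0.5 * math.sin(x * x)

# y_k = sqrt(2k*pi) gives f(y_k) = 1/2;  x_k = sqrt(2k*pi + pi/2) gives f(x_k) = 1
pairs = []
for k in (1, 10, 100, 1000, 10000):
    y = math.sqrt(2 * k * math.pi)
    x = math.sqrt(2 * k * math.pi + math.pi / 2)
    pairs.append((x - y, abs(f(x) - f(y)) / (x - y)))

for gap, quot in pairs:
    print(gap, quot)   # the gaps shrink to 0 while the quotients grow without bound
```

Since $|f(x_k)-f(y_k)|=\tfrac12$ is fixed while $x_k-y_k\to 0$, the quotient exceeds any prescribed $N$ for $k$ large enough, which is exactly the failure of uniform continuity.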
• "The key to finding such a function, is to look for a function with "arbitrarily large derivative". " That doesn't always work. Consider $\sin(e^x)/(1+x^2).$ Also the derivative of your function doesn't "blow up" as I understand the term. Rather it oscillates unboundedly. – zhw. Nov 17 '16 at 7:14
• @zhw I see. The point is, my function oscillates unboundedly, as you have said. I mistakenly wrote it as blowing up, but the meaning should be clear. But I understood the example of $\sin (e^x)/ (1+x^2)$, so thank you for pointing it out. It is a function with arbitrarily large derivative, but reducing oscillations, hence could be uniformly continuous. Nov 17 '16 at 7:25
"Right now I am just constructing graphs that I think are not uniformly continuous and trying to come up with possible functions that meet the requirements. I am just trying to see how people approach the problem rather than proving it."
This is a good strategy, and it's often how I approach analysis problems as well. Having an intuition that a result is "correct" (even if it's due to hand-waving reasoning) is good to have before starting a rigorous proof.
Uniform continuity is sort of like a stronger, "global" version of regular continuity. To wit, if you start with a given $\varepsilon > 0$, then you can find a $\delta > 0$ such that $|x-y| < \delta \implies |f(x) - f(y)| < \varepsilon$, regardless of where you are in the domain. On the other hand, we talk about regular continuity "at a given point", and the $\delta$ depends on that chosen point.
Your idea with $f(x) = |\sin(x^2)|$ is perfect. Indeed, if we suppose that this is uniformly continuous, then we can arrive at a contradiction for essentially the reason you cite: it oscillates faster and faster as we move away from the origin.
To take advantage of this problem, choose $\varepsilon$ less than the amplitude of the wave; e.g. $\varepsilon = 1/2$. Suppose there exists a $\delta > 0$ such that $|x-y| < \delta$ implies $||\sin(x^2)| - |\sin(y^2)|| < 1/2$; we can make this fail by going out far enough away from the origin so that our function goes through an entire oscillation in a $\delta$-interval. Recall that a sine wave reaches a "crest" or a "trough" at every $(2k + 1/2) \pi$ radians and every $(2k + 3/2) \pi$ radians. Solving $x^2 = (2k + 1/2) \pi$ gives the values at which $\sin(x^2)$ reaches a crest, and a similar equation holds for the troughs. Taking an absolute value results in all of these becoming crests (with new troughs in between, at the points where $x^2$ is an integer multiple of $\pi$). Now just go out sufficiently far so that the distance between these crests is less than the chosen $\delta$. I'll leave the details to you to crunch out (one can show that the distance between the crests tends to zero).
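The crunching can be previewed numerically: taking crests of $|\sin(x^2)|$ at $x=\sqrt{(k+1/2)\pi}$ and zeros at $x=\sqrt{k\pi}$, the spacing tends to $0$ while the function still changes by a full $1 > \varepsilon = 1/2$. A quick sketch:

```python
import math

f = lambda x: abs(math.sin(x * x))

crest  = lambda k: math.sqrt((k + 0.5) * math.pi)  # f = 1 here
trough = lambda k: math.sqrt(k * math.pi)          # f = 0 here

data = [(crest(k) - trough(k), f(crest(k)) - f(trough(k)))
        for k in (1, 10, 100, 10000, 10**6)]

for gap, jump in data:
    print(gap, jump)   # gap -> 0 while the jump stays (numerically) at 1
```

So for any proposed $\delta$, going out far enough produces a pair of points closer than $\delta$ whose images differ by about $1$, defeating $\varepsilon = 1/2$.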
• Apparently, thge OP added the absolute value to achieve the desired codomain $[0,1]$ Nov 17 '16 at 6:36
• Oh!! I lost sight of the forest for the trees. Thanks for pointing that out @HagenvonEitzen. Nov 17 '16 at 6:38
• Thank you Kaj Hansen your explanation is very helpful! Nov 17 '16 at 7:00
• Glad I could help @AnP. ! Nov 17 '16 at 7:18
https://math.stackexchange.com/questions/1675109/if-i-say-g-1-dots-g-n-freely-generates-a-subgroup-of-g-does-that-mean
Let $G$ be a group and let $g_1,\dots,g_n$ be elements of $G$. If I say that $\{g_1,\dots,g_n\}$ freely generates a free subgroup of $G$, say $H$, do I mean that the rank of $H$ is $n$, or should I consider the case when two elements are equal (for example $g_1=g_2$) and so the rank of $H$ is less than $n$? thank you
• If the elements freely generate $\;H\;$ it can't be $\;g_i=g_j\;,\;\;i\neq j$, nor $\;g_i=1\;$ for some $\;i\;$ – DonAntonio Feb 28 '16 at 1:31
• @Joanpemo I agree that it can't be $g_i=1$ but why it can't be $g_i=g_j$ for $i\not=j$? thank you – Richard Feb 28 '16 at 1:34
• Else you would have the relation $g_ig_j^{-1}=1$. – Daniel Robert-Nicoud Feb 28 '16 at 1:38
• @Joanpemo It depends on the context, but the way the question is phrased, i.e., without context, the statement is not strictly false. There is a perfectly nice free group which is free on the set $\{x,x,y\}$. It is the free group on $\{x,y\}$, since $\{x,x,y\}=\{x,y\}$. – Zach Blumenstein Feb 28 '16 at 1:44
• @ZachBlumenstein Thank you. Yet, as sets, $\;\{x,x,y\}=\{x,y\}\;$ so no problem, indeed. I though understand something different when I read "a set of free generators", and as far as I know this implies $\;g_i\neq g_j\;$ for $\;i\neq j\;$, otherwise the universal property of a free group in the number of free generators given isn't fulfilled. – DonAntonio Feb 28 '16 at 2:42
Strictly speaking, what you wrote means that the set $\{g_1,\dots,g_n\}$ freely generates $H$, so the rank of $H$ is the number of (distinct) elements of that set, which may or may not be $n$. However, people frequently abuse language and say what you wrote when they really mean that additionally the $g_i$ should all be distinct. In context, there is usually not too much risk of confusion, and as a practical matter, if you see someone write this, you should judge for yourself which interpretation would make more sense in context. I don't really know of a succinct and completely standard way to express the latter meaning; I might write it as "the elements $(g_1,\dots,g_n)$ freely generate a subgroup", writing it as a tuple to stress that you are actually saying that the map $\{1,\dots,n\}\to H$ sending $i$ to $g_i$ satisfies the universal property to make $H$ a free group on the set $\{1,\dots,n\}$. What's really going on is that a collection of (potential) "free generators of a group $H$" should not be thought of as just a subset of $H$ but as a set together with a map to $H$ (and if the group really is freely generated by a map, the map must be injective).
(This terminological difficulty is not restricted to this context. For instance, a similar issue quite frequently comes up when talking about linear independence: if $v\in V$ is a nonzero element of a vector space, then the set $\{v,v\}$ is linearly independent (since it actually has only one element), but the tuple $(v,v)$ is not.)
• I don't exactly agree. When we say that a (finite or infinite) set $X$ is a set of free generators of a group $F$, we don't need to specify any function, because the function is just inclusion. There is no mistake there. But I do agree that the notation $\{g_1,\ldots,g_n\}$ taken to mean that $g_i$ are distinct is bad, because it literally means what that $X=\{g_1,\ldots,g_n\}$ is a free generating set of $F$, while it probably really means that $g_1, \ldots, g_n$ generate $F$ freely, which is not the same thing if $\lvert X\rvert\neq n$. – tomasz Feb 28 '16 at 8:34
• Sure, there is nothing wrong with saying that a subset freely generates a group. But it is more natural to formulate it in terms of maps from a set, and the failure to do so is the cause of the terminological awkwardness when saying "$(g_1,\dots,g_n)$ freely generates" a group, and the frequency with which people incorrectly write "$\{g_1,\dots,g_n\}$ freely generates" when that's not really what they mean. – Eric Wofsey Feb 28 '16 at 8:41
http://mathhelpforum.com/advanced-statistics/74249-probability-success.html
1. ## probability of success
I am trying to answer this problem for an exam. Can anyone please help me? I wonder if I should be using the formula prob = nCr (p^r) q^(n-r).
The probability for students from a certain university to pass Mathematics and Economics are 3/7 and 5/7 respectively. If one student fails both subjects and there are four students who passed both subjects find the number of students who took the test.
2. Hello, hven191!
This doesn't require any fancy formulas . . .
The probability for students at a certain university
to pass Mathematics and Economics are 3/7 and 5/7 respectively.
One student failed both subjects and four students passed both subjects.
Find the number of students who took the test.
Make a Venn diagram of the students who passed.
Code:
*---------------------------------------*
| |
| *-----------------------* |
| | Math | |
| | only *---------------+-------* |
| | M | Both | | |
| | | 4 | | |
| | | | | |
| *-------+---------------* Eco | |
| | E only | |
| *-----------------------* |
| Neither 1 |
*---------------------------------------*
Let $M$ = number of students that passed Math only.
Let $E$ = number of students that passed Economics only.
We know there are 4 that passed both, and 1 that failed both.
The total number of students is: . $N \:=\:M + E + 5$
The number of students that passed Math is: . $M + 4$
The probability that a student passed Math is $\tfrac{3}{7}$
We have: . $\frac{M+4}{M+E+5} \:=\:\frac{3}{7} \quad\Rightarrow\quad 4M - 3E \:=\:-13\;\;{\color{blue}[1]}$
The number of students that passed Economics is $E + 4$
The probability that a student passes Economics is $\tfrac{5}{7}$
We have: . $\frac{E+4}{M+E+5} \:=\:\frac{5}{7} \quad\Rightarrow\quad 5M - 2E \:=\:3\;\;{\color{blue}[2]}$
$\begin{array}{cccccc}\text{Multiply {\color{blue}[1]} by -2:} & \text{-}8M + 6E &=& 26 & {\color{blue}[3]}\\
\text{Multiply {\color{blue}[2]} by 3:} & 15M - 6E &=& 9 & {\color{blue}[4]}\end{array}$
$\text{Add {\color{blue}[3]} and {\color{blue}[4]}: }\;7M \:=\:35 \quad\Rightarrow\quad\boxed{ M \:=\:5}$
Substitute into [1]: . $4(5) - 3E \:=\:-13 \quad\Rightarrow\quad\boxed{ E \:=\:11}$
Therefore, the number of students is: . $N \;=\;M + E + 5 \;=\;5 + 11 + 5 \;=\;{\color{red}21}$
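The elimination above is easy to double-check with exact rational arithmetic; here is a small sketch solving the same system [1] and [2] by Cramer's rule and verifying the two probabilities:

```python
from fractions import Fraction

# the system from [1] and [2]:  4M - 3E = -13,  5M - 2E = 3
det = 4 * (-2) - (-3) * 5                   # = 7
M = Fraction((-13) * (-2) - (-3) * 3, det)  # = (26 + 9)/7 = 5
E = Fraction(4 * 3 - 5 * (-13), det)        # = (12 + 65)/7 = 11
N = M + E + 5                               # 4 passed both, 1 failed both

pM = (M + 4) / N   # probability of passing Mathematics
pE = (E + 4) / N   # probability of passing Economics

print(M, E, N)     # 5 11 21
print(pM, pE)      # 3/7 5/7
```

With $N=21$ the given probabilities come out exactly: $9/21 = 3/7$ passed Mathematics and $15/21 = 5/7$ passed Economics, which supports the suspicion that the book's answer of 28 is a misprint.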
3. ## Thanks
Thanks a lot Soroban! The solution you gave was very simple and easy to understand. The illustration was a big help. Although the answer does not match the answer in the book I'm using, I'm still convinced by your answer. The answer in the book, which is 28, may just be a typographical error. Thanks!
http://math.stackexchange.com/questions/266629/find-the-infinite-sum-sum-n-1-infty-frac12n-1/266638
How to evaluate this infinite sum? $$\sum_{n=1}^{\infty}\frac{1}{2^n-1}$$
Why the expression appears so small? How can I enlarge that? : ( – Ryan Dec 28 '12 at 18:54
use \displaystyle – sxd Dec 28 '12 at 18:55
If I'm not wrong, this one appears in one volume of Ramanujan's notebook. (Chris) – Chris's sis Dec 28 '12 at 18:59
$1.606695152415291$ – Henry Dec 28 '12 at 19:12
Not an answer, but it is easy to convert the formula to: $$\sum_{m=1}^\infty \frac{\tau(m)}{2^m}$$ where $\tau(m)$ is the number of distinct divisors of $m$. – Thomas Andrews Dec 28 '12 at 19:33
I think you wanna see this:
Ramanujan’s Notebooks Part I
Try Entry $14$ (ii), page 146, where you set $x=\ln2$.
Chris.
I want to write out that particular equation just because it's so . . . out there: $$\sum_{k \ge 1}\frac{1}{e^{kx}-1}=\frac{\gamma}{x}-\frac{\log x}{x}+\frac{1}{4}-\sum_{1 \le k \le n}\frac{B^2_{2k}x^{2k-1}}{(2k)(2k)!}+R_n,$$ where $\gamma$ is Euler's constant, $B_{2k}$ are the conventionally defined Bernoulli numbers, $x>0$, $n\ge 1$, and $R_n$ is a constant bounded by the inequality $$|R_n|\le \frac{|B_{2n}B_{2n+2}|x^{2n}}{(2n)!}\left(\frac{x^2}{4\pi^2}+\frac{\pi^2}{6} \right).$$ For our case, let $x=\ln 2$ as Chris's sister stated. – 000 Dec 29 '12 at 18:00
@Limitless: thank you for the details provided! (+1) – Chris's sis Dec 29 '12 at 18:03
Yes. I found it. It is called the Erdős-Borwein Constant.
$$E=\sum_{n\in Z^+}\frac{1}{2^n-1}$$
According to the page, Erdős showed that it is irrational.
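Both the decimal value quoted in Henry's comment and Thomas Andrews' divisor-sum form $\sum_m \tau(m)/2^m$ are easy to confirm with a short script. The sketch below uses exact rational partial sums; the truncation tails beyond $n,m \ge 60$ are below $10^{-17}$:

```python
from fractions import Fraction

# partial sum of E = sum_{n>=1} 1/(2^n - 1); the tail beyond n = 59 is < 2^-59
E = sum(Fraction(1, 2**n - 1) for n in range(1, 60))
print(float(E))    # ~ 1.606695152415291, matching the value in the comments

# number-of-divisors function, by trial division (fine for m < 60)
def tau(m):
    return sum(1 for d in range(1, m + 1) if m % d == 0)

# the divisor-sum form: sum_{m>=1} tau(m)/2^m
E2 = sum(Fraction(tau(m), 2**m) for m in range(1, 60))
print(float(E2))   # same value to double precision
```

The agreement of the two sums reflects the rearrangement $\frac{1}{2^n-1}=\sum_{k\ge1}2^{-nk}$, which groups the terms by $m=nk$.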
I almost see in each step in the link a possible nice question to ask. :D (+1) – Chris's sis Dec 28 '12 at 19:24
Is it known whether the Erdos-Borwein Constant is transcendental or not? – Rustyn Dec 28 '12 at 19:27
I have no idea. I guess no. – Amr Dec 28 '12 at 19:35
@Rustyn: Please use markdown formatting for non-mathematical things like italics and bold in normal text. I've edited your comment. – Zev Chonoles Dec 28 '12 at 19:42
@ZevChonoles Ok, I will in the future. – Rustyn Dec 28 '12 at 19:44
$$\displaystyle \sum _{k=1}^n \frac{1}{\left(\frac{1}{q}\right)^k-\frac{1}{r}}=\frac{r}{\log (q)} \left(\psi _q^{(0)}\left(1-\frac{\log (r)}{\log (q)}\right)-\psi _q^{(0)}\left(n+1-\frac{\log (r)}{\log (q)}\right)\right)$$
In trying to get Mathematica to solve the series, I eventually found the preceding form, which assumes $0<q<1$. If we take $q=1/2$, $r=1$ and let $n$ approach infinity, we get the same solution that Amr references. The partial sum solution utilizes the function $\psi _q^{(n)}(z)$.
$$\displaystyle \lim_{n\to \infty } \, \frac{1}{\log (1/2)}\left(\psi _{\frac{1}{2}}^{(0)}(1)-\psi _{\frac{1}{2}}^{(0)}(n+1)\right)=1+\frac{\psi _{\frac{1}{2}}^{(0)}(1)}{\log (1/2)}=E$$
(This is not meant as an answer but it is too long for the comment.box)
Since you mentioned interest in variations of the problem: here is a text, in which L. Euler discussed that sum:
"Consideratio quarumdam serierum quae singularibus proprietatibus sunt praeditae" (“Consideration of some series which are distinguished by special properties”)
L. Euler Eneström-index E190.
You can find it online.
A further discussion of this by Prof. Ed Sandifer, where he sheds light on a very interesting discussion about a "false series for the logarithm" in which that constant pops up (and which actually had pointed me originally to L. Euler's article):
A false logarithm series (Discussion of E190)
Ed. Sandifer in: "How Euler did it" Dec 2007
http://www.maa.org/editorial/euler/How%20Euler%20Did%20It%2050%20false%20log%20series.pdf
I've then fiddled with it myself a bit further; maybe you will find those amateurish explorations interesting too. The constant is part of a consideration on page 5.
https://math.stackexchange.com/questions/233220/is-bbb-z-3-x-langle-x2-1-rangle-cong-bbb-z-3-times-bbb-z-3/233228
I have paired off the zero divisors in these two rings, and there are exactly 4 pairs of distinct zero divisors in both rings:
In $\Bbb Z_3[x]/\langle x^2 - 1\rangle$ the zero divisor pairs are $$x+1, x+2 \qquad x+1, 2x+1, \qquad 2x+1, 2x+2 \qquad x+2, 2x+2$$
Similarly, in $\Bbb Z_3 \times \Bbb Z_3$ the zero divisor pairs are $$(0, 1), (1, 0) \qquad (0,1), (2,0) \qquad (0, 2), (1, 0) \qquad (0, 2), (2, 0)$$ The fact that the zero divisors match up indicates to me that these rings should indeed be isomorphic, as zero divisors are one of the structures whose mismatch has ruled out isomorphisms for me in the past, as with $\Bbb Z_2 [x] / \langle x^2 \rangle \ncong \Bbb Z_2 \times \Bbb Z_2$.
My Question
Is the only homomorphism (and thus isomorphism) between these two rings an enumerative one, where I explicitly map each of the nine elements of $\Bbb Z_3[x]/ \langle x^2 - 1\rangle$ to an element in $\Bbb Z_3 \times \Bbb Z_3$? I can do that if necessary, but it seems an inelegant and brute force method. However, since these rings are so small, it may be the best way to go about it. Thanks!
$Z_3[x]/(x^2-1)\cong Z_3[x]/(x-1)\times Z_3[x]/(x+1)\cong Z_3\times Z_3$. The map is induced by $ax+b\rightarrow (a+b, -a+b)$
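Since the rings are so small, the claimed map can also be checked by brute force over all nine elements. Here is a sketch representing $b+ax$ as the pair $(b,a)$ and taking the map to be evaluation at $1$ and $-1$:

```python
from itertools import product

MOD = 3
R = list(product(range(MOD), range(MOD)))   # (b, a) stands for b + a*x

def add(u, v):
    return ((u[0] + v[0]) % MOD, (u[1] + v[1]) % MOD)

def mul(u, v):
    b1, a1 = u
    b2, a2 = v
    # (b1 + a1 x)(b2 + a2 x) = b1 b2 + a1 a2 x^2 + (b1 a2 + a1 b2) x, with x^2 = 1
    return ((b1 * b2 + a1 * a2) % MOD, (b1 * a2 + a1 * b2) % MOD)

def phi(u):
    # evaluation at 1 and -1:  b + a x  |->  (a + b, -a + b)
    b, a = u
    return ((a + b) % MOD, (b - a) % MOD)

assert len({phi(u) for u in R}) == 9                      # bijective
assert all(phi(add(u, v)) == add(phi(u), phi(v)) for u in R for v in R)
assert all(phi(mul(u, v)) == ((phi(u)[0] * phi(v)[0]) % MOD,
                              (phi(u)[1] * phi(v)[1]) % MOD)
           for u in R for v in R)
assert phi((1, 0)) == (1, 1)                              # 1 maps to the unit
print("phi is a ring isomorphism onto Z_3 x Z_3")
```

In particular $x-1 = (2,1) \mapsto (0,1)$ and $-x-1 = (2,2) \mapsto (1,0)$, the surjectivity witnesses mentioned in the comments.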
• Thank you all for the answers! They give me much more intuition about the result than a simple enumerative homomorphism. – Moderat Nov 9 '12 at 0:28
• Actually I am finding it hard to see that this map is onto. Can you elaborate? – Moderat Nov 10 '12 at 21:03
• @jmi4 $x-1\rightarrow (0,1)$, $-x-1\rightarrow (1,0)$ – i. m. soloveichik Nov 11 '12 at 0:02
In $R:=(\mathbb Z/3\mathbb Z)[x]$, $x-1$ and $x+1$ generate two distinct maximal ideals $\mathfrak m_1, \mathfrak m_2$. We have $(x^2-1)R=\mathfrak m_1 \mathfrak m_2$. By the Chinese Remainder Theorem, we have a canonical isomorphism $$R/(x^2-1)R \to R/\mathfrak m_1\times R/\mathfrak m_2.$$ Now $R/\mathfrak m_i$ is isomorphic to $\mathbb Z/3\mathbb Z$. So your rings are isomorphic.
There are exactly two isomorphisms. One is given as above; the other is obtained by composing this isomorphism with the coordinate swap in the second ring. (Hint: the only ring automorphism of $\mathbb Z/3\mathbb Z$ is the identity.)
1. A homomorphism from a quotient $A/\theta\to B$ may be given by its lift $A\to B$ such that its kernel factors through $\theta$.
2. A homomorphism from a polynomial ring $R[x_1,..,x_i,..]$ to an $R$-algebra is uniquely determined by the images of the indeterminates $x_i$.
3. $1$ must be mapped to the unit, that is $(1,1)$, and by 2., the image of $x$ can be freely chosen, and it determines all other images.
Since the ideals $(x-1)$ and $(x+1)$ both contain $(x^2-1)$ (because $x-1,x+1\mid x^2-1$ in $\mathbf{F}_3[x]$), there are natural surjective homomorphisms $\pi_1:\mathbf{F}_3[x]/(x^2-1)\rightarrow\mathbf{F}_3[x]/(x-1)$ and $\pi_2:\mathbf{F}_3[x]/(x^2-1)\rightarrow\mathbf{F}_3[x]/(x+1)$. Explicitly $\pi_1(f+(x^2-1))=f+(x-1)$ and $\pi_2(f+(x^2-1))=f+(x+1)$. This gives a homomorphism
$\pi:\mathbf{F}_3[x]/(x^2-1)\rightarrow\mathbf{F}_3[x]/(x-1)\times\mathbf{F}_3[x]/(x+1)$.
Note that $f+(x^2-1)$ is in the kernel of this map if and only if both $x-1$ and $x+1$ divide $f$, and since the polynomials $x-1,x+1$ are relatively prime, their product $x^2-1$ divides $f$, so $f\in (x^2-1)$, and thus $f+(x^2-1)=0$ in $\mathbf{F}_3[x]/(x^2-1)$. Thus $\pi$ is injective. There are a couple ways to see that $\pi$ is surjective, and which one you prefer depends I guess on what you know. For example, since both the source and the target of $\pi$ are $\mathbf{F}_3$-vector spaces of dimension $2$, and the map is $\mathbf{F}_3$-linear, it has to be an isomorphism. Or, since the ideals $(x-1)$ and $(x+1)$ together generate the unit ideal $\mathbf{F}_3[x]$, since the ideal they generate contains $(x+1)-(x-1)=1+1=2$, a unit in $\mathbf{F}_3$, you can invoke the Chinese remainder theorem (which will actually tell you injectivity of the above map as well as surjectivity). Or you can try to directly find, for given $f_1,f_2\in\mathbf{F}_3[x]$, a polynomial $f$ such that $f-f_1\in(x-1)$ and $f-f_2\in(x+1)$. Then $f+(x^2-1)$ will map to $(f_1+(x-1),f_2+(x+1))$.
In any case, the map is an isomorphism.